Handling large database

  • Question

  • Hi All,

I have a large database of 2.5 TB with more than 100 tables from different projects, each project in a separate schema. A few more projects are coming, each adding 5 to 10 new tables, and these may grow in the future.

Is it the right approach to keep tables in separate schemas in the same database when the database is this large?


    SQL Server DBA

    Wednesday, July 3, 2019 9:34 AM

All replies

  • It is a good approach for readability, but schemas have no impact on database performance (nor on query performance).
    Wednesday, July 3, 2019 10:15 AM
  • Is it right approach to keep tables with separate schema in same database while DB size is huge?

    Schemas are useful as a namespace to group related objects and to optionally provide a security boundary (e.g. different schema owners). Schemas alone do not improve manageability of large databases.
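    To illustrate the namespace/security-boundary point, here is a minimal sketch (all schema, owner, and role names are hypothetical):

    ```sql
    -- Hypothetical example: one schema per project, each owned by a
    -- different database principal, so permissions can be scoped per project.
    create schema ProjectA authorization ProjectA_Owner;
    go
    create schema ProjectB authorization ProjectB_Owner;
    go

    -- Grant a project's team read access to only its own schema.
    grant select on schema::ProjectA to ProjectA_Readers;
    go
    ```

    Note this only organizes objects and scopes permissions; it changes nothing about how the 2.5 TB of data is stored or backed up.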



    Dan Guzman, Data Platform MVP, http://www.dbdelta.com

    Wednesday, July 3, 2019 10:44 AM
    Moderator
  • In addition, yes, schemas are more about security and access control; I think you should keep this approach.

    Best Regards, Uri Dimant, SQL Server MVP, http://sqlblog.com/blogs/uri_dimant/

    MS SQL optimization: MS SQL Development and Optimization
    MS SQL Consulting: Large scale of database and data cleansing
    Remote DBA Services: Improves MS SQL Database Performance
    SQL Server Integration Services: Business Intelligence

    Wednesday, July 3, 2019 10:45 AM
    Moderator
  • My main question is about manageability: for such a huge database you can't set up a daily full backup, whereas keeping the new projects' tables in a new database gives you flexibility for backup, restore, and handling. Right?

    SQL Server DBA

    Wednesday, July 3, 2019 12:22 PM
  • Yes, schemas fit well for managing a SQL Server database. For backups, you can look into creating a filegroup and putting the new tables in it; then you can back up/restore that specific filegroup.

    https://www.mssqltips.com/sqlservertutorial/17/sql-server-filegroup-backups/

    https://www.mytechmantra.com/SQL-Server-Tutorials/FILEGROUP-Backup-in-SQL-Server-Step-by-Step-Tutorial-with-Examples/


    Best Regards, Uri Dimant, SQL Server MVP, http://sqlblog.com/blogs/uri_dimant/

    Wednesday, July 3, 2019 12:28 PM
    Moderator
  • Creating different databases (with or without schemas) provides a more flexible recovery plan. This assumes the data is not so closely related as to require transactional consistency across projects.

    You might also consider separate filegroups. That would allow you to perform piecemeal restores so that critical objects can be restored first and made available before restoring less critical data.
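    A hedged sketch of that piecemeal-restore sequence (database, filegroup, and backup file names are assumptions; online restore of the remaining filegroups requires Enterprise edition):

    ```sql
    -- Restore PRIMARY plus the critical filegroup first; after recovery,
    -- the database is available while less critical data is still offline.
    restore database BigDb
    filegroup = 'primary', filegroup = 'critical_fg'
    from disk = 'd:\backups\bigdb_critical.bak'
    with partial, norecovery;
    go
    restore log BigDb
    from disk = 'd:\backups\bigdb_log1.trn'
    with recovery;
    go

    -- Later, bring the less critical filegroup online.
    restore database BigDb
    filegroup = 'archive_fg'
    from disk = 'd:\backups\bigdb_archive.bak'
    with norecovery;
    go
    restore log BigDb
    from disk = 'd:\backups\bigdb_log2.trn'
    with recovery;
    go
    ```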


    Dan Guzman, Data Platform MVP, http://www.dbdelta.com

    Wednesday, July 3, 2019 12:29 PM
    Moderator
  • We just do a FILEGROUP backup, not a FULL backup. This is the main point.

    You basically:
    a) Move your nonclustered indexes (NCIXs) to another filegroup.
    b) Change your full database backup script to only back up the primary filegroup.

    That's it!

    set nocount on
    go
    --create testdb
    create database fgtest_prod
    go
    --create secondary NCIX_FG and file
    use [master]
    go
    alter database [fgtest_prod] 
    add filegroup [ncix_fg]
    go
    alter database [fgtest_prod] 
    add file (
      name = N'fgtest_prod_ncix'
    , filename = N'd:\fgtest\fgtest_prod_ncix.ndf' 
    , size = 3072kb 
    , filegrowth = 1024kb ) to filegroup [ncix_fg]
    go

    use fgtest_prod
    go

    --create & populate test object
    create table dbo.t1 (
     c1 int not null primary key clustered
    )
    go

    insert into dbo.t1 values (1)
    go

    --create ncix on NCIX_FG
    create nonclustered index t1_ncix_1 
    on dbo.t1 (c1) on [ncix_fg]
    go

    --a full database backup is required before filegroup backups are allowed
    backup database fgtest_prod
    to disk='d:\fgtest\fullbackup.bak'
    with init
    go

    --backup primary FG
    backup database fgtest_prod
    filegroup='primary'
    to disk='d:\fgtest\primary_fg.bak'
    with init
    go

    --partially restore primary FG
    restore database fgtest_restore
    filegroup='primary'
    from disk='d:\fgtest\primary_fg.bak'
    with recovery, partial,
    move 'fgtest_prod' to 'd:\fgtest\fgtest_restore.mdf',
    move 'fgtest_prod_ncix' to 'd:\fgtest\fgtest_restore_ncix.ndf',
    move 'fgtest_prod_log' to 'd:\fgtest\fgtest_restore_log.ldf'
    go

    use [fgtest_restore]
    go

    --add another ncix FG 
    alter database [fgtest_restore] 
    add filegroup [ncix_fg2]
    go
    alter database [fgtest_restore] 
    add file (
      name = N'fgtest_prod_ncix2'
    , filename = N'd:\fgtest\fgtest_prod_ncix2.ndf' 
    , size = 3072kb 
    , filegrowth = 1024kb ) to filegroup [ncix_fg2]
    go

    --rename existing index residing in the offline ncix_fg filegroup
    exec sp_rename 'dbo.t1.t1_ncix_1', 't1_ncix_1b', 'index'
    go
    --re-create index in new FG
    create index t1_ncix_1 on t1 (c1) on [ncix_fg2]
    go
    --run query with index hint
    select c1 from t1 with (index=t1_ncix_1)
    go


    --cleanup
    use master
    go
    drop database fgtest_restore
    go
    drop database fgtest_prod
    go

    Best Regards, Uri Dimant, SQL Server MVP, http://sqlblog.com/blogs/uri_dimant/

    Wednesday, July 3, 2019 12:31 PM
    Moderator