35+ million tables and stored procedures in a database, good or bad?

  • Question

  • Hi Experts,

    Currently we are reconsidering our database design. We have almost 5000+ databases in a single SQL Server instance. After investigating, we plan to migrate these databases into one database, with each former database distinguished by its schema.

    Thus, we would have to create 5000+ schemas in one database and then migrate each database into the appropriate schema. We calculated roughly that there would be almost 35 million tables and stored procedures in this database.

    So my first question is: are there any known risks or problems with this design?

    The second question is: how can I change a schema name? I cannot find any way to rename a schema.

    Thanks in advance!


    amoschb

    Sunday, December 22, 2013 6:43 AM

Answers

  • Managing hundreds of databases and managing hundreds of schemas are essentially the same thing.  You will not gain anything by changing this.

    You should change your entire model and have one set of tables with a CustomerId column in each.
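
    As a rough sketch of that model (all table, column, and procedure names below are illustrative, not from the actual system): every shared table carries a CustomerId column, and all access filters on it.

    -- One shared table serves all customers instead of one table per schema.
    CREATE TABLE dbo.Orders
    (
        CustomerId int      NOT NULL,
        OrderId    int      NOT NULL,
        OrderDate  datetime NOT NULL,
        CONSTRAINT PK_Orders PRIMARY KEY (CustomerId, OrderId)
    );
    GO

    -- Every query filters on CustomerId, so one set of 150+ tables and
    -- procedures can serve all customers.
    CREATE PROCEDURE dbo.usp_GetOrders
        @CustomerId int
    AS
    SELECT OrderId, OrderDate
    FROM dbo.Orders
    WHERE CustomerId = @CustomerId;
    GO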

    • Proposed as answer by Kalman Toth Monday, December 23, 2013 3:24 PM
    • Marked as answer by Fanny Liu Sunday, December 29, 2013 9:06 AM
    Monday, December 23, 2013 1:27 PM
    Answerer
  • The background is that we have to isolate customer data strictly.

    You need to define exactly how strict "strictly" is, according to your real requirements.  Below are the options as I see them.  As you can see, manageability and isolation are inversely proportional, but all of these can isolate customer data.

    • Single set of objects, logically partitioned by customer
    • Separate schema in same database per customer
    • Separate database for each customer
    • Separate SQL Server instance for each customer
    • Separate VM host server for each customer
    • Separate physical host server for each customer

    Dan Guzman, SQL Server MVP, http://www.dbdelta.com

    • Proposed as answer by Tom Phillips Monday, December 23, 2013 7:05 PM
    • Marked as answer by Fanny Liu Sunday, December 29, 2013 9:06 AM
    Monday, December 23, 2013 1:40 PM
    Answerer

All replies

  • That means ~7,000 tables and stored procedures per schema. That is pretty huge. I strongly suggest you start looking into Ola Hallengren's excellent utility to rebuild/reorganize indexes in a smart way:

    http://ola.hallengren.com/sql-server-index-and-statistics-maintenance.html

    For your second question: first you need to create the new schema and then transfer the objects into it:

    ALTER SCHEMA NewSchema TRANSFER OldSchema.Object
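
    To expand on that: since SQL Server has no direct way to rename a schema, a "rename" is really create-new, transfer-everything, drop-old. A sketch, assuming a hypothetical OldSchema containing one table and one procedure (names are illustrative):

    CREATE SCHEMA NewSchema AUTHORIZATION dbo;
    GO

    -- Each object must be transferred individually (tables, views, procedures, ...).
    ALTER SCHEMA NewSchema TRANSFER OldSchema.Customers;
    ALTER SCHEMA NewSchema TRANSFER OldSchema.usp_GetCustomer;
    GO

    -- Once OldSchema is empty it can be dropped.
    DROP SCHEMA OldSchema;
    GO

    Note that ALTER SCHEMA ... TRANSFER does not rewrite schema-qualified references inside procedure bodies; those must be updated separately.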


    Best Regards, Uri Dimant SQL Server MVP, http://sqlblog.com/blogs/uri_dimant/

    MS SQL optimization: MS SQL Development and Optimization
    MS SQL Consulting: Large scale of database and data cleansing
    Remote DBA Services: Improves MS SQL Database Performance
    SQL Server Integration Services: Business Intelligence

    Sunday, December 22, 2013 7:49 AM
  • >So my first question is: is there any known risk or problem for the design?

    We do hope that you will let us know the answer to that question once you finish.


    Kalman Toth Database & OLAP Architect IPAD SELECT Query Video Tutorial 3.5 Hours
    New Book / Kindle: Exam 70-461 Bootcamp: Querying Microsoft SQL Server 2012


    Sunday, December 22, 2013 8:06 AM
  • Hi Uri,

    Sorry for my unclear description. In fact, each schema has about 350 objects (150+ tables and 150+ stored procedures).

    For 10,000 schemas, the total number of tables and stored procedures would be 3.5 million.

    For 100,000 schemas, the total would be 35 million.

    Is this still huge?


    amoschb

    Sunday, December 22, 2013 8:13 AM
  • Yes, it is. You will probably need a third-party tool, such as those from www.red-gate.com, to manage such a database.

    How big is the database? Can you predict the growth of the database in the near future?


    Best Regards, Uri Dimant SQL Server MVP, http://sqlblog.com/blogs/uri_dimant/

    MS SQL optimization: MS SQL Development and Optimization
    MS SQL Consulting: Large scale of database and data cleansing
    Remote DBA Services: Improves MS SQL Database Performance
    SQL Server Integration Services: Business Intelligence

    Sunday, December 22, 2013 11:49 AM
    Sorry for my unclear description. In fact, each schema has about 350 objects (150+ tables and 150+ stored procedures).

    For 10,000 schemas, the total number of tables and stored procedures would be 3.5 million.

    For 100,000 schemas, the total would be 35 million.

    Is this still huge?

    Yes, millions of database objects is a large number.  I personally don't know of designs with this many objects, which should provide a clue that you could be going down the wrong path.

    I assume each schema will contain an identical set of objects that differ only by the schema-qualified name and data.  In that case, what is the reason for the multiple schemas/databases in the first place?  It seems you could add a column to segregate and logically partition the data. That way, you would have only 150+ tables and procs.  I expect that approach would greatly reduce storage requirements and improve performance and manageability.  But of course there is much we don't know about the application in question.
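
    If permission-level isolation is also required on top of the logical partitioning, one option available here (illustrative names throughout; this assumes a shared dbo.Orders table with a CustomerId column and a database role CustomerRole for customer users) is to map each customer login to its CustomerId and expose only a filtered view:

    -- Map each customer's login to its CustomerId.
    CREATE TABLE dbo.CustomerLogin
    (
        LoginName  sysname NOT NULL PRIMARY KEY,
        CustomerId int     NOT NULL
    );
    GO

    -- The view returns only rows belonging to the caller's customer.
    CREATE VIEW dbo.CurrentCustomerOrders
    AS
    SELECT o.CustomerId, o.OrderId, o.OrderDate
    FROM dbo.Orders AS o
    JOIN dbo.CustomerLogin AS cl
        ON cl.CustomerId = o.CustomerId
    WHERE cl.LoginName = SUSER_SNAME();  -- login of the current caller
    GO

    -- Customers get SELECT on the view only, never on the base table.
    GRANT SELECT ON dbo.CurrentCustomerOrders TO CustomerRole;
    GO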


    Dan Guzman, SQL Server MVP, http://www.dbdelta.com

    Sunday, December 22, 2013 7:52 PM
    Answerer
  • Hi Uri,

    Calculating roughly: for 10,000 schemas (3.5 million tables and stored procedures), the database size (with no real data, only the database objects) is almost 30GB.

    As for growth: roughly estimated, 10% of the 10,000 schemas will each gain about 200MB of data per year, and each of the remaining schemas will stay under 20MB.

    So after one year the database size will grow to roughly 30GB + 200GB (1,000 schemas × 200MB) + 180GB (9,000 schemas × 20MB) = 410GB.

    By the way, you mentioned the solutions from www.red-gate.com; could you please be more specific?

    Thanks very much!


    amoschb

    Monday, December 23, 2013 10:27 AM
  • Hi Dan,

    The background is that we have to isolate customer data strictly.

    The current implementation is one database (150+ tables & 150+ procs) per customer, and now we have realized it is very difficult to manage those databases, so we want to see if we can migrate the existing databases into corresponding schemas in one database.

    Do you have any other ideas?


    amoschb

    Monday, December 23, 2013 12:46 PM