Compression forecasting

  • Question

  • Hi, quick question guys…

    I was given a server with 5 databases, 250 GB each… All sorts of different information in there…

    Here is my question: it’s SQL 2014, and I was asked to calculate/forecast how much space we would save with compression… They want to do page compression; the server is used by just 3-4 users, and their main concern is to save space, not performance.

    How can I accomplish this? I don’t want to calculate this index by index… any thoughts?

    Thursday, July 16, 2015 3:44 PM

Answers

  • I have used sp_estimate_data_compression_savings, but that means going index by index... I want something I can execute in master and get summarized results per DB, like:

    First create this temp table:

    CREATE TABLE #compress_savings
           (db_name          sysname NOT NULL DEFAULT db_name(),
            object_name      sysname NOT NULL,
            schema_name      sysname NOT NULL,
            index_id         int     NOT NULL,
            partition_number int     NOT NULL,
            -- the four size columns are reported by the procedure in KB
            size_with_current_compression_setting_KB          bigint NOT NULL,
            size_with_requested_compression_setting_KB        bigint NOT NULL,
            sample_size_with_current_compression_setting_KB   bigint NOT NULL,
            sample_size_with_requested_compression_setting_KB bigint NOT NULL)

    Then run the below in each of the five databases (since there are so few of them, it is not worth the effort to implement something that loops over all the databases). The column lists above and below match the result set of sp_estimate_data_compression_savings.

    DECLARE @sql nvarchar(MAX)

    SELECT @sql =
       (SELECT 'INSERT #compress_savings (object_name, schema_name, index_id, partition_number,
                   size_with_current_compression_setting_KB,
                   size_with_requested_compression_setting_KB,
                   sample_size_with_current_compression_setting_KB,
                   sample_size_with_requested_compression_setting_KB)
              EXEC sp_estimate_data_compression_savings ' +
              quotename(s.name) + ', ' +
              quotename(o.name) + ', ' +
              ltrim(str(p.index_id)) + ', ' +
              ltrim(str(p.partition_number)) + ', ''PAGE''' + char(13) + char(10)
       FROM   sys.partitions p
       JOIN   sys.objects o ON p.object_id = o.object_id
       JOIN   sys.schemas s ON s.schema_id = o.schema_id
       WHERE  o.type NOT IN ('S', 'SQ', 'IT')
       FOR XML PATH(''), TYPE).value('.', 'nvarchar(MAX)')

    PRINT @sql
    EXEC(@sql)
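
    To turn that into the per-database summary asked for, an aggregation like the following should do (a sketch, assuming the column names in the CREATE TABLE above; the procedure reports sizes in KB, so divide by 1048576 to get GB, and bear in mind the figures are estimates based on a sample of the data):

    -- Per-database totals: current size vs. estimated size under PAGE
    -- compression, converted from KB to GB (1 GB = 1048576 KB).
    SELECT db_name,
           CAST(SUM(size_with_current_compression_setting_KB) / 1048576.0
                AS decimal(10, 2)) AS actual_size_GB,
           CAST(SUM(size_with_requested_compression_setting_KB) / 1048576.0
                AS decimal(10, 2)) AS estimated_size_GB
    FROM   #compress_savings
    GROUP  BY db_name
    ORDER  BY db_name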


    Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se
    • Proposed as answer by Charlie Liao Tuesday, August 4, 2015 7:41 AM
    • Marked as answer by Charlie Liao Wednesday, August 5, 2015 2:10 AM
    Thursday, July 16, 2015 9:54 PM

All replies

  • sp_estimate_data_compression_savings


    Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se
    • Proposed as answer by Martin Guidry Thursday, July 16, 2015 4:05 PM
    Thursday, July 16, 2015 4:01 PM
  • Please try the code below:

    EXEC sp_estimate_data_compression_savings 'dbo', 'salesorderdetail', NULL, NULL, 'page';

    Change the schema and table names to match your own.

    Thursday, July 16, 2015 4:11 PM
  • Apart from what was already posted: is it SQL 2014 Enterprise Edition? Compression is an Enterprise-only feature.

    The reason this dawned on me is that you said the system is used by 3-4 users; I would be surprised if you were running Enterprise in such a case, but then I don't completely know your situation...

    SELECT SERVERPROPERTY('Edition')

    should tell you your SQL edition...
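
    For a programmatic check, SERVERPROPERTY('EngineEdition') returns a number that is easier to test in a script (a small sketch; the value 3 covers the Enterprise-class editions, including Developer and Evaluation, all of which support compression):

    -- EngineEdition = 3 means an Enterprise-class engine
    -- (Enterprise, Developer, or Evaluation).
    SELECT SERVERPROPERTY('Edition')       AS edition,
           SERVERPROPERTY('EngineEdition') AS engine_edition;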


    Hope it Helps!!

    Thursday, July 16, 2015 4:19 PM
  • I’ve been investigating the SQL market for years but didn’t figure this question out. However, it’s a highly interesting question. Thank you for your cooperation!
    Thursday, July 16, 2015 5:54 PM
  • As others have said, call the SP!  If you're on Enterprise ...

    Your result will be that it saves about 50%.

    But a terabyte of disk storage is about $49 at Staples; is this trip really necessary?

    Josh


    • Edited by JRStern Thursday, July 16, 2015 7:20 PM
    Thursday, July 16, 2015 7:19 PM
  • Yes; it’s Enterprise!

    I have used sp_estimate_data_compression_savings, but that means going index by index... I want something I can execute in master and get summarized results per DB, like:

    DB X
    actual size: a GB.
    future size: b GB.

    DB Y
    actual size: c GB.
    future size: d GB.

    (I know that with page compression I would save around 40%, but I need to justify it…)

    “Terabyte of disk storage is about $49 at Staples”: I agree! The thing is that the server is in the cloud, so it’s a bit more expensive; plus we would also save on backups…
    Thursday, July 16, 2015 7:54 PM
  • (I know that with page compression I would save around 40%, but I need to justify it…)

    “terabyte of disk storage is about $49 at Staples” I agree! Thing is that the server is in the cloud; so it’s a bit more expensive; plus we would also save on backups…

    Backup compression is separate ... isn't it?

    I sort of assume SQL Server decompresses the compressed database and then lets backup recompress it, though I don’t know that for a fact.

    Josh

    Thursday, July 16, 2015 11:07 PM
  • Microsoft has already written a whitepaper on this; please read it:

    https://technet.microsoft.com/en-us/library/dd894051%28v=sql.100%29.aspx

    The idea of going just for space savings is incorrect; you must look at query performance as well.
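
    The whitepaper weighs each index’s scan/update profile before choosing a compression type, since update-heavy indexes pay a higher CPU price for PAGE compression. A rough sketch of that idea (not the paper’s exact query), using sys.dm_db_index_operational_stats:

    -- Read/write profile per index in the current database since the
    -- last restart; sort the most update-heavy indexes to the top.
    SELECT object_name(os.object_id) AS table_name,
           os.index_id,
           SUM(os.range_scan_count + os.singleton_lookup_count) AS reads,
           SUM(os.leaf_insert_count + os.leaf_update_count +
               os.leaf_delete_count)                            AS writes
    FROM   sys.dm_db_index_operational_stats(db_id(), NULL, NULL, NULL) os
    GROUP  BY os.object_id, os.index_id
    ORDER  BY writes DESC;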


    Please mark this reply as answer if it solved your issue or vote as helpful if it helped so that other forum members can benefit from it

    My Technet Wiki Article

    MVP

    Friday, July 17, 2015 4:22 AM
  • I sort of assume SQL Server decompresses the compressed database and then lets backup recompress it, though I don’t know that for a fact.

    Backup reads pages from disk and doesn’t care too much about what’s in those pages. It certainly does not decompress compressed data. That would be utterly pointless.
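
    And to the question above: yes, backup compression is a separate feature; you request it per backup (or set the server-wide “backup compression default” option). A minimal sketch, with a placeholder database name and path:

    -- Backup compression is independent of row/page data compression.
    -- YourDb and the path below are placeholders.
    BACKUP DATABASE YourDb
    TO DISK = N'X:\Backups\YourDb.bak'
    WITH COMPRESSION;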


    Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se
    Friday, July 17, 2015 10:00 AM