Index Fragmentation Issues

  • Question

  • I have a production database on SQL Server 2005 SP3 plus the latest cumulative update. The server has a weekly index rebuild job created via a maintenance plan using the supplied index rebuild task (not my own custom script). I am also running an index reorganise job three times a week, and an update statistics job just as often. Today I ran the following query in the database to see the status of the indexes...

    SELECT
        tbl.[name] AS TableName,
        ind.[name] AS IndexName,
        mn.index_type_desc AS IndexType,
        mn.avg_fragmentation_in_percent AS [FRAG_%]
    FROM sys.dm_db_index_physical_stats(DB_ID('myproddb'), NULL, NULL, NULL, NULL) AS mn
    INNER JOIN sys.tables tbl ON tbl.[object_id] = mn.[object_id]
    INNER JOIN sys.indexes ind ON ind.[object_id] = mn.[object_id]
                              AND ind.index_id = mn.index_id
    ORDER BY mn.avg_fragmentation_in_percent DESC

    The results look like this:

    TableName    IndexName    IndexType       FRAG_%
    ECL1         NULL         HEAP            99.99
    ECL2         INDX_2       HEAP            99.00
    USER1        NULL         NONCLUSTERED    88.00
    USER2        INDEX_3      NONCLUSTERED    85.00
    LEDG1        INDEX_4      CLUSTERED       84.71

    The above results are a sample only; the real results are similar (I'm not pasting them here because there are so many).

    I am concerned: do we need to worry about so many heaps with more than 90% fragmentation? I also found that a large number of clustered and nonclustered indexes are above 70% fragmentation. I thought that rebuilding and reorganising was supposed to minimise the fragmentation percentage.

    What is your suggestion in this case?

    Mike

    Tuesday, May 4, 2010 7:04 AM

Answers

  • Hi Mike,

    You don't have to worry about fragmentation on an index with fewer than 1,000 pages or so. Normally you plan a reorganise or rebuild for indexes whose fragmentation percentage is greater than 50-60 and whose total page count is greater than 1,000. For heap tables, try to create a clustered index and leave it in place; never create a clustered index just to bring the fragmentation down to a lower value and then drop the clustered key again - the reason is explained by Paul Randal here (http://www.sqlskills.com/BLOGS/PAUL/post/A-SQL-Server-DBA-myth-a-day-(2930)-fixing-heap-fragmentation.aspx).

    Normally I use this script from sqlservercentral.com (http://www.sqlservercentral.com/scripts/Automated+index+defragmentation+script/68802/), which determines the fragmentation percentage and the number of pages and then decides what to do with each index based on those results. I recommend you go with this instead of rebuilding or reorganizing the indexes three times a week using maintenance plans.
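    The threshold logic such a script applies can be sketched roughly as below. This is only a sketch of the approach, not the linked script itself; the 1,000-page floor and the 5%/30% cut-offs are illustrative defaults you should tune.

    ```sql
    -- Sketch: generate REBUILD or REORGANIZE statements depending on
    -- fragmentation and size. Thresholds are illustrative, not gospel.
    DECLARE @sql nvarchar(max);

    SELECT @sql = ISNULL(@sql, N'') +
        N'ALTER INDEX ' + QUOTENAME(i.[name]) + N' ON '
        + QUOTENAME(SCHEMA_NAME(t.[schema_id])) + N'.' + QUOTENAME(t.[name])
        + CASE WHEN ps.avg_fragmentation_in_percent > 30
               THEN N' REBUILD;'
               ELSE N' REORGANIZE;'
          END + CHAR(13)
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ps
    INNER JOIN sys.indexes AS i ON i.[object_id] = ps.[object_id]
                               AND i.index_id = ps.index_id
    INNER JOIN sys.tables AS t ON t.[object_id] = ps.[object_id]
    WHERE ps.index_id > 0                       -- skip heaps
      AND ps.page_count >= 1000                 -- small indexes are not worth touching
      AND ps.avg_fragmentation_in_percent > 5;

    PRINT @sql;   -- review the statements, then run them via EXEC sp_executesql @sql;
    ```

    Printing first rather than executing directly lets you sanity-check what the job would do before letting it loose on production.
    
    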


    Thanks, Leks
    Tuesday, May 4, 2010 10:35 PM
    Answerer
  • Doing a blind reorg/rebuild is wasteful. Even for smaller tables, weigh the possible gains you get by adding an index (read speed) against the losses (during inserts and deletes).
    Setting an appropriate page fill factor will help reduce index fragmentation.

    To assess fragmentation I take a full result set for every index on every table in my database:
    -> DBCC SHOWCONTIG WITH TABLERESULTS, ALL_INDEXES
    -> send it to a temp table
    ----> order by (row count, fragmentation level, page fullness; you may even look at extent switches)
    ----> the sequence of these three depends on your priority
    -> manually script the defragmentation using rebuild or reorg.
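    The steps above can be sketched as follows. The temp table's column list is meant to match the shape of the TABLERESULTS output on SQL 2005; treat this as a starting point and verify the columns on your build.

    ```sql
    -- Capture DBCC SHOWCONTIG output for every index into a temp table.
    CREATE TABLE #frag (
        ObjectName sysname, ObjectId int, IndexName sysname NULL, IndexId int,
        [Level] int, Pages int, [Rows] bigint, MinimumRecordSize int,
        MaximumRecordSize int, AverageRecordSize decimal(10,2),
        ForwardedRecords int, Extents int, ExtentSwitches int,
        AverageFreeBytes decimal(10,2), AveragePageDensity decimal(10,2),
        ScanDensity decimal(10,2), BestCount int, ActualCount int,
        LogicalFragmentation decimal(10,2), ExtentFragmentation decimal(10,2)
    );

    INSERT INTO #frag
    EXEC ('DBCC SHOWCONTIG WITH TABLERESULTS, ALL_INDEXES');

    -- Prioritise: biggest, most fragmented, least full pages first.
    SELECT ObjectName, IndexName, [Rows], LogicalFragmentation, AveragePageDensity
    FROM #frag
    ORDER BY [Rows] DESC, LogicalFragmentation DESC, AveragePageDensity ASC;
    ```

    From that ordered list you can hand-pick which indexes get a REBUILD, which get a REORGANIZE, and which are too small to bother with.
    
    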

    You may also like to look at Hallengren's maintenance scripts, which are comprehensive and have good documentation, at: http://ola.hallengren.com/


    yup
    Tuesday, May 4, 2010 7:48 AM

All replies

  • You need to rebuild the indexes with such a high rate of fragmentation.

    Heaps are useful if you are only inserting data into the table; however, if your table also sees update and delete operations, heaps can cause a lot of I/O issues and are not recommended.

    It is also recommended that you have a clustered index on each table.
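    To see which of your tables are heaps in the first place, a quick check against sys.indexes works (a row with index_id = 0 means the table is a heap):

    ```sql
    -- List all heap tables in the current database (index_id 0 = heap).
    SELECT SCHEMA_NAME(t.[schema_id]) AS SchemaName,
           t.[name] AS TableName
    FROM sys.tables AS t
    INNER JOIN sys.indexes AS i ON i.[object_id] = t.[object_id]
    WHERE i.index_id = 0
    ORDER BY SchemaName, TableName;
    ```
    
    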

     


    SKG: Please mark as Answered if it resolves your issue.
    Tuesday, May 4, 2010 7:19 AM
  • Just to add to the above post: rebuilding indexes consumes a lot of server resources. You need to find a suitable time to do it, or your users may face performance issues.
    SKG: Please mark as Answered if it resolves your issue.
    Tuesday, May 4, 2010 7:21 AM
  • This is one of the most common issues faced by busy production SQL Server databases. You should pay special attention to your indexes and to tables without indexes, i.e. heaps.

     

    With such a high percentage of fragmentation in your indexes, one can say you have a very busy OLTP database (or perhaps a hybrid, with reporting also happening simultaneously and/or on a schedule). It is a good idea to have index maintenance plans, and the right frequency depends entirely on your workload. BOL has guidelines for re-organize versus re-build (e.g. > 10% and < 30%, re-organize; > 30%, re-build), but these are not set in stone; over time you can come up with your own thresholds based on your experience and tests. If frequent rebuilds/reorganizes are not helping, look at the index design as well: are your indexes too wide, do you have a clustered index on a random-value newid() column, and so on. Fixing the design helps performance, and it also alleviates much of the fragmentation. Remember, fragmentation will happen anyway in an OLTP database, but judicious use of a fill factor and pad index can save a lot of time and resources: what used to fragment in 2 days may now take 5 days or so.
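    As an illustration of the fill-factor / pad-index point, here is a rebuild that leaves 10% free space on leaf pages to absorb inserts between maintenance windows. The table and index names are made up for the example:

    ```sql
    -- Hypothetical example: rebuild with a 90% fill factor and padded
    -- intermediate pages, leaving room for new rows between rebuilds.
    ALTER INDEX IX_Orders_CustomerId ON dbo.Orders
    REBUILD WITH (FILLFACTOR = 90, PAD_INDEX = ON);
    ```

    The trade-off is that a lower fill factor makes the index bigger and scans slower, so only lower it where fragmentation actually recurs quickly.
    
    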

     

    And, for the heaps, your strategy depends on how those tables are used. Typically, hot (even warm ;-) tables should never be heaps, so if these tables are being hit by your OLTP queries you should at least create a clustered index on them. Even for occasional reporting, indexes help performance: without them, report queries can play havoc with your system by pushing hot pages out of the buffer and pulling in a huge number of pages from these heaps for scans. If you are using them for MDM/ETL/ELT purposes, it depends a lot on your design and where such tables fit in; for example, a heap is fine for bulk uploads, but if you read from the table after the upload, you should create the index(es) after the bulk write and before the read. The size of your heap table(s) also plays a vital role: if a heap is not very big (say within the bounds of a few pages, depending on the data types), the optimizer will probably choose to scan rather than use the index(es) at all, but you can only tell for sure after a proper test.

    If you can share with us more information on the usage-pattern of the tables, we can dig deeper into it.

     

    Pradyumna

    Tuesday, May 4, 2010 10:13 AM
  • 1. Is the autoshrink option on in your database?

    Or

    2. Are you manually doing any database file shrinks? If so, read this:

    http://www.sqlskills.com/BLOGS/PAUL/post/Why-you-should-not-shrink-your-data-files.aspx
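    You can check the autoshrink setting across all databases in one query; the is_auto_shrink_on column in sys.databases shows it:

    ```sql
    -- List any databases that have autoshrink enabled.
    SELECT [name], is_auto_shrink_on
    FROM sys.databases
    WHERE is_auto_shrink_on = 1;
    ```
    
    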



    SQLEnthusiast
    http://sqlsimplified.com/
    Please click the Mark as Answer button if a post solves your problem!
    Tuesday, May 4, 2010 10:31 AM
  • The tables/indexes need to be defragmented.

    If you want the heap tables to remain heaps - which I would only recommend under very rare circumstances - then you can create a clustered index and drop it again to defragment them.

    If you wish to apply a clustered index, which I usually recommend, then choose it carefully. As a very brief guideline: if you have many nonclustered indexes on the table, the clustering key should be selective; if you do not have too many nonclustered indexes, choose the most often used nonclustered index to become the clustered one.

    If you do want to keep the heaps: if the tables are big, it may be wise to drop all nonclustered indexes before creating and dropping the clustered index, because the nonclustered indexes are rebuilt on any change to the clustered index, which can be very time consuming.
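    That sequence, sketched with made-up table, column, and index names (ECL1 is borrowed from the sample output above; the columns are hypothetical):

    ```sql
    -- Hypothetical sketch: defragment a heap that must stay a heap.
    -- Drop nonclustered indexes first so they are not rebuilt twice.
    DROP INDEX IX_Ecl1_Col2 ON dbo.ECL1;                 -- repeat per NC index

    CREATE CLUSTERED INDEX CIX_Tmp ON dbo.ECL1 (Col1);   -- compacts the data
    DROP INDEX CIX_Tmp ON dbo.ECL1;                      -- back to a heap

    CREATE INDEX IX_Ecl1_Col2 ON dbo.ECL1 (Col2);        -- recreate NC indexes
    ```

    Note Paul Randal's post linked earlier in this thread argues against this create-and-drop pattern; prefer keeping the clustered index unless you have a concrete reason not to.
    
    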


    Bodo Michael Danitz - MCT, MCITP - free consultant - performance specialist - www.sql-server.de
    Tuesday, May 4, 2010 8:20 PM
  • As a suggestion, I would rebuild all your indexes and update their statistics with a full table scan.
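    Per table, that amounts to something like the following (dbo.ECL1, from the sample output, stands in for each of your tables):

    ```sql
    -- Rebuild every index on the table, then update statistics with a
    -- full scan. A rebuild already refreshes the index statistics;
    -- the FULLSCAN update also covers column statistics.
    ALTER INDEX ALL ON dbo.ECL1 REBUILD;
    UPDATE STATISTICS dbo.ECL1 WITH FULLSCAN;
    ```
    
    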

     


    Mohd Sufian www.sqlship.wordpress.com Please mark the post as Answered if it helped.
    Tuesday, May 4, 2010 9:57 PM