Delete operation costs more in execution plan

  • Question

  • User-1684678200 posted

    Hi all,

    I have a table which has almost 3 lakh (about 300,000) rows and 8 indexes. When I try to delete data from this table it takes a long time, maybe because of the indexes.

    The delete operation accounts for 95% of the cost in the execution plan.

    How can I perform the delete operation quickly, either with the indexes or without them?

    Thanks

    Thursday, February 18, 2016 8:32 AM

Answers

  • User753101303 posted

    What criteria are used to delete those rows? In short, is an index used to select which rows should be deleted?

    I'm not used to deleting that many rows, and I suspect that even if everything is already optimised, it simply takes longer to delete a fair number of rows than a single row. Do you have an actual figure? Does it take 30 seconds or 2 hours? What is "much time"?

    I find it convenient to set a reference point when doing this kind of work. Here I would likely create a similar test table containing the rows I want to delete, perhaps without any index, and delete them all without any criteria. The goal is to compare that time, which should be roughly the best time I can hope for, with the time actually seen with the selection criteria, so that I can first understand whether I'm currently at 10%, 50% or 90% of the performance I can expect.

    In short, for now I'm trying to understand what exactly is slow and what I could reasonably expect, before taking action based on those findings. It is harder to optimize without knowing exactly what is slow or which performance level I could expect to reach.
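
    A rough T-SQL sketch of that baseline test (the table, the CreatedDate column and the cut-off value are only placeholders for your own schema and delete criteria):

        -- Copy the rows you plan to delete into a separate heap with no indexes.
        SELECT *
        INTO   dbo.MyTable_DeleteTest
        FROM   dbo.MyTable
        WHERE  CreatedDate < '20150101';     -- your real delete criteria here

        -- Time a delete of everything in the copy: no indexes, no WHERE clause.
        -- The elapsed time is roughly the best you can hope for.
        SET STATISTICS TIME ON;
        DELETE FROM dbo.MyTable_DeleteTest;
        SET STATISTICS TIME OFF;

        DROP TABLE dbo.MyTable_DeleteTest;

    Comparing that elapsed time with the time of the real DELETE shows how much of the gap comes from the indexes and the selection criteria.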

    • Marked as answer by Anonymous Thursday, October 7, 2021 12:00 AM
    Friday, February 19, 2016 10:46 AM

All replies

  • User1083584480 posted

    hi,

    Ideally, a table that sees a lot of DML operations should not have more than 3-4 indexes, so the first thing to look at is reducing the number of indexes. Also check whether you actually need to delete every record; in that case you can use TRUNCATE instead.
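
    For example, if every row really has to go, something like this (dbo.MyTable is just a placeholder name) is much cheaper than a logged row-by-row delete, though TRUNCATE needs ALTER permission and cannot be used while the table is referenced by a foreign key:

        -- Row-by-row, fully logged, honours a WHERE clause and fires delete triggers:
        DELETE FROM dbo.MyTable;

        -- Deallocates whole data pages, minimal logging, resets any identity seed;
        -- only an option when ALL rows must go:
        TRUNCATE TABLE dbo.MyTable;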

    The other option is to split the table in two: move the older records to a separate archive table, so that the current table holds far fewer rows and transactions against it complete in significantly less time.
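
    A minimal sketch of that archiving idea, assuming the rows can be split on a date column and that an archive table with the same structure already exists (all names and the cut-off date are placeholders):

        BEGIN TRANSACTION;

        -- Move older rows into the archive table.
        INSERT INTO dbo.MyTable_Archive
        SELECT *
        FROM   dbo.MyTable
        WHERE  CreatedDate < '20150101';

        -- Remove them from the live table.
        DELETE FROM dbo.MyTable
        WHERE  CreatedDate < '20150101';

        COMMIT TRANSACTION;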

    Thursday, February 18, 2016 11:24 AM
  • User753101303 posted

    Hi,

    Doesn't the plan give any more detail about what is slow? Do you have JOINs in addition to your WHERE clause? Is it slow as well if you run a SELECT COUNT(*) with the same criteria rather than a DELETE?

    BTW 95% doesn't mean much. It could be 95% of 10 minutes or 95% of 10 ms. How many rows do you delete and how much time does it take?
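
    One way to get those numbers (the table name and filter below are only placeholders): run the count and the delete with timing statistics switched on and compare the elapsed times reported in the Messages tab:

        SET STATISTICS TIME ON;
        SET STATISTICS IO ON;

        -- How many rows match, and how long does just finding them take?
        SELECT COUNT(*)
        FROM   dbo.MyTable
        WHERE  CreatedDate < '20150101';

        -- Same criteria as a DELETE; compare its elapsed time with the count above.
        DELETE FROM dbo.MyTable
        WHERE  CreatedDate < '20150101';

        SET STATISTICS TIME OFF;
        SET STATISTICS IO OFF;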

    Thursday, February 18, 2016 3:44 PM
  • User-1684678200 posted

    Hi,

    Thanks for your valuable answer. I am deleting lakhs of rows and the statement does not have any JOINs. This is what is taking so much time.

    A SELECT COUNT(*) with the same criteria runs faster than the DELETE operation.

    As I said earlier, the table has 8 indexes. How can I improve the performance?

    Friday, February 19, 2016 9:49 AM