Unable to delete rows

Question
-
Hi All,
Trying to delete rows which are considered bad data.
Table size: 200,000,000 rows (the table has one PK, one FK, and a date column).
Trying to delete rows earlier than 1/1/2012, but receiving the following error:
"Transaction (process ID 56) was deadlocked on lock resources with another process..."
Please advise.
Thank you.
Thursday, November 29, 2012 4:53 PM
Answers
-
Please check how many rows are affected by the delete statement:
Select count(*)
from NC_compchanges
where fieldid is null
  and fieldlabel is null
  and value1 is null
  and value2 is null
  and CreateDate < '06/01/2011'
If the count is too big, it is easier to delete in batches with TOP (…).
The following script might help:
declare @rows int
set @rows = 1
while @rows > 0
BEGIN
    Delete TOP (...)
    from NC_compchanges
    where fieldid is null
      and fieldlabel is null
      and value1 is null
      and value2 is null
      and CreateDate < '06/01/2011'
    set @rows = @@ROWCOUNT
END
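Filled in with a hypothetical batch size (the TOP value in the script above is left for you to choose), the loop might look like this; 10,000 is illustrative only and should be tuned for your workload:

```sql
-- Sketch: batched delete, assuming a batch size of 10,000 rows.
-- Smaller batches hold fewer locks at a time, which reduces deadlock exposure.
DECLARE @rows int = 1;
WHILE @rows > 0
BEGIN
    DELETE TOP (10000)
    FROM NC_compchanges
    WHERE fieldid IS NULL
      AND fieldlabel IS NULL
      AND value1 IS NULL
      AND value2 IS NULL
      AND CreateDate < '20110601';  -- unambiguous date literal for '06/01/2011'
    SET @rows = @@ROWCOUNT;         -- loop ends when a batch deletes nothing
END
```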
- Proposed as answer by Naomi N Thursday, November 29, 2012 5:54 PM
- Marked as answer by Kalman Toth Wednesday, December 5, 2012 6:48 PM
Thursday, November 29, 2012 5:48 PM
All replies
-
Can you please provide the DDL for the tables in question (CREATE statements, foreign keys, etc.)?
It will help everyone provide an appropriate solution.
Thursday, November 29, 2012 4:54 PM -
Do you have a trigger on the table you are trying to delete from? While you are deleting the records, is anything else accessing the table?
Please post the deadlock graph (.xdl file).
Since you are deleting a large number of records, it is better to page the delete, i.e. N records at a time.
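To produce the .xdl file requested above, the deadlock graph can be pulled from the default system_health Extended Events session (available in SQL Server 2008 and later); a hedged sketch:

```sql
-- Sketch: extract recent deadlock graphs captured by the built-in
-- system_health session's ring buffer. Save the resulting XML as a
-- .xdl file to open it graphically in SSMS.
SELECT xed.XEvent.query('(data/value/deadlock)[1]') AS DeadlockGraph
FROM (
    SELECT CAST(st.target_data AS xml) AS TargetData
    FROM sys.dm_xe_session_targets AS st
    JOIN sys.dm_xe_sessions AS s
      ON s.address = st.event_session_address
    WHERE s.name = N'system_health'
      AND st.target_name = N'ring_buffer'
) AS t
CROSS APPLY t.TargetData.nodes(
    'RingBufferTarget/event[@name="xml_deadlock_report"]') AS xed(XEvent);
```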
vt
Please mark as answered if I've answered your question, and vote it as helpful to help other users find a solution quicker.
- Edited by SimpleSQL Thursday, November 29, 2012 5:08 PM
Thursday, November 29, 2012 5:01 PM -
delete from NC_compchanges
where fieldid is null
  and fieldlabel is null
  and value1 is null
  and value2 is null
  and CreateDate < '06/01/2011'  -- ComponentOID is the FK
-- I just want to delete these rows; it's bad data and no one is using it.
Thursday, November 29, 2012 5:15 PM -
We don't have any triggers on the table.
Is there any way I can skip the rows that are locked and delete the other rows?
Thanks
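For the question above about skipping locked rows: SQL Server offers the READPAST table hint, which reads past (skips) rows that other sessions have locked rather than blocking on them. A hedged sketch, assuming the same batching approach; note that skipped rows are not deleted in that pass, so the loop must be re-run until nothing remains:

```sql
-- Sketch: READPAST skips row-locked rows instead of waiting on them.
-- Only valid under READ COMMITTED or REPEATABLE READ isolation.
DELETE TOP (10000)
FROM NC_compchanges WITH (READPAST)
WHERE fieldid IS NULL
  AND fieldlabel IS NULL
  AND value1 IS NULL
  AND value2 IS NULL
  AND CreateDate < '06/01/2011';
```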
Thursday, November 29, 2012 5:57 PM -
Please post DDL, so that people do not have to guess what the keys, constraints, Declarative Referential Integrity, data types, etc. in your schema are. If you know how, follow ISO-11179 data element naming conventions and formatting rules. Temporal data should use ISO-8601 formats. Code should be in Standard SQL as much as possible and not a local dialect.
This is minimal polite behavior on SQL forums. Sample data is also a good idea, along with clear specifications. It is very hard to debug code when you do not let us see it.
--CELKO-- Books in Celko Series for Morgan-Kaufmann Publishing: Analytics and OLAP in SQL / Data and Databases: Concepts in Practice Data / Measurements and Standards in SQL SQL for Smarties / SQL Programming Style / SQL Puzzles and Answers / Thinking in Sets / Trees and Hierarchies in SQL
Thursday, November 29, 2012 6:12 PM -
Can you please post the CREATE TABLE statements, the indexes, FKs, etc.? Is your foreign key column indexed? If not, that could cause a table/index scan, which makes your DELETE statement run longer. Look at the execution plan.
If nothing is wrong there, then it could just be the volume of data.
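If the foreign key column turns out to be unindexed, an index along these lines may avoid the scan; the index names below are hypothetical, and ComponentOID is assumed to be the FK column mentioned earlier in the thread:

```sql
-- Sketch: index the FK column so FK validation during the delete
-- can seek instead of scanning the referencing table.
CREATE NONCLUSTERED INDEX IX_NC_compchanges_ComponentOID
    ON NC_compchanges (ComponentOID);

-- Optionally, a filtered index matching the delete predicate
-- (SQL Server 2008+) helps locate just the bad rows.
CREATE NONCLUSTERED INDEX IX_NC_compchanges_CreateDate_Bad
    ON NC_compchanges (CreateDate)
    WHERE fieldid IS NULL AND fieldlabel IS NULL
      AND value1 IS NULL AND value2 IS NULL;
```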
Regards
Satheesh
Thursday, November 29, 2012 6:16 PM