Prevent concurrency issues

  • Question

  • Are these good recommendations to give developers to avoid concurrency issues?

    1) WITH (NOLOCK) or WITH (READUNCOMMITTED) in SELECT statements?

    2) BEGIN TRAN in UPDATE / INSERT / DELETE statements?

    Any suggestions are greatly appreciated.
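    For reference, a hedged sketch of the two patterns being asked about (table and column names are made up for illustration):

```sql
-- Pattern 1: read without taking shared locks (dirty reads become possible)
SELECT OrderId, Status
FROM dbo.Orders WITH (NOLOCK);  -- same effect as WITH (READUNCOMMITTED)

-- Pattern 2: explicit transaction around a DML statement
BEGIN TRAN;
UPDATE dbo.Orders
SET Status = 'Shipped'
WHERE OrderId = 42;
COMMIT TRAN;
```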

    Tuesday, July 1, 2014 5:07 PM

Answers

  • Are these good recommendations to give developers to avoid concurrency issues?

    1) WITH (NOLOCK) or WITH (READUNCOMMITTED) in SELECT statements?

    2) BEGIN TRAN in UPDATE / INSERT / DELETE statements?

    Any suggestions are greatly appreciated.

    1. It's not recommended as a good practice, especially if you are dealing with an OLTP database, because you might get uncommitted or inconsistent data. The only case where NOLOCK can be useful is when you are querying a history table where you are sure no updates are going on, but in general avoid using it.

    2. It's good practice to wrap DML statements in BEGIN TRAN and COMMIT with a TRY...CATCH block. BEGIN TRAN / COMMIT is used to make the transaction atomic. I have this implemented as a coding standard in my environment.

    PS: The above are good practices, but things often change with requirements and surroundings.
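    A minimal sketch of the coding standard described in point 2, assuming illustrative table and column names (THROW requires SQL Server 2012 or later):

```sql
BEGIN TRY
    BEGIN TRAN;

    UPDATE dbo.Accounts SET Balance = Balance - 100 WHERE AccountId = 1;
    UPDATE dbo.Accounts SET Balance = Balance + 100 WHERE AccountId = 2;

    COMMIT TRAN;
END TRY
BEGIN CATCH
    -- Roll back the whole unit if any statement fails, keeping it atomic
    IF @@TRANCOUNT > 0
        ROLLBACK TRAN;
    THROW;  -- re-raise the original error to the caller
END CATCH;
```

    Both updates succeed or neither does; the CATCH block guarantees nothing is left half-committed.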


    Please mark this reply as answer if it solved your issue or vote as helpful if it helped so that other forum members can benefit from it.

    My TechNet Wiki Articles


    • Edited by Shanky_621MVP Tuesday, July 1, 2014 5:49 PM
    • Proposed as answer by Sofiya Li Wednesday, July 2, 2014 7:30 AM
    • Marked as answer by Sofiya Li Tuesday, July 8, 2014 8:36 AM
    Tuesday, July 1, 2014 5:48 PM

  • Neither is good. The READ UNCOMMITTED isolation level can mask concurrency issues for readers (SELECT queries) at the price of inconsistent data. It is not only about reading rows that have not been committed: by using that isolation level, you tell SQL Server that you do not care about consistency. That means you can get duplicate reads, skipped rows and all sorts of other phenomena. It also would not help in the case of blocking by writers.

    Explicit transactions would not help much either; they only control how long exclusive (X) locks are held. Leaving data consistency requirements out of the discussion, longer transactions mean exclusive (X) locks are held longer, since they are kept until the end of the transaction. In the REPEATABLE READ and SERIALIZABLE levels, shared (S) locks are also held until the end of the transaction.
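    To illustrate how long shared (S) locks can be held, a sketch under REPEATABLE READ (object names are illustrative):

```sql
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;

BEGIN TRAN;
    -- The S lock taken by this read is kept, not released after the statement...
    SELECT Status FROM dbo.Orders WHERE OrderId = 42;

    -- ...so a concurrent UPDATE of the same row blocks until this point:
COMMIT TRAN;  -- S lock released only here
```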

    Enabling READ COMMITTED SNAPSHOT can help mask reader/writer blocking, at the price of higher tempdb load and fragmentation.
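    Turning it on is a database-level setting; a hedged sketch (the database name is illustrative, and the option needs exclusive access to the database to take effect):

```sql
-- Switch the default READ COMMITTED behavior to row versioning
ALTER DATABASE MyDatabase
SET READ_COMMITTED_SNAPSHOT ON
WITH ROLLBACK IMMEDIATE;  -- terminates other sessions so the change can apply
```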

    The best way to deal with locking and blocking issues is to eliminate the root cause of the blocking, which in most cases is non-optimized queries that acquire more locks than needed. This link provides a good starting point for understanding how SQL Server handles locking.


    Thank you!

    Dmitri V. Korotkevitch (MVP, MCM, MCPD)

    My blog: http://aboutsqlserver.com

    • Proposed as answer by Sofiya Li Wednesday, July 2, 2014 7:31 AM
    • Marked as answer by Sofiya Li Tuesday, July 8, 2014 8:37 AM
    Tuesday, July 1, 2014 7:48 PM
  • Will enabling Committed Snapshot Isolation in the database help in OLTP? I was reading http://blogs.technet.com/b/sql_server_isv/archive/2010/12/21/using-read-committed-snapshot-isolation-mini-lab.aspx

    Ami,

    It's difficult to say what the best isolation level for your environment would be, as we don't know your workload and queries. Read committed snapshot has advantages as well as disadvantages; you should read the link below before implementing it. I would suggest you do thorough testing before going with the snapshot isolation level.

    If you are not facing heavy blocking and the application is running fine, I would suggest you continue with the default read committed isolation level. If you are under the impression that optimistic isolation levels are always good, you are wrong; please read the following resource, especially the section on the cost of row-versioning-based isolation levels:

    http://technet.microsoft.com/en-us/library/ms188277%28v=sql.105%29.aspx
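    Before testing, you can check which row-versioning options are currently enabled for a database via the sys.databases catalog view (the database name is illustrative):

```sql
SELECT name,
       is_read_committed_snapshot_on,
       snapshot_isolation_state_desc
FROM sys.databases
WHERE name = 'MyDatabase';
```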


    Please mark this reply as answer if it solved your issue or vote as helpful if it helped so that other forum members can benefit from it.

    My TechNet Wiki Articles

    • Proposed as answer by Sofiya Li Wednesday, July 2, 2014 7:30 AM
    • Marked as answer by Sofiya Li Tuesday, July 8, 2014 8:36 AM
    Tuesday, July 1, 2014 9:58 PM
  • Read this great book http://www.red-gate.com/community/books/defensive-database-programming

    Best Regards, Uri Dimant, SQL Server MVP, http://sqlblog.com/blogs/uri_dimant/

    MS SQL optimization: MS SQL Development and Optimization
    MS SQL Consulting: Large scale of database and data cleansing
    Remote DBA Services: Improves MS SQL Database Performance
    SQL Server Integration Services: Business Intelligence

    • Proposed as answer by Sofiya Li Wednesday, July 2, 2014 7:31 AM
    • Marked as answer by Sofiya Li Tuesday, July 8, 2014 8:36 AM
    Wednesday, July 2, 2014 7:26 AM
    Answerer

All replies

  • Will enabling Committed Snapshot Isolation in the database help in OLTP? I was reading http://blogs.technet.com/b/sql_server_isv/archive/2010/12/21/using-read-committed-snapshot-isolation-mini-lab.aspx
    Tuesday, July 1, 2014 6:36 PM

  • Thank you both for the suggestions. I will go through the articles.
    Wednesday, July 2, 2014 2:32 AM