Recovering from a SQLCommand Timeout in .Net

  • Question

  • I have a web application that calls a stored procedure via a SQLCommand that updates records on our server.  The other night, another application created a long-held lock that caused a command timeout.  My code caught the exception and attempted the next call to the stored procedure.  The problem is that the stored procedure apparently never releases its locks (it uses transaction processing), so each of my successive calls also failed, and everyone else on our system was locked out as well.  I know I can just increase the timeout value, but that doesn't really resolve the problem.  I'm wondering what the correct way to recover from this situation is?

    My stored procedure follows this format:

    begin transaction
    update mytable set myval = @val
    IF @@error > 0
    BEGIN
        ROLLBACK
        RETURN
    END
    commit

    And my code looks like this:

    int myVal;
    while (qMyVals.Count > 0)
    {
        try
        {
            myVal = Convert.ToInt32(qMyVals.Dequeue());
            mycommand.Parameters[0].Value = myVal;
            mycommand.ExecuteNonQuery();
        }
        catch (Exception exc)
        {
            // what do I do here?
        }
    }
    con.Close();

    Thanks.
    Friday, February 1, 2008 2:48 PM

All replies

  • I moved this thread to a forum more suitable for the question. The original forum it appeared in covers SQLCLR, i.e. the CLR hosted inside SQL Server.

    Niels
    Friday, February 1, 2008 4:32 PM
  • Hello,

    Try this for the catch:

    catch (SqlException exc)
    {
        string str;
        str = exc.Message + Environment.NewLine;
        str += "  SqlErrors numbers = " + exc.Errors.Count.ToString() + Environment.NewLine;
        int i = 1;
        foreach (SqlError err in exc.Errors)
        {
            str += "  SqlError " + i.ToString() + "  " + err.Message + Environment.NewLine;
            i++;
        }
        MessageBox.Show(str, " Error with stored procedure something");
        // something is, for example, the name of your stored procedure
        // ...other code, such as closing the connection, and then exit the program
    }

    I think that's because the code is not complete; I haven't seen the definition of the parameter

    mycommand.Parameters.Add("@val", SqlDbType.Int);

    or it's in another part of the code and just not shown.

    With this code you will have more information about your lock.

    Also try looking at the locks in the Activity Monitor of SQL Server Management Studio, launched before your program, with the refresh interval set to 5 seconds.

    I know there is a system stored procedure that helps identify lock problems, sp_lock or something like it.

    Have a nice day

    Remark: your SqlCommand is in a while loop. Nice, but how do you get out of the while loop? Or are you using a Queue class (with Dequeue for walking through the Queue)?

    Friday, February 1, 2008 5:58 PM
  • There's really no WHERE clause on that UPDATE statement?  Every iteration of the loop you're setting the same column in every row in the table to a specific value?

     

    What evidence do you have that the stored procedure timed out?  Are you sure the problem was a timeout, and not a deadlock?
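
    For reference, here is a minimal sketch of how a catch block could tell the two cases apart, reusing the mycommand object from the original post. In ADO.NET a command timeout surfaces as a SqlException with Number == -2, while a deadlock victim gets SQL Server error 1205:

    try
    {
        mycommand.ExecuteNonQuery();
    }
    catch (SqlException exc)
    {
        if (exc.Number == -2)
        {
            // Command timeout: the batch was aborted mid-transaction, so the
            // connection may still be holding locks on the server.
        }
        else if (exc.Number == 1205)
        {
            // Deadlock victim: SQL Server has already rolled the transaction
            // back, so the call can usually just be retried.
        }
        else
        {
            throw;  // some other database error
        }
    }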

    Saturday, February 2, 2008 9:02 AM
  • You actually need to move the con.Close() line of code into the finally {} section of the exception handler. That way the connection will be closed regardless of whether the application got an exception or not, and all the resources associated with this connection on the database server side will be released, including locks.
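
    A minimal sketch of that, reusing the connection and command names from the original post:

    try
    {
        while (qMyVals.Count > 0)
        {
            mycommand.Parameters[0].Value = Convert.ToInt32(qMyVals.Dequeue());
            mycommand.ExecuteNonQuery();
        }
    }
    finally
    {
        con.Close();  // runs whether or not an exception was thrown, releasing the locks held by this connection
    }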

    Saturday, February 2, 2008 12:14 PM
    Moderator
  • In response to the earlier comment, yes, there is a WHERE clause in the update.  In fact, a number of different tables get updated by the stored proc.

    As for closing the connection in the finally block, I understand that.  But what I would like the exception handler to do is to recover, reopen the connection, and then continue where it left off, somewhat like:

    while records to update
       call stored procedure with next recordid
       if timeout because of lock
          close connection
          if max timeouts reached
             exit
          else
             reopen connection
             reset loop to same recordid
          end if
       end if
    end while
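
    A rough C# version of that loop, reusing the names from my earlier post; the retry limit and the check for SqlException.Number == -2 (the ADO.NET timeout error) are illustrative assumptions:

    int timeouts = 0;
    const int maxTimeouts = 3;  // assumed limit
    while (qMyVals.Count > 0)
    {
        int myVal = Convert.ToInt32(qMyVals.Peek());  // don't dequeue until the update succeeds
        try
        {
            mycommand.Parameters[0].Value = myVal;
            mycommand.ExecuteNonQuery();
            qMyVals.Dequeue();  // success: move on to the next record id
        }
        catch (SqlException exc)
        {
            if (exc.Number != -2 || ++timeouts >= maxTimeouts)
                break;  // not a timeout, or max timeouts reached
            con.Close();  // releases the locks held by this connection
            con.Open();   // the same connection object can be reused after reopening
            // the loop comes back around with the same record id
        }
    }
    con.Close();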

    Tuesday, February 5, 2008 1:06 PM
  • Hello,

    I was surprised by:

    In fact, a number of different tables get updated by the stored proc.

    Do you mean that more than one table is updated inside the stored procedure?

    If that's the case, I'm not surprised that you have timeout problems.

    Maybe you should have a look at triggers. But I've never used them (my black hole with SQL Server), so I fear I won't be able to help you, but higher specialists like Robert Rossney or VMazur should be able to help you better than I can (I won't mention the many others; the list is too long).

    I hope that helps the thread advance towards a solution.

    Have a nice day

    Tuesday, February 5, 2008 1:22 PM
  • I can't tell without examining things more closely, but it sure seems to me like you have a design problem that you're trying to code around.  All that writing a brute-force method that retries when it gets a timeout is going to do is make the problem worse:  now not only is your database under strain and failing, but you're going to increase the level of strain by trying to update it over and over again.

     

    Assuming that you really are getting timeout exceptions, and not deadlocks, the first thing I'd do is look at the execution plan for the queries in the SP and look for table scans.  You can often fix problems very quickly by identifying a poorly-formed join, or a WHERE clause that could be using a column that's indexed but instead is using one that's not.

     

    If you're making one SP call per record ID, it's likely that you need to redesign the SP.  Database servers are most efficient when they operate on sets of records.  Making one SP call per record is very wasteful:  you incur all of the costs of setting up and tearing down a command, beginning and committing the transaction, and so on, for every single record.

     

    If possible, you should design SPs so that they operate on a set of rows.  For example, let's suppose your SP is executing these statements:

     

    BEGIN TRANSACTION
    UPDATE table1 SET x = 0 WHERE ID = @ID
    UPDATE table2 SET y = 0 WHERE ID = @ID
    UPDATE table3 SET z = 0 WHERE ID = @ID
    COMMIT

     

    You could instead insert all of your IDs into a table and then execute these statements:

     

    BEGIN TRANSACTION
    UPDATE t SET t.x = 0
       FROM table1 t
       JOIN idlist i ON t.ID = i.ID
    UPDATE t SET t.y = 0
       FROM table2 t
       JOIN idlist i ON t.ID = i.ID
    UPDATE t SET t.z = 0
       FROM table3 t
       JOIN idlist i ON t.ID = i.ID
    COMMIT

     

    This sort of change requires a little more design work (like, you have to create and manage the idlist table), but it often speeds up performance dramatically. 
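
    If you went that route, the client side might look roughly like the sketch below. The idlist table, the UpdateFromIdList procedure name, and the use of SqlBulkCopy are all illustrative assumptions on my part, not something from the original code:

    // Collect the queued record ids into a DataTable, bulk-copy them into the
    // (assumed) idlist staging table, then make a single set-based procedure call.
    DataTable ids = new DataTable();
    ids.Columns.Add("ID", typeof(int));
    while (qMyVals.Count > 0)
        ids.Rows.Add(Convert.ToInt32(qMyVals.Dequeue()));

    using (SqlBulkCopy bulk = new SqlBulkCopy(con))
    {
        bulk.DestinationTableName = "idlist";
        bulk.WriteToServer(ids);
    }

    using (SqlCommand setCmd = new SqlCommand("UpdateFromIdList", con))
    {
        setCmd.CommandType = CommandType.StoredProcedure;
        setCmd.ExecuteNonQuery();  // one round trip instead of one call per record
    }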

     

    Tuesday, February 5, 2008 8:44 PM
  • Thanks for all the replies.  Turns out the solution was easier than I thought.  Basically, close the connection, reopen it, attach the new connection to the command, and continue on my merry way:

    // inside the error handler
    conn.Close();  // this releases any locks
    conn.Open();
    cmd.Connection = conn;  // not sure if necessary

    And back to the top of my loop

    Thursday, February 7, 2008 3:38 PM