SqlDataReader and large databases - strange behaviour

  • Question

  • I'm having a strange problem with SqlDataReader. I have a database table with about 3.6 million rows. Whenever I run the query
    Code Block

    SELECT id,word FROM dictionary WHERE word IN ( {0} )

    where {0} is filled with about 7,000 words, I always get only 750 rows back, no matter how many words I put into the query. I know for sure that those words are there: I checked with 7,000 single SELECT queries, and they all ran fine, just far too slowly. The database engine is SQL Express and I run it on a laptop with 1 GB of RAM (I really can't believe what I just wrote).
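
    One way to narrow down where the 750-row cutoff happens is to compare a server-side COUNT(*) over the same predicate with the number of rows the reader actually delivers. A minimal sketch, assuming the table and columns from the query above; the connection string and word list are placeholders:

    Code Block

    using System;
    using System.Data.SqlClient;

    class RowCountCheck
    {
        static void Main()
        {
            // Placeholder for the ~7,000 comma-delimited words.
            string inList = "'alpha', 'beta', 'gamma'";

            using (var connection = new SqlConnection("...connection string..."))
            {
                connection.Open();

                // Rows the server says match the predicate.
                var countCmd = new SqlCommand(
                    "SELECT COUNT(*) FROM dictionary WHERE word IN (" + inList + ")",
                    connection);
                int serverCount = (int)countCmd.ExecuteScalar();

                // Rows actually delivered through SqlDataReader.
                var selectCmd = new SqlCommand(
                    "SELECT id, word FROM dictionary WHERE word IN (" + inList + ")",
                    connection);
                int readerCount = 0;
                using (SqlDataReader reader = selectCmd.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        readerCount++;
                    }
                }

                // If these differ, the rows are being lost client-side;
                // if they match, the predicate itself only returns 750 rows.
                Console.WriteLine("COUNT(*): {0}, reader rows: {1}",
                    serverCount, readerCount);
            }
        }
    }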

    Any suggestions? What can I do in such a situation? Where should I look for a solution? Maybe I'm using the wrong class for this task, or maybe a completely wrong query?

    I tried setting ROWCOUNT to 0, but it didn't help. I don't think I had it set to limit results before.

    Any help or suggestions would be appreciated.


    BTW, the database file weighs only about 200 MB.
    Tuesday, January 22, 2008 3:09 AM

All replies

  • You cannot pass a comma-delimited list of values as a parameter for an IN clause; it is not supported. There are several ways to work around this limitation.

    1. In the case of SQL Server (and probably some other database servers) you could pass an array of values as XML into a stored procedure. See my KB article about it:

    http://support.microsoft.com/kb/555266/en-us


    2. If the values for the IN clause are stored in another table, you could write a joined SQL statement and would not need to pass the values from the application at all.

    3. You can build the SQL statement dynamically inside the application and then execute it without passing a parameter, as sketched below.
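
    For option 3, a minimal sketch of building the statement dynamically, but with one SqlParameter per value instead of pasting the words in as string literals (table and column names are taken from the question; the connection string and word list are placeholders). Note that SQL Server allows at most 2100 parameters per command, so a list of ~7,000 words would have to be split into batches:

    Code Block

    using System;
    using System.Data.SqlClient;
    using System.Text;

    class DynamicInQuery
    {
        static void Main()
        {
            string[] words = { "alpha", "beta", "gamma" }; // placeholder word list

            using (var connection = new SqlConnection("...connection string..."))
            using (var command = new SqlCommand())
            {
                // Build "SELECT ... WHERE word IN (@w0, @w1, ...)" and bind
                // each word to its own parameter.
                var sql = new StringBuilder(
                    "SELECT id, word FROM dictionary WHERE word IN (");
                for (int i = 0; i < words.Length; i++)
                {
                    string name = "@w" + i;
                    if (i > 0) sql.Append(", ");
                    sql.Append(name);
                    command.Parameters.AddWithValue(name, words[i]);
                }
                sql.Append(")");

                command.CommandText = sql.ToString();
                command.Connection = connection;
                connection.Open();

                using (SqlDataReader reader = command.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        // Assumes id is an int column, as in the question.
                        Console.WriteLine("{0}\t{1}",
                            reader.GetInt32(0), reader.GetString(1));
                    }
                }
            }
        }
    }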


    Tuesday, January 22, 2008 11:26 AM
    Moderator
  • Thank you for your reply.
    I was building the SQL statement dynamically with a StringBuilder, pasting the comma-delimited values into the query string.
    I found the solution to this problem. I was stupid enough not to check the size of the transaction log file. After one SELECT on this table the transaction log grew to 2 GB, so I thought that might be the limit for the Express edition. I got the Standard edition for testing and everything worked just fine there, and much faster than on Express; the transaction log grew to 2.6 GB.
    I lost a few days due to my lack of experience with big databases, or at least databases that are big to some. It is the biggest database (in terms of rows per table) I have ever had to work with.

    Thanks for the links and the tips. The article is interesting.
    Tuesday, January 22, 2008 8:48 PM