Views vs Select

  • Question

    I've done some research on this topic and am looking for some quality comments and suggestions.

     

    I have an application I've been working on for quite some time. As the software continues to be used, clients are reporting a gradual slowing of the app as more data is entered. I've done all the normal things, such as checking the server's load and SQL Server itself, but it occurs to me that the cause might be a flaw in my design.

     

    Currently there are two main tables in the SQL 2005 database: People and Cases. Cases has a key that ties it to People (i.e. case record 200432 has a people_id field of 15 to tie it to people record 15). To open the main list of cases I need to include info from the People table, so I do a select * from cases, people where people.id = cases.people_id. This works, but returns very slowly. I later decided to remove the * and return only the 20 or so columns that were needed. While this helped quite a bit, the ever-increasing quantity of information is starting to slow the app down again.
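    For reference, the shape of that query with an explicit column list would be something like the sketch below. Only people.id and cases.people_id come from the post; every other column name is a hypothetical placeholder.

    ```sql
    -- Explicit column list instead of SELECT *; column names other than
    -- people.id and cases.people_id are illustrative placeholders.
    SELECT TOP 800
        c.case_number,
        c.opened_date,
        p.last_name,
        p.first_name
    FROM cases AS c
    INNER JOIN people AS p
        ON p.id = c.people_id
    ORDER BY c.opened_date DESC;
    ```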

     

    For example, there are currently around 12,000 cases and 15,000 people. I have the query limited to the top 800 rows, and it takes about 45 seconds to return and populate my grid.

     

    So... here's my question. Should the query be made into a database view, which the app would then access, rather than running the query every time? Or would a stored procedure work better? I can't help but think there has to be a way to tell SQL Server "you're going to need this... keep it in memory" and have it keep an updated copy ready for the user.

     

    My solution in the past has been to limit the top x to fewer and fewer records and have the user search for anything older. However, this is becoming less and less workable.

     

    Any insight would be appreciated.

    Tuesday, December 9, 2008 2:56 AM

Answers

  • Neither a view nor a stored procedure is going to help, because the cause of your problem is the volume of data. I don't think the database engine is spending much time parsing and compiling the query. That said, I would certainly recommend creating a stored procedure, which might save some CPU cycles (although databases do cache query plans nowadays).

    Here are some things I would try, if you have not done so already.

    Create indexes on people.id and cases.people_id.

    Create a materialized view (not a regular view) over your top 800 rows, refreshed nightly or as your requirements dictate.

    Cache the query results in your application and refresh them as needed (similar to the materialized view mentioned above, except in the application layer instead of the database layer).

    Move historical records to an OLAP cube or a different table. It sounds like you only need 800 rows from your Cases table, so those are your transactional rows. You are probably keeping both OLTP and historical rows in the same table, hence the table has grown larger, so you would have to offload unused/historical rows to a different table or OLAP cube. A good DBA or data architect at your company can provide guidance on history tables and OLAP cubes.
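    The first suggestion above might look like this in T-SQL; the index names are arbitrary, and if people.id is already the primary key it is likely indexed already.

    ```sql
    -- Index the join keys on both sides of the people/cases join so the
    -- join can seek instead of scanning either table.
    CREATE INDEX IX_people_id ON people (id);
    CREATE INDEX IX_cases_people_id ON cases (people_id);
    ```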

    Hope this helps.

    { Gaja; }

    http://gajakannan.com/netarch.aspx

    Tuesday, December 9, 2008 5:56 AM

All replies

  • Thank you very much for taking the time to respond. Following your general ideas, I went on to create indexes on the cl_id and id fields, and additional research led me to add a handful of other columns that the initial query requested to the indexes as included columns. I did this for both tables. That yielded a stunning 50% increase in speed: previously the query took 34 seconds to pull 800 rows, and the same exact query was responding in 17 seconds.
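    A covering index with included columns, as described above, can be created like this in SQL Server 2005. The column names besides people_id are hypothetical placeholders for whatever the grid query actually selects.

    ```sql
    -- A covering index: the key column drives the join, and the INCLUDEd
    -- columns let the query be answered from the index alone, avoiding
    -- extra lookups into the base table.
    CREATE INDEX IX_cases_people_id_covering
    ON cases (people_id)
    INCLUDE (case_number, opened_date, status);
    ```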

     

    I went a step beyond that and created a view. I'm not sure if it's materialized or not, as I haven't researched that term yet. However, when I created the view using the query that had been in the application layer, I increased the record count to 2000 just so more would be available. Once I changed my application layer to access the new view rather than having the SQL command directly access the tables, the results blew me away. I was, and am, able to populate the 2,000 records (more than enough) in less than 3 seconds. The response is virtually instant.
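    For comparison, an ordinary view in SQL Server 2005 stores no data, while a materialized ("indexed") view requires SCHEMABINDING plus a unique clustered index. A sketch, with illustrative names and columns:

    ```sql
    -- An ordinary view: just a stored query, re-executed on each access.
    CREATE VIEW dbo.CaseList
    AS
    SELECT c.id AS case_id, c.people_id, p.last_name
    FROM dbo.cases AS c
    INNER JOIN dbo.people AS p ON p.id = c.people_id;
    GO

    -- To materialize it, the view must be schema-bound and given a unique
    -- clustered index; SQL Server then maintains the stored rows itself.
    -- (Indexed views disallow TOP, ORDER BY, and outer joins, among other
    -- restrictions.)
    CREATE VIEW dbo.CaseListIndexed WITH SCHEMABINDING
    AS
    SELECT c.id AS case_id, c.people_id, p.last_name
    FROM dbo.cases AS c
    INNER JOIN dbo.people AS p ON p.id = c.people_id;
    GO

    CREATE UNIQUE CLUSTERED INDEX IX_CaseListIndexed
    ON dbo.CaseListIndexed (case_id);
    ```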

     

    I'm very glad that I took the time to make this post and that you took the time to respond. I will continue this technique throughout the program. I'm beginning to learn that my database skills are very basic, and that with a little work performance can be enhanced greatly.

     

    Again thank you for your time and assistance.

     

    Wednesday, December 10, 2008 11:05 PM
  • Good that the solution worked for you. I thought I'd add a few more points about your last post.

    The view you created is, in all probability, not a materialized view. A materialized view saves the view's results physically and requires different treatment.

    If you are sure you want to retrieve the top 2000 or so rows from the query, you can try SET ROWCOUNT 2000, which may give you even better performance if you are using TOP with ORDER BY. Don't forget to set the row count back to 0 when done.
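    The SET ROWCOUNT pattern mentioned above looks like this; the column names in the query are illustrative. Note that SET ROWCOUNT applies to every subsequent statement on the connection until it is reset.

    ```sql
    -- Cap the result set for subsequent statements on this connection.
    SET ROWCOUNT 2000;

    SELECT c.id, c.people_id, p.last_name   -- columns are illustrative
    FROM cases AS c
    INNER JOIN people AS p ON p.id = c.people_id
    ORDER BY c.id DESC;

    -- Important: reset, or the cap silently applies to later queries too.
    SET ROWCOUNT 0;
    ```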

    Thursday, December 11, 2008 7:34 AM