Cache or group database requests?

  • Question

  • Hi,
    Currently I have basically a 2 tier application that creates a lot of database hits.  This application is running on about 10 clients, and for the most part it is requesting the same information from the database for each user.  This is resulting in my sql server running quite high CPU utilisation.

    I was thinking that maybe I should add a middle tier to handle the database requests, but that alone won't reduce the load on the server. Is there an easy way to cache or group the database requests together? E.g., if user 1 and user 2 are both asking for the same thing within a certain time of each other, is there an easy way to request that information from the database only once and send it to both clients?
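One way to sketch that grouping idea is a small coalescing layer: the first request for a key actually hits the database, and identical requests arriving within the refresh window reuse the stored result. This is only an illustration of the technique, not the actual application's code; the `fetch` callback, the key shape, and the window length are all assumptions.

```python
# Sketch: coalesce identical requests arriving within a short time window,
# so two clients asking for the same key cause only one database hit.
# The fetch(key) callback stands in for the real single-row SELECT.
import time


class Coalescer:
    def __init__(self, fetch, window=5.0):
        self._fetch = fetch
        self._window = window
        self._results = {}  # key -> (timestamp, value)

    def request(self, key):
        now = time.monotonic()
        cached = self._results.get(key)
        if cached is not None and now - cached[0] < self._window:
            return cached[1]        # later callers reuse the first result
        value = self._fetch(key)    # only the first caller hits the database
        self._results[key] = (now, value)
        return value
```

With a 5-second window this matches the RTD refresh cadence: within one refresh cycle, each distinct record is fetched once no matter how many clients ask for it.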

    I tried a basic cache where a server held the table in memory, automatically refreshed every 5 seconds, and requests hit that in-memory representation rather than going to SQL Server... but this seemed a lot slower at processing requests (I assume because SQL Server has indexes etc. to speed up lookups, which the list doesn't).

    Are there some other architectural or design principles I could use?

    The database requests are very lightweight and the table is indexed appropriately; I am just trying to reduce the sheer number of requests.

    The application is an Excel RTD server that processes requests from Excel functions. Being RTD, it refreshes from the database every 5 seconds. A user could be getting a few hundred different records, say 500 for example, so 500 selects every 5 seconds; with 10 users, that's 5,000 every 5 seconds.
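Independent of caching, those 500 single-row selects per refresh can be collapsed into one round trip by batching the keys into a single IN-clause query. A sketch, using sqlite3 as a stand-in for SQL Server (the same idea applies via pyodbc, and the `quotes` table and column names are hypothetical):

```python
# Sketch: batch many single-row SELECTs into one parameterised IN query,
# so 500 lookups per refresh become a single round trip to the server.
# sqlite3 stands in for SQL Server here; table/column names are made up.
import sqlite3


def fetch_batch(conn, keys):
    keys = list(keys)
    placeholders = ",".join("?" * len(keys))
    sql = f"SELECT id, value FROM quotes WHERE id IN ({placeholders})"
    return dict(conn.execute(sql, keys).fetchall())
```

Even before any cross-user sharing, this alone cuts the 5,000 queries per cycle down to roughly one per client per refresh.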

    95% of those records will be the same for every user, hence my question about being able to either cache or group requests together.

    I am an accidental DBA, so I am by no means an expert on SQL Server performance; however, the select is very basic and it is definitely using an index.

    Any help appreciated.

    Monday, February 22, 2010 1:16 PM


All replies