REST DataTable Server Side Caching

  • Question

  • We are trying to build a new stateless REST-style architecture to support a web front end to an existing business application. In our current stateful architecture, we POST an inquiry object to run a search against the database. Every pull from the DB is considered an expensive operation, so we only want to make the call that gets the full set of search results once, and keep a cached version of the DataTable. Currently we do this by storing context for each user in the Session variable in the business layer, and using the Session variable in the web service to hold an unordered "cached" search-results DataTable per user. Subsequent GET requests to the service then pull filtered, ordered results, etc., from that cached table.

    We are trying to move to a completely stateless architecture - we've removed Session from the business layer and are sending credentials with each REST request. However, we now have the dilemma of where the search results live at the service layer.

    Since we are not supposed to depend on the Session variable to cache the large search-results object, what would be the best practice for server-side caching of the DataTable, on a per-user basis, so that it is available for subsequent filtering, paging, etc.?
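    For illustration, this is roughly the shape of per-user cache we are after once Session is gone - a minimal sketch using System.Runtime.Caching.MemoryCache, where the key scheme and the 20-minute sliding expiration are placeholder choices, not requirements:

        using System;
        using System.Data;
        using System.Runtime.Caching;

        // Hypothetical helper: one unfiltered search-results DataTable per
        // user, held in an in-process cache instead of Session.
        public static class SearchResultCache
        {
            private static readonly MemoryCache Cache = MemoryCache.Default;

            public static void Store(string userId, DataTable results)
            {
                var policy = new CacheItemPolicy
                {
                    SlidingExpiration = TimeSpan.FromMinutes(20) // evict idle users
                };
                Cache.Set("search:" + userId, results, policy);
            }

            // Returns null on a miss; the caller re-runs the expensive query.
            public static DataTable Retrieve(string userId)
            {
                return Cache.Get("search:" + userId) as DataTable;
            }
        }

    Subsequent GETs for paging and filtering would call Retrieve and work against the cached table; the open question is whether something like this belongs in-process, in a distributed cache, or somewhere else entirely.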

    Sunday, March 3, 2013 6:27 PM

All replies

  • So many questions. First off, how many user data sets are you expecting: 1, 5, 500,000,000? What is the size of the data sets, and how many servers are you using?


    http://pauliom.wordpress.com

    Monday, March 4, 2013 6:51 AM
  • Could be as many as 10,000,000 initial results before filtering, sometimes as few as 10.
    Monday, March 4, 2013 5:15 PM
  • How many users are you looking to cache data for at any one time?

    http://pauliom.wordpress.com

    Monday, March 4, 2013 10:08 PM
  • Anywhere from 1 to 4,000 users at a time.
    Tuesday, March 5, 2013 12:27 AM
  • So you want to cache 40,000,000,000 results (each of size X) on a web server? I'm not saying that's impossible, but is this really the best design? What sort of response time are you looking for, and how old can the results be?


    http://pauliom.wordpress.com

    Wednesday, March 6, 2013 2:11 PM
  • Well, the alternative is caching the results in JavaScript on the client - which would make the browser totally unresponsive, not to mention that the initial call would take forever.
    Wednesday, March 6, 2013 5:18 PM
  • My initial thought was to stick with the database. Data sets that large do not seem sensible to store in memory; memory is cheap these days, so it's not impossible, but it just doesn't feel right to me. I would prefer to do what you want but query against a database, i.e. move the database to the web server, perhaps with some form of replication. It concerns me that even if you consider the original database query expensive, I can't believe transporting 10,000,000 records is going to be cheaper. So if you really can't run queries on the database (and again, really?), then I would consider some form of replication, possibly even a shared disk.
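    To be concrete about "query against a database": push the filtering and paging down to SQL rather than caching the full result set. A rough sketch, with made-up table and column names (OFFSET/FETCH needs SQL Server 2012 or later):

        using System.Data;
        using System.Data.SqlClient;

        // Illustrative only: the database does the filtering and paging,
        // so the service never materialises the full 10,000,000 rows.
        public static class PagedSearch
        {
            public static DataTable GetPage(string connStr, string nameFilter,
                                            int pageIndex, int pageSize)
            {
                const string sql =
                    @"SELECT Id, Name, Amount
                      FROM   dbo.Orders
                      WHERE  Name LIKE @filter
                      ORDER  BY Name
                      OFFSET @skip ROWS FETCH NEXT @take ROWS ONLY";

                using (var conn = new SqlConnection(connStr))
                using (var cmd = new SqlCommand(sql, conn))
                {
                    cmd.Parameters.AddWithValue("@filter", nameFilter + "%");
                    cmd.Parameters.AddWithValue("@skip", pageIndex * pageSize);
                    cmd.Parameters.AddWithValue("@take", pageSize);

                    var table = new DataTable();
                    new SqlDataAdapter(cmd).Fill(table); // Fill opens/closes the connection
                    return table;
                }
            }
        }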

    The other angle is to consider how stale your data can get. Since you're using REST, you'll be trying to make use of all the good HTTP caching mechanisms available to you, and these work better the older the data is allowed to be. E.g. if you make 10 simple calls to the database and return them over HTTP, then in theory, if you ask for any one of those 10 again, one of the caching mechanisms will provide it without ever hitting your web server, let alone the database. However, if your data is stock values with immediate expiration, then caching is just going to add overhead rather than provide an advantage.
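    Something along these lines in ASP.NET Web API, where ComputeVersionTag and RunQuery are hypothetical stand-ins for however you derive a version stamp and fetch the data:

        using System;
        using System.Linq;
        using System.Net;
        using System.Net.Http;
        using System.Net.Http.Headers;
        using System.Web.Http;

        public class SearchController : ApiController
        {
            public HttpResponseMessage Get(string query)
            {
                // Version stamp for the current data, e.g. from a rowversion column.
                string etag = "\"" + ComputeVersionTag(query) + "\"";

                // Client already holds this version: answer 304 with no body.
                if (Request.Headers.IfNoneMatch.Any(t => t.Tag == etag))
                    return Request.CreateResponse(HttpStatusCode.NotModified);

                var response = Request.CreateResponse(HttpStatusCode.OK, RunQuery(query));
                response.Headers.ETag = new EntityTagHeaderValue(etag);
                response.Headers.CacheControl = new CacheControlHeaderValue
                {
                    Private = true,                   // per-user results
                    MaxAge = TimeSpan.FromSeconds(60) // how stale you can tolerate
                };
                return response;
            }

            private static string ComputeVersionTag(string query) { return "v1"; } // placeholder
            private static object RunQuery(string query) { return new object(); }  // placeholder
        }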


    http://pauliom.wordpress.com

    Thursday, March 7, 2013 1:42 PM