Tuesday, March 06, 2012 10:51 AM
I am planning to use the AppFabric 1.1 write-behind functionality. I have a scenario like the one below.
We have a SharePoint list as a data store, with sequential updates to its rows, i.e. a new row version is created for each update. The plan is to update the cache first and then asynchronously update the SharePoint list through a write-behind provider.

Now suppose a single row is updated twice within a few milliseconds, e.g. two users updating the list at almost the same time. We add the items to the cache as DataRow objects, and the DataCacheStoreProvider takes those objects and updates SharePoint. To achieve this we insert into the cache twice, once for each update to the single row (these inserts happen milliseconds apart). The overridden DataCacheStoreProvider method on the AppFabric host then fires and receives a Dictionary containing the cache keys that were updated.

The problem I am facing is that the order of items in the Dictionary is not the order of the updates. In my case the first item (the element at index 0) in the Dictionary may be the second update made to the cache, so the data store ends up being written with the updates in the wrong order, which is not correct. Is there any PowerShell setting to make this collection sequential? Any help on this would be really appreciated.
Thanks in advance
Thursday, March 08, 2012 3:57 AM
The default IDictionary implementation does not maintain the sequence of the original operations. Nor do we retain the original sequence of cache operations at any point, since ordering is not defined across multiple cache hosts in a distributed environment. Order is defined only for multiple accesses to the same (cache name, region, key) entry. If you must take a dependency on the order of operations, I'd recommend sending the time in the payload when writing a row. Essentially, use cache.Put(key, new Tuple<object, DateTime>(actualValue, DateTime.UtcNow)) to write, and ((Tuple<object, DateTime>)cache.Get(key)).Item1 to read the value back.
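A minimal sketch of the timestamp-in-payload idea, assuming the Microsoft.ApplicationServer.Caching client API; the WriteRow/ReadRow helper names are illustrative, not part of the API:

```csharp
// Sketch only: wrap each cached value with a UTC timestamp so the
// write-behind provider can later recover the order of updates.
using System;
using Microsoft.ApplicationServer.Caching;

static class TimestampedCache
{
    public static void WriteRow(DataCache cache, string key, object actualValue)
    {
        // Store the value together with the time of the write.
        cache.Put(key, new Tuple<object, DateTime>(actualValue, DateTime.UtcNow));
    }

    public static object ReadRow(DataCache cache, string key)
    {
        // Unwrap the payload; Item2 carries the timestamp if it is needed.
        return ((Tuple<object, DateTime>)cache.Get(key)).Item1;
    }
}
```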
DataCacheItemVersion (exposed as the DataCacheItem.Version property) implements IComparable&lt;DataCacheItemVersion&gt;, and this order is defined for multiple accesses to a single region/key. It will not work when comparing item versions across regions.
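For completeness, a hedged sketch of how that version comparison might look, again assuming the Microsoft.ApplicationServer.Caching types; the IsNewer helper is illustrative:

```csharp
// Sketch only: DataCacheItemVersion implements IComparable<DataCacheItemVersion>,
// so for the same (cache, region, key) you can tell which write is newer.
using Microsoft.ApplicationServer.Caching;

static class VersionOrdering
{
    // Returns true if candidate is a newer version of the same entry
    // than current. Only meaningful for the same cache/region/key;
    // the comparison is not defined across regions.
    public static bool IsNewer(DataCacheItem candidate, DataCacheItem current)
    {
        return candidate.Version.CompareTo(current.Version) > 0;
    }
}
```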
- Proposed as answer by Arijit Sengupta [MSFT] (Microsoft Employee), Monday, April 16, 2012 2:13 PM
Friday, March 16, 2012 9:12 AM
Thanks a lot for the reply. Using cache.Add and sending the DateTime in the payload solves my problem, but I still need to test this with a two-node cluster; currently I have only one machine with AppFabric installed. I couldn't understand how DataCacheItemVersion solves the problem, though. How would I track the DataCacheItem version for each and every update? If I update a certain key twice within 500 ms, only the latest update is sent to the provider.
Friday, March 16, 2012 9:40 AM
Version comparison can work across items in a region. If you need to scale beyond a single region, you'll have to build something on the cache client, such as sending a DateTime in the payload. I'm glad to hear that solution is working for you.
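To tie the thread together, a hedged sketch of how a write-behind provider might reorder the batch it receives using the Tuple&lt;object, DateTime&gt; payload suggested earlier. This assumes the DataCacheStoreProvider.Write overload that takes a dictionary of items; ApplyToSharePoint is a hypothetical placeholder, and the other abstract members of the provider are omitted from the sketch:

```csharp
// Sketch only: sort the batch handed to the write-behind provider by the
// UTC timestamp embedded in each payload, so the data store receives the
// updates in write order. Assumes Microsoft.ApplicationServer.Caching;
// ApplyToSharePoint is a hypothetical placeholder for the real list update.
using System;
using System.Collections.Generic;
using System.Linq;
using Microsoft.ApplicationServer.Caching;

public class SharePointStoreProvider : DataCacheStoreProvider
{
    public override void Write(IDictionary<DataCacheItemKey, DataCacheItem> items)
    {
        // The dictionary does not preserve operation order, so order by the
        // DateTime stored alongside each value in the Tuple payload.
        foreach (var item in items.Values
                     .OrderBy(i => ((Tuple<object, DateTime>)i.Value).Item2))
        {
            ApplyToSharePoint(item.Key, ((Tuple<object, DateTime>)item.Value).Item1);
        }
    }

    // Hypothetical placeholder for the actual SharePoint list update.
    private void ApplyToSharePoint(string key, object value) { /* ... */ }

    // Read, Delete, and the remaining abstract members are omitted here.
}
```

Note that this only orders updates to different keys within one batch; as discussed above, two rapid updates to the same key may be collapsed before the provider ever sees the first one.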