I searched for my question in the forums and found one that was similar but not fully answered. So, here it goes.
Goal: Export Health Record Items from a user's HealthVault record matching a certain filter condition (e.g. all items within a date range) into a text file for download.
I am using the .NET SDK in a HealthVault web application. So far, I've tried this using the HealthRecordSearcher and HealthRecordExporter classes. Both approaches work to the point where I can transform the item XML using my own XSL template. See the code below, based on the exporter class.
System.Xml.Xsl.XslCompiledTransform transform = new System.Xml.Xsl.XslCompiledTransform();
transform.Load("path to xsl template");

HealthRecordExporter exporter = new HealthRecordExporter(PersonInfo.SelectedRecord, transform);

HealthRecordFilter filter = new HealthRecordFilter();

// Add a whole lot of type IDs based on which data types the user selected for export
if (ckAllergies.Checked) filter.TypeIds.Add(Allergy.TypeId);
if (ckBloodGlucose.Checked) filter.TypeIds.Add(BloodGlucose.TypeId);
// ...more type IDs...

filter.EffectiveDateMin = beginDate;
filter.EffectiveDateMax = endDate;
filter.MaxFullItemsReturnedPerRequest = 2;

exporter.Filters.Add(filter);
string output = exporter.ExportItems();
This very helpful post from Eric Gunnerson got me thinking about the performance aspects of retrieving xml if the filters allow a large xml result. http://blogs.msdn.com/b/ericgu/archive/2011/11/07/improve-healthvault-query-efficiency-with-final-transforms.aspx
1. HealthRecordExporter.ExportItems() performs transformation on HealthVault server-side, correct?
2. The documentation for the MaxFullItemsReturnedPerRequest property says: "if you want further control over the count of full items retrieved on each request, can be set to optimize for smaller sets of data". This seems to suggest that it could be used for paged queries to HealthVault. Is that true?
I found that HealthRecordExporter.ExportItems() ignores the filter.MaxFullItemsReturnedPerRequest setting: although I set it to 2 in the code above, the transformed output contained full XML for all matching items.
3. Is there a better/more efficient way to perform queries so that huge XML streams are not sent to the client in a single call? In other words, is there some way to page the query?
- Edited by Manish Yohannan Tuesday, February 28, 2012 5:10 AM
In search of an answer for my question # 1, I looked at the source code of the HealthRecordExporter class and it looks like the xsl transformation is performed on the client side! Can someone please verify this?
Internally, the HealthRecordExporter.ExportItems() method uses HealthRecordSearcher.GetMatchingItems(), which returns a collection of HealthRecordItems. That would mean there is no transform being executed on the HealthVault server side.
After the items are returned, they are converted to XML and then transformed using the XSL template that was provided to the exporter. This was time-consuming to figure out, since the .NET SDK documentation is sparse.
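To illustrate what I think is happening, here is a rough sketch of the equivalent client-side flow. Type and method names are from the SDK, but the exact internals are my reading of the source, not documented behavior:

```csharp
// Sketch only: approximates what ExportItems() appears to do internally.
HealthRecordSearcher searcher = PersonInfo.SelectedRecord.CreateSearcher();
searcher.Filters.Add(filter);

// Round trip to the HealthVault service; returns one item collection per filter.
HealthRecordItemCollection items = searcher.GetMatchingItems()[0];

// The XSL transform then runs locally over the items' XML -- no server-side
// transformation is involved at this point.
StringBuilder sb = new StringBuilder();
foreach (HealthRecordItem item in items)
{
    sb.Append(item.GetItemXml());
}
// transform.Transform(...) would then be applied over the combined XML.
```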
In search of an answer to my question #3: I found two clues (phew! this is like detective work)
In the source code of the HealthRecordExporter class, I found this comment:
"// We are leveraging the HealthRecordItemCollection paging here to retrieve
// all the data from HealthVault if it doesn't come with the first request."
The documentation for the HealthRecordItemCollection.Count property says:
"This number can include partial results returned from the server if the maximum number of items returned is reached. If accessed, the partial items are retrieved automatically from the server."
So, looks like one way to accomplish paged queries is to use the HealthRecordItemCollection returned from a Searcher to access the items in a record. As each item in the collection is accessed by application code, the collection will determine whether to go out to HealthVault servers and retrieve the actual data for the item.
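If that reading is right, a paged export can be sketched like this. This is a hedged sketch: it relies on the collection's lazy fetching described above, and ProcessItem is a hypothetical placeholder for whatever the export does per item:

```csharp
// Sketch, assuming HealthRecordItemCollection fetches partial items lazily.
HealthRecordSearcher searcher = PersonInfo.SelectedRecord.CreateSearcher();
filter.MaxFullItemsReturnedPerRequest = 50; // up to 50 full items per round trip
searcher.Filters.Add(filter);

HealthRecordItemCollection items = searcher.GetMatchingItems()[0];

// items.Count is the total match count; only the first batch arrives "full".
// Indexing past the full items should trigger further server requests.
for (int i = 0; i < items.Count; i++)
{
    HealthRecordItem item = items[i]; // may fetch from the server lazily
    ProcessItem(item);                // hypothetical: e.g. append its XML to the export file
}
```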
BTW, this also answers my question #2: inside HealthRecordExporter.ExportItems(), after the matching items are retrieved using the searcher, there is a loop through the returned HealthRecordItemCollection. I suspect it is this iteration through the items of the collection that causes the full data to be retrieved for all items, even though MaxFullItemsReturnedPerRequest was set to 2.
A couple of follow-up questions in the same context (exporting items from a HealthVault record):
A. When there is potential for a large data volume in users' HealthVault records: from a performance perspective, is it more efficient to work at the XML level, using the HealthRecordSearcher.GetTransformedItems() method, or at the object level, using the HealthRecordSearcher.GetMatchingItems() method and then handling the returned HealthRecordItemCollection and transforming the underlying item XML on the client side?
B. When working at the XML level (e.g. using the GetTransformedItems() method), application code will use XPaths on the HealthVault item XML. Can we rely on the current HealthVault XML schema always being supported as future versions of HealthVault introduce changes?
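For clarity, by "using XPaths on the item XML" I mean something like the following. This is a sketch; the weight/when/date path reflects the current thing schema for the Weight type, which is exactly the kind of detail I'm worried might change:

```csharp
// Hypothetical: read the year out of a Weight item's type-specific XML.
XPathNavigator nav = item.TypeSpecificData.CreateNavigator();
XPathNavigator yearNode = nav.SelectSingleNode("weight/when/date/y");
if (yearNode != null)
{
    int year = yearNode.ValueAsInt;
}
```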
Thanks for any help.
Hi Manish. I've read through your goal and your questions, and I have a similar query. I don't know whether you've already figured it out, but please let me know if you have.
My goal: I want to access (read) the data from the HealthVault web server for a particular user.
Do you have code for that? Please share if yes. I'd be obliged.
Secondly, I'm building something else that runs on Java. Is there a way to do the same thing (read records) using Java?
Thanks in advance. I'm new to programming, so please ignore my amateurish doubts.