Disconnected Entities and WCF

Question
-
The threads on this topic are numerous, with many suggestions, but I have yet to find a recommended solution that could be considered the definitive standard.
I would like to continue using the EF; however, I also want to be able to scrap it easily if things don't work out. This is leading me to create an SOA application with the following architecture:
BL:
- Business.UserEntity
DAL:
- Model.UserEntity
The DAL contains the EF model, and translator classes that use reflection and recursion to "copy" a Model.UserEntity to an instance of a Business.UserEntity. The rest of the application uses the Business.UserEntity object so that, if things don't go well for the EF, I can hopefully write a new DAL, and as long as its methods return a Business.UserEntity, the rest of the application should be happy.
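For illustration, the reflection-based translator described above might look something like this minimal sketch. All type names here (ModelUserEntity, BusinessUserEntity) are hypothetical stand-ins, and a real translator would also recurse into related entities and collections:

```csharp
using System;
using System.Reflection;

// Hypothetical stand-in for the EF-generated Model.UserEntity.
public class ModelUserEntity
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// Hypothetical stand-in for the middle-tier Business.UserEntity.
public class BusinessUserEntity
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public static class Translator
{
    // Copy every public property that exists with the same name and
    // type on both sides. A real implementation would also handle
    // nested entities and collections recursively.
    public static TTarget Copy<TSource, TTarget>(TSource source)
        where TTarget : new()
    {
        TTarget target = new TTarget();
        foreach (PropertyInfo sp in typeof(TSource).GetProperties())
        {
            PropertyInfo tp = typeof(TTarget).GetProperty(sp.Name);
            if (tp != null && tp.CanWrite && tp.PropertyType == sp.PropertyType)
                tp.SetValue(target, sp.GetValue(source, null), null);
        }
        return target;
    }
}
```

Usage would be something like `BusinessUserEntity b = Translator.Copy<ModelUserEntity, BusinessUserEntity>(m);` inside the DAL, so the model types never leave that layer.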
Now like everyone else, I am faced with working out the best solution to the disconnected model entity challenge. Further to this, I would like the solution to this challenge to play nicely with whatever solutions the EF team come up with in v2 of the EF.
I've read several blogs and posts (some links to these at the bottom) and have gathered some basic patterns for this challenge. They are:
1. Reload the entity from the model (call it X), and use the disconnected entity (call it Y) to update X from Y. Then save changes on X.
- this obviously comes with the bitterness of having to load X from the model before it can be edited
2. Have the client return original (X) and edited (Y) versions of an entity with each method call, so an update method would include X and Y as parameters, allowing the DAL to attach X to the context and then apply the changes from Y, and then save.
- the thought of always passing around 2 copies of every object doesn't seem too appealing either
3. Attach the changed entity (Y) to a context, and then use reflection to loop through all the properties and mark them as changed, then save.
- reflection is worrisome from a performance point of view
- the update will be forced to update ALL properties, regardless of whether or not they changed.
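To make pattern #2 concrete, here is a minimal sketch (stripped of the EF plumbing) of what having both copies buys you: the server can diff the original (X) against the edited copy (Y) and see exactly which properties changed. The `User` type and `ChangeDetector` name are hypothetical; in the real pattern, EF does the equivalent comparison internally when you attach X and apply the values from Y:

```csharp
using System;
using System.Collections.Generic;
using System.Reflection;

// Hypothetical entity used only for this illustration.
public class User
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string Email { get; set; }
}

public static class ChangeDetector
{
    // Compare original (X) and edited (Y) copies property-by-property
    // and report which ones actually changed. Having both copies means
    // only real changes need to hit the store.
    public static List<string> ChangedProperties<T>(T original, T edited)
    {
        List<string> changed = new List<string>();
        foreach (PropertyInfo p in typeof(T).GetProperties())
        {
            object a = p.GetValue(original, null);
            object b = p.GetValue(edited, null);
            if (!object.Equals(a, b))
                changed.Add(p.Name);
        }
        return changed;
    }
}
```

The bulkiness complaint is visible here too: X and Y share the key and every unchanged value, so most of the second copy is redundant payload on the wire.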
In each case, the BL will call a DAL method, passing in the Business.UserEntity and inside the DAL, the translator class will copy/convert it into a Model.UserEntity, for use by the DAL to invoke one of the methods described above.
Each method above seems clunky. I would like to avoid writing a bunch of custom code that will eventually be replaced or made redundant in a future version of the EF, so if one of the above methods is better suited to that, I'd love to hear it.
Otherwise, I'd like to hear any opinions on these methods, for my given scenario. Are there any methods I've missed or described incorrectly?
Some references:
http://mtaulty.com/CommunityServer/blogs/mike_taultys_blog/archive/2007/09/19/9825.aspx
http://www.thedatafarm.com/blog/CommentView,guid,5ce17cb6-d965-4a32-b20a-f8fe7a67587d.aspx#commentstart
http://blogs.msdn.com/dsimmons/archive/2008/01/20/entitybag-part-i-goals.aspx
Thanks.
Friday, August 1, 2008 3:27 PM
Answers
-
Lefty,
This is a good write-up of some of the challenges faced when trying to define a serialization contract using the EF. The pattern you choose really depends on what kind of concurrency you want to use.
#1 really speaks to not using concurrency. The pattern here is that the value received from the client should be forced into the database with no concurrency checks. This is a behavior some scenarios might want, but is probably not appropriate for everyone.
#2 This gives you the same concurrency that you'd have if you removed serialization from the picture, because you are capturing both the original values and the current values. There likely is some redundant data in here (the non-key, non-concurrency values), so it is a bit bulky. For an update operation, this is probably the simplest approach with the EF.
#3 This will preserve concurrency because you are using your current values as the original values and so is a nice choice. The drawback, as you noted, is that all fields are marked as requiring an update and so will be included in the generated store command (SQL).
There are also options with using a message/facade over your entity where you just pass the type of update you want. For example, if you want to update X.Name, you can create a message:
class Message
{
    EntityKey customerKey;
    string newName;
    TimeStamp timeStamp;
}
Server code:
void UpdateCustomer(Message m)
{
    using (MyContext context = new MyContext())
    {
        Customer c = Customer.CreateCustomer(...);
        c.EntityKey = m.customerKey;
        c.TimeStamp = m.timeStamp;
        context.AttachTo("Customers", c);
        c.Name = m.newName;
        context.SaveChanges();
    }
}
The decision comes down to your need for concurrency across your service.
On another note, we'd really like you to be able to just have one kind of entity in your system (i.e. not two as in Model and Business)...could you describe your reasons for wanting two kinds of entities and using an Object-to-Object transform?
Friday, August 1, 2008 8:35 PM
All replies
-
Hi Jeff, thanks for the reply.
I need to play around with the examples a bit more. So far, I've implemented a prototype using method #3, following sample code from option #1 here:
http://mtaulty.com/CommunityServer/blogs/mike_taultys_blog/archive/2007/09/19/9825.aspx
My experience so far with this method is that there are no concurrency checks; the client's changes win and overwrite the current database contents.
My implementation of #3 uses this general pattern within the DAL:
- Update(BusinessEntity simpleObject)
- new up a ModelEntity simpleObject
- call object to object translator to "copy" BusinessEntity to ModelEntity type
- attach ModelEntity using AttachTo method
- get ObjectStateEntry for the ModelEntity just attached
- call entry.SetModified()
- call SetEntryModified(entry) - to loop through and set modified on each model entity property, see example code at link above
- call SaveChanges()
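Put together, the steps above look roughly like the following EF v1 sketch (MyContext, "Users", and the entity/translator names are hypothetical; SetModifiedProperty is the ObjectStateEntry method that the linked sample loops over). This is not runnable without a model behind it:

```csharp
// Requires System.Data.Entity (EF v1).
using System.Data.Objects;

public void Update(BusinessUserEntity simpleObject)
{
    using (MyContext context = new MyContext())
    {
        // object-to-object translation (hypothetical helper)
        ModelUserEntity model = Translator.ToModel(simpleObject);

        // attaches the entity to the context in the Unchanged state
        context.AttachTo("Users", model);

        ObjectStateEntry entry =
            context.ObjectStateManager.GetObjectStateEntry(model);
        entry.SetModified(); // state -> Modified

        // force every scalar property into the generated UPDATE
        for (int i = 0; i < entry.CurrentValues.FieldCount; i++)
            entry.SetModifiedProperty(entry.CurrentValues.GetName(i));

        context.SaveChanges();
    }
}
```

As Jeff noted for #3, the cost is that every column is written back regardless of whether it changed.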
As for your question about multiple entity types... my thoughts behind this were these:
- it provides a layer of abstraction from the DAL to calling code
- middle tier defines the BusinessEntity type, so a client that depends on the middle tier, avoids having to also reference the DAL
- the DAL contains "query" classes logically organized for the different types, which contain LINQ queries as applicable for each method and each method returns an instance of BusinessEntity - so the ModelEntities never travel outside the DAL
- The EF is a great start, but if this version doesn't work out for us, this layer of abstraction will allow us to write a new DAL using standard ADO.NET, and as long as it returns BusinessEntity objects, no other code should require changes (a very different case if we use EF model objects throughout and then need to remove them later)
Actually, for similar reasons to the above, our current plan involves 2 levels of abstraction/translation.
1. DAL >> Business Layer (ModelEntity to BusinessEntity)
2. Business Layer >> WCF Implementation (BusinessEntity to DataContractEntity)
The initial thoughts behind the second translation are these:
- WCF datacontract attributes can be applied to the DataContract entities
- BusinessEntity objects could be expanded, if needed, to include behaviours or any other customizations for use in the middle tier, and are cleanly left behind when translated to DataContract entities
- again, it provides another layer of abstraction between the clients consuming our services and the business layer that implements those services
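A minimal sketch of that second translation might look like this (all names hypothetical; in a real WCF project the contract type would carry [DataContract]/[DataMember] attributes from System.Runtime.Serialization, shown here as comments to keep the sketch self-contained):

```csharp
using System;

// WCF-facing type; serialization attributes live only here.
// [DataContract]
public class UserDataContract
{
    // [DataMember]
    public int Id { get; set; }
    // [DataMember]
    public string Name { get; set; }
}

// Middle-tier type; free to carry behaviour that is cleanly
// left behind when translated to the contract type.
public class BusinessUser
{
    public int Id { get; set; }
    public string Name { get; set; }

    public bool IsValid() { return !string.IsNullOrEmpty(Name); }
}

public static class ContractTranslator
{
    // BusinessEntity -> DataContractEntity: copy data, drop behaviour.
    public static UserDataContract ToContract(BusinessUser b)
    {
        return new UserDataContract { Id = b.Id, Name = b.Name };
    }
}
```

The point of the split is that IsValid (and any other middle-tier behaviour) never crosses the service boundary; only the flat data shape does.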
We have reviewed several sample applications and have pulled ideas from several of them, including StockTrader 2.0. In that case, they perform one level of translation further out, within the WCF client code in the web app, translating a DataContractEntity into a UIEntity for final use by the site for databinding, etc.
The system we are working on is a very large enterprise-class system that is being re-architected using the latest and greatest technology, moving forward from a mix of classic ASP and ASP.NET. My initial reaction to passing a ModelEntity from the DAL all the way out to the web site client is that it would be fine for small/medium-sized projects, but that it might not be a good idea for very large projects.
I'd be interested to hear your thoughts.
- Proposed as answer by M Ozvat Tuesday, June 16, 2009 2:07 PM
Tuesday, August 5, 2008 4:32 PM