LINQ to SQL Patterns & Practices

  • Question

  • With LINQ to SQL, what do the recommended practices look like?

     

    LINQ to SQL as the Data Access Layer, with a Business Logic Layer orchestrating?

     

It seems that you could reasonably incorporate LINQ to SQL in the Business Logic Layer, since CRUD is abstracted, orchestration of multiple "object" transactions is easier there, and you can easily create anonymous types. All this relieves you of dealing with database specifics.

     

On the other hand, you do need some knowledge of the tables and relations, which would argue for LINQ to SQL as a pure Data Access Layer. But then how best to orchestrate transactions, and how best to pass the anonymous types?

     

    Thoughts?

     

     

    Thursday, November 22, 2007 4:32 AM


All replies

It depends on the size of the project. For SOA services I like a separate message layer - experience has shown it's much more important than a separate data layer, and for anything short of huge projects I'm loath to have both. So I pretty much use LINQ as the data for business processes, and the business objects tend to be procedural in style. I have moved a bit away from OO with small services, as the domains tend to be fairly anaemic and the services are simple. I'm most happy with this model for n-tier.

     

I have played around with workflow services ("Silver"), and since I want a separate messaging layer the workflow works directly on the message; this means LINQ stays as a data layer.

     

For small to medium 1- or 2-tier apps LINQ would be fine for both the business layer and the data layer, and I would use partial classes to add logic to the generated classes. For larger apps it is probably worth having a separate OO domain and using LINQ as a data layer.
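As a sketch of that partial-class approach (the entity and property names below are hypothetical, not from this thread), the hand-written half of a LINQ to SQL entity might carry a business rule like this:

```csharp
// Stand-in for the designer-generated half of the entity (normally emitted
// into the .designer.cs file by the LINQ to SQL designer).
public partial class Customer
{
    public decimal OutstandingBalance { get; set; }
    public decimal CreditLimit { get; set; }
}

// Hand-written half: business logic added via the partial-class mechanism,
// so regenerating the designer file never overwrites it.
public partial class Customer
{
    public bool CanPlaceOrder(decimal orderTotal)
    {
        return OutstandingBalance + orderTotal <= CreditLimit;
    }
}
```

Because both halves compile into one class, callers see a single Customer with both data and behaviour.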

     

    Regards,

     

    Ben

    Thursday, November 22, 2007 7:47 AM
If I were you I would read the latest edition of the Business Objects for C# 2005 book. The author explains a lot about the logical and physical layers of the domain and the architectural patterns you can use. That will guide you and give you some ideas on the best way to structure your layers logically and physically. LINQ enables you to access data in many ways (LINQ to SQL, XML web services, REST, etc.); the book walks through the questions you should be asking to decide how to structure your domain. Just read the first 3 to 4 chapters and you will know straight away what you want and what best suits your architecture.

     

    hope this helps

     

    dancoe

     

    http://undocnet.blogspot.com

    Thursday, November 22, 2007 2:41 PM
  • Hi,

    BenK

What you are saying makes perfect sense; however, regarding your comment:

    For larger apps it is probably worth having a separate OO domain and using LINQ as a data layer.

This is exactly what I am trying to do. I have managed it by having the properties of my custom entities
call the properties of the LINQ to SQL type, so that the DataContext can track changes - but it meant having a custom type for each LINQ to SQL type.

     

Is there a better way of doing it? If so, I would be grateful if you could share it.

     

Have you created two actual physical assemblies, e.g. MyCompany.Data and MyCompany.Business?

     

    Thanks again for any suggestions

     

     

     

     

    Thursday, November 22, 2007 4:52 PM
  • Ben,

     

Messaging is a good strategy for decoupling and scaling. It does bring us back to deciding what goes in the message. There are the usual suspects - XML, DataSet and friends, (serializable) typed objects - but what about the new anonymous types? I do not know enough to tell whether using anonymous types is practical or advantageous.

     

    Also we have Windows Communication Foundation and even Windows Workflow in the transport/protocol/messaging arena.

     

    Thoughts on these?

     

    Thursday, November 22, 2007 9:49 PM
Regarding the message and n-tier apps - I tend to look at the serialization. On the wire it's XML (even the WCF "binary" encoding is a form of XML and very different from binary serialization in .NET), so I tend to support what XML supports:

     

i.e. no inheritance, abstraction, interfaces, etc. - just plain objects with simple types such as string and int. I'm even hesitant to use enumerations and chars, as they have caused problems. This does not mean your application cannot use those techniques, only that the messages are translated into an OO layer. The messages are designed to transfer the maximum information in one business action/use case. This also decouples the client from the server, and it is quite valid for Vehicle on the client and Vehicle on the server to be different things and different objects. The client is focused on display, so its Vehicle may be a Vehicle displayed in a grid; the back end has the logic.
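A minimal sketch of such a flat message, using the standard WCF [DataContract]/[DataMember] attributes (the type and its fields are hypothetical):

```csharp
using System.Runtime.Serialization;

// A flat message: no inheritance, no interfaces, only simple types.
[DataContract]
public class VehicleMessage
{
    [DataMember] public int Id;
    [DataMember] public string Registration;
    [DataMember] public string OwnerName; // folded in from the owner to save a round trip
    [DataMember] public int OdometerKm;   // a plain int rather than an enum or custom type
}
```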

     

Datasets are a 2-tier tool; they can work in n-tier, but it's not worth the trouble.

     

Note that in most cases I would not use n-tier for smaller numbers of users - you really want 200+ users, or fewer in a very distributed environment. It's much more expensive.

     

    Regards,

     

    Ben

    Friday, November 23, 2007 7:44 AM
  •  

Hi,

     

That comment was based on larger 2-tier apps. I'm not really the best person to comment on distributed OO apps, but I honestly believe they are too hard, and I prefer simple SOA services. Assuming 2-tier apps: for smaller projects it is easy to go OO, as the entity is a partial class - just add your methods to a new partial class. For larger OO apps the convention is to have a separate data layer (and it should be easy to change your data layer).

     

This is normally accomplished by:

     

1) Having a BL project with real OO objects

2) Having a DataInterface project (you can probably extract the interfaces from the DataContext and add them using partial classes)

3) Having a DataImplementation project

     

     

As a common alternative, more procedural in style (i.e. not really OO) - easier, but changing the data layer is much more painful - you have:

1) A BL object that processes objects

2) A project with data objects (the LINQ DataContext)

3) A data implementation which persists and queries the data objects. Separating this prevents the business objects from containing that code themselves and allows for shared resource/common data-access code. This is less necessary with LINQ to SQL, but still good practice.
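A minimal sketch of the first option's interface split (all names hypothetical; an in-memory list stands in for the LINQ to SQL DataContext so the example is self-contained):

```csharp
using System.Collections.Generic;
using System.Linq;

// DataInterface project: the business layer codes against this.
public interface IContactRepository
{
    Contact GetById(int id);
    void Save(Contact contact);
}

public class Contact
{
    public int Id;
    public string Name;
}

// DataImplementation project: in real code this would wrap the LINQ to SQL
// DataContext; the in-memory list here just keeps the sketch runnable.
public class InMemoryContactRepository : IContactRepository
{
    private readonly List<Contact> _store = new List<Contact>();

    public Contact GetById(int id)
    {
        return _store.Single(c => c.Id == id);
    }

    public void Save(Contact contact)
    {
        _store.RemoveAll(c => c.Id == contact.Id); // upsert: replace any existing row
        _store.Add(contact);
    }
}
```

Swapping the data layer then means providing another IContactRepository implementation without touching the business layer.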

     

     

In terms of syncing DLINQ with your OO objects (also a problem in n-tier systems), you have to tell it about your changes, as you mentioned. Inserts and deletes are not a problem, but update is.

     

For an update I normally do something like this:

     

public void UpdateContact(Contact contact)
{
    // Validate the new object if needed

    // Get what LINQ thinks the current state is
    var oldContact = context.Contacts.Single(c => c.Id == contact.Id);

    // Audit if needed

    // Check concurrency

    // Set all the properties
    oldContact.Name = contact.Name;
    // TODO: the rest
}

     

     

This is pretty much what you have done, and for larger projects it's probably best. For smaller projects, however, there is nothing stopping you adding methods (using partial classes) to your LINQ data model and using just those objects; LINQ actually makes this simpler model viable for larger projects than before. Note that in purist OO terms this is frowned upon because your OO model is tightly coupled to your data, but it is much cheaper to code.

     

    Regards

     

    Ben

     

    Friday, November 23, 2007 8:53 AM
  • Hi,

The only problem I see with your approach is that you make an extra query by retrieving the original object before updating it, so that SubmitChanges will be able to track the changes.

     

My way of dealing with partially populated objects while tracking changes is to create a custom type, e.g. "EmailUserBO":

Code Block

private EmailUser _emailUser; // reference to the EmailUser LINQ to SQL entity

/// This will be called when doing projection
internal EmailUserBO(EmailUser emailUser)
{
    this._emailUser = emailUser;
}

/// Set and get the EmailUser LINQ to SQL properties via the custom type
public string Surname
{
    get
    {
        return this._emailUser.Surname;
    }
    set
    {
        if (this._emailUser.Surname != value)
        {
            this._emailUser.Surname = value;
        }
    }
}

     

When doing projections I project into my custom type. It's a bit more work, but this way I can keep track of the changes:

     

var query = from e in db.EmailUsers
            where e.ID == 1
            select new EmailUserBO(e);

     

     

     

However, it has its drawbacks, as you have to write more code. I wish there were a better way.

     

     

       

     

     

     

    Friday, November 23, 2007 10:07 AM
  • Thanks Ben,

     

This has been really helpful. I must have read 50 articles and blogs to support a conclusion. My conclusion is based on the major reasons for the existence of a Business Layer, namely: abstract away the database, be able to program to an object model, orchestrate multiple database entities to get work done for a complex object, and finally implement business logic.

     

So it appears that using LINQ to SQL satisfies all but the last, and as you note above, the business logic can be nicely handled in a partial class. So this will be my architecture.

     

    Thanks,

     

    Alex

     

    BTW, it still leaves the issue of serialization to the presentation tier.

    Saturday, November 24, 2007 4:20 AM
I just found and read the "Dinner Now" project on CodePlex (http://www.codeplex.com/DinnerNow), which is a complete example of using LINQ in a multi-tier scenario. It follows widely accepted patterns & practices of complete separation of concerns, so it serves as a reference example.

     

This raises an interesting question - how good a coding practice is this? For all its widely accepted "goodness", it has some definite "badness" too!

• It is very complex to follow. There are a lot of classes to read through to figure out what is happening. This raises maintenance costs (about half the total cost of an implementation).
• There is a lot of code, and if bugs are proportional to lines of code (widely, though not universally, accepted), then there is potential for more bugs. This also raises the cost.
• And of course there are more test cases to write, again raising the cost.

    So would this not argue for a simpler implementation?

     

    Perhaps this would lead us back to using the LINQ code in the Business Logic Layer, and the appropriate methods returning serializable "objects" to the presentation layer?

     

I am not sure about this yet, but perhaps use LINQ to create XML in the Business Logic Layer, so that LINQ to XML on the presentation tier can consume it nicely?
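As a sketch of that idea (names hypothetical; a plain collection stands in for the LINQ to SQL query result), the business layer could project results straight into XML with LINQ to XML:

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Xml.Linq;

public class Customer
{
    public string Name;
    public int Age;
}

public static class CustomerXml
{
    // Business layer: shape query results as XML so the presentation tier
    // can consume them with LINQ to XML.
    public static XElement ToXml(IEnumerable<Customer> customers)
    {
        return new XElement("Customers",
            from c in customers
            select new XElement("Customer",
                new XAttribute("name", c.Name),
                new XAttribute("age", c.Age)));
    }
}
```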

     

Now we have far less code, and it is less fragmented.

    Sunday, November 25, 2007 6:03 PM
  •  

I agree there really needs to be a better way, as creating reams of DTOs is just silly, and writing a multi-tier app could be made far easier. If we look at something like Juval Lowy and his "every class is a WCF service" idea (http://channel9.msdn.com/ShowPost.aspx?PostID=349561#349561), it tends to make sense to me. What if every class were a service, and I could develop an app without caring about layers, because I could refactor the layers in a UI modeler after the fact and the system would just do the right thing automatically?

     

public class Client
{
    public void GetCustomers()
    {
        // Note: db is the server proxy here, not the typical db in LINQ to SQL.
        var q = from cust in db.Customers select cust;
        ObjectDumper.Write(q);

        // Call methods on the service just like sprocs (but not).
        Customer c = db.GetCustomer(2);
    }
}

// LINQ service provider. Located in-proc or out-of-proc via WCF.
public service Server // hypothetical "service" keyword
{
    public Customer GetCustomer(int num)
    {
        // First, not Where: the method returns a single Customer
        return db.Customers.First(c => c.ID == num);
    }
}

     

Now that is extremely easy. Your LINQ service provider does all the BL; client-side BL could also be added for speed, to save round trips for common validations. Then just use the designer to move the Server object to another machine: the publish function would rip it out and do all the required hook-ups automatically using WCF. The wire format could be a simple binary reader (like SqlDataReader), so you would not need the overhead of object construction just to turn around and do XML serialization. The client and server proxy layers would read/write objects as byte[] but produce objects at the public API edges. Maybe the format is even left in native SQL memory format all the way from the server query to the client, until the client hydrates the results as an object graph - that would be about as fast as you could get. All the bells and knobs would be in the model designer, and this is the direction MS is going anyway (i.e. the workflow service project), so it seems to make perfect sense as the next step.

     

    Monday, November 26, 2007 4:41 PM
  • Perhaps this would lead us back to using the LINQ code in the Business Logic Layer, and the appropriate methods returning serializable "objects" to the presentation layer?

     

That's what I've been doing, and it has worked great for my app so far. But my app has been primarily read-only up until this point, and I'm still trying to figure out the best way to track changes on the presentation-layer side and reconcile them on the server side...

     

    Monday, November 26, 2007 4:57 PM
It works well, but is still a real pain. This is essentially the normal WCF model: have a client proxy call the WCF server API and return DTOs. LINQ itself is abstracted behind the server, so it is not really in the problem space - it could be some other DAL, such as CSV. IMO, there are three problems:

     

1) You can't use LINQ to do your queries. You can query the returned lists, but LINQ's value gets somewhat reduced on the client side.

2) You have to create tons of DTO classes. It would be nice to just reuse the entity objects, but I still have issues with circular references using this method, unless this was addressed in RTM. Has it been?

3) The general complexity rises with all the layers and DTOs and debugging. It should be as easy as writing a local client; all the remoting stuff should be abstracted away. When you think about it, we don't really care about the plumbing - it should just work. That said, I smell good progress; I feel they will get there.
    Monday, November 26, 2007 7:52 PM
The anonymous class that LINQ produces is almost what we want for DTOs. However, it only has method scope; it is typed as Object outside the method (if I understand correctly). We need anonymous types extended to be serializable in a useful way. What comes to mind is the record type from F#.

     

I do not know enough yet, but is there some way to use the Data Contract idea to turn a serialized anonymous object into something useful?
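Pending something like an F# record, the conventional workaround is a small named DTO in place of the anonymous type, so the shape survives the method boundary and can be serialized. A sketch (names hypothetical):

```csharp
using System.Collections.Generic;
using System.Linq;

// A named DTO carrying the same shape the anonymous type would have had.
public class CustomerDto
{
    public string Name;
    public int Age;
}

public static class Queries
{
    public static List<CustomerDto> GetCustomers()
    {
        // Stand-in for a LINQ to SQL table.
        var source = new[] { new { Name = "Ann", Age = 34 } };

        // Inside the method we could project into "new { c.Name, c.Age }", but the
        // caller would only see object; the named DTO keeps the static type visible.
        return source.Select(c => new CustomerDto { Name = c.Name, Age = c.Age }).ToList();
    }
}
```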

     

     

     

     

    Tuesday, November 27, 2007 12:54 AM
The Juval Lowy video is very thought provoking. It reminds me of the Erlang concept of processes as the main unit of computation. The system then shows promise of Erlang's near-linear scalability, fault tolerance, and ability to recover from failure...

     

    Tuesday, November 27, 2007 12:58 AM
  •  DataArchitect wrote:

The anonymous class that LINQ produces is almost what we want for DTOs. However, it only has method scope; it is typed as Object outside the method (if I understand correctly). We need anonymous types extended to be serializable in a useful way. What comes to mind is the record type from F#.

     

     

    I was thinking the same thing about the F# record type.

    Tuesday, November 27, 2007 2:00 AM
If you don't expect to be accessing the information directly -- i.e., dtoObject.FooProperty -- then here's an idea that may be worth exploring.

     

Using reflection and Lightweight Code Generation, you could analyze the anonymous type, create a wrapping type with whatever markup you need, and then return an instance of the wrapping type containing the anonymous instance. A simple extension method could pipeline this:

     

    Code Block

    var q = db.Customers.Select(c => new { c.Name, c.Age });

     

    var wrappedSequence = q.Wrap();

     

     

    where:

     

    Code Block

public static IEnumerable<object> Wrap<TEntity>(this IEnumerable<TEntity> sequence)
{
    // Does the reflection and LCG; caches the generated type for later reuse
    var wrappingType = CreateWrappingType(typeof(TEntity));

    foreach (var item in sequence)
    {
        yield return Activator.CreateInstance(wrappingType, item);
    }
}

     

     

     

Remember, just because a type is anonymous doesn't mean the type doesn't exist. "Object" is just the best name that you, the developer, can use at compile time. The compiler still has access to the real type.
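That point can be seen with plain reflection - the compiler-generated type is real and fully inspectable at runtime. A small sketch:

```csharp
using System.Collections.Generic;

public static class AnonymousInspector
{
    // The caller can only type the instance as "object", but the anonymous
    // type behind it has real, reflectable properties.
    public static Dictionary<string, object> Describe(object anon)
    {
        var result = new Dictionary<string, object>();
        foreach (var p in anon.GetType().GetProperties())
            result[p.Name] = p.GetValue(anon, null);
        return result;
    }
}
```

For example, `AnonymousInspector.Describe(new { Name = "Ann", Age = 34 })` yields a dictionary with entries Name and Age - exactly the metadata a wrapping-type generator would read.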

     

The downside of this approach is that while dynamically created methods are collectable in the current CLR, dynamically created types are not (hence the caching). If that's truly a problem, you might see what the DLR can provide.

     

Here are a couple of links on LCG:

     

    http://wesnerm.blogs.com/net_undocumented/2005/06/lightweight_cod.html

    http://msdn2.microsoft.com/en-us/library/system.reflection.emit(vs.90).aspx

     

    Tuesday, November 27, 2007 7:05 AM
Hi devnet247 - you didn't leave your name...

    The only problem I see with your approach is that you make an extra query by retrieving the original object before updating it, so that SubmitChanges will be able to track the changes.

     

> In 99% of cases (well, almost) it will be in the lookup cache, and I find it useful to do server-related auditing/concurrency checks at this point...

     

> It sounds like you are using a 2-tier or 2.5-tier (web) app. When working with n-tier there is no guarantee the object will be in memory for LINQ to check against, so you must fetch it, whether you do that manually or via sync. Storing it manually is probably not advisable, as you would be creating an in-memory DB and it would never be released.

     

    Regards,

     

    Ben

     

     

     

    Friday, November 30, 2007 9:09 AM
Hi Alex,

     

    "BTW, it still leaves the issue of serialization to the presentation tier."

     

This is really the key. To me the client-to-service interface (or service-to-service) is much more important (and troublesome) than the data, and I nearly always use a separate messaging layer that folds/combines objects and sends extra information, or deletes extra fields like the Id of a child when the child is attached - sending duplicates just confuses people.

Be very careful about samples and theories here; many fall apart on real projects. Distributed OO for real projects is VERY hard, and I have seen many disasters. Unless it's a small project, I recommend simpler SOA-style services if you have not done many distributed .NET apps - and there you will find you get an anaemic OO domain.

     

     

    Regards,

     

    Ben

     

     

    Friday, November 30, 2007 9:22 AM
Regarding Juval's "every class is a WCF service": this was discredited seven years ago - why has it come back? (I mean Remoting, DCOM and J2EE have all tried this; it works for examples, but not under real loads.) Over the last seven years I have seen a number of disastrous distributed apps, and in each case they tried to abstract the comms. Sure, the software was architected to best practices to be flexible, but after all the comms hacks the objects got pretty ugly, and the software then needed further hacks to run at acceptable speed. You cannot abstract cross-process boundaries; they are, and should be, a first-class design decision, and IMHO more important than the domain model.

     

     

3-7 classes per service is my guideline, with a few classes being individual services.

     

You may be interested: I have started using workflow in my SOA services, but the workflow just processes messages, which are mapped to data only for persistence. I mean, with workflow in theory you don't even need a DB (except for the workflow itself) - just create data for a reporting DB and use reports - but I'm not brave enough for that.

     

It is looking OK so far, but there is a bit of a learning curve for workflow.

     

    Regards,

     

    Ben

    Friday, November 30, 2007 9:37 AM
  • Hi Ben,

     

    On the "Every Class is a WCF Service" comment you have - it may indeed be true that the WCF implementation is problematic. I do not know, having neither seen reports nor tried it.

     

However, dismissing the idea does not ring true. I mentioned Erlang above - probably the most commercially successful distributed programming environment - and it uses "processes" and messages as its primary programming model. This is similar in concept to WCF services: isolated processes, with all work done by sending messages. Maybe the issue is that WCF services are very far from lightweight, which is what Erlang processes are?

     

Joe Armstrong, one of the principal designers of Erlang, writes about a Universal Binary Format (UBF) for messaging, which may be an interesting idea to pursue for data serialization.

     

    Alex

     

    BTW, what is 3-7 Class?
    Saturday, December 1, 2007 11:45 PM
Hi Alex,

     

     

     

If performance doesn't matter, it's OK... A lot of successful systems run at sub-10 transactions per second. COM+ is VERY lightweight (binary and C++) and it fails to do this. I have been on projects where we had a low service-to-process count; they failed and I had to try to fix them. A colleague is currently trying to fix a system that is not working - it's based on Juval's model, and with one user, to quote him, "it runs like a dog".

     

The definition of "process" here is critical; I am assuming a Windows process.

One of the lessons learned with middleware was for your 1:m (especially child) relationship data to be contained in a single service-to-client message. This was integrated into SOA ("chunky" messages) and allows you to have smaller services, though I think it is far too optimistic to go to one class per service - only four years ago the industry was building 100 classes into one service and thought that was a good idea (and IMHO it works better than 1:1).

     

I'll give you a simple example, and yes, there are workarounds, but you will hit this problem time and time again.

     

A client (or service) needs 2,000 contacts for a search result, mailer, report, or a look at the sales team's daily work (or orders, jobs, etc.).

The Contact service calls the Address service for each contact, making 2,000 calls to the Address service. Your fast 20 ms calls (real life, not test systems with one user per service) mean this simple operation takes 40 seconds; for 20,000 it would take 7 minutes... and saturate your now-slow service. Even worse, your single query now becomes 2,001 queries and saturates your DB. So you have to add load-balancing machines, lazy loading, caches, complex fetches (e.g. send a list of 20K ids and get the result back), etc., introducing more complexity. Yet if the Address class were part of Contact, none of this would be needed; in addition, you could easily run a transaction when a new contact with an address is inserted.

     

The DB is critical here. Sometimes in real-life systems your queries might hit 3-4 seconds due to system load (or higher); having lots of queries for something that can be done in one or two is a bad idea. The only way around this is for the Contact service to have direct access to the DB and generate the addresses itself, but then your Address service is no longer solely responsible for addresses, really defeating the point of having a 1:1 relationship.

     

Services need to contain closely related information. Period. Design your system by tightly coupling closely related information and loosely coupling more loosely related information; worry about OO etc. last.
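A sketch of such a "chunky" contract, where a contact and its addresses travel together in one service call (all names hypothetical):

```csharp
using System.Collections.Generic;

public class AddressMessage
{
    public string Street;
    public string City;
}

public class ContactMessage
{
    public int Id;
    public string Name;
    // Child rows embedded in the parent: no separate Address service call needed.
    public List<AddressMessage> Addresses = new List<AddressMessage>();
}

public class ContactSearchResult
{
    // One call returns the whole page of contacts, children included,
    // so 2,000 contacts cost one round trip instead of 2,001.
    public List<ContactMessage> Contacts = new List<ContactMessage>();
}
```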

     

The only way you could make it work is if you used something like R. Kiss's null adapter (which does memory-to-memory communication and, unlike the named-pipe binding, doesn't do the serialization/authentication, which is the biggest cost). You would then build a framework where you move related services into the same process, but IMHO you don't gain much: all you are doing is building closely related services dynamically, and it's not hard to move a class to a different service - the good thing about small services is that change is easy. Note that even in this model you still suffer context switches, and there is significant overhead for each message.

     

     

    Regards,

     

    Ben

    Monday, December 3, 2007 9:58 AM