Why would I ever use ServicedComponents for in-service data access?

  • Question

  • If this question belongs in another forum, please let me know where. Otherwise ...

    I've designed some data access objects that hit SQL Server 2005. They're to be used inside service boundaries in high-volume situations where scalability is an issue, so they'll execute on a separate server from business logic. Now I have to decide how to host them. I'm an experienced architect and developer (or I like to think so, anyway), but my .NET experience is modest, so I've been reading up.

    Based on my research, one conclusion I've arrived at is that, at least for SQL Server 2005, it no longer makes sense to use ServicedComponents for data access. Yet so many of the examples of how to build high-volume data access components use EnterpriseServices / ServicedComponents as the hosting mechanism that I'm puzzled. As I said, my .NET experience is modest, so I'm putting the question out there: why would I ever use ServicedComponents for data access at this point in history? My reasoning is below. Can anyone tell me if I've made any errors? 'Preciate it.

    -BillyB

    1. One of the main reasons to use EnterpriseServices is for distributed transactions, but the new System.Transactions namespace introduced in .NET 2.0 is faster, more flexible, and all-around better at this (a sketch of the pattern follows this list).
    2. Another big reason for using EnterpriseServices is object pooling. Yet object pooling doesn't make sense for data access objects. The benefits of object pooling only outweigh the costs when you're doing a lot of work in your constructor and/or holding state, yet well-designed data access objects open and close database connections as quickly as possible and are stateless. Even with stateless data access objects, it used to make sense to use object pooling because DBMS providers didn't manage connection pool sizes well; you could use object pool sizes to make sure you didn't overallocate licensed DBMS connections. This too is no longer necessary, at least with the SQL Server and Oracle providers.
    3. You can use straight remoting to host your data access objects on separate servers and achieve good scalability: if you use IIS to host 'em, IIS will use its own thread pool to create the remote objects, and you can always scale up and out with hardware (see the configuration sketch after this list). A side benefit is that when you don't need remoting, you omit the remoting configuration process and your MarshalByRefObject-derived data access objects just act like plain vanilla, lightweight local data access objects.
    4. One case where ServicedComponents with object pooling might still be useful for data access is a read-only type of component that reads static data in the constructor and leaves it out there, cached, for some time. But again, you can also use straight remoting objects configured as 'Singleton' to achieve the same result, without the object pooling costs.
    5. Leaving out the ServicedComponent inheritance makes it easier to move to a web-service-hosted approach in .NET 3.0, when web service transactions and an optimized, binary transport mechanism will become available.
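
    To make point 1 concrete, here's a minimal sketch of the System.Transactions pattern I have in mind. The class, table, and connection string are made up for illustration; the point is that a single SQL Server 2005 connection inside a TransactionScope stays a cheap local transaction instead of dragging in COM+:

        using System;
        using System.Data.SqlClient;
        using System.Transactions;

        class OrderDao
        {
            // Hypothetical connection string for illustration only.
            const string ConnStr =
                "Data Source=.;Initial Catalog=Orders;Integrated Security=SSPI";

            public void SaveOrder(int orderId, decimal total)
            {
                // TransactionScope replaces the [Transaction] attribute on a
                // ServicedComponent. With one SQL Server 2005 connection this
                // remains a lightweight local transaction; it promotes to the
                // DTC only if a second durable resource enlists.
                using (TransactionScope scope = new TransactionScope())
                using (SqlConnection conn = new SqlConnection(ConnStr))
                {
                    conn.Open(); // auto-enlists in the ambient transaction

                    SqlCommand cmd = new SqlCommand(
                        "INSERT INTO Orders (OrderId, Total) VALUES (@id, @total)",
                        conn);
                    cmd.Parameters.AddWithValue("@id", orderId);
                    cmd.Parameters.AddWithValue("@total", total);
                    cmd.ExecuteNonQuery();

                    scope.Complete(); // vote to commit; anything else rolls back
                }
            }
        }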
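
    And for points 3 and 4, a rough sketch of the IIS-hosted remoting configuration I'm picturing; the type names, assembly name, and URIs are hypothetical:

        <!-- web.config fragment for the IIS-hosted data access tier -->
        <system.runtime.remoting>
          <application>
            <service>
              <!-- Point 3: SingleCall gives a fresh, stateless DAO per request -->
              <wellknown mode="SingleCall" objectUri="CustomerDao.rem"
                         type="MyCompany.Data.CustomerDao, MyCompany.Data" />
              <!-- Point 4: Singleton shares one instance that caches static data -->
              <wellknown mode="Singleton" objectUri="ReferenceDataDao.rem"
                         type="MyCompany.Data.ReferenceDataDao, MyCompany.Data" />
            </service>
          </application>
        </system.runtime.remoting>

    A client would then activate a proxy with Activator.GetObject(typeof(CustomerDao), "http://someserver/DataTier/CustomerDao.rem"); drop the configuration and the same classes behave as ordinary local objects.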
    Friday, January 19, 2007 11:03 PM

All replies

  • I agree with your reasoning. In .NET 1.1, transactions were either local to the connection (basically a "BEGIN TRAN", etc., on the wire) or you used the DTC (via ServicedComponent or by directly importing DTC transactions). DTC transactions are not cheap and involve lots of COM interop and security goo when multiple machines are involved (say, your two bottom tiers). You also have to configure the machines to accept network transactions. .NET 2.0 and 3.0 replumbed transactions to give you a reasonably good programming model. Also, you have a lot more control over the isolation level required in each operation (COM+ basically assumed all operations in a component used the same transaction type and isolation level). The transaction management falls back to a more expensive mechanism (local --> .NET-managed --> DTC) depending on the circumstances. One thing to remember is that neither .NET 1.1 nor .NET 2.0 lets you federate a SQL connection and an MSMQ operation (against a transactional queue) in the same transaction. This was one reason we had to use the DTC directly in our situation.
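
    A quick sketch of what I mean (the connection strings are made up): each scope picks its own isolation level, and the transaction only escalates to the DTC when a second durable resource enlists:

        using System.Data.SqlClient;
        using System.Transactions;

        class TransferDao
        {
            // Hypothetical connection strings pointing at two servers.
            const string SourceDb =
                "Data Source=srv1;Initial Catalog=Ledger;Integrated Security=SSPI";
            const string TargetDb =
                "Data Source=srv2;Initial Catalog=Archive;Integrated Security=SSPI";

            public void Transfer()
            {
                // Unlike COM+, isolation level is chosen per scope, not per component.
                TransactionOptions options = new TransactionOptions();
                options.IsolationLevel = IsolationLevel.ReadCommitted;

                using (TransactionScope scope = new TransactionScope(
                    TransactionScopeOption.Required, options))
                {
                    using (SqlConnection src = new SqlConnection(SourceDb))
                    {
                        src.Open(); // first durable resource: stays lightweight
                        // ... read the rows to move ...
                    }
                    using (SqlConnection dst = new SqlConnection(TargetDb))
                    {
                        dst.Open(); // second durable resource: promotes to the DTC
                        // ... write the rows ...
                    }
                    scope.Complete();
                }
            }
        }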
    Saturday, January 20, 2007 3:28 PM
  • Thank you for the input, Erik.

    Question, though - I was under the impression that you CAN include SQL Server and MSMQ in a transaction together when using the .NET 2.0 System.Transactions stuff; it's just that this forces transaction promotion to a distributed transaction and yes, gets MSDTC involved. Or am I misunderstanding what you mean by 'directly'? I'm very interested in the answer to this question; we're in early development now, we'll have MSMQ transactions, and we're trying to decide whether to become early adopters of .NET 3.0 or not.

    What about the other points? Any disagreements there?

    Anyone else have any opposing views or other thoughts?

    P.S. My stepson is also Erik-with-a-K Johnson, by the way, so ... Salut!

    Saturday, January 20, 2007 8:34 PM
  • I was definitely wrong about the MSMQ transaction bit. I just ran a quick test and the transactional queue respected the System.Transactions.TransactionScope state I had surrounding the queuing code. But I did have to call Send() with MessageQueueTransactionType.Automatic for it to work. Sorry for causing more confusion -- and Salut back to you and your Erik-with-a-K.
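
    For reference, the core of my test looked roughly like this -- the queue path is made up, and the queue itself has to be created as transactional:

        using System.Messaging;
        using System.Transactions;

        class QueueTest
        {
            public void SendOrder(object order)
            {
                using (TransactionScope scope = new TransactionScope())
                {
                    // Hypothetical private transactional queue.
                    using (MessageQueue queue =
                        new MessageQueue(@".\private$\orders"))
                    {
                        // Without MessageQueueTransactionType.Automatic the send
                        // ignores the ambient System.Transactions transaction.
                        queue.Send(order, MessageQueueTransactionType.Automatic);
                    }
                    scope.Complete();
                }
            }
        }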
    Monday, January 22, 2007 7:06 PM