CSLA Framework - why do I dislike it?

  • Question

  • Why is it, then, that whenever CSLA is mentioned my hackles go up? OK, so way back when I worked on a VB5/6 project using Business Objects I was bitten by the poor implementation, but the .NET variant is a lot better. So what is it that I dislike so much? The central theme of the framework is essentially:
    1. When you model an object you end up with a class
    2. You want 3-layer separation
    3. Implement the 3 layers in one class
    So I can't argue with 1-2, but 3 makes my skin crawl - why? OK, so there is a bit of code bloat, but really, who cares? OK, it doesn't, IMO, encourage separation, as it's rather tempting to make cross-layer calls, but that can be code-reviewed out. It doesn't allow security by physical separation - hmm, yes, that's a bit of a nasty constraint. It encourages (but doesn't prescribe) the inclusion of data access code in the object - again, you don't have to. It uses serialization everywhere - OK, that's a bit irksome (and is exactly the thing that stank in the VB6 implementation), but it's not a great overhead.

    Perhaps it's the model I don't like. When I create a domain model I like to keep it as separate as possible from any IT restrictions - hence I quite like Active Record. So the thought of bundling everything together just feels dirty. Is it the coupling to the framework? Hmm, no, I think you tend to have a fair amount of utility coupling in your projects anyway. Is it that, given an interconnected set of services, you don't want to be passing data transport objects around that contain behaviour when you may not have .NET to rehydrate the code? Hmm, well, yes, that isn't nice, but practically it wouldn't take much to create façades for interop layers. Is it just old hat, as it doesn't allow for user-defined business rules? Hmm, yes, I do dislike its old-school binary representation of a rule, but is that enough to make me cringe so? Is it that it has a real small-app feel to it? Yes, but where does that come from?

    Sure, there are other parts of the framework I take issue with, such as (IMO) the inappropriate use of base classes, but that's all fluffy stuff. So: I know at least one other person here who dislikes it, and I hope a few will leap to its defence. What do you think?

    Tuesday, April 29, 2008 10:03 PM

All replies

  • Although everywhere I look, Lhotka seems to be presenting on CSLA, I haven't had a chance to really look at it myself.


    However, I know I want separation of concerns, so the "implement the 3 layers in one class" bit makes me shudder. How can you abstract your data access layer with something like that? And presentation elements have no business in a domain class.


    Wednesday, April 30, 2008 3:03 PM
  • The data abstraction can be quite sophisticated: basically you mark up the class's properties with various data attributes, not unlike the LINQ to SQL mappings (or other ORMs) I've seen. The basic premise is: what is the difference between implementing MyClassPresentationRules + MyClassBizRules + MyClassDataServices versus MyClass {PresentationRules + BizRules + DataServices}?
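For illustration, here is a minimal sketch of the kind of attribute decoration being described. The `DataField` attribute, `Customer` class and `Mapper` helper are all invented for this example - they are not actual CSLA or LINQ to SQL types:

```csharp
using System;

// Hypothetical mapping attribute -- invented for illustration; CSLA and
// LINQ to SQL define their own attribute types.
[AttributeUsage(AttributeTargets.Property)]
public class DataFieldAttribute : Attribute
{
    public string ColumnName { get; }
    public DataFieldAttribute(string columnName) { ColumnName = columnName; }
}

public class Customer
{
    [DataField("CUST_ID")]
    public int Id { get; set; }

    [DataField("CUST_NAME")]
    public string Name { get; set; }
}

public static class Mapper
{
    // A generic persistence layer can discover the mapping via reflection,
    // so the mapping lives right next to the property it maps.
    public static string ColumnFor(Type type, string propertyName)
    {
        var prop = type.GetProperty(propertyName);
        var attr = (DataFieldAttribute)Attribute.GetCustomAttribute(
            prop, typeof(DataFieldAttribute));
        return attr == null ? propertyName : attr.ColumnName;
    }
}
```

The question in the post then becomes: is keeping the mapping on the class like this a convenience, or an unwanted coupling of the business class to the database schema?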

    Wednesday, April 30, 2008 4:32 PM
  • 1) A domain class should either inherently own the business rules or be able to call a business rules engine to handle the business rules.


    2) Presentation logic does not belong on a domain class. Domain classes should be completely presentation-agnostic.


    3) A domain class shouldn't be responsible for accessing the data. You shouldn't have a "MyClassDataServices" class either. That's a rather concrete implementation. There are several options for ORM, but I do believe it's better to be implemented outside of the class so the domain class isn't tightly coupled to the database.
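As a sketch of the separation being argued for here (all names invented for illustration): behaviour stays on the domain class, while persistence sits behind an interface outside it, so the domain class is never coupled to a particular database technology.

```csharp
using System;

// Domain class: owns its business behaviour, knows nothing about storage.
public class Invoice
{
    public int Id { get; set; }
    public decimal Total { get; private set; }

    public void AddLine(decimal amount)
    {
        if (amount < 0) throw new ArgumentOutOfRangeException(nameof(amount));
        Total += amount;
    }
}

// Persistence is defined outside the domain class, behind an interface,
// so swapping ADO.NET for an ORM never touches Invoice itself.
public interface IInvoiceRepository
{
    Invoice Fetch(int id);
    void Save(Invoice invoice);
}
```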

    Wednesday, April 30, 2008 6:11 PM
  • Hi


    I guess the main problem I have with this framework is that a lot of his ideas date from the first implementations of the framework. He's been writing the same thing for years now..

    I don't get the importance he puts on databinding, and all the *** he's written for it ^^

    but hey, that's just me :P






    Wednesday, April 30, 2008 6:53 PM
  • frederikm, yes that doesn't fill me with confidence either, especially as the earlier version burnt me.

    Chris, first off, I'm playing devil's advocate here, but:
    1) It does own its own business rules, doesn't it?
    2) Is there a real problem with decorating your class with db-like but abstract attributes? Does that mean it's acting as if it's responsible for data access? After all, you have to put the mapping somewhere - why not keep that mapping as close to the object it's mapping as possible?
    What I'm getting at, Chris, is that I dislike it, but in an emotional way. However, experience shows that if my gut feeling is no, it's just that my brain hasn't seen why yet!

    Wednesday, April 30, 2008 8:10 PM
  • 1) Yeah, I don't have a problem with CSLA regarding that.


    2) It depends on the application. If I'm designing an Enterprise application, I don't want the business classes tied directly to the database (which is what you get by decorating). That is one problem I have with LINQ to SQL, but I think the Entity Framework solves that problem (I haven't looked too deeply into it yet, though).


    I think analyzing frameworks such as CSLA is a great thing to do. I generally don't agree with all the design decisions, but it's great to look at the solutions others have come up with.

    Wednesday, April 30, 2008 8:20 PM
  • Thanks Chris, I think we share a very similar view, as I too turned my nose up at the attribute decorating in LINQ to SQL. The way I try to think of it is: if I employed three super-developers, each specialised in only one of the layers, would the mechanism work? E.g. my business developer hasn't a clue what a primary key is, so he would be unable to appropriately decorate the business class. OK, it's an extreme point of view, but that's what I use as a barometer to measure a concept.
    Thursday, May 1, 2008 5:45 AM
  •  frederikm wrote:



    I guess the main problem I have with this framework is that a lot of his ideas date

    from the first implementations of the framework.  He's been writing the same thing for years now..

    I don't get the importance he puts on databinding, and all the *** he's written for it ^^

    but hey that's just me






    I think your quote is relevant here too. Due to the lack of a separate DTO in CSLA, you don't have the luxury of that abstraction.

    Saturday, May 3, 2008 8:31 AM

    Hello all,

    I think there are some fundamental misconceptions expressed in this thread, especially around data access and CSLA .NET. These are pretty common, and flow from the idea that CSLA plays in the ORM space – which it most certainly does not. I have absolutely no desire to play there, though I’m quite happy that Microsoft now has two offerings in that area (it will be fun to see which, if either, wins in the end)!

    CSLA doesn’t require, or necessarily even suggest, that you put the data access code into your objects. Certainly that is one option, and one that I sometimes use and do talk about. But it is absolutely critical to understand that CSLA specifically does not care how you talk to your database. This is one of its key strengths, allowing CSLA users to adapt from ADO.NET 1 to 2, then to LINQ to SQL then to the Entity Framework and to whatever comes next, with minimal or no impact on their UI or business code.

    (as an aside, Microsoft is now replacing their data access technology every 1.1 years on average – so coupling any framework to one data access technology seems very silly to me)

    On the CSLA forum and my blog I have discussed the DAL issues. The core issue is preservation of encapsulation. If the data fields are in the business object – which CSLA does require – then the trick is figuring out how to get the data into/out of those fields without breaking encapsulation (too badly) and without impacting performance (too badly).

    If the data access code is actually inside the same class – though separated perhaps in another partial class or something – this all becomes a non-issue. You don’t have to break encapsulation or worry about performance (because you have direct access to the fields). But I totally understand and agree with the arguments for a separate DAL assembly, because that buys you some things that partial classes or other techniques simply can’t provide.

    If you have an external DAL then you have to make some tough choices. Somehow that external code must interact with the fields of your business object. You can’t interact with the properties, because they include authorization, validation and other business behaviors that you really don’t want invoked during the persistence process. In fact, you’ll typically have major perf issues if you authorize, validate and run all business logic on your object as you load it from the database – especially when loading a collection of a few hundred items – a common scenario.

    So then how do you load the object’s fields?

    1. Define a DTO contract to pass between the business object and DAL – a clean, service-oriented approach – with commensurate overhead (which is to say there’s a perf impact). On the whole this is my preferred approach, but not if perf is the top priority.

    2. Define an ADO.NET contract (DataReader, etc.) to allow the business object and DAL to interact. In this model the business object typically calls the DAL to get back an open DataReader, so the code to load the object’s fields is still in the object – so you get excellent perf. The problem here is that you are coupled to ADO.NET – but that is offset (in my mind) by the fact that the base-level functionality of ADO and ADO.NET (connection/command/parameter/reader) hasn’t really changed for 10 years. So I like this model just fine, especially if perf is the top priority.

    3. You can define an interface on every business object by which the DAL can set the fields of the object. This typically means implementing a parallel set of properties to the “real” properties, but a set that doesn’t invoke authorization/validation/business rules. This is fast, and has clear intent, but it adds up to a lot of code to write and maintain. If you have a code generator it may be a fine solution; otherwise it seems way too costly to me. And there’s no way to ensure that only the DAL can call this interface – so you’ve opened up a nice hole by which a UI developer can cheat and bypass the object’s logic – and that seems bad to me as well.

    4. You can use reflection in the DAL to do complex mapping of the data into the private fields. There are numerous variations on this theme – using dynamic method invocation, pre-compiling the reflection like the XmlSerializer does, etc. – but they all blatantly break encapsulation by having external code muck around with the private members of another object. And even with dynamic method invocation there’s a very real perf hit. While this is perhaps the coolest solution, it is my least favorite.

    Obviously there are many other potential solutions – I’ve just listed the ones that most people come up with when trying to address this issue. As I note, I prefer numbers 1 and 2, in that order. If you look at the CSLA reference app in version 3.5 you’ll see that I am essentially using number 1, with LINQ to SQL as the DAL implementation. But I’ll confess that I’m not completely thrilled with that implementation, because it is still a bit too coupled to L2S, and I’m not convinced that L2S will be the best solution (for the next 1.1 years, anyway :)
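A minimal sketch of option 1 above, with invented names (`CustomerDto`, `ICustomerDal`, `CustomerEdit` - none of these are actual CSLA types): the business object and the DAL exchange a plain DTO, so the DAL never touches the object's private fields, and the load path bypasses the rule-running public properties.

```csharp
// The contract between business object and DAL is a behaviour-free DTO.
public class CustomerDto
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public interface ICustomerDal
{
    CustomerDto Fetch(int id);
    void Update(CustomerDto data);
}

public class CustomerEdit
{
    private int _id;
    private string _name;

    // The public property is where authorization/validation would run.
    public string Name
    {
        get { return _name; }
        set { _name = value; /* business rules would be invoked here */ }
    }

    // Loading copies DTO values straight into the private fields, so no
    // business rules fire while hydrating hundreds of objects from the db.
    public static CustomerEdit Fetch(ICustomerDal dal, int id)
    {
        var dto = dal.Fetch(id);
        return new CustomerEdit { _id = dto.Id, _name = dto.Name };
    }
}

// An in-memory fake standing in for a real DAL assembly.
public class FakeCustomerDal : ICustomerDal
{
    public CustomerDto Fetch(int id) =>
        new CustomerDto { Id = id, Name = "Acme" };
    public void Update(CustomerDto data) { }
}
```

The perf cost mentioned in the post is the extra copy: field values travel DAL → DTO → object rather than straight into the fields.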

    In the end, my key points remain consistent:

    1. Software should be written for maintainability over time (which implies a consistent architecture and approach over a period of years – constantly pursuing the latest fads is counter to this principle).

    2. A logically layered architecture is critical to success; this has been reinforced time and time again for at least 20 years.

    3. The hardest layer boundary to get right is between the UI and business layer (hence my fixation on data binding, btw – that and the reality that data binding saves HUGE amounts of code in the UI; and it is my strong conviction that the goal for the UI should be zero lines of code, which is why WPF/XAML are so darn cool).

    4. Logical layering doesn’t necessarily mean separate assemblies – it can be done using regions, partial classes and various other techniques – as long as the separation is understandable and consistent; assemblies are good, but they are merely one tool at our disposal.

    My goal over the past 12 years of doing this (yes, I’ve been talking about layered architecture for that long) has been to convey the complexity and importance of thinking these things through. And as a side-effect I created CSLA .NET, which is an implementation resulting from making many of these choices. If you made a different set of choices you’d come up with a different framework. Not better. Not worse. Different.

    In the final analysis the question is whether the architecture (and any related framework/tools/etc) meet the needs of the consumer (the business). Some architectures have failed. Some have worked in limited settings, or with specific people making them happen. Some work in a broad sense across many organizations/applications/etc.

    I am happy that CSLA .NET is in that third category – in use by hundreds (perhaps thousands) of organizations around the planet – certainly by many thousands of developers. It is used in all sorts of industry verticals, by huge organizations and by single developers.

    Monday, June 2, 2008 4:01 PM
  • Thanks for the post, Rocky. Apart from my pain with the VB5 attempt (LSET nightmare ;)) I don't have any real technical arguments against the framework. However, and I realise this may be impossible for you to respond to, there is some intangible thing I don't like about it. What this post is about is either teasing that out of me or reassuring me that I'm being paranoid. Some of the issues I feel have arisen are about the DAL, and as you mention there are a number of ways of skinning that cat. The issues that nag at me are, and please feel free to tell me otherwise:
    1. Grouping all tiers into one assembly, and how that may encourage calls that skip tiers
    2. "Hardcoding" business rules into deployable components
    3. The mixing of skill sets (really related to 1), which does include using some DAL attribute decoration
    4. Having a thin DTO is useful when moving across service boundaries - i.e. it's not a true object, since it has no rules/behaviour; the services enforce the rules (it might be an advantage not to encourage this)
    Let me be clear about my worries: I absolutely agree with the premise of the abstractions in the business layer the framework is striving for. Sure, I dislike some of the derived container classes, but that's the sort of thing you can argue about forever, so they don't stop me using it. I just find myself uneasy with it and I don't know why. I realise that you don't need to justify it, but it would be good if you would (nicely) set me straight ;)

    Monday, June 2, 2008 8:40 PM
  • No problem, I enjoy sharing information about the architecture and framework.

    I don’t think it is really fair to relate CSLA .NET to CSLA Classic. I totally rewrote the framework for .NET (at least 4 times actually – trying different approaches/concepts), and the only way in which CSLA .NET relates to CSLA Classic is through some of the high level architectural goals around the use of mobile objects and the minimization of UI code.

    The use of LSet in VB5/6, btw, was the closest approximation to the concept of a union struct from C or Pascal or a common block from FORTRAN possible in VB. LSet actually did a memcopy operation, and so wasn’t as good as a union struct, but was radically faster than any other serialization option available in VB at the time. So while it was far from ideal, it was the best option available back then.

    Obviously .NET provides superior options for serialization through the BinaryFormatter and NetDataContractSerializer, and CSLA .NET makes use of them. To be fair, though, a union struct would still be radically faster :)

    Before I go any further, it is very important to understand the distinction between ‘layers’ and ‘tiers’. Clarity of wording is important when having this kind of discussion. I discuss the difference in Chapter 1 of my Expert 2005 Business Objects book, and in several blog posts.

    The key thing is that a layer is a logical separation of concerns, and a tier directly implies a process or network boundary. Layers are a design constraint, tiers are a deployment artifact.

    How you layer your code is up to you. Many people, including myself, often use assemblies to separate layers. But that is really just a crutch – a reminder to have discipline. Any clear separation is sufficient. But you are absolutely correct, in that a great many developers have trouble maintaining that discipline without the clear separation provided by having different code in different projects (assemblies).


    CSLA doesn’t group all layers into a single assembly. Your business objects belong in one layer – often one assembly – and so all your business logic (validation, calculation, data manipulation, authorization, etc) are in that assembly.

    Also, because CSLA encourages the use of object-oriented design and programming, encapsulation is important. And other OO concepts like data hiding are encouraged. This means that the object must manage its own fields. Any DAL will be working with data from the object’s fields. So the trick is to get the data into and out of the private fields of the business object without breaking encapsulation. I discussed the various options around this issue in my previous post.

    Ultimately the solution in most cases is for the DAL to provide and consume the data through some clearly defined interface (ADO.NET objects or DTOs) so the business object can manage its own fields, and can invoke the DAL to handle the persistence of the data.

    To be very clear then, CSLA enables separation of the business logic into one assembly and the data access code into a separate assembly.

    However, it doesn’t force you to do this, and many people find it simpler to put the DAL code directly into the DataPortal_XYZ methods of their business classes. That’s fine – there’s still logical separation of concerns and logical layering – it just isn’t as explicit as putting that code in a separate assembly. Some people have the discipline to make that work, and if they do have that discipline then there’s nothing wrong with the approach imo.
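Schematically, the choice looks like this. `DataPortal_Fetch` is the real CSLA naming convention, but the classes below are simplified stand-ins invented so the sketch compiles on its own (no CSLA base class or data portal plumbing):

```csharp
public class ProjectData
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// The separate-assembly style: data access sits behind an interface,
// implemented in its own DAL project.
public interface IProjectDal
{
    ProjectData Fetch(int id);
}

// In-memory stand-in for a real DAL implementation.
public class InMemoryProjectDal : IProjectDal
{
    public ProjectData Fetch(int id) =>
        new ProjectData { Id = id, Name = "Demo" };
}

public class ProjectEdit
{
    private int _id;
    private string _name;
    public string Name { get { return _name; } }

    // In real CSLA the data portal invokes this method. The simpler style
    // puts ADO.NET code directly in here; the style shown delegates to the
    // external DAL, making the layering explicit rather than disciplinary.
    private void DataPortal_Fetch(IProjectDal dal, int id)
    {
        var data = dal.Fetch(id);
        _id = data.Id;
        _name = data.Name;
    }

    public static ProjectEdit Get(IProjectDal dal, int id)
    {
        var obj = new ProjectEdit();
        obj.DataPortal_Fetch(dal, id);
        return obj;
    }
}
```

Either way the logical layering exists; the separate assembly just enforces it.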


    I have no problem writing business rules in code. I realize that some applications have rules that vary so rapidly or widely that the only real solution is to use a metadata-driven rules engine, and in that case CSLA isn’t a great match.

    But let’s face it, most applications don’t change that fast. Most applications consist of business logic written in C#/VB/Java/etc. CSLA simply helps formalize what most people already do, by providing a standardized approach for implementing business and validation rules such that they are invoked efficiently and automatically as needed.

    Also consider that CSLA’s approach separates the concept of a business rule from the object itself. You then link properties on an object to the rules that apply to that object. This linkage can be dynamic – metadata-driven. Though the rules themselves are written as code, you can use a table-driven scheme to link rules to properties, allowing for SaaS scenarios, etc.
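A toy sketch of that separation (all names invented - this is not the CSLA rules API): the rules themselves are compiled delegates, but the property-to-rule linkage is plain data that could just as well be loaded from a database table per tenant.

```csharp
using System;
using System.Collections.Generic;

// Rules are ordinary compiled code...
public static class CommonRules
{
    public static bool Required(object value) =>
        value is string s && s.Length > 0;

    public static bool NonNegative(object value) =>
        value is decimal d && d >= 0m;
}

// ...but which rule applies to which property is metadata, so this map
// could be populated from a table rather than hard-coded.
public class RuleSet
{
    private readonly Dictionary<string, List<Func<object, bool>>> _links =
        new Dictionary<string, List<Func<object, bool>>>();

    public void Link(string property, Func<object, bool> rule)
    {
        if (!_links.TryGetValue(property, out var list))
            _links[property] = list = new List<Func<object, bool>>();
        list.Add(rule);
    }

    // True only if every rule linked to the property passes.
    public bool Check(string property, object value)
    {
        if (!_links.TryGetValue(property, out var list)) return true;
        foreach (var rule in list)
            if (!rule(value)) return false;
        return true;
    }
}
```

Usage: `rules.Link("Name", CommonRules.Required);` then `rules.Check("Name", "")` fails while `rules.Check("Name", "Bob")` passes - the linkage, not the rule code, is what varies per deployment.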


    This is an inaccurate assumption. CSLA .NET requires a strong separation between the UI and business layers, and allows for a very clear separation between the business and data access layers, and you can obviously achieve separation between the data access and data storage layers.

    This means that you can easily have UI specialists that know little or nothing about OO design or other business layer concepts. In fact, when using WPF it is possible for the UI to only have UI-specific code – the separation is cleaner than is possible with Windows Forms or Web Forms thanks to the improvements in data binding.

    Also, when using ASP.NET MVC (in its present form at least), the separation is extremely clear. Because the CSLA-based business objects implement all business logic, the view and controller are both very trivial to create and maintain. A controller method is typically just the couple lines of code necessary to call the object’s factory and connect it to the view, or to call the MVC helper to copy data from the postback into the object and to have the object save itself. I’m really impressed with the MVC framework when used in conjunction with CSLA .NET.

    And it means that you can have data access specialists that only focus on ADO.NET, LINQ to SQL, EF, nHibernate or whatever. In my experience this is quite rare – very few developers are willing to be pigeonholed into such a singularly uninteresting aspect of software – but perhaps your experiences have been different.

    Obviously it is always possible to have database experts who design and implement physical and logical database designs.


    I entirely agree that the DTO design pattern is incredibly valuable when building services. But no one pattern is a silver bullet and all patterns have both positive and negative consequences. It is the responsibility of professional software architects, designers and developers to use the appropriate patterns at the appropriate times to achieve the best end results.

    CSLA .NET enables, but does not require, the concept of mobile objects. This concept is incredibly powerful, and is in use by a great many developers. Anyone passing disconnected ADO Recordsets, or DataSets or hashtables/dictionaries/lists across the network uses a form of mobile objects. CSLA simply wraps a pre-existing feature of .NET and makes it easier for you to pass your own rich objects across the network.

    Obviously only the object’s field values travel across the network. This means that a business object consumes no more network bandwidth than a DTO. But mobile objects provide a higher level of transparency in that the developer can work with essentially the same object model, and the same behaviors, on either side of the network.

    Is this appropriate for all scenarios? No. Decisions about whether the pattern is appropriate for any scenario or application should be based on serious consideration of the positive and negative consequences of the pattern. Like any pattern, mobile objects has both types of consequence.

    If you look at my blog over the past few years, I’ve frequently discussed the pros and cons of using a pure service-oriented approach vs an n-tier approach. Typically my n-tier arguments pre-suppose the use of mobile objects, and there are some discussions explicitly covering mobile objects.

    The DTO pattern is a part of any service-oriented approach, virtually by definition. Though it is quite possible to manipulate your XML messages directly, most people find that unproductive and prefer to use a DTO as an intermediary – which makes sense for productivity even if it isn’t necessarily ideal for performance or control.

    The DTO pattern can be used for n-tier approaches as well, but it is entirely optional. And when compared to other n-tier techniques involving things like the DataSet or mobile objects the DTO pattern’s weaknesses become much more noticeable.

    The mobile object pattern is not useful for any true service-oriented scenario (note that I’m not talking about web services here, but rather true message-based SOA). This is because your business objects are your internal implementation and should never be directly exposed as part of your external contract. That sort of coupling between your external interface contract and your internal implementation is always bad – and is obviously inappropriate when using DTOs as well. DTOs can comprise part of your external contract, but should never be part of your internal implementation.

    The mobile object pattern is very useful for n-tier scenarios because it enables some very powerful application models. Most notably, the way it is done in CSLA, it allows the application to switch between a 1-, 2- and 3-tier physical deployment merely by changing a configuration file. The UI, business and data developers do not need to change any code or worry about the details – assuming they’ve followed the rules for building proper CSLA-based business objects.
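From memory of CSLA .NET 3.x, that configuration switch is driven by appSettings keys along these lines - treat the exact key values and type names as a sketch, since they vary by version and channel:

```xml
<configuration>
  <appSettings>
    <!-- 1-/2-tier: value "Local" runs the "server side" in-process. -->
    <!-- <add key="CslaDataPortalProxy" value="Local" /> -->

    <!-- 3-tier: a remote proxy sends data portal calls to an app server. -->
    <add key="CslaDataPortalProxy"
         value="Csla.DataPortalClient.WcfProxy, Csla" />
    <add key="CslaDataPortalUrl"
         value="http://appserver/MyApp/WcfPortal.svc" />
  </appSettings>
</configuration>
```

Flipping between the commented and uncommented proxy settings is the whole deployment change; no UI, business or data access code is touched.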

    Everything I’m saying here, I’ve said in much more detail in my Expert Business Objects books, on my blog and on the CSLA .NET discussion forum. I’m happy to discuss it here as well, but obviously I don’t want to repeat a lot of things I’ve covered elsewhere :)


    • Unmarked as answer by pkr2000 Sunday, June 7, 2009 4:58 PM
    Wednesday, June 11, 2008 1:17 AM
  • Very quick reply (time pressure), thanks for that. I think it clarifies for me a number of points about why it doesn't fit the projects I'm working on, but it equally dismisses a number of my misconceptions, so it's been very useful. Thanks again.

    Wednesday, June 11, 2008 6:36 AM
  • Sorry for the delay in responding. I haven't been plotting an epic reply (honestly), but someone reminded me of the post and I realised I owed you an explanation.


    My concern is primarily in the area of SOA - and yes, before you're instantly put to sleep by buzzword-itis, I mean that in a very specific sense. What does CSLA bring to SOA? My view of a true SOA isn't simply about what message format to use; it's more fundamental than that. The core concept, for me, is that we have silos of information that we'd like to use. These silos may or may not be packaged up as one or more applications. When there is no "application" there, we'll usually find a database that has some business logic engraved on the tables - all very non-OO, but realistically that's what you'd find. On the other hand, those wrapping the information within one or more applications project the silo's information via some form of public API; typically these days this is a question of format/transport, but essentially they publish some way of pushing/pulling information to/from the silo. Inside the black box of the application there will be some business rules. There are a number of ways of implementing business rules, but IMO it boils down to either using pre-canned (and by that I mean compiled) components or running some form of rules engine. None of this is new, and none of it relates directly to SOA. So what does CSLA bring to the party? CSLA at its core is a working example of good practices related to logical/physical separation of tiers, abstraction of sources and end-points, etc. All good stuff. One particular bugbear CSLA strives to solve is how to ensure the business rules are adhered to regardless of the logical layer, via remote objects. Now, just like data abstraction, there are a number of ways of skinning this issue too, otherwise Martin Fowler would be out of a job ;). Each has its pros & cons; CSLA isn't a bad stab at it - I think of it as DTO with bells & whistles, which is a good thing. However, it's in the SOA sphere (or my definition of it) that things polarize for me.
    I'm now writing what could be considered a super-set of the silos: a set of applications that bring information together - I hate to use the term, but "Enterprise mash-up" seems to sum it up in tek shpeak. Now, ignoring the run-of-the-mill "oh, look what I've done with my Digg + Facebook API", I'm talking about a business utilising information as and when it's required. My company wants a Business Analyst to be deciding what information goes where and how. Yes, for a humble code jockey that sounds scary, but that's the goal. This is why Rules Engines are so compelling (and expensive, no doubt) in this Enterprise arena. So, assuming (and it's a big assumption) you buy into the idea that Rules Engines are very good at helping to orchestrate these silos of information, I've now invested heavily in getting the Rules Engine up and running. So I've been brought back down my Ivory Tower and I'm back at the silo floor. I need to implement some business rules; I've already invested in a Rules Engine, so do I want to hand-craft some rules in G#Beans or re-use the Rules Engine? Well, for consistency alone the Rules Engine sounds good. Now not only can our good friend the BA understand the business process at a macro level, they can drill down into it too. So when the fuel tax increases they can change their rule without any compilation; indeed they can simulate the change, do What-If analysis, etc. Far more than they'd get from my little business component. Now, it's never as easy as all that: 1. Selecting a Rules Engine that really is that BA-friendly 2. Ensuring business rules are adhered to - surely the remote object solves this? Well, yes and no. Ultimately both mechanisms can be circumvented; CSLA plumps for putting the code in one place to help alleviate the maintenance issues, but a Rules Engine by its nature does too. Now, that's not to say there isn't some plumbing still to be done when using a Rules Engine, but for me this is no longer a question of "why don't I like CSLA".
    It has simply become an issue of what I believe is required for a proper SOA business system, and whether the CSLA features on offer are suitable for plumbing my Rules Engine into it. IMO, if you take away the business rule features from CSLA then it becomes overkill. So, to be clear: you've stated in your post (and your books, including specific references to Rules Engines) that CSLA isn't one-size-fits-all, and I happen to believe my problem is that I've got 3 arms ;) Let's face it, I think my scenario is pretty narrow in comparison to the range of projects out there, but for me CSLA isn't required.


    Tuesday, June 24, 2008 5:24 PM
  • Mr Lhotka is a very smart guy (certainly smarter than me!) and the CSLA architecture is very clever.
    However, from my real-world perspective: we have 35 engineers working on a commercial application using CSLA and WPF. The bottom line is that CSLA is hard to use, slows down development and has a steep learning curve. It does not couple well with WPF and complex GUI requirements, and it is definitely very slow when it comes to database access (very bloated). We have no choice - we are using it; we have even created code generators to create our business objects ..... IMO CSLA violates my golden rule of software development ... keep it simple.
    Friday, May 29, 2009 2:19 PM