Your favorite O/R Mapper?

  • Question

  • User1904710788 posted
    Tell me which O/R Mapper is your favorite and why. I'm using Paul Wilson's O/R Mapper, and it's been great so far, but my needs are growing as projects become more complex. Is there anything new going on in this arena, or are LLBLGen Pro and EntityBroker still the top two?
    Friday, July 30, 2004 4:05 PM

All replies

  • User-528039901 posted
    Still the top two.
    Monday, August 2, 2004 3:18 AM
  • User-58325672 posted
    Does anyone have a comparison matrix between the two? :D
    Monday, August 2, 2004 9:59 AM
  • User-956878918 posted
    Thona will flame me. I respect his knowledge and his product, but here's my two cents. First off, I don't have a comparison matrix. I still highly recommend LLBLGen Pro, though, as it has a lower learning curve. I think Thona's EntityBroker has more features, but it's also more difficult to use. And if you're like me, you've got to have solutions quickly... and using an O/R mapper needs to be a quick and easy process, with minimal learning time and maximum efficiency. I think that Frans Bouma's product is superior when the requirements listed above are high on your priority list. I feel as though EntityBroker would perhaps offer more advantages if you had the time to learn it and deal with its idiosyncrasies. The new EntityBroker might be awesome, but it's still in the early stages of development, so it's difficult for me to say. Frans is building a new LLBLGen Pro as well (I assume to coincide with the release of ASP.NET 2.0). I feel as though LLBLGen Pro will give you everything you need and is a major leap over writing your own data access layer. Plus you can go from relational model to compiled object model in less than 5 minutes. That's pretty cool. Cheers, -a
    Monday, August 2, 2004 6:59 PM
  • User-1000197047 posted
    I would say LLBLGen Pro and EB are the top two getting exposure on this board. Whether or not they would be considered the best two is questionable. I believe LLBLGen uses DataSets, and anything using DataSets loses big marks in my book. One problem I had with EB was that it lacked Oracle support; however, I believe it supports that now. A couple of others to look into include DataObjects.Net and Genome.
    Thursday, August 5, 2004 12:58 PM
  • User-255404864 posted
    Almost all mappers mentioned in this forum are too complicated. By that I mean that the syntax and API are not clear or intuitive enough for me to be able to understand and use within an hour or so of experimentation (especially DataObjects.Net and Genome). On the other end of the spectrum is the Wilson OR Mapper, which IMHO has the best interface for the "average" developer with beginner-to-intermediate OO knowledge - at the price of more advanced features such as a decent query engine and support for anything but the simplest inheritance (but maybe it'll evolve...). The big two mentioned above (EB and LL) do a good job of being easy (intuitive) to use while having powerful features, which is why they will remain popular. Another attractive thing about Wilson's is that it is the most "non-intrusive" O/R mapper I've seen so far - no "PersistentObject" base classes. In fact, the business objects do not need any reference to the mapper itself, which is how it should be (sketched below). (Yes, you need to use the ObjectHolder for lazy loading of to-one relations, but that's a questionable need in many cases.)
    Thursday, August 5, 2004 5:24 PM
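    To make the "non-intrusive" point above concrete, here is a minimal sketch: the business class has no mapper base class and no reference to the mapper assembly. The IObjectMapper interface and its methods are hypothetical stand-ins, not Wilson's actual API.

        // A plain business object: no "PersistentObject" base class,
        // no reference to any mapper assembly.
        public class Customer
        {
            private int _id;
            private string _name;

            public int Id { get { return _id; } set { _id = value; } }
            public string Name { get { return _name; } set { _name = value; } }
        }

        // Hypothetical stand-in for a non-intrusive mapper's surface:
        // persistence is driven entirely from the outside (typically via
        // an external mapping file), so Customer stays mapper-ignorant.
        public interface IObjectMapper
        {
            void Insert(object entity);
            void Update(object entity);
            object GetObject(System.Type type, object key);
        }

        public class Usage
        {
            public static void Run(IObjectMapper mapper)
            {
                Customer c = new Customer();
                c.Name = "Contoso";
                mapper.Insert(c);   // mapping metadata lives outside the class

                Customer again = (Customer)mapper.GetObject(typeof(Customer), c.Id);
            }
        }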
  • User-1569077614 posted
    Here are some CodeSmith templates. These may not qualify as O/R Mappers, but they are very helpful, and FREE :)
    Data Layer
    This little friend is very similar to a business object model defined in a Wrox book, but with many nice enhancements. This script will generate standard CRUD stored procedures, plus several kinds of lookup stored procedures based on defined foreign keys (Customer_GetByCompanyID, Customer_GetByTaxID, Customer_Find, etc., etc.). It will also create C# Table objects, Row objects, and Collection objects to hold everything. The objects will even have sub-collections. Example: a Company collection can have an Employees collection in it, although you have to load it manually. It comes with a simple transaction manager, and all your DB activity can be wrapped in a transaction. All your Table objects have neat methods like .Insert, .Update, .GetByXXX, .Find, and so on (roughly the shape sketched below). It's really pretty good, but you kind of have to build your own business layer around the generated code if you really want to take the bull by the horns. The script has a few small bugs (I fixed them on the version I downloaded), and you may have to tweak some of the generated code a bit.
    Stored Proc Wrapper with MSDAAB support
    This guy creates all your CRUD stored procs, plus a few simple FK lookup stored procs (not as many as the one above, but it is still very nice). It then has a second script you run that creates C# wrapper code for the stored procs. It will actually create C# wrapper code for any stored proc you tell it to, not just the ones it generated. It's really pretty good. What it produces is good enough for most small applications, and if you need to take things a step further you can hand-code a small business object layer to sit on top of it.
    Saturday, August 7, 2004 11:05 AM
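    Roughly the shape of code such templates emit, as a sketch: one "table object" per table whose methods call the generated stored procedures. Names follow the conventions described above (Customer_GetByCompanyID etc.); this is not the templates' literal output.

        using System.Data;
        using System.Data.SqlClient;

        // Sketch of a generated "table object" for the Customer table.
        public class CustomerTable
        {
            private string _connectionString;

            public CustomerTable(string connectionString)
            {
                _connectionString = connectionString;
            }

            // Wraps the generated Customer_GetByCompanyID stored procedure,
            // one of the FK-based lookups the template produces.
            public DataTable GetByCompanyID(int companyId)
            {
                using (SqlConnection conn = new SqlConnection(_connectionString))
                {
                    SqlCommand cmd = new SqlCommand("Customer_GetByCompanyID", conn);
                    cmd.CommandType = CommandType.StoredProcedure;
                    cmd.Parameters.Add("@CompanyID", SqlDbType.Int).Value = companyId;

                    DataTable rows = new DataTable("Customer");
                    new SqlDataAdapter(cmd).Fill(rows);   // Fill opens/closes the connection
                    return rows;
                }
            }
        }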
  • User-528039901 posted
    :: Here are some CodeSmith templates. These may not qualify as O/R Mappers, but they are very helpful, and FREE :)
    You fail also to mention that they are useless. They basically are not O/R mappers. It is not that they do not qualify as O/R mappers. They are - sorry - stupid, inflexible DALs. Missing are, among tons of other things:
    * Inheritance (i.e. the ability to work with OBJECTS, not structs).
    * A query subsystem. Sorry, "GetByCompanyId" is about as idiotic as it can get. How about "get me all customers having open invoices"? (See the sketch below for what a query subsystem means.)
    They are a nice step forward, but anyone putting something like this into the foundation of his architecture should seriously get a beginner's book on the architecture of persistence layers - and read the first chapters on what a persistence layer should provide and why.
    Saturday, August 7, 2004 12:11 PM
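    For readers wondering what a "query subsystem" buys you over GetByXXX methods, here is a sketch with hypothetical stand-in types (this is not EntityBroker's actual API). The condition is expressed once against the object model and translated to SQL, joins included, so no per-question sproc or finder method has to exist in advance.

        using System.Collections;

        public class Customer { /* business class mapped by the O/R layer */ }

        // Hypothetical stand-ins for a mapper's query API.
        public class ObjectQuery
        {
            public System.Type EntityType;
            public string Condition;   // parsed against the object model, not SQL

            public ObjectQuery(System.Type entityType, string condition)
            {
                EntityType = entityType;
                Condition = condition;
            }
        }

        public interface IBroker
        {
            IList Fetch(ObjectQuery query);   // translates the condition to SQL and runs it
        }

        public class Usage
        {
            // "Get me all customers having open invoices" as one declarative
            // query, instead of a hand-written Customer_GetByOpenInvoices sproc.
            public static IList OpenInvoiceCustomers(IBroker broker)
            {
                return broker.Fetch(new ObjectQuery(typeof(Customer), "Invoices.Status == 'Open'"));
            }
        }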
  • User-1569077614 posted
    Well, OK... thank you for the informative and diplomatic response.
    Saturday, August 7, 2004 12:38 PM
  • User1341256997 posted
    Contrary to Thona Theory (you know, the one where Thona is right and nobody else knows what the hell they are doing), I believe these templates are a decent starting point. You see, the great thing about CodeSmith, unlike most O/R mappers, is that YOU, the developer, have control over the output of your templates and hence control over your own architecture. While I do believe that EntityBroker is a good product, I hate the thought of being locked into any particular architecture, any particular way of doing things, and I really hate the thought of being dependent on someone else to make a change to their product to better suit my needs. CodeSmith focuses on enabling as many code generation scenarios as possible, not on trying to dictate any particular architecture.
    Monday, August 9, 2004 2:48 PM
  • User-528039901 posted
    Contrary to what you say, though, CodeSmith makes a lousy O/R mapper. And there is a reason. The problem with the CodeSmith code generator is that it can only look at the OBJECT - not the object in context. Unless you provide templates for the complete O/R mapper, you basically end up with a badly coded and powerless mapper. The problem is twofold:
    * With CodeSmith you basically generate the code at compile time. This sucks, as the code is not extensible at runtime. The EntityBroker compiles the final schema when you start the program, allowing things like dynamic extension of the mapped classes.
    * CodeSmith focuses on the class. This is nice, but not nearly enough. You have to focus on the classes you map - in the context of all the other classes.
    So, unless someone stands up and starts writing a CodeSmith-based O/R mapper, CodeSmith is not even in a league to seriously talk about mapping objects. Anything that maps one class (no inheritance) to a database table is basically just a toy - stuff like inheritance etc. is what makes a mapper valuable. So Eric, sorry. CodeSmith is no better a tool for writing your own O/R mapper than VS.NET is. But it is simply not even the beginning of an O/R mapper in itself.
    Monday, August 9, 2004 3:40 PM
  • User1341256997 posted
    I did not claim that CodeSmith was a full-blown O/R mapper. In fact, CodeSmith does not currently support any mapping capabilities by default (although it is 100% extensible and those features could easily be added on). You see, not all people believe in Thona Theory; some people want simple CRUD wrappers for their databases and some people want really complex OO models driven from an O/R map. The great thing about CodeSmith is that it can do either. Also, unlike O/R mappers, CodeSmith is not limited to the DAL, as it can generate code for whatever you can dream up (stored procedures, data access classes, business classes, presentation layer, documentation, etc.). You did not seem to address the fact that people are effectively locked into Thona's way of doing things when they go with EntityBroker, and are at Thona's mercy to implement a feature (no offense, but that could be a while, judging by the EntityBroker 2004 release). I personally would never live with those constraints no matter how good or bad your architecture is. In this industry things change quickly, and I'm just not going to lock myself into some black box and hope for the best.
    Monday, August 9, 2004 4:27 PM
  • User381371869 posted
    I have to agree with Eric; I don't think I saw anywhere that he claimed CodeSmith to be a full-blown O/R mapper. That's the beauty of CodeSmith - it's simply an enabler of a solution. It by no means provides a full end-to-end solution for O/R mapping, but it provides a *great* environment to build your own. And, as Eric mentions, the one big problem with the O/R mappers I've seen out there is that they are a big black box. You point it somewhere, define your relationships, and press the button. You have no say over the code it generates. Perhaps LLBLGen and EB are different; I'll admit, I have no experience with either. But I know I didn't like the J2EE world and entity beans for that very reason: the specs left that open to the application server vendors, and you were basically bound to the IDE that generated the O/R code. I never like it when VS.NET generates code for me; I didn't like it a few years ago, and I still don't like it now. VS.NET has gotten better about that, but why would I move back to that with an EB or LLBLGen based solution? -Jason
    Monday, August 9, 2004 4:54 PM
  • User-1308937169 posted
    I think you missed the _ENTIRE_ point of CodeSmith. Its entire premise is that it is 100% extensible through the templating framework. _YOU_ have to provide the templates to generate _YOUR_ O/R Mapper the way _YOU_ want it. _YOU_ have to create the templates, but then _YOU_ control how simple or complex they are. The templates are only as good as their author. I don't have any experience with your product so I can't speak to it, but I do have experience with LLBLGen (not Pro) and CodeSmith. I stopped looking when I found CodeSmith, since it allows me to do things my way.
    Monday, August 9, 2004 5:14 PM
  • User-560067886 posted
    bwha-wha-what?!?!?!
    > They are - sorry - stupid, inflexible DALs.
    If you write a DAL template, you get a DAL class... if you write a full set of architecture templates (with business objects) you get a full architecture.
    > * Inheritance (i.e. the ability to work with OBJECTS, not structs).
    The templates my company uses provide us with common (publishable) data structures, a full DAL (and sprocs) supporting all table relations and 99% of query needs based on keys, indexes, and metadata, and a full business object layer including complete internal and relational validation. Almost all of our code is generated into "actively generated" base classes, and anything custom we place in the derived objects (sketched below).
    > * A query subsystem. Sorry, "GetByCompanyId" is about as idiotic as it can get. How about "get me all customers having open invoices"?
    The problem with a system designed to focus on "get me all customers having open invoices" is that the bulk of all data calls are by key or relation... but because those custom queries are sometimes needed, our templates generate our query sprocs with the ability to accept additional query arguments...
    > The problem is twofold:
    > * With CodeSmith you basically generate the code at compile time. This sucks, as the code is not extensible at runtime. The EntityBroker compiles the final schema when you start the program, allowing things like dynamic extension of the mapped classes.
    I consider "generated at compile time" an advantage... runtime O/R mapping = reflection = sloooowwww. I have yet to find an O/R mapper that can perform anywhere near as well as pure, clean, tightly compiled code. Code generation gives me the development advantage of not having to code mundane tasks, with the performance of strongly typed, tightly compiled code. No contest IMHO.
    > * CodeSmith focuses on the class. This is nice, but not nearly enough. You have to focus on the classes you map - in the context of all the other classes.
    When using a database schema as input, much of the notion of context is contained within the data relations and table relations. If that does not provide enough contextual or business definition, you can always use an XML schema to drive your templates and therefore *completely* define your object space before gen'ing. And as you define new business rules or relations, you can modify your XML and your templates to understand the new business concepts. You own the schema, so it can be as advanced or as simple as you'd like.
    We tried (unfortunately) a couple of runtime O/R mappers on a project for one of our larger clients... HUGE mistake. Performance was shameful and scalability was out the window. We have since engineered a full set of architecture templates that generate 90% of all the code that we produce. Now our time-to-market is of course way faster, and our application's performance is amazingly good. As we improve our architecture, we improve our templates, and instantly those changes are reflected in all of our objects (well, OK, not instantly - it takes 2, maybe 3 seconds to completely regenerate everything). We've been able to make our object model so easy to use as developers that new employees are up to speed and coding within a day. In fact, I'm not sure our junior developers are even aware of the data model or how the business objects get and persist their data... they just do. Company company = new Company(companyId); and company.Save(); are pretty damn simple... no hand coding for any of it.
    Also, unless I missed something when using them, runtime O/R mappers = dynamic sql. Shame, shame, shame. If I ever caught one of our developers using dynamic sql on a performance-critical (heck, on any) project, I would puke first and then slap them across the back of the head. Any halfway respectable SQL architect will absolutely demand stored procedures be used, for both performance and security.
    Monday, August 9, 2004 6:24 PM
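    A sketch of the "actively generated base class / custom derived class" split described above; the class shapes are illustrative, not this poster's actual templates.

        // Regenerated on every build from the schema; never edited by hand.
        public class CompanyBase
        {
            protected int _companyId;
            protected string _name;

            public int CompanyId { get { return _companyId; } }
            public string Name { get { return _name; } set { _name = value; } }

            public virtual void Load(int companyId)
            {
                // generated call to a Company_GetByCompanyId sproc would go here
            }

            public virtual void Save()
            {
                // generated call to Company_Insert / Company_Update sprocs would go here
            }
        }

        // Written by hand once, in its own file, so regeneration never touches it.
        public class Company : CompanyBase
        {
            public Company(int companyId)
            {
                Load(companyId);
            }

            // Custom business logic lives here, on top of the generated base.
            public bool HasName { get { return _name != null && _name.Length > 0; } }
        }

    With that split, the two calls quoted above (new Company(companyId) and company.Save()) are the entire hand-written persistence surface.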
  • User1356982465 posted
    I think code generation is great -- at least if the only option is doing it all manually! :) Seriously, O/R mappers may not be for everybody, but let's at least get the facts right. Most good O/R mappers have ways to avoid reflection and perform very comparably. I know tests with mine perform better than stored procs and datasets in some cases, because I load objects internally with a datareader, which is much faster than datasets. True, if you want to load thousands of records then the typeless dataset will be somewhat faster, but that's usually a questionable design, and it isn't due to stored procs vs. dynamic sql. Finally, while O/R mappers may prefer dynamic sql, since it's much more flexible and just as performant with modern databases, most good ones can use stored procs too! I've worked on systems in my stored proc days where, I now know, the stored procs were actually slower, since they had to internally create dynamic sql to be flexible enough, and they had the same security "issues" due to their need to work with tables directly. I'll admit this "issue" is important to many, but you can use stored procs with mappers, and you can certainly close most security "issues" even with dynamic sql if you try to. Anyhow, if code gen works for you, then great -- seriously, it's better than doing it all manually. Oh, and after reading Eric's posts, I just want to add that I agree it's great not to be tied to any black box -- but the question is whether you have the time to do it all yourself, when you create the templates and when you have to update and regen for changes. If you have the experience and time to build great templates then it may be justified, but if you lack the ramp-up time or the proper experience then a black box may be the better fit. Again, it's not that one is better than the other -- it depends -- let's just be factual here.
    Monday, August 9, 2004 6:52 PM
  • User-560067886 posted
    I mean not to discount all O/R mappers as a solution; I speak based solely on the ones I have used. Personally, I just have trouble accepting that determining anything at runtime that could have been determined at compile time can perform as well. Obviously, if those things are left for runtime, that work will have to be done then, and that represents extra work for the application. If the advantages of runtime mapping are a requirement, then runtime O/R mapping is obviously a great solution. But I've seen too many developers incur the cost of runtime mapping for no other reason than development-time convenience. It's great that Wilson uses readers internally, but you imply that code developed against sprocs doesn't - simply not true. Our architecture uses datareaders exclusively, both for performance and because the nature of our applications requires very large sets of data, which would kill memory usage on an app server in a dataset world. As far as dynamic sql goes, yes, it's true that database developers have greatly improved sql compilers, so dynamic sql isn't nearly the hit it used to be. But again, doing work at runtime that could have been done before doesn't make sense if the work doesn't change. A compiled stored procedure (albeit one that does not build up dynamic sql internally) will simply perform better than dynamic sql because the query plan is already calculated. It seems to me that the problem with an all-dynamic-sql solution is that "dynamic" simply isn't needed in most interactions. As I said before, the majority of queries will be based on keys, indexes, etc. - for which dynamic sql is not needed. In the few cases where dynamic sql is needed for query flexibility, we use it. But most O/R mappers don't give us the flexibility to decide when to use that option - it's *all* dynamic sql, and for no other reason than developer convenience. As for "the time to do it all yourself", I've perused the CodeSmith forums before and seen many public domain template sets providing full architectures. And if a more custom architecture is needed, neither those templates nor O/R mappers will provide a solution anyway. But template-based code generation provides the platform and flexibility to build your own generatable [is that a word? ;)] architecture.
    Monday, August 9, 2004 7:36 PM
  • User765121598 posted
    "yes its true that database developers have greatly improved sql compilers so dynamic sql isn't nearly the hit it used to be." If your queries need work in your ORM, then profile them. If you see non-sargeable clauses, modify your generator. If you know how to write a performant query, you should be able to write a query generator just as performant, without all the work. Plus, you don't have to depend on devs who may or may not know how to write a good query. Devs who think FK's are slow, nulls are just a nuiscance, "*" is a shortcut, and haven't even ever heard of a sargeable clause. "But again doing work at runtime that could have been done before doesn't make sense if the work doesn't change. A compiled stored procedure (albeit one that does not build up dynamic sql internally) will simply perform better than dynamic sql because the query plan is already calculated." 1: You can always cache your queries if your profiling tells you that's a performance bottleneck. I'd be very surprised if it were though. 2: Note true in MSSQL2K, but you didn't specify your db. "neither those templates nor O/R mappers will provide a solution anyway." An ORM is just a way to map your objects to a relational backend. That your Domain Logic would even care wether you use a templated ActiveRecord, or an ORM says there's something wrong IMO.
    Monday, August 9, 2004 8:24 PM
  • User1356982465 posted
    I too once had trouble "accepting" the performance of mappers -- until I saw it for myself. That forced me to do new research -- and what I had been taught was often no longer true. So I no longer see much, if any, "cost" associated with runtime mapping vs. other methods. Also, note that there has been a lot of discussion on these forums and in the various blogs about the old belief that stored procs perform better due to pre-compilation and caching -- all I can say, since I'm not the expert on that, is that it's not so cut-and-dried anymore! I also don't want to imply there's no case where a stored proc will perform better, but it's rare. Where I do NOT agree is that most applications don't need a "dynamic" flexible query engine. Almost every "real" application I've ever been involved with demanded that the users be able to search and sort on just about any combination imaginable, as well as limit the returned fields. It's exactly that type of terrible-to-write-and-maintain set of stored procs, and all the layered interaction in the application that works with those stored procs, that made me love mappers. Granted, generating it all solves that too, but I've yet to see anyone successfully generate a solution that actually had as much flexibility as my "real" applications have demanded. Sorry if I implied you didn't use readers -- my intent was simply to show that there are cases where mappers actually outperform what is very commonly used in the .NET world.
    Monday, August 9, 2004 9:16 PM
  • User-498097622 posted
    I disagree. Those particular templates might not be of use to everyone, but that's the great thing about CodeSmith: you can write the templates that are of use. I have given 3 user group presentations with a coworker of mine on CodeSmith and have received awesome feedback from each one. CodeSmith gives the developer the power to decide how to implement features and ideas. When you use an O/R Mapper you don't necessarily have that same freedom. In my experience, O/R Mappers attempt to be generic enough to solve the world's data problems; some of them have more success than others. I will fully concede that CodeSmith isn't an O/R Mapper. That also doesn't mean an O/R Mapper is the ONLY solution to every problem. Some of the things we talk about in our presentation are the powers that CodeSmith affords you:
    * CodeSmith allows developers complete control over every facet of their architecture, from SQL to presentation... I don't know of many pure O/R Mappers that can say this.
    * CodeSmith provides a great mechanism for implementing and adhering to standards.
    * CodeSmith can be used for repetitive tasks beyond just writing code. Take for example a script a coworker of mine wrote to output a batch file for executing isqlw statements for batch execution of DDL scripts.
    * CodeSmith is very ASP.NET and ASP like. If you have worked in ASP.NET or ASP for even 5 minutes, you will be able to pick up the syntax very easily (see the template sketch below). Once we had worked out the design of our classes, we were able to implement them as CodeSmith templates in about 2 hours' time.
    * CodeSmith usage and adoption can be very gradual; we started out using it just for strongly typed collections. Once we saw the power of template-based code generation, we started applying it to more and more of our architecture.
    I hear people say template generation takes too long because you still have to do the work; I would counter that that statement is somewhat false. With CodeSmith I can create 1 perfect data class, 1 perfect edit page, 1 perfect save stored procedure. Chances are my code will be very repetitious, which lets me build that common code into a template. We have built our entire framework around CodeSmith and the environment that Eric has supplied with it. Heck, we even have a pretty "magic button" GUI that executes all of our templates based on the database tables we check to generate. Here are the templates we used in our presentation; they are 100% identical to the templates we use at work, but with some of the namespaces changed. http://www.objectfoundry.com/templates.zip I have everything from SQL to UI in there. They might not be too useful to everyone else because they are based around our architecture and our framework, but they could be used for ideas on how to begin your own architecture.
    Monday, August 9, 2004 10:29 PM
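    For readers who haven't seen the syntax: a small template in the ASP-like style described above, with C# as the template language. The directive and property names are sketched from memory of CodeSmith's documented conventions and may not match the posted templates exactly.

        <%@ CodeTemplate Language="C#" TargetLanguage="C#" Description="One class per table." %>
        <%@ Property Name="SourceTable" Type="SchemaExplorer.TableSchema" Description="Table to generate from." %>
        <%@ Assembly Name="SchemaExplorer" %>
        <%@ Import Namespace="SchemaExplorer" %>
        // Generated from the <%= SourceTable.Name %> table.
        public class <%= SourceTable.Name %>
        {
        <% foreach (ColumnSchema column in SourceTable.Columns) { %>
            public <%= column.SystemType.Name %> <%= column.Name %>;
        <% } %>
        }

    The <% %> loop runs at generation time against the table's schema metadata; everything outside it is emitted verbatim as C# output.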
  • User-1406164332 posted
    :) Ah yes, I see there are a few people still left who think sprocs have some consequential performance benefit over dynamic SQL. The myth lives on. This is not a quick read, but it will help to dispel one of the most pervasive myths I've ever seen in data access. When it comes to performance, so few people actually run their own tests; they just take the word of others as gospel... see the Frans vs. Rob thread.
    Monday, August 9, 2004 11:40 PM
  • User381371869 posted
    Ah yes, the fabled debate between Frans and Rob. I believe the final word was: http://weblogs.asp.net/rhoward/archive/2003/11/18/38446.aspx "If the ad-hoc sql is a parameterized statement, then the perf will generally be similar. Otherwise the folks are incorrect - especially if the statement does anything complex such as a join, a sub-query, or anything interesting in the where clause AND the statement happens more than once. This is to say nothing about other DBA issues such as securing objects, maintenance, abstraction, and reducing network traffic, troubleshooting, etc." So there's only one case, a parameterized statement (sketched below), where the perf is similar. Regardless, let's get back to the "black box" theory: in the world of O/R mappers, how does one customize the code that's output? To me, that's key. -Jason
    Monday, August 9, 2004 11:59 PM
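    For reference, a "parameterized statement" here means ad-hoc SQL sent with parameter objects rather than concatenated values; the statement text stays constant across calls, so SQL Server can cache and reuse one plan for it, much like a proc. A minimal sketch:

        using System.Data;
        using System.Data.SqlClient;

        public class CustomerQueries
        {
            // The SQL text never changes; only @City's value does. That keeps
            // one cached plan for the statement and keeps values out of the
            // SQL string (no injection, no plan-cache churn). The connection
            // is assumed to be open already.
            public static SqlDataReader GetByCity(SqlConnection connection, string city)
            {
                SqlCommand cmd = new SqlCommand(
                    "SELECT CustomerID, CompanyName FROM Customers WHERE City = @City",
                    connection);
                cmd.Parameters.Add("@City", SqlDbType.NVarChar, 30).Value = city;
                return cmd.ExecuteReader();
            }
        }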
  • User-560067886 posted
    > Where I do NOT agree is that most applications don't need a "dynamic" flexible query engine.
    You misunderstand what I was trying to get across. Let me clarify. I have never come across an application that did not need flexible queries. That is an absolute must, particularly in applications such as ours that serve up record sets sometimes 70k rows at a time. But we can serve that need with one or a few sprocs designed to meet that need for a particular part of the app (UI). Considering the total collection of sprocs that support the application, though, those needed to support flexible queries represent a small percentage. The bulk of the queries made in the application are not user queries, but rather system-supportive calls (ones in which we have a single specific id for an object, or a specific fk id, etc.). These calls may be used to pull a single record for displaying the detail of an object, or for pulling all of the, let's say, CompanyContacts for a Company. All of these are conditions the generator can understand at design time and write solid, concrete methods and sprocs for. Our templates also generate the sprocs we use for flexible filtering/sorting. But because we control those templates, we are able to constantly optimize and hone those templates/sprocs to improve performance and meet our needs. And if our data model is forced to go in a different direction because of client requirements, we are able to adapt very quickly and create a new data access architecture that is best optimized for that data model.
    > So I no longer see much, if any, "cost" associated with runtime mapping vs. other methods.
    > Also, note that there has been a lot of discussion on these forums and in the various blogs about the old belief that stored procs perform better due to pre-compilation and caching
    As for a negligible performance hit, I have no numbers to adequately debate the subject. I speak only based on the simple notion that work done now is work that doesn't have to be done later. Due to the scale of our applications, we've found it prudent to take every possible step to improve performance, no matter how small the benefit. Code generation allows us to make those improvements without a major recoding undertaking and without creating a rat's nest of code. Also, on dyn-sql vs. sprocs perf, I don't have the data necessary to debate that, and as I conceded before, I know that dyn-sql is much improved in recent releases of MSSQL (particularly with the addition of parameterized queries) and may even perform as well as sprocs if the queries are appropriately static (allowing caching). Of course, if they are so static, it seems dynamic sql isn't particularly necessary. Stored procs work best for us due to code isolation, readability, version control, error handling, easy "hot-fix" deployment (a single-sproc deployment is less risky than a .NET app dll push), and ease of generation. If well-written dyn-sql queries now run as well as sprocs, I can accept that, though I'd find it hard to believe they perform better. Simply put, the code generation approach gives us what we feel is the most flexibility, room to grow, and performance. If an O/R mapper meets all the needs of a particular development effort, then it is certainly a good and viable option. For our needs they fell a bit short, and we weren't given the access to improve or modify the "engine".
    Tuesday, August 10, 2004 12:15 AM
  • User-560067886 posted
    Oh and Gabe, not looking to start a shouting match, but Frans hardly convinced me. http://weblogs.asp.net/rhoward/archive/2003/11/18/38446.aspx Edit: well, strike that... I see *some* of his points, but I'm unconvinced of the advantage of dyn sql over sprocs for our use.
    Tuesday, August 10, 2004 12:27 AM
  • User1341256997 posted
    Wow... looks like we opened a hornet's nest here. This is EXACTLY the reason I think template-based code generation is the best way to go. Go figure - it seems that people really like having control of their own architecture, the flexibility to do things their way, and not being locked into someone else's idea of "the right way". As we all know, there are certainly some BIG egos out there who think they know everything and that their way is "the right way", but in my experience I learn more and more every day, and the more people I work with, the more I tend to learn, and with that new knowledge I might want to change the way I do some things. It sure is nice to have the flexibility to do so. If I were to go with an O/R mapper, those decisions wouldn't be left up to me. I would be stuck with a pretty black box that I can't open, and IMO that sucks! Paul, you seem to refer to code generation as if it were only a slight step above manual coding? I think you may be thinking of passive generation, where the generated code is used merely as a keystroke saver and not as part of your architecture/build process. In my use of code generation, my templates are my source code. I do not modify the generated code, and the code is re-generated as part of my build process. If I make a change to a template, it is reflected throughout my application as soon as I rebuild. This is called active generation, and I think we should all strive to use code generation this way as much as possible. There are some situations where that might not be possible, like generating presentation layers. In most cases these would be generated passively as a kick-start and then hand-modified, because of the very customized nature of presentation layers. Also, CodeSmith has the ability to preserve custom code within source files, so you can have the best of both worlds: you can still actively re-generate these classes, but still have the ability to add custom logic to them. I think this is extremely powerful.
    Tuesday, August 10, 2004 12:58 AM
  • User-1406164332 posted
    << I think you may be thinking of passive generation, where the generated code is used merely as a keystroke saver and not as part of your architecture/build process. >>
    Ever heard of CASE tools? I have first-hand knowledge of multi-million dollar projects that ended up as failures because management bought the siren call of faster development through code generation tools. I recall one in particular that was saved only because a very intelligent and persistent developer (who later became an architect) kept showing management and senior engineers the facts: that the generated code would never work for what they needed. After millions lost, he finally convinced them to pull out the generator, and the project was implemented, almost 3 years late. I've never been able to pin-point why, but sometimes code generation works for me, and other times it's obviously the devil. I will note one pattern: any code generator I wrote never lives long. They may save me a bunch of work for a specific task one time, and I may even try to design them with reuse in mind, but they always end up on the shelf. Perhaps this is because the technology du jour keeps changing and it's too much work to constantly maintain a generator. They are throw-away tools.
    Tuesday, August 10, 2004 1:19 AM
  • User1341256997 posted
    You seem to be referring to specialized code generation tools that have little to no flexibility. I personally have no use for those kinds of tools; I refer to them as magic button generators. You press a button and *poof*, out comes the magic code. The type of code generation I'm referring to is template-based code generation, where the output of the templates is up to the template developer, and the only reason that template-based code generation would fail is because the template author failed. CodeSmith is a platform for code generation; it's up to the developer to use it correctly and make it work for them. Also, CodeSmith does not target any one specific language. It can output any ASCII-based language, and the templates themselves can be written in C#, VB.NET or JScript. I personally don't see .NET going away any time soon, so I'm not too worried about the "technology du jour".
    Tuesday, August 10, 2004 1:44 AM
  • User-498097622 posted
    For the record, if anyone bothered to look at the templates we wrote, you would see that we use dynamic sql inside of a stored procedure. We use sp_executesql and parameterize the query as well. As Eric mentioned, active generation is a concept a lot of people aren't familiar with. In the templates I provided a link to, we demonstrate the use of active generation as well as passive. I have blogged about it here (http://weblogs.asp.net/jgonzalez/archive/2004/01/15/59115.aspx). We implemented what we like to call "poor man's partial classes": we have a Generated namespace, and we write all of our actively generated code into that folder/namespace. CodeSmith also provides a mechanism for you to write code directly into regions via merge strategies. I want my code to look the way I want it to look. I want my code to work the way I want it to work. I believe that EntityBroker and LLBLGen are solid products, but they are under the control of someone other than me. I am at their mercy to optimize performance, or I have to trust them to get everything correct. Those products are simply not as powerful when compared to CodeSmith. I can use CodeSmith for damn near anything that I do: if I can write it in ASCII, guess what... so can CodeSmith. I don't want to turn this into a religious war over what is better, but I would like to put to bed Thomas and Frans' argument that their products/their thinking are the only way to go. Clearly CodeSmith is an alternative; to say anything contrary is illiberal. I would like to reiterate that CodeSmith isn't the ONLY solution, but it's the best I've seen.
    Tuesday, August 10, 2004 2:21 AM
  • User-498097622 posted
    CodeSmith does require competency; O/R Mappers do not necessarily. I could plug in an O/R Mapper and not understand exactly what it's doing - it could seem like "magic" to an unskilled developer. With CodeSmith you can gradually build it into your architecture. I think the whole Code Generation == Evil idea is pure nonsense. First-generation code generators don't last long; those are the magic button, black box code generators, where the generation logic is hard-coded inside the executing application. Second-generation code generators move the template code to an external file and replace @ codes in the file. While that will get the job done, it is a pain to maintain and nowhere near as dynamic as an environment like CodeSmith. CodeSmith actually provides an interface for retrieving schema information from the database and allows you to bind your template to your metadata for the generated output. This is what is usually referred to as a third-generation code generator.
    Tuesday, August 10, 2004 2:42 AM
  • User-58325672 posted
    :: You can still actively re-generate these classes, but still have the ability to add custom logic to them. I think this is extremely powerful.
    AND
    :: I do not modify the generated code, and the code is re-generated as part of my build process.
    So do you modify it or not? :-) Anyway, like Thona said, I think your generated code pieces are separate classes, with no context or relations. What about trying to address the Address of a Client like this: _Client.Address? And querying for Clients with street names like "Text%"? With your option you need some more lines of code to get this working, and dynamically generated SQL is in this case just more powerful (or you need to generate an SP for every situation/field). Good O/R mappers also have a nice caching mechanism and inheritance features, and bring all of your classes into 'relation' when needed. In the past I tried CodeSmith code generation in combination with DotNetNuke. It IS just a keystroke saver. Maybe you can build some O/R mapper on top of it, and use it to dynamically generate the SQL :-) I would say, download a trial version of EntityBroker or LLBLGen Pro.
    Tuesday, August 10, 2004 5:32 AM
  • User1356982465 posted
    I certainly didn't mean to imply code gen is only a slight step above manual coding -- I've tried to say the opposite -- that it is much much better than manual coding. I've also tried to not say anything negative about CodeSmith and its template approach -- in fact I have recommended it many times in these forums, articles, and blogs also. I tried to only correct some very wrong statements so people can choose unbiased -- given that opportunity some will choose code generation and some O/R mappers. I think that choice is often comparable to buy vs build -- but certainly not always.
    Tuesday, August 10, 2004 6:53 AM
  • User-58325672 posted
    :: I want my code to look the way I want it to look. I want my code to work the way I want it to work. I believe that EntityBroker and LLBLGen are solid products, but they are under the control of someone other than me. I am at their mercy to optimize performance, or I have to trust them to get everything correct. Those products are simply not as powerful when compared to CodeSmith. I can use CodeSmith for d*mn near anything that I do: if I can write it in ASCII, guess what... so can CodeSmith.
    You can choose to use an O/R mapper and the code will work like you want it to work. And about being at the mercy of the O/R mapper vendors: that point applies to everything. If you use .NET, you're hoping for some mercy from Microsoft to continue supporting the platform. If you use components, or any third-party things, you're in the same situation. Do you build ALL the things you need yourself???
    Tuesday, August 10, 2004 7:56 AM
  • User-1308937169 posted
    No way... I say you sum it up like this: Get the Black Box, Live the Black Box, Love the Black Box. OR: Get CodeSmith and have it your way.
    > I think your generated code pieces are separate classes, with no context or relations.
    > What about trying to address the Address of a Client like this: _Client.Address?
    They are not _HIS_ templates any more than the code they generate is _HIS_. They are _YOURS_. If the template author only wants separate classes, then that is what they write the template to generate. If they want related classes, or classes that inherit from other classes, or classes made up of other classes, then they write templates for that. You can write a template that generates dynamic SQL. You can write a template that generates a sproc that generates a dynamic SQL statement... it is all about the templates... You guys might as well be talking apples and oranges here. I think that EntityBroker and LLBLGen are just improved data access blocks... at least that is the way they are used.
    Tuesday, August 10, 2004 10:23 AM
  • User-498097622 posted
    If you bothered to look at the templates I posted, you would notice that all of our classes DO have a context. Take for example the following data model:
    Person
    =============
    PersonId
    FirstName
    LastName
    PersonTypeId

    PersonType
    =============
    PersonTypeId
    PersonType
    In our data abstraction classes we generate a Person class with properties for all of the columns. If a foreign key is detected, then we create a property that corresponds to the FK table, so your interface would be something like Person.PersonType.PersonTypeId. We also implement lazy loading, so that you don't actually load the related data unless you need it. This pattern is closely related to the table gateway pattern that Martin Fowler suggests in his book on enterprise patterns. For every table in the database we have a generated class: a table called Person would generate a class called Person_Generated. We also generate a stub class for additional logic that inherits from Person_Generated; this class would be called Person, and it is the interface you would develop against (sketched below). Any time the schema changes, we can simply overwrite Person_Generated and have the latest and greatest. We also generate a generated Filter class and a stub Filter class for selecting data via "queries." In our filter classes we also have the ability to query based on columns that we index; if an index is added to the database, a FilterElement is automatically generated and added to the corresponding Filter class. In my opinion, there is a very narrow vision here of the possibilities. We wrote our architecture in under 6 months while working on another project. So CodeSmith ISN'T just a keystroke saver. If you want to keep using an O/R Mapper, obviously you are more than welcome, but please don't make statements that just aren't true. Just because you didn't have a good experience with CodeSmith doesn't mean it isn't a viable solution/alternative for others.
    Tuesday, August 10, 2004 10:26 AM
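    A sketch of the pattern just described: a regenerated Person_Generated class carrying a lazy-loaded property for the PersonTypeId foreign key, plus the hand-edited Person stub on top. Details are illustrative, not the posted templates' literal output.

        // Regenerated whenever the schema changes; never edited by hand.
        public class Person_Generated
        {
            protected int _personId;
            protected string _firstName;
            protected string _lastName;
            protected int _personTypeId;
            private PersonType _personType;   // not loaded until first access

            public int PersonId { get { return _personId; } }
            public string FirstName { get { return _firstName; } set { _firstName = value; } }
            public string LastName { get { return _lastName; } set { _lastName = value; } }

            // Generated because of the FK on PersonTypeId; lazy-loads the related row.
            public PersonType PersonType
            {
                get
                {
                    if (_personType == null)
                        _personType = PersonType.GetByPersonTypeId(_personTypeId);
                    return _personType;
                }
            }
        }

        // Stub class: custom logic goes here and survives regeneration.
        public class Person : Person_Generated
        {
            public string FullName { get { return _firstName + " " + _lastName; } }
        }

        public class PersonType
        {
            public int PersonTypeId;
            public string Name;

            public static PersonType GetByPersonTypeId(int id)
            {
                // generated sproc call would go here
                return new PersonType();
            }
        }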
  • User1341256997 posted
    Paul, I really appreciate you keeping this discussion professional. I only wish that others could follow your lead and try not to make their point by insulting other people/products. I am also only trying to correct some very wrong statements. And I totally agree that there is certainly room for both runtime O/R mappers and template based code generation products and that some people will choose one and some people will choose another.
    Tuesday, August 10, 2004 10:28 AM
  • User-498097622 posted
    Bluemagics, the only issue I have with what you are saying is that most O/R Mappers require you to implement their object model. Usually you have to inherit from their base classes, or you have to decorate your objects with certain attributes. This is very intrusive. What if there is an error in the base class? I would have to wait on Frans or Thomas to fix it. I am at the mercy of Microsoft, that is true, but at this point I trust them a WHOLE lot more than I trust LLBLGen or EntityBroker. I have seen how the authors of both of those products react to criticism, and I certainly wouldn't want to depend on people who are so close-minded.
    Tuesday, August 10, 2004 10:34 AM
  • User-560067886 posted
    BlueMagics,
    > anyway, like Thona said, I think your generated code pieces are separate classes, with no context or relations. What about trying to address the Address of a Client like this: _Client.Address?
    Our templates do build hierarchical BL object structures, so we can reference Company.PrimaryContact and Company.PrimaryContact.Address. And don't confuse O/R vs. template-based code generation with dyn-sql vs. sprocs: your templates can generate dyn-sql as well. Also, "your generated code pieces are separate classes" - whose code pieces? Eric's? Eric doesn't have code pieces. Perhaps you're referring to some *sample* templates. You seem to be missing the point that CodeSmith generates EXACTLY what you "teach" it to generate. Anything you can type by hand, it can generate with a well-written template. And as I mentioned before, the sprocs our templates generate allow us flexible queries with no extra work. We can query for clients with street names like "Text%".
    Tuesday, August 10, 2004 11:16 AM
  • User-58325672 posted
    @likwid
    I'm not saying that CodeSmith or any other tool is garbage; I have only had some bad experience with it in combination with DotNetNuke (using these templates: http://www.asp.net/Forums/ShowPost.aspx?tabindex=1&PostID=450682). You keep generating classes, and you have no designer view to manage your object model. I found myself just generating things and pasting them as SPs into SQL Server, or adding the files to my VS.NET project/solution. Maybe it's my fault and I used them the wrong way. And in my DotNetNuke timeframe, the templates used reflection (maybe it's changed now; PS: I know that you can use other data access methods in DotNetNuke) in a way that forced me to use the same property names in my classes as in my data tables. I'm NOT against CodeSmith. CodeSmith is nothing more and nothing less than a code generator that can generate any piece of code based on the template files. If your template files suck, your architecture will suck too. But I found the designers of LLBLGen and EntityBroker useful, and they deliver good caching and inheritance mechanisms, and there is no need to generate and paste code more than once. Again, it depends on the templates you use. Maybe I need to take the time and take another look at it. But, as I see it now, one of the bad things about this approach is the query support: can it handle querying on multiple properties/data fields at the same time? And deep querying for related objects?
    Tuesday, August 10, 2004 11:17 AM
  • User-58325672 posted
    Guys, don't get me wrong :-) CodeSmith is an excellent code generator; it all depends on the templates you use. Maybe I need to think about it and try to make better templates than the old ones I have seen before. I'm not a software architect but a newly graduated student, and I'm open to new things - so take it easy! Seriously: I'm using EntityBroker now. In the beginning I had some problems when I was searching for the best architecture for my (transactional) apps. I found that O/R mappers save me the biggest amount of work, and at the same time deliver (using the guidelines) a nice architecture that can be maintained easily. The counter-argument was that I would be stuck with the O/R mapper vendor, but I'm already stuck with a big list of vendors, like GUI components and so on, so that's not really the problem. Anyway, I found using O/R mappers useful, and I WILL try some templates, or try to make my own as a test case. So, dudes, no offence! PS: any links to better templates (or documents) than the ones I have used in the past? ;-)
    Tuesday, August 10, 2004 11:35 AM
  • User765121598 posted
    "I want my code to look the way I want it to look. I want my code to work the way I want it to work." Then buy a mapper with source that best fits your model and be done with it. A mapper *SHOULD* be a black box. That's the entire point... These statements aren't CodeSmith vs. ORMappers. They're (templated) ActiveRecord vs. Mappers. That's what's really being argued... and well, no matter how good the template is, it's still an ActiveRecord. CodeSmith has nothing to do with this. Arguing about MetaData, Reflection, Attributes, etc, just obscures the issue. The issue is some people seem to prefer ActiveRecord. I personally think that preference shows a fundamental resistance to OO, but that's just me, and I'm no genius. :)
    Tuesday, August 10, 2004 11:48 AM
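    The distinction, sketched minimally in Fowler's pattern terms (simplified shapes, not any particular product's API): with ActiveRecord the class persists itself; with a Data Mapper the domain class is persistence-ignorant and a separate object owns all database knowledge.

        // ActiveRecord: the domain class knows about its own table.
        public class Invoice
        {
            public int Id;
            public decimal Amount;

            public void Save()
            {
                // INSERT/UPDATE against the Invoice table lives in the class itself
            }
        }

        // Data Mapper: the domain class carries no persistence code at all...
        public class Order
        {
            public int Id;
            public decimal Amount;
        }

        // ...because a separate mapper moves state between Order and the database.
        public class OrderMapper
        {
            public void Save(Order order)
            {
                // all SQL (or sproc calls) live here, not in Order
            }

            public Order FindById(int id)
            {
                // SELECT, then materialize an Order from the row
                return new Order();
            }
        }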
  • User-498097622 posted
    No offense taken. I just hate to see a great tool like CodeSmith get bashed. If someone has used it and understood it and they still don't like it, I can live with that. A lot of the naysayers, though, haven't even used it, or don't quite understand the power of CodeSmith. To me, if you don't understand something, you shouldn't try to speak to it. Thanks for the friendly debate :)
    Tuesday, August 10, 2004 11:50 AM
  • User-199378454 posted
    Let me introduce myself, as I'm new to this discussion. My name is Sébastien Ros; I'm the CTO of Evaluant, and I made a code generation O/R tool named Data Tier Modeler. You can check it out on our web site, www.evaluant.com. In my opinion, code generation and reflection-based approaches can provide exactly the same result from a user perspective. Indeed, in the end, the same instructions will be executed to make an object persist (without going into the details of each product... there are not a thousand ways to put data into an RDBMS). Thus, we shouldn't take this as an argument for choosing one tool or another. Instead, you should ask yourself whether you want a non-intrusive, "beautiful concept", innovative solution, or something less "beautiful" that is not a black box and where you can debug into the executed code. But that is only if your choice is made from a technology perspective. I'd prefer that you make your decision after listing all the functionality you need and testing the tools that comply with it. The choice can then be made on:
    - the ease of use (according to your way of programming)
    - the existing user experience (in production if possible)
    - the support and the user documentation
    - the compliance with your needs (performance, functionality, ability to evolve, ...)
    So, before saying some tools are bad, please try them. Some tools said to be "good" don't always work as they claim to...
    Tuesday, August 10, 2004 11:57 AM
  • User-1938370448 posted
    :: I believe LLBLGen uses DataSets, and anything using DataSets loses big marks in my book.
    LLBLGen 1.x, the old open-source DAL generator, used DataTables. LLBLGen Pro (http://www.llblgen.com) doesn't use DataSets. For entity-based views and typed views we use DataTables, but for entities, entity collections etc. we use custom classes and custom collections.
    Tuesday, August 10, 2004 12:02 PM
  • User-1938370448 posted
    :: Oh and Gabe, not looking to start a shouting match, but Frans hardly convinced me.
    That's OK. It took me some time too. Do a search on this forum for Thomas and me flaming each other's heads off over stored procedures (me: stubborn and pro-procs, Thomas: against procs). I lost 3 months of work on LLBLGen Pro because I went the proc route. Never ever will I make that mistake again. But of course, you are fully entitled to your opinion :) Oh, and if you're not convinced, try MAINTAINING 2000+ procs for 3 years on an average database with 2 or more applications on it.
    Tuesday, August 10, 2004 12:07 PM
  • User-1938370448 posted
    :: I am also only trying to correct some very wrong statements. And I totally agree that there is certainly room for both runtime O/R mappers and template based code generation products and that some people will choose one and some people will choose another.
    I saw a lot of wrong statements here, and you failed to correct one: CodeSmith's templates do not generate a runtime query-producing engine. Most people think it's easy to produce mapping code. Well, that is indeed easy, and CodeSmith can do it too. But the core of the runtime engine of an O/R mapper is the query engine, and I can tell you: that's not easy. With templates you will not get there (or you have to generate the complete engine again and again, or write one of your own targeted through templates, but then what's the use of a template generator). People should leave CodeSmith out of this discussion, OR this discussion should be about 'your favorite data-access solution'. Because THAT is what I think this discussion is really about; the topic title, however, suggests that this thread is about O/R mappers, and then CodeSmith has nothing to do with it.
    Tuesday, August 10, 2004 12:12 PM
  • User-528039901 posted
    Ok, time to get in here. First the marketing: with the EntityBroker you CAN change the runtime. You can get the source code AND we support it. I am aware of 5 different versions of our runtime at the moment. Now, to the arguments.

* The "CodeSmith is as good as an O/R mapper" lie. CodeSmith is a template generator. Maybe not a stupid one. But an O/R mapper is a lot of work. Once you get past the easy stuff (primitive CRUD operations), things start to get ugly and take a lot of time to get right. UNLESS someone builds an O/R mapper with CodeSmith and publishes it, CodeSmith is just a tool for code generation. A significant number of developers out there are unable to tell whether an O/R mapper is good or bad, because they lack the theoretical knowledge of how an O/R mapper works. What exactly makes the CodeSmith dudes on this thread think they would be able to come up with good, working templates? The templates are where the work starts... and...

* "CodeSmith is a good approach, as it is code generation." Well, frankly, everyone using CodeSmith templates to run his O/R mapper should IMHO take a beginner course in object-oriented programming. Sure, templates MAY do the trick - but if you sit down and write your own O/R mapper in a code generator, then frankly, I question your ability. Why? Simple. Because a lot of an O/R mapper is very repetitive code, which naturally leads to a runtime library. So IF you go down this path, you will arrive at a code generator approach that is STILL using a runtime library - simply because it makes a lot of sense. Unless, naturally, you have a little problem with all these "layer" and "reuse" concepts that, interestingly enough, make OO shine in the first place.

As an example: in the EntityBroker I use code generation. Every business class inherits from a stub, which handles the binding of the persistent fields to the O/R mapper's data container (a minimal sketch follows at the end of this post). Now, you can argue whether this is elegant (but anything besides full business object generation only has solutions with other negative side effects). But once you hit the DAL layer, generated code is exactly one thing: superfluous. Not necessary. A sign of terrifically bad programming. Code bloat. The whole DAL layer is absolutely repetitive, over and over - and no generated code is still better than great amounts of it.

Now, some people have templates, as it looks like. I don't want to say they are bad, but does it occur to any of you that you are limiting yourselves tremendously? I heard about someone generating all the nice query possibilities based on the indices on the table (hey dude, get real - not all conditions hit an index on the database table; you are working from the wrong conceptual basis). Someone (the same person) said he would automatically generate a foreign key relationship when he finds one in the db. Yeah. Right. Talk about a poor man's O/R mapper. Why? Because it is NOT an O/R mapper coming from the object model. And NOT coming from the object model, there is just a LOT it can NOT do and DOES NOT do. An object model contains a terrific amount MORE information than the data model ever will. Starting from a less information-rich data model, your object model will always and only reflect - the less information-rich data model. There is a lot you miss. Inheritance hierarchies, for example, are not part of the data model anymore, so you cannot map them starting from the data model without user interaction.

Besides, on top of this: seeing the number of flags the EntityBroker exposes in its object model (to control how the object model / database interaction takes place) makes me seriously doubt the capacity of any mapper working from the database. You just need a lot more information than you can take from there. Ask Frans Bouma - he is very strong (with LLBLGen) at syncing with the db schema, and still he has an editor, because of the amount of additional info that makes sense. I stand by it: CodeSmith is as good an O/R mapper as the C# VS.NET IDE itself. It does nothing to facilitate O/R mapping. It is a code generator, and everyone is free to write his own O/R mapper on top of it (as templates). But frankly, what do you get? If I used CodeSmith in our EntityBroker, all I would basically save is one class, about 5 pages long, full of CodeDom code generating our base classes. That part is repetitive. The rest would stay exactly as it is. Without an O/R mapper behind it, CodeSmith is just a useless code generator for anyone wanting to work with objects. And any O/R mapper built on top of a code generator, with the code generator as its main means of operation, is a sign of a pretty bad school of programming - because a lot of the functionality of an O/R mapper is generic by definition, so it actually leads to a runtime environment. And finally, the problem with an O/R mapper is not the code generation (that is the laughable part). It is getting the runtime right. Which is seriously non-trivial once you get into nifty things like layers (having the DAL run on a separate server), prefetch paths (ObjectSpaces calls them spans - any O/R mapper without them is not worth a dime or a look, if you ask me, for anything beyond totally trivial apps) and a powerful query subsystem (and no, finder methods simply do not cut it - they are a sign of arrogant programming. When making an object model, I should never be so arrogant as to assume I know how the user of my object model may need to ask for the objects).
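To make the stub idea concrete, here is a minimal sketch of the pattern. The names (DataContainer, CustomerStub) are invented for illustration; this is not EntityBroker's actual code:

    using System.Collections;

    // Generic field storage the runtime understands.
    public class DataContainer
    {
        private Hashtable _fields = new Hashtable();
        public object GetValue(string field) { return _fields[field]; }
        public void SetValue(string field, object value) { _fields[field] = value; }
    }

    // Generated stub: one property per mapped column, nothing else.
    public abstract class CustomerStub
    {
        protected DataContainer Data = new DataContainer();

        public string CompanyName
        {
            get { return (string)Data.GetValue("CompanyName"); }
            set { Data.SetValue("CompanyName", value); }
        }
    }

    // Hand-written business class: inherits the plumbing, adds the logic.
    public class Customer : CustomerStub
    {
        public bool IsValidForInvoicing()
        {
            return CompanyName != null && CompanyName.Length > 0;
        }
    }

The generated part stays dumb plumbing; the hand-written class carries the logic - which is exactly the split being argued about in the rest of this thread.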
    Tuesday, August 10, 2004 12:14 PM
  • User-1938370448 posted
    no offense taken. I just hate to see a great tool like CodeSmith get bashed. If someone has used it and understood it and they still don't like it, I can live with that. A lot of naysayers, though, haven't even used it, or don't quite understand the power of CodeSmith. To me, if you don't understand something, you shouldn't try to speak to it. I think the discussion isn't helped by the attitude a lot of nvidia/ati fanboys have when it comes to discussing their topic. CodeSmith is a tool - very useful, but it also has weak spots. That's fine; every tool has weak spots and strengths. If someone says "that tool isn't great", you can flame that person because he touches the precious application you adore so much, but that is NOT helping the discussion - and neither is criticism without arguments, btw. I have to say, CodeSmith can help you a great deal; some of our customers have created CodeSmith templates to generate GUI asp.net forms, for example, which target the O/R mapper code. Great stuff. I have to say, though, that I find it a bit funny that some people think CodeSmith can give you the same power an O/R mapper can. That's nonsense. Using your words: "To me if you don't understand something, you shouldn't try to speak to it." :) Now, some people might step up and bash me with a big hammer. If these people have to do that, fine, but it might be that I do know what I'm talking about.
    Tuesday, August 10, 2004 12:33 PM
  • User-1445306016 posted
    We're overlooking a great free Open Source O/R mapper: Gentle.NET (home page and SourceForge.NET project page). I've used it for smaller projects and one large one. It has a short learning curve and it's very actively developed. I'm not bringing it up to disparage the O/R mappers created by Thomas, Frans, or Paul - just adding another great tool into the mix. My $0.02 on O/R mappers vs. code generation: I think the context makes all the difference. CodeGen works great if the dev team is small and the schema is static. If you want the OM to be extendable by end users (meaning other developers), I think an O/R mapper works out much better. There's a shorter learning curve IMO, and if they want to change the established schema they don't have to re-gen a lot of code - just change a few attributes or a mapping file and off you go. With CodeSmith/CodeGen you're really talking about oranges and nectarines, though. You can use CodeSmith to generate UI and BizLayer code as well as DAL code. We'll probably end up using a combination of CodeSmith templates and an O/R mapper in our latest project (a bioinformatics application).
    Tuesday, August 10, 2004 12:34 PM
  • User-1445306016 posted
    Oh, I forgot to mention one little piece of irony regarding Gentle.NET: it comes with CodeSmith templates for generating its O/R mapping classes. :)
    Tuesday, August 10, 2004 12:38 PM
  • User-1938370448 posted
    > Oh, I forgot to mention one little piece of irony regarding Gentle.NET: it comes with CodeSmith templates for generating its O/R mapping classes. :)

hehe :) Well, generating code with templates like CodeSmith's is not hard. I mean:

- everything inside <% and %> is code; add that to a code string
- everything else is literal text that has to be written out; per %> and <%, add a statement to the code string like WriteString(the text between these two markers) (WriteString being a method you write yourself in the utility lib mentioned below)
- add a reference to a common lib with a class of utility methods to the references; also add a reference to each assembly mentioned in directive statements in the template
- wrap the code string in a main routine and add using statements
- use CodeDom's C# compiler to compile that string to an assembly in memory, specifying the references constructed
- run the compiled assembly's main routine.

Et voila :) (simplistic overview - see the sketch below). You don't even need a parser :). I think Eric will admit the real power of CodeSmith is not in the code generator core (because that's really small, as I described) but in the variety of data you can access from within the template, like schema data.
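As a toy illustration of that recipe (this is only a sketch of the idea, not CodeSmith's actual engine), the whole pipeline fits in one small class:

    using System;
    using System.CodeDom.Compiler;
    using System.Text;
    using Microsoft.CSharp;

    public class TinyTemplateEngine
    {
        // Translate the template into C# source: literal text becomes
        // WriteString(...) calls, <% ... %> blocks are copied verbatim.
        public static string GenerateSource(string template)
        {
            StringBuilder code = new StringBuilder();
            code.Append("using System;\n");
            code.Append("public class GeneratedTemplate {\n");
            code.Append("  public static System.Text.StringBuilder Output = new System.Text.StringBuilder();\n");
            code.Append("  static void WriteString(string s) { Output.Append(s); }\n");
            code.Append("  public static void Run() {\n");
            int pos = 0;
            while (pos < template.Length)
            {
                int open = template.IndexOf("<%", pos);
                if (open < 0) { EmitLiteral(code, template.Substring(pos)); break; }
                EmitLiteral(code, template.Substring(pos, open - pos));
                int close = template.IndexOf("%>", open + 2);  // assumes a well-formed template
                code.Append(template.Substring(open + 2, close - open - 2)).Append("\n");
                pos = close + 2;
            }
            code.Append("  }\n}\n");
            return code.ToString();
        }

        static void EmitLiteral(StringBuilder code, string text)
        {
            if (text.Length == 0) return;
            text = text.Replace("\\", "\\\\").Replace("\"", "\\\"").Replace("\r", "").Replace("\n", "\\n");
            code.Append("WriteString(\"").Append(text).Append("\");\n");
        }

        // Compile the generated source in memory and execute it.
        public static string Run(string template)
        {
            ICodeCompiler compiler = new CSharpCodeProvider().CreateCompiler();
            CompilerParameters options = new CompilerParameters();
            options.GenerateInMemory = true;
            CompilerResults results = compiler.CompileAssemblyFromSource(options, GenerateSource(template));
            Type generated = results.CompiledAssembly.GetType("GeneratedTemplate");
            generated.GetMethod("Run").Invoke(null, null);
            return generated.GetField("Output").GetValue(null).ToString();
        }
    }

Run("Hello <% for (int i = 0; i < 3; i++) { %>world <% } %>") returns "Hello world world world ". All the real value - schema access, formatting control, directives - sits on top of this tiny core, which was the point being made.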
    Tuesday, August 10, 2004 12:58 PM
  • User-560067886 posted
    While I don't have the time (or frankly, the energy) to debate this much more, I would like to reiterate a few points.

1. CodeSmith is NOT an O/R mapper, so "CodeSmith is as good as an O/R mapper" is not true. CodeSmith is simply a generation platform. However, if you don't need to modify your object model at runtime, I see no advantage in using an O/R mapper over a well-written BL/DAL (be it written by hand, or generated). "Ease of use" simply doesn't hold water, because CodeSmith is just as easy to use. There are a ton of free template sets available that will generate everything needed for the BL/DAL. My company's templates generate a very extensible DAL, so that one-off changes can be made without touching the generated code.

2. Thona, you mention that those using templates are limiting themselves tremendously. Why? That's the whole point: I am not limiting myself at all. I can enhance and tweak as much as I like. I'd be limiting myself by using someone else's black box (the $999 source license to EntityBroker aside).

> not all conditions hit an index on the database table

Really? I guess I never noticed that when I said that our generated code allows querying against ANY column or combination thereof. We use indices and keys to generate the bulk of the sprocs, not all of them. If your application is querying against non-indexed data a lot, then I say your schema needs work.

> When making an object model, I should never be so arrogant as to assume I know how the user of my object model may need to ask for the objects

I know that I won't be searching for a Company by GorillaId. I know that. Call it intuition, or the simple fact that I know that neither Company nor any of its related objects has a GorillaId. We build our queries based on all the possibilities. The most likely ones (indexed, etc.) are pulled out into their own dedicated sprocs, and the others are caught in a generic sproc for that object that quite possibly matches the dynamic SQL an O/R mapper would use anyway. Also, I and my fellow developers *are* the users of our objects. We know how we will ask for them. We know that 99% of the queries we make will be based on indexed/key information.

3. Using a database schema as an input mechanism is only ONE possibility. True, a database schema cannot possibly contain all of the relational and biz info that a full object model needs. THAT is why you can use an XML schema as input. Technically, the XML map used by an O/R mapper itself could be used as an input mechanism to the templates. In fact, the CodeSmith schema provider is completely extensible, so you can use anything as an input mechanism. You're not locked into a single XML mapping file format.

4. Anything that can be coded by hand (btw, that is everything) can be coded from templates. Obviously, with increased intelligence and functionality comes template complexity. But to those who are just using a public template set, that set is just another product. They don't have to understand how it works. If they get to the point where they want or need to make changes, they can.

5. Exactly what makes you think that the runtime you've engineered is perfect for every application? We use templates because we can swap out individual templates within our sets to build the DAL/sprocs - and even the BL - to be the most efficient for the particular app we are building. Apps that are very heavy on raw data pulls (deep) are optimized for that, and apps that are more focused on data relations (wide) can be optimized for that. Also, we have templates that DO allow separating the DAL onto a different server.

6. > CodeGen works great if the dev team is small and the schema is static. If you want the OM to be extendable by end users (meaning other developers) I think an O/R mapper works out much better. There's a shorter learning curve IMO and if they want to change the established schema they don't have to re-gen a lot of code. Just change a few attributes or a mapping file and off you go.

Changes in schema are not a problem with O/R mappers or CodeSmith. Regenerating code is absolutely trivial if you use the CodeSmith Visual Studio Custom Tool. With the active generation approach, one click regenerates everything. Rather than changing a few attributes of a mapping file, you just change your schema (in db or xml file or whatever... completely extensible) and click "Run Custom Tool". Done. Everything is updated.
    Tuesday, August 10, 2004 1:35 PM
  • User-560067886 posted
    And of course, above all: if an O/R mapper meets your development and performance needs, it is a great solution. If it doesn't, as in our case, CodeSmith offers a fantastic middle ground. Our architecture, our way, our code, one click. Our two-week build cycles are down to 2 days. Edit: Btw, how is security handled in the O/R mapper's BL? I ask only out of curiosity. When we used O/R mappers, our apps did not need internal security. However, our applications now have to be very secure, with individual user permissions being checked on every action in both the UI and BL - some at multiple points within a single process in the BL.
    Tuesday, August 10, 2004 1:37 PM
  • User-1445306016 posted
    > Changes in schema are not a problem with O/R mappers or CodeSmith. Regenerating code is absolutely trivial if you use the CodeSmith Visual Studio Custom Tool. With the active generation approach, one click regenerates everything. Rather than changing a few attributes of a mapping file, you just change your schema (in db or xml file or whatever... completely extensible) and click "Run Custom Tool". Done. Everything is updated.

Yes, but it still requires a recompile (as do attribute-based O/R mappers). If your O/R mapper uses a mapping file, you never have to touch the code, which means less regression testing. And if you have end users who are customizing your generated code, you end up overwriting their customizations. The specific case I'm dealing with at work - creating an application with an extensible framework - is a special case. I believe for 80%-90% of the applications out there, either an O/R mapper or CodeGen techniques will work fine. Actually, the "Run Custom Tool" remark above reminds me of another good CodeGen-based DAL creator: SPInvoke, although I had some issues with the way it handled DBNulls.
    Tuesday, August 10, 2004 1:47 PM
  • User-1308937169 posted
    thona, so... Since I am not using your shiny tool, I am an idiot? bwaahhaahaaa. I am still not buying into your message here. Are you calling me a "beginner" that lacks the "theoretical knowledge" (your version of it) of O/R mapping? So please educate me... why would I _SETTLE_ for a runtime when I can have compiled code? I don't see that as the inevitable outcome. Why in the world would a black box with my business logic in it be a good thing? It is not hard to imagine that your engine, trying to figure out my metadata at runtime, is going to be slower than my code that has already figured out the metadata at compile time. Not hard to prove either. It is all generated code - yours is done at run time, mine at compile time... which one do you think is faster? I change my schema, I re-run my templates, compile, and I am done. Re-running the templates is nothing compared to changing the schema... with schema changes you have to worry about data integrity. Much more work goes into that than into the code generation. Repetitive coding is the reason for the template... It sounds to me like you are making a case for "All developers are stupid, I am smarter, you must use my tool or you are an idiot". What am I missing here? This was a civil debate...
    Tuesday, August 10, 2004 2:06 PM
  • User1341256997 posted
    Frans, yes, you are somewhat correct. The real power of CodeSmith is in the flexibility: the ability to create and use ANY type of metadata that you can think of to drive your templates, the ability to write your templates in several .NET languages, the intuitiveness of the template syntax, and the familiarity that it provides to ASP.NET developers. I think your description of the template engine is extremely simplified, but in general it is correct. One thing of note there, though, is that CodeSmith goes to great extremes to allow you to output the EXACT code that you want, in the EXACT format that you want it. Most template-based generators don't pay attention to these details, but IMO this is one of the reasons that CodeSmith is so extremely popular. I'm sorry this has turned into a CodeSmith debate, but I'm sure you can understand from your past encounters with Thona Theory that I have to defend my product.
    Tuesday, August 10, 2004 2:31 PM
  • User-1308937169 posted
    skoon, pretty sure that my QA department will not buy that argument... "Hey, we just changed the schema for this, but since we did not have to recompile, you don't have to test it as much." If SPInvoke were template-based, you could have fixed that... a perfect example of a tool that was limited because you could not alter the template. You are able to get the source, but I think this is a great example of what we are debating here. If you are fine with someone else defining the majority of your architecture, then by all means get the black box. If you want more control, you need to build it. CodeSmith is a great way to do both.
    Tuesday, August 10, 2004 2:33 PM
  • User-528039901 posted
    ::What am I missing here?

This thing called object orientation, you know. Or the fact that you may not know your schema. I know that I am probably the only one with replaceable object models, with inheritance hierarchies that get extended at program start by having another DLL in a directory that just hooks itself up and extends the object model. Still, for a lot of apps this is needed. Fact is that templating is something that simply goes against OO principles. I am not saying "buy my black box". I am saying that everyone relying on templates instead of pulling common functionality into a runtime is - well - someone my opinion puts pretty low. Like some dude doing everything in assembler for "the last piece of performance".

::Why in the world would a black box with my business logic in it be a good thing?

Well, why not? See, you do OO, right? You believe you do architecture, right? You believe you make tiered applications, right? Did it ever occur to you that in the beginner section of the architecture books, when they explain encapsulation and say that good OO is about hiding implementation details, they actually meant it? Black-boxing is common practice. I want my logic black-boxed as much as possible. This is what modern programming practices are all about.

::What am I missing here?

See, there are two things simply wrong - on a funny level already - in your statements. First:

::... why would I _SETTLE_ for a Run Time when I can have compiled code?

I am not sure you grasp the concept of a JIT (just-in-time compiler) and where compilation takes place in the .NET runtime.

Second:

::It is not hard to imagine that your engine trying to figure out my metadata at runtime, is going to be slower than my "Code" that has already figured out the metadata at compile time.

Now, with this second thing there are two problems. First, you assume (why, I have no clue - not reading, assuming I am an idiot, something along the lines of a large ego - no clue) that I do such stuff at runtime. Interestingly enough, I do not. I do it at startup time. Like the .NET linker links when a class is first used, you have to define the objects to be used when you instantiate a mapper (EntityServer), and THEN and ONLY THEN do we run our analysis. From there on we run preconfigured classes. Now, you do not work with an EntityServer - you work with an EntityManager, created from a server, and this is a cheap operation (a sketch follows at the end of this post). So, see, when a non-idiotic programmer does something, sometimes there is something in between black and white. I never found waiting another 0.1 seconds at program start to be an issue. There is a LOT going on there anyway.

Second, the performance part. Now, my answer to this is (blatantly): who gives a shit. See - who gives a shit whether one solution is 0.1% faster than the other. Standard practice with code path optimization is to use a profiler. The standard approach is not to optimize before you have an issue - and then to use a profiler. Yes, I am slower than you. But who gives a shit. My routines do NOT show up as critical in the profiler log of any of the applications I have here. Either they take nearly no time, or they are SLOW - but not in the routine itself, in a called subroutine. A db operation over the network takes the same time either way. So again: who cares. Performance gains should only enter the picture when - careful now - they are relevant. Premature optimization is naturally a sign of immature programming, as the name implies. And "premature optimization" is a standard term. A typical beginner error - which is a reason to call someone making this error a beginner, by normal logic. We have been put through the MS scalability labs, and the customer doing so did NOT find ANY critical issue in our runtime. Which is all I care about. I personally do not care whether my app is 1% slower or faster in the DB layer - I am blowing my performance in the database and the presentation layer anyway. As long as the profiler says I have no problem in the mapper, it is fine with me.

Now, otoh, you are a typical code bloater. You will have hundreds of unnecessary classes that actually make your performance optimizations hard to trace, use up a lot of additional memory, make your program execute slower (lower cache hits in the cpu core) and a lot of other negatives. Besides, your dll's are bloated. I mean, take 300 business objects in a library - we have 600 classes (300 stubs, 300 business objects). How many thousand do you have? This makes it hard to run a profiler.

::This was a civil debate...

Yes, when the reasons given were not irrelevant.

::Since I am not using your shiny tool, I am an idiot? bwaahhaahaaa

I never said that, so laugh about yourself. What I do say is that when you do it yourself AND program like an idiot (and I leave it to you to determine whether you do this or not - I am merely reflecting your own argument back), then you ARE an idiot. And code bloat is not what people have considered good programming practice for many years now. Code generators are marvelous tools. But a fool with a tool is still a fool. If code generation is so great, dude, then why does the .NET runtime come with reusable classes all over the place, instead of generating the source for every grid right into the code of the window class? You seem to believe this is a viable approach. Any sane person will - when doing something as complex as an O/R mapper based on a code generator - come up with a shared runtime instead of repeating the same code over and over and over again. I am not saying MINE - I am saying yours. If you have failed to move common functionality of your code into helper methods, and to move what is a library into a library, then yes, I do call that a textbook example of technological abuse and very bad programming. You should try to understand that using code generation does not prohibit you from following good programming practices. Or do they pay you by lines of code?
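As a hypothetical sketch of that startup-time split - the names mimic what was just described, but this is not EntityBroker's real API:

    using System;
    using System.Collections;
    using System.Reflection;

    // The expensive metadata analysis runs once, when the server is created.
    public class EntityServer
    {
        private Hashtable _mappings = new Hashtable();   // type -> precomputed field list

        public EntityServer(Type[] entityTypes)
        {
            foreach (Type entityType in entityTypes)
            {
                // all reflection happens here, at startup
                _mappings[entityType] = entityType.GetProperties(BindingFlags.Public | BindingFlags.Instance);
            }
        }

        public EntityManager CreateManager()
        {
            return new EntityManager(_mappings);          // cheap: just hands over the cache
        }
    }

    public class EntityManager
    {
        private Hashtable _mappings;
        internal EntityManager(Hashtable mappings) { _mappings = mappings; }

        public PropertyInfo[] FieldsOf(Type entityType)
        {
            return (PropertyInfo[])_mappings[entityType]; // no reflection at request time
        }
    }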
    Tuesday, August 10, 2004 2:43 PM
  • User-1938370448 posted
    > I think your description of the template engine is extremely simplified, but in general it is correct. One thing of note there, though, is that CodeSmith goes to great extremes to allow you to output the EXACT code that you want, in the EXACT format that you want it. Most template-based generators don't pay attention to these details, but IMO this is one of the reasons that CodeSmith is so extremely popular.

Oh, I'm sure you do a lot more behind the scenes :). It's just that there is not that much magic involved in getting going with a template-based generator. Code formatting is indeed an important issue; my pattern-matching based template parser/generator also tries to keep that intact.

> I'm sorry this has turned into a CodeSmith debate, but I'm sure you can understand from your past encounters with Thona Theory that I have to defend my product.

I don't have a single problem with Thomas, I must say. Heh, you don't need to defend your product - your legion of fans will do that for you ;) :)
    Tuesday, August 10, 2004 2:47 PM
  • User1341256997 posted
    Thona, you are correct on some points, but as always you make them in your blatantly arrogant way. What's funny is that you call other posters arrogant... talk about hypocrisy. Anyway, I think you make a good point about relying on schema information alone to drive your templates. There are certainly a lot of situations where this is less than ideal, and in those cases CodeSmith fully enables the template author to drive their templates with any other metadata they would like. They could even base their templates on an EntityBroker mapping file if they wanted to. What you fail to understand is that in a lot of cases, basing your templates on schema information alone is more than sufficient and makes for a very easy to maintain solution - but I'm sure you will be too arrogant to accept that, and will call some more people stupid for thinking so. As far as the runtime thing goes, to a degree I agree with you. A generic runtime can certainly reduce the overall amount of code, but at what price? By the very nature of it being generic, I am paying a price for using it. Code generation scenarios may produce more code, but this is not code that should be maintained by hand anyway, so who cares? If the end result is a more performant application that is more customized to my needs on this particular project, then I think it's well worth it. You also seem to talk about code generation and a runtime library as if they were mutually exclusive. I guess you think that because we use code generation, we must be too stupid to understand the concepts of OO and code reuse? How extremely arrogant, once again. I really just don't understand why you feel the need to constantly call people stupid and assume that your way is always "the right way". It's quite ridiculous, and all it does is piss people off and turn them away from these forums.
    Tuesday, August 10, 2004 2:58 PM
  • User-1938370448 posted
    > If you are fine with someone else defining the majority of your architecture, then by all means get the black box. If you want more control, you need to build it. CodeSmith is a great way to do both.

Which makes using a tool pointless. What you say here is typical of the NIH syndrome. You create your own grid controls as well? I hope not :). If you want excellent DAL code, you can't write it from scratch or you'll be busy for months, maybe even years. Don't stand up to tell me to shut up yet - I mostly agree with you on the code generation part. LLBLGen Pro generates the mapping info into the code as well, to make sure the code is as fast as possible. (It's a choice; you can differ in opinion about this.) However, the generated code is mostly customizing generic code compiled into a generic runtime lib, which comes with a dynamic query engine per database (customizing it through e.g. the strategy pattern and the DAO pattern).

Here you and I part ways. You say that what I have in my runtime lib and query engines is something you can generate too with CodeSmith. Of course you can, but IF you decide to do that, you have to write that code yourself. The fun thing is: it's generic code, so you don't even need a code generator to create it. Now, why would you opt for generated code over a runtime solution? I don't know, really. I can create my filters in C#, with typed syntax, right there where I need them. I can even create dynamic lists based on entity fields, using typed expressions, if I want to. The fun thing is: I can alter the filters when I need to, right there where I need it. Say, in your case, you have a stored procedure which gives me all orders for a given date and customer; I can call that from where I need it and use the results. But what if I want to filter on a given shipping date as well? Or only want a limited number of rows? I have to alter the proc. That means I have to change the signature of the proc. That might break code. So I'd better make a NEW procedure. However, what if the old one is no longer used? I have an orphaned procedure. This is probably not a big deal in your projects, but with projects of 100+ tables and thousands of procedures, it IS a problem. With the runtime solution, I just add a new predicate object to the filter where I need it, possibly a limit, perhaps an expression on a field selected by the user, who knows... Nothing will break. (See the sketch below.)

Furthermore, and this is more important: I can use the same code for SqlServer, Oracle or other databases. If I have to alter the filter, I can just do that there, in C#, and the filter will work on Oracle, SqlServer or other databases supported by the runtime lib. Again, this might not be important to you, but for a lot of projects it is essential. I haven't even started on recursively saving object graphs which get identity values in the database (or use sequences), where FK's are synced with these PK's during the save action, automatically, so you can save a tree of new objects with just 1 line of code, without having to sync the data on the fly. Perhaps you want concurrency checking with that? Validation during save? IF you want all that, you have to write it BY HAND. CodeSmith will not give you that, because that's not its PURPOSE. Someone has to write the template for that and the generic code. THAT's why there is a difference, and why runtime libs are way more flexible than static code which does not do things at runtime but solely calls into procs.
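The predicate-object style looks roughly like this. A loose sketch with names modeled on LLBLGen Pro's, but simplified: a real engine emits parameterized SQL per database, never string literals.

    using System.Collections;
    using System.Text;

    public abstract class Predicate
    {
        public abstract string ToSql();
    }

    public class FieldCompareValuePredicate : Predicate
    {
        private string _field; private string _op; private object _value;

        public FieldCompareValuePredicate(string field, string op, object value)
        { _field = field; _op = op; _value = value; }

        public override string ToSql()
        {
            // illustration only: a real engine emits parameters, never literals
            return _field + " " + _op + " '" + _value + "'";
        }
    }

    public class PredicateExpression : Predicate
    {
        private ArrayList _predicates = new ArrayList();
        public void Add(Predicate predicate) { _predicates.Add(predicate); }

        public override string ToSql()
        {
            StringBuilder sql = new StringBuilder();
            for (int i = 0; i < _predicates.Count; i++)
            {
                if (i > 0) sql.Append(" AND ");
                sql.Append(((Predicate)_predicates[i]).ToSql());
            }
            return sql.ToString();
        }
    }

    public class Demo
    {
        public static string BuildFilter()
        {
            // The extra shipping-date filter is one added line here,
            // not a new stored procedure signature.
            PredicateExpression filter = new PredicateExpression();
            filter.Add(new FieldCompareValuePredicate("CustomerID", "=", "CHOPS"));
            filter.Add(new FieldCompareValuePredicate("ShippingDate", ">", "2004-08-01"));
            return filter.ToSql(); // CustomerID = 'CHOPS' AND ShippingDate > '2004-08-01'
        }
    }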
    Tuesday, August 10, 2004 3:03 PM
  • User-498097622 posted
    Frans and Thomas, you are both missing my point. We also have a "runtime" engine. Our templates do not generate a DAL, they generate a data abstraction layer. Our data abstraction layer can fully support querying any column; we just have to add it to the filter class. As p00k said, we don't need that to begin with. We don't allow our customers to query on any field in the database. We could, but we just don't need to. There is no reason we can't couple a runtime engine with our template-generated code. You have pointed out our "short-sightedness", but to me you couldn't be more wrong. I guess you didn't bother to look at the templates, or you didn't understand them. You also pointed out code bloat. Code bloat means repetitive, unnecessary code. Our generated code is built on top of an entity framework that makes maximum use of reuse. I have stated repeatedly that this isn't an argument about whether or not O/R mappers are applicable. They surely are. However, neither you nor Thomas will dictate to me what works and what doesn't. Tell the 5,000 automotive dealerships we have running on our architecture that our platform doesn't work. It took 2 developers working on our architecture, plus 2 applications consuming the architecture, 6 months to develop everything. CodeSmith enabled us to do that and still maintain control of our environment. I am not saying an O/R mapper couldn't have done that too, but we chose CodeSmith and have been extremely happy about it. Maybe now you and Thomas can quit trying to decide everything for the rest of us.
    Tuesday, August 10, 2004 3:11 PM
  • User-1938370448 posted
    > As far as the runtime thing goes, to a degree I agree with you. A generic runtime can certainly reduce the overall amount of code, but at what price? By the very nature of it being generic, I am paying a price for using it. Code generation scenarios may produce more code, but this is not code that should be maintained by hand anyway, so who cares?

This is not what it's about, Eric, and you know that. A generic runtime library is not slow because it is generic. A generic runtime lib is code which requires input to function, so to use it properly you have to provide it with proper input. Code generation can help there; a runtime engine which produces that input based on a mapping file and class reflection can do that too. Point is: an entity is an entity is an entity. The saving logic will never change one bit; only the destination table(s) and fields change per entity (type). That's the kind of code that is generic. My SQL generator engines generate SQL based on very complicated input - input produced by generated code executed at runtime AND by developer-written code using that generated code. But the engine is generic, and definitely not slow because it is generic. It's just common sense to migrate code which is shared among a lot of classes into a generic common library (hey, .NET is just that! :)). It is then a matter of give and take when to stop moving code to a generic library, to prevent making the overall code less flexible.
    Tuesday, August 10, 2004 3:21 PM
  • User-1445306016 posted
    > Pretty sure that my QA department will not buy that argument... "Hey, we just changed the schema for this, but since we did not have to recompile, you don't have to test it as much."

Well, I can't help what your QA department wants to do, he he :). In the past, a recompile to my QA department meant unit testing of the methods and properties of the code affected, plus functionality regression testing. Changes that did not require a recompile meant just functionality regression testing.

> If you are fine with someone else defining the majority of your architecture, then by all means get the black box. If you want more control, you need to build it. CodeSmith is a great way to do both.

Well, I'm certainly not going to write my own ASP tag parser in C#, so if you are using a framework (ASP.NET, JSP, PHP) you are by default allowing someone else to define the majority of your architecture. If I test out a component and find it's stable and meets my needs, I have no problem buying/using a third-party component. I agree with Frans' point about the NIH syndrome.
    Tuesday, August 10, 2004 3:25 PM
  • User-560067886 posted
    Frans, I appreciate your civil responses. Thank you for helping to keep this thread something that is hopefully beneficial to developers who may have questions about O/R mapping or code generation. Anyway:

> However what if I want to filter on a given shipping date as well? Or only want a limited number of rows? I have to alter the proc.

Again, you're assuming CodeSmith only generates solutions that work with sprocs. Not true. It can certainly gen ad-hoc queries as well. However, our templates do generate sprocs, and those sprocs accept filters on any parameter and provide SQL-side paging, so we can return any number of rows as well. Those parameters are all exposed out of the DAL, so it is very easy to work with and very clean. We don't have to write anything by hand to get that, other than the call where we need it with the appropriate params set (see the sketch below). When a schema change alters the signature of a sproc, it is not a problem, because everything that references that sproc is also regen'ed at the same time, based on the same schema. Also, our hierarchical biz objects do save the entire tree when necessary. Our biz object model is designed so that objects understand child objects recursively, and state management is handled all the way down.

As far as the argument that writing all that by hand is too time-consuming: sure, it takes a smidge more time than dropping in a ready-made solution. But seriously, the most complex of our templates has never taken me longer than a couple of hours to write. Chances are most developers already have a good deal of code written by hand. It is quite easy to convert that code to a template. And you only have to do it once. From then on it can be used against all objects, and it's nothing but "click... done". If code is all over the place and cannot be easily converted to generic templates, then the developer was probably producing hard to maintain and understand code in the first place.

Also, and this is directed at whoever said that code generation = no OO/reuse: what are you talking about?! Code generation generates code - the same code that would otherwise have been written by hand. Are you saying that until O/R mappers came along, OO and code reuse didn't exist? Is this a new concept that YOU developed and are now championing?
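For readers following along, a call into that kind of generated wrapper might look like the sketch below. The proc name, parameters and the NULL-means-no-filter convention are all invented for illustration:

    using System;
    using System.Data;
    using System.Data.SqlClient;

    public class CustomerDal
    {
        // Pass null for a filter you don't want applied; the proc would test
        // each one with e.g. (@CompanyName IS NULL OR CompanyName = @CompanyName).
        public static DataTable Select(SqlConnection conn, object companyName,
                                       object country, int pageIndex, int pageSize)
        {
            SqlCommand cmd = new SqlCommand("Customer_Select", conn);
            cmd.CommandType = CommandType.StoredProcedure;

            cmd.Parameters.Add("@CompanyName", SqlDbType.NVarChar, 40).Value =
                companyName == null ? DBNull.Value : companyName;
            cmd.Parameters.Add("@Country", SqlDbType.NVarChar, 15).Value =
                country == null ? DBNull.Value : country;
            cmd.Parameters.Add("@PageIndex", SqlDbType.Int).Value = pageIndex;
            cmd.Parameters.Add("@PageSize", SqlDbType.Int).Value = pageSize;

            DataTable result = new DataTable("Customer");
            new SqlDataAdapter(cmd).Fill(result);   // Fill opens the connection if needed
            return result;
        }
    }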
    Tuesday, August 10, 2004 3:30 PM
  • User-1938370448 posted
    > Maybe now you and Thomas can quit trying to decide everything for the rest of us.

Erm... I tried to be fair and balanced in this debate. As soon as these kinds of remarks pop up, I'm out. No offence, but you're way out of line here. I don't know you, likwid, so chances are you don't know me either. If you want to start insulting me, go ahead, the stage is yours. However, to me you're now doing to me what you think I did to you (but I didn't).

> You have pointed out our "short-sightedness", but to me you couldn't be more wrong. I guess you didn't bother to look at the templates or you didn't understand them. You also pointed out code bloat. Code bloat means repetitive, unnecessary code. Our generated code is built on top of an entity framework that makes maximum use of reuse.

I tried to explain things. If you can't stand another person's opinion, that's fine. I never used a rude word in my postings to this thread. I tried to be professional and reasonable. I never used words like "short-sightedness", yet I see it mentioned here as if it refers to me as well. I just want to explain how I see things, from my POV. But apparently you are so full of hate or something that whatever I say, I'm "dictating what others have to do". Since when is that the case? I spent a lot of time on LLBLGen 1.x, a free DAL generator which helped hundreds of thousands of developers around the world. I damn well know what I'm talking about, likwid. I also spent 2.5 years working on LLBLGen Pro. You may say I'm wrong, that's fine by me, but don't tell me I'm not entitled to say what I think about the subject at hand. If there is ANYTHING I hate more in this world, it is people who think that what I have to say is somehow telling them to do what I say. It's just my opinion. If you don't like it, so be it, but don't start insulting me. I'm trying to be professional, polite, open and fair. I get the feeling you want to be treated that way as well. Then do the same to others.
    Tuesday, August 10, 2004 3:31 PM
  • User-1938370448 posted
    > I appreciate your civil responses. Thank you for helping to keep this thread something that is hopefully beneficial to developers who may have questions about O/R mapping or code generation.

Thanks. I hope it will indeed help developers make a choice. What we all should avoid is creating myths. An O/R mapper has weaknesses; CodeSmith's template solutions have them as well.

> Again, you're assuming CodeSmith only generates solutions that work with sprocs. Not true. It can certainly gen ad-hoc queries as well. However, our templates do generate sprocs, and those sprocs accept filters on any parameter and provide SQL-side paging, so we can return any number of rows as well. Those parameters are all exposed out of the DAL, so it is very easy to work with and very clean. We don't have to write anything by hand to get that, other than the call where we need it with the appropriate params set.

Ok, I know that solution; it's a way of doing things. You have some problems with filters spanning more than one table, but alas, for the sake of the argument, let's say they're not there :). What I tried to explain is that if you opt for CodeSmith and you want O/R mapper functionality, chances are you have to write a lot of that functionality by hand (either in custom templates or in a generic lib targeted by your templates).

> When a schema change alters the signature of a sproc, it is not a problem, because everything that references that sproc is also regen'ed at the same time, based on the same schema.

But what if more than one app targets the db? A lot of legacy systems have this: the database has been in production for several years, it has an accounting system running on it, and now it also has to be connected to the intranet using .NET. Whoops :)

> Chances are most developers already have a good deal of code written by hand. It is quite easy to convert that code to a template. And you only have to do it once. From then on it can be used against all objects, and it's nothing but "click... done". If code is all over the place and cannot be easily converted to generic templates, then the developer was probably producing hard to maintain and understand code in the first place.

True, writing the code out first and then migrating it to a template is the way to do it. I use that approach as well (and move the generic code to the runtime lib). The problem though is that a lot of the code is very complex. The synchronization of FK's and PK's, for example: after myOrder.Customer = myCustomer; you want myOrder to be in myCustomer.Orders as well, and you want myOrder.CustomerID == myCustomer.CustomerID. It looks simple at first, but it's hard to get right (a sketch follows below). That's what I was referring to: these kinds of things are not something anyone can just add. It takes time to get this right, and a team waiting for a solution to target their DB doesn't want to spend a month fixing this - they want to spend that month writing BL code and the GUI.
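Here is roughly what that synchronization looks like, with invented Customer/Order classes - and even this sketch skips the hard parts (identity values assigned at save time, removing orphans, change tracking):

    using System.Collections;

    public class Customer
    {
        public string CustomerID;
        public ArrayList Orders = new ArrayList();
    }

    public class Order
    {
        public string CustomerID;
        private Customer _customer;

        public Customer Customer
        {
            get { return _customer; }
            set
            {
                if (_customer != null) { _customer.Orders.Remove(this); }
                _customer = value;
                if (_customer != null)
                {
                    CustomerID = _customer.CustomerID;      // sync the fk field
                    if (!_customer.Orders.Contains(this))   // sync the reverse collection
                    {
                        _customer.Orders.Add(this);
                    }
                }
            }
        }
    }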
    Tuesday, August 10, 2004 3:43 PM
  • User1341256997 posted
    Frans, I 100% agree. I'm obviously not saying that code reuse and OO principles are a bad thing. They are obviously a great thing. What I'm saying is that in some situations I have the option of using code generation techniques, and in some cases I happen to think that is the right way to go. I maintain a single template; I think this is code reuse to the extreme. I get the benefits of code reuse and don't pay the penalties associated with making the code so generic that it can handle every scenario I will throw at it. Again, this doesn't make sense in all cases, but I believe that it does make sense in some, and I think that the DAL is a good example of where it does. Obviously things like the .NET framework classes wouldn't be cases where code generation makes sense, as these are classes for specific purposes and do not lend themselves to code generation techniques.
    Tuesday, August 10, 2004 3:44 PM
  • User-1308937169 posted
    Ok, I consider myself a good developer. Been doing this for ten+ years. Made lots of money. Getting better all the time. In your posts, you make broad statements like:

> A significant number of developers out there are unable to tell whether an O/R mapper is good or bad, because they lack the theoretical knowledge of how an O/R mapper works

and

> everyone using CodeSmith templates to run his O/R mapper should IMHO take a beginner course in object-oriented programming

and

> if you sit down and write your own O/R mapper in a code generator, then frankly, I question your ability. Why? Simple. Because a lot of an O/R mapper is very repetitive code, which naturally leads to a runtime library.

Well, since I am included in "a significant number of developers" and "everyone using CodeSmith", you are talking about me. You may not have called me an idiot directly, but you sure as hell implied it. This is not my first rodeo, and I have pissed off bigger folks than you. You have indeed said I was an idiot, and a lot of other folks too. You are pissed, right? Because your language is getting a little sloppy... I'm having to reach for Babelfish on some of it.

> But a fool with a tool is still a fool.

A fool with an O/R mapper is still a fool. An ass with an O/R mapper to sell and a keyboard is still an ass.

> Now, otoh, you are a typical code bloater. You will have hundreds of unnecessary classes that actually make your performance optimizations hard to trace, use up a lot of additional memory, make your program execute slower (lower cache hits in the cpu core) and a lot of other negatives. Besides, your dll's are bloated. I mean, take 300 business objects in a library - we have 600 classes (300 stubs, 300 business objects). How many thousand do you have? This makes it hard to run a profiler.

That is pretty ballsy... WTF do you know about my architecture? I write templates to generate the code I use. No bloat, just good solid tested code.

> If code generation is so great, dude, then why does the .NET runtime come with reusable classes all over the place, instead of generating the source for every grid right into the code of the window class? You seem to believe this is a viable approach.

Sure - if the grid did not do what I need it to do, then I'd buy another one, or make it myself. If MS put the source where I could get it, that would make it easier. Grids and BL/DAL are way different. Are you trying to imply that my complex BL should be as generic as a grid?

> Any sane person will - when doing something as complex as an O/R mapper based on a code generator - come up with a shared runtime instead of repeating the same code over and over and over again. I am not saying MINE - I am saying yours.

So now I am insane for wanting more control?

> You should try to understand that using code generation does not prohibit you from following good programming practices.

DER!!! That is what I have been trying to tell your punk ass.

> Or do they pay you by lines of code?

I get paid to produce results. Repeatable, quantifiable results. How do you get paid?
    Tuesday, August 10, 2004 3:44 PM
  • User-498097622 posted
    Frans, for the sake of professionalism, as you put it, I will be more direct in targeting my responses at the correct person. You invited me to "bash you with a big hammer" in your own words. I never stated that CodeSmith provides the exact same functionality as LLBLGen. More of my hostility has been aimed at Thona and his unintelligible ramblings. I have seen you participate in countless debates where you expressed your opinion in such a way that it appeared your way was the only way. If I was out of line, then I apologize. Do you provide the source code for your runtime engine? If not... what happens if I find a bug? What if you don't implement validation the way I want? What if you don't implement concurrency the way I want? I am left at your mercy. With CodeSmith I can write what I want, I can implement it as gradually as I want, and I can make sweeping changes to my architecture simply by regenerating with new code. Again I will reiterate: I am not saying that your way is bad. I have never said that. This is a build-or-buy scenario, and I would rather build. I learn by building. What if I want to create my own product, as you have, in 5 years? If I always depended on your application, I would lack the knowledge to implement it myself. I am sure there is some NIH syndrome in that, but I do not see it as a bad thing. I have always learned better by doing rather than consuming. Your application is fine, but we want to make the mistakes and learn to correct them. We will learn by building and doing rather than by using your libraries. I have said repeatedly that CodeSmith is not for everyone. We spent a lot of time, at work and outside of it, designing and developing our current architecture, and I am sure we will spend a lot more time doing so. As for the things your application does, like recursive object graphs and 1 line of code to save a tree of objects: we haven't implemented that. Guess what - we don't need it right now.
    Tuesday, August 10, 2004 3:47 PM
  • User-528039901 posted
    ::but I believe that it does make sense in some and I think that the DAL is a good example of when it does make sense

Frankly, Eric, my findings are just the opposite. For the same reason I am also not using the DAAB in the EntityBroker. I found that I can handle all DAL operations with a handful of classes - I don't want to get into the exact number, because it is a question of how you count, but if you ignore the query definition classes, then I basically have ONLY 5 or 6 classes that deal with ADO.NET etc. And NONE of them would benefit, in any way I would consider sane, from being put under a code generator. ESPECIALLY in the DAL layer, you basically end up with code that is generic ANYWAY. See, you deal with the System.Data namespace, so all the stuff you do is generic. Now, there are some points where I could really benefit from unrolling a loop - but really, this is so minor the loop does not even show up in a profiler in the first place. Take, for example, my CRUD class. We don't use SP's - just for info. It is responsible for one table mapping; it is set up for this mapping, preparing the SQL statement and reusing it from there on. If it has to make an insert, it gets the fields of the object in an object array (and it has to be an abstract container anyway, as we allow the DAL to be remote), loops over them position by position, copies each value into the prepared parameters of the insert statement, and executes the statement (a sketch of the shape follows below). There are a handful of checks for triggering side functionality (retrieving the identity values etc.), but the whole thing is, first, totally time-uncritical, and second, generic anyway and by definition. Especially the DAL, which has to go through a generic interface anyway, is where I think code generation is abused. Further up the stack, the trade-offs may change. But not there.
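As a sketch of that shape of class (invented names, not EntityBroker's code): one instance per table mapping, the INSERT command built once, its parameters refilled positionally on every call:

    using System.Data;
    using System.Data.SqlClient;

    public class CrudHandler
    {
        private SqlCommand _insert;

        public CrudHandler(SqlConnection conn, string table, string[] columns)
        {
            string pars = "@p0";
            for (int i = 1; i < columns.Length; i++) { pars += ", @p" + i; }

            // Built once per table mapping, reused for every insert. A real
            // implementation would also Prepare() it with exact column types.
            _insert = new SqlCommand("INSERT INTO " + table + " (" +
                string.Join(", ", columns) + ") VALUES (" + pars + ")", conn);
            for (int i = 0; i < columns.Length; i++)
            {
                _insert.Parameters.Add("@p" + i, SqlDbType.Variant);
            }
        }

        // 'values' is the abstract field container: position i maps to column i.
        public void Insert(object[] values)
        {
            for (int i = 0; i < values.Length; i++)
            {
                _insert.Parameters[i].Value = values[i];
            }
            _insert.ExecuteNonQuery();   // assumes the connection is open
        }
    }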
    Tuesday, August 10, 2004 3:50 PM
  • User1356982465 posted
    Wow -- this has really degenerated. We need a separate forum just for these debates. :) Here's my take: I love simplicity, time-savers, flexibility, laziness, and lots of money. My job as an architect is to stop all the unnecessary complexity that so often abounds, and to do whatever I can to speed up the process without sacrificing too much flexibility, so that I can sit back, do very little real coding, and hopefully optimize my take. So what does that mean? It depends -- mostly on the client and their politics! I've had clients that refused to listen to any advice and just wanted me to code their way. Guess what? Once I gave my advice -- forcefully, if they were asking for something stupid -- I did it their way, and if it was really bad I did what I could to find a new gig too. Sometimes I have to use stored procs, sometimes CSLA (yuck), sometimes code gen, sometimes O/R mappers, sometimes some of each where it makes sense and I can do it. My personal best experiences have been using O/R mappers for all the data access, and either code gen or some other automated UI tool (my UI Mapper is still in beta). I've also really, really appreciated these debates here -- including the dreaded Thomas. I watched from the sidelines when Frans was against dynamic SQL and Thomas won out. I learned a great deal, read a lot, did my own tests and product comparisons, and converted. I've used CodeSmith and others (maybe not enough) and they haven't won me over. Of course, I've also taken over some doomed projects that relied on some ugly code gen, and maybe that has swayed my feelings -- I haven't seen one work well. But in the end, I do whatever works best for my client and their particular situation.
    Tuesday, August 10, 2004 3:58 PM
  • User1341256997 posted
    *** TIME OUT!!! *** Holy shit, thona posted without calling someone stupid! This has got to be a first. If all that comes out of this thread is that you realize how much of an ass you come across as most of the time in these forums and other media, then it will have been an AMAZING thread. Please try to use some introspection and attempt not to piss so many people off. My work here is done.
    Tuesday, August 10, 2004 3:59 PM
  • User-528039901 posted
    ::This is a build-or-buy scenario, and I would rather build. I learn by building.

No, you get fired by building. See, there are these people in India. They MAY not be smart, but they are CHEAP (note: I do not imply they are not smart - I am just trying to make a point about how things look even if you get idiots; if you get a smart outsourcer, the picture only gets worse for you). Now, you have these people in India - clueless (it happens often enough), with an adventurous understanding of English. They just want to get the job done, and do so cheaply. And then there is you: expensive, and you want to waste company resources on building something instead of buying it. Guess who loses his job. Tip: it is NOT the dude in India. Fact is that in a build vs. buy situation, build is most of the time a stupid decision. You are not being paid to learn, but to get a job done. Learning you can do on the side; you can sit down and get your own knowledge together. But building on the go instead of buying is not smart. I COULD even define a point where I can argue it is fraud: you bloat the time you spend on something above what is reasonable, just for personal profit - financially AND by gaining knowledge. A typical example of time fraud. In a project, facing a build vs. buy decision, buy is normally the much saner choice. Building is a typical example of the NIH syndrome.

::What if I want to create my own product, as you have, in 5 years? If I always depended on your application, I would lack the knowledge to implement it myself.

What is this? NIH coupled with "I don't like reading books"? Frankly, when I started on the EntityBroker (because I needed something along those lines), my first step was going out to the net and doing my homework. And this really got me far enough to have a very good understanding of how things should look. Ever since prototype 1, the main architecture has not changed - and interestingly enough, it was not even my invention. And before people pop up telling me to follow my own rules: I do. I would never even dream of creating a grid or a tree. I use the Infragistics controls. Perfect. If I want to learn, I can look at their code and play around. But when I program, my timeplan does not include trying to become a less stupid programmer at the customer's expense.

::As for things your application does like recursive object graphs, 1 line of code to save a tree of objects, we haven't implemented that. Guess what, we don't need it right now.

This always makes me wonder how simple the programs are that people write. I mean, great if this is the case for you, but in our object graphs, objects may manipulate sub-objects without the user explicitly knowing (like a sub-object containing a copy of the original data turning into an archive version when a piece of information is changed). I cannot imagine having to run through my forms and start thinking about whether a particular object may change something specific. Especially because the vast majority of my forms do not even know whether the object they communicate with is actually the object they were programmed to work against (or a sub-object, which may add such a requirement).
    Tuesday, August 10, 2004 4:04 PM
  • User-528039901 posted
    ::Holy sh*t, thona posted without calling someone stupid!

First time nothing stupid was said, maybe?

::If all that comes out of this thread is that you realize how much of an a$ you come across as most of the time in these forums

If you ever realize how little I care what people I consider insignificant to my life actually think about me, you may realize how irrelevant this sentence was. I just don't care. IF you want me to listen, you had better have valid, sound points. If you don't, or you present your case in a stupid way, I will tell you. That simple. In MY world, being right in the first place is more important than being nice. Being nice is a plus, but if you are incompetent, I don't care how nice you are.
    Tuesday, August 10, 2004 4:09 PM
  • User-1938370448 posted
    For the sake of professionalism as you put it, I will be more direct in targeting my responses for the correct person Thank you. I know I can be harsh and bashing and arrogant sometimes, but I know when I am and when I'm not (I hope ;)). So in this situation I wasn't, at least it wasn't my intention. You invited me to "bash you with a big hammer" in your own words. Sure but that was meant as a sarcastic twists. Ah well, never mind. I have seen you participate in countless debates where you expressed your opinion in such a way that it appeared your way is the only way. I know I do that sometimes, but I can't help that. Often it is also part of the context of the debate and the reader (generally speaking). Sometimes it's me. You sound like a codesmith specialist. You obviously will know that when you talk about codesmith a lot and the advanced features and benefits, people will become hostile, or at least will become less friendly, because you seem to know a lot (well, you do) and sometimes people can't stand that. I'm as liberal as you can possibly get, and having your own opinion is something very valuable for me. So the last thing I want to do is force my opinion onto another person. If I was out of line, then I apoligize. Thanks. Accepted :) Do you provide the source code for your runtime engine? If not... Yes. I use templates as well btw. Templates ran through a parser which also comes with sourcecode. The generated code targets a runtime lib and a dynamic query engine, which sourcecode is available to customers at no extra charge. So in a way, you get more sourcecode than with codesmith ;) hehe. What happens if I find a bug? You report it, we fix it. What if you don't implement validation the way I want, what if you dont implement concurrency the way I want. I am left to your mercy. No :) Well, of course if you want a certain feature and it's not there yet (like the runtime updates I'm almost done with, like expressions and aggregates, prefetch paths etc.), changes are you have to wait. However the framework is setup in such a way that you can add your own validation logic and concurrency logic by implementing interfaces. This is done especially for the purpose you describe: a customer doesn't want to get stuck with just 1 concurrency scheme or simple validation which doesn't work in all situations: he wants to extend it or add his own. With CodeSmith I can write what I want, I can implement it is gradually as I want, and I can make sweeping changes to my architecture simply by regenerating with new code. Of course, just as the person who doesn't use codesmith but writes every line by hand. The problem is: often a developer doesn't want to start with writing templates, he just wants to click a button and go. Codesmith comes with a lot of templates (or are available to the codesmith user), so chances are the developer can get started pretty easily, however it perhaps requires more deep down involvement for the developer than he wants. This is a Build or Buy scenario and I would rather Build. I learn by building. What if I want to create my own product as you have in 5 years. If I always depended on your application, I would lack the knowledge of implementing it myself. of course, but there are just 24 hours in a day and chances are you only have 2 hands :). The stuff I have written in 2.5 years by full time development can't be replicated by another developer in say 3 months, doing development only at night. So there are choices to be made. 
I understand the urge to develop it yourself; heck, that's why I wrote LLBLGen 1.x in the first place, and I'm sure Eric wrote CodeSmith because of that. However, a lot of developers don't have that luxury: they require a tool NOW because the deadline is in 3 weeks (I'm not kidding here, this happens :)). We spent a lot of time at work and out of work designing and developing our current architecture. I am sure we will continue to spend a lot more time doing this. If you have that time and funding, it's of course great. Most developers however don't have that: the hours they spend writing software have to be billable hours, and every minute spent on an architecture which could also be bought is perhaps pushing the project past the deadline. As for things your application does, like recursive object graphs, 1 line of code to save a tree of objects: we haven't implemented that. Guess what, we don't need it right now. Of course you can do it manually :) I was just giving an example. You see, often people simply expect it to be that way: "Hey, I save Customer and the new order objects in Customer.Orders are not saved, how come?". They're users, they want functionality that saves them hours, days, weeks, and thus money. Every developer knows that what a tool can do, they can do by hand as well; it only takes a lot of time and, again, money. Again a choice to be made, and often people don't have the ability to make that choice based on what they'd like to do, sadly enough.
    Tuesday, August 10, 2004 4:16 PM
  • User-1308937169 posted
fransbouma, you make some good points, but I am going to call BS on some of it: > Here you and I part ways. You say that what I have in my runtime lib and query engines is > what you can generate too with CodeSmith. Of course you can, but IF you decide to do > that, you have to write that code by yourself. The fun thing is: it's generic code, so you > don't even need a code generator to create it. No, I am saying I have no idea what is in your box. I have no idea what SQL code will get generated until I profile it. Then I am not sure what I can do to change the query you are outputting. If you are trying to say that "I should not worry about that", then that just won't fly in any environment I have ever worked in. A bad query can and has brought down our site. That is what happens when you try and drink from a raging river. > Now, why would you opt for generated code over a runtime solution? I don't know really. I > can create my filters in C#, with typed syntax, right there when I need it. I can even > create dynamic lists based on entity fields, using typed expressions, if I want to. The fun > thing is: I can alter the filters when I need to, right there where I need it. Great, can you pull the generated code up, look at it, touch it, change it, fix it, alter it? Do a code review on it? $500 per minute is what goes through our site. No way am I giving up control. Maybe... maybe that would be different if scale wasn't such an issue. But I don't really see why I would have to fall back to an OR mapper when I can use the same methodology on my large site as I can on my customer care app. If I have a query based on an indexed view, does your tool know that it needs to set a few options on the connection before it can call the proc? Or in your case, run the query? You are making some assumptions about my architecture. 23 SQL Servers, 45 web servers, and searching a DB is what we do. 30+ million hits per day... this is not hello world. > Furthermore, and this is more important: I can use the same code for SqlServer, Oracle or > other databases. If I have to alter the filter, I can just do that there, in C#, the filter will > work on Oracle, SqlServer or other databases supported by the runtime lib. Again, this > might not be important to you, but for a lot of projects it is essential. I have never had that as a business requirement. Just like I have never had to "cross platforms". Maybe I am fortunate. I can see a few uses for it, but to base all my architectures on it does not make sense to me. I suppose my templates could be altered to produce code for Oracle, or others... glad I don't have to. Oracle is a PITA to work in... > Say, in your case, if I have a stored procedure which gives me all orders for a given date > and customer, I can call that from where I need it and use the results. However what if I > want to filter on a given shipping date as well? Or only want a limited amount of rows? I > have to alter the proc. That means that I have to change the signature of the proc. That > might break code. So I'd better make a NEW procedure. However, what if the old one is no > longer used? I have an orphaned procedure. This is probably not a big deal in your > projects, but with projects of 100+ tables and thousands of procedures, it IS a problem. It is not a problem if you have good change management. Essential on any project, not just small ones. If you change an interface to a method, does the caller not break? We can at least add an overloaded method with the new interface...
If you never call the old interface, then you now have dead code... same problem, same fixes: make a new one, deprecate the old one, track its use, remove it when it's clear. We have strict rules for breaking a proc's interface, and a standard method to deal with it. And yes, in our environment stored procs rule; they are an important layer in our architecture. Please don't shut up, this is a great debate...
    Tuesday, August 10, 2004 4:21 PM
  • User-1938370448 posted
I maintain a single template. I think this is code reuse to the extreme. I get the benefits of code reuse and don't pay the penalties associated with making it so generic that it can handle all the scenarios I will throw at it. Sounds good. It's often a challenge though to decide when to make something generic and when not to. My SQL generators are not based on a common SQL-92 CRUD engine with specialisations; instead I use a completely separate engine per database. This causes a little bloat but gives the opportunity to fine-tune the SQL for Oracle with Oracle-specific stuff, for example, without much hassle. Obviously things like the .NET framework classes wouldn't be cases where code generation would make sense, as these are a bunch of classes for specific purposes and do not lend themselves to code generation techniques. Well, I think there is still some good stuff left to implement through code generation though. Because .NET is single inheritance, often you want to inherit behaviour, like IComparable implementations, but have to write everything by hand. Code generation can help there, or better: runtime mixing in of code, as in AOP, but that's another story :)
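To give an idea of what "a completely separate engine per database" means in code, here is a boiled-down sketch; the class names are illustrative and the real engines do vastly more than paging:

```csharp
// Boiled-down sketch of per-database SQL engines behind one interface.
// Names are illustrative; real engines handle far more than paging.
public interface IQueryEngine
{
    // Produce the SQL text for one page of rows from a table.
    string CreatePagedSelect(string tableName, int pageNumber, int pageSize);
}

// SQL Server 2000: no OFFSET clause, so a TOP-based emulation.
public class SqlServerEngine : IQueryEngine
{
    public string CreatePagedSelect(string tableName, int pageNumber, int pageSize)
    {
        // Naive: fetch everything up to the requested page; the caller
        // skips the earlier pages client-side.
        return string.Format("SELECT TOP {0} * FROM [{1}]",
            pageNumber * pageSize, tableName);
    }
}

// Oracle: a completely different dialect, ROWNUM-based paging.
public class OracleEngine : IQueryEngine
{
    public string CreatePagedSelect(string tableName, int pageNumber, int pageSize)
    {
        return string.Format(
            "SELECT * FROM (SELECT t.*, ROWNUM rn FROM \"{0}\" t) " +
            "WHERE rn > {1} AND rn <= {2}",
            tableName, (pageNumber - 1) * pageSize, pageNumber * pageSize);
    }
}
```

Because the dialects differ this much, specialising one generic SQL-92 engine gets messy fast; two small dedicated engines stay readable.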
    Tuesday, August 10, 2004 4:28 PM
  • User-1445306016 posted
>> Furthermore, and this is more important: I can use the same code for SqlServer, Oracle or >> other databases. If I have to alter the filter, I can just do that there, in C#, the filter will >> work on Oracle, SqlServer or other databases supported by the runtime lib. Again, this >> might not be important to you, but for a lot of projects it is essential. >I have never had that as a business requirement. Just like I have never had to "cross platforms". >Maybe I am fortunate. I can see a few uses for it, but to base all my architectures on it does not >make sense to me. I can speak to this one a little bit. On two separate occasions, two different jobs even, I've had to migrate the database of my application from one platform to another or access data on different platforms. The first occasion was when I was working for a large company and they switched from an existing financial system, AS/400 based, to an MS SQL based financial system (Great Plains? I'm not sure if that was it or not). In that case we had to re-write the DAL (VB 6.0 wheeeeeee) to use MS SQL instead of the AS/400 driver. Having a multi-platform O/R mapper would have saved us a BUNCH of time and effort, IMO. The second occasion is at my current job working for a large clinical research facility. We are creating an on-demand data warehouse for the different clinical DB systems. Some Oracle based, some Ingres, some which spit out a flat text file we have to parse. In this case, we are looking for a multi-platform DAL/O/R mapper. We haven't gotten that far along, but the need to use only one DAL to access multiple DB platforms is one we identified early on. I'm not saying that you should always architect your applications JUST IN CASE something like this happens. For 90+% of the business applications out there the DB remains the same for the life of the application. But for shrink-wrapped applications where you don't know what DB the end user is running, a multi-platform O/R mapper that can be re-targeted without having to recompile sounds like a good idea to me.
    Tuesday, August 10, 2004 4:39 PM
  • User-560067886 posted
    Yay, we're on page 4! :p
    Tuesday, August 10, 2004 4:45 PM
  • User-528039901 posted
::I maintain a single template. I think this is code reuse to the extreme. I get the benefits of ::code reuse and don't pay the penalties associated with making it so generic that it can ::handle all the scenarios I will throw at it. Am I the only one thinking this statement contains an oxymoron? If you have one single template, and it is so generic that it handles all your scenarios, is it then not so generic that it can handle all the scenarios you will throw at it, which is exactly what you say it is not?
    Tuesday, August 10, 2004 4:53 PM
  • User-1569077614 posted
When I made my original post, it was just to share some information that I found VERY helpful with another community member. I never claimed I was O/R mapping; I even stated that. What I got was a reply saying what I posted was useless and stupid, and that anyone considering it needed to hit the books. Ya, ok, whatever. Nice contribution to the discussion, jackass. Everyone's comments have been pretty helpful, and I learned a little bit more about O/R mappers. I think the only useless thing out of this has been Thona's belligerent stance and over-inflated ego. If you are curious why I responded to this post, it's pretty simple. I'm part of a project that involves the creation of a fairly data-intensive application. I suggested several times we try an O/R mapper, for many reasons. One of the obvious ones is reducing time spent coding. The other people on the project didn't want to go down the O/R road for many reasons. Then I suggested CodeSmith with some straightforward data abstraction templates. They were willing to do this, so enter CodeSmith and its template friends. It has worked out very well too. CodeSmith is great, but I still like the idea of O/R mappers too. I tend to be rather agnostic when it comes to coding and using different tools for different situations. I don't just follow one pattern. I can't follow one pattern because I'm a contractor. I don't work for an in-house IT group. Sometimes I work alone, sometimes on a team. Sometimes I get to choose the architecture, sometimes it's handed to me. Sometimes I'm banned from, or limited in, choosing commercial components. Each job presents me with a different situation, and I can't do everything one way. Eric has done a great job with CodeSmith and it has saved people lots of time and pain. Likewise with the creators of O/R mappers. To me, both approaches are different but they have similar rewards for developers. Now, the underlying architecture is a different story, and no architecture out there will ever satisfy everyone. With the way things are going in the .NET world, I'll probably come into contact with O/R mappers at some point. If the situation calls for it, I'll probably suggest using one on a future project. Although it probably won't be EntityBroker, mostly because of the very charming spokesperson. I also plan on putting CodeSmith to use more often. I do lots of DotNetNuke work and lots of work with databases. The CodeSmith templates for both scenarios are priceless to me and save days, sometimes weeks, of work.
    Tuesday, August 10, 2004 4:53 PM
  • User-1938370448 posted
you make some good points, but I am going to call BS on some of it: Whoa, I never knew I would hit the BS mark ;) No, I am saying I have no idea what is in your box. I have no idea what SQL code will get generated until I profile it. Then I am not sure what I can do to change the query you are outputting. If you are trying to say that "I should not worry about that", then that just won't fly in any environment I have ever worked in. A bad query can and has brought down our site. That is what happens when you try and drink from a raging river. Well, I'll just say: O/R mapping gives some convenience, but at a price. Highly hand-optimized stored procedures will probably be more efficient, but so are COM+, ADO and C++ instead of .NET, SqlClient and C#. If you want highly optimized queries, tunable at every statement, then O/R mapping is not for you, it's that simple. Great, can you pull the generated code up, look at it, touch it, change it, fix it, alter it? Do a code review on it? $500 per minute is what goes through our site. No way am I giving up control. Maybe... maybe that would be different if scale wasn't such an issue. But I don't really see why I would have to fall back to an OR mapper when I can use the same methodology on my large site as I can on my customer care app. The volume of data/money flowing through the site says nothing about the amount of code required to make it work. Data access is overhead; what you do to produce the code that handles that overhead is up to you, but everything comes with a price. You probably traded away some flexibility for performance. Given the context it's perhaps the right choice. However, the choice comes with a price. It would be nice if you mentioned that price as well. (But to answer your question: yes, you can change the code if you want to, why not?) If I have a query based on an indexed view, does your tool know that it needs to set a few options on the connection before it can call the proc? Or in your case, run the query? SET ARITHABORT ON, you mean, to save data in tables used in an indexed view? Sure. You are making some assumptions about my architecture. 23 SQL Servers, 45 web servers, and searching a DB is what we do. 30+ million hits per day... this is not hello world. Again, the amount of data flowing through an app is nice to know, but that's not what's important; it's the complexity of the application on top of it. For performance reasons it perhaps is important. But when databases get big in the number of tables, the BL gets complex, the application has hundreds of screens, it gets complicated, really complicated. I have never had that as a business requirement. Just like I have never had to "cross platforms". Maybe I am fortunate. I can see a few uses for it, but to base all my architectures on it does not make sense to me. You would be surprised how many applications today have to support, for example, SqlServer and Oracle. On the fly. At the same time. It is not a problem if you have good change management. Essential on any project, not just small ones. If you change an interface to a method, does the caller not break? We can at least add an overloaded method with the new interface... If you never call the old interface, then you now have dead code... same problem, same fixes: make a new one, deprecate the old one, track its use, remove it when it's clear. But your situation is not everybody's situation. :)
A lot of legacy stuff is out there, and 10 to 1 management is not going to start a rewrite of the applications running on the legacy database when the data in that db has to be made available through a web application as well.
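For readers wondering about that ARITHABORT exchange: on SQL Server 2000, modifying data in tables covered by an indexed view requires a set of session options to be on, ARITHABORT among them. In raw ADO.NET terms it is just one extra command on the open connection. A minimal sketch (connection string, table and column names are placeholders, and only the option named in this thread is shown):

```csharp
using System.Data;
using System.Data.SqlClient;

public class IndexedViewWriter
{
    // Minimal sketch: turn on a session option an indexed view needs
    // (SQL Server 2000) before modifying the underlying tables. A real
    // app must ensure ALL required SET options are in effect; only the
    // one discussed in the thread is shown here. Names are placeholders.
    public static void MarkShipped(string connectionString, int orderId)
    {
        using (SqlConnection conn = new SqlConnection(connectionString))
        {
            conn.Open();

            SqlCommand setOptions = new SqlCommand("SET ARITHABORT ON", conn);
            setOptions.ExecuteNonQuery();

            SqlCommand update = new SqlCommand(
                "UPDATE Orders SET ShippedDate = GETDATE() WHERE OrderId = @id",
                conn);
            update.Parameters.Add("@id", SqlDbType.Int).Value = orderId;
            update.ExecuteNonQuery();
        }
    }
}
```

An O/R runtime can emit that SET batch for you once it knows the target table participates in an indexed view; a generated-code architecture bakes it into the templates instead. Same fix, different place.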
    Tuesday, August 10, 2004 5:02 PM
  • User-528039901 posted
::For 90+% of the business applications out there the DB remains the same for the life of the ::application. For shrink-wrapped applications where you don't know what DB the end user is ::running, a multi-platform O/R mapper that can be re-targeted without having to recompile ::sounds like a good idea to me. Same here. NOW - there is one core point that makes all this platform independence really funny. And this is that basically, outside the DAL (and even there only in a part), there simply is no reason at all to have even one line of code that knows what type of database you are talking to. In the DAL, this is naturally different, but even then most things can be pretty standardized - even though the .NET data access classes could be modelled better in this case, it is still doable to have only VERY limited database-specific functionality. So, basically, with a proper architecture in the system, there is no reason not to be database independent. And, using standard patterns (factory), there is also no need to recompile at all. I have customers running against SQL Server and Oracle, and we have one app running against SQL Server, Access and soon MySQL (web dudes - a CMS sort of has to support MySQL today). Both without code change, and even with database maintenance in our case. For shrink-wrapped software this IS a sales argument par excellence.
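That factory, by the way, is barely a screenful of code. A minimal hand-rolled sketch; the appSettings key names are made up for this example:

```csharp
using System;
using System.Configuration;
using System.Data;

// Minimal sketch of a connection factory: the concrete ADO.NET
// provider is named in the .config file, so retargeting the app to
// another database is a config edit, not a recompile.
//
//   <appSettings>
//     <add key="connectionType"
//          value="System.Data.SqlClient.SqlConnection, System.Data" />
//     <add key="connectionString" value="..." />
//   </appSettings>
//
// The key names above are made up for this example.
public class ConnectionFactory
{
    public static IDbConnection Create()
    {
        string typeName = ConfigurationSettings.AppSettings["connectionType"];
        Type connectionType = Type.GetType(typeName, true);

        // Everything above the DAL only ever sees IDbConnection.
        IDbConnection conn =
            (IDbConnection)Activator.CreateInstance(connectionType);
        conn.ConnectionString =
            ConfigurationSettings.AppSettings["connectionString"];
        return conn;
    }
}
```

Swap the type name for an Oracle or OLE DB connection class and nothing above the DAL changes, provided the SQL you feed it is equally portable - which is exactly the part the mapper or the templates have to take care of.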
    Tuesday, August 10, 2004 5:03 PM
  • User-498097622 posted
The great thing about our architecture is that we have rules. We only use Sql Server. Guess what: no reason for us to support other databases. We will never need to accommodate special considerations like legacy databases; all of our legacy work is done via EAI and/or database replication. We developed this product for our company and we license it to automotive dealers. My boss was perfectly fine with us learning lessons and making mistakes along the way. I do plenty of actual work even when I am not in the office; my wife would argue I work a lot more for this job than I should. Do not presume to know me or my situation, Thona. Plenty of my learning has been outside of work. Most of it, actually. Why did you build Entity Broker instead of buying something else?
    Tuesday, August 10, 2004 6:02 PM
  • User-498097622 posted
    Thona, On my blog you accused me of being racist. What is your country of origin? I want to make sure I am hating the correct people.
    Tuesday, August 10, 2004 10:03 PM
  • User-1406164332 posted
    :: Plenty of my learning has been outside of work. Most of it actually. Excuse me but I'd just like to clarify something. Are you under the impression that LEARNING at your job is a sin? Like stealing time from your employer? I didn't realize anyone had taken thona's comment seriously, the one about "you should be fired for learning". I will say one thing about this forum, there's never a dull moment with some of these ummm...interesting POVs.
    Tuesday, August 10, 2004 11:21 PM
  • User-528039901 posted
::Are you under the impression that LEARNING at your job is a sin? I never said so. But abusing a project budget to overcome your own incompetencies (radically put) is fraud. Learning on the job is not, but using a project budget for learning because you are not competent in what you have to do IS fraud. @likwid: ::On my blog you accused me of being racist. What is your country of origin? I want to make ::sure I am hating the correct people. I did not accuse you of being racist. I stated the fact that you ARE. This comment proves it.
    Tuesday, August 10, 2004 11:57 PM
  • User-1445306016 posted
    You guys need to take it to email. We don't care about your pissing contest.
    Wednesday, August 11, 2004 1:09 AM
  • User-546867306 posted
>>Thona, >>On my blog you accused me of being racist. What is your country of origin? I want to >>make sure I am hating the correct people. Thank you likwid for causing me to laugh out loud this morning. Of course, I'm not saying that I agree with this statement, but it's still very funny.
    Wednesday, August 11, 2004 4:22 AM
  • User-1938370448 posted
The great thing about our architecture is that we have rules. We only use Sql Server. Guess what, no reason for us to support other databases. We will never need to accommodate special considerations like legacy databases, all of our legacy work is done via EAI and/or database replication. I'm not familiar with the term EAI, I'm not a manager. Could you explain a bit what it means? Reading your above quote, I can come to just one conclusion. I'll say it bluntly, impolitely, unprofessionally and rudely: you don't get it. The reason for this is that this discussion is not about you, your rules, your tools or your favorite dinner. It's about offering choices so people who READ this discussion (and don't take part in it) LEARN a thing or two so they can make a BETTER decision about what to use in a given project. You accused some people here of tending to force their opinions on others. Why is it that I get the feeling you were referring to yourself? You sound like what you're working with is the way to go for everybody, and that if they follow it, they'll have the least to worry about. Not for a single second, Jeff. Database replication is not the solution. Perhaps for static legacy databases it is, but the vast majority of legacy databases are active, and active data has to be used in the various applications running on top of that database. "We have rules". I'm happy for you, but that doesn't mean a thing. Just because you don't deal with it doesn't make it bogus, Jeff, on the contrary. "We only use Sql Server. Guess what, no reason for us to support other databases." That must be a great relief. It doesn't help others reading this thread for a single second, however. Oh, they should migrate their big-iron Sun boxes with Oracle to SqlServer just so they have a homogeneous environment? Others might have that problem. Others might have to deal with it and find a solution. Throwing in your standard 'Guess what's is not going to help them, on the contrary. If you really want to evangelize CodeSmith as the solution, offer them that solution; however, you're now sounding like a zealot who tries to mitigate a real-life problem by stating that the problem is non-existent. Legacy database problems plus multi-vendor database problems are the top two problems a LOT of developers have to deal with and have to find solutions for. You see, Jeff, things which are easily solved are not the reasons people go looking for answers. It's the things which are hard to solve which bring people to online forums and newsgroups to find answers. We developed this product for our company and we license it to automotive dealers. My boss was perfectly fine with us learning lessons and making mistakes along the way. I do plenty of actual work even when I am not in the office, my wife would argue I work a lot more for this job than I should. I think your wife is right. Unless you own stock options in the company you work for, spending a lot of free time on work is only helping your employer. Unless you use that time to learn things, as you've suggested was the case. I'm not sure if you realize it, but your position is pretty extraordinary: your employer didn't care if you spent time at work fiddling around to try things out, you simply ignore legacy problems by throwing in database replication (which doesn't solve them), you were allowed to learn on the job, etc. Most developers are in a different situation: they have deadlines, tight budgets and tight project limitations they have to live with.
There is no time to learn on the job, because there is hardly enough time to do the project. There is no money to spend a lot of time on stuff that can be bought for a small fee, because extra money spent makes the profit on the project less and less (and the time restrictions also play a big part). Project limitations often state that a legacy database (or more!) has to be used, and that's it, no debate possible. These developers, and the vast majority of developers are in that situation, trust me, have problems you don't have to deal with. These developers will look for answers, for clues about what to do to solve their problems, to win time so they don't have to spend 16+ hours a day in the crunch time before a deadline. These developers will read, among others, this thread and this forum. I can tell you, Jeff, the last thing these developers want to hear is a 'Guess what' and a casual downplaying of what is reality for them. Please realize that, as I'm sure you want to help them out as much as I do. Thanks.
    Wednesday, August 11, 2004 4:43 AM
  • User-1938370448 posted
Excuse me but I'd just like to clarify something. Are you under the impression that LEARNING at your job is a sin? Like stealing time from your employer? I didn't realize anyone had taken thona's comment seriously, the one about "you should be fired for learning". I think he was referring to the fact that consultancy firms often send in a team of 'specialists' who are not specialists but trainees. The company hiring the consultancy firm expects (and is often promised) specialists, as that company pays top dollar (say $80 or more per hour) for these specialists. It is simply fraud to send in a trainee as a specialist, as the trainee will use the 8 hours in a day, costing 8 * $80 = $640 (or more) a DAY (!), mostly for learning what should have been in his/her head already, as he/she is presented as a specialist. You may not believe me, but here in The Netherlands the big corps, like Cap Gemini, LogicaCMG, Ordina, Gentronics, all use this tactic, A LOT. Employees at Cap Gemini were even told to work slower because revenue was down (a friend of mine works there). You know, they don't call a person who has had 2 training courses in C# and .NET a trainee anymore. Come again? This also happens with smaller companies. If the company hiring you is told that it gets a trainee who has a lot to learn, and this trainee is coming for free, no big deal; but if it's not, the money paid by the hiring company is thus paid for a learning experience. When a project is done on a fixed-price basis, the hiring company is not at risk; after all, it pays a fixed price. The problem though is that the contracting firm doing the project may have increased the price because it knows it has unskilled developers working on it. Also, and this is worse, the developer could have lied about his skills to get a project assignment. Hardly any employee will say "No boss, I can't do that". Doing a fixed-price project then is a big risk for the company doing the project: it might not make any money, or even lose money, on the project because the employees were using part of the project time to learn things instead of spending it on development. Sometimes an employer allows internal projects to be done as learning projects. Those are not the projects being referred to, I think. I think the projects referred to were projects where a hiring company was paying big money for services which were not worth that money.
    Wednesday, August 11, 2004 5:01 AM
  • User-117267662 posted
<cite>On my blog you accused me of being racist. What is your country of origin? I want to make sure I am hating the correct people. </cite> lol... my first good laugh of the morning... what's your blog?
    Wednesday, August 11, 2004 9:32 AM
  • User1081703888 posted
::Wow... looks like we opened a hornets nest here. It seems a touchy subject between pro and con... so what do you expect? ::This is EXACTLY the reason that I think template based code generation is the best way to go. There really isn't a 1+1=2 straightforward answer here... ::Go figure, it seems that people really like having control of their own architecture, flexibility to do things their way and not being locked into someone else's idea of "the right way". Indeed, it seems it's someone else's bright idea. But why reinvent the wheel over and over again if you don't really need to make a special wheel for your high-performance dragster of a car? ::As we all know, there are certainly some BIG egos out there that think they know everything and that their way is "the right way", but in my experiences I learn more and more every day and the more people I work with I tend to learn more and more and with that new knowledge I might want to change the way I do some things. Getting personal is not that intelligent ;) So that means you didn't really know what you were doing when you started on something anyhow? As I see it, these O/R mappers just take a crappy load of coding, plus the RSI, away from your workspace. ::It sure is nice to have the flexibility to do so. If I were to go with an O/R mapper those decisions wouldn't be left up to me. I would be stuck with a pretty black box that I can't open and IMO that sucks! You probably use a Windows flavor on a day-by-day basis, I guess... You use all kinds of features of that wondrous black box (green or blue if you instantiate it) of software. Do you see inside the widgets? I guess you trust the inner workings of these black-box thingies. In my opinion your argument is void.
    Wednesday, August 11, 2004 9:56 AM
  • User-1308937169 posted
>> There really isn't a 1+1=2 straightforward answer here ... Did you expect an easy answer? >> Indeed, it seems it's someone else's bright idea. But why reinvent the wheel over and over >> again if you don't really need to make a special wheel for your high-performance >> dragster of a car? Maybe we are not making wheels? >> Getting personal is not that intelligent ;) If you honestly think anyone on this list is stupid, you need a mental evaluation... These are bright people engaged in heated debate. Yes, a little shit slinging is par for the course. >> So that means you didn't really know what you were doing when you started on >> something anyhow? As I see it, these O/R mappers just take a crappy load of coding, >> plus the RSI, away from your workspace. bwwhhaatt? Where is my universal translator... WTF does this mean? >> You probably use a Windows flavor on a day-by-day basis, I guess... You use all kinds of >> features of that wondrous black box (green or blue if you instantiate it) of software. Do >> you see inside the widgets? I guess you trust the inner workings of these black-box thingies. In >> my opinion your argument is void. Some people like more control. Some people like it their way. Variety is the spice of life. Some people don't give a rat's ass about control. Some people are fine letting someone else invent their wheel. One size does not fit all. IMO you don't have an argument. As I see it, the goal of all of the tools mentioned here is to keep from writing repetitive code. Some give you more control than others... some people like that, some people don't care.
    Wednesday, August 11, 2004 10:37 AM
  • User-58325672 posted
Okay, hot thread! I think both methods can be good depending on the requirements you have. O/R mappers are the superb choice at some levels, but when it comes to high performance (when this IS a requirement, like in banking systems etc.) you may be better off choosing another solution, like using reporting tools in some cases. If you have enough money you can buy an O/R mapper with its source code, and the black box becomes white. If you have 0.00 dollars/euros: use templates or make your own mapper... do it your way. In most situations you have to fix the damage yourself if you make the wrong choice; be ready for that. Let's stay professional and not blame each other. It's all about sharing the knowledge here ;-)
    Wednesday, August 11, 2004 11:03 AM
  • User-58325672 posted
    PS: where are the moderators in this arena?
    Wednesday, August 11, 2004 11:04 AM
  • User-58325672 posted
PS2: What I don't understand here is: this discussion was about O/R mappers - what are the code generators doing here? No offense, but they ARE different. O/R mappers are a PERSISTENCE LAYER mechanism/tool (for the data access). CodeSmith is a code GENERATOR. CodeSmith has nothing to do with architecture or persistence mechanisms for the DAL; it's just a (powerful) code generator, nothing more, nothing less. Maybe you can make it 'fit' in your solution and let it generate a big amount of code for you, but it will never produce a persistent data access layer for you by itself, or you need to do a big hook-up job.
    Wednesday, August 11, 2004 11:12 AM
  • User-546867306 posted
>>What I don't understand here is: this discussion was about O/R mappers - what are the >>code generators doing here? I think you'll find that the discussion went mad after a useful post saying "Here are some CodeSmith templates. These may not qualify as O/R Mappers, but they are very helpful, and FREE :)", which was followed by a typical arrogant Thona comment of "You fail also to mention that they are useless." Not surprisingly, all h*ll broke loose and it degenerated into the standard "I'm not an idiot, you're the idiot, no he's the idiot" argument that happens a lot in this forum.
    Wednesday, August 11, 2004 11:50 AM
  • User223104247 posted
    I walk 5 miles to work each day. I’ve done this for the last 7 years. I end up getting here around 10:30am. Recently, a local bike dealer has been attempting to sell me on the merits of his new transportation device; a bicycle. I’ve also been approached by a car dealer; he says he has just the thing to get me to work. My manager states that he’s going to replace me, I spend too much time getting to the point where I can do meaningful work (my office), and not enough time actually doing work that will help the company. He’s thinking of replacing me with Billy, who gets to work at 8am each morning. I make $25 an hour, Billy makes $12.50. The bike manufacturer states that the car dealer is selling overkill; a complex device that will break down and leave me with no knowledge of how to fix it. His bike will get me to work more efficiently and allow me to be more productive. “You can see how my bike works!” he states. “Nothing is hidden. What you see is what you get.” “But what’s under the hood of that car?” he asks - preying on my worst fears. The car dealer states that his car will get me to work in the most efficient manner. “Who cares what’s under the hood?” he bellows. “It comes with the backing of the manufacturer, and plenty of people use it each day.” He throws users manuals, support documentation and pictures of greasy mechanics my way. He shows me pictures of happy employees driving their cars to work, some with jobs more important than mine, some with less. I look out and see the cars, zooming by the walking pedestrians or the occasional biker, and I realize that I want to be more productive as well. I choose the car. An OR Mapper is my car. I’ll avoid naming the tool I use as that would automatically disqualify my post as a fanboy post. It’s not. I cannot with any intelligence speak about any of the other offerings being discussed within this topic, nor do I have any reason to do so. I do hope that the many people that view this topic come away with some good information on the options that are available and some of the benefits as well. An OR Mapper can be your vehicle as well, and in many (most) cases I think it makes a lot of sense. I support an e-commerce site, as well as multiple Enterprise windows/web applications for the Department of Education. Using an OR Mapper has changed the way I develop; I had always thought of myself as a good developer – this tool allowed me to be a good and productive developer. Two months development time shaved off the e-commerce site. Five months off the current Enterprise application. The flexibility to work with the data how I want it, when I want it, in a fully OO manner. The ability to change/modify/extend the ‘black box’ by providing an SDK with full source code - reassuring me that I will never be left with a car that cannot be fixed. Performance metrics that have consistently fallen within the level of tolerance. Hey, each developer has to find his/her own car. There are many vehicles out there, don’t be stuck walking to work. Mike Davidson
    Wednesday, August 11, 2004 12:11 PM
  • User939856567 posted
>> PS: where are the moderators in this arena? Enjoying the fight? ;-)
    Wednesday, August 11, 2004 12:17 PM
  • User-1938370448 posted
    yasky517: Excellent post! If you have a blog, you should put it up there. It's too good to be wasted on page 4 of a thread on a crowded forum.
    Wednesday, August 11, 2004 12:51 PM
  • User-560067886 posted
It is absolutely ridiculous to assume that because template code generation is more flexible, it is too complex or too time-consuming to use. It does NOT take long to write templates. They are not particularly complex. If ASP.NET is over your head, then you might have trouble with template writing. If you've ever written an ASP page you'll likely have *zero* trouble jumping right into template-based code generation. What a number of people seem to be missing is that a template set is just as much a "ready-made package" as an O/R mapper is. It absolutely can be used without an ounce of understanding as to how it works. Again, if an O/R mapper fits your needs it is an excellent solution. But the O/R fanboys (no, that does not refer to all the O/R pros here) are not trying to convince everyone that code generation is a "bike". They are trying to convince everyone that code generation is a space shuttle that comes unassembled with no manual. We have found templates *extremely* easy to write, and in all reality, although our templates have evolved a lot over the last 2 years as our architecture evolves, we have spent a grand combined total of maybe two days on template writing in those 2 years. If I were an agnostic on the subject (yes, I know I should be, but I simply don't have sufficient experience with O/R mappers), it would seem to me that if I were starting a brand new project and had never used either, and my project managers told me implementation was completely up to me, and I had to meet no company standards and did not have to interface with any legacy systems, then an O/R mapper would be a very attractive solution. However, if I am looking to ease the pain of manual coding within an existing application or architecture, or I have to adhere to certain company coding standards, or my DAL has to interface with a non-standard data interface (i.e. the DAL talks directly to a third-party web service for all data transactions), or I simply want my architecture to work some special way, then code generation would seem a far more practical way to go. I'll give an example of where the ability to evolve our architecture through templates has been helpful. A year into a project for one of our largest clients, we were given the mandate that all data changes had to be logged to an audit log. They wanted to know every column that was changed, what it changed from and to, and who made the change. Possible solutions could have included putting triggers on every single table to fill the audit log table, but that would be a lot of work and all data transactions would require the user identity to be passed in. The solution we used was to modify a single template - our business object template - to track changes to individual properties (mapped to columns) by way of a hashtable. This was very easy to thread into the property setters. Then, when the BusinessObject.Save() method was called, we simply fired off a record to the audit log table with the column changes and the identity of the person who called the Save() method. This whole endeavour took about thirty minutes to implement in the template. Then we simply pressed "go" again inside Visual Studio and all of our business classes were instantly updated. I cannot say that this flexibility and feature would not be possible with an O/R mapper because I simply do not know. I'm sure it would be possible with a source license. But $89 for CodeSmith is a bit more palatable than a $999 source license to an O/R mapper.
And quite frankly, I wouldn't want to have to try to understand somebody else's code for an O/R mapper. I'm sure they are not easy to write and take a lot of effort, as they are performing a very complex task.
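For the curious, the change-tracking plumbing our template generates boils down to something like this (a stripped-down sketch; the names are illustrative and the real generated code does quite a bit more):

```csharp
using System.Collections;

// Stripped-down sketch of template-generated change tracking.
// A hashtable in the base class records old/new values per column;
// Save() flushes them to the audit log. Names are illustrative.
public abstract class BusinessObject
{
    private Hashtable _changes = new Hashtable();

    // Generated property setters funnel through this method.
    protected void TrackChange(string column, object oldValue, object newValue)
    {
        if (!object.Equals(oldValue, newValue))
        {
            _changes[column] = new object[] { oldValue, newValue };
        }
    }

    public void Save(string userName)
    {
        PersistChanges(); // generated per-table INSERT/UPDATE

        foreach (DictionaryEntry entry in _changes)
        {
            object[] values = (object[])entry.Value;
            WriteAuditRecord(userName, (string)entry.Key, values[0], values[1]);
        }
        _changes.Clear();
    }

    protected abstract void PersistChanges();
    protected abstract void WriteAuditRecord(
        string user, string column, object oldValue, object newValue);
}

// What one generated property looks like with tracking threaded in.
public class Customer : BusinessObject
{
    private string _companyName;

    public string CompanyName
    {
        get { return _companyName; }
        set
        {
            TrackChange("CompanyName", _companyName, value);
            _companyName = value;
        }
    }

    protected override void PersistChanges()
    {
        /* generated UPDATE statement goes here */
    }

    protected override void WriteAuditRecord(
        string user, string column, object oldValue, object newValue)
    {
        /* generated INSERT into the audit log table goes here */
    }
}
```

Because the setter pattern lives in one template, threading the TrackChange call into every property of every business class was a single edit plus a regenerate.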
    Wednesday, August 11, 2004 12:52 PM
  • User-1938370448 posted
We have found templates *extremely* easy to write, and in all reality, although our templates have evolved a lot over the last 2 years as our architecture evolves, we have spent a grand combined total of maybe two days on template writing in those 2 years. Hmm, so you wrote in 2 days what I wrote in 2 years? I must be really retarded then ;) All kidding aside, you can't compare the two. You can't compare a template set written in 2 days with a pack of functionality written in months, maybe years, and I think you'll understand that. Everyone knows that writing templates is not hard. It's the code you are templating which is hard. Of course, the CRUD wrappers are not hard either; I mean, how hard can it be, writing a wrapper around DAAB or similar? However, the more complex code IS hard to write. Multi-table filters, db-generic filters, recursive saves, fk-pk syncing, etc. etc. If you can write all that in 2 days, please drop me a line and we'll talk business. :) The point is: if you settle for the kind of functionality you can write in 2 days, that's great, but don't compare it to an O/R mapper, because you can't; it's apples and oranges. However, if I am looking to ease the pain of manual coding within an existing application or architecture, or I have to adhere to certain company coding standards, or my DAL has to interface with a non-standard data interface (i.e. the DAL talks directly to a third-party web service for all data transactions), or I simply want my architecture to work some special way, then code generation would seem a far more practical way to go. And why's that? You say you don't have enough experience with O/R mappers, yet you think you're able to make the claim I quoted above :). Of course, O/R mappers probably force you into doing things a given way. As Infragistics does with their controls, as .NET does with the FCL classes. There are always limitations, and if a wacky corporate policy doesn't allow you to use a given tool, that's too bad, use another tool. However, perhaps the corporate policy is then not suitable either, as no company wants to lose money just because the policy is retarded. :) (audit log addition) I cannot say that this flexibility and feature would not be possible with an O/R mapper because I simply do not know. It will take me 2 minutes, tops. But $89 for CodeSmith is a bit more palatable than a $999 source license to an O/R mapper. Not every O/R mapper costs $999 per developer. For 3 developers, we're less expensive than CodeSmith (Studio), btw. :) And quite frankly I wouldn't want to have to try to understand somebody else's code for an O/R mapper. This is of course a non-argument. If you start using a template set for CodeSmith, you have to understand exactly what's going on in those templates as well. And are those templates supported? How many people use these templates on databases with 1000+ tables, for example? Or on database clusters? Are they performant? And a big team of developers, can they work with the code generated by the templates, or is this code just creating more overhead? Silly questions perhaps, but real issues when people try to choose between homebrewed templates and a proven solution, which is, and this is important, a turnkey solution in most situations: fire up the tool, tic-tac-toe, click, and *hop*, you have the code you can build your application on top of.
Of course, if you can write that functionality in 2 days, no-one can beat that ;) and I wouldn't look any further: I'd put you behind the keyboard to cook up a big pile of functionality in that short a timeframe :D :) cheers. :)
    Wednesday, August 11, 2004 1:11 PM
  • User-1445306016 posted
>All kidding aside, you can't compare the two. You can't compare a template set written in 2 days with a pack of functionality written in months, maybe years, and I think you'll understand that. I think Frans hit it right on the head here. Comparing CodeSmith to an O/R mapper is like comparing the entire .NET Framework to ADO.NET or ASP.NET. O/R mappers are tools designed to solve a specific problem, where CodeGen tools are designed to solve a general problem. You can solve some of the same problems using a CodeGen tool that you solve with an O/R mapper. >>(audit log addition) >>I cannot say that this flexibility and feature would not be possible with an O/R mapper because I >>simply do not know. >It will take me 2 minutes, tops. >But $89 for CodeSmith is a bit more palatable than a $999 source license to an O/R mapper. Gentle.NET includes logging (using Log4Net) and is free and OSS. Not trying to undercut Frans' price; his O/R mapper provides a lot more functionality than Gentle.NET. Just showing that features don't always have to cost a mint. Thomas, no need to reply. I already know I'm ignorant and stupid and need to learn a lot.
    Wednesday, August 11, 2004 1:25 PM
  • User-560067886 posted
EDIT: damn, copied out of another app... hosed my cites. <cite>You can't compare a template set written in 2 days with a pack of functionality written in months, maybe years, and I think you'll understand that.</cite> <cite>if you settle for the kind of functionality you can write in 2 days, that's great, but don't compare it to an O/R mapper</cite> I didn't mean to imply that our architecture only took two days to develop. We have evolved that architecture over a much longer period of time. See, we didn't start development as tards who didn't know how to write a DAL, etc. We of course had done it many times. But template-based generation allowed us to automate what we already knew, so we could focus on the higher-level concepts in our applications. As new technologies become available we are able to adapt our architecture to them without rewriting everything by hand. What I was saying is that it only took two days to "templatize" that architecture and make it generate-able. We feel our architecture performs better than most O/R mappers, particularly for what we (read: clients) need it to do. Again, if you're starting from scratch and can use any development method/tool, then O/R is great. I don't believe in my albeit short 10-year career that I have come across that freedom more than maybe twice. Chances are a developer is hired on to work on an existing application or works for a development group that has already established certain standards. The beauty of template-based generation is that, because you have control over the template, you can integrate the new approach slowly over time. You can write templates that meet your new needs/architecture but still adhere to legacy interfaces. <cite>It will take me 2 minutes, tops.</cite> Yes, but you wrote the thing in the first place :) And btw, the time I quoted included the full process - data model, UI to view the audits, etc... because yes, we use templates to generate the core of most of our UI as well (obviously passive generation, though). I'd also like to point out that dropping in an O/R mapper isn't necessarily a breeze either, especially if you are already underway in your development cycle. O/R mappers still have a learning curve to be overcome. In the couple I have used, all of my classes had to inherit from the O/R base classes (which can be problematic if I already have them inheriting from others). And we had to learn the mapping file XML schema. Not terribly complex, but it certainly requires more than a passing glance. E-R and biz rules are hard to define in any platform. <cite>for 3 developers, we're less expensive than codesmith (studio) btw. :)</cite> Does that include source? If so, good for you! Seriously, I really hate it when a developer gets it in his head that he is some kind of god whose earthly gifts should come with some ridiculous price. Also, CodeSmith is actually free. The $89 is just for the custom studio IDE. <cite>And are these templates supported? How many people use these templates in databases with 1000+ tables for example? Or in database clusters? Are these performant? And a big team of developers, can they work with the code generated by the templates or is this code just creating more overhead?</cite> These are all issues which could be addressed in the documentation/marketing of the template set, just as if it were a product like the O/R mapper. Without those docs a developer/architect would have the same questions about the O/R mapper. Template sets can be "proven" just as an O/R product can be.
In fact, although the template community seems to be graciously posting template sets as freeware, it would not surprise me if, as templates become particularly intelligent and advanced (and documented), they themselves become commercial products. <cite>You say you don't have enough experience with O/R mappers, yet you think you're able to make the claim I quoted above :).</cite> Because I get the concept, dude. I have worked with them some, and I have garnered enough info from this thread to deduce that O/R mappers are not wide open without source (and the problem with the source argument is that it really is a discussion far above the original concept of debate here... yeah, sure, with source I can do about anything - hell, I could evolve an O/R mapper into a full-blown operating system if I put enough into it). If my project requires that I use a web service as a relative DAL, I'm pretty sure I'm going to have to recode much of the O/R mapper to do it. And if I have to modify source, then I am certainly no better off than if I had gone with generation in the first place. I'm fairly certain I can work with and modify *my* code, that I'm already familiar with, faster than I could get in and understand the O/R code. If I'm wrong, I apologize in advance.
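(For readers who haven't seen one: the mapping files I mentioned a post back generally look something like the sketch below. This is a generic shape only; every product's actual schema and element names differ.)

```xml
<!-- Generic sketch of an O/R mapping file. Element and attribute
     names vary per product; this is just the common shape. -->
<class name="MyApp.Customer" table="Customers">
  <id property="CustomerId" column="CustomerID" />
  <property name="CompanyName" column="CompanyName" type="String" />
  <property name="City" column="City" type="String" />
  <!-- one-to-many: Customer.Orders is loaded from Orders via the FK -->
  <collection property="Orders" class="MyApp.Order"
              foreignKeyColumn="CustomerID" />
</class>
```

Not rocket science, but multiply it by a few hundred tables and you see why the learning curve and the tooling around these files matter.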
    Wednesday, August 11, 2004 2:03 PM
  • User-560067886 posted
skoon, I agree that CodeSmith cannot be compared to O/R mappers directly. CodeSmith is simply a platform and O/R mappers are products. My intent is simply to illustrate that using CodeSmith can provide a developer with the tools and space to develop the functionality served by an O/R mapper with far less effort than is being suggested by some of the O/R guys. As others have mentioned in this thread, the two can certainly be used side by side, as CodeSmith fulfills a larger need and can be used for tasks outside of data persistence and querying. We use it to create business services outside our business objects and in our UI as well. We use it for the BL/DAL because we have specific needs in those layers and, quite frankly, because we already had it for the rest. We didn't see a need to bring in a second product to satisfy a problem that could be addressed with something we were already using anyway.
    Wednesday, August 11, 2004 2:15 PM
  • User-1308937169 posted
BlueMagics (...thona?) Beyond all the theory. Beyond the semantics. Taking the database out of the picture: what does an OR mapper buy you over code that you can write? Time savings? Yes. However, I only have to write my templates once, and I don't have to learn anything about the object model of the OR mapper. I do have to learn enough about CodeSmith to write my templates. I'll call that one a wash. What are my options for debugging the OR mapper? I guess I have to assume that the OR mapper doesn't have any bugs... even though it has been said that writing a good OR mapper is very complex and difficult. It seems that if I went with thona's solution, I had better ask really smart questions or he will just blab about how stupid I must be for asking a question about his great architecture... not sure I would get any help from him. Frans might be more receptive, but I still have to wait for a patch, workaround etc. if it's a bug with the mapper. With _MY_ templates, I have control over the output and I can make changes as time allows. How much time have I saved if the OR mapper is the cause of my bug, but since it is all generated at startup <assumption>maybe as a dynamic assembly</assumption> I can't even step into it? I suppose I could try to attach to the process, but without source... How many times have you started with a datagrid only to resort to a datarepeater because the datagrid was not flexible enough? If I need _ANY_ formatting now, I go straight for the datarepeater out of experience. WTF is so wrong with a little NIH? NIH is rooted in experience... right or wrong... > CodeSmith has nothing to do with architecture or persistence mechanisms for the DAL; it's > just a (powerful) code generator, nothing more, nothing less. Maybe you can make it 'fit' > in your solution and let it generate a big amount of code for you, but it will never produce a > persistent data access layer for you by itself, or you need to do a big hook-up job. Err... persisting what? The DAL code? The data that is supposed to go into your database? It can generate a data layer for persistence... not sure what you are getting at here. This statement seems very thona'ish... I'll have to admit, I am learning more about OR mapping, but I am still not sold on it. One size does not fit all, and world hunger is what it is. I don't believe every problem has the same solution. Way too many variables... I am still voting for more control.
    Wednesday, August 11, 2004 2:15 PM
  • User-1308937169 posted
SKoon, > I think Frans hit it right on the head here. Comparing CodeSmith to an O/R mapper is like > comparing the entire .NET Framework to ADO.NET or ASP.NET. O/R mappers are tools > designed to solve a specific problem, where CodeGen tools are designed to solve a general > problem. You can solve some of the same problems using a CodeGen tool that you solve > with an O/R mapper. See, I would say that a little differently: O/R mappers are designed to solve all problems, while template-generated code is designed to solve specific problems. With the templated approach, your solution to a problem can be very specific or very generic. Your choice. That is not what I am hearing from the ORM crowd. I am hearing that they have a solution to what I need. Without knowing what my needs are... If I only need a tack hammer, then why get the sledge?
    Wednesday, August 11, 2004 2:22 PM
  • User-1938370448 posted
See, I would say that a little differently: O/R mappers are designed to solve all problems, while template-generated code is designed to solve specific problems. No, O/R mappers are designed to solve the data-access problem. Code generators do nothing at first; you have to write templates to get them going. Code generators thus solve a problem with a solution you have to provide yourself, or better: they help you solve the problem by helping with the solution you have to provide yourself. With the templated approach, your solution to a problem can be very specific or very generic. Your choice. Using that logic, VS.NET is the answer to all problems as well. Oh, and your 2 hands of course. That is not what I am hearing from the ORM crowd. I am hearing that they have a solution to what I need. Without knowing what my needs are... When someone has a data-access problem, an O/R mapper can help. That's it. If you have a GUI control problem, a suite of GUI controls can help. I don't think that is very hard to understand.
    Wednesday, August 11, 2004 2:49 PM
  • User-1938370448 posted
(p00k: good points, just some remarks on a few) <cite>It will take me 2 minutes, tops.</cite> Yes, but you wrote the thing in the first place :) And btw, the time I quoted included the full process - data model, UI to view the audits, etc... because yes, we use templates to generate the core of most of our UI as well (obviously passive generation, though). Ok, point taken :) I'd also like to point out that dropping in an O/R mapper isn't necessarily a breeze either, especially if you are already underway in your development cycle. O/R mappers still have a learning curve to be overcome. In the couple I have used, all of my classes had to inherit from the O/R base classes (which can be problematic if I already have them inheriting from others). And we had to learn the mapping file XML schema. Not terribly complex, but it certainly requires more than a passing glance. E-R and biz rules are hard to define in any platform. True, the learning curve is there, no question about it. Every product which has more than 2 buttons might have a learning curve that has to be overcome. However, I think learning is a bit easier than having to fight with problems while writing the code yourself. Some O/R mappers do require you to do things a certain way, correct. We too force you to inherit from our base classes. That's a choice we made. The same choice Microsoft made with ServicedComponent: you can't add COM+ functionality without inheriting from that class. <cite>for 3 developers, we're less expensive than codesmith (studio) btw. :)</cite> Does that include source? If so, good for you! Seriously, I really hate it when a developer gets it in his head that he is some kind of god whose earthly gifts should come with some ridiculous price. Also, CodeSmith is actually free. The $89 is just for the custom studio IDE. Source of the runtime lib, parser, interpreter, code generator core, and task execution engine (we use a NAnt-style code generation engine, so you can add whatever task you like to be included during code generation; it is, for example, possible in theory to add CodeSmith as a task). Of course I was referring to the Studio IDE :) <cite>And are these templates supported? How many people use these templates in databases with 1000+ tables for example? Or in database clusters? Are these performant? And a big team of developers, can they work with the code generated by the templates or is this code just creating more overhead?</cite> These are all issues which could be addressed in the documentation/marketing of the template set, just as if it were a product like the O/R mapper. Without those docs a developer/architect would have the same questions about the O/R mapper. Template sets can be "proven" just as an O/R product can be. In fact, although the template community seems to be graciously posting template sets as freeware, it would not surprise me if, as templates become particularly intelligent and advanced (and documented), they themselves become commercial products. In theory they might; in practice it is a loooong road. Supporting a product with good documentation, customer support etc. is time-consuming. A lot of open source products suffer from this too. A single page of docs, one example project, a GUI which is hard to understand, and there you go. :) That doesn't work for project teams which have to get WORK done, something that's often overlooked.
    Wednesday, August 11, 2004 3:00 PM
  • User-1308937169 posted
    It is obvious that we have two very different views of the same problem. > See, I would say that a little differently: > O/R Mappers are designed to solve all problems, while template generated code is > designed to solve specific problems. > No, O/R mappers are designed to solve the data-access problem. Code generators do > nothing at first; you have to write templates to get them going. Code generators thus solve > a problem with a solution you have to provide yourself, or better: they help you solve the > problem by helping with the solution you provide yourself. My post did not say Code Generator... my post said Generated Code. So, OR Mapping is supposed to solve all my data access problems?
    Wednesday, August 11, 2004 3:03 PM
  • User-1938370448 posted
    What does an OR mapper buy you over code that you can write? Well, how does 'a lot of time' sound? Time savings? Yes. However, I only have to write my templates once, and I don't have to learn anything about the object model of the OR mapper. I do have to learn enough about CodeSmith to write my templates. I'll call that one a wash. Oh, you only have to write the templates once. Cool. That's however one time too many. Also, learning is not the same as struggling with bugs in your own code you have to fix, complex functionality you have to implement, the design of your own code, features your co-workers need but you don't, etc. Learning is about spending some time (perhaps a day or two) to get started with the basics and work your way up to mastery of the tool; often even that's not needed. What are my options for debugging the OR mapper? Why would you want to do that? Do you debug the .NET classes as well? Or vs.net code? I assume you do. Do you use any 3rd-party controls for your GUIs? Did you debug them too until you couldn't find any bug left? Do you realize how stupid, pardon my French, that sounds? :) But of course, if you get the source for the runtime code, debugging is possible, why not. I guess I have to assume that the OR Mapper doesn't have any bugs... Even though it has been said that writing a good OR Mapper is very complex and difficult. And your point? Oh, CodeSmith doesn't have any bugs either? ;) It seems that if I went with thona's solution, I had better ask really smart questions or he will just blab about how stupid I must be for asking a question about his great architecture... Not sure I would get any help from him. Frans might be more receptive, but I still have to wait for a patch, workaround, etc., if it is a bug in the Mapper. I think you'll have a hard time topping our customer support, I'm afraid. Sure, if there is a bug, you have to report it and wait for the fix. That might take some time, say 30 minutes. I'm still waiting for the fixes for the bugs I found in .NET 1.0 and 1.1; some I reported back in April 2002, and they're still here. So following your logic and reasoning, I presume you're not using .NET, but your own hand-written (oh sorry, generated) .NET clone, running on top of your own operating system, using your own GUI controls :) Some guys have all the luck :) With _MY_ templates, I have control over the output and I can make changes as time allows. How much time have I saved if the OR mapper is the cause of my bug, but since it is all generated at startup <assumption>maybe as a dynamic assembly</assumption> I can't even step into it? I suppose I could try to attach to the process, but without source.... But what if time doesn't allow you to make changes? Because a co-worker found a bug in your templates and you have no time to fix it because you have a deadline to catch and you're already late? Oh, you never have that? Again, some guys have all the luck. How many times have you started with a datagrid only to resort to a datarepeater because the datagrid was not flexible enough? If I need _ANY_ formatting now, I go straight for the datarepeater out of experience. WTF is so wrong with a little NIH? NIH is rooted in experience.... right or wrong.... Nothing is wrong with a LITTLE NIH. When little gets bigger, NIH is bad. For whom is it bad? Well, for your employer it is. If you're self-employed, it's bad for you and your family. If you are a contractor, it's bad for the hiring company. Because it costs time you waste on things you could have avoided. No one can write everything themselves.
It's the definition of 'progress' that you base your own achievements on achievements made by others, not solely your own. Way too many variables.... I am still voting for more control. I always thought the more variables, the more control, but that's perhaps me...
    Wednesday, August 11, 2004 3:13 PM
  • User-1938370448 posted
    So, OR Mapping is supposed to solve all my data access problems? O/R mapping is A solution for THE data-access problem. It depends on how you define your data-access problems, of course. But as they say, nothing beats hand-optimized assembler... My post did not say Code Generator... my post said Generated Code. Ah, and that generated code comes from... where? You can buy that in the supermarket? :) You see: code generation is great, but unless you have something to feed the generator, you get nothing out of a generator. That's the sad part of it.
    Wednesday, August 11, 2004 3:15 PM
  • User-560067886 posted
    > However I think learning is a bit easier than having to fight with problems while writing the code otherwise. Yes, depending on the developer, it very well could be. > The same as Microsoft made with ServicedComponent true > Supporting a product with good documentation, customer support etc. is time consuming. A lot of open source products suffer from this too. That's why I mention template sets can become full commercial products for a company set up to do it. We have toyed with the idea of releasing our templates as a product and shied away for the very reason that we don't want to take on the resulting workload (support, etc.) right now. But an O/R mapper has the exact same issues. You have to support your product and document it well, with performance specs/examples, before I will choose it for an important project. I think cooler heads here see that both approaches are quite valid and very useful considering the alternative of hand coding. As with most things, it is simply a matter of specific needs and preference. I think we are also starting to see that the comparison here is somewhat invalid, but I believe these discussions are important. They can be enlightening for people on both sides of the fence, and hopefully they can foster fresh thought and problem solving among developers, and of course dispel a few myths ;).
    Wednesday, August 11, 2004 3:22 PM
  • User-560067886 posted
    > You see: code generation is great, but unless you have something to feed the generator, you get nothing out of a generator. That's the sad part of it. I think we are trying to say that most developers are quite capable of getting something out of it, and for far less work than some are suggesting. For me, if you can't write this stuff by hand, then I really don't want you on the team. Obviously I'm not gonna make you write it by hand, that's what the generator is for. But I want to know that you understand this domain well enough to do it if you had to :)
    Wednesday, August 11, 2004 3:28 PM
  • User-1308937169 posted
    Nice warping of my statements.... I'll have to remember to be more defensive with my wording... > Time savings? Yes. However, I only have to write my templates once, and I don't have to > learn anything about the object model of the OR mapper. I do have to learn enough about > CodeSmith to write my templates. I'll call that one a wash. > Oh, you only have to write the templates once. Cool. That's however one time too many. > Also, learning is not the same as struggling with bugs in your own code you have to fix, > complex functionality you have to implement, the design of your own code, features your co- > workers need but you don't, etc. Learning is about spending some time (perhaps a day or > two) to get started with the basics and work your way up to mastery of the tool; often even that's > not needed. Once I get the template squared away, it won't produce bugs. Once the templates are done, my Data Access "problem" is solved. > What are my options for debugging the OR mapper? > Why would you want to do that? Do you debug the .NET classes as well? Or vs.net code? I > assume you do. Do you use any 3rd-party controls for your GUIs? Did you debug them too until > you couldn't find any bug left? Do you realize how stupid, pardon my French, that > sounds? :) > But of course, if you get the source for the runtime code, debugging is possible, why not. It only sounds stupid because of your frame of mind. Your product generates code. Are you telling me it doesn't have bugs? Your product, or the code it produces? 3rd-party controls have bugs too. > I guess I have to assume that the OR Mapper doesn't have any bugs... Even though it has > been said that writing a good OR Mapper is very complex and difficult. > And your point? Oh, CodeSmith doesn't have any bugs either? ;) CodeSmith will have bugs just as your solution will. My templates will have bugs as well. At least I get the output of the template. Bugs are easier and faster to track down when you have the source of the bug. You want me to believe that the Data Access code your tool produces is bug free. > It seems that if I went with thona's solution, I had better ask really smart questions or > he will just blab about how stupid I must be for asking a question about his great > architecture... Not sure I would get any help from him. Frans might be more receptive, but I > still have to wait for a patch, workaround, etc., if it is a bug in the Mapper. > I think you'll have a hard time topping our customer support, I'm afraid. Sure, if there is a > bug, you have to report it and wait for the fix. That might take some time, say 30 minutes. > > I'm still waiting for the fixes for the bugs I found in .NET 1.0 and 1.1; some I reported back > in April 2002, and they're still here. Perhaps MS decided it was not worth fixing? When you are trying to appeal to the masses, you will leave some behind. You can't make everyone happy. One size does not fit all. > So following your logic and reasoning, I presume you're not using .NET, but your own > hand-written (oh sorry, generated) .NET clone, running on top of your own operating system, > using your own GUI controls... Some guys have all the luck :) Funny... What if I had a major issue and it was the result of code your product produced, and you decided that it was not worth fixing? Buy your source license and fix it myself? Then what happens when you have an upgrade? [And yes, I do have all the luck :)] > With _MY_ templates, I have control over the output and I can make changes as time > allows. How much time have I saved if the OR mapper is the cause of my bug, but since it > is all generated at startup <assumption>maybe as a dynamic assembly</assumption> I > can't even step into it? I suppose I could try to attach to the process, but > without source.... > But what if time doesn't allow you to make changes? Because a co-worker found a bug in > your templates and you have no time to fix it because you have a deadline to catch and > you're already late? Oh, you never have that? Again, some guys have all the luck. In that case, I would not have the 30 minutes you claim it takes to get a bug fixed in your product either. Moot argument. My co-worker can fix the template and generate his own code... at 2 a.m. the night before his release. I simply don't buy that your product will generate perfect, bug-free code... anyone's product, for that matter. > How many times have you started with a datagrid only to resort to a datarepeater > because the datagrid was not flexible enough? If I need _ANY_ formatting now, I go > straight for the datarepeater out of experience. WTF is so wrong with a little NIH? NIH is > rooted in experience.... Right or Wrong.... > Nothing is wrong with a LITTLE NIH. When little gets bigger, NIH is bad. For whom is it bad? > Well, for your employer it is. If you're self-employed, it's bad for you and your family. If you > are a contractor, it's bad for the hiring company. Because it costs time you waste on things > you could have avoided. No one can write everything themselves. It's the definition > of 'progress' that you base your own achievements on achievements made by others, not > solely your own. Agreed, so there is a grey area where NIH is acceptable, and even desirable... Anything carried too far is a bad thing... > Way too many variables.... I am still voting for more control. > I always thought the more variables, the more control, but that's perhaps me... Right, like variables in my architecture. Too many variables and things get fragile and complex. That is where good design comes in, as I am sure you know. I would never claim to have a solution to all the world's Data Access problems (although two lines of code to connect to a DB, three lines to execute a statement... hardly a "Problem"). Code Generation is a tool. ORMs are tools too. They both strive to reduce the amount of code a programmer has to write. The less they have to write, the less they have to test. No one is arguing with that.
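    To put the "two lines of code to connect a DB, three lines to execute a statement" remark in concrete terms, this is roughly the raw ADO.NET baseline both camps are generating or hiding (a sketch; the connection string and table are placeholders):

    using System.Data.SqlClient;

    class QuickUpdate
    {
        static void Main()
        {
            // Two lines to connect, three to execute: the baseline being discussed.
            using (SqlConnection conn = new SqlConnection("Server=.;Database=Test;Integrated Security=SSPI"))
            {
                conn.Open();
                SqlCommand cmd = new SqlCommand("UPDATE Customers SET Name = @n WHERE Id = @id", conn);
                cmd.Parameters.Add("@n", "Acme");
                cmd.Parameters.Add("@id", 1);
                cmd.ExecuteNonQuery();
            }
        }
    }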
    Wednesday, August 11, 2004 5:24 PM
  • User-498097622 posted
    Frans, why did you start developing LLBLGen?
    Wednesday, August 11, 2004 5:36 PM
  • User-1406164332 posted
    @p00k :: My intent is simply to illustrate that using CodeSmith can provide a :: developer with the tools and space to develop the functionality served :: by an O/R mapper with far less effort than is being mentioned by some :: of the O/R guys. This is not correct. I know many people here believe that thona preaches the difficulty of O/R mapping because he's trying to corner a market and wants to discourage other developers from writing their own mappers so they'll buy his. As for his motives, that might be entirely possible. But he is right. O/R mapping is not trivial for any real data model. If you take a few textbook examples like Employee->Address and try to write a mapper for these, you may be pleasantly surprised how easy it is at first. But as you introduce the natural variations of real-world models into your design, you'll quickly discover the enormity of the task. O/R mapping is hard, no ifs, ands, or buts. And anyone who thinks writing your own is a snap can either take my word that it's not, or go try it themselves. But until then, people like p00k really need to qualify their statements with something like "I don't know what I'm talking about", or even "theoretically it should be easy but I've never tried it". That way you're not misleading people.
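    To make the "easy at first" point concrete: the textbook case really is a few lines of hand mapping. A hedged sketch (the Employee type and column names are invented):

    using System.Data;

    public class Employee
    {
        public int Id;
        public string FirstName;
    }

    public class NaiveMapper
    {
        // One flat table to one flat object: trivially easy...
        public static Employee ReadEmployee(IDataReader reader)
        {
            Employee e = new Employee();
            e.Id = reader.GetInt32(reader.GetOrdinal("Id"));
            e.FirstName = reader.GetString(reader.GetOrdinal("FirstName"));
            return e;
        }
        // ...it is inheritance hierarchies, composite keys, identity maps,
        // lazy loading and change tracking that make a real mapper hard.
    }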
    Wednesday, August 11, 2004 6:53 PM
  • User338448625 posted
    It's been a little while since Thomas posted here, so I don't know whether he is following this thread any more. However, having just gone through the first 4 screens, I am left with a very sour taste in my mouth with respect to Thona. Thomas, the attitude that comes through in your posts in this thread, as well as some other public postings that I have seen on the web, is not just a reflection of your personality; it also reflects on your company and your product. Frankly, the more I am exposed to your diatribe, the less I am inclined to consider your product, regardless of its merit. I am not saying this to bash you; rather, I want to encourage you to consider the harm you are doing to your company, your employees, and yourself by behaving the way you do in public. You are the public face of your company, try to remember that. -Michael
    Thursday, August 12, 2004 1:39 AM
  • User-528039901 posted
    ::Frans, why did you start developing LLBLGen? I can not answer for Frans, but I can answer for me why I started with the EntityBroker. It was a sign of arrogance. Not the arrogance of "I am great", but the arrogance of refusing to work with what I consider totally bad tools for a real application (dataset / data adapter pairs, dumping your DAL into the forms) after having some years of Java experience, where O/R mappers are common. I simply refused to go down to the level VS.NET tried to impose on me (which we all agree is ridiculous - which is why the next version of vs.net puts up a different model, alternatively). At that time there was no VS.NET product (just a beta), and the application blocks were zero. IBuySpy started to be around, and frankly, I was amazed to see what is considered to be a business logic layer and data layer by the people making IBuySpy - always comparing this to Enterprise Java Beans. At that time there also was no tool out or on the horizon that could be considered not laughable, otherwise I would just have gone out and bought one. Having had about 5 or 6 years of experience USING O/R mappers (which strikes me as possibly being 10 or 20 times as long an experience as that of most of the people here trying to tell people O/R mappers are simple), I was eager to actually get a grip on how they work. And I was sort of aware of what they actually can/should do, having had a chance to work with THE O/R mapper (TopLink, now an Oracle product - it was long considered to be the best). And this is how we started making something we called BoB (Business Object Broker). Time came, it was named EntityBroker and put on a back burner (as in: ObjectSpaces was announced, as a .NET 1.1 thing, which looked perfect for me; I was eager to switch), and time went (and ObjectSpaces was moved to .NET 2.0 - not acceptable when you are working on projects - and finally cancelled). Having known about ObjectSpaces VERY early (I reviewed some of the original design papers), BoB (ObjectBroker / EntityBroker) was very primitive on purpose in the beginning. Basically "has to be good enough to carry us through until ObjectSpaces is done". Like: V1 (never seen outside our company) had no inheritance (wow - we could live with this. Interesting, given the tremendous use we make of this feature now), had a VERY simple search system in the beginning (conditions only on the table queried for), and stuff like this. The EntityBroker as you see it now is iteration 3 of the architecture, with 2 and 3 being relatively similar (switching from the EB subclassing an abstract business class to having a generated stub on top). Looking back - EVEN with more than 5 years of using experience, and even though I did my homework and read up on the topic (contrary, again, to most people posting here about how easy they are), I failed to fully understand all the warnings in about every white paper: they are complicated. Fact is, like Gabe says: O/R mappers are a honey trap. It looks terrifically easy to start with. But it gets worse VERY fast. That said, I learned a lot. That also said, I would not have been able to do it had I been so arrogant as to start the task without knowing what an O/R mapper is - knowledge based on a lot of experience from using them in the Java world (and in some Smalltalk adventures - you guys do know how old this technology is, right?). I am still not happy with certain things about the EntityBroker - mostly the requirement for a base class.
Sadly, I fail to see another way to do things WITHOUT introducing other side effects I like even less. This is one area where I see something in the Java world MS has shut down: the ability to change the bytecode of a loaded method by intercepting the class loader. BTW - areas where I am struggling now are distributed caching and cache invalidation (and no, don't come with the .NET 2.0 mechanisms, they simply do not work as a general solution; invalidating all objects when one object changes is a killer if you run an order system, you can as well just turn off the cache - and things just get really interesting when you get distributed), and optimising the API for a distributed environment (we are soon introducing, first off, multi-queries - a query API for submitting multiple queries to the O/R mapper in one call. Why? Well, our DAL may be on another computer, which may be behind a slow internet connection - cutting out round trips can be VERY valuable here. And we are working on a replicated client/server cache, persisting to disc - so that the server knows what the client knows and can cut down the traffic - again, everyone saying it costs memory on the server etc. should realize that this is interesting when you care about performance more than about server resources AND your client is behind a slow connection, like a modem or a mobile phone - not transmitting data is better than compressing it, if the data is, for example, a jpg image). I found, btw, this part of the "O/R mapper market ignorance" pretty interesting - and I am not talking about .NET mappers here. O/R mappers are used in tiered OO architectures. Standard rules demand a tier has to be able to run on another computer (which makes sense - the DAL as near to the database as possible can really make a difference - which is why I am working on a DAL running INSIDE the database with SQL Server 2005). But this, then, may be a slow connection, and all this stuff about using web services etc. does not cut it. When you are behind a slow connection, for example, and you have to commit 250 changed objects, the data transfer may be slow - all other O/R mappers I know of ignore this; we have now made changes so that we actually DO expose progress information, allowing the end user to see a progress bar which the developer can build on top of our own API (which is important - the last thing you want when an O/R mapper shields you from the lower levels is having to figure out how to get progress information from a web service or remoting layer you are technically not supposed to deal with - and when you work on a CMS and the dude may upload 6 MB of data in business objects, and you are behind a modem, believe me, you WANT a progress bar). Coming back to my original argument, most O/R mappers (especially non-commercial ones) seem to totally ignore this fundamental rule about layers, though. But this is about as good an explanation as I can give of why I started the EntityBroker. And now, about three years into the project and with about three man-years invested (with multiple people - while it was not a product we had LONG periods where basically nothing was done, when it was BoB and ObjectSpaces was supposed to be out any moment), all I can really do is warn people not to get into the endeavour without doing their homework. This IS a honeypot. It LOOKS easy, but once you leave the "ah, yeah, right - person->address, can not be complex" textbook example, it turns out a hell of a lot more complicated than it originally looks.
It is not so much that there are tremendously complex subsystems, just that there are so many and it all has to fall together.
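    The round-trip argument can be demonstrated without any mapper at all: batching several result sets into one command is the idea underneath a multi-query API. A hedged sketch (this is not EntityBroker's actual interface; conn and customerId are assumed to exist):

    // One command, two result sets: one network round trip instead of two.
    // Over a modem-speed link, this is the saving being described.
    SqlCommand cmd = new SqlCommand(
        "SELECT * FROM Orders WHERE CustomerId = @id; " +
        "SELECT * FROM Addresses WHERE CustomerId = @id;", conn);
    cmd.Parameters.Add("@id", customerId);
    using (SqlDataReader reader = cmd.ExecuteReader())
    {
        while (reader.Read()) { /* materialize Order objects */ }
        reader.NextResult();   // move to the second result set
        while (reader.Read()) { /* materialize Address objects */ }
    }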
    Thursday, August 12, 2004 1:49 AM
  • User-58325672 posted
    @tcarrico: ::What does an OR mapper buy you over code that you can write? To quote something from Scott Ambler's whitepaper: "a robust persistence layer that maps objects to persistence mechanisms (in this case relational databases) in such a manner that simple changes to the relational schema do not affect your object-oriented code. The advantage of this approach is that your application programmers do not need to know a thing about the schema of the relational database, in fact, they don't even need to know that their objects are being stored in a relational database. This approach allows your organization to develop large-scale, mission critical applications. The disadvantage is that there is a performance impact to your applications, a minor one if you build the layer well, but there is still an impact." I'm only saying that a good O/R mapper can save you a big amount of time maintaining your application, as I have said before. I don't have that much experience with the templates of CodeSmith; I have only some (little) experience with the templates used in the DNN community, and all I can say is, from the architectural view they suck big time. You need to generate the SPs and paste them into SQL Server. A few months later you end up with a very large list of SPs, and you don't know which ones can be deleted or edited without searching all the code of your applications for which ones are used. Another bad thing was the separate generation of the database classes for each DB you want to support; an O/R mapper can have out-of-the-box support for more databases. And by the way, about control over the code of the O/R mapper itself: if you work on big projects, it's not (in most cases) a problem to spend some money on tools and 'time-savers'. Did you ever buy some third-party components? And about the bugs... most top O/R mappers have excellent service. And a counter point: the code of the O/R mapper is used by more companies (and used in enterprise projects), so it's tested more and better, and you can get bug fixes even for the bugs you didn't find yourself. Again, did you (ever) use some third-party components/tools? The bugs can be everywhere, even in the .NET core. ::err... persisting what? the DAL code, the data that is supposed to go into your database? It can generate a data layer for persistence... not sure what you are getting at here. This statement seems very thona'ish... I mean persisting the DAL code; your data is a variable ;-) I doubt your templates can automatically generate a persistent data layer for you. If it's true that it IS persistent, then you have the chance to reuse that layer and not generate it every time; only the metadata will be enough. ::I'll have to admit, I am learning more about OR Mapping, but I am still not sold on it. One size does not fit all. I don't believe every problem has the same solution. Way too many variables.... I am still voting for more control. So am I ;-) Look, I didn't say that CodeSmith sucks, and I didn't say that using templates sucks either. All I have said is - I quote myself: "I think both methods can be good depending on the requirements you have. O/R mappers are the superb choice at some levels. But when it comes to high performance (when this IS a requirement, like banking systems etc.) you can better choose another solution, like using reporting tools in some cases." I'm using an O/R mapper now, and it saves me a big amount of developing time.
I get good service and fast bug fixes from that company - even though I haven't bought it yet. I'm open to other technologies or tools, and I'm still searching for better solutions; call it evolution.
    Thursday, August 12, 2004 6:25 AM
  • User-1938370448 posted
    Frans, why did you start developing LLBLGen? Should I write it in English or Dutch, which one do you prefer, Jeff? Ok, let's pick English. I could of course refer to Thomas' reply as you think we're the same person anyway, but hey, let's ignore that for a second. In January 2002, I started with .NET, using the latest beta of vs.net 2002, and tried to learn the new dataset, data designer and other goodies MS had advertised as great. I was assuming that I would now be able to design/generate what I otherwise would have had to write by hand in VB6 or VC++ and ADO. It very soon turned out that what VS.NET offered wasn't what I wanted. No SQL strings in my code, please. So I searched for a solution, as I didn't want to write a lot of procs by hand for my .NET test app (a bugtrack app). I ran into a simple tool which generated procs and simple C# classes. The tool was not that great, buggy, and lacked serious things. However, that one gave me the idea of writing such a generator myself, to teach myself .NET. So I started LLBLGen 1.x. First it generated only C# and CRUD procs, together with some more advanced procs like filter-on-FK-field. It soon turned out to be a nice utility others would like to have as well, so I published it on the net, with source code and docs. It became a huge success. I added VB.NET generation to it as well. We're a small ISV, mostly writing database-driven web applications with our own CMS. However, as more and more competition was arriving in the CMS market and prices got lower and lower, we looked for another market. By September 2002 we had over 40,000 downloads of LLBLGen 1.x and numerous requests for all kinds of features. I saw a market for tools generating code for data access, as others did too, but not that many. Because we had a huge install base, and a lot of requests for features which were not that easy to add to our old LLBLGen 1.x source code, we decided to give it a try: rewrite the tool to become a major player in the data-access solution market. So I started designing the new app, and we made sure we had the proper funding for a long development cycle (we estimated at least 6 months). In December 2002 development started. The original design was, as said, to rewrite the old concept of LLBLGen, thus with stored procedures, then calling classes, and on top of that classes which called these calling classes. The main issue with the old one was that people couldn't define custom filters, like SELECT * FROM Users WHERE firstName = @fn AND lastname LIKE 'Smit%': it required you to write a proc yourself and add the calling code yourself, which was cumbersome. I ran into trouble around April 2003. By that time I was almost done with my stored procedure designer, fully visual, and was doing some last work on my expression designer, complete with glyphs etc. It suddenly hit me: it would take ages before people had defined the proper procs, and then they would have to define the classes and after that the BL classes; it's just not productive. What's worse: I suddenly understood that the stored procedure route wasn't the way to go. What if a developer wanted to add a filter to a given proc? He would have to go into the designer, fiddle around with the designers to get the new proc, re-generate the code, etc. etc. Very cumbersome, time consuming, and thus not productive. I decided I had to throw away a lot of work and redesign the app. So I dropped what I had so far and started from the other side: what should be generated?
I spent a month on O/R mapping: what it was like, what it could do, what it couldn't do, where the weaknesses are, etc. After that month my O/R mapping approach was designed, and I wrote the complete generated layer by hand for Northwind in about a month. After that, moving it to generic code, and the rest to templates, wasn't that hard. I picked up the GUI I already had and refactored it into the GUI framework I use even today. Because the concepts were different, I had to write a lot of additional code to make it all function. The end result was even better than I anticipated. The tool was productive, it allowed you to get started on your real code in a couple of minutes, maybe less, and it also allowed you to be in control of what you want to retrieve/update/delete in the database and how you want to do that. Of course, the first version had some limitations, as we ran out of time in August. We needed a product on the market in September 2003, and luckily we made it. From day one it was a huge success, and because of our well-known name 'LLBLGen' it was immediately well known as well. It also introduced some features to an O/R mapper which hadn't been seen in O/R mapper land till then. One of them is 'Typed Lists'. These are read-only views on related entities, built with a subset of the fields from those related entities. The advantage is that you can pull in one go a set of read-only data from the database (which you can also filter) much faster than with objects. Since September 2003 a lot of new features have been added. A whole new paradigm has been added (adapter based, so no more persistence logic in the entities), more persistence logic has been added, the GUI has been revisited with more advanced logic in the typed list editor (which embeds quite some sophisticated logic, like finding all paths in relation graphs in a schema in reasonable time, and finding the minimal set of relations in a relation graph which keeps a given set of related entities together), and a lot of code has changed since then, to make the tool better, faster and even more productive. Our goal is to be the best tool on the market by the end of 2004 when it comes to data access and productivity in software development. Best in feature set, productivity, customer support and also price. Looking at the past year since LLBLGen Pro came on the market, I think we're on track to meet that goal :)
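    Conceptually (a hedged illustration, not LLBLGen Pro's actual generated code), a typed list is a strongly typed carrier for a set-based projection over joined entities, instead of fully materialized entity objects:

    using System;

    // The shape of a 'typed list': a read-only subset of fields pulled from
    // related entities in one set-based query, e.g.
    //   SELECT c.CompanyName, o.OrderDate
    //   FROM Customers c INNER JOIN Orders o ON o.CustomerId = c.CustomerId
    // Each result row lands in a small read-only carrier like this one.
    public class CustomerOrderRow
    {
        public readonly string CompanyName;
        public readonly DateTime OrderDate;

        public CustomerOrderRow(string companyName, DateTime orderDate)
        {
            CompanyName = companyName;
            OrderDate = orderDate;
        }
    }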
    Thursday, August 12, 2004 6:34 AM
  • User-2113706185 posted
    My 2 cents: instead of spending time ranting (I mean, even though it's a great discussion, I don't see anyone switching sides), the sceptics should really just sit down and try an O/R mapper on a small (maybe pet) project. Code generators (or templates or whatever) are of course great tools, since they optimize some processes, but when it comes to data access and domain objects, there is nothing that can beat a proper O/R mapper. Period (I've seen the light). That is why they exist. This discussion would never occur on a Java board! Why? Everyone has realized that you just do object -> relational db with an O/R mapper. So please get down from your high horses and just try it! Don't fear the unknown! Frans actually wrote a code generator before the mapper, so I know he has worked on both sides of the war. But have all you (code generators, templaters) tried an O/R mapper?! (and here I mean more than just downloading a trial, making a .aspx page and deciding it's crap) Well... Let the ranting begin ;o) Sane
    Thursday, August 12, 2004 6:37 AM
  • User-58325672 posted
    Still want to see a comparison matrix between the two: EntityBroker vs. LLBLGen Pro :-)
    Thursday, August 12, 2004 6:50 AM
  • User-1611549905 posted
    Precisely. This thread had gotten waaaaaaaaaaaaaaaaaaaaaaay off topic. It's probably more interesting to see what end users--rather than the vendors themselves--think of the various o/r mappers. It's all very well hearing about the benefits of mappers from thona and frans, but they are selling the things and hence have vested interests in defending their particular approaches/products. So a comparison--preferably an independent one--between the Big Two would be useful, but it would also be good to see some reviews of some other similar products, e.g. Mongoose Solutions' Objectz.net, which I have been taking a look at. Also, how good is the technical support that the respective vendors offer?
    Thursday, August 12, 2004 8:13 AM
  • User908927243 posted
    As the author of the second bicycle - er.. CodeSmith template - that BrandonC mentions on page 1, let me just make some small points. I wrote that template (and the CRUD template he refers to, at http://www.ericjsmith.net/codesmith/forum/default.aspx?f=9&m=1680) simply to save myself from a ridiculous amount of repetitive typing. Prior to using CodeSmith, I had written something similar in ASP.NET/C#, and prior to that in ASP/VBScript. The templates are being used daily here at Mind Over Machines, but sadly only on my current project. Nevertheless, we do save a tremendous amount of time using the templates. I love CodeSmith - it is a great tool - however the templates are, as yasky517 puts it, bicycles in a world with cars. I'd rather drive a car. Problem is, so far I have yet to come across a car (a.k.a. O/R mapper) that I have found worth buying. So for the time being, I'll continue biking... Oskar
    Thursday, August 12, 2004 10:08 AM
  • User-1938370448 posted
    I love CodeSmith - it is a great tool - however the templates are, as yasky517 puts it, bicycles in a world with cars. I'd rather drive a car. Problem is, so far I have yet to come across a car (a.k.a. O/R mapper) that I have found worth buying. So for the time being, I'll continue biking... Could you elaborate a bit on this, please? What has turned you away from the current crop of O/R mappers?
    Thursday, August 12, 2004 10:26 AM
  • User-560067886 posted
    > But until then, people like p00k really need to qualify their statements with something like "I don't know what I'm talking about", or even "theoretically it should be easy but I've never tried it". I very specifically referred to the "functionality served by an O/R mapper" - Data Access. I HAVE done this. Other than runtime changes, an O/R mapper offers me ZERO functionality that I don't already have in my architecture. Developers in this day and age are not just now deciding that they need data access. We've all done it. The O/R mapper developers are NOT the first to tackle this problem. They are NOT the only intelligent developers out there. CodeSmith offers a platform for me to turn the knowledge and existing architecture that I have ALREADY built a million times into generatable code. I work in this industry and I do know what a "real data-model" is. I've built and worked with them as a senior developer for ten years. I understand how to interact with them, I understand the features that an O/R mapper provides, and quite frankly I could write one if I thought they were the right solution for me. So do not dare sit there and presume to say that "I don't know what I'm talking about."
    Thursday, August 12, 2004 10:42 AM
  • User-560067886 posted
    > Precisely. This thread had gotten waaaaaaaaaaaaaaaaaaaaaaay off topic. lol....it was only "on-topic" for 7 posts. By the immense number of posts since, I would say the topic has officially changed. :)
    Thursday, August 12, 2004 10:59 AM
  • User908927243 posted
    Frans, in many ways it's simply FUD. And my limited research was done 10 months ago. Tools have been updated since then, and I should probably take a second look at some of them. But here are some of the impressions I got back then - nothing new here, Eric and others have definitely covered them, but you asked, so... * Almost every application I have ever worked on started off with some sort of legacy database. Therefore any O/R mapper that starts off with an object model - and builds a relational data model from that object model - is useless from my standpoint. * Even among the remaining O/R mappers - those that start with the relational data model and build on top of that - there seems to be a strong emphasis on hiding the "nasty relational stuff" so that the developer can focus on the "nice, orderly object model" (both of these are fictitious quotes, btw). I don't fear the db, or SQL. Hence "all SQL statements are created for you automatically" is not a strong selling point for me. * The last point I want to make is one of openness. If I could have an O/R Mapper that starts off with the relational model (or better yet - goes both ways), and that is open so that I could modify the generated code - or rather the generating code - then I'd be very interested; that's the kind of car I'd buy. 10 months ago, I didn't find a tool like that, so when I stumbled upon CodeSmith I quickly built my own template and have been mostly happy with it since. It didn't start out perfect, and along the way I have made several changes, but I was able to make those changes, and that is the best part of the whole experience.
    Thursday, August 12, 2004 11:04 AM
  • User-1308937169 posted
    IMO one of the main problems with Java is that their data access is lacking. I would hardly think that the reason to adopt an OR Mapper is because they do it in Java. They do lots of things in Java.... Entity Beans might be a Java developer's dream, but I don't see it. (All we need is for this thread to spin into .NET vs Java.) The idea that if you are not using an OR mapper, you are not using OO is ridiculous. Comparing ORM to grids, and widgets, and operating systems is ridiculous as well. What makes you think the sceptics haven't tried an OR Mapper? I'll be honest here. I haven't tried one since LLBLGen's earliest beginnings. I didn't buy into the concept then, and I am not buying into it now. I am sure there is lots of great thinking that went on to create it, but I don't think it will save me as much time in the long run. Maybe I'll get a chance to try it again. I already have a solution to my data access problem, but I also believe in trying new things. It sounds like OR Mapping has come a long way, and it will have to. It is a large problem to solve... solving the world's data access problems is huge. I would also point out that code generators will also generate code for other operations, not just data access. They are the ultimate in flexibility next to writing the text yourself. This comes at a price, and that is you have to write a template if it is not written already. They can also complement each other. Templated code generation and OR Mapping do not have to be mutually exclusive. It sounds like they can be made to work together. Keep in mind: discovery and design take way longer than writing the code. At least they should. You still have to create the schema, you still have to define the business rules, you still have to test it, blah blah. Reducing the code that has to be written is the cat, and there are many ways to skin it.
    Thursday, August 12, 2004 11:24 AM
  • User-1611549905 posted
    Is it my imagination, or are the most vociferous voices in favour of o/r mappers on these forums those of the vendors?
    Thursday, August 12, 2004 11:44 AM
  • User765121598 posted
    "The idea that if you are not using an OR mapper, means you are not using OO is rediculous." Perhaps it's just me, but if you have a lot of duplication in your *source*, (which the ActiveRecord pattern implies), and if the application contains more than a few persistent entities, then the architecture is a poor example of reuse, encapsulation, and refactoring. You may say: "But with this tool I overcome!", and maybe you're right, but the program itself is a static, fragile, behemoth that can be brought down with only the slightest of schema changes, and you'll invariably be tempted to mix your domain logic with your data layer. Perhaps you're more diciplined than that.
    Thursday, August 12, 2004 3:06 PM
  • User-1308937169 posted
    er... I think we are getting into academics here. You may be assuming that my templates are not very complex, and that I am not trying to enforce any or all of the OO rules. I am applying OO rules. The code that I am generating uses encapsulation, inheritance, reuse, etc. Just because I am using a templated solution is no excuse to ignore good design. You are correct: if you have a lot of duplicated code, you have not done a very good job of reuse... although I am not sure that matters if you are not writing it. If the OR Mappers generate a lot of duplicate code, and they work, who cares? Again, I don't see one clear winner here. If I am not writing it, and therefore I am not maintaining it, aren't we just talking academics?
    Thursday, August 12, 2004 4:26 PM
  • User765121598 posted
    "If I am not writing it, therefore I am not maintaining it, aren’t we just talking academics?" Maybe so, I'm not real confident on it. It's just like I said, no matter how much you *try* to follow good OO principles, if what you end up with looks like an ActiveRecord Pattern, then you're falling far short. After all, all an ORMapper is is a group of patterns organized in such a way as to abstract the Data Source. If your patterns don't look very similar, then you've either generated more code than really needs to be there, or the ORMapper isn't itself architected very well. Atleast that's my (admittedly limited) understanding. The way I see it, an ORMapper isn't a "black box" in theory, only (ideally) in usage. When I ask myself what an ORMapper actually is, I come up with a number of patterns. When you say "code generation", I'm not thinking of CodeSmith and some templates; I'm thinking of the resultant patterns. And one set of patterns is clearly superior to another IMO. That's my experience anyways, but it helps that people like the GoF, Martin Fowler, et all, endorse this line of thought. Sure, ActiveRecord is presented as a solution, but the *only* reason ORMapper isn't presented as the "always use this" pattern is the complexity of building one yourself. But if you don't have to, then why not? If you could make it as easy to use Domain Logic as Transaction Script, then why ever use the latter? And that's exactly what we have here as I see it. But I could always be wrong. :)
    Thursday, August 12, 2004 4:56 PM
  • User1104621235 posted
    jammycakes: "Is it my imagination, or are the most vociferous voices in favour of o/r mappers on these forums those of the vendors?" You can't blame the vendors for defending their products now, can you? I am not a vendor but a user of two of these applications. Last fall I started looking at OR mappers. I looked at the following products: 1. CodeSmith (yes, I know it's not an OR mapper, but I came across it during my search) 2. LLBLGen 3. Olymars 4. Entity Broker. One requirement I had was that I didn't want to use stored procedures, so I immediately ruled out CodeSmith templates and Olymars. Like Mtri mentioned a few posts back, I didn't even consider EntityBroker, based on thona's negative attitude in these forums. At the same time I started using LLBLGen Pro. Not only is LLBLGen Pro an awesome product, but the support you get from Solutions Design is by far the best support I have ever seen from any software vendor I have ever dealt with. Yes, that is a strong statement, but true. The support given via their support forum is fast, and every question is answered and worked on until solved. The other product I also use is CodeSmith... like others have said before, the power of CodeSmith is extensibility. I have created a template set that uses the metadata from the LLBLGen Pro project file. Using the CodeSmith engine I generate a business layer class, WebControl classes and a Web UI. So my picks are: LLBLGen - best OR mapper; CodeSmith - best tool to generate templated code. Bert
    Thursday, August 12, 2004 5:44 PM
  • User-1342539384 posted
    Yup, I am also using LLBLGen Pro and I am very satisfied with it. I started with Deklarit, then I moved to Pro because I wanted to avoid DataSets, and I've been using it since then. Thanks.
    Thursday, August 12, 2004 6:07 PM
  • User2131024838 posted
    I'm also evaluating LLBLGen and I like it. I'm also going to try CodeSmith after I'm done with Pro. :p
    Thursday, August 12, 2004 10:00 PM
  • User30447099 posted
    Indeed a very interesting discussion... The idealist in me is about to escape :) How about a consistent semantic from the client down to the data tier? What I mean is, shouldn't there be a common namespace and data types that can be used to work with data across all the tiers? My feeling is those characteristics should be imbued right into the platform, obviating the need for an O/R mapper. Frankly, an object is simply an instantiation of a single row of data, where the object's properties map to a relational table's columns. The problem is that if you retrieve 100,000 rows, do you instantiate 100,000 objects to represent that data? I would have to say no. In fact, couldn't a single instance of the code be used to parse the in-memory data structure, providing access to that data through indexers? My comments may appear a little naive, as I'm sure these techniques are already employed. My point is the problems are simply software engineering issues that have not satisfactorily been addressed yet. If they had, we would not have a requirement for O/R mappers. A table = an object. The columns in the table represent an object's properties. Is it not then a simple matter of the database providing a native object API to the underlying objects (tables)? All kinds of abstractions can then be done above this API to reconstitute, abstract, shape and mold the data to your liking. Microsoft adding the CLR to SQL Server, and various other database vendors adding Java virtual machines, is a step in the right direction. Regards, Kevin
    Friday, August 13, 2004 3:04 AM
  • User765121598 posted
    "The problem is that if you retrieve 100,000 rows do you instantiate 100,000 objects to represent that data." I'm not sure of any of this, so bare with me. :) Isn't that what you're doing with a DataSet? Aren't you, somewhere deep in there, instantiating 100,000 DataRows, with x number of DataCells, containing string Values? But what if a particular column is a bit? Or ntext? Or DateTime? Isn't it more efficient to strongly type these? Memory-wise I would imagine so, but computationally that's gotta be a lot of work if one of your columns is a DateTime, but you have to cast it from a string. "In fact, couldn't a single instance of the code be used to parse the in memory data structure providing access to that data through indexers." But isn't it safe to assume that if you get back 100,000 rows, it's because you want to display them? Most likely with some sort of formatting? If so haven't you just lost what little advantage you had with loose typing? "A table = an object." That's just the point of ORMappers. A Table is not an object. I can't create a table like this: Create Table [dbo].[Person] ( [Id] int Identity(0,1) not null, [Name] varchar(50) not null, [MyCars] Car[] not null ) "Is it not then a simple matter of the database providing a native object API to the underlying objects (Tables)" That's exactly what an ORMapper does. It provides persistance for Objects into *a different data structure* used by Relational Tables. It just doesn't come from Microsoft/Oracle/et all.
    Friday, August 13, 2004 9:15 AM
  • User-528039901 posted
    ::That's just the point of ORMappers. A Table is not an object. I can't create a table like this: This is about it. Thanks to stuff like inheritance and subclasses having potential additional fields, just technically, an object is NOT a table in anything but the most trivial case. O/R mappers are NOT about mapping tables to objects; they are about working with objects and getting their data persisted to tables. The difference is that an O/R mapper will still allow things such as inheritance (data-driven) to happen automatically. And believe me, once you have done data-driven inheritance for the first time and see the benefits (in certain situations), you never want to go back. ::"The problem is that if you retrieve 100,000 rows do you instantiate 100,000 objects to ::represent that data." Now, let's ignore for a moment a DataReader (which is about the only way out of this, and then you'd better be making ASP.NET only and can forget the object immediately). How exactly would you love working with the data of 100,000 rows (leaving aside the fact that no one in his right mind will get 100,000 rows into memory unless he really has to, and this is rare), without actually having them in SOME sort of object? As a byte array they are basically useless. ANY sort of structure - is an object automatically. Whether a business object or a generic data container (DataTable in a DataSet) - you will always have at least 100,000 objects for 100,000 rows.
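    As a hedged illustration of what "data-driven inheritance" means in practice (a single-table sketch with an invented discriminator column; real mappers support several strategies):

    using System.Data;

    public class Person   { public int Id; public string Name; }
    public class Employee : Person   { public decimal Salary; }
    public class Manager  : Employee { public int DirectReports; }

    public class Materializer
    {
        // The row's data decides which subclass is instantiated, something
        // a plain table-equals-class mapping cannot express.
        // (Field population is elided for brevity.)
        public static Person FromRow(IDataRecord row)
        {
            switch ((string)row["PersonType"])
            {
                case "Manager":  return new Manager();
                case "Employee": return new Employee();
                default:         return new Person();
            }
        }
    }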
    Friday, August 13, 2004 9:48 AM
  • User30447099 posted
    "That's just the point of ORMappers. A Table is not an object. I can't create a table like this:" Create Table [dbo].[Person] ( [Id] int Identity(0,1) not null, [Name] varchar(50) not null, [MyCars] Car[] not null ) I say why not. The data store could internalize the mapping of Person to the Car composite object (ie table). A simple 1-Many relationship. I think the problem is we are so conditioned to thinking about databases and objects in completely different contexts. Once you accept that fact that a table could potentially be a container (a strongly typed persistence store if you will) for an objects properties the notion of an O/R wrapper just fades away. The fact we can have an in memory representation of the data and relationships, in the form of objects, implies a persistence of that data and relationship is also possible. Fundamentally, the only difference is volatile RAM memory vs. persistent disk storage and the resultant performance tradeoffs Once performance issues can be addressed I see no need for O/R wrappers. I don't mean to trivialize any of this as I'm sure the challenges are daunting but I have no doubt these issues will be addressed as our industry matures. Kevin
    Friday, August 13, 2004 9:30 PM
  • User30447099 posted
    "100.000 objects for 100.000 rows" Conceptually sure that is the natural way to think about it... what I'm saying is that there only needs to be one copy of the code for reduced memory overhead that maps to underlying data (properties). If concurrent access to that code becomes an issue then I suppose some form or object pooling could be baked into that platform (ie Microsoft COM+ transaction monitor) Again I may be naive about how difficult all this is but it all boils down to a software engineering exercise. One that will be addressed in time. None of what I'm suggesting is new but the fact that O/R mappers are still required suggests that many of these ideas need to be repackaged/reconstituted to address this problem space.
    Friday, August 13, 2004 9:54 PM
  • User1660651112 posted
    I wandered into this late, and I'd like to make several comments. I tagged onto this post of Eric's only because it's late in the thread and I am going to weigh in on the side of code gen. To state my bias up front, I'm the author of "Code Generation in Microsoft .NET" (Apress). My passion and my business is empowering people and groups to do code generation on their own. I don't want to see everyone reinventing the wheel, just understanding and controlling what they are doing. CodeSmith is one way to do that; my tools (XSLT, CodeDOM, or brute force based, and to be updated soon) are another. (I don't know the degree to which you can tweak tools like LLBLGen and remain in their update path (an absolute necessity).) I'd like to point out a few things that appear to have been missed. The point of generating into objects (as opposed to runtime mapping) is not performance. It's strong typing. Partnering with your process tools (strong typing as a compiler partner in this case) is a key element in sane development. The point of stored procs is not performance. Stored procs allow you to eliminate whole categories of privileges on your runtime account - allowing a much more secure system. I consider design-time abstraction to be OR/M. I differ from Eric in the importance of this mapping. A control layer (an abstraction) between your database structure and your object design is something that I consider essential. I understand you can extend CodeSmith to provide this, but I'll warn you these are very difficult waters to navigate. My first abstraction sucks. Oh, sorry, did I say that? I mean it does the job for many databases, but my initial mapping is exceedingly difficult to maintain, and I'm currently in beta on its replacement. A major slowdown is that I want to rewrite that chapter so you understand what's happening. But it's in use at my clients, and other than backwards compatibility problems (ick) it has been successful. I say that because it's important to realize that the space Thona and others are working in here is hard stuff. Whether you do it at design time or run time, defining these mappings in a sufficiently flexible manner relying on the database definitions is not for the faint of heart. By sufficiently flexible (person/employee/manager), I mean things like allowing tables to be logically derived from other tables, children to be retrieved selectively, numerous strongly typed retrieval mechanisms, etc. I think these are base features, not optional ones. If you've got a compelling reason for your data to be morphed at runtime - often because you don't know the structure until then - you are in a non-strongly-typed scenario. While you may elect not to use DataSets, it's a dataset mentality. If that's what you need, it's what you need. If you can define the objects at design time (and note that the number of objects you can define from a set of tables is limited only by your imagination and ability to intelligently name them), you get strong typing and you expose the objects in precisely the manner you want, in IntelliSense, to your UI and business logic programmers and the compiler. That is incredibly powerful, and abandoning it for runtime control is not something to undertake lightly. I think the good news is we're having this conversation. We aren't saying "is it better to hand code?" We're moving into a new realm.
Whether it's design time or runtime, CodeSmith or XSLT, hashing whole files or replacing regions - these are three broad levels of things to actively discuss (and they are important) within the space of avoiding hand coding for repetitive segments. Kathleen Dollard Author, "Code Generation in Microsoft .NET" Microsoft MVP
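    Kathleen's strong-typing point in miniature (a hedged sketch: table is an existing DataTable, and Employee with its FetchById loader is a hypothetical generated class): the generated property turns a runtime failure into a compile-time one.

    // Untyped access: a misspelled column or bad cast fails only at run time.
    DataRow row = table.Rows[0];
    DateTime hired = (DateTime)row["HiredDate"];

    // Generated, strongly typed access: the compiler and IntelliSense catch
    // the same mistake before the code ever runs.
    Employee emp = Employee.FetchById(1);   // hypothetical generated loader
    DateTime hiredTyped = emp.HireDate;     // a wrong name won't compile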
    Sunday, August 15, 2004 10:04 PM
  • User765121598 posted
    "I say why not." Because then it wouldn't be a *Relational* Database, it'd be an Object-Oriented Database. "I think the problem is we are so conditioned to thinking about databases and objects in completely different contexts." Not at all, just *Relational* Databases. You don't seem to be aware that there's more than 1 type of database. What you're describing already exists. It just isn't called Microsoft SQL Server. "Once you accept that fact that a table could potentially be a container (a strongly typed persistence store if you will) for an objects properties the notion of an O/R wrapper just fades away." Once you accept the fact that Relational Databases will never, by definition, work that way (since if they did, they'd no longer be *Relational*), then you can understand that O/R Mappers exist solely to map *Objects* into a *Relational* model. Of course in an OODB there's no need for an O/RMapper. But that's not what the topic is about. If you want an OODB, buy one. For those of us that *have* to use MSSQL, or Oracle, or whatever, we'll need (or want one anyways) an ORMapper, wether that's 3rd party, or eventually gets supplied by the vendor of the RDBMS doesn't change the fact that it's still there. Re: Strongly Typed Objects: Er... that's again, exactly what an O/R Mapper does.
    Monday, August 16, 2004 10:09 AM
  • User30447099 posted
Hi Moot, My comments are forward-looking, but I thought that was obvious. To paraphrase myself... I look forward to the day we can use object-relational databases... PS. Never is a long time :) In the interim I wrote my own CRUD generator that generates mappings at design time to avoid runtime penalties. I looked at a variety of other products and for various reasons (cost, complexity, architecture) wasn't satisfied with any of them. The CRUD generator is free to anyone who requests it and includes source code. More information about the CRUD generator can be found elsewhere in these forums. Regards Kevin
    Monday, August 16, 2004 9:25 PM
  • User-1241273394 posted
It's probably more interesting to see what end users--rather than the vendors themselves--think of the various O/R mappers. It's all very well hearing about the benefits of mappers from thona and frans, but they are selling the things and hence have vested interests in defending their particular approaches/products. So a comparison--preferably an independent one--between the Big Two would be useful, but it would also be good to see some reviews of other similar products, e.g. Mongoose Solutions' Objectz.NET, which I have been taking a look at. Also, how good is the technical support that the respective vendors offer?
-------------------------------
I do not post very often to these forums, and my opinions may be blown off by some of the more regular posters here, but I still do not believe there is any one best O/R mapper out there. I don't just mean that there are preferences to be taken into account, either. I mean that for one of my applications, there doesn't appear to be a single mapper which perfectly fits our needs. That being said, we wound up using Objectz.NET for the project.

Most mappers seem to ignore features such as full-text searching, something which we had to get the source code from Mongoose for and integrate ourselves. I'm not that familiar with many of the other mappers, past the trials I played with, but I know that Mongoose does not allow you to search on fields which are not mapped. This is causing a major headache for us due to the fact that some of our objects have tons of text, yet in a lot of the screens we do not care to show the text. More customization work is still required to add that feature to Mongoose's Objectz.NET.

That being said, the support from Mongoose is somewhere between non-existent and horrendous. They basically abandoned the product. I believe they saw ObjectSpaces as solving the O/R mapping problem in the same manner that their product does, and figured they would simply stop development, and basically support as well. Another key point to take note of with Objectz.NET is that they do use DataSets within their code. This is also how they handle their cache. This obviously is not the fastest solution, and I'm willing to bet some of these other mappers provide better performance. I still like Objectz.NET the best out of all the mappers I've tried, though.

One key feature for me is that the objects do not require a base class. You might see some users say "so what", and explain that there has never been a single case where that is a problem, etc. etc. etc., but that really is not the case for me. Requiring the extension of a base class makes your objects dependent on another assembly, more bloated and less portable. For example, I have one application which I have worked on which uses the same objects on the Compact Framework, and can then send those objects to a server which maps them to the database using Objectz.NET. This allows me to share the same business logic between multiple applications using the same assembly, where one maps to a database and the other does not. There are countless examples where you might use the same objects to map to a database in one part of the application, yet map the object to another persistence mechanism on the other end. For example, the same object may be mapped to an EDI file for transmission to another application, yet stored locally in a database and be mapped there. If the EDI mapping required a base class, then of course it wouldn't work.
Basically I believe in keeping the persistence separate from the business objects. The business object should not know anything about how it is persisted. That's one issue I have with a lot of mappers as well: I do not like mechanisms such as customer.Save(). That is outside the customer's scope, and that object should not understand such a method. Some others may disagree with me there, but those were a lot of the strengths I found with Objectz.NET. It's a shame they abandoned it, though; they had a lot of good ideas and principles.

Also, one thing that has confused me through all of this... Why is there so much arguing over code generation versus O/R mappers when products such as LLBLGen Pro are considered O/R mappers? Isn't LLBLGen just a code generator with a UI which assists in configuring the templates to map between the objects and the database? It just seems really stupid to me to argue that code generation is not the way to go, and that you should use an O/R mapper like LLBLGen Pro. What am I missing there? I'm not saying that it's bad that LLBLGen Pro uses code generation, just that it's an argument that makes no sense to me.
    Tuesday, August 17, 2004 10:13 AM
  • User-1938370448 posted
Most mappers seem to ignore features such as full-text searching, something which we had to get the source code from Mongoose for and integrate ourselves.

Full-text search is very database specific, so it's not that weird that it's ignored. If you had chosen LLBLGen Pro, you would have been able to add a full-text-search predicate class (we use a full OO query system) within 2 minutes tops and without any intervention from us (it will be added to the library within a month with the upcoming runtime lib upgrade, so all users will have it).

That being said, the support from Mongoose is somewhere between non-existent and horrendous. They basically abandoned the product. I believe they saw ObjectSpaces as solving the O/R mapping problem in the same manner that their product does, and figured they would simply stop development, and basically support as well.

Hmm. ORM.NET, Pragmatier, Objectz.NET.... all dead now... who's next?

One key feature for me is that the objects do not require a base class. You might see some users say "so what", and explain that there has never been a single case where that is a problem, etc. etc. etc., but that really is not the case for me. Requiring the extension of a base class makes your objects dependent on another assembly, more bloated and less portable.

It totally depends on where you define where an 'entity' lives. No, don't come with the dreaded 'but there is just 1 definition', because that's not true. Dr. Peter Chen defined the 'Entity' in the late 70's as a concept living in the database (the E/R model does ring a bell, I think); Fowler/Evans defined it as a concept living outside the database. This is key to understanding why some solutions work differently than others. Besides that, a base class is often a requirement. I'll name a few features you otherwise have to write by hand, because of the single inheritance mechanism in .NET:
- XML serialization WITH cyclic references and interface-typed members.
- databinding support (ITypedList etc.)
- IBindingList support
- IEditableObject support
- custom sorting
- in-object multi-versioning of fields
etc. With classes you write yourself, you have to go through hoops to get this all in. With a base class where this is already implemented, it's a breeze (see the sketch at the end of this post).

For example, I have one application which I have worked on which uses the same objects on the Compact Framework, and can then send those objects to a server which maps them to the database using Objectz.NET. This allows me to share the same business logic between multiple applications using the same assembly, where one maps to a database and the other does not.

...until you use a SortedList in your classes and the CF doesn't support it. You see, of course there are situations where it might be handy that a class isn't derived from a base class. However, there are also a lot of situations where the opposite is true.

There are countless examples where you might use the same objects to map to a database in one part of the application, yet map the object to another persistence mechanism on the other end. For example, the same object may be mapped to an EDI file for transmission to another application, yet stored locally in a database and be mapped there. If the EDI mapping required a base class, then of course it wouldn't work.

Of course, then it wouldn't work. However, if the O/R mapper is designed properly, it is doable. As an example, our code can fetch an entity from SqlServer and persist it at runtime in, for example, an Oracle database, using a different adapter class.
I can also write an adapter which persists the entity to a file, MSMQ, or whatever.

Basically I believe in keeping the persistence separate from the business objects. The business object should not know anything about how it is persisted.

It's a choice: is persistence 'behavior' (and therefore should be added to the entity), or is persistence a 'service' (and therefore should be provided by an external object)? We provide both mechanisms, so it's up to you what you choose to use.

That's one issue I have with a lot of mappers as well: I do not like mechanisms such as customer.Save(). That is outside the customer's scope, and that object should not understand such a method.

Often the motivation (which you somewhat neglect to describe) for this is that a team working on a big application is not allowed to use persistence logic like 'entity.Save()' but should use a black-box BL library (tier) for that. By separating the persistence logic from the data container (the entity object), you can achieve that. Also, it makes usage of it in remoting scenarios more logical. On the other hand it might seem weird: if you open a file in a StreamReader, you're also not asking some other object to read the data for you; that behavior is embedded in the StreamReader class, and everyone expects that. (Although one may argue that the StreamReader IS in fact the add-on object for the object 'file', which is virtual (i.e.: an outside resource) to the developer.)

Also, one thing that has confused me through all of this... Why is there so much arguing over code generation versus O/R mappers when products such as LLBLGen Pro are considered O/R mappers?

Erm, you think LLBLGen Pro is not an O/R mapper? If not, would you like to share the definition of an O/R mapper with me? Why is LLBLGen Pro NOT an O/R mapper? Because it maps classes to tables, contains dynamic query engines, an object query language etc.? True, LLBLGen Pro is not a pure O/R mapper as some theoretically focused people like to define it (and there is no single definition really, like there is no single definition for N-Tier development and an N-Tier application). LLBLGen Pro does more than just O/R mapping: it allows you to define read-only views (typed lists, we call 'em) on related entities, so you can work around the disadvantage of working with objects: sets are hard to construct. We also support views as read-only lists, and calls to stored procedures so you can invoke them directly with one line of code. We will take this further with GetScalar functions, aggregates, group by, having clauses, and SQL expressions per field (for db-side data mangling or filters) within a month. That's why it's an O/R mapper-code generator, not a pure O/R mapper: it's a total data-access solution which uses O/R mapping techniques to make it possible.

Isn't LLBLGen just a code generator with a UI which assists in configuring the templates to map between the objects and the database?

No, the GUI is just the designer: the code generated is code which customizes generic code compiled in the runtime library (through the strategy pattern). The generated code, together with the generic library it builds on, forms a layer which already contains your business entity classes and provides a true OO model for working with your database (and more :)). That's why there is code generation: because you DO need a class 'Customer' and a class 'Order' and, if you want, a typed collection 'CustomerCollection'.
It just seems really stupid to me to argue that code generation is not the way to go, and that you should use an O/R mapper like LLBLGen Pro. What am I missing there? I'm not saying that it's bad that LLBLGen Pro uses code generation, just that it's an argument that makes no sense to me.

Ignoring code generation indeed makes no sense. After all, as I said, you do need a Customer class, an Order class and perhaps 500 more of them. You are going to write these by hand? I hope not. However, without a proper base class to customize with your own code, it will be a hell of a job to get that all up and running. Unless code generation is used, of course :).
----------------
About using O/R mapping vs. using an O/R mapper: yes, there is a difference. You see, in the .NET world (or the MS world really), people are aware of the 'data-access' problem: every database-targeting application requires some code to work properly with a relational model in an RDBMS. It's overhead code, really. So developers in the MS world start looking for a solution to solve that 'data-access' problem. What do they want? A solution for the data-access problem. Now, there are some people who talk about O/R mapping and O/R mappers and pure O/R mappers etc. I like O/R mapping as a technique; I've worked with it for 2.5 years now. There is a problem though: before a developer understands that O/R mapping is a solution to the data-access problem, the developer first has to learn what O/R mapping is. In general, developers in the MS world are not that O/R savvy. They are Windows DNA savvy, ADO, recordset objects, stored procedures, and with .NET: DataSet savvy. In short: set-based SQL savvy.

What's practical here? To preach pure O/R mapping theory? No. What's practical is what works to solve the data-access problem. You can solve that problem with a solution which uses O/R mapping as a technique but also has other functionality on the side to make the picture complete. That's 'using O/R mapping'. You can also say: "I use an O/R mapper". How does that solve the data-access problem? That's unclear. Does the O/R mapper have nice functionality so you can create your Crystal Reports with the same codebase? Probably not; that's mostly set-based logic. The market for pure O/R mappers and pure O/R mapper frameworks is very small, and looking at the products which are going out of business at the moment, the market is getting smaller and smaller. The reason is simple: it's not important that you use an O/R mapper, it's important that you solve the data-access problem. How is not important; just solve it. In the end, what counts is whether you make the deadline, the budget, the feature set to implement, the maintainability demands, etc.
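To illustrate the base-class argument above, a minimal sketch of how IEditableObject support can be implemented once in a shared base class; the EntityBase and CustomerEntity types are hypothetical, illustrative only, and not LLBLGen Pro's actual code:

    using System;
    using System.Collections;
    using System.ComponentModel;

    // Implement IEditableObject once here and every generated entity gets
    // grid-style BeginEdit/CancelEdit support for free.
    public abstract class EntityBase : IEditableObject
    {
        private Hashtable fields = new Hashtable();   // field name -> value
        private Hashtable backup;                     // snapshot for CancelEdit

        protected object GetField(string name) { return fields[name]; }
        protected void SetField(string name, object value) { fields[name] = value; }

        public void BeginEdit()
        {
            if (backup == null) backup = (Hashtable)fields.Clone();
        }

        public void CancelEdit()
        {
            if (backup != null) { fields = backup; backup = null; }
        }

        public void EndEdit() { backup = null; }
    }

    // A generated entity only has to expose typed properties.
    public class CustomerEntity : EntityBase
    {
        public string Name
        {
            get { return (string)GetField("Name"); }
            set { SetField("Name", value); }
        }
    }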
    Tuesday, August 17, 2004 3:20 PM
  • User-1241273394 posted
Frans, I didn't mean to single out your product like it was a negative thing; I was just trying to use it as an example.

Erm, you think LLBLGen Pro is not an O/R mapper? If not, would you like to share the definition of an O/R mapper with me? Why is LLBLGen Pro NOT an O/R mapper? Because it maps classes to tables, contains dynamic query engines, an object query language etc.? True, LLBLGen Pro is not a pure O/R mapper as some theoretically focused people like to define it (and there is no single definition really, like there is no single definition for N-Tier development and an N-Tier application).

My point was more that LLBLGen Pro uses code generation, so to argue that it is better than code generation, or that code generation is better, seems stupid. It just uses code generation for a very specific purpose. Like I said, I wasn't trying to argue that that is bad, or that "pure" O/R mappers are better. I was just trying to state that if you call LLBLGen Pro an O/R mapper, it makes it hard to debate code generation versus O/R mappers, since O/R mappers can use a lot of code generation.

...until you use a SortedList in your classes and the CF doesn't support it.

My classes were developed with the Compact Framework in mind. They do not include methods which are not supported by the Compact Framework. This has not caused any issues as of yet, but I understand your point that there are other limitations with such a scenario.

What's practical here? To preach pure O/R mapping theory? No. What's practical is what works to solve the data-access problem. You can solve that problem with a solution which uses O/R mapping as a technique but also has other functionality on the side to make the picture complete. That's 'using O/R mapping'. You can also say: "I use an O/R mapper". How does that solve the data-access problem? That's unclear.

I agree 100% with you. We didn't go with Objectz.NET because it was a "pure" O/R mapper, but rather because of features like their query engine, their not requiring a base class, and ease of use. The issue is of course the time it takes to develop a solution combined with the quality of the resulting solution. And just as a side point, I have nothing against code generation; I can't really stress that enough. In the same projects where I've used Objectz.NET I've used CodeSmith for my strongly typed collections. Strongly typed collections have been invaluable to me, and doing them any way besides code generation seems silly to me.
    Tuesday, August 17, 2004 7:07 PM
  • User-528039901 posted
    Strongly typed collections have been invaluable to me, and doing them any way besides code generation seems silly to me.
In a year, using code generation will look silly to you, when we get generics. SADLY this is about a year away for most teams - or more.
    Wednesday, August 18, 2004 12:52 AM
  • User-1938370448 posted
    My point was more that LLBLGen Pro uses Code generation, so to argue that it is better than code generation, or that code generation is better seems stupid. It just to use code generation for a very specific purpose. Like I said, I wasn't trying to argue that that is bad, or that "pure" O/R mappers are better. I was just trying to state that if you call LLBLGen Pro an O/R Mapper, it makes it hard to debate code generation versus O/R Mappers, since O/R Mappers can use a lot of code generation. Ok, agreed. Well, in the end, what an O/R mapper can do, you can generate it of course, although generally speaking: what's seen as a pure code generation solution doesn't use a generic set of classes which do all the work.
    Wednesday, August 18, 2004 9:12 AM
  • User-560067886 posted
    > In a year using code generation will look silly to you, when we get generics. lol lol lol lol lol...bwhaahhahahahaha way to completely demonstrate that you don't have the slightest clue as to the power of template based code generation.
    Wednesday, August 18, 2004 11:23 AM
  • User-1938370448 posted
    lol lol lol lol lol...bwhaahhahahahaha On a serious note: if you want to be treated like a grown up adult and not like a 12 year old kid, please, act like one. This is a serious discussion and the last thing we all need is a childish flame fest. Thank you.
    Wednesday, August 18, 2004 11:28 AM
  • User-528039901 posted
::lol lol lol lol lol...bwhaahhahahahaha
::way to completely demonstrate that you don't have the slightest clue as to the power of
::template-based code generation.

Not at all. Someone posted that he values code generation ESPECIALLY to get type-safe collections, and I answered that in a year generics will make this better. I did not say anything else. And frankly, using code generation just because we do not have generics is stupid. It sucks. It HAS to be done somehow if you want strongly typed collections, but generics fulfill THIS part of code generation MUCH more nicely. But then, seeing the obvious in my statement does not seem to be something you like to do, right?
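For later readers, a minimal sketch of exactly the trade-off being discussed here, assuming a hypothetical Order entity (the per-type generated OrderCollection is what the .NET 1.x comment stands in for):

    using System;
    using System.Collections;
    using System.Collections.Generic;

    public class Order { public int Id; }

    public class GenericsDemo
    {
        public static void Main()
        {
            // .NET 1.x choices: an untyped ArrayList (casts, run-time errors)
            // or a code-generated OrderCollection per entity type.
            ArrayList untyped = new ArrayList();
            untyped.Add(new Order());
            Order first = (Order)untyped[0];        // cast required

            // .NET 2.0: one generic class replaces every generated
            // strongly typed collection - no cast, compile-time checked.
            List<Order> typed = new List<Order>();
            typed.Add(new Order());
            Order second = typed[0];

            Console.WriteLine(first.Id + second.Id);
        }
    }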
    Wednesday, August 18, 2004 11:29 AM
  • User908927243 posted
    p00k: I believe - I hope - thona was aiming that remark ONLY at the act of using Chris Nahr's templates for typed collections. And though Chris' templates might not exactly look *silly* when generics come around, the major reason for using them WILL disappear. But in the meantime they are a great tool - as thona also states, generics is a year away, realistically speaking. Where are the moderators anyway?
    Wednesday, August 18, 2004 11:30 AM
  • User-560067886 posted
Sorry dude, when someone drops a completely baseless and humorous comment like he did, one that makes me laugh so much, I simply must tell the world. Thona has spent a good portion of this thread flaming code-gen'ers as the stupid and uninformed. When he makes a comment like that, it reveals that, while he may know O/R mapping, he simply doesn't get code generation at all. He obviously sees it as nothing more than a keystroke saver, which is quite frankly short-sighted. If someone is gonna stand as an "expert" on why a particular technology/approach doesn't work, perhaps he should first understand it. Btw, out of curiosity, why is it always 12? Nobody ever accuses someone of acting like a 6-year-old, or a 15-year-old... always seems to be 12... hmm... weird. On a serious note: I apologize if I started this thread down a less professional path. It certainly was not my intention.
    Wednesday, August 18, 2004 11:39 AM
  • User1356982465 posted
I'm curious about the occasional "where are the moderators" comments -- what exactly do you think the moderators should be doing on this thread? Certainly Thona is rude most of the time (I don't think anyone disagrees), but this has been a very good discussion, including, mostly, Thona's role in it. Also, since some of us moderators are also active participants in this one, it would be quite biased if we were to "moderate" without very good cause.
    Wednesday, August 18, 2004 2:43 PM
  • User908927243 posted
Well, you have a point. thona does ask for it, but at the same time, blatant troll posts ought to be returned to sender with some statement like "clean it up - then repost". Of course, in this instance p00k seems to have misread thona's post as a troll, and thereby responded in kind, inadvertently making himself look less than professional in the process.
    Wednesday, August 18, 2004 2:50 PM
  • User-110219988 posted
    Could you recommend such a book? I'm interested in the subject.
    Friday, August 20, 2004 11:17 AM
  • User-508912166 posted
I know I'm jumping into this thread pretty late, but I'd like to get a chance to give y'all some of my thoughts on code generation vs. O/R mapping. First of all, what are the problems that these techniques attempt to solve (feel free to add to this if I've missed anything)?

- Code generation is a solution to the drudgery of writing repetitive code. At its most basic level, code gen takes the form of default class templates in VS (so you don't have to write class and namespace declarations for every class, or hook up Page_Load methods for every aspx code-behind file). Using a templating tool like CodeSmith allows you to take this a *lot* further, by generating strongly typed collections (this particular template will be invalidated in 2.0), and even generating entire DALs and webforms based on a database structure. We all use code generation in some way, unless we're writing our applications entirely in Notepad.

- O/R mapping is an attempt to solve the problem that comes about when we have a nice object-oriented system that needs to be persisted to an efficient relational data store. The goal is to be able to write object-oriented persistence code for our system. Nothing more, nothing less.

So why do we end up having conversations about "O/R mapping vs. code generation", when they're not even really attempting to solve the same problem? These techniques are not mutually exclusive; indeed, an O/R mapper is a type of code generator almost by definition: it takes an object and generates SQL code based on the mappings. Most commercial O/R mappers integrate code generation into the whole mapping process, generating objects and mapping files for you based on the database schema. They're totally complementary techniques, people.

That said, let it be known that I have used an O/R mapper in almost *all* the projects I've worked on in the last year and a half (since I unwittingly wrote my own simple O/R mapper before I even knew what they were). The one exception was a job application system I built almost a year ago. We had an extremely tight deadline, and so I looked to Frans' original free version of LLBLGen (which was not an O/R mapper at all). It seemed like a great solution at the time, since I could point the application at my database and have LLBLGen generate the *entire* DAL of my application, including stored procedures, objects, and persistence code. However, while I may have gained initially, I still had a large number of stored procedures and C# code files that had to be actively maintained. And once I'd customized the generated code according to the needs of the client, I had a huge maintenance problem on my hands. This is something that an O/R mapper greatly reduces (not entirely, though).

Based on my own personal experiences (and yours may differ), I think that code generation is something that we all would be stupid *not* to use. However, if you think you can simply go ahead and generate an entire layer or two of your application, be prepared to maintain all that code as if you'd written it by hand in the first place. Once you initially generate your code, it becomes difficult to add in your own custom stuff without running the risk of overwriting it when you have to change your database structure due to conditions unforeseen at the start of your project. That said, it'll be interesting to see what happens on the code gen front once Whidbey goes final and partial classes come into play.
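A minimal sketch of what that partial-class split could look like under C# 2.0; the file names mentioned in the comments are hypothetical:

    // Generated half (e.g. Customer.generated.cs): safe to regenerate at
    // any time, because custom code lives in the other half.
    public partial class Customer
    {
        public int Id;
        public string Name;
    }

    // Hand-written half (e.g. Customer.cs): survives regeneration untouched.
    public partial class Customer
    {
        public bool IsValid()
        {
            return Name != null && Name.Length > 0;
        }
    }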
    Saturday, August 21, 2004 1:31 PM
  • User-435756917 posted
1. CodeSmith (yes, I know it's not an O/R mapper, but I came across it during my search)
2. LLBLGen
3. Olymars
4. Entity Broker

Most of these use code generation that is compiled to create a data access layer. A better question given these choices would have been: "What is your favourite data access layer generator?" By most popular definitions, O/R mappers generate SQL based on some mapping between objects and a relational database. Examples include NHibernate, OJB.Net, and WinFS. The new LLBLGen is a hybrid of code generation and O/R mapping and is definitely worth a look. All O/R mappers handle things differently, so you need to look at a few of them to determine what is right for you. WinFS isn't out yet, so it isn't an option. And the choice between OJB.Net, NHibernate and 'the rest' can be made on price (the former are open-source and free). Assuming you wanted to go the open-source/free route, OJB.Net handles the object persistence in a more transparent way, whereas NHibernate has better documentation and pre-fetching, and seems to allow you to control the persistence and retrieval a little more tightly. I prefer OJB.Net. I'm using it on a smallish project and haven't hit any limitations. I have experimented with NHibernate a bit though, in case the lack of pre-fetching in OJB.Net becomes a problem in the future. - Chris http://weblogs.asp.net/chrisgarty/
    Friday, August 27, 2004 2:06 AM
  • User-528039901 posted
Well:
::4. Entity Broker
and
::Most of these use code generation that is compiled to create a data access layer.
To everyone: the "most of them" is all EXCEPT the EntityBroker, which does NOT generate the DAL.
    Friday, August 27, 2004 12:10 PM
  • User1904710788 posted
Sooo.... I come back to the forums after my vacation to see what's going on with my posts. Then I see my name at the top. 7 pages. Wow, you guys have been busy. Oh, and my question was never answered 7 pages later :)
    Friday, August 27, 2004 9:33 PM
  • User1104621235 posted
    maybe you should read all the pages... I answered your question... ;) bert
    Friday, August 27, 2004 11:09 PM
  • User-1938370448 posted
Well:
::4. Entity Broker
and
::Most of these use code generation that is compiled to create a data access layer.
To everyone: the "most of them" is all EXCEPT the EntityBroker, which does NOT generate the DAL.

No, and neither does LLBLGen Pro. Both generate classes AND use a runtime lib. You do generate code, which is what he meant; he didn't mean you were using the Pragmatier approach ;)
    Saturday, August 28, 2004 3:40 AM
  • User-437066291 posted
A little off topic, but can you guys post some of your VB.NET CodeSmith templates? ScAndal
    Tuesday, September 7, 2004 12:52 PM
  • User-1992441347 posted
This may be a dumb question, but one application requires persisting an object to XML first, and later persisting the XML (object) to an RDBMS. Does any O/R mapper support this function? To convert an object to XML there are many ways; however, neither serialize/deserialize nor the XML DOM is an efficient way of generating XML from an object. Has there been any product that persists XML to a relational DB (not as a blob, of course)?
    Sunday, September 12, 2004 11:59 AM
  • User-1241273394 posted
This may be a dumb question, but one application requires persisting an object to XML first, and later persisting the XML (object) to an RDBMS. Does any O/R mapper support this function?

Well, I believe they all do. .NET comes with an XML serializer, so you can use the same object to serialize/deserialize to and from XML, and then use the O/R mapper to map to and from the database. We have used this technique to support pending objects: we would leave an object as XML on a client application, and only after the object was valid to be added to the database would it be added. Until then it would stay as XML. Does that make sense, or is that similar to what you're talking about? This technique has worked really well for us.
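A minimal sketch of that round trip using the .NET XmlSerializer; the PendingCustomer type is hypothetical, and the final mapper call is left out:

    using System;
    using System.IO;
    using System.Xml.Serialization;

    // XmlSerializer needs a public type with a parameterless constructor.
    public class PendingCustomer
    {
        public int Id;
        public string Name;
    }

    public class PendingObjectDemo
    {
        public static void Main()
        {
            PendingCustomer pending = new PendingCustomer();
            pending.Id = 7;
            pending.Name = "Contoso";

            // Park the object as XML on the client until it is valid...
            XmlSerializer serializer = new XmlSerializer(typeof(PendingCustomer));
            StringWriter writer = new StringWriter();
            serializer.Serialize(writer, pending);
            string xml = writer.ToString();

            // ...then rehydrate it and hand it to the O/R mapper to persist.
            PendingCustomer restored =
                (PendingCustomer)serializer.Deserialize(new StringReader(xml));
            Console.WriteLine(restored.Name);
        }
    }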
    Tuesday, September 14, 2004 8:11 AM
  • User-1992441347 posted
The scenario is a bit different, though (all are round trips). An O/R mapper does object -> db, or db -> object. I assume in your project it's object -> xml, then, when it's time, xml -> object, then via the O/R mapper object -> db. What I have asked about is object -> xml, then xml -> db. For object -> xml there is the built-in .NET serializer; it has a bad reputation for being very slow, though at least there is a way. For xml -> db, I've not found anything. Even though mssqlxml can serve the role, can the XML generated by mssqlxml be used to deserialize back to an object?
    Tuesday, September 14, 2004 10:59 PM
  • User203732434 posted
    << there is .Net built in serializer, but it has a bad reputation as very slow thought at least there is a way. >> who's given it that reputation? sure, it's using xml (slow) and reflection (slow), but all in all, ive found it to be pretty fast for what it's doing. *anything* you write using xml is going to be slower than binary by it's very nature.. no matter how you code the xml (de)serialization.
    Wednesday, September 15, 2004 4:08 AM
  • User-1938370448 posted
What I have asked about is object -> xml, then xml -> db. For object -> xml there is the built-in .NET serializer; it has a bad reputation for being very slow, though at least there is a way. For xml -> db, I've not found anything. Even though mssqlxml can serve the role, can the XML generated by mssqlxml be used to deserialize back to an object?

May I ask WHY you want to go from XML directly to the db? You can also deserialize the XML and put that into the database using the persistence logic of the O/R mapper.

*Anything* you write using XML is going to be slower than binary by its very nature, no matter how you code the XML (de)serialization.

Not necessarily. A webservice, for example, produces C# code which is compiled behind the scenes, which is actually a clever way to produce/consume XML the hardcoded way. This way it can be very fast. (But also due to this, it has severe limitations....)
    Wednesday, September 15, 2004 4:17 AM
  • User-1903736277 posted
TheServerSide.NET has posted an open debate on ORM vs. Code Generation. We welcome your continued comments on our site. You can view the posting at http://www.theserverside.net/news/thread.tss?thread_id=29071.
    Wednesday, September 29, 2004 11:44 AM
  • User-2113706185 posted
    I get a "Thread doesn't exist" error. Sane
    Thursday, September 30, 2004 3:14 AM
  • User-1970482731 posted
This may be a dumb question, but if your object ID is auto-generated (like SQL Server's identity column), then how does the O/R mapper return the ID of the created object when it uses dynamic SQL statements? I guess it is a batch statement like:

    INSERT table_name (...) VALUES (...);
    SELECT SCOPE_IDENTITY()

But because this statement is executed from the DAL instead of a stored procedure, does it return the correct value? thanks,
    Sunday, October 3, 2004 11:58 PM
  • User-528039901 posted
::but because this statement is executed from the DAL instead of a stored procedure, does it
::return the correct value?
According to the documentation (and my experience in the EntityBroker) - it does. How do you get the idea this works only within stored procedures?
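For the record, a minimal ADO.NET sketch of such a batch; the Customers table and its Name column are hypothetical, and this is not any particular mapper's output:

    using System;
    using System.Data;
    using System.Data.SqlClient;

    public class IdentityDemo
    {
        // Runs the INSERT and the SELECT SCOPE_IDENTITY() as one batch on
        // one connection, which is why it works from the DAL just as well
        // as from inside a stored procedure.
        public static int InsertCustomer(string connectionString, string name)
        {
            using (SqlConnection conn = new SqlConnection(connectionString))
            {
                SqlCommand cmd = new SqlCommand(
                    "INSERT INTO Customers (Name) VALUES (@name); " +
                    "SELECT CAST(SCOPE_IDENTITY() AS int);", conn);
                cmd.Parameters.Add("@name", SqlDbType.NVarChar, 100).Value = name;
                conn.Open();
                return (int)cmd.ExecuteScalar();
            }
        }
    }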
    Monday, October 4, 2004 1:18 AM
  • User579948574 posted
Thona and Frans, thank you guys for participating in this rather hot discussion :)) I will pass on that and go straight to the forgotten topic: favorite O/R mapper. I have a new .NET project starting soon, and want to choose an ORM for it. My background is mostly Java, and on my Java projects I'm using Hibernate (very successfully). I know that there's a port of this excellent ORM to .NET (NHibernate), but it is in an early development stage (alpha). So I'm also looking at alternatives. It looks to me like your tools (LLBLGen Pro and EntityBroker) are the most popular in this forum, and I would like to include them in my evaluation. Now, it is much easier for me to evaluate any ORM tool by comparing it to my experience with Hibernate. So finally we come to my question: could you please give me a comparative description of your ORM tools versus NHibernate? I know that NHibernate lacks visual tools for generating the XML metadata, although it has a command-line tool for generating classes out of the XML metadata. But you have to write the XML file manually. Any other differences? In architecture, maybe?
    Monday, October 4, 2004 11:54 AM
  • User-528039901 posted
::Could you please give me a comparative description of your ORM tools versus NHibernate?
Why should we do your homework? And basically - I do not feel comfortable putting out a comparison of the EntityBroker against a tool I do not know in that much detail. WAY too dangerous to start bashing NHibernate for things it does - somehow, somewhere. One architectural difference right now is that the EntityBroker is STRONG (the strongest one) for multi-tiered architectures. Means you can have the DAL on a different server, talking to it through remoting, and there is a LOT in there that most people can not do: a progress indicator (if you transfer a lot of data), events when operations start (yes, a simple one-object update may take a second, thanks to a modem in between). A disc-based object cache (for those dealing with images, like me) is due next week. But for the rest you really have to get into it yourself.
    Monday, October 4, 2004 1:53 PM
  • User-855094386 posted
Paul Wilson's O/R mapper (http://www.ormapper.net) is remarkably similar to Hibernate, and it's also very stable. I evaluated several of the leading .NET mappers, and the simplicity and similarity to Hibernate made it the clear winner for me. There is also a GUI utility (and CodeSmith templates) to generate the XML metadata and entities from an existing database.
    Monday, October 4, 2004 2:17 PM
  • User579948574 posted
2 thona
>>>Why should we do your homework?
You definitely should not. I was asking just in case you or Frans already had information about NHibernate. And thank you for the reply. From an architecture point of view, I'm interested in how EntityBroker or LLBLGen Pro handles the creation of select and update statements. For example, if I changed an object several times without saving and then saved it at the end, will it create separate update statements for each change, or merge them into one update statement? If I'm interested in retrieving just a couple of fields, will it retrieve all of the object's fields, or just the ones I asked for? What changes do I have to make to convert my application from MS SQL Server to Oracle or DB2?
    Monday, October 4, 2004 4:00 PM
  • User579948574 posted
    Thank you SuperDev. I will look into ormapper.
    Monday, October 4, 2004 4:01 PM
  • User402141087 posted
    I would still take a look at NHibernate. At work I use WilsonORMapper and it really shines in its simplicity and will do fine for not-so-complicated tasks. But, NHibernate (even in its current state) is way more powerful and (sorry Paul) is more robust when doing more complex things like persisting a deep object graph. The only thing that I miss with NHibernate now is lazy loading of many-one and one-one relations but that is currently being addressed.
    Monday, October 4, 2004 5:47 PM
  • User-541866472 posted
It seems we should add something too. We've noticed there is a set of people really inspired by the template-based approach in this forum - so let's criticize it a bit:

1. First of all, we think any code generation technique should be used only when it's really more efficient than other well-known approaches. Moreover, it's preferable to use e.g. inheritance rather than templating to implement a set of similar cases, since the first option is natural to OOP and the second isn't. The upcoming generics bring some benefits of templating techniques to .NET, but do this naturally (without breaking the object-oriented features/concepts of .NET).

2. Some people tend to love template-based code generators also because they think such an approach brings a noticeable performance improvement. In most cases this is incorrect - imagine how a template-based ORM code generator could help to fetch a record faster (or run a query faster), if:
- Minimal query execution time is ~ 0.2 ms (5000 queries per second)
- Shortest record fetch time is ~ 0.005 ms (200000 single-value records per second)
- A virtual\interface method call requires ~ 10 ns (100 000 000 calls per second)
- An instance method call requires ~ 2 ns (500 000 000 calls per second)
Use of template-based techniques just eliminates a set of method calls, but it's obvious that the "economy" is incomparable to the average operation time: eliminating even a hundred virtual calls saves ~ 0.001 ms against a minimal 0.2 ms query, i.e. less than even 1%.

3. And finally, a completely template-based code generator can't be as feature-rich as the ORM layer provided by e.g. DataObjects.NET - simply because usually these templates provide only _very_ basic functionality (usually CRUD operations plus a set of GetProduct(Guid uid), GetProducts(string categoryName)-like queries). To make a template-based ORM layer comparable to e.g. DataObjects.NET by its feature set, ~ 5 Mb of DataObjects.NET source code would have to be added to some of the templates.

Btw, DataObjects.NET also utilizes .NET code generation facilities (CodeDOM), but does this at runtime, and completely transparently for developers. And actually the aim of code generation in DataObjects.NET is a bit different.
    Monday, October 4, 2004 7:59 PM
  • User579948574 posted
2 Tijn
Could you please elaborate on NHibernate lazy loading? According to this post on the NHibernate forum, http://sourceforge.net/forum/forum.php?thread_id=1145480&forum_id=252014, lazy loading works in NHibernate.
    Monday, October 4, 2004 8:01 PM
  • User-541866472 posted
Our 2 cents about the popularity - we'd suggest you look into our support forum; there are ~ 1100 messages now. We'd like to know if any other ORM product vendor provides such a level of support.
    Monday, October 4, 2004 8:08 PM
  • User1356982465 posted
I'm not sure what the quantity of posts has to do with anything, but since you brought it up: I actually have between 1000 and 1100 posts as of right now (very much like you do), but I also have other topics besides O/R mapping, as well as private forums, so the total number of posts is actually higher -- again, I wouldn't say that means a lot, but there you go.
    Monday, October 4, 2004 8:46 PM
  • User-1938370448 posted
Ok, going to address a lot of posts in one reply :)

> Re: Your favorite O/R Mapper? by Vagif Verdi
> >>>Why should we do your homework?
> You definitely should not. I was asking just in case you or Frans already had information about
> NHibernate. And thank you for the reply.

As Thomas also said, it's a bit hard to go into detail about what another application can do if you don't know all the details. What I do know is that Hibernate has been developed over more than 3 years by more than one person, in recent years full time (paid by the JBoss foundation), and that the code is very complex. NHibernate is not developed full time, not by a group of people, and has a long road ahead of it before it can say it is a port of Hibernate to .NET. Also don't forget that Java has some features which are not there in .NET, for example a standardized JDBC-like interface to all databases, a global object awareness system for true, safe caching across machine boundaries, and a couple of standards like JDO and EJB-CMP. Also, on .NET it is not recommended (by MS) to modify byte code in memory to track changes on entity properties. In Java this is common practice for every O/R mapper.

> From an architecture point of view, I'm interested in how EntityBroker or LLBLGen Pro handles the
> creation of select and update statements. For example, if I changed an object several times without
> saving and then saved it at the end, will it create separate update statements for each change, or
> merge them into one update statement?

LLBLGen Pro just logs changed fields, so if you change an entity object's fields in several steps and after that you save the entity, all changed fields are saved with the values they have at that time. All non-changed fields are not saved and are not embedded in the UPDATE query (a sketch of this changed-fields idea follows at the end of this post). We support a very flexible concurrency method using a predicate factory you can implement, which produces at runtime the filter set to use for the update statement; so optimistic, pessimistic, timestamp based or whatever - it's up to you. LLBLGen Pro also supports multi-versioning of entity fields. So you can save your entity fields' state under a name inside the entity, alter the fields, and if you're not happy with the results, roll back to a previous state, in memory. Select statements are always generated on the fly. You can have lazy-loading selects, or prefetch paths (load a graph (multi-directional) of objects together with the object(s) you request, one query per graph node), optionally with filters and sorting on those prefetch paths as well.

> If I'm interested in retrieving just a couple of fields, will it retrieve all of the object's
> fields, or just the ones I asked for?

LLBLGen Pro is more than just an O/R mapper; it's a data-access solution, so it offers functionality besides O/R mapping, built on top of the O/R mapper core. So if you're interested in a list of order IDs and customer IDs, you can create such a list in the LLBLGen Pro designer, and that list is typed and fully filterable using the same predicate (filter) objects as you use when loading entities. We also support the creation of dynamic lists in code, supporting SQL expressions, aggregate functions and group by.
For example: get order IDs and the total price of each order, with a total price > 1000$, in one list, using SQL expressions:

    DataAccessAdapter adapter = new DataAccessAdapter();
    ResultsetFields fields = new ResultsetFields(2);
    fields.DefineField(OrderDetailsFieldIndex.OrderId, 0, "OrderId");
    // use unitprice field because it is already of the type we want.
    fields.DefineField(OrderDetailsFieldIndex.UnitPrice, 1, "TotalPrice");

    // Expression for total price:
    // ((unitprice * quantity) - ((unitprice * quantity) * discount))
    IExpression productPriceExpression = new Expression(
        EntityFieldFactory.Create(OrderDetailsFieldIndex.UnitPrice),
        ExOp.Mul,
        EntityFieldFactory.Create(OrderDetailsFieldIndex.Quantity));
    IExpression discountExpression = new Expression(
        productPriceExpression,
        ExOp.Mul,
        EntityFieldFactory.Create(OrderDetailsFieldIndex.Discount));
    IExpression totalPriceExpression = new Expression(
        productPriceExpression, ExOp.Sub, discountExpression);
    fields[1].ExpressionToApply = totalPriceExpression;
    fields[1].AggregateFunctionToApply = AggregateFunction.Sum;

    IGroupByCollection groupByClause = new GroupByCollection();
    groupByClause.Add(fields[0]);
    IPredicateExpression havingFilter = new PredicateExpression();
    havingFilter.Add(new FieldCompareValuePredicate(
        fields[1], null, ComparisonOperator.GreaterThan, 1000.0f));
    groupByClause.HavingClause = havingFilter;

    try
    {
        DataTable tlist = new DataTable();
        adapter.FetchTypedList(fields, tlist, null, 0, null, true, groupByClause);
    }
    finally
    {
        adapter.Dispose();
    }

This utilizes the dynamic SQL engine built in, using entity fields, but still produces flexible lists with the power of the relational model. This code is database generic, so it will look the same on Oracle, SqlServer, Firebird, Access...

> What changes do I have to make to convert my application from MS SQL Server to Oracle or DB2?

Create a new relational model on Oracle or DB2 (DB2 support is currently in development), be sure the entity layout is the same, generate code using Adapter, and you're done.

> --------------------------
> Re: Your favorite O/R Mapper? by tijn
> But, NHibernate (even in its current state) is way more powerful and (sorry
> Paul) is more robust when doing more complex things like persisting a deep object graph.

And Paul's not able to persist a deep object graph?

> The only thing that I miss with NHibernate now is lazy loading of many-one and one-one relations but
> that is currently being addressed.
> --------------------------

You call something robust when it can't do 50% of the relationship types available? How am I able to load my order entities with their customer entities if I can't use m:1 relations?

> --------------------------
> Re: Your favorite O/R Mapper? by x-tensive
> It seems we should add something too.
> 1. First of all, we think any code generation technique should be used only when it's really more
> efficient than other well-known approaches. Moreover, it's preferable to use e.g. inheritance
> rather than templating to implement a set of similar cases, since the first option is natural to
> OOP and the second isn't. The upcoming generics bring some benefits of templating techniques to
> .NET, but do this naturally (without breaking the object-oriented features/concepts of .NET).

Why is it of any importance that something is 'natural to OOP'? Does that make the software suddenly better? Perhaps the code generation is using the strategy pattern to customize, through polymorphism and inheritance, the generic code in a runtime lib?
> 2. Some people tend to love template-based code generators also because they think such an approach
> brings a noticeable performance improvement. In most cases this is incorrect -

How can the way the code is produced have any influence on the speed it has during execution?

> imagine how a template-based ORM code generator could help to fetch a record faster (or run a query
> faster), if:
[Random funfacts deleted]
> Use of template-based techniques just eliminates a set of method calls, but it's obvious
> that the "economy" is incomparable to the average operation time (it's less than even 1%).

All nice, but what's your point? :) Template-based generated code is bad? If so, in-memory generated code is good? Code which needs reflection to get the info it needs is faster?

> 3. And finally, a completely template-based code generator can't be as feature-rich as the ORM layer
> provided by e.g. DataObjects.NET - simply because usually these templates provide only _very_ basic
> functionality (usually CRUD operations plus a set of GetProduct(Guid uid), GetProducts(string
> categoryName)-like queries). To make a template-based ORM layer comparable to e.g. DataObjects.NET by
> its feature set, ~ 5 Mb of DataObjects.NET source code would have to be added to some of the templates.

Agreed, pure template-based code generators don't have a common lib with generic code which is actually the core of the system.

> --------------------------
> Re: Your favorite O/R Mapper? by x-tensive
> Our 2 cents about the popularity - we'd suggest you look into our support forum; there are ~ 1100
> messages now. We'd like to know if any other ORM product vendor provides such a level of support.
> --------------------------

Whoa, 1100??? And you call that popular? :) Our support forum has 7864 postings at the moment, and counting. Please, x-tensive, don't make this a pissing contest. :) It doesn't have any relevance to the popularity of the product, although it shows how many users are active in the community around the application, if you take into account that the majority of users never visit a forum, or at least don't post messages.

Btw, our C# / VB.NET template engine (a la CodeSmith) went beta yesterday, which allows users to add the engine as a task performer (we have a task-based code generator framework, similar to the task system of NAnt) to a generator configuration, to generate additional code, docs or whatever using templates written in C# or VB.NET, besides our own templates using a pattern-matching language.
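A minimal sketch of the changed-fields idea referenced above: property setters flag dirty fields, and only those columns reach the UPDATE. Illustrative only - the TrackedEntity class is hypothetical, not LLBLGen Pro's code:

    using System;
    using System.Collections;

    public class TrackedEntity
    {
        private string name;
        private ArrayList dirtyFields = new ArrayList();

        public string Name
        {
            get { return name; }
            set
            {
                name = value;
                if (!dirtyFields.Contains("Name")) dirtyFields.Add("Name");
            }
        }

        // Builds the SET clause for the dirty columns only.
        public string BuildSetClause()
        {
            string[] parts = new string[dirtyFields.Count];
            for (int i = 0; i < dirtyFields.Count; i++)
                parts[i] = dirtyFields[i] + " = @" + dirtyFields[i];
            return "SET " + string.Join(", ", parts);
        }
    }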
    Tuesday, October 5, 2004 5:18 AM
  • User1356982465 posted
    My mapper IS able to persist a deep object graph, as well as lazy-load relationships. Of course I've tried to never say it was as robust or advanced as the best of them, but I do believe (maybe mistakenly) that NHibernate has a little maturing to do still.
    Tuesday, October 5, 2004 6:00 AM
  • User-1308937169 posted
Oh Please...

> 1. First of all, we think any code generation technique should be used only when it's really
> more efficient than other well-known approaches. Moreover, it's preferable to use
> e.g. inheritance rather than templating to implement a set of similar cases, since the first
> option is natural to OOP and the second isn't. The upcoming generics bring some benefits of
> templating techniques to .NET, but do this naturally (without breaking the object-oriented
> features/concepts of .NET).

Just because you are using template-based code generation does not mean you should throw away good OO design principles. Templated code can use inheritance, and it should when the design calls for it. Yes, generics are a great thing, and not just for code generation, I would think. Partial classes will be huge for code generation as well, and again I would think that the O/R mappers will take advantage of them too. Never having written or used an O/R mapper leaves me at a disadvantage in talking about their pros and cons, but I have used code generation for several years now. I am qualified to talk about that.

> 2. Some people tend to love template-based code generators also because they think such an
> approach brings a noticeable performance improvement. In most cases this is incorrect -
> imagine how a template-based ORM code generator could help to fetch a record faster (or
> run a query faster), if:
<snip>
> Use of template-based techniques just eliminates a set of method calls, but it's
> obvious that the "economy" is incomparable to the average operation time (it's less than even
> 1%).

2. How the code is generated (at design time, JIT compile time, or runtime) is not as important as what is generated.

> And finally, a completely template-based code generator can't be as feature-rich as the ORM
> layer provided by e.g. DataObjects.NET - simply because usually these templates provide
> only _very_ basic functionality (usually CRUD operations plus a set of GetProduct(Guid
> uid), GetProducts(string categoryName)-like queries). To make a template-based ORM layer
> comparable to e.g. DataObjects.NET by its feature set, ~ 5 Mb of DataObjects.NET source
> code would have to be added to some of the templates.

I think you miss the point of what can be done with template-based code generation. If you write templates that generate code for only CRUD operations, then that is what you get. You are not limited to this architecture by the code generator, but you are limited by your templates. Thinking of templated code generation vs. handwriting the code is not unlike outputting HTML to a web browser: you can hand write it, or you can use something like ASP.NET to generate it. Are you limited to simple HTML and form POST operations because you are not using some fancy HTML object mapper? I could create a base class manually, and have my templates generate code that inherits from the hand-written base class. And it could have 5 MB of code (a sketch of this arrangement follows at the end of this post).

The needs as I see them come down to a few things:
1. Design-time support is important to me. I love my IntelliSense, and I won't care for a solution that takes that away from me.
2. Performance is important, but maintainability is important as well (more so, IMO).
3. Run-time debugging is important.
4. Support from the vendor is also important.

Blah. I think in the end code generation can complement an O/R mapper. I think code generation can remove the need for an O/R mapper if your templates are up to the challenge. Some O/R mappers are using template-based code generation.
I'm not sure it is a slam dunk either way. But to try and "criticize" something that you either don't understand very well, or to mislead others as to the real possibilities and limitations, is not, IMO, in the best interest of the forum. This sounds like a blatant attempt to pimp your product.
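A minimal sketch of that hand-written-base-plus-generated-subclass arrangement; BusinessObjectBase and Product are hypothetical names:

    using System;

    // Hand-written once; regenerating the templates never touches this.
    public abstract class BusinessObjectBase
    {
        public abstract string TableName { get; }

        public void Save()
        {
            // ...shared validation, logging, transaction handling...
            Console.WriteLine("Persisting to " + TableName);
        }
    }

    // Template-generated: thin and disposable, regenerated whenever the
    // schema changes.
    public class Product : BusinessObjectBase
    {
        public override string TableName { get { return "Products"; } }
    }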
    Tuesday, October 5, 2004 10:13 AM
  • User779750837 posted
One of the things that I have not heard discussed often, but that seems important to me, is the implied testing burden of code generation vs. the more hands-off approach of an O/R mapper. I have not done a lot of "Test First Development", but one of the principles that appeals to me is that you are only responsible for testing your own code. So 3rd-party code and components are not part of your unit test plans. Naturally, you have to get comfortable with the vendor and the quality of the product at first, but building and repeatedly testing the product's functionality is not necessary. With code generation, though, it seems that the DAL is indeed your code, so you are responsible for testing all of the features/functions of that code. Naturally, as a developer you're responsible for ensuring that the solution works, but there can be varying levels of work put into testing a product. For instance, I would think that there are no test plans that include tests for the TextBox control. A solid 3rd-party O/R mapper could be looked at in similar terms. I'm curious as to what others think along these lines, and whether anyone thinks this is a viable point of view or it's irrelevant. Thanks, Smoke
    Tuesday, October 5, 2004 12:50 PM
  • User402141087 posted
    > My mapper IS able to persist a deep object graph, as well as lazy-load relationships. Of course I've tried to never say it was as robust or advanced as the best of them, but I do believe (maybe mistakenly) that NHibernate has a little maturing to do still.

I didn't mean to say that WilsonORMapper isn't able to persist deep object graphs, but there are (were? - I'm still using a modified 2.2 version) some issues that NHibernate handles better right now.

> You call something robust when it can't do 50% of the relationship types available? How am I able to load my order entities with their customer entities if I can't use m:1 relations?

No no, it's just the lazy loading of those relations (many-to-one and one-to-one). You can use them right now; they just aren't lazy loaded - the parent object is fetched at the same time as the child objects (with an inner or outer join). The cause of this limitation is the lack of run-time bytecode modification in .NET, as you already mentioned. Lazy loading of one-to-many relations works fine.

The point I'm trying to make is that many people are afraid of even trying NHibernate because of its alpha status. That label really doesn't do justice to its current quality. I think I can say I've tried almost every O/R mapper for .NET, and it does a better job than a lot of others (yes, I tried the crappy ones too :). Of course, one rather essential feature is missing, but please, people, go and have a look. Then you'll know whether you like it or not. If you consider the database the center of the application you'll be better off with LLBLGen Pro, but especially for those who design with objects in mind, NHibernate will be worth a try.
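For readers wondering why one-to-many is the easy case: the collection itself can act as the lazy proxy, no bytecode tricks required. A minimal sketch of the idea - the classes are hypothetical, not NHibernate or WilsonORMapper code:

using System.Collections;

// Hypothetical stub standing in for the mapper's fetch logic.
public class OrderLineLoader
{
    public static IList LoadFor(int orderId)
    {
        return new ArrayList();   // real code would query the database here
    }
}

public class Order
{
    private readonly int _orderId;
    private IList _lines;   // stays null until first touched

    public Order(int orderId)
    {
        _orderId = orderId;
    }

    public IList Lines
    {
        get
        {
            if (_lines == null)
            {
                // First access: the mapper would issue the SELECT here.
                _lines = OrderLineLoader.LoadFor(_orderId);
            }
            return _lines;
        }
    }
}

A many-to-one property would have to return the entity type itself, so without runtime proxy/bytecode generation there is no equivalent place to intercept the first access - which is exactly the limitation described above.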
    Tuesday, October 5, 2004 1:11 PM
  • User-541866472 posted
    > All nice, but what's your point? :) Template-based generated code is bad? If so, in-memory generated code is good? Code which needs reflection to get the info it needs is faster?

I mean exactly what I said - the presence of code generation can't make your application run significantly faster in this case. Thus it isn't absolutely necessary here (e.g. XPO doesn't use any code generation techniques).

> Whoa, 1100??? And you call that popular? Our support forum has 7864 postings at the moment, and counting.

Frankly speaking, it's not very honest to compare a product that was initially freeware to a purely commercial product :)

> Please x-tensive, don't make this a pissing contest. It doesn't have any relevance to the popularity of the product, although it shows how many users are active in the community around the application, if you take into account that the majority of users never visits a forum or at least doesn't post messages.

1) Let's be gentlemen. Personally I dislike such a tone.
2) You're wrong. You could be right only if there were _zero_ correlation between [the number of active users in the community around the application] and [the total number of users]. I hope it's obvious that almost any available statistical data would show a non-zero correlation factor.

> Just because you are using template based code generation does not mean you should throw away good OO design principles. Templated code can use inheritance and it should when the design calls for it.

I don't mean that use of the template-based approach breaks all OOP techniques. But in most cases this approach encourages you to avoid them. A simple example:
- Suppose we have an already-working set of templates.
- We've generated a set of our classes, and noticed that 90% of them have the same (or a very similar) method set. This situation calls for interface extraction (a small refactoring), but most likely you'll avoid it, since it would require you to update the template (which is usually not in your interest).

Btw, I don't think that code generation is useless. It may help significantly in some very specific cases. Currently I can name just one technology where it is really necessary and plays a key role - ASP.NET. But note that ASP.NET isn't a template-based code generator: it uses CodeDOM - hopefully it's obvious why (a tiny CodeDOM sketch follows at the end of this post).

> Are you limited to simple HTML and Form Post operations because you are not using some fancy HTML Object Mapper?

Hopefully you understand that I'm not the person who can seriously answer such a question. But it seems I should repeat the key idea once more: I _didn't write_ that code generation can't solve some problems. I wrote that in most cases it isn't required to solve them, and moreover, it isn't efficient. If it were really as good as some people here consider it, at least one serious player on this market (such as Microsoft or Sun) would already have provided us with a perfect tool solving this problem. But currently it seems that e.g. the guys from Microsoft think completely differently. Btw, Rational products provide code generation facilities. But again, the purpose and approach there are completely different from most template-based code generators I've seen (UML-based, two-way binding between the UML model and the generated code - it isn't template-based code generation, etc.).

> I could create a base class manually, and have my templates generate code that inherits from the hand written base class. And it could have 5 mb of code.

I understand this.
Let's say we have exactly this scenario. Then I have just one question: do you _really have_ (or _know of_) a code generator that is able to utilize all the benefits of e.g. a 5 MB code base (i.e. that can generate code using, say, 80% of the public/protected members of the underlying code base)? Most likely you don't, which means that only a small part of your large code base is actually used by the generated code.

> But trying to "criticize" something you don't understand very well - or deliberately misleading others about its real possibilities and limitations - is not, IMO, in the best interest of the forum.

Hopefully now you understand that I'm a person with an acceptable level of knowledge in this area. My original intention to criticize the template-based approach stands - IMO this approach deserves even more criticism :)

> This sounds like a blatant attempt to pimp your product.

1) Hopefully you've noticed that DataObjects.NET was already mentioned earlier, and not by me. I found this page through our website's backlink analysis, and decided to add a few comments to a really interesting discussion.
2) Moreover, a set of products was also mentioned in this forum in posts by their own vendors (e.g. Thona, LLBLGen Pro). Hopefully you haven't forgotten to argue about that PR as well :)

Btw, it's certainly easier to accuse others of such things than to prove the point.
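The CodeDOM sketch promised above - purely illustrative, not anyone's product code. The point is that ASP.NET-style generation builds a code graph and lets a language provider emit the source, instead of splicing strings into a text template:

using System;
using System.CodeDom;
using System.CodeDom.Compiler;
using System.IO;
using Microsoft.CSharp;

class CodeDomDemo
{
    static void Main()
    {
        // Build the type as an object graph...
        CodeCompileUnit unit = new CodeCompileUnit();
        CodeNamespace ns = new CodeNamespace("Generated");
        unit.Namespaces.Add(ns);

        CodeTypeDeclaration cls = new CodeTypeDeclaration("Customer");
        cls.Members.Add(new CodeMemberField(typeof(string), "_name"));
        ns.Types.Add(cls);

        // ...then let the C# provider emit the source.
        ICodeGenerator gen = new CSharpCodeProvider().CreateGenerator();
        StringWriter writer = new StringWriter();
        gen.GenerateCodeFromCompileUnit(unit, writer, new CodeGeneratorOptions());
        Console.WriteLine(writer.ToString());
    }
}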
    Tuesday, October 5, 2004 2:37 PM
  • User-541866472 posted
    P.S. I'll be waiting for good arguments - hopefully such a sensitive person as you won't take this as one more attempt to pimp our product / discredit your approach / push some heretical idea into less clever developers' heads, etc.
    Tuesday, October 5, 2004 2:45 PM
  • User-1308937169 posted
    It would be a lot easier to agree with you if you did not make rash assumptions and generalizations about what I or others do with our templates. Statements like "in most cases" lead to over-generalization and flagrant misrepresentation of the facts. You have a vested interest in pimping your product (as do others on this thread), and it seems that instead of hailing the benefits of your ORM, you berate template-based code generation as "not adequate", "inefficient", "limited". I don't have a problem with any vendor explaining the finer points of their widget; however, that is not what you are doing.

You also seem to confuse the generator with the template. My generator of choice is CodeSmith. I write some pretty kick-ass templates that do a lot of the things that a good O/R mapper does. Yes, I had to write the templates, and yes, I have to support them. The code I generate uses all the correct OO methodologies, and provides strong typing and IntelliSense. I can debug 100% of the code with one debugger. These things are important to _ME_. They are _MY_ templates, written for _ME_.

> Hopefully now you understand that I'm a person with an acceptable level of knowledge in this area. My original intention to criticize the template-based approach stands - IMO this approach deserves even more criticism :)

I understand that you can talk about O/R mappers, but I don't think you know diddly about template-based code generation. It sounds like you might have tried a template-based code generator, but did not try creating your own templates - that you just used the out-of-the-box ones. Again, I am making some assumptions here.

> This situation calls for interface extraction (a small refactoring), but most likely you'll avoid it, since it would require you to update the template (which is usually not in your interest).

Uh, it is my template; it most definitely would be in my best interest to update my template. In most cases my templates are part of the source code, and get managed as such. The term you should Google is "Active Generation".

As for the real essence of the thread: code generation is not O/R mapping. The two techniques can get confused since they can solve the same problem. http://www.neward.net/ted/weblog/index.jsp?date=20041003#1096871640048 The O/R mapper vs. other data access techniques debate is not new, and in this blog entry the author calls object-relational mapping "the Vietnam of Computer Science". He goes on to elaborate on the amount of effort that goes into ORMs versus the benefit they provide. There is an unbalanced amount of effort at work here: tons of effort creating the different O/R mappers, and there doesn't seem to be as much benefit.

The biggest issue I have is the notion that _ONE_ data access methodology is the right one. No product can solve all of the issues with data access. There is no silver bullet. One size does not fit all. I think writing an O/R mapper must be a huge task, and you will not be able to please all people all the time. I simply want more control over my architecture, and I am willing to write my own templates to get it. With template-based code generation, I am not limited to the data access methodologies that you chose. Furthermore, I can generate the interface, middle tier, etc. - all with the same tools, languages, and techniques.
    Tuesday, October 5, 2004 3:53 PM
  • User579948574 posted
    2 tcarrico: Let me try to explain what others have tried to explain but failed. Writing templates is a complex task - much more complex than learning a good, rich framework, library, or ORM tool. You are right that many things can be accomplished with templates, but this is a half-baked solution, not a ready-to-use product. And when you say "these are MY templates", you don't notice that you are pushing people toward a solution that works only for YOU. On the contrary, ORM tools are made for the average Joe who does not have time to create complex templates for complex functionality and is not satisfied with the simple CRUD functionality that out-of-the-box templates provide. So please stop flaming. People are right when they say that ORM and code generation are different beasts, and the fact that you use code generation for your ORM needs does not prove anything.

Now let's get to that argument about true OO principles, design practices, and templates. I have no doubt that your code and your templates conform to best OO practices. But the templates that come with CodeSmith do not. They actually suck big time from a programming-practices point of view, because 95% of the code generated from those templates is just a copy/paste of the same functions with different names. This is the worst design decision a programmer can ever make. And I can understand why it is this way: template producers do not want to distribute many files and libraries and make people manually include them into projects just to be able to use a template (although that would be the right way). It's much simpler to generate all the code with one template.

Actually, the only templates I've seen that conformed to OO principles and did not contain repetitive code were (ironically) O/R mapping files for Gentle.NET and NHibernate. And it's easy to understand why: they didn't contain code, just metadata. I personally see only one correct application of code generators - generating metadata, be it in XML format, INI files, or C# classes. It doesn't matter. A template should not include any actual code (function bodies), but rather generate an information feed for libraries (see the mapping-file example below).
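For readers who haven't seen one: this is roughly what such a metadata file looks like - a minimal NHibernate-style mapping sketch with made-up class and column names (the exact schema namespace and attributes vary by NHibernate version):

<?xml version="1.0" encoding="utf-8" ?>
<hibernate-mapping xmlns="urn:nhibernate-mapping-2.0">
  <class name="MyApp.Customer, MyApp" table="Customers">
    <id name="Id" column="CustomerId">
      <generator class="native" />
    </id>
    <property name="Name" column="Name" />
    <bag name="Orders" lazy="true">
      <key column="CustomerId" />
      <one-to-many class="MyApp.Order, MyApp" />
    </bag>
  </class>
</hibernate-mapping>

No function bodies anywhere - the library supplies all the behavior, and a template that emits files like this never duplicates logic.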
    Tuesday, October 5, 2004 9:23 PM
  • User-541866472 posted
    > You have a vested interest in pimping your product (as do others on this thread), and it seems that instead of hailing the benefits of your ORM, you berate template-based code generation as "not adequate", "inefficient", "limited". I don't have a problem with any vendor explaining the finer points of their widget; however, that is not what you are doing.

Oh :) Do you really think that self-praise is better than third-person criticism? It's certainly better for every product vendor, and certainly not for the average user. Moreover, as you may see, I haven't criticized _a product of your choice_ or _your own product_ - I criticized the approach in general. And finally - certainly I could write about the benefits as well, but:
1) Most likely you'd say I'm pimping the product rather than the approach in general, and you could add that it's certainly _possible_ to achieve the same with the template-based approach. Maybe I'm making too many assumptions, but this seems very probable, given your reaction to my first post.
2) Frankly speaking, all the benefits and unique features of DataObjects.NET are listed in the "Benefits" section of our web site. If you really want to read it, you're certainly welcome :) Moreover, you can criticize _anything you want_ in our support forum (most newcomers visit it - maybe your point of view will be interesting to them?).

And finally, 2 tcarrico: I just noticed that you accidentally forgot to add a "pimping notice" to this post of Frans's: "Btw, our C# / VB.NET template engine (ala codesmith) went beta yesterday, ...". Just want to help you identify all the "pimpers" - keep your eyes open, they're behind your back :) Oh... Certainly, it's a joke :)
    Tuesday, October 5, 2004 11:52 PM
  • User-541866472 posted
    > I have no doubt that your code and your templates conform to best OO practices. But the templates that come with CodeSmith do not. They actually suck big time from a programming-practices point of view, because 95% of the code generated from those templates is just a copy/paste of the same functions with different names. This is the worst design decision a programmer can ever make.

Face the fact - this is true, and this is what we have in reality. Nevertheless people love the idea of code generation, and expect _much more_ from template-based codegens than they currently allow. It's in our nature - everyone wants to just describe the problem and get its solution within a few clicks. Really, why do a lot of people think that generated code is better than the same code written once, containing a few more "if"s / "for"s, etc.? All the "generation intelligence" (highly advertised by the vendors) is _nothing more than a few more "if"s_. Frankly speaking, this style of code generation breaks most OOP rules. It certainly appears that pre-generated code should work faster, but as I've already explained, in the ORM case the performance increase is less than 1%. Personally I think the same is true for 99% of other cases.

> I personally see only one correct application of code generators - generating metadata, be it in XML format, INI files, or C# classes. It doesn't matter. A template should not include any actual code (function bodies), but rather generate an information feed for libraries.

I also want to enumerate the areas where code generators are really important to me (and try to classify them):
1) The well-known generators of lexical/syntax analyzers such as YACC and BISON. Most of these tools are finite state machine generators. Regex is one more good example - though usually the f.s.m. for a particular Regex is generated at runtime (see the small example at the end of this post).
2) ASP.NET, DataObjects.NET, and, as I understand, the upcoming Windows Forms 2.0. Let's call this class of code generators "extenders" - all these tools transparently extend a set of base classes (by overriding a set of virtual methods and implementing abstract ones) with additional, but common, functionality (decoration). The new functionality is usually described by external files (.aspx/.ascx/.xml) or attributes (e.g. in DataObjects.NET). Notes:
- Such generators usually don't generate new public methods and properties (though a set of private methods can certainly be generated), so they just change the runtime behavior of the classes they decorate.
- They don't expose the generated classes directly (i.e. provide their source code or assemblies) - instances of these classes are available via a special API. Moreover, no one binds directly to the generated classes - any external type is bound to well-known (not autogenerated) ancestors of these classes, or to well-known interfaces they implement.
3) A set of tools like tlbimp.exe and wsdl.exe. Stub generators. No comments needed here.

That's all. Btw, someone mentioned that code generation became really available only with Java/.NET. Frankly speaking, this is incorrect:
1) Tools such as YACC/BISON have been available for almost 20 years.
2) C++ has a set of features providing perfect compile-time code generation. See e.g. this page: http://www.boost.org/

What was really brought by .NET is the simplicity of _writing_ (but not _using_) a template-based code generator. Runtime binding, reflection, CodeDOM, built-in compilers, attributes... It's a paradise for developers of CodeSmith-like tools :) But frankly speaking, there are no principal changes - as you write it, it's still the same _template-based_ code generation (well known for decades). The wide variety of code generation tools now is not the effect of some shift in this area - people love the idea of fully automatic code generation (I've just explained why), and as a consequence we have lots of tools for it... But the results of these tools' work have absolutely the same quality as they could have had 20 years ago :)
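The runtime-generated f.s.m. referred to in point 1 is easy to see in .NET itself - with RegexOptions.Compiled, the pattern's state machine is emitted as MSIL at runtime, no source templates involved:

using System;
using System.Text.RegularExpressions;

class RegexDemo
{
    static void Main()
    {
        // The Compiled option generates IL for this pattern's state
        // machine on the fly, instead of interpreting it.
        Regex zip = new Regex(@"^\d{5}(-\d{4})?$", RegexOptions.Compiled);
        Console.WriteLine(zip.IsMatch("12345-6789"));   // True
    }
}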
    Wednesday, October 6, 2004 1:12 AM
  • User-541866472 posted
    2 Vagif Verdi: I completely agree with your last post.
    Wednesday, October 6, 2004 1:17 AM
  • User-1938370448 posted
    > Re: Your favorite O/R Mapper? by x-tensive
>
> > All nice, but what's your point? :) Template-based generated code is bad? If so, in-memory generated code is good? Code which needs reflection to get the info it needs is faster?
>
> I mean exactly what I said - the presence of code generation can't make your application run significantly faster in this case. Thus it isn't absolutely necessary here (e.g. XPO doesn't use any code generation techniques).

Still, I do think that constructs which would otherwise have to be determined at runtime can be a benefit when handed to you by generated code. Also, typed constructs and shortcuts (like pre-fabricated query / predicate objects) can be helpful to the developer (see the small sketch at the end of this post).

> > Whoa, 1100??? And you call that popular? Our support forum has 7864 postings at the moment, and counting.
>
> Frankly speaking, it's not very honest to compare a product that was initially freeware to a purely commercial product :)

I am honest. 7864 (even more now) postings on the LLBLGen Pro support forum. The old LLBLGen DAL generator never had and never will have a support forum. LLBLGen Pro was a commercial product from the first day of its release (Sept 8, 2003).

> > Please x-tensive, don't make this a pissing contest. It doesn't have any relevance to the popularity of the product, although it shows how many users are active in the community around the application, if you take into account that the majority of users never visits a forum or at least doesn't post messages.
>
> 1) Let's be gentlemen. Personally I dislike such a tone.

Well, I dislike people questioning the truth in what I say.

> 2) You're wrong. You could be right only if there were _zero_ correlation between [the number of active users in the community around the application] and [the total number of users]. I hope it's obvious that almost any available statistical data would show a non-zero correlation factor.

Ok, the more customers you have, the more activity there will be on a forum; with, say, 100 customers you'll never get a crowded community. Is that what you were trying to explain? But I stick to the fact that the vast majority of users never post on a forum - they mostly use direct email support, if they need support at all.

Oh, and btw, my name is Frans, not France. ;)
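The sketch mentioned above - a deliberately simplified, hypothetical illustration of "pre-fabricated predicate objects" (this is not LLBLGen Pro's actual API), showing why typed query constructs catch typos at compile time that string-based queries would not:

public class Predicate
{
    private readonly string _field;
    private readonly object _value;

    public Predicate(string field, object value)
    {
        _field = field;
        _value = value;
    }

    public object Value { get { return _value; } }

    // Rendered to parameterized SQL by the mapper, never concatenated.
    public string ToSqlFragment()
    {
        return _field + " = @" + _field;
    }
}

// Generated once per entity from the schema.
public sealed class CustomerFields
{
    public static Predicate CityEquals(string city)
    {
        return new Predicate("City", city);
    }
}

// Usage: a misspelled CustomerFields.CtyEquals(...) fails to compile,
// whereas the string "Cty = 'New York'" would fail only at runtime.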
    Wednesday, October 6, 2004 5:02 AM
  • User-541866472 posted
    Frans, sorry about the name ;) Concerning everything else - as you can see, there were no _falsified_ facts in my posts.
    Wednesday, October 6, 2004 5:16 AM
  • User-1938370448 posted
    > Concerning everything else - as you can see, there were no _falsified_ facts in my posts.

Erm, you did question my honesty - using an assumption on your side to make my remark look dishonest, which was not correct. I was/am stating what is true; your assumptions were/are wrong. And this is precisely why you shouldn't start a pissing contest to try to make your product look like the best on the block.
    Wednesday, October 6, 2004 6:29 AM
  • User1509363267 posted
    I have read the posts in this thread and must admit that many of the assumptions made here about software development leave me a bit mystified. I wear two hats now - head developer and owner of the company - and I feel that all of the vendors/products here have certain similarities which are troubling, at least to me. In choosing an O/R mapper to use, my criteria were pretty simple:
1- I do not want to write another piece of ADO.
2- I want to limit my stored procedures to as few as possible.
3- I do not want to write a single byte of vendor-proprietary code, i.e., I will not learn or use an API which in any way ties me to a specific product.
4- I want efficient dynamic SQL generated and a low-overhead framework.
5- I want to be able to switch between products with a minimum of impact.

My initial choice was and remains LLBLGen Pro. I have been using it to generate my DAL for about a year now and it has fulfilled my wishes. The DAL is generated and built under VS, and I do not touch it in any way. I use the provided entity classes, transaction manager, stored procedure caller classes and - bordering on violating my criteria - collection classes. That's it, period. Where my designs, specifically the data model, do not fit LLBLGen's model, I use either stored procs or exclude those elements from its access. I am a loyal user, which means, of course, that if another product comes along which fulfills my needs faster and/or more efficiently, I am going to need 10-12 seconds to decide what to do.

The troubling item I see is this overblown and, to me, irrelevant battle over code generation vs. pure mapping. I personally do not care if the tools work by clairvoyance, are OO-based, or are written in one long BASIC program. I do not care if they generate or don't generate, spew design patterns or the biggest hacks in the history of computer software; I just care that what they do works, and works well. I need a tool that increases my productivity, which will not hinder my ability to put in place the designs I feel are needed, and whose authors remain dedicated to providing for a user like me. As I watch the different authors here - and you are all extremely talented and brilliant - I hope you do not lose sight of these simple needs.
    Wednesday, October 6, 2004 6:56 AM
  • User-1308937169 posted
    > Writing templates is a complex task - much more complex than learning a good, rich framework, library, or ORM tool. You are right that many things can be accomplished with templates, but this is a half-baked solution, not a ready-to-use product.

I do not agree with your conclusion. Half-baked? Not a ready-to-use product? They are templates that generate code - code I don't have to write. Isn't that the point of all this?

> And when you say "these are MY templates", you don't notice that you are pushing people toward a solution that works only for YOU.

The point here is that they are free to create their own. They don't have to use my templates. You get a code generator and then you create your templates to do exactly what you want them to do. It is not an out-of-the-box solution. ORMs sound like they are, but you still have to decorate your code with attributes, or inherit from their base classes, etc. There is a certain amount of work that needs to be done for all of the presented solutions. It sounds like choosing an ORM is also like choosing an architecture. You can write templates to fit into an existing application and architecture. We don't always get to re-architect a solution when we start working on it.

> On the contrary, ORM tools are made for the average Joe who does not have time to create complex templates for complex functionality and is not satisfied with the simple CRUD functionality that out-of-the-box templates provide.

No argument there. ORMs provide some very cool features, such as lazy loading, versioning, etc. You could write templates that generate that, but it is a bit of work. I don't think I have ever worked on an app that needed those features, but I can see where they might be useful. That being said, getting an ORM to support a feature that you need in order to deliver your product in a couple of weeks might be another story.

> So please stop flaming. People are right when they say that ORM and code generation are different beasts, and the fact that you use code generation for your ORM needs does not prove anything.

Thona is the only vendor that I intended to flame. x-tensive is the user who decided to criticize something that I don't think he fully comprehends.

> Now let's get to that argument about true OO principles, design practices, and templates. I have no doubt that your code and your templates conform to best OO practices.

Some people on the forum keep assuming that they "can't". Thanks for acknowledging that they can.

> But the templates that come with CodeSmith do not. They actually suck big time from a programming-practices point of view, because 95% of the code generated from those templates is just a copy/paste of the same functions with different names. This is the worst design decision a programmer can ever make.

Uh, maybe we got different templates with our install. The templates I received were not as you describe. I agree that the out-of-the-box templates are not a completed ORM. They do a pretty good job of teaching the user how to create more templates. This is not a magic bullet that you install and never have to worry about data access again. If that is what you are after, you need something else.

> I personally see only one correct application of code generators - generating metadata, be it in XML format, INI files, or C# classes. It doesn't matter. A template should not include any actual code (function bodies), but rather generate an information feed for libraries.

You are certainly entitled to your opinion, and you are correct that they can generate your metadata. But if they can generate your metadata, then they could also generate a DAL as simple or as complicated as you need. They could also generate proxies, factories, and a host of other patterns. Templated code generation is not limited to the DAL. Of course, you have to write or acquire your templates for that...
    Wednesday, October 6, 2004 9:55 AM
  • User579948574 posted
    Btw, I've made my choice: NHibernate! Thanks to my experience with Java Hibernate and the very good books that have been published on Hibernate.

2 tcarrico

>> The point here is that they are free to create their own.

Wrong. As I said, most developers do not have a choice here. They CANNOT create templates, because they do not have time to create fully functional and thus complex templates. What you are saying is like: "They are free to go to work on a bike." No they are not, because they have to be at work at 8 AM. They do not have a choice.

>> then they could also generate a DAL as simple or as complicated as you need. They could also generate proxies, factories, and a host of other patterns.

Did I say they can't? I said they should not! They can definitely generate anything. It is the developer who should decide whether that's the right way or not. I mean, you can use copy/paste to create new functions, just replacing their names. That does not mean you should do it just because you CAN do it. And mind you, when I say metadata I also include code for classes, proxies, etc. - as long as those classes do not contain repetitive function bodies. But unfortunately that's what almost all of the templates included with CodeSmith contain. They contain a huge amount of function bodies that repeat themselves hundreds of times during code generation. Because it is good to do so? No, because it is convenient to do so! Like copy and paste - very convenient!
    Wednesday, October 6, 2004 1:46 PM
  • User-1308937169 posted
    This sounds like arguing for the sake of arguing. This banter is not helping the thread along. The argument I am making is that code generation done right can remove the need for an O/R mapper and give the developer more control. No, it is not free, and no, it is not a magic plug-in. Developers can and do make templates, just like they still write a lot of code the old-fashioned way. The name of the thread is "Your favorite O/R Mapper". My favorite O/R Mapper is not an O/R mapper at all. The fewer black boxes the better...
    Wednesday, October 6, 2004 3:05 PM
  • User-1938370448 posted
    > This banter is not helping the thread along. The argument I am making is that code generation done right can remove the need for an O/R mapper and give the developer more control. No, it is not free, and no, it is not a magic plug-in. Developers can and do make templates, just like they still write a lot of code the old-fashioned way. The name of the thread is "Your favorite O/R Mapper". My favorite O/R Mapper is not an O/R mapper at all. The fewer black boxes the better...

All fine, but everyone who has written a solid O/R mapper, or is currently working on one, knows that your argument is bogus. If writing an O/R mapper costs megabytes of source code and a couple of man-years of full-time development by skilled developers, how are you going to explain that you can offer the SAME functionality with a couple of templates? I'll give you the answer: you can't. Not by far. Now, I'm not the most stupid developer on the planet, if I may say so, and it took me more than 2 years of full-time work (that is, 6 days a week of 6-10 hours) of programming and design to write the core library with the functionality it provides now. No, there is no redundant code in there which could have been provided by code generation. Sorry, but claims like the ones you make here are pretty funny then :) You can probably mimic / emulate SOME of the most-used functionality through code generation, but even that will take a lot of work. As someone clever said earlier today in this thread: most people don't have the time to write templates; they have to start their work at 8 AM and get things done.
    Wednesday, October 6, 2004 3:18 PM
  • User579948574 posted
    >>> This sounds like arguing for the sake of arguing.
>>> The name of the thread is "Your favorite O/R Mapper".
>>> My favorite O/R Mapper is not an O/R mapper at all.

Exactly, tcarrico! What the hell are you doing in this topic? :)) Are you looking for an O/R mapper? Definitely not. Are you proposing an O/R mapper? Again, not. As you said, "My favorite O/R Mapper is not an O/R mapper at all." So what exactly are you doing here? Flaming? :))

You are constantly saying things like "this works for me" or "I have never seen such and such features required in my projects". You know what? This kind of attitude... is absolutely CORRECT!... when you are looking for a tool or solution for yourself. But are you? If you come here proposing that other people use code generation as an O/R mapping tool, then this attitude is WRONG. You cannot tell them that it works as an O/R mapping tool and at the same time tell them that you do not care what exact features they need or what additional work (writing templates) they have to do. So I hope you will find the correct place for your arguing (hint: it is not here).
    Wednesday, October 6, 2004 3:24 PM
  • User-1308937169 posted
    > All fine, but everyone who has written a solid O/R mapper, or is currently working on one, knows that your argument is bogus. If writing an O/R mapper costs megabytes of source code and a couple of man-years of full-time development by skilled developers, how are you going to explain that you can offer the SAME functionality with a couple of templates? I'll give you the answer: you can't. Not by far.

How many people take advantage of more than 50% of your features? How many of your users know about more than 50% of your features? I might be able to deliver the functionality I _NEED_ in a couple of templates, but not a complete O/R mapper. My argument is that I don't need a complete O/R mapper... Your argument is based on all developers needing all of your features all the time, and I don't buy it. Data access has a million solutions.

>> most people don't have the time to write templates; they have to start their work at 8 AM and get things done.

I don't have a problem getting things done, and I only have to write the templates once, then update them as they need to be updated. A small price to pay for more control over my data access, IMO. These are tools I created, and I use them to get my job done.
    Wednesday, October 6, 2004 5:23 PM
  • User-1308937169 posted
    rofl...

> Exactly, tcarrico! What the hell are you doing in this topic? :))
> Are you looking for an O/R mapper? Definitely not.
> Are you proposing an O/R mapper? Again, not. As you said, "My favorite O/R Mapper is not an O/R mapper at all."

I honestly don't remember why I am on this thread. Maybe I was interested in the idea of a better way. Maybe looking for a better mousetrap. Maybe if the thread didn't keep wandering into code-generation bashing I wouldn't have felt the need to correct you and the others. If I remember correctly, what got this started again was this statement:

>>> x-tensive: It seems we should add something too. We've noticed there is a set of persons really inspired by the template-based approach in this forum - so let's criticize it a bit:

So what... am I supposed to let that lie? He was completely wrong in my book, and deserved a rebuttal.

> So what exactly are you doing here? Flaming? :))
> You are constantly saying things like "this works for me" or "I have never seen such and such features required in my projects". You know what? This kind of attitude... is absolutely CORRECT!... when you are looking for a tool or solution for yourself. But are you?

Is that you, Thona? The dialect and arguments are very similar... You don't want me around? You don't want my input? Stop talking about code generation and stick to O/R mapping...
    Wednesday, October 6, 2004 6:37 PM
  • User579948574 posted
    2 tcarrico

>>> Is that you, Thona?

No, it is me, a newcomer: Vagif Verdi. Btw, you should have noticed that I chose NHibernate for my needs, not Thona's product.

>>> You don't want me around?

I do not care whether you are around or not.

>>> You don't want my input?

I definitely want your input... related to the subject: O/R mapping tools, not the substitutes that work so wonderfully for you. If I need to talk about code generation tools, I will find an appropriate topic, not this one.
    Wednesday, October 6, 2004 6:50 PM
  • User-541866472 posted
    > Erm, you did question my honesty - using an assumption on your side to make my remark look dishonest, which was not correct. I was/am stating what is true; your assumptions were/are wrong.

Frans, you aren't being honest - LLBLGen was initially free for a very long period. So saying that LLBLGen Pro was initially a completely commercial product isn't the complete truth. And I hope everyone understands it. Adding a "Pro" suffix is nearly the same as changing a version number. Moreover, the product website is still the same - www.llblgen.com. So be honest :)

> And this is precisely why you shouldn't start a pissing contest to try to make your product look like the best on the block.

1) I never said it is the best.
2) Moreover, I never mentioned anything that should make you feel it's claimed to be the best. Certainly every product has its own benefits and disadvantages. DataObjects.NET is definitely no. 1 in support of OOP concepts (i.e. none of the other ORM tools support queries by interface properties, explicitly defined n-ary relationships, etc.), in security features, and it provides excellent import/export capabilities (including usual serialization and serialization to DataSets) and is good in some other areas. And certainly there are areas where it offers nothing (e.g. it doesn't provide support for legacy databases).

P.S. Frans, my posts are very close to the topic of this forum. But such posts of yours aren't.
    Wednesday, October 6, 2004 10:47 PM
  • User-1938370448 posted
    I'm starting to get a little tired of mr. x-tensive...

> Re: Your favorite O/R Mapper? by x-tensive
>
> > Erm, you did question my honesty - using an assumption on your side to make my remark look dishonest, which was not correct. I was/am stating what is true; your assumptions were/are wrong.
>
> Frans, you aren't being honest - LLBLGen was initially free for a very long period. So saying that LLBLGen Pro was initially a completely commercial product isn't the complete truth.

Erm. LLBLGen the DAL generator has the name 'LLBLGen' in common with LLBLGen Pro, but LLBLGen Pro is a complete rewrite, from scratch. Not a single line is based on the old code.

> And I hope everyone understands it. Adding a "Pro" suffix is nearly the same as changing a version number. Moreover, the product website is still the same - www.llblgen.com. So be honest :)

No, LLBLGen's website was and still is http://www.sd.nl/software ; www.llblgen.com was registered on May 8th, 2003. LLBLGen the DAL generator was released somewhere in 2002. On llblgen.com, the website went live on September 8th, 2003; before that no website was there. Do you see now why your false accusations annoy me? You're assuming and claiming things which are false, and accusing ME - the one who knows the things related to the products we release far better than you do - of false information. I AM honest; if you want to accuse me of something, come with facts or just don't say anything. Our support forum is also not reachable from the outside: only customers can access it, plus people who downloaded the demo (demo users have had access since late June 2004). We're close to 7900 postings now, in just one year. The funny thing is, there is not a SINGLE ONE about the old LLBLGen DAL generator. Not one! Now, as Paul Wilson also said before: it doesn't matter, but YOU brought it up, and for a reason. Now, because you've run into me telling you your product is apparently less popular than ours, my information is suddenly false, based on nonsense. If you can't stand the heat, stay out of the kitchen. Besides that, why would competitors choose our brand name in their Google AdWords campaigns and try to get some more hits to their websites?

> > And this is precisely why you shouldn't start a pissing contest to try to make your product look like the best on the block.
>
> 1) I never said it is the best.

I'm not born yesterday; I know why you (that's not me or anyone else) brought up the subject of how many posts you had on your support forum. You wondered who would give better support than you do. Well, you now know who does.

> 2) Moreover, I never mentioned anything that should make you feel it's claimed to be the best.

Then stop bringing up irrelevant subjects like how many postings you have on your publicly accessible forum.

> Certainly every product has its own benefits and disadvantages. DataObjects.NET is definitely no. 1 in support of OOP concepts (i.e. none of the other ORM tools support queries by interface properties, explicitly defined n-ary relationships, etc.)

This is pure BS, or must I say, marketing? Our query system is solely based on objects and interfaces; in fact, you can't specify a raw string as the query, nor do we have something like OPath-style queries in which you can make typos. Furthermore, all code is built with extensive usage of polymorphism, inheritance and patterns like Data Access Object, Data Transfer and Strategy, and is based on pure database theory. Do not confuse 'domain model' with OOP; a domain model uses OOP, but the two are not equal to each other.

> in security features, and it provides excellent import/export capabilities (including usual serialization and serialization to DataSets)

Oh, and we don't? :D

> and is good in some other areas. And certainly there are areas where it offers nothing (e.g. it doesn't provide support for legacy databases).

Right. There are some things you don't have, just as there are some things we don't have. We're working on those, like you're working on the features you're lacking.

> P.S. Frans, my posts are very close to the topic of this forum. But such posts of yours aren't.

Your remarks about what your product can do and how many postings you have on your forum - these things are close to the topic? Please read my postings in this topic and see how many are ON and how many are OFF topic. You started this pissing contest about forum postings and support value. Now that you're losing that contest it's suddenly off topic. Of course it is, but you brought it up.
    Thursday, October 7, 2004 3:29 AM
  • User-541866472 posted
    > Besides that, why would competitors choose our brand name in their Google AdWords campaigns and try to get some more hits to their websites?

I suppose they choose "LLBLGen", but not "LLBLGen Pro" - so I really don't understand why you should worry about this, given your point of view on the relationship between LLBLGen and LLBLGen Pro. Anyway, I suggest we stop arguing about this - I think everyone who reads this topic has already drawn their conclusions. If we continue, we'll simply waste our time and theirs.

> Then stop bringing up irrelevant subjects like how many postings you have ...

As I've mentioned, it isn't irrelevant - it's obvious that such a fact can be interesting to anyone choosing a product. If it isn't, then why have you paid so much attention to it? Moreover, this is a moderated forum, so I'd suggest you do the same - stop wasting words on such statements. Stay close to the subject, and nothing more. I'm really tired of answering similar posts from your side - it seems you're the most annoying person here :(

> This is pure BS, or must I say, marketing? Our query system is solely based on objects and interfaces; in fact, you can't specify a raw string as the query, nor do we have something like OPath-style queries in which you can make typos. Furthermore, all code is built with extensive usage of polymorphism, inheritance and patterns like Data Access Object, Data Transfer and Strategy, and is based on pure database theory. Do not confuse 'domain model' with OOP; a domain model uses OOP, but the two are not equal to each other.
> Oh, and we don't? :D

Frans, it seems you aren't familiar with some of these concepts, and thus completely misunderstand me. I don't mean that our queries are object-based (i.e. each query criterion combined from newly created objects of a special type). DataObjects.NET provides regular text-based queries, which is definitely more convenient (as does e.g. Genome - we think it is maybe the most feature-rich competitor of DO). I mean the following:

1) You can run the following queries in DataObjects.NET: "Select IHasAddress objects where {City}='New York' and {Country}='USA'", where City and Country are properties of the IHasAddress interface. Such a query will fetch all objects that support this interface and satisfy the specified criteria. Moreover, you can run even such queries as: "Select IHasAddresses objects where {Addresses.item.City}='New York' and {Addresses.count}>2" (IHasAddresses.Addresses is a collection of Address or IAddress objects (or even structs)).

2) DataObjects.NET supports so-called paired relationships. This means it automatically keeps a set of collection/reference properties on different types synchronized. The simplest example: executing "joe.Manager = bob" automatically leads to an update of the bob.Employees collection, i.e. running "bob.Employees.Contains(joe)" will evaluate to true, and nothing additional needs to be done to get this result (a sketch of the idea follows at the end of this post). DataObjects.NET supports all imaginable types of pairing - pairing a collection to a reference property (1-n relationship), collection to collection (m-n), reference property to reference property (1-1 relationship); moreover, it supports paired interface properties (i.e. you can declare that IManager.Employees (a collection of IEmployee) is paired to IEmployee.Manager (IManager)), as well as pairing to interface properties.

3) N-ary relationships are relationships between more than 2 objects. Usually this type of relationship is "emulated" by adding an intermediate type whose instances represent the instances of the relationship. A well-known example is the security relationship between SecureObject, Principal (User or Role) and Permission. This is a ternary relationship. Certainly you can emulate it by adding an intermediate type (i.e. a SecurityRelationship containing 3 references; some ORM tools use this approach to emulate regular 1-n/m-1/m-n relationships), but DataObjects.NET supports this type of relationship explicitly. Moreover, it supports paired (in this case, simply related) collections of such relationships declared on different types, in the same fashion as regular paired collections/properties.

4) DataObjects.NET provides an NTFS-like access control system that supports per-instance access control lists, users & roles, custom permissions, permission inheritance, etc. _None_ of the other ORM tools support these features. And e.g. interfaces are one of the fundamental OOP concepts, and in lots of cases it's _necessary_ to perform queries like the ones in my example.

5) Usual serialization and serialization to DataSets... Well, some ORM vendors implement this. But I don't know of other tools that fully support such a range of serialization options. E.g. DataObjects.NET can serialize/deserialize a graph containing persistent as well as non-persistent objects, supports add/overwrite deserialization options, can serialize only specified instances - and others as references (so only a part of the whole object graph is serialized, but all references to non-serialized persistent entities are correctly restored on deserialization), etc.

So hopefully this is enough. And please stop using "BS" and similar phrases. I, as well as most visitors of this forum, am not your relative or good friend. I expect to hear slightly more interesting things / facts in this forum. Better to keep silent.
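The paired-relationship sketch promised in point 2 - a bare-bones illustration of the synchronization idea in plain C# (hypothetical classes, not DataObjects.NET code; a real mapper would also intercept changes made directly to the collection):

using System.Collections;

public class Manager
{
    private readonly ArrayList _employees = new ArrayList();

    public IList Employees
    {
        get { return _employees; }
    }
}

public class Employee
{
    private Manager _manager;

    public Manager Manager
    {
        get { return _manager; }
        set
        {
            if (_manager != null)
                _manager.Employees.Remove(this);    // unhook from the old inverse
            _manager = value;
            if (_manager != null && !_manager.Employees.Contains(this))
                _manager.Employees.Add(this);       // hook into the new inverse
        }
    }
}

// After "joe.Manager = bob", "bob.Employees.Contains(joe)" evaluates to true.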
    Thursday, October 7, 2004 5:15 AM
  • User-1938370448 posted
    > Re: Your favorite O/R Mapper? by x-tensive > Anyway, I suggest to stop argue about this - I think everyone > who reads this topic has alreay made their conclusions. If we > continue - we'll simply waste ours and theirs time. Then don't start a pissing contest :). You go on and on about others who are at fault here, but it is YOU and you alone who started the argument about the amount of postings on a forum, and you alone questioned the numbers of others. (and providing false assumptions) > > Then stop bringing up irrelevant subjects like how many postings you have ... > > As I've mentioned, it isn't irrelevent - it's obvious that > such fact can be interesting for anyone who chooses the > product. If it isn't, than why have you paid so many attention to it. I payed attention to the false claims from your side about my facts on the subject you introduced into this discussion. As I said before, how many postings a forum has is highy irrelevant, as most customers will not post on forums but use direct support lines like email. Perhaps a lot of posts are posted by a small group of people? Will that say anything? No, not at all. Of course, it can (but doesn't have to! that's why we have a different opinion on this) be relevant as an indication for the activity in a community around a product. If there are 500 postings in a year and the last one was posted 2 months ago, it is not a very active community, i.e.: asking a question CAN be resulting in no answer. But is that something you can conclude by looking at that forum? No, not at all. Perhaps no-one HAS TO ask questions because the documentation is great or there are no issues. It can also be that customers like to use other forms of giving feedback, asking questions and reporting issues. Does a forum postcount say anything about the response time of the support personell? No, nothing. Also it doesn't say anything about the speed issues are fixed, new features are added and how fast emails are answered. That's why I questioned the relation between having a large amount of forum postings and the quality of support. Of course I'm happy with our community and that competitors are not having that much activity on their forums but what I find more important is that a customer is/will be happy with the product, something that can't be determined directly from a total amount of posts on a forum. > Moreover, this is a moderated forum, so I'd suggest you to do > the same - to stop waste words on such statements. Be closer > to the subject, and nothing more. I really tired to answer on > similar posts from your side - it seems you're the most > annoying per son here :( For the last time: I didn't bring up postcounts on a forum, you did. I didn't question your numbers, you questioned mine. And please, don't lecture me what I can and can't do as if I am your employee or your child. If a moderator has a problem with what I have to say, the moderator can email me. I now hope the issue with the postcounts is over, we have almost 8 times more posts on our forums than you do, and it is me who doesn't make that important but you do. We have a different opinion about if # of postcounts is relevant. > > This is pure BS or must I say, marketing? our query system is solely > > based on objects and interfaces, in fact, you can't specify a raw > > string as the query nor do we have something like OPath style queries > > in which you can make typo's. 
Furthermore, all code is build with extensive usage of > > polymorphism, inheritance and patterns like Data Access Object, Data > > Transfer, strategy and based on pure database theory. > > Do not confuse 'domain model' with OOP, domain model uses OOP, but it's not equal to each > > other. > > Oh, and we don't? :D > > Frans, it seems you aren't familiar with some concepts, and > thus completely don't understand me. I'll probably know the concepts, it wasn't clear to me what you meant with what you wrote. > I don't mean that our queries are object-based (ie. each > query criteria is combined from newly created objects of > special type). DataObjects.NET provides regular text-based > queries, that's definitely more convenient (as well as eg. > Genome - we think thi s is may be the most feature-reach > competitor of DO). I mean the following: ah, and you support group by, aggregates, sqlexpressions and the like too, or prefetch paths with filtering in your most feature rich query system? I also don't see why text based queries are 'OOP' as you claimed you had. Remember: you said: "DataObjects.NET is definitely no.1 in support of OOP concepts (ie. none of ORM tools except DO support queries by interface properties, explicitely defined n-ary relationships, etc.)". No.1 in support of OOP concepts, that's what I read. Now your interface typed feature is great, no question about that, but a claim as 'definitely no.1. in support of OOP concepts' is pretty huge. Especially if others use deep OOP constructs as well. That's my problem with your postings. You claim a lot. Please don't do that. This thread is not about advertising products, it's about discussing concepts, for example why you support a given feature and you don't support another. Wouldn't it be much more interesting to debate why you for example mangle the database model to support features and why we don't do that? For the reader it would be better, so he/she can base his/her decision which application to pick more on actual facts than marketing rethoric like 'definitely no.1 in support of OOP concepts'. > 1) You can run the following queries in DataObjects.NET: > "Select IHasAddress objects where {City}='New York' and > {Country}='USA' ", where City and Country are properties of > IHasAddress interface. Such query will fetch all objects that > supports this inte rface and satisfy specified criteria. Nice feature. > 2) DataObjects.NET supports so-called paired relationships. > This means that it automatically maintains sychronized a set > of collection\reference properties from different types. The > simplest example: execiting "joe.Manager = bob" autometically > leads to update of bob.Empoyees collection, ie. running > "bob.Employees.Contains(joe)" will evaluate to true, and > nothing additional should be done to get this result. Oh but we do that too, my friend :). myOrder.Customer = myCustomer which automatically adds myOrder to myCustomer.Orders, synchronizes PK of myCustomer with the FK in myOrder (if myCustomer is new, otherwise it is done during the recursive save of the graph). See, 'definitely no.1' claims can backfire on you. Make no mistake about it, your product is very feature-rich and looks to me a solid product, but so are others on the market, which probably also have features you have and more. 
For marketing purposes, claims can go as far as your imagination, but lets leave marketing out of this discussion :) > DataObjects.NET supports all imaginable types of pairing - > pairing a collection to reference pr operty (1-n > relationship), collection to collection (m-n), reference > property to reference property (1-1 relationship), moreover, > it supports paired interface properties (i.e. you can declare > that IManager.Employees (collection if IEmployee) is paired to > IEmployee.Manager (IManager)), as well as pairing to interface properties. We call that mapping a field on a relation. Including references with self. I like the interface approach you have though, but I doubt the functionality you describe here is unique. Most other O/R mappers have similar features. > 3) N-ary relationships are relationships between more then 2 > objects. Usually this type of relationship is "emulated" by > adding an intermediate type which instances represents the > instances of such relationships. in NIAM/ORM this is called an objectified relationship: the relationship becomes an entity of itself. > Well-known example is security relations hip between SecureObject, Principal (User > or Role) and Permission. This is a ternary relationship. > Certainly you can emulate this relationship by adding an > intermediate type (ie. SecurityRelationship containing 3 > references; some ORM tools use this way to emulate regular > 1-n\m-1\m-n relationships), but DataObjects.NET supports this > type of relationship explicitely. Moreover, it supports > paired (in this case - simply related) collections of such > relationships declared in different types in the same fas > hion as regular paired collections\properties. isn't this just supporting compound PKs and each (most) field(s) in the PK is an FK ? In theory, (prof. Halpin, prof. Nijssen) this is just objectifying the relationship between 2 or more entities and in fact see that objectified relationship as a 'relation' or entity, which can have attributes of itself. No emulation with an intermediate object, that's the way you have to implement it in a relational model. a lot of the O/R mappers support compound PKs, so querying the objectified relationship (the SecurityRelationship entity) is not a problem. In fact, we offer the objectified relationship as a separate entity as well, directly usable in your object model. > 4) DataObjects.NET provides NTFS-like access control system, > that supports per-instance access control lists, users & > roles, custom permissions, permission inheritance, etc. This feature is unique indeed, although some O/R mappers do offer security features. > _None_ of other ORM tools supports these features. But eg. > interfaces are one of fundamental OOP concepts. And in lots > of cases it's _necessary_ to perform queries like in my example. none? only feature 4) is somewhat unique to your product. > 5) Usual serialization and serialization to DataSets... Well, > some ORM vendors implements this. But I don't know other > tools that fully supports such range of serialization > options. Eg. DataObjects.NET can serialize\deserialize a > graph containing persis tent as well as non-persistent > objects, We can too, call WriteXml() and the complete graph is exported to Xml, call ReadXml() and you can re-instantiate a complete graph from Xml. Of course all xml is generic, so directly usable in any system which can consume xml. Of course serialization / deserialization using soap and binary formatters is also supported. 
Or, if you want to work with datasets, no problem.

> supports add\overwrite deserialization options, can serialize only specified instances, and others - as references (so only a part of the whole object graph will be serialized, but all references to non-serialized persistent entities will be correctly restored on deserialization), etc.

This is more advanced than what we support at the moment. I doubt it is of that much importance though, as a call for data on a service is likely coming from the outside, so the data will be fetched using the query system and then serialized.

> So hopefully this is enough. And please stop using BS and similar phrases. I, as well as most of the visitors in this forum, aren't your relatives or good friends. I expect to hear a bit more interesting things \ facts in this forum. Better keep silence.

Well, then stop claiming things like 'definitely no.1'. I can drop a list of features we support that you don't. Interesting? Perhaps. But I'm not interested in using this forum for marketing purposes and I don't think it is in your interest to do that either. As I said earlier on, it's much more interesting to debate aspects of the application - why you have opted for this and not for that - than to claim things like you're the no.1 and NONE of the competition can do a given set of features...
    Thursday, October 7, 2004 6:24 AM
  • User-541866472 posted
    > > Moreover, this is a moderated forum, so I'd suggest you to do the same - to stop wasting words on such statements. Be closer to the subject, and nothing more. I'm really tired of answering similar posts from your side - it seems you're the most annoying person here :(
> For the last time: I didn't bring up postcounts on a forum, you did. I didn't question your numbers, you questioned mine. And please, don't lecture me on what I can and can't do as if I am your employee or your child. If a moderator has a problem with what I have to say, the moderator can email me.

Nice idea, but as you could notice, I wanted to say exactly this :) I mentioned that this is a moderated forum because you mentioned that one of my posts is off topic. Other comments will be added shortly...
    Thursday, October 7, 2004 9:24 AM
  • User579948574 posted
    To Frans and the x-tensive guys: you intrigued me with features like pairing, n-ary relations, querying by interfaces, and NTFS-like security (mostly x-tensive, though :)) ). I did skip your accusations of each other, though. It is just boring, and I do not have time to read all that. My question is: do I have to subclass my data objects from your classes/interfaces in order to use all that cool functionality?
    Thursday, October 7, 2004 12:41 PM
  • User-1342539384 posted
    Use a reference to these data objects. I am using LLBLGen Pro. So if I want to make use of the functionality of CustomerEntity (generated by Pro), I create a separate class called CustomerBusiness and give it a DataObject property which returns the CustomerEntity, then add a few more properties like Id in CustomerBusiness which return DataObject.Id, and so on for the other properties or variables. Finally, when ready to save, call CustomerBusiness.Save() which calls DataObject.Save(). This way you can have your own object model for your business objects, have them refer to the data objects for any kind of data, and implement business logic in the business objects. Example:

    class CustomerBusiness : BaseBusinessAbstract
    {
        private int id;
        private CustomerEntity data;

        public CustomerBusiness(int id)
        {
            this.id = id;
        }

        // Lazily creates the generated entity on first access.
        public CustomerEntity DataObject
        {
            get
            {
                if(data == null)
                {
                    data = new CustomerEntity(id);
                }
                return data;
            }
        }

        public int Id
        {
            get { return DataObject.Id; }
            set { DataObject.Id = value; }
        }

        public void Save()
        {
            DataObject.Save();
        }

        // ... other business logic methods related to Customer can be implemented here.
    }

CustomerEntity is the one generated by Pro; CustomerBusiness belongs to our object model. Similarly we can add other business classes per entity/per collection. Thanks.
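A minimal usage sketch of the wrapper above; the id value and the CompanyName field of the generated entity are made up for illustration:

    // Hypothetical usage: wrap an existing customer, touch a field, persist.
    CustomerBusiness customer = new CustomerBusiness(10);
    customer.DataObject.CompanyName = "Acme";   // the entity is lazily loaded here
    customer.Save();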
    Thursday, October 7, 2004 1:12 PM
  • User-541866472 posted
    Let's continue:

> Ah, and you support group by, aggregates, sql expressions and the like too, or prefetch paths with filtering, in your most feature-rich query system?

We deliberately don't allow fetching anything except objects in queries - the reasons for this decision are the following: any data should be accessed via transactional properties or methods of persistent instances or special services (DataServices - non-persistent objects having transactional methods). This approach allows us to ensure that middle-tier code will be able to process any attempt to get some persistent data, and eg. deny it (by throwing an exception) in some cases; the simplest example is when the current user has no necessary security permission. Some properties shouldn't be accessible outside of the middle tier. Aggregate queries provide a way to do this indirectly, thus they are disabled everywhere except in subqueries in the where clause. If it's necessary to perform some aggregation on the SQL Server level, a low-level API accessible only from the middle tier should be used. Such things as LIKE, expressions and joins are certainly supported.

Concerning prefetch paths - we provide a very similar solution:
1) You can run any query with the LoadOnDemand option. In this case such a query will fetch just object identifiers and version numbers (the second column allows us to mark a subset of cached objects as "valid in the active transaction", and thus not run cache validity checks for them further). Nevertheless the returned QueryResult will look absolutely the same as any other; DataObjects.NET will simply load it part-by-part further. This is quite useful e.g. when the distinct option with several joins is used. Any collection behaves the same by default.
2) QueryResult, Session, DataObjectCollection and ValueTypeCollection provide Preload(...) methods. Preload allows you to push a set of objects into the cache, as well as preload some of their [LoadOnDemand]\collection fields. Preload fetches only uncached objects, with a minimal number of queries.

A combination of both these methods allows you to decrease the number of queries to a minimum - e.g. you fetch a set of objects (let's say root objects), preload some of their collections (usually by a single query), determine the identifiers of the objects you need in the next step (by processing the root objects), and preload them + maybe some of their collections (again, by one or two queries), etc... This is more intelligent than the use of prefetch paths, since:
- Only necessary data is fetched (but not every object on the prefetch path)
- Any object\collection is fetched just once (recursive paths aren't unusual)

> I also don't see why text-based queries are 'OOP', as you claimed you had.

I didn't, I just said they're more convenient. This is obvious - e.g. you're writing in C#, not in its CodeDOM model.

> Wouldn't it be much more interesting to debate why you, for example, mangle the database model to support features and why we don't do that? For the reader it would be better, so he/she can base his/her decision on which application to pick more on actual facts than on marketing rhetoric like 'definitely no.1 in support of OOP concepts'.

Well... Mangle isn't a word that can be used here - simply because there is nothing to mangle. Almost no existing database can be used with DO as-is, thus there is nothing that could be mangled.
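To picture the LoadOnDemand + Preload pattern described above, a hypothetical sketch; the session, query option and Preload names are taken from the names mentioned in this thread, and the exact DataObjects.NET signatures are assumptions, not verified API:

    // Step 1: fetch just identifiers + versions for the root objects
    // (LoadOnDemand is described above as a query option).
    QueryResult customers = session.CreateQuery(
        "Select Customer objects where {Country}='USA'",
        QueryOptions.LoadOnDemand).Execute();

    // Step 2: pull the actual root objects into the cache in one round trip,
    // then preload a collection field of each root.
    customers.Preload();
    customers.Preload("Orders");

    // Step 3: process the roots, collect the ids needed next, preload those
    // too; only uncached objects are fetched, each one at most once.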
We've chosen a different way compared to LLBLGen and most similar tools - DO doesn't allow you to map properties of persistent objects any way you wish; it establishes the mapping on its own (it allows you to specify some mapping properties, such as column names, but generally it doesn't allow you to map anything as you wish). This approach provides us with a set of benefits, some of which (such as support of queries by properties of interfaces) I've already mentioned. I'll enumerate a few others further on.

So, to describe this in brief, the main differences between our tools are:
1) DO considers persistent classes (their properties, applied attributes, etc.) to be the only information source. On the contrary, LLBLGen Pro puts the database model into the corner of the room.
2) DO builds and constantly maintains the database for a particular Domain. It handles this completely automatically. It is capable of automatically updating the database schema, as well as the objects that are already contained in it (some changes in persistent types require this). On the contrary, LLBLGen Pro updates the persistent model when the database schema changes.

So if you're going to switch to DO, you should be ready to convert your database. This isn't a really complex task; eg. DO supports import\export to DataSets. DataObjects.NET.Data.Adapter is a component that handles this task. You establish a mapping between its tables\columns and persistent types\properties (any type of mapping is allowed here; mappings are edited in the VS.NET designer), fill it with legacy data, and call the Adapter's Update method to push it into the DataObjects.NET-maintained database.

> Nice feature.

It's not simply nice, but really useful. Really, can you write a really complex system without the use of interfaces in C#? Forgot to add: DataObjects.NET allows the use of interface properties, collections of interfaces, etc. And all such features don't seriously impact performance (moreover, there is almost no noticeable performance impact). Further, you've mentioned that this feature isn't unique - so which ORM tool also supports it?

> Oh, but we do that too, my friend :). myOrder.Customer = myCustomer automatically adds myOrder to myCustomer.Orders and synchronizes the PK of myCustomer with the FK in myOrder (if myCustomer is new; otherwise it is done during the recursive save of the graph).

Just to make it clear: if I understood everything correctly, exactly this code won't throw an exception:

    myOrder.Customer = myCustomer;
    if (!myCustomer.Orders.Contains(myOrder))
        throw new ApplicationException("Frans lies.");

> A lot of the O/R mappers support compound PKs, so querying the objectified relationship (the SecurityRelationship entity) is not a problem.

I said that we explicitly support this type of relationship, but not via an additional entity type. Do you feel the difference? Basically the difference is the following: usually any regular entity is a complex object that has a set of additional fields (such as update history, etc.), thus it's less efficient to use them (they eat more RAM, etc.) and, moreover, less convenient compared to the case when n-ary relationships are supported explicitly (so we simply provide a more convenient and optimized way to work with them).

> None? Only feature 4) is somewhat unique to your product.

Really? So you're definitely a liar. I think 1, 3 and 4 definitely are (we're certainly speaking about .NET ORM tools?), and 2 is under question. Nevertheless, 2 is the less complex feature of the mentioned set.
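To show what feature 1) - querying by interface properties - looks like in practice, a sketch; the query text is taken verbatim from this thread, while the DataObject base class, the session object and the CreateQuery/Execute calls are assumptions for illustration, not verified DataObjects.NET API:

    // Two unrelated persistent types share an interface...
    public interface IHasAddress
    {
        string City { get; set; }
        string Country { get; set; }
    }

    public class Customer : DataObject, IHasAddress
    {
        private string city, country;
        public string City { get { return city; } set { city = value; } }
        public string Country { get { return country; } set { country = value; } }
    }

    public class Warehouse : DataObject, IHasAddress
    {
        private string city, country;
        public string City { get { return city; } set { city = value; } }
        public string Country { get { return country; } set { country = value; } }
    }

    // ...so one query can fetch instances of both, with no common base class:
    QueryResult result = session.CreateQuery(
        "Select IHasAddress objects where {City}='New York' and {Country}='USA'").Execute();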
Anyway, this set is just what I remembered almost at random. I can easily add a few more really nice features that aren't supported by anyone else, but I feel that Frans will continue to sing his song about BS, marketing, etc. So let's switch to the things suggested by Frans - I really would like to know which _unique_ features LLBLGen Pro has. And there is one more question that is really interesting to me (and most likely to others): how does LLBLGen Pro deal with inheritance? Eg. I have a Person type, and a set of its descendants: Employee (Person descendant) and Manager (Employee descendant). Let's say some type (eg. PersonGroup) has a Persons collection (a collection of Person objects). Questions:
1) Can I put an Employee into it?
2) How many queries are needed to fetch all objects from a particular somePersonGroup.Persons collection?
3) Will a query like this fetch Employees & Managers also: "Select Person objects where {Name}='John' "? Or is it possible to run a query that will select all Person, Manager and Employee instances (i.e. Person and all its descendants) with Name=="John"?
4) Does any of these capabilities depend on the underlying database schema?
    Thursday, October 7, 2004 1:20 PM
  • User-1938370448 posted
    > Re: Your favorite O/R Mapper? by x-tensive
> Let's continue:
> > Ah, and you support group by, aggregates, sql expressions and the like too, or prefetch paths with filtering, in your most feature-rich query system?
>
> We deliberately don't allow fetching anything except objects in queries - the reasons for this decision are the following: any data should be accessed via transactional properties or methods of persistent instances or special services (DataServices - non-persistent objects having transactional methods). This approach allows us to ensure that middle-tier code will be able to process any attempt to get some persistent data, and eg. deny it (by throwing an exception) in some cases;

It's a choice you have to make, of course: offer the power of a relational model as well, or not. Offering feature XYZ also has consequences, and you decided not to take those consequences. I personally find the relational model very powerful and I think that without features which unleash that power, a lot of functionality is missed. For example, reporting benefits greatly from the power of the relational model. Also silly lists like orders with the customer name in a separate column (read from the customer entity)... it's a thing developers have to face every day, and a model with pure objects can be limiting in that context. Of course, if the O/R mapper requires a fixed database format (as yours does), the power of the relational model is pretty much gone, so your choice is understandable.

> Concerning prefetch paths - we provide a very similar solution:
> 1) You can run any query with the LoadOnDemand option. In this case such a query will fetch just object identifiers and version numbers (the second column allows us to mark a subset of cached objects as "valid in the active transaction", and thus not run cache validity checks for them further).

To clear things up: you store cache control parameters in the database as well?

> Nevertheless the returned QueryResult will look absolutely the same as any other; DataObjects.NET will simply load it part-by-part further. This is quite useful e.g. when the distinct option with several joins is used. Any collection behaves the same by default.

I don't quite understand what you mean here. Are you saying that, if I load a set of customers and I also want their order entities, you load each customer's set of orders separately? (That's what load-on-demand actually does, which can be good, but in some situations not that great.)

> 2) QueryResult, Session, DataObjectCollection and ValueTypeCollection provide Preload(...) methods. Preload allows you to push a set of objects into the cache, as well as preload some of their [LoadOnDemand]\collection fields. Preload fetches only uncached objects, with a minimal number of queries.

How does it know which objects not to fetch? Say I want all customers with an order in July. The query has to consult the database; it can't rely on the cache, as another thread, another application instance, can have added a customer, altered data or whatever, which means that every query actually has to be executed on the database and the results have to be checked against the cache.

> A combination of both these methods allows you to decrease the number of queries to a minimum - e.g.
> you fetch a set of objects (let's say root objects), preload some of their collections (usually by a single query), determine the identifiers of the objects you need in the next step (by processing the root objects), and preload them + maybe some of their collections (again, by one or two queries), etc... This is more intelligent than the use of prefetch paths, since:
> - Only necessary data is fetched (but not every object on the prefetch path)

So if I want to fetch a set of customers and the graph:
Customer.Orders
Customer.Orders.OrderDetails
Customer.Orders.Products
Customer.Address
how many queries do I need for, say, 50 customers? 5? We can filter on prefetch paths btw, so if I only want to prefetch the last order for every customer, I can do that; it automatically limits the graph further on. Btw, how will your preloading work when you have multiple fields in the PK?

> > Wouldn't it be much more interesting to debate why you, for example, mangle the database model to support features and why we don't do that? For the reader it would be better, so he/she can base his/her decision on which application to pick more on actual facts than on marketing rhetoric like 'definitely no.1 in support of OOP concepts'.
>
> Well... Mangle isn't a word that can be used here - simply because there is nothing to mangle. Almost no existing database can be used with DO as-is, thus there is nothing that could be mangled.

Well, OK, I have to explain that. I'm a relational theory purist (you're an OO purist, if I may call you that, which causes our different ways of looking at things: mapping a relational model onto classes vs. mapping classes onto 'a' relational model). If I design my relational model in NIAM or ORM (http://www.orm.net ), I get an abstract model. When I create an E/R model from that, it has to be 'mangled' to make your tool work on it, i.e. transformed to something sometimes far away from the actual abstract model, which renders it void. If you don't care about NIAM and the E/R model, of course, then the relational model changes are of no concern to you :) However, a lot of developers have to (and want to) obey relational rules.

> We've chosen a different way compared to LLBLGen and most similar tools - DO doesn't allow you to map properties of persistent objects any way you wish; it establishes the mapping on its own (it allows you to specify some mapping properties, such as column names, but generally it doesn't allow you to map anything as you wish). This approach provides us with a set of benefits, some of which (such as support of queries by properties of interfaces) I've already mentioned. I'll enumerate a few others further on.

True, it has advantages, as the relational model doesn't get in your way when you want to work with 'objects', so the conversion between data and a physical object is likely very simple. The downside of this is of course that the database isn't usable by other applications. This can be a big problem when you want to write that e-commerce application which has to work with the big Oracle box also used by several departments.

> So, to describe this in brief, the main differences between our tools are:
> 1) DO considers persistent classes (their properties, applied attributes, etc.) to be the only information source. On the contrary, LLBLGen Pro puts the database model into the corner of the room.
I don't know what you mean by 'the corner of the room', because the relational model is the heart of LLBLGen Pro's design philosophy, i.e.: first design your abstract datamodel using NIAM or ORM, create an E/R model and with that the relational model. This has been common procedure in a lot of organisations for a long time already. Use that relational model (or better, the abstract model it was made of) as the source for the application layer(s) on top of that.

> 2) DO builds and constantly maintains the database for a particular Domain. It handles this completely automatically. It is capable of automatically updating the database schema, as well as the objects that are already contained in it (some changes in persistent types require this). On the contrary, LLBLGen Pro updates the persistent model when the database schema changes.

Yes, it comes down to:
DO: maps a class model onto its own database model.
LLBLGen Pro: maps a relational model onto a class model.
Both have their advantages and disadvantages, which are quite distinctive I'd say, so if one approach appeals more to you, you automatically won't like the other approach.

> So if you're going to switch to DO, you should be ready to convert your database. This isn't a really complex task; eg. DO supports import\export to DataSets.

Some customers of ours have databases with over 2000 tables and terabytes of data. Converting those can be a real pain, I think. Especially if the model will change: how are you handling schema changes? Because a DBA in a big corp will likely demand to see the change scripts and test them first, before the large data is migrated, to limit downtime.

> > Nice feature.
>
> It's not simply nice, but really useful. Really, can you write a really complex system without the use of interfaces in C#?

LLBLGen Pro is completely written with interfaces :) So no, I can't imagine a system written without interfaces. That is: when genericity is required. When you work with the theory developed by dr. P. Chen, Yourdon, Codd and Halpin, you're focused on entities. How you address these is not important; the only thing that's important is THAT you can address them. I do see the necessity for it when you work your way down from the classes towards the database.

> Further, you've mentioned that this feature isn't unique - so which ORM tool also supports it?

Well, every O/R mapper which supports inheritance querying does. True, if you implement more than one interface on a class, your feature is unique, however when will that be necessary when you can also query on a base type? Most O/R mappers supporting inheritance do that. That's why I said that I don't think it is a unique feature per se.

> > Oh, but we do that too, my friend :). myOrder.Customer = myCustomer automatically adds myOrder to myCustomer.Orders and synchronizes the PK of myCustomer with the FK in myOrder (if myCustomer is new; otherwise it is done during the recursive save of the graph).
>
> Just to make it clear: if I understood everything correctly, exactly this code won't throw an exception:
>
> myOrder.Customer = myCustomer;
> if (!myCustomer.Orders.Contains(myOrder))
>     throw new ApplicationException("Frans lies.");

That code won't throw an exception. The 'Customer' property in myOrder will set up synchronization with myCustomer, which triggers the addition of myOrder to Orders.
But of course, to be sure, I've tested it:

    [Test]
    public void SyncTest()
    {
        CustomerEntity myCustomer = new CustomerEntity("CHOPS");
        OrderEntity myOrder = new OrderEntity();
        myOrder.Customer = myCustomer;
        if(!myCustomer.Orders.Contains(myOrder))
        {
            throw new ApplicationException("foo");
        }
    }

Which worked. I got a little scared at first when I made a typo in the Contains(): I wrote "myCustomer", which of course threw the exception, but *phew*... it was just my crappiness :)

> > A lot of the O/R mappers support compound PKs, so querying the objectified relationship (the SecurityRelationship entity) is not a problem.
>
> I said that we explicitly support this type of relationship, but not via an additional entity type. Do you feel the difference?

Please keep in mind that I'm looking at the world through the NIAM/ORM glasses, which means that what you described is, in a relational model, simply a new entity. You may of course call it something different and THUS claim you have something unique, but don't blame me if I call it something non-unique. See: http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dv_vstechart/html/vstchvsea_ormoverview.asp

> Basically the difference is the following: usually any regular entity is a complex object that has a set of additional fields (such as update history, etc.), thus it's less efficient to use them (they eat more RAM, etc.) and, moreover, less convenient compared to the case when n-ary relationships are supported explicitly (so we simply provide a more convenient and optimized way to work with them).

OK, if you make an optimization out of it, I can live with that, however it's not something unique. It's very old in fact, but let's put the claiming aside. What I'd like to see is how you'd use it in practice. For example, what stops me from formulating a query on 3 PK fields to retrieve an entity?

> > None? Only feature 4) is somewhat unique to your product.
>
> Really? So you're definitely a liar.

I'm a liar?

> I think 1, 3 and 4 definitely are (we're certainly speaking about .NET ORM tools?), and 2 is under question. Nevertheless, 2 is the less complex feature of the mentioned set.
>
> Anyway, this set is just what I remembered almost at random. I can easily add a few more really nice features that aren't supported by anyone else, but I feel that Frans will continue to sing his song about BS, marketing, etc.

Well, I can also list some really nice features not supported by anyone else, especially not you; would that make you more comfortable? :)

> So let's switch to the things suggested by Frans - I really would like to know which _unique_ features LLBLGen Pro has.

Well, we let users formulate typed lists based on entities, which unleashes the power of the relational model; we offer very powerful functionality to produce constructs in C# or VB.NET which deliver data ready for reporting, with sql expressions, aggregates, group by and having clauses. We have multi-versioning of entity fields in memory with rollback; we support multiple databases (types also) in one application, so you can read from SqlServer and write into Oracle in the same routine. We support truly context-free entity class usage, so you can grab an entity from service A, edit it in the client and store it using service B. We can work with any legacy database out there and support all database constructs also supported by ADO.NET (thus no UDTs on Oracle).
Our concurrency support model uses pluggable factories, so you can have a very fine-grained concurrency scheme, on a per-object basis. Although I don't think any of these are really 'unique': there will be applications out there which support this or that. Frankly, I don't care that much whether some feature is supported by a competitor as well, or that we're not the 'purest O/R mapper' on the planet. What counts is solving the data-access problem in a very efficient and, most of all, productive way; users can start writing BL code after spending perhaps a minute using the designer, even with databases with thousands of tables and views.

> And there is one more question that is really interesting to me (and most likely to others): how does LLBLGen Pro deal with inheritance? Eg. I have a Person type, and a set of its descendants: Employee (Person descendant) and Manager (Employee descendant). Let's say some type (eg. PersonGroup) has a Persons collection (a collection of Person objects).

As inheritance is hard to model in a relational model, we left it untouched for now. We do support single table inheritance in code, but the full inheritance functionality is planned to be implemented this winter. As I said earlier, we look at the same problem with different visions. For you, inheritance is a cornerstone; for me it's not that relevant, as with a relational model, inheritance is likely to be absent or not that important. If I implement inheritance with the single table inheritance model and the adapter paradigm (we support 2 paradigms: persistence as behavior (selfservicing) and persistence as a service (adapter)), I'll try to answer the following questions:

> 1) Can I put an Employee into it?

Yes.

> 2) How many queries are needed to fetch all objects from a particular somePersonGroup.Persons collection?

1, if I understand you correctly.

> 3) Will a query like this fetch Employees & Managers also: "Select Person objects where {Name}='John' "? Or is it possible to run a query that will select all Person, Manager and Employee instances (i.e. Person and all its descendants) with Name=="John"?

Yes.

> 4) Does any of these capabilities depend on the underlying database schema?

Yes, a separate column has to be present, with the class type. One of the problems with the relational model is that you can't 'model' inheritance without adding values.
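To make that last answer concrete, a minimal sketch of the single-table ('discriminator column') approach described here; the table layout, column names and factory class are made up for illustration, using the Person hierarchy from the questions above:

    //   CREATE TABLE Person
    //   (
    //       Id         int PRIMARY KEY,
    //       Name       varchar(50),
    //       PersonType varchar(20)   -- 'Person', 'Employee' or 'Manager'
    //   )

    using System.Data;

    public class Person
    {
        public string Name;
        public Person(IDataRecord row) { Name = (string)row["Name"]; }
    }
    public class Employee : Person { public Employee(IDataRecord row) : base(row) { } }
    public class Manager : Employee { public Manager(IDataRecord row) : base(row) { } }

    public class PersonFactory
    {
        // The discriminator column decides which class to materialize, so a
        // query for Person rows transparently yields Employee and Manager
        // instances too - which is why one query suffices for question 2.
        public static Person CreateFromRow(IDataRecord row)
        {
            switch((string)row["PersonType"])
            {
                case "Manager":  return new Manager(row);
                case "Employee": return new Employee(row);
                default:         return new Person(row);
            }
        }
    }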
    Thursday, October 7, 2004 3:18 PM