Optimization by JIT Compiler vs C# Compiler vs Programmer (and Code Analysis)

  • Question

  • Disclaimer:  Yes, I know that for the vast majority of code you need not worry about optimization... it just doesn't matter.  Correctness and maintainability are far more important.  But there is usually that 3% of code whose optimization you do need to worry about... because if you don't, it will make the overall user experience a bad one.

    In another thread we have been discussing the speed of operators and inlining... and studying that has led me down a path of looking at the very different expectations regarding optimization that I should have as a .NET / C# developer, compared with my former life as a traditional C++ developer.

    In C++ with modern compiler technology, you could be pretty sloppy optimization-wise.  For example, you could write foo->bar->baz->doOneThing() and on the next line write foo->bar->baz->doAnotherThing() and know that the compiler would reliably evaluate foo->bar->baz just once, hold the result in a register, and make the two calls off that pointer.  Often I would see loops where code could be moved outside the loop by adding a few extra local vars, but I wouldn't bother rewriting that person's code because I knew the compiler would do it for me.  In C++, having layers of classes with layers of functions that just call other functions is common... and with no worries, as we know it will all be inlined away.

    In C#, it seems the C# compiler does very little (or possibly no) optimization, leaving that for the JIT Compiler.  But the JIT Compiler is somewhat limited in the sophistication of the optimization that it can do.  It can do more than peephole optimization, but it is limited to fairly local optimization and fairly local heuristics.  For example, the inlining heuristics can recognize code in an inner loop, but not code in an equivalent recursion.
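    To make that last point concrete, here is a minimal sketch of the shape of code I mean (the method names are invented for illustration, and this is not a claim about the exact heuristics of any particular JIT version):

        // A tiny non-virtual helper like this is a typical inlining candidate:
        static int Square(int x) { return x * x; }

        // With the call site sitting in a hot loop, the inliner has an obvious win:
        static long SumOfSquaresLoop(int n)
        {
            long total = 0;
            for (int i = 0; i < n; i++)
                total += Square(i);        // call inside an inner loop
            return total;
        }

        // The equivalent recursion computes the same thing, but recursive call
        // sites generally don't get the same treatment from the inliner:
        static long SumOfSquaresRecursive(int n)
        {
            if (n == 0) return 0;
            return Square(n - 1) + SumOfSquaresRecursive(n - 1);
        }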

    In fact, based on looking at the generated code around inlining, it seems many of the fairly local optimizations are not performed.  So many of the things I'd have ignored in C++, I now need to hand-optimize as a programmer in C#.  Initially I thought that was "horrible"... but as I started thinking about the hand optimizations I need to do, it occurred to me that many of them (although extra work for the programmer) arguably improve readability and thus maintainability.  For example, common sub-expression elimination requires you to watch for repeated expressions and add extra lines of code with extra local variables.  But it eliminates cut-n-paste sub-expressions (no worry about changing one and forgetting to change the others) and gives you a local variable name that explains your intent/expectation to the reader.  And that can make debugging easier, as you have extra places to put breakpoints and extra variables you can watch.  Hmmm....
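    To show what I mean, here is a hedged sketch of that hand-done common sub-expression elimination (the types and property names are invented purely for illustration):

        // Before: the repeated chain is cut-n-pasted, and I'm left hoping the
        // JIT collapses it for me.
        order.Customer.PrimaryAddress.Validate();
        order.Customer.PrimaryAddress.Normalize();

        // After: hoist the common sub-expression into a named local.  The name
        // documents intent, gives me something to watch in the debugger, and
        // removes the duplicated chain.
        var shippingAddress = order.Customer.PrimaryAddress;
        shippingAddress.Validate();
        shippingAddress.Normalize();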

    Pondering that reminded me of my excursion into the Code Analysis rules that MS programmed in for us.  For example, one rule I thought was moronic complains about code in constructors where you initialize things to zero.  It tells you "The code will run faster if you don't set values to zero, as they are pre-initialized to zero by the system anyway."  I thought that was stupid because that would be a trivial optimization for the compiler... so why force the user to do it... and further, I believe it's good documentation to have those lines in there (so you know I intended to initialize those to zero... I didn't just forget).  So now I just comment out all such lines so readers know my intent, but Code Analysis doesn't complain.  I considered turning off that rule, but thought there might be a good reason for it...
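    Concretely, the pattern I ended up with looks like this (class and field names invented for illustration; the rule being described is the Performance rule that flags redundant default-value initialization, CA1805 if I remember the ID correctly):

        class Counter
        {
            private int _count;
            private string _label;

            public Counter()
            {
                // Code Analysis flags these assignments: the CLR has already
                // zeroed the fields before the constructor body runs.
                // _count = 0;
                // _label = null;

                // Leaving them commented out keeps the intent visible to the
                // reader without tripping the rule.
            }
        }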

    Now I realize there is a good reason for it:  the C# / .NET environment doesn't give you the same level of optimizations.

    Putting all that together, I see a potentially very different way of getting the sophisticated optimization of the traditional C++ world with some side benefits:

    Move many of the traditional optimizations into Code Analysis rules that encourage programmers to perform those optimizations on their code... with the side benefit that it reduces duplication in the maintained code and improves readability, debuggability, and so on.  And maybe that was MS's intent... I need to go back through the Code Analysis rules with that idea in mind... maybe a lot of those rules will make more sense now.

    On the other hand, there are a number of optimizations that traditional compilers do that would be BAD for readability or would require breaking object-oriented abstractions and such.  Those sorts of optimizations do not belong in Code Analysis and may not be possible to do in the JIT Compiler... those really should be performed by the C# Compiler when it generates the IL code.  Maybe we'll get that someday.

    Anyway, now for my QUESTION...  all of the above is me drawing conclusions from what I am seeing in analyzing the speed of the code generated for certain key methods and operators and structs and such in my code.  Has the MS CLR / compiler team ever issued:

    (1)  A document explaining their intent for optimizations in the JIT-compiled world of .NET?

    (2)  A document listing which of the traditional optimizations the JIT does, which of the traditional optimizations they put in the compiler that generates IL, and which of the traditional optimizations they moved into Code Analysis?

    Or has anybody else done what I am doing, figured it out from the observed behavior, and documented it anywhere?

    Thursday, January 6, 2011 2:37 PM

All replies

  • First let me say that both the other thread and this one are an interesting read.  Being a C++ dev myself, I get your concern.  It seems that the compiler doesn't optimize by default; you need to turn this option on yourself:

    http://msdn.microsoft.com/en-us/library/t0hfscdc.aspx

    http://msdn.microsoft.com/en-us/library/vslangproj80.csharpprojectconfigurationproperties3.optimize.aspx
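    For reference, a quick sketch of both ways to turn it on (note that the standard Release configuration already enables it; it is plain Debug builds and bare csc invocations that leave it off):

        csc /optimize+ Program.cs

    or, in the project file, the <Optimize>true</Optimize> property, which is what the "Optimize code" checkbox on the project's Build page controls.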

    I think there was an article on what you want; let me see if I can fetch it for you.

    Regards

    Thursday, January 6, 2011 2:59 PM
  • http://flylib.com/books/en/4.453.1.45/1/

    http://www.codeproject.com/KB/dotnet/JITOptimizations.aspx

    These seem like they could shed some light on this.  Most of the information on the web is, well, wrong.  I can't seem to find the article I was talking about.

    Regards

    • Marked as answer by Neddy Ren Thursday, January 13, 2011 5:53 AM
    Thursday, January 6, 2011 3:34 PM
  • There are a couple of specialists in this field, and they all have a blog as far as I can remember.  Searching for ".net clr performance blog" in Google gives you a few.  This has always been a very tricky subject... The trouble is that, because the C# compiler compiles to IL and the IL is then compiled to native machine code, the real optimizations take place during JIT compilation.

    When .NET 1.0 was first released there really was only one target platform, x86 Windows, but over time the number of platforms has increased and each requires its own set of optimizations.

    We now have:

    - x86
    - x64
    - Windows
    - The upcoming Windows 8 ARM
    - Windows CE (.NET Compact Framework)
    - Linux & Mac (Mono, Silverlight)
    - Embedded chips (Micro Framework)

    It is the JIT's job to optimize for each platform.  There are some optimizations and checks (overflow checking, for instance) that can be turned on or off using compiler switches, which influence the exact IL the compiler emits, but these generally don't improve performance by very impressive amounts.

    You can pre-JIT (ngen) your assemblies, which will improve initial loading performance, but in the long run won't add much performance.
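    For example (ngen.exe lives in the framework directory, e.g. %WINDIR%\Microsoft.NET\Framework\v4.0.30319 for .NET 4; the assembly name here is illustrative):

        ngen install MyApp.exe       (generates and caches native images for MyApp.exe and its static dependencies)
        ngen uninstall MyApp.exe     (removes those images again)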

    The real problem areas are those specific conditions that won't allow your code to be JITted, or won't allow pre-JITted code to be loaded.  These usually arise when assemblies are loaded through reflection, or when you're building your appdomain from the ground up.  It's these conditions that you should try to avoid at all times.
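    A small illustration of the reflection case (paths and assembly names are invented; the point is that byte-array loads always fall back to JIT-compiling the IL and ignore any ngen'd native image):

        using System;
        using System.IO;
        using System.Reflection;

        class LoadDemo
        {
            static void Main()
            {
                // Loading by display name in the default load context lets the
                // loader pick up an ngen'd native image if one exists.
                Assembly byName = Assembly.Load("MyPlugin, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null");

                // Loading from a byte array never uses a native image; the IL
                // is always JIT-compiled.
                byte[] raw = File.ReadAllBytes(@"C:\plugins\MyPlugin.dll");
                Assembly fromBytes = Assembly.Load(raw);

                Console.WriteLine(byName.FullName);
                Console.WriteLine(fromBytes.FullName);
            }
        }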

    And then there are a few platform-specific optimizations you can apply... When an app is loaded on Windows Mobile, it is hashed before executing to ensure it's safe to load.  So making the initial executable as small as possible, and putting the biggest part of your code in separately loaded assemblies, improves performance on these platforms by a very large amount.

    The point to take away from this is, even though C# and the .NET framework are platform independent, you can still do some optimizations which will improve performance considerably on one platform, while they may result in degraded performance on others. There's just not one set of rules that applies to all...

     

    Interesting reads:
    http://blogs.msdn.com/b/ricom/
    http://ayende.com/Blog/category/493.aspx
    http://msdn.microsoft.com/en-us/magazine/cc163610.aspx

     

    Thursday, January 6, 2011 3:44 PM
  • There are also a number of 3rd-party tools that might interest you... Xenocode Postbuild (now part of Spoon Studio) can prune any unused code fragments and optimize your IL after the C# compiler has finished with your code.  This tool can keep track of the calling assemblies and even remove code for a specific context.
    Thursday, January 6, 2011 3:46 PM
  • There are also a number of 3rd-party tools that might interest you... Xenocode Postbuild (now part of Spoon Studio) can prune any unused code fragments and optimize your IL after the C# compiler has finished with your code.  This tool can keep track of the calling assemblies and even remove code for a specific context.


    Such a tool would be great... but it appears Xenocode Postbuild has VANISHED inside Spoon Studio.  They don't mention that capability at all... Spoon Studio is just about getting my program to run on their Spoon Server.

    So, I looked for others offering such a tool... the closest I found was .NET Reactor, which is primarily an obfuscator sort of thing but does do a few optimizations as well.  I suppose Dotfuscator also claims a few small optimizations.

    Are there any other IL->IL optimizers out there?

    Saturday, January 8, 2011 3:20 PM
  • Similarly, besides Microsoft's Code Analysis tool, are there any tools designed to run over your code and recommend things to change to optimize it?

    OR, are there any other companies selling additional rules that you can load into Microsoft's Code Analysis to detect such opportunities to improve your code?

    Saturday, January 8, 2011 3:23 PM
  • Well, I use Resharper and FxCop a lot; there are other tools like those.  Resharper can be given additional rules, so it could be what you need.

    Regards

    • Marked as answer by Neddy Ren Thursday, January 13, 2011 5:55 AM
    Monday, January 10, 2011 2:09 PM
  • Hello again,

    You can check the CLI / C# specification if you haven't already.

    Hope it helps.


    Eyal, Regards.

    Any fool can write code that a computer can understand. Good programmers write code that humans can understand. -- Martin Fowler.

    SharpHighlighter is an extension for Visual Studio, a fairly simple code highlighter for C#.
    Monday, January 10, 2011 3:26 PM
    Moderator
  • Are there any other IL->IL optimizers out there?


    http://social.msdn.microsoft.com/Search/en-us?query=edit+IL

    Editing the IL yourself is also an option. 
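    For the record, the usual round trip for hand-editing IL looks something like this (assembly names are illustrative; ildasm and ilasm ship with the SDK / framework):

        ildasm MyApp.exe /out=MyApp.il          (disassemble to editable IL text)
        ... edit MyApp.il by hand ...
        ilasm MyApp.il /exe /output=MyApp.exe   (reassemble)

    A strong-named assembly will need to be re-signed afterwards (ilasm has a /key switch for that).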

    I am done with this thread.  Just wanted you to be aware of that choice.

    Happy Coding.


    Mark the best replies as answers. "Fooling computers since 1971."

    http://rudedog2.spaces.live.com/default.aspx

    Monday, January 10, 2011 3:38 PM
    Moderator
  • Not that I know of.
    Monday, January 10, 2011 3:40 PM
  • Well, I use Resharper and FxCop a lot; there are other tools like those.  Resharper can be given additional rules, so it could be what you need.

    Hmm... when I re-check MS Code Analysis rules for optimization wisdom, I'll also check out Resharper and see if it has good optimization rules in it.  Thanks.

    Monday, January 10, 2011 5:46 PM