Tracing - Performance impact

  • Question

  • Hello all,

    I am planning to add tracing instrumentation to our application. This would mean adding tracing and logging statements to almost every method in the main areas of the application. Even when tracing is disabled, at a minimum these statements would still evaluate a TracingEnabled == false/true check, which I think will have some performance impact. Any better ideas are appreciated.
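
    For illustration, the kind of guarded statement that would end up in each method might look roughly like this (a sketch using System.Diagnostics; the switch name "AppTracing" and the OrderProcessor class are placeholders, not our real code):

    using System.Diagnostics;

    class OrderProcessor
    {
        // The switch can be turned on from app.config; reading .Enabled is a plain boolean check.
        private static readonly BooleanSwitch TracingEnabled =
            new BooleanSwitch("AppTracing", "Application-wide tracing");

        public void Process(int orderId)
        {
            // Even when tracing is off, this guard is still evaluated on every call.
            if (TracingEnabled.Enabled)
                Trace.WriteLine("Process called for order " + orderId);

            // ... actual work ...
        }
    }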

    Thanks.
    Friday, November 14, 2008 5:14 PM


All replies

  • Why don't you try C#'s conditional compilation feature? It lets you keep code that is used for debugging the app but doesn't appear in your release build at all, so there are no performance worries.

    You can find some help here.
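
    A rough sketch of the mechanism (the Tracer class and the TRACE_DIAGNOSTICS symbol are made-up names used only for illustration):

    #define TRACE_DIAGNOSTICS   // leave this out (or drop the /define compiler switch) for release builds

    using System.Diagnostics;

    static class Tracer
    {
        // Calls to this method, including evaluation of their arguments,
        // are removed by the compiler when TRACE_DIAGNOSTICS is not defined.
        [Conditional("TRACE_DIAGNOSTICS")]
        public static void Log(string message)
        {
            Trace.WriteLine("TRACE: " + message);
        }
    }

    class Demo
    {
        static void Main()
        {
            Tracer.Log("Entering Main");   // vanishes entirely from a release build
        }
    }

    When the symbol is not defined, the compiler removes the call and does not even evaluate its arguments, so there is no runtime cost in the release build.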

    Elroy D.
    Friday, November 14, 2008 5:29 PM
  • I am planning to put this in the release build, so that a customer can enable tracing and send the trace log to our support team.
    Friday, November 14, 2008 6:03 PM
  • santoshk1 said:

    I am planning to put this in the release build, so that a customer can enable tracing and send the trace log to our support team.

    So what exactly are you asking here? Are you asking:

    1. Is there a way to modify the code without inserting tons of if/then logging logic?
    2. Is there a way to do this with minimal performance impact?
    3. Does .NET offer anything to make this easier?

    If you can clarify your question like this, it might be easier to respond positively.



    Friday, November 14, 2008 6:44 PM
  • kernel, it's your 2nd question I'm looking for an answer to. For the 3rd I have an answer, and if you have an answer to the 1st, please post it; that would be helpful.

    Thanks.
    Saturday, November 15, 2008 12:17 AM
  • Oh, so you're looking for performance. 

    I feel that performance would improve if you look at what you are doing for the logging itself rather than wondering whether there is another way to check for error conditions. You will probably still have to use 'if-else' logic, but the overhead incurred in writing the log to a file is far greater than keeping the log information in memory and writing it out at regular intervals in bursts.

    So, here are some possible answers to this:

    1. Use memory-mapped logs that you write out to files at intervals. This helps if you are writing a lot of information, a lot of times (see the sketch after this list).

    2. Minimize the amount of information that you put into the log. Building the log message is itself an overhead if you do it a lot of times. For example,


    // 'logWriter' is assumed to be a System.IO.StreamWriter over the log file.
    logWriter.WriteLine("LOG : " + number.ToString());


    Here, converting the variable 'number' to a string takes little time on its own, but even that can become significant if you keep doing it over and over again.

    You can also minimize the information by writing short codes to the log file, each with a unique meaning for a particular error that might occur.

    For example, you might write ERR01235, WARN2335, or EXCP6536. Your developers should know what these mean.
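
    As a very rough sketch of point 1 (names like BurstLogger and the flush threshold are made up for illustration, not taken from any particular library), the idea is that each Log call only appends to an in-memory list and the expensive file I/O happens once per burst:

    using System;
    using System.Collections.Generic;
    using System.IO;

    public sealed class BurstLogger : IDisposable
    {
        private readonly object _sync = new object();
        private readonly List<string> _buffer = new List<string>();
        private readonly string _path;
        private readonly int _flushThreshold;

        public BurstLogger(string path, int flushThreshold)
        {
            _path = path;
            _flushThreshold = flushThreshold;
        }

        public void Log(string message)
        {
            List<string> burst = null;
            lock (_sync)
            {
                // Cheap hot path: just add the entry to the in-memory buffer.
                _buffer.Add(DateTime.UtcNow.ToString("yyyy-MM-dd HH:mm:ss.fff") + " " + message);
                if (_buffer.Count >= _flushThreshold)
                {
                    burst = new List<string>(_buffer);
                    _buffer.Clear();
                }
            }
            if (burst != null)
                WriteBurst(burst);   // expensive file I/O, once per burst
        }

        public void Dispose()
        {
            List<string> remaining;
            lock (_sync)
            {
                remaining = new List<string>(_buffer);
                _buffer.Clear();
            }
            if (remaining.Count > 0)
                WriteBurst(remaining);
        }

        private void WriteBurst(List<string> entries)
        {
            using (StreamWriter writer = File.AppendText(_path))
            {
                foreach (string entry in entries)
                    writer.WriteLine(entry);
            }
        }
    }

    Whether you flush on a count threshold, on a timer, or at shutdown is your choice; the point is that the per-call cost is only an in-memory append.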

    Hope this helps.


    Elroy D.
    • Marked as answer by Harry Zhu Wednesday, November 19, 2008 2:43 AM
    Saturday, November 15, 2008 2:53 AM
  • Code_Breaker said:

    Oh, so you're looking for performance. […]

    These are great ideas, especially the memory-mapped approach.

    I've embedded logging/tracing several times, usually in older minicomputer applications/systems. One approach here is simply to log to an in-memory structure: create a memory-based log table or list.

    This can be a structure that is efficient in terms of speed and overall impact, and at a time of your choosing the application can persist the accumulated data to a text file or whatever format suits you. You could even persist it to a spreadsheet and then do some analysis of the results. The key point I'm making is that the runtime data capture is quite distinct from the creation of a detailed persistent record.
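
    Something along these lines, purely as a sketch (TraceRecord and DumpToCsv are names I've made up here; the CSV writing is naive and assumes messages contain no commas):

    using System;
    using System.Collections.Generic;
    using System.IO;

    struct TraceRecord
    {
        public DateTime Time;
        public string Category;
        public string Message;
    }

    static class TraceDump
    {
        // Runtime capture is just records accumulated in a List<TraceRecord>;
        // this separate step turns them into a CSV you can open in a spreadsheet.
        public static void DumpToCsv(IEnumerable<TraceRecord> records, string path)
        {
            using (StreamWriter writer = File.CreateText(path))
            {
                writer.WriteLine("Time,Category,Message");
                foreach (TraceRecord r in records)
                {
                    writer.WriteLine("{0:yyyy-MM-dd HH:mm:ss.fff},{1},{2}",
                                     r.Time, r.Category, r.Message);
                }
            }
        }
    }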

    H
    • Marked as answer by Harry Zhu Wednesday, November 19, 2008 2:43 AM
    Saturday, November 15, 2008 10:03 AM
  • Thanks guys, great ideas.
    Tuesday, November 18, 2008 9:24 PM