Stopwatch seems to be non-linear

  • Question

  • I am trying to log bytes arriving on a serial port as accurately and precisely as possible.

    The reading is done by a blocking call (only for testing and proof of concept) which returns only when a byte has actually been read from the serial port.
    The baud rate is 38400 with no parity, one stop bit and 8 data bits, which should give approximately 260 microseconds between bytes (assuming the sender transmits continuously).

    I use a Stopwatch and capture ElapsedTicks after each byte is read, expecting approximately 714 ticks between bytes (Stopwatch.Frequency = 2742886).

    This seems to be true, but only on average: the tick delta per byte is mostly around 8 or 9, sometimes 50 or 60, far less than the expected 714.
    Then once in a while a delta of around 270000 shows up, which corresponds to about 0.1 seconds.
    In other words, the clock seems to run far too slowly, then every 0.1 seconds it syncs up again, then starts lagging again, and so on.

    Thus the Stopwatch is useless for my purpose of timestamping each byte accurately (seemingly even worse than DateTime.Now).

    Something obvious here?
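    For reference, the 260 µs / 714 tick figures quoted above follow directly from the 8N1 frame format; a sketch of the arithmetic:

    ```csharp
    using System;

    // 8N1 framing: 1 start bit + 8 data bits + 1 stop bit = 10 bits per byte.
    const int bitsPerFrame = 10;
    const int baudRate = 38400;
    const long stopwatchFrequency = 2742886;   // value reported above

    double microsecondsPerByte = bitsPerFrame * 1_000_000.0 / baudRate;
    double ticksPerByte = bitsPerFrame * (double)stopwatchFrequency / baudRate;
    Console.WriteLine($"{microsecondsPerByte:F1} us/byte, {ticksPerByte:F0} ticks/byte");
    // prints "260.4 us/byte, 714 ticks/byte"
    ```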

    Sunday, December 17, 2017 10:59 AM

Answers

  • Hi EuroEager,

    Thank you for posting here.

    Stopwatch ticks are different from DateTime.Ticks. Each tick in the DateTime.Ticks value represents one 100-nanosecond interval. Each tick in the ElapsedTicks value represents the time interval equal to 1 second divided by the Frequency.

    As far as I know, it is linear. The Frequency value depends on the resolution of the underlying timing mechanism. If the installed hardware and operating system support a high-resolution performance counter, the Frequency value reflects the frequency of that counter. Otherwise, the Frequency value is based on the system timer frequency.

    Because the Stopwatch frequency depends on the installed hardware and operating system, the Frequency value remains constant while the system is running.

    For the relationship between the two, you could refer to the source code:

    https://referencesource.microsoft.com/#System/services/monitoring/system/diagnosticts/Stopwatch.cs,2a82c0c84cf9f625
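    To make the two tick units concrete, here is a small sketch of the conversions (the Sleep stands in for whatever work is being measured):

    ```csharp
    using System;
    using System.Diagnostics;
    using System.Threading;

    var sw = Stopwatch.StartNew();
    Thread.Sleep(10);                     // stand-in for the work being measured
    sw.Stop();

    // Stopwatch ticks: one tick is 1 / Stopwatch.Frequency seconds (hardware-dependent).
    double elapsedUs = sw.ElapsedTicks * 1_000_000.0 / Stopwatch.Frequency;

    // TimeSpan/DateTime ticks: one tick is always 100 nanoseconds;
    // sw.Elapsed performs the conversion for you.
    double elapsedUsViaTimeSpan = sw.Elapsed.Ticks / 10.0;
    ```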

    Best Regards,

    Wendy


    MSDN Community Support
    Please remember to click "Mark as Answer" the responses that resolved your issue, and to click "Unmark as Answer" if not. This can be beneficial to other community members reading this thread. If you have any compliments or complaints to MSDN Support, feel free to contact MSDNFSF@microsoft.com.


    Monday, December 18, 2017 7:17 AM
    Moderator

All replies

  • Thanks for your reply.

    Most of your text can be found in the documentation, but it still does not help me understand my observations.

    As you probably understand, I sit in a loop doing only a synchronous ReadByte operation (not via SerialPort, but another, better implementation), fetching the tick count from the Stopwatch, constructing a simple object with the ticks and the byte value, and adding that object to a generic list.

    If the tick-count delta between returns of the read operation were a bit longer than the expected 714 ticks, that would be understandable (a few CPU cycles, perhaps GC, etc.), but the observation is that each iteration takes only 8 or 9 ticks, until suddenly one takes about 270000 ticks.
    That is what I cannot understand (I have checked the FIFO interrupt levels and they are set to 1, so a byte on the line should reach the application approximately in sync with the line, plus some ISR, DPC, etc.).

    Of course, using the incoming bytes (which cause the synchronous read to return) as a clock against which to check the high-performance timer (which Stopwatch wraps) is not ideal, but why the tiny increments of 8 or 9 ticks (corresponding to approx. 3 microseconds)?

    I will also try a very simple implementation that does ReadFile without any other stuff (but still .NET and thereby P/Invoke), though I don't believe that will change anything.
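    The loop described above can be sketched like this (readByte is supplied by the caller and stands in for the replacement port implementation; all names are illustrative):

    ```csharp
    using System;
    using System.Collections.Generic;
    using System.Diagnostics;

    // One timestamped byte: the Stopwatch tick count when the read returned, and the value.
    public record Sample(long Ticks, int Value);

    public static class ByteLogger
    {
        public static List<Sample> Capture(Func<int> readByte, int count)
        {
            var samples = new List<Sample>(count);
            var sw = Stopwatch.StartNew();
            for (int i = 0; i < count; i++)
            {
                int b = readByte();                  // blocks until a byte is available
                samples.Add(new Sample(sw.ElapsedTicks, b));
            }
            return samples;
        }
    }
    ```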

    Monday, December 18, 2017 10:08 AM
  • Sometimes ReadByte simply returns a byte that is already in the receive buffer.

    To confirm that the data are buffered, check the value of BytesToRead.
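    With the stock System.IO.Ports.SerialPort that check looks roughly as follows (a sketch; the port name and settings are placeholders):

    ```csharp
    using System.IO.Ports;

    var port = new SerialPort("COM3", 38400, Parity.None, 8, StopBits.One);
    port.Open();

    // Bytes already sitting in the driver's receive buffer: for these,
    // ReadByte returns immediately, which would explain inter-byte
    // deltas of only a few Stopwatch ticks.
    int buffered = port.BytesToRead;

    int b = port.ReadByte();   // blocks only when the buffer is empty
    ```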


    • Edited by Viorel_MVP Monday, December 18, 2017 11:35 AM
    Monday, December 18, 2017 11:33 AM
  • But I stated that I don't use System.IO.SerialPort.
    I use an open-source project as a replacement (due to the much-criticized SerialPort). As said before, later today I will try a very raw call to CreateFile and ReadFile to check whether the problem is a higher-level implementation issue or not; reporting back...

    Monday, December 18, 2017 2:42 PM
  • I just tried raw reading via native ReadFile and, yippee, it works just fine, as expected.

    The expected 714 ticks between bytes (corresponding to 260.31 microseconds) now came out as an average of 754 ticks (corresponding to 274.89 microseconds), with a variance/deviation that seems reasonable and acceptable (probably due to some anomalies not yet understood and dealt with (it is managed code, after all), which can fairly easily be filtered out of the dataset anyway).

    Thanks for not accepting my first conclusions (I would have given up then)!
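    For completeness, the raw-read approach used above can be sketched with P/Invoke like this (the port name is a placeholder and error handling is minimal; this is an illustration, not the poster's actual code):

    ```csharp
    using System;
    using System.IO;
    using System.Runtime.InteropServices;
    using Microsoft.Win32.SafeHandles;

    static class RawSerial
    {
        [DllImport("kernel32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
        static extern SafeFileHandle CreateFile(string fileName, uint access, uint share,
            IntPtr security, uint creation, uint flags, IntPtr template);

        [DllImport("kernel32.dll", SetLastError = true)]
        static extern bool ReadFile(SafeFileHandle handle, byte[] buffer, uint toRead,
            out uint read, IntPtr overlapped);

        const uint GENERIC_READ = 0x80000000;
        const uint OPEN_EXISTING = 3;

        // Opens the port and reads a single byte, blocking until one arrives.
        public static int ReadOneByte(string portName)   // e.g. @"\\.\COM3"
        {
            using var handle = CreateFile(portName, GENERIC_READ, 0, IntPtr.Zero,
                                          OPEN_EXISTING, 0, IntPtr.Zero);
            if (handle.IsInvalid)
                throw new IOException("CreateFile failed", Marshal.GetLastWin32Error());
            var buf = new byte[1];
            if (!ReadFile(handle, buf, 1, out uint read, IntPtr.Zero) || read != 1)
                throw new IOException("ReadFile failed", Marshal.GetLastWin32Error());
            return buf[0];
        }
    }
    ```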

    Monday, December 18, 2017 7:56 PM