TimeSpan or Stopwatch not as accurate as Environment.TickCount

    Question

  • Can someone, preferably from Microsoft, confirm a behavior that I'm noticing concerning the lesser accuracy of using TimeSpan and the Stopwatch class to time something in milliseconds, versus using Environment.TickCount?

    Specifically, I'm noticing in my Ping class that when I use Environment.TickCount to measure how long a host takes to respond to my message, the result in milliseconds is accurate compared to the results from Ping.exe at the command prompt.  However, when I measure it using either TimeSpan or the Stopwatch class, the results are off by about 100 to 200 milliseconds (too high).

    Here's pseudo code of how I'm using it.  I can post the actual code if the pseudo code isn't enough to help you help me ...




    // using Environment.TickCount
    // generally yielding the same results as what Ping.exe does from the command prompt
    int trips = 6;
    int tickStart, tickAccum = 0;
    int ping;

    for (int i = 0; i < trips; i++)
    {
       tickStart = Environment.TickCount;
       ... Send the packet to the host
       ... Receive a "reply" from the host
       tickAccum += (Environment.TickCount - tickStart);
    }
    ping = tickAccum / trips;

    // using TimeSpan
    // generally yielding 150 ms more than what Ping.exe does from the command prompt
    int trips = 6;
    DateTime start;
    TimeSpan span = new TimeSpan(0);
    int ping;

    for (int i = 0; i < trips; i++)
    {
       start = DateTime.Now;
       ... Send the packet to the host
       ... Receive a "reply" from the host
       span += DateTime.Now.Subtract(start);
    }
    ping = (int)span.TotalMilliseconds;


    // using Stopwatch
    // generally yielding 100 ms more than what Ping.exe does from the command prompt
    int trips = 6;
    Stopwatch stopwatch = new Stopwatch();
    int ping;

    for (int i = 0; i < trips; i++)
    {
       stopwatch.Start();
       ... Send the packet to the host
       ... Receive a "reply" from the host
       stopwatch.Stop();
    }
    ping = (int)stopwatch.ElapsedMilliseconds;

    My only guess is that there's more overhead in using DateTime/TimeSpan and Stopwatch.  A couple of notes about my usage of the Stopwatch class:
    1.  When I looked at Stopwatch.IsHighResolution, it was true.
    2.  I tried using Stopwatch.ElapsedTicks in combination with Stopwatch.Frequency as well, and got pretty much the same "inaccurate" results.
    Monday, October 17, 2005 11:58 PM


All replies

  • You will get differing results using Stopwatch and Environment.TickCount; see this post:

    http://forums.microsoft.com/msdn/ShowPost.aspx?PostID=105972

    Tuesday, October 18, 2005 9:28 AM
  • Thanks for the reply.  Unfortunately, that thread doesn't quite hit the spot.  What I'm trying to demystify is why Environment.TickCount is more accurate than grabbing a span from two DateTimes or from the Stopwatch class, whereas in your thread you were trying to output milliseconds using DateTime and TickCount and seeing unsynchronized behavior.  Maybe the fact that you can't output milliseconds via DateTime is related to this...
    However, the Google thread (within your thread) does allude to DateTime being accurate only up to 1 second.  But that's in the Compact Framework.  Is the same true for the normal framework?  I'm thinking that maybe this behavior should be documented in MSDN so that programmers won't unknowingly get stuck with inaccurate results should they use the DateTime class.

    Also, I wonder if the Stopwatch class uses the DateTime class underneath.
    Just looking for a solid confirmation on this, and maybe to help educate others who may not know this (upon confirmation).

    Or maybe I'm doing something wrong with my usage of DateTime and/or Stopwatch.
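
    One way to check the desktop framework for yourself (a rough sketch, not a rigorous benchmark) is to spin on DateTime.Now and see how large the steps between distinct readings are; on desktop Windows this typically shows steps of roughly 10-16 ms rather than 1 second:

    using System;

    class DateTimeGranularity
    {
        static void Main()
        {
            // Spin until DateTime.Now changes, and print the size of each step.
            // The very first step may be partial, since we start mid-tick.
            DateTime last = DateTime.Now;
            for (int observed = 0; observed < 10; )
            {
                DateTime now = DateTime.Now;
                if (now != last)
                {
                    Console.WriteLine("step: {0} ms", (now - last).TotalMilliseconds);
                    last = now;
                    observed++;
                }
            }
        }
    }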

    Tuesday, October 18, 2005 2:17 PM
  • How are you able to determine which time differences stem from inaccuracies in the time-measuring mechanisms, as opposed to the code that is being run?  Specifically, how are you determining what is accurate?  My main question is that pinging a host and waiting for a response is an action that can vary greatly in length.  Are you experiencing the same issues with actions such as sleeping for a set period of time?  For instance, with a harness like the sketch below.
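
    A minimal harness for that experiment (a sketch; note that Sleep itself is only accurate to the scheduler's quantum, so none of the three readings will be exactly 100 ms):

    using System;
    using System.Diagnostics;
    using System.Threading;

    class TimerComparison
    {
        static void Main()
        {
            // Capture a start reading from all three mechanisms at once.
            int tickStart = Environment.TickCount;
            DateTime dtStart = DateTime.Now;
            Stopwatch sw = Stopwatch.StartNew();

            Thread.Sleep(100);  // nominally 100 ms

            sw.Stop();
            Console.WriteLine("TickCount: {0} ms", Environment.TickCount - tickStart);
            Console.WriteLine("DateTime:  {0} ms", (DateTime.Now - dtStart).TotalMilliseconds);
            Console.WriteLine("Stopwatch: {0} ms", sw.ElapsedMilliseconds);
        }
    }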
    Wednesday, December 21, 2005 7:27 PM
  • It seems likely that Stopwatch is more accurate than Environment.TickCount as it is using a high resolution timer.

    Presumably ping.exe is using something equivalent to Environment.TickCount, which is why it is giving similar results, but these are likely to be less accurate than Stopwatch.
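
    You can see which timer Stopwatch picked on a given machine with a couple of lines (a quick sketch):

    using System;
    using System.Diagnostics;

    class StopwatchInfo
    {
        static void Main()
        {
            // Prints whether Stopwatch is backed by the high-resolution
            // performance counter, and what its tick rate is on this machine.
            Console.WriteLine("IsHighResolution: {0}", Stopwatch.IsHighResolution);
            Console.WriteLine("Frequency:        {0} ticks/sec", Stopwatch.Frequency);
            Console.WriteLine("Tick length:      {0:F1} ns", 1e9 / Stopwatch.Frequency);
        }
    }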

    Wednesday, December 21, 2005 7:36 PM
  • Let me answer the latter question first: "How are you determining what is accurate?"  I agree, pinging a host can vary.  However, my two points of reference are Ping.exe and the game that connects to this host (Battlefield 2).  Both applications show ping times very similar to each other (within 5ms at most).  Of the three mechanisms I have outlined, Environment.TickCount came closest to them (again, within 5ms at most), while the other two were too far off the mark.

    As far as determining where the inaccuracy stems from, I have deduced it to be the mechanisms themselves.  In the examples I have provided, and in the actual code, the "pinging" action was the only thing between the "start" and "end" time captures.

    I don't quite remember my results from doing Sleeps.

    Wednesday, December 21, 2005 7:39 PM
  •  JocularJoe wrote:

    It seems likely that Stopwatch is more accurate than Environment.TickCount as it is using a high resolution timer.

    Presumably ping.exe is using something equivalent to Environment.TickCount, which is why it is giving similar results, but these are likely to be less accurate than Stopwatch.

    That's what I don't understand... Stopwatch uses a high-resolution timer (if the CPU supports it), but it's yielding 100 milliseconds more than Ping.exe.  I would trust Ping.exe's results, since the app has been around for a long time, which leads me to believe that Stopwatch and DateTime are inaccurate.

    Wednesday, December 21, 2005 7:46 PM
  • > I would trust Ping.exe's results since the app has been around for a long time

    I would assume Ping.exe is less likely to be using a high-resolution timer, as it's been around for a long time and probably uses only portable APIs.

    Wednesday, December 21, 2005 9:18 PM
  • Correct me if I'm wrong, but a "high-resolution" timer is simply one that can achieve very short intervals, enabling it to produce accurate "ticks".  If that's the case, then conversely, a mechanism not using a high-resolution timer is one with longer intervals.  How is it possible that Ping.exe, which you assume is not using a high-res timer, would return a smaller tick value than the Stopwatch, if it functions on longer intervals?  One would think it would be the other way around.
    Also, just because it's been around for a long time doesn't mean it's not using a high-res timer.

    If I'm gathering your thoughts correctly, you're saying I shouldn't trust Ping.exe's results?  Something that's been used by users, professionals, etc. for years?  I would still put my money on it.  However, the heart of the topic is why there's a discrepancy between the accuracy of the ticks from Environment.TickCount and Stopwatch/TimeSpan.  I have a hard time buying that it's because of the high-res timer, because if it were that, I would expect the results to be reversed, with Environment.TickCount returning a longer duration.

    I'm actually amazed nobody has seen this behavior.  If you're one bit interested, I would suggest writing a simple ping program and trying out the 3 different techniques.  In fact, here's the Ping class that I've written (using TickCount):

    using System;
    using System.Net;
    using System.Net.Sockets;

    public static class Ping
    {
        private const int DEFAULT_ACTUAL_TIMEOUT = 3000;
        private const int TIMEDOUT_PING = 999;
        private const int ICMP_ECHO = 8;
        private const int PACKET_SIZE = 32;

        private struct ICMP_HEADER
        {
            public byte type;       // Type
            public byte code;       // Code
            public ushort checksum; // Checksum
            public ushort id;       // Identification
            public ushort seq;      // Sequence

            public int Size
            {
                get { return 8; }
            }
        }

        private struct PACKET
        {
            public ICMP_HEADER Header;
            public byte[] Data;

            public int Size
            {
                get { return Header.Size + Data.Length; }
            }
        }

        private static byte[] _sendBuffer;

        static Ping()
        {
            PACKET pkt;
            pkt.Header.type = ICMP_ECHO;
            pkt.Header.code = 0;
            pkt.Header.checksum = 0;
            pkt.Header.id = ushort.Parse("45");
            pkt.Header.seq = ushort.Parse("0");
            pkt.Data = new byte[PACKET_SIZE];
            for (int i = 0; i < pkt.Data.Length; i++)
                pkt.Data[i] = (byte)'#';
            _sendBuffer = CreatePacket(pkt);
        }

        public static int PingHost(string address, out bool successful)
        {
            return PingHost(address, 4, 500, out successful);
        }

        public static int PingHost(string address, int trips, int timeout, out bool successful)
        {
            Socket socket;
            byte[] buffer_rec = new byte[256];
            int ping;
            int tickStart = 0, tickAccum = 0;
            int actualTrips = 0;
            int actualTimeout = timeout;
            successful = true;

            if (actualTimeout == 0)
                actualTimeout = DEFAULT_ACTUAL_TIMEOUT;

            socket = new Socket(AddressFamily.InterNetwork, SocketType.Raw, ProtocolType.Icmp);
            socket.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.ReceiveTimeout, actualTimeout);

            try
            {
                socket.Connect(new IPEndPoint(IPAddress.Parse(address), 0));
            }
            catch (Exception)
            {
                successful = false;
                return TIMEDOUT_PING;
            }

            for (int i = 0; i < trips; i++)
            {
                tickStart = Environment.TickCount;
                socket.Send(_sendBuffer);
                try
                {
                    socket.Receive(buffer_rec, 0, buffer_rec.Length, SocketFlags.None);
                    tickAccum += (Environment.TickCount - tickStart);
                    actualTrips++;
                }
                catch (SocketException) { break; }
            }

            socket.Shutdown(SocketShutdown.Receive);
            socket.Close();

            if (actualTrips > 0)
            {
                ping = tickAccum / actualTrips;
                if (timeout != 0 && ping >= timeout)
                    ping = TIMEDOUT_PING;
            }
            else
            {
                successful = false;
                ping = TIMEDOUT_PING;
            }

            return ping;
        }

        private static byte[] CreatePacket(PACKET pkt)
        {
            byte[] buffer_ret = new byte[pkt.Size];
            buffer_ret[0] = pkt.Header.type;
            buffer_ret[1] = pkt.Header.code;
            Array.Copy(BitConverter.GetBytes(pkt.Header.checksum), 0, buffer_ret, 2, 2);
            Array.Copy(BitConverter.GetBytes(pkt.Header.id), 0, buffer_ret, 4, 2);
            Array.Copy(BitConverter.GetBytes(pkt.Header.seq), 0, buffer_ret, 6, 2);
            for (int i = 0; i < pkt.Data.Length; i++)
                buffer_ret[i + 8] = pkt.Data[i];

            // Calculate the ICMP checksum (one's complement sum of 16-bit words).
            int iCheckSum = 0;
            for (int i = 0; i < buffer_ret.Length; i += 2)
                iCheckSum += Convert.ToInt32(BitConverter.ToUInt16(buffer_ret, i));
            iCheckSum = (iCheckSum >> 16) + (iCheckSum & 0xffff);
            iCheckSum += (iCheckSum >> 16);

            // Update the byte array to reflect the checksum.
            Array.Copy(BitConverter.GetBytes((ushort)~iCheckSum), 0, buffer_ret, 2, 2);
            return buffer_ret;
        }
    }

     

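    (Incidentally, .NET 2.0 also ships a managed ICMP wrapper in System.Net.NetworkInformation that reports round-trip time directly, which avoids hand-rolling raw sockets and checksums; a minimal sketch, with a placeholder address:)

    using System;
    using System.Net.NetworkInformation;

    class PingDemo
    {
        static void Main()
        {
            using (Ping pinger = new Ping())
            {
                // One echo request with a 500 ms timeout; the address is a placeholder.
                PingReply reply = pinger.Send("192.168.0.1", 500);
                if (reply.Status == IPStatus.Success)
                    Console.WriteLine("RTT: {0} ms", reply.RoundtripTime);
                else
                    Console.WriteLine("Failed: {0}", reply.Status);
            }
        }
    }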

    Wednesday, December 21, 2005 9:36 PM
  • > How is it possible that Ping.exe, which you assume is not using a high-res timer, would return a smaller tick value than the Stopwatch, if it functions on longer intervals?

    A low-res timer can return a smaller tick value.  Consider the following: timer A has a resolution of 10ms, timer B has a resolution of 500ms, and both timers start running at 10:00:00.

    You time an interval of 1305ms from 10:00:00.100 to 10:00:01.405:

    Timer A (hi-res) will return a value of 1300ms (too low by 5ms).

    Timer B (low-res) will return a value of 1000ms (too low by 305ms).

    If you time an interval of 1305ms from 10:00:00.409 to 10:00:01.714:

    Timer A returns 1310ms (too high by 5ms).

    Timer B returns 1500ms (too high by 195ms).

    I.e. the value from a low-res timer is less accurate than from the hi-res timer, but it can be higher or lower.
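
    A small simulation of exactly this (a sketch: each timer's reading is just the true time rounded down to its resolution):

    using System;

    class TimerQuantization
    {
        // Reading of a timer with the given resolution, at absolute time t (both in ms).
        static long Read(double t, double resolutionMs)
        {
            return (long)(Math.Floor(t / resolutionMs) * resolutionMs);
        }

        static void Measure(double startMs, double endMs, double resolutionMs)
        {
            long measured = Read(endMs, resolutionMs) - Read(startMs, resolutionMs);
            Console.WriteLine("res {0,3}ms: measured {1}ms (true {2}ms)",
                              resolutionMs, measured, endMs - startMs);
        }

        static void Main()
        {
            // 1305ms interval from 10:00:00.100 to 10:00:01.405 (ms since 10:00:00):
            Measure(100, 1405, 10);    // hi-res:  1300ms, too low by 5
            Measure(100, 1405, 500);   // low-res: 1000ms, too low by 305
            // 1305ms interval from 10:00:00.409 to 10:00:01.714:
            Measure(409, 1714, 10);    // hi-res:  1310ms, too high by 5
            Measure(409, 1714, 500);   // low-res: 1500ms, too high by 195
        }
    }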

    > you're saying I shouldn't trust Ping.exe's results?  Something that's been used by users, professionals, etc. for years?

    I don't think Ping was ever intended to accurately measure network latency between machines on the same subnet. 

     

     

    Thursday, December 22, 2005 7:46 AM
  • Thanks for the reply, Joe, and sorry to bring this thread back up... I've been away from MSDN for a while.

    Anyway, I agree that the value from a low-res timer can be higher or lower compared to the actual (theoretical) value.  However, my question isn't specifically directed at the fact that Environment.TickCount is returning a lower duration, but at the fact that the values it does return are closer to Ping.exe and other applications that measure latency.  And it's not a question of whether Ping.exe was intended to measure latency this "accurately"; it's simply a reference point.  So let me reiterate the question: "Why is there a discrepancy between using Environment.TickCount and the Stopwatch class or the TimeSpan class?"  I would like to know how those two (three) classes function internally.

    Maybe this 100ms discrepancy that I've noticed isn't much to most applications, but in the world of multiplayer gaming (and other 'networked' worlds), if players have attuned themselves to thinking that a server pinging at 50ms is a "good" server, and my server browser starts displaying a "50ms server" at 150ms, then players may perceive this "50ms server" as unacceptable.

    Tuesday, February 14, 2006 3:35 PM
  • If you want to know how the various classes work internally, try using Reflector: http://www.aisto.com/roeder/dotnet/
    Wednesday, February 15, 2006 4:30 AM
  • > So let me reiterate the question: "Why is there a discrepancy between using Environment.TickCount and the Stopwatch class or the TimeSpan class?"

    Let me reiterate the answer: because they use different resolution timers internally.  As CommonGenius suggested, you can use Reflector to see the internal implementation.

    The most accurate of the ones you mention is Stopwatch, which uses hi-res timers internally.

    You can increase the accuracy of times you measure by:

    - using a higher-res timer, or

    - repeating the measurement multiple times and taking the average.  This is perhaps one reason why ping supports pinging repeatedly.

    To illustrate this, consider an extreme example: assume you want to measure something that takes 1 second, and the only timer you have has a resolution of 1 minute.

    I.e. if you start a test between nn:00 and nn:59, you will get a result of 0 minutes.  This will happen on average 59 times out of every 60 test runs.

    If you start a test between nn:59 and <nn+1>:00, you will get a result of 1 minute = 60 seconds.  This will happen on average once out of every 60 test runs.

    Now if you run say 10 tests, you will probably get all zeroes, and get an underestimated average time.  You may get one or more tests that give a result of 1 minute - which will give an overestimated average.

    But if you run a very large number of tests, you will get a result of 1 minute on average once every 60 test runs - so the average result will become more accurate.
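
    A quick simulation of that extreme example (a sketch: the event starts at a random offset within the minute, and the timer reports whole minutes only):

    using System;

    class AveragingDemo
    {
        static void Main()
        {
            Random rng = new Random();
            const double resolution = 60.0;   // timer resolution in seconds
            const double trueTime = 1.0;      // actual duration being measured

            foreach (int runs in new int[] { 10, 100, 10000 })
            {
                double total = 0;
                for (int i = 0; i < runs; i++)
                {
                    // Event starts at a random offset within the minute; each timer
                    // reading is the true time rounded down to a whole minute.
                    double start = rng.NextDouble() * resolution;
                    double end = start + trueTime;
                    total += Math.Floor(end / resolution) * resolution
                           - Math.Floor(start / resolution) * resolution;
                }
                // The average approaches 1.0 s as the number of runs grows.
                Console.WriteLine("{0,6} runs: average = {1:F3} s", runs, total / runs);
            }
        }
    }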

     

    Thursday, February 16, 2006 6:04 PM
  • Not sure if this is the answer, but it might be worth posting since I got to this thread by looking for the most accurate / recommended way to time intervals.

    In the MSDN page for the Stopwatch.ElapsedTicks property:

    http://msdn.microsoft.com/en-us/library/system.diagnostics.stopwatch.elapsedticks.aspx

    There's the following note:

    Stopwatch ticks are different from DateTime.Ticks.  Each tick in the DateTime.Ticks value represents one 100-nanosecond interval.  Each tick in the ElapsedTicks value represents the time interval equal to 1 second divided by the Frequency.

    Which I guess could explain the difference of Ticks you're talking about.
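
    You can see the two "tick" units side by side (a trivial sketch; TimeSpan ticks are fixed, Stopwatch ticks depend on the hardware timer):

    using System;
    using System.Diagnostics;

    class TickUnits
    {
        static void Main()
        {
            // TimeSpan/DateTime ticks are fixed at 100 ns (10,000,000 per second);
            // Stopwatch ticks depend on the hardware timer behind it.
            Console.WriteLine("TimeSpan ticks/sec:  {0}", TimeSpan.TicksPerSecond);
            Console.WriteLine("Stopwatch ticks/sec: {0}", Stopwatch.Frequency);
        }
    }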

    Hope this helps.
    Thursday, August 20, 2009 3:44 AM
    Hi,
    I have used Stopwatch to time the methods of the app, and used the ElapsedTicks property to get the ticks out of the Stopwatch object.  But when I try passing that value to the TimeSpan constructor to get ms, sec, etc., I get wrong numbers.

    TimeSpan ttmeSpan = new TimeSpan(enelapsedTime);
    double tmeMilliSecs = ttmeSpan.TotalMilliseconds;

    Now my understanding is that the high-res timer in the chipset is being used in arriving at the elapsed ticks.  Also, when I print the result, the elapsed ticks seem to be OK, but the conversion to ms, secs, and mins is not.  It should have been divided by the frequency, since the hi-res timer is being used.

    Program/Main(string[] args) : void | Entry Ticks: 720 | Exit Ticks: 473970121290 | Total Elapsed Ticks: 473970120570 | Milliseconds: 47397012.057 | Seconds: 47397.012057 | Minutes: 789.95020095

    Frequency = 2992490000 ticks per sec, so secs = 47397012 / 2992490000 = 0.015838653 ~ 15.83 ms ~ 0.000263977 mins, which is reasonable.

    TimeSpan.TotalMilliseconds here just divides by 10,000 to arrive at the result and ignores the Frequency.  Is there some problem with the TimeSpan internals when a Stopwatch value is passed, such that it doesn't take the hi-res frequency of the system into account?
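
    For reference, a minimal sketch of a conversion that does account for Frequency (assuming the tick value came from Stopwatch.ElapsedTicks; the Elapsed property performs this scaling for you):

    using System;
    using System.Diagnostics;

    class StopwatchConversion
    {
        static void Main()
        {
            Stopwatch sw = Stopwatch.StartNew();
            System.Threading.Thread.Sleep(16);   // stand-in for the timed work
            sw.Stop();

            // Wrong: the TimeSpan constructor assumes 100 ns ticks, so raw
            // hardware ticks come out roughly Frequency/10,000,000 times too big.
            TimeSpan wrong = new TimeSpan(sw.ElapsedTicks);

            // Right: scale by Frequency yourself, or use the Elapsed property,
            // which does the scaling internally.
            double seconds = (double)sw.ElapsedTicks / Stopwatch.Frequency;
            TimeSpan right = sw.Elapsed;

            Console.WriteLine("wrong: {0} ms", wrong.TotalMilliseconds);
            Console.WriteLine("right: {0} ms ({1:F4} s)", right.TotalMilliseconds, seconds);
        }
    }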

    Also, on Core 2 Duos, is the above Stopwatch method of timing the most accurate way, and best practice, to time various application methods, or is there a better way?  And does Stopwatch deliver correct timing on processors with Hyper-Threading enabled?

    MS experts, please help!

    Thx
    Wednesday, November 18, 2009 12:18 PM