C# math error?

  • Question

  • After the fifth iteration, fC ends up with an additional 0.000037.  This can be seen only in the debugger.  After the seventh iteration, an extra .0001 shows up in the console.  Is this a math error in the language?  I ran two versions of this code on two different computers, but they have the same processor.

     

     

    using System;
    using System.Collections.Generic;
    using System.Text;

    namespace ConsoleApplication1
    {
        class Program
        {
            static void Main(string[] args)
            {
                float fC = 1000.00F;
                float fN = 23.23F;
                float fInterferer = 45.20F;

                for (int i = 0; i < 15; i++)
                {
                    fC = fC - (fN + fInterferer);
                    System.Console.Out.WriteLine( fC.ToString() );
                }
            }
        }
    }

    Monday, October 29, 2007 1:18 PM

Answers

  • Please read this: http://docs.sun.com/source/806-3568/ncg_goldberg.html

    "What every computer scientist should know about floating-point arithmetic"

     

    Basically, the bottom line is that it is not possible to fit every real number in the supported range into a fixed 32-bit or 64-bit value. There are more numbers than 32/64 bits can truly represent, so at best you get an approximation. This is not a .NET / C# / or even a Java (the doc just happens to be hosted at sun.com) specific problem -- all computer languages share it.

     

    Since you're dealing with 2 decimal places, I'd suggest you scale your numbers by 100 and just deal with simple integer arithmetic (a quick sketch of this follows below).

    Monday, October 29, 2007 3:04 PM
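    As a rough illustration of the scaling suggestion above (the names and values are taken from the original post, and the assumption that the data is limited to two decimal places is just for the example): store everything in hundredths so the loop is exact integer arithmetic, and only convert back for display.

    static void Main(string[] args)
    {
        long fC = 100000;          // 1000.00 scaled by 100
        long fN = 2323;            // 23.23 scaled by 100
        long fInterferer = 4520;   // 45.20 scaled by 100

        for (int i = 0; i < 15; i++)
        {
            fC = fC - (fN + fInterferer);
            // Scale back down only when printing; the stored value stays exact.
            System.Console.WriteLine((fC / 100m).ToString("0.00"));
        }
    }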
  • I'd follow eradicator's suggestion, or if you can't go to integer math, you can switch to decimal, which is more accurate than float but has a different range (which does not appear to be a problem here). A decimal version of the original loop is sketched below.

    The document referenced has nothing to do with Java; Sun is more than just Java (though that's what they are primarily known for anyway).

    Specifically it is an IEEE 754 (floating point) issue: encoding decimal (base 10) numbers in a binary (base 2) system. A fixed-point system may not have the same issue, but it has its own issues (very limited range, and an inability to represent certain values and precisions).
    Monday, October 29, 2007 5:35 PM
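    For comparison, here is the loop from the original question with decimal instead of float, as suggested above (a sketch only): decimal can represent 23.23 and 45.20 exactly, so the running total prints without the stray digits.

    static void Main(string[] args)
    {
        decimal fC = 1000.00m;
        decimal fN = 23.23m;
        decimal fInterferer = 45.20m;

        for (int i = 0; i < 15; i++)
        {
            fC = fC - (fN + fInterferer);
            System.Console.WriteLine(fC);   // exact two-decimal results, e.g. 931.57
        }
    }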

All replies

  • It looks like cumulative rounding errors to me.  But, you're saying this only happens in the debug version of the program?  That is truly strange.

    Monday, October 29, 2007 2:20 PM
  • Thanks IsshouFuuraibou and Eradicator.  I read the document on the Sun website, and it helped me immensely.  I also switched to using decimal in place of float.  This solved my "rounding" error.  I took this approach because, even though my example used only 2 decimal places, the real data will have varying levels of precision.  Thanks again.

     

    Monday, October 29, 2007 6:11 PM
  • Just remember that decimal doesn't solve all the problems of using decimal values in a binary world; because it encodes the value differently, it runs into a different set of challenges -- most likely none that you would face in your application. (One example of such a challenge is sketched below.)
    Monday, October 29, 2007 6:48 PM
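    As one illustration of the "different challenges" mentioned above (a sketch only, not from the original thread): decimal is still a finite representation, so a value such as 1/3 cannot be stored exactly, and the error shows up after a round trip.

    decimal oneThird = 1m / 3m;                     // 0.3333333333333333333333333333
    System.Console.WriteLine(oneThird * 3m);        // 0.9999999999999999999999999999, not 1
    System.Console.WriteLine(oneThird * 3m == 1m);  // False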
  • Believe it or not, now I have come across this problem working with decimals. After some intensive calculations I check for 0 and it throws an exception. Here is the output from the watch window. I am utterly confused and do not know what to do. Of course I can round the number, but this is not the behaviour I expected from decimal numbers. Any suggestions?

    eop    0.0000000000000000000001    decimal

    Saturday, March 8, 2008 10:52 AM
  •  IsshouFuuraibou wrote:
    Just remember decimal doesn't solve all the problems of using decimal values in a binary world, but because it encodes the information about the value differently it will encounter different challenges, most likely none that you would face in your application.


    what are those challenges?
    Saturday, March 8, 2008 10:57 AM
  • I see so many programmers nowadays who just don't get floating point numbers, and how they work.

     

    Instead of fixing their floating point code to account for how floating point numbers work, they switch to the INCREDIBLY SLOW decimal type, and think they've solved the problem. Well, they might have done - but more likely they haven't, and it's just going to bite them in the arse later on.

     

    Anyone working with floating point numbers should read up and understand floating point limitations, and how to work around the inevitable rounding errors. Blindly switching to the decimal type as if it's some kind of silver bullet is just WRONG.

     

    Consider the following code:

     

    Code Snippet

    static void Main(string[] args)
    {
        decimal d1 = 1m/3m;
        decimal d2 = 0;

     

        for (int i = 0; i<99999; ++i)
        {
            d2 += d1;
        }

     

        Console.WriteLine(d2);
    }

     

     

    Programmers who blindly change "double" to "decimal" when they see rounding errors will no doubt think that the code above will work just fine.

     

    Well it doesn't. It prints 33332.999999999999999999973870, whereas the "correct" answer is obviously 33333.

     

    So come on people. Educate yourselves about floating point, and stop this mindless throwing of "decimal" at every rounding error! I shudder to think of the horribly slow and STILL wrong code that must be around now.

     

    And just to drive the point home: Here's a test program to compare decimal with double. On my system (and in release build), the decimal code is more than 150 times slower than the double code.

     

    Code Snippet

    // Note: Stopwatch requires "using System.Diagnostics;"
    static void Main(string[] args)
    {
        Stopwatch stopwatch = new Stopwatch();

     

        stopwatch.Start();
        decimal result1 = test1();
        stopwatch.Stop();
        TimeSpan time1 = stopwatch.Elapsed;
        Console.WriteLine("Decimal took " + time1 + " and returned " + result1);

     

        stopwatch.Reset();
        stopwatch.Start();
        double result2 = test2();
        stopwatch.Stop();
        TimeSpan time2 = stopwatch.Elapsed;
        Console.WriteLine("Double took " + time2 + " and returned " + result2);

     

        Console.WriteLine("Decimal took " + (time1.TotalSeconds/time2.TotalSeconds) + " times as long as double.");
    }

     

    static decimal test1()
    {
        decimal d1 = 1m/3m;
        decimal d2 = 0;

     

        for (int i = 0; i<9999999; ++i)
        {
            d2 += d1;
        }

     

        return d2;
    }

     

    static double test2()
    {
        double d1 = 1d/3d;
        double d2 = 0;

     

        for (int i = 0; i<9999999; ++i)
        {
            d2 += d1;
        }

     

        return d2;
    }

     

     

    Monday, March 10, 2008 11:09 AM
  • Well, as Matthew stated, decimal is slow. Reads and writes of a decimal are also not atomic by default (there are too many bits for the operation to be automatically atomic), so you either have to enforce atomicity yourself or accept that a read or write can be pre-empted partway through (which is inherently very bad).

    Also, you're not getting rid of precision errors; you're changing the level at which they occur and the numbers they can occur at. The fact is, unless you're using a representation with an infinite number of bits, you will never have a structure that is 100% free of precision errors -- it's the nature of different bases and numbers. So the best way to handle this is to know the minimal usable digit and round to that many digits (a small illustration follows below). It's not a perfect solution.

    Read the documentation: System.Decimal (it's a 128-bit value, I believe, so it can't be automatically atomic even on a 64-bit machine)

    If you haven't already, I'd read the link posted earlier about what every computer scientist should know about floating-point arithmetic.
    Also, read up on the binary number system, specifically how real numbers are represented: wiki link
    If you're unfamiliar with atomicity, read about it here: Atomic Operations
    Monday, March 10, 2008 3:03 PM
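    A small illustration of the "round to the minimal usable digit" advice above (the choice of two decimal places is just an assumption for the example):

    double a = 0.1 + 0.1 + 0.1;   // actually 0.30000000000000004 in binary floating point
    double b = 0.3;

    System.Console.WriteLine(a == b);                                // False
    System.Console.WriteLine(Math.Round(a, 2) == Math.Round(b, 2));  // True: both round to 0.30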
  • So what are the good rules for working with floating point numbers?

    Round the numbers before any comparisons? Is that it?

    Decimal numbers might be slow, but they offer a much higher degree of precision when scientific ranges are not required; in financial applications, for instance, the extra speed requirement might be DAMN inexpensive compared with the extra financial COST of rounding.

    Is there any way to get the best of both: speed and precision?

     

    Monday, March 10, 2008 5:33 PM
  •  sn75 wrote:

    So what are the good rules for working with floating point numbers?

    Round the numbers before any comparisons? Is that it?

    Decimal numbers might be slow, but they offer a much higher degree of precision when scientific ranges are not required; in financial applications, for instance, the extra speed requirement might be DAMN inexpensive compared with the extra financial COST of rounding.

    Is there any way to get the best of both: speed and precision?



    Figuring out the good rules could be a master's thesis, but as said here, round to the lowest usable digit (exactly when you do this depends on how the value is used, but most likely when you're storing the value and/or during the calculations). Some of it depends on how much error, if any, is introduced during a series of calculations; you may need to round partway through so that the error doesn't have a chance to propagate, because given enough time and operations, a small error will grow into a larger one.

    Technically, writing comparisons as x == y is bad form. They should be done in a more precision-tolerant way, such as Math.Abs(x - y) < ToleranceLevel (and similarly for the other forms of comparison), because precision errors mean that 1.0 can end up represented as 0.9999999 or 1.0000001 depending on several factors, neither of which is == 1.0. (A sketch of this kind of comparison follows after this post.)

    As for the best of both: no. As I stated, to have perfect precision you'd need an infinite number of bits (you can get by with fewer when you limit range and precision). Unfortunately, the more space you use to represent the numbers, the more clock cycles it takes to process anything with those values, unless you're using a processor that handles that width natively.

    The best way, if you can sacrifice range and precision, is to use integer math, as it is fast and exact. However, you lose range and limit your precision.

    You've basically got three aspects to an encoded real number: number of bits, range, and precision. All of these affect one another, so if you want to increase precision something else has to change: the range goes down or the number of bits goes up. Unfortunately, depending on how things change, it can also affect the speed of using the value (as seen with decimal). IEEE 754 was developed because it made efficient use of the number of bits versus the range versus the precision. Other systems that have been around had high precision but lacked range, or had high precision and good range but used a lot of bits.

    The problem isn't even solved by moving to a different base (allowing digits to have more than two states). In base 10 you can't represent the fraction 1/3 as a terminating number either. So we have to understand the restrictions of the system we're in, and as long as we can get the error below our requirements and deal with problems like precision issues, we have a workable system.

    I hope I'm making some sense.

    Edit: I just noticed a passage in the binary number system link I provided in my last post which illustrates the concepts well:

     wikipedia wrote:
    It may come as a surprise that terminating decimal fractions can have repeating expansions in binary. It is for this reason that many are surprised to discover that 0.1 + ... + 0.1, (10 additions) differs from 1 in floating point arithmetic. In fact, the only binary fractions with terminating expansions are of the form of an integer divided by a power of 2, which 1/10 is not.

    (the value 0.1 is a repeating binary fraction)
    Monday, March 10, 2008 7:37 PM
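    A sketch of the tolerance-style comparison described in the post above (the helper name and the tolerance value are placeholders, not from the thread), together with the 0.1-added-ten-times example from the quoted passage:

    static bool NearlyEqual(double x, double y, double tolerance)
    {
        // Treat two values as equal if they differ by less than the tolerance.
        return Math.Abs(x - y) < tolerance;
    }

    static void Main(string[] args)
    {
        double sum = 0.0;
        for (int i = 0; i < 10; i++)
            sum += 0.1;                                   // 0.1 is a repeating fraction in binary

        Console.WriteLine(sum == 1.0);                    // False: sum is 0.9999999999999999
        Console.WriteLine(NearlyEqual(sum, 1.0, 1e-9));   // True
    }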
  •  sn75 wrote:

    So what are the good rules for working with floating point numbers?

    Round the numbers before any comparisons? Is that it?

    Decimal numbers might be slow, but they offer a much higher degree of precision when scientific ranges are not required; in financial applications, for instance, the extra speed requirement might be DAMN inexpensive compared with the extra financial COST of rounding.

    Is there any way to get the best of both: speed and precision?



    It's too large a subject to go into detail here, I'm afraid.

    But let me point this out: Most financial packages over the past couple of decades have been written using IEEE 64-bit floating point. It is certainly not necessary to use the "decimal" type when writing financial applications.
    Tuesday, March 11, 2008 9:36 AM