Monday, April 30, 2012 4:34 PM
I get (-0.00999999999999995) rather than -0.01
Monday, April 30, 2012 4:38 PM
Yay computer science! This is a classic problem with doubles and floats.
If you only need to display the value to a certain precision, you can try using a format string (probably ToString("0.00")).
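For example, a minimal sketch using the value from the question (the variable name is just a placeholder):

```csharp
using System;

class FormatDemo
{
    static void Main()
    {
        double diff = -0.00999999999999995; // the value from the question

        // "0.00" rounds to two decimal places for display only;
        // the underlying double is unchanged.
        Console.WriteLine(diff.ToString("0.00")); // prints -0.01
    }
}
```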
If you instead need to test that the difference really is -0.01, you should probably write a function that checks whether the difference between two numbers is within a small tolerance. This is pretty hard to do well, though:
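A rough sketch of such a tolerance check (the epsilon value here is an arbitrary choice and should be tuned to your data):

```csharp
using System;

class NearlyEqualDemo
{
    // True when a and b differ by less than a small tolerance.
    // A single absolute epsilon is simplistic; a more robust version
    // would also account for relative error at large magnitudes.
    static bool NearlyEqual(double a, double b, double epsilon = 1e-9)
    {
        return Math.Abs(a - b) < epsilon;
    }

    static void Main()
    {
        double diff = 1.14 - 1.15; // not exactly -0.01 in a double
        Console.WriteLine(NearlyEqual(diff, -0.01)); // prints True
    }
}
```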
Another way is to store the value as an integer and just interpret it as a count of hundredths (it's like saying something costs 100 pennies rather than $1.00). You'll never be more precise than 1/100, and your max value will only be ~+/-2^24, but you'll always have an exact value which can be used for comparison.
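A sketch of that cents approach (the amounts and names here are made up for illustration):

```csharp
using System;

class CentsDemo
{
    static void Main()
    {
        // Store money as an integer count of cents instead of a double.
        int priceCents = 114; // $1.14
        int costCents  = 115; // $1.15

        // Integer subtraction is exact: -1 cent, no rounding error.
        int diffCents = priceCents - costCents;

        // Only convert to a fractional form at display time.
        Console.WriteLine((diffCents / 100.0).ToString("0.00")); // prints -0.01
    }
}
```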
You can also use the decimal type instead of doubles:
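For example:

```csharp
using System;

class DecimalDemo
{
    static void Main()
    {
        // decimal is a base-10 type, so 1.14 and 1.15 are stored exactly.
        decimal a = 1.14m;
        decimal b = 1.15m;
        Console.WriteLine(a - b); // prints -0.01
    }
}
```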
It's still possible to get the same type of error with decimal, it just happens far less often (decimal is base-10, so values like 0.01 are stored exactly). However, it is a 16-byte data type, so it takes up a lot more memory and is a lot slower to use.