# Precision loss at large values in data type double

### Question

• Why exactly does a loss of precision occur when the values become very large in double?

Example code:

```csharp
double e1 = 1e20 + 8192;
double e11 = 1e20 + 8193;
double e2 = 1e20;

Console.WriteLine("E-notation");
Console.WriteLine(e1 == e2);
Console.WriteLine(e11 == e2);
```

The output:

E-notation
True
False

And why does the notation play a role, depending on whether I write it as a plain number or in E-notation:

```csharp
double d1 = 10000000000000000000 + 1024;
double d11 = 10000000000000000000 + 1025;
double d2 = 10000000000000000000;

Console.WriteLine("Normal notation");
Console.WriteLine(d1 == d2);
Console.WriteLine(d11 == d2);
```

Normal notation
True
False

• Tuesday, September 18, 2018 9:03 AM (edited 9:06 AM)

### All replies

• Because a double is only 8 bytes. As defined by IEEE 754, a double carries only about 15 to 17 significant decimal digits. Anything beyond that and you will lose precision. It is a consequence of how IEEE 754 lays out doubles in memory: a portion is allocated for the exponent and the rest for the fraction. This is true in almost every language because AFAIK all mainstream languages use IEEE 754 for floating point.

Additionally, the smallest representable increment (machine epsilon) is a consequence of the standard. There are more real numbers in a double's range than it can represent exactly, so you get rounding errors. You tend to see these problems when combining very large and very small values, but you can see them with even simple numbers. Doubles are useful for general floating-point calculations, but not for values that are very large or very small.
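To illustrate the "even simple numbers" point, here is a quick sketch: 0.1, 0.2, and 0.3 have no exact binary representation, so the rounding shows up immediately.

```csharp
using System;
using System.Globalization;

// None of 0.1, 0.2, 0.3 is exactly representable in binary,
// so the sum picks up a rounding error.
Console.WriteLine(0.1 + 0.2 == 0.3); // False
Console.WriteLine((0.1 + 0.2).ToString("R", CultureInfo.InvariantCulture)); // 0.30000000000000004
```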

In C# you should use decimal for calculations that must be exact, such as financial ones. It has greater precision (28-29 significant digits) and is designed to avoid these problems.
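As a quick check of that suggestion (a sketch only): decimal keeps 28-29 significant digits, so adding 8192 to a 21-digit value is exact and the comparison comes out False, unlike the double version in the question.

```csharp
using System;

// decimal carries 28-29 significant digits, so the +8192 survives
decimal d = 100000000000000000000m;
Console.WriteLine(d + 8192m == d); // False: the addition is exact in decimal
```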

Michael Taylor http://www.michaeltaylorp3.net

Tuesday, September 18, 2018 2:12 PM
• > Why exactly does a loss of precision occur when the values become very large in double?

Because doubles are not infinitely precise.  The Wikipedia article on floating point might help you with this.  A "double" is essentially a 53-bit integer (the significand), multiplied by a power of 2 taken from an 11-bit exponent field.  In your first example, the integer 1E20 written out in binary takes 67 bits; only the 53 most significant bits can be kept, so near 1E20 consecutive doubles are 2^(67-53) = 2^14 = 16384 apart.  (As it happens, 10^20 = 5^20 × 2^20 and 5^20 fits in 47 bits, so 1E20 itself is stored exactly; the problem is the spacing around it, not the starting value.)

8192 is 2^13, exactly half of that 16384 spacing.  Adding anything less than half the spacing rounds back to the same double, and the exact halfway case of 8192 rounds to the nearest even significand, which here is 1E20 itself.  8193 is past the halfway point, so it rounds up to the next representable double.
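You can see that spacing directly (a quick sketch; `Math.BitIncrement` requires .NET Core 3.0 or later):

```csharp
using System;

double x = 1e20;
// Math.BitIncrement returns the next representable double above x,
// so the difference is the gap between consecutive doubles near 1e20.
Console.WriteLine(Math.BitIncrement(x) - x); // 16384
Console.WriteLine(x + 8192 == x);            // True:  half the gap rounds back down
Console.WriteLine(x + 8193 == x);            // False: past halfway, rounds up
```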

Consider this example.  Let's say I have a decimal floating point unit which stores 5 significant digits plus a two-digit exponent.  So, the value 10 might be stored as

1.0000  E 01

So the number 100,000 would be stored as:

1.0000 E 05

If I want to add 50 to that, I have to adjust the exponents to match.  50 becomes:

0.0005 E 05

so the sum is

1.0005 E 05

which would print as 100,050.

But now, what happens if I want to add 4 to that number?  To add numbers, I need to match the exponents.  And 4, with an exponent of 5, becomes

0.0000 E 05

It's below the precision of my numbers.
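That toy 5-digit unit can be sketched in C# by rounding every result to 5 significant digits (an illustration only; a real FPU rounds in binary, not decimal, and rounds during the operation rather than after):

```csharp
using System;

// Toy model of a 5-significant-digit decimal float:
// round a value to its 5 most significant decimal digits.
double Round5(double v)
{
    int exp = (int)Math.Floor(Math.Log10(Math.Abs(v)));
    double scale = Math.Pow(10, exp - 4); // keep 5 significant digits
    return Math.Round(v / scale) * scale;
}

Console.WriteLine(Round5(100000 + 50)); // 100050
Console.WriteLine(Round5(100050 + 4));  // 100050: the 4 falls below the precision
```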

> And why does the notation play a role, whether i write it as normal number or in E-notation:

It doesn't.  In your second example, the number you wrote is 1e19, not 1e20 (a 1 followed by 19 zeros).  Smaller magnitudes mean more closely spaced doubles: near 1e19 the spacing is 2^11 = 2048, which is why the threshold drops from 8192 to half of 2048, i.e. 1024.
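The same spacing check as before confirms the 1024/1025 threshold in the second example (again assuming .NET Core 3.0+ for `Math.BitIncrement`):

```csharp
using System;

double y = 1e19;
// Gap between consecutive doubles near 1e19 is 2^11 = 2048,
// so 1024 (half the gap) rounds back and 1025 rounds up.
Console.WriteLine(Math.BitIncrement(y) - y); // 2048
Console.WriteLine(1e19 + 1024 == 1e19);      // True
Console.WriteLine(1e19 + 1025 == 1e19);      // False
```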

Tim Roberts | Driver MVP Emeritus | Providenza & Boekelheide, Inc.

Tuesday, September 18, 2018 7:01 PM