Decimal vs. Double - difference?

    Question


    Hopefully an easy question,

     

    what is the difference between a decimal and a double object? They seem the same to me.

     

    Thanks,

    Wednesday, December 19, 2007 2:37 PM

Answers

  • The decimal has more significant figures than the double, so it can be more precise; it also takes up slightly more memory. Other than certain math- or physics-related algorithms, the double or float should do fine.

    One other thing to remember is that decimal, double and float are real-number types (e.g. 1.5, 1.83, or 3.33) whereas short, int and long are integer types (e.g. 75, 600, and -9). You would use an integer as a counter in a 'for' loop, for example, whereas a float would be used for a monetary or interest-rate-calculating app, or anything else that requires fractions.

    The chart below details most of the common variable types, as well as their sizes and possible values.
     
    C# Type | .NET Framework (System) type | Signed? | Bytes Occupied | Possible Values
    sbyte | System.SByte | Yes | 1 | -128 to 127
    short | System.Int16 | Yes | 2 | -32768 to 32767
    int | System.Int32 | Yes | 4 | -2147483648 to 2147483647
    long | System.Int64 | Yes | 8 | -9223372036854775808 to 9223372036854775807
    byte | System.Byte | No | 1 | 0 to 255
    ushort | System.UInt16 | No | 2 | 0 to 65535
    uint | System.UInt32 | No | 4 | 0 to 4294967295
    ulong | System.UInt64 | No | 8 | 0 to 18446744073709551615
    float | System.Single | Yes | 4 | Approximately ±1.5 x 10^-45 to ±3.4 x 10^38 with 7 significant figures
    double | System.Double | Yes | 8 | Approximately ±5.0 x 10^-324 to ±1.7 x 10^308 with 15 or 16 significant figures
    decimal | System.Decimal | Yes | 16 | Approximately ±1.0 x 10^-28 to ±7.9 x 10^28 with 28 or 29 significant figures
    char | System.Char | N/A | 2 | Any Unicode character (16 bit)
    bool | System.Boolean | N/A | 1 | true or false


    Thursday, December 20, 2007 2:44 AM

All replies

  • Besides the MaxValue/MinValue range difference, the methods/properties are not the same either.

     Quilnux wrote:

     

    Hopefully an easy question,


    what is the difference between a decimal and a double object? They seem the same to me.

     

    Thanks,

    Wednesday, December 19, 2007 2:44 PM
  •  

    Another issue to be aware of with decimal is the following. Assume you have

     

    Code Snippet
    decimal y = x / 12;

     

    You didn't bother to write 12.0 and cast it into decimal (otherwise the operator / would be applied to different types), and decided that a shorter way was to drop the .0 and simply use 12.

    Next, assume x is an int and equals, say, 6. In this case your result for y will be 0 and not the 0.5 you would expect, precisely because the operator / above is performed on two ints and its result is, of course, another int.
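    That pitfall is really about integer division rather than decimal itself. A quick sketch in Python (an assumption for illustration: Python's // truncates like C#'s / on two ints, and decimal.Decimal behaves like C#'s decimal here):

```python
from decimal import Decimal

x = 6
# Both operands are integers, so the fractional part is truncated:
print(x // 12)          # 0
# Promoting one operand to a decimal type first keeps the fraction:
print(Decimal(x) / 12)  # 0.5
```

    In C# the equivalent fix is to make one operand a decimal, e.g. x / 12m.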

    Wednesday, January 23, 2008 9:15 PM
  •  cds333 wrote:
    The decimal has more significant figures than the double, so it can be more precise; it also takes up slightly more memory. Other than certain math- or physics-related algorithms, the double or float should do fine.

    One other thing to remember is that decimal, double and float are real-number types (e.g. 1.5, 1.83, or 3.33) whereas short, int and long are integer types (e.g. 75, 600, and -9). You would use an integer as a counter in a 'for' loop, for example, whereas a float would be used for a monetary or interest-rate-calculating app, or anything else that requires fractions.

     

    I think you have this backwards. Floating point numbers are intended for scientific use where the range of numbers is more important than absolute precision. Decimal numbers are an exact representation of a number and should always be used for monetary calculations. Decimal fractions do not necessarily have an exact representation as a floating point number.

    • Proposed as answer by Spivonious Friday, November 13, 2009 8:48 PM
    Friday, January 25, 2008 1:53 AM
  •  

    Sorry for the delay. Thanks for your responses.

     

    Quilnux

    Monday, February 18, 2008 7:17 PM
  • And that's why God invented shorthand suffixes to denote "hey, this literal would default to an int, but I want it treated as a decimal in this expression".

    decimal y = x / 12M;

     

    //Daniel

    Thursday, September 30, 2010 5:45 PM
  •  

     cds333 wrote:
     cds333 wrote:
    The decimal has more significant figures than the double, so it can be more precise; it also takes up slightly more memory. Other than certain math- or physics-related algorithms, the double or float should do fine.

    One other thing to remember is that decimal, double and float are real-number types (e.g. 1.5, 1.83, or 3.33) whereas short, int and long are integer types (e.g. 75, 600, and -9). You would use an integer as a counter in a 'for' loop, for example, whereas a float would be used for a monetary or interest-rate-calculating app, or anything else that requires fractions.

     

     

    I think you have this backwards. Floating point numbers are intended for scientific use where the range of numbers is more important than absolute precision. Decimal numbers are an exact representation of a number and should always be used for monetary calculations. Decimal fractions do not necessarily have an exact representation as a floating point number.


    Thank you for saying this. Floating point types should NOT be used for monetary or currency-related calculations either. That is to say that cds333's much-voted answer is fundamentally flawed, as it downplays the real difference between the two types.

    "The decimal has more significant figures than the double, therefore it can be more precise- it also takes up slightly more memory. "

    If you will pardon the unintentional pun, that's not very precise. Compared to floating-point types, the decimal type has BOTH greater precision and a smaller range. The main difference between the decimal and double data types is that decimals are used to store exact values, while doubles and other binary-based floating point types are used to store approximations. A binary-based floating-point number can only approximate a decimal floating-point number, and how well it approximates is directly correlated with its precision.
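    The exact-versus-approximate distinction is easy to demonstrate. A sketch in Python, on the assumption that its float (the same IEEE 754 double) and its decimal.Decimal (a base-10 type) behave comparably to C#'s double and decimal here:

```python
from decimal import Decimal

# A binary double can only approximate the decimal fraction 0.1:
print(f"{0.1:.20f}")     # 0.10000000000000000555
# So sums of decimal fractions drift away from the exact answer:
print(0.1 + 0.2 == 0.3)  # False
# A base-10 type stores 0.1 exactly, and the sum is exact too:
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True
```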

     

    • Proposed as answer by GonzoKnight Thursday, September 30, 2010 7:49 PM
    Thursday, September 30, 2010 7:09 PM
  • I have done the following:

    double r=120,j=256;

    double lr=j/r;

    Answer:

    lr=2;

     

    If I change double to decimal:

     

    decimal r=120,j=256;

    decimal lr=j/r;

    Answer:

    lr=2.1333333333333;

    --------

    I'm wondering what happens?! Can you explain what happened here?

     

    Tuesday, July 05, 2011 11:31 AM
  • Excellent table cds333!!

    The complexity resides in the simplicity Follow me at: http://smartssolutions.blogspot.com
    Tuesday, July 05, 2011 12:56 PM
  • The fundamental difference is that the double is a base 2 fraction, whereas a decimal is a base 10 fraction.

    double stores the number 0.5 as 0.1, 1 as 1.0, 1.25 as 1.01, 1.875 as 1.111, etc.

    decimal stores 0.1 as 0.1, 0.2 as 0.2, etc.

    The double cannot store something like 0.3 as a finite binary fraction, so I think it uses an approximation (I'm not sure).
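    That guess is right: the double nearest to 0.3 sits slightly below it. One way to see it, sketched in Python (whose float is the same IEEE 754 double):

```python
# Printing extra digits exposes the stored approximation of 0.3,
# which is slightly below 0.3 (0.2999999999999999888...):
print(f"{0.3:.20f}")
```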

    Tuesday, July 05, 2011 2:57 PM
  • I have done the following:

    double r=120,j=256;

    double lr=j/r;

    Answer:

    lr=2;

     

    If I change double to decimal:

     

    decimal r=120,j=256;

    decimal lr=j/r;

    Answer:

    lr=2.1333333333333;

    --------

    I'm wondering what happens?! Can you explain what happened here?

     

    That is not true. The results are the same, with decimal simply having greater precision.
    Tuesday, July 05, 2011 4:22 PM
  • I think what he originally meant to question was:

    double lr = 256/120;

    which would evaluate 256 as an integer, divided by 120 as an integer, resulting in 2, and then assign double lr the value 2.

    By explicitly setting numerator and denominator as double, or if he put  

    double lr = 256.0/120.0;

    he would have got 2.13333333333333
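    The same distinction, sketched in Python (an assumption for illustration: // stands in for C#'s / on two ints, and float division for the double case):

```python
# Integer division truncates, like 256/120 on two int literals in C#:
print(256 // 120)     # 2
# Floating-point division keeps the fraction:
print(256.0 / 120.0)  # approximately 2.1333333333333333
```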

    Tuesday, October 18, 2011 8:38 PM
  • Doubles use floating-point storage in base 2, whereas the decimal stores the information in base 10.

    So, for example, 2.25 as a decimal would be stored as 225 * 10 ^ -2 (the digits 225 and the exponent -2 are what's actually stored), or some variation thereof.

    The double would store it as 1001 * 2 ^ -2 (the significand 1001 is in base 2: 1001 in binary is 9, and 9 * 2^-2 = 2.25).

    You can think of integer binary numbers as each digit as having a power of two, i.e.

    128 64 32 16 8 4 2 1

    for a floating point number, you just need to extend that to negative powers of two as well, i.e.

    16 8 4 2 1 1/2 1/4 1/8 1/16

    or 

    16 8 4 2 1 .5 .25 .125 .0625

     

    Some of the implications:

    In my example I picked a number that is easily represented in binary format, but some numbers that are short/simple base-10 fractions are very long, repeating binary fractions. This means that when using the double, the number can sometimes be off from what you would expect (More Info)
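    A small illustration of that drift, using Python floats (the same IEEE 754 doubles): 0.1 has no finite binary expansion, so adding its approximation ten times does not land exactly on 1.0.

```python
total = 0.0
for _ in range(10):
    total += 0.1     # each step adds the binary approximation of 0.1
print(total == 1.0)  # False
print(total)         # just under 1.0
```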


    Tuesday, October 18, 2011 9:04 PM
  • Doubles use floating-point storage in base 2, whereas the decimal stores the information in base 10.

    So, for example, 2.25 as a decimal would be stored as 225 * 10 ^ -2 (the digits 225 and the exponent -2 are what's actually stored), or some variation thereof.

    The double would store it as 1001 * 2 ^ -2 (the significand 1001 is in base 2: 1001 in binary is 9, and 9 * 2^-2 = 2.25).

    You can think of integer binary numbers as each digit as having a power of two, i.e.

    128 64 32 16 8 4 2 1

    for a floating point number, you just need to extend that to negative powers of two as well, i.e.

    16 8 4 2 1 1/2 1/4 1/8 1/16

    or 

    16 8 4 2 1 .5 .25 .125 .0625

     

    Some of the implications:

    In my example I picked a number that is easily represented in binary format, but some numbers that are short/simple base-10 fractions are very long, repeating binary fractions. This means that when using the double, the number can sometimes be off from what you would expect (More Info)


    Hi!

    I don't know much about this, but why is 2.25 stored in a double as 1001 × 2 ^ -2 and not as 2002 × 2 ^ -3 or 4004 × 2 ^ -4?

    Is it just so that numbers stored in a double can't end up stored in several different ways?


    João Miguel




    • Edited by JMCF125 Saturday, November 05, 2011 2:58 PM
    Saturday, November 05, 2011 2:36 PM
  • I don't know much about this, but why is 2.25 stored in a double as 1001 × 2 ^ -2 and not as 2002 × 2 ^ -3 or 4004 × 2 ^ -4?


    Because binary, as its name implies, only supports two digits: 0 and 1.
    Monday, November 07, 2011 8:52 AM
  • I don't know much about this, but why is 2.25 stored in a double as 1001 × 2 ^ -2 and not as 2002 × 2 ^ -3 or 4004 × 2 ^ -4?


    Because binary, as its name implies, only supports two digits: 0 and 1.

    But then, it wouldn't store 1001 × 2 ^ -2 but instead 1001 × 10 ^ -10, since 2 in binary is 10.


    João Miguel
    Sunday, November 13, 2011 3:16 PM
  • But then, it wouldn't store 1001 × 2 ^ -2 but instead 1001 × 10 ^ -10, since 2 in binary is 10.


    You are absolutely right. But I guess the purpose was to explain the rounding problem you can have when trying to store a number such as 0.2 (binary 0.00110011...). The exponent is not relevant for that.
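    That repeating pattern can be seen directly with Python's float.hex(), which dumps the stored binary significand of a double (hex digit 9 is binary 1001, so the 0011 block repeats until it is rounded off in the last digit):

```python
# 0.2 = binary 0.001100110011...; the double keeps only 52 significand bits:
print((0.2).hex())  # 0x1.999999999999ap-3
```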

    Monday, November 14, 2011 12:37 PM
  • Thank you Louis.fr, that's what I wanted to hear.
    João Miguel
    Monday, November 14, 2011 4:19 PM
  • The decimal has more significant figures than the double, so it can be more precise; it also takes up slightly more memory. Other than certain math- or physics-related algorithms, the double or float should do fine.

    One other thing to remember is that decimal, double and float are real-number types (e.g. 1.5, 1.83, or 3.33) whereas short, int and long are integer types (e.g. 75, 600, and -9). You would use an integer as a counter in a 'for' loop, for example, whereas a float would be used for a monetary or interest-rate-calculating app, or anything else that requires fractions.

    The chart below details most of the common variable types, as well as their sizes and possible values.
     
    C# Type | .NET Framework (System) type | Signed? | Bytes Occupied | Possible Values
    sbyte | System.SByte | Yes | 1 | -128 to 127
    short | System.Int16 | Yes | 2 | -32768 to 32767
    int | System.Int32 | Yes | 4 | -2147483648 to 2147483647
    long | System.Int64 | Yes | 8 | -9223372036854775808 to 9223372036854775807
    byte | System.Byte | No | 1 | 0 to 255
    ushort | System.UInt16 | No | 2 | 0 to 65535
    uint | System.UInt32 | No | 4 | 0 to 4294967295
    ulong | System.UInt64 | No | 8 | 0 to 18446744073709551615
    float | System.Single | Yes | 4 | Approximately ±1.5 x 10^-45 to ±3.4 x 10^38 with 7 significant figures
    double | System.Double | Yes | 8 | Approximately ±5.0 x 10^-324 to ±1.7 x 10^308 with 15 or 16 significant figures
    decimal | System.Decimal | Yes | 16 | Approximately ±1.0 x 10^-28 to ±7.9 x 10^28 with 28 or 29 significant figures
    char | System.Char | N/A | 2 | Any Unicode character (16 bit)
    bool | System.Boolean | N/A | 1 | true or false


    nice patch....
    Wednesday, November 23, 2011 3:47 PM
  • What about this:

    float     10.139f * 5f = 50.695

    double 10.139d * 5d = 50.694999999999993

    decimal 10.139m * 5m = 50.695

    double is more precise than the float, but as you can see above, the result is not as expected.
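    Those results can be reproduced in Python, whose float is the same IEEE 754 double and whose decimal.Decimal matches C#'s decimal for this calculation:

```python
from decimal import Decimal

# The product of (the double nearest 10.139) and 5 is not the double nearest 50.695:
print(10.139 * 5 == 50.695)   # False
# The base-10 type multiplies exactly:
print(Decimal("10.139") * 5)  # 50.695
```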

    Tuesday, April 03, 2012 8:07 AM
  • What about this:

    float     10.139f * 5f = 50.695

    double 10.139d * 5d = 50.694999999999993

    decimal 10.139m * 5m = 50.695

    double is more precise than the float, but as you can see above, the result is not as expected.

    How are you displaying those results? Neither is the actual representation of the data, merely a decimal approximation of it. My guess is that the float was rounded and the double wasn't.
    Tuesday, April 03, 2012 1:52 PM
  • Decimal is 16 bytes!
    Thursday, July 19, 2012 5:49 PM
  • Decimal is 16 bytes!
    That information (along with much more) is already listed in the accepted answer.
    Thursday, July 19, 2012 5:52 PM
    • Decimal is especially suited for financial and monetary calculations, which require higher accuracy.
    • It is not adequate for scientific applications.
    • A certain loss of precision is acceptable in scientific calculations; loss of precision is not acceptable in financial calculations.
    • In most operations, decimal is much slower than float and double, because float and double are represented in binary and processed by the hardware, while decimal is represented in base 10 and calculated in software.
    • Decimal has a smaller value range compared to double.

    Wisen Technologies

    .Net Training

    Thursday, September 20, 2012 5:27 AM
  • If you want as high a precision as possible, and performance is also a plus, double is the way to go.

    If you need to avoid rounding errors or use a consistent number of decimal places, decimal is the way to go.

    Decimals have much higher precision and are usually used within financial applications that require a high degree of accuracy. Of course, decimals are much slower than a double/float.

    Decimal uses the most space and is the most accurate, but it's also quite a bit more expensive in processor time, as it is not an intrinsic hardware type. One advantage of decimal is that it is optimized for financial calculations.

    Monday, September 24, 2012 5:07 PM
  • Decimals have much higher precision and are usually used within financial applications that require a high degree of accuracy.

    But be aware that high precision does not guarantee high accuracy; and high accuracy does not necessarily require high precision - check out the Wikipedia articles on 'Accuracy and Precision' and 'Numerical Methods/Errors'. Much depends on the algorithm used - especially in iterative calculations.


    Regards David R
    ---------------------------------------------------------------
    The great thing about Object Oriented code is that it can make small, simple problems look like large, complex ones.
    Object-oriented programming offers a sustainable way to write spaghetti code. - Paul Graham.
    Every program eventually becomes rococo, and then rubble. - Alan Perlis
    The only valid measurement of code quality: WTFs/minute.

    Monday, September 24, 2012 5:32 PM
  • I have done the following:

    double r=120,j=256;

    double lr=j/r;

    Answer:

    lr=2;

    Can someone explain this? Why would this be calculated as int? The variables are already doubles. Even though it didn't say double r=120.0, a double is still a double, right? Also, the code seems fine to me; lr=2.1333333 on my machine. I am using .NET 3.5 in C#. Maybe he was using C++ or something?

    Also I just want to make sure I get this right.

    • Double has a higher range and, thus, has the potential to store an approximation much closer to a very large or very small actual value.
    • Decimal has "exact" precision within its smaller range and is thus more suitable for financial applications. It is not used in scientific applications because you lose the range and end up rounding or truncating too short.
    • Decimal is a lot slower and is 16 bytes.
    • Remember to write decimal z = x / 5m; to make sure the division is calculated in decimal.

    Am I right?

    Thank you.


    Tuesday, January 08, 2013 5:20 PM