Can anyone explain clearly about FLOAT vs DECIMAL vs DOUBLE?
Question

Hi fellows,
Please answer the following questions for me:
1. What are FLOAT, DECIMAL, and DOUBLE?
2. Why do we use all three instead of just one of them?
3. Which is the best one to use?
4. Float vs Double vs Decimal
PLEASE DON'T REDIRECT ME TO THE MSDN SITE!
Tell me in simple English!
Thanks in advance!
Sunday, February 21, 2010 10:01 AM
Answers

0. What are floating-point values?
Example (binary): 101.01 = (1 × 2^2) + (0 × 2^1) + (1 × 2^0) + (0 × 2^−1) + (1 × 2^−2) = 5.25
1.
1.1. float stores 32-bit floating-point values. Range: ±1.5 × 10^−45 to ±3.4 × 10^38
1.2. double stores 64-bit floating-point values. Range: ±5.0 × 10^−324 to ±1.7 × 10^308
1.3. decimal is a 128-bit data type. Compared to the floating-point types, the decimal type has more precision and a smaller range, which makes it appropriate for financial and monetary calculations. Range: ±1.0 × 10^−28 to ±7.9 × 10^28
2. Because we need different precisions under different conditions, e.g. double for computing the distance between an atom and its nucleus, and decimal for financial and monetary calculations.
3. As you can see from 2, it depends on the environment.
4. See the ranges above.
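The expansion in item 0 can be checked directly (a Python sketch for illustration; bit weights to the right of the binary point use negative powers of 2):

```python
# 101.01 in binary: bit weights are 2^2, 2^1, 2^0, then (after the
# binary point) 2^-1 and 2^-2.
value = (1 * 2**2) + (0 * 2**1) + (1 * 2**0) + (0 * 2**-1) + (1 * 2**-2)
print(value)  # 5.25
```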
Please help us improve this community forum for visitors by marking the replies as answers if they help and unmarking them if they provide no help.
Thanks.
Edited by Yasser Zamani - Mr. Help, Sunday, February 21, 2010 1:11 PM
Proposed as answer by farooqaaa, Tuesday, February 23, 2010 2:55 AM
Marked as answer by Liliane Teng, Sunday, February 28, 2010 8:08 AM
Sunday, February 21, 2010 10:24 AM
Hello S.Silambarasan,
Welcome to MSDN Forum!
I understand your problem; the replies above are all good. The following is information about your four questions. I hope this helps you understand clearly.
The first one:
A float is a processor-native type. It is a 32-bit floating point value that can represent values in the range of negative 3.402823e38 to positive 3.402823e38.
A double is also a processor-native type. It is a 64-bit floating point value that can represent values in the range of negative 1.79769313486232e308 to positive 1.79769313486232e308.
A Decimal is not an intrinsic type. It is a 128-bit floating point value that can represent values in the range of negative 79,228,162,514,264,337,593,543,950,335 to positive 79,228,162,514,264,337,593,543,950,335.
The second one:
Because the problems are different, we need different precisions. These three types are fit for different environments.
If your accuracy needs are not particularly great, Float is the way to go. Float uses less space and has the lowest accuracy/precision.
If you want as high precision as possible and performance also matters, Double is the way to go.
If you need to avoid rounding errors or use a consistent number of decimal places, Decimal is the way to go.
Decimals have much higher precision and are usually used in financial applications that require a high degree of accuracy. Of course, decimals are much slower (up to 20x in some tests) than a double/float.
Decimals and floats/doubles cannot be compared without a cast, whereas floats and doubles can. Decimals also allow the encoding of trailing zeros.
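A quick illustration of the rounding-error point (a Python sketch; Python's float is a 64-bit double, and its decimal.Decimal plays the same role as C#'s decimal):

```python
from decimal import Decimal

# Binary floating point cannot represent 0.1 exactly, so small
# rounding errors creep into arithmetic.
print(0.1 + 0.2 == 0.3)  # False
print(0.1 + 0.2)         # 0.30000000000000004

# A decimal type stores base-10 digits exactly, so the same sum is exact.
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True
```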
The third one:
Which is best is the one that is best for your particular problem and solution.
•For values which are "naturally exact decimals" it's good to use decimal. This is usually suitable for any concepts invented by humans: financial values are the most obvious example, but there are others too. Consider the score given to divers or ice skaters, for example.
•For values which are artefacts of nature and can't really be measured exactly anyway, float/double are more appropriate. For example, scientific data would usually be represented in this form. Here, the original values won't be "decimally accurate" to start with, so it's not important for the expected results to maintain "decimal accuracy". Floating binary point types are much faster to work with than decimals.
The fourth one:
float is a single precision (32 bit) floating point data type as defined by IEEE 754 (it is used mostly in graphic libraries).
double is a double precision (64 bit) floating point data type as defined by IEEE 754 (probably the most normally used data type for real values).
decimal is a 128-bit floating point data type; it should be used where precision is of extreme importance (e.g. monetary calculations).
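To see the single- vs double-precision difference concretely (a Python sketch; packing to 4 bytes with struct mimics a 32-bit float, while Python's native float is a 64-bit double):

```python
import struct

x = 0.1  # not exactly representable in binary floating point

# Round-trip through a 32-bit (single-precision) encoding.
as_float32 = struct.unpack("f", struct.pack("f", x))[0]

# The single-precision value differs from the double-precision one,
# because it keeps only ~7 significant decimal digits instead of ~15-16.
print(as_float32)       # 0.10000000149011612
print(as_float32 == x)  # False
```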
For more information, see:
http://en.wikipedia.org/wiki/Single_precision
http://en.wikipedia.org/wiki/Double_precision
http://en.wikipedia.org/wiki/Decimal
If you have any problems, please feel free to contact me.
Best regards!
Liliane Teng
Marked as answer by Liliane Teng, Sunday, February 28, 2010 8:13 AM
Tuesday, February 23, 2010 2:46 AM 
All three of them are computer representations of real numbers.
A float is a processor-native type. It is a 32-bit floating point value that can represent values in the range of negative 3.402823e38 to positive 3.402823e38.
A double is also a processor-native type. It is a 64-bit floating point value that can represent values in the range of negative 1.79769313486232e308 to positive 1.79769313486232e308.
A Decimal is not an intrinsic type. It is a 128-bit floating point value that can represent values in the range of negative 79,228,162,514,264,337,593,543,950,335 to positive 79,228,162,514,264,337,593,543,950,335.
Basically the difference is storage space versus accuracy/precision. Float uses less space and has the lowest accuracy/precision. Decimal uses the most space and is the most accurate, but it's also quite a bit more expensive in processor time too as it is not an intrinsic type. One advantage of Decimal is that it is optimized for financial calculations (where only two decimal points are significant).
The most important thing to know about real numbers on a computer is that the computer cannot represent any arbitrary real number. A computer can only work with a finite number of bits, whereas real numbers have infinitely many digit positions. Therefore all floating point calculations on a computer are approximations. You must take this into account and use rounding (round, floor, ceiling, min, max, etc.) to get the "most correct" results for your particular task.
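For example (a Python sketch of the rounding functions mentioned above):

```python
import math

total = 0.1 + 0.2  # 0.30000000000000004, not exactly 0.3

# Rounding to a fixed number of places recovers the "expected" value;
# floor and ceiling give the two integers bracketing total * 100.
print(round(total, 2))          # 0.3
print(math.floor(total * 100))  # 30
print(math.ceil(total * 100))   # 31
```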
Which is best to use? Only you can answer that question: the best type is the one that fits your particular problem and solution.
Proposed as answer by Stephen Cleary (MVP), Monday, February 22, 2010 3:34 PM
 Marked as answer by Liliane Teng Sunday, February 28, 2010 8:11 AM
Sunday, February 21, 2010 10:37 AM
All replies

In addition to Yasser's answer: If you need to avoid rounding errors, go with the decimal type. The decimal type internally uses 1 bit for the positive/negative sign, a 96-bit *integer* for the representation of the number, and a scaling factor specifying what portion of the internal integer should be used for the decimal fraction. But more exactitude comes at a cost: diminished calculation speed.
If you need computational performance and small rounding errors are not very important to you, go with the other types.
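The sign / integer coefficient / scaling-factor structure can be seen in Python's analogous decimal.Decimal type (a sketch for illustration; in C# the corresponding internals are exposed via decimal.GetBits):

```python
from decimal import Decimal

d = Decimal("-123.45")

# as_tuple() exposes the internal structure: a sign flag, the digits of
# an integer coefficient, and an exponent (the scaling factor).
sign, digits, exponent = d.as_tuple()
print(sign)      # 1  (negative)
print(digits)    # (1, 2, 3, 4, 5)  -- the integer 12345
print(exponent)  # -2 -- scale: 12345 * 10**-2 = 123.45
```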
Marcel
Sunday, February 21, 2010 10:46 AM 
My SQL Server training book states that the range for the decimal data type is −10^38 + 1 through 10^38 − 1, which differs from your response...? I am wondering what the +1 and −1 portions refer to. I believe we lose one digit of precision to account for 0, but why do we lose the other? Thank you!!!
Friday, December 17, 2010 6:11 PM

The range of the SQL decimal type is not the same as the C# decimal:
http://msdn.microsoft.com/en-us/library/364x0z75(v=vs.80).aspx
Approximate range:
±1.0 × 10^−28 to ±7.9 × 10^28
The +1 and −1 are exactly that: add 1 to −10^38 and subtract 1 from 10^38.
10^38 has 39 digits: 1 followed by 38 zeros.
Saturday, December 18, 2010 5:43 AM 
Thank you Louis.fr, but can you please tell me why we lose those two digits? Is it related to the need to account for zero and the negative sign? Thank you!
Monday, December 20, 2010 3:47 PM

Let's take a smaller range: a 2-digit number.
The range is from −99 to 99. Right?
−99 = −10^2 + 1
99 = 10^2 − 1
The biggest number you can write with n digits is always 10^n − 1.
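The same arithmetic as a trivial Python check of the 10^n − 1 rule:

```python
# The largest n-digit decimal number is 10**n - 1: a run of n nines.
for n in (2, 5, 38):
    biggest = 10**n - 1
    print(biggest)          # 99, then 99999, then 38 nines
    assert len(str(biggest)) == n
```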
Monday, December 20, 2010 9:42 PM 
Hello,
In the book Write Great Code, Volume 1: Understanding the Machine there's an outstanding chapter about this subject and more! :)
Regards, Eyal.
Any fool can write code that a computer can understand. Good programmers write code that humans can understand. - Martin Fowler.
Visual Studio Command Browser 2.0 CodeVolume.Presenters
Monday, December 20, 2010 10:13 PM