Money question: decimal or double usage when dealing with SQL float.

• Question

• Hi,

We have a database that implements money as float. The direct .NET type here would be Double, right? But even that isn't quite true, since "0.55 + 0.05" comes out as "0.6" on the SQL Server while in .NET the same sum loses precision.

As I see it, the question boils down to this: do I convert between SQL float and Decimal and accept that conversion's precision loss? Or do I use Double throughout my application and accept floating-point imprecision when we are talking about money?

Which do you think is better?
Tuesday, April 21, 2009 9:14 AM
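The mismatch described above can be reproduced in a few lines of C# (a minimal sketch; only the values from the question are assumed):

```csharp
using System;

class FloatVsDecimalDemo
{
    static void Main()
    {
        // double (the closest .NET match for SQL float) is binary
        // floating point, so 0.55 and 0.05 have no exact representation.
        double d = 0.55 + 0.05;
        Console.WriteLine(d == 0.6);        // False: the sum is slightly above 0.6

        // decimal is base-10, so the same sum is exact.
        decimal m = 0.55m + 0.05m;
        Console.WriteLine(m == 0.6m);       // True
    }
}
```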

• Reading the db value into Decimal can't fix accuracy that was already lost.  But if you do any further math on it, Decimal does avoid accumulating rounding problems.  The Double type is accurate to 15 significant digits, good enough to store the USA budget deficit with a precision of $0.000003.  It is all in the presentation: don't show more than 2 digits in the fraction.  And watch out when you compare; use Math.Abs(money1 - money2) < 0.005 to test for equality.
Hans Passant.
• Marked as answer by Monday, April 27, 2009 3:33 PM
Tuesday, April 21, 2009 2:08 PM
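The comparison advice in the answer above can be sketched as a tiny helper (the method name AreSameAmount is hypothetical; the half-cent tolerance is from the post):

```csharp
using System;

static class MoneyCompare
{
    // Two double amounts count as equal when they differ by less than
    // half a cent.  A direct == comparison fails on sums like
    // 0.55 + 0.05 vs 0.6 because of binary rounding.
    public static bool AreSameAmount(double money1, double money2)
    {
        return Math.Abs(money1 - money2) < 0.005;
    }
}

class Program
{
    static void Main()
    {
        Console.WriteLine(0.55 + 0.05 == 0.6);                           // False
        Console.WriteLine(MoneyCompare.AreSameAmount(0.55 + 0.05, 0.6)); // True
    }
}
```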

All replies

• The SQL Server result is inaccurate too; it just hides it by using a limited number of digits in the fraction.  Do the same thing in .NET with, say, string.Format("{0:N2}", moneyValue) so the result gets rounded to the nearest penny.  Or fix the real problem: money shouldn't be stored as float.
Hans Passant.
Tuesday, April 21, 2009 10:19 AM
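The presentation-side rounding suggested above might look like this (a sketch; moneyValue is the name from the post, and the output shown assumes an English-style culture):

```csharp
using System;
using System.Globalization;

class PresentationRounding
{
    static void Main()
    {
        double moneyValue = 0.55 + 0.05;   // internally slightly more than 0.6

        // "N2" rounds to two fraction digits for display, so the binary
        // noise never reaches the user.  Culture is pinned here so the
        // decimal separator is predictable.
        Console.WriteLine(string.Format(CultureInfo.InvariantCulture,
                                        "{0:N2}", moneyValue));   // 0.60
    }
}
```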

Fixing the db is not an option, unfortunately. So would you go with Doubles throughout the application, or convert the SQL floats to Decimals? I know this can't be answered completely without knowing the operations the data is going to go through, but what I am after is more the general best practice. My co-workers want to go with doubles all over, but since this is non-scientific data (money) handled in an environment where performance isn't important, my instinct (and Google) tells me that anything but Decimals would be bad.

So basically I'm looking for reasons why I should NOT push for my preferred approach of using Decimals. I can't see why converting SQL floats to Decimals would be bad when we are talking about money.

--
Tuesday, April 21, 2009 11:55 AM
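One concrete way to weigh the two options is to watch errors accumulate: repeated double arithmetic drifts, while decimal stays exact for base-10 amounts. A small sketch (the ten-thousand-penny scenario is an assumption for illustration):

```csharp
using System;

class AccumulationDrift
{
    static void Main()
    {
        double dSum = 0.0;
        decimal mSum = 0m;
        for (int i = 0; i < 10000; i++)
        {
            dSum += 0.01;    // each addition carries a tiny binary rounding error
            mSum += 0.01m;   // decimal represents 0.01 exactly
        }
        Console.WriteLine(dSum == 100.0);   // False: the double total drifted
        Console.WriteLine(mSum == 100m);    // True
    }
}
```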

Ok thanks nobugz.
Wednesday, April 22, 2009 6:43 AM