Decimal numbers
Question

Hi guys,
I have been wondering how banks and financial institutions deal with decimal numbers, and how C# deals with them: how does the decimal data type handle the fractional part of a number? I also wonder how developers deal with decimal numbers in real life when they work for a financial institution.
For instance, take 3.02: how is the part after the decimal point represented? In binary, 011 equals 3, but how do they represent the .02 cents?
Cents may not be important for individuals, but I believe they are important for banks and the like, because they have millions of customers; if they lose one cent from each customer, it won't be good for them.
Thanks in advance
Answers

Refer to the page below, which has very good info on floating-point numbers.
http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html
Please mark this post as answer if it solved your problem. Happy Programming!
All replies


When dealing with currency you don't use floating-point storage, precisely because it's so imprecise (pun intended).
Instead, the data is stored as integers, i.e. the number of cents that you have, rather than the number of dollars as a floating-point value with a fractional part.
In C# there is both a double type and a decimal type. double uses floating-point storage and as such is subject to floating-point errors, but it is quite fast at performing its operations and uses a small amount of memory for what it does. The decimal type, on the other hand, stores its data using integers: there is an integer representing the value and an integer representing the exponent it is shifted by. This means that the math operations are slower, it uses more space, and it has a more limited range, but it doesn't have floating-point errors within that range.
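The trade-off described above can be demonstrated with a short, self-contained sketch (not from the original thread; it assumes nothing beyond standard C#):

```csharp
using System;

class CurrencyDemo
{
    static void Main()
    {
        // Accumulate 0.1 ten times with double: binary floating point
        // cannot represent 0.1 exactly, so the error compounds.
        double dsum = 0.0;
        for (int i = 0; i < 10; i++) dsum += 0.1;
        Console.WriteLine(dsum == 1.0);   // False

        // The same accumulation with decimal is exact, because decimal
        // stores an integer value plus a base-10 scaling factor.
        decimal msum = 0.0m;
        for (int i = 0; i < 10; i++) msum += 0.1m;
        Console.WriteLine(msum == 1.0m);  // True
    }
}
```

This is exactly why the "count in cents" (or decimal) approach is preferred for money: the one-ulp error on the double sum would eventually surface as a lost or gained cent.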
 Edited by servy42 Friday, March 30, 2012 5:04 PM

A floating-point number stores the value in scientific notation, meaning there is only one digit 'before' the decimal point and the rest are after it, plus an exponent: essentially it stores a fractional number and an exponent. The decimal type stores a non-negative integer, a sign, and an exponent. That integer can still be represented in base 2, because an integer can be stored in base 2 without any loss of data (as opposed to a fractional number, which may have no exact base-2 representation even though it has one in base 10).
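One visible consequence of the integer-plus-exponent storage is that decimal preserves trailing zeros: 1.5m and 1.50m are numerically equal but carry different scaling factors. A minimal sketch (added here for illustration, not from the original thread):

```csharp
using System;

class ScaleDemo
{
    static void Main()
    {
        // 1.5m is stored as the integer 15 with exponent 1;
        // 1.50m is stored as the integer 150 with exponent 2.
        decimal a = 1.5m;
        decimal b = 1.50m;
        Console.WriteLine(a == b);        // True (same numeric value)
        Console.WriteLine(a.ToString());  // 1.5
        Console.WriteLine(b.ToString());  // 1.50 (the scale survives)
    }
}
```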

Hello,
I'd like to recommend the book "Write Great Code" by Randall Hyde to anyone who wants to understand how this really works. Although most of the book is about HLA, it has some great chapters on numeral systems and how the computer represents them.
A great series of books for anyone who is bored or forced to learn HLA. haha ;)
Eyal (http://shilony.net), Regards.

The standard in .NET Framework 1.0 was bankers' rounding, as used in IEEE Standard 754; that has caused much confusion and many issues, because most people don't use it.
It is still the default, but currently many other rounding modes are possible.
http://msdn.microsoft.com/en-us/library/3s2d3xkk.aspx
Be aware that the most-used programming language at banks has always been COBOL. That language has no standard floating-point data type, only decimal and computational (binary) ones.
It is never clever to use floating-point data types in C# for money, because they will often not give the correct rounding in the case of money.
(You can take the decimal type and then choose the rounding that fits, as shown in the link above; bankers' rounding is the default.)
http://en.wikipedia.org/wiki/IEEE_754-2008
Success
Cor 
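The bankers'-rounding default mentioned above can be seen directly in Math.Round (a small sketch added for illustration; it uses only standard C#):

```csharp
using System;

class RoundingDemo
{
    static void Main()
    {
        // Math.Round defaults to MidpointRounding.ToEven ("bankers' rounding"):
        // midpoints round to the nearest even digit, so rounding errors
        // don't systematically bias sums upward.
        Console.WriteLine(Math.Round(2.5m));  // 2
        Console.WriteLine(Math.Round(3.5m));  // 4

        // The "schoolbook" behaviour most people expect must be requested:
        Console.WriteLine(Math.Round(2.5m, MidpointRounding.AwayFromZero));  // 3
    }
}
```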
In C# you may use the type `decimal`.
According to MSDN:
The Decimal value type represents decimal numbers ranging from positive 79,228,162,514,264,337,593,543,950,335 to negative 79,228,162,514,264,337,593,543,950,335. The Decimal value type is appropriate for financial calculations requiring large numbers of significant integral and fractional digits and no round-off errors. The Decimal type does not eliminate the need for rounding. Rather, it minimizes errors due to rounding. For example, the following code produces a result of 0.9999999999999999999999999999 rather than 1.
decimal dividend = Decimal.One;
decimal divisor = 3;
// The following displays 0.9999999999999999999999999999 to the console
Console.WriteLine(dividend / divisor * divisor);
A decimal number is a floating-point value that consists of a sign, a numeric value where each digit in the value ranges from 0 to 9, and a scaling factor that indicates the position of a floating decimal point that separates the integral and fractional parts of the numeric value.
The binary representation of a Decimal value consists of a 1-bit sign, a 96-bit integer number, and a scaling factor used to divide the 96-bit integer and specify what portion of it is a decimal fraction. The scaling factor is implicitly the number 10, raised to an exponent ranging from 0 to 28. Therefore, the binary representation of a Decimal value is of the form ((-2^96 to 2^96) / 10^(0 to 28)), where -(2^96 - 1) is equal to MinValue, and 2^96 - 1 is equal to MaxValue.
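The sign, 96-bit integer, and scale described above can be inspected directly with Decimal.GetBits, which is part of the standard System namespace (a short sketch added for illustration):

```csharp
using System;

class BitsDemo
{
    static void Main()
    {
        // Decimal.GetBits returns four ints: the low, middle, and high 32 bits
        // of the 96-bit integer, then a flags word holding the scale and sign.
        int[] bits = Decimal.GetBits(1.50m);

        Console.WriteLine(bits[0]);                     // 150 (the integer part)
        int scale = (bits[3] >> 16) & 0xFF;             // scale lives in bits 16-23
        Console.WriteLine(scale);                       // 2, i.e. 150 / 10^2 = 1.50
        bool negative = (bits[3] & int.MinValue) != 0;  // sign is bit 31
        Console.WriteLine(negative);                    // False
    }
}
```

This makes the "integer plus power-of-ten scaling factor" description concrete: 1.50m is literally stored as 150 scaled by 10^2.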