Float vs Double vs Decimal

  • Question

  • I have a class named A with three overloaded methods. When I call it in the manner below, which method will be called, and why?

    class Program
    {
        static void Main(string[] args)
        {
            A a1 = new A();
            a1.demo(1.2); // a call like this, with an unsuffixed literal
        }
    }

    public class A
    {
        public void demo(float a) { }

        public void demo(decimal b) { }

        public void demo(double b) { }
    }


    Sunday, July 8, 2018 8:28 AM

All replies


    Double is more precise than float and decimal is more precise than double.

    Sorry, I read the title only :) The default type for a numeric literal with a decimal point is double. So if you want one of the other methods to be called, you need to specify that the number is a float (1.2f) or a decimal (1.2m).
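    A minimal sketch of that overload resolution, reusing the class from the question (the printed labels are added here just to show which overload runs):

    ```csharp
    using System;

    public class A
    {
        public void demo(float a)   => Console.WriteLine("float");
        public void demo(decimal b) => Console.WriteLine("decimal");
        public void demo(double b)  => Console.WriteLine("double");
    }

    class Program
    {
        static void Main()
        {
            A a1 = new A();
            a1.demo(1.2);   // unsuffixed literal is double -> prints "double"
            a1.demo(1.2f);  // f suffix -> prints "float"
            a1.demo(1.2m);  // m suffix -> prints "decimal"
        }
    }
    ```

    So the overload is picked at compile time from the literal's type, not at run time.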
    Sunday, July 8, 2018 9:18 AM
  • As an addition to Petr, you need to consider the following three factors:
    1. The size reserved in memory for each data type:

    Float: 4 bytes
    Double: 8 bytes
    Decimal: 16 bytes
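
    You can confirm those sizes yourself; for these built-in value types, sizeof is allowed outside an unsafe context:

    ```csharp
    using System;

    class Sizes
    {
        static void Main()
        {
            Console.WriteLine(sizeof(float));   // 4
            Console.WriteLine(sizeof(double));  // 8
            Console.WriteLine(sizeof(decimal)); // 16
        }
    }
    ```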

    2. The capacity of each data type:

    You can display the Min/Max values of each type using this simple code:

        Console.WriteLine(float.MinValue);
        Console.WriteLine(float.MaxValue);

    The same code can be applied to double and decimal as well.

    3. The default type for floating-point literals is double:
    For example, if you check the methods of the System.Math class, you will see they expect double.
    If you executed this code:

    Console.WriteLine((1.5 + 2).GetType());
    the output would be System.Double.

    That means: prefer double where possible, to avoid unnecessary conversions from the other floating-point types in some scenarios.
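    A short sketch of point 3: Math methods take double, so a float argument converts implicitly, while a decimal requires an explicit cast.

    ```csharp
    using System;

    class MathDefaults
    {
        static void Main()
        {
            float f = 2f;
            decimal m = 2m;

            Console.WriteLine(Math.Sqrt(f));         // float converts to double implicitly
            Console.WriteLine(Math.Sqrt((double)m)); // decimal needs an explicit cast
            Console.WriteLine((1.5 + 2).GetType());  // System.Double, as in the reply above
        }
    }
    ```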

    I hope you find my reply helpful!

    Monday, July 9, 2018 5:03 PM
  • The main difference is that float and double are binary floating-point types, while decimal stores the value as a decimal floating-point type. So decimals have much higher precision and are usually used in monetary (financial) applications that require a high degree of accuracy. Performance-wise, however, decimals are slower than double and float.

    Float: ~7 significant digits (32-bit)

    Double: 15-16 significant digits (64-bit)

    Decimal: 28-29 significant digits (128-bit)

    Decimals are much slower (up to 20x in some tests) than a double or float. A decimal cannot be compared with a float or double without a cast, whereas floats and doubles can be compared directly. Decimals also allow the encoding of trailing zeros.
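
    A small sketch of the three differences mentioned above: binary rounding error, the required cast, and trailing-zero preservation (output formatting assumes the invariant culture):

    ```csharp
    using System;

    class DecimalVsDouble
    {
        static void Main()
        {
            Console.WriteLine(0.1 + 0.2 == 0.3);    // False: binary rounding error
            Console.WriteLine(0.1m + 0.2m == 0.3m); // True: decimal is exact here

            double d = 1.0;
            decimal m = 1.0m;
            // Console.WriteLine(d == m);           // compile error: no implicit conversion
            Console.WriteLine(d == (double)m);      // True, with an explicit cast

            Console.WriteLine(1.0m);                // 1.0  -> trailing zero is kept
            Console.WriteLine(1.00m);               // 1.00
        }
    }
    ```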

    Thursday, January 10, 2019 4:44 AM