# Conversion from int to float

### General discussion

• There are a couple of things about casting that have bothered me for a while now.

First: "conversion from 'int' to 'float', possible loss of data".
The advice is always "just cast it, it's fine", but that ignores the actual possibility of error!
How large of an integer is too large for a float? Or a double, for that matter? Can you overflow the value of a float variable by casting a MAX_INT integer to it?

The second question is a bit deeper, and I'm not really sure that I'll understand (or even care about, for that matter) the possible answers. What about speed? What does a cast actually do, and is there possibly a function or algorithm that can do better under specific circumstances (I'm mostly thinking about declarative assignments here)? I've seen a lot of talk about converting from floating point values to integer values, but I haven't seen anything other than a mention that it can be done when it comes to converting ints to floats.

So, I'd really like to see what anyone has to say about any of this. Thanks!
Sunday, February 22, 2009 9:27 PM

### All replies

• Don't consider a cast to be a conversion. Avoid casts whenever possible; always look for an alternative to a cast.

Look in the documentation for a page that shows the sizes and ranges of each fundamental type; it is in there.
Sam Hobbs; see my SimpleSamples.Info
Sunday, February 22, 2009 9:48 PM
• There is no trouble with the size:

INT_MAX  2147483647
FLT_MAX  3.402823466e+38
DBL_MAX  1.7976931348623158e+308

but float will not preserve all the digits of a large int. That is one reason to always use double rather than float.

Conversion of int to float or double will be very fast, so I would not worry about it.

David Wilkinson | Visual C++ MVP
Sunday, February 22, 2009 9:53 PM
• Thank you David for a more complete answer. I know that mine was quite minimal.
Sam Hobbs; see my SimpleSamples.Info
Sunday, February 22, 2009 9:58 PM
• Thanks again, Dave.
:)

The thing is, what Simple Samples says above: "avoid casts whenever possible", is what throws a wrench in everything. Ideally, functions that take floating point parameters should always be sent floating point variables, etc... We all know that this isn't always possible, however. Well, I guess that it could be with templates, but the variable's backing store is invariably either a float/double or an int/long fundamental type somewhere along the way...

So, the "always look for something else" statement is really where I'm coming from. If you can't change your local variable, and you can't change the parameter type, what does that leave other than a cast?
Sunday, February 22, 2009 10:01 PM
• Perhaps you miss the point of the warning.  It is just a reminder: "hey, did you think about what might go wrong if you do this?"  And it has a good point:

```
#include "stdafx.h"
#include <stdio.h>
#include <conio.h>

int _tmain(int argc, _TCHAR* argv[])
{
    int ix = 16777217;
    float f = (float)ix;
    printf("integer = %d\n", ix);
    printf("float   = %.0f\n", f);
    _getch();
    return 0;
}
```

How many programmers out there would say: "well, that's okay"?  Of course, it is okay, because that's how floating point math works.  You explicitly tell the compiler that you know how it works with:

float f = (float)ix;    // The ix value is always less than 16777217 and I've tested this thoroughly

or

Hans Passant.
Sunday, February 22, 2009 10:42 PM
• ohms_law said:

The thing is, what Simple Samples says above: "avoid casts whenever possible", is what throws a wrench in everything.

I did not say don't use casts.

ohms_law said:

if you can't change your local variable, and you can't change the parameter type, what does that leave other then a cast?

Can you provide a specific example?

Sam Hobbs; see my SimpleSamples.Info
Sunday, February 22, 2009 10:50 PM
• ohms_law said:

How large of an integer is too large for a float? Or a double, for that matter? Can you overflow the value of a float variable by casting a MAX_INT integer to it?

Did you look in the documentation? Specifically, did you see Numerical Limits?
Sam Hobbs; see my SimpleSamples.Info
Sunday, February 22, 2009 11:21 PM
• Simple Samples said:

ohms_law said:

How large of an integer is too large for a float? Or a double, for that matter? Can you overflow the value of a float variable by casting a MAX_INT integer to it?

Did you look in the documentation? Specifically, did you see Numerical Limits?
Sam Hobbs; see my SimpleSamples.Info

Yes, of course!
:)

That's actually partially where the question is coming from, since I don't see any way to "lose data" by casting an int to a float or double...

I think that nobugz is hitting the nail on the head here, though, in that the warning is really just a generic warning. It's the same message whether the conversion is from int to float or from int to char, so it's really not a big deal, I guess.

In the end, I think that I'm simply looking for an easier solution than putting in the work to rewrite a few hundred classes and class functions with either templates or explicitly typed overloads. It's a PITA when the original programmer/programming team simply ignored the issue...

I really miss C# and the .NET framework.
lol
Sunday, February 22, 2009 11:58 PM
• ohms_law said:

I don't see any way to "lose data" by casting an int to a float or double...

The documentation is not clear to me, but there does seem to be a possible loss of data; try the following:

```
int i1(INT_MAX), i2;
float f(i1);
i2 = f;
std::cout << i1 << ' ' << f << ' ' << i2 << '\n';
```

The output is:

`2147483647 2.14748e+009 -2147483648`

The difference is minor and I don't know why it occurs, but it does show a potential loss of data, in the form of incorrect data at least.

Sam Hobbs; see my SimpleSamples.Info
Monday, February 23, 2009 1:27 AM
• I admit that the loss of data is not as much as I thought.

Sam Hobbs; see my SimpleSamples.Info
Monday, February 23, 2009 1:42 AM
• That's a good example of rounding loss. The conversion itself isn't actually losing anything; it's just that there's no real way to represent fractional values exactly in a computer. The value of 1/3 is the best example, since the real value of 1/3 is .333333... repeating infinitely. As is obvious to us all, though, computers cannot represent infinite values. So the value of 1/3, or pi, or any other fractional value with an infinitely repeating sequence is approximated by rounding at the furthest digit possible.

But... how all of that actually happens is up to the microchip that you're using at the time, not the programming. There's a reason why integers, longs, floats, doubles, etc. are called "fundamental types": that's what the CPU uses, so that's what is given to the compiler.

I think that I just reasoned out my own answer to the original post, here. I just need to use casts to tell the compiler "yeah, I know that I'm converting this". It just sucks, is all. I'm really very much a maintenance programmer, not a designer or anything, so all warnings seem much more important to me than they seem to be to most others.
I guess that ultimately I'm looking to improve my understanding of the conversion between floating point values and integral values in order to come up with more bulletproof general-use code in the end. I've seen suggestions to use a union holding both an integral and a floating point member, for example, but I'm not really sure of the costs or benefits involved in that.
Monday, February 23, 2009 2:12 AM
• You have not provided a specific example where conversion from int to float is required, but my guess is that a union would not work. If a union would work, then perhaps you could use the VARIANT type; also see the _variant_t Class.

On the subject of C# and the .NET framework: I think that they just don't support the float type; so if you want it to be like C# and the .NET framework, then take David's advice and always "use double rather than float". Other than that, I am not aware of a relevant difference between C++ and C#.

One thing that leaves me unimpressed about C# is the excessive use of casts; for ADO.Net and HTML DOM programming, we must use casts very frequently, and I wish the language designers had had more imagination than to require casts so much. The frustrating thing is that it is so difficult to determine what can be cast to what; using the IDE, an object is often just an object (such as in the HTML DOM) and I don't know how to easily determine what it can be cast to, or even what type of object it is. That however is off-topic for this thread and I apologize for cluttering it with off-topic stuff, except that the point is that all languages have advantages and disadvantages.
Sam Hobbs; see my SimpleSamples.Info
Monday, February 23, 2009 3:18 AM
• For a specific example of a "required" conversion from float to int or vice versa: say that you write a large program which uses float values in its classes. The Windows API uses integers. You obviously can't change the Windows API, and changing your own classes would involve large-scale changes. Even if you changed your own code, somewhere along the line a conversion would be required, assuming that you're using floats for a real reason...

Obviously, that sort of situation can be considered a design problem, but in the real world there are always design problems. Someone somewhere decides to use floats or doubles, and someone somewhere else decides to use ints or longs. In order to use both "components" together a cast has to be made somewhere along the line, whether or not you can see it.

One of the great things about C#/.NET in general is that everything is ultimately an Object. When you talk about casts in .NET you're really talking about something different from casts in C/C++. An object "cast" in .NET programming usually involves very little conversion from one fundamental type to another. However, as you said above, that's all really OT for this forum.

Reading some material available on the 'net, I think that I'll need to read up on some machine language to really be able to answer my own questions regarding casts. I just don't want to do that, I guess...
Monday, February 23, 2009 3:49 AM
• As for "Someone somewhere decides to use floats or doubles, and someone somewhere else decides to use ints or longs.". Yes, welcome to the real world.

A bank might create something new, such as credit cards. Other banks do too. Then the government tells the banks that they must process accounts for other banks' cards, which requires major modification of the software.

An insurance company might acquire another insurance company, which requires either maintenance of multiple applications or merging them together. Merging can cause major discrepancies.

An aircraft manufacturer might have multiple formats for part numbers, even within their company; a lack of standards can result in costly maintenance.

As a maintenance programmer, your management expects you to deal with challenges such as that without the funds to make maintenance more economical in the future. I hope that conversion to/from float is the worst conversion problem you ever have.

I think that when you learn more, you will discover that casts in C++ and C# are very much alike.

The C# code equivalent to the C++ code above could be:
```
int i1 = Int32.MaxValue, i2;
float f = i1;
i2 = (int)f;
Console.WriteLine("C#: {0} {1} {2}", i1, f, i2);
```

And the results are the same.

Sam Hobbs; see my SimpleSamples.Info
Monday, February 23, 2009 6:08 AM
• As already discussed above, the loss of data introduced by conversion (casting, or whatever) from int to float is a loss in the number of significant digits, which results in wrong computation through round-off error.
Anyway, as the OP says, conversions or casts between different types are inevitable.
It seems to me that the compiler message wants to say that users should know what they are doing with these conversions.

Another extreme, which avoids round-off error altogether, is symbolic computation, which has been introduced by some advanced math tools like Mathematica and the like.
There are some books - not many and not popular though - covering symbolic computation in C++.

I'd like to add another case where the round-off error of digital computing causes a different result from the real-world analog computation.

The MSDN - and other documents covering digital computation as well - suggests that programmers should set their tolerance for floating point comparisons in their projects using the concept of epsilon (a very, very tiny number used to distinguish one floating point value from another).

The code fragment below is not directly related to the conversion issue, but it shows the error introduced by round-off. Try to guess the outputs produced by the fprintf() calls...
```
double HypotenuseByPythagorean(double a, double b)
{
    return sqrt(a*a + b*b);
}

double HypotenuseByTrigonometric(double a, double b)
{
    double θ = atan2(b, a);
    return a / cos(θ);
}

#define ε   1.0e-12

bool IsSameRealNumber(double a, double b)
{
    return (a > b)? (ε > a-b): (ε > b-a);
}

void ShowRoundOffError()
{
    double a, b, c1, c2;

    a = 3.0; b = 4.0;
    c1 = HypotenuseByPythagorean(a, b);
    c2 = HypotenuseByTrigonometric(a, b);
    fprintf(stderr, "%s\n", (c1 == c2)? "Identical": "Different");
    fprintf(stderr, "%s in given epsilon\n", IsSameRealNumber(c1, c2)? "Identical": "Different");

    a = 3.134095603550467; b = 4.897042945664023;
    c1 = HypotenuseByPythagorean(a, b);
    c2 = HypotenuseByTrigonometric(a, b);
    fprintf(stderr, "%s\n", (c1 == c2)? "Identical": "Different");
    fprintf(stderr, "%s in given epsilon\n", IsSameRealNumber(c1, c2)? "Identical": "Different");
}
```

Monday, February 23, 2009 7:05 AM
• ... there's no real way to represent fractional values properly in a computer. The value of 1/3 is the best example, since the real value of 1/3 is .333333... infinitely. As is obvious to us all though, computers cannot represent infinite values. So, the value of 1/3, or PI, or any other fractional value with an infinitely repeating sequence is approximated by rounding at the furthest value possible.

To further complicate things, machines represent numbers in binary rather than in the decimal that we think of them in.  The set of numbers that are finitely representable in decimal is not the same as the set finitely representable in binary.  For example, the number 1/10 can be easily represented in decimal as .1, but as a binary number it would be .00011001100110011...

And it is not just infinite fractions that we have a problem with.  With integer values, we get maybe 31 bits to represent significant figures.  In single-precision float, some of those bits are used to represent where the binary point would be (the exponent), so we only get maybe 23 bits to represent significant figures.  Here is an example:

```
#include <cstdio>

int main()
{
    for (int i = 100000000; i < 100000010; i++)
    {
        printf("%d\n", i);
        float const f = static_cast<float>(i);
        printf("%.0f\n\n", f);
    }
}
```

Tuesday, August 10, 2010 5:52 PM