Why do we always use int? Why not short?

  • Question

  • Just curious... Why do we never use short when working with numbers?

    I mean in simple contexts, for instance in a loop or other i++ situations.

    How often do you write a loop that goes more than 32K iterations?
    "32K ought to be enough for anybody"- at least in most situations.

    I just have a feeling that short would take up fewer resources / increase performance - but probably so little that it's not worth the strain of typing two more letters? (short/int)

    Is the convenience of typing only 3 letters the only reason?

    Sunday, September 24, 2006 3:26 AM

All replies

  • Everyone has their own way of doing things - there is no "right" way, but generally people just use Integer, as some may not know what "short" is (which is unfortunate) - it's just been the norm from the early days to the present.

    You could use short; it's probably more efficient than Integer due to the range of values it holds. It's a preference really - I'm sure most people would explain it (most likely better) and chip in!

    Sunday, September 24, 2006 4:16 AM
  • I think it has become a habit and nobody cares to save 2 bytes. I would love to have more comments on this from other people.

    Best Regards,

    Sunday, September 24, 2006 4:22 AM
  • Actually, using short can lead to slower code in some situations. The reasons are as follows (the CPU stuff applies only to x86; I don't know exactly what happens on x64):

    a) The "native" type of the CPU is an int/unsigned int (a 32 bit value). To make the CPU use 16 bits you need to add a prefix to a 32 bit instruction, and this makes for longer code. Also, since the CPU integer unit is 32 bit, a 16 bit operation takes a clock cycle just like a 32 bit one. It's not as if using only 16 bits gets the operation done in half a clock cycle.

    The CPU stack is 32 bit. This makes passing 16 bit parameters to a function just the same as passing 32 bit parameters: they are passed as 32 bit values on the stack, so there is no space saving here.

    Data alignment requirements may also throw the space savings out of the window. If you put 2 short fields in a struct you can get a 4 byte struct. But try putting a short and an int in a struct. You won't necessarily get a 6 byte struct, because the int member needs 4 byte alignment. If the int field comes after the short field then you will have 2 bytes for the short, another 2 unused bytes and 4 bytes for the int. (Here the CLR may help and change the order of the fields so the int comes first and the short next, to avoid the 2 unused bytes, but it's not guaranteed.)
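
    To see the effect you can check the marshaled struct sizes yourself. A quick sketch (the struct names are made up just for illustration; Marshal.SizeOf reports the unmanaged sequential layout, which is where the padding shows up):

    using System;
    using System.Runtime.InteropServices;

    [StructLayout(LayoutKind.Sequential)]
    struct TwoShorts { public short A; public short B; }    // 2 + 2 = 4 bytes

    [StructLayout(LayoutKind.Sequential)]
    struct ShortThenInt { public short A; public int B; }   // 2 + 2 padding + 4 = 8 bytes

    class AlignmentDemo
    {
        static void Main()
        {
            Console.WriteLine(Marshal.SizeOf(typeof(TwoShorts)));    // prints 4
            Console.WriteLine(Marshal.SizeOf(typeof(ShortThenInt))); // prints 8, not 6
        }
    }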

    b) The "native" type of the C# language (and of C/C++ for that matter, where short/int come from) is int. Why do I say this? For example, try writing the short constant 2. How do you do it? You can't. There is no suffix that tells the compiler it is a short constant. You have the u suffix that tells the compiler a constant is unsigned and the l suffix that tells the compiler it is long, but there is no such thing for short. The result is that an expression like s = a + 2, where s and a are both short variables, will be evaluated as an int expression (the a + 2 part) and then cast back to short in the assignment.

    Additionally try doing this in C#:

    a = b + c;

    where a, b and c are short variables. It won't compile, because b + c is evaluated as an int. You need to write:

    a = (short)(b + c);

    Obviously casting to (and from) short needs at least one additional instruction, so in general mixing types in an expression will be slower.
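
    A minimal compilable version of that, just for illustration (the variable names are arbitrary):

    using System;

    class ShortMathDemo
    {
        static void Main()
        {
            short a, b = 10, c = 20;

            // a = b + c;          // error CS0266: cannot implicitly convert 'int' to 'short'
            a = (short)(b + c);    // b + c is evaluated as int, so it must be cast back to short

            // There is no short literal suffix either: 2u is uint, 2L is long, but "2s" does not exist.
            Console.WriteLine(a);  // prints 30
        }
    }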

    That's what I can say about performance.

    There may also be an issue with code maintainability with shorts: their min/max values are quite low, so they are easier to overflow than an int. Not to mention that at times you may find that you really need more than 32K and have to change half the for loops in your code.
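
    For example, in the default unchecked context a short just wraps around silently, which can hide the overflow (a small sketch, values picked only for illustration):

    using System;

    class OverflowDemo
    {
        static void Main()
        {
            short s = short.MaxValue;   // 32767
            s++;                        // wraps silently by default
            Console.WriteLine(s);       // prints -32768

            checked
            {
                short t = short.MaxValue;
                try { t++; }            // with overflow checking the same increment throws
                catch (OverflowException) { Console.WriteLine("overflow"); }
            }
        }
    }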

    And to summarize this way too long entry:

    Use int. If you have some code where performance is top priority you can try to replace int with short and MEASURE. Always measure the speed, because there is no guarantee that using short will be faster - on the contrary.
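
    If you do want to try it, a rough Stopwatch sketch like the one below is enough to compare the two on your own machine (the iteration count and the constant added are arbitrary; this is a sketch, not a rigorous benchmark):

    using System;
    using System.Diagnostics;

    class LoopBenchmark
    {
        static void Main()
        {
            const int Iterations = 200000000;   // adjust for your machine

            Stopwatch sw = Stopwatch.StartNew();
            int intSum = 0;
            for (int i = 0; i < Iterations; i++)
                intSum += 3;                    // plain int arithmetic
            Console.WriteLine("int:   " + sw.ElapsedMilliseconds + " ms");

            sw.Reset();
            sw.Start();
            short shortSum = 0;
            for (int i = 0; i < Iterations; i++)
                shortSum += 3;                  // widened to int, then narrowed back to short
            Console.WriteLine("short: " + sw.ElapsedMilliseconds + " ms");

            // Use the results so the JIT cannot discard the loops entirely.
            Console.WriteLine(intSum + shortSum);
        }
    }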

    The most common use of short is when you know you may have a large number of them (a short array will obviously take half the space of an int array).

    And a bit of history: in the old days of DOS and 16 bit CPUs, int was 16 bit. One result of this was that you could "see" this limit as a user of an application. Some editors were limited to 32767 lines, for example. You might say that 32767 lines is way too much for a text file... but don't bet on it.

    Sunday, September 24, 2006 9:07 AM
  • It's a performance thing. A CPU works more efficiently when the data width equals the native CPU register width. This applies indirectly to .NET code as well.

    In most cases using int in a loop is more efficient than using short. My simple tests showed a performance gain of ~10% when using int.

    Sunday, September 24, 2006 9:13 AM
  • Thanks! That makes sense. Now I don't need to worry about that anymore.
    Sunday, September 24, 2006 4:25 PM
  • What about double? Is that fast, or is there something faster that can handle decimal values?
    Sunday, September 24, 2006 4:30 PM
  • float should be faster than double because it uses less memory. Unlike in the short/int case, the x86 CPU's native floating point size is 10 bytes, which is more than both float (4 bytes) and double (8 bytes). So even if you use float or double, the CPU will still calculate using 10 bytes and will still need to convert from 10 bytes to 4 or 8 when it writes to a variable. So the only difference that remains seems to be the amount of memory used, but I must say I'm not a floating point expert by any means, so I could be wrong here.

    A simple test on my machine shows that float is a bit faster than double (not twice as fast), but you may get a different result on a different processor.

    Anyway, using float or double is in most cases a matter of precision. Some applications simply cannot tolerate the low precision of float, and they need to use double even if it is slower than float.

    Also, on .NET you have the Decimal type, but that's a completely different story. It's implemented in software, so it's probably going to be slower than either.
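
    A small sketch showing both the float/double precision gap and decimal's exactness (the loop count is arbitrary; the exact printed digits depend on your runtime):

    using System;

    class PrecisionDemo
    {
        static void Main()
        {
            float f = 0f;
            double d = 0.0;
            decimal m = 0m;

            // Add 0.1 ten thousand times; the mathematically exact answer is 1000.
            for (int i = 0; i < 10000; i++)
            {
                f += 0.1f;   // float: ~7 significant digits, the error becomes clearly visible
                d += 0.1;    // double: ~15-16 significant digits, the error stays tiny
                m += 0.1m;   // decimal: exact for decimal fractions, but done in software
            }

            Console.WriteLine(f);   // noticeably off from 1000
            Console.WriteLine(d);   // very close to 1000
            Console.WriteLine(m);   // exactly 1000.0
        }
    }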

     

    Sunday, September 24, 2006 8:40 PM