long double

  • Question

  • I need the accuracy of a long double for a particular set of calculations.

     

    Is there an easy way to do this in the Visual C++ environment?

     

    (I have tried linking in a gcc-compiled .dll that did the calculation, but it apparently uses something from the Windows runtime that made the results accurate only to 8 bytes.)

     

    Do I need to write all the math functions bytewise myself?

     

    charles

    Wednesday, July 18, 2007 11:55 PM


All replies

  • As of VC8, long double is a distinct type from double, but the two behave identically. See http://blogs.msdn.com/ericflee/archive/2004/06/10/152852.aspx for more details.
    Thursday, July 19, 2007 12:29 AM
  • Would you care to explain what accuracy you expect from long double? Of course, you can simply use a long double and get the accuracy of long double. There are, however, no specific guarantees as to the size of a long double. And in fact, at least two other sizes (apart from IEEE 754 double precision) make sense on IA-32.

     

    -hg

    Thursday, July 19, 2007 8:25 AM
  • Thank you for the interesting link. 

     

    Based on this, others have also needed 10-byte precision, and there is only a hope that a future product will provide it?

    Thursday, July 19, 2007 2:52 PM
  • You are correct that I misstated the problem.

     

    I am trying to port a piece of code that needs a 10-byte long double, and VC++ caps long double at 8 bytes. gcc on Windows through MinGW has a 12-byte long double (an 80-bit value padded for alignment), and it would be ideal if I could link that in for the procedure in question.

     

    When I link in a DLL created with gcc, the result matches the 8-byte result instead of the 10-byte result. (The printout of the resulting bytes happens entirely within the linked library, so nothing is passed across the boundary.) A likely cause and a possible workaround are sketched after this post.

     

    The only other alternative would seem to be writing routines for +, -, *, /, and sqrt bytewise myself. I was hoping for an easy way.

    Thursday, July 19, 2007 2:59 PM
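
A likely explanation for the 8-byte results described above (an inference, not something confirmed in the thread) is that the Microsoft C runtime initializes the x87 precision control to 53 bits at startup, so even x87 code inside a MinGW-built DLL rounds every operation to double precision. On a 32-bit build, restoring 64-bit precision around the call may help; a minimal sketch, assuming a hypothetical exported entry point named do_long_double_calc:

```cpp
#include <float.h>   // _controlfp_s, _PC_64, _MCW_PC

// Hypothetical export from the MinGW-built DLL; the name is illustrative only.
extern "C" void do_long_double_calc(void);

void call_with_extended_precision()
{
    unsigned int saved = 0;
    _controlfp_s(&saved, 0, 0);              // read the current FPU control word

    unsigned int unused = 0;
    _controlfp_s(&unused, _PC_64, _MCW_PC);  // 64-bit mantissa; x86 only
                                             // (_PC_64 is not supported on x64)

    do_long_double_calc();                   // all 10-byte work stays inside the DLL

    _controlfp_s(&unused, saved & _MCW_PC, _MCW_PC);  // restore the CRT's default
}
```
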
  • VC++ doesn't support IEEE double extended precision format for long doubles (and quite honestly, I don't see a compelling reason for it). Other compilers do.

     

    If you really want to do it with VC, I'm afraid that'll be a bit of work.

     

    -hg

    Thursday, July 19, 2007 3:28 PM
  • Holger, I beg to differ with you. Unlike what is stated in the blog reference, this has been a long-standing problem for me with VC++. You may not write computation applications of true complexity, but some of us do. What annoys me most is that the old MS C++ compilers supported 10-byte precision but the newer ones do not. I don't believe that decreasing the precision would be considered an advancement in compiler technology by most objective individuals. Can you believe that in this day and age a compiler would not be able to utilize all the register functionality and mathematical capabilities of a processor? For some calculations, I wish more than 10-byte precision were natively available on the processor.

     

    The extra precision is an absolute necessity for certain calculations. Try solving deeply nested, layered, coupled, highly non-linear systems of equations with only 64-bit arithmetic. You'll find that at some levels, 64-bit precision simply does not do the trick. You have to resort to using something like Intel C++, or write your own C++ class in inline assembly to get the precision you need (a rough sketch of such a routine appears after this post). Unfortunately, the inline assembly approach is not acceptable because the compiler cannot optimize the calls the way it could if the extra precision were natively implemented, and this makes the inline assembly far too slow for use in anything that must be frequently evaluated.

     

    The problem with going with another compiler is that most of these applications have other requirements that rely upon operating system services as well (GUI, database, networking, etc.). Most of them are not as well implemented in other numerically strong compilers as in VC++. Therefore, you frequently end up compiling some of the application with Intel and the remainder with MS. This is not an easy task either. It is theoretically possible for small solutions, but my solution contains ~50 projects and >8000 files. While Intel says they are VC++ compatible, you find many "incompatibilities" in a mixed environment of this solution size. Therefore, you have to be very careful about which files are compiled with which compilers. It is not always easy to isolate the code for Intel vs. Visual C++ in this many files. Consequently, you frequently give up on compiling with the Intel compiler and compile more code with Visual C++, resulting in more numerical instability due to the precision problems. You then have to work around the loss of precision with extra code, making your application slower than if you had not lost the precision, or spend significant time rearranging calculations to compensate for the precision loss, sometimes making the code much more difficult to read and comprehend.

     

    Why do you think many other architectures support 128 bit floating point numbers?  They are catering to scientific and engineering applications, not business or web applications.

    Wednesday, September 12, 2007 7:45 PM
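
For reference, the inline-assembly approach mentioned above could look roughly like the following sketch (an illustration, not code from the thread), assuming 32-bit MSVC where the __asm keyword is available and the x87 precision control has been raised to 64 bits as in the earlier control-word sketch:

```cpp
// ext80_add: add two 80-bit x87 extended values held in 10-byte buffers.
// Sketch only; a real class would wrap such routines for -, *, /, sqrt, etc.
void ext80_add(const void* a, const void* b, void* out)
{
    __asm {
        mov   eax, a
        fld   tbyte ptr [eax]     ; push a at full 80-bit precision
        mov   eax, b
        fld   tbyte ptr [eax]     ; push b
        faddp st(1), st           ; a + b (rounded per the x87 precision control)
        mov   eax, out
        fstp  tbyte ptr [eax]     ; store the 80-bit result
    }
}
```
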
  • I understand that additional precision may be convenient or required for certain applications. However, I don't buy the argument that IEEE 754 double extended precision arithmetic provides significant value. Yes, it can improve things for certain scenarios. But then, I can easily come up with a scenario where 80-bit FP isn't good enough. I can think of a scenario where I'd be happy to trade mantissa bits for exponent bits, and vice versa. There are applications for which base conversion dominates the actual computations, so why not use a base-10 representation?

     

    The story for IA-32 moving forward is pretty simple: x87 has little to no future, whereas SIMD FP sees major development. As far as performance is concerned, SSE is the way to go.

     

    So yes, I understand there are some applications for which IEEE 754 double precision is not good enough and double extended precision will do the trick. But IMHO the compiler should not map a type onto a representation it can't support efficiently unless there is a very good reason, and frankly, I don't see one in this case.

     

    And BTW, I know that Intel goes to great lengths to provide both source- and object-level compatibility, and you really shouldn't see any major difficulties interfacing VC++ code to an ICL-based computation module.

     

    -hg

    Sunday, September 16, 2007 9:49 AM
  • The problem with Intel is usually not object-level compatibility, it is source-level compatibility. The compilers are definitely not 100% compatible at the source code level. When your application is 2E6 lines of code, these incompatibilities can frequently require significant effort in source code arrangement, effort that could be better used elsewhere. It is also not possible, in an application that relies on OS services (GUI, database, network, COM, multithreading, etc.), to use 100% ISO C++ if you want to deliver an application that provides a good user experience in this area too. We frequently find it difficult to make our header files compilable with both compilers. You simply cannot #ifdef things out of classes, as this will change the class size or vtable layout (see the small illustration after this post). This all becomes something much more than a trivial exercise in integration.

     

    While I agree that at times more precision in the exponent is needed, I would bet that the need for it in the mantissa is probably >100x more frequent. Mathematically, it is usually easier to handle the exponent issue than the mantissa issue.

     

    I realize that SIMD FP adds performance, but I don't see it adding precision.  I am disappointed that the 64 bit processors have not improved in this area either.  The extra SIMD performance is of little use if you fail to converge a numerical technique or if you require additional function evaluations due to loss of precision.  In the most vulnerable areas, I am willing to use x87 to obtain the precision requirement not available with SSEx.

    Monday, September 17, 2007 2:38 PM
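
To illustrate the point above about #ifdef and class layout (an illustration, not code from the thread; USE_EXTENDED is a hypothetical macro that only one build defines):

```cpp
struct Solver {
#ifdef USE_EXTENDED
    long double residual;     // extra (or differently sized) member in one build
#endif
    double       tolerance;
    virtual void step() = 0;  // any #ifdef'd virtuals would likewise shift the vtable
};
// sizeof(Solver) now differs between translation units that disagree on
// USE_EXTENDED, so passing a Solver* across that boundary breaks the ABI.
```
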
  • Simply writing off customers' concerns with an "I don't see a compelling reason..." is hardly useful. What you are really saying is "there is not enough money in it for us", which I presume is compared to implementing features to make programs prettier, or making coding easier for people who don't wish to track their memory use.

    As for real-world examples where we need more precision: try working with GPS data. The added precision of long double would take us from ±1 m down to ±1 mm.
    Monday, March 23, 2009 4:22 PM
  • So following the SYSV ABI (p. 3-2, a.k.a. "28") isn't "compelling"?

    This is making VC++ fail the Cython NumPy tests, and I imagine it causes no end of frustration when people try to actually *build* extensions for NumPy using VC++. Wouldn't bug me so much if I could get distutils to use MinGW, I admit...

    Wednesday, December 9, 2009 4:53 PM
  • VS2010 "sizeof(long double)" is still 8 !

    I know of an EDA company whose engineering department went overboard to respond to requests from their AEs. Their product evolved exactly as requested. When the company failed, the post-mortem analysis revealed that the AEs had gotten what the customers wanted wrong.

    There was a time when pointers were 16 bits and 64K was the maximum linear address. If Microsoft had stubbornly embraced those limits, how many of today's programs would be compiled with its tools?

    64 bit floating point representation today is like 16 bit pointers of the past. They will not suffice for today's and future numerical applications.

    The ratio of "integral" developers to "numerical" developers has been, is, and always will be small. Running a popularity contest between the needs of all developers taken as a single group is no strategy for supremacy.

     

    Tuesday, August 3, 2010 9:36 PM
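
A trivial check of the observation above (illustrative only):

```cpp
#include <cstdio>

int main()
{
    // Prints 8 under Visual C++ (long double behaves as double); MinGW gcc on
    // Win32 typically prints 12 (an 80-bit value padded for alignment), and
    // some other ABIs use 16.
    std::printf("sizeof(long double) = %u\n",
                static_cast<unsigned>(sizeof(long double)));
    return 0;
}
```
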
  • There is one solution that remains unmentioned. Although the compiler does not support extended-precision variables, it does support extended-precision registers. So when the intermediate calculations need high accuracy but the final result can be a double, things work out fine: http://baumdevblog.blogspot.com/2010/11/high-precision-floating-point.html

    Regards
    Monday, November 15, 2010 5:15 PM
  • <Resurrecting an old thread.>

    Another good reason for an 80-bit long double is that it can store every possible value of a 64-bit integer, both signed and unsigned, exactly (a small demonstration follows this post).

    Thursday, January 19, 2012 10:19 AM
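
A small demonstration of that point (an illustration, not code from the thread); the long double round trip only succeeds where long double actually carries a 64-bit mantissa, such as gcc's x87 format, and not under VC++:

```cpp
#include <cstdint>
#include <cstdio>

int main()
{
    // 2^63 + 1 needs 64 significant bits: exact in x87 extended precision,
    // but rounded away when stored in a 53-bit double mantissa.
    const std::uint64_t big = (1ull << 63) + 1;

    double      d  = static_cast<double>(big);
    long double ld = static_cast<long double>(big);

    std::printf("double      round-trips: %d\n",
                static_cast<std::uint64_t>(d)  == big ? 1 : 0);
    std::printf("long double round-trips: %d\n",
                static_cast<std::uint64_t>(ld) == big ? 1 : 0);
    return 0;
}
```
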
  • Maybe I'm old-fashioned, but I usually get around this bizarre restriction with assembly plus a struct with at least 10 bytes in it (although I have been looking for a more convenient way... hence finding this place). Heck, if it's important that nothing is stupidly optimised away or reordered, I code the whole FP function in asm...
    Friday, April 13, 2012 10:11 AM
  • Please vote (click a button) on Microsoft Connect to indicate that adding an 80-bit long double to Visual C++ is important:

    connect.microsoft.com, feedback item #691066 ("80-bit floating point extended precision")

    Friday, November 16, 2012 5:56 PM
  • >>  (and quite honestly, I don't see a compelling reason for it)

    and that, in a nutshell, is what is wrong with Microsoft products.

    A hundred million users, and *someone* out there will have a compelling reason for it.
    Just because you can't think of one... 

    Guess my plans to port my application from Linux to Microsoft will have to wait, as I am not sufficiently motivated to write my own extended precision floating point library. Cygwin perhaps. 
    Thursday, June 13, 2019 2:35 AM