Precision problem with an Excel call of a .NET COM assembly

  • Question

  • Hi,
    I have implemented an assembly containing some mathematical functions.
    These functions can be called in two ways:
     - from a UI (WinForms)
     - from Excel (the assembly is exposed to COM)

    But the Excel call exhibits a strange problem.

    The precision of some mathematical functions in the .NET Framework changes!
    The function Math.Log (namespace System, mscorlib.dll) does not give me the same result when
    - it is called from the UI
    - it is called from Excel (via the COM link)

    Some of the digits are truncated.
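    As an illustration of that kind of truncation (a hypothetical sketch, not the poster's code): a 64-bit double holds only about 15–17 significant decimal digits, so any digits computed at a higher internal precision are lost when the result is rounded back to a double:

```python
import math
from decimal import Decimal, getcontext

# ln(10) computed to 30 significant digits with the decimal module...
getcontext().prec = 30
high_precision = Decimal(10).ln()

# ...and the same value as a 64-bit double (analogous to what a
# double-returning function such as Math.Log can store):
as_double = Decimal(repr(math.log(10)))

print(high_precision)   # 2.30258509299404568401799145468
print(as_double)        # 2.302585092994046
```

    The two values agree only up to double precision; everything beyond roughly the 16th significant digit is gone.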

    The IL instruction that calls Math.Log does not have the same memory reference in the two cases.

    Could someone explain to me what is going on?

    I have a sample project for anyone who wants to see it.

    Apologies for my approximate English.

    Note:
    I tried to execute my assembly in another AppDomain, but that is not possible (see below).

    Thursday, December 18, 2008 4:52 PM


  • I need to guess at this a bit; I don't know the exact details.  .NET needs to provide consistent floating-point results in both 32-bit and 64-bit mode.  In 64-bit mode, floating-point operations are executed using XMM (SSE2) instructions at 64-bit precision.  In 32-bit mode, they are executed using x87 FPU instructions.  The latter have an internal 80-bit precision by default, so .NET must reprogram the FPU to use 64-bit internal precision.
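    The effect described here is a form of double rounding. A minimal sketch (hypothetical, not the actual Excel/.NET code path) showing that rounding a value first to the x87's 64-bit significand and then to a double's 53-bit significand can give a different answer than rounding directly to a double:

```python
from fractions import Fraction

def round_nearest_even(x: Fraction, mantissa_bits: int) -> Fraction:
    """Round x (assumed in [1, 2)) to the given significand width,
    using round-to-nearest, ties-to-even -- the IEEE 754 default."""
    scale = 2 ** (mantissa_bits - 1)
    scaled = x * scale
    n = scaled.numerator // scaled.denominator   # floor of the scaled value
    frac = scaled - n
    if frac > Fraction(1, 2) or (frac == Fraction(1, 2) and n % 2 == 1):
        n += 1
    return Fraction(n, scale)

# A value just above the midpoint between two adjacent doubles.
x = 1 + Fraction(1, 2**53) + Fraction(1, 2**66)

# One rounding, straight to double precision (53-bit significand):
direct = round_nearest_even(x, 53)

# Two roundings, via x87 extended precision (64-bit significand):
via_fpu = round_nearest_even(round_nearest_even(x, 64), 53)

print(direct == via_fpu)   # → False: the two paths disagree in the last bit
```

    Depending on how the FPU precision control is set when the code runs, one path or the other is taken, which matches the symptom of results differing only in the trailing digits.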

    A program like Excel, a 32-bit app, almost certainly uses the FPU in its default 80-bit precision mode.  When it is asked to execute code from a .NET assembly, it can get slightly different results, since .NET has reprogrammed the FPU.

    I wonder how MSFT is going to solve that problem when some future version of Excel is 64-bit.  Sounds like a really tough one...

    Hans Passant.
    • Marked as answer by Zhi-Xin Ye Wednesday, December 24, 2008 9:09 AM
    Friday, December 19, 2008 2:43 PM