Precision with .NET

  • Question

  • At this point I don't have a lot of evidence to support my hunch, but I was hoping to get some input from members of this group.

    I converted a bunch of BLAS and LAPACK routines to C# 'unsafe' code (so I can work with pointers). As one particular example, I use DGESDD from LAPACK, built with the GNU Fortran compiler (gfortran), to compute the SVD of a Hilbert matrix. This gives me U, V, and the singular values S, so given the properties of the SVD, a good check is to compute U * diag(S) * V', which should give me back the original matrix. This is the background of what I am trying to do.
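
    For concreteness, here is a minimal sketch of the setup (the Dgesdd call at the end is a hypothetical placeholder name for my ported routine, not a real API):

        // Build an n-by-n Hilbert matrix: H[i,j] = 1 / (i + j + 1) with
        // 0-based indices. Hilbert matrices are badly conditioned, which
        // makes them a good stress test for SVD accuracy.
        int n = 10;
        double[,] h = new double[n, n];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                h[i, j] = 1.0 / (i + j + 1);

        // Hypothetical call into the C# port of DGESDD ('Dgesdd' is a
        // placeholder for the ported routine):
        // Dgesdd(h, out double[,] u, out double[] s, out double[,] vt);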

    My question is this: when I do the above with Fortran code directly, I find that the "error" (the maximum deviation from what I expect, in this case the maximum absolute value of the original matrix minus the SVD check) is somewhere around 1e-15. When I run my "unsafe" C# port (all using double), I get an error around 1e-11.
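
    In code, the check amounts to something like this (a simplified sketch assuming square, row-major u, s, and vt from the decomposition of h above):

        // Reconstruct H from its SVD factors and measure the maximum deviation.
        double maxErr = 0.0;
        for (int i = 0; i < n; i++)
        {
            for (int j = 0; j < n; j++)
            {
                double sum = 0.0;
                for (int k = 0; k < n; k++)
                    sum += u[i, k] * s[k] * vt[k, j];   // (U * diag(S) * V')[i,j]
                maxErr = Math.Max(maxErr, Math.Abs(h[i, j] - sum));
            }
        }
        // Fortran build: maxErr comes out around 1e-15; the C# port gives ~1e-11.
        Console.WriteLine(maxErr);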

    First off, 1e-11 looks suspiciously like the round-off I would see from a single-precision float. Even though I specify double, could .NET be using float? Second, are there any other hints or suggestions as to why the precision seems to be lower from the .NET code compared to the Fortran code compiled with the GNU Fortran compiler?
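
    One sanity check I can think of (just a sketch) is to probe the effective machine epsilon at runtime; IEEE double should give about 2.22e-16 (2^-52), whereas single-precision arithmetic would give about 1.19e-7 (2^-23):

        // Halve eps until adding it to 1.0 no longer changes the result.
        // The final value reflects the precision actually used in the arithmetic.
        double eps = 1.0;
        while (1.0 + eps / 2.0 > 1.0)
            eps /= 2.0;
        Console.WriteLine(eps); // ~2.22e-16 for double, ~1.19e-7 for float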

    Sorry, this is not much to go on. I am mainly looking for suggestions.

    Thank you.

    Kevin

    Thursday, May 6, 2010 3:12 PM

Answers