x64 Write()??

    Question

  • I'm converting an application to x64 Windows Server 2003. I'm currently using VS2008 C++. I'm wondering if there are any read/write functions/methods (non-.NET) that can read/write more than 3-4 GB at once.

    I've tried read()/write(): read() can read well over 3 GB, but write() cannot. I also tried ReadFile() and WriteFile(), which did not work correctly either. I'd specify 8 GB as the number of bytes to read, and it would only return 2.9 GB in the buffer, yet the out parameter of the call reported the full 8 GB (in bytes).

    I'm working on a big server that has 64GB of RAM, and would like to allocate a nice chunk of that to read a file into, sort, and rewrite.

    If nothing is available, I will have to write the data in smaller chunks.

    I really find it hard to believe that there is nothing that can handle this.

    Any advice is appreciated.
    -fb



    Monday, February 23, 2009 10:13 PM

Answers

  • The read()/write() functions take an unsigned int for the byte count, which is always 32-bit.
    Similarly, the ReadFile()/WriteFile() functions take a DWORD, which is also always 32-bit.

    fread()/fwrite() use size_t, which is 64-bit on 64-bit platforms.

    «_Superman_»
    • Marked as answer by Wesley Yao Monday, March 02, 2009 1:56 AM
    Tuesday, February 24, 2009 5:31 AM
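    Given those 32-bit count parameters, the fallback mentioned in the question (writing in smaller chunks) can be sketched as below. `WriteAll` is a hypothetical helper, not part of any API; it loops over a writer callback that accepts at most `maxChunk` bytes per call (mirroring, say, WriteFile's DWORD limit), so the total can exceed 4 GB:

    ```cpp
    #include <cstddef>
    #include <cstdint>
    #include <functional>

    // Hypothetical helper: push `size` bytes through a writer that can
    // only take `maxChunk` bytes per call (e.g. a DWORD-limited API).
    bool WriteAll(const char* data, uint64_t size,
                  const std::function<size_t(const char*, size_t)>& writeFn,
                  size_t maxChunk = 1u << 30)   // 1 GB per call, illustrative
    {
        uint64_t written = 0;
        while (written < size) {
            size_t chunk = static_cast<size_t>(
                (size - written < maxChunk) ? (size - written) : maxChunk);
            size_t n = writeFn(data + written, chunk);
            if (n == 0)
                return false;                   // write failure
            written += n;
        }
        return true;
    }
    ```

    In a real program the callback would wrap WriteFile() (or _write()) and report errors; the loop structure is the point here.
    
    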
  • fwrite() writes through an internal buffer that's very small.  You would have to use setvbuf() to increase its size.  But, the actual size of the buffer you use is immaterial, the disk hardware is at least 4 orders of magnitude slower than any buffer manipulation code.  You can speed up sequential access to a file by using the FILE_FLAG_NO_BUFFERING flag in the CreateFile() call.  Be sure to read the fine print in the MSDN article when you do that.
    Hans Passant.
    • Marked as answer by Wesley Yao Monday, March 02, 2009 1:55 AM
    Tuesday, February 24, 2009 2:08 PM
    Moderator

All replies

  • I was under the impression fread() works in x64. Am I mistaken?

    By the way, if you are looking for performance, a better use of your programming talent is overlapped I/O. Remember that even with a very large buffer, the underlying read/write implementation still has to break the data up into sector-sized chunks.
    Monday, February 23, 2009 10:27 PM
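    A minimal sketch of the overlapped I/O idea mentioned above (Windows-only, so it won't build elsewhere; the file name is hypothetical). One read is started asynchronously and collected later; a real pipeline would keep several requests in flight:

    ```cpp
    #include <windows.h>

    HANDLE h = CreateFileA("big.dat", GENERIC_READ, FILE_SHARE_READ, nullptr,
                           OPEN_EXISTING, FILE_FLAG_OVERLAPPED, nullptr);
    OVERLAPPED ov = {};
    ov.Offset = 0;          // low 32 bits of the 64-bit file offset
    ov.OffsetHigh = 0;      // high 32 bits
    char buf[1 << 16];
    DWORD got = 0;
    if (!ReadFile(h, buf, sizeof buf, nullptr, &ov) &&
        GetLastError() != ERROR_IO_PENDING) {
        // handle error
    }
    // ... do other work while the read is in flight ...
    GetOverlappedResult(h, &ov, &got, TRUE);   // TRUE = wait for completion
    CloseHandle(h);
    ```
    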
  • I don't understand how you can ask ReadFile() to read more than 4 GB when the number of bytes to read is passed as a DWORD, and the function also returns the number of bytes read in a DWORD.

    Forget what I said. I didn't notice the x64 !
    Monday, February 23, 2009 10:32 PM