Copying and hashing files

Question
-
I wonder which approach is more battery-friendly and ultimately more performant:
1. Copy the file and hash it:

    FileIO.readBufferAsync(sourceFile).then(function (buffer) {
        var hashData = hashAlgo.hashData(buffer);
        var hash = CryptographicBuffer.encodeToHexString(hashData);
        return sourceFile.copyAsync(folder, "" + hash + sourceFile.fileType + "-tagging",
            NameCollisionOption.replaceExisting).then(function (blobFile) {
            // ...
        });
    });
I consider this variant less performant than the following, because the file is essentially read twice. Or does copyAsync perform optimizations that a readBufferAsync + writeBufferAsync combination does not?
Also, FileIO.readBufferAsync fails to load very large files (tried with a 3.3 GB ISO image). Is there a reason for that?
2. Read the file, build the hash from the buffer, and write the buffer to disk.
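A rough sketch of this second variant (same sourceFile, hashAlgo, folder, and CryptographicBuffer as above; note that createFileAsync takes a CreationCollisionOption rather than a NameCollisionOption):

    // Variant 2: read once, hash the buffer, then write the same buffer out.
    FileIO.readBufferAsync(sourceFile).then(function (buffer) {
        var hashData = hashAlgo.hashData(buffer);
        var hash = CryptographicBuffer.encodeToHexString(hashData);
        return folder.createFileAsync(hash + sourceFile.fileType + "-tagging",
            CreationCollisionOption.replaceExisting).then(function (blobFile) {
            // Write the bytes we already read instead of copying the source again.
            return FileIO.writeBufferAsync(blobFile, buffer);
        });
    });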
Monday, June 4, 2012 12:20 PM
Answers
-
Hi Phil,
The readBufferAsync method creates a copy of the entire file in memory, which is obviously a bad idea for a file this large. Instead, see the documentation on streams, http://msdn.microsoft.com/en-us/library/windows/apps/windows.storage.streams.iinputstream.readasync.aspx, which lets the caller read the file incrementally.
You have to work with the data by reading it incrementally into your own buffer.
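For example, a rough sketch of incremental hashing (this assumes SHA-256 via HashAlgorithmProvider and an arbitrary 4 MB chunk size; adapt to your needs):

    var Streams = Windows.Storage.Streams;
    var Crypto = Windows.Security.Cryptography;

    // A running hash that we feed one chunk at a time.
    var hasher = Crypto.Core.HashAlgorithmProvider
        .openAlgorithm(Crypto.Core.HashAlgorithmNames.sha256)
        .createHash();
    var CHUNK_SIZE = 4 * 1024 * 1024;

    function readChunks(inputStream, buffer) {
        return inputStream.readAsync(buffer, CHUNK_SIZE, Streams.InputStreamOptions.none)
            .then(function (readBuffer) {
                if (readBuffer.length > 0) {
                    hasher.append(readBuffer);             // hash this chunk
                    return readChunks(inputStream, buffer); // then read the next one
                }
                return hasher.getValueAndReset();           // end of file: finalize digest
            });
    }

    file.openAsync(Windows.Storage.FileAccessMode.read)
        .then(function (ras) {
            var input = ras.getInputStreamAt(0); // IInputStream over the file
            return readChunks(input, new Streams.Buffer(CHUNK_SIZE));
        })
        .then(function (hashData) {
            var hex = Crypto.CryptographicBuffer.encodeToHexString(hashData);
            // hex is the digest of the whole file, yet only CHUNK_SIZE bytes
            // were ever held in memory at once.
        });

Memory use stays constant this way, regardless of file size.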
-Jeff
Jeff Sanders (MSFT)
- Proposed as answer by Jeff Sanders (Microsoft employee, Moderator), Wednesday, June 6, 2012 7:23 PM
- Marked as answer by Min Zhu (Member), Tuesday, July 3, 2012 4:44 AM
Wednesday, June 6, 2012 7:23 PM (Moderator)
All replies
-
I wanted to hash the 3.3 GB Windows 8 ISO image using FileIO.readBufferAsync(file).then(function (buffer) { ... }).
After some time the promise fails with "Not enough storage space". I assume this means "not enough memory", but I checked in the Task Manager and the app does not consume any more memory after the operation has started.
Since I need to be able to hash very large files, what's the recommended way to do that?
- Merged by Jeff Sanders (Microsoft employee, Moderator), Monday, June 4, 2012 4:40 PM (Duplicate)
Friday, June 1, 2012 2:10 PM
-
Phil,
That seems to be a hard limit. I don't think there is a workaround, but I have asked the product team and will update you on what I find out!
-Jeff
Jeff Sanders (MSFT)
- Proposed as answer by Jeff Sanders (Microsoft employee, Moderator), Tuesday, June 5, 2012 7:35 PM
Tuesday, June 5, 2012 7:35 PM (Moderator)