Efficient way to read bytes from a file

  • Question

  • Hello:

    Need your guidance on how I can modify the code below to efficiently process the contents of a file. My requirement is to read the contents of the file in chunks/blocks and pass each chunk to a method that will process it. I am getting an out-of-memory exception when reading files over 250 MB.

    Thanks for any suggestions!

    ==

    //split file into 100 MB blocks and process
    using (Stream source = File.OpenRead(theFile.FullName))
    {                  
        int blockid = 0;                         
    
        var file = File.ReadAllBytes(theFile.FullName);
        const int blockSizeInBytes = 100 * 1024 * 1024; //100 MB
        var prevLastByte = 0;                
        var bytesRemain = file.Length;
    
        //init bytesToSend
        var bytesToSend = new byte[blockSizeInBytes];
        var bytesToCopy = 100 * 1024 * 1024;
        do
        {
            bytesToCopy = Math.Min(bytesRemain, blockSizeInBytes);
            bytesToSend = new byte[bytesToCopy]; //getting out of memory exception here
            Array.Copy(file, prevLastByte, bytesToSend, 0, bytesToCopy);
            prevLastByte += bytesToCopy;
            bytesRemain -= bytesToCopy;
    
            //construct base64 blockid string
            var blockidBytes = System.Text.ASCIIEncoding.ASCII.GetBytes("block-" + blockid);
            string encodedBlockId = Convert.ToBase64String(blockidBytes);
            blockIdList.Add(encodedBlockId);
    
            //put block
            AzureBlobUploadHelper.UploadBlockAsync(encodedBlockId, AzureContainerName, relFilePath, fileName,
            bytesToSend, StorageAccountName, StorageAccountKey, CancellationToken.None).GetAwaiter().GetResult();
    
            blockid++;
        } while (bytesRemain > 0);
    
    
        //commit blocks
        AzureBlobUploadHelper.CommitBlocks(blockIdList, AzureContainerName, relFilePath, fileName,
            StorageAccountName, StorageAccountKey, CancellationToken.None).GetAwaiter().GetResult();
    }     

    ==

    Tuesday, January 15, 2019 7:56 PM

Answers

  • Yes, then you'll need to chunk it. As already mentioned: allocate a reasonably sized array, open the stream, read from the stream into your buffer, call the upload, then loop back around and read from the stream again, upload, and so on. Provided you keep track of how much of the array is actually used, you only need a single allocation.

    Because you're chunking to Azure you'll want a reasonably sized array so that you can minimize calls to Azure. You don't want Azure to throttle you, so make sure you are clear on the throttling rules for your Azure subscription. It's a tradeoff between speed and memory usage. If this is a background process, use as much memory as you can; if it's part of a larger app (web or desktop) you may not want to go as high. 100 MB isn't that much memory when you're talking about x64 processes, but you could start with 25 MB. Personally we make buffer sizes configurable so we can adjust them based upon our needs (throughput vs. memory).
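
    Here's a rough sketch of that single-buffer pattern, reusing the variables and helper signature from your own posts. The only new piece is the right-sized copy for a short read (normally just the final chunk) so stale bytes from the shared buffer aren't uploaded. Treat it as a starting point, not tested code:

    //sketch: one reusable buffer for the whole upload (the chunk size is a tunable tradeoff)
    const int blockSizeInBytes = 25 * 1024 * 1024; //25 MB to start; adjust as needed
    var buffer = new byte[blockSizeInBytes];       //allocated once, reused on every pass
    var blockIdList = new System.Collections.Generic.List<string>();
    int blockid = 0;

    using (Stream source = File.OpenRead(theFile.FullName))
    {
        int bytesRead;
        while ((bytesRead = source.Read(buffer, 0, buffer.Length)) > 0)
        {
            //construct base64 blockid string (same scheme as your code)
            var blockidBytes = System.Text.ASCIIEncoding.ASCII.GetBytes("block-" + blockid);
            string encodedBlockId = Convert.ToBase64String(blockidBytes);
            blockIdList.Add(encodedBlockId);

            //a short read (normally just the last chunk) gets a right-sized copy
            //so no stale bytes from the shared buffer are uploaded
            var chunk = buffer;
            if (bytesRead < buffer.Length)
            {
                chunk = new byte[bytesRead];
                Array.Copy(buffer, chunk, bytesRead);
            }

            //put block
            AzureBlobUploadHelper.UploadBlockAsync(encodedBlockId, AzureContainerName, relFilePath, fileName,
                chunk, StorageAccountName, StorageAccountKey, CancellationToken.None).GetAwaiter().GetResult();

            blockid++;
        }

        //commit blocks
        AzureBlobUploadHelper.CommitBlocks(blockIdList, AzureContainerName, relFilePath, fileName,
            StorageAccountName, StorageAccountKey, CancellationToken.None).GetAwaiter().GetResult();
    }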


    Michael Taylor http://www.michaeltaylorp3.net

    • Marked as answer by diffident Tuesday, January 15, 2019 11:08 PM
    Tuesday, January 15, 2019 10:50 PM
    Moderator

All replies

  • That's because you're reading the entire file into memory. ReadAllBytes allocates an array large enough to hold the entire file; if you are dealing with large files you don't want to do that. In your case you load the entire file into memory and then create yet another array so you can copy memory from one block to another. Not only is this inefficient, it is also unnecessary: you already have the file in memory, so you could just use it as is. However, if the file is in the GB range (or you are already using a lot of memory) you're going to get an OOM at some point.

    The better approach is to use a Stream. You can then stream the file through anything you want; to get one, use File.OpenRead instead. This is where how you intend to use the file comes into play. If you need the entire file in memory in order to do something with it, then a stream isn't going to help you much. It looks like you're uploading the file to Azure. The core client, I believe, supports taking a stream, so you should be able to use File.OpenRead to open the stream, call Azure to upload the file, and then close the stream. No reason to buffer. But AzureBlobUploadHelper may be your own code so I cannot say.
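
    If your helper (or the underlying Azure client) can take a Stream, the whole thing could collapse to something like this sketch. The stream-accepting UploadAsync shown here is hypothetical; substitute whatever overload actually exists in your helper or the client:

    //hypothetical sketch: assumes an upload method that accepts a Stream directly
    using (Stream source = File.OpenRead(theFile.FullName))
    {
        //the client reads from the stream in its own chunks, so the file is never
        //materialized as one big byte array in your process
        AzureBlobUploadHelper.UploadAsync(AzureContainerName, relFilePath, fileName,
            source, StorageAccountName, StorageAccountKey, CancellationToken.None)
            .GetAwaiter().GetResult();
    }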


    Michael Taylor http://www.michaeltaylorp3.net

    Tuesday, January 15, 2019 8:59 PM
    Moderator
  • Thanks for your pointers, Michael. I knew loading the entire file into memory was inefficient and unnecessary. I modified my code and it is behaving much better, with no OOM exceptions. As you suggested, I am now using a Stream.

    --

    //split file into 50 MB blocks and process
    using (Stream source = File.OpenRead(theFile.FullName))
    {
        int blockid = 0;               
    
        const int blockSizeInBytes = 50 * 1024 * 1024; //50 MB
        var bytesRemain = Convert.ToInt32(theFile.Length);
    
        //init bytesToSend
        var bytesToSend = new byte[blockSizeInBytes];
        var bytesToCopy = 50 * 1024 * 1024;
        do
        {
            bytesToCopy = Math.Min(bytesRemain, blockSizeInBytes);
            bytesToSend = new byte[bytesToCopy];
    
            source.Read(bytesToSend, 0, bytesToCopy);
    
            bytesRemain -= bytesToCopy;
    
            //construct base64 blockid string
            var blockidBytes = System.Text.ASCIIEncoding.ASCII.GetBytes("block-" + blockid);
            string encodedBlockId = Convert.ToBase64String(blockidBytes);
            blockIdList.Add(encodedBlockId);
    
            //put block
            AzureBlobUploadHelper.UploadBlockAsync(encodedBlockId, AzureContainerName, relFilePath, fileName,
            bytesToSend, StorageAccountName, StorageAccountKey, CancellationToken.None).GetAwaiter().GetResult();
    
            blockid++;
        } while (bytesRemain > 0);
    
    
        //commit blocks
        AzureBlobUploadHelper.CommitBlocks(blockIdList, AzureContainerName, relFilePath, fileName,
            StorageAccountName, StorageAccountKey, CancellationToken.None).GetAwaiter().GetResult();
    }

    --

    • Marked as answer by diffident Tuesday, January 15, 2019 9:09 PM
    • Unmarked as answer by diffident Tuesday, January 15, 2019 9:16 PM
    Tuesday, January 15, 2019 9:08 PM
  • Noticed that even the above "stream" approach is throwing an OOM exception for a 900 MB file. Any suggestions?
    Tuesday, January 15, 2019 9:17 PM
  • You're still not really streaming because you're allocating an array. Arrays are ref types so they have to be GC'ed. Until that happens it'll be eating up memory. At some point memory will get tight and a GC will occur. 

    Is AzureBlobUploadHelper your code or a library you're using? I'm noticing that you're allocating arrays and calling the upload in a loop. After the loop you commit. I'm wondering if it is keeping that memory allocated internally until commit. Here's an example of a library I see in GitHub that does something similar.


    Michael Taylor http://www.michaeltaylorp3.net

    Tuesday, January 15, 2019 10:01 PM
    Moderator
  • Thanks Michael. Yes, AzureBlobUploadHelper is a class I have written. Even if I modify the API to accept a stream object, I wonder how I would create a 100 MB chunk of the source stream as its own stream object. Do I have to use a MemoryStream as the backing store? Please pardon my ignorance here.

    • Edited by diffident Tuesday, January 15, 2019 10:26 PM
    Tuesday, January 15, 2019 10:21 PM
  •         bytesToCopy = Math.Min(bytesRemain, blockSizeInBytes);
            bytesToSend = new byte[bytesToCopy];
    

    Your problem is that you are instantiating a new byte array on every pass without once calling GC.Collect()

    50*1024*1024 is a pretty insane default read size too. The NTFS "sector" size for over a decade has been 4096 bytes. Use 4096 bytes. Using any size bigger than this affects performance (reads get faster and faster up to a certain point, then get slower and slower until some kind of crash occurs). Using any size smaller than this wastes memory and might wear your HDD unnecessarily by doing multiple reads, which also affects performance.

    The following code will not generate OOM no matter how large the input is and it's about the fastest reliable performance you can get out of filesystem reads in DotNET:

                System.IO.FileStream s = new System.IO.FileStream(@"path/to/file", System.IO.FileMode.Open, System.IO.FileAccess.Read, System.IO.FileShare.Read);
                s.Position = 0;
    
                // When doing filesystem IO in any Windows NT, use a 4096b buffer.  Always.  Period.  Always.  Exclamation point.
                // ALWAYS!!!!!1!!!!one
                byte[] buffer = new byte[4096];
                do
                {
                    // What this does is test if the buffer size is greater than the remaining bytes to read.
                    if (buffer.Length > (s.Length - s.Position))
                    {
                        // Using Array.Resize changes the size of the array if and only if necessary.
                        // That means this resize operation occurs only once if ever, on the very 
                        // last read operation.
                        Array.Resize<byte>(ref buffer, (int)(s.Length - s.Position));
                    }
                    //
                    // Some fool might tell you to zero-fill the array before you re-use it, but I call that idiot a fool because 
                    // using this method you will absolutely and positively in all conceivable circumstances overwrite every single
                    // byte in the buffer on every single pass.  This is byte-for-byte.  The buffer is never bigger than the data.
                    // Re use it a billion times and you'll never have a problem.
                    //
    
                    // The call to FileStream.Read automatically advances the Position value to just past the data consumed.
                    s.Read(buffer, 0, buffer.Length);
    
                    // Do some processing here.
    
                } while (s.Position < s.Length);
    
                buffer = null;
                s.Close();
                s.Dispose();
                s = null;
                GC.Collect();

    Tuesday, January 15, 2019 10:22 PM
  • I'm sorry but I have to fundamentally disagree with this code. Firstly, it isn't exception safe. You should not be calling Close/Dispose explicitly for local variables. Secondly, calling GC.Collect is almost always the wrong answer. It introduces more problems than it solves.

    Simply opening the stream (with using) and passing it on to the underlying API that accepts a stream is sufficient. There is no reason to allocate a buffer and stream the data across when passing it on to Azure.


    Michael Taylor http://www.michaeltaylorp3.net

    Tuesday, January 15, 2019 10:33 PM
    Moderator
  • Post the code of your helper, but no, you don't need a memory stream. If you cannot pass the stream directly to the Azure API then allocate a reasonably sized array and just read the stream into it. One array can be used for the entire process and doesn't need to be very big; it depends upon how you're writing to Azure. For downloading I know blob storage supports streams; for uploading I'd have to look. Your helper code should use a stream if at all possible.

    Michael Taylor http://www.michaeltaylorp3.net

    Tuesday, January 15, 2019 10:41 PM
    Moderator
  • Michael - I am using the Azure Blob REST API to perform the operations. MSFT requires that files > 256 MB be uploaded in blocks (up to 100 MB each), hence the byte arrays. Does that make sense? Do you have any suggestions? As Andrew suggested, can I use byte arrays of 4096 bytes, or should I use a MemoryStream object?


    • Edited by diffident Tuesday, January 15, 2019 10:48 PM
    Tuesday, January 15, 2019 10:44 PM
  • Yes, then you'll need to chunk it. As already mentioned: allocate a reasonably sized array, open the stream, read from the stream into your buffer, call the upload, then loop back around and read from the stream again, upload, and so on. Provided you keep track of how much of the array is actually used, you only need a single allocation.

    Because you're chunking to Azure you'll want a reasonably sized array so that you can minimize calls to Azure. You don't want Azure to throttle you, so make sure you are clear on the throttling rules for your Azure subscription. It's a tradeoff between speed and memory usage. If this is a background process, use as much memory as you can; if it's part of a larger app (web or desktop) you may not want to go as high. 100 MB isn't that much memory when you're talking about x64 processes, but you could start with 25 MB. Personally we make buffer sizes configurable so we can adjust them based upon our needs (throughput vs. memory).


    Michael Taylor http://www.michaeltaylorp3.net

    • Marked as answer by diffident Tuesday, January 15, 2019 11:08 PM
    Tuesday, January 15, 2019 10:50 PM
    Moderator
  • I'm sorry but I have to fundamentally disagree with this code.

    I'm not actually confident that you mean my code (posted immediately before the quoted post) but I'm responding to you because my code being immediately ahead of your post implies that I should...

    Firstly it isn't exception safe.

    My understanding here is that this OP is using Azure, so that any filepath would have already been validated as existing by something like a remote web request or URI processor. The only bit that can possibly generate any exception is the original FileStream instantiation using a filepath, so that should not need Exception Handling. In reality if that were to fail then the whole codeblock would fail, so what you're suggesting is that I should have wrapped this entire codeblock in a Try/Catch instead of writing a single function that does this job and then wrapping just the one-line callout to that function, from elsewhere in Try/Catch? You'd have to be insane to give such advice.

    You should not be calling Close/Dispose implicitly for local variables.

    You're saying not to call Close/Dispose explicitly? Because I did in fact call them explicitly. What you suggest with 'using' is what calls those methods implicitly and that's exactly why I never use 'using.' In all events, you're just dead wrong. When you use 'using' Close/Dispose are called for you. When you create anything that exists in the stock Framework, if it has a Dispose method, then you do in fact need to call that method explicitly. Period. Not doing so creates a memory leak in unmanaged code. If you also fail to call Close, such as if you called Dispose without Close, you can in some circumstances end up leaving the Read Only fileshare permission stuck on that file and prohibit future calls to write into files from succeeding.

    Secondly calling GC.Collect is almost always the wrong answer. It introduces more problems than it solves.

    What? GC.Collect() runs on its own periodically no matter what you do. You can't "reclaim" anything from garbage collection so what possible detriment can you cause by calling it "early?" The reality here is that any time you create nullable objects you want them cleared out of the managed memory block as soon as possible. GC.Collect.

    Simply opening the stream (with using) and passing it on to the underlying API that accepts a stream is sufficient. There is no reason to allocate a buffer and stream the data across when passing it on to Azure.

    This very well may be true, but I responded to the OP's actual question and the specific issues OP is having with a codebit they posted.

    I'm pretty shocked seeing this kind of insanely disinformative response from this particular user. Do you and I have some kind of personal issue we need to address, CoolDadTx?

    Tuesday, January 15, 2019 11:12 PM
  • Sorry if my posts caused any trouble. Both of you helped me with my solution. I have now stopped instantiating a new byte array on every iteration and instead resize the existing one dynamically. How cool is that? :)

    --

    do
    {
        bytesToCopy = Math.Min(bytesRemain, blockSizeInBytes);
        //bytesToSend = new byte[bytesToCopy];
        Array.Resize<byte>(ref bytesToSend, bytesToCopy);
    
        source.Read(bytesToSend, 0, bytesToCopy);
    
        bytesRemain -= bytesToCopy;
    
        //construct base64 blockid string
        var blockidBytes = System.Text.ASCIIEncoding.ASCII.GetBytes("block-" + blockid);
        string encodedBlockId = Convert.ToBase64String(blockidBytes);
        blockIdList.Add(encodedBlockId);
    
        //put block
        AzureBlobUploadHelper.UploadBlockAsync(encodedBlockId, AzureContainerName, relFilePath, fileName,
        bytesToSend, StorageAccountName, StorageAccountKey, CancellationToken.None).GetAwaiter().GetResult();
    
        blockid++;
    } while (bytesRemain > 0);

    --

    Tuesday, January 15, 2019 11:28 PM
  • Sorry if my posts caused any trouble.

    You certainly did not.  I'm honestly just really very confused by all of the advice CoolDadTx gave in response to my code.  He's going against 20+ years of experience and tens of thousands of individual pages of documentation both on MSDN and in various DotNET developer blogs and forum posts in that same time period.  Unless he got platforms and maybe posts confused, or thought I was calling GC.Collect inside a while loop, it seems like he was attacking me via my code for no reason at all.  Maybe he's pissed that I added my own error report to a forums bug post he made recently :-(

    Anyway, I think what CoolDadTx has tried to tell you twice is to skip the array buffer altogether.  Open the FileStream and then pass it to an Azure API function that accepts a Stream as input, and let the web service do all the heavy lifting by itself.

    Tuesday, January 15, 2019 11:38 PM
  • "You're saying not to call Close/Dispose explicitly?"

    Correct. The recommendation has always been to use a using statement. In fact with Roslyn the code fix recommendations will recommend this. CodeRush, ReSharper and other productivity tools provide the same recommendation. 

    "In all events, you're just dead wrong"

    My point is that in the case of an exception your Close/Dispose call isn't made so the stream will be leaked. Exceptions can occur at any point and as code evolves over time it is not maintainable if you need to keep looking over the code to decide that you should switch from explicit calls to a using. Just start with the using, that's why it is provided. 
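
    For what it's worth, a using statement is just compiler shorthand for a try/finally, which is why Dispose still runs when the body throws. Roughly:

    //what the compiler generates for: using (var stream = File.OpenRead(path)) { ... }
    var stream = File.OpenRead(path);
    try
    {
        // ... work with the stream ...
    }
    finally
    {
        if (stream != null)
            stream.Dispose(); //runs even if an exception is thrown in the try body
    }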

    "you can in some circumstances end up leaving the Read Only fileshare permission stuck on that file"

    That would be a bug then. As the guidelines recommend, Dispose always cleans up the resource as this is the only method guaranteed to be called in all cases. If a type implements an "equivalent" method (such as Close on IO-like types) then that implementation should simply defer to the Dispose method. This is consistent with how the rest of .NET works. I'm not aware of any framework implementation that doesn't do this.
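
    In other words, the documented shape is roughly this (an illustrative sketch only, not any specific framework type):

    //illustrative sketch of the guideline: the "equivalent" method defers to Dispose
    public class MyResource : IDisposable
    {
        public void Close()
        {
            //Close simply defers to Dispose
            Dispose();
        }

        public void Dispose()
        {
            //release the underlying handle/unmanaged resource exactly once here
        }
    }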

    "What? GC.Collect() runs on its own periodically no matter what you do. " 

    Yes but because of how generations work in .NET calling collect can cause objects that would normally be freed quickly to persist longer than normal. Refer to the previous discussions over the years as to why such as here, here and here.

    Ultimately here is how I would code this (not compiled for validation).

    using (var stream = new FileStream(…))
    {
       var buffer = new byte[4096];
       ...
    }

    "I'm pretty shocked seeing this kind of insanely disinformative response from this particular user"

    Please be careful here. I have provided feedback based upon the guidelines that are available (linked to above) and experience. Calling my post disinformative is disrespectful just because you may not agree. We can have a disagreement on which approach is best without making false accusations. 

    Feel free to read up on the links I provided (and the many others) and respond back with any further questions you may have.


    Michael Taylor http://www.michaeltaylorp3.net

    Wednesday, January 16, 2019 12:07 AM
    Moderator
  • So it's become obvious that somebody stole CoolDadTx's laptop or tablet or smartphone and is trolling MSDN using his account.

    The fact is that choosing to instantiate and dispose on your own vs. choosing to employ the "using" keyword/block is a matter of style. It's a preference, and a purely aesthetic one. This is where all the trouble started.

    "You're saying not to call Close/Dispose explicitly?"

    Correct. The recommendation has always been to use a using statement. In fact with Roslyn the code fix recommendations will recommend this. CodeRush, ReSharper and other productivity tools provide the same recommendation.

    You use the word "always" to describe an article (to which you helpfully provided a link) which was written in 2017 - or, if you're bad at math, more than 20 years after the advent of what would become DotNET. What exactly do you think "always" means?

    Worse, Roslyn is a Github Project, CodeRush is a Visual Studio Addin, and I quit googling to figure out what you're babbling about at that point. They aren't relevant to C# or DotNET or Azure.

    "In all events, you're just dead wrong"

    My point is that in the case of an exception your Close/Dispose call isn't made so the stream will be leaked. Exceptions can occur at any point and as code evolves over time it is not maintainable if you need to keep looking over the code to decide that you should switch from explicit calls to a using. Just start with the using, that's why it is provided.

    You're supporting one totally fictional narrative with additional erroneous statements. This was the point where I first started thinking this might not be the same guy I'm familiar with from a decade of MSDN usage.

    byte[] b = new byte[4096]; cannot and never will raise any exception at runtime. While it's very remotely possible that FileStream.Read may raise an Exception, even after a successful FileStream open (during constructor processing), it's so far from likely that if it ever happens to you, you should go buy a lottery ticket on the spot.

    You're also flat out lying with the insane assertion that 'using' will somehow magically handle Exceptions for you.

        // This raises an exception.  If you don't try/catch on this line alone, then the exception is 
        // thrown and the function returns instantly without performing any additional processing.
        System.IO.FileStream s0 = new System.IO.FileStream("path/to/nonexistent/file", System.IO.FileMode.Open);
    
        // This raises the exact same exception.  The exact same thing happens as in the line above.
        using (System.IO.FileStream s1 = new System.IO.FileStream("path/to/nonexistent/file", System.IO.FileMode.Open))
        {
        }
        
        // These are absolutely and totally and in all ways identical if and when Exceptions are thrown.  The only difference 
        // that ever exists is if s0 is successful, at which point Unmanaged Code is invoked in the underlying Framework.
        // Handles and buffers are created.  The referenced file is locked against future handle creations per any 
        // System.IO.FileShareMode flags applied.
        // The only way to release the locks is FileStream.Close()
        // The only way to cleanly release Handles and destroy system buffers is FileStream.Dispose()
        //
        // That's it.  Period.  Have it checked by literally anyone who actually knows what they're talking about.
    

    So what you're presenting is totally false information. This disinformation does real and physical harm to anyone who believes you. That's why I put a D at the front of the word instead of an M. Misinformation is largely moot, whether it's true or false. "Adam West rose from the dead and started handing out antique batarangs in West Oakland!" is a good example of "mis"information. Anybody who believes it probably isn't reading MSDN.

    "you can in some circumstances end up leaving the Read Only fileshare permission stuck on that file"

    That would be a bug then. As the guidelines recommend, Dispose always cleans up the resource as this is the only method guaranteed to be called in all cases. If a type implements an "equivalent" method (such as Close on IO-like types) then that implementation should simply defer to the Dispose method. This is consistent with how the rest of .NET works. I'm not aware of any framework implementation that doesn't do this.

    The guidelines do not say what you say. You're just slapping a link up there without having ever read (or maybe just never understood) the content on the page you linked to.

    What the guidelines do say is that the purpose of Dispose() is to allow the cleanup of unmanaged code which is set up and called to by your managed program. MSDN Documentation very clearly spells this out - you only implement a Dispose() method to clean up unmanaged resources. This is both a description of what is implemented in the stock Framework classes and also something like a "fair standard" for your own custom implementations.

    There's absolutely no documentation anywhere that suggests the Dispose() method should also perform secondary, non-system-resource-related cleanup, such as "closing" a file. Release its handle, sure. Destroy any underlying IO buffers, absolutely. Notify the filesystem that just happened? NOPE! It might and it'd be nice if it did, but it'd be utterly irresponsible to assume it would. I frankly don't know or care if any given DotNET version's implementation of FileStream.Dispose() performs all the Close() procedures for me or not - I'm going to call it in my code because some older versions of the Framework may not and it's all but guaranteed that some future Framework version will not whether all the previous and current ones do or not. Close handles the Filesystem cleanup, Dispose handles the Operating System cleanup.  The two are necessarily mutually exclusive, whether any given version of DotNET calls Close from inside Dispose or not.

    You've been using 'using' for so long that you've mistakenly come to believe it's "the right way" as opposed to "one of many acceptable ways." The problem with your logic is that any given Framework update may well flub up the way using calls Close and Dispose for you, or may change whether it calls those for you at all, and then you're stuck migrating all your code from your way to my way because my way is the only way where the programmer can absolutely and positively control every step of the operation.

    This time the argument isn't even about "an aesthetic" difference of opinion - it's this poster claiming to be CoolDadTx presenting total fiction as real fact in the effort to support his already grossly inaccurate statements. The only real fact of the matter is that whether Dispose() calls Close() for you or not, it does no harm to do it yourself and it will most likely save you a headache in the future just like defensive driving.

    "What? GC.Collect() runs on its own periodically no matter what you do. "

    Yes but because of how generations work in .NET calling collect can cause objects that would normally be freed quickly to persist longer than normal. Refer to the previous discussions over the years as to why such as here, here and here.

    You made this up entirely. Or maybe somebody at stackoverflow (SO is referenced because one of your links goes there) did - I really don't know or care about the origin of the disinformation, who innocently propagated it, et al, I'm only interested in making sure that now you've posted it on MSDN it goes no further.  The BS stops here.

    What I do know is what's printed in the link you present to https://docs.microsoft.com/en-us/dotnet/standard/garbage-collection/induced which clearly states that everything you just said is totally false and everything I've said so far is actually printed in the page content. I didn't cover it all, but I haven't said anything that isn't covered there.  Literally right at the top of the page. Second Paragraph, sentence 1. At absolutely no point whatsoever does it diverge into any hint of the nonsense you've spouted about it and implied is printed therein.

        Ultimately here is how I would code this (not compiled for validation).
    
        using (var stream = new FileStream(…))
        {
           var buffer = new byte[4096];
           ...
        };
    

    Are you pseudo-quoting me here? And then in the next sentence you spew off about "respect?"

    I did, in fact, compile my code sample for validation. I even tested it against really and truly existing files. That's apparently the difference between me and the guy whose laptop you stole. We test stuff before we spout about it.

    "I'm pretty shocked seeing this kind of insanely disinformative response from this particular user"

    Please be careful here. I have provided feedback based upon the guidelines that are available (linked to above) and experience. Calling my post disinformative is disrespectful just because you may not agree. We can have a disagreement on which approach is best without making false accusations.

    So after all of this, you're playing the victim of the crazy guy who calls you out on your pack of lies.

    The only thing we seem to disagree on is whether it's prettier to type using (FileStream) or FileStream s/Close/Dispose - then to support your aesthetic preference you came out of left field spouting all kinds of flat-out insane advice backed up by totally fictional assertions that I know CoolDadTx would know are fictional since he's been using MSDN at least as long as I have. That's the dictionary definition of "disinformation." There's no respect intended, needed, or called for when observing facts in evidence.

    Feel free to read up on the links I provided (and the many others) and respond back with any further questions you may have.

    You didn't actually provide any links that back up your nonsense statements. You made statements not found in the documentation and you slapped a link on it like a sticker that says "USDA APPROVED!" that you can buy at any gas station.

    I'm sticking to my suggestion that CoolDadTx isn't the one writing this garbage. I've never seen him pull this before, with anyone.

    Wednesday, January 16, 2019 3:24 AM
  • Andrew, you are clearly ranting and convinced you are right. The fact that I posted links to back up this information and you are still arguing confirms that. I will leave it to others in the forums to try to convince you of what I'm trying to clarify. The point is that this thread isn't about you but about helping the OP, and they seem to have found their answer, so I'm not going to try to convince you any further.


    Michael Taylor http://www.michaeltaylorp3.net

    Wednesday, January 16, 2019 3:52 AM
    Moderator