I have searched for any documented limits on the number of blobs a storage account can hold, but haven't found a definitive answer.
We are investigating using an Azure storage account to store a large number (and volume) of documents: a mix of PDFs, photographs, video clips and audio clips.
Our application (on mobile devices) uses a local database that the user can search to identify which document(s) they need; the selected document is then downloaded to the device and viewed/played.
We have tested this on a small scale (only a few dozen documents) and it all works well. However, we now want to scale up the storage side: we have approximately 130 GB of data spread across around 25,000 separate documents.
Some documents are related (for example, part specification data, images, and installation/removal guides), so each document is identified by a part code (six digits) plus a document number (two digits). On our original system we store these in a hierarchy of ten top-level folders, 0...9, chosen from the last digit of the part code, with each part code having its own folder containing all of its documents. Thus, if we know a part code we can map to a document, like
Part code 001759, document 02 (How to adjust the one-shot PIT interval)
root/9/001759/00175902.itm
-- a generic extension is used; the database holds metadata explaining how to interpret the data, in this case as a PDF.
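
For reference, this is roughly how we derive the path today (a minimal sketch in Python; the helper name is just illustrative):

    def blob_path(part_code: str, doc_number: str) -> str:
        # Existing layout: <last digit of part code>/<part code>/<part code><doc number>.itm
        return f"{part_code[-1]}/{part_code}/{part_code}{doc_number}.itm"

    # e.g. blob_path("001759", "02") -> "9/001759/00175902.itm"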
Obviously, we could simply upload the files "as is" to a single container, since none of the names clash.
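
By "as is" I mean something along these lines (a rough sketch using the azure-storage-blob Python SDK; the connection string and container name are placeholders):

    from azure.storage.blob import BlobServiceClient

    # Placeholders: the real connection string and container name would come from config.
    service = BlobServiceClient.from_connection_string("<connection-string>")
    container = service.get_container_client("documents")

    def upload_flat(local_path: str, file_name: str) -> None:
        # Flat layout: every file goes straight into one container under its existing
        # name, e.g. "00175902.itm"; part code + document number keeps names unique.
        with open(local_path, "rb") as data:
            container.upload_blob(name=file_name, data=data, overwrite=True)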
However, are there limits and/or performance implications to doing this? Should we reproduce our existing folder scheme, or is there a better way of organising the files?
Of course, we don't know whether this would even work, or whether we would be exceeding any storage limits.
Steve