Stream Analytics - Blob output filename?

  • Question

  • For the Blob storage output, how do we control the filename? I tried the path pattern, but it only creates a virtual folder in the blob container; the filename itself cannot be defined in the output.

    For example, specifying "test" as the path pattern results in - /blobcontainer/test/1513855769_670e84ee67f643d7aff3cad3cc6d9333.csv

    Specifying "test/filename.csv" as the path pattern results in - /blobcontainer/test/filename.csv/1513855769_670e84ee67f643d7aff3cad3cc6d9333.csv


    Subash.S

    Friday, April 8, 2016 1:36 PM
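A minimal Python sketch of the behavior Subash reports: Stream Analytics appears to treat the entire path pattern as a virtual folder prefix and then appends its own generated blob name. (`asa_blob_path` is a hypothetical illustration, not part of any Azure SDK.)

```python
def asa_blob_path(container, path_pattern, generated_name):
    """Illustrative only: mimic how the Blob output appears to compose
    the blob path. The whole path pattern becomes a virtual folder
    prefix; the service appends a generated name of the form
    <timestamp>_<guid>.csv, which the user cannot control."""
    return f"/{container}/{path_pattern}/{generated_name}"
```

This reproduces both observed paths: with pattern `test` the file lands under `/blobcontainer/test/`, and with pattern `test/filename.csv` the "filename" simply becomes another folder level.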

Answers

  • We only allow customizing the blob prefix, not the full name. Does your scenario require customizing the full name?

    Thanks,
    Kati


    This posting is provided "AS IS" with no warranties, and confers no rights.

    Friday, April 15, 2016 4:05 PM

All replies

  • Thanks.

    I need the metadata information, i.e. the file name, so that as the data travels through various jobs I can build a lineage to keep track of it. A numeric_GUID filename isn't very helpful for identifying and troubleshooting issues.

    As a workaround, I can add a column in the query that takes the BlobName metadata, suffixes it with something, and carries that through.


    Subash.S

    Friday, April 15, 2016 4:37 PM
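The workaround above can be sketched in Python as a per-record transformation (in a real job this would be done in the Stream Analytics query itself; the `Lineage` column name and the `/`-suffix format here are assumptions for illustration):

```python
def add_lineage_column(record, blob_name, job_name):
    """Simulate the workaround: copy the input blob's name (available
    to the query as metadata) into a new column, suffixed with a job
    identifier, so each output row carries its lineage even though the
    output blob itself gets a generated GUID name."""
    out = dict(record)  # leave the original record untouched
    out["Lineage"] = f"{blob_name}/{job_name}"
    return out
```

Downstream jobs can then read the `Lineage` column instead of relying on the uncontrollable output filename.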
  • Thank you, Subash, for the additional information. Do you have multiple jobs dropping output into the same folder? Or are you hoping for the output blob file name to contain data from the input?

    Thanks,
    Kati


    This posting is provided "AS IS" with no warranties, and confers no rights.

    Wednesday, May 4, 2016 4:52 PM
  • Hi,

    Although this question is quite old, I was not able to find an answer for my scenario:

    We are streaming data from local files, on a per-row basis, to Stream Analytics. We store the rows, with metadata columns added (filename etc.), in DocumentDB.

    However, our customer wants a copy/archive of the local files in blob storage, specifically with the blobs split by original filename. We'd like to do that from Stream Analytics and upload the data only once.

    Since we cannot define a file name right now, we have to upload the files to blob storage separately, which is not a nice solution.

    Thanks for ideas and your consideration,
    Richard

    Tuesday, January 30, 2018 10:08 AM
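Richard's desired split can be sketched as grouping streamed rows by their original-filename metadata column, so that each source file maps to one prospective output blob (the `filename` key is an assumed metadata column name, mirroring his setup; this is a local simulation, not a Stream Analytics feature):

```python
from collections import defaultdict

def split_by_source_file(rows):
    """Group streamed rows by the 'filename' metadata column so each
    original local file corresponds to one output group, i.e. one
    archive blob per source file."""
    blobs = defaultdict(list)
    for row in rows:
        blobs[row["filename"]].append(row)
    return dict(blobs)
```

Each group could then be written out as its own blob by a separate process, which is essentially the extra upload step Richard would like Stream Analytics to make unnecessary.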