DataFactory Data copy into Cosmos failure. Failed to import mini-batch. JSRT error 0x30003

    Question

  • I have a Data Factory V2 job that ran successfully for a few weeks; now, all of a sudden, the job is failing even though nothing in the pipeline has changed. Data Factory can successfully read the data, but it keeps failing when it tries to copy the data into Cosmos DB (which is the sink). The data is being copied with "upsert" selected, so I know it's not a unique name conflict. The error message from ADF is:
    Activity Copy Data1 failed: 'Type=Microsoft.Azure.Documents.DocumentClientException,Message=Partition range id 0 | Failed to import mini-batch. Exception was Message: {"Errors":["Encountered exception while executing function. Exception = JSRT error 0x30003"]}
    ActivityId: [activityid], Request URI: [request uri], RequestStats: , SDK: documentdb-dotnet-sdk/1.21.1 Host/64-bit MicrosoftWindowsNT/6.2.9200.0. Status code was 500
    ActivityId: [activityid], documentdb-dotnet-sdk/1.21.1 Host/64-bit MicrosoftWindowsNT/6.2.9200.0,Source=Microsoft.Azure.CosmosDB.BulkImport,''Type=Microsoft.Azure.Documents.InternalServerErrorException,Message=Message: {"Errors":["Encountered exception while executing function. Exception = JSRT error 0x30003"]}
    ActivityId: [activityid], Request URI: [request uri], RequestStats: , SDK: documentdb-dotnet-sdk/1.21.1 Host/64-bit MicrosoftWindowsNT/6.2.9200.0,Source=Microsoft.Azure.Documents.Client,'

    The destination collection is not partitioned.

    Any help greatly appreciated.

    Thursday, September 20, 2018 8:44 PM

All replies

  • Hi Evan,

    Can you please provide some additional details about the API type you are using (SQL, MongoDB, Table, etc.)?

    I am seeing a 500 error code, which indicates an internal server error:

    ActivityId: [activityid], Request URI: [request uri], RequestStats: , SDK: documentdb-dotnet-sdk/1.21.1 Host/64-bit MicrosoftWindowsNT/6.2.9200.0. Status code was 500

    HTTP Status Codes for Azure Cosmos DB

    Is the Cosmos DB collection a single, fixed partition? Do you have space remaining to write data to?

    Regards,

    Mike

    Friday, September 21, 2018 6:25 PM
    Moderator
  • Hi Mike,

    I am facing a similar issue. My application had been running perfectly for the last month, but all of a sudden I am hitting this exact same exception, and it does not seem to be documented. There was also no major change in our application.

    My scenario:
    1. SQL API Cosmos DB account.
    2. No partition key defined when creating the collection.
    3. Application written using the BulkExecutor API (Java).
    4. Operation is bulkImport of JSON docs.

    My application runs perfectly, as before, if a partition key is defined on the collection, but crashes with this exception if none is defined.

    Writes work fine with the plain Java API, but there seems to be a problem with the BulkExecutor API when doing bulkImport.
    I even tested with the Cosmos DB bulkImport sample Java program from GitHub and saw the same behavior: it throws the same exception when slightly modified to work without a partition key.
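
    For reference, a minimal sketch of the failing path, modeled on the pattern in the GitHub getting-started sample. The endpoint, key, and database/collection names are placeholders, not the real values:

    import java.util.Arrays;
    import java.util.List;

    import com.microsoft.azure.documentdb.ConnectionPolicy;
    import com.microsoft.azure.documentdb.ConsistencyLevel;
    import com.microsoft.azure.documentdb.DocumentClient;
    import com.microsoft.azure.documentdb.DocumentCollection;
    import com.microsoft.azure.documentdb.bulkexecutor.BulkImportResponse;
    import com.microsoft.azure.documentdb.bulkexecutor.DocumentBulkExecutor;

    public class BulkImportRepro {
        public static void main(String[] args) throws Exception {
            // Placeholder endpoint and key.
            DocumentClient client = new DocumentClient(
                    "https://myaccount.documents.azure.com:443/",
                    "<primary-key>",
                    ConnectionPolicy.GetDefault(),
                    ConsistencyLevel.Session);

            DocumentCollection collection = client
                    .readCollection("/dbs/mydb/colls/mycoll", null)
                    .getResource();

            // On a fixed (non-partitioned) collection, getPartitionKey()
            // has no paths -- the case that now fails with JSRT error 0x30003.
            DocumentBulkExecutor bulkExecutor = DocumentBulkExecutor.builder()
                    .from(client, "mydb", "mycoll",
                          collection.getPartitionKey(), 400)
                    .build();

            List<String> documents = Arrays.asList(
                    "{\"id\":\"1\",\"value\":\"a\"}",
                    "{\"id\":\"2\",\"value\":\"b\"}");

            // importAll(docs, isUpsert, disableAutomaticIdGeneration,
            //           maxConcurrencyPerPartitionRange)
            BulkImportResponse response =
                    bulkExecutor.importAll(documents, true, false, null);
            System.out.println("Imported: "
                    + response.getNumberOfDocumentsImported());
        }
    }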

    Sunday, September 23, 2018 4:08 PM
  • Hi Souvik Das,

    Are you using a single, fixed partition? The Bulk API uses partition key ranges to batch data within and across partition ranges.

    "The Bulk Executor library makes sure to maximally utilize the throughput allocated to a collection. It uses an AIMD-style congestion control mechanism for each Azure Cosmos DB partition key range to efficiently handle rate limiting and timeouts."

    So, a partition key is always required: Azure Cosmos DB bulk executor library overview
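
    If it helps to confirm which case a collection is in, here is a quick sketch (placeholder endpoint, key, and names) that reads the collection and checks whether a partition key is defined:

    import com.microsoft.azure.documentdb.ConnectionPolicy;
    import com.microsoft.azure.documentdb.ConsistencyLevel;
    import com.microsoft.azure.documentdb.DocumentClient;
    import com.microsoft.azure.documentdb.DocumentCollection;
    import com.microsoft.azure.documentdb.PartitionKeyDefinition;

    public class PartitionKeyCheck {
        public static void main(String[] args) throws Exception {
            DocumentClient client = new DocumentClient(
                    "https://myaccount.documents.azure.com:443/",  // placeholder
                    "<primary-key>",                               // placeholder
                    ConnectionPolicy.GetDefault(),
                    ConsistencyLevel.Session);

            DocumentCollection collection = client
                    .readCollection("/dbs/mydb/colls/mycoll", null)
                    .getResource();

            PartitionKeyDefinition pk = collection.getPartitionKey();
            if (pk == null || pk.getPaths() == null || pk.getPaths().isEmpty()) {
                // Fixed (non-partitioned) collection: the failing case in this thread.
                System.out.println("No partition key defined.");
            } else {
                System.out.println("Partition key paths: " + pk.getPaths());
            }
        }
    }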

    Regards,

    Mike

    Monday, September 24, 2018 10:29 PM
    Moderator
  • Hi Mike,

    I do understand that, but we have been running our application for the past month and have test cases designed around that behavior which were passing. We were successfully writing data to a collection whether or not a partition key was defined.

    We got this exception very recently, without much change in our code.

    Even if it is no longer allowed, a useful error message would have been helpful. As it stands, it's an exception code and message that are not well documented.

    I am surprised to see this sudden change in CosmosDB behavior.

    Tuesday, September 25, 2018 10:50 AM
  • Also, let's say I have a collection created without any partition key defined. How would I do a bulk insert in a single network call?
    Tuesday, September 25, 2018 11:29 AM
  • Hi There,

    We have been discussing the very same error on this thread ->

    https://social.msdn.microsoft.com/Forums/en-US/964966e5-9c65-46ef-9b56-44c52e5ee3bb/adf-copy-error?forum=AzureDataFactory

    I don't know if this information is useful or not. 

    Tuesday, September 25, 2018 12:21 PM
  • I also logged an issue on GitHub:

    https://github.com/Azure/azure-cosmosdb-bulkexecutor-dotnet-getting-started/issues/25

    Tuesday, September 25, 2018 12:23 PM
  • Evan,

    This is an issue with Cosmos DB. The ADF team is working with the Cosmos DB team to address this. 

    Thank you,

    Mike

    Tuesday, September 25, 2018 7:57 PM
    Moderator
  • It would appear the short-term solution is to switch from a non-partitioned (Fixed) collection to partitioned storage (Unlimited), along with defining a partition key.
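
    As a rough sketch of that workaround (placeholder endpoint, key, and names; the throughput value is illustrative only): defining a partition key path at creation time is what makes the collection partitioned (Unlimited) rather than Fixed.

    import java.util.Collections;

    import com.microsoft.azure.documentdb.ConnectionPolicy;
    import com.microsoft.azure.documentdb.ConsistencyLevel;
    import com.microsoft.azure.documentdb.DocumentClient;
    import com.microsoft.azure.documentdb.DocumentCollection;
    import com.microsoft.azure.documentdb.PartitionKeyDefinition;
    import com.microsoft.azure.documentdb.RequestOptions;

    public class CreatePartitionedCollection {
        public static void main(String[] args) throws Exception {
            DocumentClient client = new DocumentClient(
                    "https://myaccount.documents.azure.com:443/",  // placeholder
                    "<primary-key>",                               // placeholder
                    ConnectionPolicy.GetDefault(),
                    ConsistencyLevel.Session);

            // Define the collection with an explicit partition key path.
            DocumentCollection collection = new DocumentCollection();
            collection.setId("mycoll");
            PartitionKeyDefinition pk = new PartitionKeyDefinition();
            pk.setPaths(Collections.singletonList("/myPartitionKey"));
            collection.setPartitionKey(pk);

            RequestOptions options = new RequestOptions();
            options.setOfferThroughput(1000);  // illustrative throughput

            client.createCollection("/dbs/mydb", collection, options);
            System.out.println("Created partitioned collection mycoll");
        }
    }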

    Regards,

    Mike

    Friday, September 28, 2018 8:31 PM
    Moderator