ADF Copy Error

    Question

  • Hello,

    I am running a copy activity from CRM to Cosmos DB. I am hitting the error below, and the data is not getting copied at all.

    Below is the error message I am receiving:

    "'Type=Microsoft.Azure.Documents.DocumentClientException,Message=Partition range id 0 | Failed to import mini-batch. Exception was Message: {\"Errors\":[\"Encountered exception while executing function. Exception = JSRT error 0x30003\"]}\r\nActivityId: b2843abc-3fc7-46ba-ad2f-45753931b063, Request URI: /apps/51b21d3b-5be7-4bef-a937-362c138e2d0c/services/8ce0ebc3-cefa-47e2-bc6b-42c5a59699c8/partitions/cfc25aaa-ab95-422f-a3e4-1bc9c2297c54/replicas/131819406021449829p/, RequestStats: , SDK: documentdb-dotnet-sdk/1.21.1 Host/64-bit MicrosoftWindowsNT/6.2.9200.0. Status code was 500\r\nActivityId: 9f3a4493-5847-4e78-aa5a-2a17e2e5d7c2, documentdb-dotnet-sdk/1.21.1 Host/64-bit MicrosoftWindowsNT/6.2.9200.0,Source=Microsoft.Azure.CosmosDB.BulkImport,''Type=Microsoft.Azure.Documents.InternalServerErrorException,Message=Message: {\"Errors\":[\"Encountered exception while executing function. Exception = JSRT error 0x30003\"]}\r\nActivityId: b2843abc-3fc7-46ba-ad2f-45753931b063, Request URI: /apps/51b21d3b-5be7-4bef-a937-362c138e2d0c/services/8ce0ebc3-cefa-47e2-bc6b-42c5a59699c8/partitions/cfc25aaa-ab95-422f-a3e4-1bc9c2297c54/replicas/131819406021449829p/, RequestStats: , SDK: documentdb-dotnet-sdk/1.21.1 Host/64-bit MicrosoftWindowsNT/6.2.9200.0,Source=Microsoft.Azure.Documents.Client,'

    Any help would be appreciated.

    Thank you.

    Thursday, September 20, 2018 7:21 PM

All replies

  • I'm also getting the same error. I've been spinning my wheels all day. I'm simply taking a JSON doc and trying to go from Azure Blob to Cosmos. No matter what I try, Cosmos isn't happy to work through Data Factory. I've validated that the inbound JSON is valid; I've even pasted it into Cosmos directly, and it saves just fine. So it doesn't appear to be an issue with the payload.

    Any help would be amazing!

    Friday, September 21, 2018 6:55 PM
  • I set the pipeline to use Staging. Once I do that, the pipeline works, but the data never lands in Cosmos. I'm beginning to wonder if there is indeed something wrong with the Cosmos connector. I've got a flat, simple JSON doc going in.
    Friday, September 21, 2018 8:25 PM
  • {
        "errorCode": "2200",
        "message": "'Type=Microsoft.Azure.Documents.DocumentClientException,Message=Partition range id 0 | Failed to import mini-batch. Exception was Message: {\"Errors\":[\"Encountered exception while executing function. Exception = JSRT error 0x30003\"]}\r\nActivityId: 4fd56039-459d-4c6f-9be9-ea7305cae5fe, Request URI: /apps/806df639-a21a-4efb-8e15-02c4a490af46/services/43c0f16f-e82c-4c1c-bca2-602743e971cd/partitions/cd0eb199-3edc-4aa3-aede-7115d20b624d/replicas/131810532074365699p/, RequestStats: , SDK: documentdb-dotnet-sdk/1.21.1 Host/64-bit MicrosoftWindowsNT/6.2.9200.0. Status code was 500\r\nActivityId: f5569d5e-3565-4769-9b50-947603b14ef4, documentdb-dotnet-sdk/1.21.1 Host/64-bit MicrosoftWindowsNT/6.2.9200.0,Source=Microsoft.Azure.CosmosDB.BulkImport,''Type=Microsoft.Azure.Documents.InternalServerErrorException,Message=Message: {\"Errors\":[\"Encountered exception while executing function. Exception = JSRT error 0x30003\"]}\r\nActivityId: 4fd56039-459d-4c6f-9be9-ea7305cae5fe, Request URI: /apps/806df639-a21a-4efb-8e15-02c4a490af46/services/43c0f16f-e82c-4c1c-bca2-602743e971cd/partitions/cd0eb199-3edc-4aa3-aede-7115d20b624d/replicas/131810532074365699p/, RequestStats: , SDK: documentdb-dotnet-sdk/1.21.1 Host/64-bit MicrosoftWindowsNT/6.2.9200.0,Source=Microsoft.Azure.Documents.Client,'",
        "failureType": "UserError",
        "target": "Import-Distributors"
    }
    Friday, September 21, 2018 8:31 PM
  • It looks like there might be some sort of issue brewing.

    I found this on GitHub. Same error. I know that Data Factory uses this component.

    https://github.com/Azure/azure-cosmosdb-bulkexecutor-dotnet-getting-started/issues/17

    Friday, September 21, 2018 8:34 PM
  • JSRT = JavaScript Runtime.

    https://github.com/Microsoft/ChakraCore/wiki/JavaScript-Runtime-%28JSRT%29-Overview

    Sorry I keep adding more to this thread as I find things. Hoping it might help us both resolve the matter. 

    Friday, September 21, 2018 8:39 PM
  • Two days of frustration and I finally find this thread! Glad it's not just me!

    Especially like the so-called "user" error with the 500 code!

    {
        "errorCode": "2200",
        "message": "'Type=Microsoft.Azure.Documents.DocumentClientException,Message=Partition range id 0 | Failed to import mini-batch. Exception was Message: {\"Errors\":[\"Encountered exception while executing function. Exception = JSRT error 0x30003\"]}\r\nActivityId: 0a1805ca-9c9a-4432-8b14-1dc07d9024de, Request URI: /apps/5d79f39a-b3a4-4d88-a226-857aa1df36ae/services/c9dcd440-1d8d-4cf8-b958-f0c850c62b98/partitions/7549d205-ac08-40cc-86ef-24899547f251/replicas/131820993926784933p/, RequestStats: , SDK: documentdb-dotnet-sdk/1.21.1 Host/64-bit MicrosoftWindowsNT/6.2.9200.0. Status code was 500\r\nActivityId: 647dffab-10af-4cb6-ae6c-9107d204dd61, documentdb-dotnet-sdk/1.21.1 Host/64-bit MicrosoftWindowsNT/6.2.9200.0,Source=Microsoft.Azure.CosmosDB.BulkImport,''Type=Microsoft.Azure.Documents.InternalServerErrorException,Message=Message: {\"Errors\":[\"Encountered exception while executing function. Exception = JSRT error 0x30003\"]}\r\nActivityId: 0a1805ca-9c9a-4432-8b14-1dc07d9024de, Request URI: /apps/5d79f39a-b3a4-4d88-a226-857aa1df36ae/services/c9dcd440-1d8d-4cf8-b958-f0c850c62b98/partitions/7549d205-ac08-40cc-86ef-24899547f251/replicas/131820993926784933p/, RequestStats: , SDK: documentdb-dotnet-sdk/1.21.1 Host/64-bit MicrosoftWindowsNT/6.2.9200.0,Source=Microsoft.Azure.Documents.Client,'",
        "failureType": "UserError",
        "target": "File to Cosmos"
    }

    • Edited by ChrisWoodRB Saturday, September 22, 2018 2:57 PM Typo
    Saturday, September 22, 2018 2:55 PM
  • Just to add - this is using copy routines that were working fine until this week and have suddenly stopped. I've tried recreating them and suffer the same problem.

    There doesn't seem to be any relationship to the data source either: I've tried copying from Blob Storage to Cosmos and from SQL Server to Cosmos; neither works, and both fail with the same error.

    Sunday, September 23, 2018 9:38 AM
  • Now Monday and still seeing the same issue.

    In case it has any relevance, we're trying to write to a fixed-size 400 RU collection located in UK South. ADF is running in West Europe.

    Just tried increasing the RUs to 10,000 to see if that made any difference, but I have the exact same problem.
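
    For anyone who'd rather script that throughput change than click through the portal, I believe something like this works with the documentdb .NET SDK (untested sketch; the endpoint, key, and database/collection names are placeholders):

    using System;
    using System.Linq;
    using System.Threading.Tasks;
    using Microsoft.Azure.Documents;
    using Microsoft.Azure.Documents.Client;

    class ScaleCollection
    {
        static async Task Main()
        {
            // Placeholder endpoint/key/names; substitute your own.
            var client = new DocumentClient(new Uri("https://myaccount.documents.azure.com:443/"), "<auth-key>");
            DocumentCollection collection = (await client.ReadDocumentCollectionAsync(
                UriFactory.CreateDocumentCollectionUri("mydb", "mycoll"))).Resource;

            // Find the offer (the throughput record) attached to the collection...
            Offer offer = client.CreateOfferQuery()
                .Where(o => o.ResourceLink == collection.SelfLink)
                .AsEnumerable()
                .Single();

            // ...and replace it with a 10,000 RU/s offer.
            await client.ReplaceOfferAsync(new OfferV2(offer, 10000));
        }
    }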

    We're relying on Data Factory for a large migration project next month; this is somewhat unnerving to me!

    • Edited by ChrisWoodRB Monday, September 24, 2018 10:42 AM Added additional info
    Monday, September 24, 2018 9:30 AM
  • Great thought, Chris. We are in US East. I've done the same, trying 10,000 RUs. I'd like to open a support ticket with MS, but I don't have permissions in our EA. I really feel like something broke recently.
    Monday, September 24, 2018 11:49 AM
  • I've tried lots of options on the config side in DF:

    1) I tried copying as-is (no mappings).

    2) I deleted my entire Data Factory and re-created it.

    3) I double- and triple-checked that my JSON is valid; it inserts/updates fine via other channels.

    I think I'll try a straight bulk-executor job next. I've got an app written and want to see whether the problem is in the bulk executor itself (which I think ADF uses under the hood) or something unique to ADF; a sketch of the harness is below. I'm running out of time on this effort and am getting desperate to get the data in sync.
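
    The harness is essentially the bulk executor getting-started sample, roughly like this (sketch only; the endpoint, key, and database/collection names are placeholders):

    using System;
    using System.Collections.Generic;
    using System.Threading.Tasks;
    using Microsoft.Azure.CosmosDB.BulkExecutor;
    using Microsoft.Azure.CosmosDB.BulkExecutor.BulkImport;
    using Microsoft.Azure.Documents;
    using Microsoft.Azure.Documents.Client;

    class BulkImportTest
    {
        static async Task Main()
        {
            // Placeholder endpoint/key/names; substitute your own.
            var client = new DocumentClient(new Uri("https://myaccount.documents.azure.com:443/"), "<auth-key>");
            DocumentCollection collection = (await client.ReadDocumentCollectionAsync(
                UriFactory.CreateDocumentCollectionUri("mydb", "mycoll"))).Resource;

            // Same library ADF's Cosmos sink appears to use under the hood.
            var bulkExecutor = new BulkExecutor(client, collection);
            await bulkExecutor.InitializeAsync();

            // A single trivial document keeps the repro minimal.
            var documents = new List<string> { "{\"id\": \"1\", \"name\": \"test\"}" };

            BulkImportResponse response = await bulkExecutor.BulkImportAsync(
                documents: documents,
                enableUpsert: true);

            Console.WriteLine($"Imported {response.NumberOfDocumentsImported} docs, " +
                              $"consumed {response.TotalRequestUnitsConsumed} RUs");
        }
    }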

    Monday, September 24, 2018 2:50 PM
  • Good luck - keep me posted! I've got a bit more time yet, but I'm still alarmed that things could break so easily without warning.
    Monday, September 24, 2018 3:08 PM
  • So the regular bulk executor has the same error message.

    I opened a ticket with MS. They've picked up the ticket.

    #fingerscrossed.

    Monday, September 24, 2018 5:14 PM
  • Thanks for the update - is there a public URL where I can track the progress on the ticket, or is it a private ticket?

    Would really appreciate if you could keep me posted, if not!

    Chris

    Tuesday, September 25, 2018 9:44 AM
  • Hi Chris,

    It's a private ticket. I'll keep you posted though... I was supposed to get a response within 4 hrs... Not sure what's going on just yet... I wonder if we've discovered something larger going on.


    Tuesday, September 25, 2018 11:43 AM
  • I also posted an issue on GitHub. Not sure if that's helpful or not...

    https://github.com/Azure/azure-cosmosdb-bulkexecutor-dotnet-getting-started/issues/25

    Tuesday, September 25, 2018 11:44 AM
  • I'm not really familiar with the Bulk Executor myself or how it works, so this may be a stupid suggestion, but...

    If you run the Bulk Executor locally alongside something like Fiddler or Wireshark, can you see the details of the 500 error at all? Wondering if that might give a hint of what's going wrong.


    • Edited by ChrisWoodRB Tuesday, September 25, 2018 12:15 PM Typo
    Tuesday, September 25, 2018 12:14 PM
  • Great idea. I'm running it locally. I can see the traffic, but I only get the same JSRT error. Outside of the documentation here (https://docs.microsoft.com/en-us/azure/cosmos-db/bulk-executor-overview), I don't know much more about what goes on behind the scenes.

    Tuesday, September 25, 2018 12:18 PM
  • So here is a post in the Cosmos DB forum.

    https://social.msdn.microsoft.com/Forums/en-US/790e95b0-fc0c-4610-8538-78e6ca71d313/datafactory-data-copy-into-cosmos-failure-failed-to-import-minibatch-jsrt-error-0x30003?forum=azurecosmosdb

    Same error. 

    Tuesday, September 25, 2018 12:20 PM
  • Hi guys, we are aware of this issue and are working with Cosmos DB to address it. Meanwhile, we have some options that could help you work around it. I suggest you file a support ticket with Azure, where we'll collect information such as a list of the Azure subscriptions you own; dedicated engineers will then work with you through the process of applying the workaround. Sorry for any inconvenience this has caused.
    Tuesday, September 25, 2018 12:29 PM
  • Oh.. cool. Thanks for the update!! 
    Tuesday, September 25, 2018 12:32 PM
  • Hi, do you have an update?
    • Edited by MstaeThe Tuesday, September 25, 2018 8:52 PM
    Tuesday, September 25, 2018 8:51 PM
  • Yes, I just heard back. There was an issue with the bulk executor version. It looks like a recent update was made to the bulk executor library, and pulling the latest version is supposed to fix this. I have not yet attempted that though. From an ADF angle, the issue is with non-partitioned collections: it would appear the short-term workaround is to switch from a non-partitioned (fixed storage) collection to a partitioned (unlimited) one. That's something I did not try. #FacePalm
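
    For anyone else needing that workaround: creating a partitioned (unlimited) collection from the .NET SDK looks roughly like this (untested sketch; the endpoint, key, names, and partition key path are placeholders, so pick a property with a good spread of values):

    using System;
    using System.Collections.ObjectModel;
    using System.Threading.Tasks;
    using Microsoft.Azure.Documents;
    using Microsoft.Azure.Documents.Client;

    class CreatePartitionedCollection
    {
        static async Task Main()
        {
            // Placeholder endpoint/key/names; substitute your own.
            var client = new DocumentClient(new Uri("https://myaccount.documents.azure.com:443/"), "<auth-key>");

            // Unlike a fixed collection, an unlimited collection must declare a partition key path.
            var collection = new DocumentCollection
            {
                Id = "mycoll-partitioned",
                PartitionKey = new PartitionKeyDefinition
                {
                    Paths = new Collection<string> { "/myPartitionKey" }
                }
            };

            // Unlimited collections currently start at 1,000 RU/s.
            await client.CreateDocumentCollectionIfNotExistsAsync(
                UriFactory.CreateDatabaseUri("mydb"),
                collection,
                new RequestOptions { OfferThroughput = 1000 });
        }
    }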

    Tuesday, September 25, 2018 11:55 PM
  • Thanks, that works fine :)
    Wednesday, September 26, 2018 5:01 AM
  • I am still getting the same error. Has it been fixed yet? Thank you.
    Friday, September 28, 2018 4:19 PM
  • I'm awaiting the fix in ADF too; moving to a more expensive unlimited storage approach (1000 RUs minimum per collection) isn't something that can be approved as a workaround!
    • Edited by ChrisWoodRB Saturday, September 29, 2018 8:56 PM typos
    Saturday, September 29, 2018 8:54 PM