Azure Cosmos DB Mongo API

  • Question

  • I'm getting a request rate too large error for a Cosmos DB with Mongo API.

    I understand the error, but am having trouble finding a work around.

    The Mongo DB has about 70k documents in it. 

    The metrics for the collection are:

    data size: 13.76GB (I thought collections can't be over 10GB in size?)

    index size: 1.9GB

My throughput is set to 5k RU/s

I have a query that filters on one field, then trims the response with a projection, and finally sorts on a single field. I've created indexes on both the filter field and the sort field.

I find that if I remove the projection, I'm able to process results, whereas with the projection I get no data back and a rate-too-large error. I would have thought the projection would help, since far less data (80-90% less) is transmitted back. Also, I'm wondering how clients like RoboMongo handle this kind of scenario with Azure Mongo DBs.
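For reference, the query shape described above could be sketched like this (the field names are hypothetical, and the pymongo driver is assumed):

```python
# Hypothetical sketch of the query described above; the field names are
# invented and pymongo is assumed as the driver.
filter_spec = {"status": "active"}                    # filter on one field
projection = {"_id": 0, "status": 1, "updatedAt": 1}  # trim most of each doc
sort_spec = [("updatedAt", 1)]                        # sort on a single field

# With a live connection this would run as something like:
# cursor = db.collection.find(filter_spec, projection).sort(sort_spec)
print(filter_spec, projection, sort_spec)
```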


    Monday, January 22, 2018 10:49 PM

All replies

  • This is likely due to the default batchSize setting used by the driver in your app. Other apps use very low settings for both batchSize and numInsertionWorkers (the number of threads) when they pull data. Please see here for how to calculate the optimal values for your provisioned throughput and speed requirements:
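    One way to reason about a batchSize, together with a generic retry-with-backoff pattern for rate-limit errors, might look like the sketch below. The RU cost per document, the safety factor, and the error-message matching are illustrative assumptions, not Cosmos DB's actual values or official guidance:

    ```python
    import time

    def estimate_batch_size(provisioned_rus, ru_per_doc, safety_factor=0.5):
        """Rough batch size so one batch stays within a fraction of the
        per-second RU budget (illustrative arithmetic, not an official formula)."""
        return max(1, int(provisioned_rus * safety_factor / ru_per_doc))

    def with_rate_limit_retry(fn, is_rate_limited, max_retries=5):
        """Call fn(), backing off exponentially while is_rate_limited(exc) is True."""
        delay = 0.05
        for attempt in range(max_retries):
            try:
                return fn()
            except Exception as exc:
                if not is_rate_limited(exc) or attempt == max_retries - 1:
                    raise
                time.sleep(delay)
                delay *= 2

    # Example: 5000 RU/s provisioned, assuming ~5 RU per returned document
    # gives at most ~500 documents per batch under these assumptions.
    print(estimate_batch_size(5000, 5))  # → 500
    ```

    The same backoff helper could wrap each cursor fetch, retrying only when the raised error looks like a rate-limit response.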

    Tuesday, January 23, 2018 6:43 PM