Performance changes when switching from one index to two

  • Question

  • We initially started with one index containing 20 million documents.  When I started the stress test, it failed with an exception after ~5 queries, stating that we needed to buy more replicas.

    So I split it into two indexes, one with ~8 million documents and the other with ~12 million.  This did okay with 3 threads constantly searching, but failed with 7 threads with a similar error (Failed to execute query because not enough resources were available to cover 100% of the index (91.6666666666667% was covered). You may be reaching the limits of your provisioned capacity. Adjust the number of replicas/partitions, reduce the rate of requests, or specify a lower value for the minimumCoverage parameter. See http://aka.ms/azure-search-throttling for more information).
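
    For context, the minimumCoverage knob the error mentions is a per-query parameter. The sketch below (Python against the Search Documents REST API; the service name, index name, key, and query text are placeholders, not our real values) shows roughly how it would be passed:

    import requests

    SERVICE = "myservice"        # placeholder service name
    INDEX = "products"           # placeholder index name
    API_KEY = "<query-key>"      # placeholder query key
    URL = (f"https://{SERVICE}.search.windows.net/indexes/{INDEX}"
           f"/docs/search?api-version=2016-09-01")

    body = {
        "search": "sample search terms",
        "top": 10,
        # Accept results when at least 80% of the index is reachable,
        # instead of the default 100%, so the query degrades gracefully
        # rather than failing with the coverage error above.
        "minimumCoverage": 80,
    }

    resp = requests.post(URL, json=body, headers={"api-key": API_KEY})
    resp.raise_for_status()
    result = resp.json()
    # "@search.coverage" is only returned when minimumCoverage is specified.
    print(result.get("@search.coverage"), len(result["value"]))

    As I understand it, lowering minimumCoverage only makes a query tolerate partial coverage rather than adding capacity, which is why I'm asking about replicas and indexes instead.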

    So the main question here is: why would moving from one index to two make a difference with regard to replicas?  Can I just create 5 or 6 indexes to solve the issue, or do we need to add more replicas?

    Additionally, the documentation states that our plan can handle ~15 QPS, yet it errors out after 4 or 5.  What is causing this?

    We have 15 searchable string fields and 2 searchable string arrays.  Is this just too much data to search with 1 replica?  If we need to upgrade, would it be more performant to have S2 with 1 replica or S1 with 2 replicas?
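
    For reference, the index shape is roughly what this sketch creates (Python against the Create Index REST API; the service name, admin key, and field names are placeholders rather than our actual schema):

    import requests

    SERVICE = "myservice"                 # placeholder service name
    API_KEY = "<admin-key>"               # placeholder admin key
    URL = (f"https://{SERVICE}.search.windows.net/indexes/products"
           f"?api-version=2016-09-01")

    fields = [{"name": "id", "type": "Edm.String", "key": True}]
    # 15 searchable string fields (placeholder names)
    fields += [{"name": f"text{i}", "type": "Edm.String", "searchable": True}
               for i in range(1, 16)]
    # 2 searchable string collections (placeholder names)
    fields += [{"name": name, "type": "Collection(Edm.String)", "searchable": True}
               for name in ("tags", "categories")]

    resp = requests.put(URL, json={"name": "products", "fields": fields},
                        headers={"api-key": API_KEY})
    resp.raise_for_status()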

    Thanks



    • Edited by Jeremy233 Tuesday, June 6, 2017 7:53 PM
    Tuesday, June 6, 2017 7:31 PM

All replies

  • Bump
    Wednesday, June 7, 2017 5:27 PM

  • Maximal throughput is limited by the computational capacity of the hardware resources provisioned for your service. You increase that capacity by adding partitions or replicas, or by moving up in the pricing tier to get faster CPUs and more memory, as explained in the Azure Search performance tuning article.

    We can't make any QPS guarantees, because the query performance of your service depends on the size of the queries you issue (query complexity, size of the recall set, whether you're using filters or facets), the number and size of your documents, and the nature of your data corpus. That's why we recommend that our customers measure query performance in their specific scenario, as you did.

    Please make sure to give a cold service time to warm up its caches with a number of warm-up queries, then increase the QPS rate slowly to see at what point the query load starts to overwhelm the resources. The following video explains how to perform load testing effectively and how to adjust your service configuration based on the results: Azure Search best practices.
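
    As an illustration, a warm-up-then-ramp-up test along those lines could look like the sketch below (Python; the service name, index name, key, query text, thread counts, and durations are placeholders you would replace with your own):

    import threading
    import time

    import requests

    SERVICE = "myservice"            # placeholder service name
    INDEX = "products"               # placeholder index name
    API_KEY = "<query-key>"          # placeholder query key
    URL = (f"https://{SERVICE}.search.windows.net/indexes/{INDEX}"
           f"/docs/search?api-version=2016-09-01")
    HEADERS = {"api-key": API_KEY}

    def one_query(text):
        r = requests.post(URL, json={"search": text, "top": 10}, headers=HEADERS)
        return r.status_code, r.elapsed.total_seconds()

    def worker(stop, stats):
        while not stop.is_set():
            try:
                stats.append(one_query("sample search terms"))
            except requests.RequestException:
                stats.append((0, 0.0))

    # 1. Warm up a cold service so its caches are populated before measuring.
    for _ in range(20):
        one_query("sample search terms")

    # 2. Ramp up the load one thread at a time; throttling shows up as 503s
    #    and/or sharply rising latency.
    for threads in range(1, 8):
        stop, stats = threading.Event(), []
        pool = [threading.Thread(target=worker, args=(stop, stats))
                for _ in range(threads)]
        for t in pool:
            t.start()
        time.sleep(30)               # measure each load level for 30 seconds
        stop.set()
        for t in pool:
            t.join()
        errors = sum(1 for status, _ in stats if status != 200)
        avg = sum(latency for _, latency in stats) / max(len(stats), 1)
        print(f"{threads} threads: {len(stats) / 30:.1f} QPS, "
              f"avg latency {avg:.3f}s, errors {errors}")

    The load level at which the error count or latency starts climbing sharply is a reasonable estimate of the sustainable QPS for that configuration.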

    I should be able to help you understand why the change in the number of indexes influenced performance if you provide more details about:

    - what is the structure of the index?

    - what is the nature of the documents (their size and type of content)?

    - what types of queries do you issue?

    - how are your stress tests designed? Are your results consistent?

    Depending on how you measure query performance, the gain you are seeing might be misleading. In the configuration with two indexes, if you are issuing two search requests for each user search query, one per index, then each request needs to retrieve and score only half of the documents. Each individual query against a partial index is faster than one query against an index containing all the documents, but the overall time to search the entire catalog should be comparable in the general case.
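
    To make that concrete, a two-index setup typically fans each user search out into one request per index and then merges the results, along the lines of this sketch (Python; the service name, key, and index names are placeholders):

    import requests

    SERVICE = "myservice"                          # placeholder service name
    API_KEY = "<query-key>"                        # placeholder query key
    INDEXES = ["catalog-part1", "catalog-part2"]   # placeholder index names
    HEADERS = {"api-key": API_KEY}

    def search_index(index, text, top):
        url = (f"https://{SERVICE}.search.windows.net/indexes/{index}"
               f"/docs/search?api-version=2016-09-01")
        r = requests.post(url, json={"search": text, "top": top}, headers=HEADERS)
        r.raise_for_status()
        return r.json()["value"]

    def fan_out_search(text, top=10):
        # One user search becomes one request per index, so the service handles
        # twice as many requests even though each one scores fewer documents.
        hits = []
        for index in INDEXES:
            hits.extend(search_index(index, text, top))
        # Note: @search.score values are computed per index and are not strictly
        # comparable across indexes, so this merge is only an approximation.
        hits.sort(key=lambda doc: doc["@search.score"], reverse=True)
        return hits[:top]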


    Thanks,

    Janusz


    Wednesday, June 7, 2017 8:59 PM