Message: Index was outside the bounds of the array

    Question

  • Hi,

    I got an error when getting a user from AAD:

    "Message: Index was outside the bounds of the array.
     Inner Exception: 
     Stacktrace:    at System.Array.Clear(Array array, Int32 index, Int32 length)
       at System.Collections.Generic.List`1.Clear()
       at System.Data.Services.Client.AtomMaterializerLog.MergeEntityDescriptorInfo(EntityDescriptor trackedEntityDescriptor, EntityDescriptor entityDescriptorFromMaterializer, Boolean mergeInfo, MergeOption mergeOption)
       at System.Data.Services.Client.AtomMaterializerLog.ApplyToContext()
       at System.Data.Services.Client.MaterializeAtom.MoveNextInternal()
       at System.Data.Services.Client.MaterializeAtom.MoveNext()
       at System.Linq.Enumerable.<CastIterator>d__94`1.MoveNext()
       at System.Collections.Generic.List`1..ctor(IEnumerable`1 collection)
       at System.Linq.Enumerable.ToList[TSource](IEnumerable`1 source)
       at Microsoft.Azure.ActiveDirectory.GraphClient.Extensions.PagedCollection`2..ctor(DataServiceContextWrapper context, QueryOperationResponse`1 qor)
       at Microsoft.Azure.ActiveDirectory.GraphClient.Extensions.DataServiceContextWrapper.<>c__DisplayClass4b`2.<ExecuteAsync>b__49(IAsyncResult r)
       at System.Threading.Tasks.TaskFactory`1.FromAsyncCoreLogic(IAsyncResult iar, Func`2 endFunction, Action`1 endAction, Task`1 promise, Boolean requiresSynchronization)
    --- End of stack trace from previous location where exception was thrown ---
       at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
       at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
       at Microsoft.Azure.ActiveDirectory.GraphClient.Extensions.DataServiceContextWrapper.<ExecuteAsync>d__4d`2.MoveNext()
    --- End of stack trace from previous location where exception was thrown ---
       at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
       at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
       at Microsoft.Azure.ActiveDirectory.GraphClient.DirectoryObjectCollection.<<ExecuteAsync>b__2>d__3.MoveNext()
    --- End of stack trace from previous location where exception was thrown ---
       at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
       at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
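
    For reference, the call that triggers this is roughly the following. It is only a minimal sketch: the tenant URI, the token callback, and the class/method names are placeholders, not my actual code.

        using System;
        using System.Linq;
        using System.Threading.Tasks;
        using Microsoft.Azure.ActiveDirectory.GraphClient;
        using Microsoft.Azure.ActiveDirectory.GraphClient.Extensions;

        class GraphQuerySketch
        {
            // Placeholder tenant; use the real tenant ID or verified domain.
            static readonly Uri ServiceRoot = new Uri("https://graph.windows.net/TENANT_ID");

            static async Task<IUser> GetUserAsync(string userPrincipalName)
            {
                var client = new ActiveDirectoryClient(ServiceRoot, () => GetAccessTokenAsync());

                // The Graph client materializes the OData response into a paged collection;
                // the exception in the stack trace above is thrown while that collection is built.
                var page = await client.Users
                    .Where(u => u.UserPrincipalName.Equals(userPrincipalName))
                    .ExecuteAsync();

                return page.CurrentPage.FirstOrDefault();
            }

            static Task<string> GetAccessTokenAsync()
            {
                // Placeholder: acquire a bearer token for https://graph.windows.net/ (e.g. via ADAL).
                throw new NotImplementedException();
            }
        }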

    It occurred after the Japan East incident yesterday.

    "Starting at 12:42 UTC on 08 Mar 2017, a subset of customers using Virtual Machines, HD Insight, Redis Cache or App Service \ Web Apps in Japan East may experience difficulties connecting to resources hosted in this region. Engineers have determined that this is caused by an underlying Storage incident which is currently under investigation. Other services that leverage Storage in this region may be experiencing impact related to this and additional services will be listed on the Azure Status Health Dashboard. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant"

    Regards,

    Zhong

    Thursday, March 9, 2017 3:01 AM

All replies

  • The impact to Storage in Japan East has been mitigated. Do let us know if you are still seeing the error.

    You can find the RCA of the impact on the Azure Status History page.

    RCA - Storage - Japan East:

    Summary of impact: Between 12:40 and 14:38 UTC on 08 Mar 2017, a subset of customers using Storage in Japan East may have experienced difficulties connecting to resources hosted in this region. Azure services built on our Storage service in this region also experienced impact including: App Service \ Web Apps, Site Recovery, Virtual Machines, Redis Cache, Data Movement, StorSimple, Logic Apps, Media Services, Key Vault, HDInsight, SQL Database, Automation, Stream Analytics, Backup, IoT Hub, and Cloud Services. The issue was detected by our monitoring and alerting systems that check the continuous health of the Storage service. The alerting triggered our engineering response and recovery actions were taken which allowed the Stream Manager process in the Storage service to begin processing requests and recover the service health. All Azure services built on our Storage service also recovered once the Storage service was recovered.

    Workaround: SQL database customers who had SQL Database configured with active geo-replication could have reduced downtime by performing failover to geo-secondary. This would have caused a loss of less than 5 seconds of transactions. All customers could perform a geo-restore, with loss of less than 5 minutes of transactions. Please visit https://azure.microsoft.com/en-us/documentation/articles/sql-database-business-continuity for more information on these capabilities.

    Root cause and mitigation: On a Storage scale unit in Japan East, the Stream Manager, the backend component that manages data placement in the Storage service, entered a rare unhealthy state, which caused a failure in processing requests. This resulted in requests to the Storage service failing for the period of time noted above. The Stream Manager has protections to help it self-recover from such states (including auto-failover); however, a bug caused the automatic self-healing to be unsuccessful.

    Next steps: We sincerely apologize for the impact to affected customers. We are continuously taking steps to improve the Microsoft Azure Platform and our processes, to help ensure such incidents do not occur in the future. In this case, it includes (but is not limited to):

    - Roll out the bug fix for the self-healing mechanism as a hotfix across Storage scale units.
    - Implement a secondary service-healing mechanism designed to auto-recover from this unhealthy state, along with additional monitoring for this failure scenario.

    Provide feedback: Please help us improve the Azure customer communications experience by taking our survey https://survey.microsoft.com/313074

    Friday, March 10, 2017 3:41 AM
    Moderator
  • Hi,

    After I scaled out by adding one instance, the error no longer occurred.

    If it occurs again after I scale in, I will report it.
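
    In the meantime I may also wrap the query in a simple retry, since the failure looks transient. This is only a rough sketch; the attempt count and delay are arbitrary, and GetUserAsync is the placeholder from my first post.

        using System;
        using System.Threading.Tasks;

        static class TransientRetry
        {
            // Retries an async operation a few times with a short, growing delay.
            // The attempt count and delay are arbitrary; tune them for the workload.
            public static async Task<T> RunAsync<T>(Func<Task<T>> operation, int maxAttempts = 3)
            {
                for (int attempt = 1; ; attempt++)
                {
                    try
                    {
                        return await operation();
                    }
                    catch (Exception) when (attempt < maxAttempts)
                    {
                        // Back off briefly before the next attempt.
                        await Task.Delay(TimeSpan.FromSeconds(2 * attempt));
                    }
                }
            }
        }

        // Usage (the UPN is a placeholder):
        // var user = await TransientRetry.RunAsync(() => GetUserAsync("someone@contoso.com"));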

    Regards,

    Zhong.

    Friday, March 10, 2017 8:57 AM
  • Hello,

    Adding one instance to SQL?

    Thanks


    Friday, August 24, 2018 12:33 PM