AlwaysOn cluster became unavailable after a non-clustered shared disk went offline

  • Question

  • An iSCSI disk is mounted on both the primary and the secondary node, but it is used only on the primary node and was never added as a cluster resource.

    Today this disk suddenly went offline and the cluster became unavailable. What could have caused this?

    The error message is as follows:

    Event details
    A component on the server did not respond in a timely fashion. This caused the cluster resource "AGO1" (resource type "SQL Server Availability Group", DLL "hadrres.dll") to exceed its time-out threshold. As part of cluster health detection, recovery actions will be taken. The cluster will try to recover automatically by terminating and restarting the Resource Hosting Subsystem (RHS) process that is running this resource. Verify that the underlying infrastructure (such as storage, networking, or services) associated with the resource is functioning correctly.

    December 30, 2019 14:49

All replies

  • Possibly, if it's a dependency resource of the AGO1 group; double-check the properties of AGO1 in Cluster Manager.
    December 30, 2019 17:24
  • It's not a dependency resource of the AGO1 group.
    December 31, 2019 1:34
  • I added this shared disk 'F' to the clustered disks once, then removed it from the cluster.

    It's not a dependency resource of the AGO1 group.

    December 31, 2019 6:39
  • Hello,

    Based on your error message, you need to collect the cluster log to find out which component failed to respond in time, and why.

    For how to generate the cluster log, you can refer to this article; then, based on your log information, we can work out a solution together.


    MSDN Community Support
    Please remember to click "Mark as Answer" the responses that resolved your issue, and to click "Unmark as Answer" if not. This can be beneficial to other community members reading this thread. If you have any compliments or complaints to MSDN Support, feel free to contact MSDNFSF@microsoft.com.

    December 31, 2019 6:53
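Once the cluster log has been generated (the Get-ClusterLog PowerShell cmdlet writes a cluster.log per node under C:\Windows\Cluster\Reports by default), a quick first pass is to pull out only the ERR and WARN entries around the failure window. This is a minimal Python sketch, assuming the default cluster.log line format shown later in this thread; the function name and time window are illustrative:

```python
import re
from datetime import datetime

# Matches the default cluster.log line format:
# <pid>.<tid>::<yyyy/mm/dd-HH:MM:SS.fff> <LEVEL> <message>
LINE_RE = re.compile(
    r"^[0-9a-f]+\.[0-9a-f]+::"
    r"(\d{4}/\d{2}/\d{2}-\d{2}:\d{2}:\d{2}\.\d{3})\s+"
    r"(INFO|WARN|ERR)\s+(.*)$"
)

def failures(lines, start, end):
    """Yield (timestamp, level, message) for non-INFO entries inside [start, end]."""
    for line in lines:
        m = LINE_RE.match(line.strip())
        if not m:
            continue
        ts = datetime.strptime(m.group(1), "%Y/%m/%d-%H:%M:%S.%f")
        level, message = m.group(2), m.group(3)
        if level != "INFO" and start <= ts <= end:
            yield ts, level, message

# Two sample lines taken from the cluster log posted later in this thread:
sample = [
    "000028fc.00005448::2019/12/30-17:17:07.289 ERR   [RES] SQL Server Availability Group: [hadrag] Failure detected, diagnostics heartbeat is lost",
    "000028fc.00005448::2019/12/30-17:17:07.289 WARN  [RHS] Resource AG01 IsAlive has indicated failure.",
]
window = (datetime(2019, 12, 30, 17, 0), datetime(2019, 12, 30, 18, 0))
for ts, level, message in failures(sample, *window):
    print(ts, level, message)
```

To scan a real collected log, pass an open file handle instead of the sample list, e.g. `failures(open("cluster.log", errors="replace"), *window)`.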
  • Link: https://pan.baidu.com/s/1pcIEnniD8MXCfuPKIRkXNQ  Extraction code: ms3x

    Hello, the log has been uploaded. sqldb04 is the primary node; I mounted an iSCSI disk on it, but yesterday the disk kept going online and offline, and afterwards AlwaysOn became unavailable.

    December 31, 2019 7:38
  • AlwaysOn became unavailable at around 17:17.
    December 31, 2019 7:39
  • 000010ac.00005358::2019/12/30-15:48:11.908 INFO  [STM]: Got device removal notification
    00002a20.000038cc::2019/12/30-15:48:11.908 INFO  [RES] Physical Disk: PNP: \\?\UPIO#DiskHuaweiUltraPath_________________________1.001___#1&146a97f6&0&3661633735316431303032383235396135363663313164623030303030303139#{53f56307-b6bf-11d0-94f2-00a0c91efb8b} disk disappeared
    00002a20.000014d4::2019/12/30-15:48:12.125 INFO  [RES] Physical Disk: PNP: \\?\STORAGE#Volume#{4d94e9d9-b303-11e9-b38d-745aaa5dbddb}#0000000000004400#{7f108a28-9833-4b3b-b780-2c6b5fa5c062} volume disappeared
    00002a20.000014d4::2019/12/30-15:48:12.125 INFO  [RES] Physical Disk: PnpRemoveVolume: Removing volume \\?\STORAGE#Volume#{4d94e9d9-b303-11e9-b38d-745aaa5dbddb}#0000000000004400#{7f108a28-9833-4b3b-b780-2c6b5fa5c062}
    00002a20.000014d4::2019/12/30-15:48:12.125 INFO  [RES] Physical Disk: PNPDEBUG: CM_Unregister_Notification handle 00000206BBEB0760
    00002a20.000014d4::2019/12/30-15:48:12.125 INFO  [RES] Physical Disk: PNP: \\?\STORAGE#Volume#{4d94e9d9-b303-11e9-b38d-745aaa5dbddb}#0000000008100000#{53f5630d-b6bf-11d0-94f2-00a0c91efb8b} volume disappeared
    00002a20.000014d4::2019/12/30-15:48:12.125 INFO  [RES] Physical Disk: PnpRemoveVolume: Removing volume \\?\STORAGE#Volume#{4d94e9d9-b303-11e9-b38d-745aaa5dbddb}#0000000008100000#{53f5630d-b6bf-11d0-94f2-00a0c91efb8b}
    00002a20.000014d4::2019/12/30-15:48:12.126 INFO  [RES] Physical Disk: PNPDEBUG: CM_Unregister_Notification handle 00000206BBEAF880
    000010ac.00005358::2019/12/30-16:17:58.694 INFO  [Cert] Current cert from DB is installed, expiration: 2020/07/09-09:44:17.000
    000010ac.000057d8::2019/12/30-17:16:45.807 INFO  [STM]: Got device arrival notification
    00002a20.000059c4::2019/12/30-17:16:45.808 INFO  [RES] Physical Disk: PNP: \\?\UPIO#DiskHuaweiUltraPath_________________________1.001___#1&146a97f6&0&3661633735316431303032383235396135363663313164623030303030303139#{53f56307-b6bf-11d0-94f2-00a0c91efb8b} disk arrived
    00002a20.00004ea8::2019/12/30-17:16:45.891 INFO  [RES] Physical Disk: PNP: \\?\STORAGE#Volume#{4d94e9d9-b303-11e9-b38d-745aaa5dbddb}#0000000000004400#{7f108a28-9833-4b3b-b780-2c6b5fa5c062} volume arrived
    00002a20.00004ea8::2019/12/30-17:16:45.891 INFO  [RES] Physical Disk: PnpAddVolume: Adding volume \\?\STORAGE#Volume#{4d94e9d9-b303-11e9-b38d-745aaa5dbddb}#0000000000004400#{7f108a28-9833-4b3b-b780-2c6b5fa5c062}
    00002a20.00004ea8::2019/12/30-17:16:45.896 INFO  [RES] Physical Disk: PNPDEBUG: RegisterWNFDeviceHandle: device handle 00000206BBEB0100, status 0
    00002a20.00004ea8::2019/12/30-17:16:45.896 INFO  [RES] Physical Disk: PnpAddVolume: Add Volume exit, status 0
    00002a20.00004ea8::2019/12/30-17:16:46.130 INFO  [RES] Physical Disk: PNP: \\?\STORAGE#Volume#{4d94e9d9-b303-11e9-b38d-745aaa5dbddb}#0000000008100000#{53f5630d-b6bf-11d0-94f2-00a0c91efb8b} volume arrived
    00002a20.00004ea8::2019/12/30-17:16:46.130 INFO  [RES] Physical Disk: PnpAddVolume: Adding volume \\?\STORAGE#Volume#{4d94e9d9-b303-11e9-b38d-745aaa5dbddb}#0000000008100000#{53f5630d-b6bf-11d0-94f2-00a0c91efb8b}
    00002a20.00004ea8::2019/12/30-17:16:46.131 INFO  [RES] Physical Disk: PNPDEBUG: RegisterWNFDeviceHandle: device handle 00000206BBEB1640, status 0
    00002a20.00004ea8::2019/12/30-17:16:46.131 INFO  [RES] Physical Disk: PnpAddVolume: Add Volume exit, status 0
    000010ac.000023d4::2019/12/30-17:17:00.507 INFO  [GUM] Node 1: Processing RequestLock 2:165
    000010ac.000023d4::2019/12/30-17:17:00.507 INFO  [GUM] Node 1: Processing GrantLock to 2 (sent by 1 gumid: 7608)
    000010ac.000057d8::2019/12/30-17:17:00.508 INFO  [GUM] Node 1: Executing locally gumId: 7609, updates: 1, first action: /dm/update
    000028fc.00005448::2019/12/30-17:17:07.289 ERR   [RES] SQL Server Availability Group: [hadrag] Failure detected, diagnostics heartbeat is lost
    000028fc.00005448::2019/12/30-17:17:07.289 ERR   [RES] SQL Server Availability Group <AG01>: [hadrag] Availability Group is not healthy with given HealthCheckTimeout and FailureConditionLevel
    000028fc.00005448::2019/12/30-17:17:07.289 ERR   [RES] SQL Server Availability Group <AG01>: [hadrag] Resource Alive result 0.
    000028fc.00005448::2019/12/30-17:17:07.289 ERR   [RES] SQL Server Availability Group: [hadrag] Failure detected, diagnostics heartbeat is lost
    000028fc.00005448::2019/12/30-17:17:07.289 ERR   [RES] SQL Server Availability Group <AG01>: [hadrag] Availability Group is not healthy with given HealthCheckTimeout and FailureConditionLevel
    000028fc.00005448::2019/12/30-17:17:07.289 ERR   [RES] SQL Server Availability Group <AG01>: [hadrag] Resource Alive result 0.
    000028fc.00005448::2019/12/30-17:17:07.289 WARN  [RHS] Resource AG01 IsAlive has indicated failure.
    000028fc.00005448::2019/12/30-17:17:07.290 INFO  [RHS-WER] Scheduling WER ERROR report in 10.000. ReportId 0d7eba4d-610d-4b57-a6d9-b0ded59f5387;
    000010ac.000057d8::2019/12/30-17:17:07.290 INFO  [RCM] HandleMonitorReply: FAILURENOTIFICATION for 'AG01', gen(2) result 1/0.
    000010ac.000057d8::2019/12/30-17:17:07.290 INFO  [RCM] Res AG01: Online -> ProcessingFailure( StateUnknown )
    000010ac.000057d8::2019/12/30-17:17:07.290 INFO  [RCM] TransitionToState(AG01) Online-->ProcessingFailure.
    000010ac.000057d8::2019/12/30-17:17:07.290 INFO  [RCM] rcm::RcmGroup::UpdateStateIfChanged: (AG01, Online --> Pending)
    000010ac.000057d8::2019/12/30-17:17:07.290 ERR   [RCM] rcm::RcmResource::HandleFailure: (AG01)
    000010ac.000057d8::2019/12/30-17:17:07.290 INFO  [RCM] resource AG01: failure count: 1, restartAction: 2 persistentState: 1.
    000010ac.000057d8::2019/12/30-17:17:07.290 INFO  [RCM] numDependents is zero, auto-returning true
    000010ac.000057d8::2019/12/30-17:17:07.290 INFO  [RCM] Greater than restartPeriod time has elapsed since first failure of AG01, resetting failureTime and failureCount.
    000010ac.000057d8::2019/12/30-17:17:07.290 INFO  [RCM] Will queue immediate restart (500 milliseconds) of AG01 after terminate is complete.
    000010ac.000057d8::2019/12/30-17:17:07.290 INFO  [RCM] Res AG01: ProcessingFailure -> WaitingToTerminate( DelayRestartingResource )
    000010ac.000057d8::2019/12/30-17:17:07.290 INFO  [RCM] TransitionToState(AG01) ProcessingFailure-->[WaitingToTerminate to DelayRestartingResource].
    000010ac.000057d8::2019/12/30-17:17:07.290 INFO  [RCM] Res AG01: [WaitingToTerminate to DelayRestartingResource] -> Terminating( DelayRestartingResource )
    000010ac.000057d8::2019/12/30-17:17:07.290 INFO  [RCM] TransitionToState(AG01) [WaitingToTerminate to DelayRestartingResource]-->[Terminating to DelayRestartingResource].
    000028fc.00003434::2019/12/30-17:17:07.290 ERR   [RES] SQL Server Availability Group <AG01>: [hadrag] Lease Thread terminated
    000028fc.00005448::2019/12/30-17:17:07.290 INFO  [RES] SQL Server Availability Group: [hadrag] Stopping Health Worker Thread
    000028fc.0000435c::2019/12/30-17:17:07.290 INFO  [RES] SQL Server Availability Group: [hadrag] Health worker was asked to terminate
    000028fc.0000435c::2019/12/30-17:17:07.300 INFO  [RES] SQL Server Availability Group: [hadrag] Change diagnostics interval worker is stopped
    000010ac.00004310::2019/12/30-17:17:16.445 INFO  [GUM] Node 1: Processing RequestLock 1:7123
    000010ac.000023d4::2019/12/30-17:17:16.446 INFO  [GUM] Node 1: Processing GrantLock to 1 (sent by 2 gumid: 7609)
    000010ac.00004310::2019/12/30-17:17:16.446 INFO  [GUM] Node 1: executing request locally, gumId:7610, my action: /dm/update, # of updates: 1
    000010ac.000057d8::2019/12/30-17:17:16.451 INFO  [GUM] Node 1: executing request locally, gumId:7611, my action: /dm/update, # of updates: 1
    000010ac.00005358::2019/12/30-17:17:16.474 INFO  [GUM] Node 1: executing request locally, gumId:7612, my action: /dm/update, # of updates: 1
    000010ac.000059bc::2019/12/30-17:17:16.478 INFO  [GUM] Node 1: executing request locally, gumId:7613, my action: /dm/update, # of updates: 1
    000010ac.000057d8::2019/12/30-17:17:16.482 INFO  [GUM] Node 1: executing request locally, gumId:7614, my action: /dm/update, # of updates: 1
    000010ac.000059bc::2019/12/30-17:17:16.487 INFO  [GUM] Node 1: executing request locally, gumId:7615, my action: /dm/update, # of updates: 1
    000010ac.000059bc::2019/12/30-17:17:16.491 INFO  [GUM] Node 1: executing request locally, gumId:7616, my action: /dm/update, # of updates: 1
    000010ac.000059bc::2019/12/30-17:17:16.495 INFO  [GUM] Node 1: executing request locally, gumId:7617, my action: /dm/update, # of updates: 1
    000010ac.000057d8::2019/12/30-17:17:16.499 INFO  [GUM] Node 1: executing request locally, gumId:7618, my action: /dm/update, # of updates: 1
    000010ac.000059bc::2019/12/30-17:17:16.499 INFO  [RCM] rcm::RcmApi::AddPossibleOwner: (AG01, 1)
    000010ac.000059bc::2019/12/30-17:17:16.500 INFO  [GUM] Node 1: executing request locally, gumId:7619, my action: /rcm/gum/AddPossibleOwner, # of updates: 1
    000010ac.000059bc::2019/12/30-17:17:16.500 INFO  [RCM] rcm::RcmGum::AddPossibleOwner(AG01,1)
    000010ac.000059bc::2019/12/30-17:17:16.500 ERR   mscs::GumAgent::ExecuteHandlerLocally: (5010)' because of 'The specified node is already a possible owner.'
    000010ac.000059bc::2019/12/30-17:17:16.500 WARN  [DM] Aborting group transaction 32:32:10079+1
    000010ac.000059bc::2019/12/30-17:17:16.501 ERR   [RCM] rcm::RcmApi::AddPossibleOwner: (5010)' because of 'Gum handler completed as failed'
    000010ac.000057d8::2019/12/30-17:17:16.501 INFO  [RCM] rcm::RcmApi::MoveGroup: (Group:AG01 Dest:1 Flags:0 MoveType:MoveType::Manual Cur.State:Pending, ContextSize:0)
    000010ac.000059bc::2019/12/30-17:17:16.503 INFO  [GUM] Node 1: executing request locally, gumId:7619, my action: /dm/update, # of updates: 1
    000010ac.000057d8::2019/12/30-17:17:16.503 INFO  [RCM] rcm::RcmApi::OnlineResource: (AG01, 0)
    000010ac.000057d8::2019/12/30-17:17:16.503 ERR   [RCM] rcm::RcmApi::OnlineResource: (5023)' because of 'The API call is not valid while resource is in the [Terminating to DelayRestartingResource] state.'
    000010ac.000059bc::2019/12/30-17:17:16.506 INFO  [GUM] Node 1: executing request locally, gumId:7620, my action: /dm/update, # of updates: 1
    000010ac.000059bc::2019/12/30-17:17:16.510 INFO  [GUM] Node 1: executing request locally, gumId:7621, my action: /dm/update, # of updates: 1
    000010ac.000057d8::2019/12/30-17:17:16.514 INFO  [GUM] Node 1: executing request locally, gumId:7622, my action: /dm/update, # of updates: 1
    000010ac.000057d8::2019/12/30-17:17:16.517 INFO  [GUM] Node 1: executing request locally, gumId:7623, my action: /dm/update, # of updates: 1
    000010ac.000057d8::2019/12/30-17:17:16.520 INFO  [GUM] Node 1: executing request locally, gumId:7624, my action: /dm/update, # of updates: 1
    000010ac.000057d8::2019/12/30-17:17:16.524 INFO  [GUM] Node 1: executing request locally, gumId:7625, my action: /dm/update, # of updates: 1
    000010ac.000057d8::2019/12/30-17:17:16.527 INFO  [GUM] Node 1: executing request locally, gumId:7626, my action: /dm/update, # of updates: 1
    000010ac.000057d8::2019/12/30-17:17:16.531 INFO  [GUM] Node 1: executing request locally, gumId:7627, my action: /dm/update, # of updates: 1
    000010ac.000057d8::2019/12/30-17:17:16.535 INFO  [GUM] Node 1: executing request locally, gumId:7628, my action: /dm/update, # of updates: 1
    000010ac.000057d8::2019/12/30-17:17:16.538 INFO  [GUM] Node 1: executing request locally, gumId:7629, my action: /dm/update, # of updates: 1
    000010ac.000057d8::2019/12/30-17:17:16.542 INFO  [GUM] Node 1: executing request locally, gumId:7630, my action: /dm/update, # of updates: 1
    000010ac.000057d8::2019/12/30-17:17:16.545 INFO  [GUM] Node 1: executing request locally, gumId:7631, my action: /dm/update, # of updates: 1
    000010ac.000057d8::2019/12/30-17:17:16.549 INFO  [GUM] Node 1: executing request locally, gumId:7632, my action: /dm/update, # of updates: 1
    000010ac.000057d8::2019/12/30-17:17:16.552 INFO  [GUM] Node 1: executing request locally, gumId:7633, my action: /dm/update, # of updates: 1
    000010ac.000057d8::2019/12/30-17:17:16.556 INFO  [GUM] Node 1: executing request locally, gumId:7634, my action: /dm/update, # of updates: 1
    000010ac.000057d8::2019/12/30-17:17:16.560 INFO  [GUM] Node 1: executing request locally, gumId:7635, my action: /dm/update, # of updates: 1
    000010ac.000057d8::2019/12/30-17:17:16.563 INFO  [GUM] Node 1: executing request locally, gumId:7636, my action: /dm/update, # of updates: 1
    000010ac.000057d8::2019/12/30-17:17:16.567 INFO  [GUM] Node 1: executing request locally, gumId:7637, my action: /dm/update, # of updates: 1
    000010ac.000057d8::2019/12/30-17:17:16.570 INFO  [GUM] Node 1: executing request locally, gumId:7638, my action: /dm/update, # of updates: 1
    000010ac.000057d8::2019/12/30-17:17:16.574 INFO  [GUM] Node 1: executing request locally, gumId:7639, my action: /dm/update, # of updates: 1
    000010ac.000057d8::2019/12/30-17:17:16.578 INFO  [GUM] Node 1: executing request locally, gumId:7640, my action: /dm/update, # of updates: 1
    000010ac.000057d8::2019/12/30-17:17:16.581 INFO  [GUM] Node 1: executing request locally, gumId:7641, my action: /dm/update, # of updates: 1
    000010ac.000057d8::2019/12/30-17:17:16.584 INFO  [GUM] Node 1: executing request locally, gumId:7642, my action: /dm/update, # of updates: 1
    000010ac.000057d8::2019/12/30-17:17:16.588 INFO  [GUM] Node 1: executing request locally, gumId:7643, my action: /dm/update, # of updates: 1
    000010ac.000057d8::2019/12/30-17:17:16.592 INFO  [GUM] Node 1: executing request locally, gumId:7644, my action: /dm/update, # of updates: 1
    000010ac.000057d8::2019/12/30-17:17:16.596 INFO  [GUM] Node 1: executing request locally, gumId:7645, my action: /dm/update, # of updates: 1
    000010ac.000057d8::2019/12/30-17:17:16.600 INFO  [GUM] Node 1: executing request locally, gumId:7646, my action: /dm/update, # of updates: 1
    000010ac.000057d8::2019/12/30-17:17:16.603 INFO  [GUM] Node 1: executing request locally, gumId:7647, my action: /dm/update, # of updates: 1
    000010ac.000057d8::2019/12/30-17:17:16.608 INFO  [GUM] Node 1: executing request locally, gumId:7648, my action: /dm/update, # of updates: 1
    000010ac.000057d8::2019/12/30-17:17:16.611 INFO  [GUM] Node 1: executing request locally, gumId:7649, my action: /dm/update, # of updates: 1
    000010ac.000057d8::2019/12/30-17:17:16.615 INFO  [GUM] Node 1: executing request locally, gumId:7650, my action: /dm/update, # of updates: 1
    000010ac.000057d8::2019/12/30-17:17:16.618 INFO  [GUM] Node 1: executing request locally, gumId:7651, my action: /dm/update, # of updates: 1
    000010ac.000057d8::2019/12/30-17:17:16.622 INFO  [GUM] Node 1: executing request locally, gumId:7652, my action: /dm/update, # of updates: 1
    000010ac.000057d8::2019/12/30-17:17:16.626 INFO  [GUM] Node 1: executing request locally, gumId:7653, my action: /dm/update, # of updates: 1
    000010ac.000057d8::2019/12/30-17:17:16.630 INFO  [GUM] Node 1: executing request locally, gumId:7654, my action: /dm/update, # of updates: 1
    000010ac.000057d8::2019/12/30-17:17:16.633 INFO  [GUM] Node 1: executing request locally, gumId:7655, my action: /dm/update, # of updates: 1
    000010ac.000057d8::2019/12/30-17:17:16.637 INFO  [GUM] Node 1: executing request locally, gumId:7656, my action: /dm/update, # of updates: 1
    000010ac.000057d8::2019/12/30-17:17:16.640 INFO  [GUM] Node 1: executing request locally, gumId:7657, my action: /dm/update, # of updates: 1
    000010ac.000057d8::2019/12/30-17:17:16.644 INFO  [GUM] Node 1: executing request locally, gumId:7658, my action: /dm/update, # of updates: 1
    000010ac.000057d8::2019/12/30-17:17:16.647 INFO  [GUM] Node 1: executing request locally, gumId:7659, my action: /dm/update, # of updates: 1
    000010ac.000057d8::2019/12/30-17:17:16.651 INFO  [GUM] Node 1: executing request locally, gumId:7660, my action: /dm/update, # of updates: 1
    000010ac.000057d8::2019/12/30-17:17:16.655 INFO  [GUM] Node 1: executing request locally, gumId:7661, my action: /dm/update, # of updates: 1
    000010ac.000057d8::2019/12/30-17:17:16.658 INFO  [GUM] Node 1: executing request locally, gumId:7662, my action: /dm/update, # of updates: 1
    000010ac.000057d8::2019/12/30-17:17:16.662 INFO  [GUM] Node 1: executing request locally, gumId:7663, my action: /dm/update, # of updates: 1
    000010ac.000057d8::2019/12/30-17:17:16.665 INFO  [GUM] Node 1: executing request locally, gumId:7664, my action: /dm/update, # of updates: 1
    000010ac.000057d8::2019/12/30-17:17:16.669 INFO  [GUM] Node 1: executing request locally, gumId:7665, my action: /dm/update, # of updates: 1
    000010ac.000057d8::2019/12/30-17:17:16.672 INFO  [GUM] Node 1: executing request locally, gumId:7666, my action: /dm/update, # of updates: 1
    000010ac.000057d8::2019/12/30-17:17:16.676 INFO  [GUM] Node 1: executing request locally, gumId:7667, my action: /dm/update, # of updates: 1
    000010ac.000057d8::2019/12/30-17:17:16.679 INFO  [GUM] Node 1: executing request locally, gumId:7668, my action: /dm/update, # of updates: 1
    000010ac.000057d8::2019/12/30-17:17:16.683 INFO  [GUM] Node 1: executing request locally, gumId:7669, my action: /dm/update, # of updates: 1
    000028fc.00005988::2019/12/30-17:17:17.289 INFO  [RHS-WER] -1774746764 milliseconds (-1774746 seconds) passed since last WER ERROR report 31fd81b6-daff-480c-8f83-7ed7bdfcf05f for the resource type SQL Server Availability Group call type ISALIVE. Throttling this report
    000010ac.000059bc::2019/12/30-17:17:21.504 INFO  [RCM] rcm::RcmApi::OnlineResource: (AG01, 0)
    000010ac.000059bc::2019/12/30-17:17:21.505 ERR   [RCM] rcm::RcmApi::OnlineResource: (5023)' because of 'The API call is not valid while resource is in the [Terminating to DelayRestartingResource] state.'
    000010ac.000059bc::2019/12/30-17:17:26.506 INFO  [RCM] rcm::RcmApi::OnlineResource: (AG01, 0)
    000010ac.000059bc::2019/12/30-17:17:26.506 ERR   [RCM] rcm::RcmApi::OnlineResource: (5023)' because of 'The API call is not valid while resource is in the [Terminating to DelayRestartingResource] state.'
    000010ac.000059bc::2019/12/30-17:17:31.507 INFO  [RCM] rcm::RcmApi::OnlineResource: (AG01, 0)
    000010ac.000059bc::2019/12/30-17:17:31.508 ERR   [RCM] rcm::RcmApi::OnlineResource: (5023)' because of 'The API call is not valid while resource is in the [Terminating to DelayRestartingResource] state.'
    000010ac.000059bc::2019/12/30-17:17:36.509 INFO  [RCM] rcm::RcmApi::OnlineResource: (AG01, 0)
    000010ac.000059bc::2019/12/30-17:17:36.509 ERR   [RCM] rcm::RcmApi::OnlineResource: (5023)' because of 'The API call is not valid while resource is in the [Terminating to DelayRestartingResource] state.'
    000010ac.000057d8::2019/12/30-17:17:41.510 INFO  [RCM] rcm::RcmApi::OnlineResource: (AG01, 0)
    000010ac.000057d8::2019/12/30-17:17:41.511 ERR   [RCM] rcm::RcmApi::OnlineResource: (5023)' because of 'The API call is not valid while resource is in the [Terminating to DelayRestartingResource] state.'
    000010ac.000057d8::2019/12/30-17:17:46.512 INFO  [RCM] rcm::RcmApi::OnlineResource: (AG01, 0)
    000010ac.000057d8::2019/12/30-17:17:46.512 ERR   [RCM] rcm::RcmApi::OnlineResource: (5023)' because of 'The API call is not valid while resource is in the [Terminating to DelayRestartingResource] state.'
    000010ac.000057d8::2019/12/30-17:17:51.514 INFO  [RCM] rcm::RcmApi::OnlineResource: (AG01, 0)
    000010ac.000057d8::2019/12/30-17:17:51.514 ERR   [RCM] rcm::RcmApi::OnlineResource: (5023)' because of 'The API call is not valid while resource is in the [Terminating to DelayRestartingResource] state.'
    000010ac.000057d8::2019/12/30-17:17:56.515 INFO  [RCM] rcm::RcmApi::OnlineResource: (AG01, 0)
    000010ac.000057d8::2019/12/30-17:17:56.515 ERR   [RCM] rcm::RcmApi::OnlineResource: (5023)' because of 'The API call is not valid while resource is in the [Terminating to DelayRestartingResource] state.'
    000010ac.000057d8::2019/12/30-17:18:01.517 INFO  [RCM] rcm::RcmApi::OnlineResource: (AG01, 0)
    000010ac.000057d8::2019/12/30-17:18:01.517 ERR   [RCM] rcm::RcmApi::OnlineResource: (5023)' because of 'The API call is not valid while resource is in the [Terminating to DelayRestartingResource] state.'
    000010ac.000059bc::2019/12/30-17:18:06.518 INFO  [RCM] rcm::RcmApi::OnlineResource: (AG01, 0)
    000010ac.000059bc::2019/12/30-17:18:06.518 ERR   [RCM] rcm::RcmApi::OnlineResource: (5023)' because of 'The API call is not valid while resource is in the [Terminating to DelayRestartingResource] state.'
    000010ac.000059bc::2019/12/30-17:18:11.521 INFO  [RCM] rcm::RcmApi::OnlineResource: (AG01, 0)
    000010ac.000059bc::2019/12/30-17:18:11.521 ERR   [RCM] rcm::RcmApi::OnlineResource: (5023)' because of 'The API call is not valid while resource is in the [Terminating to DelayRestartingResource] state.'
    000010ac.000059bc::2019/12/30-17:18:16.522 INFO  [RCM] rcm::RcmApi::OnlineResource: (AG01, 0)
    000010ac.000059bc::2019/12/30-17:18:16.522 ERR   [RCM] rcm::RcmApi::OnlineResource: (5023)' because of 'The API call is not valid while resource is in the [Terminating to DelayRestartingResource] state.'
    000010ac.000057d8::2019/12/30-17:18:21.525 INFO  [RCM] rcm::RcmApi::OnlineResource: (AG01, 0)
    000010ac.000057d8::2019/12/30-17:18:21.525 ERR   [RCM] rcm::RcmApi::OnlineResource: (5023)' because of 'The API call is not valid while resource is in the [Terminating to DelayRestartingResource] state.'
    000010ac.000059bc::2019/12/30-17:18:26.526 INFO  [RCM] rcm::RcmApi::OnlineResource: (AG01, 0)
    000010ac.000059bc::2019/12/30-17:18:26.527 ERR   [RCM] rcm::RcmApi::OnlineResource: (5023)' because of 'The API call is not valid while resource is in the [Terminating to DelayRestartingResource] state.'
    000010ac.000057d8::2019/12/30-17:18:31.528 INFO  [RCM] rcm::RcmApi::OnlineResource: (AG01, 0)
    000010ac.000057d8::2019/12/30-17:18:31.528 ERR   [RCM] rcm::RcmApi::OnlineResource: (5023)' because of 'The API call is not valid while resource is in the [Terminating to DelayRestartingResource] state.'
    000010ac.000057d8::2019/12/30-17:18:36.529 INFO  [RCM] rcm::RcmApi::OnlineResource: (AG01, 0)
    000010ac.000057d8::2019/12/30-17:18:36.530 ERR   [RCM] rcm::RcmApi::OnlineResource: (5023)' because of 'The API call is not valid while resource is in the [Terminating to DelayRestartingResource] state.'
    000010ac.000059bc::2019/12/30-17:18:41.531 INFO  [RCM] rcm::RcmApi::OnlineResource: (AG01, 0)
    000010ac.000059bc::2019/12/30-17:18:41.531 ERR   [RCM] rcm::RcmApi::OnlineResource: (5023)' because of 'The API call is not valid while resource is in the [Terminating to DelayRestartingResource] state.'
    000010ac.000057d8::2019/12/30-17:18:46.532 INFO  [RCM] rcm::RcmApi::OnlineResource: (AG01, 0)
    000010ac.000057d8::2019/12/30-17:18:46.533 ERR   [RCM] rcm::RcmApi::OnlineResource: (5023)' because of 'The API call is not valid while resource is in the [Terminating to DelayRestartingResource] state.'
    000010ac.000057d8::2019/12/30-17:18:51.534 INFO  [RCM] rcm::RcmApi::OnlineResource: (AG01, 0)
    000010ac.000057d8::2019/12/30-17:18:51.534 ERR   [RCM] rcm::RcmApi::OnlineResource: (5023)' because of 'The API call is not valid while resource is in the [Terminating to DelayRestartingResource] state.'
    000010ac.000057d8::2019/12/30-17:18:56.535 INFO  [RCM] rcm::RcmApi::OnlineResource: (AG01, 0)
    000010ac.000057d8::2019/12/30-17:18:56.536 ERR   [RCM] rcm::RcmApi::OnlineResource: (5023)' because of 'The API call is not valid while resource is in the [Terminating to DelayRestartingResource] state.'
    000010ac.000057d8::2019/12/30-17:19:01.537 INFO  [RCM] rcm::RcmApi::OnlineResource: (AG01, 0)
    000010ac.000057d8::2019/12/30-17:19:01.537 ERR   [RCM] rcm::RcmApi::OnlineResource: (5023)' because of 'The API call is not valid while resource is in the [Terminating to DelayRestartingResource] state.'
    000010ac.000059bc::2019/12/30-17:19:06.539 INFO  [RCM] rcm::RcmApi::OnlineResource: (AG01, 0)
    000010ac.000059bc::2019/12/30-17:19:06.539 ERR   [RCM] rcm::RcmApi::OnlineResource: (5023)' because of 'The API call is not valid while resource is in the [Terminating to DelayRestartingResource] state.'
    000010ac.000057d8::2019/12/30-17:19:11.540 INFO  [RCM] rcm::RcmApi::OnlineResource: (AG01, 0)
    000010ac.000057d8::2019/12/30-17:19:11.540 ERR   [RCM] rcm::RcmApi::OnlineResource: (5023)' because of 'The API call is not valid while resource is in the [Terminating to DelayRestartingResource] state.'
    000028fc.000029b0::2019/12/30-17:22:07.311 ERR   [RHS - Timeout] Resource 'AG01' has not responded to the call TERMINATERESOURCE:0. The timeout to respond has been exceeded by 16 milliseconds, taking recovery actions.
    000028fc.000029b0::2019/12/30-17:22:07.311 INFO  [RHS] Enabling a watchdog to ensure RHS termination completes successfully with timeout 1200000 and recovery action 3 from source 5.
    000028fc.000029b0::2019/12/30-17:22:07.311 ERR   [RHS - Timeout] Health Monitoring Failure : Resource AG01 is not functioning as expected. Cancelling current operation and terminating the hosting RHS process to reload and recover the resource.
    000010ac.000057d8::2019/12/30-17:22:07.311 WARN  [RCM] HandleMonitorReply: FAILURENOTIFICATION for 'AG01', gen(3) result 4/0.
    000028fc.000029b0::2019/12/30-17:22:07.311 INFO  [RHS-WER] About to send WER HANG report. Dump policy 0x137701110; ReportId b56d53f5-04a9-4baf-b805-92c199f34712
    000010ac.000057d8::2019/12/30-17:22:07.312 WARN  [RCM] rcm::RcmResource::HandleMonitorReply: Resource 'AG01' has crashed or timedout; marking it to run in a separate monitor.
    000010ac.000057d8::2019/12/30-17:22:07.312 INFO  [RCM] rcm::RcmResource::HandleMonitorReply: Resource 'AG01' consecutive failure count 1.
    000028fc.000029b0::2019/12/30-17:22:07.312 INFO  [RHS-WER] Trying to capture RHS process snapshot.
    000028fc.000029b0::2019/12/30-17:22:07.496 INFO  [RHS-WER] WER adding RHS dump 2 from RHS snapshot.
    000028fc.000029b0::2019/12/30-17:22:07.713 INFO  [RHS-WER] WER adding clussvc dump 2 from clussvc snapshot.
    000028fc.00005a48::2019/12/30-17:22:07.714 INFO  [RHS-WER] Capturing log using query <QueryList><Query Id="0"><Select Path="System">*[System[TimeCreated[timediff(@SystemTime) &lt;= 86400000]]]</Select></Query></QueryList> to C:\Windows\Cluster\Reports\CLUSWER_RHS_HANG_b56d53f5-04a9-4baf-b805-92c199f34712_1.evtx.
    000028fc.00005bd4::2019/12/30-17:22:07.714 INFO  [RHS-WER] Capturing log using query <QueryList><Query Id="0"><Select Path="Application">*[System[TimeCreated[timediff(@SystemTime) &lt;= 86400000]]]</Select></Query></QueryList> to C:\Windows\Cluster\Reports\CLUSWER_RHS_HANG_b56d53f5-04a9-4baf-b805-92c199f34712_2.evtx.
    000028fc.000047fc::2019/12/30-17:22:07.715 INFO  [RHS-WER] Capturing log using query <QueryList><Query Id="0"><Select Path="Microsoft-Windows-FailoverClustering/Operational">*[System[TimeCreated[timediff(@SystemTime) &lt;= 86400000]]]</Select></Query></QueryList> to C:\Windows\Cluster\Reports\CLUSWER_RHS_HANG_b56d53f5-04a9-4baf-b805-92c199f34712_3.evtx.
    000028fc.00005220::2019/12/30-17:22:07.715 INFO  [RHS-WER] Capturing log using query <QueryList><Query Id="0"><Select Path="Microsoft-Windows-Kernel-IoTrace/Diagnostic">*[System[TimeCreated[timediff(@SystemTime) &lt;= 600000]]]</Select></Query></QueryList> to C:\Windows\Cluster\Reports\CLUSWER_RHS_HANG_b56d53f5-04a9-4baf-b805-92c199f34712_4.evtx.
    000028fc.000054e8::2019/12/30-17:22:07.715 INFO  [RHS-WER] Capturing log using query <QueryList><Query Id="0"><Select Path="Microsoft-Windows-RPC/Debug">*[System[TimeCreated[timediff(@SystemTime) &lt;= 600000]]]</Select></Query></QueryList> to C:\Windows\Cluster\Reports\CLUSWER_RHS_HANG_b56d53f5-04a9-4baf-b805-92c199f34712_5.evtx.
    000028fc.00004898::2019/12/30-17:22:07.715 INFO  [RHS-WER] Capturing log using query <QueryList><Query Id="0"><Select Path="Microsoft-Windows-FailoverClustering/Diagnostic">*[System[TimeCreated[timediff(@SystemTime) &lt;= 600000]]]</Select></Query></QueryList> to C:\Windows\Cluster\Reports\CLUSWER_RHS_HANG_b56d53f5-04a9-4baf-b805-92c199f34712_6.evtx.
    000028fc.00005220::2019/12/30-17:22:07.719 INFO  [RHS-WER] Capture C:\Windows\Cluster\Reports\CLUSWER_RHS_HANG_b56d53f5-04a9-4baf-b805-92c199f34712_4.evtx completed.
    000028fc.00005220::2019/12/30-17:22:07.719 INFO  [RHS-WER] Capturing log using query <QueryList><Query Id="0"><Select Path="Microsoft-Windows-FailoverClustering/DiagnosticVerbose">*[System[TimeCreated[timediff(@SystemTime) &lt;= 300000]]]</Select></Query></QueryList> to C:\Windows\Cluster\Reports\CLUSWER_RHS_HANG_b56d53f5-04a9-4baf-b805-92c199f34712_7.evtx.
    000028fc.000054e8::2019/12/30-17:22:07.721 INFO  [RHS-WER] Capture C:\Windows\Cluster\Reports\CLUSWER_RHS_HANG_b56d53f5-04a9-4baf-b805-92c199f34712_5.evtx completed.
    000028fc.000054e8::2019/12/30-17:22:07.722 INFO  [RHS-WER] Capturing log using query <QueryList><Query Id="0"><Select Path="Microsoft-Windows-FailoverClustering-CsvFs/Operational">*[System[TimeCreated[timediff(@SystemTime) &lt;= 86400000]]]</Select></Query></QueryList> to C:\Windows\Cluster\Reports\CLUSWER_RHS_HANG_b56d53f5-04a9-4baf-b805-92c199f34712_8.evtx.
    000028fc.000054e8::2019/12/30-17:22:07.726 INFO  [RHS-WER] Capture C:\Windows\Cluster\Reports\CLUSWER_RHS_HANG_b56d53f5-04a9-4baf-b805-92c199f34712_8.evtx completed.
    000028fc.000054e8::2019/12/30-17:22:07.726 INFO  [RHS-WER] Capturing log using query <QueryList><Query Id="0"><Select Path="Microsoft-Windows-FailoverClustering-NetFt/Operational">*[System[TimeCreated[timediff(@SystemTime) &lt;= 86400000]]]</Select></Query></QueryList> to C:\Windows\Cluster\Reports\CLUSWER_RHS_HANG_b56d53f5-04a9-4baf-b805-92c199f34712_9.evtx.
    000028fc.000054e8::2019/12/30-17:22:07.745 INFO  [RHS-WER] Capture C:\Windows\Cluster\Reports\CLUSWER_RHS_HANG_b56d53f5-04a9-4baf-b805-92c199f34712_9.evtx completed.
    000028fc.000054e8::2019/12/30-17:22:07.745 INFO  [RHS-WER] Capturing log using query <QueryList><Query Id="0"><Select Path="Microsoft-Windows-ClusterAwareUpdating/Admin">*[System[TimeCreated[timediff(@SystemTime) &lt;= 86400000]]]</Select></Query></QueryList> to C:\Windows\Cluster\Reports\CLUSWER_RHS_HANG_b56d53f5-04a9-4baf-b805-92c199f34712_10.evtx.
    000028fc.000054e8::2019/12/30-17:22:07.748 INFO  [RHS-WER] Capture C:\Windows\Cluster\Reports\CLUSWER_RHS_HANG_b56d53f5-04a9-4baf-b805-92c199f34712_10.evtx completed.
    000028fc.000054e8::2019/12/30-17:22:07.748 INFO  [RHS-WER] Capturing log using query <QueryList><Query Id="0"><Select Path="Microsoft-Windows-ClusterAwareUpdating-Management/Admin">*[System[TimeCreated[timediff(@SystemTime) &lt;= 86400000]]]</Select></Query></QueryList> to C:\Windows\Cluster\Reports\CLUSWER_RHS_HANG_b56d53f5-04a9-4baf-b805-92c199f34712_11.evtx.
    000028fc.000054e8::2019/12/30-17:22:07.751 INFO  [RHS-WER] Capture C:\Windows\Cluster\Reports\CLUSWER_RHS_HANG_b56d53f5-04a9-4baf-b805-92c199f34712_11.evtx completed.
    000028fc.000047fc::2019/12/30-17:22:07.766 INFO  [RHS-WER] Capture C:\Windows\Cluster\Reports\CLUSWER_RHS_HANG_b56d53f5-04a9-4baf-b805-92c199f34712_3.evtx completed.
    000028fc.00005bd4::2019/12/30-17:22:08.143 INFO  [RHS-WER] Capture C:\Windows\Cluster\Reports\CLUSWER_RHS_HANG_b56d53f5-04a9-4baf-b805-92c199f34712_2.evtx completed.
    000028fc.00005a48::2019/12/30-17:22:08.266 INFO  [RHS-WER] Capture C:\Windows\Cluster\Reports\CLUSWER_RHS_HANG_b56d53f5-04a9-4baf-b805-92c199f34712_1.evtx completed.
    000028fc.00005220::2019/12/30-17:22:08.439 INFO  [RHS-WER] Capture C:\Windows\Cluster\Reports\CLUSWER_RHS_HANG_b56d53f5-04a9-4baf-b805-92c199f34712_7.evtx completed.
    000028fc.00004898::2019/12/30-17:22:10.437 INFO  [RHS-WER] Capture C:\Windows\Cluster\Reports\CLUSWER_RHS_HANG_b56d53f5-04a9-4baf-b805-92c199f34712_6.evtx completed.
    000028fc.000029b0::2019/12/30-17:22:10.437 INFO  [RHS-WER] Added file C:\Windows\Cluster\Reports\CLUSWER_RHS_HANG_b56d53f5-04a9-4baf-b805-92c199f34712_1.evtx to WER report.
    000028fc.000029b0::2019/12/30-17:22:10.438 INFO  [RHS-WER] Added file C:\Windows\Cluster\Reports\CLUSWER_RHS_HANG_b56d53f5-04a9-4baf-b805-92c199f34712_2.evtx to WER report.
    000028fc.000029b0::2019/12/30-17:22:10.438 INFO  [RHS-WER] Added file C:\Windows\Cluster\Reports\CLUSWER_RHS_HANG_b56d53f5-04a9-4baf-b805-92c199f34712_3.evtx to WER report.
    000028fc.000029b0::2019/12/30-17:22:10.438 INFO  [RHS-WER] Added file C:\Windows\Cluster\Reports\CLUSWER_RHS_HANG_b56d53f5-04a9-4baf-b805-92c199f34712_4.evtx to WER report.
    000028fc.000029b0::2019/12/30-17:22:10.439 INFO  [RHS-WER] Added file C:\Windows\Cluster\Reports\CLUSWER_RHS_HANG_b56d53f5-04a9-4baf-b805-92c199f34712_5.evtx to WER report.
    000028fc.000029b0::2019/12/30-17:22:10.439 INFO  [RHS-WER] Added file C:\Windows\Cluster\Reports\CLUSWER_RHS_HANG_b56d53f5-04a9-4baf-b805-92c199f34712_6.evtx to WER report.
    000028fc.000029b0::2019/12/30-17:22:10.440 INFO  [RHS-WER] Added file C:\Windows\Cluster\Reports\CLUSWER_RHS_HANG_b56d53f5-04a9-4baf-b805-92c199f34712_7.evtx to WER report.
    000028fc.000029b0::2019/12/30-17:22:10.440 INFO  [RHS-WER] Added file C:\Windows\Cluster\Reports\CLUSWER_RHS_HANG_b56d53f5-04a9-4baf-b805-92c199f34712_8.evtx to WER report.
    000028fc.000029b0::2019/12/30-17:22:10.440 INFO  [RHS-WER] Added file C:\Windows\Cluster\Reports\CLUSWER_RHS_HANG_b56d53f5-04a9-4baf-b805-92c199f34712_9.evtx to WER report.
    000028fc.000029b0::2019/12/30-17:22:10.441 INFO  [RHS-WER] Added file C:\Windows\Cluster\Reports\CLUSWER_RHS_HANG_b56d53f5-04a9-4baf-b805-92c199f34712_10.evtx to WER report.
    000028fc.000029b0::2019/12/30-17:22:10.441 INFO  [RHS-WER] Added file C:\Windows\Cluster\Reports\CLUSWER_RHS_HANG_b56d53f5-04a9-4baf-b805-92c199f34712_11.evtx to WER report.
    000028fc.000029b0::2019/12/30-17:22:12.533 INFO  [RHS-WER] WER HANG report is submitted with flags 0x54. Result : WerReportQueued.
    000010ac.000059bc::2019/12/30-17:22:12.564 ERR   [RCM] rcm::RcmMonitor::RecoverProcess: Recovering monitor process 10492 / 0x28fc
    000010ac.000059bc::2019/12/30-17:22:12.570 INFO  [RCM] Created monitor process 15544 / 0x3cb8, IsDefaultMonitor::YesNonCore
    00003cb8.00004cf8::2019/12/30-17:22:12.593 INFO  [RHS] Initializing.
    000010ac.000059bc::2019/12/30-17:22:12.603 INFO  rcm::RcmMonitor::WaitForRhsToInitialize Process pid 0x3cb8 started normally
    000010ac.000059bc::2019/12/30-17:22:12.603 INFO  [RCM] About to initialize RPC handle
    000010ac.000059bc::2019/12/30-17:22:12.603 INFO  [RCM] Initialized RPC handle to value HDL( 28e03c2c3c0 )
    000010ac.000059bc::2019/12/30-17:22:12.604 INFO  [RCM] rcm::RcmMonitor::RestartResources: Monitor restart for resource AG01
    000010ac.000059bc::2019/12/30-17:22:12.604 INFO  [RCM] rcm::RcmResource::ReattachToMonitorProcess: (AG01, [Terminating to DelayRestartingResource])
    000010ac.000059bc::2019/12/30-17:22:12.604 INFO  [RCM] Separate monitor flag has changed for resource 'AG01'.  Now hosted by RHS process 0
    000010ac.000059bc::2019/12/30-17:22:12.609 INFO  [RCM] Created monitor process 19004 / 0x4a3c, IsDefaultMonitor::No
    00004a3c.000029f8::2019/12/30-17:22:12.632 INFO  [RHS] Initializing.
    000010ac.000059bc::2019/12/30-17:22:12.641 INFO  rcm::RcmMonitor::WaitForRhsToInitialize Process pid 0x4a3c started normally
    000010ac.000059bc::2019/12/30-17:22:12.641 INFO  [RCM] About to initialize RPC handle
    000010ac.000059bc::2019/12/30-17:22:12.641 INFO  [RCM] Initialized RPC handle to value HDL( 28e03c2c1a0 )
    00004a3c.00004880::2019/12/30-17:22:12.641 INFO  [RHS] Registering HDL( 25c19ffc8e0 ) as a valid RHS resource handle
    00004a3c.00004880::2019/12/30-17:22:12.641 INFO  [RHS] OpenResource: opening resource AG01 of type SQL Server Availability Group with handle HDL( 25c19ffc8e0 )
    000010ac.000059bc::2019/12/30-17:22:12.642 INFO  [RCM] Res AG01: [Terminating to DelayRestartingResource] -> WaitingToTerminate( DelayRestartingResource )
    000010ac.000059bc::2019/12/30-17:22:12.642 INFO  [RCM] TransitionToState(AG01) [Terminating to DelayRestartingResource]-->[WaitingToTerminate to DelayRestartingResource].
    000010ac.000059bc::2019/12/30-17:22:12.642 INFO  [RCM] Res AG01: [WaitingToTerminate to DelayRestartingResource] -> Terminating( DelayRestartingResource )
    000010ac.000059bc::2019/12/30-17:22:12.642 INFO  [RCM] TransitionToState(AG01) [WaitingToTerminate to DelayRestartingResource]-->[Terminating to DelayRestartingResource].
    00004a3c.00004f78::2019/12/30-17:22:12.642 INFO  [RHS] Waiting for Open call for AG01 to complete.
    00004a3c.00004880::2019/12/30-17:22:12.651 INFO  [RES] SQL Server Availability Group <AG01>: [hadrag] Open request
    000010ac.00005358::2019/12/30-17:22:12.654 INFO  [RCM] HandleMonitorReply: OPENRESOURCE for 'AG01', gen(3) result 0/0.
    00004a3c.00004f78::2019/12/30-17:22:12.654 INFO  [RES] SQL Server Availability Group <AG01>: [hadrag] Resource Host process (RHS.exe) might have been restarted.
    00004a3c.00004f78::2019/12/30-17:22:12.654 INFO  [RES] SQL Server Availability Group <AG01>: [hadrag] Issuing the Offline command due to monitor process restart.
    00004a3c.00004f78::2019/12/30-17:22:12.659 INFO  [RES] SQL Server Availability Group <AG01>: [hadrag] Connect to SQL Server ...
    00004a3c.00004f78::2019/12/30-17:22:12.714 INFO  [RES] SQL Server Availability Group <AG01>: [hadrag] The connection was established successfully
    00004a3c.00004f78::2019/12/30-17:22:12.717 INFO  [RES] SQL Server Availability Group <AG01>: [hadrag] Disconnect from SQL Server
    00004a3c.00004f78::2019/12/30-17:22:12.720 INFO  [RES] SQL Server Availability Group <AG01>: [hadrag] Offline call successful in Terminate function
    000010ac.000059bc::2019/12/30-17:22:12.720 INFO  [RCM] HandleMonitorReply: TERMINATERESOURCE for 'AG01', gen(3) result 0/0.
    000010ac.000059bc::2019/12/30-17:22:12.720 INFO  [RCM] Res AG01: [Terminating to DelayRestartingResource] -> DelayRestartingResource( StateUnknown )
    000010ac.000059bc::2019/12/30-17:22:12.720 INFO  [RCM] TransitionToState(AG01) [Terminating to DelayRestartingResource]-->DelayRestartingResource.
    000010ac.000059bc::2019/12/30-17:22:12.720 WARN  [RCM] Queueing immediate delay restart of resource AG01 in 500 ms.
    000010ac.000057d8::2019/12/30-17:22:13.220 INFO  [RCM] Delay-restarting AG01 and any waiting dependents.
    000010ac.000057d8::2019/12/30-17:22:13.221 INFO  [RCM-rbtr] giving default token to group AG01
    000010ac.000057d8::2019/12/30-17:22:13.221 INFO  [RCM-rbtr] giving default token to group AG01
    000010ac.000057d8::2019/12/30-17:22:13.221 INFO  [RCM] Res AG01: DelayRestartingResource -> OnlineCallIssued( StateUnknown )
    000010ac.000057d8::2019/12/30-17:22:13.221 INFO  [RCM] TransitionToState(AG01) DelayRestartingResource-->OnlineCallIssued.
    000010ac.00005358::2019/12/30-17:22:13.221 INFO  [RCM] Issuing Online(AG01) to RHS.
    000010ac.00005358::2019/12/30-17:22:13.221 INFO  [RCM] HandleMonitorReply: ONLINERESOURCE for 'AG01', gen(3) result 997/0.
    000010ac.00005358::2019/12/30-17:22:13.221 INFO  [RCM] Res AG01: OnlineCallIssued -> OnlinePending( StateUnknown )
    000010ac.00005358::2019/12/30-17:22:13.221 INFO  [RCM] TransitionToState(AG01) OnlineCallIssued-->OnlinePending.
    00004a3c.000059f0::2019/12/30-17:22:13.222 INFO  [RES] SQL Server Availability Group <AG01>: [hadrag] The DeadLockTimeout property has a value of 300000
    00004a3c.000059f0::2019/12/30-17:22:13.223 INFO  [RES] SQL Server Availability Group <AG01>: [hadrag] The PendingTimeout property has a value of 180000
    00004a3c.000059f0::2019/12/30-17:22:13.226 INFO  [RES] SQL Server Availability Group <AG01>: [hadrag] Connect to SQL Server ...
    00004a3c.000059f0::2019/12/30-17:22:13.255 INFO  [RES] SQL Server Availability Group <AG01>: [hadrag] The connection was established successfully
    00004a3c.000059f0::2019/12/30-17:22:13.256 INFO  [RES] SQL Server Availability Group <AG01>: [hadrag] Current SQL Instance is not part of Failover clustering
    00004a3c.000059f0::2019/12/30-17:22:13.256 INFO  [RES] SQL Server Availability Group: [hadrag] Starting Health Worker Thread
    000010ac.00005358::2019/12/30-17:22:24.715 INFO  [API] s_ApiGetQuorumResource final status 0.
    000010ac.000059bc::2019/12/30-17:22:24.819 WARN  [RCM] ResourceTypeChaseTheOwnerLoop::DoCall: ResType MSMQ's DLL is not present on this node.  Attempting to find a good node...
    000010ac.000059bc::2019/12/30-17:22:24.837 WARN  [RCM] ResourceTypeChaseTheOwnerLoop::DoCall: ResType MSMQTriggers's DLL is not present on this node.  Attempting to find a good node...
    00003cb8.00004428::2019/12/30-17:22:24.851 WARN  [RHS] Error 2 from resource type control for restype Storage Replica.
    000010ac.000059bc::2019/12/30-17:22:24.913 WARN  [RCM] ResourceTypeChaseTheOwnerLoop::DoCall: ResType MSMQ's DLL is not present on this node.  Attempting to find a good node...
    00003058.000023d0::2019/12/30-17:22:24.922 WARN  [RES] Network Name: [NNLIB] GetComputerDomain - Machine is not a domain member. Reading from cluster database
    00003058.000023d0::2019/12/30-17:22:24.923 ERR   [RES] Network Name: [NNLIB] Unable to get 'DnsDomain' from cluster database: 2
    00003058.000023d0::2019/12/30-17:22:24.923 WARN  [RES] Network Name: [NNLIB] GetComputerDomain - Computer Domain name could not be read from clusdb - 2. Trying GetComputerNameExW
    00003058.000023d0::2019/12/30-17:22:24.923 INFO  [RES] Network Name <AG01_listener01>: Getting Read only private properties
    000010ac.000057d8::2019/12/30-17:22:24.924 WARN  [RCM] ResourceTypeChaseTheOwnerLoop::DoCall: ResType MSMQTriggers's DLL is not present on this node.  Attempting to find a good node...
    00003cb8.00004428::2019/12/30-17:22:24.930 WARN  [RHS] Error 2 from resource type control for restype Storage Replica.
    000010ac.000059bc::2019/12/30-17:22:47.202 INFO  [API] s_ApiGetQuorumResource final status 0.
    000010ac.000057d8::2019/12/30-17:22:47.210 WARN  [RCM] ResourceTypeChaseTheOwnerLoop::DoCall: ResType MSMQ's DLL is not present on this node.  Attempting to find a good node...
    000010ac.000057d8::2019/12/30-17:22:47.226 WARN  [RCM] ResourceTypeChaseTheOwnerLoop::DoCall: ResType MSMQTriggers's DLL is not present on this node.  Attempting to find a good node...
    00003cb8.000052c0::2019/12/30-17:22:47.257 WARN  [RHS] Error 2 from resource type control for restype Storage Replica.
    000010ac.000057d8::2019/12/30-17:22:47.268 WARN  [RCM] ResourceTypeChaseTheOwnerLoop::DoCall: ResType MSMQ's DLL is not present on this node.  Attempting to find a good node...
    000010ac.00000740::2019/12/30-17:22:47.278 WARN  [RCM] ResourceTypeChaseTheOwnerLoop::DoCall: ResType MSMQTriggers's DLL is not present on this node.  Attempting to find a good node...
    00003cb8.00000980::2019/12/30-17:22:47.302 WARN  [RHS] Error 2 from resource type control for restype Storage Replica.
    00003058.00002b28::2019/12/30-17:22:47.365 WARN  [RES] Network Name: [NNLIB] GetComputerDomain - Machine is not a domain member. Reading from cluster database
    00003058.00002b28::2019/12/30-17:22:47.366 ERR   [RES] Network Name: [NNLIB] Unable to get 'DnsDomain' from cluster database: 2
    00003058.00002b28::2019/12/30-17:22:47.366 WARN  [RES] Network Name: [NNLIB] GetComputerDomain - Computer Domain name could not be read from clusdb - 2. Trying GetComputerNameExW
    00003058.00002b28::2019/12/30-17:22:47.366 INFO  [RES] Network Name <AG01_listener01>: Getting Read only private properties
    000010ac.00002ac0::2019/12/30-17:22:48.952 WARN  [RCM] ResourceTypeChaseTheOwnerLoop::DoCall: ResType MSMQ's DLL is not present on this node.  Attempting to find a good node...
    000010ac.00002ac0::2019/12/30-17:22:48.968 WARN  [RCM] ResourceTypeChaseTheOwnerLoop::DoCall: ResType MSMQTriggers's DLL is not present on this node.  Attempting to find a good node...
    000010ac.000057d8::2019/12/30-17:22:49.018 WARN  [RCM] ResourceTypeChaseTheOwnerLoop::DoCall: ResType MSMQ's DLL is not present on this node.  Attempting to find a good node...
    000010ac.000057d8::2019/12/30-17:22:49.029 WARN  [RCM] ResourceTypeChaseTheOwnerLoop::DoCall: ResType MSMQTriggers's DLL is not present on this node.  Attempting to find a good node...
    000010ac.000057d8::2019/12/30-17:22:49.062 INFO  [API] s_ApiGetQuorumResource final status 0.
    00003cb8.00004428::2019/12/30-17:22:49.115 WARN  [RHS] Error 2 from resource type control for restype Storage Replica.
    00003cb8.00004428::2019/12/30-17:22:49.158 WARN  [RHS] Error 2 from resource type control for restype Storage Replica.
    000010ac.000059bc::2019/12/30-17:23:16.532 INFO  [API] s_ApiGetQuorumResource final status 0.
    000010ac.000059bc::2019/12/30-17:23:16.543 WARN  [RCM] ResourceTypeChaseTheOwnerLoop::DoCall: ResType MSMQ's DLL is not present on this node.  Attempting to find a good node...
    000010ac.00004310::2019/12/30-17:23:16.555 WARN  [RCM] ResourceTypeChaseTheOwnerLoop::DoCall: ResType MSMQTriggers's DLL is not present on this node.  Attempting to find a good node...
    00003cb8.000051a0::2019/12/30-17:23:16.580 WARN  [RHS] Error 2 from resource type control for restype Storage Replica.
    000010ac.00005258::2019/12/30-17:23:16.597 WARN  [RCM] ResourceTypeChaseTheOwnerLoop::DoCall: ResType MSMQ's DLL is not present on this node.  Attempting to find a good node...
    000010ac.000059bc::2019/12/30-17:23:16.607 WARN  [RCM] ResourceTypeChaseTheOwnerLoop::DoCall: ResType MSMQTriggers's DLL is not present on this node.  Attempting to find a good node...
    00003cb8.00000980::2019/12/30-17:23:16.622 WARN  [RHS] Error 2 from resource type control for restype Storage Replica.
    000010ac.00005358::2019/12/30-17:23:18.047 INFO  [API] s_ApiGetQuorumResource final status 0.
    000010ac.00005258::2019/12/30-17:23:18.092 WARN  [RCM] ResourceTypeChaseTheOwnerLoop::DoCall: ResType MSMQ's DLL is not present on this node.  Attempting to find a good node...
    000010ac.00005358::2019/12/30-17:23:18.104 WARN  [RCM] ResourceTypeChaseTheOwnerLoop::DoCall: ResType MSMQTriggers's DLL is not present on this node.  Attempting to find a good node...
    00003cb8.000051a0::2019/12/30-17:23:18.123 WARN  [RHS] Error 2 from resource type control for restype Storage Replica.
    000010ac.00002ac0::2019/12/30-17:23:18.141 WARN  [RCM] ResourceTypeChaseTheOwnerLoop::DoCall: ResType MSMQ's DLL is not present on this node.  Attempting to find a good node...
    000010ac.00005258::2019/12/30-17:23:18.151 WARN  [RCM] ResourceTypeChaseTheOwnerLoop::DoCall: ResType MSMQTriggers's DLL is not present on this node.  Attempting to find a good node...
    00003cb8.000052c0::2019/12/30-17:23:18.167 WARN  [RHS] Error 2 from resource type control for restype Storage Replica.

    December 31, 2019 8:19
  • Starting from around 17:16, every error is the same: the Network Name resource ([NNLIB]) finds that the machine is not a domain member and then fails to read 'DnsDomain' from the cluster database.

    Check the related account permissions, join the node to the domain, and then see whether AlwaysOn becomes available.
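
    For reference, the listener's Network Name resource can be inspected directly on a cluster node; a minimal sketch using the FailoverClusters module (the resource name is taken from the log above):

    ```powershell
    # Check the state and type of the listener's Network Name resource
    Get-ClusterResource "AG01_listener01" | Format-List Name, State, ResourceType

    # Dump its private properties (DNS name, etc.) as read from the cluster database
    Get-ClusterResource "AG01_listener01" | Get-ClusterParameter
    ```

    Both cmdlets only work on a node that is a member of the cluster.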


    MSDN Community Support
    Please remember to click "Mark as Answer" the responses that resolved your issue, and to click "Unmark as Answer" if not. This can be beneficial to other community members reading this thread. If you have any compliments or complaints to MSDN Support, feel free to contact MSDNFSF@microsoft.com.

    December 31, 2019 9:31
  • At 17:26 the physical disk could not be opened. Check that disk's read/write permissions and related settings.




    December 31, 2019 9:35
  • We are not using a domain controller.
    December 31, 2019 10:01
  • It shouldn't be a permissions issue. The disk is an iSCSI disk; on 12/30 a network problem probably caused it to fail to connect repeatedly. What I can't understand is why a disk that is not a cluster disk still made AlwaysOn unavailable.
    December 31, 2019 11:24
  • Any SQL-related files on that disk? Did you use it as the quorum?
    December 31, 2019 18:21
  • No, it is not the quorum.

    It is used for database backups.

    The command looks like this: 'backup database to disk=<the iSCSI disk>'
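
    For context, a full form of that backup command run from PowerShell might look like the following sketch; the database name, backup path, and drive letter are hypothetical stand-ins for the iSCSI disk:

    ```powershell
    # Hypothetical names and path; F: stands in for the iSCSI disk
    Invoke-Sqlcmd -ServerInstance "sqldb04" `
        -Query "BACKUP DATABASE [MyDb] TO DISK = N'F:\Backup\MyDb.bak'"
    ```

    If the target disk drops mid-backup, this statement can stall inside SQL Server until the I/O request times out.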

    January 1, 2020 4:47
  • The iSCSI disk disconnected and reconnected many times on 2019/12/30,

    and in the end the availability group 'AG01' became unavailable. When the iSCSI disk was connected again, 'AG01' became available again.

    I am quite confused.

    January 1, 2020 4:56
  • Hello,

    Could you collect the SQLDIAG log files as well? With those we can analyze further what the actual root cause was.
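
    If the older SQLDIAG files have already rolled over, the cluster log can at least be regenerated on demand. A sketch of both collection steps; the destination path and SQL Server install path are assumptions:

    ```powershell
    # Regenerate the cluster log covering the last 2 hours on every node
    Get-ClusterLog -Destination C:\Temp -TimeSpan 120

    # The SQLDIAG health files (.xel) sit in the SQL Server LOG directory by default
    Get-ChildItem "C:\Program Files\Microsoft SQL Server\MSSQL*\MSSQL\Log" -Filter "*SQLDIAG*.xel"
    ```

    `-TimeSpan` is in minutes; run this before the window of interest ages out of the circular trace.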



    January 2, 2020 2:53
  • The logs from the previous few days seem to be gone; only today's are available.
    January 2, 2020 3:37
  • So it is working normally now?


    January 2, 2020 5:24
  • Yes, it is normal at the moment.

    Once that iSCSI disk returned to normal, the AG became available again.

    January 2, 2020 6:31
  • Then it was most likely caused by the iSCSI disk.

    If a reply was helpful, please click 'Mark as Answer'; that benefits others who run into the same kind of problem. Thanks.



    January 2, 2020 6:43
  • But I want to understand the root cause: why does an unstable iSCSI disk affect the cluster? The disk was added to the cluster once and later removed, so given that it is no longer a cluster resource, why did it make the AG unavailable?

    I have a log backup script that runs every half hour, and its backup path is on this iSCSI disk.
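
    One thing worth checking is how aggressively the cluster declares the AG resource failed: a backup stalled on a dead disk can make the SQL instance slow to answer the cluster's health probes, and these settings decide when RHS gives up. A minimal sketch, assuming the resource is named 'ag01':

    ```powershell
    # Inspect the AG resource's health-check settings (timeout values are in milliseconds)
    Get-ClusterResource "ag01" |
        Get-ClusterParameter HealthCheckTimeout, FailureConditionLevel, LeaseTimeout
    ```

    A lower FailureConditionLevel makes the health check less sensitive to instance-level stalls; a larger HealthCheckTimeout gives a hung sp_server_diagnostics call more time before the resource is declared failed.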

    January 2, 2020 7:02
  • Did you double-check the AG properties?
    January 2, 2020 18:05
  • Yes, I did.
    January 3, 2020 1:09
  • PS C:\Users\Administrator> get-clusterresource 'ag01'|get-ClusterResourceDependency

    Resource DependencyExpression
    -------- --------------------
    ag01     ([AG01_listener01])


    PS C:\Users\Administrator>
    January 3, 2020 1:13
  • Can anyone see what the root cause is?
    January 6, 2020 2:04