Microsoft could offer easy solution to long-term storage outages

  • General discussion

  • In light of the current storage outage at the US-South data center (56 hours and counting), I believe Microsoft could offer a simple solution to prevent this kind of problem in the future.

    Where storage is geo-replicated, the replicated copy is currently hidden away and only Microsoft can access it. What we need is a button in the Azure Management Portal to manually initiate a failover.

    Manually initiating a failover would stop the replication service and make the geo-replicated copy visible to us.

    In other words:

    (1) Geo-replication would be switched off for that service
    (2) The data service's location would change to the geo-replicated location, and it would remain accessible under the same name
    (3) A few seconds' worth of transactions might be lost
    (4) When/if the original service became available again, it would become accessible under a different name (e.g., mydata-recovered). Alternatively, it might be acceptable simply to delete the original service.
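    The four steps above could be modelled roughly as follows. This is a minimal sketch of the proposed semantics only: every class, method, and region name here (`GeoReplicatedAccount`, `initiate_failover`, and so on) is hypothetical and not part of any real Azure API.

    ```python
    # Hypothetical model of the proposed manual-failover semantics.
    # Nothing here corresponds to a real Azure API; it is an in-memory
    # state machine illustrating steps (1)-(4).

    class StorageService:
        """A storage service deployed in one region under one name."""
        def __init__(self, name, region):
            self.name = name
            self.region = region


    class GeoReplicatedAccount:
        """A primary service plus its hidden geo-replicated copy."""
        def __init__(self, name, primary_region, secondary_region):
            self.replication_enabled = True
            self.primary = StorageService(name, primary_region)
            # The hidden copy: today only Microsoft can reach this.
            self._secondary = StorageService(name, secondary_region)

        def initiate_failover(self):
            """Stop replication, promote the copy, rename the original."""
            self.replication_enabled = False       # (1) replication switched off
            old_primary = self.primary
            self.primary = self._secondary         # (2) same name, new region
            self._secondary = None
            old_primary.name += "-recovered"       # (4) original renamed if it returns
            return old_primary                     # (3) seconds of writes may be lost


    account = GeoReplicatedAccount("mydata", "US-South", "US-North")
    recovered = account.initiate_failover()
    print(account.primary.name, account.primary.region)  # mydata US-North
    print(recovered.name)                                # mydata-recovered
    ```

    The key point of the model is step (2): the promoted copy keeps the original name, so applications reconnect without any configuration change.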

    This would be much better than waiting 56+ hours for the service to come back online, and much better than the current workaround, which is, in effect, to implement our own geo-replication.

    Implementing our own geo-replication system is reinventing the wheel. Microsoft has already done this work; it seems crazy for each of us to repeat it.
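    For comparison, the do-it-yourself workaround amounts to dual-writing every update to two independently named accounts in different regions. The sketch below uses an in-memory stand-in for a blob store (`InMemoryBlobStore` is invented for illustration, not a real SDK class), since the point is the pattern, not any particular client library:

    ```python
    # A do-it-yourself geo-replication sketch: write every blob to two
    # independent accounts so either region can serve reads on its own.
    # InMemoryBlobStore is a stand-in for a real storage client; no part
    # of this is a real Azure API.

    class InMemoryBlobStore:
        def __init__(self, account_name, region):
            self.account_name = account_name
            self.region = region
            self._blobs = {}

        def upload(self, name, data):
            self._blobs[name] = data

        def download(self, name):
            return self._blobs[name]


    def dual_write(primary, secondary, name, data):
        """Apply each write to both regions. The application now owns the
        consistency, retry, and conflict handling that built-in
        geo-replication would otherwise provide for free."""
        primary.upload(name, data)
        secondary.upload(name, data)


    south = InMemoryBlobStore("mydata", "US-South")
    north = InMemoryBlobStore("mydata-backup", "US-North")
    dual_write(south, north, "orders/1.json", b'{"id": 1}')

    # If US-South goes down, the app can fail itself over to the backup:
    print(north.download("orders/1.json"))  # b'{"id": 1}'
    ```

    Even this toy version shows the cost: every application must carry its own dual-write path and its own failover logic, which is exactly the work Microsoft's replication service already does.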

    Monday, December 31, 2012 2:14 AM

All replies