Storage Replica Server-to-Server and Azure File Sync support?

  • Question

  • Hello,

    I have two Windows Server 2019 on-premises servers replicating a data volume with Storage Replica.

    Additionally, the replicated data volume on the primary server is also synced with Azure File Sync - everything is working.

    My question: if the primary server fails and the secondary server takes over in a disaster case (a manual procedure, because it's server-to-server SR) - can Azure File Sync be "re-attached" to the second server without a complete resync?

    I know that Azure File Sync is supported in a cluster scenario, but in my case there is no cluster.

    Since Storage Replica replicates everything on the volume - could I re-register the second node after a manual failover and have Azure File Sync simply resume its work?

    Regards

    Wlo

    Friday, February 7, 2020 1:08 PM

Answers

    Yes - files with open handles are synchronized once they are closed. Some files are also skipped by design (https://docs.microsoft.com/en-us/azure/storage/files/storage-sync-files-planning#files-skipped).

    Considering that you are already replicating data to your DR site, I gather that Azure File Sync offers somewhat limited benefits.

    Have you considered creating a separate volume/share on the DR server and setting it up as a server endpoint? That way, once you fail over, you can simply copy data locally from the SR-replicated volume to the share hosting the Azure File Sync endpoint - and use that share going forward for client access?

    hth
    Marcin 

    Saturday, February 8, 2020 1:43 AM

All replies

  • Based on your description, it appears that you have two servers configured as server endpoints.

    Is this correct?

    If so, both will be part of the File Sync hierarchy and there is no need for "re-attaching" either one of them in a DR scenario.

    hth
    Marcin

    Friday, February 7, 2020 1:33 PM
  • Negative, Marcin - I can't add the second server as a second server endpoint. Because of Storage Replica, the data volume is inaccessible on the replication partner; the volume is accessible only on the primary partner.

    Friday, February 7, 2020 8:19 PM
  • I missed your reference to Storage Replica.

    What's the reason that you are using Storage Replica rather than relying exclusively on Azure File Sync in this scenario?

    hth
    Marcin

    Friday, February 7, 2020 8:30 PM
  • Good question - because of RPO, which I think is much better with SR in my case, because it's a 24/7 system (Windows file services) and many files stay open for days and weeks. Azure File Sync only takes a snapshot once a day to cover such files - correct me if I'm wrong. We'd like to use Azure File Sync as a backup at the moment. If Azure File Sync could handle open files, that would be a perfect solution for us, and there would be no need for SR anymore.

    Saturday, February 8, 2020 12:51 AM
  • Is there any update on the issue?

    If the suggested answer helped with your issue, click on "Mark as Answer" and "Vote as Helpful" on the post that helps you; this can be beneficial to other community members.

    Monday, February 10, 2020 6:13 PM
  • Hello Marcin,

    Thank you for your answer

    That could be the solution, even though the storage on the second host must be doubled, if I understood that right(?)

    Could you tell me whether having the Azure File Sync metadata/database(?) on the SR-replicated volume could have any side effects after a failover to the second node?

    Regards

    Waldemar

    Monday, February 10, 2020 9:12 PM
  • Things will not be perfect in this situation. When the new disk (with a near-realtime view of the original disk) is attached to another server and that server is registered as a server endpoint in the original sync group, incremental sync will not begin immediately. Instead, a namespace-wide "reconciliation sync" will occur.

    This is a metadata-only sync session in which the contents of both sides are 'merged'. For files that match in LastWriteTime/size (most files), everything works fine. However, if there was a change on either side (your most recently churned files, generally), those files will be re-transmitted and a conflict file will be produced. Also, if a file was deleted but the delete did not sync, it will 'come back' as a result of the merge. Another example of reconcile/merge impact: if a directory was renamed and the rename did not sync, it will merge back in both the old and new locations.
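    The per-file decision rules described above can be sketched in plain Python. This is illustrative only - the class and function names are assumptions for the sketch, not the actual File Sync engine:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Meta:
    """Minimal per-file metadata compared during reconciliation."""
    last_write_time: int
    size: int

def reconcile(server: dict, cloud: dict) -> dict:
    """Metadata-only merge of two namespaces, following the rules above:
    matching LastWriteTime/size -> nothing to transfer; a mismatch -> the
    file is re-transmitted and a conflict copy is produced; a file present
    on only one side (e.g. an unsynced delete or rename) -> it 'comes back'
    in the merged namespace."""
    actions = {}
    for path in sorted(set(server) | set(cloud)):
        s, c = server.get(path), cloud.get(path)
        if s is not None and c is not None:
            actions[path] = "match" if s == c else "retransmit+conflict"
        else:
            actions[path] = "merged-back"
    return actions
```

    Note how a file deleted on one side but not yet synced falls into the "merged-back" case, matching the 'come back' behavior described above.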

    If you want to mitigate some of these negatives of the reconcile/merge, you can disable the creation of conflict files, so that the file with the latest LastWriteTime is kept and the older one is lost. Set this registry value before creating the server endpoint:

    Key: HKLM\SYSTEM\CurrentControlSet\Services\FileSyncSvc\Settings

    Value name: SaveConflictFilesForReconcile

    Value: 0 (conflict files will not be created)
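    For reference, here is a minimal Python sketch of setting that value with the standard-library winreg module (Windows-only; run elevated). The key path and value name are taken from the post above; the helper name is illustrative:

```python
import sys

# Key path and value name as given in the post above.
KEY_PATH = r"SYSTEM\CurrentControlSet\Services\FileSyncSvc\Settings"
VALUE_NAME = "SaveConflictFilesForReconcile"

def disable_conflict_files() -> None:
    """Open (or create) the key under HKLM and set the DWORD value to 0,
    which suppresses conflict-file creation during reconciliation sync."""
    import winreg  # stdlib, available only on Windows
    with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                            winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, VALUE_NAME, 0, winreg.REG_DWORD, 0)

if __name__ == "__main__" and sys.platform == "win32":
    disable_conflict_files()
```

    The same change can be made with reg.exe or Registry Editor; the important part is that the value is in place before the server endpoint is created.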

    Also, if tiering is enabled on the server - I'm not 100% sure, but it's possible that the tiered files wouldn't function. There have been many improvements in the most recent versions of the agent to keep these files working across an 'unregister/re-register' boundary, but as a general statement it's not a recommended procedure for a reliable configuration.

    There is an alternate approach, though I wouldn't necessarily call it better. It's better in that it avoids namespace merging, but it isn't ideal in other ways:

    - Have both servers running Azure File Sync, as well as SR as you have it now.

    - On the secondary, configure Azure File Sync with tiering enabled.  It would always bring down the namespace changes, but not the data.

    - After a manual failover event, you now have the dataset in two locations on the secondary server. One is the SR copy, which you are presuming is the most up-to-date and want to treat as authoritative. The other is the Azure File Sync namespace only.

    - You can robocopy from the SR location to the File Sync location, which would copy only the incremental differences and, in effect, catch Azure File Sync up with whatever it didn't have. If you truly wanted the SR copy to be authoritative, you would use '/MIR'. Just be aware that this would actually delete anything that Azure File Sync captured but SR did not, though it has the benefit of having no unusual 'merge of 2 datasets' symptoms. Alternatively, you could do a form of 'robocopy /XO', which would be more of a manual merge; in that case, it would be better not to go with this plan at all and just let the reconciliation sync happen as described originally (let Azure File Sync do the merge rather than robocopy).

    - At this point, you would delete the SR copy of the data.

    - If you don't want tiering on the new server, you would turn it off at this point, and recall the data in the background.  This is another downside of this setup, since you'd be receiving this data as egress even though your SR copy actually had the data you wanted.
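    As a rough illustration of the robocopy catch-up step above, here is a plain-Python sketch of '/MIR'-style semantics - copy files that are new or changed (different size or mtime) from the SR copy to the File Sync share, then delete extras. The function name and "changed" heuristic are assumptions for the sketch, not robocopy itself:

```python
import shutil
from pathlib import Path

def mirror(src: Path, dst: Path) -> None:
    """Copy only new/changed files from src to dst, then remove files in
    dst that are absent in src - roughly what robocopy /MIR does. Does
    not prune empty directories."""
    dst.mkdir(parents=True, exist_ok=True)
    src_files = {p.relative_to(src) for p in src.rglob("*") if p.is_file()}
    dst_files = {p.relative_to(dst) for p in dst.rglob("*") if p.is_file()}
    for rel in src_files:
        s, d = src / rel, dst / rel
        if (not d.exists()
                or s.stat().st_size != d.stat().st_size
                or int(s.stat().st_mtime) != int(d.stat().st_mtime)):
            d.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(s, d)  # copy2 preserves the mtime for next run
    for rel in dst_files - src_files:
        (dst / rel).unlink()  # /MIR also deletes files absent in src
```

    This mirrors the trade-off described above: anything present only on the File Sync side is deleted, in exchange for a clean, authoritative SR dataset with no merge symptoms.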


    Friday, February 21, 2020 4:48 PM