Storage Account Access Policies & SAS Tokens

  • Question

  • Hi

    We're pushing large binary files into a storage account using blobxfer from around 10 different VMs in Azure (~200GB per VM).

    Instead of embedding storage account keys into the transfer scripts, we're looking at SAS tokens as a better, more secure way of doing this.

    We think we should:

    • add a stored access policy limiting permissions to read, write, and list (no delete).
    • set an expiration date/time in the policy (say 45 days in the future) and rotate it.
    • generate SAS tokens from this policy, one for each VM.
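    For the last step, the tokens can be generated with `az storage container generate-sas --policy-name ...` or the storage SDKs. As a reference, here is a rough Python sketch of the signing those tools perform, assuming the 2018-03-28 service version's string-to-sign layout (the account, container, policy name, and key below are placeholders, not real values):

```python
import base64
import hashlib
import hmac
from urllib.parse import urlencode

def container_sas_from_policy(account, container, account_key_b64, policy_id,
                              sas_version="2018-03-28"):
    """Sign a service SAS for a container against a stored access policy.

    With a stored access policy, permissions/start/expiry live in the
    policy (referenced via `si`), so those slots in the string-to-sign
    are left empty.
    """
    string_to_sign = "\n".join([
        "",                              # signedpermissions (in the policy)
        "",                              # signedstart       (in the policy)
        "",                              # signedexpiry      (in the policy)
        f"/blob/{account}/{container}",  # canonicalizedresource
        policy_id,                       # signedidentifier
        "",                              # signedIP
        "https",                         # signedProtocol
        sas_version,                     # signedversion
        "", "", "", "", "",              # rscc/rscd/rsce/rscl/rsct overrides
    ])
    key = base64.b64decode(account_key_b64)
    sig = base64.b64encode(
        hmac.new(key, string_to_sign.encode("utf-8"), hashlib.sha256).digest()
    ).decode()
    return urlencode({"sv": sas_version, "sr": "c", "si": policy_id,
                      "spr": "https", "sig": sig})

# Placeholder names and key -- not real credentials.
token = container_sas_from_policy(
    "mystorageacct", "uploads",
    base64.b64encode(b"not-a-real-key").decode(),
    "vm-upload-policy")
```

    Because the permissions and expiry live in the stored access policy, revoking or rotating the policy immediately invalidates every token issued against it, without touching the account keys.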

    Some questions:

    1. Can we track storage account data egress usage based on SAS tokens?
    2. Is it worth having individual SAS tokens per VM?  Or should we aim to just keep it simple?
    3. What happens in a region failover - will the existing valid SAS tokens still work to access RA-GRS storage accounts from the second region?
    4. Are there any other short-falls that we might experience, if we use SAS with RA-GRS in a failover scenario?

    Thanks.

    Thursday, May 24, 2018 4:19 PM

All replies

  • 1. Can we track storage account data egress usage based on SAS tokens?

     Ans: Yes. If you enable Storage Analytics logging, requests are logged with their authentication type (key or SAS), the request URL (which carries the SAS parameters), and the request/response sizes, so you can attribute egress to individual tokens. Also keep the cost exposure in mind: if you provide write access to a blob, a user may choose to upload a 200 GB blob; if you’ve given them read access as well, they may choose to download it 10 times, incurring 2 TB in egress costs for you.

    Refer: Best practices when using SAS; Windows Azure Storage Logging: Using Logs to Track Storage Requests
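    As a concrete illustration, the $logs entries are semicolon-delimited; assuming log format version 1.0 (authentication type at field 8, request URL at field 12, response packet size at field 21 — check the log format documentation for your version), a sketch of summing egress per SAS signature. The sample lines below are fabricated and abbreviated; real 1.0 entries have 30 fields:

```python
import csv
from urllib.parse import urlparse, parse_qs

# Assumed field positions (0-based) for Storage Analytics log format 1.0.
AUTH_TYPE, REQUEST_URL, RESPONSE_BYTES = 7, 11, 20

def egress_by_sas_signature(log_lines):
    """Sum response bytes per SAS `sig=` value across $logs entries."""
    totals = {}
    for row in csv.reader(log_lines, delimiter=";"):
        if len(row) <= RESPONSE_BYTES or row[AUTH_TYPE] != "sas":
            continue
        query = parse_qs(urlparse(row[REQUEST_URL]).query)
        sig = query.get("sig", ["<none>"])[0]
        totals[sig] = totals.get(sig, 0) + int(row[RESPONSE_BYTES])
    return totals

# Fabricated, abbreviated sample entries for illustration only.
sample = [
    '1.0;2018-05-24T16:19:00Z;GetBlob;Success;200;12;10;sas;;myacct;blob;'
    '"https://myacct.blob.core.windows.net/uploads/f.bin?sv=2018-03-28&sig=abc123";'
    '/myacct/uploads/f.bin;rid1;1;10.0.0.4;2018-03-28;300;0;200;100;0',
    '1.0;2018-05-24T16:20:00Z;GetBlob;Success;200;12;10;sas;;myacct;blob;'
    '"https://myacct.blob.core.windows.net/uploads/f.bin?sv=2018-03-28&sig=abc123";'
    '/myacct/uploads/f.bin;rid2;1;10.0.0.4;2018-03-28;300;0;200;150;0',
    '1.0;2018-05-24T16:21:00Z;GetBlob;Success;200;12;10;sas;;myacct;blob;'
    '"https://myacct.blob.core.windows.net/uploads/g.bin?sv=2018-03-28&sig=def456";'
    '/myacct/uploads/g.bin;rid3;1;10.0.0.5;2018-03-28;300;0;200;50;0',
]
totals = egress_by_sas_signature(sample)
```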

    2. Is it worth having individual SAS tokens per VM?  Or should we aim to just keep it simple?

    Ans: Yes, you can use an individual SAS token per VM. You can also create as many SAS tokens as you like by using different combinations of permissions, expiry time, and source IP address. The number of tokens has no effect on your bill.
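    For example, ad-hoc per-VM tokens can pin each token to one VM's source IP via the `sip` parameter. A sketch of composing the per-VM parameter sets (the VM names and IPs are hypothetical, and the `sig` parameter is omitted — the SDK or az CLI computes that signature):

```python
from urllib.parse import urlencode

def per_vm_sas_params(policy_id, vm_ips, sas_version="2018-03-28"):
    """Compose distinct per-VM SAS query parameters that all reference the
    same stored access policy but pin each token to one VM's source IP.
    Illustrative only: the `sig` parameter (an HMAC over these values plus
    the resource path) is omitted here."""
    return {vm: urlencode({"sv": sas_version, "sr": "c",
                           "si": policy_id, "sip": ip})
            for vm, ip in vm_ips.items()}

# Hypothetical VM names and private IPs.
params = per_vm_sas_params("vm-upload-policy",
                           {"vm-01": "10.0.0.4", "vm-02": "10.0.0.5"})
```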

    3. What happens in a region failover - will the existing valid SAS tokens still work to access RA-GRS storage accounts from the second region?

    Ans: Yes. The secret keys used to access the primary endpoint are the same ones used to access the secondary endpoint. If there is a major issue affecting the accessibility of the data in the primary region, the Azure team may trigger a geo-failover, at which point the DNS entries pointing to the primary region will be changed to point to the secondary region.

    Refer: Windows Azure Storage Redundancy Options and Read Access Geo Redundant Storage
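    For reads before or during a failover, the RA-GRS secondary endpoint is simply the account name with a `-secondary` suffix, and the same SAS query string applies. A small sketch (the account, container, and token below are placeholders):

```python
from urllib.parse import urlsplit, urlunsplit

def secondary_url(primary_url):
    """Rewrite an RA-GRS primary blob URL to the read-only secondary
    endpoint (<account>-secondary.blob.core.windows.net). The SAS query
    string is unchanged: both endpoints share the same account keys."""
    parts = urlsplit(primary_url)
    account, rest = parts.netloc.split(".", 1)
    return urlunsplit(parts._replace(netloc=f"{account}-secondary.{rest}"))

# Hypothetical account and container; the SAS token is a placeholder.
url = secondary_url(
    "https://myacct.blob.core.windows.net/uploads/file.bin?sv=2018-03-28&sig=abc")
```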

    4. Are there any other short-falls that we might experience, if we use SAS with RA-GRS in a failover scenario?

    Ans: When a regional disaster affects your primary region, we will first try to restore the service in that region. Depending on the nature of the disaster and its impact, in some rare cases we may not be able to restore the primary region, at which point we will perform a geo-failover. Cross-region data replication is an asynchronous process that can involve a delay, so changes that have not yet been replicated to the secondary region may be lost.

    You can query the "Last Sync Time" of your storage account to get details on the replication status.

    Refer: What to do if an Azure Storage outage occurs
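    For example, you can retrieve Last Sync Time (e.g. via `az storage account show --name <account> --expand geoReplicationStats`) and compare it against your upload times to see which writes are at risk; a sketch, with the UTC timestamp format assumed:

```python
from datetime import datetime

SYNC_FMT = "%Y-%m-%dT%H:%M:%SZ"  # assumed UTC timestamp format

def may_be_lost(write_time_utc, last_sync_time_utc):
    """True if a write happened after Last Sync Time, i.e. it may not have
    replicated to the secondary region before a geo-failover."""
    return (datetime.strptime(write_time_utc, SYNC_FMT)
            > datetime.strptime(last_sync_time_utc, SYNC_FMT))

# Example: a write five minutes after the reported Last Sync Time is at risk.
risky = may_be_lost("2018-05-24T16:05:00Z", "2018-05-24T16:00:00Z")
```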





    • Proposed as answer by Sandeep BR Thursday, May 24, 2018 7:11 PM
    Thursday, May 24, 2018 7:11 PM
  • Hey,

    Just checking in to see if the above suggestions helped, or if you need further assistance with this issue.

    Saturday, May 26, 2018 4:34 PM