I'm wondering how Traffic Manager would distribute load in the following scenario:
I set up the same hosted service in two different datacenters (USNC and EUN, for example) and create a performance policy in Traffic Manager that includes both. I understand the dynamic there, but what if I add a second hosted service in one of the two datacenters,
USNC for example? How would a performance policy handle there being two hosted services in one of the datacenters?
In thinking about this I see two obvious options. The first is that it uses a round-robin approach for requests bound for the datacenter with two hosted services. The second is that it pushes all traffic to the first hosted service listed
for that datacenter.
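To make the two options concrete, here is a minimal sketch of what each selection behavior would look like. This is purely illustrative pseudologic, not actual Traffic Manager internals, and the endpoint names are hypothetical:

```python
# Hypothetical sketch of the two candidate behaviors for picking among
# multiple hosted services in the same datacenter. This is NOT actual
# Traffic Manager logic; endpoint names are made up for illustration.
from itertools import count

_counter = count()

def pick_round_robin(endpoints):
    """Option 1: rotate across all hosted services in the chosen datacenter."""
    return endpoints[next(_counter) % len(endpoints)]

def pick_first_listed(endpoints):
    """Option 2: always return the first hosted service listed."""
    return endpoints[0]

usnc = ["myservice-usnc-1.cloudapp.net", "myservice-usnc-2.cloudapp.net"]
print(pick_round_robin(usnc))   # first call returns the first endpoint
print(pick_round_robin(usnc))   # next call rotates to the second
print(pick_first_listed(usnc))  # always the first endpoint
```

Under option 1 both hosted services would share the datacenter's traffic; under option 2 the second hosted service would never receive any.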
I am really hoping it is NOT the latter as that would eliminate the possibility of scaling within a datacenter while also supporting other datacenters.
Anyone know how Traffic Manager actually handles this?
I don't know the answer at the moment and will need to contact the product team to find out. But before that, may I ask in which case you would do this? My thinking is that this shouldn't come up in a real scenario. What benefit would you get from adding
two identical hosted services in the same data center?
It is actually the latter - if two or more hosted services are in the same DC, a performance policy will always return the first one listed.
In order to address that, one possibility is to expand Traffic Manager to allow nested load balancing policies, which would be more flexible, but such a feature is not yet planned. Another possibility is to change the behavior to
something like what you suggest - round robin between hosted services in the same datacenter - but this would need further evaluation by the team.
Could you please post this feature on the My Great Windows Azure Idea web site? I'd appreciate that so we can keep track of this request and allow other users to vote on it as well.
For us this scenario is very valid. We have built a hosted service that uses its own Azure Storage account. We can scale this hosted service by adding compute instances, but eventually it could outgrow the scaling capabilities of a single storage account.
Our further scaling strategy is therefore to create additional hosted services, each with its own storage account.
Our initial plan is to roll out one of these hosted services in each of three different data centers, giving us geolocation and scale together. However, if one region were to outgrow the scaling capability of a single hosted service, we might want to add
a second or third hosted service within the same data center.
What I am hearing is that with Traffic Manager we are currently limited to a single hosted service per data center - a significant limitation for us. In our scenario a round-robin approach for multiple hosted services within a datacenter would be just
fine; in fact, it's exactly what we would want. Wouldn't this be fairly easy to accomplish? Nested policies are far more than we need, and I suspect much more complicated.
I will add this request to the UserVoice site, but I now suspect it may never see the light of day if you haven't heard this request before. Ugh, what a disappointment - a gaping hole in our architecture.
Thanks for explaining your scenario - it does make sense. I filed a work item to evaluate the possibility of changing this behavior, and I'll get back to you once we have any updates.
For the time being, I'd suggest checking whether you could change your code to allow your hosted service instances to use multiple storage accounts. That way you can scale out by adding more instances to the same hosted service.
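One common way to spread a single hosted service across several storage accounts is to shard by key. The sketch below is an illustrative assumption about how that could be coded, not an official Azure pattern; the account names are hypothetical:

```python
# Illustrative sketch (not an official Azure SDK pattern): map each
# partition key deterministically onto one of several storage accounts,
# so one hosted service can scale past a single account's limits.
# Account names below are hypothetical placeholders.
import hashlib

STORAGE_ACCOUNTS = ["mydata001", "mydata002", "mydata003"]

def account_for_key(partition_key: str) -> str:
    """Deterministically map a partition key to one storage account."""
    digest = hashlib.md5(partition_key.encode("utf-8")).hexdigest()
    return STORAGE_ACCOUNTS[int(digest, 16) % len(STORAGE_ACCOUNTS)]

# Because the mapping is deterministic, reads find data in the same
# account where writes placed it, with no lookup table required.
assert account_for_key("customer-42") == account_for_key("customer-42")
```

Adding accounts later changes the mapping for existing keys, so a real implementation would need a migration story or consistent hashing; this sketch only shows the basic idea.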