New SQL Database Editions Performance

    General discussion

  • I am trying to compare the performance of the new SQL Database editions (Basic, Standard, Premium) with the older Web and Business editions. My company runs many of our client databases on the Web edition (about 1-2 GB in size), and we are evaluating the preview of the new SQL Database editions.

    Based on a cost comparison, our first approach was to test the Standard (S1) edition. It costs 3-4 times more, but we get online backups, disaster recovery, and much more space (although we don't need that much).

    Our first results were not at all encouraging. The performance for transferring some databases to the new model was 20 times worse (60 secs compared to 20 minutes). The transfers in our tests involve databases of some 60 MB, each with almost 500 tables. Retrieval of the data was also much slower. If future tests confirm these results, then we cannot stay on SQL Azure after the retirement of the Web and Business editions, for cost reasons.

    I want to compare my experiences with those of other users trying the new editions. Do you have better results? Any information will be valuable.

    Thanks, 

    Dimitris

    Monday, April 28, 2014 12:59 PM

All replies

  • This might be similar to the problem I'm having as well. I have a 1 gig BACPAC that inflates to around 2.5 gigs in production. It took about 20 minutes to import (from storage) to one of the "old" business tier servers, but deploying it to one of the new tiers essentially fails. That is, it just goes on for hours and hours. The size eventually starts reporting as a few hundred MB, but seriously, I can't keep an app offline for days waiting for it to import.

    Not a great start for the new tier. I get it if it's throttling, but the throttling should be disabled for an import or we'll never get anything up there.

    Tuesday, April 29, 2014 6:58 PM
  • Hi Jeff,

    I am sure you have the same problem. I also could not transfer a 3 GB database using my utilities based on bcp (bulk copy). I tested the performance of loading data on all the configurations of the new tiers. The closest equivalent to the Business edition was the new Premium 2 level: a job that took 115 secs on the Business edition took 105 secs (over 10 tests) on Premium 2.

    The results for the other levels were (about 10 samples per level):

    Standard 1: 1200 secs

    Standard 2: 800 secs

    Premium 1: 200 secs

    Premium 2: 105 secs

    Premium 3: 45 secs.

    I can say that the new editions are very stable in their metrics, and of course they have many welcome features, but they are very, very expensive if you want performance comparable to the previous versions.

    One first attempt to explain the results is that the new editions are slowing you down whenever you reach your edition's resource limits, in order to guarantee predictable performance. I saw that I was reaching the limit of my available Write Log percentage (100%) during the data transfer. But this is unacceptable, because a serious real-world application sometimes needs much more power (I/O, CPU) while at other times it is almost at rest. So if Microsoft is throttling your performance based on data from a small time window, you will often be very slow compared to the old versions, and that is a really big problem for many applications. I can understand that maybe there is no other way to guarantee the performance, but maybe they have to revisit their resource distribution algorithms.
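    If anyone else wants to see where they are being throttled, the new tiers expose the resource percentages through a DMV. A minimal sketch (run it in the user database while the load is in progress; each row covers roughly 15 seconds):

        -- Recent resource usage relative to the tier's limits.
        SELECT end_time,
               avg_cpu_percent,
               avg_data_io_percent,
               avg_log_write_percent   -- the counter that hits 100% during my transfers
        FROM sys.dm_db_resource_stats
        ORDER BY end_time DESC;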

    I am waiting for other users to share their experiences.

    Thanks, 

    Dimitris



    • Edited by Dimitris V Wednesday, April 30, 2014 2:53 PM
    Wednesday, April 30, 2014 2:52 PM
  • We are also experiencing some significant issues with the new database model. We tried S1 and Basic just to compare speed, and even though S1 is faster than the old Web and Business at adding a lot of records at the same time, uploading a single file to a blob takes far more time than it does with the Web and Business editions. We took some measurements; here they are:

    1MB
    Web or Business: 5s
    Basic or S1: 14s

    2MB
    Web or Business: 6s
    Basic or S1: 26s

    3MB
    Web or Business: 7s
    Basic or S1: 37s

    4MB
    Web or Business: 7s
    Basic or S1: 50s

    5MB
    Web or Business: 8s
    Basic or S1: 60s

    30MB
    Web or Business: 34s
    Basic or S1: 5m15s

    This is clearly not acceptable, and I really hope this will be fixed sooner rather than later, so we can test our environments and make sure that what is working now will still work after the update. Right now it breaks our app, as we get a lot of timeouts that we did not get before.

    Thanks,

    Louis

    Thursday, May 1, 2014 3:19 PM
  • Louis,

    Please clarify what you mean when you say you upload a file to the blob. Are you uploading to a Storage Account? The new tier offering applies to databases, as far as I know.

    Thursday, May 1, 2014 5:31 PM
  • We have a table with a column of type varbinary(max). That worked just fine with Web and Business, but it is much slower with the new tiers. We could provide a schema with a table that has this type of column, but I don't think it's necessary, since you could create a very simple one with that type of column and replicate our issue.

    BTW, we did our testing using EF 6.1 to insert into the database, though I don't think that has any impact on our results, since simply changing the connection string back to a previous-edition database is faster.
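    If it helps, here is a minimal repro sketch (the table name and blob size are invented; time the INSERT on each tier):

        -- Any table with a varbinary(max) column shows the effect.
        CREATE TABLE dbo.FileStore (
            Id      INT IDENTITY(1,1) PRIMARY KEY,
            Content VARBINARY(MAX) NOT NULL
        );

        -- Build ~5 MB of dummy bytes (16 chars x 327,680 repetitions).
        DECLARE @blob VARBINARY(MAX) =
            CONVERT(VARBINARY(MAX),
                    REPLICATE(CONVERT(VARCHAR(MAX), '0123456789ABCDEF'), 327680));

        SET STATISTICS TIME ON;
        INSERT INTO dbo.FileStore (Content) VALUES (@blob);
        SET STATISTICS TIME OFF;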

    Thanks for the reply,
    Louis

    Thursday, May 1, 2014 5:46 PM
  • Folks, first off, thanks to all of you for trying out our new service tiers and providing your feedback. As always, we are listening carefully to you.

    Based on customer feedback, our first priority for the new service tiers has been performance predictability.

    In Basic/Standard/Premium we are applying a different design principle than in Web/Business: each database's performance should be as if it were running on a dedicated computer. As you have noticed, each performance level (Basic, S1, S2, P1, P2, and P3) corresponds to a set of bounded resources (CPU, memory, IO, and more). This design principle is what delivers predictable performance.

    On a positive note, it's good that you have seen this predictability. However, clearly we aren't done with the absolute performance yet. Our goal is to provide great price/performance. Over the next few weeks, expect to see us tune the performance levels, as well as add features and other capabilities.

    A database has varying resource needs during its life. Import/Export and DB copy are examples of resource-intensive operations. Basic might be the right performance level for your database in its day-to-day use, while a higher performance level might be best for resource-intensive operations. One of the beauties of SQL Database is that you can scale performance tiers up and down without taking the database offline. This lets you start in one performance tier (P1) for the import and move to another (Basic) afterwards. In the next few weeks you will also be able to scale between the new tiers and Web/Business without using import/export.
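    As a sketch of what the scale operation looks like in T-SQL (run against the master database; the database name here is hypothetical, and the same change can be made in the portal or via PowerShell):

        -- Scale up before the import...
        ALTER DATABASE [MyAppDb] MODIFY (EDITION = 'Premium', SERVICE_OBJECTIVE = 'P1');
        -- ...run the import, then scale back down.
        ALTER DATABASE [MyAppDb] MODIFY (EDITION = 'Basic', SERVICE_OBJECTIVE = 'Basic');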

    Thanks!

    /Tobias

    Thursday, May 1, 2014 7:57 PM
  • I understand what you are saying, but there is a major drawback in the new tiers compared to the ones we are already using. Here is a quick overview of the testing we did when uploading files to a database hosted on the different tiers (the timings are in my earlier post above):

    As you can see, we get better performance with Web/Business than with a P1. I understand that those speeds are not as predictable as the new tiers, but having used them for more than 3 years now, I can say they were stable 98% of the time for us, and a lot cheaper. There were three occurrences I remember where performance was affected, probably by someone else using the same shared server, since we don't have our own dedicated CPU; but again, it was a lot cheaper, so that was expected.

    The new tiers are very welcome, and we appreciate the commitment to a more stable database, but if we need to pay thousands of dollars for a database with the same performance as one that costs less than 100 bucks now, we might have to look at other options.

    We also ran tests here on-prem with our app, and a simple Hyper-V machine with one core and 1 GB of RAM gives us better results than this. I really hope there will be some performance improvements to these new tiers going forward, and I appreciate that you take the time to reply to our questions.

    Louis

    Thursday, May 1, 2014 8:24 PM
  • Thanks Louis,

    I understand where you are coming from. We know some customers (like yourself, it seems) have done well with their performance experience in Web/Business. Please continue testing the service tiers, and remember that, depending on your application, you may well benefit from shifting between, for example, S1 and P2 to get both predictable performance and the ability to scale up for peak load as needed, day by day.

    /Tobias

    Thursday, May 1, 2014 9:02 PM
  • The thing is that it is not a predictable peak. I know I could use the new PS scripts to change between S1 and P2 at rush hour to get better performance if we know we have more users during that part of the day, but our issue happens even if only one user uses our system. Even if only one user uploads the 30 MB file, it will time out. We can increase the timeout, but then we have to explain to them why what took 40s one week now takes more than a minute because we "upgraded" our database. For them, it's a major drop in performance. We did some heavy testing with more standard SQL queries and inserting simple rows, and we noticed that S1 was not only more predictable than Web/Business but also faster; we got a 30% speed increase. It's just a shame that our business is handling files and that this became a lot slower with the new tiers. We will continue testing them, but by the look of your reply, there is not much that is going to be done regarding this issue.

    Are there any plans to address this scenario, or might it just be fixed by chance at some point? We have to plan now that we know Web/Business will be automatically upgraded next year, so I would like some input on whether or not this "should" be fixed. I know you do not have a definite answer, but what are the chances?

    Thanks,
    Louis

    Thursday, May 1, 2014 9:09 PM
  • @Tobias, I have raised several concerns with the new SQL pricing and tiers model in a blog post.

    What I am hearing from other Azure customers is echoed in this forum thread - namely that the Standard tier is an order-of-magnitude step backwards compared to Web/Business.

    Frankly I think Microsoft is going about this all wrong.

    You should keep the Web/Business option available for those of us that want to scale out many small databases at an affordable cost.

    Additionally, trying to apply a generic "one size fits all" performance guarantee in the form of DTUs and the ASDB test is completely idiotic, because frankly no one will ever fit your abstract vision of a typical SQL database user.

    The truth is that in the real world, database loads vary constantly based on the operations we need to undertake at any point in time.

    Sometimes we need high I/O bursts in order to run an import, sometimes we need high CPU bursts to crunch a big query.

    Giving us a DTU-based guarantee is pretty much worthless in the real world.

    What MS should be doing is disclosing the concrete resource guarantees - namely what CPU, I/O and memory allocation we are guaranteed to receive.

    I lay out this argument more fully, along with some other concerns here:

    http://pauldb.tumblr.com/post/84269543660/sql-azure-4-2014-changes-the-good-bad-and-ugly

    Would love to hear your feedback on the issues I raise.

    Friday, May 2, 2014 12:49 AM
  • Louis, I'd like to understand a bit more about your scenario - specifically, why you have elected to use SQL Database vs. Azure Storage for your blobs. Generally we recommend storing these in Storage and keeping pointers to them in your relational store, though as always it depends on your scenario. You can reach me at firstname.lastname@microsoft.com.

    Again, thanks!

    /Tobias

    Friday, May 2, 2014 4:28 AM
  • @Tobias,

    One of our company's scenarios is uploading data from our customers' on-premises installations to independent databases using bulk copy operations. This happens 2-3 times a day. We upload the data through an application running in Azure roles, insert it into some db tables (let's say 60 MB of data), and then we offer some applications which run SQL queries on this data (not very heavy and not very often).

    For a scenario like this we do not need many database resources, but it is important for us to do this data upload as fast as we can. We do not need much db power for data retrieval, only for importing, and not for long. Our bulk insertion tests run very, very slowly on the Standard tiers, and we need a P2 or P3 tier to get the same performance we had with the Web/Business tiers. If we had to pay that much money, our solution would have to cost an order of magnitude more for our customers. We based our solution's pricing on Microsoft's Web/Business pricing, and we will be out of the market if we change it.
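    For reference, our uploads are plain bcp runs along these lines (server, database, and login names invented; the -b batch size sets the rows per committed transaction, which is worth tuning when log writes are the throttled resource):

        bcp CustomerDb.dbo.CustomerData in customerdata.dat -n -b 1000 ^
            -S ourserver.database.windows.net -U loaduser@ourserver -P ********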

    I think Microsoft has to consider our problem. One possible solution would be not to limit the write-log resource on the new tiers. Another could be to offer an analogous pricing model for the Standard/Premium editions applied to a group of databases. Otherwise we will have to move away from SQL Azure Database, although our company was one of the first adopters of Azure solutions, back in 2010.

    Thanks,

    Dimitris


    • Edited by Dimitris V Friday, May 2, 2014 9:43 AM
    Friday, May 2, 2014 9:41 AM
  • I'm fine with the new tiers, but if the thing is going to require throttling during an import, it's a non-starter. Because you get charged even for switching to a certain level for a portion of the day (which I learned after trying to test this import performance problem), it's another thing that makes people like me, who want to ditch a dedicated server, not go there.
    Sunday, May 4, 2014 10:27 PM
  • @Dimitris, @PaulDBau,

    Thanks, I definitely understand where you are coming from and we do take your feedback seriously.

    @PaulDBau,

    With regards to your observations around DTUs, let me clarify. The DTU itself is a relative measure that allows you to compare the resources assigned to a database across our performance levels. These resources scale equally across the various resource dimensions. The ASDB (benchmark) performance numbers are the output of a specific number of DTUs, not the other way around.

    Thanks everyone for your feedback. Again, we will be making adjustments to our performance levels during the preview, so please do continue testing. We are planning to post updates on our blog when we make these adjustments.

    Here is some additional information:
    Azure SQL Database Service Tiers and Performance Levels
    Basic, Standard, and Premium Preview for Azure SQL Database
    Azure SQL Database Benchmark Overview

    Monday, May 5, 2014 10:46 PM
  • We have also been able to observe and measure that the Standard S1 database performs worse on average than our Web/Business databases. We've been measuring straight-up application and query performance against the new service tiers.

    We've been content with the performance of Web/Business thus far. The prospect of paying approximately 3 times as much (~$10/mo for Web/Business vs. ~$30/mo for Standard S1 after the 'preview' pricing is over) is making it hard to decide whether an investment in Azure is the right choice for our software.

    Benchmarks against Standard S2 instances have so far been positive, but they are not financially feasible.

    Tuesday, May 6, 2014 5:24 PM
  • @Dimitris, I can only agree with what you are writing; we had comparable results.

    My company (we are a startup) is currently developing a product which depends heavily on several Azure services - especially SQL. Our product is designed to be highly scalable and to support highly parallel workloads (as a side note: we had already implemented support for the now-deprecated SQL Federations to make sure we would never have problems with high workloads as customer numbers rise).

    Because of that, we immediately started performance testing against the new SQL Database editions after we received notice that they will replace the current SQL services we are planning to use.

    The results of these tests have been alarming for us:
    The new SQL Database editions have been 5-20 times slower in comparison to the current ones AND they are more expensive. As a result, it would be much more expensive (we are probably talking about a double-digit cost factor) if we had to compensate through higher performance levels. We are also now hitting exceptions very quickly in our load tests because we reach the worker limit (e.g. max. 50 concurrent workers @ S1). This was never a problem before; the same tests ran without problems against an equivalent Business edition database.

    To sum up: the cost-efficiency factor is currently so bad (hopefully this will change in the next couple of weeks - I know it's a preview) that we have to look for alternatives, because our customers simply won't pay us these costs.

    Regards,
    Andreas

    Wednesday, May 7, 2014 1:34 AM
  • @Tobias,

    Appreciate the response; forgive me if I do not hold out too much hope.

    Even if you fix the terrible S1 performance, the new pricing and tiers model is fundamentally at odds with the "many small DBs, scale out" approach that is common in cloud startups.

    The core issue you need to solve is a business problem.

    Without an option that allows scaling out many small DBs at a price/performance comparable to the existing Web/Business model, I will have to move away from Azure.
    And naturally that means I would move everything off, since it would be foolish to move just the DBs out to something like Amazon RDS.

    Let's just say that this is a great opportunity for you to prove that MS has indeed become more responsive to the needs of developers and customers :)

    @lexk, @Andreas, @Dimitris,

    Sounds like my startup is in a similar quandary to your companies.

    I suspect we may need to prepare ourselves to be casualties of a typical unilateral MS decision (remember Silverlight, anyone?)

    We've already started investigating Amazon RDS, would be happy to compare notes if you like.


    • Edited by PaulDBau Wednesday, May 7, 2014 2:38 AM
    Wednesday, May 7, 2014 2:26 AM
  • @PaulDBau, @Andreas

    Would it be possible for you to e-mail me at firstname.lastname at microsoft.com? I'd like to set up some time to speak with each of you about your scenarios and requirements.

    Thanks!

    /Tobias

    Wednesday, May 7, 2014 3:35 PM
  • @Tobias

    Thanks for the reply. I tried to send you an e-mail using the format firstname.lastname@microsoft.com, but neither Ternström nor Ternstroem as the last name in the address succeeded. In the first case I get an SMTP "invalid recipient address" error from our company's e-mail provider, in the second case from the MS mailer daemon. I have double-checked that it is not a typo on my side. Maybe you have another e-mail address? You can also contact me using the identical format firstname.lastname at 3P-Soft.com.

    Thank you very much,

    Andreas

    Wednesday, May 7, 2014 11:02 PM
  • @Andreas,

    Please use Ternstrom for the last name.

    HTH,

    Boris.


    Boris Baryshnikov, SQL Server

    Thursday, May 8, 2014 1:38 AM
  • @Tobias,

    I'm not having any luck with e-mails either - I have tried the combinations mentioned by Andreas and Boris. I only get "Undeliverable" returns from the MS Postmaster mail server.

    You can reach me on paul at appenate dot com.



    • Edited by PaulDBau Thursday, May 8, 2014 3:59 AM
    Thursday, May 8, 2014 3:58 AM
  • Sorry folks,

    I didn't realize I had put the "ö" in my last name (which is the correct spelling, although not for my e-mail...) on my MSDN account. As Boris mentions, please use tobias.ternstrom. Andreas managed to get an e-mail through, and I have just sent an e-mail to you as well, Paul.

    Thank you,

    Tobias

    Thursday, May 8, 2014 6:16 AM
  • @Tobias

    (sorry, I could not reach you by email)

    Our company needs to take some decisions about the future of our services hosted in Azure. The overall performance of the new editions, although predictable and much better supported, seems unacceptable in terms of performance/cost for running our services. The limitation of resources at the database level is a problem for us. Maybe in the previous Web/Business editions we were "cheating", getting more power than we were paying for, but having an independent database for each of our customers is critical to our solution. We have incorporated the use of different schemas for each customer in our product, but we use it on a small scale, because large databases with hundreds of schemas are difficult to manage, transfer, and load-balance.

    I can understand that Microsoft wants to guarantee predictable performance to customers, but in a way we want predictable performance for a group of databases, not for each one. I can also understand that this is a difficult option for Microsoft to consider, because of the changes to the cost and pricing model you are using in the product (pricing per database).

    We have already tried, and are considering using, our own SQL Server VMs on Azure, and I can say that we are almost ready for this solution. We are experimenting with the new XStore integration capability of SQL Server 2014, and the results are very encouraging. And I have a question about this capability: can a system perform well with 500-1000 databases of 1 GB each on each server, with the files living in Azure blobs? Normally at peak time we will have no more than 40-50 connections to all of these databases combined. And will the cost of the blob leases (in terms of money, or of keeping the server busy - CPU, network) for all these databases be high? Is this a good way for us to go?
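    For anyone unfamiliar with the feature, the setup we are testing looks roughly like this (storage account, container, and database names invented):

        -- Credential named after the container URL, holding a SAS for the container.
        CREATE CREDENTIAL [https://ourstorage.blob.core.windows.net/data]
        WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
             SECRET = '<SAS token for the container>';

        -- Database whose files live directly in blob storage.
        CREATE DATABASE CustomerDb
        ON (NAME = CustomerDb_data,
            FILENAME = 'https://ourstorage.blob.core.windows.net/data/CustomerDb.mdf')
        LOG ON (NAME = CustomerDb_log,
            FILENAME = 'https://ourstorage.blob.core.windows.net/data/CustomerDb.ldf');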

    Thank you,

    Dimitris.

    PS: I can say that if you keep offering the Web/Business solution at the current performance/pricing level, we are very happy with it :)

     

    Friday, May 9, 2014 10:03 AM
  • Thank you for the details Dimitris, I received your e-mail and will connect with you to discuss this in further detail.

    /Tobias

    Monday, May 12, 2014 5:47 AM
  • I can confirm an alarming reduction in performance with S1 compared to the Web edition. I absolutely applaud the consistency plan - for too long, working with SQL Azure has been a bit like black magic - but the drop in performance, for a database that costs way more, is disturbing.

    I run a startup with close to 100 SQL Azure databases; I simply won't be able to afford these price increases if I need to upgrade each database to a P1 or higher.

    I won't list all the problems I've had, because the posts here have covered them, but one thing that hasn't been mentioned is index rebuilding. I have a group of 30 databases, each with one table containing 3 indexes and about 600,000 rows. The size of each database is less than 600 MB. On the Web edition, with the databases under heavy use, I could rebuild the indexes in between 2 and 6 minutes. With the new S1 databases, I'm at 2.5 hours and counting, and that's with NO load on the database. Am I seriously expected to upgrade to a P1 just to rebuild some indexes on a small database?
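    For what it's worth, the rebuilds are nothing exotic; per index it is just something like this (table and index names invented):

        -- Online rebuild, so the table stays available while it runs.
        ALTER INDEX IX_Messages_SentAt ON dbo.Messages
        REBUILD WITH (ONLINE = ON);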

    Thanks 

    Monday, May 12, 2014 4:11 PM
  • I think the main problem is the design principle of the new tiers. As Tobias described in his answer above:

    In Basic/Standard/Premium we are applying a different design principle than in Web/Business: each database's performance should be as if it were running on a dedicated computer. As you have noticed, each performance level (Basic, S1, S2, P1, P2, and P3) corresponds to a set of bounded resources (CPU, memory, IO, and more). This design principle is what delivers predictable performance.

    This is exactly the problem. We all know that our databases run in a shared server environment with many other databases. So the corresponding "dedicated computer", based on what we are paying, would be, for example, a 0.10-core CPU machine with matching resources (the price also has to cover licences, running the service, etc.). But this is completely unacceptable for a database environment. We also all know that effective resource sharing in real-world scenarios (say, telephone lines) works based on statistical data (e.g. the number of users who can call at the same time), not by limiting us when we need more power at some point in time (say, to make a few more calls). I can understand that this design principle can guarantee predictable performance, but I don't think it will work for database sharing.

    So I think Microsoft has to think harder about how to share its database offering; otherwise it will be a solution for only a small part of the current customer base of the Azure database services: only those who want to pay a lot of money for predictable, high-performance databases and don't want to run those DBs on their own. (I can guess that those who can pay such money already have their own IT departments running their own cloud database servers.)

    Thanks,

    Dimitris.

     




    • Edited by Dimitris V Sunday, May 18, 2014 10:55 AM
    Sunday, May 18, 2014 9:37 AM
  • Louis-P Perras wrote:

    I know that I could use the new PS Scripts to change between S1 and P2 on rush hours to get better performance if we know we have more users during that part of the day, but our issue happens even if only one user uses our system. 

    Actually Louis, you're charged by the day, not the hour, for the new SQL Azure databases, so you actually can't step up and down like that. If you know you get a high load on weekdays, you could step down on weekends, but it doesn't benefit you to step up and down within a weekday, from what I can tell based on my billing last month.
    Monday, May 19, 2014 3:19 AM
  • This new tier is definitely a problem.  Just curious, has anyone done a comparison against Amazon?
    Monday, May 19, 2014 5:38 PM
  • Folks,

    I have some good news for you on performance! During this week we will be rolling out performance upgrades to Basic, S1 and S2. The changes are as follows:

    • Basic: 1 DTU --> 5 DTU
    • S1: 5 DTU --> 15 DTU
    • S2: 25 DTU --> 50 DTU

    Unfortunately I can't tell you exactly when this will be live, but it should be world-wide by middle of next week.

    Thanks

    /Tobias

    Monday, May 19, 2014 6:44 PM
  • That's good news. Now if we had a time frame for when we could switch old-tier databases to the new ones, that would be swell. The various FAQs suggest both that it's not possible and that it will be possible soon.
    Monday, May 19, 2014 7:34 PM
  • Folks,

    I have some good news for you on performance! During this week we will be rolling out performance upgrades to Basic, S1 and S2. The changes are as follows:

    • Basic: 1 DTU --> 5 DTU
    • S1: 5 DTU --> 15 DTU
    • S2: 25 DTU --> 50 DTU

    Unfortunately I can't tell you exactly when this will be live, but it should be world-wide by middle of next week.

    Thanks

    /Tobias

    So with this new upgrade, does it mean the performance now matches Web/Business?

    Monday, May 19, 2014 8:17 PM
  • @Jeff, we should have the ability to switch between Web/Business and the new tiers live beginning of June.

    /Tobias

    Monday, May 19, 2014 8:22 PM
  • I migrated my Business edition database to Standard a few days ago, and instead of having more predictable performance, it's actually all over the place. Not an improvement. Here are the response times from my site's home page:

    http://imgur.com/Hk2i8eO

    I did a little diagnostic work, and sure enough, the crazy variance is coming from the SQL response times, not the site itself.


    • Edited by Jeff Putz Monday, June 23, 2014 1:44 AM link edit
    Monday, June 23, 2014 1:43 AM
  • As it turns out, it's not just the general performance that's inconsistent: the automatic backup is excruciatingly slow, and it seriously degrades performance while it runs.

    I totally understand the intention of this pay-for-performance model (it makes a lot more sense than paying by database size), but unless you can differentiate between "regular" use and administrative tasks like import/export, it's a somewhat broken model.

    Monday, June 23, 2014 7:12 PM
  • I completely agree with all of the comments above.

    I have undertaken my own performance testing of the new Service Tiers.

    For the results, including comparisons with Web/Business edition, please see:

    http://wp.me/p4hBYY-9F

    Suffice to say, Standard edition performance was poor in comparison with Web/Business - doubly so when the prices of Web/Business and Standard are compared.

    Tuesday, July 8, 2014 10:58 AM
  • Thank you very much, Chris, for sharing your test results. They are indeed very insightful... and pretty scary too (we are using the Web edition today).
    No wonder I could not find the current specs of the Web/Business editions online, considering how poorly these new tiers compare...

    To Microsoft:
    We have been early Azure adopters and strong supporters of the platform (we even made a video testimonial with your team, which is published on your website). The results from Chris are extremely worrying, and you've got a PR disaster waiting to happen as this information spreads. I really recommend you explain to us in great depth how these new tiers compare to the existing Web/Business editions.



    • Edited by jthierry Thursday, July 10, 2014 6:38 AM typo
    Thursday, July 10, 2014 4:07 AM
  • I concur with all the findings in the posts above.

    @Chris, @Dimitris, @Paul are spot on. The SQL Azure group has a major problem on their hands if they want to avoid a mass exodus from the platform.
    Friday, July 11, 2014 4:29 AM
  • I'm extremely distressed by these changes and, even more so, by the complete lack of communication from Microsoft regarding our concerns.

    I'm a very early customer - 4 years now - and I worked quite closely with guys from the SQL Azure team when building and optimising the back-end for a social network. I followed the guidelines set down by Microsoft, and two things that were said to me personally stick in my mind:

    1. I'm exactly the kind of customer Azure is aimed at (I'm an individual developer, but my network is used by over 1 million users sending 5 million messages per day)

    2. My set-up, which uses lots of small, cheap SQL Azure databases scaled out horizontally, is exactly what Microsoft would have recommended (they even asked me to give a talk about my set-up at their UK HQ)

    Now I'm in a situation where number 2 no longer applies: the core element of my entire network is no longer financially viable on the new databases, and I have less than 9 months to completely rework it. At this moment in time I simply have no idea how I'm going to do that... which leads me back to number 1. Microsoft is aiming its services at people like me (amongst other types of customer), an individual developer, and now I have to put in MONTHS of work just to get my system into a position where it will continue as normal when they sunset the existing database tiers. I'm one person! I have plans and a roadmap for my business over the next year that don't involve this! And now I need to spend months on it?!

    What's even worse is that I can imagine a scenario where I get stuck in early and spend 3 months making these changes to my live systems, only for Microsoft to backtrack and announce that they are going to keep the existing tiers alongside the new ones, making all this work meaningless. But how long do I wait before doing it? The clock is ticking.

    Thank god I abandoned Federations and migrated some of my systems to Azure tables a couple of years back; that on top of this would have been disastrous.

    What's going on, Microsoft? You can't expect developers to follow your guidelines and the advice you give out personally, then suddenly change direction and tell them they have 12 months to rework their entire systems. It's completely unacceptable, as is your lack of communication. If I could move off Azure right now, I absolutely would, but I feel trapped.

    Friday, July 11, 2014 12:43 PM
  • They flipped the value proposition on its head. Before, you got amazing performance on small databases for a great price. Now, you get OK performance on gigantic databases, provided you don't have a high transaction rate. For me, it actually works out, because I have large-ish databases that can keep getting bigger without a cost increase, and I don't have high transaction throughput. But if you have a small database that needs high performance, you're pretty much screwed.

    And I'll complain again: the limitations around import/export and automatic backup have completely hosed me. My migrations into Azure took entirely too long because of the performance throttling. There should not be throttling in that case, because importing data is a fairly isolated and rare thing. It shouldn't take half a day to populate 10 gigs of data.

    Then you have the export part, which I'm complaining to billing about and getting nowhere. I understand the mechanical process of having to duplicate the database to export it, I really do. The problem is that you're charged for another entire database while it's live. In the old tiers this was bad enough, but in the new tiers your cost for that database exactly doubles, because if the copy exists for even a second, you get billed for an entire day. That shouldn't be the case. This isn't made clear anywhere in the documentation, so I've been paying double, and the billing support people aren't willing, or don't know how, to make that right.

    Friday, July 11, 2014 1:24 PM
  • @Steven, right there with you, buddy - though I have the added pain of having to migrate away from Federations as well!

    MS really seem to make a point of pushing "best practice" guidance for a couple of years, then completely abandoning said guidance in favour of whatever new flavour comes along.
    Federations is very much a case in point, and being a veteran of the whole Silverlight debacle, I'm kicking myself for not having learned my lesson already.

    Going forward I'll be taking any guidance from MS with a healthy dose of arsenic.

    I originally called MS out on the fundamental business model problem a few days after they announced the new tiers.  See my post for details.

    To their credit, Guy and Tobias from MS were proactive in reaching out and chatting with me directly; however, the bottom line seems to be that the new tiers model is the future of Azure SQL Database.
    So really there is no room in their business model for proper cloud-scale database strategies that involve sharding across many small databases.
    They did try to suggest that the Basic tier could work for me, but seriously, just look at the terrible performance results on S2 (and this is after they literally DOUBLED the resources of S2).

    I've spent a fair bit of time investigating Amazon RDS now, and I must say it has been a pleasant surprise.

    It's very easy to deploy and set up a SQL Server RDS instance, mirroring is now available, and I can host many small DBs on one instance, with the allocated CPU/IOPS/memory resources clearly defined and easily scaled up.

    RDS means a little more admin work, and I still prefer the elegance of Azure SQL, but it seems that MS no longer wishes to have startups or SaaS businesses on Azure.
    Everything about the new tiers reeks of monolithic enterprise requirements, with pricing to match.
    I don't blame MS for going after the high-value enterprise market; it just sucks that their decision to kill the pay-per-GB Web/Business option has created a ton of work and risk for me in the coming months.

    It's also going to be a pain to learn EC2 properly and move all my Cloud Services stuff there, but honestly there's no point staying on Azure if I have to move my DBs to RDS.
    Amazon's Elastic Beanstalk seems to be pretty close to Cloud Services; I still need to try it out properly.

    Oh, one more thing: if you don't have much (or any) T-SQL code in your database, take a look at Postgres on RDS. I'm toying with a migration to Postgres, given the fairly similar syntax plus cheaper RDS fees.

    • Edited by PaulDBau Friday, July 18, 2014 4:40 AM
    Friday, July 18, 2014 4:26 AM
  • This really concerns me too. We're on Business with a small database and quite pleased with the performance. Our business model doesn't require rapid growth in the DB, but it does require good performance and a gradual increase in transaction load, so these new SQL tiers are going to kill us on cost.

    Microsoft may have some "problem" they are trying to solve with the new offering. It is fine to come up with new SQL options that fit companies with large DB requirements, but don't leave the rest of us out in the cold. What logical reason is there to make it financially impossible for those of us at the other end of the spectrum (smaller DBs)?

    Leaving Web/Business alone while adding the new SQL options would provide a better overall offering.

    Note: we are a BizSpark startup, and this will impact us significantly as currently planned by Microsoft.

    Tuesday, September 9, 2014 6:24 PM
  • Note: the above was accidentally posted using an old MSDN account. This is my correct account for this discussion.
    Tuesday, September 9, 2014 6:27 PM
  • I updated my Web/Business database to an S1 and didn't notice any real performance differences. This is basically a SaaS CRUD app (to simplify things), and I do optimise the queries and lean on caching.

    My observation is that the performance issues are mostly related to bulk loading of data. When I need to do a bulk load, which is not that often, I upgrade the DB to Premium, load the data, and then downgrade again.

    Tuesday, September 9, 2014 10:58 PM
  • I have to agree that my performance issues were limited to the migration. The good news is that they are switching (have switched?) to an hourly billing model, so if you need the transaction throughput, you don't need to pay for an entire day's worth. Since the move, the only hitch I've seen is some slowness in purging some logging-type data.
    Tuesday, September 9, 2014 11:02 PM
  • Just changed my database from Web to Standard S2 (50 DTUs).

    The performance after the change took effect was so bad that eventually I had to switch back.

    A simple query would take 15 seconds; more complex queries would time out the app before they completed (the app's timeout is 30 seconds).

    Here you can see the difference between Web and Standard - S2:

    https://onedrive.live.com/redir?resid=CE2F7E62E16A7CFD!56154&authkey=!ANASNEPuo__2XPg&v=3&ithint=photo%2cjpg



    SniperED007 - Torchbear - the world's biggest relay! (http://www.torchbear.com)

    Thursday, September 11, 2014 8:50 AM
  • Holy crap. S0 is the one that is comparable in price to Web, and those stats are with S2. YIKES! Thanks for sharing.

    May we all make money in the sequel.

    Thursday, September 11, 2014 10:30 PM
  • A lot of people seem so shocked, but think about this: they've moved from pricing on size to pricing on computing power, which is really the expensive thing and in line with the other services. Storage is cheap. It made no sense to be charged through the nose for a giant database that wasn't heavily trafficked. Conversely, a small database with enormous volume got off easy when it was using more significant computing resources. Remember, you used to rent or buy the hardware and pay $4k+ per CPU for the SQL license. Let's not obscure the value proposition here.

    I'm thrilled with the change. I have a lot of data but the query volume is not huge.

    Thursday, September 11, 2014 10:41 PM
  • I would be interested to see what kind of queries you are running and how much data. I am running on S1 at the moment, and most queries take only a few ms to execute. Even more complicated queries, for things like reporting, take only a couple of seconds once I have optimised and indexed correctly.
    Friday, September 12, 2014 12:01 AM
  • "pay $4k+ per CPU for the SQL license"

    True. And that is a reasonable consideration for a large corporation, but the other 90% of the market is what benefits immediately from going hosted, and that price-target market is the customer that has a VM and SQL Express running a database. That's about $200/month dedicated.


    May we all make money in the sequel.

    Friday, September 12, 2014 12:23 AM
  • There are some serious issues with the new tiers, like it or not. We've been using Azure for more than 4 years now and have multiple deployments on it - some where multiple customers share a deployment, and some specific to individual customers. We encourage all our new customers to go with our Azure offering for simplicity and speed of getting started.

    Since we started testing these new tiers, for the first time, we are scared of what we will offer our clients. It is true that we suffered before because of the way Web and Business were built - someone else on the same server could grab almost all the resources and impact our app - but that happened less than 2% of the time. It was so cheap that we understood the risk and lived with it.

    First came Premium, and then the new Standard and Basic tiers. We tested them all because we thought they would help make our app more stable, and even though it is more stable, even a P1 that costs about $400/month is slower than what we had with Web and Business. Maybe with the kind of database you have, you won't see the difference, but if you have a database with more than 15 tables and millions of records, you start to notice it.

    If you really want to see the difference, simply add a varbinary(max) or an nvarchar(max) column and you'll see the impact right away. I already posted the timings we got uploading files to SQL Azure on the new tiers, and I am happy that, with some help from the Azure team, we were able to get almost the same upload speed on P1 that we had with Web/Business. It costs more, but at least we have about the same performance for uploading new files.

    Then we noticed that deleting rows with varbinary data was just horrible. What took about 5 seconds on Business would take 1m30s on P1 and 30s on P3. That means even paying for a multi-thousand-dollar database each month would slow our system down. We really don't like that idea, since we have archiving processes that used to take minutes; now they might take hours for the same work.

    Then another issue occurred last week. We needed to update our database schema to add a second column of type nvarchar(max) to a table, with an update query copying data from the first column to the second. In our test environment we only had 1,370 records to update, 111 of which had more than 8,000 chars in the column. We ran an update which took 2 minutes on a dev machine with the same database; on our P1 instance, after the query had been running for 35 minutes, I got a message from the server saying that the transaction log was full. I then tried updating only the records below 8,000 chars: that took 100 ms. Updating the 111 others would fail again and again. I ended up writing T-SQL to process them one after the other, committing between each; it worked, but was slow as hell. I then tried the same thing on Business instead of P1... it worked. Took a while, but worked.
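    For anyone hitting the same transaction-log-full error, what got us through was batching the copy so that each auto-committed statement stays small (a sketch; the table, column names, and batch size are invented):

        DECLARE @batch INT = 50;
        WHILE 1 = 1
        BEGIN
            -- In autocommit mode each statement is its own transaction,
            -- so the log can truncate between batches.
            UPDATE TOP (@batch) dbo.Documents
            SET NewBody = OldBody
            WHERE NewBody IS NULL AND OldBody IS NOT NULL;

            IF @@ROWCOUNT = 0 BREAK;
        END;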

    We have millions of records in production databases, and we will need to schedule a long maintenance window to accomplish this. We might move to a VM with SQL Server on it, but then we lose all the geo-replication and backups. I REALLY hope someone at Microsoft will read this and understand that this is an issue and that the new model does not fit real-life applications.

    Friday, September 12, 2014 1:24 AM
  • ...but if you have a database with more than 15 tables and millions of records, you start to notice it. If you really want to see the difference, simply add a varbinary(max) or an nvarchar(max) column and you'll see the impact right away. I already posted the timings we got uploading files to SQL Azure on the new tiers, and I am happy that, with some help from the Azure team, we were able to get almost the same upload speed on P1 that we had with Web/Business. It costs more, but at least we have about the same performance for uploading new files.

    I have databases with 30 tables, millions of records, varbinary and nvarchar(MAX) columns. I'm running on S0 right now with CPU around 8% and I/O around 5%, so I really don't think the volume and types of data have anything to do with the performance.

    SQL Server (on-premises or Azure) is pretty robust in terms of performance, but I think the thing Azure reveals is that you can get away with a whole lot when you run on dedicated hardware. Throttle the resources, and all of a sudden issues around indexing or poor querying become very obvious. You can go into the janky Silverlight management portal (the link is on your database summary page in the regular management portal), choose "query performance", and see immediately who the worst offenders are.
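    If you'd rather skip the Silverlight portal, a query against the standard query-stats DMVs gives roughly the same answer (a sketch; order by total_logical_reads instead to find the I/O hogs):

        SELECT TOP (10)
               qs.total_worker_time,
               qs.execution_count,
               SUBSTRING(st.text, qs.statement_start_offset / 2 + 1, 100) AS statement_start
        FROM sys.dm_exec_query_stats AS qs
        CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
        ORDER BY qs.total_worker_time DESC;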

    Friday, September 12, 2014 3:07 AM

    SQL Server (on-premises or Azure) is pretty robust in terms of performance, but I think the thing Azure reveals is that you can get away with a whole lot when you run on dedicated hardware. Throttle the resources, and all of a sudden issues around indexing or poor querying become very obvious. You can go into the janky Silverlight management portal (the link is on your database summary page in the regular management portal), choose "query performance", and see immediately who the worst offenders are.

    This is my experience too. Having a dedicated box with lots of memory for what is a pretty trivial app will hide design faults and performance issues in your application. With SQL Azure, where you might not have such a speedy box, performance as a feature really comes to the fore. Good performance is possible even on the Standard tiers; I run my SaaS www.youreontime.com on it.

    Also, I think storing lots of blobs in the database is bad design; you should probably store them in blob storage.

    Friday, September 12, 2014 4:41 AM
  • The amount of confusion in this thread is quite impressive.

    • "I really don't think the volume and types of data has anything to do with the performance."

      Of course the amount of data is going to have an impact on performance. If the data can be cached in the available RAM, it's going to be a lot faster than if SQL Azure has to go out to the hard drive (which still isn't SSD, right?) to read the data. So of course performance will degrade if the data you need to access can't fit in the available RAM. This is why we have RAM in our computers.

    • "Good performance is possible even with the Standard tiers"

      Claiming that the Standard tiers have "good performance" is like claiming that a floppy drive from the early 90s has good performance, without restricting that statement to writing 1 byte per minute asynchronously in the background. A statement like that makes zero sense and adds nothing.

      Of course you can create a solution where the Standard tier happens to work. That is not what people are complaining about and seeking a solution to.



    The fact that the performance of the new Standard tiers is lower than that of the Web/Business editions is already well documented elsewhere and linked from this thread.

    My personal experience is this:

    Microsoft keeps claiming that the performance before was not predictable and that a few users were lucky to get good performance. But I doubt that only a few users had good performance prior to this change. Every day we create ~15 new SQL Azure databases and import a couple of GB of data into each (30-40 GB in total), then run queries on it. We've done this for years, and hence created roughly 5,000 databases per year. We have not had any performance issues doing this. Now, when we try to use a Standard S2 database, the queries simply time out, and we have to use a Premium database, which just isn't economically feasible for us.

    To us, this change isn't about moving from a solution which barely worked before to one which now works. It's moving from a solution which worked almost perfectly before to one which does not work at all for us.

    The core issue is this:

    For several years, a lot of companies have been using Windows Azure SQL Database successfully. Now Microsoft - on pretty short notice - will force all these companies either to accept a huge price increase or to re-architect their solutions. If Microsoft were introducing a completely new service running alongside the other services, with a different design approach and a different price tag, it would not be an issue. If Microsoft gave 5 years' notice or something like that, that would be fine as well: companies that wanted to use it could start using it. But this isn't the case - Microsoft is disrupting a lot of companies' business unless they take active action and spend a lot of money on this.

    This is Microsoft disrupting a lot of applications without providing a sane upgrade path and without talking about what's to come in the future. Who knows, maybe they have a performance increase planned 3 months from now, which we will learn about only after we've spent tens of thousands to re-architect the solution.

    At the same time, Microsoft keeps claiming that this isn't actually an issue, because "before, the performance was not predictable" - as if that would somehow solve anyone's problem.



    • Edited by M. Knafve Friday, September 12, 2014 8:55 AM
    Friday, September 12, 2014 8:49 AM
  • I would have to echo Knafve's feelings.

    Azure is a fantastic platform, and we have invested heavily (and happily, until now) in it. It baffles me that nobody at Microsoft could anticipate the market response to such a poor price/performance ratio...


    Friday, September 12, 2014 9:06 AM
  • Hi guys,

    We are also very disappointed by the performance results we saw when we ported from Business to Standard (S2).

    We would grudgingly accept the new price tag of Premium P1 (around €347/month), but all reports say that even P1 doesn't really compare to a Business database in terms of performance. I guess you have all read this article: http://cbailiss.wordpress.com/2014/07/06/microsoft-azure-sql-database-performance-tests-summary

    We would be forced to switch to P2, which is now around €693/month - and this is unacceptable for us. We are a (very) small team.

    We are now actively seeking alternatives - especially Amazon RDS might be an option. I would be happy if others could add information about possible solutions to escape from Microsoft's SQL threat.

    Cheers, Mark.

    Friday, September 12, 2014 9:48 AM
  • I don't think there's any confusion here... I just think there's a mismatch between expectations and database/application design.

    Of course the amount of data is going to have an impact on performance. If the data can be cached in the available RAM, it's going to be a lot faster than if SQL Azure has to go out to the hard drive (which still isn't SSD, right?) to read the data. So of course performance will degrade if the data you need to access can't fit in the available RAM. This is why we have RAM in our computers.

    Sure, but the volume of data still isn't the determining factor of performance. You can have tens of millions of rows in multiple tables and find the one you want in a matter of milliseconds. However, if you intend to do a 10-way join in a sproc with some kind of calculation and poor indexing, the performance won't be good. As others have alluded to, that problem might be hidden if you have a dedicated box, but even then, at a certain scale you're going to hit a wall where it becomes pretty obvious.

    Claiming that the Standard tiers have "good performance" is like claiming that a floppy drive from the early 90s has good performance, without restricting that statement to writing 1 byte per minute asynchronously in the background. A statement like that makes zero sense and adds nothing.

    Of course you can create a solution where the Standard tier happens to work. That is not what people are complaining about and seeking a solution to.

    To me, saying it isn't working is what "adds nothing", because my experience is exactly the opposite. I don't know the details of your situation or how your app is designed, but there are a lot of design problems that are hidden by unlimited resources, including delegating calculations to the DB, poor indexing, read-intensive queries, etc. They can generally be sniffed out early on, even locally, when you look at the execution plans. Maybe my bigger point is that if you're not willing to look at your queries and see which ones perform poorly and why, then you're better off going back to dedicated hardware or a big VM or something. Azure won't be for you.

    And those kinds of problems definitely will surface with the new tiers, and I believe they should. Everything else in the cloud is based on the consumption of computing resources, so why should this be any different? The idea that under the old tiers you could have a small database but incur massive I/O and CPU usage without consequence doesn't seem like a good way to keep the entire platform from crumbling under use by a relatively small number of people consuming most of the resources.

    And I think blobs are OK. They're not the devil people make them out to be. :)


    • Edited by Jeff Putz Friday, September 12, 2014 12:26 PM
    Friday, September 12, 2014 12:22 PM
  • Jeff,

    You do know that there have been several benchmarks of the performance of the new database tiers, and that these tiers show a significant decrease in performance, right? I'm not sure what value there would be in me adding my own queries, since it's already a known fact that the performance has decreased significantly on average. Here's one example.

    Also, I'm not sure where you're getting the idea that we're talking about heavy joins or that we don't want to watch query plans. We're seeing the performance degradation on a simple SELECT from a single table, with a WHERE clause filtering only on an indexed date column. The table I last tested on contains roughly 50-60 million rows. Can you clarify why you think the issue is caused by us not optimizing the queries, despite there being quite good benchmarks showing the significant performance decrease?
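    To be concrete, the shape of the statement that degrades is nothing more than this (table and column names changed):

        -- Single table, range filter on an indexed datetime column.
        SELECT Id, EventTime, Payload
        FROM dbo.Events
        WHERE EventTime >= '20140901' AND EventTime < '20140902';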

    I don't remember reading that Web/Business had an unsustainable pricing model which would make the platform crumble. We had several meetings with people from Microsoft when designing the solution, and they all told us that the Web/Business tiers were the way to go for us. Are you telling me that your colleagues were wrong?


    • Edited by M. Knafve Friday, September 12, 2014 2:35 PM
    Friday, September 12, 2014 2:34 PM
    Yes... I've seen the benchmarks. Seems to me they measure exactly what one would suspect, and what I keep saying: you're paying for throughput instead of size now. The drumbeat of "performance is suffering" just doesn't seem like the right argument given the new pricing model. The performance isn't suffering; you're just required to pay for the performance you want now. If the argument was, "I don't want to pay for this performance because it doesn't make sense for my business," I would totally get that and be on board. In fact, it was that argument that kept me from moving my own stuff to Azure (or any cloud) for four or five years, because the economics of bandwidth and CPU/RAM didn't make sense until very recently.

    I don't know your schema or indexing, so I couldn't possibly know why your database isn't performing to your liking.

    I have no colleagues talking to you. I don't work at Microsoft. But yes, it's certainly my opinion that pricing on size was totally the wrong thing for them to do, because storage is cheap. CPUs and throughput are not cheap. It only makes sense to me that they would lean the pricing in this direction, because it better aligns with the "get what you pay for" model of VMs.

    Friday, September 12, 2014 2:47 PM
    Again, I (and most other people) are not complaining about the fact that Microsoft now measures and throttles on throughput. You seem to be missing the point, and you're distracting from the core issue when you question other people's willingness to look at query plans and other metrics. Blanket statements like "the performance is good" likewise distract the discussion from the issue at hand.

    I've already pointed out what I consider to be the core issue in my previous post, and if you're not understanding why a ton of people are very upset about this, I recommend you read it again.

    (Sorry - your forum profile says "I work at Microsoft (Alumni)" so I thought you were working for Microsoft.)


    • Edited by M. Knafve Saturday, September 13, 2014 6:02 AM
    Saturday, September 13, 2014 6:02 AM
    I would have to echo Knafve's sentiment.

    Azure is a fantastic platform, and we have invested heavily (and, until now, happily) in it. It baffles me that nobody at Microsoft could anticipate the market response to such a poor price/performance ratio…


    Just to add my penny's worth on this, I completely agree - how come MSFT didn't see this coming?

    FYI, I foolishly upgraded one of our databases from Web to S0, without looking at the background story on the new tiers, and paid the price. The application was running on a ~3 GB Web database, so I thought S0 would be enough to start with... the application died!

    We tracked the issue to a view, which we have run through performance tuning, so it's about as tuned as it can get. The results (taking plan caching into account; see the timing sketch after the numbers) were:

    Web - 3 seconds

    Standard S0 - 90 seconds

    Standard S1 - 11 seconds

    Standard S2 - 5 seconds
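
    For anyone wanting to reproduce the comparison, the timing approach was roughly this (a sketch - dbo.vClientSummary is a made-up stand-in for our actual view):

        SET STATISTICS TIME ON;   -- elapsed and CPU time appear in the Messages tab

        -- The first run warms the plan cache...
        SELECT * FROM dbo.vClientSummary;

        -- ...so record the elapsed time of this second run, which excludes
        -- plan compilation and compares the tiers like-for-like.
        SELECT * FROM dbo.vClientSummary;

        SET STATISTICS TIME OFF;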

    Currently we pay about 11 GBP per month for this database. Already I am looking at about 40 GBP (we have moved to S2), but we will almost certainly need to go to P1, which is 300 GBP.

    So, MSFT, you want me to pay you an extra 290 GBP a month to have the same performance I do now.

    I for one think that Amazon will be rubbing their hands in glee at this - my next bit of R&D is how to migrate to them.

    Regards

    Peter


    Newbie web developer


    Saturday, September 13, 2014 12:36 PM
  • Three seconds is a long time for a query to run in an online production environment.

    Again, I (and most other people) are not complaining about the fact that Microsoft now measures and throttles on throughput. You seem to be missing the point...

    If I'm missing your point, it's because you aren't making it clear what it is. The only thing I've gleaned from your posts is that you don't want to pay for the performance you were essentially getting for free before.

    I'm sure that, like everything else, the cost for this service will go down. Here, you're paying for some portion of a VM plus the license for SQL Server. Look at the pricing of the VMs with SQL Server... they're not cheap at all. I think if you want to use SQL as a PaaS resource, and do it cheaply, you'll have to look very hard at how your app queries it, and perhaps change your approach if it isn't fast enough. And hey, at the end of the day, that's good for your customers, too.

    And by the way, I have been there. I had a particular purge query that was timing out from the web-based app (which defaults to a 30-second timeout), and the problem was just my indexing. It seemed a little weird that it would take so long to delete a few thousand rows, but I figured it out, and now it takes under two seconds.
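
    In case it helps anyone with a similar purge, the fix amounted to an index matching the delete's filter, plus batching to keep each statement short. The names below are made up; this is a sketch of the pattern, not my actual schema:

        -- Hypothetical names. The real win was an index covering the filter
        -- column, so the DELETE doesn't have to scan the whole table.
        CREATE NONCLUSTERED INDEX IX_Sessions_LastActivity
            ON dbo.Sessions (LastActivity);

        -- Deleting in modest batches keeps each statement well under the
        -- web app's 30-second timeout and avoids long-held locks.
        WHILE 1 = 1
        BEGIN
            DELETE TOP (5000) FROM dbo.Sessions
            WHERE LastActivity < DATEADD(DAY, -30, GETUTCDATE());

            IF @@ROWCOUNT = 0 BREAK;
        END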

    Saturday, September 13, 2014 12:57 PM
  • Jeff,

    damn right I'm complaining - why aren't you?!

    My costs are going to go from ~10 GBP to ~300 GBP ON JUST ONE DATABASE - just in case you're hung over like me, that's a 30x increase in my cost base.

    Do you think that MSFT want small businesses to be on this platform? 

    Looking at the other people in this thread who have the same problem, and other people I know in the UK, it looks like Azure SQL is about to become a tool only for premium customers, and not one for small ISPs like my company.

    My point is that this level of increase is unrealistic for small businesses to absorb.

    Peter


    Newbie web developer

    Saturday, September 13, 2014 1:17 PM
  • I guess we just have different views of what is or isn't a reasonable expense for a business. If I can't get away with spending less than $50/hour for a developer, spending even $500/month to run an app doesn't seem like that big of a deal. The market is what it is. Maybe you're better off on your own dedicated hardware and buying your own SQL license. Only you can really make that decision. To your point, ScottGu did say in his blog post this week that they're looking at a cost structure to pool resources at some point (many databases with aggregate throughput and size limits), so maybe that will help you out. It would probably cut my cost in half, but my cost as of November 1 is going to be all of $30 anyway.

    For me, I think what I'm paying is crazy cheap for a service that not only includes the licensing, but the virtual hardware that includes a four-9 SLA, replication to three copies, easy restore without understanding anything about how SQL Server works, no maintenance of logs... I think it's a steal. On my own, I'd have to buy three licenses and rent three servers and waste a bunch of time on maintenance to get the same thing.

    I did a blog post on my spend. For reference not included in that post, I do about 250 gigs of outbound data, and the sites serve about 1.5 million requests per month. Not huge at all, but looking at the SQL usage, I could go up 5x and still be in the same service tier. Given some of the caching I do, it could probably go even higher.

    Saturday, September 13, 2014 7:06 PM
    I suspect it might just be a difference in the types of markets served. Some ISVs have to silo their data per customer for regulatory, policy, or other reasons. I'd imagine in your case, if the 1.5 million requests represented, say, 500 customers, and each of those customers' data had to be kept in its own database, would the business case still make sense at 500 databases' worth of cost? (Don't answer that; it's not my business.) It is just a different paradigm.


    May we all make money in the sequel.

    Saturday, September 13, 2014 8:05 PM
  • I am the author of the series of blog posts mentioned above.

    I think there is a lot to like about the new Azure SQL Database Service Tiers. The biggest change for existing customers is getting accustomed to the new way of thinking about Azure SQL Database: we now need to buy "boxes of performance" instead of "boxes of storage space". The platform can still provide great performance, but we really have to pay a lot of cash to get it.

    Based on the preview pricing, the cost comparison was poor for everyone.

    GA pricing has eased things a little compared to preview pricing, particularly for those with larger, less-used databases.
    I.e. if you are a customer with a 100 GB database who never used more than one-third of the "maximum performance" of Web/Business, then S2 represents a great deal: your costs have gone from $176 per month to $75 per month.

    The difficulty many existing customers have, particularly those with small and/or performance-heavy databases, is that the price change associated with this announcement, even given all the other great new features, is very substantial indeed and will no doubt push many customers to look elsewhere. Whether it's right or not, customers take the view: I pay X to get ABC now, and on the new tiers I need to pay Y to get the same ABC. X is much less than Y, so that is certain to cause some negative reaction, particularly where the difference is significant in terms of overall spend (which I agree, for many large customers, it may not be).

    The new Service Tiers are more robust (less volatile performance over time). I agree with Jeff that this is the way the service should have been architected all along, and that paying by database size was crazy. In hindsight, Microsoft would probably admit this was a decision driven by a rush to get something into the marketplace - measuring and controlling database size is something SQL Server has done for years; it's quick and easy. Controlling and segregating performance is more complex, and the necessary controls have only partially existed in SQL Server until now.

    Of course, knowing that is little consolation to some customers who are now facing difficult decisions ahead...

    For information, I have also updated my earlier blog series now that we are at GA. See:
    http://wp.me/p4hBYY-cb




    Tuesday, September 16, 2014 10:54 AM
    Folks, my name is Guy Haycock, and I'm a product planner in the Azure SQL Database space. I'm working directly on these business model changes.

    I'm happy to talk offline with anyone who wants to: guyhay@microsoft.com. It's difficult to make accurate comparisons from one customer's application and Azure SQL Database usage to another here in this public space.

    We have over a million databases in the SQL Database service, and in my experience they don't fall into a small set of patterns. Your mileage may vary.

    As we hinted shortly before GA, we are working on a follow-on business model - so feel free to email me privately.

    Lastly, we've recently blogged about how you can understand your own resource consumption on Azure SQL Database today. That post is very helpful for understanding whether Basic/Standard/Premium means a price decrease for you, no change in your monthly bill, or a price increase. I've personally worked with many customers in all three categories, and some that need the follow-on business model.
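
    As a starting point, here's a minimal sketch of the kind of query the blog describes: the sys.resource_stats view in the master database reports historical resource use as a percentage of your database's current tier limits (column names as documented; check the blog post for details):

        -- Connect to the master database. Each row summarizes a short
        -- interval of CPU, data I/O, and log-write use relative to the
        -- limits of the tier the database was in at the time.
        SELECT start_time, end_time,
               avg_cpu_percent, avg_data_io_percent, avg_log_write_percent
        FROM sys.resource_stats
        WHERE database_name = 'YourDatabase'   -- substitute your database name
        ORDER BY start_time DESC;

    If those percentages sit well below 100 for your normal workload, a lower tier may serve you fine; sustained values near 100 mean you're hitting your tier's limits.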

    I'd be happy to help.

    Tuesday, September 16, 2014 8:31 PM