VS2010 does not generate as many requests per second as VS2008

  • Question

  • Recently I started slowly moving from VS2008 to VS2010 for my testing. I was happy with VS2010 until I started running load tests against some powerful servers. The problem I'm having is that VS2010 cannot generate a comparable number of requests per second to VS2008 on the same hardware with exactly the same load test.

    On the same computer VS2008 will generate 1,800 req/sec while VS2010 will do only 500 req/sec. In both cases the CPU on the machine running the test was at 100% (I specifically chose weak test machines so that the server the load test was running against was not the bottleneck). This result implies that VS2010 is more than 3 times slower than VS2008, which in turn means my testing rig needs to grow to at least 3 times its size and thus becomes hard to manage, which might outweigh all of the benefits VS2010 brings to the table.

    So, is there a way to make VS2010 perform at the level of VS2008?

    Monday, July 11, 2011 11:13 PM

Answers

  • I actually found a solution, and feel somewhat stupid. The main reason for the slower performance in 2010 vs. 2008 is that the "Timing Details Storage" setting under Run Settings --> [your run setting name] defaults to "AllIndividualDetails" in 2010 and to "None" in 2008. Changing it to None in 2010 got me much closer to the performance of 2008. To get the VS2010 test all the way to the same performance level I had to re-write it from scratch in VS2010; that version performed on the same level as VS2008, except that it still uses only one core even though I have a Virtual User Pack installed.

    The interesting thing is that I had this setting set to None in VS2008, but when I imported the project into 2010 it got changed to AllIndividualDetails.
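
    For anyone who wants to check this outside the IDE: the run setting is stored in the .loadtest file itself, which is plain XML, so you can inspect or fix it in a text editor. Roughly like the fragment below (I'm quoting the attribute name TimingDetailsStorage from memory, and "Run Settings1" is just a placeholder, so verify against your own file):

        <RunConfigurations>
          <!-- "AllIndividualDetails" (the VS2010 default) records per-request timing data;
               "None" (the VS2008 default) skips that collection and its overhead. -->
          <RunConfiguration Name="Run Settings1" TimingDetailsStorage="None" />
        </RunConfigurations>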

     

    • Marked as answer by Arthur Step Thursday, September 15, 2011 5:57 PM
    Thursday, September 15, 2011 5:57 PM

All replies

  • Hello Arthur,

    Thanks for your post.

    I can reproduce the same issue. I ran exactly the same load test in VS2008 and VS2010, and the Requests/Sec results are completely different between the two.

    Based on my understanding, the load test processing in VS2010 is much more complex than in VS2008. That may be why your load test runs slower in VS2010.

    You might consider submitting feedback on the Microsoft Connect site here:

    https://connect.microsoft.com/VisualStudio/

    Thanks.


    Vicky Song [MSFT]
    MSDN Community Support

    • Marked as answer by Arthur Step Wednesday, July 13, 2011 8:57 PM
    • Unmarked as answer by Arthur Step Thursday, September 15, 2011 5:57 PM
    Wednesday, July 13, 2011 9:19 AM
    Moderator
  • I read through this thread as I am experiencing the same problem; however, I do not have VS2008 to compare to. The timing details storage setting had no effect in my case. I will continue to look at what else I can find, but I'm only getting about 320 reqs/sec out of my 2-CPU virtual Win2k8 agents, and the CPU on the agents is barely breaking 30% regardless of the virtual user count. I have a bottleneck somewhere in the agents, just not sure where yet. Will post back here if/when I find it. PS - Also noted that my test rig agents had gcServer enabled, as mentioned in another post. Apparently the agents are configured this way right out of the box now, because I didn't have to change it.
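
    In case it helps anyone checking the same thing, server GC is just a standard .NET runtime setting in the agent process's config file (QTAgent.exe.config / QTAgent32.exe.config in the agent install folder on my rig - worth double checking the file name on yours). The relevant fragment looks roughly like this:

        <configuration>
          <runtime>
            <!-- Server GC generally gives better throughput on multi-core load agents -->
            <gcServer enabled="true" />
          </runtime>
        </configuration>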
    Friday, November 30, 2012 2:04 PM
  • So like Arthur I found a solution and feel even more stupid. The limitation was the network. I was looking at the network counters on the agents and not seeing any output queuing, and the total bytes were not exceeding the bandwidth of the network card, but they WERE exceeding the limit of the network emulation speed I had set in the load test. Duh! :-)

    After switching the network setting to LAN in the load test, the Requests/Sec in the first interval of my stepped stress test already exceeded the maximum I had been seeing at full user load.

    Posted this anyway in case anyone wanders down this same path and can't figure out why they can't get more requests/sec out of their agent machines. Our test rig and VS testing are still relatively new, so I'm still vetting out some basic stuff.

    Friday, November 30, 2012 4:37 PM
  • BTW - To get what I wanted out of this test, which was a certain load AND some page times for users running at typical internet speeds, I simply added two scenarios to my load test that both call the same script, but with different network profiles.

    The vast majority of the virtual test users run at LAN speed with a step load pattern, which gives me the load I want. The other scenario runs with a 6Mbps download speed at a constant load pattern.

    The test results are aggregated by scenario, so I can separate out what my page times look like for typical users at the various load levels.

    In case the reason for this wasn't clear, what my earlier testing seems to indicate is that by running all of my users at, say, 6Mbps, I effectively throttled my 1Gbps agent to 6Mbps. This is not how I would have expected the tool to work, but it is what it is, and the work-around was easy enough to implement.
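
    If anyone wants to copy this setup, the two scenarios end up side by side in the .loadtest XML, each with its own network mix. This is only a rough sketch from memory (the scenario names are mine, and the exact element and profile names may differ between schema versions), so build it through the Load Test editor and compare against the generated file:

        <Scenarios>
          <!-- Bulk of the virtual users: LAN speed, step load pattern, drives the overall request rate -->
          <Scenario Name="Background load (LAN)">
            <NetworkMix>
              <NetworkProfile Name="LAN" Percentage="100" />
            </NetworkMix>
          </Scenario>
          <!-- Small constant-load scenario at a throttled speed, used only for page-time measurements -->
          <Scenario Name="Typical internet users">
            <NetworkMix>
              <NetworkProfile Name="Cable/DSL 6Mbps" Percentage="100" />
            </NetworkMix>
          </Scenario>
        </Scenarios>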

    Thursday, December 6, 2012 5:26 PM