Interpreting "Transactions Per Second"

  • Question

  • I have a monitoring tool that collects transactions per second, and I'm trying to decide how to use that information.  If I make some performance improvements, shouldn't the following generally be true:
    - The number of transactions per second should increase; in other words, the instance should be able to process more transactions in the same amount of time compared to before the improvements?
    - The amount of memory used should decrease because of fewer table scans and less use of tempdb?

    Or is the above misleading if taken by itself?  Should I instead look at something like the amount of data being moved, e.g. MB/sec, with the thought that moving more data in the same amount of time should correspond to an increase in performance?
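
    In case the source matters: below is a minimal sketch of how the number could be sampled directly from SQL Server, assuming the tool's figure maps to the Transactions/sec performance counter (I don't know that it does).  It is a cumulative counter, so it has to be sampled twice and the difference divided by the interval.

        -- Minimal sketch: Transactions/sec is a cumulative counter, so sample it
        -- twice, ten seconds apart, and turn the difference into a per-second rate.
        DECLARE @v1 bigint;

        SELECT @v1 = cntr_value
        FROM   sys.dm_os_performance_counters
        WHERE  counter_name = 'Transactions/sec'
          AND  instance_name = '_Total';

        WAITFOR DELAY '00:00:10';

        SELECT (cntr_value - @v1) / 10.0 AS transactions_per_sec
        FROM   sys.dm_os_performance_counters
        WHERE  counter_name = 'Transactions/sec'
          AND  instance_name = '_Total';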

    Tuesday, September 11, 2018 5:00 PM

All replies

  • I find batches per second more meaningful.

    I take it the counter you are looking at for memory is \Memory\Available Bytes. I would probably look at Page Life Expectancy as a counter of internal SQL Server memory usage, but this can be inaccurate at times.
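
    If you want to sanity-check the tool against the raw counters, here is a quick sketch.  Page life expectancy is a point-in-time value; Batch Requests/sec is cumulative, so it needs two samples to become a rate, as in the sketch under the question.

        -- Snapshot of the two counters mentioned above.
        SELECT RTRIM(object_name)  AS object_name,
               RTRIM(counter_name) AS counter_name,
               cntr_value
        FROM   sys.dm_os_performance_counters
        WHERE  counter_name IN ('Page life expectancy', 'Batch Requests/sec');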

    Tuesday, September 11, 2018 5:11 PM
  • Not sure of the actual counter, as I'm using a commercial monitoring tool.
    Tuesday, September 11, 2018 6:10 PM
  • This question, as well as your other one, is not entirely simple to answer.

    Yes, it is reasonable to expect transactions per second to increase - at least if the system has an OLTP workload and is hammered by queries all the time.  On the other hand, if you have a data warehouse where a few analysts run an occasional query, the number of transactions may not change, yet the analysts are happy to see that the response time has improved.  They just don't run more queries because of it.

    The amount of memory used may decrease, but since a single operation like an index rebuild on a large table can fill the memory, that is not very likely.  But you may see Page Life Expectancy increase.

    What I primarily look at when I do performance tuning is the execution time of queries, and I try to find the longest-running queries and tune them.  There will always be a top query, but when the top query seems to be one that is the core business of the system, I might be content with what I have done.
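
    As a minimal sketch (not tied to any particular tool), one way to find the longest-running statements from the plan cache, cumulative since the plans were cached:

        -- Top statements by total elapsed time in the plan cache.
        SELECT TOP (20)
               qs.total_elapsed_time / 1000.0                        AS total_elapsed_ms,
               qs.execution_count,
               qs.total_elapsed_time / 1000.0 / qs.execution_count   AS avg_elapsed_ms,
               SUBSTRING(st.text, qs.statement_start_offset / 2 + 1,
                         (CASE WHEN qs.statement_end_offset = -1
                               THEN DATALENGTH(st.text)
                               ELSE qs.statement_end_offset
                          END - qs.statement_start_offset) / 2 + 1)  AS statement_text
        FROM   sys.dm_exec_query_stats AS qs
        CROSS  APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
        ORDER  BY qs.total_elapsed_time DESC;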

    In the same vein, for your other question, I would look at using Query Store if the instances are on SQL 2016 or later.  This permits me to gather data on query execution time, so I can see whether there has been any difference, and also whether there was a difference in workload.  That is, if you only use some general measure, you may conclude that the move was successful because throughput or whatever increased, when in fact it was not because of the move, but because the workload was lighter that week.
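
    As a rough sketch of what I mean, something along these lines compares the average duration per query between two periods in Query Store; the date boundary is a placeholder that you would replace with the time of the change you are evaluating:

        -- Average duration (microseconds) per query before and after a cut-over date.
        WITH per_query AS
        (
            SELECT p.query_id,
                   AVG(CASE WHEN i.start_time <  '20180901' THEN rs.avg_duration END) AS avg_duration_before_us,
                   AVG(CASE WHEN i.start_time >= '20180901' THEN rs.avg_duration END) AS avg_duration_after_us,
                   SUM(rs.count_executions)                                           AS executions
            FROM   sys.query_store_runtime_stats          AS rs
            JOIN   sys.query_store_runtime_stats_interval AS i
                      ON i.runtime_stats_interval_id = rs.runtime_stats_interval_id
            JOIN   sys.query_store_plan                   AS p ON p.plan_id = rs.plan_id
            GROUP  BY p.query_id
        )
        SELECT TOP (20)
               pq.query_id, pq.executions,
               pq.avg_duration_before_us, pq.avg_duration_after_us,
               qt.query_sql_text
        FROM   per_query                  AS pq
        JOIN   sys.query_store_query      AS q  ON q.query_id       = pq.query_id
        JOIN   sys.query_store_query_text AS qt ON qt.query_text_id = q.query_text_id
        ORDER  BY pq.avg_duration_after_us DESC;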

    Yes, this probably takes more skill, time, and analysis.  But then again, you have these servers to run queries, so how they perform is of utmost interest.


    Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se

    Tuesday, September 11, 2018 9:18 PM
  • Just to echo what Erland said, measuring throughput is surprisingly difficult.

    It helps if you can create a test workload that you can run repeatedly.  I had a set of fifty reports I could run, so I could get a nice simple clock time to see whether things were better or not.

    A big project should have a load tester like that, a standardized load of some kind, that can be run to exercise many aspects of a system.  This is separate from QA, btw.  I hate it when QA passes a new release, but then performance drops off 90% in production.
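
    A crude version of that harness, with placeholder procedure names standing in for whatever reports make up the standardized workload:

        -- Hypothetical timing harness: run the fixed report workload and record
        -- wall-clock time for the whole batch.  dbo.Report01 etc. are placeholders.
        DECLARE @start datetime2(3) = SYSDATETIME();

        EXEC dbo.Report01;
        EXEC dbo.Report02;
        -- ... the rest of the fixed set of reports ...

        SELECT DATEDIFF(ms, @start, SYSDATETIME()) AS total_elapsed_ms;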

    Josh

    Tuesday, September 11, 2018 10:38 PM
  • What tool are you using?  Someone may know it.

    Tuesday, September 11, 2018 11:21 PM
  • The tool is miniDBA

    https://minidba.com/

    Wednesday, September 12, 2018 2:41 PM