I’ve been evaluating StreamInsight for a PoC and it’s working as expected (I quite like it). However, I wanted to get a few things cleared up before I have to do a show-and-tell. If you look at: (it won’t let me post the link, but if you search for "Choosing a StreamInsight Edition" you should find the page).
"This topic provides guidance to help you choose the appropriate edition of StreamInsight to meet your complex event processing application requirements. There are two factors to consider:
Event rate — The number of events that must be processed per second.
Latency tolerance — The amount of time within which the events must be consumed in order to produce the desired output.
StreamInsight is designed to handle a wide variety of event-driven scenarios and is available in two editions: Premium and Standard.
We recommend the Premium edition for applications that require an event rate exceeding 5000 events per second or that have a latency tolerance requirement of less than five seconds. We recommend the Standard edition for applications that have an event rate requirement of less than 5000 events per second and/or a latency tolerance requirement in excess of five seconds."
I understand the event rate limit, but I’m not really sure what "latency tolerance" means; the wording of it confuses me. Could anyone clear it up for me?
Also, does anyone know what would happen if I attempted to process more than 5000 events within a second while the server was running Standard? Is an error thrown from the DLL? What would happen?
I see that the Premium edition has more than one scheduler (one per core), so it can handle more events per second.
From what I’ve managed to find online, reaching the limit of the Standard edition will just increase the latency of the events being fired. E.g. I don’t think errors are thrown if you attempt to process more events in a second.
However, I’m still not sure what "latency tolerance" really means.
Surely the latency is linked more to the hardware than to the software (aside from the scheduler-per-core feature), e.g. disk, network, CPU?
- Edited by jonnysparkplugs Monday, July 21, 2014 12:59 PM
Thank you for your question. I am trying to involve someone more familiar with this topic to take a further look at this issue. Some delay is to be expected while the question is transferred. Your patience is greatly appreciated.
Thank you for your understanding and support.
TechNet Community Support
Sorry for not replying sooner. It's been a crazy couple of weeks.
So ... here's what happens. If you don't have enough available resources to schedule the queries, they start getting "backed up" ... this is what causes the latency that the docs are referencing. It's not event latency (the time from the generation of the event at the source to the final output) but engine latency ... the amount of time from the enqueuing of the event at the input to the dequeuing at the sink. With Standard edition, you only have one core doing the heavy lifting and, if it gets backed up, you wind up with engine latency.
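A toy simulation can illustrate the "backing up" (this is my own sketch, not anything from the StreamInsight engine; the arrival and service rates are made-up numbers): when the offered rate exceeds what a single scheduler can drain, the backlog and the enqueue-to-dequeue latency both grow without bound.

```python
from collections import deque

ARRIVAL_RATE = 7000   # events/sec offered to the engine (assumed)
SERVICE_RATE = 5000   # events/sec one scheduler can dequeue (assumed)
SECONDS = 10

queue = deque()       # events waiting between enqueue and dequeue
latencies = []        # per-event engine latency, in whole seconds

for second in range(SECONDS):
    # enqueue this second's events, stamped with their arrival second
    for _ in range(ARRIVAL_RATE):
        queue.append(second)
    # a single scheduler dequeues as many as it can this second
    for _ in range(min(SERVICE_RATE, len(queue))):
        enqueued_at = queue.popleft()
        latencies.append(second - enqueued_at)

# the backlog grows by (ARRIVAL_RATE - SERVICE_RATE) events every
# second, and the engine latency of the oldest events keeps climbing
print("backlog:", len(queue))
print("worst engine latency (s):", max(latencies))
```

With a second scheduler (Premium on a second core) effectively doubling the service rate, the same arrival rate would never back up at all.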
Now ... if your input queue gets full ... over 200K events in the input queue waiting to be processed ... they'll get dropped (if you are using the Reactive model) or your input adapter will get paused (if you are using the "legacy" model) until it empties.
The exact threshold is a pretty nebulous figure. It depends a lot on the complexity of your queries, how well they are optimized and, of course, your processor speed. I have, personally, managed to fill the queue with Premium edition at rather low event rates (< 200 events/sec) due to some pretty poorly written queries. Tuning those same queries significantly increased the throughput and eliminated any queuing at all ... with over 5K events/sec on a Surface Pro. I've also pushed over 150K events/sec with Premium (average, sustained event rate) on a quad core i7 laptop for days on end with no queuing.
I know it's not the clear-cut answer that you're looking for. There isn't one. How much throughput you can handle depends most significantly on the complexity of your queries. So ... you need to test ... and test ... and test. And possibly tune. Run your queries with something close to your target event rate for several days non-stop. Keep an eye on the input queue performance counters. If your queries can use the system time, have your sink compare the event start time with the current system time to get an idea of the total engine latency. Keep in mind that some of the temporal operators will impact the appearance of engine latency, as will your CTI generation. If, for example, you generate CTIs with a delay of 10 seconds, you'll always have at least 10 seconds of latency in the engine, no matter what you do!
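As a rough sketch of the sink-side check suggested above (the function names and the 10-second CTI delay are my illustrative assumptions, not StreamInsight APIs): compare each event's start time to the wall clock, then subtract the CTI delay to see how much latency is actually attributable to queuing.

```python
from datetime import datetime, timedelta, timezone

CTI_DELAY = timedelta(seconds=10)  # assumed CTI generation delay

def engine_latency(event_start: datetime, now: datetime) -> timedelta:
    """Observed enqueue-to-dequeue latency: wall clock minus event start."""
    return now - event_start

def queuing_latency(event_start: datetime, now: datetime) -> timedelta:
    """Latency beyond the floor that the CTI delay alone imposes."""
    return max(engine_latency(event_start, now) - CTI_DELAY, timedelta(0))

# example: an event stamped 12 s ago shows 12 s of engine latency, but
# only 2 s of it is attributable to actual queuing; the other 10 s is
# the CTI delay, which no amount of query tuning will remove
now = datetime.now(timezone.utc)
start = now - timedelta(seconds=12)
print(queuing_latency(start, now))
```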
DevBiker (aka J Sawyer)
Microsoft MVP - Sql Server (StreamInsight)
If I answered your question, please mark as answer.
If my post was helpful, please mark as helpful.
I did a lot of research on the term "latency tolerance". Unfortunately, there is no further explanation of it anywhere. I think latency tolerance is just an index that helps you identify which edition of StreamInsight would be better in a specific situation. In my opinion, latency tolerance means the maximum tolerable delay for events to be consumed. For example, if the event rate is 5000 events per second, a latency tolerance of "5 s" means the maximum delay you can bear for processing those 5000 events is 5 seconds. It doesn't mean that errors will be thrown if it takes longer to process these events. But if the latency is too long, the queue will fill up easily, and then problems will start to appear. So that's why we suggest choosing the Premium edition if the event rate exceeds 5000 events per second or the latency tolerance requirement is less than five seconds.
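To put rough numbers on that (a back-of-envelope sketch using Little's law, backlog ≈ event rate × latency, together with the 200K input-queue figure mentioned earlier in the thread): at the guideline's 5000 events/sec, hitting the 5-second latency tolerance means roughly 25,000 events in flight, and events would start being dropped once latency reached about 40 seconds.

```python
EVENT_RATE = 5000        # events/sec, the Standard edition guideline
LATENCY_TOLERANCE = 5    # seconds, the guideline's latency figure
QUEUE_LIMIT = 200_000    # input-queue size before events get dropped

# Little's law: events sitting in the engine at a given latency
backlog_at_tolerance = EVENT_RATE * LATENCY_TOLERANCE
print("backlog at tolerance:", backlog_at_tolerance)   # 25000 events

# latency at which the input queue itself would overflow
latency_at_drop = QUEUE_LIMIT / EVENT_RATE
print("latency when drops begin (s):", latency_at_drop)   # 40.0
```

So the 5-second tolerance is breached long before the queue overflows; the queue filling up is the later, harder failure mode.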
We would like to know whether our solution has resolved your problem.
It has been several days since you last contacted us, so we will be closing the case.
If there are any further questions, please don't hesitate to contact us.
Thank you for your time.