Audio input-output synchronization under WASAPI

  • Question

  • Hello,

    I have read the basics of WASAPI and looked for a topic or discussion that considers audio I/O synchronization - it seems this question has been around since the introduction of Windows Vista.

    It is not possible to start capture and playback at exactly the same time, because they require separate streams, but is there a way to get full information about the latency introduced at the input and at the output? There is a very promising method, IAudioClient::GetStreamLatency(), but the documentation says the method "retrieves the maximum latency". Is that "maximum latency" the value that in audio we usually just call "latency"? If so, is it also possible to obtain the difference, in samples, between starting the rendering stream and starting the capture stream? In my DSP application the latency itself is not that important, but reliable synchronization is.

    I found some ideas, such as using IAudioClock::GetPosition, loopback capture to see what is heading out to the speakers, and my favorite: querying the latency at both ends (a sketch of that idea is at the end of this post). I would like to be sure which of these ideas are actually useful.

    Summarizing, what I need is:

    1. input latency

    2. output latency

    3. the difference between starting the input stream and starting the output stream.

    With this information I will be able to develop accurate DSP algorithms.
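
    For the last idea, something like the following is what I have in mind - just a sketch, assuming two already-initialized shared-mode IAudioClient instances (one render, one capture) and with error handling left out:

        // Query the reported latency at both ends (sketch only; the two
        // initialized IAudioClient pointers are assumed to exist already).
        #include <windows.h>
        #include <audioclient.h>
        #include <cstdio>

        void PrintLatencies(IAudioClient *pRenderAudioClient, IAudioClient *pCaptureAudioClient)
        {
            REFERENCE_TIME renderLatency = 0;    // REFERENCE_TIME is in 100-ns units
            REFERENCE_TIME captureLatency = 0;

            if (SUCCEEDED(pRenderAudioClient->GetStreamLatency(&renderLatency)) &&
                SUCCEEDED(pCaptureAudioClient->GetStreamLatency(&captureLatency)))
            {
                printf("Render latency:  %.2f ms\n", renderLatency / 10000.0);
                printf("Capture latency: %.2f ms\n", captureLatency / 10000.0);
            }
        }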


    Tuesday, July 9, 2013 8:43 PM

All replies

  • IAudioClock::GetPosition will tell you the sample that is hitting the air at the speaker cone (for render) or coming in from the air at the microphone (for capture).

    You can call QueryPerformanceCounter to get the current time.

    You can calculate the latency as the difference between IAudioClock::GetPosition and the sample that you're about to hand to IAudioRenderClient::ReleaseBuffer (for render) or the sample that you just got from IAudioCaptureClient::GetBuffer (for capture).
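
    A rough sketch of the render-side arithmetic - assuming you keep a running count of the frames you have written via ReleaseBuffer and know the mix-format sample rate (both are the caller's bookkeeping, not part of the API), and that the IAudioClock was obtained via IAudioClient::GetService:

        // Render-side latency sketch: the difference between what has been
        // submitted and what is currently hitting the speaker.
        #include <windows.h>
        #include <audioclient.h>

        double RenderLatencySeconds(IAudioClock *pAudioClock,
                                    UINT64 framesWrittenSoFar,   // running total handed to ReleaseBuffer
                                    UINT32 sampleRate)           // from the mix format
        {
            UINT64 devicePosition = 0;   // in units of the clock frequency below
            UINT64 clockFrequency = 0;

            if (FAILED(pAudioClock->GetFrequency(&clockFrequency)) ||
                FAILED(pAudioClock->GetPosition(&devicePosition, NULL)) ||
                clockFrequency == 0)
            {
                return -1.0;   // treat a negative result as "unknown"
            }

            // Position of the sample currently playing, in seconds...
            double playedSeconds  = (double)devicePosition / (double)clockFrequency;
            // ...versus the position of the next sample about to be submitted.
            double writtenSeconds = (double)framesWrittenSoFar / (double)sampleRate;

            return writtenSeconds - playedSeconds;
        }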


    Matthew van Eerde

    Wednesday, July 10, 2013 12:05 AM
  • Thanks for the hints, Matthew.

    For the time (sample) difference I like the simple idea of using QueryPerformanceCounter(). But is it precise enough? What is the frequency of the counter? I can get that value via QueryPerformanceFrequency(), but is its resolution finer than e.g. 1/44100 s ≈ 0.0227 ms? That would give me the precise synchronization I am asking for.
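
    For example, this is how I would compare the counter resolution with the sample period (just a sketch):

        // Compare the performance counter tick to one audio sample at 44100 Hz.
        #include <windows.h>
        #include <cstdio>

        int main()
        {
            LARGE_INTEGER frequency;
            QueryPerformanceFrequency(&frequency);          // counts per second

            double counterPeriodMs = 1000.0 / (double)frequency.QuadPart;
            double samplePeriodMs  = 1000.0 / 44100.0;      // ~0.0227 ms

            printf("QPC tick:     %.6f ms\n", counterPeriodMs);
            printf("Audio sample: %.6f ms\n", samplePeriodMs);
            return 0;
        }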

    What about IAudioClient::GetStreamLatency()? Would it not be useful for determining a constant latency over the lifetime of the IAudioClient object? From your explanation it seems that the latency may vary, because on each event there may be a different number of samples in the buffers (to capture or to render).

    That is not the end of the world for me; I think a big buffer will solve the problem, but do I understand it correctly? In that case I believe GetStreamLatency() reports the size of the big buffer I mentioned.

    Wednesday, July 10, 2013 10:57 AM
  • GetStreamLatency tells you how much data you need to keep in the buffer, or risk glitching. If you have a large buffer (e.g., 5 seconds) but a small latency (e.g., 50 ms) then you can get away with keeping the value reported by GetCurrentPadding() low (on the order of 50 ms) and still avoid glitching (see the sketch below).

    It is obliquely but not directly related to the one-way latency between the IAudioRenderClient and the speaker.
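
    For illustration, keeping a small fill level inside a large buffer might look roughly like this - a sketch only, assuming an initialized shared-mode IAudioClient/IAudioRenderClient pair and a caller-chosen target padding (a real application would write audio rather than silence):

        // Top the render buffer back up to a small target padding each pass.
        #include <windows.h>
        #include <audioclient.h>

        void TopUp(IAudioClient *pAudioClient,
                   IAudioRenderClient *pRenderClient,
                   UINT32 bufferFrames,          // total buffer size, e.g. 5 seconds worth
                   UINT32 targetPaddingFrames)   // desired fill level, e.g. 50 ms worth
        {
            UINT32 padding = 0;
            if (FAILED(pAudioClient->GetCurrentPadding(&padding)))
            {
                return;
            }

            if (padding >= targetPaddingFrames)
            {
                return;   // enough data queued already
            }

            UINT32 framesToWrite = targetPaddingFrames - padding;
            if (framesToWrite > bufferFrames - padding)
            {
                framesToWrite = bufferFrames - padding;   // never exceed the free space
            }

            BYTE *pData = NULL;
            if (SUCCEEDED(pRenderClient->GetBuffer(framesToWrite, &pData)))
            {
                // A real application would fill pData with audio; this sketch
                // just submits silence to keep the stream from glitching.
                pRenderClient->ReleaseBuffer(framesToWrite, AUDCLNT_BUFFERFLAGS_SILENT);
            }
        }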


    Matthew van Eerde

    Wednesday, July 10, 2013 4:53 PM