[UWP] AudioGraph AudioFrameOutputNode gets too many QuantumProcessed callbacks

  • Question

  • My app wants to capture and buffer incoming audio data in memory. The AudioGraph API looks very nice from a design standpoint, and I am trying to follow the code pattern for AudioFrameOutputNode from this MSDN page: https://msdn.microsoft.com/en-us/windows/uwp/audio-video-camera/audio-graphs

    However, when I try to follow that code pattern, I find that my QuantumProcessed event callback method gets called OVER 100 TIMES CONCURRENTLY FROM 100 DIFFERENT THREADS.  Sorry for shouting, but the behavior is really bizarre and inexplicable.  Moreover, most of those callbacks have zero bytes available for reading, which seems like something that should never happen.

    I was able to reproduce this behavior with the AudioCreation UWP sample app and I have created a pull request with my modifications, so as to be totally clear about what I'm doing:  https://github.com/Microsoft/Windows-universal-samples/issues/565
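
    To be concrete, the pattern boils down to roughly this (a stripped-down sketch rather than my exact code; frameOutputNode is an AudioFrameOutputNode created with graph.CreateFrameOutputNode() and fed from the device input node, and IMemoryBufferByteAccess is the COM import from the MSDN page above):

    private void Graph_QuantumProcessed(AudioGraph sender, object args)
    {
        // Pull whatever the output node has accumulated for this quantum.
        AudioFrame frame = frameOutputNode.GetFrame();

        unsafe
        {
            using (AudioBuffer buffer = frame.LockBuffer(AudioBufferAccessMode.Read))
            using (IMemoryBufferReference reference = buffer.CreateReference())
            {
                byte* dataInBytes;
                uint capacityInBytes;
                ((IMemoryBufferByteAccess)reference).GetBuffer(out dataInBytes, out capacityInBytes);

                // This is where I want to copy the samples into my in-memory buffer,
                // but most of these callbacks report zero bytes available, and the
                // handler fires on many thread pool threads concurrently.
                float* dataInFloat = (float*)dataInBytes;
            }
        }
    }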

    Is there any actual sample code (on GitHub, preferably) demonstrating how to use AudioFrameOutputNode?  And is this really the best place on MSDN to ask AudioGraph questions?  There are quite a few evidently legitimate questions about AudioGraph that have been getting little to no response for some months: memory leaks, clicks and distortions... some responses here would be great.


    Rob Jellinghaus

    Saturday, December 31, 2016 9:23 PM

All replies

  • Hi,

    Are you trying to capture the raw audio sample data for each processed quantum, or are you trying to achieve something else?

    Sunday, January 1, 2017 4:08 AM
  • Both, I suppose.  My app wants to capture and record incoming audio data, basically recording multiple in-memory loops.  It then wants to create multiple AudioFrameInputNodes, one per loop, and mix them all together with a submix node.

    I had this exact structure (in-memory live looper) working great with ASIO for several years; it's how I recorded the demos on http://holofunk.com, so I know the concept can work.  But trying to port the basic idea over to AudioGraph is not yet going as hoped.  I would like to capture the audio to an in-memory buffer and then move on to replaying it, but I can't even capture it at the moment.
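
    For what it's worth, the graph shape I'm aiming for is roughly this (a sketch of the node wiring only, inside an async setup method; status checks, audio format setup, and the per-loop bookkeeping are omitted, and the names are placeholders):

    AudioGraphSettings settings = new AudioGraphSettings(AudioRenderCategory.Media);
    CreateAudioGraphResult result = await AudioGraph.CreateAsync(settings);
    AudioGraph graph = result.Graph;

    // Capture side: microphone -> frame output node, so each quantum's samples
    // can be copied into an in-memory loop buffer.
    CreateAudioDeviceInputNodeResult inputResult =
        await graph.CreateDeviceInputNodeAsync(MediaCategory.Media);
    AudioFrameOutputNode frameOutputNode = graph.CreateFrameOutputNode();
    inputResult.DeviceInputNode.AddOutgoingConnection(frameOutputNode);

    // Playback side: one frame input node per recorded loop, all mixed through
    // a submix node into the device output.
    CreateAudioDeviceOutputNodeResult outputResult = await graph.CreateDeviceOutputNodeAsync();
    AudioSubmixNode submix = graph.CreateSubmixNode();
    submix.AddOutgoingConnection(outputResult.DeviceOutputNode);

    AudioFrameInputNode loopInput = graph.CreateFrameInputNode();   // one per loop
    loopInput.AddOutgoingConnection(submix);

    graph.Start();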


    Rob Jellinghaus

    Monday, January 2, 2017 8:56 AM
    Have you tried unsafe code like the example below from the MSDN documentation?  I have used it to capture audio data, which helped me build a volume meter and spectrum analyzer.

    [ComImport]
    [Guid("5B0D3235-4DBA-4D44-865E-8F1D0E4FD04D")]
    [InterfaceType(ComInterfaceType.InterfaceIsIUnknown)]
    unsafe interface IMemoryBufferByteAccess
    {
        void GetBuffer(out byte* buffer, out uint capacity);
    }
    
    // Note: audioGraph and theta (the running sine phase) are fields on the containing class.
    unsafe private AudioFrame GenerateAudioData(uint samples)
    {
        // Buffer size is (number of samples) * (size of each sample)
        // We choose to generate single channel (mono) audio. For multi-channel, multiply by number of channels
        uint bufferSize = samples * sizeof(float);
        AudioFrame frame = new Windows.Media.AudioFrame(bufferSize);
    
        using (AudioBuffer buffer = frame.LockBuffer(AudioBufferAccessMode.Write))
        using (IMemoryBufferReference reference = buffer.CreateReference())
        {
            byte* dataInBytes;
            uint capacityInBytes;
            float* dataInFloat;
    
            // Get the buffer from the AudioFrame
            ((IMemoryBufferByteAccess)reference).GetBuffer(out dataInBytes, out capacityInBytes);
    
            // Cast to float since the data we are generating is float
            dataInFloat = (float*)dataInBytes;
    
            float freq = 1000; // choosing to generate frequency of 1kHz
            float amplitude = 0.3f;
            int sampleRate = (int)audioGraph.EncodingProperties.SampleRate;
            double sampleIncrement = (freq * (Math.PI * 2)) / sampleRate;
    
            // Generate a 1kHz sine wave and populate the values in the memory buffer
            for (int i = 0; i < samples; i++)
            {
                double sinValue = amplitude * Math.Sin(theta);
                dataInFloat[i] = (float)sinValue;
                theta += sampleIncrement;
            }
        }
    
        return frame;
    }

    Tuesday, January 3, 2017 4:40 AM
  • That code is for generating audio, not capturing it, so I am not sure why that is the example you chose to include here.

    If you look at my change to the sample in my pull request -- https://github.com/Microsoft/Windows-universal-samples/pull/564/commits/dcf217ebe278ce8e9d3c931310c6adc97075a35b -- you will see that I do indeed use the unsafe IMemoryBufferByteAccess code pattern, only with an AudioFrameOutputNode.

    In your code that successfully captures audio, are you using a QuantumProcessed event handler?  Can you share your actual audio capturing code?


    Rob Jellinghaus

    Tuesday, January 3, 2017 5:51 PM
    I was able to resolve this issue -- it turns out QuantumProcessed is fired asynchronously on thread pool threads, not from the audio thread, which explains both the concurrent callbacks and the frames with nothing to read.  The right event to use is QuantumStarted.

    I have a pull request containing sample code that demonstrates the successful use of this type:

    https://github.com/Microsoft/Windows-universal-samples/pull/615
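
    In short, the only change needed was which graph event the frame-reading handler hangs off of (sketched here; graph and frameOutputNode are the same kinds of objects as in the snippets above):

    // QuantumStarted is raised in sync with the audio engine, once per quantum,
    // so this is the place to drain the AudioFrameOutputNode.
    graph.QuantumStarted += (AudioGraph sender, object args) =>
    {
        AudioFrame frame = frameOutputNode.GetFrame();
        // ... same unsafe IMemoryBufferByteAccess read as before ...
    };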

    Hopefully this will be useful to anyone else doing memory-buffered input with AudioGraph.

    Cheers!
    Rob


    Rob Jellinghaus

    Monday, April 3, 2017 4:03 PM