[UWP] Windows UWP AudioGraph buffer recording

  • Question

  • Hi all,

    Using the AudioGraph is clear to me, the following things are working great:

    Microphone as deviceInputNode -> record -> save to file with fileOutputNode

    Load from file with fileInputNode -> play  to Speakers with deviceOutputNode

    But in my app I need to record from the microphone into some kind of internal buffer, because I want the user to be able to record something, replay it (from the buffer), perhaps record something new (and so on), and finally save the take he is happy with.

    So the only way I see so far is to record from the mic to a file, then open that file to replay, and so on. I wonder if there is a better solution that avoids expensive file I/O operations.

    Best regards,

    Daniel


    • Edited by Barry Wang Tuesday, April 5, 2016 2:44 AM title tags
    Monday, April 4, 2016 9:25 AM

Answers

  • I found a solution for my specific requirements. This is the way to go:

    1) Create an AudioGraph for Recording:

    • Input node is the microphone (AudioDeviceInputNode)
    • Output node is an AudioFrameOutputNode

    2) Create an AudioGraph for Replay:

    • Input node is an AudioFrameInputNode
    • Output node is the speaker (AudioDeviceOutputNode)

    After recording some audio with the first graph, you can get the recorded AudioFrame with a call to AudioFrameOutputNode.GetFrame(). This AudioFrame can then be passed as "input" to the replay graph's AudioFrameInputNode with a call to AudioFrameInputNode.AddFrame(...).
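    [Editor's sketch, not the poster's code] The two-graph setup above looks roughly like this in C# (inside an async method, with error handling trimmed). Note that GetFrame() only returns the audio accumulated since the previous call, so for anything longer than a moment you would collect frames repeatedly, e.g. in the graph's QuantumStarted handler:

    ```csharp
    using Windows.Media;        // AudioFrame
    using Windows.Media.Audio;  // AudioGraph and node types
    using Windows.Media.Capture; // MediaCategory
    using Windows.Media.Render;  // AudioRenderCategory

    // 1) Recording graph: microphone -> AudioFrameOutputNode
    var settings = new AudioGraphSettings(AudioRenderCategory.Media);
    CreateAudioGraphResult recordResult = await AudioGraph.CreateAsync(settings);
    AudioGraph recordGraph = recordResult.Graph;

    var micResult = await recordGraph.CreateDeviceInputNodeAsync(MediaCategory.Other);
    AudioFrameOutputNode frameOutput = recordGraph.CreateFrameOutputNode();
    micResult.DeviceInputNode.AddOutgoingConnection(frameOutput);

    recordGraph.Start();
    // ... let it run while the user records ...
    recordGraph.Stop();

    // 2) Replay graph: AudioFrameInputNode -> speakers
    CreateAudioGraphResult replayResult = await AudioGraph.CreateAsync(settings);
    AudioGraph replayGraph = replayResult.Graph;

    var speakerResult = await replayGraph.CreateDeviceOutputNodeAsync();
    AudioFrameInputNode frameInput =
        replayGraph.CreateFrameInputNode(replayGraph.EncodingProperties);
    frameInput.AddOutgoingConnection(speakerResult.DeviceOutputNode);

    // Hand the captured frame from the recording graph to the replay graph
    AudioFrame recorded = frameOutput.GetFrame();
    frameInput.AddFrame(recorded);
    replayGraph.Start();
    ```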

    • Proposed as answer by Barry Wang Thursday, April 7, 2016 8:29 AM
    • Marked as answer by Barry Wang Wednesday, April 13, 2016 10:10 AM
    Thursday, April 7, 2016 8:27 AM

All replies

  • Hello LordZiu,

    It seems WASAPI can also help you manage this. Here is the sample: https://code.msdn.microsoft.com/windowsapps/Windows-Audio-Session-22dcab6b#content I've checked the documentation, and this API seems more direct for what you want. Please check whether it is helpful in your scenario.

    Best regards,

    Barry



    Tuesday, April 5, 2016 5:55 AM
  • Hello Barry,

    Thanks for the hint, but it seems to me that WASAPI is only supported from C++? I forgot to mention that I'm developing in C#.

    Wednesday, April 6, 2016 12:30 PM
  • @LordZiu,

    Thanks for sharing:)



    Thursday, April 7, 2016 8:30 AM
  • My usage scenario is similar. I want to constantly read input from the microphone and check the AudioFrameOutputNode frame to detect (relative) silences after speaking. When a silence of a particular length is encountered, I want to send the recording up to MS cognitive services.

    Is there a way to transform the AudioFrameOutputNode directly to a stream so I can pass it along without having to write a file? Something like InMemoryRandomAccessStream?

    Thursday, April 21, 2016 3:23 AM
    Any chance you can post a simple sample of this? I'm trying to pass an AudioFrame from an input graph to an output graph, and although I don't get any exceptions, I'm not hearing any audio. I'd really like to compare what I'm doing to what you're doing to understand what I'm doing wrong.

    Wednesday, June 1, 2016 4:10 AM
  • Hi LordZiu,

    I know it has been a while since you posted this answer, but do you have sample code for the above suggestion? I'm new to using AudioGraph and have attempted to do just this scenario and am having no luck getting this to work.

    Any help would be greatly appreciated. Thanks!

    Bridget Slocum

    Saturday, March 4, 2017 8:39 PM
  • Hi SelAromDotNet,

    Did you receive a sample or get this to work yourself? I am running into similar problems trying to do the same thing - Take AudioFrameOutputNode frames and write them to a stream.

    Any help is greatly appreciated! Thanks,

    Bridget Slocum

    Saturday, March 4, 2017 8:41 PM
  • I have created a pull request for a new AudioCreation example scenario which demonstrates in-memory buffering of audio input via AudioFrameOutputNode:

    https://github.com/Microsoft/Windows-universal-samples/pull/615

    This demonstrates building up a large (up to 10 MB) memory buffer from incoming audio. From there you can write it to a file asynchronously or do whatever else you like. The sample also demonstrates playback from the buffer.
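    [Editor's sketch, not from the pull request] The core of such in-memory buffering looks roughly like this: the graph's QuantumStarted handler pulls the latest frame and appends its raw PCM bytes to a growing buffer. Field names like `_frameOutputNode` are placeholders; see the linked pull request for the actual sample code:

    ```csharp
    using System.Collections.Generic;
    using System.Runtime.InteropServices;
    using Windows.Foundation;
    using Windows.Media;
    using Windows.Media.Audio;

    // COM interface for reading raw bytes out of an AudioBuffer
    [ComImport]
    [Guid("5B0D3235-4DBA-4D44-865E-8F1D0E4FD04D")]
    [InterfaceType(ComInterfaceType.InterfaceIsIUnknown)]
    unsafe interface IMemoryBufferByteAccess
    {
        void GetBuffer(out byte* buffer, out uint capacity);
    }

    // Inside the class that owns the graph
    // (subscribe with graph.QuantumStarted += Graph_QuantumStarted):
    const int MaxBytes = 10 * 1024 * 1024;        // cap the buffer at 10 MB
    readonly List<byte> _recording = new List<byte>();
    AudioFrameOutputNode _frameOutputNode;        // from graph.CreateFrameOutputNode()

    unsafe void Graph_QuantumStarted(AudioGraph sender, object args)
    {
        using (AudioFrame frame = _frameOutputNode.GetFrame())
        using (AudioBuffer buffer = frame.LockBuffer(AudioBufferAccessMode.Read))
        using (IMemoryBufferReference reference = buffer.CreateReference())
        {
            ((IMemoryBufferByteAccess)reference).GetBuffer(out byte* data, out uint capacity);
            if (_recording.Count + capacity > MaxBytes)
                return;                           // buffer full; stop accumulating
            for (uint i = 0; i < capacity; i++)
                _recording.Add(data[i]);
        }
    }
    ```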

    Hope this is useful!
    Cheers,

    Rob


    Rob Jellinghaus

    Monday, April 3, 2017 4:17 PM