Kinect StartAudio error

  • Question

  • Hi,

    I'm developing an application to evaluate reaction time to an event. The simplest case is to measure the time elapsed between a simple image appearing on the screen and the moment the person says what the image is.

    I developed an MVVM app in which I have the view, the viewmodel, and the model, which holds the Kinect sensor, the audio source, and all the Kinect event handlers. I followed the scheme in the Speech example in the Kinect SDK: same initialization!

    I also included a button on the screen, handled by the viewmodel, that gives me DateTime.Now when I click it.

    So the problem is this: after the image appears, I say the word and click the button at the same time, yet the difference between the DateTime.Now of the click and the SpeechRecognizedEventArgs.Result.Audio.StartTime recorded in the SpeechRecognized event is roughly 500 ms, which is a lot for the kind of experiment I'm running. It seems that SpeechRecognizedEventArgs.Result.Audio.StartTime does not correspond to the actual wall-clock time of the utterance.

    private void SpeechRecognitionSpeechRecognized(object sender, SpeechRecognizedEventArgs e)
    {
        TrackAudioStart = e.Result.Audio.StartTime;
        AnswerTime = TrackAudioStart - StartTime;
        Dispatcher.CurrentDispatcher.Invoke(new Action<SpeechRecognizedEventArgs>(InterpretCommand), e);
    }
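    One possible workaround (a sketch only, assuming the handler can see a DateTime captured right after the Kinect audio stream was handed to the recognizer, here called audioStreamStartTime, and that the stream feeds the engine without gaps) is to derive the utterance start from e.Result.Audio.AudioPosition, the offset of the recognized audio within the input stream, instead of relying on StartTime:

    // Sketch: audioStreamStartTime is a hypothetical field, assumed to be set
    // to DateTime.Now immediately after SetInputToAudioStream() is called.
    private void SpeechRecognitionSpeechRecognized(object sender, SpeechRecognizedEventArgs e)
    {
        // Offset of the recognized utterance within the audio stream.
        TimeSpan positionInStream = e.Result.Audio.AudioPosition;

        // Approximate wall-clock time at which the utterance began.
        DateTime utteranceStart = audioStreamStartTime + positionInStream;

        AnswerTime = utteranceStart - StartTime;

        Dispatcher.CurrentDispatcher.Invoke(new Action<SpeechRecognizedEventArgs>(InterpretCommand), e);
    }

    This sidesteps any delay in when the engine stamps StartTime, at the cost of assuming the stream runs continuously from the recorded start moment.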

    I would like to know whether this is a known problem, or whether it is due to an incorrect implementation or to the use of the MVVM pattern.

    Basically, the main question is: how can I extract the exact DateTime of the beginning of the audio response?

    Thank you.

    • Edited by Ivan2nn Wednesday, June 5, 2013 2:05 PM
    Wednesday, June 5, 2013 9:55 AM

All replies

  • If the Kinect is able to initialize and the functionality seems to work, then I think what you really need is more information on the specifics of how the recognition works. You are going to face some challenges with this design, because there are several layers involved. The Kinect's functionality only provides the speech engine with the raw audio stream. From there, you have the engine itself and whatever processing it has to do. On top of that is .NET event processing.
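    To get a rough sense of how much of the ~500 ms comes from the engine layer rather than from event plumbing, one probe (a sketch only, assuming a standard SpeechRecognitionEngine named sre; the timestamp fields are hypothetical) is to also hook the SpeechDetected event, which fires when the engine first notices speech, well before a final result is produced:

    // Sketch: compare when the engine first detects speech with when the
    // final recognition result arrives. Field names are illustrative.
    sre.SpeechDetected += (s, e) =>
    {
        speechDetectedAt = DateTime.Now;          // engine noticed speech
        detectedAudioPosition = e.AudioPosition;  // offset within the stream
    };

    sre.SpeechRecognized += (s, e) =>
    {
        TimeSpan engineLatency = DateTime.Now - speechDetectedAt;
        Console.WriteLine("Engine latency after detection: {0} ms",
            engineLatency.TotalMilliseconds);
    };

    Comparing speechDetectedAt against the button click would show how much of the gap is recognition latency versus audio buffering upstream of the engine.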

    Below are links to where you might want to start researching, but you may also want to post on one of the .NET forums: Visual Studio Developer Center > .NET Framework Forums > .NET Framework Class Libraries.

    Microsoft Speech Platform

    Speech Recognition (Microsoft.Speech)

    Tuesday, June 18, 2013 12:56 AM