Problem with automatic speech recognition after Windows 10 April Update

  • Question

  • Hi, I have a problem with the automatic speech recognition API, and only after installing the Windows 10 April Update (1803). My WPF app (not UWP) uses an external microphone, and it recognizes only if the ASR window is in "topmost" mode and focused. If I change focus, the ASR remains in capturing mode but doesn't recognize anything. I tried checking the privacy settings and signing the app.

    Thanks

    Friday, May 4, 2018 1:02 PM

All replies

  • Hi Alexander G84,

    I am trying to involve someone familiar with this topic to further look at this issue. There might be some time delay. Appreciate your patience.

    Best Regards,

    Xavier Xie


    MSDN Community Support
    Please remember to click "Mark as Answer" the responses that resolved your issue, and to click "Unmark as Answer" if not. This can be beneficial to other community members reading this thread. If you have any compliments or complaints to MSDN Support, feel free to contact MSDNFSF@microsoft.com.

    Monday, May 7, 2018 8:19 AM
    Moderator
  • Hi Alexander,

    Which speech recognition API do you use? I tested with Bing Speech and it works fine on 1803. So could you provide more information about your issue?

    Best Regards,

    Charles



    Thursday, May 10, 2018 6:33 AM
  • Hi Charles,

    Thanks for the answer. I use the UWP speech recognition API, Windows.Media.SpeechRecognition, in a WPF app. I need to use the recognizer in a background task.


    Best regards


    Alexander


    Thursday, May 10, 2018 1:24 PM
  • Hi Alexander,

    I still don't have enough information to figure out your issue.

    Which type of constraint do you use: predefined grammars, programmatic list constraints, SRGS grammars, or voice command constraints?

    Which method do you use to recognize? SpeechRecognizer.RecognizeWithUIAsync or SpeechRecognizer.RecognizeAsync?

    Could you show us some code to make it clear?

    I tested with predefined grammars and programmatic list constraints using the RecognizeAsync method, and it works fine even without focus.
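    For reference, a programmatic list constraint of the kind tested above can be set up roughly like this (a minimal sketch of the UWP API inside an async method; the phrases and method name are illustrative):

    ```csharp
    using System;
    using System.Threading.Tasks;
    using Windows.Media.SpeechRecognition;

    // Minimal sketch: recognize one of a fixed set of phrases.
    public static async Task<string> RecognizeColorAsync()
    {
        var recognizer = new SpeechRecognizer();

        // Constrain recognition to a fixed phrase list (phrases are illustrative).
        var listConstraint = new SpeechRecognitionListConstraint(
            new[] { "red", "green", "blue" }, "colors");
        recognizer.Constraints.Add(listConstraint);

        var compilation = await recognizer.CompileConstraintsAsync();
        if (compilation.Status != SpeechRecognitionResultStatus.Success)
            return null;

        // RecognizeAsync listens once and returns the best match.
        SpeechRecognitionResult result = await recognizer.RecognizeAsync();
        return result.Status == SpeechRecognitionResultStatus.Success ? result.Text : null;
    }
    ```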

    Best Regards,

    Charles


    Friday, May 11, 2018 8:38 AM
  • Hi Charles,

    I use SRGSConstraint with both methods. If you start the recognition and then click on another window during capturing (so the process goes into the background), the recognizer returns "Speech Recognition Failed, Status: UserCanceled" and stops recognizing. I'm testing the UWP sample. I can't use the recognizer in a background app.

    Best Regards

    Alexander

    Friday, May 11, 2018 3:11 PM
  • Hi Alexander,

    What do you mean by "I'm testing the UWP sample"? Is your project a WPF project or UWP project?

    If your project is a WPF project, since StorageFile is not directly available in a WPF project, can you show us how you create the SpeechRecognitionGrammarFileConstraint object?

    What's more, if your project is a WPF project, have you tried the System.Speech.Recognition namespace?
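    For context, the in-process System.Speech engine can be used from WPF roughly like this (a minimal sketch; the culture, grammar path, and method name are illustrative, and this engine is not subject to UWP focus/lifecycle rules):

    ```csharp
    using System;
    using System.Globalization;
    using System.Speech.Recognition; // reference System.Speech.dll

    public static class DesktopRecognizer
    {
        // Minimal sketch: continuous in-process recognition with an SRGS grammar.
        public static SpeechRecognitionEngine Start(string grammarPath)
        {
            var engine = new SpeechRecognitionEngine(new CultureInfo("it-IT"));

            // Load an SRGS grammar from disk (path supplied by the caller).
            engine.LoadGrammar(new Grammar(grammarPath));
            engine.SetInputToDefaultAudioDevice();

            engine.SpeechRecognized += (s, e) =>
            {
                string text = e.Result.Text; // recognized phrase
            };

            // Keep recognizing until RecognizeAsyncStop() is called.
            engine.RecognizeAsync(RecognizeMode.Multiple);
            return engine;
        }
    }
    ```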

    Best Regards,

    Charles


    Monday, May 14, 2018 5:57 AM
  • Hi Charles,

    my project is a WPF project, but I found the same problem with the UWP example. After finding the problem in my project, I tried the SpeechRecognition UWP sample (found on GitHub) and encountered the same problem (if the app goes into the background, the recognizer returns the result "Speech Recognition Failed, Status: UserCanceled" and stops recognizing). Now I can't change the project anymore, and I can't use the System.Speech.Recognition namespace. I think the problem is in the Windows.Media.SpeechRecognition namespace.

    The initialization of the speech object in the UWP sample is:

            /// <summary>
            /// Initialize Speech Recognizer and compile constraints.
            /// </summary>
            /// <param name="recognizerLanguage">Language to use for the speech recognizer</param>
            /// <returns>Awaitable task.</returns>
            private async Task InitializeRecognizer(Language recognizerLanguage)
            {
                if (speechRecognizer != null)
                {
                    // cleanup prior to re-initializing this scenario.
                    speechRecognizer.StateChanged -= SpeechRecognizer_StateChanged;
    
                    this.speechRecognizer.Dispose();
                    this.speechRecognizer = null;
                }
    
                try
                {
                    // Initialize the SRGS-compliant XML file.
                    // For more information about grammars for Windows apps and how to
                    // define and use SRGS-compliant grammars in your app, see
                    // https://msdn.microsoft.com/en-us/library/dn596121.aspx
    
                    // determine the language code being used.
                    string languageTag = recognizerLanguage.LanguageTag;
                    string fileName = String.Format("SRGS\\{0}\\SRGSColors.xml", languageTag);
                    StorageFile grammarContentFile = await Package.Current.InstalledLocation.GetFileAsync(fileName);
    
                    // Initialize the SpeechRecognizer and add the grammar.
                    speechRecognizer = new SpeechRecognizer(recognizerLanguage);
    
                    // Provide feedback to the user about the state of the recognizer.
                    speechRecognizer.StateChanged += SpeechRecognizer_StateChanged;
    
                    // RecognizeWithUIAsync allows developers to customize the prompts.
                    speechRecognizer.UIOptions.ExampleText = speechResourceMap.GetValue("SRGSUIOptionsExampleText", speechContext).ValueAsString;
    
                    SpeechRecognitionGrammarFileConstraint grammarConstraint = new SpeechRecognitionGrammarFileConstraint(grammarContentFile);
                    speechRecognizer.Constraints.Add(grammarConstraint);
                    SpeechRecognitionCompilationResult compilationResult = await speechRecognizer.CompileConstraintsAsync();
    
                    // Check to make sure that the constraints were in a proper format and the recognizer was able to compile it.
                    if (compilationResult.Status != SpeechRecognitionResultStatus.Success)
                    {
                        // Disable the recognition buttons.
                        btnRecognizeWithUI.IsEnabled = false;
                        btnRecognizeWithoutUI.IsEnabled = false;
    
                        // Let the user know that the grammar didn't compile properly.
                        resultTextBlock.Visibility = Visibility.Visible;
                        resultTextBlock.Text = "Unable to compile grammar.";
                    }
                    else
                    {
                        btnRecognizeWithUI.IsEnabled = true;
                        btnRecognizeWithoutUI.IsEnabled = true;
    
                        resultTextBlock.Visibility = Visibility.Visible;
                        resultTextBlock.Text = speechResourceMap.GetValue("SRGSListeningPromptText", speechContext).ValueAsString;
    
                        // Set EndSilenceTimeout to give users more time to complete speaking a phrase.
                        speechRecognizer.Timeouts.EndSilenceTimeout = TimeSpan.FromSeconds(1.2);
                    }
                }
                catch (Exception ex)
                {
                    if ((uint)ex.HResult == HResultRecognizerNotFound)
                    {
                        btnRecognizeWithUI.IsEnabled = false;
                        btnRecognizeWithoutUI.IsEnabled = false;
    
                        resultTextBlock.Visibility = Visibility.Visible;
                        resultTextBlock.Text = "Speech Language pack for selected language not installed.";
                    }
                    else
                    {
                        var messageDialog = new Windows.UI.Popups.MessageDialog(ex.Message, "Exception");
                        await messageDialog.ShowAsync();
                    }
                }
            }
    

    Best Regards

    Alexander


    Monday, May 14, 2018 2:24 PM
  • Hi Alexander,

    A WPF project is different from a UWP project. A UWP application's lifecycle is managed by the system, which can have something to do with the result "Speech Recognition Failed, Status: UserCanceled", but WPF is different, so could you show us your code in the WPF project?

    Best Regards,

    Charles


    Wednesday, May 16, 2018 6:23 AM
  • Hi Charles,

    this is my code. Thank you.

      private void InitializeRecognizerSRGS(Language recognizerLanguage)
            {
                try
                {
                    if (speechRecognizerCC != null)
                    {
                        // cleanup prior to re-initializing this scenario.
                        speechRecognizerCC.StateChanged -= SpeechRecognizer_StateChanged;
                        speechRecognizerCC.HypothesisGenerated -= SpeechRecognizerCC_HypothesisGenerated;
                        speechRecognizerCC.RecognitionQualityDegrading -= SpeechRecognizerCC_RecognitionQualityDegrading;
    
                        this.speechRecognizerCC.Dispose();
                        this.speechRecognizerCC = null;
                    }
    
                Log.Info("ASR SRGS language: " + recognizerLanguage.DisplayName + " - " + recognizerLanguage.LanguageTag);
                    // Initialize the SpeechRecognizer and add the grammar.
                    speechRecognizerCC = new SpeechRecognizer(recognizerLanguage);
    
                    //NotifyUser("EndSilenceTimeout: " + speechRecognizerCC.Timeouts.EndSilenceTimeout);
                    //NotifyUser("InitialSilenceTimeout: " + speechRecognizerCC.Timeouts.InitialSilenceTimeout);
                    //NotifyUser("BabbleTimeout: " + speechRecognizerCC.Timeouts.BabbleTimeout);
    
                    int timeout = SpeakyAsrServer.Properties.Settings.Default.SoundEndTimeout;
    
                    speechRecognizerCC.Timeouts.EndSilenceTimeout = new TimeSpan(0,0,0,0,timeout);
    
                    //speechRecognizerCC.Timeouts.InitialSilenceTimeout = new TimeSpan(0, 0, 0, 1);
    
                // Events: Provide feedback to the user about the state of the recognizer.
                    speechRecognizerCC.StateChanged += SpeechRecognizer_StateChanged;
                    speechRecognizerCC.HypothesisGenerated += SpeechRecognizerCC_HypothesisGenerated;
                    speechRecognizerCC.RecognitionQualityDegrading += SpeechRecognizerCC_RecognitionQualityDegrading;
                }
                catch (Exception ex)
                {
                    Log.Error("Exception: " + ex.Message);
                    Log.Info("Exception: " + ex.Message);
                }
    
            }

    Best regards

    Alexander

    Wednesday, May 16, 2018 1:51 PM
  • Hi Alexander,

    Do you call RecognizeAsync or RecognizeWithUIAsync just after the InitializeRecognizerSRGS method is called? How do you add constraints to the recognizer? I initialized the recognizer with your method and it failed even when running my application in the foreground; I got an exception. Did I miss something?

    Best Regards,

    Charles



    Thursday, May 17, 2018 5:59 AM
  • Hi Alexander,

    I just got some information about speech recognition from other engineers. I think this is what you encountered, so I will share it with you and other community members who encounter the same issue.

    "Speech APIs aren't meant to work in the background, and the behavior you encountered is the correct one. A potential workaround is to package the Win32 app (your WPF app) with the Desktop Bridge and add an app service component. The app service must be triggered when the app starts, and it takes care of performing the recognition in the background. Every recognized command is sent back to the Win32 application through the app service, so that it can process it."
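    The Desktop Bridge layout described in the quote is declared in the package manifest roughly like this (a sketch; the executable, entry-point, and service names are illustrative):

    ```xml
    <!-- Package.appxmanifest (sketch; names are illustrative) -->
    <Applications>
      <Application Id="App"
                   Executable="MyWpfApp.exe"
                   EntryPoint="Windows.FullTrustApplication">
        <Extensions>
          <!-- App service hosting the background recognizer component -->
          <uap:Extension Category="windows.appService"
                         EntryPoint="MySpeechService.SpeechServiceTask">
            <uap:AppService Name="com.example.speechservice" />
          </uap:Extension>
        </Extensions>
      </Application>
    </Applications>
    ```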

    That's also why I doubted how you create constraints, because StorageFile is not directly available to a WPF project, but it is available to a packaged desktop app (Desktop Bridge), which you would need to apply.

    Best Regards,

    Charles


    Friday, May 18, 2018 1:16 AM
  • Hi Charles,

    Thank you. I tried the MSDN sample ("RandomNumberGenerator") and modified the receive method.

    After I start speech recognition with SpeechRecognizer.RecognizeAsync(), the SpeechRecognitionResult status is always "Unknown", and the recognizer always remains in the "Idle" state. The grammars are compiled and added correctly (if I check the grammars loaded before starting the recognizer, the count is 1). I added the Microphone capability.

    using System;
    using System.Collections.Generic;
    using System.Diagnostics;
    using System.Globalization;
    using System.IO;
    using System.Threading.Tasks;
    using Windows.ApplicationModel.AppService;
    using Windows.ApplicationModel.Background;
    using Windows.Foundation;
    using Windows.Foundation.Collections;
    using Windows.Globalization;
    using Windows.Media.Capture;
    using Windows.Media.SpeechRecognition;
    using Windows.Storage;
    using Windows.Storage.Pickers;
    using Windows.UI.Core;
    using Windows.UI.Popups;
    using Windows.UI.Xaml;
    
    namespace RandomNumberService
    {
        public sealed class RandomNumberGeneratorTask : IBackgroundTask
        {
            BackgroundTaskDeferral serviceDeferral;
            AppServiceConnection connection;
            Random randomNumberGenerator;
            public async void Run(IBackgroundTaskInstance taskInstance)
            {
               
    
                initAsr();           
    
                //Take a service deferral so the service isn't terminated
                serviceDeferral = taskInstance.GetDeferral();
    
                taskInstance.Canceled += OnTaskCanceled;
    
                //Initialize the random number generator
                randomNumberGenerator = new Random((int)DateTime.Now.Ticks);
    
                var details = taskInstance.TriggerDetails as AppServiceTriggerDetails;
                connection = details.AppServiceConnection;
    
                //Listen for incoming app service requests
                connection.RequestReceived += OnRequestReceived;
            }
    
            private void OnTaskCanceled(IBackgroundTaskInstance sender, BackgroundTaskCancellationReason reason)
            {
                if (serviceDeferral != null)
                {
                    //Complete the service deferral
                    serviceDeferral.Complete();
                    serviceDeferral = null;
                }
    
                if (connection != null)
                {
                    connection.Dispose();
                    connection = null;
                }
            }
    
            private async void OnRequestReceived(AppServiceConnection sender, AppServiceRequestReceivedEventArgs args)
            {
                var messageDeferral = args.GetDeferral();
                try
                {
                // await Window.Current.Dispatcher.RunAsync(CoreDispatcherPriority.Normal,
                //     () =>
                //     {
                // run the code
                await startAsr(args);
                //     });
                    
    
    
                }
                catch (Exception e)
                {
    
                   
                    var result = new ValueSet();
                    //  result.Add("result", randomNumberGenerator.Next(minValue, maxValue));
    
                    result.Add("result", e.Message);
                    //Send the response
                    await args.Request.SendResponseAsync(result);
                    messageDeferral.Complete();
                }
    
            }
    
            private Language recognizerLanguage = SpeechRecognizer.SystemSpeechLanguage;
            private SpeechRecognitionScenario recognizerDictationLastScenario = SpeechRecognitionScenario.Dictation;
            private IAsyncOperation<SpeechRecognitionResult> recognitionOperation;
    
            SpeechRecognizer speechRecognizerCC;
    
    
            public void initAsr()
            {
     
               // ApplicationLanguages.PrimaryLanguageOverride = "it";
                InitializeRecognizerSRGS(new Language("it-IT"));
            }
    
            private void InitializeRecognizerSRGS(Language recognizerLanguage)
            {
                try
                {
                    if (speechRecognizerCC != null)
                    {
                        // cleanup prior to re-initializing this scenario.
                        speechRecognizerCC.StateChanged -= SpeechRecognizer_StateChangedAsync;
                        speechRecognizerCC.HypothesisGenerated -= SpeechRecognizerCC_HypothesisGenerated;
                        speechRecognizerCC.RecognitionQualityDegrading -= SpeechRecognizerCC_RecognitionQualityDegrading;
    
                        this.speechRecognizerCC.Dispose();
                        this.speechRecognizerCC = null;
                    }
    
                // Log.Info("ASR SRGS language: " + recognizerLanguage.DisplayName + " - " + recognizerLanguage.LanguageTag);
                    // Initialize the SpeechRecognizer and add the grammar.
                    speechRecognizerCC = new SpeechRecognizer(recognizerLanguage);
    
    
                // Events: Provide feedback to the user about the state of the recognizer.
                    speechRecognizerCC.StateChanged += SpeechRecognizer_StateChangedAsync;
                    speechRecognizerCC.HypothesisGenerated += SpeechRecognizerCC_HypothesisGenerated;
                    speechRecognizerCC.ContinuousRecognitionSession.ResultGenerated += ContinuousRecognitionSession_ResultGenerated;
                    speechRecognizerCC.RecognitionQualityDegrading += SpeechRecognizerCC_RecognitionQualityDegrading;
    
                    //txtb_log.AppendText("\nEnd Init ASR: ");
    
                }
                catch (Exception ex)
                {
                   
                }
    
            }
    
            private void ContinuousRecognitionSession_ResultGenerated(SpeechContinuousRecognitionSession sender, SpeechContinuousRecognitionResultGeneratedEventArgs args)
            {
                
            }
    
            private async Task loadGrammar()
            {
                try
                {
                    StorageFile grammarContentFile = null;
                    try
                    {
                        grammarContentFile = await StorageFile.GetFileFromPathAsync(@"C:\Users\user\Music\testAsr.grxml");
                        //string tagName = grmCommonTagPrefix + grmName;
                    }
                    catch (Exception ex)
                    {
                      
                    }
    
                    SpeechRecognitionGrammarFileConstraint grammarConstraint = new SpeechRecognitionGrammarFileConstraint(grammarContentFile);
    
                    speechRecognizerCC.Constraints.Add(grammarConstraint);
                    SpeechRecognitionCompilationResult compilationResult = await speechRecognizerCC.CompileConstraintsAsync();
    
                if (compilationResult.Status == SpeechRecognitionResultStatus.Success)
                {
                    // Log.Info(Tid + " - " + "Grammar updated and recompiled successfully: " + tagName);
                }
                else
                {
                    // Log.Info(Tid + " - " + "Grammar compilation error [" + tagName + "] ErrorType: " + compilationResult.Status);
                }
                }
                catch (Exception ex)
                {
                   
                //txtb_log.AppendText("\n grammar not compiled. Err: " + ex.Message);
                    // 
                }
    
            }
    
    
            private void SpeechRecognizerCC_RecognitionQualityDegrading(SpeechRecognizer sender, SpeechRecognitionQualityDegradingEventArgs args)
            {
                try
                {
                    //NotifyStatus("Audio Problem CC: " + args.Problem.ToString());
                }
                catch (Exception ex)
                {
                    //txtb_log.AppendText("\nException: " + ex.Message);
                }
            }
    
            private void SpeechRecognizerCC_HypothesisGenerated(SpeechRecognizer sender, SpeechRecognitionHypothesisGeneratedEventArgs args)
            {
                try
                {
                //NotifyStatus("CC hypothesis: " + args.Hypothesis.Text);
                }
                catch (Exception ex)
                {
                    //txtb_log.AppendText("\nException: " + ex.Message);
                }
            }
    		
          
            private  void SpeechRecognizer_StateChangedAsync(SpeechRecognizer sender, SpeechRecognizerStateChangedEventArgs args)
            {
                try
                {
    
                    //     NotifyStatus(DateTime.Now.ToLongTimeString() + " - " + "SRGS State: " + args.State.ToString());
                }
                catch (Exception ex)
                {
                    //txtb_log.AppendText("\nException: " + ex.Message);
                }
            }
    
            private async Task startAsr(AppServiceRequestReceivedEventArgs args)
            {
                var messageDeferral = args.GetDeferral();
                try
                {
    
                   
                    await loadGrammar();
    
                     recognitionOperation = speechRecognizerCC.RecognizeAsync();
    
                   
                    SpeechRecognitionResult speechRecognitionResult = await recognitionOperation;
    
                   
                    if (speechRecognitionResult.Status == SpeechRecognitionResultStatus.Success)
                    {                  
                        await HandleRecognitionResult(speechRecognitionResult);
                    }
                    else
                    {
                       
                        var result = new ValueSet();
                        result.Add("result", "not success");
                        //Send the response
                        await args.Request.SendResponseAsync(result);
                        messageDeferral.Complete();
                    }
                    
                 
                }
                catch (Exception ex)
                {
                   
                    var result = new ValueSet();
                    
                   
                    result.Add("result", ex.StackTrace.ToString());
                    //Send the response
                    await args.Request.SendResponseAsync(result);
                    messageDeferral.Complete();
                    //txtb_log.AppendText("\nException: " + ex.Message);
                }
                finally
                {
                    //Complete the message deferral so the platform knows we're done responding
    
                }
    
            }
    
            string utterance = "";
            string semantic = "";
    
            private async Task HandleRecognitionResult(SpeechRecognitionResult recoResult)
            {
    
              
                utterance = recoResult.Text;
    
                foreach (IReadOnlyList<string> s in recoResult.SemanticInterpretation.Properties.Values)
                {
                    foreach (string st in s)
                    {
                        semantic += st;                    
                    }
                }
              
            //txtb_log.AppendText("\nUTTERANCE: " + utterance + "     SEMANTICS: " + semantic);
            }
        }
    }
    

    Alexander

    Friday, May 25, 2018 3:17 PM
  • Hi Alexander,

    I also reproduced your issue, and I'm consulting with other engineers.

    What's more, I also tested a WPF application packaged by the Desktop Bridge without an app service, and SRGS works fine in the background.

    Best Regards,

    Charles



    Tuesday, May 29, 2018 1:59 AM
  • Hi Charles,

    But do you have the April Update (Windows 10)? Because before the update, recognition worked fine in the background.

    Thursday, May 31, 2018 9:43 AM
  • Hi Alexander,

    Yes, I have the April Update. I tested the SRGS constraints: they work fine in the WPF app without focus, but they don't work in the app service.

    Best Regards,

    Charles



    Monday, June 4, 2018 6:19 AM
  • Hi Charles,

    I packaged my WPF app with the Desktop Bridge, but the recognizer doesn't start and returns the status "Unknown". If I set the WPF project as the startup project of the solution, the recognizer works fine. If I set the packaging app (Desktop Bridge) as the startup project, the recognizer returns "Unknown".

    Maybe the problem is in the configuration of the Desktop Bridge packaging app. I added the "microphone" capability. Do I need other configurations to make the speech recognizer work?
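    The capability mentioned above is declared in the packaging project's Package.appxmanifest roughly like this (a sketch; surrounding elements are omitted):

    ```xml
    <!-- Package.appxmanifest of the packaging project (sketch) -->
    <Capabilities>
      <!-- Required for audio capture by the speech recognizer -->
      <DeviceCapability Name="microphone" />
    </Capabilities>
    ```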

    Best regards

    Alexander

    Thursday, June 7, 2018 10:04 AM
  • Hi Alexander,

    It works fine for me when I set either project as the startup project. The code I refer to comes from define-custom-recognition-constraints.

    private async void Colors_Click(object sender, RoutedEventArgs e)
    {
        // Create an instance of SpeechRecognizer.
        var speechRecognizer = new Windows.Media.SpeechRecognition.SpeechRecognizer();
    
        // Add a grammar file constraint to the recognizer.
        var storageFile = await Windows.Storage.StorageFile.GetFileFromApplicationUriAsync(new Uri("ms-appx:///Colors.grxml"));
        var grammarFileConstraint = new Windows.Media.SpeechRecognition.SpeechRecognitionGrammarFileConstraint(storageFile, "colors");
    
        speechRecognizer.UIOptions.ExampleText = @"Ex. 'blue background', 'green text'";
        speechRecognizer.Constraints.Add(grammarFileConstraint);
    
        // Compile the constraint.
        await speechRecognizer.CompileConstraintsAsync();
    
        // Start recognition.
        //Windows.Media.SpeechRecognition.SpeechRecognitionResult speechRecognitionResult = await speechRecognizer.RecognizeWithUIAsync();
    
        Windows.Media.SpeechRecognition.SpeechRecognitionResult speechRecognitionResult = await speechRecognizer.RecognizeAsync();
    
        // Do something with the recognition result.
        // Show result in my textblock.
        txt.Text = speechRecognitionResult.Text;
        //var messageDialog = new Windows.UI.Popups.MessageDialog(speechRecognitionResult.Text, "Text spoken");
        //await messageDialog.ShowAsync();
    }

    Best Regards,

    Charles



    Tuesday, June 12, 2018 10:24 AM
  • I'm using WPF with Windows.Media.SpeechRecognition. It can recognize in the background if I minimize the window programmatically, but as soon as I minimize the window myself or right-click on the taskbar, the recognition stops and doesn't start again until I give focus to the window. I can't seem to find a fix for it.

    I'm trying to make a service recognition, but it doesn't want to initialize:

    Exception thrown: 'System.Exception' in mscorlib.dll
    An exception of type 'System.Exception' occurred in mscorlib.dll but was not handled in user code
    The text associated with this error code could not be found.

    Internal Speech Error.

    at

    this.speechRecognizer = new SpeechRecognizer(recognizerLanguage);

    • Edited by Flippardo Tuesday, March 12, 2019 1:28 AM
    Saturday, March 9, 2019 2:50 AM