Question
Hi, I'm trying to find a way to get Angular 5 working with the Microsoft Speech API. I used the microsoft-speech-browser-sdk for JavaScript:
https://github.com/Azure-Samples/SpeechToText-WebSockets-Javascript
I just import the SDK with import * as SDK from 'microsoft-speech-browser-sdk'; and tried to use the same code as in the example.
But I get this error: SDK.Recognizer.CreateRecognizer is not a function. I know the SDK is imported because it executes the first functions.
I also can't find the API reference. Has anyone gotten this cognitive service working with Angular?
Answer
I had this same issue and it seems to be a typo in the blog post, so I compared with the SDK sample here: https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/js/browser
Smael's answer is essentially the fix - remove the .Recognizer from the function call and that should fix it (also make sure the SDK reference you're returning has the same name as the one you're importing).
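In other words, the only change to the call from the blog post is dropping .Recognizer (a minimal sketch; recognizerConfig and authentication are the same objects built in the full component below):

SDK.Recognizer.CreateRecognizer(recognizerConfig, authentication); // blog post version - "is not a function"
SDK.CreateRecognizer(recognizerConfig, authentication);            // the factory lives directly on the SDK module

The full component then looks like this: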
import { Component } from '@angular/core';
import { environment } from 'src/environments/environment';
import * as SpeechSDK from 'microsoft-speech-browser-sdk';

@Component({
  selector: 'app-home',
  templateUrl: './home.component.html',
})
export class HomeComponent {
  speechAuthToken: string;
  recognizer: any;

  constructor() {
    this.recognizer = this.RecognizerSetup(SpeechSDK, SpeechSDK.RecognitionMode.Conversation, 'en-US',
      SpeechSDK.SpeechResultFormat.Simple, environment.speechSubscriptionKey);
  }
  RecognizerSetup(SDK, recognitionMode, language, format, subscriptionKey) {
    const recognizerConfig = new SDK.RecognizerConfig(
      new SDK.SpeechConfig(
        new SDK.Context(
          new SDK.OS(navigator.userAgent, 'Browser', null),
          new SDK.Device('SpeechSample', 'SpeechSample', '1.0.00000'))),
      recognitionMode, // SDK.RecognitionMode.Interactive (Options - Interactive/Conversation/Dictation)
      language,        // Supported languages are specific to each recognition mode. Refer to the docs.
      format);         // SDK.SpeechResultFormat.Simple (Options - Simple/Detailed)

    // Alternatively use SDK.CognitiveTokenAuthentication(fetchCallback, fetchOnExpiryCallback) for token auth
    const authentication = new SDK.CognitiveSubscriptionKeyAuthentication(subscriptionKey);

    // CreateRecognizer is called on the imported SDK module directly (no ".Recognizer")
    return SpeechSDK.CreateRecognizer(recognizerConfig, authentication);
  }
  RecognizerStart() {
    this.recognizer.Recognize((event) => {
      /*
        Alternative syntax for typescript devs.
        if (event instanceof SDK.RecognitionTriggeredEvent)
      */
      switch (event.Name) {
        case 'RecognitionTriggeredEvent':
          console.log('Initializing');
          break;
        case 'ListeningStartedEvent':
          console.log('Listening');
          break;
        case 'RecognitionStartedEvent':
          console.log('Listening_Recognizing');
          break;
        case 'SpeechStartDetectedEvent':
          console.log('Listening_DetectedSpeech_Recognizing');
          console.log(JSON.stringify(event.Result)); // check console for other information in result
          break;
        case 'SpeechHypothesisEvent':
          // UpdateRecognizedHypothesis(event.Result.Text);
          console.log(JSON.stringify(event.Result)); // check console for other information in result
          break;
        case 'SpeechFragmentEvent':
          // UpdateRecognizedHypothesis(event.Result.Text);
          console.log(JSON.stringify(event.Result)); // check console for other information in result
          break;
        case 'SpeechEndDetectedEvent':
          // OnSpeechEndDetected();
          console.log('Processing_Adding_Final_Touches');
          console.log(JSON.stringify(event.Result)); // check console for other information in result
          break;
        case 'SpeechSimplePhraseEvent':
          // UpdateRecognizedPhrase(JSON.stringify(event.Result, null, 3));
          break;
        case 'SpeechDetailedPhraseEvent':
          // UpdateRecognizedPhrase(JSON.stringify(event.Result, null, 3));
          break;
        case 'RecognitionEndedEvent':
          // OnComplete();
          console.log('Idle');
          console.log(JSON.stringify(event)); // Debug information
          break;
      }
    })
    .On(() => {
      // The request succeeded. Nothing to do here.
    },
    (error) => {
      console.error(error);
    });
  }
  RecognizerStop() {
    // recognizer.AudioSource.Detach(audioNodeId) can be also used here. (audioNodeId is part of ListeningStartedEvent)
    this.recognizer.AudioSource.TurnOff();
  }
}
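For this to compile, environment.speechSubscriptionKey has to exist in your environment file. Here is a minimal sketch of src/environments/environment.ts, assuming you add the key yourself (the property name comes from the component above; the placeholder value is yours to fill in):

export const environment = {
  production: false,
  // Assumption: your Azure Speech subscription key goes here
  speechSubscriptionKey: '<your-subscription-key>'
};

You can then bind RecognizerStart() and RecognizerStop() to buttons in home.component.html to start and stop listening.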