Real-time speech recognition in Dart and Flutter with the gcloud Speech API

This article describes how to do real-time speech recognition in Dart and Flutter using the gcloud Speech API; it may be a useful reference if you're working on the same problem.

Problem description

I want to use Google's real-time speech recognition API in a Flutter project written in Dart. I've activated a gcloud account, created the API key (which should be the only necessary authentication method for Google Speech) and written a basic APK which ought to send an audio stream to Google Cloud and display the response. I imported the googleapis/speech and googleapis_auth plugins.

But I couldn't figure out how to set it up. They say you have to use gRPC, which makes sense as it should make it easy to use, but their plugin implementation on GitHub doesn't seem to use it.

So can anyone tell me how to use it - setting up authentication and transcribing speech?

Recommended answer

Update:

Here's a working example:

https://gist.github.com/DazWilkin/34d628b998b4266be818ffb3efd688aa

You need only plug in the values from a service account key.json and you should receive:

{
    alternatives: [{
        confidence: 0.9835046,
        transcript: how old is the Brooklyn Bridge
    }]
}

Not much documentation :-(

I'm familiar with Google API development but unfamiliar with Dart and with the Google Speech-to-Text API, so apologies in advance.

See: https://github.com/dart-lang/googleapis/tree/master/Generated/googleapis

There are 2 flavors of Google SDK|library: the more common (API Client Libraries) and the newer (Cloud [!] Client Libraries). IIUC, for Dart for Speech you're going to use the API Client Library, and this doesn't use gRPC.

I'm going to tweak the sample by gut, so bear with me:

import 'package:googleapis/speech/v1.dart';
import 'package:googleapis_auth/auth_io.dart';

// Service account key: paste the fields from the downloaded key.json here.
final _credentials = new ServiceAccountCredentials.fromJson(r'''
{
  "private_key_id": ...,
  "private_key": ...,
  "client_email": ...,
  "client_id": ...,
  "type": "service_account"
}
''');

const _SCOPES = const [SpeechApi.CloudPlatformScope];

void main() {
  // Exchange the service account credentials for an authenticated HTTP
  // client, then use it to construct the Speech API wrapper.
  clientViaServiceAccount(_credentials, _SCOPES).then((http_client) {
    var speech = new SpeechApi(http_client);
    speech...
  });
}

This requires the creation of a service account with appropriate permissions and a (JSON) key generated for it. Generally, the key file is loaded by the code but, in this example, it's provided as a string literal. The key provides the content for fromJson. You ought (!) to be able to use Application Default Credentials for testing (easier); see the link below.
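
As an illustration only, here's a minimal sketch of loading that key from a file instead of embedding it as a string literal. The 'key.json' path and the createSpeechApi helper are made up for the example, and the exact name of the scope constant may differ between googleapis package versions:

import 'dart:io' show File;

import 'package:googleapis/speech/v1.dart';
import 'package:googleapis_auth/auth_io.dart';

Future<SpeechApi> createSpeechApi() async {
  // Read the downloaded service account key from disk ('key.json' is a
  // hypothetical path) and parse it into credentials.
  final keyJson = new File('key.json').readAsStringSync();
  final credentials = new ServiceAccountCredentials.fromJson(keyJson);

  // Exchange the credentials for an auto-refreshing, authenticated HTTP
  // client scoped to Cloud Platform, then wrap it in the Speech API client.
  final client = await clientViaServiceAccount(
      credentials, [SpeechApi.CloudPlatformScope]);
  return new SpeechApi(client);
}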

Somehow (!) the Dart API will include a method|function that makes this underlying REST call. The call expects some configuration and the audio:

https://cloud.google.com/speech-to-text/docs/reference/rest/v1/speech/recognize

I suspect it's this recognize method, and it expects a RecognizeRequest.
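
Along those lines, here's a rough, untested sketch of what calling recognize might look like with the Dart client library. The transcribe helper, the 'audio.raw' file path, and the encoding, sample rate and language values are all assumptions for illustration, and property names may vary with the googleapis package version:

import 'dart:convert' show base64Encode;
import 'dart:io' show File;

import 'package:googleapis/speech/v1.dart';

Future<void> transcribe(SpeechApi speech) async {
  // RecognitionAudio.content carries the raw audio as a base64 string;
  // 'audio.raw' (16 kHz LINEAR16 PCM) is a hypothetical input file.
  final audioBytes = new File('audio.raw').readAsBytesSync();

  final request = new RecognizeRequest()
    ..config = (new RecognitionConfig()
      ..encoding = 'LINEAR16'
      ..sampleRateHertz = 16000
      ..languageCode = 'en-US')
    ..audio = (new RecognitionAudio()..content = base64Encode(audioBytes));

  // recognize on the speech resource issues the underlying REST call.
  final response = await speech.speech.recognize(request);
  for (final result in response.results ?? []) {
    for (final alt in result.alternatives ?? []) {
      print('${alt.confidence}: ${alt.transcript}');
    }
  }
}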

Sorry I can't be of more help.

If you do get it working, please consider publishing it so others may benefit.

NB

  • https://developers.google.com/identity/protocols/googlescopes#speechv1
  • https://pub.dartlang.org/packages/googleapis_auth_default_credentials

That concludes this article on real-time speech recognition in Dart and Flutter with the gcloud Speech API; hopefully the recommended answer is helpful.
