
Problem description

Do you know how to use speech recognition (SpeechRecognizer) in Jetpack Compose?

Similar to this, but in Compose.

I followed the steps in this video:

  • Added the following permissions to the manifest:
<uses-permission android:name="android.permission.INTERNET"/>
<uses-permission android:name="android.permission.RECORD_AUDIO"/>
  • Wrote this code in MainActivity:
class MainActivity : ComponentActivity() {

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContent {
            PageUi()
        }
    }
}

@Composable
fun PageUi() {
    val context = LocalContext.current
    val talk by remember { mutableStateOf("Speech text should come here") }

    Column(
        modifier = Modifier.fillMaxSize(),
        horizontalAlignment = Alignment.CenterHorizontally,
        verticalArrangement = Arrangement.Center
    ) {
        Text(
            text = talk,
            style = MaterialTheme.typography.h4,
            modifier = Modifier
                .fillMaxSize(0.85f)
                .padding(16.dp)
                .background(Color.LightGray)
        )
        Button(onClick = { askSpeechInput(context) }) {
            Text(
                text = "Talk", style = MaterialTheme.typography.h3
            )
        }
    }
}

fun askSpeechInput(context: Context) {
    if (!SpeechRecognizer.isRecognitionAvailable(context)) {
        Toast.makeText(context, "Speech not available", Toast.LENGTH_SHORT).show()
    } else {
        val i = Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH)
        i.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL, RecognizerIntent.LANGUAGE_MODEL_FREE_FORM)
        i.putExtra(RecognizerIntent.EXTRA_LANGUAGE, Locale.getDefault())
        i.putExtra(RecognizerIntent.EXTRA_PROMPT, "Talk")

        //startActivityForResult(MainActivity(),i,102)
    }
}

@Preview(showBackground = true)
@Composable
fun PageShow() {
    PageUi()
}

But I don't know how to use startActivityForResult in Compose, or how to do the rest. When I test it on my phone (or an emulator), it always ends with the toast message!
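For reference, in Compose the startActivityForResult pattern is replaced by the Activity Result API (rememberLauncherForActivityResult from androidx.activity:activity-compose). A rough, untested sketch of that route, which keeps Google's system dialog, might look like this (the composable name and button label are illustrative, not from the question):

```kotlin
// Sketch: launching the system speech dialog via the Activity Result API
// instead of the deprecated startActivityForResult. Treat as an outline.
@Composable
fun SpeechButton(onResult: (String) -> Unit) {
    val launcher = rememberLauncherForActivityResult(
        contract = ActivityResultContracts.StartActivityForResult()
    ) { result ->
        if (result.resultCode == Activity.RESULT_OK) {
            // The recognizer returns a list of candidate transcriptions;
            // the first entry is usually the best match.
            result.data
                ?.getStringArrayListExtra(RecognizerIntent.EXTRA_RESULTS)
                ?.firstOrNull()
                ?.let(onResult)
        }
    }
    Button(onClick = {
        val intent = Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH).apply {
            putExtra(
                RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                RecognizerIntent.LANGUAGE_MODEL_FREE_FORM
            )
        }
        launcher.launch(intent)
    }) { Text("Talk") }
}
```

The accepted answer below takes a different, more customizable route that avoids the system dialog entirely.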

Answer

I will explain my own implementation. Let me give you the general idea first, and then I will explain each step. First, you need to request the permission every time; then, if the permission is granted, you start an intent to listen to what the user says. What the user says is stored in a variable held by a ViewModel. The composable observes that variable on the ViewModel, so it can pick up the data.

1) Add this to your Manifest:

<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools"
    package="your.package">

    <!-- Add the uses-permission elements -->
    <uses-permission android:name="android.permission.INTERNET" />
    <uses-permission android:name="android.permission.RECORD_AUDIO" />

   [...]
   [...]
   [...]

    <!-- Add the queries element just above the closing </manifest> tag: -->
    <queries>
        <intent>
            <action android:name="android.speech.RecognitionService" />
        </intent>
    </queries>

</manifest>

2) Create the ViewModel

class ScreenViewModel : ViewModel() {

    var textFromSpeech: String? by mutableStateOf(null)

}

You need the ViewModel so the composable can observe the variable, and so your code logic lives outside the UI for a clean architecture.
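To make the observation concrete, a composable reading this state could look like the sketch below. Because textFromSpeech is backed by mutableStateOf, any composable that reads it recomposes automatically when the recognizer writes a result (the composable name and placeholder text are made up for illustration):

```kotlin
// Sketch: a composable that recomposes whenever vm.textFromSpeech changes.
// SpeechResultText is a hypothetical name, not part of the original answer.
@Composable
fun SpeechResultText(vm: ScreenViewModel) {
    Text(text = vm.textFromSpeech ?: "Speech text should come here")
}
```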

3) Implement the permission request

Add the following to build.gradle:

implementation "com.google.accompanist:accompanist-permissions:$accompanist_version"

Then create a composable like this:

@ExperimentalPermissionsApi
@Composable
fun OpenVoiceWithPermission(
    onDismiss: () -> Unit,
    vm: ScreenViewModel,
    ctxFromScreen: Context,
    finished: () -> Unit
) {

    val voicePermissionState = rememberPermissionState(android.Manifest.permission.RECORD_AUDIO)
    val ctx = LocalContext.current

    // Opens the app's system settings page so the user can grant the
    // permission manually after denying it permanently.
    fun newIntent(ctx: Context) {
        val intent = Intent()
        intent.action = Settings.ACTION_APPLICATION_DETAILS_SETTINGS
        val uri = Uri.fromParts(
            "package",
            BuildConfig.APPLICATION_ID, null
        )
        intent.data = uri
        intent.flags = Intent.FLAG_ACTIVITY_NEW_TASK
        ctx.startActivity(intent)
    }

    PermissionRequired(
        permissionState = voicePermissionState,
        permissionNotGrantedContent = {
            DialogCustomBox(
                onDismiss = onDismiss,
                dialogBoxState = DialogLogInState.REQUEST_VOICE,
                onRequestPermission = { voicePermissionState.launchPermissionRequest() }
            )
        },
        permissionNotAvailableContent = {
            DialogCustomBox(
                onDismiss = onDismiss,
                dialogBoxState = DialogLogInState.VOICE_OPEN_SYSTEM_SETTINGS,
                onOpenSystemSettings = { newIntent(ctx) }
            )
        }
    ) {
        startSpeechToText(vm, ctxFromScreen, finished = finished)
    }
}

You can create your own custom DialogBox as I did, or use a standard dialog; that is up to you and outside the scope of this answer.
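If you do not have a custom dialog yet, a minimal stand-in built on Material's AlertDialog could look roughly like this (the composable name, labels, and parameters are placeholder assumptions, not the answer's DialogCustomBox):

```kotlin
// Sketch: a bare-bones rationale dialog to use in place of DialogCustomBox.
// Pass voicePermissionState::launchPermissionRequest (or newIntent) as onConfirm.
@Composable
fun SimplePermissionDialog(
    text: String,
    confirmLabel: String,
    onDismiss: () -> Unit,
    onConfirm: () -> Unit
) {
    AlertDialog(
        onDismissRequest = onDismiss,
        text = { Text(text) },
        confirmButton = {
            Button(onClick = onConfirm) { Text(confirmLabel) }
        },
        dismissButton = {
            TextButton(onClick = onDismiss) { Text("Cancel") }
        }
    )
}
```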

In the code above, if the permission is granted you automatically reach this call: startSpeechToText(vm, ctxFromScreen, finished = finished), which you have to implement next.

4) Implement the speech recognizer

fun startSpeechToText(vm: ScreenViewModel, ctx: Context, finished: ()-> Unit) {
    val speechRecognizer = SpeechRecognizer.createSpeechRecognizer(ctx)
    val speechRecognizerIntent = Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH)
    speechRecognizerIntent.putExtra(
        RecognizerIntent.EXTRA_LANGUAGE_MODEL,
        RecognizerIntent.LANGUAGE_MODEL_FREE_FORM,
    )

    // Optionally I have added my mother language
    speechRecognizerIntent.putExtra(RecognizerIntent.EXTRA_LANGUAGE, "el_GR")

    speechRecognizer.setRecognitionListener(object : RecognitionListener {
        override fun onReadyForSpeech(bundle: Bundle?) {}
        override fun onBeginningOfSpeech() {}
        override fun onRmsChanged(v: Float) {}
        override fun onBufferReceived(bytes: ByteArray?) {}
        override fun onEndOfSpeech() {
            finished()
            // changing the color of your mic icon to
            // gray to indicate it is not listening or do something you want
        }

        override fun onError(i: Int) {}

        override fun onResults(bundle: Bundle) {
            val result = bundle.getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION)
            if (result != null) {
                // attaching the output
                // to our viewmodel
                vm.textFromSpeech = result[0]
            }
        }

        override fun onPartialResults(bundle: Bundle) {}
        override fun onEvent(i: Int, bundle: Bundle?) {}

    })
    speechRecognizer.startListening(speechRecognizerIntent)
}

This implementation is very customizable, and you do not get the pop-up dialog from Google. This way you can inform the user that the device is listening in your own unique way!
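One detail worth adding: SpeechRecognizer holds on to system resources, so it should be released when you are done with it. A hedged sketch of the cleanup, assuming the recognizer is hoisted so a composable can dispose of it (this helper is not in the original answer):

```kotlin
// Sketch: stop and release a hoisted SpeechRecognizer when the screen
// leaves composition. Where you call this depends on your own lifecycle.
@Composable
fun RecognizerCleanup(speechRecognizer: SpeechRecognizer) {
    DisposableEffect(speechRecognizer) {
        onDispose {
            speechRecognizer.stopListening()
            speechRecognizer.destroy()
        }
    }
}
```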

5) Call the function from your composable to start listening:

@ExperimentalPermissionsApi
@Composable
fun YourScreen() {

    val ctx = LocalContext.current
    val vm: ScreenViewModel = viewModel()
    var clickToShowPermission by rememberSaveable { mutableStateOf(false) }

    if (clickToShowPermission) {
        OpenVoiceWithPermission(
            onDismiss = { clickToShowPermission = false },
            vm = vm,
            ctxFromScreen = ctx
        ) {
            // Do anything you want when the voice has finished and do
            // not forget to return clickToShowPermission to false!!
            clickToShowPermission = false
        }
    }
}

This way, every time you set clickToShowPermission = true, you start listening to what the user says...
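For completeness, here is a sketch of how YourScreen might wire up a trigger button and display the recognized text. This snippet is an assumption about the surrounding layout, meant to be placed inside YourScreen's body where clickToShowPermission and vm are in scope:

```kotlin
// Sketch: trigger the permission/listen flow and show the result.
// The placeholder text mirrors the question's original screen.
Column(horizontalAlignment = Alignment.CenterHorizontally) {
    Button(onClick = { clickToShowPermission = true }) {
        Text("Talk")
    }
    Text(text = vm.textFromSpeech ?: "Speech text should come here")
}
```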
