Problem Description
An Android book I have states that using TextToSpeech.playEarcon()
is preferable to playing audio files (using MediaPlayer) because:
Rather than having to work out the timing of playing audio cues ourselves and rely on callbacks to get the timing right, we can instead queue our earcons among the text we send to the TTS engine. We then know that our earcons will be played at the appropriate time, and we can use the same pathway to deliver our sounds to the user, including the onUtteranceCompleted() callback that lets us know where we are.
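(Note that the approach the book describes assumes the earcon name has already been registered with the TTS engine. A minimal sketch; the package name and the R.raw.fancyring resource are illustrative assumptions, not part of the original question:)

// Map the earcon name "[fancyring]" to a sound resource so the TTS engine
// knows what to play; the resource and package names are made up here.
tts.addEarcon("[fancyring]", "com.example.myapp", R.raw.fancyring);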
But my short and simple experiment with this shows this isn't the case:
int utteranceNum = 0;  // simple counter used to build unique utterance IDs
// the pre-API-21 speak()/playEarcon() overloads take a HashMap<String, String> of parameters
HashMap<String, String> params = new HashMap<String, String>();

String utteranceId = String.valueOf(utteranceNum++);
params.put(TextToSpeech.Engine.KEY_PARAM_UTTERANCE_ID, utteranceId);
params.put(TextToSpeech.Engine.KEY_PARAM_STREAM, String.valueOf(AudioManager.STREAM_MUSIC));
tts.speak("FIRST part of sentence", TextToSpeech.QUEUE_ADD, params);

utteranceId = String.valueOf(utteranceNum++);
params.put(TextToSpeech.Engine.KEY_PARAM_UTTERANCE_ID, utteranceId);
params.put(TextToSpeech.Engine.KEY_PARAM_STREAM, String.valueOf(AudioManager.STREAM_MUSIC));
tts.playEarcon("[fancyring]", TextToSpeech.QUEUE_ADD, params);

utteranceId = String.valueOf(utteranceNum++);
params.put(TextToSpeech.Engine.KEY_PARAM_UTTERANCE_ID, utteranceId);
params.put(TextToSpeech.Engine.KEY_PARAM_STREAM, String.valueOf(AudioManager.STREAM_MUSIC));
tts.speak("SECOND part of sentence", TextToSpeech.QUEUE_ADD, params);
When I examine the logs from onUtteranceCompleted(), I only see the utteranceIds of the utterances played by tts.speak(), not the one played by tts.playEarcon().
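(For reference, the listener behind these logs would be registered roughly like this; the exact code isn't shown in the question, and the log tag is made up:)

tts.setOnUtteranceCompletedListener(new TextToSpeech.OnUtteranceCompletedListener() {
    @Override
    public void onUtteranceCompleted(String utteranceId) {
        // Fires for the tts.speak() utterances, but never for the earcon.
        Log.d("TtsExperiment", "completed utteranceId=" + utteranceId);
    }
});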
Why this discrepancy? Is there a workaround for it?
P.S. At the risk of stating the obvious: all three utterances are played fine and in the right order. It is only onUtteranceCompleted() that, for some reason, isn't called for tts.playEarcon().
Recommended Answer
Answering myself. The incredibly long and very detailed documentation about TextToSpeech.OnUtteranceCompletedListener reads (the emphasis is mine):
Called when an utterance has been *synthesized*.
An earcon is never the result of synthesis, so of course onUtteranceCompleted() will never be called for it. This is by design.
Which gets us back to a new question: If there is no advantage to earcons over playing .mp3 files (using MediaPlayer), why use earcons at all?
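(If what is really needed is a completion signal for the sound cue itself, one workaround is to play it with MediaPlayer and use its OnCompletionListener. A minimal sketch, assuming a hypothetical R.raw.fancyring resource and an available Context named context:)

// Play the cue with MediaPlayer and get notified when it finishes,
// which is the signal playEarcon() does not deliver via onUtteranceCompleted().
MediaPlayer player = MediaPlayer.create(context, R.raw.fancyring);
player.setOnCompletionListener(new MediaPlayer.OnCompletionListener() {
    @Override
    public void onCompletion(MediaPlayer mp) {
        mp.release();
        // e.g. queue the next tts.speak() call from here
    }
});
player.start();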