Problem description
I am trying to build an app that will help visually impaired individuals detect objects/obstacles in their way. Using the TensorFlow library and Android text-to-speech, once an object is detected the application will let the user know what the object is. I'm currently building off the Android Object Detection example provided by TensorFlow, but I'm struggling to find where the label strings of the bounding boxes are stored, so that I can read them out when running text-to-speech.
Answer
I have looked at the Object Detection project. You can find the results of the inference in two places inside the project:
First, you can find them inside
TFLiteObjectDetectionAPIModel.java
where you can log the recognitions object at line 227. For example:
Log.i("Recognitions", String.valueOf(recognitions.get(0).getTitle()));
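To see everything the model returns rather than only the first entry, a minimal sketch of iterating over that same list might look like this (assuming recognitions is the List<Recognition> built in recognizeImage(), as in the TensorFlow example):

```java
// Sketch only: log every recognition produced in recognizeImage().
// Assumes "recognitions" is the List<Recognition> from the TensorFlow example.
for (final Recognition recognition : recognitions) {
    Log.i("Recognitions",
            recognition.getTitle() + " @ " + recognition.getConfidence());
}
```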
Second, inside
DetectorActivity.java
you can log the results object at line 181.
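For example, a hedged sketch of picking out the highest-confidence detection at that point (assuming results is the List<Classifier.Recognition> returned by detector.recognizeImage(...), as in the TensorFlow example):

```java
// Sketch only: find the single strongest detection in DetectorActivity.
// Assumes "results" is the List<Classifier.Recognition> from recognizeImage().
Classifier.Recognition best = null;
for (final Classifier.Recognition result : results) {
    if (best == null || result.getConfidence() > best.getConfidence()) {
        best = result;
    }
}
if (best != null) {
    Log.i("Results", best.getTitle() + " (" + best.getConfidence() + ")");
}
```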
Then you can follow this example to integrate TTS. I am a little pessimistic about the result, because the MultiboxTracker produces a lot of results every second, and I don't know how it will perform if many objects are detected!
You will have to filter out some of the results, as sketched below.
I am very interested in how it turns out!
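As a rough illustration (not code from the example project), here is one way to wire a detection title into Android's TextToSpeech while filtering weak detections and throttling how often the app speaks; the confidence threshold, cooldown interval, and the speakDetection() helper name are my own assumptions:

```java
// Hypothetical helper for DetectorActivity.
// Needs: android.speech.tts.TextToSpeech, android.os.SystemClock, java.util.Locale.
private TextToSpeech tts;                              // create in onCreate()
private long lastSpokenMs = 0;
private static final float MIN_CONFIDENCE = 0.6f;      // assumed threshold
private static final long COOLDOWN_MS = 3000;          // assumed cooldown

private void initTts() {
    tts = new TextToSpeech(this, status -> {
        if (status == TextToSpeech.SUCCESS) {
            tts.setLanguage(Locale.US);
        }
    });
}

private void speakDetection(final Classifier.Recognition result) {
    final long now = SystemClock.elapsedRealtime();
    // Filter weak detections and avoid speaking on every frame.
    if (result.getConfidence() < MIN_CONFIDENCE || now - lastSpokenMs < COOLDOWN_MS) {
        return;
    }
    lastSpokenMs = now;
    tts.speak(result.getTitle(), TextToSpeech.QUEUE_FLUSH, null, "detection");
}
```

You could then call speakDetection(best) right after the loop above that picks the strongest result.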
If you need more help, tag me.
Happy coding!