Problem Description
I would like to perform face detection / tracking on a video file (e.g. an MP4 from the user's gallery) using the Android Vision FaceDetector API. I can see many examples of using the CameraSource class to perform face tracking on the stream coming directly from the camera (e.g. on the android-vision GitHub), but nothing on video files.
I tried looking at the source code for CameraSource through Android Studio, but it is obfuscated, and I couldn't find the original online. I imagine there are many commonalities between using the camera and using a file. Presumably I could just play the video file on a Surface and then pass that to a pipeline.
Alternatively, I can see that Frame.Builder has the functions setImageData and setTimestampMillis. If I were able to read the video in as a ByteBuffer, how would I pass that to the FaceDetector API? I guess this question is similar, but it has no answers. Similarly, could I decode the video into Bitmap frames and pass those to setBitmap?
Ideally I don't want to render the video to the screen, and the processing should happen as fast as the FaceDetector API is capable of.
Recommended Answer
Simply call SparseArray<Face> faces = detector.detect(frame); where detector has to be created like this:
FaceDetector detector = new FaceDetector.Builder(context)
.setProminentFaceOnly(true)
.build();
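To connect this to a stored video file, one possible approach is to pull frames out of the file with MediaMetadataRetriever and feed each Bitmap through Frame.Builder.setBitmap(), exactly as the question suggested. The sketch below is illustrative, not a tested implementation: the VideoFaceScanner class name and the intervalMs sampling parameter are assumptions, and the extra setTrackingEnabled(true) builder option is optional.

```java
import android.content.Context;
import android.graphics.Bitmap;
import android.media.MediaMetadataRetriever;
import android.net.Uri;
import android.util.SparseArray;

import com.google.android.gms.vision.Frame;
import com.google.android.gms.vision.face.Face;
import com.google.android.gms.vision.face.FaceDetector;

public class VideoFaceScanner {

    // Step through the video, extract a frame every intervalMs milliseconds,
    // and run the face detector on each extracted Bitmap.
    public static void scan(Context context, Uri videoUri, long intervalMs) {
        FaceDetector detector = new FaceDetector.Builder(context)
                .setProminentFaceOnly(true)
                .setTrackingEnabled(true) // keep face IDs stable across frames
                .build();

        MediaMetadataRetriever retriever = new MediaMetadataRetriever();
        try {
            retriever.setDataSource(context, videoUri);
            long durationMs = Long.parseLong(retriever.extractMetadata(
                    MediaMetadataRetriever.METADATA_KEY_DURATION));

            for (long t = 0; t < durationMs; t += intervalMs) {
                // getFrameAtTime expects MICROseconds. OPTION_CLOSEST_SYNC
                // seeks to the nearest keyframe (fast but coarse); use
                // OPTION_CLOSEST for exact, slower seeks.
                Bitmap bitmap = retriever.getFrameAtTime(t * 1000,
                        MediaMetadataRetriever.OPTION_CLOSEST_SYNC);
                if (bitmap == null) continue;

                Frame frame = new Frame.Builder()
                        .setBitmap(bitmap)
                        .setTimestampMillis(t)
                        .build();
                SparseArray<Face> faces = detector.detect(frame);
                // ... use faces here, e.g. faces.valueAt(i).getPosition()
            }
        } finally {
            retriever.release();
            detector.release();
        }
    }
}
```

This never renders to the screen, matching the question's requirement, and it runs as fast as extraction plus detection allows. The trade-off is that MediaMetadataRetriever seeks per frame, so for dense per-frame processing a MediaCodec-based decode would be faster, at the cost of more code.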