This article describes how to detect eye blinks using the Google Vision API. It should be a useful reference for anyone facing the same problem; follow along below to see how it works.

Problem Description


I'm using the Vision API for face detection, and now I want to implement eye-blink detection. However, the Vision API still detects the eye even when one eye is closed.

Please help me figure out how to implement the eye-blink feature.

Solution

The "eye open probability" values from the face are the key to detecting the blink. In addition, you can use a Tracker to keep track of the eye state over time, to detect the sequence of events that indicate a blink:

both eyes open -> both eyes closed -> both eyes open

Here's an example tracker:

import android.util.Log;

import com.google.android.gms.vision.Detector;
import com.google.android.gms.vision.Tracker;
import com.google.android.gms.vision.face.Face;

public class BlinkTracker extends Tracker<Face> {
  // Probabilities above/below these thresholds count as "open"/"closed".
  private static final float OPEN_THRESHOLD = 0.85f;
  private static final float CLOSE_THRESHOLD = 0.15f;

  // 0 = waiting for open eyes, 1 = eyes were open, 2 = eyes were closed
  private int state = 0;

  @Override
  public void onUpdate(Detector.Detections<Face> detections, Face face) {
    float left = face.getIsLeftEyeOpenProbability();
    float right = face.getIsRightEyeOpenProbability();
    if ((left == Face.UNCOMPUTED_PROBABILITY) ||
        (right == Face.UNCOMPUTED_PROBABILITY)) {
      // At least one of the eyes was not detected.
      return;
    }

    switch (state) {
      case 0:
        if ((left > OPEN_THRESHOLD) && (right > OPEN_THRESHOLD)) {
          // Both eyes are initially open
          state = 1;
        }
        break;

      case 1:
        if ((left < CLOSE_THRESHOLD) && (right < CLOSE_THRESHOLD)) {
          // Both eyes become closed
          state = 2;
        }
        break;

      case 2:
        if ((left > OPEN_THRESHOLD) && (right > OPEN_THRESHOLD)) {
          // Both eyes are open again
          Log.i("BlinkTracker", "blink occurred!");
          state = 0;
        }
        break;
    }
  }
}

Note that you also need to enable "classifications" so that the detector can indicate whether the eyes are open or closed:

FaceDetector detector = new FaceDetector.Builder(context)
    .setClassificationType(FaceDetector.ALL_CLASSIFICATIONS)
    .build();

The tracker is then added as a processor for receiving face updates over time from the detector. For example, this configuration would be used to track whether the largest face in view has blinked:

detector.setProcessor(
    new LargestFaceFocusingProcessor(detector, new BlinkTracker()));
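
If you are starting from scratch, the detector also needs a stream of camera frames before any face updates arrive. Below is a minimal sketch, assuming the standard CameraSource class from the same Mobile Vision library and a front-facing camera; the preview size and frame rate are illustrative values rather than part of the original answer, and `context` is assumed to be an available Context:

// Sketch: feed camera frames to the detector so the BlinkTracker receives updates.
// Assumes the CAMERA permission has already been granted and that `detector`
// has been configured with the blink-tracking processor shown above.
CameraSource cameraSource = new CameraSource.Builder(context, detector)
    .setFacing(CameraSource.CAMERA_FACING_FRONT)  // blink detection typically uses the selfie camera
    .setRequestedPreviewSize(640, 480)            // illustrative values
    .setRequestedFps(30.0f)
    .build();

try {
  cameraSource.start();  // throws IOException if the camera cannot be opened
} catch (IOException e) {
  Log.e("BlinkTracker", "Could not start the camera source.", e);
}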

Alternatively, you could use a MultiProcessor instead of LargestFaceFocusingProcessor if you are interested in detecting blinks for all faces (not just the largest face).
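
For reference, here is a minimal sketch of that multi-face setup, assuming the MultiProcessor.Factory interface from the same Mobile Vision library; the factory creates a separate BlinkTracker for each face the detector reports, so each face's blink state is tracked independently:

detector.setProcessor(
    new MultiProcessor.Builder<Face>(new MultiProcessor.Factory<Face>() {
      @Override
      public Tracker<Face> create(Face face) {
        // One tracker per detected face, so blinks are detected per face.
        return new BlinkTracker();
      }
    }).build());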

That concludes this article on detecting eye blinks with the Google Vision API. We hope the answer above is helpful, and thank you for your continued support!
