I need to create an application with video-processing functionality.
My requirement is to create three views using the camera preview layer: the first view should show the original captured video, the second should show a flipped version of it, and the last should show a color-inverted version of it.
I started implementing this requirement.
First, I created the three views and the properties required for camera capture:

    @IBOutlet weak var captureView: UIView!
    @IBOutlet weak var flipView: UIView!
    @IBOutlet weak var InvertView: UIView!

    // Camera capture required properties
    var videoDataOutput: AVCaptureVideoDataOutput!
    var videoDataOutputQueue: DispatchQueue!
    var previewLayer:AVCaptureVideoPreviewLayer!
    var captureDevice : AVCaptureDevice!
    let session = AVCaptureSession()
    var replicationLayer: CAReplicatorLayer!
Now I adopt AVCaptureVideoDataOutputSampleBufferDelegate and set up the camera session:
extension ViewController:  AVCaptureVideoDataOutputSampleBufferDelegate{
    func setupAVCapture(){
        session.sessionPreset = AVCaptureSessionPreset640x480
        guard let device = AVCaptureDevice
            .defaultDevice(withDeviceType: .builtInWideAngleCamera,
                           mediaType: AVMediaTypeVideo,
                           position: .back) else{
                            return
        }
        captureDevice = device
        beginSession()
    }

    func beginSession(){
        var err : NSError? = nil
        var deviceInput:AVCaptureDeviceInput?
        do {
            deviceInput = try AVCaptureDeviceInput(device: captureDevice)
        } catch let error as NSError {
            err = error
            deviceInput = nil
        }
        if err != nil {
            print("error: \(err?.localizedDescription)")
        }
        if self.session.canAddInput(deviceInput) {
            self.session.addInput(deviceInput)
        }

        videoDataOutput = AVCaptureVideoDataOutput()
        videoDataOutput.alwaysDiscardsLateVideoFrames = true
        videoDataOutputQueue = DispatchQueue(label: "VideoDataOutputQueue")
        videoDataOutput.setSampleBufferDelegate(self, queue:self.videoDataOutputQueue)
        if session.canAddOutput(self.videoDataOutput){
            session.addOutput(self.videoDataOutput)
        }
        videoDataOutput.connection(withMediaType: AVMediaTypeVideo).isEnabled = true

        self.previewLayer = AVCaptureVideoPreviewLayer(session: self.session)
        self.previewLayer.frame = self.captureView.bounds
        self.previewLayer.videoGravity = AVLayerVideoGravityResizeAspect

        self.replicationLayer = CAReplicatorLayer()
        self.replicationLayer.frame = self.captureView.bounds
        self.replicationLayer.instanceCount = 1
        self.replicationLayer.instanceTransform = CATransform3DMakeTranslation(0.0, self.captureView.bounds.size.height / 1, 0.0)

        self.replicationLayer.addSublayer(self.previewLayer)
        self.captureView.layer.addSublayer(self.replicationLayer)
        self.flipView.layer.addSublayer(self.replicationLayer)
        self.InvertView.layer.addSublayer(self.replicationLayer)

        session.startRunning()
    }

    func captureOutput(_ captureOutput: AVCaptureOutput!,
                       didOutputSampleBuffer sampleBuffer: CMSampleBuffer!,
                       from connection: AVCaptureConnection!) {
        // do stuff here
    }

    // clean up AVCapture
    func stopCamera(){
        session.stopRunning()
    }

}
Here I used a CAReplicatorLayer to show the captured video in the three views. When I set self.replicationLayer.instanceCount to 1, I got this output:
[screenshot of the result with instanceCount = 1]
If I set self.replicationLayer.instanceCount to 3, I got this output:
[screenshot of the result with instanceCount = 3]
So please guide me on how to display the captured video in three different views, and suggest some ideas for converting the original captured video into flipped and color-inverted versions. Thanks in advance.

Best Answer

Finally, I found the answer with the help of the JohnnySlagle/Multiple-Camera-Feeds code.

I created three views like this:

@property (weak, nonatomic) IBOutlet UIView *video1;
@property (weak, nonatomic) IBOutlet UIView *video2;
@property (weak, nonatomic) IBOutlet UIView *video3;

Then I slightly changed setupFeedViews:
- (void)setupFeedViews {
    NSUInteger numberOfFeedViews = 3;

    for (NSUInteger i = 0; i < numberOfFeedViews; i++) {
        VideoFeedView *feedView = [self setupFeedViewWithFrame:CGRectMake(0, 0, self.video1.frame.size.width, self.video1.frame.size.height)];
        feedView.tag = i+1;
        switch (i) {
            case 0:
                [self.video1 addSubview:feedView];
                break;
            case 1:
                [self.video2 addSubview:feedView];
                break;
            case 2:
                [self.video3 addSubview:feedView];
                break;
            default:
                break;
        }
        [self.feedViews addObject:feedView];
    }
}
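
A note on the helpers used above: VideoFeedView and setupFeedViewWithFrame: come from the linked JohnnySlagle/Multiple-Camera-Feeds project. Judging from the bindDrawable/display calls in the delegate method below, VideoFeedView is essentially a GLKView (backed by a shared EAGLContext) with a cached viewBounds rectangle, and _eaglContext/_ciContext in the next snippet are that shared OpenGL ES context and a CIContext created from it. Since the question is asked in Swift, here is a minimal Swift sketch of such a feed view; the names and setup are assumptions about the project's approach, not its exact API (and GLKit is deprecated on recent iOS versions):

import GLKit

// Sketch of a GLKView-backed feed view similar to the repo's VideoFeedView.
// viewBounds caches the drawable size in pixels so the capture queue can reuse it.
class VideoFeedView: GLKView {
    var viewBounds: CGRect = .zero
}

// Hypothetical stand-in for the repo's setupFeedViewWithFrame:, assuming one shared EAGLContext.
func makeFeedView(frame: CGRect, context: EAGLContext) -> VideoFeedView {
    let view = VideoFeedView(frame: frame, context: context)
    view.enableSetNeedsDisplay = false   // frames are pushed manually from the capture queue
    view.bindDrawable()                  // creates the framebuffer so the drawable sizes are valid
    view.viewBounds = CGRect(x: 0, y: 0,
                             width: view.drawableWidth,
                             height: view.drawableHeight)
    return view
}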

Then I applied the filters in the AVCaptureVideoDataOutputSampleBufferDelegate callback:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
    CMFormatDescriptionRef formatDesc = CMSampleBufferGetFormatDescription(sampleBuffer);

    // update the video dimensions information
    _currentVideoDimensions = CMVideoFormatDescriptionGetDimensions(formatDesc);

    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

    CIImage *sourceImage = [CIImage imageWithCVPixelBuffer:(CVPixelBufferRef)imageBuffer options:nil];

    CGRect sourceExtent = sourceImage.extent;

    CGFloat sourceAspect = sourceExtent.size.width / sourceExtent.size.height;


    for (VideoFeedView *feedView in self.feedViews) {
        CGFloat previewAspect = feedView.viewBounds.size.width  / feedView.viewBounds.size.height;
        // we want to maintain the aspect ratio of the screen size, so we clip the video image
        CGRect drawRect = sourceExtent;
        if (sourceAspect > previewAspect) {
            // use full height of the video image, and center crop the width
            drawRect.origin.x += (drawRect.size.width - drawRect.size.height * previewAspect) / 2.0;
            drawRect.size.width = drawRect.size.height * previewAspect;
        } else {
            // use full width of the video image, and center crop the height
            drawRect.origin.y += (drawRect.size.height - drawRect.size.width / previewAspect) / 2.0;
            drawRect.size.height = drawRect.size.width / previewAspect;
        }
        [feedView bindDrawable];

        if (_eaglContext != [EAGLContext currentContext]) {
            [EAGLContext setCurrentContext:_eaglContext];
        }

        // clear eagl view to grey
        glClearColor(0.5, 0.5, 0.5, 1.0);
        glClear(GL_COLOR_BUFFER_BIT);

        // set the blend mode to "source over" so that CI will use that
        glEnable(GL_BLEND);
        glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);

        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        // This is necessary for non-power-of-two textures
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

        if (feedView.tag == 1) {
            if (sourceImage) {
                [_ciContext drawImage:sourceImage inRect:feedView.viewBounds fromRect:drawRect];
            }
        } else if (feedView.tag == 2) {
            sourceImage = [sourceImage imageByApplyingTransform:CGAffineTransformMakeScale(1, -1)];
            sourceImage = [sourceImage imageByApplyingTransform:CGAffineTransformMakeTranslation(0, sourceExtent.size.height)];
            if (sourceImage) {
                [_ciContext drawImage:sourceImage inRect:feedView.viewBounds fromRect:drawRect];
            }
        } else {
            CIFilter *effectFilter = [CIFilter filterWithName:@"CIColorInvert"];
            [effectFilter setValue:sourceImage forKey:kCIInputImageKey];
            CIImage *invertImage = [effectFilter outputImage];
            if (invertImage) {
                [_ciContext drawImage:invertImage inRect:feedView.viewBounds fromRect:drawRect];
            }
        }
        [feedView display];
    }
}
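
The per-view filtering above boils down to three cases: the first view draws the source CIImage as-is, the second flips it with a scale(1, -1) transform followed by a translation by the image height so it lands back in the original extent, and the third runs it through the built-in CIColorInvert filter. Since the question targets Swift, here is a minimal Swift sketch of just that step (the CIContext, center-crop drawRect, and GLKView plumbing stay the same as in the Objective-C code); the function name is illustrative:

import CoreImage
import CoreGraphics

// Returns the image to draw for a feed view tag, mirroring the branches above:
// 1 = original, 2 = vertically flipped, anything else = color-inverted.
func image(for tag: Int, from sourceImage: CIImage) -> CIImage? {
    switch tag {
    case 1:
        return sourceImage
    case 2:
        // Flip vertically, then translate back up by the image height
        // so the result stays inside the original extent.
        // (transformed(by:) was named applying(_:) in Swift 3.)
        return sourceImage
            .transformed(by: CGAffineTransform(scaleX: 1, y: -1))
            .transformed(by: CGAffineTransform(translationX: 0, y: sourceImage.extent.size.height))
    default:
        // CIColorInvert is a built-in Core Image filter.
        let filter = CIFilter(name: "CIColorInvert")
        filter?.setValue(sourceImage, forKey: kCIInputImageKey)
        return filter?.outputImage
    }
}

Drawing then stays identical for every branch, along the lines of ciContext.draw(filteredImage, in: feedView.viewBounds, from: drawRect).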

That's it. It successfully met my requirement.
