I use two pods: one for recording (SwiftyCam) and one for merging the recorded clips (Swift Video Generator).
However, I ran into a problem that started to bother me seriously. I also posted a question about it; here is the link if you want to read it: Last video in array of multiple videos dictates whether previous videos are mirrored. Please note (before reading the summary of the problem) that all videos are recorded in portrait, and videos recorded with the front camera are supposed to be mirrored (as single clips, but also within the merged video).
To summarize: if I record clips with only one of the cameras, the merged video looks fine (e.g., front camera only: every clip is mirrored, and merging doesn't change that). But if I merge clips from both cameras, say one recorded with the front camera and another with the back camera, the first video (front camera) ends up "unmirrored" in the merged video. The opposite happens if the last clip was recorded with the front camera: in that case, all back-camera clips end up mirrored in the merged video.
I then tried to look into the video generator's code and found this (in swift video generator, VideoGenerator.swift, l. 309):
var videoAssets: [AVURLAsset] = [] //at l. 274
//[...]

/// add audio and video tracks to the composition //at l. 309
if let videoTrack: AVMutableCompositionTrack = composition.addMutableTrack(withMediaType: AVMediaType.video, preferredTrackID: kCMPersistentTrackID_Invalid), let audioTrack: AVMutableCompositionTrack = composition.addMutableTrack(withMediaType: AVMediaType.audio, preferredTrackID: kCMPersistentTrackID_Invalid) {
  var insertTime = CMTime(seconds: 0, preferredTimescale: 1)

  /// for each URL add the video and audio tracks and their duration to the composition
  for sourceAsset in videoAssets {
    do {
      if let assetVideoTrack = sourceAsset.tracks(withMediaType: .video).first, let assetAudioTrack = sourceAsset.tracks(withMediaType: .audio).first {
        let frameRange = CMTimeRange(start: CMTime(seconds: 0, preferredTimescale: 1), duration: sourceAsset.duration)
        try videoTrack.insertTimeRange(frameRange, of: assetVideoTrack, at: insertTime)
        try audioTrack.insertTimeRange(frameRange, of: assetAudioTrack, at: insertTime)
        videoTrack.preferredTransform = assetVideoTrack.preferredTransform //reference 1
      }
      insertTime = insertTime + sourceAsset.duration
    } catch {
      DispatchQueue.main.async {
        failure(error)
      }
    }
  }
In my case, I would say the problem is that, while iterating over the asset array, only the last video's assetVideoTrack.preferredTransform ends up being used at reference 1, because videoTrack.preferredTransform is overwritten on every iteration. And this is where I got stuck; I couldn't see a way around it. I considered changing each clip's assetVideoTrack.preferredTransform according to the last clip in the array so the problem wouldn't occur anymore, but it always says preferredTransform is a get-only property (it is read-only on AVAssetTrack; only a mutable composition track's preferredTransform can be set)... Can somebody help me? Thanks a lot! Here is some more information that might be useful (a small diagnostic sketch follows the list):
- Every assetVideoTrack.naturalSize is always (1280.0, 720.0) (this actually surprised me a little, because it means the width is 1280 and the height is 720, even though the video looks like a normal portrait video, or am I wrong? Could this cause the problem?)
- Front-camera clips always have one particular assetVideoTrack.preferredTransform, and back-camera clips always have another
- Note that 720 - 1280 = -560 (I don't know whether this information is useful)
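For reference, here is a minimal diagnostic sketch (logTrackInfo is a made-up name, and it assumes an array of AVURLAssets like the videoAssets above) that prints these values for each clip:

import AVFoundation

/// Diagnostic only: print each clip's naturalSize and the components of its
/// preferredTransform so front- and back-camera clips can be compared.
func logTrackInfo(for assets: [AVURLAsset]) {
  for (index, asset) in assets.enumerated() {
    guard let track = asset.tracks(withMediaType: .video).first else { continue }
    let t = track.preferredTransform
    print("clip \(index): naturalSize = \(track.naturalSize), transform = (a: \(t.a), b: \(t.b), c: \(t.c), d: \(t.d), tx: \(t.tx), ty: \(t.ty))")
  }
}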
Best answer
Through some research and with the help of raywenderlich.com, I found a solution to this problem, and even found another, deeper problem that was caused by the other pod I mentioned (SwiftyCam). Because of that SwiftyCam issue I had to adapt the solution I present here a little: I had to change the translation of a CGAffineTransform that normally shouldn't be there (edit: I have now also included this code in the solution; it may or may not be needed: you will have to try and see, and I currently cannot explain why it is sometimes needed and sometimes not).
Solution:
First, we need two helper functions from raywenderlich.com.
This one tells us the orientation of a video and whether it is portrait. The [UIImage.Orientation] Mirrored cases were actually missing from the original function, but I also needed rightMirrored (the first else if):
static func orientationFromTransform(_ transform: CGAffineTransform)
  -> (orientation: UIImage.Orientation, isPortrait: Bool) {
  var assetOrientation = UIImage.Orientation.up
  var isPortrait = false
  if transform.a == 0 && transform.b == 1.0 && transform.c == -1.0 && transform.d == 0 {
    assetOrientation = .right
    isPortrait = true
  } else if transform.a == 0 && transform.b == 1.0 && transform.c == 1.0 && transform.d == 0 {
    assetOrientation = .rightMirrored
    isPortrait = true
  } else if transform.a == 0 && transform.b == -1.0 && transform.c == 1.0 && transform.d == 0 {
    assetOrientation = .left
    isPortrait = true
  } else if transform.a == 0 && transform.b == -1.0 && transform.c == -1.0 && transform.d == 0 {
    assetOrientation = .leftMirrored
    isPortrait = true
  } else if transform.a == 1.0 && transform.b == 0 && transform.c == 0 && transform.d == 1.0 {
    assetOrientation = .up
  } else if transform.a == -1.0 && transform.b == 0 && transform.c == 0 && transform.d == -1.0 {
    assetOrientation = .down
  }
  return (assetOrientation, isPortrait)
}
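As a quick usage sketch (someClipURL is a placeholder for the URL of one recorded clip, and the call assumes it runs inside the type that declares the static helper):

import AVFoundation
import UIKit

// Classify the first video track of a single clip (diagnostic usage sketch).
let asset = AVURLAsset(url: someClipURL) // someClipURL: placeholder
if let track = asset.tracks(withMediaType: .video).first {
  let info = orientationFromTransform(track.preferredTransform)
  print("orientation rawValue: \(info.orientation.rawValue), portrait: \(info.isPortrait)")
}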
This function, which builds on the previous one, gives us the instruction for one clip; it is essential for merging mirrored and unmirrored videos into one video without changing the others' "mirroredness":

static func videoCompositionInstruction(_ track: AVCompositionTrack, asset: AVAsset)
  -> AVMutableVideoCompositionLayerInstruction {
  let instruction = AVMutableVideoCompositionLayerInstruction(assetTrack: track)
  let assetTrack = asset.tracks(withMediaType: .video)[0]
  let transform = assetTrack.preferredTransform
  let assetInfo = orientationFromTransform(transform)
  var scaleToFitRatio = 1080 / assetTrack.naturalSize.width
  if assetInfo.isPortrait {
    scaleToFitRatio = 1080 / assetTrack.naturalSize.height
    let scaleFactor = CGAffineTransform(scaleX: scaleToFitRatio, y: scaleToFitRatio)
    var finalTransform = assetTrack.preferredTransform.concatenating(scaleFactor)
    // was needed in my case (if the video does not fill the entire screen and leaves
    // some parts black; I don't know when it is actually needed, so you'll have to try and see)
    if assetInfo.orientation == .rightMirrored || assetInfo.orientation == .leftMirrored {
      finalTransform = finalTransform.translatedBy(x: -transform.ty, y: 0)
    }
    instruction.setTransform(finalTransform, at: CMTime.zero)
  } else {
    let scaleFactor = CGAffineTransform(scaleX: scaleToFitRatio, y: scaleToFitRatio)
    var concat = assetTrack.preferredTransform.concatenating(scaleFactor)
      .concatenating(CGAffineTransform(translationX: 0, y: UIScreen.main.bounds.width / 2))
    if assetInfo.orientation == .down {
      let fixUpsideDown = CGAffineTransform(rotationAngle: CGFloat(Double.pi))
      let windowBounds = UIScreen.main.bounds
      let yFix = assetTrack.naturalSize.height + windowBounds.height
      let centerFix = CGAffineTransform(translationX: assetTrack.naturalSize.width, y: yFix)
      concat = fixUpsideDown.concatenating(centerFix).concatenating(scaleFactor)
    }
    instruction.setTransform(concat, at: CMTime.zero)
  }
  return instruction
}
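To make the scaling arithmetic concrete: with the naturalSize of (1280.0, 720.0) reported above and a portrait transform, scaleToFitRatio = 1080 / 720 = 1.5, so the rotated 720 x 1280 frame is scaled to 1080 x 1920, which matches the renderSize set on mainComposition below.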
The rest is basically just a rewrite of the instructions from raywenderlich.com so that the code can be applied to an array of URLs instead of exactly two. Note that the essential difference is exportSession.videoComposition = mainComposition (at the end of this code block) and, of course, everything mainComposition needs:

let mixComposition = AVMutableComposition()
guard let completeMoviePath = completeMoviePathOp else {
  DispatchQueue.main.async {
    failure(VideoGeneratorError(error: .kFailedToFetchDirectory)) //NEW ERROR REQUIRED? @owner of swift-video-generator
  }
  return
}
var instructions: [AVMutableVideoCompositionLayerInstruction] = []
var insertTime = CMTime(seconds: 0, preferredTimescale: 1)

/// for each URL add the video and audio tracks and their duration to the composition
for sourceAsset in videoAssets {
  let frameRange = CMTimeRange(start: CMTime(seconds: 0, preferredTimescale: 1), duration: sourceAsset.duration)
  guard
    let nthVideoTrack = mixComposition.addMutableTrack(withMediaType: .video, preferredTrackID: Int32(kCMPersistentTrackID_Invalid)),
    let nthAudioTrack = mixComposition.addMutableTrack(withMediaType: .audio, preferredTrackID: Int32(kCMPersistentTrackID_Invalid)), //0 used to be kCMPersistentTrackID_Invalid
    let assetVideoTrack = sourceAsset.tracks(withMediaType: .video).first,
    let assetAudioTrack = sourceAsset.tracks(withMediaType: .audio).first
  else {
    DispatchQueue.main.async {
      failure(VideoGeneratorError(error: .kMissingVideoURLs))
    }
    return
  }
  do {
    try nthVideoTrack.insertTimeRange(frameRange, of: assetVideoTrack, at: insertTime)
    try nthAudioTrack.insertTimeRange(frameRange, of: assetAudioTrack, at: insertTime)
    let nthInstruction = videoCompositionInstruction(nthVideoTrack, asset: sourceAsset)
    nthInstruction.setOpacity(0.0, at: CMTimeAdd(insertTime, sourceAsset.duration))
    instructions.append(nthInstruction)
    insertTime = insertTime + sourceAsset.duration
  } catch {
    DispatchQueue.main.async {
      failure(error)
    }
  }
}

let mainInstruction = AVMutableVideoCompositionInstruction()
mainInstruction.timeRange = CMTimeRange(start: CMTime(seconds: 0, preferredTimescale: 1), duration: insertTime)
mainInstruction.layerInstructions = instructions
let mainComposition = AVMutableVideoComposition()
mainComposition.instructions = [mainInstruction]
mainComposition.frameDuration = CMTimeMake(value: 1, timescale: 30)
mainComposition.renderSize = CGSize(width: 1080, height: 1920)

/// try to start an export session and set the path and file type
if let exportSession = AVAssetExportSession(asset: mixComposition, presetName: AVAssetExportPresetHighestQuality) { //DOES NOT WORK WITH AVAssetExportPresetPassthrough
  exportSession.outputFileType = .mov
  exportSession.outputURL = completeMoviePath
  exportSession.videoComposition = mainComposition
  exportSession.shouldOptimizeForNetworkUse = true

  /// try to export the file and handle the status cases
  exportSession.exportAsynchronously(completionHandler: {
    /// same as before...
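    // The original listing stops at "same as before". What follows is only a
    // sketch of a typical completion handler; `success` is assumed to be the
    // generator's success callback (hypothetical here), and the original
    // file's own handler applies instead.
    switch exportSession.status {
    case .completed:
      DispatchQueue.main.async {
        success(completeMoviePath)
      }
    case .failed, .cancelled:
      DispatchQueue.main.async {
        if let error = exportSession.error {
          failure(error)
        }
      }
    default:
      break
    }
  })
}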
About ios - changing an assetVideoTrack's preferredTransform to fix the mirroring problem when merging videos, there is a similar question on Stack Overflow: https://stackoverflow.com/questions/57083151/