Question
I am compressing videos with AVAssetWriter. If I set the output file type to QuickTime movie it works fine, but when I try to export as MPEG-4 I get this error at runtime:
'To perform a passthrough to file type public.mpeg-4, please provide a format hint in the AVAssetWriterInput initializer'
Here is the specific code where I declare the file type:
let videoInputQueue = DispatchQueue(label: "videoQueue")
let audioInputQueue = DispatchQueue(label: "audioQueue")

let formatter = DateFormatter()
formatter.dateFormat = "yyyy'-'MM'-'dd'T'HH':'mm':'ss'Z'"
let date = Date()
let documentsPath = NSTemporaryDirectory()
let outputPath = "\(documentsPath)/\(formatter.string(from: date)).mp4"
let newOutputUrl = URL(fileURLWithPath: outputPath)

do {
    assetWriter = try AVAssetWriter(outputURL: newOutputUrl, fileType: AVFileTypeMPEG4)
} catch {
    assetWriter = nil
}
guard let writer = assetWriter else {
    fatalError("assetWriter was nil")
}

writer.shouldOptimizeForNetworkUse = true
writer.add(videoInput)
writer.add(audioInput)
Here is the full code for my compression:
func compressFile(urlToCompress: URL, completion: @escaping (URL) -> Void) {
    // video file to make the asset
    var audioFinished = false
    var videoFinished = false

    let asset = AVAsset(url: urlToCompress)

    // create asset reader
    do {
        assetReader = try AVAssetReader(asset: asset)
    } catch {
        assetReader = nil
    }
    guard let reader = assetReader else {
        fatalError("Could not initialize asset reader; its try/catch probably failed")
    }

    let videoTrack = asset.tracks(withMediaType: AVMediaTypeVideo).first!
    let audioTrack = asset.tracks(withMediaType: AVMediaTypeAudio).first!

    let videoReaderSettings: [String: Any] = [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32ARGB]

    // ADJUST BIT RATE OF VIDEO HERE
    let videoSettings: [String: Any] = [
        AVVideoCompressionPropertiesKey: [AVVideoAverageBitRateKey: self.bitrate],
        AVVideoCodecKey: AVVideoCodecH264,
        AVVideoHeightKey: videoTrack.naturalSize.height,
        AVVideoWidthKey: videoTrack.naturalSize.width
    ]

    let assetReaderVideoOutput = AVAssetReaderTrackOutput(track: videoTrack, outputSettings: videoReaderSettings)
    let assetReaderAudioOutput = AVAssetReaderTrackOutput(track: audioTrack, outputSettings: nil)

    if reader.canAdd(assetReaderVideoOutput) {
        reader.add(assetReaderVideoOutput)
    } else {
        fatalError("Couldn't add video output reader")
    }
    if reader.canAdd(assetReaderAudioOutput) {
        reader.add(assetReaderAudioOutput)
    } else {
        fatalError("Couldn't add audio output reader")
    }

    let audioInput = AVAssetWriterInput(mediaType: AVMediaTypeAudio, outputSettings: nil)
    let videoInput = AVAssetWriterInput(mediaType: AVMediaTypeVideo, outputSettings: videoSettings)
    videoInput.transform = videoTrack.preferredTransform

    // we need to add samples to the video input
    let videoInputQueue = DispatchQueue(label: "videoQueue")
    let audioInputQueue = DispatchQueue(label: "audioQueue")

    let formatter = DateFormatter()
    formatter.dateFormat = "yyyy'-'MM'-'dd'T'HH':'mm':'ss'Z'"
    let date = Date()
    let documentsPath = NSTemporaryDirectory()
    let outputPath = "\(documentsPath)/\(formatter.string(from: date)).mp4"
    let newOutputUrl = URL(fileURLWithPath: outputPath)

    do {
        assetWriter = try AVAssetWriter(outputURL: newOutputUrl, fileType: AVFileTypeMPEG4)
    } catch {
        assetWriter = nil
    }
    guard let writer = assetWriter else {
        fatalError("assetWriter was nil")
    }

    writer.shouldOptimizeForNetworkUse = true
    writer.add(videoInput)
    writer.add(audioInput)

    writer.startWriting()
    reader.startReading()
    writer.startSession(atSourceTime: kCMTimeZero)

    let closeWriter: () -> Void = {
        if audioFinished && videoFinished {
            self.assetWriter?.finishWriting(completionHandler: {
                self.checkFileSize(sizeUrl: (self.assetWriter?.outputURL)!, message: "The file size of the compressed file is: ")
                completion((self.assetWriter?.outputURL)!)
                print("Completed 1")
            })
            self.assetReader?.cancelReading()
        }
    }

    audioInput.requestMediaDataWhenReady(on: audioInputQueue) {
        while audioInput.isReadyForMoreMediaData {
            if let sample = assetReaderAudioOutput.copyNextSampleBuffer() {
                audioInput.append(sample)
            } else {
                audioInput.markAsFinished()
                DispatchQueue.main.async {
                    audioFinished = true
                    closeWriter()
                    print("Completed 2")
                }
                break
            }
        }
    }

    videoInput.requestMediaDataWhenReady(on: videoInputQueue) {
        // request data here
        while videoInput.isReadyForMoreMediaData {
            if let sample = assetReaderVideoOutput.copyNextSampleBuffer() {
                videoInput.append(sample)
            } else {
                videoInput.markAsFinished()
                DispatchQueue.main.async {
                    videoFinished = true
                    print("Completed 3")
                    closeWriter()
                }
                break
            }
        }
    }
}
Answer
By creating your audio AVAssetWriterInput with nil outputSettings you indicate that you want to pass through your audio data. The assetWriterInputWithMediaType:outputSettings: header file comment says:
AVAssetWriter only supports passing through a restricted set of media types and subtypes. In order to pass through media data to files other than AVFileTypeQuickTimeMovie, a non-NULL format hint must be provided using +assetWriterInputWithMediaType:outputSettings:sourceFormatHint: instead of this method.

A format description is needed, and luckily you can get one from the sample buffers you encounter:
let formatDesc = CMSampleBufferGetFormatDescription(anAudioSampleBuffer)!
let audioInput = AVAssetWriterInput(mediaType: AVMediaTypeAudio, outputSettings: nil, sourceFormatHint: formatDesc)
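In the compressFile function above, however, the audio AVAssetWriterInput is created before any sample buffers have been read. One way around that (a sketch, not part of the original answer) is to take the hint from the source track itself: AVAssetTrack exposes a formatDescriptions array, so the audio track loaded earlier in the function can supply the CMFormatDescription up front:

// Sketch: derive the format hint from the source audio track instead of
// waiting for a sample buffer. Assumes `audioTrack` is the AVAssetTrack
// fetched earlier in compressFile and that it has at least one description.
let audioFormatHint = audioTrack.formatDescriptions.first as! CMFormatDescription
let audioInput = AVAssetWriterInput(mediaType: AVMediaTypeAudio,
                                    outputSettings: nil,
                                    sourceFormatHint: audioFormatHint)

This keeps the writer setup in one place, at the cost of force-casting the first element of formatDescriptions; a production version should handle an empty array gracefully.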
So if it's that easy to do, why doesn't AVAssetWriter do it for us? I guess because it would awkwardly push AVAssetWriter's usual initialization until some point after you've appended a few CMSampleBuffers, or (maybe?) because not all CMSampleBuffers have a format description.