I’m trying to merge multiple videos together (all from different sources) in Swift with AVFoundation. The resulting video should be in portrait format.
The function I wrote merges the videos together into one video. However, videos taken on a mobile phone (such as an iPhone) seem to be exported in landscape while the rest are in portrait. The landscape video is then stretched vertically to fit the portrait aspect ratio. It seems that the iPhone saves the video as landscape (even if it was shot in portrait), and the system then uses the metadata to display it as portrait.
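To illustrate: a clip recorded in portrait on an iPhone typically has a landscape `naturalSize` (e.g. 1920×1080) plus a 90° `preferredTransform`, and applying that transform to the natural bounds is what yields the portrait display size. A standalone sketch of that arithmetic (the transform values are hard-coded as an illustrative example, and a minimal struct stands in for `CGAffineTransform` so this runs anywhere):

```swift
import Foundation

// Minimal stand-in for CGAffineTransform (illustrative only).
struct Transform { var a, b, c, d, tx, ty: Double }

// Apply an affine transform to a point: (x', y') = (a*x + c*y + tx, b*x + d*y + ty).
func apply(_ t: Transform, to p: (x: Double, y: Double)) -> (x: Double, y: Double) {
    (x: t.a * p.x + t.c * p.y + t.tx, y: t.b * p.x + t.d * p.y + t.ty)
}

// A typical preferredTransform for portrait footage: rotate 90° clockwise,
// then translate so the frame stays in positive coordinates.
let portrait = Transform(a: 0, b: 1, c: -1, d: 0, tx: 1080, ty: 0)
let naturalSize = (width: 1920.0, height: 1080.0) // stored as landscape

// Transform all four corners of the natural bounds and take the bounding box.
let corners = [(0.0, 0.0), (naturalSize.width, 0.0),
               (0.0, naturalSize.height), (naturalSize.width, naturalSize.height)]
    .map { apply(portrait, to: (x: $0.0, y: $0.1)) }
let displayWidth  = corners.map { $0.x }.max()! - corners.map { $0.x }.min()!
let displayHeight = corners.map { $0.y }.max()! - corners.map { $0.y }.min()!
print(displayWidth, displayHeight) // 1080.0 1920.0 — portrait, as the player shows it
```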
To combat this, I tried to detect whether a video is landscape (or in another rotation) and then manually transform it to portrait. However, when I do this, the transformation seems to be applied to the entire track, which results in the whole composition rendering in landscape, with some of the videos in landscape and others in portrait. I can’t figure out how to apply a transformation to only a single video. I’ve tried using multiple tracks, but then only one video is shown and the rest of the tracks are ignored. Here is an example of the exported video (it really renders like this; it should render as 9:16, but with the transformation it renders as 16:9; notice that the second clip is distorted even though it was originally recorded in portrait).
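For reference, the orientation check in my code below boils down to reading the rotation encoded in the transform. Here is an equivalent standalone version that classifies by the rotation angle (`atan2` of the transform’s `b` and `a` entries) instead of the translation values; which angle maps to "left" versus "right" is an assumption on my part and may need flipping:

```swift
import Foundation

enum VideoOrientation: String {
    case up, down, left, right
}

// Classify a preferredTransform by its rotation component.
// a and b are the first column of the 2x3 affine matrix; the rotation
// angle is atan2(b, a).
func orientation(a: Double, b: Double) -> VideoOrientation {
    let degrees = (atan2(b, a) * 180 / .pi).rounded()
    switch degrees {
    case 90:        return .right // typical iPhone portrait capture
    case -90:       return .left
    case 180, -180: return .down
    default:        return .up    // identity transform, stored landscape
    }
}

print(orientation(a: 0, b: 1)) // right
print(orientation(a: 1, b: 0)) // up
```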
Here is my code:
private static func mergeVideos(
    videoPaths: [URL],
    outputURL: URL,
    handler: @escaping (_ path: URL) -> Void
) {
    let videoComposition = AVMutableComposition()
    var lastTime: CMTime = .zero
    guard let videoCompositionTrack = videoComposition.addMutableTrack(withMediaType: .video, preferredTrackID: Int32(kCMPersistentTrackID_Invalid)) else { return }
    for path in videoPaths {
        let assetVideo = AVAsset(url: path)
        getTracks(assetVideo, .video) { videoTracks in
            // Add video track
            do {
                try videoCompositionTrack.insertTimeRange(CMTimeRangeMake(start: .zero, duration: assetVideo.duration), of: videoTracks[0], at: lastTime)
                // Apply the original transform
                if let assetVideoTrack = assetVideo.tracks(withMediaType: AVMediaType.video).last {
                    let t = assetVideoTrack.preferredTransform
                    let size = assetVideoTrack.naturalSize
                    let videoAssetOrientation: CGImagePropertyOrientation
                    if size.width == t.tx && size.height == t.ty {
                        print("down")
                        videoAssetOrientation = .down
                        videoCompositionTrack.preferredTransform = CGAffineTransform(rotationAngle: .pi) // 180 degrees
                    } else if t.tx == 0 && t.ty == 0 {
                        print("up")
                        videoCompositionTrack.preferredTransform = assetVideoTrack.preferredTransform
                        videoAssetOrientation = .up
                    } else if t.tx == 0 && t.ty == size.width {
                        print("left")
                        videoAssetOrientation = .left
                        videoCompositionTrack.preferredTransform = CGAffineTransform(rotationAngle: .pi / 2) // 90 degrees to the right
                    } else {
                        print("right")
                        videoAssetOrientation = .right
                        videoCompositionTrack.preferredTransform = CGAffineTransform(rotationAngle: -.pi / 2) // 90 degrees to the left
                    }
                }
            } catch {
                print("Failed to insert video track")
                return
            }
            self.getTracks(assetVideo, .audio) { audioTracks in
                // Add audio track only if it exists
                if !audioTracks.isEmpty {
                    do {
                        try videoCompositionTrack.insertTimeRange(CMTimeRangeMake(start: .zero, duration: assetVideo.duration), of: audioTracks[0], at: lastTime)
                    } catch {
                        print("Failed to insert audio track")
                        return
                    }
                }
                // Update time
                lastTime = CMTimeAdd(lastTime, assetVideo.duration)
            }
        }
    }
    guard let exporter = AVAssetExportSession(asset: videoComposition, presetName: AVAssetExportPresetHighestQuality) else { return }
    exporter.outputURL = outputURL
    exporter.outputFileType = AVFileType.mp4
    exporter.shouldOptimizeForNetworkUse = true
    exporter.exportAsynchronously(completionHandler: {
        switch exporter.status {
        case .failed:
            print("Export failed: \(exporter.error!)")
        case .completed:
            print("Completed export")
            handler(outputURL)
        default:
            break
        }
    })
}
Anyone know what I’m missing here? Any help is greatly appreciated.