I am trying to achieve an animated gradient effect that changes over time, driven by the current presentation time in seconds. I am using AVPlayer and AVMutableVideoComposition along with a custom instruction and compositor class to generate the effect. I didn't want to load any video file; instead I want to generate a custom 20-second video entirely from my own instructions, using a Metal compute shader to render the frames.
However, when I run the code, I get a frozen player with the gradient applied: the playhead is stuck at 00:00 and the player won't play anything past that.
Here is the screenshot:
My entire code:
import AVFoundation
import AVKit // needed for AVPlayerView
import Metal
import MetalKit
func createGradientVideoComposition() -> (AVMutableComposition, AVMutableVideoComposition) {
    let composition = AVMutableComposition()
    guard let videoTrack = composition.addMutableTrack(withMediaType: .video,
                                                       preferredTrackID: kCMPersistentTrackID_Invalid) else {
        fatalError("Unable to add video track")
    }
    let duration = CMTime(seconds: 20, preferredTimescale: 600)
    try? videoTrack.insertEmptyTimeRange(CMTimeRange(start: .zero, duration: duration))
    // Create an AVMutableVideoComposition with a 60fps frame rate and desired render size
    let instruction = AVMutableVideoCompositionInstruction()
    instruction.timeRange = CMTimeRange(start: .zero, duration: duration)
    let videoComposition = AVMutableVideoComposition()
    videoComposition.frameDuration = CMTime(value: 1, timescale: 60)
    videoComposition.renderSize = CGSize(width: 1920, height: 1080)
    videoComposition.instructions = [instruction]
    // Set the custom video compositor class that uses our Metal compute shader
    videoComposition.customVideoCompositorClass = GradientVideoCompositoraa.self
    return (composition, videoComposition)
}
class GradientVideoCompositoraa: NSObject, AVVideoCompositing {
    // MARK: - AVVideoCompositing Protocol Properties
    var sourcePixelBufferAttributes: [String : Any]? = [
        kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_32BGRA)
    ]
    var requiredPixelBufferAttributesForRenderContext: [String : Any] = [
        kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_32BGRA)
    ]
    // MARK: - Metal Properties
    private let device: MTLDevice
    private let commandQueue: MTLCommandQueue
    private var pipelineState: MTLComputePipelineState!
    private var textureCache: CVMetalTextureCache?
    // MARK: - Initialization
    override init() {
        guard let device = MTLCreateSystemDefaultDevice() else {
            fatalError("Metal is not supported on this device")
        }
        self.device = device
        self.commandQueue = device.makeCommandQueue()!
        super.init()
        // Create a Metal texture cache for efficient texture creation
        CVMetalTextureCacheCreate(kCFAllocatorDefault, nil, device, nil, &textureCache)
        // Load the default Metal library and get the compute function
        let defaultLibrary = device.makeDefaultLibrary()!
        guard let kernelFunction = defaultLibrary.makeFunction(name: "gradientKernel") else {
            fatalError("Could not find gradientKernel function")
        }
        do {
            pipelineState = try device.makeComputePipelineState(function: kernelFunction)
        } catch {
            fatalError("Failed to create pipeline state: \(error)")
        }
    }
    // MARK: - AVVideoCompositing Protocol Methods
    func renderContextChanged(_ newRenderContext: AVVideoCompositionRenderContext) {
        // You can handle changes to the render context here if needed.
    }
    func startRequest(_ asyncVideoCompositionRequest: AVAsynchronousVideoCompositionRequest) {
        autoreleasepool {
            // Obtain a new pixel buffer for the output frame
            guard let dstBuffer = asyncVideoCompositionRequest.renderContext.newPixelBuffer() else {
                asyncVideoCompositionRequest.finish(with: NSError(domain: "com.example", code: 0, userInfo: nil))
                return
            }
            // Get the output image dimensions
            let width = CVPixelBufferGetWidth(dstBuffer)
            let height = CVPixelBufferGetHeight(dstBuffer)
            // Create a Metal texture from the pixel buffer
            guard let textureCache = self.textureCache else {
                asyncVideoCompositionRequest.finish(with: NSError(domain: "com.example", code: 0, userInfo: nil))
                return
            }
            var cvTextureOut: CVMetalTexture?
            let result = CVMetalTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
                                                                   textureCache,
                                                                   dstBuffer,
                                                                   nil,
                                                                   .bgra8Unorm,
                                                                   width,
                                                                   height,
                                                                   0,
                                                                   &cvTextureOut)
            if result != kCVReturnSuccess {
                asyncVideoCompositionRequest.finish(with: NSError(domain: "com.example", code: 0, userInfo: nil))
                return
            }
            guard let cvTexture = cvTextureOut,
                  let outputTexture = CVMetalTextureGetTexture(cvTexture) else {
                asyncVideoCompositionRequest.finish(with: NSError(domain: "com.example", code: 0, userInfo: nil))
                return
            }
            // Calculate the current presentation time (in seconds) to animate the gradient
            let presentationTime = asyncVideoCompositionRequest.compositionTime.seconds
            // Create a command buffer and compute command encoder
            guard let commandBuffer = commandQueue.makeCommandBuffer(),
                  let computeEncoder = commandBuffer.makeComputeCommandEncoder() else {
                asyncVideoCompositionRequest.finish(with: NSError(domain: "com.example", code: 0, userInfo: nil))
                return
            }
            computeEncoder.setComputePipelineState(pipelineState)
            computeEncoder.setTexture(outputTexture, index: 0)
            // Pass the current time to the shader as a constant buffer
            var time = Float(presentationTime)
            computeEncoder.setBytes(&time, length: MemoryLayout<Float>.size, index: 0)
            // Determine thread group sizes based on the output dimensions
            let threadGroupSize = MTLSize(width: 8, height: 8, depth: 1)
            let threadGroups = MTLSize(width: (width + threadGroupSize.width - 1) / threadGroupSize.width,
                                       height: (height + threadGroupSize.height - 1) / threadGroupSize.height,
                                       depth: 1)
            computeEncoder.dispatchThreadgroups(threadGroups, threadsPerThreadgroup: threadGroupSize)
            computeEncoder.endEncoding()
            // Commit the command buffer and wait for the GPU to finish
            commandBuffer.commit()
            commandBuffer.waitUntilCompleted()
            // Finish the composition request with the rendered pixel buffer
            asyncVideoCompositionRequest.finish(withComposedVideoFrame: dstBuffer)
        }
    }
    func cancelAllPendingVideoCompositionRequests() {
        // Implement cancellation logic if needed.
    }
}
class PreviewView: NSView {
    let playerView: AVPlayerView
    init() {
        playerView = AVPlayerView(frame: .zero)
        super.init(frame: .zero)
        playerView.translatesAutoresizingMaskIntoConstraints = false
        addSubview(playerView)
        NSLayoutConstraint.activate([
            playerView.widthAnchor.constraint(equalToConstant: 500),
            playerView.heightAnchor.constraint(equalToConstant: 300),
            playerView.centerXAnchor.constraint(equalTo: centerXAnchor),
            playerView.centerYAnchor.constraint(equalTo: centerYAnchor)
        ])
        let (composition, videoComposition) = createGradientVideoComposition()
        let playerItem = AVPlayerItem(asset: composition)
        playerItem.videoComposition = videoComposition
        let player = AVPlayer(playerItem: playerItem)
        playerView.player = player
        player.play()
    }
    // Required for NSCoding compliance.
    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }
}
#include <metal_stdlib>
using namespace metal;
// Note: the function name must match the makeFunction(name:) lookup above.
kernel void gradientKernel(texture2d<float, access::write> output [[texture(0)]],
                           constant float &time [[buffer(0)]],
                           uint2 id [[thread_position_in_grid]]) {
    float2 uv = float2(id) / float2(output.get_width(), output.get_height());
    // Animated colors based on time
    float3 color1 = float3(sin(time) * 0.8 + 0.1, 0.6, 1.0);
    float3 color2 = float3(0.12, 0.99, cos(time) * 0.9 + 0.3);
    // Linear interpolation for gradient
    float3 gradientColor = mix(color1, color2, uv.y);
    output.write(float4(gradientColor, 1.0), id);
}
2 Answers
Visual isTranslatable: NO; reason: observation failure: noObservations
is due to the Vision text analysis that is enabled by default for many views. You can disable it by setting fields like allowsVideoFrameAnalysis to false. I think that log is a warning that the VisionKit subsystem did not detect any OCR output (i.e. text on screen). I believe it can safely be ignored and has nothing to do with your rendering code, which smells correct at first glance.
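A minimal sketch of silencing that log, assuming macOS 13 or later where AVPlayerView exposes the property:
import AVKit

let playerView = AVPlayerView(frame: .zero)
// Disable Live Text / VisionKit frame analysis; avoids the noObservations warning
playerView.allowsVideoFrameAnalysis = false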
Please share the AVMutableComposition code, not just the instruction. The composition creation is missing, and I believe the timing there is wrong, since your AVPlayerView shows a time range of 00:00:00 to 00:00:00.
That tells me you did not insert an empty track with a 20-second gap into your mutable composition, which is required: the instruction needs underlying track data to trigger rendering.
That is, is your resulting asset actually 20 seconds long? If not, you won't get 20 seconds rendered.
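As a quick sanity check (a sketch using the question's own createGradientVideoComposition()), print what the composition actually reports before handing it to the player:
// Does the composition really report 20 seconds?
let (composition, _) = createGradientVideoComposition()
print("Asset duration: \(CMTimeGetSeconds(composition.duration)) s") // expect 20.0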
It seems insertEmptyTimeRange(_:) can't trigger startRequest(_:). Inserting any video asset track does work, but then the player doesn't animate: startRequest(_:) is called only once. Replacing the stock instruction with a custom GradientInstruction class fixed it. Appended to @vade's answer: for an arbitrary duration, just insert the video asset track multiple times with different start times and it works fine; scaleTimeRange(toDuration:) does not. See the sketch after the code below.
class GradientInstruction: NSObject, AVVideoCompositionInstructionProtocol {
    var timeRange: CMTimeRange
    var enablePostProcessing: Bool = true
    // true tells AVFoundation the output varies over time,
    // so every frame must be rendered instead of reusing a cached one
    var containsTweening: Bool = true
    var requiredSourceTrackIDs: [NSValue]?
    var passthroughTrackID: CMPersistentTrackID = kCMPersistentTrackID_Invalid

    init(timeRange: CMTimeRange) {
        self.timeRange = timeRange
    }
}
let instruction = GradientInstruction(timeRange: CMTimeRange(start: .zero, duration: duration))
videoComposition.instructions = [instruction]
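And a sketch of the multi-insert idea for arbitrary durations. The bundled clip name "filler.mp4" is hypothetical; its pixels never appear on screen, because the custom compositor overwrites every output frame anyway:
// Pad the composition to 20 s by inserting a real video track
// repeatedly at increasing start times.
let fillerURL = Bundle.main.url(forResource: "filler", withExtension: "mp4")!
let fillerAsset = AVURLAsset(url: fillerURL)
let fillerTrack = fillerAsset.tracks(withMediaType: .video).first!
let clipRange = CMTimeRange(start: .zero, duration: fillerAsset.duration)

var cursor = CMTime.zero
let total = CMTime(seconds: 20, preferredTimescale: 600)
while cursor < total {
    try? videoTrack.insertTimeRange(clipRange, of: fillerTrack, at: cursor)
    cursor = CMTimeAdd(cursor, fillerAsset.duration)
}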