I have a function that returns a frame as a result. I want to know how to build a video from a for-loop that calls this function, without saving every frame to disk and then creating the video afterwards.
What I have for now is something similar to:
import cv2

out = cv2.VideoWriter('video.mp4', cv2.VideoWriter_fourcc(*'DIVX'), 14.25, (500, 258))

for frame in frames:
    img_result = MyImageTreatmentFunction(frame)  # returns a numpy array image
    out.write(img_result)

out.release()
The video then gets created as video.mp4, and I can access it afterwards. I'm asking myself if there's a way to keep this video in a variable that I can easily convert to bytes later. My purpose is to send the video via HTTP POST.
I've looked at ffmpeg-python and OpenCV, but I didn't find anything that applies to my case.
Answer
We may use PyAV for encoding the video to an "in memory" file.
PyAV is a Pythonic binding for the FFmpeg libraries.
The interface is relatively low level, but it allows us to do things that are not possible using other FFmpeg bindings.
Here are the main stages for creating an MP4 in memory using PyAV:
Create a BytesIO "in memory" file:
output_memory_file = io.BytesIO()
Use PyAV to open the "in memory" file as an MP4 video output file:
output = av.open(output_memory_file, 'w', format="mp4")
Add an H.264 video stream to the MP4 container, and set the codec parameters:
stream = output.add_stream('h264', str(fps))
stream.width = width
stream.height = height
stream.pix_fmt = 'yuv444p'
stream.options = {'crf': '17'}
Iterate over the OpenCV images, convert each image to a PyAV VideoFrame, encode, and "mux":
for i in range(n_frames):
    img = make_sample_image(i)  # Create OpenCV image for testing (resolution 192x108, pixel format BGR).
    frame = av.VideoFrame.from_ndarray(img, format='bgr24')
    packet = stream.encode(frame)
    output.mux(packet)
Flush the encoder and close the “in memory” file:
packet = stream.encode(None)
output.mux(packet)
output.close()
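At this point the complete MP4 exists only in RAM. To get it into a plain variable (the asker's goal), the bytes can be pulled out of the BytesIO object; a minimal sketch, reusing the output_memory_file variable from the snippets above:

video_bytes = output_memory_file.getvalue()  # bytes object holding the complete MP4 file

Alternatively, output_memory_file.seek(0) rewinds the buffer so the file-like object itself can be handed to code that expects a readable stream.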
The following code sample encodes 100 synthetic images into an "in memory" MP4 file.
Each synthetic image is built with OpenCV and shows a sequential blue frame number (used for testing).
At the end, the memory file is written to an output.mp4 file for testing.
import numpy as np
import cv2
import av
import io

n_frames = 100  # Select number of frames (for testing).

width, height, fps = 192, 108, 23.976  # Select video resolution and framerate.

output_memory_file = io.BytesIO()  # Create BytesIO "in memory file".

output = av.open(output_memory_file, 'w', format="mp4")  # Open "in memory file" as MP4 video output.
stream = output.add_stream('h264', str(fps))  # Add H.264 video stream to the MP4 container, with framerate = fps.
stream.width = width  # Set frame width
stream.height = height  # Set frame height
stream.pix_fmt = 'yuv444p'  # Select yuv444p pixel format (better quality than default yuv420p).
stream.options = {'crf': '17'}  # Select low crf for high quality (the price is larger file size).


def make_sample_image(i):
    """ Build synthetic "raw BGR" image for testing """
    p = width//60
    img = np.full((height, width, 3), 60, np.uint8)
    cv2.putText(img, str(i+1), (width//2-p*10*len(str(i+1)), height//2+p*10), cv2.FONT_HERSHEY_DUPLEX, p, (255, 30, 30), p*2)  # Blue number
    return img


# Iterate the created images, encode and write to MP4 memory file.
for i in range(n_frames):
    img = make_sample_image(i)  # Create OpenCV image for testing (resolution 192x108, pixel format BGR).
    frame = av.VideoFrame.from_ndarray(img, format='bgr24')  # Convert image from NumPy Array to frame.
    packet = stream.encode(frame)  # Encode video frame
    output.mux(packet)  # "Mux" the encoded frame (add the encoded frame to MP4 file).

# Flush the encoder
packet = stream.encode(None)
output.mux(packet)
output.close()

# Write BytesIO from RAM to file, for testing
with open("output.mp4", "wb") as f:
    f.write(output_memory_file.getbuffer())
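The original question mentions sending the video via HTTP POST. As a sketch of that last step, assuming the requests library and a placeholder upload URL with a hypothetical form field name (none of these come from the answer above):

import requests

video_bytes = output_memory_file.getvalue()  # Complete MP4 taken from the BytesIO "in memory file".

# Send the in-memory MP4 as a multipart/form-data upload (URL and field name are placeholders).
response = requests.post(
    'http://example.com/upload',
    files={'video': ('video.mp4', video_bytes, 'video/mp4')},
)
print(response.status_code)

If the server expects the raw bytes in the request body instead of a multipart upload, passing data=video_bytes together with an appropriate Content-Type header would work as well.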