
Efficiently render 3D numpy bitmap array (y, x, RGB) to window on macOS (using openCV or otherwise)

I’m rendering a dynamically changing numpy bitmap array and trying to improve my framerate.

Currently I’m using OpenCV:

cv2.imshow(WINDOW_NAME, img)
cv2.waitKey(1)

This takes ~20ms, which is not bad.
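For reference, a per-frame figure like the ~20ms above can be measured by averaging over many calls. A minimal timing harness (standalone sketch; the no-op stand-in should be replaced with the actual imshow/waitKey pair):

```python
import time

def ms_per_frame(render, n=100):
    """Average milliseconds per call of render(), over n calls."""
    t0 = time.perf_counter()
    for _ in range(n):
        render()
    return (time.perf_counter() - t0) / n * 1000.0

# Stand-in render function; for the real measurement use e.g.:
#   render = lambda: (cv2.imshow(WINDOW_NAME, img), cv2.waitKey(1))
print(f'{ms_per_frame(lambda: None):.3f} ms/frame')
```

Averaging matters here because cv2.waitKey(1) alone can jitter by several milliseconds per call.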

But can I do better?

cv2.setWindowProperty(WINDOW_NAME, cv2.WND_PROP_OPENGL, cv2.WINDOW_OPENGL)

Setting this has no noticeable effect. But does OpenCV offer a better technique than imshow to make use of a GL drawing surface?

And is there any viable alternative to OpenCV? import OpenGL is a verified can of worms.

REF: Display numpy array cv2 image in wxpython correctly

REF: https://pypi.org/project/omgl/0.0.1/


Answer

You can render the video using an external sub-process as the video renderer.

I tested the suggested solution by piping the video frames to FFplay.
The solution does not work perfectly – the last few frames are not shown.

Treat it as a conceptual solution.

The code opens FFplay as a sub-process and writes the raw frames to FFplay's stdin pipe.

Here is the code:

import cv2
import numpy as np
import subprocess as sp
import shlex
import time

# Synthetic "raw BGR" image for testing
width, height, n_frames = 1920, 1080, 1000  # 1000 frames, resolution 1920x1080

img = np.full((height, width, 3), 60, np.uint8)

def make_bgr_frame(i):
    """ Draw a blue number in the center of img """
    cx, cy = width//2, height//2
    l = len(str(i+1))

    img[cy-20:cy+20, cx-15*l:cx+15*l, :] = 0

    # Blue number
    cv2.putText(
        img,
        str(i+1),
        (cx-10*l, cy+10),
        cv2.FONT_HERSHEY_DUPLEX,
        1,
        (255, 30, 30),
        2
        )

# FFplay input: raw video frames from stdin pipe.
ffplay_process = sp.Popen(
    shlex.split(
        f'ffplay -hide_banner -loglevel error'
        f' -exitonkeydown -framerate 1000 -fast'
        f' -probesize 32 -flags low_delay'
        f' -f rawvideo -video_size {width}x{height}'
        f' -pixel_format bgr24 -an -sn -i pipe:'
    ),
    stdin=sp.PIPE
)

t = time.time()

for i in range(n_frames):
    make_bgr_frame(i)
    
    if ffplay_process.poll() is not None:
        break # Break if FFplay process is closed
    
    try:
        # Write raw video frame to stdin pipe of FFplay sub-process.
        ffplay_process.stdin.write(img.tobytes())
        # ffplay_process.stdin.flush()
    except (BrokenPipeError, OSError):
        break  # FFplay was closed while writing

elapsed = time.time() - t
avg_fps = n_frames / elapsed

print(f'FFplay elapsed time = {elapsed:.2f}')
print(f'FFplay average fps = {avg_fps:.2f}')

ffplay_process.stdin.close()
ffplay_process.terminate()


# OpenCV
##########################################################
t = time.time()

for i in range(n_frames):
    make_bgr_frame(i)
    cv2.imshow("img", img)
    cv2.waitKey(1)

elapsed = time.time() - t
avg_fps = n_frames / elapsed

print(f'OpenCV elapsed time = {elapsed:.2f}')
print(f'OpenCV average fps = {avg_fps:.2f}')

cv2.destroyAllWindows()
##########################################################
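One detail worth checking when piping: -f rawvideo carries no per-frame headers, so FFplay slices the stream strictly by the byte count implied by -video_size and -pixel_format. If a written frame's byte length ever deviates, every later frame is misaligned. A quick standalone sanity check:

```python
import numpy as np

width, height = 1920, 1080
frame = np.full((height, width, 3), 60, np.uint8)  # bgr24: 3 bytes per pixel

# Expected frame size for '-video_size 1920x1080 -pixel_format bgr24'
expected_bytes = width * height * 3
assert len(frame.tobytes()) == expected_bytes
print(expected_bytes)  # 6220800
```

This also means the frame array must stay contiguous and uint8; a sliced or float array would produce the wrong number of bytes.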

Result (Windows 10):

FFplay elapsed time = 5.53
FFplay average fps = 180.98
OpenCV elapsed time = 6.16
OpenCV average fps = 162.32

On my machine the differences are very small.

User contributions licensed under: CC BY-SA