
Display stream with FFmpeg, python and opencv

Situation: I have a Basler camera connected to a Raspberry Pi, and I am trying to livestream its feed with FFmpeg to a TCP port on my Windows PC in order to monitor what's happening in front of the camera.

Things that work: I managed to set up a Python script on the Raspberry Pi which is responsible for recording the frames, feeding them to a pipe, and streaming them to a TCP port. From that port, I am able to display the stream using FFplay.

My problem: FFplay is great for quickly and easily testing whether the direction you are heading is correct, but I want to "read" every frame from the stream, do some processing, and then display the stream with OpenCV. That, I am not able to do yet.

Minimally represented, this is the code I use on the Raspberry Pi side of things:

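(The original code block did not survive on this page. A minimal sketch of such a sender, assuming the Basler camera is read through pypylon and the frames are piped into an FFmpeg process that serves an RTSP stream; the address, frame size, and frame rate are placeholders:)

```python
import subprocess

WIDTH, HEIGHT, FPS = 640, 480, 25  # placeholder camera settings

# FFmpeg reads raw BGR frames from stdin and serves them as an RTSP stream.
ffmpeg_cmd = [
    'ffmpeg',
    '-f', 'rawvideo',            # input is raw, uncompressed frames
    '-pix_fmt', 'bgr24',         # 3 bytes per pixel, OpenCV's native order
    '-s', f'{WIDTH}x{HEIGHT}',
    '-r', str(FPS),
    '-i', '-',                   # read the frames from stdin (the pipe)
    '-c:v', 'libx264',
    '-preset', 'ultrafast',
    '-tune', 'zerolatency',
    '-f', 'rtsp',
    'rtsp://192.168.1.xxxx:5555/live.sdp',  # placeholder address from the question
]


def stream_camera():
    """Grab frames from the Basler camera and feed them to FFmpeg's stdin."""
    from pypylon import pylon  # Basler's Python bindings (assumed)

    camera = pylon.InstantCamera(pylon.TlFactory.GetInstance().CreateFirstDevice())
    camera.StartGrabbing(pylon.GrabStrategy_LatestImageOnly)
    proc = subprocess.Popen(ffmpeg_cmd, stdin=subprocess.PIPE)
    try:
        while camera.IsGrabbing():
            grab = camera.RetrieveResult(5000, pylon.TimeoutHandling_ThrowException)
            if grab.GrabSucceeded():
                proc.stdin.write(grab.Array.tobytes())  # raw frame into the pipe
            grab.Release()
    finally:
        proc.stdin.close()
        proc.wait()
```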

On my PC, if I use the following FFplay command in a terminal, it works and displays the stream in real time:

ffplay -rtsp_flags listen rtsp://192.168.1.xxxx:5555/live.sdp?tcp

On my PC, if I use the following Python script, the stream begins, but it fails in the cv2.imshow function because I am not sure how to decode it:

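(This code block is also missing from the page. Based on the description, it was presumably something along these lines, with the failure point marked; the command and URL are assumptions:)

```python
import subprocess

# FFmpeg listens for the incoming RTSP stream and writes its output to a pipe.
command = [
    'ffmpeg',
    '-rtsp_flags', 'listen',
    '-i', 'rtsp://192.168.1.xxxx:5555/live.sdp?tcp',  # placeholder address
    '-f', 'rawvideo',
    'pipe:',
]


def show_stream():
    import cv2

    p1 = subprocess.Popen(command, stdout=subprocess.PIPE)
    while True:
        data = p1.stdout.read(1024)   # an arbitrary chunk of bytes
        if not data:
            break
        cv2.imshow('stream', data)    # fails: imshow expects a NumPy image
                                      # array, not an undecoded byte string
        if cv2.waitKey(1) == ord('q'):
            break
```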

Does anyone know what I need to change in either of these scripts in order to get it to work?

Thank you in advance for any tips.


Answer

We may read the decoded frames from p1.stdout, convert them to a NumPy array, and reshape it to the video frame dimensions.

  • Change the FFmpeg command to output decoded frames in rawvideo format with the BGR pixel format:

  • Read the raw video frame from p1.stdout:

  • Convert the bytes read into a NumPy array, and reshape it to video frame dimensions:


Now we can show the frame by calling cv2.imshow('image', frame).

The solution assumes we know the video frame size (width and height) in advance.

The code sample below includes a part that reads the width and height using cv2.VideoCapture, but I am not sure if it's going to work in your case (due to '-rtsp_flags', 'listen'). If it does work, you can try capturing with OpenCV instead of FFmpeg.

The following code is a complete "working sample" that uses a public RTSP stream for testing:

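(The complete sample was likewise lost on this page; the following is a reconstruction from the description above. The public test-stream URL is an assumption, and `main()` must be called explicitly to run it:)

```python
import subprocess
import numpy as np

# A public RTSP stream used here purely for testing (assumed still online).
RTSP_URL = 'rtsp://wowzaec2demo.streamlock.net/vod/mp4:BigBuckBunny_115k.mov'


def frame_bytes(width, height):
    """Size in bytes of one raw bgr24 frame (3 bytes per pixel)."""
    return width * height * 3


def get_resolution(url):
    """Read width and height with cv2.VideoCapture (may not work with
    '-rtsp_flags listen' style setups, as noted above)."""
    import cv2  # OpenCV is only needed when actually capturing/displaying

    cap = cv2.VideoCapture(url)
    width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    cap.release()
    return width, height


def main():
    import cv2

    width, height = get_resolution(RTSP_URL)
    command = [
        'ffmpeg',
        '-rtsp_transport', 'tcp',   # force TCP, matching the '?tcp' suffix
        '-i', RTSP_URL,
        '-f', 'rawvideo',
        '-pix_fmt', 'bgr24',
        'pipe:',
    ]
    p1 = subprocess.Popen(command, stdout=subprocess.PIPE)
    while True:
        raw = p1.stdout.read(frame_bytes(width, height))
        if len(raw) != frame_bytes(width, height):
            break  # end of stream or broken pipe
        frame = np.frombuffer(raw, np.uint8).reshape((height, width, 3))
        cv2.imshow('image', frame)
        if cv2.waitKey(1) == ord('q'):
            break
    p1.stdout.close()
    p1.wait()
    cv2.destroyAllWindows()
```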

Sample frame (just for fun):


Update:

Reading width and height using FFprobe:

When we don't know the video resolution in advance, we may use FFprobe to get the information.

Here is a code sample for reading width and height using FFprobe:

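(This sample is also missing from the page; a sketch of such an FFprobe query, asking only for the width and height of the first video stream in plain CSV form:)

```python
import subprocess


def parse_resolution(csv_line):
    """Parse FFprobe's 'width,height' CSV output into integers."""
    w, h = csv_line.strip().split(',')
    return int(w), int(h)


def probe_resolution(url):
    """Ask FFprobe for the width and height of the first video stream."""
    out = subprocess.check_output([
        'ffprobe',
        '-v', 'error',                            # suppress everything but errors
        '-select_streams', 'v:0',                 # first video stream only
        '-show_entries', 'stream=width,height',
        '-of', 'csv=p=0',                         # plain 'width,height' output
        url,
    ])
    return parse_resolution(out.decode('utf-8'))
```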
User contributions licensed under: CC BY-SA