Why is this MPEG-4 compression of GIF creating weird results?

I was wondering what would happen if MPEG-4 tried to compress an image with completely random pixels. So I made an image with random pixels using Pillow in Python.

The actual GIF:

GIF frame sample

This isn’t animated, as the animated one was too big. The colour of each pixel is completely random. Since MPEG-4 should only blend colours that are similar, there should be just a few blended patches where colours happen to be similar by chance. That’s not what happened.

One of the MP4 frames: MP4 Frame

There is a CLEAR pattern in the compression. It looks like a matrix of sharp, uncompressed patches and blurred, compressed patches. What’s going on? The effect is even clearer in the .mp4 file (click here for that), and clearer still in the original file stored on my device. This is not something that should happen with the pseudo-random numbers Python generates through the random module.

All of the pixels in all of the frames were created through this:

[random.randint(0, 255), random.randint(0, 255), random.randint(0, 255)]
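For context, a full frame generated this way could look like the following sketch. The dimensions are an assumption (the original size isn’t stated), and note that GIF is palette-based, so Pillow quantizes the frame down to 256 colours on save:

```python
import random
from PIL import Image

WIDTH, HEIGHT = 256, 256  # assumed dimensions; not given in the question

img = Image.new("RGB", (WIDTH, HEIGHT))
img.putdata([
    (random.randint(0, 255), random.randint(0, 255), random.randint(0, 255))
    for _ in range(WIDTH * HEIGHT)
])
# GIF is palette-based, so Pillow quantizes to at most 256 colours here
img.save("random_frame.gif")
```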


Answer

You are attempting to compress the incompressible.

Without getting into a discussion of entropy, for this case we can treat pseudo-random as random.
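You can see the incompressibility directly with a general-purpose compressor (zlib here, just as an illustration): high-entropy bytes barely shrink at all, while repetitive data of the same length collapses to almost nothing.

```python
import os
import zlib

random_bytes = os.urandom(100_000)   # high-entropy data
structured = b"abcdefgh" * 12_500    # same length, highly repetitive

# Random data compresses to roughly its original size (or slightly larger)
print(len(zlib.compress(random_bytes, 9)))
# Repetitive data compresses to a few hundred bytes
print(len(zlib.compress(structured, 9)))
```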

AVC (H.264, the codec inside most MP4 files) is a block-based algorithm with variable-sized blocks, DCT compression, and temporal motion compensation.

The integer DCT causes minor blurriness across the entire image. This is normal lossy compression; I will not go into it, since it does not address the question.
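A minimal sketch of why the DCT stage blurs: transform a small block into frequency coefficients, discard the high frequencies (a crude stand-in for quantization — real AVC uses an integer transform and a quantization matrix), and reconstruct. The result is a smoothed version of the input.

```python
import math
import random

N = 8  # AVC-style transforms operate on small blocks (4x4 or 8x8)

def dct(block):
    """Orthonormal 1-D DCT-II of an N-sample block."""
    out = []
    for k in range(N):
        s = sum(block[n] * math.cos(math.pi * (n + 0.5) * k / N) for n in range(N))
        scale = math.sqrt(1 / N) if k == 0 else math.sqrt(2 / N)
        out.append(scale * s)
    return out

def idct(coeffs):
    """Inverse DCT (DCT-III), reconstructing the samples."""
    out = []
    for n in range(N):
        s = coeffs[0] * math.sqrt(1 / N)
        s += sum(coeffs[k] * math.sqrt(2 / N) * math.cos(math.pi * (n + 0.5) * k / N)
                 for k in range(1, N))
        out.append(s)
    return out

random.seed(1)
samples = [random.randint(0, 255) for _ in range(N)]
coeffs = dct(samples)
coeffs[4:] = [0.0] * (N - 4)   # crude "quantization": drop the high frequencies
smoothed = idct(coeffs)        # reconstruction is a blurred version of the input
```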

The main visual artifacts, the blurred areas, are from the attempt to temporally compress similar blocks.

The visible pattern is caused by the deterministic temporal algorithm searching for patterns in the data. It will find “patterns” even where none exist. This is like looking for object shapes in clouds.

The data will contain blocks whose luminance/chrominance values are similar to blocks in the next frame (with a translational shift). It is the compressor’s job to identify these similarities and choose a block size to fit.
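An illustrative sketch of that search (exhaustive sum-of-absolute-differences block matching, a common motion-search cost; the frame and search sizes here are made up): even on pure noise, the search always returns *some* best-matching displacement, because a minimum always exists.

```python
import random

BLOCK = 4    # block size
SEARCH = 8   # search-window radius in the reference frame
W = H = 32   # illustrative frame size

random.seed(2)
def noise_frame():
    return [[random.randint(0, 255) for _ in range(W)] for _ in range(H)]

ref, cur = noise_frame(), noise_frame()  # two uncorrelated noise frames

def sad(frame_a, ax, ay, frame_b, bx, by):
    """Sum of absolute differences between two BLOCK x BLOCK patches."""
    return sum(abs(frame_a[ay + j][ax + i] - frame_b[by + j][bx + i])
               for j in range(BLOCK) for i in range(BLOCK))

def best_motion_vector(cur, ref, x, y):
    """Exhaustive search for the patch in `ref` that best predicts block (x, y) of `cur`."""
    best = None
    for dy in range(-SEARCH, SEARCH + 1):
        for dx in range(-SEARCH, SEARCH + 1):
            rx, ry = x + dx, y + dy
            if 0 <= rx <= W - BLOCK and 0 <= ry <= H - BLOCK:
                cost = sad(cur, x, y, ref, rx, ry)
                if best is None or cost < best[0]:
                    best = (cost, dx, dy)
    return best

# Even for uncorrelated noise, some (dx, dy) wins — a coincidental "pattern"
cost, dx, dy = best_motion_vector(cur, ref, 12, 12)
```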

These shifted blocks are blended into the rendered frame, causing the blurred blocks you are seeing in the video.

In normal video, barring a scene change, an object’s edges move relatively slowly at a given frame rate, from one block to the next. The temporal compression algorithm reduces the resolution and reuses the previous block with a shift, blending the two frames’ blocks into a single output block. This technique works very well for movies and television.

This assumption of temporal correlation is what makes the technique so effective for compression.

For frames without temporal correlation, as in your random images, it can produce weird artifacts.
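The contrast can be sketched in one dimension (a toy model, not the codec’s actual prediction): when the “next frame” is a shifted copy of the previous one, motion-compensated prediction leaves no residual to encode; when the next frame is fresh noise, the residual is as large as the data itself.

```python
import random

random.seed(3)
W = 64
prev = [random.randint(0, 255) for _ in range(W)]

shifted = prev[2:] + prev[:2]                       # next frame = previous frame moved 2 px
fresh = [random.randint(0, 255) for _ in range(W)]  # next frame = brand-new noise

def residual_energy(cur, ref, shift):
    """Energy left over after predicting `cur` from `ref` displaced by `shift`."""
    pred = ref[shift:] + ref[:shift]
    return sum(abs(c - p) for c, p in zip(cur, pred))

print(residual_energy(shifted, prev, 2))  # 0: the shift predicts it perfectly
print(residual_energy(fresh, prev, 2))    # large: nothing to predict
```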

User contributions licensed under: CC BY-SA