Why RAW? The short answer is video that has a film look, natural light and color.
RAW video can provide a look and feel that consumer video can't. It does this by preserving the sensor's original color detail and brightness information. Consumer video, which I'll call H.264 after the most popular compression codec in use today, is designed to deliver a pleasing moving image: sharp, with rich colors. You know video compression isn't working well when you see over- or under-saturated colors, color blocking, and areas crushed to pure black or blown out to white.
Here is a frame taken from H.264 video shot with an EOS-M:
Here is a frame from Magic Lantern RAW shot with the same camera:
Notice how the RAW colors are evenly bright, unlike the H.264 frame, where the more primary the color, the brighter it appears (because compression adds contrast).
Why can't you get both contrast and deep colors? It comes down to convenience and size. RAW video requires the latest memory technology and takes up a lot of storage space. Everything about it takes more time and effort.
A frame 1,920 pixels wide and 1,080 pixels high contains 2,073,600 pixels, or about 2 million. At 30 frames per second, that's 2 million times 30, or roughly 60 million pixels per second. Over a minute, that's 60 million times 60 seconds, or about 3,600,000,000 pixels: 3.6 billion. Yes, when you're watching your HD TV, your eye is viewing some 3.6 billion pixels every minute.
If you could write down the color of each pixel on a legal pad at 5 seconds per pixel, it would take you around 570 years working 24 hours a day to record a single minute of video.
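The frame arithmetic above can be checked in a few lines of code. Note the article rounds along the way (60 million pixels per second, 3.6 billion per minute); the exact products are slightly higher.

```python
# Pixel arithmetic for 1080p at 30 fps, following the article's steps.

width, height, fps = 1920, 1080, 30

pixels_per_frame = width * height           # 2,073,600
pixels_per_second = pixels_per_frame * fps  # 62,208,000, "roughly 60 million"
pixels_per_minute = pixels_per_second * 60  # 3,732,480,000, "about 3.6 billion"

# The legal-pad thought experiment: one pixel every 5 seconds, 24/7,
# using the article's rounded 3.6 billion pixels per minute.
seconds_per_year = 60 * 60 * 24 * 365
years = 3.6e9 * 5 / seconds_per_year

print(pixels_per_frame)  # 2073600
print(round(years))      # 571, i.e. "around 570 years"
```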
What makes up a pixel? A color, of course. Colors are often described by their red, green and blue components; that is, every color can be separated into a red, a green and a blue value, abbreviated RGB. In 14-bit RAW, each channel is assigned a brightness value from 0 to 16,383. So you need three numbers, red (0 to 16,383), green (0 to 16,383) and blue (0 to 16,383), to describe ANY color numerically. Some simple math tells us that gives 16,384 times 16,384 times 16,384 combinations, about 4.4 trillion colors.
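That channel math is easy to verify. Strictly, 14 bits gives 16,384 levels per channel once you count zero, so the product of the three channels lands a bit above the 4.3 trillion the back-of-the-envelope version suggests.

```python
# 14 bits per channel: 2**14 brightness levels (0 through 16,383)
# for each of red, green and blue.

levels = 2 ** 14            # 16,384 levels per channel
total_colors = levels ** 3  # every possible RGB combination

print(levels - 1)    # 16383, the maximum channel value
print(total_colors)  # 4398046511104, about 4.4 trillion
```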
Fortunately, even 8 bits (1 byte) per color channel is enough to create 24-bit (8+8+8) color, about 16 million colors. The human eye can see about 12 million colors at best, so we don't need trillions. Let's go back to the optimum image we'd like to see: 3.6 billion pixels per minute times 24 bits (3 bytes) per pixel. That would be 10.8 gigabytes per minute. As you know, you're not streaming 10 gigabytes of video to your TV every minute. Video compression does a marvelous job of cutting that down to a manageable size:
HD 720p @ H.264 high profile 2500 kbps (20 MB/minute)
HD 1080p @ H.264 high profile 5000 kbps (35 MB/minute)
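The gap between those listed bitrates and the uncompressed stream is worth putting in numbers. A sketch, using the 1080p figure from the list above (all figures approximate; 5,000 kbps works out to about 37.5 MB/minute, in the same ballpark as the listed 35):

```python
# Uncompressed 24-bit 1080p30 versus the H.264 bitrate listed above.

pixels_per_minute = 1920 * 1080 * 30 * 60
uncompressed_mb = pixels_per_minute * 3 / 1e6  # 3 bytes (24 bits) per pixel
# ~11,200 MB/minute, i.e. the ~10.8 GB the article reaches by rounding

kbps_1080p = 5000                                 # H.264 high profile
compressed_mb = kbps_1080p * 1000 / 8 * 60 / 1e6  # kilobits -> MB per minute

print(round(compressed_mb))                    # 38 MB/minute
print(round(uncompressed_mb / compressed_mb))  # ratio, roughly 300:1
```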
It's only the limitations of our computing devices that keep us from having what we really want: 10 gigabytes of video data every minute. If, for the sake of argument, we had unlimited storage and speed, we'd all save and view images without compression. That's when they have the greatest fidelity.
Consumer video cameras record video using a "distribution" codec, not a photographic storage format. This means they make an immediate decision about which parts of the image to save and which to throw away. The top image is what they end up with; the bottom image is what the sensor recorded BEFORE being run through H.264.
The benefit of RAW video, to me, is that I can decide how to compress the image after it has been captured. I make the call about what the image should look like. I can get a photographic look.