Fixing Chinese Focal Reducer to fit on BMPCC

I bought a focal reducer to use with my Nikon 24mm 2.8D on my Blackmagic Pocket Cinema Camera. Getting an effective 0.7 × 24mm, or roughly 17mm, starting at about f/2.0 (a one-stop speed gain) intrigued me. Andrew Reid, the genius behind EOSHD, is a big proponent of focal reducers. As a hobbyist, I couldn’t justify the cost of a Metabones adapter, so I bought a Chinese knock-off on eBay from C. Kee for $96, shipping included. It was shipped within 24 hours and arrived exactly one week later. Off to a good start!

Unfortunately, it would only turn a few millimeters onto the BMPCC, just enough to mount if I handled it very gingerly. If I forgot about it for a second, the lens would go crashing to the ground. Anyway, I had promised Andy Lee, an expert on lenses and the Panasonic G6 who gives great advice on the EOSHD forums, that I would shoot some test footage. First I shot Focal Reducer on BMPCC, then Focal Reducer on Panasonic GF3 (which the adapter fit perfectly). He approved and said he was going to order one (he shoots Panasonic, so he wasn’t worried about fitting it to a BMPCC).

Then richg101 on the forum suggested taking it apart and checking the lens distance into the body. When I did that, I realized I could take the whole thing apart. I studied it some more, and the best I could figure is that the flange on the adapter was getting stuck on the camera’s mount flange.

Here is what the adapter looks like from the MFT mount side. Obviously, the adapter didn’t look like this when it arrived; I had already sanded it a bit trying to get it to fit.

a4_AdapterFlange2_StraightOn

Here’s another view. If you look closely, you can see three flanges that twist under the camera’s mount flanges (which have springs underneath). When the adapter turns far enough, a small pin on the camera inserts into the adapter and locks it into place.

a2_AdapterFlange1
Here you can see a flange in profile.

a3_AdapterFlange2_Profile

Inside the BMPCC you can see a slight ridge/block that probably holds the spring in place. I believe the adapter flange hits against this, ever so slightly.

a1_BMPCC_MountFlange

First I take off the Nikon mount of the adapter.  There are 3 little screws.

a6_TakeOffMount

After unscrewing a small locking screw, I screw the lens element out.

a8_TakeOutLens2

Here are the three parts to the adapter.

a9_FocalReducerParts

And now I sand the inside of the adapter flanges down a bit. I sanded a little, tested, and repeated until the adapter fully locked onto the camera.

a91_AdapterFlange5_Dremeling

Some advice: wear glasses. You don’t want grit in your eye, and eye doctors are expensive. Also, make sure you wash or blow all grit off the adapter before mounting it on the camera. The smallest particle can show up on your sensor.

I may have ended up with a “bad” copy of a BMPCC mount or adapter. These adapters don’t have brand names, so it is hard to tell who is buying what on eBay, Amazon, etc. I’ve read reviews of what looks like the exact same adapter working on someone else’s BMPCC. The good news is that all is not lost if you experience the same problem I had. I hope you don’t, though!

Now that I have it working, I’ve ordered a Zenit 16mm. With the adapter, I’d have 0.7 × 16 = 11.2mm, times the 2.88 crop of the BMPCC, or about a 32mm equivalent. Test footage to follow later!
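For anyone running the same numbers with a different lens, the arithmetic is just two multiplications. Here is my own quick sketch; the 2.88× BMPCC crop factor is the commonly quoted figure and should be treated as approximate:

```python
# Effective focal length through a 0.7x focal reducer, and the rough
# full-frame-equivalent field of view on the BMPCC.
REDUCER = 0.7        # magnification of the focal reducer
BMPCC_CROP = 2.88    # commonly quoted BMPCC crop factor (approximate)

def effective_mm(lens_mm):
    """Focal length the camera sees through the reducer."""
    return lens_mm * REDUCER

def full_frame_equiv_mm(lens_mm):
    """Rough full-frame-equivalent focal length on the BMPCC."""
    return effective_mm(lens_mm) * BMPCC_CROP

for lens in (16, 24):
    print(f"{lens}mm -> {effective_mm(lens):.1f}mm, "
          f"~{full_frame_equiv_mm(lens):.0f}mm equivalent")
```

Running it for the two lenses in this post gives roughly 32mm equivalent for the Zenit 16mm and roughly 48mm equivalent for the Nikon 24mm.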

UPDATE: Even though I thought I had cleaned it well, dust still came off the adapter when screwing it onto mounts. Clean it really well, or use some really fine paper, and use a blower around the camera mount after you take the adapter off.

Magic Lantern and Blackmagic RAW Video

Why RAW? The short answer is video that has a film look, natural light and color.

RAW video can provide a look and feel that consumer video can’t. It accomplishes this by saving all the color detail the sensor sees while recording. Consumer video, which I’ll call H.264 (the name of the most popular compression codec in use today), is designed to deliver a pleasing moving image, sharp and with rich colors, at a data rate that: 1) does NOT exceed most consumer electronic devices’ maximum bandwidth (usually under 4 megabytes per second); 2) contains no more brightness, or darkness, than we can view on our displays; and 3) when things are moving fast, throws out any information that we won’t notice, like the license plate number of a car in a fast pan. In short, H.264 tries to pack as much image quality into a low-data stream as it can without our noticing. The more you get into video, the more you notice the data-for-quality trade-offs it makes.

Here is a frame taken from H.264 video shot with an EOS-M. Study these images closely; they are straight from the camera, shot on a light table with controlled exposure.

Here is a frame from Magic Lantern RAW shot with the same camera:

Notice how the “RAW” colors are evenly bright, unlike the H.264 frame, where the more primary the color, the brighter it is (because compression adds contrast). You will notice more noise in the RAW image, but with it comes more detail and dynamic range. You may say: but aren’t they both 8-bit images on my computer? Yes. If we pumped up the saturation and contrast on the RAW image, it would look just like the other one. The difference is that RAW can move up the range of brightness while retaining color information, while 8-bit cannot. In 8-bit, so to speak, the colors “red-line” as soon as you pump up the brightness.
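A toy sketch of that “red-lining” (my own illustration, not actual camera processing): a dark region stored with 8-bit precision has only a few distinct code values, so brightening it in post just spreads those same few tones apart, while 14-bit RAW has many more steps to work with:

```python
# Toy illustration (not real camera math): how many distinct tones
# each format records for the same dark patch of an image.
def distinct_levels(lo, hi):
    """Number of distinct integer code values from lo to hi inclusive."""
    return hi - lo + 1

# The same shadow region, as each format quantizes it
# (14-bit has 64 code values for every one 8-bit value):
levels_8bit = distinct_levels(10, 12)              # 3 tones in the 8-bit file
levels_14bit = distinct_levels(10 * 64, 12 * 64)   # 129 tones in 14-bit RAW

# Multiplying brightness in post cannot invent in-between tones:
# the 8-bit file still shows 3 bands, the RAW file a smooth ramp.
print(levels_8bit, "tones in 8-bit vs", levels_14bit, "in 14-bit")
```

The 8-bit values 10, 11, and 12 stay three bands no matter how far you push them; the 14-bit file has over a hundred intermediate tones to spread across the brighter range.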

 

The total number of pixels in a frame 1,920 pixels wide and 1,080 pixels high is 2,073,600, or about 2 million. In one second, we watch 30 of those frames, so that is 2 million times 30, or roughly 62 million pixels per second. For a minute we’d need 62 million times 60 seconds, or roughly 3.7 billion pixels. Yes, when you’re watching your HD TV, your eye is viewing about 3.7 billion pixels every minute!
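The arithmetic above is easy to verify; here it is spelled out, with the exact figures before rounding:

```python
# Exact pixel counts behind the rough figures in the text.
width, height, fps = 1920, 1080, 30

pixels_per_frame = width * height           # 2,073,600
pixels_per_second = pixels_per_frame * fps  # 62,208,000
pixels_per_minute = pixels_per_second * 60  # 3,732,480,000 (~3.7 billion)

print(f"{pixels_per_frame:,} pixels per frame")
print(f"{pixels_per_minute:,} pixels per minute")
```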

What makes up one of those pixels? A color, of course. Colors can be described by their red, green, and blue components; that is, every color can be separated into a red, green, and blue value, often abbreviated RGB. When most digital cameras take an image, each color channel is assigned a brightness value from 0 to 16,383 (14 bits). We need three numbers, red (0 to 16,383), green (0 to 16,383), and blue (0 to 16,383), to numerically describe ANY color. Some simple math tells us we end up with 16,384 times 16,384 times 16,384 possible colors, or about 4.4 trillion. As expected, a single 1080p RAW frame from a Canon camera is about 4 megabytes.
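Spelling out that math (the roughly 4 MB frame-size figure assumes, as on Canon’s Bayer sensors, one 14-bit sample per pixel, plus some file overhead; the per-pixel assumption is mine):

```python
# 14-bit color depth and the rough size of one 1080p RAW frame.
bits_per_channel = 14
levels = 2 ** bits_per_channel      # 16,384 values per channel (0..16,383)
total_colors = levels ** 3          # 4,398,046,511,104 (~4.4 trillion)

# One 14-bit sample per pixel on a Bayer sensor (an assumption of mine):
pixels = 1920 * 1080
raw_megabytes = pixels * bits_per_channel / 8 / 1e6  # ~3.6 MB before overhead

print(f"{total_colors:,} possible colors")
print(f"~{raw_megabytes:.1f} MB per 1080p RAW frame")
```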

In the above images, the H.264 frame can be pulled out as a 114k JPEG. The RAW frame, a 256k JPEG, originated from a 2.4-megabyte RAW file, which means you can choose less contrast, or more detail and noise.

Even 8 bits (1 byte) per color “channel” is enough to create 24-bit (8+8+8) color, or 16 million colors. The human eye can distinguish about 12 million colors at best (so we don’t need those 4.4 trillion “RAW” colors). That allows an H.264 encoder to throw out over 96% of the original pixel data.
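The “16 million colors” figure comes straight from the bit math, and comparing it with the 14-bit color space shows that an 8-bit pipeline discards well over the 96% mentioned above:

```python
# Colors representable at 8 bits per channel vs. 14 bits per channel.
colors_24bit = (2 ** 8) ** 3    # 16,777,216 -- the "16 million colors"
colors_raw = (2 ** 14) ** 3     # ~4.4 trillion

fraction_kept = colors_24bit / colors_raw
print(f"{colors_24bit:,} colors in 24-bit")
print(f"24-bit keeps only {fraction_kept:.6%} of the 14-bit color space")
```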

A consumer video camera can quickly figure out what we can and can’t see in an image, so this isn’t difficult. It takes the “brightest” data and saves it, AND THROWS OUT THE REST. However, the overall brightest image is not always the image we want! Sometimes we want a dim image with a lot of detail in the shadows.

Let’s go back to the optimum image we’d like to see: 3.7 billion pixels per minute times 24 bits (3 bytes). That would be about 11 gigabytes per minute. As you know, you’re not streaming 11 gigabytes of video to your TV every minute. Video compression does a marvelous job of cutting that down to a manageable size:

HD 720p @ H.264 high profile 2500 kbps (20 MB/minute)
HD 1080p @ H.264 high profile 5000 kbps (35 MB/minute)
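Converting those bitrates to megabytes per minute, and comparing with the uncompressed figure above, is one line of arithmetic (kbps here means kilobits per second; the uncompressed number uses the exact 1080p30 pixel count):

```python
# Bitrate (kilobits/second) -> megabytes/minute, compared with
# uncompressed 24-bit 1080p30 video.
def kbps_to_mb_per_min(kbps):
    """kilobits/sec -> bytes/sec -> megabytes/minute."""
    return kbps * 1000 / 8 * 60 / 1e6

mb_720p = kbps_to_mb_per_min(2500)    # 18.75 MB/min (the "20 MB/minute" above)
mb_1080p = kbps_to_mb_per_min(5000)   # 37.5 MB/min (the "35 MB/minute" above)

# Uncompressed: 1920*1080 pixels * 30 fps * 60 s * 3 bytes each
uncompressed_mb = 1920 * 1080 * 30 * 60 * 3 / 1e6   # ~11,197 MB/min

print(f"1080p H.264 keeps about {mb_1080p / uncompressed_mb:.3%} of the data")
```

That ratio, roughly a third of one percent, is the scale of the trade-off the rest of this post is about.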

It is only because of the limitations of our computing devices that we can’t have what we really want: about 11 gigabytes of video data every minute. If, for the sake of argument, we had unlimited storage and speed, we’d all save and view images without compression. That’s when they have the greatest fidelity.

Consumer video cameras record video using a “distribution” codec, not a photographic storage format. This means they’re making an immediate decision about what part of the image to save and what to throw away. The top image is what they end up with; the bottom image is what the sensor recorded BEFORE being put through H.264.

The benefit of RAW video, to me, is that I can decide how to compress the image after it has been taken. I can decide what the image should look like. I can get a photographic look.

The new 4K cameras coming out will offer more resolution (4 times the pixels), but resolution will not give me more color depth. That isn’t to say 4K is phony; only that it doesn’t fix the color-depth problem inherent in consumer-level compressed video streams. 4K RAW is a different matter, of course.