A long day at the Exploratorium…

by Stephanie Chasteen on May 5, 2009

Biologist Charlie Carlson over at my favorite alma mater (the Exploratorium museum of science in SF) snapped this photo of me, perky and bright-eyed… but my bench-mates?  Not so much.  Looks like they had a long day of interactive science.

Photo by Charlie Carlson

One thing we found curious about the photo was its graininess in the low light.  Charlie says that it’s because it was highly binned.  I asked him what he meant:

I think that’s the technique used to increase CCD sensitivity.  Individual pixels are lumped together to produce a grainier image at lower lighting.  I may be wrong about that, but that’s what happens with microscope cameras, and under higher illumination the images are much finer grained.

My question was: how does this help the light sensitivity? The same number of photons hits your CCD whether you divide it up into smaller or larger squares, so it obviously isn’t that simple.

Charlie responded:

Maybe gain goes up and the amplified signal just gets down to the noise level of the detector, so random pixels increase in frequency, and the signal has to be averaged over a larger number of detectors to produce the image, and the averaging is what we see.  So that’s my conjecture first thing in the morning.

That sounds plausible to me… but I think I’m waving my mental hands here. So I looked on Wikipedia (the source of all wisdom) and found out that CCD stands for Charge Coupled Device. Who knew. I remember, now, using a CCD for my dissertation. It was actually fairly accurate, and gave me a count of “1” for each photon that hit it. (I was detecting how many photons hit the detector over time, as I shone light on a polymer film. Long days in the dark.) Turns out that each pixel on the CCD collects information about the brightness of your object, but the color is spread out over several pixels. Anyway.

Each time a photon hits the CCD, it knocks free some electrons. The resulting current is what sends a signal that light hit the detector. The number of electrons created depends on the material. So I bet that the CCDs in low-end digital cameras don’t create very many electrons; they’re less sensitive, and one photon hitting a single pixel won’t reliably generate a signal. So, as Charlie suggests, perhaps many pixels are binned together: instead of 1 pixel gathering 1 photon, you’re generating a signal from 4 pixels gathering 4 photons, for example. Then the detector knows that some light hit those 4 pixels, but it doesn’t know where, so you get a grainy image because of the lower resolution.
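To make that concrete, here’s a toy sketch of the binning idea in Python. All the photon counts are invented; in a real CCD the charge from the block of pixels is summed on the chip before readout:

    import numpy as np

    # A made-up 4x4 grid of photon counts from a dim scene.
    pixels = np.array([
        [1, 0, 2, 1],
        [0, 1, 1, 0],
        [2, 1, 0, 1],
        [1, 0, 1, 2],
    ])

    # 2x2 binning: each output "super-pixel" is the summed charge of a
    # 2x2 block of physical pixels, so it collects ~4x the photons but
    # the image has half the resolution in each direction.
    h, w = pixels.shape
    binned = pixels.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))

    print(binned)
    # [[2 4]
    #  [4 4]]

Each binned super-pixel carries four times the signal but is read out only once, which (as the comments below explain) is where the sensitivity gain actually comes from.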

If anyone knows more, let us know!

{ 3 comments }

Eric Messick May 6, 2009 at 5:31 pm

Binning is actually used to reduce the noise level in an image. Small point-and-shoot cameras have incredibly small pixel sensors, which results in a higher noise level. Binning combines multiple smaller sensor elements into a single (virtual) element which is larger. You get lower spatial resolution and less noise.

The noise actually comes from a variety of sources. There is quantization noise, which comes from the conversion of an analog signal into a digital one. There is thermal noise, which adds a random (temperature-dependent) signal before the digitization process by kicking the electrons around. There are differences in the sensitivity of different pixels based on variations in the conductive paths in the sensor; these show up as patterns in the noise. And during longer exposures, pixels with atomic defects accumulate electrons based on time rather than light intensity.
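A minimal simulation of a few of these sources, with every number invented rather than measured from any real sensor:

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy exposure: a uniformly lit patch that should average 25
    # electrons per pixel.
    shape = (100, 100)
    signal = rng.poisson(25.0, shape).astype(float)   # photon shot noise
    signal += rng.normal(0.0, 5.0, shape)             # thermal/readout noise, sigma = 5 e-
    signal[10, 10] += 200.0                           # one defective pixel accumulating charge over time

    # Independent noise sources add in quadrature, so the measured
    # spread is close to sqrt(25 + 5**2), about 7 electrons (the lone
    # defective pixel nudges it up slightly).
    print(f"mean: {signal.mean():.1f}, std: {signal.std():.1f}")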

Most of these noise sources get worse with smaller sensor dimensions. Most of them also are worse at lower light levels. The human eye has a logarithmic response to light, with small changes in dark areas being more apparent than similar changes in brighter areas. You can clearly see this in this image.

This image is also exhibiting chroma noise, which is more objectionable than luminance noise. You get chroma noise when different nearby color sensors have different noise profiles. So, two pixels next to each other may differ in just the green value, causing a color shift between the two. Again, this is more apparent in the darker areas, where a smaller change in one channel causes a larger shift in the color.

Anonymous Coward May 8, 2009 at 4:20 pm

> It was actually fairly accurate, and gave me a count of “1” for each photon that hit it.

It’s very hard to get a detector with a quantum efficiency (QE) of 1. Each photon that hits the detector has some probability (less than 1, the QE) of actually being detected.

> Each time a photon hits the CCD, it knocks free some electrons. The resulting current is what sends a signal that light hit the detector. The number of electrons created depends on the material.

There are some advanced CCDs that create multiple electrons per detected photon (although, again, with QE less than one, not every photon will create a “cascade” of electrons like this), but these usually require some sort of high-voltage supply and are pretty expensive, so they’re probably not in your standard consumer camera. Although my knowledge may be 5 years out of date here.

The noise in the image is coming from the shot noise of the photons, the shot noise of the electrons (which will be worse if your QE is < 1), and the electronic noise of the readout detector that converts the electric charge into some digital bits. The last one is probably the dominant one for your camera, but all of these will have worse signal-to-noise with lower light intensities.
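As a rough sketch of how those terms combine (the quantum efficiency and read noise below are guesses, not measurements of any particular camera), independent noise sources add in quadrature:

    import math

    def snr(photons, qe=0.5, read_noise=10.0):
        # Detected electrons over the quadrature sum of photon shot
        # noise and readout noise; qe and read_noise are invented.
        electrons = qe * photons
        return electrons / math.sqrt(electrons + read_noise**2)

    for n in (100, 1_000, 100_000):
        print(n, round(snr(n), 1))
    # 100     4.1    readout noise dominates in the dark
    # 1000    20.4
    # 100000  223.4  shot-noise limited: close to sqrt(50000)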

The shot noise, which is the more fun noise source, comes from the fact that light and current aren’t continuous but arrive in corpuscles (photons and electrons). The number of photons/electrons you get is random, due to the randomness of quantum mechanics, and typically exhibits Poissonian statistics.
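A quick numerical check of that Poisson behavior, where the standard deviation of the count is sqrt(N), so the SNR also grows as sqrt(N):

    import numpy as np

    rng = np.random.default_rng(1)

    # Draw many Poisson photon counts and check that the spread is
    # sqrt(mean): dim signals are inherently noisier in relative terms.
    for mean_photons in (10, 100, 10_000):
        counts = rng.poisson(mean_photons, 100_000)
        print(mean_photons, f"std={counts.std():.1f}", f"snr={counts.mean() / counts.std():.1f}")
    # roughly: std ~3.2, ~10, ~100 and snr ~3.2, ~10, ~100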

Your camera may or may not bin pixels, but the above would apply in either case.

Gautham Ganapathy May 21, 2009 at 9:32 pm

I was wondering if this also has something to do with the fact that we do not actually have three color components per pixel, but use a color filter array, with the demosaicing algorithm contributing to the effect. It’s just that I see a lot of red, green, and blue in the darker regions.

http://en.wikipedia.org/wiki/Demosaicing
http://en.wikipedia.org/wiki/Color_filter_array

The white Apple logo seems to be free of this. Less affected by post-processing? (Or perhaps equal, saturated amounts of R, G, and B?)

Not sure if this was what was meant earlier by binning.
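A crude sketch of the effect Gautham describes, assuming an RGGB Bayer layout and invented noise numbers (real demosaicing is far more sophisticated than this per-cell fill):

    import numpy as np

    rng = np.random.default_rng(2)

    # Toy Bayer (RGGB) mosaic of a uniform dark-gray patch: each
    # photosite samples only one channel, plus independent noise.
    h, w = 4, 4
    mosaic = 20.0 + rng.normal(0.0, 5.0, (h, w))  # invented signal and noise levels

    rgb = np.zeros((h, w, 3))
    rgb[0::2, 0::2, 0] = mosaic[0::2, 0::2]  # R sites
    rgb[0::2, 1::2, 1] = mosaic[0::2, 1::2]  # G sites
    rgb[1::2, 0::2, 1] = mosaic[1::2, 0::2]  # G sites
    rgb[1::2, 1::2, 2] = mosaic[1::2, 1::2]  # B sites

    # Naive demosaic: fill each 2x2 cell's missing channels from the
    # samples it does have. R, G, and B then come from different noisy
    # photosites, so a nominally gray pixel picks up a color cast --
    # chroma noise, strongest where the signal is small.
    for i in range(0, h, 2):
        for j in range(0, w, 2):
            cell = rgb[i:i + 2, j:j + 2]
            for c in range(3):
                vals = cell[..., c]
                cell[..., c] = vals.sum() / np.count_nonzero(vals)

    print(rgb[0, 0])  # should be gray (20, 20, 20) but the channels disagree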
