I frequently come across comparisons between the eye and the camera. Photography students are taught that the camera works like the human eye, and medical students are taught that the eye works like a camera. This gets one wondering: how are they similar, and where does each of them stand out?
(Just a piece of stone or a blob with a happy face, looking up with eyes closed?)
Our eyes capture and process images continuously. This gives them many advantages over cameras.
The retina inside the eye (frequently compared to the film or sensor in a camera) has millions of photoreceptors. These are tuned to work at two different levels of light: bright and dim. When the light is bright, the photoreceptors sensitive to bright light (cone cells) kick in and things are visible without being blown out. In dim light, for example on moonlit nights, the photoreceptors sensitive to dim light (rod cells) start working. The transition is so smooth that we don’t notice it, except during a sudden change in light level, as happens when walking into a dark room in the daytime. This makes our eyes capable of a huge range of ISOs, automatically. The cone cells, which are the color-sensitive photoreceptors, come in three types depending on the color they are most sensitive to (red, green and blue). There are about 6-7 million cone cells and about 120 million rod cells in a human eye. These are, in a way, similar to the pixels in a digital sensor.
(In fact, one person has gone to the extent of calculating the resolution of the eye at 576 megapixels, which in my opinion is an oversimplification of how the eye works. If you are a reader of my articles, don’t quote that flawed number anywhere.)
The cone cells are most densely concentrated in the central part of the retina (the fovea). Whenever we look at anything, the eyes automatically turn so that the image forms on this area. Nothing is out of focus! (The focusing system of our eyes frequently gets affected, which is then corrected by glasses, contact lenses or laser surgery.)
The retina is a curved structure and covers most of the rear part of the eye. This ensures a very large angle of view, and a very simple lens structure works fine. The curvature also keeps every part of the retina at almost the same distance from the lens. A camera sensor or film, on the other hand, is flat, so the distance from the back of the lens to the center of the sensor is less than the distance to its edges. This leads to all kinds of problems in cameras, such as softness toward the edges of the image, light fall-off causing vignetting, chromatic aberration and so on.
Though practically the mind shows us only the central area clearly, the peripheral part is also visible. It is desaturated and blurred because of the type and density of photoreceptors there. Our brain actively participates in suppressing unnecessary details and keeping us focused on whatever we are looking at. Peripheral vision helps only when something of importance happens (like a change in the scene or a movement).
The cornea (the curved transparent portion at the front of the eye) and the lens behind it work together to focus the image onto the retina. Yes, the image formed on the retina is upside down. Is that not freaky?
The angle of view that is clearly visible is comparable to a 40-45mm lens on a full-frame / 35mm film camera. The relative distances between various objects are perceived as they would be by a 55-58mm lens on the same camera. This is one of the reasons why, when cameras were bundled with a prime lens as a kit, it was usually a standard 50mm lens or something close to it (a "normal" lens). Incidentally, this focal length is also close to the diagonal of the sensor or film frame in those cameras.
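The "normal lens is roughly the frame diagonal" relationship above follows from simple geometry. A minimal sketch using the standard rectilinear angle-of-view formula (the function name and numbers here are just illustrative):

```python
import math

def angle_of_view(focal_mm: float, dimension_mm: float) -> float:
    """Angle of view in degrees for a rectilinear lens along one frame dimension."""
    return math.degrees(2 * math.atan(dimension_mm / (2 * focal_mm)))

# A 35mm film / full-frame sensor measures 36mm x 24mm.
diagonal = math.hypot(36, 24)   # ~43.3mm, close to the classic 50mm "normal" lens
print(f"frame diagonal: {diagonal:.1f}mm")
print(f"50mm diagonal angle of view: {angle_of_view(50, diagonal):.1f} degrees")
```

A focal length near the diagonal gives a diagonal angle of view around 45-53 degrees, which matches the "clearly visible" cone of human vision the paragraph describes.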
The lens is capable of changing its focusing power by becoming thicker or thinner. The change happens as if on an instantaneous autofocus mode that is continuously working (AF-C mode). The lens also moves slightly forward and backward. The focusing range is from infinity to almost macro and, like I said, instantaneous.
My eyes are brown; yours might be blue, green or black. Your friend might have yet another shade. These colors come from the pigmentation of the iris, the eye's aperture. Instead of being made up of multiple blades (as in a lens aperture), it is a single structure with an opening in the center (the pupil). It controls the opening by changing its shape. A perfectly round aperture!
The aperture, again, is in program mode: it automatically closes down in bright light and opens up in dim light.
While focusing on close objects, a mechanism called "accommodation" kicks in, which also constricts the pupil so as to increase the depth of field.
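The link between a smaller aperture and a deeper zone of sharp focus can be sketched with the common thin-lens approximation for depth of field (valid when the subject is much farther away than the focal length; the function name and the 0.03mm circle of confusion are my own illustrative choices):

```python
def depth_of_field_mm(f_mm: float, n: float, subject_mm: float,
                      coc_mm: float = 0.03) -> float:
    """Approximate total depth of field (mm), assuming subject distance >> focal length."""
    return 2 * n * coc_mm * subject_mm**2 / f_mm**2

# Closing the aperture (a larger f-number) deepens the zone of sharp focus,
# which is roughly what pupil constriction does for the eye at close range.
for n in (2.8, 5.6, 11):
    print(f"f/{n}: ~{depth_of_field_mm(50, n, 1000):.0f}mm of sharp focus at 1m")
```

Note that depth of field scales linearly with the f-number in this approximation: stopping down from f/2.8 to f/5.6 doubles it.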
You guessed it right. It is our brain. The freaky inverted image on our retina is seen as an upright image thanks to the processing done by the brain. The brain interprets what the eyes see, which is why countless optical illusions are able to fool us.
The brain provides us with "auto" color temperature balance. Eyes see white as white in daylight, on a cloudy day, inside a room, and even in the evening when the bulbs are switched on. The auto color balance of our eyes just needs some time to start working.
There is an added feature: "auto saturation adjustment". This works in the same manner as the color temperature adjustment. What it does is desaturate bright colors after they have remained in the field of vision for long, which helps in avoiding unnecessary distractions from bright colors and in seeing the detail within them. This is made possible by a combined effort of the photoreceptors, which lose their sensitivity, and the brain, which interprets the changed information. Many also term this color fatigue.
There are a lot of other processing activities that happen in the brain. Interestingly, the brain also has the so-called clone-stamping tool. Our retina has areas where there are no photoreceptors, or where they have been damaged. One such place is where the optic nerve connects to the retina; eye care professionals call it the blind spot. Our brain clones and extrapolates the image into this area so that we don’t see a missing patch, even if we close one of our eyes. Sometimes there is predictive "clone stamping", leading our eyes to see a continuous pattern even when there are actually gaps in between.
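Image editors do something comparable when they inpaint a missing region from its surroundings. A deliberately simple sketch of that idea, filling masked pixels from the average of their known neighbours (the function name and the toy 5x5 image are my own; real inpainting algorithms are far more sophisticated):

```python
import numpy as np

def fill_blind_spot(image: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Fill masked pixels with the mean of their unmasked 8-neighbours, working inward."""
    out = image.astype(float).copy()
    mask = mask.copy()
    while mask.any():
        progressed = False
        for y, x in zip(*np.where(mask)):
            neighbours = [out[j, i]
                          for j in range(max(0, y - 1), min(out.shape[0], y + 2))
                          for i in range(max(0, x - 1), min(out.shape[1], x + 2))
                          if not mask[j, i]]
            if neighbours:
                out[y, x] = np.mean(neighbours)
                mask[y, x] = False
                progressed = True
        if not progressed:
            break   # isolated region with no known neighbours
    return out

# A uniform patch with a "blind spot" hole in the middle: the fill is seamless.
img = np.full((5, 5), 100.0)
hole = np.zeros((5, 5), dtype=bool)
hole[2, 2] = True
img[2, 2] = 0.0
print(fill_blind_spot(img, hole)[2, 2])   # -> 100.0
```

On smooth or repetitive surroundings the patch is invisible, just as the brain's fill-in goes unnoticed; on complex scenes both can guess wrong.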
This is also controlled by the brain and done by small muscles inside the eyes. The muscles outside the eye also help in moving the eyes to center the area of interest.
3D View
Not to forget, two eyes provide us with 3D vision, and we are able to perceive "depth" without any cues from relative movement or expected shapes and sizes. The two eyes also increase the field of vision by covering areas that do not overlap. For obvious reasons, there is no 3D capability in these peripheral regions. (With the present trend of 3D movies being screened everywhere, individuals with only one working/normal eye have a tough time enjoying the show.)
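Machine stereo vision recovers depth the same way, from the disparity between two views. A minimal sketch of the pinhole stereo relation (the function name and the pixel/baseline numbers are illustrative; roughly 65mm is often quoted for the spacing of human eyes):

```python
def depth_from_disparity(focal_px: float, baseline_mm: float,
                         disparity_px: float) -> float:
    """Pinhole stereo model: depth = focal_length * baseline / disparity."""
    return focal_px * baseline_mm / disparity_px

# The nearer an object, the larger the disparity between the two images.
for d in (50, 10, 2):   # disparity in pixels
    print(f"disparity {d}px -> depth {depth_from_disparity(800, 65, d):.0f}mm")
```

The inverse relationship is also why stereo depth perception fades with distance: beyond a few metres the disparity between our eyes becomes too small to measure, and we fall back on the other depth cues the paragraph mentions.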
Natural adaptation of the photoreceptors provides us with a collection of graduated color filters, including graduated ND filters. Imagine a dark foreground with a fairly bright sky (a sunset, maybe?). When our eyes try to see the foreground, they adjust accordingly, but the bright sky, now in the peripheral vision, is dimmed by our eyes and brain automatically. The same happens with contrasting colors too.
Polarizers are a different story; our eyes do not have them. Studies suggest, however, that bees can detect the polarization of light.
(Is that just an old tree trunk or do I see a face of an old man hidden between the trees?)
No camera, in my opinion, can be compared to the eyes. After all, we even see the photographs that we click, through our eyes.
Hi Shivam. The camera was originally designed after the human eye, but you already knew that. You’re correct, there is no comparison except the mechanics, as in the lens, aperture and sensor. Jim Anderson
Very nice article sir, really enjoyed reading it 🙂