Note: This document is not complete. Many reference figures are missing and some sections are incomplete. I will finish them as I have time and material. I promise! tb!
Moving images can also provide depth clues. At a given speed, objects in the foreground are observed to have greater displacement than those further away. Here again, the effect is largely a conditioned response to previously observed phenomena. Only under highly specific circumstances can absolute depth information be extracted from moving images. These circumstances are explained below under the topic 'Pulfrich Effect'.
So, while some depth clues may contribute to the overall visual experience, true depth perception exists exclusively in the stereoscopic domain. True stereoscopic imaging contains depth information completely independent of learned spatial relativity. Even abstract objects can be perceived in absolute spatial relationship to each other. Figure 2 illustrates this idea. However, if you are new to the concept of stereographic imaging, you probably don't yet have the tools or experience necessary to view the example; please return here when you are ready.
The perception of depth is the result of image convergence disparity, measured very precisely by the brain. Human eyes, spaced approximately 2.5 inches (6.35 cm) apart, each capture an image slightly, but significantly, different from the other. Figure 3 demonstrates these seemingly minute differences in a rather dramatic way. In this example, two images of the same subject, taken from two slightly varying perspectives (less than 2 inches of separation), are superimposed over each other, one in red, the other in blue. The red image displays significant foreshortening, suggesting that it was taken from a perspective to the right of the blue image.
A number of operations must occur in order to resolve the differences that result from your two eyes viewing the same object from different perspectives. First, the center of interest must be identified. Both eyes move to center this area of the scene on the optic receptors at the back of the eye. If a double image is detected, resulting from the object of interest being focused on different areas of the optic receptor matrix (retina), the eyes adjust horizontally until a single, or converged, image results. At this moment, it may be possible to detect that other objects within the same field of view do not produce converged images. As the attention is focused from one object to another, the eyes repeat the process of tracking horizontally until a converged image is obtained. Objects at a great distance require the horizontal alignment of the optic system to be parallel. Closer objects require increasing angular deviations. Angles greater than 15 degrees can be uncomfortable, and a 30 degree angle can be painful.
A simple experiment demonstrates convergence quite clearly. Focus on an object several feet distant. Raise your hand with one finger extended into your field of view. The resulting double image, two extended fingers rather than one, is quite obvious. Now, focus your attention on your finger and observe the distant object as divergence occurs. Focus on your finger as you move it closer to your face. Observe the increasing divergence occurring in the background.
Depth is then measured by interpreting the angular deviation necessary to produce a converged image. Objects at varying depths in a given scene have varying relative horizontal placements, depending on the perspective of the optical unit, requiring varying alignment of the optical system to achieve convergence.
How we perceive depth is closely tied to the baseline of our internal optical system. If that baseline is altered, a unique psychological phenomenon can be observed. Under normal conditions, an object 10 feet away requires a convergence angle of 1.2 degrees, using the standard 2.5 inch baseline. If the baseline is extended to 5 inches, the convergence angle becomes 2.4 degrees, the same angle that would normally indicate an object 5 feet away on a 2.5 inch baseline. Additionally, objects 50 feet away produce the convergence angles of objects only 25 feet away. This results in an enhanced sense of depth. The interpretation of this information by the brain is very interesting. Past experience cannot be ignored. If the objects presented are familiar, non abstract items, the brain interprets the distances based on the convergence angle, and psychologically reduces the size of the object to fit into that space. Figure 4 presents a series of images of the same scene with increasing baseline separation.
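These angles are easy to check for yourself. Here is a minimal sketch (in Python; the function name is my own) of the convergence angle for a given baseline and distance:

```python
import math

def convergence_angle_deg(distance_ft, baseline_in=2.5):
    """Total convergence angle, in degrees, for eyes separated by
    baseline_in inches fixating an object distance_ft feet away."""
    baseline_ft = baseline_in / 12.0
    # Each eye rotates inward by atan((b/2)/d); the convergence angle
    # between the two lines of sight is twice that.
    return 2 * math.degrees(math.atan((baseline_ft / 2) / distance_ft))

print(round(convergence_angle_deg(10), 2))       # 1.19, i.e. ~1.2 degrees
print(round(convergence_angle_deg(10, 5.0), 2))  # 2.39, doubled baseline
print(round(convergence_angle_deg(5), 2))        # 2.39, same as 5 ft normally
```

Note how 10 feet on a 5 inch baseline and 5 feet on a 2.5 inch baseline produce the identical angle, which is exactly the depth-enhancing substitution described above.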
Rules of Composition (the practical stuff...)
In addition to the standard rules of photographic composition, stereographic photography introduces another set of basic rules to consider. A basic rule of stereographic imaging is to include foreground objects in the picture to optimize the characteristics of the technique. Otherwise, there's no point in taking a 3d picture, is there? Foreground objects are important because the effect of depth falls off dramatically as distance to the subject increases. In general, 3d scenery pictures shooting off to the horizon are ineffective. Sometimes the desire to capture a scene is so overwhelming that we ignore good judgment and snap the picture anyway, and are disappointed with the results later. Oh, by the way...
A simple technique can be employed to enhance depth and turn a less than ordinary shot into a spectacular demonstration of depth. Estimate the distance to the closest object in the scene. Then, determine the ideal placement of the object in the scene. Suppose the closest object is at a distance of 100 feet. Ideally, an object within 5 to 10 feet is desired. Let's place the object at 7 feet. By extending the baseline we increase the convergence angle and reduce the apparent distance to the object. Using simple ratios we can compute a new baseline x from the proportion 100 / x = 7 / .21. The value .21 is 2.5 inches converted to feet. Solving, x = 100 * .21 / 7 = 3 feet, or about 36 inches. The psychological implications of this technique can be extrapolated further by comparing the new baseline to your own height. If you are 5'10" looking at an object 7 feet away, then viewing an object 100 feet away with the same convergence angle would proportionately make you 83 feet tall!! That's why the objects in the image appear smaller as the separation increases.
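The baseline stretch above is a one-line proportion, sketched here as a small helper (the function name is illustrative):

```python
def stretched_baseline_ft(actual_dist_ft, desired_dist_ft,
                          normal_baseline_ft=2.5 / 12):
    """Baseline needed so an object at actual_dist_ft produces the same
    convergence angle as an object at desired_dist_ft on the normal
    2.5 inch baseline (small-angle proportion: baseline/distance constant)."""
    return normal_baseline_ft * actual_dist_ft / desired_dist_ft

b = stretched_baseline_ft(100, 7)
print(round(b, 2))       # 2.98 ft, i.e. about 3 feet
print(round(b * 12, 1))  # 35.7 inches, roughly 36
```

The 100/7 scale factor is the same one that makes the hypothetical observer 83 feet tall: 5.83 ft times 100/7 is about 83 feet.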
This technique will drive purists nuts, and my apologies. Art only imitates nature; it doesn't necessarily reproduce it verbatim. There are obvious drawbacks to this technique as well. Virtually placing a very large object too close within the observation space will cause convergence difficulties. The brain is well experienced at quickly guessing and presetting the convergence angle based on the image contents. Too much stretching of virtual depth can result in 'fishing' as the eyes search horizontally, attempting to converge at the expected depth rather than the actual virtual depth. This is not a good thing. Techniques for avoiding this will be discussed later.
Your eyes can be trained to do all sorts of amazing tricks. There are, however, a few things your eyes could do for which there is no practical application. In fact, if they do some of these things, a trip to your local ophthalmologist might be in order. Your eyes are designed to work together on the same horizontal plane. If one eye were to look up and the other look down, for instance, really bad and probably painful things would be happening. Your eyes only do things that make images converge, and this situation just doesn't occur in nature.
Divergence, or 'wall eyed' behavior, is anomalous as well. It is actually possible to force a small degree of divergence under controlled conditions, but more than a few moments in this situation will result in a massive headache. It is not recommended even for brief experimentation.
Why do I mention these issues? Because in designing and presenting 3d images it is important NEVER to expect these things to happen. An image that is not properly aligned both horizontally and vertically will cause fatigue, pain, and general ocular discomfort. It is important to have a solid understanding of the mechanics involved so your images can be fine tuned for optimal viewing effect AND comfort.
Stereo pairs generally require external optical systems, or viewers, to present individual images to each eye with proper focus and alignment. Examples of stereo pair systems include the View-Master and the older Stereopticon. With training and practice it is possible to view stereo pairs without the aid of a viewer. Free view images can be presented as either crossed eye or parallel view pairs. Difficulty can be encountered if the images are too large or too small.
Anaglyphs require the use of special filters covering each eye, usually one red and one blue. Without these filters, the image is jumbled and difficult to decipher. Individuals with colorblindness may not be able to view them at all. There is no amount of special training or practice that will allow the viewing of these images without the filters. Because of the filtration necessary, color rendition is generally poor, and even under the best of conditions some cross image bleed-through produces a ghosting effect.
Projecting stereo pairs combines viewing optics and filtration to allow large numbers of people to view projected stereo images at once. The viewing system in this case is a dual projection system fitted with polarized filters, which superimposes the stereo pair on the screen through filters polarized at 90 degrees to each other. To extract the discrete images from the superimposed composite again requires polarized filters, at the same 90 degree offset, over the eyes. The screen must be specially manufactured to ensure a high coefficient of polarized reflectivity. The head must be held level while viewing to prevent cross image bleed-through.
Lenticular images require precise cutting of the two images into vertical strips, re-assembly with alternating left and right strips, and display behind a special viewing matrix. Unfortunately, image clarity and resolution suffer.
The Pulfrich effect again requires filtration. This time, only a single dark filter is placed over one eye. This effect only works with moving pictures (movies). There are additional restrictions on camera movement. If the camera stops moving, the image disturbingly reverts to flat. Moving too fast creates exaggerated depth.
Additionally, in recent years, techniques have been derived to take advantage of computer video technology. A discrete stereo pair is interlaced on the video monitor, each image displayed on alternating scans of the video raster. The images are decoded, if you will, by an electronic shutter placed over each eye. The shutter for the left eye opens as that image is traced on the monitor. The left shutter closes, and the right shutter opens as the right image is traced on the screen. The shutters use LCD technology to quickly darken and lighten the viewer 'shades'. Currently, this technology is expensive and not well standardized. Proprietary software and hardware are required to display and view the images.
The most significant challenge to free viewing is focal/angular displacement. When parallel viewing, the eyes are positioned for viewing at infinity, while the image being viewed is usually placed 12 to 18 inches from the eyes. Significant and conscious effort must be exerted to allow the eyes to focus closely, independently of the parallel-driven urge to focus at infinity. Conversely, crossed eye viewing forces the eyes into an acute angle that would normally indicate an object at very close proximity. Consequently, considerable effort must be expended to force the eyes to focus at a distance somewhat beyond the angular cues being supplied to the brain.
Free viewing requires practice. Figure 5 contains several images to help you practice crossed eye free viewing. Take your time. This is not an easy technique. Younger people tend to have the greatest success; there's something about the flexibility of youth. I learned free viewing when I was in my mid teens. I drive my optometrist nuts! I can converge and focus on impossibly misaligned images that are used to diagnose serious vision deficits. Then I have to take several minutes to explain the techniques that I have practiced and developed to a very high degree. This technique cannot damage your eyes. When your mom told you to 'stop looking cross-eyed or your eyes will stick like that' she was merely trying to ensure you didn't look like a moron when you went to visit grandma. Sure, if practiced to excess initially you'll get a splitting headache. Take it easy. Try it for a few minutes, take a break and come back to it an hour or two later. It can take months to develop comfortable and flexible control of your viewing mechanism. I believe it's worth the effort, and certainly it's at least worth a try.
Traditionally anaglyphs have been encoded using red and blue filters. Images encoded using red and green can be successful as well. Standard convention also dictates the viewing apparatus be constructed with the red lens on the left, the blue or green on the right. The notable exception to this standard is the propensity of the movie industry to reverse this convention and place the red on the right. Thus, if you have acquired a pair of cardboard 3D glasses for a network television presentation of an old 3D movie, you will need to reverse them to successfully view most web based 3D presentations.
The final presentation of the stereographic image is accomplished by encoding the image intended for the left eye in blue and the right eye in red. In this scenario, the image coded in red becomes invisible to the eye covered by the red filter. (see footnote)
Historically, left and right components were encoded from black and white images. Today, more is being done to create anaglyphs with limited true color rendition. This is accomplished by replacing the red channel of an RGB image with the red channel from the other half of a full color stereo pair. Success varies depending on the color content of the image. Unfortunately, this practice can result in excessive bleed-through, producing undesirable 'halos' around some objects.
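The red-channel substitution just described can be sketched in a few lines, assuming Pillow and NumPy are available (the function name and the left/right assignment here are illustrative; swap the two inputs if your glasses follow the opposite convention):

```python
import numpy as np
from PIL import Image

def color_anaglyph(left_img, right_img):
    """Build a color anaglyph by taking the red channel from one half of
    a stereo pair and the green/blue channels from the other half."""
    left = np.asarray(left_img.convert("RGB"))
    out = np.asarray(right_img.convert("RGB")).copy()
    out[..., 0] = left[..., 0]  # replace the red channel with the other view
    return Image.fromarray(out)

# Hypothetical usage with placeholder file names:
# color_anaglyph(Image.open("left.jpg"), Image.open("right.jpg")).save("ana.jpg")
```

Both halves must already be precisely aligned vertically, for the reasons discussed earlier; the channel swap itself does nothing to fix alignment.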
The actual technique of creating anaglyph images is almost elementary with tools such as Adobe® Photoshop™ and similar photo editing software. This process will be described in detail later.
Horizontal alignment controls the placement of objects within the virtual depth of the scene. The angular deflection of the eyes from parallel is an indication of the virtual distance to the subject. The point at which the red and blue images coincide is perceived to coincide with the surface of the display media. Portions of the image where the left-eye image is to the right of the right-eye image are perceived to be forward of the media plane.
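That placement rule can be stated as a tiny sketch (the function name is my own):

```python
def virtual_placement(left_x, right_x):
    """Where a point appears relative to the screen plane, given the
    horizontal pixel positions of its left-eye and right-eye encodings.
    Left image to the RIGHT of the right image is crossed parallax:
    the point floats in front of the screen."""
    if left_x > right_x:
        return "in front of the screen"
    if left_x < right_x:
        return "behind the screen"
    return "on the screen plane"

print(virtual_placement(210, 200))  # in front of the screen
print(virtual_placement(200, 200))  # on the screen plane
```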
There is much discussion as to the 'proper' placement in virtual space. Analyze the image you are preparing for presentation. Where is the center of attraction? How do you pre-judge the placement of the main subject prior to putting on your glasses? The tendency is to focus on the surface of the medium, whether it's the printed page or a computer screen, to search for initial convergence. Place something on the surface to catch the eye and give 'perspective' to the placement of the rest of the image.
Ambiguous virtual placement results in 'fishing,' or convergence searching. A double image will appear to oscillate horizontally until either the eyes converge on the image or stabilize with a double image. An individual with significant experience with stereographic image manipulation will notice these problems less, due to the visual flexibility developed over time. One must put conscious effort into these issues if the images are to be viewed and appreciated by the less proficient. Figure 5 presents the same image with various virtual depth placements. Be aware of any issues you observe as you view these examples.
The Pulfrich Effect -
Equipment - Filtration is required for the Pulfrich effect. A single, dark tinted lens must be placed over one eye. Which eye? It depends. Let's talk first about how it works...
How it works - Our friend Carl Pulfrich discovered that lower levels of illumination require additional time for visual perception to occur. The dimmer the image, the more time required. This delay can be as much as several hundredths of a second.
In practical application, a moving image displays a continuous stream of visual information from a constantly varying perspective. If one eye is covered with a darkening filter, it registers the scene a frame or two later than the uncovered eye. This image will be from a different perspective than the one the other eye is currently registering, thus potentially containing full 3-dimensional information. To maximize the effect and produce predictable, high quality 3D, move the camera constantly from right to left during filming and view with the right eye filtered; this produces impressive results. Obviously, the camera must always move in the same direction to be effective. If the direction is reversed, the filter must be moved to the other eye. Requesting your audience to switch the filter to the other eye in the middle of a presentation is not considered to be good form. FigureX is a short (yet 4 meg) video clip shot from the window of a moving train to demonstrate the Pulfrich effect. View it by covering the right eye with a sunglass lens. FigureY is the return trip; filter over your left eye this time. If you don't have the bandwidth to withstand the very large video files, this simple java application will demonstrate the effect rather clearly: http://dogfeathers.com/java/pulfrich.html
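The geometry behind the effect can be sketched with a quick calculation: during the perceptual delay, the camera travels some lateral distance, and that distance acts as the stereo baseline between the two eyes' views (the speed and delay figures below are illustrative, not measured values):

```python
def pulfrich_baseline_in(camera_speed_in_per_s, delay_s):
    """Effective stereo baseline produced by a perceptual delay on the
    filtered eye: during the delay the camera travels delay * speed, so
    the delayed eye sees the scene from a viewpoint that lags the other
    eye's by that distance."""
    return camera_speed_in_per_s * delay_s

# A few hundredths of a second of delay at a modest lateral camera speed
# already approximates the natural 2.5 inch baseline:
print(round(pulfrich_baseline_in(60.0, 0.04), 2))  # 2.4 inches
```

This also explains the restrictions noted above: if the camera stops, the baseline collapses to zero and the image goes flat, and if it moves too fast, the oversized baseline exaggerates depth.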
Other methods - lenticular, LCD shutters
Creating Stereographic Images for Distribution via the Internet