CA1294361C - Three dimensional television system - Google Patents

Three dimensional television system

Info

Publication number: CA1294361C
Authority: CA (Canada)
Prior art keywords: video, cameras, camera, signal, background
Legal status: Expired - Lifetime
Application number: CA000499829A
Other languages: French (fr)
Inventor: Donald J. Imsand
Current Assignee: Individual
Original Assignee: Individual
Application filed by Individual
Priority to CA000499829A
Application granted
Publication of CA1294361C

Landscapes

  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)

Abstract

ABSTRACT OF THE DISCLOSURE

Methods and apparatus for producing a television image having the illusion of three dimensional depth. This method is compatible with television broadcast standards and standard home TV receivers without additional home apparatus. The basic methodology is to alternate video from two stereoscopic TV cameras, using a synchronized video switch, at an alternation rate to allow the human visual perception process to perceive the two images as a single binocularly fused image. However, this basic method has an inherent flicker problem related to the "limit of binocular fusion" of the human visual perception process. The flicker problem is eliminated through video processing techniques that produce globally converged alternating binocular video.
The method is also compatible with display media other than television.

Description


THREE DIMENSIONAL TELEVISION SYSTEM
BACKGROUND OF THE INVENTION

This invention relates to methods and apparatus for producing television images perceived by viewers to be stereoscopic.
Conventional stereoscopic television systems typically require special viewing aids, such as polarized glasses or glasses having colored lenses, or special receiving equipment. Typical of the latter approach is the method for producing a three dimensional television image by alternating pictures from stereoscopic television cameras described by Carrillo in U.S. Patent No. 3,457,364. The method of Carrillo requires a special picture tube for color. In addition, my tests indicate that the alternation method described by Carrillo (approximately 60 alternations per second, 30 of each picture, synchronized to the television field rate) may produce less depth illusion than methods using lower alternation rates.
Previously I described a method to produce the desired three dimensional effect while maintaining compatibility with existing broadcast standards and existing home TV receivers. However, when the two stereoscopic cameras are converged on foreground objects in accordance with that method, background objects sometimes appear to flicker and jump. Similarly, when the cameras are pointed at background objects, foreground objects may appear to flicker and jump. The main contributor to this flicker and jump effect is the limit of binocular fusion of the visual perception process in combination with the flicker effect of the low alternation rate. The limit of binocular fusion phenomenon was first investigated by Panum in 1856 and is known to visual perception scientists as the limiting case of the Panum Phenomenon, as disclosed in Rosenblith, Walter A. (editor), Sensory Communication, John Wiley & Sons, 1961, which discusses, in chapter 32, the physiological basis for the perception of binocularly fused images as a result of the alternation of stereo images.

Efforts to reduce the detrimental effects of this limit of binocular fusion while still maintaining a three dimensional effect produced my invention described in U.S. Patent No. 4,006,291. But the method of that invention did not completely correct the problem, since it is designed to minimize the effects but does not correct or remove the cause.
The method of Carrillo does not have a flicker problem since the alternation rate is far above the critical flicker frequency described in visual perception literature.
However, the limit of binocular fusion problem manifests itself as a double image.
BASIC THREE DIMENSIONAL ILLUSION METHODOLOGY
Human visual perception of three dimensions requires, in part, stereo images, one image corresponding to each eye, viewed from slightly different angles corresponding to the separation of the eyes. This causes each eye to see a slightly different image. Consequently, most three dimensional movie and television systems require the viewer to use some apparatus such as polarized glasses, color filter glasses or a mechanical shutter viewer in order to cause the left camera image to be viewed by the left eye and the right camera image to be viewed by the right eye.
However, a less obvious method is to simply alternately expose the two stereo images to both eyes of the viewer. The three dimensional illusion is produced by presenting stereoptican pairs of images to both the viewer's eyes, one image partner at a time, first one then the other, alternated several times per second. The normal visual perception process, through its significant adaptive and integrating capabilities and the physiology of the visual perception process, will interpret the images as a single three dimensional image, so long as the images are alternated such that binocular fusion can take place.
The visual perception system does not care which eye sees which image. As long as there is a disparity in the two images and as long as the two images are within some range of corresponding binocular positions, a correct depth relationship will be perceived. The visual cortex apparently measures the disparities in the two images to determine the depth of an object. A depth illusion system must provide some means to present the images such that the visual cortex can perform this measurement.
The first paragraph of this section contains a simplification that should now be clarified, since this clarification is pertinent to the present invention. The implication that 3D movies or television require the stereo cameras to be horizontally separated corresponding to the horizontal separation of the eyes is not completely accurate. My experiments indicate the two cameras can be vertically (or diagonally) separated and still achieve the depth illusion. This result is implied by tilting one's head to the side while viewing a 3D movie: the stereoptican depth illusion is still present even with the head tilted a full 90 degrees.
This perception response to vertical disparity is also supported by visual perception literature such as "Stereopsis" by J. Mayhew in Physical and Biological Processing of Images, edited by Braddick, O.J. and Sleigh, A.C., Springer-Verlag, 1983, and Pettigrew, John, The Neurophysiology of Binocular Vision, Scientific American, August 1972.
In a method which I have previously described the stereo images are alternated at a rate near or below the critical flicker frequency. This is so that some of the retinal receptors can discharge to the visual cortex under the influence of one of the stereo images and other retinal receptors will discharge to the cortex under the influence of the other image in a time multiplexed fashion. Since the neuronal charges persist for some short time interval, the different charges from the disparate stereo images can interact in the binocular fusion process and depth is perceived.
The alternation rate of my method can vary, but generally should be on the order of 3 to 25 of each image per second. The relative exposure time of the two partner images can vary from equal exposure time of each to the subliminal exposure technique, involving significantly unequal exposure times, of my U.S. Patent No. 4,006,291.
However, when the two cameras are pointed at a foreground object (that is, converged on the foreground object), background objects may appear to jump. Likewise, when the two cameras are pointed at a background object, foreground objects may appear to jump. The problem, which may appear to be caused only by the "flicker phenomenon", is actually caused by a combination of the flicker phenomenon and the limit of binocular fusion first described by Panum. It should be emphasized here that converged binocular video does not flicker or jump. Unconverged binocular video does flicker and jump.
The differences in the characteristics of the Carrillo and the Imsand methods suggest a very basic difference in the way the two methods function in the perception process. The Carrillo alternation rate is well above the critical flicker frequency; hence it has no flicker problem. This implies, however, that all retinal receptors discharge to the visual cortex under the influence of both stereo images. The Carrillo monochrome method also uses a spatial separation by placing the two stereo images on alternating interlaced lines, thus assuring that the two images are present on a space separation basis rather than a true time multiplex basis. The resolution, focus and electron beam sharpness of color picture tubes was not adequate at the time of Carrillo's invention to provide the necessary spatial separation for color applications. Thus, Carrillo found it necessary to use two contrasting colors, implemented with a special color picture tube, to keep the two disparate stereo pictures separated in the visual cortex so that binocular fusion could take place.
The methods and apparatus described herein to resolve the flicker problem of the Imsand method can also be used to resolve the double image problem of the Carrillo method.
In order to better explain the present invention, a few elements of depth perception as related to the present invention will now be reviewed. The monocular cues to depth perception are not pertinent to the invention and are not included.

BINOCULAR ELEMENTS OF DEPTH PERCEPTION

Two important cues to the visual perception of depth are binocular cues of convergence and stereoscopic vision.

CONVERGENCE
When an object is at a great distance, lines of fixation to the object from a viewer's separate eyes are nearly parallel. When the object is near, the viewer's eyes are turned toward the object and the fixation lines converge at a more noticeable angle. If a person fixates his eyes on his finger at arm's length and then moves his finger in toward his nose while maintaining the fixation with his eyes, the eyes will "cross". This crossing or "pointing in" of the eyes is detectable by the sensory/control system that controls the position of the eyeballs and produces a sensation of more depth or less depth according to the size of the convergence angle of the eyes. However, visual perception scientists generally agree that convergence is a relatively minor cue to depth perception. Probably a more important result of convergence is that it also serves to place the two right and left eye images of the object fixated upon at (very nearly) corresponding retinal points in the central retinal area of each eye.
When the eyes are fixated (converged) on a point, the theoretical locus of all points whose images fall in exact retinal correspondence can be shown by geometric analysis. When the eyes are fixated on a point at the same elevation as the eyes, the locus of points in the horizontal plane lies on the circumference of a circle passing through the two eyes on one side of the circle and the convergence point on the other side of the circle. When this circle is rotated about an axis passing through the two eyes, the resulting donut shaped solid surface defines this locus of points in three space. The resulting locus of points is obviously not at a constant distance from the eyes.
However, since binocular fusion only takes place in a small area of the central retina, for practical purposes the locus of points of exact retinal correspondence may be considered to be at the same distance and in the vicinity of the point the eyes are fixated upon.

STEREOSCOPIC VISION

When a person looks at an object, the retinal image in the right eye is different (disparate) from the retinal image in the left eye. This disparity is the result of the two eyes viewing the object from two slightly different positions. Experiments have shown that the human visual perception system is highly sensitive to the disparity of the two retinal images. The visual perception system uses the amount of disparity as a measure of the depth of the object being viewed, with increasing disparity being perceived as the object being closer. No disparity is perceived as a far background object. Studies of visual perception have shown that this stereoscopic vision phenomenon is a much more important cue to depth perception than convergence.

BINOCULAR FUSION

When an object is viewed by the two eyes, although the two retinal images may be different, only a single image is normally perceived. This phenomenal process, which takes place in the visual cortex of the sensory system, is known as binocular fusion.

LIMIT OF BINOCULAR FUSION

When the two eyes are converged on an object several feet away, two slightly different images will be viewed by the two eyes but only one binocularly fused image will be perceived. If a second object is immediately beside the first object, it also will be perceived as a binocularly fused image. If the eyes remain fixated (converged) on the first object and the second object is moved further away into the background, a simple geometric projection analysis (see FIGS 1A and 1B) will show that the difference in retinal correspondence in the retinal images of the background object will increase. When the difference gets large enough, the sensory system can no longer binocularly fuse the object and a double image will result. When this occurs, the limit of binocular fusion has been reached. This is discussed as "Panum's limiting case" in Chapter 5 of Murch, Gerald M., Visual and Auditory Perception, Bobbs-Merrill Company, 1973.
When the eyes are shifted and fixated on the background object, the background object will again become binocularly fused and the foreground object will become the double image (see FIGS 1A and 1C).
ACCOMMODATION
The viewer is not normally aware of a double image even though it may be present in most complex scenes. This is because when the eyes are converged on the foreground object they are also focused on that object, and the double image of the background object is out of focus and is autonomously de-emphasized via the accommodation property of the visual perception process. Some texts indicate that accommodation is (only) the focusing of the eyes' lenses. However, the de-emphasis that may take place in the visual cortex also causes the viewer to be unaware of the out of focus double image.
The interaction of convergence and binocular fusion in binocular depth perception is recognized in visual perception literature as a complex process (refer to Julesz, Bela, Foundations of Cyclopean Perception, University of Chicago Press, 1971). However, the following simplifications are consistent with visual perception literature and are pertinent to the present invention.
1. The visual perception system controls eye convergence in a manner that tends to maximize the correlation (or retinal correspondence) of the left and right eye images of the object of attention within some central portion of the retina.
2. The resulting two disparate images are processed by the visual cortex to determine (relative) depth by measuring the disparities and to resolve the disparities, merging the two images into a single perceived image.
Three dimensional reproduction systems are a paradox - movies, television or any system that tries to reproduce a three dimensional image on a flat screen. To recreate the same conditions more exactly would imply that the stereo cameras' convergence be a priori synchronized with the viewer's eye convergence as he shifts his gaze from foreground to background. Indeed, existing 3D movies have double image problems in some complex scenes. The problem is not as obvious in 3D movies that use special glasses to separate the stereo images, but it is still there.
In the display or projection of moving video scenes, as with movies or television, the picture update rate (60 fields, 30 frames per second for television) is well above the critical flicker frequency, thereby providing the well known illusion of a continuously moving picture.

However, the stereo partner alternation in accordance with my invention works better with a lower alternation rate, close to or lower than the critical flicker frequency. This may be because a lower rate permits neuronal binocular rivalry to take place in the visual cortex, resulting in the perception of depth. If such lower rate alternated stereo object images are adequately converged, the picture alternation rate is not perceptible and no flicker results. However, if the partner object images exceed the limit of binocular fusion, the partner images jump and flicker at the partner alternation rate. If the partner alternation rate is increased, the flicker diminishes, but continuous double images result for object images outside the limit of binocular fusion and, for object images within the limit of binocular fusion, the three dimensional illusion is reduced and a continuous fuzzy-edged object results.
With a three dimensional reproduction system, it would seem impossible to keep all corresponding elements of a complex scene within a viewer's limit of binocular fusion. As a person shifts his gaze from foreground to background objects in real three dimensional scenes, the convergence angle of his eyes changes, thereby changing the relative position of the foreground and background object images on the retina as shown in FIGS 1B and 1C. However, in 3D reproduction systems the viewer's eyes are converged on the screen. The viewer's eye convergence angle could change as the viewer's gaze changes from foreground to background in the stereo reproduction scene. However, a careful geometric projection analysis will reveal that since the cameras are converged on the object of primary interest only (foreground for example), when the viewer's gaze shifts to the background object, proper retinal registration of the background object image would require an unnatural convergence angle for the eyes.

SUMMARY OF THE INVENTION

This invention provides video processing systems that will process stereoscopic video signals such that all necessary objects in the stereo reproduced image are within a viewer's limit of binocular fusion, thus allowing binocular fusion without the jumping effect in foreground or background. This may be accomplished by either of two general methods. One method is to alternate stereo video only from objects that are within the limit of binocular fusion. For objects out of the binocular fusion region only video from one of the cameras would be passed, but that video would be passed all the time. The second method rearranges the video from all necessary objects such that the stereo images of each object in the scene, background and foreground, are arranged for binocular fusion (as illustrated in FIG 1D) to take place.
In accordance with this invention a stereoptican pair of images is presented, one partner at a time but alternating the two images, with each partner exposed to both eyes simultaneously. The alternation rate is on the order of 3 to 25 of each picture per second, but may be varied for optimal effects. The relative exposure time of each of the partners may be equal or may also be varied for optimal effects.
"Layered video techniques" generate converged binocular video combined with monocular video such that the limit of binocular fusion problem is eliminated.
Additional techniques are presented to detect corresponding elements in the two electronic video signals from two video cameras and to detect when corresponding elements of the two electronic video signals would cause the reproduced stereo elements to exceed the viewer's limit of binocular fusion.
These techniques are combined with methods and apparatus to combine parts of each pair of stereo images such that corresponding stereo elements within the viewer's limit of binocular fusion are alternated, but when the corresponding elements would be outside the limit of binocular fusion, only one of the stereo video signals for that element would be passed all the time with no alternation. Stereo elements that are alternated would appear to have depth while unalternated elements would appear as background (or foreground) elements.
Additional techniques are provided to rearrange video information in either one or both stereo electronic video signals such that all necessary video information from a scene, foreground and background, is within a proper binocular relationship such that when the two stereo partners are alternated at the appropriate partner alternation rate a single binocularly fused image will be perceived having the illusion of three dimensions. With these techniques, corresponding video elements that would otherwise be out of the limit of binocular fusion are repositioned such that they are within the limit of binocular fusion.
The above techniques may employ normal horizontal scan TV cameras and horizontal separation of the stereo cameras. Additional techniques will be presented involving vertical separation of the stereo cameras and vertically scanned TV cameras. These techniques may significantly reduce the video signal processing necessary to achieve globally converged stereoscopic video. (For the purpose of this discussion, "globally converged" means stereoscopic video all corresponding elements of which are within the limit of binocular fusion throughout the reproduced image.)
An object of an aspect of this invention is the production of methods and devices for three dimensional image production which is adaptable to (standard and non-standard) television broadcasting, closed circuit television, and artificial image production as used in video games, motion picture cartoons, television cartoons, and similar applications. These video processing techniques are also adaptable to motion picture production.
Other aspects of this invention are as follows:
A method of displaying a stereo pair of images to present a single, three dimensional, sharply focused, flicker-free image to human visual perception comprising the steps of:
(a) separately producing stereo video images of only foreground objects located approximately equal distances from a first pair of cameras against a solid color background,
(b) separately producing stereo video images of only midground objects located approximately equal distances from a second pair of cameras against a solid color background,
(c) separately producing monocular video of only background objects,
(d) combining the foreground, midground and background video images such that the solid color background of the midground video is replaced by corresponding portions of the monocular background objects video and the solid color background of the foreground video is replaced by corresponding portions of the combined midground and background video,
(e) displaying first the combined video signal from the background video and one camera of each of the pairs of foreground and midground cameras,
(f) displaying the combined video signal from the background video and the other cameras of the pairs of foreground and midground cameras in registration with the display of the first signal, in binocular relationship to the first signal and for approximately the same period as the first signal, and
(g) switching between display of the first and second signals at a rate between 3 and 25 of each image per second.
A method of displaying a stereo pair of images to present a single, three dimensional, sharply focused, flicker-free image to human visual perception comprising the steps of:
(1) positioning a pair of video cameras in stereo relation to each other to view a scene,
(2) displaying the video signal from first one camera,
(3) displaying the video signal from the second camera:
(i) in binocular relationship to the first signal by:
(a) measuring the characteristics of each of the two video signals,
(b) comparing the characteristics and time occurrence of video elements within each of the two video signals,
(c) identifying corresponding video elements within each of the two video signals not within the limits of human binocular fusion, and
(d) processing the video signal from one camera to time shift the video elements not within the limits of binocular fusion to bring such elements within the limits of binocular fusion when the processed video signal is displayed together with the video signal from the second camera in accordance with said switching step,
(ii) in registration with the display of the first signal, and
(iii) for approximately the same period as the first signal, and
(4) switching between display of the first and second signals at a rate between 3 and 25 of each image per second.
A method of displaying a stereo pair of images to present a single, three dimensional, sharply focused, flicker-free image to human visual perception comprising the steps of:
(a) positioning a pair of video cameras in stereo relation to each other to view a scene,
(b) processing the signals from the cameras to identify the video elements within the scene which are not within the limit of binocular fusion and elements that are within the limit of binocular fusion,
(c) combining the video signals from the cameras such that video elements which are within the limit of binocular fusion are alternately displayed for approximately equal periods, alternating at a rate between 3 and 25 of each image per second, and
(d) displaying video signals from only one of the cameras continuously for video elements not within the limit of binocular fusion.
A method of displaying a stereo pair of images to present a single, three dimensional, sharply focused, flicker-free image to human visual perception comprising the steps of:
(a) positioning a pair of video cameras in stereo relation to each other to view a scene,
(b) displaying the video signal from first one camera,
(c) displaying the video signal from the second camera in registration with the display of the first signal, in binocular relationship to the first signal and for approximately the same period as the first signal,
(d) switching between display of the first and second signals at a rate between 3 and 25 of each image per second, and
(e) monitoring the light level of the scene and increasing the rate of switching between display of the first and second signals responsive to higher light levels.
A method of displaying a stereo pair of images to present a single, three dimensional, sharply focused, flicker-free image to human visual perception comprising the steps of:
(a) positioning a plurality of pairs of video cameras to view a scene such that the cameras of each pair are in stereo relation to each other and each pair of cameras is converged at a different distance within the scene,
(b) synthesizing a first composite video signal depicting the scene from one of the cameras in each pair of cameras utilizing convergence detector circuitry, switching circuitry and layered video circuitry,
(c) displaying the first video signal,
(d) synthesizing a second composite video signal depicting the scene from the other camera in each pair of cameras utilizing convergence detector circuitry, switching circuitry and layered video circuitry,
(e) displaying the second video signal in registration with the display of the first signal, in binocular relationship to the first signal and for approximately the same period as the first signal, and
(f) switching between display of the first and second composite video signals at a rate between 3 and 25 of each image per second.
A method of displaying a stereo pair of images to present a single, three dimensional, sharply focused, flicker-free image to human visual perception comprising the steps of:

(a) positioning a pair of video cameras one above the other in stereo relation to each other to view a scene,
(b) displaying the video signal from first one camera,
(c) displaying the video signal from the second camera in registration with the display of the first signal, in binocular relationship to the first signal and for approximately the same period as the first signal, and
(d) switching between display of the first and second signals at a rate between 3 and 25 of each image per second.
A method of displaying a stereo pair of images to present a single, three dimensional, sharply focused, flicker-free image to human visual perception comprising the steps of:
(a) positioning a pair of vertical scan video cameras in stereo relation to each other to view a scene,
(b) displaying the video signal from first one camera,
(c) displaying the video signal from the second camera in registration with the display of the first signal, in binocular relationship to the first signal and for approximately the same period as the first signal, and
(d) switching between display of the first and second signals at a rate between 3 and 25 of each image per second.
A method of displaying a stereo pair of images to present a single, three dimensional, sharply focused, flicker-free image to human visual perception comprising the steps of:
(a) positioning a pair of video cameras in stereo relation to each other to view a scene,
(b) displaying the video signal from first one camera,
(c) displaying the video signal from the second camera in registration with the display of the first signal, in binocular relationship to the first signal and for a substantially shorter period than display of the first signal, and
(d) switching between display of the first and second signals at a rate between 3 and 25 of each image per second.
A television system for displaying a three dimensional, sharply focused, flicker-free image to human visual perception, comprising:
(a) a pair of video cameras positioned in stereo relation to each other to view a scene,
(b) means for transmitting the video signal from first one of the pair of cameras and then the other camera for approximately equal periods and switching between the video signals from the first and second camera at a rate between 3 and 25 of each image per second, and
(c) means for receiving the transmitted video signals and displaying such signals in registration and in binocular relationship to each other, said binocular relationship displaying means comprising:
(i) means for measuring the characteristics of each of the two video signals,
(ii) means for comparing the characteristics and time occurrence of video elements within each of the two video signals,
(iii) means for identifying corresponding video elements within each of the two video signals not within the limits of human binocular fusion, and
(iv) means for processing the video signal from the one camera to time shift the video elements not within the limits of binocular fusion to bring such elements within the limits of binocular fusion when the processed video signal is displayed together with the video signal from the other camera.


A method for producing a pair of stereoscopic images, the corresponding objects of which are within a viewer's limit of binocular fusion comprising the steps of:
(a) repositioning the corresponding objects in the two images to corresponding positions such that when the two images are superimposed the video objects at all ranges coincide; and
(b) filling in voids created in each image by such repositioning of the objects by using corresponding image elements from the other image.
BRIEF DESCRIPTION OF THE DRAWINGS
FIGS 1A - 1D are drawings illustrating the geometry of binocular vision.
FIG 2 is a block diagram of a basic embodiment of a three dimensional television system in accordance with the present invention.
FIG 3 is a logic diagram illustrating amplitude correlation measurement.
FIG 4 is a logic diagram illustrating slope correlation measurement.
FIG 5 is a logic diagram illustrating a video edge detection system.
FIG. 6 is a block diagram of a system to detect when video elements in two stereoscopic video signals correspond.


FIG 7 is a block diagram of a pulsed light ranging and convergence system.
FIG 8 is a block diagram of a system that alternates stereoscopic video elements only for objects that are within the limit of binocular fusion and produces monocular video for other objects.
FIG 9 is a signal diagram to illustrate the alternation of converged video elements.
FIG 10 is a block diagram of a system to process stereoscopic video to bring all corresponding video object elements within the limit of binocular fusion.
FIG 11 is a block diagram of a digital computer implementation of a stereoscopic video processing system to produce alternating stereoscopic video such that all corresponding video object elements are within the limit of binocular fusion.

FIGS 12A and 12B are a block diagram of a stereoscopic video processing system for vertically separated stereoscopic TV cameras. The block diagram is also applicable to horizontally separated, vertically scanned TV cameras.
FIG. 13 is a block diagram of a system in accordance with the present invention which utilizes layered video techniques in order to produce alternating signals in binocular relationship.

DETAILED DESCRIPTION OF THE DRAWINGS

FIG 1A is a plan view diagram of a human observer and the lines of sight from each eye to a foreground object F and a background object B, presented to assist in explaining the present invention.
FIG 1B is a simplified illustration of the superimposed retinal images for the observer of FIG 1A when his eyes are converged on the foreground object F.

FIG 1C is a simplified illustration of the superimposed retinal images for the observer of FIG 1A when his eyes are converged on the background object B.
FIG 1D is a simplified illustration of the superimposed retinal images when the video has been processed in accordance with the present invention to simulate simultaneous convergence of the viewer's eyes on both object F and object B, thus allowing binocular fusion of both objects.
Referring to FIG 2, two television cameras, 210 and 212, with identical optics are correctly spaced and aimed for taking pictures which have a stereo relationship to each other. The cameras are shown converged on object F. A video switch 214 allows video from one camera and then the other camera to pass, under the command of a controller 216 for controlling the switch 214, and a video synchronization generator 218 synchronizes the controller 216 and the video of the two cameras. The transmission medium 220 sends video selected by switch 214 to monitor 222.
The embodiment of FIG 2 is not specifically designed to reduce or eliminate the limit of binocular fusion problem; however, it could be used where all video elements are at a relatively equal distance from the cameras.
The two cameras 210 and 212 are aimed and focused on the desired object, and the video signals from the two cameras are synchronized by the synchronization generator 218. The video from the two cameras is applied to the video switch 214. The controller 216 receives synchronization signals from the synchronization generator 218.

The controller 216 controls the switch 214 to allow the video first from one camera to pass and then from the other camera to pass. The video from the two cameras is alternated at a rate from about 3 to 25 of each picture per second and may be adjusted for optimal effect. Each channel may be exposed for equal amounts of time. However, relative exposure times may be varied for optimal effect, and the techniques of U.S. Patent No. 4,006,291 applied to this invention.
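The switching behaviour just described can be restated as a minimal software sketch. The 60 field per second display rate and the 8 Hz alternation rate chosen below are assumptions for the example; the text only requires 3 to 25 of each image per second.

    # Minimal sketch of the alternation switch of FIG 2: two synchronized frame
    # sources are alternated at a rate well below the display field rate.

    FIELD_RATE_HZ = 60      # assumed display field rate
    ALTERNATION_HZ = 8      # assumed rate within the 3 to 25 per second range

    def select_camera(field_index):
        """Return 'L' or 'R' for a given field, switching at roughly ALTERNATION_HZ."""
        # Consecutive fields taken from one camera before switching to the other.
        fields_per_partner = max(1, round(FIELD_RATE_HZ / (2 * ALTERNATION_HZ)))
        return 'L' if (field_index // fields_per_partner) % 2 == 0 else 'R'

    def alternate(left_frames, right_frames):
        """Interleave two equal-length frame sequences according to select_camera."""
        out = []
        for i, (lf, rf) in enumerate(zip(left_frames, right_frames)):
            out.append(lf if select_camera(i) == 'L' else rf)
        return out

    if __name__ == "__main__":
        # Dummy labels standing in for real video fields.
        left = [f"L{i}" for i in range(12)]
        right = [f"R{i}" for i in range(12)]
        print(alternate(left, right))   # L0..L3, R4..R7, L8..L11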
Experiments indicate that the optimal alternation rate may be a function of the light intensity of the video scene. This result is supported by the writings of Robert Efron, Stereoscopic Vision, in the British Journal of Ophthalmology, December, 1957. Therefore, the alternation rate can be somewhat optimized for changing video scenes by using a light meter to measure the light intensity of the video scene and operatively connecting the meter output to circuitry that controls the alternation rate of the stereo video signals. An alternate approach is to electronically measure the amplitude of the video signal, the amplitude being a function of light intensity.
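The light-meter idea lends itself to a similar sketch. Only the 3 to 25 per second range and the direction of adjustment (brighter scene, faster alternation) come from the text; the linear mapping and the 8-bit luminance bounds below are assumptions.

    import numpy as np

    # Sketch: derive the alternation rate from mean frame brightness.
    MIN_RATE_HZ, MAX_RATE_HZ = 3.0, 25.0
    DARK_LEVEL, BRIGHT_LEVEL = 16.0, 235.0    # assumed 8-bit video levels

    def alternation_rate_from_frame(frame):
        """Map mean luminance of an 8-bit frame to an alternation rate in Hz."""
        mean_luma = float(np.mean(frame))
        t = (mean_luma - DARK_LEVEL) / (BRIGHT_LEVEL - DARK_LEVEL)
        t = min(max(t, 0.0), 1.0)             # clamp to [0, 1]
        return MIN_RATE_HZ + t * (MAX_RATE_HZ - MIN_RATE_HZ)

    if __name__ == "__main__":
        dark = np.full((480, 640), 30, dtype=np.uint8)
        bright = np.full((480, 640), 200, dtype=np.uint8)
        print(alternation_rate_from_frame(dark))    # low rate for a dim scene
        print(alternation_rate_from_frame(bright))  # higher rate for a bright scene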

LAYERED VIDEO TECHNIQUES

Layered video techniques are generally applicable when parts of the video scene can be photographed separately and combined via "special effects" techniques often used in the video industry. For example, a singer against a solid color background can be photographed, producing converged binocular video of the singer. The orchestra can be photographed separately with monocular video. Then, using existing special effects circuitry, the solid color background can be detected and replaced with the monocular video. This method would produce an image of the singer with the depth effect of the converged binocular video and with the orchestra appearing in the background as monocular video, but without any flicker or jumping in any portion of the entire resulting picture.
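The background replacement step can be illustrated with a simple software chroma key. This is only a sketch of the idea; the key colour, the tolerance and the per-pixel test are assumptions, and a production system would use existing special effects circuitry as noted above.

    import numpy as np

    # Sketch of the layered video idea: wherever a stereo layer shows the solid
    # key colour, substitute the separately photographed monocular background.
    KEY_COLOR = np.array([0, 177, 64], dtype=np.int16)   # assumed backdrop colour
    TOLERANCE = 40                                        # assumed match tolerance

    def composite(layer_rgb, background_rgb):
        """Replace key-coloured pixels of the foreground layer with background pixels."""
        diff = np.abs(layer_rgb.astype(np.int16) - KEY_COLOR)
        is_key = np.all(diff < TOLERANCE, axis=-1)        # True where the backdrop shows
        out = layer_rgb.copy()
        out[is_key] = background_rgb[is_key]
        return out

    # Usage: composite(left_singer_frame, orchestra_frame) and
    # composite(right_singer_frame, orchestra_frame) give the two partners that
    # are then alternated at 3 to 25 of each image per second.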
Many variations of this technique are feasible. For example, in the previous example the orchestra could be photographed with stereo cameras (but a different stereo set from that for the singer) and combined with the singer stereo image. Additional layers of video can also be combined. The several binocular videos can be combined in real time or video tape recordings can be used, but all must be properly synchronized and converged.

CORRESPONDENCE DETECTION TECHNIQUES

Since a TV camera scans a scene horizontally as a function of time, the relative time of occurrence of corresponding stereo video elements within a horizontal scan can be used to determine if each corresponding stereo element pair is within the limit of binocular fusion. This correspondence detection can be used to implement alternation of converged stereo video elements while maintaining continuous video from one camera for unconverged video. (For the purpose of this discussion, a "video element" may be loosely defined as a video signal from the smallest discernible part of a single object in the video scene. "Corresponding video elements" are video elements in the two stereo video signals that originate from the same small part of the same object in the video scene.)

The first step in determining if the corresponding video elements are within the limit of binocular fusion is to identify the corresponding elements through correlation measurement techniques. Correlation measurement can be implemented in various ways. Correlation can be based on amplitude comparison between the two video signals (FIG 3), slope comparison between the two video signals (FIG 4), edge correspondence measurement between the two signals (FIG 5) or other time domain techniques. Frequency domain techniques can also be utilized; that is, by detecting corresponding frequencies in the two signals. Combination of the above techniques is often necessary. Correlation measurement can be accomplished on monochrome signals, color signals, or combinations thereof. The correlation measurement can be accomplished through analog techniques, or the video signals can be digitized and digital techniques utilized. A good combination technique for color signal correlation measurement is the combination of the light intensity signal and the color signal. The correlation measurement can be accomplished in the vertical direction, horizontal direction or in a combination vertical and horizontal direction. The two direction technique is more easily implemented utilizing the digital techniques and algorithms of picture processing via digital computer.
Logic circuitry designed to indicate that the elements correspond is utilized to detect when two video elements in the two stereo video signals are sufficiently correlated.
FIG 3 is a simplified illustration of an amplitude correlation measurement device that measures the amplitude difference between the two stereo video signals, video 1 and video 2. A video differential amplifier 310 amplifies the difference between the two video signals. Threshold (gain) adjustment 314 is used to adjust the gain of amplifier 310 so that when video 1 is an appropriate amount above video 2, the output of amplifier 310 will cause the output of the logical OR gate 318 to be in the "1" state, indicating the signals are not adequately correlated. Amplifier 312 is identical to amplifier 310 but is connected with opposite polarity to the two video signals. Thus, when video 1 amplitude is a threshold above or below video 2, the OR gate outputs a "1" indicating noncorrelation. If the two video signals are nearly equal, the OR gate outputs a "0" indicating correlation.
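A software analogue of this window comparison might look as follows. The numeric threshold is an assumption standing in for the threshold (gain) adjustment of the analog circuit, and True here corresponds to the "0" (correlated) state of the OR gate.

    import numpy as np

    # Sketch of the FIG 3 amplitude correlation test: two video samples are
    # declared correlated when their amplitudes agree within a threshold.
    AMPLITUDE_THRESHOLD = 10   # assumed, in 8-bit video levels

    def amplitude_correlated(video1, video2, threshold=AMPLITUDE_THRESHOLD):
        """Return a boolean array: True where the two signals are adequately correlated."""
        v1 = np.asarray(video1, dtype=np.int16)
        v2 = np.asarray(video2, dtype=np.int16)
        return np.abs(v1 - v2) <= threshold

    if __name__ == "__main__":
        line_a = np.array([50, 52, 120, 121, 200], dtype=np.uint8)
        line_b = np.array([51, 49, 124, 180, 205], dtype=np.uint8)
        print(amplitude_correlated(line_a, line_b))   # [ True  True  True False  True]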
FIG 4 is a simplified illustration of a slope correlation measurement device that measures the difference in the slopes of the two video signals. Items 410 and 412 are video amplifiers connected as differentiators such that their outputs are proportional to the slopes of the video signal inputs. The differential amplifiers 416 and 418, threshold adjustments 420 and 422, and logical OR gate 424 function the same as in FIG 3. Thus, when the two signals' slopes are nearly equal, the logical OR gate 424 outputs a "0" indicating the slopes are adequately correlated. Otherwise a "1" is output indicating the signals are not adequately correlated.
FIG 5 is a simplified illustration of a video edge detector. A video signal is connected to one terminal of a differential video amplifier 512 and through a video delay line device 510 (such as a video sample and hold circuit) to the other terminal. The amount of delay is illustrated as 200 nanoseconds but may be changed for optimal results. If the video signal changes by some threshold amount within the 200 nanoseconds, the amplifier 512 output will cause the digital counter or flipflop 516 to change state, indicating a video edge has occurred. A threshold adjustment 514 is included to adjust the amount of video signal change necessary to signify an edge, as the application requires.
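In software, the same delay-and-compare idea might be sketched as below. The sampling rate used to convert the 200 nanosecond delay into a sample count, the initial state of the delay line and the edge threshold are assumptions.

    import numpy as np

    # Sketch of the FIG 5 edge detector: compare the signal with a delayed copy
    # of itself and flag an edge when the change within the delay window exceeds
    # a threshold.
    SAMPLE_RATE_HZ = 13.5e6                                   # assumed sampling rate
    DELAY_SAMPLES = max(1, round(200e-9 * SAMPLE_RATE_HZ))    # roughly 200 ns of delay
    EDGE_THRESHOLD = 25                                       # assumed, in 8-bit levels

    def detect_edges(line):
        """Return indices along one scan line where a large step (edge) occurs."""
        v = np.asarray(line, dtype=np.int16)
        delayed = np.empty_like(v)
        delayed[:DELAY_SAMPLES] = v[0]          # assume the delay line starts at the first sample
        delayed[DELAY_SAMPLES:] = v[:-DELAY_SAMPLES]
        return np.where(np.abs(v - delayed) > EDGE_THRESHOLD)[0]

    if __name__ == "__main__":
        line = np.concatenate([np.full(20, 40), np.full(20, 150), np.full(20, 45)])
        print(detect_edges(line))    # samples just after the two transitions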
An implementation of a correspondence detector will now be described for a simple video scene. Referring to FIG 6, properly synchronized left and right stereo video inputs are fed simultaneously to edge detectors 606 and 608 and sample and hold circuits 610 and 612. Each edge detector 606 and 608 detects when each video object begins and sends a signal to the corresponding sample and hold circuit 610 and 612. Each sample and hold circuit 610 and 612 samples the signal immediately after the edge (for example, a 200 nanosecond sample window) and holds its respective signal for input to the amplitude correlation detector 614. The amplitude correlation detector 614 compares the amplitudes of the two sampled signals and outputs a "1" when the signals are adequately correlated and a "0" when the signals are not adequately correlated. Adequately correlated signals may be said to "correspond".
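Putting the edge detection, sample-and-hold and amplitude comparison together, a correspondence detector for one pair of scan lines could be sketched as follows. The edge threshold, the post-edge sample offset and the amplitude tolerance are assumptions chosen for the example.

    import numpy as np

    # Sketch of the FIG 6 correspondence detector: locate the edge that starts
    # each video object in both scan lines, sample the signal just after the edge
    # (the sample-and-hold step, 610/612), and declare correspondence when the
    # sampled amplitudes agree (614).
    EDGE_THRESHOLD = 25        # assumed step size that counts as an object edge
    SAMPLE_OFFSET = 3          # samples after the edge, standing in for a ~200 ns window
    AMPLITUDE_TOLERANCE = 10

    def object_edges(line):
        """Indices where a new object starts (first sample of each large step)."""
        v = np.asarray(line, dtype=np.int16)
        idx = np.where(np.abs(np.diff(v)) > EDGE_THRESHOLD)[0] + 1
        return [int(i) for k, i in enumerate(idx) if k == 0 or i != idx[k - 1] + 1]

    def corresponding_elements(left_line, right_line):
        """Pair left/right edges whose post-edge amplitudes are adequately correlated."""
        pairs = []
        for le in object_edges(left_line):
            for re in object_edges(right_line):
                li, ri = le + SAMPLE_OFFSET, re + SAMPLE_OFFSET
                if li < len(left_line) and ri < len(right_line):
                    if abs(int(left_line[li]) - int(right_line[ri])) <= AMPLITUDE_TOLERANCE:
                        pairs.append((le, re))
                        break
        return pairs

    if __name__ == "__main__":
        left  = np.array([20]*10 + [120]*8 + [20]*12, dtype=np.uint8)
        right = np.array([20]*13 + [120]*8 + [20]*9, dtype=np.uint8)
        print(corresponding_elements(left, right))   # [(10, 13), (18, 21)]: both edges of the bright object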
It is apparent that an important part of correspondence measurement and detection is the sensing or detection of the edges of the various objects in the video scene. Special techniques can be used to identify the edges of the video objects to separate the various foreground, midground and background elements. For instance, foreground objects can be backlighted with ultraviolet or infrared light. One or both of the cameras can be outfitted with a prism video splitter and a special vidicon that is sensitive to the back light. The special vidicon can be scanned in the same manner as the regular video sensing element, and the signal from the special vidicon will help identify the edges of the backlighted video objects.
"Range gating" can also be used to identify the video element edges as a function of range from the cameras. The amount of video shifting necessary to allow binocular fusion is a function of the range of the video object from the cameras. However, range is not usually measured or determined by a camera. Radar systems can measure range by transmitting short bursts of energy or pulses. Since the pulses travel at a specific velocity, range to an object can be calculated from the time it takes the pulse echoes to return from the object. Radar range to the object is measured by taking short sequential time samples (range gates) of the reflected energy to determine the time of arrival of the energy reflected from the object. This principle can also be used to implement edge detection of the various objects as a function of range to the objects, the video shifting necessary for binocular fusion being a function of the range to the video object.
This can be implemented in a video system by transmitting a pulse of light and by "looking" at the scene for short sequential periods of time. Each time sequential "look" contains video from objects at sequentially increasing ranges, and therefore can be used to implement the range dependent video shift.

The light pulses can be implemented as a pulsed laser or can be a more conventional light strobe, or can be implemented with a mechanical device such as the rotating mirrors used in very high speed photography. Since light travels at 300 meters per microsecond, the pulses must be short in duration, on the order of less than one microsecond, and the exact time of transmission must be well controlled. Sequential sample times must also be well controlled, on the order of 20 nanoseconds. The light pulses can be in the invisible portion of the spectrum (ultraviolet or infrared), and seen only by the sensor that uses it. One embodiment of such a system is shown in FIG 7. As before, two stereo optically related cameras 710 and 712 are used. However, camera 712 contains a prismatic mirror 716 for splitting incoming light between the normal camera sensor 714 and a sensor 718 which is sensitive to light from the flash source 726. The flash source 726 is strobed several times per second and emits a short burst of radiation on the order of one microsecond duration. The emitted radiation is reflected from the video scene and is focused by the camera optics and prism 716 on the radiation sensing element 718. The sensing element 718 may be several elements or may be a single element that sequentially samples the reflected radiation during sequential time intervals. During the first time interval, video elements in the 5 to 10 meter range may be identified, then video elements in the 5 to 20 meter range, etc. The controller 720 uses the information from sensor 718 to identify element edges and control the video processor 722, which time shifts appropriate video elements in the normal video from camera 712 such that when combined with the video from camera 710 by switching the two videos as previously described, the depth illusion will result without any limit of binocular fusion problem.
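The timing behind the range gates can be restated numerically. The 300 meters per microsecond figure and the 20 nanosecond gate width come from the text; the helper functions below only carry out the arithmetic.

    # Range-gating arithmetic: a gate of 20 ns spans about 3 m of scene depth
    # because the pulse travels out and back at roughly 0.3 m per nanosecond.
    SPEED_OF_LIGHT_M_PER_NS = 0.3

    def gate_depth_m(gate_ns):
        """Depth of scene covered by one range gate (round-trip path halved)."""
        return SPEED_OF_LIGHT_M_PER_NS * gate_ns / 2.0

    def echo_delay_ns(range_m):
        """Round-trip delay of the reflected pulse from an object at range_m."""
        return 2.0 * range_m / SPEED_OF_LIGHT_M_PER_NS

    if __name__ == "__main__":
        print(gate_depth_m(20))       # 3.0 m of depth per 20 ns gate
        print(echo_delay_ns(5.0))     # about 33 ns for an object 5 m away
        print(echo_delay_ns(10.0))    # about 67 ns for an object 10 m away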
Range to an object can also be determined through triangulation. This can be implemented by mounting a third camera directly above one of the stereo cameras. With this camera converged with the camera below it on some object, corresponding video elements from the object will be in exact correspondence within a horizontal scan and vertically by horizontal scan line number. Corresponding video elements from other objects at different ranges will be at the same position within the horizontal scan, but will occur on different horizontal scan lines for the upper camera than for the lower camera. Video from one of the two cameras may be passed through delay lines (such as Fairchild Semiconductor part number CCD321A1 Broadcast Quality Video Delay Line) equal to integer multiples of a horizontal scan time. The signals from the two cameras are then processed by correspondence detection circuitry to determine in which horizontal lines corresponding elements occur, which will permit calculation of the distance of each element from the cameras. This information may be used to implement converged video selection techniques and can also be used in convergence video processing techniques.
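A software sketch of this scan-line triangulation is given below. The search window and the use of a mean absolute difference score are assumptions, and converting a line offset into an actual distance depends on camera geometry not specified here.

    import numpy as np

    # Sketch of the vertical-triangulation idea: a scan line from the lower
    # camera is compared against the upper-camera video offset by whole scan
    # lines; the offset that matches best indicates the range band of the
    # element, mirroring the integer-line delay lines described above.
    MAX_LINE_OFFSET = 8     # assumed search window, in scan lines

    def best_line_offset(lower_frame, upper_frame, row):
        """Return the scan-line offset at which the given row of the lower camera
        best matches the upper camera."""
        target = lower_frame[row].astype(np.int16)
        scores = []
        for k in range(MAX_LINE_OFFSET + 1):
            if row + k >= upper_frame.shape[0]:
                break
            candidate = upper_frame[row + k].astype(np.int16)
            scores.append(np.mean(np.abs(target - candidate)))
        return int(np.argmin(scores))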

CONVERGENCE DETECTION TECHNIQUES

The relative time of occurrence (within a horizontal scan) of corresponding video elements can be used to determine if the elements are within the limit of binocular fusion. The correspondence detector discussed in the previous paragraphs may be used to detect the corresponding video elements in the two stereo video signals. A device designed to detect when the corresponding video elements are within the limit of binocular fusion is the convergence detector. The convergence detector is designed to accept signals from the edge detectors and the signals from the correspondence detector. Referring to FIG 1C, it can be seen that when an object is closer to the cameras than the convergence point is to the cameras, right camera video from the object will precede the corresponding left camera video. Similarly, when an object is behind the circle of convergence points (FIG 1B), left camera video from the object will precede the right camera video as the image is scanned from left to right. A convergence detector measures the time that video elements from one camera lead or lag the corresponding video elements from the other camera. If the lead or lag time is less than some (experimentally determined) threshold, then the video elements are declared to be (adequately) converged for binocular fusion; otherwise they are determined to be outside the limit of binocular fusion.
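Expressed in samples along a scan line rather than in time, the convergence test reduces to a simple threshold comparison. The threshold of 5 samples below is an assumed stand-in for the experimentally determined value mentioned above.

    # Sketch of the convergence test: a corresponding element pair is converged
    # when the lead or lag between the two camera signals is below a threshold.
    FUSION_THRESHOLD_SAMPLES = 5   # assumed

    def within_fusion_limit(left_pos, right_pos, threshold=FUSION_THRESHOLD_SAMPLES):
        """True when the element pair at these scan positions can be binocularly fused."""
        return abs(left_pos - right_pos) <= threshold

    def classify_pairs(pairs):
        """Split (left_pos, right_pos) element pairs into converged and unconverged sets."""
        converged = [p for p in pairs if within_fusion_limit(*p)]
        unconverged = [p for p in pairs if not within_fusion_limit(*p)]
        return converged, unconverged

    if __name__ == "__main__":
        # Element F nearly aligned (small T1), element B well separated (large T2),
        # echoing the FIG 9 example.
        print(classify_pairs([(120, 123), (300, 340)]))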

CONVERGED VIDEO SELECTION TECHNIQUES

Converged video selection techniques can also be used to eliminate the limit of binocular fusion problem from three dimensional video. Such a system is presented in FIG 8, which shows two cameras 810 and 812 in accordance with the basic technique except that the switch 814 is operated differently; that is, in conjunction with a convergence detector 830.
Representative signals from the two cameras 810 and 812 of FIG 8 (with the cameras converged on object F) are shown in FIG 9. Correlation, correspondence and timing circuitry in the convergence detector 830 identifies the corresponding elements in the two video signals and measures the relative time of occurrence of the corresponding picture elements, shown in FIG 9 as time T1 and time T2. In this case, time T1 is within the time threshold for binocular fusion and time T2 is greater than the threshold. Therefore, the circuitry first passes video from the left camera, shown as switch output number 1 in FIG 9. A short time later the circuitry passes the video shown as switch output number 2, which is right camera video from object F (within the limit of binocular fusion) and left camera video from object B (since the right camera video would not be within the limit of binocular fusion). In this manner, the video for object F, which is within the limit of binocular fusion, is alternated at the previously established rate of 3 to 25 alternations per second, as appropriate. However, video from objects that would cause the binocular fusion problem, such as object B in the example of FIGS 8 and 9, is not alternated. In summary, only video that will be within the limit of binocular fusion will be alternated at the appropriate rate, for an appropriate portion of the horizontal scan, and for all scans in each frame. Other video is continuous from one of the cameras.
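The selection rule of FIGS 8 and 9 can be sketched per scan line as follows. Representing the convergence decision as a per-sample boolean mask is a simplification of the convergence detector 830, introduced only for illustration.

    import numpy as np

    # Sketch of converged video selection: where a sample is marked converged the
    # output alternates between the two cameras on successive exposures; elsewhere
    # the left camera is passed all the time.

    def select_output(left_line, right_line, converged_mask, show_right_partner):
        """Build one output scan line.

        show_right_partner -- True during the exposures in which the right-camera
        partner of converged elements is to be displayed."""
        left_line = np.asarray(left_line)
        right_line = np.asarray(right_line)
        out = left_line.copy()
        if show_right_partner:
            out[converged_mask] = right_line[converged_mask]
        return out

    # Over time the caller toggles show_right_partner at 3 to 25 times per second,
    # so converged elements (object F) alternate while unconverged elements
    # (object B) remain continuous left-camera video.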
Switching circuits to combine the two video signals are already available in various devices of the television industry, such as "special effects" products. Gated video amplifiers such as Motorola part number MC1445 also may be used.
One variation of converged video selection techniques is to use one "master" camera and several partner stereo cameras. For example, the first of three partner stereo cameras might be converged with the master camera on the foreground objects, the second camera is converged with the master camera on midground objects and a third camera with the master camera on background objects. Multiple sets of convergence detectors and switches would be required. The first set is connected to the master camera video and the first partner stereo camera and selects and alternates the foreground object video with the master camera video. The second set is connected to the master camera video and the second partner stereo camera and selects and alternates the midground object video. A third set would function similarly. The order of mixing or switching the videos is designed so that foreground object video replaces midground video that is overlapped (for example, background video is mixed or alternated first, then midground, then foreground). In this manner, several "layers" of video can be processed adequately for binocular fusion.
Another variation that may accomplish the same results as the above described multiple camera technique with only two cameras is as follows. The two cameras are converged on the foreground objects. The right camera is used as the master camera, and foreground object video from the left camera is alternated and combined with the right camera video as in the first converged video selection technique described above. The video from the left camera is delayed a short time (300 nanoseconds for example). This brings the left camera video from the midground or next "set" of objects into the proper time and position relationship (relative to the right camera video) required for binocular fusion. A second set of convergence detection and switching circuitry is used to switch the left camera midground object video with the right camera video. This sequence of delay, convergence detection and camera video alternation may be repeated several times for sets of objects at other ranges as the application requires.

CONVERGENCE PROCESSING TECHNIQUES

The previous paragraphs described methods for selecting and alternating corresponding video elements from objects that are within the limit of binocular fusion. The following paragraphs describe video processing techniques which time shift the video elements within the two stereo video signals such that all corresponding video elements are within the limit of binocular fusion.
Referring to FIG 10, the left video and right video signals 1010 and 1012 are produced in accordance with the basic technique described above. The correspondence detector 1014 functions as previously described, except that it generates control signals to control the left and right video processors 1016 and 1018. Video processors 1016 and 1018 time shift the various video elements in one or both of the stereo related video signals such that the various corresponding video elements will be in a converged binocular relationship to each other as required for binocular fusion. The two video signals may then be alternated in the previously described manner to produce the depth illusion. Time shifting can be accomplished using the analog shift register capabilities of analog devices (such as Fairchild Semiconductor part number CCD321A1 or similar devices), or the video signals can be digitized and time shifting or delays can be accomplished through the use of digital memory as intermediate storage.
An analog version of a convergence video processor may be implemented as follows. The two stereo cameras are converged on the background such that all corresponding background video elements from the two cameras are in time coincidence. Video elements in the right camera video from midground and foreground objects will precede corresponding elements in the left camera video. At the beginning of each horizontal scan, video from both cameras is paQsed undelayed. In this implementation, left camera video is always passed undelayed. Right camera video i8 also passed until the correspondence detector detects noncorrespondence between~the two video signals then the right camera video is delayed until its corresponding video element appears in the le~t camera video and then both signals are pa3sed.
When the right camera video is delayed, control signal 1022 controls the switch 1020 to fill in the right camera video with left camera video. The timing signal controls the switch to alternate the left camera video and processed right camera video in the previously described manner at 3 to 25 alternations per second.
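The passed/delayed/fill-in behaviour can be emulated digitally for a single scan line as sketched below; the 8-bit samples, the per-sample difference threshold used as the correspondence detector, and the reset of the delay at the start of each line are assumptions of this sketch, not details of the analog circuit.

import numpy as np

def converge_right_scanline(left, right, tol=4):
    # Cameras converged on the background: near-object video in the right signal
    # precedes its left counterpart, so the right signal is locally delayed and
    # the gap in the processed output is filled with left camera video.
    left16, right16 = left.astype(np.int16), right.astype(np.int16)
    out = np.empty_like(right)
    j = 0                                      # read pointer into the right signal (lags while delayed)
    for i in range(len(left)):                 # i is the undelayed output position
        if abs(int(right16[j]) - int(left16[i])) <= tol:
            out[i] = right[j]                  # correspondence: pass the (possibly delayed) right video
            j += 1
        else:
            out[i] = left[i]                   # right video is held back: fill in with left video
    return out

# The processed right line and the unmodified left line are then alternated at
# 3 to 25 of each image per second as described above.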
The three dimensional television system can also be implemented using a digital computer. Referring to FIG 11, video L and video R come from two stereo related cameras.


The video may be monochrome, or combinations of monochrome and color signals as the application requires. The timing system 1116 sends control signals to the analog to digital converters 1110 and 1112 to cause the video to be digitized at an appropriate rate for adequate signal reproduction (on the order of a few megahertz). The digitized signals from the A/D converters 1110 and 1112 are transferred to the input digital memory 1114, also under control of the timing system 1116. The sample resolution (number of bits) of the sampling process should be adequate to produce the desired effect. Computer software is designed to control the computer 1118 to examine each digitized video element individually and in sequence as required to identify elements of each object in the video scene. For example, a solid colored object would have equal video words for each adjacent video sample of that object. The equal words can be identified by the computer and the extent of the object determined in the horizontal and also the vertical position for both the left and right video cameras. When corresponding video objects from the two cameras are not in a converged video relationship, computer software is designed to horizontally shift one or both of the object's digital representations so that the object videos are converged. Background video from the other stereo picture can be used to "fill in" at the edges where the object was shifted. The computer 1118 can then output the samples to the output digital memory 1120 in the appropriate sequence to reproduce the two stereo related pictures at the previously described alternation rate of the basic technique. The digital video words are converted back to analog form by the D/A converter 1122. The D/A output can be filtered, synchronization signals can be reconstructed, and other signal processing techniques applied as appropriate. The result will be a stereo video picture with all objects in the picture having the appropriate alternating binocular disparity but with each corresponding binocular object properly converged so that binocular fusion can take place. Thereby, the image is reproduced having the illusion of depth but without the flicker and jump caused by the limit of binocular fusion problem.
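A much simplified, single-scan-line Python sketch of this computer processing is given below; the run-length test for "equal video words", the exact-match criterion for pairing objects, and the fill-in rule are assumptions made for illustration, and a practical implementation would match objects in two dimensions and with tolerances.

import numpy as np

def find_runs(line, min_len=3):
    # Return (start, length, value) for each run of equal samples -- this sketch's
    # stand-in for object detection on one scan line (a solid colored object gives
    # equal video words for adjacent samples).
    runs, start = [], 0
    for i in range(1, len(line) + 1):
        if i == len(line) or line[i] != line[start]:
            if i - start >= min_len:
                runs.append((start, i - start, int(line[start])))
            start = i
    return runs

def converge_scanline(left, right):
    # Shift each object found in the right line to the horizontal position of the
    # matching object (same value and length) in the left line, filling the vacated
    # samples from the partner (left) line.
    out = right.copy()
    left_runs = {(v, n): s for s, n, v in find_runs(left)}
    for s, n, v in find_runs(right):
        target = left_runs.get((v, n))
        if target is None or target == s:
            continue
        out[s:s + n] = left[s:s + n]           # fill in where the object used to be
        out[target:target + n] = v             # place the object at its converged position
    return out

# Usage: a solid object at sample 10 in the right line and sample 14 in the left line.
left = np.full(32, 7, np.uint8);  left[14:20] = 200
right = np.full(32, 7, np.uint8); right[10:16] = 200
print(converge_scanline(left, right))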
The above method may require that the stereo video be recorded so that it can be processed at a slower than real time rate. Special purpose circuitry with "hardwired" algorithms can provide for real time processing in some applications.
In convergence processing techniques, when shifting the positions of video objects, the size, shape, color and texture of the objects should be preserved. When shifting of foreground objects reveals hidden midground object edges in one of the two stereo views, that view may be filled in with its partner stereo elements. It is not essential that background elements be alternated, since no disparity is normally perceived for the background.
Although the implementations herein described are for alternating stereoscopic video systems, other systems such as those using colored or polarized glasses will be substantially improved when the video is processed as described herein to provide global convergence of the stereo images.


VERTICALLY SEPARATED STEREOSCOPIC CAMERAS

Visual perception response to the images produced by horizontally separated stereoscopic cameras is well known. The illusion of depth also results when the cameras are vertically separated. A three dimensional television system in accordance with the present invention and using vertically separated (horizontal scan) cameras may be implemented in the same manner as the system previously described for the system of FIG 2 except the cameras are positioned one above the other. Camera separation distance should be nearly the same as before for the horizontally separated cameras. It may be necessary to decrease the separation distance slightly since the visual perception system may have a smaller vertical disparity fusion capability than its horizontal disparity fusion capability. Switching of the two video signals is accomplished as before at the rate of 3 to 25 of each partner per second. The cameras may be focused and converged as before and the relative exposure times may be varied for optimal effects, applying the techniques of U.S. Patent 4,006,291. The vertical separation of cameras can also be applied to the technique of alternating interlaced fields as described in U.S. Patent 3,457,364.
When the cameras are vertically separated, corresponding video elements in the two stereo signals occur at the same point within their respective horizontal scan lines, but may occur on different scan lines.
A block diagram of a video convergence processing system designed for vertically separated cameras is shown in FIG 12A. With the cameras converged on the background, video elements from the lower camera will occur on the same scan line or some scan line above the scan line of the corresponding video element of the upper camera. The design of FIG 12A assumes that the geometry of the video scene is such that each video element is within five lines of its corresponding stereo video element, but the design can be expanded as the application requires. FIG 12B contains additional details of identical blocks 1216, 1218, 1220, 1222, 1224, and 1226 of FIG 12A.
The purpose of the system of FIG 12A is to replace each video element of the upper camera signal 1210 with its corresponding video element from the lower camera signal 1212 and then alternate the resulting signal with the upper camera signal 1210 (whose elements have not been replaced by elements of the signal 1212) at the previously described rate of 3 to 25 of each image per second. Globally converged binocular video will result. To accomplish this, the video signal from the upper camera 1210 and the video signal from the lower camera 1212 are input to the correlation detection and switching block 1216, details of which are shown in FIG 12B. Logical zeros are applied to both inputs of OR gate 1272 (for blocks 1216 and 1218 only). A timing signal 1264 clocks these signals into sample and hold circuits 1266 and 1268 and flipflop 1274.
If the signals out of the sample and hold circuits 1266 and 1268 are adequately correlated and the signal out of flipflop 1274 is a zero, then signal 1232 from the correlation detector 1276 causes the switch 1270 to pass the lower camera video from sample and hold circuit 1268.
Otherwise, the upper camera video is passed.

The purpose of the two inputs 1228 and 1230 to OR
gate 1272, in subsequent blocks 1220, 1222, 1224 and 1226, is (1) to prevent upper camera signal video elements that have already been replaced from being replaced again and (2) to prevent a lower camera signal video element from replacing more than one upper camera signal video element.

Signal 1232 generates the control bit to accomplish this in subsequent stages. When the correlation detector 1276 causes the switch 1270 to pass the lower camera video element, it also emits a logical 1 on signal 1232. Note that the output from block 1218 takes two paths, to block 1220 and shift register 1242. The path to block 1220 is to accomplish function (1) above, and the path to shift register 1242 is to accomplish function (2) above. The shift register 1242 provides an appropriate delay to keep the control bit synchronized with its associated video element as it is delayed by delay element 1244.

The CD/SW block 1216 will replace all background video elements. Block 1218 will replace video elements from objects just in front of the background. Note that block 1218 is connected to replace upper camera video with lower camera video from the adjacent line of the next interlaced field; hence, vertical delay block 1234 is necessary to delay the lower camera video by approximately 1/60 second.
Block 1220 replaces video elements from the next closer objects which occur in the next horizontal scan line. Horizontal delay element 1240 delays the lower camera video for one horizontal scan interval in order to bring the lower camera video into time coincidence with upper camera video two scan lines below it.

The process is repeated until upper camera video has been processed with lower camera video of the same horizontal scan line and the five horizontal lines above it.
Delay element 1254 delays the unmodified upper camera video an appropriate amount to synchronize the two signals into the switch 1256. Synchronization generator 1262 and controller 1260 provide signals to the switch 1256 to alternate the two videos at 3 to 25 of each image per second, and globally converged alternating binocular video results.
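A frame-based Python sketch of this replacement process follows; it works on stored frames rather than on interlaced analog video with line delays, and the correlation test, the parameterized five-line search range and the per-element bookkeeping flags (standing in for the OR-gate control bits of FIG 12B) are assumptions of the sketch.

import numpy as np

def build_converged_partner(upper, lower, max_lines=5, tol=4):
    # Each upper camera element is replaced by the first adequately correlated
    # lower camera element found in the same column, on the same row or up to
    # max_lines rows above it; each element takes part in at most one replacement.
    up16, lo16 = upper.astype(np.int16), lower.astype(np.int16)
    out = upper.copy()
    upper_done = np.zeros(upper.shape, bool)   # upper element already replaced
    lower_used = np.zeros(lower.shape, bool)   # lower element already consumed
    rows, cols = upper.shape
    for k in range(max_lines + 1):             # k = 0 is the same scan line, k = 1 one line above, ...
        for r in range(k, rows):
            for c in range(cols):
                if upper_done[r, c] or lower_used[r - k, c]:
                    continue
                if abs(int(lo16[r - k, c]) - int(up16[r, c])) <= tol:
                    out[r, c] = lower[r - k, c]
                    upper_done[r, c] = True
                    lower_used[r - k, c] = True
    return out

# The displayed sequence then alternates the unmodified upper frame and the
# converged partner frame at 3 to 25 of each image per second.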
The system of FIG 12A can be implemented in an analog form or the video signals can be digitized and the described system implemented via digital circuits. In the analog approach the delay elements can be implemented with the previously described Fairchild part number CCD321A1.
The digital approach uses digital memory as the delay elements.

VERTICALLY SCANNED HORIZONTALLY SEPARATED STEREOSCOPIC CAMERAS

The previously described vertically separated camera system is a particularly reliable and easily implementable method of producing globally converged stereoscopic video, and can be implemented with standard horizontally scanned cameras. However, the 3D effect from vertical disparity may not be as good as that from horizontal disparity. A similar system using horizontal disparity can be implemented with vertically scanned cameras. The system of FIG 12A will function properly as long as the camera separation is orthogonal to the camera scan direction.

A three dimensional television system in accordance with the present invention and employing vertically scanned TV cameras may be implemented as follows. The system should be in accordance with the system previously described for the system of FIG 2 except vertically scanned TV cameras are used. With cameras scanning from bottom to top, and interlaced scan lines sequencing from left to right, the left camera video should be connected to video input 1210 in FIG 12A. Right camera video should be connected to video input 1212. The system of FIG 12A functions as previously described.

The globally converged binocular video thus produced would yield a picture 90 degrees from upright on a standard broadcast system and standard TV receiver.
However, it can be converted to a compatible video signal with a scan conversion system consisting of an array of video storage elements such as Fairchild part number CCD321A1. Each scan line of video is temporarily stored in a storage element until a complete frame is stored. The video is scanned out, one video element from each scan line progressing left to right, such that one horizontal line of video is produced, the top line first, then the second line, then the third line, thus progressing until all video is output. Alternate video lines are delayed for approximately 1/60 second to provide for the standard interlaced fields.

It may be necessary to have two storage arrays so that video can be stored in one array while the previous frame is processed out of the other array.
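A Python sketch of this scan conversion is given below, with two-dimensional arrays standing in for the CCD storage arrays; the double buffering mirrors the two-array suggestion above, while the field splitting and the array shapes are assumptions of the sketch.

import numpy as np

class ScanConverter:
    # Double-buffered scan conversion for vertically scanned cameras.  Each
    # incoming "frame" is an array whose rows are vertical scan lines captured
    # bottom to top and sequenced left to right; the output is an upright frame
    # plus the two interlaced fields, the second displayed about 1/60 second later.
    def __init__(self):
        self.buffers = [None, None]            # two storage arrays
        self.write = 0                         # buffer currently being filled

    def store(self, vertical_frame):
        self.buffers[self.write] = np.asarray(vertical_frame)
        self.write ^= 1                        # the next frame fills the other array

    def read_out(self):
        v = self.buffers[self.write ^ 1]       # process the most recently stored frame
        upright = v.T[::-1, :]                 # one element from each scan line forms a horizontal line, top line first
        return upright, upright[0::2], upright[1::2]   # full frame, field 1, field 2

conv = ScanConverter()
conv.store(np.arange(12).reshape(4, 3))        # 4 vertical scan lines of 3 samples each
frame, field1, field2 = conv.read_out()
print(frame)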

The vertically scanned cameras can be high resolution cameras to assure that the video produced is of broadcast quality.
Fig. 13 illustrates implementation of a layered video system designed to produce globally converged binocular video. Such a system separately produces stereo video images of only foreground object 1310, which is located approximately equal distances from a pair of foreground cameras 1312 and 1314 and in front of a solid colored background 1316. Midground object 1318 is located approximately equal distances from a second pair of cameras 1320 and 1322 and in front of a solid color background 1324. A separate monocular video camera 1326 produces a monocular video image of only background objects 1328.
The video images coming from the background camera 1326, midground cameras 1320 and 1322 and foreground cameras 1312 and 1314 are combined under the control of electronic switches 1330 and 1332 and a synchronizer 1334 as described in the text associated with Fig. 2 above, such that the solid color background of the midground video from midground cameras 1320 and 1322 is replaced by corresponding portions of the monocular background objects video from background camera 1326, and the solid color background of the foreground video from foreground cameras 1312 and 1314 is replaced by corresponding portions of the combined midground and background video.
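For illustration, the field-by-field alternation of the two combined signals can be scheduled as in the short Python sketch below; the 60 field per second display rate and the equal display periods for the two signals are assumptions of the sketch (claim 2 below permits the second signal a shorter period).

def alternation_schedule(num_fields, field_rate=60.0, each_image_per_sec=10):
    # Label each displayed field 'L' or 'R' so that each combined (eye) signal is
    # shown each_image_per_sec times per second, within the 3 to 25 range.
    fields_per_showing = field_rate / (2 * each_image_per_sec)
    return ''.join('L' if int(f / fields_per_showing) % 2 == 0 else 'R'
                   for f in range(num_fields))

print(alternation_schedule(30))   # LLLRRRLLLRRR... -- 10 showings of each image per second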

Claims (2)

THE EMBODIMENTS OF THE INVENTION IN WHICH AN EXCLUSIVE
PROPERTY OR PRIVILEGE IS CLAIMED ARE DEFINED AS FOLLOWS:
1. A method of displaying a stereo pair of images to present a single, three dimensional, sharply focused, flicker-free image to human visual perception comprising the steps of:
(a) separately producing stereo video images of only foreground objects located approximately equal distances from a first pair of cameras against a solid color background,
(b) separately producing stereo video images of only midground objects located approximately equal distances from a second pair of cameras against a solid color background,
(c) separately producing monocular video of only background objects,
(d) combining the foreground, midground and background video images such that the solid color background of the midground video is replaced by corresponding portions of the monocular background objects video and the solid color background of the foreground video is replaced by corresponding portions of the combined midground and background video,
(e) displaying first the combined video signal from the background video and one camera of each of the pairs of foreground and midground cameras,
(f) displaying the combined video signal from the background video and the other cameras of the pairs of foreground and midground cameras in registration with the display of the first signal, in binocular relationship to the first signal and for approximately the same period as the first signal, and
(g) switching between display of the first and second signals at a rate between 3 and 25 of each image per second.
2. The method in accordance with Claim 1 further comprising the step of displaying the second signal for a substantially shorter period than display of the first signal.
CA000499829A 1986-01-17 1986-01-17 Three dimensional television system Expired - Lifetime CA1294361C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CA000499829A CA1294361C (en) 1986-01-17 1986-01-17 Three dimensional television system


Publications (1)

Publication Number Publication Date
CA1294361C true CA1294361C (en) 1992-01-14

Family

ID=4132295

Family Applications (1)

Application Number Title Priority Date Filing Date
CA000499829A Expired - Lifetime CA1294361C (en) 1986-01-17 1986-01-17 Three dimensional television system

Country Status (1)

Country Link
CA (1) CA1294361C (en)


Legal Events

Date Code Title Description
MKLA Lapsed