WO1995013564A1 - Method and apparatus for visualizing two-dimensional motion picture images in three dimensions - Google Patents


Info

Publication number
WO1995013564A1
Authority
WO
WIPO (PCT)
Prior art keywords
images
eye
frames
displaying
time
Prior art date
Application number
PCT/US1994/012863
Other languages
French (fr)
Inventor
Eric Martin White
Original Assignee
Eric Martin White
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Eric Martin White filed Critical Eric Martin White
Publication of WO1995013564A1 publication Critical patent/WO1995013564A1/en

Classifications

    • GPHYSICS
    • G03PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03BAPPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B35/00Stereoscopic photography
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/332Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
    • H04N13/344Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/189Recording image signals; Reproducing recorded image signals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/204Image signal generators using stereoscopic image cameras
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/261Image signal generators with monoscopic-to-stereoscopic image conversion
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/286Image signal generators having separate monoscopic and stereoscopic modes
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/332Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
    • H04N13/337Displays for viewing with the aid of special glasses or head-mounted displays [HMD] using polarisation multiplexing

Definitions

  • This invention relates, in general, to methods and apparatus for visualizing two-dimensional motion picture images in stereopsis or three dimensions; and, in particular, to methods and apparatus employing time-delay buffering for displaying different image frames to right and left eyes during projection, in order to supply binocular disparity depth perception cues.
  • motion picture refers to a series of related images recorded on successive fields or frames on film, videotape, magnetic disk or other recording medium which, when shown in sequence, impart an impression of motion.
  • two-dimensional motion picture refers to a motion picture in which each frame corresponds to a view of the subject at a different instant in time, as seen from a single vantage point, without the benefit of binocular views simulative of the different views seen by left and right eyes viewing the same subject at the same time.
  • "motion picture" and "two-dimensional motion picture" are intended to encompass a continuum of images, such as optical fiber screen projections of live images, wherein the term "frame" can be thought of as the state of the broadcast or received image at a particular frozen moment in time.
  • the human brain relies on and responds to various visual cues and stimuli to develop depth perception. Some of these cues are present in two-dimensional images. Others are traditionally only present in what are referred to as three-dimensional images. An illusion of depth in a scene portrayed in two-dimensional perspective can be realized from depth perception cues such as linear perspective, structure and size of familiar objects, occlusion, shading and shadows, and relative motion. True three dimensional depth perception, however, requires additional cues, such as accommodation, convergence, and binocular disparity.
  • Accommodation is the muscular tension needed to adjust the focal length of the crystalline lens in the eye in order to focus on an object in space.
  • Convergence refers to the muscular tension for rotating each eye when focusing on an object in space.
  • the angle between the two rays from each eye to the object is usually referred to as the convergence angle.
  • Binocular disparity is the horizontal shift present when an observer with two eyes looks at a scene and the images formed at the back of each eye for any particular point in the scene differ in vantage point or field of view due to the spacing distance between left and right eyes. This phenomenon is sometimes referred to as parallax offset.
  • Binocular disparity occurs because the eyes are separated by a horizontal distance of approximately 2.5 inches, known as the interocular distance. When an observer views an object, the object will be projected to different points in the left and right eyes. The distance between these two points is known as the parallax offset.
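The interocular geometry described above can be sketched numerically. The 2.5-inch interocular distance is from the text; the viewing distances below, and the function name, are illustrative assumptions.

```python
import math

INTEROCULAR_IN = 2.5  # interocular distance cited in the text, in inches

def convergence_angle_deg(distance_in):
    """Convergence angle (degrees) between the two eye-to-object rays
    for an object centered at the given viewing distance in inches."""
    return math.degrees(2 * math.atan(INTEROCULAR_IN / (2 * distance_in)))

# Illustrative distances: 1 ft, 5 ft, 20 ft
for d in (12, 60, 240):
    print(f"{d:4d} in -> {convergence_angle_deg(d):5.2f} deg")
```

The angle, and with it the disparity between homologous points, falls off quickly with distance, which is consistent with binocular cues mattering most for nearby objects.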
  • Corresponding points in the left and right eye perspective views of an image are known as homologous points.
  • the brain operates to fuse the shifted separate left and right eye views into a single merged image. Once image fusion occurs, a reasonable amount of image separation can be tolerated without loss of fusion. See Fender, D.G. and Julesz, B., "Extension of Panum's Fusional Area in Binocularly Stabilized Vision," J. Opt. Soc. Am. 57 (1967), 819-830.
  • One method utilizes specially designed cameras for recording two separate images simultaneously of the same scene, with fields of views slightly spaced apart to simulate the separate vantage points of right and left eyes.
  • the separate views are then projected through appropriate filtering (complementary color filters or polarized light), alternating odd-even frames, or other encoding means onto a common viewing screen from which they can be separately viewed by special glasses or other decoding means acting between the screen and the eyes.
  • the separate views can be individually projected to different screens for direct separate independent viewing by left and right eyes.
  • An example of the latter technique is the virtual reality headmount display, wherein separate left and right views are delivered to two small video displays, positioned one in front of each eye.
  • a typical three-dimensional motion picture reconstruction process is described in Geschwind et al. U.S. Patent No. 4,925,294.
  • pairs of left- and right-eye binocular images are derived from standard two-dimensional motion picture film or videotape, so as to exhibit at least some three-dimensional binocular disparity cues when projected using conventional three-dimensional exhibition or transmission systems.
  • Separation of a single, two-dimensional image stream into diverse parallax offset shifted views is accomplished, one frame at a time, by a computer-assisted, human system.
  • Depth information is assigned to various image elements by human decision and/or computer analysis, and the missing parallax offset view constructed by corresponding shifting of foreground elements relative to background.
  • missing information or holes are either obtained from earlier or later frames, extrapolated from adjacent areas in the same frame, or newly synthesized. Such a process is tedious and time-consuming.
  • normal depth perception cues present in two-dimensional images of moving objects are augmented by time-delay frame buffering, whereby one frame of the two-dimensional sequence is shown to one eye, while another time-shifted frame is shown to the other eye.
  • moving objects will be viewed in slightly different positions by the two eyes, and background portions blocked for one eye can be viewed by the other, thereby simulating the parallax offset needed for three-dimensional viewing.
  • the invention provides simple, inexpensive, instant "real-time" visualization in three dimensions of any moving objects displayed in time sequential frames on two-dimensional media.
  • the method of the invention does not require any pre-processing or special broadcasting considerations at the time of image production, and can produce the three-dimensional effect from any two-dimensional image source that includes motion in a sequence of images. All that is needed is simple, real-time image buffering for viewing in a conventional stereoscopic viewing environment.
  • frame input from a two-dimensional motion picture source is captured and stored into a frame buffer for later retrieval.
  • two buffered frames are displayed to the observer.
  • One eye sees the newest buffered frame and the other eye sees a time-delayed buffered image.
  • This visual "echo" procedure produces the binocular disparity that is missing from the two-dimensional recording.
  • Additional image processing (adjustments in brightness, contrast, focus and eye dominance) is used to enhance the three-dimensional effect and minimize unwanted effects.
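The buffering procedure summarized above can be sketched as a simple generator: one eye is shown the newest buffered frame, the other a frame captured a fixed number of steps earlier. The function name and symbolic frame labels are illustrative, not from the patent.

```python
from collections import deque

def stereo_pairs(frames, delay=2):
    """Yield (newest, delayed) frame pairs from a 2-D frame sequence:
    one eye sees the newest buffered frame, the other eye the frame
    captured `delay` steps earlier (the visual "echo")."""
    buf = deque(maxlen=delay + 1)
    for frame in frames:
        buf.append(frame)
        if len(buf) == delay + 1:      # enough frames buffered
            yield buf[-1], buf[0]      # (newest, time-delayed)

# Symbolic frames N, N+1, ... with a two-frame offset:
for newest, delayed in stereo_pairs(["N", "N+1", "N+2", "N+3"], delay=2):
    print(newest, "-> one eye;", delayed, "-> other eye")
```

Repeating the loop over the whole sequence yields the continuous stereo presentation the text describes.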
  • FIGS. 1A and 1B are schematic representations of images of a scene as viewed by left and right eyes, respectively;
  • FIG. 2 is a schematic representation of a two dimensional motion picture representation of the same scene shown in FIGS. 1A-1B;
  • FIG. 3 is a schematic view of an apparatus in accordance with the invention, for projecting the two dimensional motion picture of FIG. 2 for visualization in three dimensions;
  • FIG. 4 is a block diagram of another apparatus for displaying a three dimensional visualization;
  • FIG. 5 is a flow diagram helpful in understanding the operation of the apparatus of FIG. 4.
  • Illustrative embodiments of the inventive apparatus and method for visualizing two-dimensional moving images in three dimensions are described with reference to FIGS. 1A-5.
  • the human vision system perceives three-dimensional images by using the stereooptic effect created when each eye views a different view of the same scene at the same time. These views differ because of the horizontal distance between the viewer's eyes, giving each eye a slightly different field of view or perspective.
  • FIGS. 1A and 1B show the respective views seen by left and right eyes of an exemplary scene having relatively movable foreground and background objects.
  • the foreground object is a car 10 which is traveling from right to left, but is frozen in its position at a particular time t.
  • the background comprises a hill and cloud composition 12. Because of horizontal parallax, the foreground object 10 appears in the left-eye view (FIG. 1A) slightly shifted to the right relative to the background 12, compared to the same object 10 in the right-eye view (FIG. 1B).
  • This binocular disparity cues the brain to provide the depth perception needed to visualize the scene in three dimensions.
  • Prior art systems, either at the time of initial production (two-camera systems) or through reconstruction (e.g., Geschwind, et al., U.S. Patent No. 4,925,294), prerecord dual images in two-dimensional motion picture sequences, with separate left and right eye views for each frame or recorded instant of time.
  • the inventive method provides the binocular disparity information, without the need for such dual image redundancy, directly from the single-image per single-instant-of-time original two-dimensional motion picture.
  • binocular disparity is most apparent for relative movement in the horizontal direction.
  • Parallax cues are not relied on to the same extent for movements in the vertical direction or movements normal to the plane of the screen.
  • the brain relies on traditional two-dimensional cues (shading, shadows, etc.).
  • three-dimensional visualization can be achieved provided at least some binocular disparity cuing is provided for relative horizontal movements.
  • This entails a right shift of foreground object 10 for the left eye (or left shift for the right eye), and slightly different non-overlap portions visible of background 12 (viz., a portion of background 12 visible ahead of car 10 in FIG. 1A is blocked in FIG. 1B; and a portion of background visible behind car 10 in FIG. 1B is blocked in FIG. 1A).
  • the invention supplies the horizontal shift and object blocking disparity information continuously and directly from the two-dimensional motion picture itself.
  • FIG. 2 illustrates a motion picture sequence recorded using conventional two-dimensional recording techniques, of the same scene shown in FIGS. 1A and IB.
  • the illustrated recording medium may be a conventional motion picture film or videotape recording 14 having a plurality of successive frames N, N + 1, N + 2, etc. corresponding to two-dimensional image representations of the scene recorded at respective successive times t, t + 1, t + 2, etc.
  • the motion picture 14 was recorded using traditional single field-of-view camera techniques, with only one image frame recorded for each moment of time.
  • frame N may, for example, represent the FIG. 1A view seen by the left eye at time t, but no simultaneous recording was made at the same time t of the right eye view (FIG. 1B).
  • Car 10 is, however, moving horizontally from right to left relative to background 12, and this horizontal shift with time is recorded on the successive frames N, N + 1, N + 2.
  • Frame N + 1 thus shows car 10 shifted to the left relative to background 12 compared to the position of car 10 in frame N; and frame N + 2 shows car 10 shifted to the left relative to background 12 compared to its position in frame N + 1.
  • a later frame (e.g., frame N + 2, taken at time t + 2) can be used to provide a view that simulates the horizontally shifted view of the same scene which would have been seen by the second eye, if recorded at the earlier time t.
  • the invention utilizes this observation, to create the stereooptic effect from any traditional two-dimensional sequential frame media, sending one eye the current frame being viewed and the other eye a different frame, either a previous frame that has already been seen by the first eye, or a new frame that has not yet been seen.
  • the offset between the two frames being displayed may be adjusted to tune the three- dimensional effect.
  • the audio offset may also be adjusted to synchronize with either frame, or between them to create audio synchronization.
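One way to picture this tuning: a frame offset translates into a fixed time delay between the two eye views, and the audio can be synchronized anywhere within that interval. The 24 fps rate and function name below are illustrative assumptions, not specified in the text.

```python
def view_and_audio_delay(frame_offset, fps=24.0, audio_fraction=0.5):
    """Return (eye_delay_s, audio_delay_s) for a given frame offset.
    audio_fraction=0.0 syncs the audio to the newest frame, 1.0 to the
    delayed frame, and 0.5 places it midway between the two views."""
    eye_delay = frame_offset / fps
    return eye_delay, eye_delay * audio_fraction

eye_s, audio_s = view_and_audio_delay(3)  # 3-frame offset at 24 fps
print(eye_s, audio_s)                     # 0.125 0.0625
```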
  • a first exemplary motion picture projection apparatus 16 for practicing the inventive method is illustrated schematically in FIG. 3.
  • a frame N from a series of frames N, N + 1, N + 2, etc. of a reel of film 14 is illuminated by a light source 17 to project the two-dimensional image of frame N through a lens 18 and polarizing filter 19, onto a planar surface of a viewing screen or wall 20.
  • a second frame (frame N + i, offset by a predetermined number of frames i in one or the other direction from frame N) of the same film 14 is illuminated by the same or a different light source 22 to project the two-dimensional image of frame N + i through a lens 23 and polarizing filter 24 onto the same screen or wall 20.
  • Filter 19 acts to polarize the image of frame N in one polarization direction, while filter 24 acts to polarize the image of frame N + i in another polarization direction, perpendicular to the polarization direction of filter 19.
  • the lenses 18 and 23 are dimensioned, configured and adapted to merge the two projections into superposed combined polarized images 25 on screen 20.
  • the audience views the combined images 25 through polarized viewing glasses 26, having left and right polarizing filter lenses 27, 28, whose polarizations match those of the projected images, so that the image 29 projected through lens 18 and filter 19 is seen by the left eye, and the image 30 projected through lens 23 and filter 24 is seen by the right eye.
  • This simultaneous projection of spaced frames N and N + i is repeated in synchronization with timed shutters, strobes or the like, for each successive frame of film 14, until the total length of the sequence of images has been viewed.
  • An alternative, two-screen implementation of apparatus 16' for practicing the inventive method is illustrated in FIG. 4.
  • a video signal from a TV, laser disc, videotape, computer, video camera, or other device capable of producing a sequence 14 of video image frames or fields N, N + 1, N + 2, etc. is input into a video signal digitizer 32 for conversion from analog to digital representations of each image frame.
  • the digital image frame F created by the video signal digitizer 32 from each analog image frame is in turn stored in a first buffer 34a of a RAM (random-access memory) buffer storage element 33.
  • Element 33 has a plurality of RAM buffers 34 equal to the maximum number of digitized frames F1, F2, F3, F4 that are to be captured during the time offset.
  • In the illustrated embodiment, there are four buffers 34 that "rotate" (shift data contents) one buffer position counterclockwise before each frame is captured. This ensures that the contents of the right view output buffer 34a will always be the most recent frame F4 (corresponding to the digital frame F offset by i frames from previously stored digital frame F1) received from the video signal digitizer 32, and the left view output buffer 34b will always have the earliest frame F1 received from the video signal digitizer 32.
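The four-buffer rotation can be modeled as a small ring of slots; the class and attribute names below are illustrative stand-ins for elements 33, 34a and 34b, and the frame data is symbolic.

```python
class RotatingBuffers:
    """Models buffer element 33: before each capture the contents shift
    one slot, so the last slot (buffer 34a) always holds the newest
    frame and the first slot (buffer 34b) the earliest retained one."""
    def __init__(self, n=4):
        self.slots = [None] * n

    def capture(self, frame):
        self.slots = self.slots[1:] + [frame]  # rotate, store newest last

    @property
    def right_view(self):   # newest frame (34a)
        return self.slots[-1]

    @property
    def left_view(self):    # earliest buffered frame (34b)
        return self.slots[0]

bank = RotatingBuffers()
for f in ("F1", "F2", "F3", "F4", "F5"):
    bank.capture(f)
print(bank.right_view, bank.left_view)  # F5 F2: a three-frame offset
```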
  • the output from left view output buffer 34b is sent to a left view image processor 36 where adjustments to the left view image such as brightness, contrast, saturation, etc. can be made.
  • the output from right view output buffer 34a is sent to a right view image processor 37 where like adjustments to the right view image can be made.
  • This optional adjustment step is advantageous for improving the depth perceived by the viewer, by allowing one view to be enhanced or diminished relative to the other to assist eye dominance during image fusion by the brain.
  • the final left and right view digital images are then converted back to video signals by the output interfaces 38, 39, which provide respective inputs to left and right LCD (liquid crystal display) displays 40, 41 of a head-mounted stereoscopic viewing device 43.
  • the three-dimensional visualization projection process can be controlled by a microprocessor in accordance with the flow diagram given in FIG. 5.
  • Left or right eye dominance preference can be manually set at a switch 44, and input at step 45.
  • the input frame delay offset D corresponding to the incremental shift i in number of frames between left and right eye viewing, can likewise be manually selected at a switch 46, and input at step 47.
  • the contents of each buffer 34 are copied into a next buffer 34 at 49.
  • the selection of D sets which buffer 34 will be used for the time-delayed view.
  • the image of a current frame of a sequential frame image source 14 is captured and stored into the first buffer 34a.
  • the current frame stored in buffer 34a and time-delayed frame stored in buffer 34b are then sent respectively through the right view and left view processors 37, 36.
  • One or both of the frames is then acted upon in accordance with settings at switches 52, 53, 54 which set the adjustments to, e.g., focus, saturation and brightness to the image for enhancement or degradation of either left or right image in accordance with selected eye dominance.
  • the left and right view image signals are then transported for display to the viewing device 43, in accordance with the eye dominance setting. The process is then advanced to the next frame, at 58, and repeated.
  • FIG. 5 illustrates the process wherein only the delayed image frame is acted upon in accordance with switch settings 52, 53, 54. And, at 57, a decision is made in accordance with the setting of switch 44 as to which eye display (left or right) the acted upon frame will be sent. If the right eye is selected as dominant, the contents of buffer 34a are sent to the right eye display 40, and the processed contents of buffer 34b are sent to the left eye display 41. However, if the left eye is selected as dominant, the images are switched, with the contents of buffer 34a going to the left eye and the processed contents of buffer 34b going to the right eye. Such switching enables the left and right eye images to be switched according to preference, and eliminates the need for one of the processors 36, 37.
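The dominance switch described above amounts to a two-way routing decision. The sketch below uses illustrative names and follows the text's convention that the dominant eye receives the unprocessed current frame while the other eye receives the processed, time-delayed frame.

```python
def route_views(current, processed_delayed, dominant="right"):
    """Return (left_display, right_display). The dominant eye gets the
    current frame (buffer 34a); the other eye gets the processed,
    time-delayed frame (buffer 34b)."""
    if dominant == "right":
        return processed_delayed, current
    return current, processed_delayed

left, right = route_views("frame N+i", "processed frame N", dominant="left")
print(left, right)
```

Because only one frame is ever processed, a single image processor suffices, as the text notes.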
  • left and right dominance selection, input frame delay offset, and non-dominant image processing can be varied to suit individual preferences and specific object motion encountered in a particular motion picture.
  • one sequence of motion can be viewed using one set of parameter settings, and another sequence can be viewed using a different set of parameter settings.
  • the described circuitry can be optionally configured to record the parameter settings during an initial viewing, for subsequent automatic setting playback during a subsequent viewing. In this way, preferred settings for different pictures, different scenes of the same picture, or different viewer preferences can be preprogrammed.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A two-dimensional motion picture is visualized in three dimensions by displaying current frame images continuously in sequence to one eye and time-offset frame images continuously in sequence to the other eye, so that binocular disparity is provided by horizontal offset of moving objects between the frames. In one embodiment, a projector (16) projects images (29, 30) from different frame positions onto a screen (20) through separate lenses (18, 23) and mutually perpendicular polarizing filters (19, 24), and viewers use polarizing 3-D glasses (26) to see images from one frame position with one eye and images from the other frame position with the other eye. In a second embodiment, frames are digitized (32) and stored in serially cascaded buffers (34). One eye view is obtained from the image captured from the current frame, while the other eye view is obtained from one of the stored frame images. Frame offset is varied by changing the buffer from which the stored frame image is read. To improve visualization, one eye image is enhanced or diminished relative to the other, based on selection of eye dominance.

Description

METHOD AND APPARATUS FOR VISUALIZING TWO-DIMENSIONAL MOTION PICTURE IMAGES IN THREE DIMENSIONS
This invention relates, in general, to methods and apparatus for visualizing two-dimensional motion picture images in stereopsis or three dimensions; and, in particular, to methods and apparatus employing time-delay buffering for displaying different image frames to right and left eyes during projection, in order to supply binocular disparity depth perception cues.
BACKGROUND OF THE INVENTION
The term "motion picture" as used herein refers to a series of related images recorded on successive fields or frames on film, videotape, magnetic disk or other recording medium which, when shown in sequence, impart an impression of motion. The term "two-dimensional motion picture" refers to a motion picture in which each frame corresponds to a view of the subject at a different instant in time, as seen from a single vantage point, without the benefit of binocular views simulative of the different views seen by left and right eyes viewing the same subject at the same time. The terms "motion picture" and "two-dimensional motion picture" are intended to encompass a continuum of images, such as optical fiber screen projections of live images, wherein the term "frame" can be thought of as the state of the broadcast or received image at a particular frozen moment in time.
The human brain relies on and responds to various visual cues and stimuli to develop depth perception. Some of these cues are present in two-dimensional images. Others are traditionally only present in what are referred to as three-dimensional images. An illusion of depth in a scene portrayed in two-dimensional perspective can be realized from depth perception cues such as linear perspective, structure and size of familiar objects, occlusion, shading and shadows, and relative motion. True three dimensional depth perception, however, requires additional cues, such as accommodation, convergence, and binocular disparity.
Accommodation is the muscular tension needed to adjust the focal length of the crystalline lens in the eye in order to focus on an object in space. Convergence refers to the muscular tension for rotating each eye when focusing on an object in space. The angle between the two rays from each eye to the object is usually referred to as the convergence angle. Binocular disparity is the horizontal shift present when an observer with two eyes looks at a scene and the images formed at the back of each eye for any particular point in the scene differ in vantage point or field of view due to the spacing distance between left and right eyes. This phenomenon is sometimes referred to as parallax offset.
It is generally accepted that accommodation and convergence are less important depth cues than binocular disparity. See, e.g., Ittelson, W.H., Visual Space Perception, Springer Publishing Co., New York, 1960; and Julesz, B., Foundations of Cyclopean Perception, University of Chicago Press, Chicago, 1971. Thus, when presenting two dimensional motion pictures for three dimensional visualization, it is essential to be able to provide at least some modicum of binocular disparity cuing. Binocular disparity occurs because the eyes are separated by a horizontal distance of approximately 2.5 inches, known as the interocular distance. When an observer views an object, the object will be projected to different points in the left and right eyes. The distance between these two points is known as the parallax offset. Corresponding points in the left and right eye perspective views of an image are known as homologous points. The brain operates to fuse the shifted separate left and right eye views into a single merged image. Once image fusion occurs, a reasonable amount of image separation can be tolerated without loss of fusion. See Fender, D.G. and Julesz, B., "Extension of Panum's Fusional Area in Binocularly Stabilized Vision," J. Opt. Soc. Am. 57 (1967), 819-830. There exist several conventional approaches for visualizing motion pictures in three dimensions. One method utilizes specially designed cameras for recording two separate images simultaneously of the same scene, with fields of view slightly spaced apart to simulate the separate vantage points of right and left eyes. The separate views are then projected through appropriate filtering (complementary color filters or polarized light), alternating odd-even frames, or other encoding means onto a common viewing screen from which they can be separately viewed by special glasses or other decoding means acting between the screen and the eyes.
Alternatively, the separate views can be individually projected to different screens for direct separate independent viewing by left and right eyes. An example of the latter technique is the virtual reality headmount display, wherein separate left and right views are delivered to two small video displays, positioned one in front of each eye. The problem with two-image-at-a-time recording for three dimensional visualization is that it is expensive, requires special equipment, and requires prior intent at the time the motion picture is produced. The majority of existing motion pictures are recorded with two-dimensional projection in mind, so have only one vantage point or field of view frame for each moment in time. Similar to efforts at providing colorization to previously recorded black and white films, there exist techniques for post production processing of two-dimensional motion pictures to reconstruct separate left- and right-eye images for each frame for stereoscopic visualization. Such techniques are, however, very expensive, requiring human intervention and tedious frame-by-frame creation of the missing parallax offset view.
A typical three-dimensional motion picture reconstruction process is described in Geschwind et al. U.S. Patent No. 4,925,294. In Geschwind, pairs of left- and right-eye binocular images are derived from standard two-dimensional motion picture film or videotape, so as to exhibit at least some three-dimensional binocular disparity cues when projected using conventional three-dimensional exhibition or transmission systems. Separation of a single, two-dimensional image stream into diverse parallax offset shifted views is accomplished, one frame at a time, by a computer-assisted, human system. Depth information is assigned to various image elements by human decision and/or computer analysis, and the missing parallax offset view constructed by corresponding shifting of foreground elements relative to background. As image elements are separated from background scenes or each other, missing information or holes are either obtained from earlier or later frames, extrapolated from adjacent areas in the same frame, or newly synthesized. Such a process is tedious and time-consuming.
SUMMARY OF THE INVENTION
It is an object of the present invention to provide methods and apparatus for visualization of two-dimensional motion pictures directly in three dimensions, without the need to first reconstruct a three-dimensional two-image-per-instant-of-time sequence.
In accordance with the invention, normal depth perception cues present in two-dimensional images of moving objects are augmented by time-delay frame buffering, whereby one frame of the two-dimensional sequence is shown to one eye, while another time-shifted frame is shown to the other eye. In this manner, moving objects will be viewed in slightly different positions by the two eyes, and background portions blocked for one eye can be viewed by the other, thereby simulating the parallax offset needed for three-dimensional viewing.
The invention provides simple, inexpensive, instant "real-time" visualization in three dimensions of any moving objects displayed in time sequential frames on two-dimensional media.
The method of the invention does not require any pre-processing or special broadcasting considerations at the time of image production, and can produce the three-dimensional effect from any two-dimensional image source that includes motion in a sequence of images. All that is needed is simple, real-time image buffering for viewing in a conventional stereoscopic viewing environment.
In accordance with a preferred embodiment of the invention, described in greater detail below, frame input from a two-dimensional motion picture source is captured and stored into a frame buffer for later retrieval. After a specified number of frames have been buffered, two buffered frames are displayed to the observer. One eye sees the newest buffered frame and the other eye sees a time-delayed buffered image. This visual "echo" procedure produces the binocular disparity that is missing from the two-dimensional recording. By repeating this process continuously, the two-dimensional source material is visualized in three dimensions.
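To make the buffering scheme concrete, the visual "echo" of the preferred embodiment can be modeled in a few lines of Python. This is an illustrative sketch only, not part of the disclosure: the `EchoBuffer` name, the use of strings as stand-ins for image frames, and the fixed-size queue are all assumptions.

```python
from collections import deque

class EchoBuffer:
    """Time-delay frame buffer: the newest frame goes to one eye,
    a frame delayed by `offset` frames goes to the other eye."""

    def __init__(self, offset):
        self.offset = offset
        # Holds the newest frame plus the `offset` frames before it.
        self.frames = deque(maxlen=offset + 1)

    def push(self, frame):
        """Capture the next 2-D source frame; return the stereo pair
        (newest, time-delayed) once enough frames are buffered, else None."""
        self.frames.append(frame)
        if len(self.frames) <= self.offset:
            return None  # still filling the buffer
        return self.frames[-1], self.frames[0]

# Feed frames N, N+1, ... with a 2-frame offset:
buf = EchoBuffer(offset=2)
for n in range(5):
    pair = buf.push(f"frame {n}")
    if pair:
        print(pair)  # first printed pair: ('frame 2', 'frame 0')
```

The `offset` parameter plays the role of the frame delay between the two eyes; as the description notes, tuning it tunes the strength of the simulated parallax.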
Additional image processing (adjustments in brightness, contrast, focus and eye dominance) is used to enhance the three-dimensional effect and minimize unwanted effects.
BRIEF DESCRIPTION OF THE DRAWINGS
Embodiments of the invention have been chosen for purposes of illustration and description, and are shown in the accompanying drawings, wherein:

FIGS. 1A and 1B are schematic representations of images of a scene as viewed by left and right eyes, respectively;
FIG. 2 is a schematic representation of a two-dimensional motion picture representation of the same scene shown in FIGS. 1A-1B;
FIG. 3 is a schematic view of an apparatus in accordance with the invention, for projecting the two-dimensional motion picture of FIG. 2 for visualization in three dimensions;

FIG. 4 is a block diagram of another apparatus for displaying a three-dimensional visualization; and
FIG. 5 is a flow diagram helpful in understanding the operation of the apparatus of FIG. 4.
Throughout the drawings, like elements are referred to by like numerals.
DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS
Illustrative embodiments of the inventive apparatus and method for visualizing two-dimensional moving images in three dimensions are described with reference to FIGS. 1A-5.
The human vision system perceives three-dimensional images by using the stereooptic effect created when each eye views a different view of the same scene at the same time. These views differ because of the horizontal distance between the viewer's eyes, giving each eye a slightly different field of view or perspective.
FIGS. 1A and 1B show the respective views seen by left and right eyes of an exemplary scene having relatively movable foreground and background objects. The foreground object is a car 10 which is traveling from right to left, but is frozen in its position at a particular time t. The background comprises a hill and cloud composition 12. Because of horizontal parallax, the foreground object 10 appears in the left-eye view (FIG. 1A) slightly shifted to the right relative to the background 12, compared to the same object 10 in the right-eye view (FIG. 1B). This binocular disparity cues the brain to provide the depth perception needed to visualize the scene in three dimensions. Prior art systems, either at time of initial production (two-camera systems) or through reconstruction (e.g., Geschwind et al., U.S. Patent No. 4,925,294), prerecord dual images in two-dimensional motion picture sequences, with separate left- and right-eye views for each frame or recorded instant of time. The inventive method provides the binocular disparity information, without the need for such dual image redundancy, directly from the single-image-per-single-instant-of-time original two-dimensional motion picture.
It is noted that binocular disparity is most apparent for relative movement in the horizontal direction. Parallax cues are not relied on to the same extent for movements in the vertical direction or movements normal to the plane of the screen. For such movements, the brain relies on traditional two-dimensional cues (shading, shadows, etc.). Thus, three-dimensional visualization can be achieved provided at least some binocular disparity cuing is provided for relative horizontal movements. This entails a right shift of foreground object 10 for the left eye (or left shift for the right eye), and slightly different non-overlapping portions of background 12 being visible (viz., a portion of background 12 visible ahead of car 10 in FIG. 1A is blocked in FIG. 1B; and a portion of background 12 visible behind car 10 in FIG. 1B is blocked in FIG. 1A). In departure from conventional three-dimensional image visualization systems, the invention supplies the horizontal shift and object blocking disparity information continuously and directly from the two-dimensional motion picture itself.
FIG. 2 illustrates a motion picture sequence recorded using conventional two-dimensional recording techniques, of the same scene shown in FIGS. 1A and 1B. The illustrated recording medium may be a conventional motion picture film or videotape recording 14 having a plurality of successive frames N, N + 1, N + 2, etc. corresponding to two-dimensional image representations of the scene recorded at respective successive times t, t + 1, t + 2, etc. The motion picture 14 was recorded using traditional single field-of-view camera techniques, with only one image frame recorded for each moment of time. Thus, frame N may, for example, represent the FIG. 1A view seen by the left eye at time t, but no simultaneous recording was made at the same time t of the right-eye view (FIG. 1B). Car 10 is, however, moving horizontally from right to left relative to background 12, and this horizontal shift with time is recorded on the successive frames N, N + 1, N + 2. Frame N + 1 thus shows car 10 shifted to the left relative to background 12 compared to the position of car 10 in frame N; and frame N + 2 shows car 10 shifted to the left relative to background 12 compared to its position in frame N + 1. In each case, because car 10 has moved, a portion of background 12 which was blocked by car 10 in one frame becomes visible in the next. Thus, though not taken with a second camera at the same instant of time t, a later frame (e.g., frame N + 2 taken at time t + 2) can be used to provide a view that simulates the horizontally shifted view of the same scene which would have been seen by the second eye, if recorded at the earlier time t. The invention utilizes this observation to create the stereooptic effect from any traditional two-dimensional sequential frame media, sending one eye the current frame being viewed and the other eye a different frame, either a previous frame that has already been seen by the first eye, or a new frame that has not yet been seen.
The offset between the two frames being displayed may be adjusted to tune the three-dimensional effect. The accompanying audio may also be offset to synchronize with either frame, or set between the two. The result of using the invention to display two-dimensional motion picture recordings is that each eye receives different images containing visual cues that, when combined, create a three-dimensional effect. This effect can be used to create a static or dynamic display of offset images. A first exemplary motion picture projection apparatus 16 for practicing the inventive method is illustrated schematically in FIG. 3. In accordance with the invention, a frame N from a series of frames N, N + 1, N + 2, etc. of a reel of film 14 is illuminated by a light source 17 to project the two-dimensional image of frame N through a lens 18 and polarizing filter 19, onto a planar surface of a viewing screen or wall 20. Simultaneously, a second frame (frame N + i, offset by a predetermined number of frames i in one or the other direction from frame N) of the same film 14 is illuminated by the same or a different light source 22 to project the two-dimensional image of frame N + i through a lens 23 and polarizing filter 24 onto the same screen or wall 20. Filter 19 acts to polarize the image of frame N in one polarization direction, and filter 24 acts to polarize the image of frame N + i in another polarization direction, perpendicular to the polarization direction of filter 19. The lenses 18 and 23 are dimensioned, configured and adapted to merge the two projections into superposed combined polarized images 25 on screen 20.
The audience then views the combined images 25 through polarized viewing glasses 26, having left and right polarizing filter lenses 27, 28, whose polarizations match those of the projected images, so that the image 29 projected through lens 18 and filter 19 is seen by the left eye, and the image 30 projected through lens 23 and filter 24 is seen by the right eye. This simultaneous projection of spaced frames N and N + i is repeated in synchronization with timed shutters, strobes or the like, for each successive frame of film 14, until the total length of the sequence of images has been viewed.
An alternative, two-screen implementation of apparatus 16' for practicing the inventive method is illustrated in FIG. 4. A video signal from a TV, laser disc, videotape, computer, video camera, or other device capable of producing a sequence 14 of video image frames or fields N, N + 1, N + 2, etc. is input into a video signal digitizer 32 for conversion from analog to digital representations of each image frame. The digital image frame F created by the video signal digitizer 32 from each analog image frame is in turn stored in a first buffer 34a of a RAM (random-access memory) buffer storage element 33. Element 33 has a plurality of RAM buffers 34 equal to the maximum number of digitized frames F1, F2, F3, F4 that are to be captured during the time offset. For the shown embodiment, there are four buffers 34 that "rotate" (shift data contents) one buffer position counterclockwise before each frame is captured. This ensures that the contents of the right view output buffer 34a will always be the most recent frame F4 (corresponding to the digital frame F offset by i frames from previously stored digital frame F1) received from the video signal digitizer 32, and the left view output buffer 34b will always have the earliest frame F1 received from the video signal digitizer 32.
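The rotation of storage element 33 can be sketched as a simple Python model. This is an assumption-laden illustration, not a circuit-level description: before each capture the buffer contents shift one position, the newly digitized frame lands in the right-view slot (34a), and the left-view slot (34b) holds the earliest of the four buffered frames.

```python
NUM_BUFFERS = 4  # four buffers 34, as in the shown embodiment

class RotatingBuffers:
    """Model of RAM buffer storage element 33: slots[0] plays the role
    of right-view buffer 34a, slots[-1] of left-view buffer 34b."""

    def __init__(self, n=NUM_BUFFERS):
        self.slots = [None] * n

    def capture(self, frame):
        # "Rotate" one position: each buffer's contents shift toward the
        # left-view end, and the oldest frame is discarded.
        self.slots = [None] + self.slots[:-1]
        self.slots[0] = frame
        return self.right_view(), self.left_view()

    def right_view(self):
        return self.slots[0]   # most recent frame (F4)

    def left_view(self):
        return self.slots[-1]  # earliest buffered frame (F1); None until filled

rb = RotatingBuffers()
for n in range(6):
    right, left = rb.capture(n)
print(right, left)  # after capturing frames 0..5: 5 2
```

With four buffers, the left eye thus lags the right eye by three frames; varying the number of slots varies the delay offset, as with switch 46 described below.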
The output from left view output buffer 34b is sent to a left view image processor 36, where adjustments to the left view image such as brightness, contrast, saturation, etc. can be made. The output from right view output buffer 34a is sent to a right view image processor 37, where like adjustments to the right view image can be made. This optional adjustment step is advantageous for improving the depth perceived by the viewer, by allowing one view to be enhanced or diminished relative to the other to assist eye dominance during image fusion by the brain. The final left and right view digital images are then converted back to video signals by the output interfaces 38, 39, which provide respective inputs to left and right LCD (liquid crystal display) displays 40, 41 of a head-mounted stereoscopic viewing device 43. The three-dimensional visualization projection process can be controlled by a microprocessor in accordance with the flow diagram given in FIG. 5. Left or right eye dominance preference can be manually set at a switch 44, and input at step 45. The input frame delay offset D, corresponding to the incremental shift i in number of frames between left and right eye viewing, can likewise be manually selected at a switch 46, and input at step 47. For each projection of a frame, the contents of each buffer 34 are copied into a next buffer 34 at 49. The selection of D sets which buffer 34 will be used for the time-delayed view. Next, at 50, the image of a current frame of a sequential frame image source 14 is captured and stored into the first buffer 34a. The current frame stored in buffer 34a and the time-delayed frame stored in buffer 34b are then sent respectively through the right view and left view processors 37, 36.
One or both of the frames is then acted upon in accordance with settings at switches 52, 53, 54 which set the adjustments to, e.g., focus, saturation and brightness to the image for enhancement or degradation of either left or right image in accordance with selected eye dominance. The left and right view image signals are then transported for display to the viewing device 43, in accordance with the eye dominance setting. The process is then advanced to the next frame, at 58, and repeated.
FIG. 5 illustrates the process wherein only the delayed image frame is acted upon in accordance with switch settings 52, 53, 54. And, at 57, a decision is made in accordance with the setting of switch 44 as to which eye display (left or right) the acted-upon frame will be sent. If the right eye is selected as dominant, the contents of buffer 34a are sent to the right eye display 41, and the processed contents of buffer 34b are sent to the left eye display 40. However, if the left eye is selected as dominant, the images are switched, with the contents of buffer 34a going to the left eye and the processed contents of buffer 34b going to the right eye. Such switching enables the left and right eye images to be routed according to preference, and eliminates the need for one of the processors 36, 37.
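The dominance decision at switch 44 can be expressed as a short routing function, sketched here in Python under stated assumptions: the function name `route_frames` is illustrative, frames are stand-in values, and `process` stands for whatever attenuation (blurring, darkening, etc.) is applied to the non-dominant, delayed view.

```python
def route_frames(current, delayed, dominant="right", process=lambda f: f):
    """Return (left_display, right_display): the dominant eye receives the
    unprocessed current frame; the other eye receives the processed,
    time-delayed frame, per the decision at switch 44."""
    soft = process(delayed)  # attenuate the non-dominant (delayed) view
    if dominant == "right":
        return soft, current   # left eye sees the processed delayed frame
    return current, soft       # left eye dominant: the views are switched

# Example: dim the delayed frame, right eye dominant.
dim = lambda f: f"dim({f})"
print(route_frames("N", "N-2", dominant="right", process=dim))
# → ('dim(N-2)', 'N')
```

Because only the delayed frame is ever processed, a single processing path suffices, mirroring the elimination of one of the processors 36, 37 described above.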
As the viewer observes the two-dimensional motion picture under the visual "echo" process, described above, left and right dominance selection, input frame delay offset, and non-dominant image processing (i.e., focus, saturation, brightness, etc.) can be varied to suit individual preferences and specific object motion encountered in a particular motion picture. If desired, one sequence of motion can be viewed using one set of parameter settings, and another sequence can be viewed using a different set of parameter settings. The described circuitry can be optionally configured to record the parameter settings during an initial viewing, for subsequent automatic setting playback during a subsequent viewing. In this way, preferred settings for different pictures, different scenes of the same picture, or different viewer preferences can be preprogrammed.
Viewers who wear contact lenses will appreciate the type of binocular disparity that exists when one contact lens is in, and one is out. In such case, the dominant eye is the one with a lens in and the non-dominant eye is the one with a lens out. And, though only one eye sees clearly, binocular disparity depth perception is still present from the unfocused information received by the other eye. This same visualization can be realized with the method of the invention, where a selected right or left view is diminished by blurring, darkening, etc. relative to the other left or right view. Such attenuation will reduce strobing, jumping and other similar effects that may result with some sequences because of the echo effect.
Those skilled in the art to which the invention relates will appreciate that other substitutions and modifications can be made to the described embodiment, without departing from the spirit and scope of the invention as described by the claims below.

Claims

WHAT IS CLAIMED IS:
1. A method for visualizing two-dimensional motion picture images in three dimensions, the motion picture comprising a series of related images of a moving subject presented on successive frames, each frame providing a view of the subject from a single vantage point at a different successive moment of time, and the method comprising displaying an image of one of said frames, exclusively to one eye of an observer, and simultaneously displaying an image of another of said frames, offset by a given number of frames from said one of said frames, exclusively to another eye of the observer, so that both eyes see different frame images at the same moment of time.
2. A method as in Claim 1, wherein said step of displaying an image of one of said frames to said one eye comprises sequentially displaying a plurality of images of ones of said frames, exclusively to said one eye; and said step of displaying an image of another of said frames comprises sequentially displaying a plurality of images of others of said frames, respectively offset by said given number of frames from said ones of said frames, exclusively to said another eye, so that both eyes also see the same frame images at different moments of time.
3. A method as in Claim 2, further comprising the step of identifying said one or another eye as dominant; and, in response to said identification, enhancing said images displayed to said eye identified as dominant relative to said images displayed to said eye not identified as dominant.
4. A method as in Claim 3, wherein said relative enhancing step comprises presenting said images in sharper focus to said dominant eye than to said other eye.
5. A method as in Claim 2, wherein said images of said ones of said frames are displayed by projecting said images in first encoded form onto a screen and decoding said first encoded form between said screen and said one eye; and said images of said others of said frames are displayed by projecting said images in second encoded form onto a screen and decoding said second encoded form between said screen and said another eye.
6. A method as in Claim 5, wherein said projecting steps comprise projecting said images of said ones of said frames and said images of said others of said frames, respectively, in mutually perpendicular polarized light directions onto said screen; and said decoding steps comprise decoding said polarized images, respectively, by corresponding mutually perpendicular polarized light filters placed before said one and another eyes.
7. A method for producing three-dimensional visualization of two-dimensional images of a moving subject sequentially presented on successive frames of a motion picture; the method comprising: capturing said two-dimensional images, one-at- a-time, continuously in succession from said sequentially presented frames; displaying said captured images, one-at-a-time, continuously in succession to one eye using a stereooptic viewing device; storing said captured images one-at-a-time, continuously in succession, as they are captured, for an interval corresponding to the time to display the images of a given number of said frames in said displaying step; simultaneously with said captured image displaying step, displaying said stored images after said time interval one-at-a-time, continuously in succession to another eye using the stereooptic viewing device.
8. A method as in Claim 7, wherein said method further comprises providing a plurality of storage buffers, and wherein said storing step comprises storing said images one-at-a-time, continuously in succession, into a first one of said storage buffers, and shifting said stored images sequentially from said first buffer successively in order through others of said buffers, in synchronization with each capture of a frame image in said capturing step.
9. A method as in Claim 7, wherein said method further comprises selecting said given number of frames, and said stored image displaying step comprises displaying the image stored in a particular one of said storage buffers chosen in response to said given number selecting step.
10. A method as in Claim 7, wherein said method further comprises selecting a dominant eye preference; and displaying said captured images to one of the right or left eyes and said stored images to the other of the right or left eyes, according to said preference designated in said selecting step.
11. A method as in Claim 7, wherein said method further comprises processing one of said captured or stored images to enhance or diminish the display of said captured images relative to the display of said stored images.
12. A method as in Claim 11, wherein said method further comprises selecting said given number of frames, and said stored image displaying step comprises displaying the image stored in a particular one of said storage buffers chosen in response to said given number selecting step.
13. A method as in Claim 12, wherein said method further comprises selecting a dominant eye preference; and displaying said captured image to one of the right or left eyes and said stored images to the other of the right or left eyes, according to said preference designated in said selecting step.
14. Apparatus for visualizing a two dimensional motion picture in three-dimensions, the motion picture comprising a series of related images of a moving subject presented on successive frames, each frame providing a view of the subject from a single vantage point at a different successive moment of time, said apparatus comprising: means for sequentially presenting said frames, one-at-a-time, continuously; means associated with said sequentially presenting means, for capturing said two-dimensional images, one-at-a-time, continuously in succession, from said sequentially presented frames; means for displaying said captured images one- at-a-time, continuously in succession, exclusively to one eye of an observer; means for storing said captured images, one-at- a-time, continuously in succession, for an interval corresponding to the time for said image capture means to capture the images of a given number of said frames; and means for displaying said stored images after said time interval, one-at-a-time, continuously in succession, exclusively to another eye of an observer.
15. Apparatus as in Claim 14, wherein said means for storing comprises a plurality of storage buffers, and means for storing said images one-at-a- time continuously into a first one of said storage buffers, and means for shifting said stored images sequentially from said first buffer successively in order through others of said buffers, in synchronization with capture of said frame images by said capturing means.
16. Apparatus as in Claim 15, further comprising a first switch for selecting said given number of frames, and wherein said means for displaying said stored images comprises means for displaying said stored images from a particular one of said storage buffers chosen in response to setting of said first switch.
17. Apparatus as in Claim 16, further comprising a second switch for selecting a dominant eye preference; and wherein said means for displaying said captured images and means for displaying said stored images act to display said images to said right or left eye in response to setting of said second switch.
18. Apparatus as in Claim 17, further comprising an image processor for processing said stored images to enhance or diminish the display of said captured images relative to the display of said stored images.
19. Apparatus as in Claim 16, wherein said means for displaying said captured images and said means for displaying said stored images respectively comprise means for displaying said images to respective displays of a virtual reality headmount display.
PCT/US1994/012863 1993-11-09 1994-11-08 Method and apparatus for visualizing two-dimensional motion picture images in three dimensions WO1995013564A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US15053093A 1993-11-09 1993-11-09
US08/150,530 1993-11-09

Publications (1)

Publication Number Publication Date
WO1995013564A1 true WO1995013564A1 (en) 1995-05-18

Family

ID=22534966

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1994/012863 WO1995013564A1 (en) 1993-11-09 1994-11-08 Method and apparatus for visualizing two-dimensional motion picture images in three dimensions

Country Status (1)

Country Link
WO (1) WO1995013564A1 (en)


Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US2683389A (en) * 1948-04-26 1954-07-13 Wright Walter Isaac Projection of cinematograph film
US3143032A (en) * 1962-06-26 1964-08-04 Cednas Karl Lennart Erling Projection device for projectors with twin lens system
US3537782A (en) * 1968-09-23 1970-11-03 Fairchild Hiller Corp Three-dimensional motion picture film projection system using conventional film
US4636866A (en) * 1982-12-24 1987-01-13 Seiko Epson K.K. Personal liquid crystal image display
US4754327A (en) * 1987-03-20 1988-06-28 Honeywell, Inc. Single sensor three dimensional imaging
US4807024A (en) * 1987-06-08 1989-02-21 The University Of South Carolina Three-dimensional display methods and apparatus
US4933755A (en) * 1989-02-15 1990-06-12 Dahl Thomas R Head mounted stereoscopic television viewer


Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2063647A1 (en) * 2007-11-24 2009-05-27 Barco NV Calibration of a 3-dimensional display
RU2525751C2 (en) * 2009-03-30 2014-08-20 Панасоник Корпорэйшн Recording medium, playback device and integrated circuit
US20110149050A1 (en) * 2009-06-01 2011-06-23 Katsumi Imada Stereoscopic image display apparatus
US8704881B2 (en) * 2009-06-01 2014-04-22 Panasonic Corporation Stereoscopic image display apparatus
US20130100262A1 (en) * 2010-07-01 2013-04-25 Sagem Defense Securite Low-noise bioccular digital vision device
WO2014199127A1 (en) * 2013-06-10 2014-12-18 The University Of Durham Stereoscopic image generation with asymmetric level of sharpness
GB2517261A (en) * 2013-06-11 2015-02-18 Sony Comp Entertainment Europe Head-mountable apparatus and systems
GB2517261B (en) * 2013-06-11 2015-08-05 Sony Comp Entertainment Europe Head-mountable apparatus and systems
US9207455B2 (en) 2013-06-11 2015-12-08 Sony Computer Entertainment Europe Limited Electronic correction based on eye tracking
US9645398B2 (en) 2013-06-11 2017-05-09 Sony Computer Entertainment Europe Limited Electronic correction based on eye tracking

Similar Documents

Publication Publication Date Title
US6108005A (en) Method for producing a synthesized stereoscopic image
EP2188672B1 (en) Generation of three-dimensional movies with improved depth control
US5835133A (en) Optical system for single camera stereo video
Ezra et al. New autostereoscopic display system
US20010015753A1 (en) Split image stereoscopic system and method
US6326995B1 (en) Methods and apparatus for zooming during capture and reproduction of 3-dimensional images
US4303316A (en) Process for recording visual scenes for reproduction in stereopsis
JPH08501397A (en) Three-dimensional optical observation device
JP2000502234A (en) Image conversion and coding technology
WO1992008156A1 (en) System and devices for time delay 3d
JPH08205201A (en) Pseudo stereoscopic vision method
US6183089B1 (en) Motion picture, TV and computer 3-D imaging system and method of use
US4420230A (en) Production of three dimensional motion pictures
KR19990053446A (en) Three-dimensional stereoscopic image generation device using multiple liquid crystal slits
US4994898A (en) Color television system for processing signals from a television camera to produce a stereoscopic effect
WO1995013564A1 (en) Method and apparatus for visualizing two-dimensional motion picture images in three dimensions
Jones Jr et al. VISIDEP (tm): visual image depth enhancement by parallax induction
CA2191711A1 (en) Visual display systems and a system for producing recordings for visualization thereon and methods therefor
HUT73088A (en) Method and apparatus for producing three-dimensional imagery
AU649530B2 (en) Improvements in three-dimensional imagery
EP0123748A1 (en) Stereoscopic method and apparatus
Butterfield Autostereoscopy delivers what holography promised
CN101477299A (en) Stereo movie shooting apparatus
Mayhew et al. Parallax scanning using a single lens
Mayhew A 35mm autostereoscopic system for live-action imaging using a single camera and lens

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): CA JP

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE CH DE DK ES FR GB GR IE IT LU MC NL PT SE

121 Ep: the epo has been informed by wipo that ep was designated in this application
122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: CA