WO2006109252A2 - Imagerie numerique donnant une illusion de profondeur - Google Patents

Imagerie numerique donnant une illusion de profondeur (Depth Illusion Digital Imaging)

Info

Publication number
WO2006109252A2
WO2006109252A2
Authority
WO
WIPO (PCT)
Prior art keywords
scene
images
features
generating
camera
Prior art date
Application number
PCT/IB2006/051114
Other languages
English (en)
Other versions
WO2006109252A3 (fr)
Inventor
Benzion Landa
Original Assignee
Humaneyes Technologies Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Humaneyes Technologies Ltd. filed Critical Humaneyes Technologies Ltd.
Priority to EP06727890A priority Critical patent/EP1875306A2/fr
Priority to JP2008506034A priority patent/JP2009500878A/ja
Priority to US11/918,232 priority patent/US20090066786A1/en
Publication of WO2006109252A2 publication Critical patent/WO2006109252A2/fr
Publication of WO2006109252A3 publication Critical patent/WO2006109252A3/fr
Priority to IL186613A priority patent/IL186613A0/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • GPHYSICS
    • G03PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03BAPPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B35/00Stereoscopic photography

Definitions

  • the invention relates to methods of generating a visual display of a scene that provides a perception of depth and methods for controlling a degree of the depth perception.
  • Human stereo vision relies on primary (physiological) and secondary (psychological, sometimes called pictorial) depth cues to interpret three-dimensional information from a scene received on the two-dimensional surface of the eye's retina.
  • some physiological cues, such as accommodation (the change in lens shape that the eyes make to focus on an object), are available with monocular vision, i.e. by a single eye.
  • many physiological depth cues are binocular cues that are a function of a person having two eyes. Binocular cues include convergence, or the angle through which the eyes are rotated to focus on an object, and retinal disparity, which refers to a difference between images of a same scene that the left and right eyes see because of their different positions.
  • Psychological depth cues are used to explain depth perception a person experiences when looking at photos and paintings and include relative size, linear perspective, height of objects above the line of sight, interposition, shading, shadow, relative brightness, color (chromostereopsis), and atmospheric attenuation. These psychological cues are widely used in terrain representations to provide a sense of depth. Motion cues are often classified as a psychological cue. However, since they produce changes in relative displacements of retinal images of objects at different distances, motion cues actually produce physiological responses that convey three-dimensional information. Many of the psychological cues can be combined with physiological cues to produce enhanced three-dimensional effects.
  • Various three-dimensional stereoscopic display technologies exist for conveying either physiological or psychological cues that impart depth information from, for example, two- dimensional paper, video and computer system images.
  • printed representations commonly use psychological cues.
  • anaglyph prints rely on the physical separation of left and right images based on splitting the color spectrum into red and green/blue components and masking the images so that they are "routed" to the appropriate eyes to render the three-dimensional scene.
  • Film products may include polarized films for separation and allow color images to be printed. Films still require glasses, commonly with horizontal polarization for one eye and vertical polarization for the other to convey separate images.
  • Video and computer systems use both anaglyph and polarization and these methods are commonly implemented in softcopy photogrammetry systems to accomplish the left/right separation. These systems also allow motion and other specialized approaches with technologies such as lenticular and holographic displays to enhance the stereo image separation.
  • Autostereoscopic methods of three-dimensional display are based on a variety of concepts and methods and do not require a person to wear 3D glasses to achieve synthesis of depth information in the human eye-brain system.
  • Autostereoscopic displays rely on stereoscopic viewing of right and left images or alternating pairs of images and include lenticular displays, parallax barrier displays as well as displays based on motion parallax.
  • An aspect of some embodiments of the invention relates to providing methods and apparatus for generating a plurality of "display images" of a scene, which when displayed sequentially and sufficiently rapidly to an observer without left-right masking provides a display of the scene having a desired perception of depth.
  • An aspect of some embodiments of the invention relates to providing methods for controlling the perception of depth in the displayed scene.
  • the methods are used to enhance a perception of depth.
  • the methods are used to reduce a perception of depth.
  • An aspect of some embodiments of the invention relates to controlling velocity of features in a scene or the focal length at which the features are imaged independent of velocities or imager focal lengths of other features in the scene to generate relative motion of features that provides an enhanced perception of depth.
  • controlling velocity of features is used to reduce a perception of depth.
  • the present inventors have discovered that, when a scene containing certain depth cues is presented on an ordinary two-dimensional display (such as a computer display or television screen), depth-enhanced images ("3D" images) can be perceived by the viewer.
  • the nature of the depth cues that appear to induce or enhance the perceived depth is preferably related to the manner in which the brain integrates the two disparate images received from the left and right eyes when viewing a true three-dimensional scene ("stereopsis").
  • the images that are viewed by each eye separately, though parallax-displaced from one another, are both normal "perspective" views, little different from those captured by an optical camera.
  • stereopsis creates a new single image which the brain "sees", which differs significantly from the normal perspective views as seen by each eye.
  • the single integrated image which the brain perceives is significantly distorted in several important respects compared to the normal perspective view seen by each eye.
  • by "motion sequence" we mean that the depth-cued scene, as described above, is presented as a sequence of still frames - such as a movie or video sequence - in which the angle of view of the scene progressively changes (i.e. the "viewer" or the "camera" is moving with respect to at least a part of the scene).
  • Positions from which a scene is imaged may lie along any of many different forms of curves. The positions may move laterally with respect to the scene and/or execute zoom in and/or zoom out motion with respect to the scene. For example, in some embodiments of the invention, the positions lie along a three-dimensional non-planar curve, which requires a function of three spatial coordinates relative to the scene to be properly described. In some embodiments of the invention, the positions lie along a planar curve. In some embodiments of the invention, the positions lie along a straight line.
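
As a concrete illustration of these trajectory options, camera positions for a motion sequence can be generated parametrically. The following is a minimal sketch in Python; the function, its parameters and the particular curves are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def camera_positions(n, kind="linear"):
    """Sample n camera positions along a trajectory relative to the scene.

    kind: "linear"  - straight-line lateral dolly
          "planar"  - curve in a single plane (lateral sweep plus zoom in/out)
          "spatial" - non-planar 3D curve requiring all three coordinates
    """
    t = np.linspace(-1.0, 1.0, n)
    if kind == "linear":
        return np.stack([t, np.zeros(n), np.zeros(n)], axis=1)
    if kind == "planar":
        # lateral motion combined with motion toward/away from the scene
        return np.stack([np.sin(t), np.zeros(n), 0.3 * (1.0 - np.cos(t))], axis=1)
    # non-planar curve: a vertical component is added as well
    return np.stack([np.sin(t), 0.2 * np.sin(2.0 * t), 0.3 * (1.0 - np.cos(t))], axis=1)

P = camera_positions(5, "linear")   # e.g. five positions such as P1-P5 of Fig. 1A
```
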
  • objects at the desired focal distance can be deemed to be viewed through a virtual lens of a particular focal length, while nearer objects are viewed through progressively shorter focal length lenses and more distant objects through progressively longer focal length lenses, making nearer objects smaller and more distant objects larger; the ratio of such focal lengths and enlargements, as well as the aspect ratios of such enlargements, is adjustably determined in advance.
  • alternatively, objects at the desired focal distance can be deemed to be viewed through a virtual lens of a particular focal length, with nearer objects viewed through progressively shorter focal length lenses and more distant objects through progressively longer focal length lenses, while maintaining the size of the objects at each plane in their correct respective proportions or dimensions with respect to their distance from the camera.
  • the focal length differences alter the relative motion of near and far objects with respect to the rate of camera motion.
  • the virtual lenses of different focal lengths may each belong to different cameras located coaxially with respect to the optical axes of the lenses and may, preferably, be co-located at the same point or at the same distance from any point in the scene.
  • the multiple virtual cameras may be regarded as a single camera with multiple lenses, each of different focal length, each associated with a designated plane of the scene. This method lends itself particularly well to the automated or semi-automated generation of computer images in which each camera lens automatically captures the appropriate plane of the virtual scene through all of the required camera motions.
  • the lenses (and optionally the imaging plane) for the various focal length lenses are spaced along a same optical axis.
  • the focal lengths of the virtual lenses change as a function of their location relative to the scene or to a particular feature or features in the scene.
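
Under a simple pinhole model, the image-plane motion of a feature at depth Z for lateral camera speed v is proportional to f·v/Z, so letting the virtual focal length vary with depth directly controls the relative motion described above. Below is a minimal sketch of one plausible depth-to-focal-length assignment; the power-law form and the exponent gamma are our illustrative assumptions, not the patent's prescription:

```python
def focal_for_depth(Z, Z0, f0, gamma=0.5):
    """Virtual focal length for the layer at depth Z.

    Z0 is the reference plane, imaged at focal length f0.
    gamma = 0 reproduces ordinary single-lens perspective;
    gamma = 1 makes the image-plane motion rate f(Z)/Z the same at all
    depths; values between 0 and 1 exaggerate the apparent motion of
    far layers relative to normal perspective.
    """
    return f0 * (Z / Z0) ** gamma

def image_motion_rate(Z, Z0, f0, v, gamma=0.5):
    # image-plane velocity of a feature at depth Z for lateral camera speed v
    return focal_for_depth(Z, Z0, f0, gamma) * v / Z
```
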
  • the methods of this invention are especially useful for computer "manipulated" images which were originally captured by a camera, or for images that are entirely computer generated, such as animated films or computer games, or for combinations thereof.
  • the relative motion of elements in a scene is varied as a function of their depth.
  • a computer-generated (or at least partly computer generated or computer manipulated) video sequence of a scene is created in which the camera moves in any direction or combination of directions or other motion (horizontal, vertical, diagonal, rotational, zoom or other) with respect to at least one element in the scene.
  • the relative motion of other elements in the scene is such that at least some of those other elements, whether nearer to or more distant from the camera than the first element, move at a higher rate of relative motion than their respective distances from the first element would dictate if the objects of the scene were in fixed relative positions to one another.
  • consider a computer-generated scene comprising a relatively stationary child, a tree in the background and a ball in the foreground, and suppose that the child's parent is photographing the scene from a moving vehicle with a video camera.
  • the parent aims the camera so that the child remains in the center of the camera's field of view and acquires thereby a sequence of camera images of the scene with the child substantially at the center of the images.
  • although the child remains substantially in the center of the scene, because of motion parallax, when viewing a video sequence of the images the tree in the background and the ball in the foreground move in opposite directions relative to the child.
  • the motion parallax imparts a modicum of depth to the video sequence.
  • the images in the video sequence are two-dimensional images, binocular cues that provide a sense of depth when viewing the actual scene are missing from the video sequence. As a result, the video sequence tends to appear flat relative to the real scene and suffer from a loss of "vitality".
  • the velocities of the tree and the ball relative to the child are increased.
  • the increased velocities in the display images are greater than velocities of the ball and tree that correspond to, and would be expected from, their distances relative to the child.
  • the increased relative velocities provide an enhanced sense of depth and increased spatial separation between the child, the tree and the ball and consequently a vital 3D depth to the image.
  • the ball, tree and child are all stationary and their real world relative positions are not changing.
  • the relative motion of the ball and tree in the video sequence images is a result only of the motion of the parent while he or she acquired the video sequence.
  • the depth enhanced images of the scene correspond to images of a "real world" in which the tree and ball are moving relative to the child.
  • the scope of the invention contemplates an unlimited number of objects or elements.
  • the same rules of relative motion could be applied to each blade of grass on the lawn - or to any or all other objects or elements in any scene.
  • every pixel in a scene may be designated a particular relative rate of motion as a function of its nominal distance from the camera or from another reference point.
  • said relative motion need not be linear across the entire field of view (for example, it may be more pronounced in the center of the frame and less pronounced in the peripheral regions, or vice versa).
  • changing velocities of features in a scene does not necessarily entail changes in distance of the features relative to each other in the scene or changes in their scale. Changes in velocities can be decoupled from changes in distances and scale, though they may be performed in combination with and in coordination with such changes.
  • an impression of depth difference between the features is respectively increased or decreased.
  • the unexpected changes in velocities are configured to generate distortions in a scene that vary in time and lend the scene a seeming measure of plasticity that cannot in general be reproduced by conventional perspective imaging.
  • normal perspective vision and perspective projection map straight lines in a three-dimensional scene into straight lines in the image.
  • Velocity changes in accordance with an embodiment of the invention on the other hand often map straight lines into curved lines, which may vary in time during a presentation of a sequence of display images. The inventors have found that such challenges to the eye-brain vision system can be effective in stimulating the brain to generate a relatively strong enhanced sense of depth.
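
A small numerical experiment makes this line-bending concrete: displace points of a straight scene line laterally by an amount that varies nonlinearly with depth, project them through a pinhole, and check collinearity of the image points. The particular displacement law below is an illustrative assumption:

```python
import numpy as np

f, Z0 = 1.0, 5.0
Z = np.linspace(2.0, 10.0, 9)                  # scene line receding in depth
X, Y = 0.5 * np.ones_like(Z), np.ones_like(Z)

# depth-dependent lateral displacement (velocity * time), nonlinear in Z
Xd = X + 0.2 * np.sign(Z - Z0) * np.abs(Z - Z0) ** 0.5

def project(X, Y, Z):
    # pinhole projection onto the image plane
    return np.stack([f * X / Z, f * Y / Z], axis=1)

def max_triangle_area(p):
    # zero when the image points are collinear; positive once the line bends
    a, b, c = p[:-2], p[1:-1], p[2:]
    cross = (b[:, 0] - a[:, 0]) * (c[:, 1] - a[:, 1]) \
          - (b[:, 1] - a[:, 1]) * (c[:, 0] - a[:, 0])
    return np.abs(cross).max() / 2.0

print(max_triangle_area(project(X, Y, Z)))    # ~0: straight line projects straight
print(max_triangle_area(project(Xd, Y, Z)))   # > 0: displaced line projects curved
```
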
  • a scene may be partitioned into layers, each of which is located at a different depth in the scene.
  • the layers are not necessarily flat and for example, in some embodiments of the invention the layers may be curved and have regions of different thickness.
  • the layers are spherical or have a spherical region.
  • the layers are cylindrical or have a cylindrical region.
  • the layers are planar.
  • At least one layer is assigned a velocity or is viewed through a particular focal length lens, such that stationary features located in the layer move relative to other stationary features in the scene to generate a desired sense of depth in the scene in accordance with an embodiment of the invention.
  • the layers are processed, for example using a computer, to image the scene in the motion sequence display so that at different times in the sequence, i.e. times at which the display images are displayed, the features in the at least one layer are displaced from other features in the scene as a function of the layer's assigned velocity or of the focal length of the lens through which it is viewed.
  • Thicknesses of the layers, into which a scene is partitioned in accordance with an embodiment of the invention, may vary.
  • each layer has a uniform thickness.
  • thickness of a layer may vary.
  • for regions of a scene in which velocity is to be a relatively fine function of depth, layers are optionally relatively thin.
  • for regions in which velocity may be allowed to be a coarse function of depth, layers are optionally relatively thick.
  • in the limit, layers approach zero thickness and the velocity of features in a scene, or the focal length through which they are viewed, becomes a smooth function of depth.
  • to enlarge a feature, the distance from which the feature is imaged for the display image is decreased and/or the focal length at which the feature is imaged is increased.
  • to reduce a feature, the distance from which the feature is imaged is increased and/or the focal length at which the feature is imaged is decreased.
  • the greater the number of layers or focal lengths used, i.e. the more continuous the changes in the scene, the greater and more lifelike the three-dimensional effect.
  • the number of layers should be increased and the focal length change should be as continuous as possible.
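
For images that come with a per-pixel depth map, this layer construction can be realized in software by quantizing depth into bins, shifting each bin by its assigned per-frame displacement, and compositing far-to-near so nearer features occlude farther ones. A minimal sketch; the linear velocity law and all names are illustrative assumptions:

```python
import numpy as np

def render_frame(rgb, depth, t, n_layers=8, Z0=5.0, gain=3.0):
    """Warp one RGB image (H x W x 3) with depth map (H x W) into a display frame.

    Each depth layer is given a horizontal displacement proportional to its
    distance from the stationary reference depth Z0; layers are painted
    far-to-near so occlusion remains correct. np.roll wraps at the image
    border, a simplification acceptable for a sketch.
    """
    edges = np.linspace(depth.min(), depth.max(), n_layers + 1)
    frame = np.zeros_like(rgb)
    for i in reversed(range(n_layers)):          # farthest layer first
        mask = (depth >= edges[i]) & (depth <= edges[i + 1])
        Z = 0.5 * (edges[i] + edges[i + 1])      # layer's nominal depth
        dx = int(round(t * gain * (Z - Z0)))     # layer's assigned displacement
        shifted = np.roll(mask, dx, axis=1)
        frame[shifted] = np.roll(rgb, dx, axis=1)[shifted]
    return frame
```

Increasing n_layers moves the sketch toward the continuous, per-pixel limit described above.
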
  • An aspect of some embodiments of the invention relates to generating depth cues in display images of a scene by changing the scale of features in the scene.
  • the inventors have determined that to enhance depth perception it can be advantageous to accompany changes in characteristics of features of a scene intended to generate depth cues by scale changes in the features.
  • scale changes of features that appear counterintuitive may be advantageous in supporting depth cues.
  • while features in an image of a scene are conventionally scaled in proportion to their depth, with farther features smaller and nearer features larger, the inventors have determined that in fostering depth perception it can be advantageous to depart from scaling features proportional to their intended depths.
  • Methods in accordance with embodiments of the present invention are optionally encoded in any of various computer accessible storage media, such as a floppy disk, CD, flash memory or an ASIC, for use in generating and/or displaying display images that provide enhanced depth perception.
  • Methods and apparatus in accordance with embodiments of the present invention may be used to enhance depth perception for many different and varied applications and two dimensional visual presentation formats.
  • the methods may be used to enhance depth perception for computer game images, animated movies, visual advertising and video training.
  • a method of generating a visual display of a scene having a desired illusion of depth comprising: generating a sequence of display images of the scene; setting velocities of features in the scene to generate non-perspective distortions of the scene in the display images; and sequentially displaying the display images.
  • a method of generating a visual display of a scene having an enhanced illusion of depth comprising: generating a sequence of images of the scene, each image being acquired for a different position relative to at least some of the features in the scene; and sequentially displaying the display images, wherein generating said sequence includes: setting positions of features in the scene in a systematic manner from image to image to generate non-perspective distortions of the images to provide an enhanced perception of depth to the sequence of images.
  • a method of generating a visual display of a scene having an enhanced illusion of depth comprising: generating a sequence of display images of the scene, each image being acquired by a virtual camera from a different position relative to at least some of the features in the scene; and sequentially displaying the display images, wherein generating said sequence includes providing at least one feature in the scene with a velocity that generates relative motion between features, which in the scene are nominally stationary relative to each other, wherein a velocity provided to a feature of the at least one feature is a function of the feature's depth in the scene relative to the camera.
  • the method includes generating a visual display according to claim 3 wherein the virtual camera positions lie along a non-planar three dimensional trajectory. Alternatively, the virtual camera positions lie along a planar trajectory. Optionally, the virtual camera positions lie along a linear trajectory.
  • some positions are closer to the scene than others.
  • at least two positions are displaced from each other laterally relative to the scene.
  • features at different depths relative to the camera are imaged at different focal lengths.
  • the scene is partitioned into layers.
  • a same velocity is set for features in a same layer.
  • different velocities are set for different layers.
  • the camera images different layers from different locations along the camera's optic axis.
  • features that are closer to the camera are provided with a velocity in a first direction
  • features at an intermediate distance are stationary and wherein features at a farther distance are provided with a velocity in a second direction generally opposite to said first direction.
  • features that are closer to the camera are provided with a first velocity in a first direction
  • features at an intermediate distance are provided with a second velocity smaller than said first velocity in said first direction and features at very large distances are stationary.
  • features that are close to the camera are stationary, features at an intermediate distance are provided with a first velocity in a first direction and features at a farther distance are provided with a second velocity greater than said first velocity in said first direction.
  • the first direction is generally parallel to a direction of motion between said different positions.
  • moving features are provided with an additional velocity or change in position consistent with the change provided to correspondingly placed stationary objects.
  • a method of generating a visual display of a scene having an enhanced illusion of depth comprising: generating a sequence of display images of the scene, each display image being acquired by a virtual camera from a different position relative to at least some of the scene's features; and sequentially displaying the display images, wherein a focal length at which the virtual camera images a feature in the scene is a function of the distance of the feature from the camera.
  • some features that are further from the camera are imaged with a focal length that is greater than a focal length used to image features that are relatively nearer to the camera.
  • the method includes displaying the display images so that they may be viewed by both eyes of a viewer.
  • a computer accessible storage medium encoded with a method or an image or sequence of visual images in accordance with the invention.
  • Fig. 1A schematically shows the scene comprising a ball, a girl and a tree discussed in the summary being imaged to provide a motion sequence of display images in accordance with prior art
  • Fig. 1B schematically shows the scene shown in Fig. 1A being imaged to provide a motion sequence in accordance with an embodiment of the present invention
  • Fig. 1C schematically shows sequences of display images generated in accordance with prior art as shown in Fig. 1A and, as illustrated in Fig. 1B, in accordance with the present invention
  • Fig. 2 schematically shows partitioning a scene into layers and assigning velocities to the layers, in accordance with an embodiment of the present invention
  • Figs. 3 A-3E schematically illustrate distortions in versions of a scene used to generate display images of the scene, in accordance with an embodiment of the present invention
  • Fig. 4 shows a plan view of distortions shown in Figs. 3A-3E, in accordance with an embodiment of the present invention
  • Fig. 5A schematically illustrates generating display images by imaging different features of a scene at different focal lengths
  • Fig. 5B schematically shows sequences of display images generated in accordance with prior art as shown in Fig. 1A and, as illustrated in Fig. 5A, in accordance with the present invention.
  • Fig. 1A schematically shows a scene 500 comprising a tree 501, a girl 502 in front of the tree and a ball 503 in front of the girl being imaged by a moving camera represented by an hourglass icon 520 in accordance with prior art.
  • a waist 521 of the hourglass icon represents an optic center of camera 520 and a dashed line 522 represents the optic axis of the camera.
  • camera 520 is moving along a straight line trajectory 530 at a constant velocity in a direction indicated by a block arrow 531 and acquiring camera images of scene 500 at regular intervals.
  • camera 520 is oriented so that at each of the positions along trajectory 530 at which it acquires images of scene 500, its optic axis intersects girl 502.
  • Camera 520 is schematically shown at five locations P1-P5 along trajectory 530 at which it acquires corresponding display images IM1-IM5 of scene 500 for use in providing a motion sequence of images of the scene.
  • Scene 500 may be an actual scene and camera 520 an actual camera, held for example by a parent of the girl in the scene riding in a car.
  • alternatively, scene 500 is a synthetic scene and camera 520 a computer graphics camera that acquires images of the scene.
  • Images IM1-IM5 are shown from the back, upside down and reversed from left to right, in the orientations at which they are acquired by camera 520.
  • Images IM1*-IM5* are images IM1-IM5 respectively as seen by an observer, reversed left to right and right side up relative to the orientations of images IM1-IM5.
  • a ray 560 from a central spot 561 of tree 501 that passes through optic center 521 of camera 520 and is incident on the images indicates locations of an image of the central spot on the images.
  • rays 570 indicate locations of images of a central spot 571 of ball 503 for some of images IM1-IM5.
  • a central spot 580 on girl 502 is imaged on all images IM1-IM5 at points on the images intersected by optic axis 522.
  • optic axis 522 passes through central points 561 and 580 of tree 501 and girl 502 and through an uppermost "top" point 572 on the circumference of ball 503 that is directly above center 571 of the ball.
  • Locations P1 and P2 are mirror images of locations P4 and P5 respectively in a plane perpendicular to trajectory 530 that passes through optic center 521 of camera 520 at location P3.
  • the images of the ball, girl and tree are layered one on top of the other since at position P3 the girl, the tree and the ball are aligned one behind the other.
  • Fig. 1B schematically shows scene 500 being imaged to provide display images for a motion sequence of scene 500 that generates an enhanced sense of depth, in accordance with an embodiment of the invention.
  • in Fig. 1B, camera 520 moves along trajectory 530 and acquires images of scene 500 at positions P1-P5, at each of which positions optic axis 522 intersects girl 502.
  • tree 501 and ball 503 are not stationary during imaging by camera 520. Instead tree 501 optionally moves, as indicated by a block arrow 590, from left to right during imaging and optionally ball 503, as indicated by a block arrow 591, moves from right to left during imaging.
  • at positions P1-P5 of camera 520, tree 501 is imaged at corresponding positions T1-T5 respectively and ball 503 at positions B1-B5 respectively.
  • the images acquired by camera 520 at positions P1-P5 are labeled IM11-IM15 and in their viewing orientations are labeled IM11*-IM15*.
  • tree 501 and ball 503 move at constant velocities preferably in a direction parallel to the movement of camera 520.
  • they move with varying velocities.
  • tree 501 moves with a constant velocity and positions T1-T5 are equally spaced.
  • Ball 503 moves with a constant velocity between positions B2-B4.
  • between positions B1 and B2 and between positions B4 and B5, ball 503 optionally moves with a velocity higher than the velocity at which it moves between positions B2-B4.
  • the increased velocity of ball 503 between positions B1 and B2 and between B4 and B5 provides for enhanced relative motion between the girl and ball in images acquired at positions relatively far from P3.
  • at positions far from P3, because of the relatively large angles that rays from the ball make with optic axis 522, for a same given velocity at which ball 503 moves, relative motion between the ball and the girl in perspective images IM11 and IM15 is substantially reduced relative to that in images IM12-IM14.
  • the motion of camera 520 along trajectory 530 in Figs. 1A and 1B is relative to scene 500, and the images acquired at locations P1-P5 could of course be duplicated by maintaining the camera stationary and moving the scene.
  • the effects of changing velocities of features in scene 500 in accordance with an embodiment of the invention, such as shown in Fig. IB, can be provided by vector addition of velocities.
  • whereas images according to Fig. 1A can be acquired by a real camera, the images of Fig. 1B cannot generally be so acquired.
  • the methods of the present invention are suitable for both composite images of real objects and for animations. They could also be used to a limited extent in manipulating real images.
  • a scene such as scene 500 has more features than just the tree, the girl and the ball and will usually comprise grass, stones, bushes, a swing on the tree and possibly a dog, who most probably will find it difficult to remain still during the telling of this story.
  • the scene is partitioned into layers, each of which is optionally parallel to and located at a different distance from trajectory 530.
  • Fig. 2 schematically shows scene 500 partitioned into a plurality of layers 600 in accordance with an embodiment of the invention.
  • layers 600 are planar and have a same uniform thickness.
  • each layer 600 is provided with a velocity so that there is a relatively smooth change in velocity of features in scene 500 as a function of their respective depths.
  • Layers labeled 601, 602 and 603 comprise tree 501, girl 502 and ball 503 respectively.
  • Arrows 610 arrayed opposite layers 600 along a line 611 schematically indicate velocities assigned to the layers in accordance with an embodiment of the invention.
  • a velocity assigned a given layer 600 is indicated by the arrow 610 opposite the layer.
  • Direction of the arrow schematically indicates direction of the velocity and length of the arrow its magnitude.
  • Layers 601 and 603 comprising tree 501 and ball 503 are assigned velocities in opposite directions and have maximum, not necessarily equal, velocities in their respective directions.
  • Layer 602 comprising girl 502 is assigned zero velocity since camera 520 is assumed to move along trajectory 530 with its orientation adjusted so that optic axis 522 is directed to the girl.
  • Velocities 610 may be determined, in accordance with embodiments of the invention, in various ways. For example, if Z is the distance of a given layer 600 from trajectory 530 and Z0 is the distance of girl 502 from the trajectory, the magnitude of velocities 610 may be proportional to a power of (Z - Z0), for example (Z - Z0)^(1/2) or (Z - Z0)^5, or to an exponential function e^|Z - Z0|. Furthermore, velocities 610 are not necessarily constant in time, as measured for example by progress of camera 520 along trajectory 530 and as translated into a display sense of time when images IM11*-IM15* (Figs. 1B and 1C) are displayed in sequence.
  • whereas velocities of features in scene 500 are determined by velocities 610 of the layers 600 in which they are located, which velocities are by their nature discontinuous at boundaries between layers having different velocities, in some embodiments of the invention velocities of features are continuous functions of depth in a scene. Thus, it is considered desirable to increase the number of layers.
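
The layer-velocity laws mentioned above (a power of (Z - Z0) or an exponential) can be tabulated directly. A short sketch with illustrative constants; the exponential is rescaled here, as an adaptation, so that the stationary plane Z0 keeps zero velocity:

```python
import numpy as np

def layer_speed(Z, Z0, law="power", k=1.0, p=0.5):
    """Velocity assigned to the layer at depth Z; zero at the stationary plane Z0.

    law="power": k * |Z - Z0|**p, e.g. p = 1/2 or p = 5
    law="exp":   k * (e**|Z - Z0| - 1)
    The sign gives layers on either side of Z0 opposite directions,
    as for tree 501 and ball 503 in Fig. 2.
    """
    d = np.abs(Z - Z0)
    s = k * d ** p if law == "power" else k * np.expm1(d)
    return np.sign(Z - Z0) * s

print(layer_speed(np.linspace(1.0, 9.0, 9), Z0=5.0, law="power", p=0.5))
```
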
  • the child is stationary.
  • the stationary plane can be virtually any plane between the plane of the camera motion and infinity.
  • the sense of depth perception generated by a motion sequence of images IM11*-IM15* is not only enhanced by the increased relative velocities of the ball, the girl, the tree and features in layers 600 intervening between them caused by choice of velocities 610.
  • the inventors have determined that the enhanced sense of depth also appears to be engendered by distortions introduced into the sequence of images IM11*-IM15* by the choice of velocities.
  • the sequence of images acquired of scene 500 are assumed to be of a "real" scene in which tree 501 does not move and by assumption the girl is standing still and the inanimate ball does not move of its own accord.
  • the tree, the ball and the girl are assumed to be aligned one behind the other as they are shown in Fig. 1A, so that a straight line passes through the central spots 561 and 580 of the tree and the girl and top spot 572 of ball 503.
  • the line optionally coincides with optic axis 522 when camera 520 is located at position P3.
  • this line, as well as all straight lines in scene 500, is mapped into straight lines in images acquired by camera 520 at all positions of the camera.
  • Figs. 3A-3E schematically illustrate "perspective breaking" distortions introduced into images IM11*-IM15* by imaging scene 500 using a choice of velocities in accordance with an embodiment of the invention similar to that shown for example in Fig. 1B.
  • Figures 3A-3E show the locations of tree 501 and ball 503 relative to girl 502 for each of the positions P1-P5 (Fig. 1B) of camera 520.
  • the velocity changes in accordance with an embodiment of the invention morph the real scene into a distorted scene for which line 623 is morphed into lines 621, 622, 624 and 625 respectively, which are no longer straight lines. Similarly, almost all other lines that are straight in the real scene (Fig. 3C) are morphed into lines that are not straight.
  • Fig. 4 shows a top view, i.e. as seen from a viewpoint along the z-axis, of the scenes shown in Figs. 3A-3E superimposed one on top of the other so that the distortions that line 623 undergoes at positions P1, P2, P4 and P5 are readily seen.
  • Each of the positions of tree 501 and ball 503 is labeled with a position of camera 520 to which the position of the tree and the ball correspond. Since the position of girl 502 does not change, it is not labeled with any of the camera positions.
  • Images IM11*-IM15* (Figs. 1B and 1C) record the distortions and therefore do not provide a sequence of images that correspond to a real scene. Instead the images record a scene that, in accordance with an embodiment of the invention, even if optionally only subliminally, is somewhat plastic and is colored with time changing distortions, which as recorded in images IM11*-IM15* provide an enhanced sense of depth when the images are viewed in sequence.
  • features in a scene are imaged at different focal lengths to control relative velocity of the features in a sequence of images and depth perception of a motion sequence using the images. For example, velocity of a given feature relative to another feature in a scene may be increased or decreased by imaging the given feature at a larger focal length than the other feature.
  • the cameras with different focal lengths are spaced along the optic axis in a way which the inventors believe mimics the way a real image is imaged by the eye.
  • a given plane is imaged with a first lens whose field of view forms an angle θ.
  • the cameras are placed at positions along the axis at which their view angles cover the same area of the particular plane.
  • the set of cameras is spaced along the optic axis, aiming at the same scene.
  • the focal length of each camera is adjusted to compensate for the relative scale change that occurs due to various distances of each particular camera from the scene. As a result, all cameras see the same scene, at the same scale, but with different focal lengths.
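
The compensation just described follows from similar triangles: a camera at distance d from the designated plane with focal length f covers a width on that plane proportional to d/f, so keeping f proportional to d makes every camera see the plane at the same scale. A sketch with illustrative numbers (the function name and values are ours):

```python
def coaxial_cameras(distances, d0, f0):
    """Focal lengths for virtual cameras spaced along a shared optic axis.

    Each camera at distance d from the designated scene plane is assigned
    f = f0 * d / d0, so that the plane fills the same field of view, and
    hence appears at the same scale, in every camera.
    """
    return [(d, f0 * d / d0) for d in distances]

# e.g. a reference camera 5 m away with a 50 mm lens, plus nearer/farther ones
for d, f in coaxial_cameras([2.5, 5.0, 10.0, 20.0], d0=5.0, f0=50.0):
    print(f"distance {d:5.1f} m -> focal length {f:6.1f} mm")
```
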
  • Fig. 5A schematically shows an effect of imaging different features in scene 500 at different focal lengths, in accordance with an embodiment of the invention.
  • camera 520 images scene 500 at locations Pl - P5 along trajectory 530.
  • at positions P1, P2, P3, P4 and P5, camera 520 images the girl and the ball in images IM21, IM22 ... IM25 respectively at a focal length F1 and images tree 501 in images IM31, IM32 ... IM35 respectively at a focal length F2.
  • F2 is greater than F1.
  • Display images in accordance with an embodiment of the invention are formed by aligning and combining images IM21 ... IM25 with images IM 31 ... IM35 so that features in IM21 ... IM25 overlay features in IM31 ... IM35 to provide appropriate occlusion of farther features by nearer features.
  • Combined images CI23-33, CI24-34 and CI25-35 are shown aligned behind the images from which they are formed, respectively IM23 and IM33, IM24 and IM34, and IM25 and IM35.
  • Display combined images CI21-31*, CI22-32*, ... CI25-35* shown in Fig. 5A correspond to combined images CI21-31, CI22-32, ... CI25-35 turned right side up and reversed left to right.
  • Fig. 5B shows the combined display images CI21-31*, CI22-32*, ... CI25-35* and images IM1*-IM5* acquired by prior art as illustrated in Fig. 1A so that the combined images in accordance with an embodiment of the invention may be readily compared with the prior art images.
  • the images show that the relative motion between the tree and the girl and ball is substantially increased by imaging the tree at the longer focal length F2.
  • the increased relative motion provides an enhanced perception of depth when the combined display images are displayed in a motion sequence compared to the depth perception provided by the prior art images IM1*-IM5*.
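
The increase is easy to quantify under a pinhole model: with the camera aimed at the girl she stays at the image centre, while the tree, at angle theta off the optic axis, appears at image position F·tan(theta); a longer focal length for the tree therefore scales up its image travel across the sequence. A sketch with illustrative distances and focal lengths (the aiming model and the numbers are our assumptions):

```python
import math

offsets = [-2.0, -1.0, 0.0, 1.0, 2.0]   # lateral camera positions, cf. P1-P5
Z_girl, Z_tree = 5.0, 20.0              # depths of girl 502 and tree 501
F1, F2 = 50.0, 100.0                    # tree imaged at the longer F2 in Fig. 5A

for x in offsets:
    # camera aimed at the girl: the tree's ray is theta off the optic axis
    theta = math.atan2(x, Z_girl) - math.atan2(x, Z_tree)
    u_f1 = F1 * math.tan(theta)          # tree position at the ordinary F1
    u_f2 = F2 * math.tan(theta)          # tree position when imaged at F2
    print(f"x={x:+.1f}: tree offset from girl  F1={u_f1:+.2f}  F2={u_f2:+.2f}")
```
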
  • velocities and focal lengths can be adjusted either automatically or in response to a user input, via a mouse or the like.
  • each of the verbs "comprise", "include" and "have", and conjugates thereof, is used to indicate that the object or objects of the verb are not necessarily a complete listing of members, components, elements or parts of the subject or subjects of the verb.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)
  • Controls And Circuits For Display Device (AREA)
  • Studio Circuits (AREA)
  • Image Processing (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)

Abstract

A method of generating a visual display of a scene that provides a desired illusion of depth, comprising: generating a sequence of display images of the scene; setting velocities of features in the scene so as to generate non-perspective distortions of the scene in the display images; and sequentially displaying the display images.
PCT/IB2006/051114 2004-05-10 2006-04-11 Imagerie numerique donnant une illusion de profondeur WO2006109252A2 (fr)

Priority Applications (4)

Application Number Priority Date Filing Date Title
EP06727890A EP1875306A2 (fr) 2005-04-11 2006-04-11 Imagerie numerique donnant une illusion de profondeur
JP2008506034A JP2009500878A (ja) 2005-04-11 2006-04-11 奥行錯覚デジタル撮像
US11/918,232 US20090066786A1 (en) 2004-05-10 2006-04-11 Depth Illusion Digital Imaging
IL186613A IL186613A0 (en) 2005-04-11 2007-10-11 Depth illusion digital imaging

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US67008705P 2005-04-11 2005-04-11
US60/670,087 2005-04-11

Publications (2)

Publication Number Publication Date
WO2006109252A2 true WO2006109252A2 (fr) 2006-10-19
WO2006109252A3 WO2006109252A3 (fr) 2007-06-21

Family

ID=37087409

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2006/051114 WO2006109252A2 (fr) 2004-05-10 2006-04-11 Imagerie numerique donnant une illusion de profondeur

Country Status (5)

Country Link
EP (1) EP1875306A2 (fr)
JP (1) JP2009500878A (fr)
KR (1) KR20080007451A (fr)
IL (1) IL186613A0 (fr)
WO (1) WO2006109252A2 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10306214B2 (en) 2014-07-17 2019-05-28 Sony Interactive Entertainment Inc. Stereoscopic image presenting device, stereoscopic image presenting method, and head-mounted display
CN111862511A (zh) * 2020-08-10 2020-10-30 湖南海森格诺信息技术有限公司 基于双目立体视觉的目标入侵检测装置及其方法

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9838669B2 (en) * 2012-08-23 2017-12-05 Stmicroelectronics (Canada), Inc. Apparatus and method for depth-based image scaling of 3D visual content
JP6461679B2 (ja) * 2015-03-31 2019-01-30 大和ハウス工業株式会社 映像表示システム及び映像表示方法

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020191841A1 (en) * 1997-09-02 2002-12-19 Dynamic Digital Depth Research Pty Ltd Image processing method and apparatus
US20060125962A1 (en) * 2003-02-11 2006-06-15 Shelton Ian R Apparatus and methods for handling interactive applications in broadcast networks

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3934211B2 (ja) * 1996-06-26 2007-06-20 松下電器産業株式会社 立体cg動画像生成装置
JPH11259672A (ja) * 1998-03-13 1999-09-24 Mitsubishi Electric Corp 3次元仮想空間表示装置
JP2004334661A (ja) * 2003-05-09 2004-11-25 Namco Ltd 画像生成システム、プログラム及び情報記憶媒体

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020191841A1 (en) * 1997-09-02 2002-12-19 Dynamic Digital Depth Research Pty Ltd Image processing method and apparatus
US20060125962A1 (en) * 2003-02-11 2006-06-15 Shelton Ian R Apparatus and methods for handling interactive applications in broadcast networks

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10306214B2 (en) 2014-07-17 2019-05-28 Sony Interactive Entertainment Inc. Stereoscopic image presenting device, stereoscopic image presenting method, and head-mounted display
CN111862511A (zh) * 2020-08-10 2020-10-30 湖南海森格诺信息技术有限公司 基于双目立体视觉的目标入侵检测装置及其方法

Also Published As

Publication number Publication date
WO2006109252A3 (fr) 2007-06-21
EP1875306A2 (fr) 2008-01-09
IL186613A0 (en) 2008-01-20
JP2009500878A (ja) 2009-01-08
KR20080007451A (ko) 2008-01-21

Similar Documents

Publication Publication Date Title
US20090066786A1 (en) Depth Illusion Digital Imaging
JP4707368B2 (ja) 立体視画像作成方法および装置
US8953023B2 (en) Stereoscopic depth mapping
JP5414947B2 (ja) ステレオ撮影装置
Sandin et al. The VarrierTM autostereoscopic virtual reality display
CN108141578B (zh) 呈现相机
US20150145977A1 (en) Compensation technique for viewer position in autostereoscopic displays
US10725316B2 (en) Optical stereoscopic display screen for naked eye viewing
TW201019708A (en) A method of processing parallax information comprised in a signal
WO2000035200A9 (fr) Procede de correction d'images destine a compenser la distorsion de celles-ci en fonction du point de vue
JP2000500598A (ja) 三次元描画システムおよび方法
US20110141246A1 (en) System and Method for Producing Stereoscopic Images
JP2007527665A (ja) 立体観察を管理するシステムおよび方法
CN107545537A (zh) 一种从稠密点云生成3d全景图片的方法
EP1875306A2 (fr) Imagerie numerique donnant une illusion de profondeur
US10110876B1 (en) System and method for displaying images in 3-D stereo
WO2009109804A1 (fr) Procédé et appareil de traitement d'image
US20060152580A1 (en) Auto-stereoscopic volumetric imaging system and method
KR101093929B1 (ko) 깊이 지도를 이용하여 3차원 영상을 표시하는 방법 및 시스템
Watt et al. 3D media and the human visual system
Audu et al. Generation of three-dimensional content from stereo-panoramic view
Rhee et al. Stereoscopic view synthesis by view morphing
Hansen et al. Calibrating, Rendering and Evaluating the Head Mounted Light Field Display
Lu Computational Photography
Jung et al. Enhanced Linear Perspective using Adaptive Intrinsic Camera Parameters

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
WWE Wipo information: entry into national phase

Ref document number: 2008506034

Country of ref document: JP

Ref document number: 186613

Country of ref document: IL

Ref document number: 11918232

Country of ref document: US

NENP Non-entry into the national phase

Ref country code: DE

WWW Wipo information: withdrawn in national office

Ref document number: DE

DPE1 Request for preliminary examination filed after expiration of 19th month from priority date (pct application filed from 20040101)
WWE Wipo information: entry into national phase

Ref document number: 2006727890

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 1020077025819

Country of ref document: KR

NENP Non-entry into the national phase

Ref country code: RU

WWW Wipo information: withdrawn in national office

Ref document number: RU

WWE Wipo information: entry into national phase

Ref document number: 5071/CHENP/2007

Country of ref document: IN

WWP Wipo information: published in national office

Ref document number: 2006727890

Country of ref document: EP
