WO2008012821A2 - Imagerie pour infographie (Imaging for Computer Graphics) - Google Patents

Imagerie pour infographie (Imaging for Computer Graphics)

Info

Publication number
WO2008012821A2
Authority
WO
WIPO (PCT)
Prior art keywords
layers
imagers
camera
camera rig
scene
Prior art date
2006-07-25
Application number
PCT/IL2007/000936
Other languages
English (en)
Other versions
WO2008012821A3 (fr)
Inventor
Benzion Landa
Dragan Stiglic
Original Assignee
Humaneyes Technologies Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
2006-07-25
Filing date
2007-07-25
Publication date
2008-01-31
Application filed by Humaneyes Technologies Ltd.
Publication of WO2008012821A2 publication Critical patent/WO2008012821A2/fr
Publication of WO2008012821A3 publication Critical patent/WO2008012821A3/fr

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 - 3D [Three Dimensional] image rendering
    • G06T15/10 - Geometric effects
    • G06T15/20 - Perspective computation
    • G06T15/205 - Image-based rendering

Definitions

  • the invention relates to methods of generating a visual display of a computer graphics scene.
  • Human stereo vision relies on primary, or physiological, and secondary, or psychological (sometimes called pictorial), depth cues to interpret three-dimensional information from a scene received on the two-dimensional surface of the eye, the retina.
  • some physiological cues, such as accommodation (the amount of change of lens shape that the eyes provide to focus on an object), are available even in monocular vision, i.e. vision by a single eye.
  • many physiological depth cues are binocular cues that are a function of a person having two eyes. Binocular cues include convergence, or the angle through which the eyes are rotated to focus on an object, and retinal disparity, which refers to a difference between images of a same scene that the left and right eyes see because of their different positions.
  • Psychological depth cues are used to explain depth perception a person experiences when looking at photos and paintings and include relative size, linear perspective, height of objects above the line of sight, interposition, shading, shadow, relative brightness, color (chromostereopsis), and atmospheric attenuation. These psychological cues are widely used in terrain representations to provide a sense of depth. Motion cues are often classified as psychological cues. However, since they produce changes in relative displacements of retinal images of objects at different distances, motion cues actually produce physiological responses that convey three-dimensional information. Many of the psychological cues can be combined with physiological cues to produce enhanced three-dimensional effects.
  • Various three-dimensional stereoscopic display technologies exist for conveying either physiological or psychological cues that impart depth information from, for example, two-dimensional paper, video and computer system images.
  • printed representations commonly use psychological cues.
  • anaglyph stereoscopic imaging relies on the physical separation of left and right images based on splitting the color spectrum into red and green/blue components and masking the images so that they are "routed" to the appropriate eyes to render the three-dimensional scene.
  • Other stereoscopic viewing techniques may include the use of polarized films for viewing, which allows richly colored images to be viewed. Such films still require glasses, however, commonly with horizontal polarization for one eye and vertical polarization for the other, in order to convey separate images to each eye.
  • Video and computer systems use both anaglyph and polarization, as well as autostereoscopic methods of three-dimensional display based on a variety of concepts and methods which do not require a viewer to wear 3D glasses to achieve synthesis of depth information in the human eye-brain system.
  • Autostereoscopic displays rely on stereoscopic viewing of right and left images or alternating pairs of images and include lenticular displays, parallax barrier displays as well as displays based on motion parallax.
  • Printed images viewed through lenticular screens or parallax barrier screens can create not only the illusion of depth, but also motion and other specialized effects such as "flip" images, which change in appearance as a function of viewing angle.
  • PCT Publication 2006/109252 the disclosure of which is incorporated herein by reference, describes methods for increasing the illusion of depth in images.
  • This publication describes artificially moving foreground and background laterally to increase depth perception in a moving image. It also describes the use of multiple cameras with different focal lengths that are either co-located or spaced along the optical axis.
  • This publication, both in its background section and in its invention disclosure, provides a good background for understanding the present invention.
  • the present application describes new methods of manipulating visual space to form images, especially useful for various aspects of computerized graphics (CG) image generation, such as 3D enhancement or suppression, parallax changes, visual and/or recorded space distortion, etc.
  • CG computerized graphics
  • One aspect of the invention is concerned with space manipulation achieved with a series of cameras, for example, virtual cameras with different focal lengths each viewing different depth layers (or portions of layers) or other selected portions of a scene.
  • Such series of cameras are described herein as a multi-camera rig.
  • the focal lengths do not vary monotonically with distance of the object.
  • improved visual effects can be achieved by having some farther layers of the scene rendered with a shorter focal length lens than some closer layers (a minimal sketch of such a non-monotonic focal-length assignment appears at the end of this section).
  • lateral segments of the scene are imaged by different multi-camera rigs of a set of such rigs.
  • the various multi-camera rigs may be situated along a straight line or along a curve. They may all be facing in a same direction or they may be facing in different directions, toward the scene.
  • the multi-camera rigs are in accordance with the first embodiment of the invention.
  • the focal length variations for the various multi-camera rigs are different.
  • a plurality of spaced sets of cameras according to the second embodiment is used to image the scene.
  • the cameras in the sets do not all have the same optical axis.
  • the cameras are placed in a nonuniform or uniform matrix of positions.
  • the parameters of the cameras are selected to provide controllable warping of the visual space within which the objects are found. This allows for a relatively simple way to provide controlled warping of visual space.
  • a second aspect of the invention is concerned with techniques for joining together of various images produced by cameras having different focal lengths.
  • the cutting plane between layers is used to match the focal lengths of the cameras that image the depth regions on either side of the cutting plane. This methodology can also be used to match the images formed by grading the focal lengths across the depth of the scene according to the first aspect of the invention.
  • sequences of images taken with the multi-camera rigs of the invention are used to provide video or video-like sequences.
  • the objects or layers in the scene are made to move in a manner which increases the perception of depth of the scene. For example, when nearby layers move in a different direction from far layers, the perception of depth is increased.
  • the apparent velocity can be enhanced or decreased depending on the focal length distribution of the imagers in the rigs.
  • unexpected changes in velocities are configured to generate distortions in a scene that vary in time and lend the scene a seeming measure of plasticity that cannot in general be reproduced by conventional perspective imaging.
  • normal perspective vision and perspective projection map straight lines in a three dimensional scene into straight lines.
  • Velocity changes in accordance with an embodiment of the invention on the other hand often map straight lines into curved lines, which may vary in time during a presentation of a sequence of display images. The inventors have found that such challenges to the eye-brain vision system can be effective in stimulating the brain to generate a relatively strong enhanced sense of depth.
  • the layers are not necessarily flat; for example, in some embodiments of the invention the layers may be curved and have regions of different thickness. In some embodiments of the invention the layers are spherical or have a spherical region. Optionally, the layers are cylindrical or have a cylindrical region. Optionally, the layers are planar. The layers can be uniform in thickness or non-uniform.
  • Thicknesses of the layers, into which a scene is partitioned, in accordance with an embodiment of the invention may vary.
  • each layer has a uniform thickness.
  • thickness of a layer may vary.
  • in regions where velocity should vary finely with depth, layers are optionally relatively thin.
  • in regions where velocity may be allowed to be a coarse function of depth, layers are optionally thick.
  • in the limit, layers approach zero thickness, and the velocity of features in a scene, or the focal length through which they are viewed, becomes a smooth function of depth.
  • Methods in accordance with embodiments of the present invention are optionally encoded in any of various computer accessible storage media, such as a floppy disk, CD, flash memory or an ASIC for use in generating and/or displaying display images that provide enhanced depth perception, or they may be programmed in a computer.
  • Methods and apparatus in accordance with embodiments of the present invention may be used to enhance depth perception for many different and varied applications and two dimensional visual presentation formats. For example, the methods may be used to enhance depth perception for computer game images, animated movies, visual advertising and video training.
  • a multi-camera rig for imaging a scene having a plurality of depth defined layers, comprising: a plurality of imagers, each rendering one of the layers; wherein the imagers have a same optical axis and have focal lengths that vary in a manner different from a monotonic increase with distance of the layer which they image from a reference point in the multi-camera rig.
  • At least one imager that images a layer that is relatively far has a focal length shorter than an imager that images a layer that is relatively closer.
  • the focal lengths of the cameras first increase with increasing distance of the layers they image and then decrease with further increase in the distance of the layers they image.
  • the multi-camera rig comprises between 2 and 10 imagers, between 10 and 100 imagers, between 100 and 1000 imagers or more imagers.
  • At least some of the imagers are positioned at different points along the optical axis.
  • At least some of the imagers are situated at a same position.
  • at least some imagers situated at a same position have different focal lengths.
  • at least some of the layers have different thicknesses.
  • at least some of the layers have irregular shapes.
  • At least some pairs of imagers that image adjoining layers are positioned at a distance and have a focal length such that an object at the border between the layers is imaged at a same size by both imagers.
  • the scene is a computer generated (CG) scene.
  • CG computer generated
  • the imagers are virtual imagers.
  • a plurality of multi-camera rigs according to the invention situated along a line other than the optic axis of any of the rigs.
  • the line is a straight line, optionally perpendicular to the optic axis.
  • the line is a curved line, such that some of the multi-camera rigs are further from the scene than others.
  • imagers having different positions along the curved line have focal lengths adjusted such that they each provide a same field of view at a reference surface.
  • the reference surface is a border between layers.
  • the axes of the multi-camera rigs are parallel.
  • a multi-camera rig comprising a plurality of imagers having parallel optical axes distributed along a line that is not an optic axis of any of the imagers and is a curved line.
  • imagers having different positions along the curved line have focal lengths adjusted such that they each provide a same field of view at a reference surface.
  • a multi-camera rig for imaging a scene having a plurality of depth defined layers, comprising: a plurality of imagers, each rendering one of the layers; wherein the imagers have a same optical axis; and wherein at least some pairs of imagers that image adjoining layers are positioned at a distance and have a focal length such that an object at the border between the layers is imaged at a same size by both imagers.
  • a method of forming an image of a scene comprising: imaging the scene utilizing a multi-camera rig or rigs according to the invention; and superposing images of portions of the scene rendered by the imagers.
  • the method includes generating a plurality of sequential images; and showing the sequence in order to provide a moving picture.
  • the method includes moving at least some of the layers laterally with respect to other layers to enhance an illusion of depth.
  • layers that are further from the multi-camera rig move with a higher speed than layers that are closer to the rig.
  • the layers are cylindrical and the method includes moving at least some of the layers with respect to other layers about the center of the cylinders.
  • the multi-camera rig is at the center of rotation.
  • FIG. 1 schematically shows a methodology of providing an image having non-perspective distortions, in accordance with an embodiment of the present invention
  • FIG. 2 demonstrates an example of how the focal lengths and positions of the imaging units can be matched at a divider between layers, in accordance with an embodiment of the present invention
  • Figs. 3A-3D show a composite image (Fig. 3D) and the intermediate layer images (Figs. 3A-3C) for a scene, taken by three imagers at a same position;
  • Figs. 4A-4D are images corresponding to those of Figs. 3A-3D, taken by three imagers in which the imagers are configured and positioned to meet the criteria shown in Fig. 2, in accordance with an embodiment of the present invention
  • Figs. 5A and 5B compare an image taken with a normal single focal length camera and one taken with a multi-camera rig (three imagers) of an embodiment of the present invention
  • Figs. 6A-6B show the effect of increasing the number of layers and also the use of a non-monotonic focal length distribution, in accordance with an embodiment of the invention
  • Figs. 7A-7D illustrate the effect of using a monotonic focal length distribution as opposed to using a distribution of focal lengths that emphasizes the middle ground in accordance with an embodiment of the invention
  • Fig. 8 shows cameras or multi-camera rigs distributed along a curve.
  • Figs. 9a-9h show the effect of using different imaging directions for multi-camera rigs.
  • the present invention is generally concerned with generation of images, especially CG images, utilizing virtual cameras.
  • Fig. 1 schematically shows a methodology of providing an image having non- perspective distortions.
  • a scene 100 (such as a CG scene) is divided into layers 102a, 102b and 102c spaced apart by a series of virtual dividers 104a and 104b. While, for ease of rendering, the objects (except for the path) are depicted as being entirely within one layer, the present application is applicable to rendering of objects that cross boundaries between layers. Furthermore, while for ease of discussion only three layers are shown, in a practical situation many more layers, up to a continuum of layers, are used. In practical systems 2, 10, 100, 1000 or more images or intermediate imagers are used.
  • Multi-camera rig 106 is made up of a plurality of imaging units 106a, 106b and 106c, the focal lengths of which vary. Three imaging units, which render layers 102a, 102b and 102c respectively, are shown. However, where the scene is divided into a greater number of layers, the number of imaging units will be correspondingly increased.
  • the imaging units are placed at different positions along a common optical axis 108, although two or more (or even all the) imaging units could have a same position.
  • the positions and focal lengths of the imaging units and their placement along the optic axis depend on the desired effect and on whether continuity conditions have to be met at the dividers, i.e., between the layers.
  • Fig. 2 demonstrates an example of how the focal lengths and positions of the imaging units can be matched at a divider 104a.
  • an object at the divider should have the same dimensions when viewed by either imaging unit (i.e., the units that image the layers on both sides of the divider). This object is designated by its height h, where its height (h1) as measured in layer 102a is equal to its height (h2) as measured in layer 102b.
  • This matching criterion is the same whether the longer focal length imaging unit is used to image the nearer or the farther layer, and the positioning will be the same as well. This matching criterion is preferably met at all the dividers (a numeric sketch of the matching condition appears at the end of this section).
  • this continuity condition does not necessarily apply when the objects are in only a single layer.
  • if a divider does occasionally cross an object, then one way to avoid a continuity problem is to bend the divider such that the object is in one layer only.
  • Nearby associated objects are preferably imaged in the same layer to avoid visual anomalies.
  • the continuity condition can be relaxed, especially if the changes in focal length from layer to layer are relatively small. If the focal lengths vary within an object, then the object is generally distorted; however, if the layers are thin, there will be little or no stepping, even if the continuity condition called for above is not met. Thus, for example, it is possible for some or all of the imaging units (each with a different focal length) to be at the same point. In this case, the continuity condition cannot be met. However, if there are a large number of imaging units, then the image will look smooth. This can allow for sculpting of space.
  • objects that cross a large number of layers, such as a street, may be warped.
  • Figs. 3a-3d show a composite image (Fig. 3d) and the intermediate layer images (3a-3c) for scene 100, taken by three imagers at the same position (a compositing sketch appears at the end of this section).
  • in Fig. 3d there is a troubling discontinuity in the road, which is present in all the layers.
  • the other objects which are in a single layer are not visually disturbing.
  • Figs. 4a-4d are corresponding images taken by three imagers in which the imagers are configured and positioned to meet the criteria shown in Fig. 2. As can be seen, there are no discontinuities.
  • Figs. 5a and 5b compare an image taken with a normal single focal length camera (5a) and one taken with a multi-camera rig (5b-three imagers) of an embodiment of the present invention.
  • the relative distances of the far objects are reduced. It is noted that this variation is caused by the use of a wide angle lens for the near layer, a normal lens for the middle layer and a long focal length lens for the far distances. If this order were reversed, then the person would seem to be much closer and the mountains would recede.
  • the technique of the invention allows for a wide variety of effects, especially if there are many layers and cameras, providing great freedom to the animator for changing perceptions by warping space in a convenient manner.
  • Figs. 6a and 6b show the effect of a reduced number of layers.
  • Fig. 6a is an image of a test set of objects taken with three virtual cameras (the focal lengths as a function of distance are shown in Fig. 6b).
  • Figs. 7a (7b is the distribution of focal lengths) and 7c (7d is the distribution of focal lengths) illustrate the effect of using a monotonic focal length distribution as opposed to using a distribution of focal lengths that emphasizes the middle ground.
  • Fig. 8 shows a plurality of cameras distributed on a curve.
  • Each of the cameras has a different focal length and all are facing in a same direction, toward the scene.
  • each camera should encompass a same distance h at a matching plane (which may itself be curved); a field-of-view sketch of this adjustment appears at the end of this section.
  • the distance h is greater than the distance between the cameras so that the fields of view overlap. This may make it easier to splice the images and to blur any remaining differences. It is also possible to exactly match the distances h to the distance between the cameras, but this could not be applied in more than one plane.
  • each of cameras 106 is a multi-camera rig as described above.
  • the imagers in each rig are matched not only to the laterally displaced imagers (as described with respect to Fig. 8), but preferably should also be matched at each virtual divider 104, both with respect to the lateral rigs and also with respect to the imagers in the same rig, as described above with respect to Fig. 2. It should be understood that while placement of the cameras or multi-camera rigs along a curved line as shown in Fig. 8 is not absolutely necessary, it is desirable, since it allows for varying the focal length across the scene as well as with depth. This allows for changing apparent velocities or sizes across the scene and for warping space differently across the scene. For example, a moving object on the edges of the scene would appear to move across the image more quickly than the same motion in the center. While a simple curve is shown, other, more complex curves can be used to sculpt the visual space in any desired way.
  • depth illusion can be generated/heightened by moving the layers laterally with respect to the camera axis.
  • the relative motion of elements in a scene or of layers in a scene is varied as a function of their depth or as a function of their distance with respect to some point in the scene (a per-layer offset sketch appears at the end of this section).
  • a computer-generated (or at least partly computer generated or computer manipulated) video sequence of a scene is created in which the camera moves in any direction or combination of directions or other motion (horizontal, vertical, diagonal, rotational, zoom or other) with respect to at least one element in the scene.
  • the relative motion of other elements in the scene is such that at least some of those other elements, whether nearer to or more distant from the camera than the first element, move at a higher rate of relative motion than their respective distances from the first element would dictate if the objects of the scene were in fixed relative positions to one another.
  • the layers are cylindrical and the movement of the layers is rotational around a center, preferably with the camera at the center (a rotation sketch appears at the end of this section).
  • the center of rotation is an object of interest in the scene. It will be appreciated that in the case of normal perspective views of a scene, whether filmed with a camera or generated by computer, the relative positions of inanimate elements of the scene do not change with camera motion and therefore the radius of a circle defined by those elements does not change, irrespective of such camera motion. Though not wishing to be bound by any particular example or theory, the inventors believe that it is this exaggerated relative motion of background or foreground (or middleground) elements that creates the enhanced sense of depth perception of video sequences produced in accordance with the invention. It is noted that such motion can be exaggerated or diminished by changing the focal length of a camera that renders the layer in question. While linear motion proportional to the distance as from a central point has been reported in PCT Publication 2006/109252, more complex non-linear movements can result in effects that cannot easily be otherwise simulated.
  • Figs. 9a-9h show the effect of using different imaging directions for multi-camera rigs of the invention.
  • Fig. 9a shows the focal length of cameras as a function of scene depth for Figs. 9b-9d. While the effect of different views is evident in these figures, it is clear that Figs. 9b-9d are the same scene imaged from different directions.
  • Fig. 9e shows focal length of cameras as a function of scene depth for Figs. 9f-9h, corresponding to the views of Figs. 9b-9d.
  • These images illustrate the warping abilities that can be achieved with the methods of the present invention. It should be further understood that the individual features described hereinabove can be combined in all possible combinations and sub-combinations to produce exemplary embodiments of the invention.
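
A minimal Python sketch of the non-monotonic focal-length assignment referenced above, assuming a simple piecewise-linear profile that peaks at the middle layers in the spirit of the distributions of Figs. 7c-7d. The layer count and millimetre values are invented for the example; this is not the patent's own code.

```python
# Illustrative non-monotonic focal-length assignment across depth layers,
# peaking at the middle ground. All numeric values are invented.

def focal_length_for_layer(i, n, f_near=24.0, f_peak=85.0, f_far=50.0):
    """Focal length (mm) for layer i of n (0 = nearest): rises toward the
    middle layers and falls again for the far ones, so it does NOT
    increase monotonically with depth."""
    mid = (n - 1) / 2.0
    if mid == 0:
        return f_peak
    if i <= mid:
        return f_near + (i / mid) * (f_peak - f_near)
    return f_peak + ((i - mid) / mid) * (f_far - f_peak)

for i in range(7):
    print(f"layer {i}: f = {focal_length_for_layer(i, 7):.1f} mm")
```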
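
The divider-matching condition of Fig. 2 can be made concrete under a pinhole-camera assumption: an imager of focal length f at axial position p images an object at the divider (axial position D) at a size proportional to f / (D - p), so equal sizes on both sides of the divider require f1 / (D - p1) = f2 / (D - p2). The helper below, with invented numbers, solves this for the second imager's position.

```python
# Pinhole-model sketch of the Fig. 2 continuity condition: find where the
# second imager must sit so an object at the divider is imaged at the same
# size by both imagers. All numbers are illustrative.

def matching_position(f1, p1, f2, divider):
    """Position p2 of the imager with focal length f2 such that
    f1 / (divider - p1) == f2 / (divider - p2)."""
    if divider <= p1:
        raise ValueError("the divider must lie in front of imager 1")
    return divider - f2 * (divider - p1) / f1

# A 50 mm imager at the origin and an 85 mm imager matched at a divider
# 10 m down the optical axis: the longer lens must sit behind the shorter.
p2 = matching_position(f1=50.0, p1=0.0, f2=85.0, divider=10.0)
print(f"85 mm imager position: {p2:.2f} m")  # -7.00 m
```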
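
For cameras distributed along a curve (Fig. 8), the same pinhole assumption yields the field-of-view adjustment mentioned above: a camera with sensor width w and focal length f spans a width (w / f) * d at distance d, so holding the spanned width h fixed at the matching plane gives f = w * d / h. A sketch with invented values:

```python
# Sketch of field-of-view matching for cameras on a curve: each camera's
# focal length is scaled with its distance to the matching plane so that
# all of them span the same width h there. Values are illustrative.

def focal_length_for_fov(distance_to_plane, span_h, sensor_w=0.036):
    """Focal length (same units as sensor_w, here metres) such that the
    camera spans span_h at the matching plane."""
    return sensor_w * distance_to_plane / span_h

# Cameras on the curve sit at different distances from the matching plane;
# each gets a focal length that spans the same h = 2 m.
for d in (8.0, 9.0, 10.5):
    f_mm = 1000 * focal_length_for_fov(d, span_h=2.0)
    print(f"d = {d:4.1f} m -> f = {f_mm:.0f} mm")
```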
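
Superposing the per-layer renders into a single display image (as in Figs. 3a-3d and 4a-4d) amounts to ordinary back-to-front "over" compositing. A NumPy sketch, assuming each imager outputs an RGBA array that is transparent outside its own layer; the function name and array layout are this example's, not the patent's.

```python
import numpy as np

def composite_layers(layer_images):
    """Composite per-layer RGBA renders back to front ("over" operator).

    layer_images: sequence of float arrays of shape (H, W, 4), RGBA in
    [0, 1], ordered farthest layer first. Returns premultiplied RGBA.
    """
    out = np.zeros_like(layer_images[0])
    for img in layer_images:                      # far -> near
        alpha = img[..., 3:4]
        out[..., :3] = img[..., :3] * alpha + out[..., :3] * (1.0 - alpha)
        out[..., 3:4] = alpha + out[..., 3:4] * (1.0 - alpha)
    return out

# Example: three dummy 2x2 layers (far, middle, near).
layers = [np.random.rand(2, 2, 4).astype(np.float32) for _ in range(3)]
print(composite_layers(layers).shape)  # (2, 2, 4)
```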
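
The per-layer lateral motion described above can be driven by a velocity table in which speed is deliberately not proportional to depth, and nearby and far layers may even move in opposite directions (which the text says strengthens the depth cue). A sketch, with all velocities invented:

```python
# Per-layer lateral offsets for a video sequence. The velocity table is
# hypothetical; note that it is intentionally NOT proportional to depth.

LAYER_VELOCITY_PX_PER_FRAME = {
    "near":   +3.0,
    "middle": -0.5,   # middle ground nearly static
    "far":    -2.0,   # far layer moves against the near one
}

def layer_offset(layer_name, frame):
    """Lateral offset (pixels) of a layer at a given frame."""
    return LAYER_VELOCITY_PX_PER_FRAME[layer_name] * frame

for frame in (0, 12, 24):
    offsets = {k: layer_offset(k, frame) for k in LAYER_VELOCITY_PX_PER_FRAME}
    print(frame, offsets)
```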
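
Finally, for the cylindrical-layer variant, each layer is a cylinder concentric with the camera, and each rotates about that common centre at its own angular speed. A geometric sketch, with radii and angular speeds invented:

```python
import math

# Each cylindrical layer (keyed by its radius) rotates about the camera at
# the origin with its own angular speed; points on a layer keep their
# radius and simply advance in angle. Radii and speeds are illustrative.

LAYER_OMEGA = {10.0: 0.020, 50.0: -0.005, 200.0: 0.001}  # radius -> rad/frame

def advance_layers(points_by_radius, frame):
    """points_by_radius: {radius: [(x, z), ...]}; returns rotated points."""
    moved = {}
    for r, pts in points_by_radius.items():
        theta = LAYER_OMEGA[r] * frame
        c, s = math.cos(theta), math.sin(theta)
        moved[r] = [(c * x - s * z, s * x + c * z) for (x, z) in pts]
    return moved

print(advance_layers({10.0: [(10.0, 0.0)]}, frame=30))
```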

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Studio Devices (AREA)
  • Stereoscopic And Panoramic Photography (AREA)

Abstract

A multi-camera rig for imaging a scene having a plurality of depth-defined layers. The rig comprises a plurality of imagers, each of which renders one of the layers. The imagers have the same optical axis and focal lengths that vary in a manner different from a monotonic increase with the distance of the imaged layer from a reference point in the multi-camera rig.
PCT/IL2007/000936 2006-07-25 2007-07-25 Imagerie pour infographie WO2008012821A2 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US82023806P 2006-07-25 2006-07-25
US60/820,238 2006-07-25

Publications (2)

Publication Number Publication Date
WO2008012821A2 true WO2008012821A2 (fr) 2008-01-31
WO2008012821A3 WO2008012821A3 (fr) 2008-03-13

Family

ID=38819824

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IL2007/000936 WO2008012821A2 (fr) 2006-07-25 2007-07-25 Imagerie pour infographie

Country Status (1)

Country Link
WO (1) WO2008012821A2 (fr)

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
BRADLEY A ET AL: "Virtual Microscopy with Extended Depth of Field" DIGITAL IMAGE COMPUTING: TECHNIQUES AND APPLICATIONS, 2005. DICTA '05. PROCEEDINGS DEC. 2005, PISCATAWAY, NJ, USA, IEEE, 20 December 2005 (2005-12-20), pages 235-242, XP010879453 ISBN: 0-7695-2774-4 *
FEHN C ET AL: "3D analysis and image-based rendering for immersive TV applications" SIGNAL PROCESSING. IMAGE COMMUNICATION, ELSEVIER SCIENCE PUBLISHERS, AMSTERDAM, NL, vol. 17, no. 9, October 2002 (2002-10), pages 705-715, XP004388881 ISSN: 0923-5965 *
SENOH T ET AL: "Space-Sampling Method for 3D Cinemas" COMPUTER VISION AND PATTERN RECOGNITION WORKSHOP, 2006 CONFERENCE ON NEW YORK, NY, USA 17-22 JUNE 2006, PISCATAWAY, NJ, USA,IEEE, 17 June 2006 (2006-06-17), pages 170-170, XP010922686 ISBN: 0-7695-2646-2 *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8358332B2 (en) 2007-07-23 2013-01-22 Disney Enterprises, Inc. Generation of three-dimensional movies with improved depth control
US9094674B2 (en) 2007-07-23 2015-07-28 Disney Enterprises, Inc. Generation of three-dimensional movies with improved depth control
WO2016198242A1 (fr) * 2015-06-09 2016-12-15 Kapsch Trafficcom Ag Dispositif de détection de véhicules dans une zone de trafic
WO2019014845A1 (fr) * 2017-07-18 2019-01-24 辛特科技有限公司 Procédé de synthèse de champ lumineux à base de prismes
CN111194430A (zh) * 2017-07-18 2020-05-22 辛特科技有限公司 一种基于棱镜合成光场的方法
CN111194430B (zh) * 2017-07-18 2021-10-26 辛特科技有限公司 一种基于棱镜合成光场的方法
CN113347349A (zh) * 2021-04-30 2021-09-03 深圳英飞拓智能技术有限公司 一种全景成像系统及其控制方法

Also Published As

Publication number Publication date
WO2008012821A3 (fr) 2008-03-13

Similar Documents

Publication Publication Date Title
US20090066786A1 (en) Depth Illusion Digital Imaging
US8646917B2 (en) Three dimensional display with multiplane image display elements
JP5465430B2 (ja) オートステレオスコピック視域の角度範囲の制御
US8953023B2 (en) Stereoscopic depth mapping
EP1143747B1 (fr) Traitement d'images pour affichage autostéréoscopique
EP2188672B1 (fr) Generation de films en 3d avec un controle de profondeur ameliore
KR101629479B1 (ko) 능동 부화소 렌더링 방식 고밀도 다시점 영상 표시 시스템 및 방법
US9438886B2 (en) Parallax scanning methods for stereoscopic three-dimensional imaging
Lipton et al. New autostereoscopic display technology: the synthaGram
WO2014144989A1 (fr) Affichages et procédés à champ lumineux 3d à angle de visualisation, profondeur et résolution améliorés
US20110141246A1 (en) System and Method for Producing Stereoscopic Images
Bourke Synthetic stereoscopic panoramic images
WO2008012821A2 (fr) Imagerie pour infographie
US20050062742A1 (en) Method and system for 3-D object modeling
EP1875306A2 (fr) Imagerie numerique donnant une illusion de profondeur
Ogawa et al. Swinging 3D lamps: a projection technique to convert a static 2D picture to 3D using wiggle stereoscopy
Date et al. Luminance profile control method using gradation iris for autostereoscopic 3D displays
WO2009109804A1 (fr) Procédé et appareil de traitement d'image
CN103969836A (zh) 一种用于多视点自由立体显示器的视角扩展方法
Ebisu et al. Realization of electronic 3D display combining multiview and volumetric solutions
Date et al. 66.3: Invited Paper: Smooth Motion Parallax Autostereoscopic 3D Display Using Linear Blending of Viewing Zones
Makiguchi et al. 21‐1: Reducing Image Quality Variation with Motion Parallax for Glassless 3D Screens using Linear Blending Technology
Cohen et al. A multiuser multiperspective stereographic QTVR browser complemented by java3D visualizer and emulator
US11601633B2 (en) Method for optimized viewing experience and reduced rendering for autostereoscopic 3D, multiview and volumetric displays
Boev et al. GPU-based algorithms for optimized visualization and crosstalk mitigation on a multiview display

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 07789991

Country of ref document: EP

Kind code of ref document: A2

NENP Non-entry into the national phase in:

Ref country code: DE

NENP Non-entry into the national phase in:

Ref country code: RU

122 Ep: pct application non-entry in european phase

Ref document number: 07789991

Country of ref document: EP

Kind code of ref document: A2