WO2006003577A1 - Creation d'une carte de profondeur - Google Patents

Creation d'une carte de profondeur

Info

Publication number
WO2006003577A1
WO2006003577A1
Authority
WO
WIPO (PCT)
Prior art keywords
depth
pixels
group
image
background
Prior art date
Application number
PCT/IB2005/052094
Other languages
English (en)
Inventor
Peter-Andre Redert
Bartolomeus W. D. Van Geest
Original Assignee
Koninklijke Philips Electronics N.V.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Koninklijke Philips Electronics N.V. filed Critical Koninklijke Philips Electronics N.V.
Publication of WO2006003577A1 publication Critical patent/WO2006003577A1/fr

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery

Definitions

  • the invention relates to a method of generating a depth map comprising depth values representing distances to a viewer, for respective pixels of an image.
  • the invention further relates to a depth map generating unit for generating a depth map comprising depth values representing distances to a viewer, for respective pixels of an image.
  • the invention further relates to an image processing apparatus comprising: receiving means for receiving a signal corresponding to an image; and such a depth map generating unit for generating a depth map.
  • the invention further relates to a computer program product to be loaded by a computer arrangement, comprising instructions to generate a depth map comprising depth values representing distances to a viewer, for respective pixels of an image, the computer arrangement comprising processing means and a memory.
  • the method comprises: - segmenting the image into at least one group of pixels corresponding to a foreground object and a further group of pixels corresponding to background; assigning a first group of depth values corresponding to the further group of pixels on basis of a predetermined background depth profile; assigning a second group of depth values corresponding to the at least one group of pixels on basis of a predetermined foreground depth profile, whereby the assigning of the second group of depth values is based on a particular depth value of the background depth profile, the particular depth value belonging to a particular pixel which is located at a predetermined location relative to the at least one group of pixels.
  • the rationale behind the invention is that for most natural images, i.e. images captured by means of a camera, the depth values can be fitted relatively well to a predetermined model.
  • the model comprises a background and foreground objects.
  • the background can form one or more objects, e.g. the sky, a road, a sea or a meadow.
  • the background extends over a relatively large part of the image.
  • the background is modeled by means of a background depth profile. This background depth profile corresponds to a surface description.
  • the background in an image corresponds to a horizontally oriented surface in world coordinates. That means that, because of the perspective projection, there is a spatial relation between pixel coordinates in an image and corresponding depth values.
  • depth values are assigned on basis of the background depth profile. That means that the coordinates of the pixels and their position relative to the background depth profile are used to determine the depth values corresponding to these pixels.
  • the foreground objects are modeled by means of a foreground depth profile. Because of gravity, most objects are vertically oriented in world coordinates. That means that, typically foreground objects appearing in an image can be fitted relatively well with a predetermined foreground depth profile which is based on that assumption.
  • depth values are assigned on basis of the foreground depth profile. That means that the coordinates of the pixels and their position relative to the foreground depth profile are used to determine the depth values corresponding to these pixels.
  • the actual depth values to be used for assignment to pixels corresponding to foreground objects are based on the position of the foreground objects relative to the background.
  • foreground objects are connected to the background.
  • an object like a car is standing on the ground. That means that from a particular viewpoint the depth values of the car are substantially equal to the depth value of the part of the ground on which the car is standing.
  • a lamp is hanging from the ceiling. That means that from a particular viewpoint the depth values of the lamp are substantially equal to the depth value of the part of the ceiling to which the lamp is directly connected.
  • the background depth profile corresponds to an increasing function, whereby a relatively low depth value is assigned to a first one of the pixels of the further group of pixels which is located at a first border of the image.
  • a relatively low depth value means that the corresponding pixel is relatively close to the viewer, while a relatively high depth value means that the corresponding pixel is relatively far away from the viewer.
  • This background depth profile is a relatively simple profile which is appropriate to model the background in many images.
  • a relatively high depth value is assigned to a second one of pixels of the further group of pixels which is located at a relatively large distance from the first one of the pixels, e.g. in the middle of the image.
  • a relatively high depth value is assigned to a second one of pixels of the further group of pixels which is located at a second border of the image.
  • the first border corresponds to the bottom of the image and the second border corresponds to the top of the image.
  • This depth profile corresponds to a substantially horizontally oriented surface.
  • the particular pixel is located below the at least one group of pixels. This corresponds with a quite natural situation that an object is standing on something else, e.g. the ground.
  • the first border corresponds to the top of the image and the second border corresponds to the bottom of the image.
  • This depth profile also corresponds to a substantially horizontally oriented surface, like a ceiling.
  • the particular pixel is located above the at least one group of pixels. This corresponds with a quite natural situation that an object is hanging on something else, e.g. the ceiling.
  • the foreground depth profile corresponds to a further function which increases less steeply than the increasing function of the background depth profile.
  • the method according to the invention is not limited to this. It is advantageous, e.g. for scaling depth values into a range of possible depth values which can be visualized by a certain display device, to apply alternative depth profiles.
  • the two depth profiles have a relation as described in Claim 8. The effect of this is that the differences in depth values of consecutive pixel pairs located at the border of a foreground object are increasing.
  • a first difference in depth values for a first pixel pair which is located adjacent to the particular pixel of claim 1, comprising a first pixel belonging to the foreground object and its neighboring pixel belonging to the background, is relatively low.
  • a second difference in depth values for a second pixel pair which is located relatively far away from the particular pixel of claim 1, comprising a second pixel belonging to the foreground object and its neighboring pixel belonging to the background, is relatively high.
  • the generating unit comprises: segmentation means for segmenting the image into at least one group of pixels corresponding to a foreground object and a further group of pixels corresponding to background; - first assigning means for assigning a first group of depth values corresponding to the further group of pixels on basis of a predetermined background depth profile; second assigning means for assigning a second group of depth values corresponding to the at least one group of pixels on basis of a predetermined foreground depth profile, whereby the assigning of the second group of depth values is based on a particular depth value of the background depth profile, the particular depth value belonging to a particular pixel which is located at a predetermined location relative to the at least one group of pixels. It is a further object of the invention to provide an image processing apparatus comprising a depth map generating unit of the kind described in the opening paragraph which is arranged to generate a depth map based on a new depth cue.
  • the generating unit comprises: - segmentation means for segmenting the image into at least one group of pixels corresponding to a foreground object and a further group of pixels corresponding to background; first assigning means for assigning a first group of depth values corresponding to the further group of pixels on basis of a predetermined background depth profile; - second assigning means for assigning a second group of depth values corresponding to the at least one group of pixels on basis of a predetermined foreground depth profile, whereby the assigning of the second group of depth values is based on a particular depth value of the background depth profile, the particular depth value belonging to a particular pixel which is located at a predetermined location relative to the at least one group of pixels.
  • the computer program product after being loaded, provides said processing means with the capability to carry out: - segmenting the image into at least one group of pixels corresponding to a foreground object and a further group of pixels corresponding to background; assigning a first group of depth values corresponding to the further group of pixels on basis of a predetermined background depth profile; assigning a second group of depth values corresponding to the at least one group of pixels on basis of a predetermined foreground depth profile, whereby the assigning of the second group of depth values is based on a particular depth value of the background depth profile, the particular depth value belonging to a particular pixel which is located at a predetermined location relative to the at least one group of pixels.
  • Fig. 1 schematically shows an image and the corresponding depth map being generated with the method according to the invention
  • Fig. 2 schematically shows another image and the corresponding depth map being generated with the method according to the invention
  • Figs. 3 A and 3B schematically show results of segmentation
  • Fig. 4 schematically shows three depth profiles in one direction
  • Fig. 5 schematically shows a multi-view image generation unit comprising a depth map generation unit according to the invention.
  • Fig. 6 schematically shows an embodiment of the image processing apparatus according to the invention.
  • Fig. 1 schematically shows an image 100 and the corresponding depth map 122 being generated with the method according to the invention.
  • Fig. 1 shows an image 100 representing an object, i.e. a car, and shows the ground on which the car is standing.
  • the image 100 is segmented into a first group of pixels 104 corresponding to the object and a second group of pixels 102 corresponding to the background. In this case no further objects are present in the image 100. That means that each pixel of the image 100 belongs to either the first group of pixels 104 or to the second group of pixels 102.
  • Fig. 1 schematically shows a predetermined background depth profile 110.
  • the gray values of the background depth profile 110 correspond to depth values.
  • the background depth profile 110 corresponds to a monotonically increasing function in a first direction, i.e. the vertical direction.
  • the increasing function is such that a relatively low depth value is assigned to pixels which are located at the bottom border of the image 100 and that a relatively high depth value is assigned to pixels which are located at the top border of the image 100.
  • the background depth profile 110 also corresponds to a constant function in a second direction, i.e. the horizontal direction.
  • the constant function is such that horizontally neighboring pixels will be assigned mutually equal depth values.
  • Fig. 1 also shows a predetermined foreground depth profile 112.
  • the gray values of the foreground depth profile 112 correspond to depth values.
  • the foreground depth profile 112 corresponds to a constant function in two orthogonal directions. That means that all depth values will be mutually equal.
  • This actual depth value is determined on basis of the predetermined background depth profile 110 and a particular pixel 106 which is located in the image 100 below the first group of pixels 104 which correspond to the car.
  • in Fig. 1 it is indicated that the actual depth value is derived from the background depth profile 110 by taking a sample 108 from the background depth profile 110 on basis of the coordinates of the particular pixel 106.
  • all depth values of the predetermined foreground depth profile 112 are equal to the depth value of the sample 108 of the predetermined background profile 110.
  • Fig. 1 schematically shows that on basis of the segmentation and the predetermined background depth profile 110 the second group of pixels 102 are assigned appropriate depth values.
  • the assigned depth values corresponding to the background are referred to with reference number 114.
  • Fig. 1 schematically shows that on basis of the segmentation and the predetermined foreground depth profile 112 the first group of pixels 104 are assigned appropriate depth values.
  • the assigned depth values corresponding to the car are referred to with reference number 116.
  • Fig. 1 schematically shows the final combination of the assignment of depth values to the respective pixels of the image, i.e. the final depth map 122.
  • Fig. 2 schematically shows another image 200 and the corresponding depth map 222 being generated with the method according to the invention.
  • Fig. 2 shows an image 200 representing an object, i.e. a lamp, and shows the ceiling from which the lamp is hanging.
  • the image 200 is segmented into a first group of pixels 204 corresponding to the object and a second group of pixels 202 corresponding to the background. In this case no further objects are present in the image 200. That means that each pixel of the image 200 belongs to either the first group of pixels 204 or to the second group of pixels 202.
  • Fig. 2 schematically shows a predetermined background depth profile 210.
  • the gray values of the background depth profile 210 correspond to depth values.
  • the background depth profile 210 corresponds to a monotonically decreasing function in a first direction, i.e. the vertical direction.
  • the decreasing function is such that a relatively high depth value is assigned to pixels which are located at the bottom border of the image 200 and that a relatively low depth value is assigned to pixels which are located at the top border of the image 200.
  • the background depth profile 210 also corresponds to a constant function in a second direction, i.e. the horizontal direction.
  • the constant function is such that horizontally neighboring pixels will be assigned mutually equal depth values.
  • Fig. 2 also shows a predetermined foreground depth profile 212.
  • the gray values of the foreground depth profile 212 correspond to depth values.
  • the foreground depth profile 212 corresponds to a constant function in two orthogonal directions. That means that all depth values will be mutually equal.
  • This actual depth value is determined on basis of the predetermined background depth profile 210 and a particular pixel 206 which is located in the image 200 above the first group of pixels 204 which correspond to the lamp.
  • the actual depth value is derived from the background depth profile 210 by taking a sample 208 from the background depth profile 210 on basis of the coordinates of the particular pixel 206.
  • all depth values of the predetermined foreground depth profile 212 are equal to the depth value of the sample 208 of the predetermined background profile 210.
  • Fig. 2 schematically shows that on basis of the segmentation and the predetermined background depth profile 210 the second group of pixels 202 are assigned appropriate depth values. In Fig. 2 the assigned depth values corresponding to the background are referred to with reference number 214.
  • Fig. 2 schematically shows that on basis of the segmentation and the predetermined foreground depth profile 212 the first group of pixels 204 are assigned appropriate depth values.
  • the assigned depth values corresponding to the lamp are referred to with reference number 216.
  • Fig. 2 schematically shows the final combination of the assignment of depth values to the respective pixels of the image, i.e. the final depth map 222.
  • Figs. 3A and 3B schematically show results of segmentation. Segmentation is an image processing process whereby the pixels of the image are classified and assigned to one of a plurality of groups of pixels, i.e. segments. The segmentation is performed on basis of pixel values. By pixel values, color and/or luminance values are meant. Typically, such a group of pixels is surrounded by a contour. There are several known techniques in the field of image processing for determining segments and/or contours. They can e.g. be determined by means of edge detection, homogeneity calculation, or temporal filtering. Contours can be open or closed.
  • Figs. 3A and 3B schematically show images and contours which are found on basis of edge detection in the images. Detecting edges might be based on spatial high-pass filtering of individual images. However, the edges are preferably detected on basis of mutually comparing multiple images, in particular computing pixel value differences of subsequent images of the sequence of video images.
  • the pixel value differences E(x, y,n) are computed on basis of color values:
  • in Equation 3 a further alternative is given for the computation of the pixel value differences:
  • E(x,y,n) = max(|R(x,y,n) - R(x,y,n-1)|, |G(x,y,n) - G(x,y,n-1)|, |B(x,y,n) - B(x,y,n-1)|) (3)
  • the pixel value difference signal E is filtered by clipping all pixel value differences which are below a predetermined threshold to a constant, e.g. zero (see the first code sketch after the figure descriptions below).
  • a morphologic filter operation is applied to remove all spatially small edges.
  • Morphologic filters are common non-linear image processing units. See for instance the article "Low-level image processing by max-min filters" by P.W. Verbeek, H.A. Vrooman and L.J. van Vliet, in "Signal Processing", vol. 15, no. 3, pp. 249-258, 1988.
  • Edge detection might also be based on motion vector fields. That means that regions in motion vector fields having a relatively large motion vector contrast are detected. These regions correspond with edges in the corresponding image.
  • the edge detection unit is also provided with pixel values, i.e. color and/or luminance values of the video images. Motion vector fields are e.g.
  • Fig. 3A schematically shows an image 300 comprising a first segment 304 corresponding to background and a second segment 302 corresponding to an object which is located in front of the background.
  • the second segment is surrounded by a closed contour 306.
  • This contour is located on an edge of the first segment 304, i.e. on the border between the first segment 304 and the second segment 302.
  • with a closed contour it is relatively easy to determine which pixels belong to the second segment 302 and which pixels do not.
  • the group of pixels which are inside the contour 306 belong to the second segment 302.
  • the other group of pixels 304 which are located outside the contour 306 do not belong to the second segment 302.
  • a particular pixel 308 is located at a predetermined location relative to the second segment 302, for instance below the second segment 302.
  • Fig. 3B shows an image 310 in which an open contour 312 is drawn.
  • This contour is located on an edge of the first segment, i.e. on the border between the first segment and a second segment.
  • with an open contour it is not straightforward to determine which pixels belong to the first segment and which do not.
  • An option to deal with this issue is closing the contour, which is found on basis of edge detection, by connecting the two endpoints of the open contour. In Fig. 3B this is indicated with a line-segment with reference number 318 (see the second code sketch after the figure descriptions below).
  • a particular pixel 314 is located (e.g.) below the line segment 318.
  • the particular pixel 316 is located (e.g.) below a first one of the end points of the open contour or the particular pixel 320 is located (e.g.) below a second one of the end points of the open contour.
  • Fig. 4 schematically shows three depth profiles 400-402 in one direction.
  • the x-axis 406 corresponds to a spatial dimension, i.e. direction. In this case, going from left to right in Fig. 4 corresponds to a vertical direction from bottom to top in an image 100.
  • the y-axis corresponds to depth.
  • a first one of the depth profiles 402 as depicted in Fig. 4 corresponds to the predetermined background depth profile 110 as described in connection with Fig. 1.
  • the function of the first one of the depth profiles 402 is specified in Equation 1.
  • D(p) = c + a · y(p) (1)
  • the depth value D(p) for pixel p is equal to the product of a constant a and the y-coordinate of the pixel y(p), added to a constant c.
  • a second one of the depth profiles 400 as depicted in Fig. 4 corresponds to the predetermined foreground depth profile 112 as described in connection with Fig. 1.
  • the function of the second one of the depth profiles 400 is specified in Equation 2.
  • D(p) = c (2)
  • the depth value D(p) for pixel p is equal to a constant c (see the third code sketch after the figure descriptions below).
  • the foreground depth profile 112 having a single value represents a vertically oriented object in world coordinates.
  • a third depth profile 401 is depicted which is appropriate to be used as foreground depth profile in combination with the first one of the profiles 402 as background depth profile. It can be clearly seen that by using these two depth profiles 402, 401 an important aspect of the invention is still fulfilled. The effect of this is that the differences in depth values of consecutive pixel pairs located at the border of a foreground object are increasing. For instance, a first difference 408 in depth values for a first pixel pair which is located adjacent to the particular pixel 106, comprising a first pixel belonging to the foreground object and its neighboring pixel belonging to the background, is relatively low. However, a second difference in depth values for a second pixel pair which is located relatively far away from the particular pixel 106, comprising a second pixel belonging to the foreground object and its neighboring pixel belonging to the background, is relatively high.
  • the dynamic range of the resultant depth map can be adjusted with a.
  • the numbers b and e regulate the trade-off between background and foreground profiles. Typical values are in between 0.1 and 10. Possibly, e can be chosen differently for vertically and horizontally neighboring pixel pairs pq.
  • the infinities in (4) and (6) are typically implemented by large numbers, e.g. 100-1000.
  • Fig. 5 schematically shows a multi-view image generation unit 500 comprising a depth map generation unit 501 according to the invention.
  • the multi-view image generation unit 500 is arranged to generate a sequence of multi-view images on basis of a sequence of video images.
  • the multi-view image generation unit 500 is provided with a stream of video images at the input connector 508 and provides two correlated streams of video images at the output connectors 510 and 512, respectively. These two correlated streams of video images are to be provided to a multi-view display device which is arranged to visualize a first series of views on basis of the first one of the correlated streams of video images and to visualize a second series of views on basis of the second one of the correlated streams of video images. If a user, i.e. a viewer, observes the first series of views with the left eye and the second series of views with the right eye, a 3D impression is perceived.
  • the first one of the correlated streams of video images corresponds to the sequence of video images as received, and the second one of the correlated streams of video images is rendered on basis of the sequence of video images as received.
  • both streams of video images are rendered on basis of the sequence of video images as received.
  • the rendering is e.g. as described in the article "Synthesis of multi viewpoint images at non-intermediate positions" by P. A. Redert, E. A. Hendriks, and J. Biemond, in Proceedings of International Conference on Acoustics, Speech, and Signal Processing, Vol.
  • the multi-view image generation unit 500 comprises: a depth map generation unit 501 for generating depth maps for the respective input images on basis of the transitions in the image; and a rendering unit 506 for rendering the multi-view images on basis of the input images and the respective depth maps, which are provided by the depth map generation unit 501.
  • the depth map generating unit 501 for generating depth maps comprising depth values representing distances to a viewer, for respective pixels of the images comprises: a segmentation unit 502 for segmenting the image into at least one group of pixels corresponding to a foreground object and a further group of pixels corresponding to background; a first assigning unit 503 for assigning a first group of depth values corresponding to the further group of pixels on basis of a predetermined background depth profile; a second assigning unit 504 for assigning a second group of depth values corresponding to the at least one group of pixels on basis of a predetermined foreground depth profile, whereby the assigning of the second group of depth values is based on a particular depth value of the background depth profile, the particular depth value belonging to a particular pixel which is located at a predetermined location relative to the at least one group of pixels.
  • the segmentation unit 502, the first assigning unit 503, the second assigning unit 504 and the rendering unit 506 may be implemented using one processor. Normally, these functions are performed under control of a software program product. During execution, normally the software program product is loaded into a memory, like a RAM, and executed from there. The program may be loaded from a background memory, like a ROM, hard disk, or magnetic and/or optical storage, or may be loaded via a network like the Internet. Optionally, an application specific integrated circuit provides the disclosed functionality.
  • whereas the multi-view image generation unit 500 as described in connection with Fig. 5 is designed to deal with video images, alternative embodiments of the depth map generation unit according to the invention are arranged to generate depth maps on basis of individual images, i.e. still pictures.
  • Fig. 6 schematically shows an embodiment of the image processing apparatus 600 according to the invention, comprising: a receiving unit 602 for receiving a video signal representing input images; - a multi-view image generation unit 501 for generating multi-view images on basis of the received input images, as described in connection with Fig. 5; and a multi-view display device 606 for displaying the multi-view images as provided by the multi-view image generation unit 501.
  • the video signal may be a broadcast signal received via an antenna or cable but may also be a signal from a storage device like a VCR (Video Cassette Recorder) or
  • the signal is provided at the input connector 510.
  • the image processing apparatus 600 might e.g. be a TV. Alternatively the image processing apparatus 600 does not comprise the optional display device but provides the output images to an apparatus that does comprise a display device 606. Then the image processing apparatus 600 might be e.g. a set top box, a satellite-tuner, a VCR player, a DVD player or recorder.
  • the image processing apparatus 600 comprises storage means, like a hard-disk or means for storage on removable media, e.g. optical disks.
  • the image processing apparatus 600 might also be a system being applied by a film-studio or broadcaster.
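
To make the processing steps described above concrete, three short code sketches follow. They are illustrative only and are not part of the patent. The first sketch shows, in NumPy, the per-pixel change measure described for the edge detection step: the maximum absolute difference over the R, G and B channels of two subsequent video frames (the reconstructed Equation 3), with sub-threshold differences clipped to zero. The function name, the threshold value and the uint8/int16 handling are assumptions of this sketch.

```python
import numpy as np

def frame_difference_edges(frame_prev, frame_curr, threshold=16):
    # frame_prev, frame_curr: uint8 RGB frames of shape (H, W, 3),
    # i.e. frames n-1 and n of the video sequence.
    diff = np.abs(frame_curr.astype(np.int16) - frame_prev.astype(np.int16))
    # Equation 3: maximum absolute difference over the three color channels.
    e = diff.max(axis=2).astype(float)
    # Clip all differences below a predetermined threshold to a constant (zero).
    e[e < threshold] = 0.0
    return e
```

A morphologic filter operation, e.g. a grey-value opening, could then be applied to the result to remove spatially small edges, as the description suggests.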
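
The second sketch illustrates one possible way to turn a contour found by edge detection into a filled object mask, including the case of an open contour that is closed by a straight line segment between its two endpoints (the line segment 318 of Fig. 3B). The use of scipy.ndimage.binary_fill_holes and the simple line rasterisation are assumptions of this sketch, not the patent's prescribed implementation.

```python
import numpy as np
from scipy.ndimage import binary_fill_holes

def object_mask_from_contour(edge_map, endpoints=None):
    # edge_map: boolean (H, W) array marking contour pixels found by edge
    # detection. If the contour is open, 'endpoints' holds its two end
    # pixels as (row, col) pairs.
    closed = edge_map.copy()
    if endpoints is not None:
        (r0, c0), (r1, c1) = endpoints
        # Draw a straight line segment between the two endpoints so that
        # the contour becomes closed.
        n = max(abs(r1 - r0), abs(c1 - c0)) + 1
        rows = np.linspace(r0, r1, n).round().astype(int)
        cols = np.linspace(c0, c1, n).round().astype(int)
        closed[rows, cols] = True
    # Pixels enclosed by the (now closed) contour form the object segment.
    return binary_fill_holes(closed)
```

The particular pixel can then be chosen just below this mask (or just above it for the ceiling case), and its coordinates are used to sample the background depth profile in the next sketch.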
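
The third sketch combines the background depth profile of Equation 1, the constant foreground depth profile of Equation 2 and the sampling at the particular pixel for the "object standing on the ground" case of Fig. 1. The function name, the default constants a and c, and the tie-breaking choice of the particular pixel's column are assumptions of this sketch.

```python
import numpy as np

def generate_depth_map(object_mask, a=1.0, c=0.0):
    # object_mask: boolean (H, W) array, True for the pixels of the
    # foreground object (the result of the segmentation step).
    h, w = object_mask.shape

    # Background depth profile (Equation 1): D(p) = c + a * y(p), with the
    # y-coordinate measured upwards from the bottom border, so pixels at the
    # bottom border get the lowest depth values (closest to the viewer).
    y = np.arange(h - 1, -1, -1, dtype=float)   # row 0 (top border) -> y = h - 1
    background = c + a * np.repeat(y[:, None], w, axis=1)

    depth = background.copy()

    # The particular pixel is located directly below the object, which is
    # assumed to be standing on the ground.
    rows, cols = np.nonzero(object_mask)
    bottom_row = rows.max()
    particular_row = min(bottom_row + 1, h - 1)
    particular_col = cols[rows == bottom_row][0]

    # Foreground depth profile (Equation 2): a constant depth value, set to
    # the background depth sampled at the particular pixel.
    depth[object_mask] = background[particular_row, particular_col]
    return depth
```

With this assignment the depth difference between an object pixel and its neighboring background pixel is zero next to the particular pixel and grows with the vertical distance from it, which is the border behaviour discussed for the depth profiles of Fig. 4. For the lamp example of Fig. 2 the analogous construction applies, with the profile decreasing towards the top border and the particular pixel taken directly above the object.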

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

Method of generating a depth map (122) comprising depth values representing distances to a viewer, for respective pixels of an image (100). The method comprises segmenting the image into at least one group of pixels (104) corresponding to a foreground object and a further group of pixels (102) corresponding to background; assigning a first group of depth values corresponding to the further group of pixels (102) on basis of a predetermined background depth profile (110); and assigning a second group of depth values corresponding to the at least one group of pixels (104) on basis of a predetermined foreground depth profile, whereby the assigning of the second group of depth values is based on a particular depth value of the background depth profile (110), the particular depth value belonging to a particular pixel (106) which is located at a predetermined location relative to the at least one group of pixels (104).
PCT/IB2005/052094 2004-06-29 2005-06-24 Creation d'une carte de profondeur WO2006003577A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP04103041 2004-06-29
EP04103041.2 2004-06-29

Publications (1)

Publication Number Publication Date
WO2006003577A1 (fr) 2006-01-12

Family

ID=34971969

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2005/052094 WO2006003577A1 (fr) 2004-06-29 2005-06-24 Creation d'une carte de profondeur

Country Status (1)

Country Link
WO (1) WO2006003577A1 (fr)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2184713A1 (fr) * 2008-11-04 2010-05-12 Koninklijke Philips Electronics N.V. Procédé et système pour générer une carte de profondeurs
WO2012016600A1 (fr) * 2010-08-06 2012-02-09 Trident Microsystems, Inc. Procédé de génération d'une carte de profondeur, procédé de conversion d'une séquence d'images bidimensionnelles et dispositif de génération d'image stéréoscopique
US20120287233A1 (en) * 2009-12-29 2012-11-15 Haohong Wang Personalizing 3dtv viewing experience
US8588514B2 (en) 2007-05-11 2013-11-19 Koninklijke Philips N.V. Method, apparatus and system for processing depth-related information
WO2014001062A3 (fr) * 2012-06-26 2014-04-24 Ultra-D Coöperatief U.A. Dispositif permettant de générer une carte de profondeur
EP2747028A1 (fr) 2012-12-18 2014-06-25 Universitat Pompeu Fabra Procédé de récupération d'une carte de profondeur relative à partir d'une image unique ou d'une séquence d'images fixes
EP2624208A3 (fr) * 2012-01-17 2016-03-30 Samsung Electronics Co., Ltd. Système d'affichage avec mécanisme de conversion d'image et son procédé de fonctionnement
US9418433B2 (en) 2007-07-03 2016-08-16 Koninklijke Philips N.V. Computing a depth map

Non-Patent Citations (8)

* Cited by examiner, † Cited by third party
Title
BUCHER T: "Measurement of distance and height in images based on easy attainable calibration parameters", INTELLIGENT VEHICLES SYMPOSIUM, 2000. IV 2000. PROCEEDINGS OF THE IEEE DEARBORN, MI, USA 3-5 OCT. 2000, PISCATAWAY, NJ, USA,IEEE, US, 3 October 2000 (2000-10-03), pages 314 - 319, XP010528956, ISBN: 0-7803-6363-9 *
BYONG MOK OH ET AL: "IMAGE-BASED MODELING AND PHOTO EDITING", COMPUTER GRAPHICS. SIGGRAPH 2001. CONFERENCE PROCEEDINGS. LOS ANGELES, CA, AUG. 12 - 17, 2001, COMPUTER GRAPHICS PROCEEDINGS. SIGGRAPH, NEW YORK, NY : ACM, US, 12 August 2001 (2001-08-12), pages 433 - 442, XP001049915, ISBN: 1-58113-374-X *
CRIMINISI A ET AL: "Single view metrology", COMPUTER VISION, 1999. THE PROCEEDINGS OF THE SEVENTH IEEE INTERNATIONAL CONFERENCE ON KERKYRA, GREECE 20-27 SEPT. 1999, LOS ALAMITOS, CA, USA,IEEE COMPUT. SOC, US, vol. 1, 20 September 1999 (1999-09-20), pages 434 - 441, XP010350435, ISBN: 0-7695-0164-8 *
HENDRIX C ET AL: "Relationship between monocular and binocular depth cues for judgements of spatial information and spatial instrument design", DISPLAYS, ELSEVIER SCIENCE PUBLISHERS BV., BARKING, GB, vol. 16, no. 3, 1995, pages 103 - 113, XP004032524, ISSN: 0141-9382 *
HORSWILL I: "Visual collision avoidance by segmentation", INTELLIGENT ROBOTS AND SYSTEMS '94. 'ADVANCED ROBOTIC SYSTEMS AND THE REAL WORLD', IROS '94. PROCEEDINGS OF THE IEEE/RSJ/GI INTERNATIONAL CONFERENCE ON MUNICH, GERMANY 12-16 SEPT. 1994, NEW YORK, NY, USA,IEEE, vol. 2, 12 September 1994 (1994-09-12), pages 902 - 909, XP010141907, ISBN: 0-7803-1933-8 *
KOLLER D, WEBER J, MALIK J: "Robust Multiple Car Tracking with Occlusion Reasoning", TECHNICAL REPORT, no. UCB:CSD-93-780, January 1994 (1994-01-01), UNIVERSITY OF CALIFORNIA AT BERKELEY, pages 1 - 27, XP002347471 *
MAXWELL B A, MEEDEN L: "E28/CS81 Lecture #23 S00", LECTURE NOTES, COURSE E28: ROBOTICS, SPRING 2000, 2000, Department of Engineering, Swarthmore College, USA, pages 1 - 3, XP002347470 *
RENNO J ET AL: "Towards plug-and-play visual surveillance: learning tracking models", PROCEEDINGS 2002 INTERNATIONAL CONFERENCE ON IMAGE PROCESSING. ICIP 2002. ROCHESTER, NY, SEPT. 22 - 25, 2002, INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, NEW YORK, NY : IEEE, US, vol. VOL. 2 OF 3, 22 September 2002 (2002-09-22), pages 453 - 456, XP010607752, ISBN: 0-7803-7622-6 *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8588514B2 (en) 2007-05-11 2013-11-19 Koninklijke Philips N.V. Method, apparatus and system for processing depth-related information
US9418433B2 (en) 2007-07-03 2016-08-16 Koninklijke Philips N.V. Computing a depth map
EP2184713A1 (fr) * 2008-11-04 2010-05-12 Koninklijke Philips Electronics N.V. Procédé et système pour générer une carte de profondeurs
WO2010052632A1 (fr) * 2008-11-04 2010-05-14 Koninklijke Philips Electronics N.V. Procédé et dispositif de production d'une carte de profondeur
US8447141B2 (en) 2008-11-04 2013-05-21 Koninklijke Philips Electronics N.V Method and device for generating a depth map
US20120287233A1 (en) * 2009-12-29 2012-11-15 Haohong Wang Personalizing 3dtv viewing experience
WO2012016600A1 (fr) * 2010-08-06 2012-02-09 Trident Microsystems, Inc. Procédé de génération d'une carte de profondeur, procédé de conversion d'une séquence d'images bidimensionnelles et dispositif de génération d'image stéréoscopique
EP2624208A3 (fr) * 2012-01-17 2016-03-30 Samsung Electronics Co., Ltd. Système d'affichage avec mécanisme de conversion d'image et son procédé de fonctionnement
WO2014001062A3 (fr) * 2012-06-26 2014-04-24 Ultra-D Coöperatief U.A. Dispositif permettant de générer une carte de profondeur
EP2747028A1 (fr) 2012-12-18 2014-06-25 Universitat Pompeu Fabra Procédé de récupération d'une carte de profondeur relative à partir d'une image unique ou d'une séquence d'images fixes

Similar Documents

Publication Publication Date Title
US7764827B2 (en) Multi-view image generation
JP5587894B2 (ja) 深さマップを生成するための方法及び装置
US9171372B2 (en) Depth estimation based on global motion
KR101168384B1 (ko) 깊이 맵을 생성하는 방법, 깊이 맵 생성 유닛, 이미지 처리 장치, 및 컴퓨터 프로그램 제품
US20080260288A1 (en) Creating a Depth Map
EP2033164B1 (fr) Procédés et systèmes de conversion d'images cinématographiques 2d pour une représentation stéréoscopique 3d
US8036451B2 (en) Creating a depth map
TWI483612B (zh) Converting the video plane is a perspective view of the video system
WO2006003577A1 (fr) Creation d'une carte de profondeur
EP0952552A2 (fr) Procede et appareil pour produire des images à deux dimensions à partir de données video à trois dimensions
US20120127267A1 (en) Depth estimation based on global motion
US20130070049A1 (en) System and method for converting two dimensional to three dimensional video
KR20110059506A (ko) 복수의 이미지들로부터 카메라 파라미터를 얻기 위한 시스템과 방법 및 이들의 컴퓨터 프로그램 제품
Sharma et al. A flexible architecture for multi-view 3DTV based on uncalibrated cameras
JP2023172882A (ja) 三次元表現方法及び表現装置
Lee et al. Removing foreground objects by using depth information from multi-view images
Zhang Depth Generation using Structured Depth Templates.
CN111476707A (zh) 一种2d转换3d与vr的虚拟设备
Zhi-ping et al. View synthesis of the new viewpoint based on contour information
JPH03268069A (ja) 画像処理方法

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KM KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NG NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SM SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): BW GH GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
NENP Non-entry into the national phase

Ref country code: DE

WWW Wipo information: withdrawn in national office

Country of ref document: DE

122 Ep: pct application non-entry in european phase