WO2005034035A1 - Improvement of depth maps - Google Patents

Improvement of depth maps

Info

Publication number
WO2005034035A1
Authority
WO
WIPO (PCT)
Prior art keywords
depth
value
samples
input
input depth
Prior art date
Application number
PCT/IB2004/051992
Other languages
English (en)
Inventor
Christiaan Varekamp
Original Assignee
Koninklijke Philips Electronics N.V.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Koninklijke Philips Electronics N.V.
Publication of WO2005034035A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/77 Retouching; Inpainting; Scratch removal
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds

Definitions

  • the invention relates to a method of converting an input depth map into an output depth map, the input depth map comprising input depth samples and the output depth map comprising output depth samples, each depth sample having a respective depth value.
  • the invention further relates to a depth-map conversion unit for converting an input depth map into an output depth map.
  • the invention further relates to an image processing apparatus comprising: receiving means for receiving a signal representing an input depth map; and a depth-map conversion unit for converting the input depth map into an output depth map.
  • the invention further relates to a computer program product to be loaded by a computer arrangement, comprising instructions to convert an input depth map into an output depth map.
  • the ability to record accurate depth information is a key requirement for three-dimensional (3D) television systems and other systems that use 3D video, such as mobile phones and game devices.
  • Two approaches can be taken for recording 3D content: direct recording of a depth-related signal, e.g. using an infrared (IR) camera in a radar-like approach to record a depth image, also called a depth map; or recording two or more video signals from different directions and calculating depth from disparity. The latter is the more traditional approach to acquiring a depth image.
  • Both approaches have their advantages and drawbacks.
  • the first approach provides noisy depth measurements for materials that have a low intrinsic infrared reflectance.
  • a recurring problem is that depth samples cannot accurately be obtained for dark hair and other dark materials.
  • the depth map comprises values for which the reliability value is relatively low.
  • a depth map is an array, typically a two-dimensional array, of values corresponding to depth, i.e. distance from a viewer. In fact there might even be "missing data" points. Another reason why the depth map might comprise samples labeled "missing data" is that the corresponding scene points fall outside a predetermined depth measurement window.
  • the second approach requires accurate disparity estimation, which is a hard problem. Both methods unavoidably cause errors in the depth maps. These errors may range from small noisy regions, i.e. a few pixels close together, to larger regions where entire objects have a large depth error. Noise and other errors in the depth map decrease rendering quality for stereoscopic viewing. Other applications of depth measurements, such as compression, will also suffer.
  • This object of the invention is achieved in that the method comprises: testing, for a first one of the input depth samples having a relatively low reliability value, whether a new depth value should be assigned on the basis of a difference between a first control value of a control signal representing human-visible information and a second control value of the control signal, the first control value corresponding to the first one of the input depth samples and the second control value relating to a second one of the input depth samples located in a neighborhood of the first one of the input depth samples and having a relatively high reliability value; optionally establishing the new depth value on the basis of a second depth value of the second one of the input depth samples; and assigning the new depth value to a first one of the output depth samples corresponding to the first one of the input depth samples.
  • the new depth value is established on the basis of the second depth value of the second one of the depth samples if the difference is below a predetermined threshold, as in the sketch below.
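A minimal sketch of this test-and-assign step, assuming a NumPy representation in which `reliable` marks depth samples with a relatively high reliability value and `control` holds the co-registered human-visible information (e.g. luminance); all function and parameter names are illustrative, not from the patent:

```python
import numpy as np

def fill_low_reliability_sample(depth, reliable, control, x, y, threshold):
    """Test whether the low-reliability sample at (x, y) can take the depth
    value of a reliable neighbour whose control value (e.g. luminance)
    differs from its own by less than `threshold`."""
    h, w = depth.shape
    for dy in (-1, 0, 1):                      # scan the 8-connected neighbourhood
        for dx in (-1, 0, 1):
            if dx == 0 and dy == 0:
                continue
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and reliable[ny, nx]:
                # Difference between the first and second control values.
                if abs(float(control[y, x]) - float(control[ny, nx])) < threshold:
                    return depth[ny, nx]       # new depth value from the reliable neighbour
    return depth[y, x]                         # otherwise keep the existing value
```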
  • the basic assumption underlying the inventive method is that there cannot be a significant step in the depth map if there is not also a significant step in the co-registered visual image.
  • if depth maps are used in combination with visual images, e.g. for stereoscopic (3D) television, then spatial coherence in the co-registered visual images is exploited to fill in the depth samples with a relatively low reliability value. Even for "missing" data points in the depth map, new depth values are determined on the basis of depth samples with a relatively high reliability value.
  • the human visible information comprises one of luminance and color.
  • the first one of the input depth samples belongs to a first set comprising first input depth samples to which non-determined depth values have been assigned, and the second one of the input depth samples belongs to a second set comprising second input depth samples to which respective determined depth values have been assigned, the first input depth samples and the second input depth samples being located around the first one of the input depth samples, whereby the new depth value is established on the basis of the second depth value of the second one of the depth samples if the ratio between a first number of input depth samples of the second set and the total number of input depth samples of the first set and the second set is above a further predetermined threshold.
  • the set of "valid" measurements is thus extended by a dilation, i.e. a growing operation. The dilation is stopped when the object boundary becomes too irregular, since objects in the real world, and their images taken by a camera, often have smooth boundaries.
  • the test on boundary regularity is based on the ratio of the different input depth samples. If the first number of input depth samples is relatively high, then there are relatively many input depth samples having a determined depth value, i.e. a depth value for which the reliability is relatively high. A non-determined depth value means that the corresponding reliability is relatively low; in that case, a particular default depth value might have been assigned, e.g. representing infinity. A sketch of this ratio test follows.
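A sketch of the boundary-regularity test under the same illustrative conventions (a boolean `reliable` mask; all names are assumptions):

```python
import numpy as np

def dilation_allowed(reliable, x, y, size, alpha):
    """Return True when the fraction of samples with a determined depth value
    inside a size-by-size structuring set centred on (x, y) exceeds alpha."""
    h, w = reliable.shape
    r = size // 2
    window = reliable[max(0, y - r):min(h, y + r + 1),
                      max(0, x - r):min(w, x + r + 1)]
    # Ratio of determined samples to the total number of samples in the set.
    return window.sum() / window.size > alpha
```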
  • the second control value corresponds to the second one of the input depth samples.
  • the second control value can directly be fetched from the signal representing human visible information.
  • the second control value is based on a third control value corresponding to the second one of the input depth samples and a fourth control value corresponding to a third one of the input depth samples, being located in the neighborhood of the first one of the input depth samples and having a further relatively high reliability value.
  • An advantage of this embodiment is an improved robustness.
  • the new depth value is established by computing an average or median of the second depth value and further depth values belonging to further input depth samples located in the neighborhood of the first one of the input depth samples and having relatively high reliability values (see the sketch below).
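A sketch of this establishing step (the median is shown; an average would be analogous; the 3×3 neighbourhood and all names are assumptions):

```python
import numpy as np

def establish_new_depth(depth, reliable, x, y, size=3):
    """Establish the new depth value as the median of the reliable depth
    samples in the neighbourhood of (x, y)."""
    h, w = depth.shape
    r = size // 2
    ys = slice(max(0, y - r), min(h, y + r + 1))
    xs = slice(max(0, x - r), min(w, x + r + 1))
    vals = depth[ys, xs][reliable[ys, xs]]
    # Fall back to the existing value when no reliable neighbour exists.
    return float(np.median(vals)) if vals.size else float(depth[y, x])
```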
  • the depth-map conversion unit comprises: testing means for testing, for a first one of the input depth samples having a relatively low reliability value, whether a new depth value should be assigned on the basis of a difference between a first control value of a control signal representing human-visible information and a second control value of the control signal, the first control value corresponding to the first one of the input depth samples and the second control value relating to a second one of the input depth samples located in a neighborhood of the first one of the input depth samples and having a relatively high reliability value; establishing means for optionally establishing the new depth value on the basis of a second depth value of the second one of the input depth samples; and assigning means for assigning the new depth value to a first one of the output depth samples corresponding to the first one of the input depth samples.
  • This object of the invention is achieved in that the computer program product, after being loaded, provides said processing means with the capability to carry out: testing, for a first one of the input depth samples having a relatively low reliability value, whether a new depth value should be assigned on the basis of a difference between a first control value of a control signal representing human-visible information and a second control value of the control signal, the first control value corresponding to the first one of the input depth samples and the second control value relating to a second one of the input depth samples located in a neighborhood of the first one of the input depth samples and having a relatively high reliability value; optionally establishing the new depth value on the basis of a second depth value of the second one of the input depth samples; and assigning the new depth value to a first one of the output depth samples corresponding to the first one of the input depth samples.
  • Modifications of the depth-map conversion unit and variations thereof may correspond to modifications and variations of the method, of the image processing apparatus and of the computer program product being described.
  • Fig. 1A-1C schematically show the working of an IR depth camera
  • Fig. 2A schematically shows a visual image
  • Fig. 2B schematically shows an input depth map corresponding to the visual image of Fig. 2A
  • Fig. 3A schematically shows a first part of a single iteration of the method according to the invention
  • Fig. 3B schematically shows a second part of a single iteration of the method according to the invention
  • Fig. 4 schematically shows a structuring set
  • Fig. 5A schematically shows an input depth map
  • Fig. 5B schematically shows an output depth map corresponding to the input depth map of Fig. 5A
  • Fig. 6 schematically shows a depth-map conversion unit according to the invention
  • Fig. 7 schematically shows an embodiment of the image processing apparatus according to the invention.
  • Same reference numerals are used to denote similar parts throughout the figures.
  • Figs. 1A-C schematically show the working of an IR depth camera 108 which is based on time-of-flight.
  • Fig. 1 A schematically shows a light wall 100 moving from the camera 108 to the scene 106.
  • Fig. 1B schematically shows an imprinted light wall 102 returning to the camera 108, and
  • Fig. 1C schematically shows a truncated light wall 104 containing depth information from the scene 106.
  • Depth is extracted from the reflected deformed infrared light wall 102 by deploying a fast image shutter in front of the CCD chip and blocking the incoming light.
  • the collected light at each of the pixels is inversely proportional to the depth of the specific pixel. Since reflecting objects may have any reflection coefficient, there exists a need to compensate for this effect.
  • a normalized depth is calculated per pixel by simply dividing the front-portion pixel intensity by the corresponding portion of the total intensity; the formula below makes this explicit.
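Expressed as a formula (the symbols are illustrative; the patent text describes the division only in words):

$$d_{\mathrm{norm}}(x, y) = \frac{I_{\mathrm{front}}(x, y)}{I_{\mathrm{total}}(x, y)}$$

where I_front is the intensity collected during the shutter-gated front portion of the returning light wall and I_total is the total reflected intensity at pixel (x, y); the division cancels the unknown reflection coefficient, which scales both quantities equally.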
  • the reflected IR light passes through the same lenses as the visible light. Behind the lenses, the IR and visible light are separated and recorded with different sensors. There are no angular differences between the colour camera and the depth sensor, so each pixel of the colour camera is assigned a corresponding depth value. Camera zoom is accounted for in a very natural way, as both the IR and visible light pass through the same optical path.
  • the specific operation of the IR depth camera results in two types of "missing" data points, i.e. samples of which the reliability value is relatively low:
  • Points where the infrared reflectance is low due to the specific properties of the reflecting object. Some materials have a low infrared reflectance. Also, smooth surfaces at large grazing angles are problematic, since most of the illumination energy is scattered away from the sensor. A threshold operation is done in hardware associated with the depth camera.
  • Points where the depth falls outside a predetermined depth measurement window. The light collected at a pixel is inversely proportional to the depth of the specific pixel. Due to transmit-power limitations there is a fixed measurement window for which accurate depth can be recorded. Outside this range, observations are reported as "missing". The scene is usually arranged in such a way that either all objects fall inside the measurement window, or objects that fall outside it are always behind it. This last situation makes it possible to assign an arbitrarily large depth to these pixels, as in the sketch below.
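A minimal sketch of this window handling, assuming depths arrive as a NumPy array and that the near/far bounds and the large default depth are supplied by the caller (all names hypothetical):

```python
import numpy as np

def apply_measurement_window(depth, near, far, far_default):
    """Mark depths outside [near, far] as missing and, exploiting the scene
    arrangement described above, substitute an arbitrarily large default
    depth for the out-of-window pixels."""
    missing = (depth < near) | (depth > far)
    out = np.where(missing, far_default, depth)
    return out, missing
```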
  • Fig. 2A schematically shows a visual image 200 representing a scene with three actors. Two of them are sitting at a desk and the third one is standing upright.
  • Fig. 2B schematically shows an input depth map 202 corresponding to the visual image of Fig. 2A.
  • This input depth map 202 comprises depth samples 210-214 for which no appropriate depth value has been determined, e.g. because of one of the causes described above in connection with Figs. 1A-C.
  • a first inappropriate depth sample 210 corresponds to the dark hair 204 of one of the actors.
  • a second inappropriate depth sample 212 corresponds to an office device 206 which is located out of the predetermined depth measurement window.
  • a third inappropriate depth sample 214 corresponds to a surface 208 of an office chair which is substantially oriented in the transfer direction of the light wall of the IR camera.
  • Fig. 3A schematically shows a first part of a single iteration of the method according to the invention and Fig. 3B schematically shows a second part of the single iteration of the method according to the invention.
  • These Figs. 3A and 3B show one discrete time step t, i.e. a single iteration.
  • Fig. 3A shows the extension of the set of "valid" data points X_t to a new set of "valid" data points X_{t+1}, using a conditional dilation, i.e. a growing operation that depends on the visual image.
  • Fig. 3B shows a depth interpolation step in which a depth is estimated for the set of new data points: X_{t+1} \ X_t.
  • Let X denote the set of points (x, y) for which a "valid" measurement is known, and let X^c, the complement of X, be the set of "missing" data points.
  • a single dilation step grows the "valid" data point set X by using points from the "missing" data set X^c.
  • a first condition for extension is that the ratio between the number of "valid" data points in a structuring set S_(x,y) and the total number of data points in S_(x,y) is above a predetermined threshold α, where S_(x,y) is a structuring set S translated to image coordinates (x, y).
  • Parameter α controls the shape evolution of the set X as a function of the iteration number t. It is the fractional area that "valid" data points must occupy in S translated to position (x, y). Setting α large, i.e. close to one, prevents the set of "valid" points X from growing without bounds as the iteration number t → ∞. When starting from a perfectly linear boundary, dilation is only possible for α ≤ 0.5.
  • Fig. 4 schematically shows an example of a structuring set S_(x,y) of 5×5 pixels.
  • a second condition for extension involves a colour-difference measure f, which is compared with a further predetermined threshold δ. A possible choice for f is a linear combination of the gradient magnitudes of R, G and B at location (x, y). This is logical, since a dilation across a luminance or colour edge should be prevented, as such edges may correspond with depth discontinuities.
  • accurate evaluation of the gradient magnitudes requires that the visual image is pre-filtered with derivatives of a Gaussian using large kernel sizes to avoid effects of noise. This is computationally expensive.
  • a preferred alternative for f is specified in Equation 3. The computation of that alternative only requires taking the absolute value of differences and finding the minimum:
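Equation 3 is not reproduced in this text, but its description (absolute differences, then a minimum) suggests the form used in the sketch below, which combines the two extension conditions into one complete iteration (Figs. 3A/3B). The minimum-of-absolute-differences reading of f, the median interpolation and all names are assumptions:

```python
import numpy as np

def conditional_dilation_step(depth, valid, image, alpha, delta, size=5):
    """One conditional-dilation iteration: a "missing" sample joins the
    "valid" set X when (1) the fraction of valid samples in the structuring
    set S_(x,y) exceeds alpha and (2) the minimum absolute colour difference
    f to a valid sample in the set is below delta; its depth is then
    interpolated from the valid depths in the set (Fig. 3B)."""
    h, w = depth.shape
    r = size // 2
    new_depth, new_valid = depth.copy(), valid.copy()
    for y in range(h):
        for x in range(w):
            if valid[y, x]:
                continue                        # only "missing" samples are tested
            ys = slice(max(0, y - r), min(h, y + r + 1))
            xs = slice(max(0, x - r), min(w, x + r + 1))
            win_valid = valid[ys, xs]
            # Condition 1: fractional area of "valid" points above alpha.
            if win_valid.sum() / win_valid.size <= alpha:
                continue
            # Condition 2: f = minimum over valid samples of the summed
            # absolute R,G,B differences; extension requires f below delta.
            diffs = np.abs(image[ys, xs].astype(np.int32)
                           - image[y, x].astype(np.int32)).sum(axis=-1)
            if diffs[win_valid].min() >= delta:
                continue
            # Depth interpolation: median of the valid depths in the set.
            new_depth[y, x] = np.median(depth[ys, xs][win_valid])
            new_valid[y, x] = True
    return new_depth, new_valid
```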
  • Fig. 5A schematically shows an input depth map 202 and Fig. 5B schematically shows the corresponding output depth map 502. It can be seen that measurement errors for the chair 214, the table 504 and a part 210 of one of the persons have been removed and interpolated. Note that the valid measurements in the input depth map are not smoothed. Note also that restoration of the depth map is at the expense of smoother object boundaries.
  • This trade-off between improvement based on colour and deformation of object boundaries can be controlled by varying the predetermined thresholds and other parameters:
  • the dimension of the structuring set S: a square window of 11×11 pixels was used for the output depth map 502 as depicted in Fig. 5B;
  • the predetermined threshold α ∈ [0, 1], which controls the amount of growing based on shape: α = 0.6 was used for the output depth map 502 as depicted in Fig. 5B;
  • the further predetermined threshold δ ∈ [0, 255], which controls the amount of growing based on colour distance: δ = 60 was used for the output depth map 502 as depicted in Fig. 5B; and
  • the total number of iterations. These settings are combined in the driver sketch below.
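Tying the reported settings together, a hypothetical driver for the conditional_dilation_step sketch above (the synthetic inputs and the iteration count of 20 are placeholders, not values from the patent):

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(120, 160, 3), dtype=np.uint8)  # stand-in visual image
depth = rng.uniform(1.0, 5.0, size=(120, 160))                    # stand-in input depth map
valid = rng.random((120, 160)) > 0.2                              # stand-in reliability mask

# Parameter values reported for Fig. 5B: an 11x11 structuring set,
# alpha = 0.6 and delta = 60; the iteration count remains a free choice.
for _ in range(20):
    depth, valid = conditional_dilation_step(depth, valid, image,
                                             alpha=0.6, delta=60, size=11)
```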
  • Fig. 6 schematically shows a depth-map conversion unit 600 according to the invention.
  • the depth-map conversion unit 600 is arranged to convert an input depth map 202, which is provided at the input connector 608, into an output depth map 502 which it provides at the output connector 614.
  • the input depth map 202 comprises input depth samples and the output depth map 502 comprises output depth samples, each depth sample having a respective depth value.
  • the depth-map conversion unit 600 comprises: a testing unit 602 for testing, for a first one of the input depth samples having a relatively low reliability value, whether a new depth value should be assigned on the basis of a difference between a first control value of a control signal representing human-visible information and a second control value of the control signal, the first control value corresponding to the first one of the input depth samples and the second control value relating to a second one of the input depth samples located in a neighborhood of the first one of the input depth samples and having a relatively high reliability value; an establishing unit 604 for optionally establishing the new depth value on the basis of a second depth value of the second one of the input depth samples; and an assigning unit 606 for assigning the new depth value to a first one of the output depth samples corresponding to the first one of the input depth samples.
  • the depth-map conversion unit 600 comprises a control interface 608 for providing the depth-map conversion unit 600 with a video image and optionally comprises a further control interface 612 for providing the depth-map conversion unit 600 with reliability values corresponding to the respective input depth values.
  • the reliability values are integrated in the input depth map, e.g. by using default depth values which do not correspond to physically determined values.
  • the reliability values optionally correspond to an IR signal from the depth camera, which is not processed by means of a threshold operation.
  • the testing unit 602, the establishing unit 604 and the assigning unit 606 may be implemented using one processor. Normally, these functions are performed under control of a software program product. During execution, the software program product is typically loaded into a memory, like a RAM, and executed from there. The program may be loaded from a background memory, like a ROM, hard disk, or magnetic and/or optical storage, or may be loaded via a network like the Internet.
  • alternatively, an application-specific integrated circuit provides the disclosed functionality.
  • Fig. 7 schematically shows an embodiment of the image processing apparatus according to the invention.
  • the image processing apparatus 700 comprises: receiving means 702 for receiving a signal representing visual images and co-registered input depth maps;
  • the depth-map conversion unit 600 as described in connection with Fig. 6; and
  • a rendering device 706 for rendering 3D images on the basis of the received visual images and the output depth map of the depth-map conversion unit 600.
  • the image processing apparatus 700 might be a depth camera.
  • the image processing apparatus 700 is a display apparatus or storage apparatus.
  • the signal may be a broadcast signal received via an antenna or cable but may also be a signal from a storage device like a VCR (Video Cassette Recorder) or Digital Versatile Disk (DVD).
  • the image processing apparatus 700 optionally comprises a display device (not depicted) for displaying the output images of the rendering device 706.
  • the image processing apparatus 700 is e.g. a TV.
  • the image processing apparatus 700 does not comprise the optional display device but provides the output images to an apparatus that does comprise a display device.
  • the image processing apparatus 700 might be e.g. a set top box, a satellite-tuner, a VCR player, a DVD player or recorder.
  • the image processing apparatus 700 comprises storage means, like a hard-disk or means for storage on removable media, e.g. optical disks.
  • the image processing apparatus 700 might also be a system being applied by a film-studio or broadcaster.

Abstract

This depth-map conversion unit (600) converts an input depth map into an output depth map. The input depth map comprises input depth samples and the output depth map comprises output depth samples, each depth sample having a respective depth value. The depth-map conversion unit (600) comprises: testing means (602) for testing whether a new depth value should be assigned to a first input depth sample having a relatively low reliability value, on the basis of the difference between a first control value of a control signal representing human-visible information and a second control value of the control signal, the first control value corresponding to the first input depth sample and the second control value being related to a second input depth sample located in the neighborhood of the first input depth sample and having a relatively high reliability value; establishing means (604) for optionally establishing the new depth value on the basis of a second depth value of the second input depth sample; and assigning means (606) for assigning the new depth value to a first output depth sample corresponding to the first input depth sample.
PCT/IB2004/051992 2003-10-07 2004-10-06 Improvement of depth maps WO2005034035A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP03103703.9 2003-10-07
EP03103703 2003-10-07

Publications (1)

Publication Number Publication Date
WO2005034035A1 true WO2005034035A1 (fr) 2005-04-14

Family

ID=34400564

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2004/051992 WO2005034035A1 (fr) 2003-10-07 2004-10-06 Improvement of depth maps

Country Status (1)

Country Link
WO (1) WO2005034035A1 (fr)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5644386A (en) * 1995-01-11 1997-07-01 Loral Vought Systems Corp. Visual recognition system for LADAR sensors
US5966678A (en) * 1998-05-18 1999-10-12 The United States Of America As Represented By The Secretary Of The Navy Method for filtering laser range data

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
TORRES-MENDEZ L A ET AL: "Range synthesis for 3D environment modeling", PROCEEDINGS SIXTH IEEE WORKSHOP ON APPLICATIONS OF COMPUTER VISION, 3 December 2002 (2002-12-03), pages 231 - 236, XP010628754 *

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2007052191A3 (fr) * 2005-11-02 2008-01-03 Koninkl Philips Electronics Nv Filling in depth results
WO2007052191A2 (fr) * 2005-11-02 2007-05-10 Koninklijke Philips Electronics N.V. Filling in depth results
US9948943B2 (en) 2006-12-04 2018-04-17 Koninklijke Philips N.V. Image processing system for processing combined image data and depth data
EP1931150A1 (fr) * 2006-12-04 2008-06-11 Koninklijke Philips Electronics N.V. Image processing system for processing combined image and depth data
WO2008068707A2 (fr) * 2006-12-04 2008-06-12 Koninklijke Philips Electronics N.V. Image processing system for processing combined image and depth data
WO2008068707A3 (fr) * 2006-12-04 2009-07-16 Koninkl Philips Electronics Nv Image processing system for processing combined image and depth data
EP2106668B1 (fr) 2006-12-04 2018-04-18 Koninklijke Philips N.V. Image processing system for processing combined image and depth data
US8422801B2 (en) 2007-12-20 2013-04-16 Koninklijke Philips Electronics N.V. Image encoding method for stereoscopic rendering
EP3007440A1 (fr) 2007-12-20 2016-04-13 Koninklijke Philips N.V. Image encoding method for stereoscopic rendering
US8810602B2 (en) 2009-06-09 2014-08-19 Samsung Electronics Co., Ltd. Image processing apparatus, medium, and method
CN105740839A (zh) * 2010-05-31 2016-07-06 Apple Inc. Analysis of three-dimensional scenes
CN105740839B (zh) * 2010-05-31 2020-01-14 Apple Inc. Analysis of three-dimensional scenes
US8724887B2 (en) * 2011-02-03 2014-05-13 Microsoft Corporation Environmental modifications to mitigate environmental factors
US20120201424A1 (en) * 2011-02-03 2012-08-09 Microsoft Corporation Environmental modifications to mitigate environmental factors
US10628950B2 (en) 2017-03-01 2020-04-21 Microsoft Technology Licensing, Llc Multi-spectrum illumination-and-sensor module for head tracking, gesture recognition and spatial mapping

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): BW GH GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
122 Ep: pct application non-entry in european phase