WO2010021972A1 - Surround structured lighting for recovering 3D object shape and appearance - Google Patents

Surround structured lighting for recovering 3D object shape and appearance

Info

Publication number
WO2010021972A1
Authority
WO
WIPO (PCT)
Prior art keywords
mirror
camera
appearance
light
shape
Prior art date
Application number
PCT/US2009/054007
Other languages
English (en)
Inventor
Douglas Lanman
Daniel Crispell
Gabriel Taubin
Original Assignee
Brown University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Brown University
Publication of WO2010021972A1

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 Measuring arrangements characterised by the use of optical techniques
    • G01B11/24 Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G01B11/25 Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G06T7/55 Depth or shape recovery from multiple images
    • G06T7/586 Depth or shape recovery from multiple images from multiple light sources, e.g. photometric stereo

Definitions

  • This invention generally relates to systems and methods for recovering the shape and appearance of objects from images captured under structured lighting. More particularly, this invention relates to systems and methods for recovering the shape and appearance of objects, where the systems comprise a single camera and a single projector displaying structured light patterns.
  • Coded structured light is a particularly reliable and inexpensive triangulation-based method for reconstructing the shape and appearance of three dimensional objects from image data.
  • A coded structured light system comprises a single calibrated projector-camera pair. By illuminating the object surface with a known sequence of coded images, the correspondence between projector pixels and camera pixels can be uniquely identified.
  • One such sequence of coded images is known as Gray codes. Inokuchi et al. originally applied Gray codes to 3D scanning; see S. Inokuchi, K. Sato, and F. Matsuda, "Range imaging system for 3-D object recognition", International Conference on Pattern Recognition, 1984.
  • Each pattern is composed of a sequence of black and white stripes oriented along the projector scan-lines.
  • Each projector scan-line corresponds to a projected light plane.
  • The corresponding projector scan-line can be identified for each camera pixel.
  • Each image pixel defines a camera ray supported by a straight line in 3D.
  • The intersection of each camera ray with its corresponding projected light plane determines the location of a 3D point on the subset of the surface illuminated by the Gray codes.
  • The collection of all these 3D points is the result of the coded structured light method.
  • A polygon mesh is reconstructed by interconnecting these 3D points, forming polygon mesh faces.
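Gray codes are preferred over plain binary stripe patterns because consecutive scan-line indices differ in exactly one bit, so a decoding error at a stripe boundary perturbs the recovered index by at most one scan-line. A minimal sketch of the encoding and its inverse (the function names are illustrative, not from the patent):

```python
def binary_to_gray(n: int) -> int:
    """Reflected-binary Gray code of a scan-line index n."""
    return n ^ (n >> 1)

def gray_to_binary(g: int) -> int:
    """Invert the Gray code by cascading XORs of the higher bits."""
    n = g
    g >>= 1
    while g:
        n ^= g
        g >>= 1
    return n
```

For example, scan-lines 3 and 4 have Gray codes 0b010 and 0b110, which differ only in the leading bit.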
  • An alternative to moving the projector and camera is to use mirrors to create virtual structured light projectors and virtual camera views.
  • Prior systems have used planar mirrors to create multiple virtual structured light projectors; see E. Epstein, M. Granger-Piché, and P. Poulin, "Exploiting mirrors in interactive reconstruction with structured light", Vision, Modeling, and Visualization, 2004.
  • In that approach, one or more planar mirrors are illuminated by a projector displaying a modified Gray code sequence which is invariant to mirror reflections.
  • The authors mask the projected patterns to ensure that each surface point is illuminated from a single direction in each image. While eliminating the need for multiplexing several projectors to obtain complete object models, this system still suffers from several limitations. Foremost, it increases the number of required patterns, since the directly and indirectly viewed portions of the object surface cannot be illuminated simultaneously.
  • Planar mirrors have also been used to obtain several virtual views of a 3D scene or object from a single image capture.
  • The virtual cameras created by planar mirror reflections have several benefits over multiple camera systems, including automatic frame synchronization, color calibration, and identical intrinsic calibration parameters; see J. Gluckman and S. Nayar, "Planar Catadioptric Stereo: Geometry and Calibration", International Conference on Computer Vision and Pattern Recognition, 1999. While Gluckman and Nayar restricted their attention to stereo catadioptric systems in that work, Forbes et al. extended the approach to configurations with additional mirrors.
  • Disclosed is a three-dimensional (3D) scanning apparatus to recover the shape and appearance of an object, comprising a camera, an illuminator, and a mirror surface.
  • The camera and illuminator are placed in front of the object to be 3D scanned, and the mirror surface is placed behind the object.
  • The camera and illuminator are pointed towards the object and the mirror surface.
  • The illuminator projects light patterns on the object and on the mirror surface.
  • The light patterns are composed of projected light rays.
  • The mirror surface is a generalized cylinder with a mirror axis, and the projected light rays are perpendicular to the mirror axis.
  • The illuminator and mirror surface are placed in such a way that most of the surface of the object is illuminated either directly or indirectly. While part of the surface of the object is illuminated by projected light rays emanating directly from the illuminator, other parts of the surface of the object are illuminated indirectly by reflected light rays resulting from projected light rays bouncing off the mirror surface after one or more mirror reflections.
  • The camera has the object and part of the mirror surface simultaneously in view, resulting in the camera simultaneously observing the object from multiple directions.
  • In one embodiment, the mirror surface is composed of a left planar mirror and a right planar mirror. The left planar mirror and right planar mirror meet at the mirror axis, and are separated by a mirror angle of approximately 72 degrees. The camera captures a single camera view which includes one real image and four reflected images.
  • The camera is used to capture camera views of the object under a given illumination pattern displayed by the illuminator.
  • The illumination patterns correspond to a sequence of Gray code patterns.
  • The Gray coded images are captured using the 3D scanning apparatus disclosed herein.
  • In a first processing step, each pixel is identified as being either illuminated or in shadow.
  • In a second step, the set of thresholded values for each camera pixel is analyzed to determine which projector scan-line illuminated each pixel.
  • Each projector scan-line corresponds to one of the planes contained in a plurality of parallel light planes emitted by the illuminator.
  • Equivalently, this second step determines which plane of light illuminates a given camera pixel.
  • In a third step, the object shape is estimated from the structured light sequence.
  • Ray-plane triangulation is used to reconstruct the three-dimensional coordinate of a point on the object surface for each camera pixel. Furthermore, these points can be analyzed to produce a polygon mesh to represent the object shape.
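The ray-plane triangulation step can be sketched as follows, with the camera center placed at the origin and the light plane written in Hessian normal form n·x = d (the coordinate convention and names are illustrative assumptions, not from the patent):

```python
def intersect_ray_plane(ray_dir, plane_normal, plane_d):
    """Intersect the camera ray p(t) = t * ray_dir (camera center at the
    origin) with the light plane {x : dot(plane_normal, x) = plane_d}.
    Returns the 3D intersection point, or None if the ray is parallel."""
    denom = sum(n * r for n, r in zip(plane_normal, ray_dir))
    if abs(denom) < 1e-12:
        return None  # ray lies parallel to the light plane: no triangulation
    t = plane_d / denom
    return tuple(t * r for r in ray_dir)
```

For instance, the ray through pixel direction (1, 1, 1) meets the plane z = 2 at the surface point (2, 2, 2).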
  • An optional additional image, collected under ambient illumination with no pattern projected by the illuminator, is used to further estimate the appearance (e.g., color) of the estimated object surface.
  • A plurality of ambient images, captured for various poses of the object under ambient illumination, could be collected to obtain a better approximation of the object appearance.
  • The set of illumination images and the ambient image provide sufficient information to reconstruct the shape and appearance of a 3D object placed within the system.
  • FIG. 1 is an illustration depicting the presently preferred embodiment of a system for obtaining the shape and appearance of an object by capturing a set of images under varying projected structured light illumination patterns using a camera, a standard "pinhole" projector, a mirror surface, and a large aperture converging Fresnel lens.
  • FIG. 2 is an illustration depicting a general form of a system for obtaining the shape and appearance of an object by capturing a set of images under varying projected structured light illumination patterns.
  • FIGS. 3a and 3b are diagrams depicting possible geometric properties of the light source and means of bending the light rays in order to form the plurality of first parallel planes shown in FIG. 2.
  • FIG. 4 is an illustration of the first four Gray code structured light illumination patterns.
  • FIGS. 5a, 5b, and 5c are illustrations depicting a typical image collected by the invention under preferred illumination, wherein FIG. 5a illustrates a typical object under ambient illumination, FIG. 5b shows the same object under a Gray code structured light illumination as in FIG. 4, and FIG. 5c shows the decoded projector scan-lines, with darker colors indicating larger scan-line indices.
  • FIG. 6 is a logic flow diagram for the processing of the set of images captured in accordance with FIG. 4 into an estimate of the correspondence between camera pixels and projector scan-lines.
  • FIG. 7 is an illustration depicting a second embodiment of a system for obtaining the shape and appearance of an object by capturing a set of images under varying projected structured light illumination patterns, wherein an array of cameras replaces the single camera of FIG. 1.
  • A presently preferred first embodiment of the invention is depicted in FIG. 1: a 3D scanning apparatus 100 to recover the shape and appearance of an object 120, comprising a camera 101, an illuminator 110, and a mirror surface 130; the camera 101 and illuminator 110 placed in front of an object 120 to be 3D scanned; the mirror surface 130 being placed behind the object 120; the camera 101 and illuminator 110 pointing towards the object 120 and the mirror surface 130; the illuminator 110 projecting light patterns 111 on the object 120 and mirror surface 130; the light patterns 111 composed of projected light rays 112, 113; the mirror surface 130 being a generalized cylinder with a mirror axis 133 as the generalized cylinder axis; the projected light rays 112, 113 being perpendicular to the mirror axis 133; the illuminator 110 and mirror surface 130 placed in such a way that most of the surface of the object 120 is illuminated either directly or indirectly; part of the surface of the object 120 being illuminated by projected light rays 112, 113 emanating directly from the illuminator 110, while other parts are illuminated indirectly by reflected light rays 114 bouncing off the mirror surface 130.
  • A generalized cylinder is a surface with an axis and a cross section.
  • The axis is a straight line, and the cross section is a planar curve contained in a plane perpendicular to the axis.
  • The generalized cylinder can be described as the result of sweeping the plane containing the cross section curve perpendicularly along the axis.
  • A point on a generalized cylinder can be described by two parameters: one along the axis, and the second one along the cross section.
  • Two intersecting planes define a generalized cylinder where the axis is the line of intersection and the cross section is the union of two intersecting lines perpendicular to the axis.
  • The surfaces of so-called "cylindrical lenses" and "cylindrical mirrors" are other examples of generalized cylinders, where the cross sections are circles or parabolas. It is a well known property that a plane tangent to a point on the surface of a generalized cylinder is parallel to the axis. Equivalently, the normal direction at a point on the surface of a generalized cylinder is a vector perpendicular to the axis. It follows that the reflection of a light ray perpendicular to the axis of a generalized cylinder mirror surface is also perpendicular to the axis.
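This perpendicularity property can be checked numerically for the planar-mirror case: the mirror normal is perpendicular to the axis (taken here as the z-axis), so reflection leaves the ray's axis component, here zero, unchanged. A small sketch with an arbitrarily chosen mirror orientation (not from the patent):

```python
import math

def reflect(d, n):
    """Reflect direction d off a mirror with unit normal n: d - 2(d.n)n."""
    dn = sum(di * ni for di, ni in zip(d, n))
    return tuple(di - 2 * dn * ni for di, ni in zip(d, n))

# Mirror normal perpendicular to the z-axis (the generalized-cylinder axis);
# the angle theta is an arbitrary example orientation.
theta = 0.3
normal = (math.cos(theta), math.sin(theta), 0.0)

ray = (0.8, -0.6, 0.0)   # projected ray perpendicular to the axis (unit length)
refl = reflect(ray, normal)
```

Because the normal's z-component is zero, the reflected ray's z-component equals the incoming ray's, so a ray perpendicular to the axis stays perpendicular after any number of such reflections.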
  • In a presently preferred embodiment, the mirror surface 130 is composed of a left planar mirror 131 and a right planar mirror 132; the left planar mirror 131 and right planar mirror 132 meeting at the mirror axis 133; the left planar mirror 131 and right planar mirror 132 separated by a mirror angle 134 of approximately 72 degrees; the camera 101 capturing a single camera view 150; the single camera view 150 simultaneously measuring one real image 155 and four reflected images 151, 152, 153, and 154.
  • Because the projected light rays 112, 113 are perpendicular to the mirror axis 133, the reflected light ray 114 is also perpendicular to the mirror axis 133.
  • A key feature of this invention is this property, which prevents the reflected light ray 114 from interfering with other projected light rays 112, 113, or other reflected light rays 114 on the surface of the object 120, after they reflect off the mirror surface 130. Through this mechanism, this invention is able to significantly reduce the number of images required to capture the shape and appearance of the object 120, and to avoid moving the object 120 with respect to the imaging system.
  • In a preferred embodiment, the projected light rays 112, 113 are contained in a plurality of parallel light planes 115, so that all the reflected light rays 114 corresponding to projected light rays 112 contained in the same light plane 115 also belong to the same light plane 115, and light rays 112, 113, 114 contained in different parallel planes do not interfere with each other. In an even more preferred embodiment, all the projected light rays 112, 113 belonging to the same parallel light plane 115 are projected with the same color.
  • In a preferred embodiment, the illuminator 110 is an orthographic projector.
  • An orthographic projector emits projected light rays 112, 113 which are all parallel to each other.
  • In one embodiment, the orthographic projector is composed of a pinhole projector and a coaxial convergent lens positioned so that one of its focal points coincides with the center of projection of the projector.
  • FIG. 2 shows a preferred embodiment of the illuminator 110, comprising a light source 200 and a means of bending 220; the light source 200 emitting light source rays 210; the light source rays 210 being transformed into projected light rays 112, 113 by the means of bending 220; the projected light rays constituting the light pattern 111.
  • In one embodiment, the light source 200 is a data projector driven by a computer.
  • In one embodiment, the means of bending 220 is achieved by using a convergent lens.
  • In another embodiment, the means of bending 220 is achieved by using a large aperture convergent Fresnel lens.
  • In another embodiment, the means of bending 220 is achieved by using a cylindrical lens.
  • In another embodiment, the means of bending 220 is achieved by using a plurality of mirror surfaces.
  • In another embodiment, the means of bending 220 is achieved by using a plurality of planar mirror surfaces.
  • FIG. 3a shows a second embodiment of the illuminator 110, wherein components that are also found in FIG. 2 are numbered accordingly.
  • Here, the light source rays 210 are contained in a plurality of source parallel light planes 300, which are subsequently transformed by the means of bending 220 into the plurality of parallel light planes 115.
  • FIG. 3b shows a third embodiment of the illuminator 110, wherein components that are also found in FIG. 2 are numbered accordingly.
  • In this embodiment, the light source emits a single source parallel plane 310, which is subsequently transformed by the means of bending 220.
  • The light source 200 which emits a single source parallel plane 310 may be implemented using a laser stripe projector.
  • In this case, the means of bending 220 includes a mechanically-manipulated mirror in order to time-sequentially redirect the single source parallel plane 310 into each plane of the plurality of parallel light planes 115.
  • The positions of the camera 101, the light source 200, the means of bending 220, and the mirror surface 130 must be calibrated with respect to a coordinate system defined with respect to the center of projection of the camera (or other global reference position). Any well-known calibration or measurement technique for obtaining camera and projector parameters and measuring object locations may be used. If the positions are suitably calibrated, the object 120 can be located at any point within a reconstruction volume in which points on the object surface are both imaged by the camera and illuminated by the projection system.
  • Camera 101 is used to capture camera views 150 of object 120 viewed under a given illumination pattern displayed by the illuminator 110.
  • The illumination patterns correspond to a sequence of Gray code patterns, a few of which (400, 410, 420, 430) are depicted in FIG. 4.
  • Each Gray code pattern is composed of alternating black and white stripes of increasingly fine width.
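Such stripe patterns can be generated by taking, for the k-th pattern, the k-th most significant bit of the Gray code of each scan-line index; successive patterns then contain stripes of increasingly fine width. A sketch with illustrative dimensions and names (not from the patent):

```python
def gray_code_patterns(num_lines: int, num_bits: int):
    """Pattern k assigns each scan-line i the k-th most significant bit
    (MSB first) of the Gray code of i. Every pixel on a scan-line shares
    that bit, so each pattern is a set of black/white stripes."""
    patterns = []
    for k in range(num_bits):
        shift = num_bits - 1 - k
        patterns.append([(i ^ (i >> 1)) >> shift & 1 for i in range(num_lines)])
    return patterns
```

With 8 scan-lines and 3 bits, the first pattern splits the field into two halves, and later patterns subdivide it into progressively finer stripes.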
  • An exemplary Gray coded captured camera view 510 is shown in FIG. 5b. These images are processed according to the logic flow diagram depicted in FIG. 6.
  • First, Gray coded images 510 are captured.
  • In step 610, for each image captured under a given structured light illumination pattern, each pixel is identified as being either illuminated or in shadow. Any suitable pixel thresholding operation may be performed in step 610.
  • In step 620, after processing multiple such images, the set of thresholded values for each camera pixel is analyzed to determine which projector scanline illuminated each pixel. Equivalently, since each projector scanline corresponds to one of the planes contained in the plurality of parallel light planes 115 shown in FIG. 1, step 620 determines which plane of light illuminates a given camera pixel.
  • Image 520 is an exemplary labeling of projected scanlines produced by step 620. Any suitable analysis may be applied to decode the images to determine which plane illuminated a given camera pixel. For a Gray code structured light sequence, the Gray code decoding scheme may be employed. Any sequence which can be decoded in such a manner can be used within this invention.
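Steps 610 and 620 can be sketched together as follows: each thresholded image contributes one Gray-code bit per pixel, and the per-pixel bit sequence is decoded into a projector scan-line index (the array layout and names are illustrative assumptions, not from the patent):

```python
def decode_scanlines(bit_images):
    """bit_images: list of 2D 0/1 arrays (lists of lists), one per pattern,
    most significant Gray bit first. Returns a 2D array giving the
    projector scan-line index observed at each camera pixel."""
    rows, cols = len(bit_images[0]), len(bit_images[0][0])
    out = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            g = 0
            for img in bit_images:      # assemble the Gray code, MSB first
                g = (g << 1) | img[r][c]
            n = g                       # convert Gray code to binary index
            g >>= 1
            while g:
                n ^= g
                g >>= 1
            out[r][c] = n
    return out
```

Feeding in synthetic bit images in which pixel c sees scan-line c recovers the identity labeling, which is the consistency check one would expect.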
  • In step 630, the object shape is estimated from the structured light sequence.
  • Any suitable prior-art method may be used for this purpose, such as one based on the Gray code patterns shown in FIG. 4.
  • In a preferred embodiment, ray-plane triangulation is used to reconstruct the three-dimensional coordinate of a point on the object surface for each camera pixel.
  • The projector scan-line illuminating a given pixel is determined in step 620.
  • The calibration of the positions of the camera, the projector, and the means of bending is used to determine the equation of each plane within the plurality of parallel light planes 115, as well as the equation of the line connecting the center of projection of the camera with a given pixel.
  • The intersection of a plane with a camera ray provides an estimate for the three-dimensional coordinate of a point on the object surface.
  • This analysis can be applied independently to each camera pixel, yielding a set of points on the object surface that represent the object shape. Furthermore, these points can be analyzed to produce a polygon mesh to represent the object shape.
  • An additional image 500, collected under ambient illumination with no pattern projected by the illuminator 110, is used to further estimate the appearance (e.g., color) of the estimated object surface.
  • The recovered set of three-dimensional points representing the object surface can be combined with the system calibration to determine which pixel in the ambient image corresponds to a given point on the surface. This correspondence can be used to estimate the appearance of each three-dimensional point.
  • This scheme is known as view-independent texture mapping. In general, any texture mapping or appearance modeling scheme could be applied, such as view-dependent texture mapping.
  • A plurality of ambient images, captured for various poses of the object under ambient illumination, could be collected to obtain a better approximation of the object appearance, as shown in FIG. 7.
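The appearance-estimation step amounts to projecting each reconstructed 3D point through the calibrated camera into the ambient image and sampling the color there. A pinhole-projection sketch (the intrinsic parameters in the test values are illustrative, not from the patent):

```python
def project_point(X, fx, fy, cx, cy):
    """Pinhole projection of a 3D point X = (x, y, z), given in camera
    coordinates with z > 0, to (u, v) pixel coordinates."""
    x, y, z = X
    return (fx * x / z + cx, fy * y / z + cy)

def sample_color(image, uv):
    """Nearest-neighbor color lookup; image is a list of rows of RGB tuples."""
    u, v = int(round(uv[0])), int(round(uv[1]))
    return image[v][u]
```

Each surface point is projected into the ambient image and assigned the color found there, which is the view-independent texture-mapping scheme described above.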

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

Methods and apparatus for recovering the shape and appearance of an object illuminated by a coded structured light pattern and observed by a camera that simultaneously captures multiple views of the object. In one embodiment, a Fresnel lens and a data projector are used as the means of illuminating the object with surround structured light patterns. The projector projects several patterns, and the camera captures images of the object for each pattern. The images are decoded to determine the correspondence between projector scan-lines and camera pixels. In combination with calibration of the relative positions of the various optical elements, including the projector and the camera, ray-plane intersection is used to determine the three-dimensional depth of each portion of the surface. An additional image captured under ambient illumination is used to recover the appearance of the surface point at each camera pixel.
PCT/US2009/054007 2008-08-18 2009-08-17 Surround structured lighting for recovering 3D object shape and appearance WO2010021972A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US18930608P 2008-08-18 2008-08-18
US61/189,306 2008-08-18

Publications (1)

Publication Number Publication Date
WO2010021972A1 (fr) 2010-02-25

Family

ID=41707420

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2009/054007 WO2010021972A1 (fr) 2008-08-18 2009-08-17 Surround structured lighting for recovering 3D object shape and appearance

Country Status (1)

Country Link
WO (1) WO2010021972A1 (fr)


Citations (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4206965A (en) * 1976-08-23 1980-06-10 Mcgrew Stephen P System for synthesizing strip-multiplexed holograms
US5127037A (en) * 1990-08-15 1992-06-30 Bynum David K Apparatus for forming a three-dimensional reproduction of an object from laminations
US5455689A (en) * 1991-06-27 1995-10-03 Eastman Kodak Company Electronically interpolated integral photography system
US5517603A (en) * 1991-12-20 1996-05-14 Apple Computer, Inc. Scanline rendering device for generating pixel values for displaying three-dimensional graphical images
US20020130820A1 (en) * 1998-04-20 2002-09-19 Alan Sullivan Multi-planar volumetric display system and method of operation
US20020150288A1 (en) * 2001-02-09 2002-10-17 Minolta Co., Ltd. Method for processing image data and modeling device
US20030194131A1 (en) * 2002-04-11 2003-10-16 Bin Zhao Object extraction
US20040151365A1 (en) * 2003-02-03 2004-08-05 An Chang Nelson Liang Multiframe correspondence estimation
US20040155877A1 (en) * 2003-02-12 2004-08-12 Canon Europa N.V. Image processing apparatus
US20040201584A1 (en) * 2003-02-20 2004-10-14 Binary Simplex, Inc. Spatial decomposition methods using bit manipulation
US20040207823A1 (en) * 2003-04-16 2004-10-21 Alasaarela Mikko Petteri 2D/3D data projector
US20050057569A1 (en) * 2003-08-26 2005-03-17 Berger Michael A. Static and dynamic 3-D human face reconstruction
US6947579B2 (en) * 2002-10-07 2005-09-20 Technion Research & Development Foundation Ltd. Three-dimensional face recognition
US20050259159A1 (en) * 1999-01-06 2005-11-24 Hideyoshi Horimai Apparatus and method for photographing three-dimensional image, apparatus and method for displaying three-dimensional image, and apparatus and method for converting three-dimensional image display position
US20050259158A1 (en) * 2004-05-01 2005-11-24 Eliezer Jacob Digital camera with non-uniform image resolution
US20060017722A1 (en) * 2004-06-14 2006-01-26 Canon Europa N.V. Texture data compression and rendering in 3D computer graphics
US20060066612A1 (en) * 2004-09-23 2006-03-30 Herb Yang Method and system for real time image rendering
US20070206836A1 (en) * 2004-09-06 2007-09-06 Bayerische Motoren Werke Aktiengesellschaft Device for the detection of an object on a vehicle seat
US20080018855A1 (en) * 2004-03-22 2008-01-24 Larichev Andrey V Aberrometer Provided with a Visual Acuity Testing System

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AT511223B1 (de) * 2011-03-18 2013-01-15 A Tron3D Gmbh Device for recording images of three-dimensional objects
EP2499992A3 (fr) * 2011-03-18 2013-05-22 a.tron3d GmbH Device for capturing images of three-dimensional objects
US9101434B2 (en) 2011-03-18 2015-08-11 A.Tron3D Gmbh Device for recording images of three-dimensional objects
AT511223A1 (de) * 2011-03-18 2012-10-15 A Tron3D Gmbh Device for recording images of three-dimensional objects
US20120281087A1 (en) * 2011-05-02 2012-11-08 Faro Technologies, Inc. Three-dimensional scanner for hand-held phones
US9170098B2 (en) 2011-07-13 2015-10-27 Faro Technologies, Inc. Device and method using a spatial light modulator to find 3D coordinates of an object
US9091529B2 (en) 2011-07-14 2015-07-28 Faro Technologies, Inc. Grating-based scanner with phase and pitch adjustment
CN104296679A (zh) * 2014-09-30 2015-01-21 唐春晓 Mirror-based three-dimensional information acquisition device and method
CN104506838A (zh) * 2014-12-23 2015-04-08 宁波盈芯信息科技有限公司 Depth sensing method, device and system using symbol-array surface structured light
US10512508B2 (en) 2015-06-15 2019-12-24 The University Of British Columbia Imagery system
CN106556356A (zh) * 2016-12-07 2017-04-05 西安知象光电科技有限公司 Multi-angle three-dimensional profile measurement system and measurement method
CN106705896A (zh) * 2017-03-29 2017-05-24 江苏大学 Electrical connector housing defect detection device and method based on single-camera omnidirectional active vision
CN106705896B (zh) * 2017-03-29 2022-08-23 江苏大学 Electrical connector housing defect detection device and method based on single-camera omnidirectional active vision
CN109285213A (zh) * 2018-07-18 2019-01-29 西安电子科技大学 Omnidirectional polarization three-dimensional reconstruction method
CN110514143A (zh) * 2019-08-09 2019-11-29 南京理工大学 Mirror-based calibration method for fringe projection systems
WO2021027719A1 (fr) * 2019-08-09 2021-02-18 南京理工大学 Reflector-based calibration method for fringe projection systems
US11808564B2 (en) 2019-08-09 2023-11-07 Nanjing University Of Science And Technology Calibration method for fringe projection systems based on plane mirrors
CN110672039A (zh) * 2019-09-18 2020-01-10 南京理工大学 Omnidirectional three-dimensional measurement method for objects based on plane mirrors
CN110672039B (zh) * 2019-09-18 2021-03-26 南京理工大学 Omnidirectional three-dimensional measurement method for objects based on plane mirrors
CN111947598A (zh) * 2020-07-24 2020-11-17 南京理工大学 360° three-dimensional human head measurement method based on plane mirrors
CN111947598B (zh) * 2020-07-24 2022-04-01 南京理工大学 360° three-dimensional human head measurement method based on plane mirrors

Similar Documents

Publication Publication Date Title
WO2010021972A1 (fr) Surround structured lighting for recovering 3D object shape and appearance
US6791542B2 (en) Modeling 3D objects with opacity hulls
US6831641B2 (en) Modeling and rendering of surface reflectance fields of 3D objects
US20190156557A1 (en) 3d geometric modeling and 3d video content creation
CN104335005B (zh) 3D scanning and positioning system
US9363501B2 (en) Combining depth-maps from different acquisition methods
US6903738B2 (en) Image-based 3D modeling rendering system
Zhang et al. Rapid shape acquisition using color structured light and multi-pass dynamic programming
US6455835B1 (en) System, method, and program product for acquiring accurate object silhouettes for shape recovery
US6792140B2 (en) Image-based 3D digitizer
US6858826B2 (en) Method and apparatus for scanning three-dimensional objects
US20130038696A1 (en) Ray Image Modeling for Fast Catadioptric Light Field Rendering
US20020057438A1 (en) Method and apparatus for capturing 3D surface and color thereon in real time
CN101198964A (zh) Creating a three-dimensional image of an object using infrared pattern illumination
WO2009120073A2 (fr) Self-referenced structured-light three-dimensional scanner with dynamic calibration
CN107370950B (zh) Focusing processing method, device and mobile terminal
Lanman et al. Surround structured lighting: 3-D scanning with orthographic illumination
Lanman et al. Surround structured lighting for full object scanning
JP4335589B2 (ja) Method for modeling 3D objects
Balzer et al. Cavlectometry: Towards holistic reconstruction of large mirror objects
Aliaga Digital inspection: An interactive stage for viewing surface details
JP3932776B2 (ja) Three-dimensional image generation device and three-dimensional image generation method
Jost et al. Modeling 3d textured objects by fusion of multiple views
Wong et al. Multi-view 3D model reconstruction: exploitation of color homogeneity in voxel mask
Robinson Surface scanning with uncoded structured light sources

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 09808660

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 09808660

Country of ref document: EP

Kind code of ref document: A1