GB2352901A - Rendering three dimensional representations utilising projected light patterns - Google Patents

Rendering three dimensional representations utilising projected light patterns

Info

Publication number
GB2352901A
GB2352901A GB9910960A
Authority
GB
United Kingdom
Prior art keywords
image
optical radiation
representation
structured optical
calibration
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB9910960A
Other versions
GB9910960D0 (en)
Inventor
Ivan Daniel Meir
Jonathan Anthony Holdback
Jeremy David Norman Wilson
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tricorder Technology PLC
Original Assignee
Tricorder Technology PLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tricorder Technology PLC filed Critical Tricorder Technology PLC
Priority to GB9910960A priority Critical patent/GB2352901A/en
Priority to GB0027703A priority patent/GB2353659A/en
Priority to PCT/GB1999/001556 priority patent/WO1999060525A1/en
Priority to AU40505/99A priority patent/AU4050599A/en
Priority to JP2000550066A priority patent/JP2002516443A/en
Publication of GB9910960D0 publication Critical patent/GB9910960D0/en
Publication of GB2352901A publication Critical patent/GB2352901A/en
Withdrawn legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/204Image signal generators using stereoscopic image cameras
    • H04N13/207Image signal generators using stereoscopic image cameras using a single 2D image sensor
    • H04N13/221Image signal generators using stereoscopic image cameras using a single 2D image sensor using the relative movement between cameras and objects
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G01C11/04Interpretation of pictures
    • G01C11/06Interpretation of pictures by comparison of two or more pictures of the same area
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/521Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/55Depth or shape recovery from multiple images
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106Processing image signals
    • H04N13/111Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation
    • H04N13/117Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation the virtual viewpoint locations being selected by the viewers or determined by viewer tracking
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/204Image signal generators using stereoscopic image cameras
    • H04N13/246Calibration of cameras
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/204Image signal generators using stereoscopic image cameras
    • H04N13/254Image signal generators using stereoscopic image cameras in combination with electromagnetic radiation sources for illuminating objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • G06T2207/10012Stereo images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • G06T2207/10021Stereoscopic video; Stereoscopic image sequence
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/189Recording image signals; Reproducing recorded image signals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/296Synchronisation thereof; Control thereof
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N2013/0074Stereoscopic image analysis
    • H04N2013/0081Depth or disparity estimation from stereoscopic image signals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N2013/0074Stereoscopic image analysis
    • H04N2013/0085Motion estimation from stereoscopic image signals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N2013/0074Stereoscopic image analysis
    • H04N2013/0092Image segmentation from stereoscopic image signals

Abstract

An apparatus for deriving a 3D representation of at least part of an object (3) comprises a projector (PR) arranged to project a speckle pattern or other structured light from a slide (S) onto the object and a single digital camera (CM) arranged to acquire an image of the object and an image of a calibration target (T) which is similarly illuminated by the speckle pattern in a preliminary calibration step. Pairs of points (P1, Q1; P2, Q2) of the respective images are correlated eg by Gruen's algorithm to enable the baseline vector (V) joining the perspective centres (OC, OP) of the camera and projector to be found from the intersection of planes OCP1Q1 and OCP2Q2. The 3D representation is then generated in simulated 3D space by projection and intersection of ray bundles from virtual projectors (implemented in software) located on the baseline vector (V).

Description

Method and Apparatus for deriving 3D Representation

The present invention relates to a method and apparatus for deriving a representation of the three-dimensional (3D) shape of an object from an image (referred to herein as an object image) of the projection of structured optical radiation onto the object surface. The term "structured optical radiation" is a generalisation of the term "structured light" and is intended to cover not only structured light but also structured electromagnetic radiation of other wavelengths which obeys the laws of optics.
In principle the 3D shape of part of an object surface can be obtained by projecting structured light, eg a grid pattern, onto a surface of the object, acquiring an image of the illuminated region of the object surface and identifying the elements of the structured light (eg the crossed lines of the grid pattern) which correspond to the respective features (eg crossed lines) of the image, assuming that the spatial distribution of the structured light is known.
One such arrangement is shown by Hu & Stockman in "3-D Surface Solution Using Structured Light and Constraint Propagation", IEEE Trans PAMI Vol 11 No 4 pp 390-402, 1989, who discuss the advantages of the technique over stereoscopic imaging techniques in which two cameras acquire overlapping images of a common surface region of an object and the 3D shape is reconstructed from a correlation of pairs of features of the two images which correspond to the same physical surface feature. In particular the "correspondence problem" of correlating features in the two images obtained in stereoscopic imaging reduces to a much simpler "line labelling problem" when stripes are projected onto the object surface. Various geometric constraints are disclosed which simplify this problem further. The projection of both grids and grey scale patterns is disclosed.
In the arrangement shown in WO 93/03579 two projectors each project a set of light planes diverging from a line origin, the sets of light planes being mutually orthogonal and being directed onto a common region of the object surface to form a grid pattern. The line origins intersect at a point between the projectors above the object; this point is effectively the origin (optical centre) of a virtual projector.
A camera is arranged to acquire an image of the grid pattern as distorted by the object surface and is so oriented with respect to the projector arrangement that the plane defined by a) the baseline joining the optical centres of the virtual projector and the camera and b) the optical axis of the camera is nearly parallel to one set of light planes.
This ensures that the stripes in the image corresponding to this set of light planes cross only a small number of stripes derived from the other set of light planes and that the number of possible pairs of light planes corresponding to a given intersection of stripes in the acquired image is reduced to a minimum. Since each intersection can be associated with only one pair of light planes the potential ambiguity in the image reconstruction is reduced.
The above problem of ambiguity is addressed in the context of stereoscopic imaging by WO 91/15732 (Gordon) which discloses an arrangement in which a laser scanner projects a series of stripes onto the scanned object and left and right cameras detect the distorted stripes from the object.
It is recognised that a given bright point in the image plane of one camera cannot be simply correlated with an illuminated point on the surface of the object because it is not known which stripe illuminates that point. Accordingly, an arbitrary pixel in the stripe in one camera's image plane is selected and a line drawn through the centre of the camera lens projecting this line out into space. This line is then projected onto the image plane of the other camera and the resulting epipolar line in the other camera's image plane cuts a number of stripes also imaged on its image plane. Any one of these points of intersection could in principle correspond to the arbitrary pixel mentioned above. The particular point which corresponds is found by projecting all the points of intersection back into space and determining which of the resulting lines intersects a laser stripe from the laser projector.
US 5,838,428 discloses an arrangement employing successively projected patterns of stripes, coded in accordance with a DeBruijn sequence and blurred to enable the resolution of the arrangement to be increased by interpolation. The projector comprises a mask and a xenon flash tube. The code is such that each element in the CCD photodetector used to image the projection of the pattern on the object surface detects a unique sequence of grey levels which is determined by the 3D position of the corresponding surface portion of the object and the structure of the projected light. The latter is found by a calibration procedure involving imaging the same sequence of projections on a reference plane.
It should be noted that there is no correlation of images in the calibration or measurement procedure.
Although the above arrangement avoids the correspondence or line labelling problem it has the disadvantage of requiring multiple projections of structured light and acquisition of images, increasing the complexity and reducing the speed of the arrangement. In particular the shape of a moving or shape-changing object cannot be determined. Furthermore the calibration procedure is complex.
An object of the present invention is to alleviate or overcome at least some disadvantages of the prior art.
In one aspect the invention provides apparatus for generating a 3D representation of at least part of an object from an object image of the projection of structured optical radiation onto the object surface and from at least one calibration image of the projection of the structured optical radiation onto a surface displaced from the object surface, the apparatus comprising image processing means arranged to generate correspondences between at least one calibration image and the object image and optionally a further calibration image, and reconstruction processing means arranged to simulate a first projection of the object image and a second projection linking respective correspondences of at least two of the correlated images and to derive said 3D representation from the mutual intersections of the first and second projections.
The calibration image can for example be of the projection of the structured optical radiation onto a calibration surface or can for example be a further object image obtained after moving the object relative to the camera used to acquire the initial object image and the projector means used to project the structured optical radiation.
Preferably the first and second projections are from a baseline linking an origin of the structured optical radiation and a perspective centre associated with the image (eg the optical centre of the camera lens used to acquire the image), the reconstruction processing means being arranged to derive said baseline from two or more pairs of correlated features. This feature is illustrated in Figures 2 and 11 discussed in detail below.
In one embodiment the image processing means is arranged to generate correspondences between two or more calibration images and to determine the spacing between origins of the first and second projections in dependence upon both the correspondences of the two or more calibration images and input or stored metric information associated with the calibration images. This feature is illustrated in Figure 11, discussed in detail below.
In another embodiment the reconstruction processing means is arranged to vary the spacing between the origins of the first and second projections in dependence upon a scaling variable enterable by a user. In this embodiment a further calibration image is not required. Preferably the apparatus includes means for displaying the 3D representation with a relative scaling dependent upon the value of the scaling variable.
Preferably the apparatus includes means for combining two or more 3D representations and means for adjusting the relative scaling of the representations to enable them to fit each other.
Preferably the image processing means is arranged to generate said correspondences of said images by comparing local radiometric distributions of said images.
Suitable algorithms for correlating (generating correspondences between) overlapping images are already known - eg Gruen's algorithm (see Gruen, A W "Adaptive least squares correlation: a powerful image matching technique" S Afr J of Photogrammetry, Remote Sensing and Cartography Vol 14 No 3 (1985) and Gruen, A W and Baltsavias, E P "High precision image matching for digital terrain model generation" Int Arch Photogrammetry Vol 25 No 3 (1986) p 254) and particularly the "region-growing" modification thereto which is described in Otto and Chau "Region-growing algorithm for matching terrain images" Image and Vision Computing Vol 7 No 2 May 1989 p 83, all of which are incorporated herein by reference.
Essentially, Gruen's algorithm is an adaptive least squares correlation algorithm in which two image patches of typically 15 x 15 to 30 x 30 pixels are correlated (ie selected from larger left and right images in such a manner as to give the most consistent match between patches) by allowing an affine geometric distortion between coordinates in the images (ie stretching or compression in which originally parallel lines remain parallel in the transformation) and allowing an additive radiometric distortion between the grey levels of the pixels in the image patches, generating an over-constrained set of linear equations representing the discrepancies between the correlated pixels and finding a least squares solution which minimises the discrepancies.
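By way of illustration only (this sketch forms no part of the original specification; the function name, parameter defaults and convergence test are assumptions), the following Python fragment shows one way such an adaptive least squares correlation step can be organised: a patch of the first image is matched into the second image by iteratively refining a six-parameter affine mapping so as to minimise the grey-level discrepancies in a least squares sense. In line with improvement (i) noted below, the additive radiometric shift is omitted.

```python
# Illustrative sketch of a Gruen-style adaptive least squares correlation
# step.  A square patch of the left image, centred on `centre`, is matched
# into the right image by refining the affine parameters
# p = [a, b, tx, c, d, ty], which map patch coordinates (x, y) relative to
# the patch centre to right-image coordinates (a*x + b*y + tx, c*x + d*y + ty).
import numpy as np
from scipy.ndimage import map_coordinates

def gruen_alsc(left, right, centre, affine, patch=15, iters=20, tol=1e-3):
    left = np.asarray(left, float)
    right = np.asarray(right, float)
    h = patch // 2
    cx, cy = centre                                  # patch centre, well inside `left`
    ys, xs = np.mgrid[-h:h + 1, -h:h + 1]            # patch coordinates
    tmpl = left[cy - h:cy + h + 1, cx - h:cx + h + 1]

    p = np.asarray(affine, float).copy()
    for _ in range(iters):
        a, b, tx, c, d, ty = p
        u = a * xs + b * ys + tx                     # mapped right-image columns
        v = c * xs + d * ys + ty                     # mapped right-image rows
        warped = map_coordinates(right, [v, u], order=1)
        # image gradients of the warped patch (central differences)
        gx = (map_coordinates(right, [v, u + 0.5], order=1)
              - map_coordinates(right, [v, u - 0.5], order=1))
        gy = (map_coordinates(right, [v + 0.5, u], order=1)
              - map_coordinates(right, [v - 0.5, u], order=1))
        # Jacobian of the warped grey levels w.r.t. the six affine parameters
        J = np.stack([gx * xs, gx * ys, gx,
                      gy * xs, gy * ys, gy], axis=-1).reshape(-1, 6)
        r = (tmpl - warped).ravel()                  # grey-level discrepancies
        dp, *_ = np.linalg.lstsq(J, r, rcond=None)   # least squares update
        p += dp
        if np.hypot(dp[2], dp[5]) < tol:             # translation has converged
            break
    return p                                         # refined affine parameters
```

A reasonable initial estimate is [1, 0, x2, 0, 1, y2], where (x2, y2) is an approximate position of the matching patch centre in the second image.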
The Gruen algorithm is essentially an iterative algorithm and requires a reasonable approximation for the correlation to be fed in before it will converge to the correct solution. The Otto and Chau region-growing algorithm begins with an approximate match between a point in one image and a point in the other, utilises Gruen's algorithm to produce a more accurate match and to generate the geometric and radiometric distortion parameters, and uses the distortion parameters to predict approximate matches for points in the region of the neighbourhood of the initial matching point. The neighbouring points are selected by choosing the adjacent points on a grid having a grid spacing of eg 5 or 10 pixels in order to avoid running Gruen's algorithm for every pixel.
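Purely as an illustrative sketch (again not part of the original disclosure), the region-growing strategy can be organised as a queue of grid points, each seeded with affine parameters predicted from an already-matched neighbour; it reuses the gruen_alsc routine sketched above, and a practical implementation would add validity checks before accepting each match.

```python
# Illustrative sketch of region growing on a coarse grid, reusing the
# gruen_alsc sketch above.  The affine parameters refined at one grid point
# seed the initial estimates for its four neighbours.
from collections import deque
import numpy as np

def grow_matches(left, right, seed_centre, seed_affine, step=10, margin=10):
    left = np.asarray(left)
    matches = {}
    queue = deque([(seed_centre, list(seed_affine))])
    while queue:
        (cx, cy), guess = queue.popleft()
        if (cx, cy) in matches:
            continue
        p = gruen_alsc(left, right, (cx, cy), guess)
        matches[(cx, cy)] = p
        a, b, tx, c, d, ty = p
        for dx, dy in ((step, 0), (-step, 0), (0, step), (0, -step)):
            nx, ny = cx + dx, cy + dy
            inside = (margin <= nx < left.shape[1] - margin and
                      margin <= ny < left.shape[0] - margin)
            if not inside or (nx, ny) in matches:
                continue
            # predict the neighbour's translation from the current affine
            pred = [a, b, tx + a * dx + b * dy, c, d, ty + c * dx + d * dy]
            queue.append(((nx, ny), pred))
    return matches
```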
Hu et al "Matching Point Features with Ordered Geometric, Rigidity and Disparity Constraints" IEEE Transactions on Pattern Analysis and Machine Intelligence Vol 16 No 10, 1994 pp 1041-1049 (and references cited therein) discloses further methods for correlating features of overlapping images.
Since the above algorithms were developed for generating correspondences between images having poorly defined features (eg aerial photographs) whereas the projection of structured light onto an object surface will generate distinct local radiometric distributions, the problem of correlation is less critical in the context of the present invention. Accordingly the precise correlation algorithm is not critical.
However we have found a number of improvements to the Gruen algorithm, as follows:
i) the additive radiometric shift employed in the algorithm can be dispensed with;
ii) if, during successive iterations, a candidate matched point moves by more than a certain amount (eg 3 pixels) per iteration then it is not a valid matched point and should be rejected;
iii) during the growing of a matched region it is useful to check for sufficient contrast at at least three of the four sides of the region in order to ensure that there is sufficient data for a stable convergence - in order to facilitate this it is desirable to make the algorithm configurable to enable the parameters (eg required contrast) to be optimised for different environments, and
iv) in order to quantify the validity of the correspondences between images it has been found useful to re-derive the original grid point in the starting image by applying the algorithm to the matched point in the other image (ie reversing the stereo matching process) and measuring the distance between the original grid point and the new grid point found in the starting image from the reverse stereo matching.
The smaller the distance the better the correspondence.
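A minimal sketch of check (iv), again reusing the gruen_alsc routine given earlier (the choice of initial guess for the reverse match, and the use of the matched translation as the point position, are assumptions):

```python
# Illustrative sketch of the reverse-matching quality check (iv).  The point
# matched in the second image is matched back into the first image, and the
# distance between the original grid point and the re-derived point is
# returned as the quality measure.
import numpy as np

def reverse_match_error(left, right, centre, affine):
    p_fwd = gruen_alsc(left, right, centre, affine)
    rx, ry = p_fwd[2], p_fwd[5]              # matched position in the right image
    # seed the reverse match at the original point (the inverse of the
    # forward affine could equally be used for the linear part)
    back_guess = [1.0, 0.0, float(centre[0]), 0.0, 1.0, float(centre[1])]
    p_bwd = gruen_alsc(right, left, (int(round(rx)), int(round(ry))), back_guess)
    lx, ly = p_bwd[2], p_bwd[5]              # re-derived position in the left image
    return np.hypot(lx - centre[0], ly - centre[1])   # smaller is better
```

A match could then be rejected when the returned distance exceeds a chosen threshold (eg a fraction of a pixel).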
In another aspect the invention provides apparatus for deriving a 3D representation of at least part of an object, comprising means for projecting structured optical radiation onto the surface of the object, means for acquiring a 2D image of the projection of the structured optical radiation, and image processing means arranged to derive the 3D representation from the distortion of the structure of the projected optical radiation by the object surface, the projected structured optical radiation having an irregular radiometric and/or colorimetric distribution.
In a related aspect the invention provides a method of generating a 3D representation of at least part of an object, wherein structured optical radiation is projected onto the surface of the object, a 2D image of the projection of the structured optical radiation on the surface is acquired, and the 3D representation is derived from the distortion of the structure of the projected optical radiation by the object surface, the projected structured optical radiation having an irregular radiometric and/or colorimetric distribution.
In a further aspect the invention provides a method of generating a 3D representation of at least part of an object from an object image of the projection of structured optical radiation onto the object surface and from at least one calibration image of the projection of the structured optical radiation onto a surface displaced from the object surface, the method comprising the steps of:
i) correlating at least one calibration image with the object image and optionally with a further calibration image;
ii) simulating a first projection of the object image and a second projection of the structured optical radiation, and
iii) deriving said 3D representation from the mutual intersections of the first and second projections.
In a related aspect the invention provides image processing apparatus for deriving a 3D representation of at least part of an object from a 2D image of the illuminated object, the object being illuminated with structured optical radiation projected from a location spaced apart from the viewpoint at which the 2D image is acquired, the 2D image being correlated with the structured radiation, the apparatus comprising digital processing means arranged to form a 3D reconstruction in which an illuminated region of the object in the image extends in a simulated 3D space in dependence upon both the correlation and a scaling variable, the scaling variable being representative of the separation between the location from which the structured optical radiation is projected and the viewpoint at which the 2D image is acquired. Preferred features of the invention are defined in the dependent claims.
In another aspect the invention provides image processing apparatus for deriving a 3D representation of at least part of an object from a 2D image thereof, the object being illuminated with structured optical radiation projected from a location spaced apart from the viewpoint at which the 2D image is acquired, the 2D image being correlated with rays of the structured radiation, the apparatus comprising digital processing means arranged to form a 3D reconstruction which extends in a simulated 3D space in dependence upon both the correlation and a scaling variable, the scaling variable being representative of the separation between the location from which the structured optical radiation is projected and the viewpoint at which the 2D image is acquired.
This aspect of the invention is illustrated in Figure 6. Following a simple calibration procedure requiring no knowledge of the position of the camera or the projector relative to the object it enables a 3D representation to be generated. This can optionally be displayed and scaled or it can be distorted eg for special effects in graphics and animation.
Preferably the apparatus is arranged to derive a further 3D reconstruction from a further 2D image acquired from a different viewpoint relative to the object, the combining means being arranged to combine the first-mentioned 3D reconstruction and the further 3D reconstruction by manipulations in a simulated 3D space involving one or more of rotation and translation, the apparatus further comprising scaling means arranged to reduce or eliminate any remaining discrepancies between the 3D reconstructions by scaling one 3D reconstruction relative to the other along at least one axis.
Preferably the apparatus is arranged to display both 3D reconstructions simultaneously and to manipulate them in simulated 3D space in response to commands entered by a user.
In one embodiment the apparatus is arranged to perform the manipulations of the 3D reconstructions under the control of a computer pointing device.
In a related aspect the invention provides a method of deriving a 3D representation of at least part of an object from a 2D image thereof, comprising the steps of illuminating the object with structured projected optical radiation, acquiring a 2D image of the illuminated object, correlating the 2D image with rays of the structured optical radiation, and digitally processing the 2D image to form a 3D reconstruction which extends in a simulated 3D space in dependence upon both the correlation and a scaling variable, the scaling variable being representative of the separation between a location from which the structured optical radiation is projected and the viewpoint at which the 2D image is acquired.
Preferred embodiments of the invention are described below by way of example only with reference to Figures 1 to 11 of the accompanying drawings, wherein:
Figure 1 is a schematic representation of apparatus in accordance with all aspects of the invention;
Figure 2 is a sketch perspective ray diagram showing the optical arrangement of the apparatus of Figure 1;
Figure 3 shows an object image and a calibration image acquired by the apparatus of Figures 1 and 2 and the correlation of their features;
Figure 4 is a sketch perspective ray diagram showing the use of two reference surfaces to locate the camera and projector of the apparatus of Figures 1 and 2 on the baseline connecting their respective perspective centres;
Figure 5 is a flow diagram illustrating the mode of operation of the apparatus of Figures 1 and 2 in accordance with the first method aspect of the invention;
Figure 6 is a ray diagram illustrating both the acquisition of the images utilised by the apparatus of Figures 1 and 2 and the regeneration of the object shape by virtual projectors of the acquired object image and structured optical radiation;
Figure 7 is a screenshot illustrating the fitting together of two 3D surface portions of the object using the apparatus of Figures 1 and 2;
Figure 8 is a further screenshot showing the scaling of the resulting composite 3D surface portion along vertical and horizontal axes;
Figure 9 is a further screenshot showing the scaling of intersecting 3D surface portions to fit each other;
Figure 10 is a screenshot showing a user interface provided by the apparatus of Figures 1 and 2 for manipulating the images and 3D surface portions, and
Figure 11 is a 3D ray diagram showing the derivation of the projector-camera separation from the position of two reference or target surfaces.
Referring to Figure 1, the apparatus comprises a personal computer 4 (eg a Pentium® PC) having a conventional CPU, ROM, RAM and a hard drive, a frame grabber connection at an input port to a digital camera 1, a video output port connected to a screen 5 and conventional input ports connected to a keyboard and a mouse 6 or other pointing device. The hard drive is loaded with a conventional operating system such as Windows 95 and software:
a) to display images acquired by the camera 1;
b) to generate correspondences between overlapping regions of images input from the camera 1;
c) to derive from the acquired images the baseline joining the perspective centres of the camera and projector;
d) to project images acquired by the camera into a simulated 3D space from virtual projectors located on the baseline at a separation selected (with the keyboard or pointing device) by the user or otherwise determined (thereby creating a partial 3D reconstruction);
e) to scale the partial 3D reconstruction along one or more axes and combine such partial 3D reconstructions as illustrated in Figures 7, 8 and 9, and
f) to determine the separation of the perspective centres of the camera and projector along the baseline from further correlations of object images and calibration images, and thence derive an accurate partial 3D reconstruction of the object surface.
Additionally the software is preferably arranged to correct the images for distortion due eg to curvature of field of the camera and projector optics before they are processed as described above, either during an initial calibration procedure or as part of a ray bundle adjustment process during the processing of the object and calibration image(s). Suitable correction and calibration procedures are described by
Tsai in "An Efficient and Accurate Camera Calibration Technique for 3D Machine Vision" Proc IEEE CVPR (1986) pp 364-374 and will not be described further.
The camera, a Pulnix M-9701 progressive scan digital camera, is shown mounted at one end of a support frame F on eg a ball and socket mounting and a slide projector PR is shown securely mounted on the other end of the frame. Slide projector PR is provided with a speckle pattern slide S and is arranged to project the resulting speckle pattern onto the surface of region R of an object 3 which is in the field of view of camera CM.
The intrinsic camera parameters are initially determined by acquiring images of a reference plate (not shown) in known positions. The reference plate is planar and has an array of printed blobs of uniform and known spacing. The following parameters are determined and are therefore assumed to be known in the subsequent description:
i) focal length of the camera;
ii) distortion parameters of the lenses of the camera and projector;
iii) scale factor;
iv) image coordinates of the principal point.
Additionally the pixel size (determined by the camera manufacturer) is assumed to be known.
Optionally, the following extrinsic camera parameters are determined:
a) camera location;
b) camera orientation.
Alternatively the camera location and orientation can be taken to define the coordinate system relative to which the object surface coordinates are determined.
Referring now to Figure 2, the camera CM is shown with its perspective centre OC located on a baseline vector V and viewing (initially) a target surface T and (subsequently) object 3. The (virtual) origin or perspective centre Op of projector PR also lies on baseline vector V and is defined by the optical system of the projector comprising field lenses OL and condenser lenses CL. A point light source LS such as a filament bulb illuminates slide S and directs a speckle pattern onto (initially) target surface T and (subsequently) the surface of object 3.
The baseline vector V is found by the following procedure:
Firstly an image 11 (Figure 3) of the region of the surface of object 3 illuminated by the projected speckle pattern is acquired and stored in the memory of computer 4 and an arbitrary group of at least two spaced apart points Q1 and Q2 of this region are selected as points q1 and q2 in the image formed on the photodetector plane PD of the camera. The group of points q1 and q2 is stored.
Secondly the object 3 is substituted by target surface T and an image 12 (Figure 3) of the illuminated region of the target surface is acquired by camera CM. The position and orientation of the target T relative to the camera are found by acquiring an image of the target in the absence of any illumination from the projector, utilising a known pattern of blobs BL formed on the periphery of the target. The image 12 is stored and a patch of the first image 11, defined by its central point (eg Qn, Figure 3), is correlated with the corresponding point Pn of the second image 12 by selecting a surrounding region R of initially 3 x 3 pixels and, by comparing local radiometric intensity distributions by means of the above-described modified Gruen's algorithm, searching for the corresponding region R' in image 12 which is allowed to be distorted with an affine geometric distortion (eg in the simple case illustrated in Figure 3, horizontally elongated). The correlated patch is expanded (up to a maximum of 19 x 19 pixels) and the process is repeated. In this manner the corresponding point Pn is found.
This process is repeated to find a large number of pairs of correspondences PQ (Figure 3) and in particular to correlate the patches centred on P1, P2 (Figure 2) with the points in the group Q1, Q2 (Figure 2). Since the algorithm has a sub-pixel resolution, the latter are not necessarily centred on particular pixels.
In the following geometric discussion the correspondences are treated for the sake of simplicity as correlated pairs of points but it should be noted that this does not imply anything about their topography - in particular it does not imply that they lie at corners or edges of the object, for example.
Referring to Figure 2, the origin (perspective centre) of the projector PR will lie at the intersection of P1Q1 and P2Q2. However the 3D locations of these four points are not known, only the ray lines from the camera on which they lie, namely p1P1, q1Q1, p2P2 and q2Q2. But the line P1Q1 will lie in the plane OCP1Q1, ie plane OCp1q1, which is available from the calibration process and the two images 11 and 12, and the line P2Q2 will lie in the plane OCP2Q2, ie plane OCp2q2, which is similarly available from the calibration process and the two images 11 and 12. These planes define a baseline vector V by their intersection, which passes through OC and the perspective centre OP of the projector.
A particularly simple way of finding the baseline vector V is to project p1q1 and p2q2, which will meet at a point X in the plane of photodetector PD. The projection from point X through the perspective centre OC is the baseline vector V, as shown.
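By way of illustration (an assumed formulation using homogeneous image coordinates, not text from the specification), the construction of point X and the baseline direction might be sketched as follows, taking the principal point as the image origin and the focal length f in the same units as the image coordinates:

```python
# Illustrative sketch of the point X construction: each correlated pair
# (p, q) defines a line in the image plane; the intersection X of two such
# lines, back-projected through the camera's perspective centre, gives the
# direction of the baseline vector V in the camera frame.
import numpy as np

def baseline_direction(p1, q1, p2, q2, f):
    def homog(pt):                        # image point -> homogeneous coordinates
        return np.array([pt[0], pt[1], 1.0])
    l1 = np.cross(homog(p1), homog(q1))   # line through p1 and q1
    l2 = np.cross(homog(p2), homog(q2))   # line through p2 and q2
    X = np.cross(l1, l2)                  # intersection of the two lines
    X = X / X[2]                          # (X[2] == 0 would mean parallel lines)
    ray = np.array([X[0], X[1], f])       # ray from the perspective centre through X
    return ray / np.linalg.norm(ray)      # unit baseline direction
```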
In this manner the baseline vector V can be determined, though not the position of the projector origin OP along this baseline. In practice the groups of points P and Q will each comprise more than two pairs and hence overdetermine the baseline vector V. Accordingly the computer 4 is preferably arranged to derive a bundle of such vectors as determined by the sets of points PQ, to eliminate "outliers" ie those vectors which deviate by more than a given threshold from the mean and to perform a least squares estimate of vector V on the remainder of the vectors, in accordance with known statistical methods.
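A minimal sketch, under assumed threshold values, of the outlier elimination and averaging of candidate baseline directions described above; the simple averaging of the surviving unit vectors stands in here for the least squares estimate mentioned in the text:

```python
# Illustrative sketch of discarding outlying candidate baseline directions
# and averaging the remainder.
import numpy as np

def robust_mean_direction(vectors, max_angle_deg=5.0):
    v = np.asarray(vectors, float)
    v = v / np.linalg.norm(v, axis=1, keepdims=True)      # unit candidate vectors
    mean = v.mean(axis=0)
    mean /= np.linalg.norm(mean)
    angles = np.degrees(np.arccos(np.clip(v @ mean, -1.0, 1.0)))
    keep = v[angles < max_angle_deg]                       # drop the outliers
    if len(keep) == 0:
        keep = v                                           # fall back if all rejected
    mean = keep.mean(axis=0)
    return mean / np.linalg.norm(mean)                     # refined baseline direction
```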
The derivation of a three-dimensional representation of the object 3 is shown in the ray diagram of Figure 6.
The camera CM and projector PR are shown located on baseline vector V. A first virtual projector pr1 is implemented by the image processing software in computer 4 and has the same optical characteristics as the camera (as determined in the initial calibration procedure). Image 11 (Figure 3) is projected from this virtual projector in a 3D space simulated by the image processing software.
A second virtual projector pr2 is similarly implemented by the image processing software and preferably has the same optical characteristics as the projector PR (which is also represented in Figure 6). This virtual projector projects a set of ray lines in the simulated 3D space corresponding to the respective physical projector rays PQ and the ray lines are each labelled with the respective correlated pixels of the image 11 as found in the image correlation process described with reference to Figure 3. It will be appreciated that the image 12 and target T define, and can be equated with, a set of rays originating from the perspective centre OP of the projector. Since it is known which ray line from the projector PR/pr2 intersects each ray line from its corresponding pixel in image 11, the point in 3D space corresponding to each intersection can be found, and hence the set of points Qa, Qb, Qc... defining the surface.
In practice many ray lines will not intersect and the best estimate of the corresponding 3D surface point will be the mid-point of the perpendicular line joining them at their closest approach. Algorithms for this purpose are known per se.
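One such standard construction (sketched here for illustration only; it is not code from the specification) finds the midpoint of the shortest segment joining two nearly intersecting rays:

```python
# Illustrative sketch of the closest-approach construction: the best
# estimate of the surface point is taken as the midpoint of the shortest
# segment joining two nearly intersecting rays.
import numpy as np

def midpoint_of_closest_approach(o1, d1, o2, d2):
    """o1, o2: ray origins; d1, d2: ray directions (not necessarily unit)."""
    o1, d1 = np.asarray(o1, float), np.asarray(d1, float)
    o2, d2 = np.asarray(o2, float), np.asarray(d2, float)
    w = o1 - o2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w, d2 @ w
    denom = a * c - b * b                 # zero only for parallel rays
    t1 = (b * e - c * d) / denom          # parameter of the closest point on ray 1
    t2 = (a * e - b * d) / denom          # parameter of the closest point on ray 2
    p1 = o1 + t1 * d1
    p2 = o2 + t2 * d2
    return 0.5 * (p1 + p2)                # estimated 3D surface point
```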
In the above discussion of Figure 6 it has been assumed that the relative positions of the camera CM and projector PR on baseline vector V (and hence the positions of the virtual projectors pr1 and pr2) are known. In fact these are assumed or entered as a scaling variable by the user of the computer 4.
That the ray lines from the respective virtual projectors will intersect irrespective of the spacing between the virtual projectors, assuming that their orientations are unchanged, is illustrated in Figure 6 by the alternative virtual projector position pr2' corresponding to an assumed real projector position PR'. The resulting 3D reconstruction of the object 3' will be different in size but of the same shape: a single scaling factor will be required to interconvert objects 3 and 3'. However it may be convenient in practice to provide for different horizontal and vertical scaling factors (eg because of different horizontal and vertical magnifications of the camera) and in general only one set of scaling factors will be consistent with fitting together a set of partial 3D surfaces of the same object acquired from different directions.
Accordingly the software in computer 4 is arranged to scale such acquired 3D representations to enable them to be fitted together to form a self-consistent overall 3D representation of the object. This aspect is described below with reference to Figures 7 to 10.
Before doing so however, an alternative calibration procedure will be described with reference to Figure 4, which shows two planar calibration targets T1 and T2 (having peripheral blobs or discs BL similar to target T of Figure 2) whose orientations and positions relative to the camera axis system are known, eg as a result of a photogrammetric determination involving separately acquiring images of them in the absence of any illumination from the projector, and processing the images in a procedure similar to that described above in connection with Figure 2. The perspective centres OC and OP of the camera and projector are also shown.
In a first stage of the calibration procedure, target T1 is illuminated by the structured light from the projector and an image is acquired by the camera CM. Figure 4 illustrates three points p1, p2 and p3 at which the structured light impinges on target T1. These (and many other points, not shown) will be imaged by the camera CM.
In a second stage of the calibration procedure, target T1 is removed and target T2 is illuminated by the structured light from the projector. An image is acquired by the camera CM. The three points P1, P2 and P3 corresponding to points p1, p2 and p3 are found by correlating the newly acquired image of the projection of the structured radiation on target T2 with the previously acquired image of the corresponding projection on T1 by the procedure described above with reference to Figure 3.
Figure 11 illustrates further the relationship between the positions of two calibration targets T1 and T2 and the perspective centre OP of the projector PR (the camera CM being assumed fixed on the baseline vector V). A pair of points P1 and P2 on target T1 form image points p1 and p2 respectively on the photodetector array PD of camera CM and (in a subsequent step following the removal of target T1) a pair of points P3 and P4 on target T2 which are correlated with P1 and P2 respectively form image points p3 and p4 respectively on photodetector array PD.
Accordingly in a third stage the pencil of rays formed by corresponding points on targets T1 and T2 (eg P1 and P3; P2 and P4) is constructed to find the position of the perspective centre OP of the projector. In practice the rays will not intersect at a point but a best estimate can be found from a least squares algorithm.
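A minimal sketch of such a least squares estimate (an assumed formulation, not part of the original disclosure): each ray is defined by a correlated point on T1 and the direction towards its counterpart on T2, and the sought point minimises the sum of squared perpendicular distances to all the rays.

```python
# Illustrative sketch of the least squares estimate of the projector's
# perspective centre from a bundle of nearly concurrent rays.
import numpy as np

def least_squares_ray_intersection(origins, directions):
    """origins: (N, 3) points on the rays (eg points on target T1);
    directions: (N, 3) ray directions (eg T2 point minus correlated T1 point)."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(np.asarray(origins, float), np.asarray(directions, float)):
        d = d / np.linalg.norm(d)
        M = np.eye(3) - np.outer(d, d)    # projection onto the plane normal to d
        A += M
        b += M @ o
    return np.linalg.solve(A, b)          # estimated perspective centre O_P
```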
It will be appreciated that other calibration procedures are possible. For example the camera could be calibrated by the Tsai method (Roger Y Tsai, IEEE Journal of Robotics and Automation RA-3, No 4, August 1987 p 323 - see also references cited therein).
Figures 7 to 10 illustrate the fitting together and manipulation of the partial 3D reconstructions on screen.
In Figure 7 a partial 3D reconstruction R1 and a partial 3D reconstruction R2 are fitted together at their corresponding points F and F', A and A' and result in a composite (but still incomplete) 3D reconstruction R3 as shown in Figure 13 having an array comprising points E, B and D from R1 and points C' and G' in defined positions which can be fitted to a further partial 3D reconstruction R4 as shown in Figure 8.
Referring to Figure 8, it is assumed that faces ABC, DGCB, DGEF and ABDEF of R4 are formed but that the region bounded by points A, C, G and F is not. The user rotates R3 on screen using eg mouse 6 (Figure 1) to give a rotated partial 3D representation R3' which is aligned with R4 and which he/she then sees is horizontally compressed relative to R4. Accordingly R3 can be expanded horizontally or R4 can be compressed horizontally to enable them to be fitted together to form a self-consistent, complete 3D representation of the object R5.
As shown in Figure 9, this can be compressed or expanded along any desired axis or otherwise distorted under the control of the user to produce a final representation R6 or R6' having any desired relationship with the actual object 3.
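Purely as an illustrative sketch of how the relative per-axis scaling between two roughly aligned partial reconstructions might be estimated (the specification describes this adjustment as being made interactively by the user; the helper below is an assumption):

```python
# Illustrative sketch of estimating independent per-axis scale factors that
# best map corresponding points of one partial reconstruction onto another,
# after the two have been rotated and translated into rough alignment in a
# common axis-aligned frame.  The stretch is estimated about the common
# centroid.
import numpy as np

def per_axis_scales(src_pts, dst_pts):
    src = np.asarray(src_pts, float)
    dst = np.asarray(dst_pts, float)
    src = src - src.mean(axis=0)          # centre both point sets
    dst = dst - dst.mean(axis=0)
    # independent one-dimensional least squares fit along each axis
    return tuple(np.sum(src * dst, axis=0) / np.sum(src * src, axis=0))
```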
Figure 10 shows a screen shot of the user interface generated by a program for manipulating the 3D reconstructions 30. Buttons BN are provided which can be clicked on by the mouse under the control of the user and, when thus selected, enable the user to drag portions of the displayed representation so as to zoom, rotate and pan the view of the object, as well as to come closer to and move away from the object ("nearer" and "further" buttons respectively). As described this far, the interface is similar to the publicly available interface of the COSMO PLAYER web browser plug-in produced by Cosmo Software Inc, of California USA. However, in accordance with a feature of the present invention, "wheels" W1 and W2 are provided which are rotatable by the mouse and enable the user to adjust the separation between the virtual projectors and to vary the distance to the object respectively. The latter control is effectively a perspective control. Optionally, further buttons or other means (not shown) may be provided to enable distortions (such as those described above) to be applied in a graphical fashion, or to enable other distortions such as shear distortion to be applied.
Preferred features of any aspect of the invention can be combined with any other aspect of the invention.

Claims (34)

Claims
1. Apparatus for generating a 3D representation of at least part of an object from an object image of the projection of structured optical radiation onto the object surface and from at least one calibration image of the projection of the structured optical radiation onto a surface displaced from the object surface, the apparatus comprising image processing means arranged to generate correspondences between at least one calibration image and the object image and optionally a further calibration image, and reconstruction processing means arranged to simulate a first projection of the object image and a second projection linking respective correspondences of at least two of the correlated images and to derive said 3D representation from the mutual intersections of the first and second projections.
2. Apparatus as claimed in claim 1 wherein the first and second projections are from a baseline linking an origin of the structured optical radiation and a perspective centre associated with the images, the reconstruction processing means being arranged to derive said baseline from the correlation.
3. Apparatus as claimed in claim 1 or claim 2 wherein the image processing means is arranged to correlate two or more calibration images and to determine the spacing between origins of the first and second projections in dependence upon both the correlation of the two or more calibration images and input or stored metric information associated with the calibration.
4. Apparatus as claimed in any preceding claim wherein the reconstruction processing means is arranged to vary the spacing between the origins of the first and second projections in dependence upon a scaling variable enterable by a user.
5. Apparatus as claimed in claim 4 including means for displaying the 3D representation with a relative scaling dependent upon the value of the scaling variable.
6. Apparatus as claimed in any preceding claim including means for combining two or more 3D representations and means for adjusting the relative scaling of the representations to enable them to fit each other.
7. Apparatus as claimed in any preceding claim wherein the image processing means is arranged to correlate pixels in one of said images with corresponding locations in the other of said images by comparing the local radiometric distributions associated with said pixels and locations respectively.
8. Apparatus as claimed in claim 7 wherein the image processing means is arranged to allow a radiometric and/or geometric distortion during the correlation process.
9. Apparatus as claimed in any preceding claim further comprising projector means arranged to project the structured optical radiation onto the object surface and at least one calibration surface.
10. Apparatus for deriving a 3D representation of at least part of an object, comprising means for projecting structured optical radiation onto the surface of the object, means for acquiring a 2D image of the projection of the structured optical radiation, and image processing means arranged to derive the 3D representation from the distortion of the structure of the projected optical radiation by the object surface, the projected structured optical radiation having an irregular radiometric and/or colorimetric distribution.
11. Apparatus as claimed in claim 10 wherein the structured optical radiation comprises a distribution of three or more radiometric and/or colorimetric intensity values.
12. Apparatus as claimed in claim 10 or claim 11 wherein the structured optical radiation comprises a speckle pattern.
13. Apparatus as claimed in any of claims 10 to 12 wherein the structured optical radiation comprises a fractal pattern whose structure is invariant at different scales.
14. Apparatus as claimed in any of claims 10 to 13 wherein the projector means is arranged to project the structured radiation from a focal point or focal line defined by its optics.
15. Apparatus as claimed in any preceding claim further comprising a camera arranged to acquire the object image and at least one calibration image.
16. Apparatus as claimed in any preceding claim further comprising at least one calibration target which in use is illuminated by the structured radiation.
17. A method of generating a 3D representation of at least part of an object, wherein structured optical radiation is projected onto the surface of the object, a 2D image of the projection of the structured optical radiation on the surface is acquired, and the 3D representation is derived from the distortion of the structure of the projected optical radiation by the object surface, the projected structured optical radiation having an irregular radiometric and/or colorimetric distribution.
18. A method of generating a 3D representation of an object from an object image of the projection of structured optical radiation onto the object surface and from at least one calibration image of the projection of the structured optical radiation onto a surface displaced from the object surface, the method comprising the steps of:
i) correlating at least one calibration image with the object image and optionally with a further calibration image;
ii) simulating a first projection of the object image and a second projection of the structured optical radiation, and
iii) deriving said 3D representation from the mutual intersections of the first and second projections.
19. A method as claimed in claim 18 wherein the first and second projections are from a baseline linking an origin of the structured optical radiation and a perspective centre associated with the image respectively, said baseline being derived from two or more pairs of correlated features.
20. A method as claimed in claim 23 or claim 24 wherein two or more calibration images are correlated and the spacing between origins of the first and second projections is determined in dependence upon both the correlation of the two or more calibration images and input or stored metric information associated with the calibration images.
21. A method as claimed in any of claims 18 to 20 wherein the spacing between the origins of the first and second projections is varied in dependence upon a scaling variable entered by a user.
22. A method as claimed in claim 21 wherein the 3D representation is displayed with a relative scaling dependent upon the value of the scaling variable.
23. A method as claimed in any of claims 18 to 22 wherein two or more 3D representations are combined and the relative scaling of the representations is adjusted along at least one axis to enable them to fit each other.
24. A method as claimed in any of claims 18 to 23 wherein regions of said images are correlated by comparing the local radiometric and/or colorimetric distributions associated with said regions.
25. A method of generating a 3D representation of at least part of an object, wherein structured optical radiation is projected onto the surface of the object, a 2D image of the projection of the structured optical radiation on the surface is acquired, and the 3D representation is derived from the distortion of the structure of the projected optical radiation by the object surface, the projected structured optical radiation having an irregular radiometric and/or colorimetric distribution.
26. A method as claimed in claim 24 or claim 25 wherein a radiometric and/or geometric distortion is allowed between potentially corresponding regions.
27. Image processing apparatus for deriving a 3D representation of at least part of an object from a 2D image of the illuminated object, the object being illuminated with structured optical radiation projected from a location spaced apart from the viewpoint at which the 2D image is acquired, the 2D image being correlated with the structured radiation, the apparatus comprising digital processing means arranged to form a 3D reconstruction which extends in a simulated 3D space in dependence upon both the correlation and a scaling variable, the scaling variable being representative of the separation between the location from which the structured optical radiation is projected and the viewpoint at which the 2D image is acquired.
28. Apparatus as claimed in claim 27 which is arranged to derive a further 3D reconstruction from a further 2D image acquired from a different viewpoint relative to the object, the combining means being arranged to combine the first-mentioned 3D reconstruction and the further 3D reconstruction by manipulations in a simulated 3D space involving one or more of rotation and translation, the apparatus further comprising scaling means arranged to reduce or eliminate any remaining discrepancies between the 3D reconstructions by scaling one 3D reconstruction relative to the other along at least one axis.
29. Apparatus as claimed in claim 28 which is arranged to display both 3D reconstructions simultaneously and to manipulate them in simulated 3D space in response to commands entered by a user.
30. Apparatus as claimed in claim 29 which is arranged to perform the manipulations of the 3D reconstructions under the control of a computer pointing device.
31. A method of deriving a 3D representation of at least part of an object from a 2D image thereof, comprising the steps of illuminating the object with structured projected optical radiation, acquiring a 2D image of the illuminated object, correlating the 2D image with rays of the structured optical radiation, and digitally processing the 2D image to form a 3D reconstruction which extends in a simulated 3D space in dependence upon both the correlation and a scaling variable, the scaling variable being representative of the separation between a location from which the structured optical radiation is projected and the viewpoint at which the 2D image is acquired.
32. A method as claimed in claim 31 wherein a view of the reconstruction in the simulated 3D space is displayed on a screen and the scaling variable is entered by a user.
33. Apparatus for generating a 3D representation of at least part of an object, substantially as described hereinabove with reference to Figures 1 to 4 and 11 optionally modified in accordance with Figures 5 to 10 of the accompanying drawings.
34. A method of deriving a 3D representation of at least part of an object, substantially as described hereinabove with reference to Figures 1 to 4 and 11 optionally modified in accordance with Figures 5 to 10 of the accompanying drawings.
GB9910960A 1998-05-15 1999-05-12 Rendering three dimensional representations utilising projected light patterns Withdrawn GB2352901A (en)

Priority Applications (5)

Application Number Priority Date Filing Date Title
GB9910960A GB2352901A (en) 1999-05-12 1999-05-12 Rendering three dimensional representations utilising projected light patterns
GB0027703A GB2353659A (en) 1998-05-15 1999-05-17 Method and apparatus for 3D representation
PCT/GB1999/001556 WO1999060525A1 (en) 1998-05-15 1999-05-17 Method and apparatus for 3d representation
AU40505/99A AU4050599A (en) 1998-05-15 1999-05-17 Method and apparatus for 3d representation
JP2000550066A JP2002516443A (en) 1998-05-15 1999-05-17 Method and apparatus for three-dimensional display

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB9910960A GB2352901A (en) 1999-05-12 1999-05-12 Rendering three dimensional representations utilising projected light patterns

Publications (2)

Publication Number Publication Date
GB9910960D0 GB9910960D0 (en) 1999-07-14
GB2352901A true GB2352901A (en) 2001-02-07

Family

ID=10853266

Family Applications (1)

Application Number Title Priority Date Filing Date
GB9910960A Withdrawn GB2352901A (en) 1998-05-15 1999-05-12 Rendering three dimensional representations utilising projected light patterns

Country Status (1)

Country Link
GB (1) GB2352901A (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112562059B (en) * 2020-11-24 2023-12-08 革点科技(深圳)有限公司 Automatic structured light pattern design method
CN114697623B (en) * 2020-12-29 2023-08-15 极米科技股份有限公司 Projection plane selection and projection image correction method, device, projector and medium

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4630203A (en) * 1983-12-27 1986-12-16 Thomas Szirtes Contour radiography: a system for determining 3-dimensional contours of an object from its 2-dimensional images
WO1998058351A1 (en) * 1997-06-17 1998-12-23 British Telecommunications Public Limited Company Generating an image of a three-dimensional object

Cited By (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10342431B2 (en) 2000-07-26 2019-07-09 Melanoscan Llc Method for total immersion photography
US7710032B2 (en) 2003-07-11 2010-05-04 Koninklijke Philips Electronics N.V. Encapsulation structure for display devices
US9066084B2 (en) 2005-10-11 2015-06-23 Apple Inc. Method and system for object reconstruction
US9330324B2 (en) 2005-10-11 2016-05-03 Apple Inc. Error compensation in three-dimensional mapping
US8374397B2 (en) 2005-10-11 2013-02-12 Primesense Ltd Depth-varying light fields for three dimensional sensing
US8050461B2 (en) 2005-10-11 2011-11-01 Primesense Ltd. Depth-varying light fields for three dimensional sensing
US8400494B2 (en) 2005-10-11 2013-03-19 Primesense Ltd. Method and system for object reconstruction
US8390821B2 (en) 2005-10-11 2013-03-05 Primesense Ltd. Three-dimensional sensing using speckle patterns
US7433024B2 (en) 2006-02-27 2008-10-07 Prime Sense Ltd. Range mapping using speckle decorrelation
US8350847B2 (en) 2007-01-21 2013-01-08 Primesense Ltd Depth mapping using multi-beam illumination
US8150142B2 (en) 2007-04-02 2012-04-03 Prime Sense Ltd. Depth mapping using projected patterns
US8493496B2 (en) 2007-04-02 2013-07-23 Primesense Ltd. Depth mapping using projected patterns
US8494252B2 (en) 2007-06-19 2013-07-23 Primesense Ltd. Depth mapping using optical elements having non-uniform focal characteristics
US8456517B2 (en) 2008-07-09 2013-06-04 Primesense Ltd. Integrated processor for 3D mapping
US8462207B2 (en) 2009-02-12 2013-06-11 Primesense Ltd. Depth ranging with Moiré patterns
US8786682B2 (en) 2009-03-05 2014-07-22 Primesense Ltd. Reference image techniques for three-dimensional sensing
US8717417B2 (en) 2009-04-16 2014-05-06 Primesense Ltd. Three-dimensional mapping and imaging
US9582889B2 (en) 2009-07-30 2017-02-28 Apple Inc. Depth mapping based on pattern matching and stereoscopic information
US8830227B2 (en) 2009-12-06 2014-09-09 Primesense Ltd. Depth-based gain control
US8982182B2 (en) 2010-03-01 2015-03-17 Apple Inc. Non-uniform spatial resource allocation for depth mapping
US9098931B2 (en) 2010-08-11 2015-08-04 Apple Inc. Scanning projectors and image capture modules for 3D mapping
US9066087B2 (en) 2010-11-19 2015-06-23 Apple Inc. Depth mapping using time-coded illumination
US9131136B2 (en) 2010-12-06 2015-09-08 Apple Inc. Lens arrays for pattern projection and imaging
US9167138B2 (en) 2010-12-06 2015-10-20 Apple Inc. Pattern projection and imaging using lens arrays
US9030528B2 (en) 2011-04-04 2015-05-12 Apple Inc. Multi-zone imaging sensor and lens array
US9651417B2 (en) 2012-02-15 2017-05-16 Apple Inc. Scanning depth engine
US9157790B2 (en) 2012-02-15 2015-10-13 Apple Inc. Integrated optoelectronic modules with transmitter, receiver and beam-combining optics for aligning a beam axis with a collection axis

Also Published As

Publication number Publication date
GB9910960D0 (en) 1999-07-14

Similar Documents

Publication Publication Date Title
GB2352901A (en) Rendering three dimensional representations utilising projected light patterns
US6930685B1 (en) Image processing method and apparatus
Pan et al. ProFORMA: Probabilistic Feature-based On-line Rapid Model Acquisition.
CN104778694B Parameterised automatic geometric correction method for multi-projection display systems
Bonfort et al. Voxel carving for specular surfaces
US9001120B2 (en) Using photo collections for three dimensional modeling
US6750873B1 (en) High quality texture reconstruction from multiple scans
CN104335005B 3D scanning and alignment system
US6917702B2 (en) Calibration of multiple cameras for a turntable-based 3D scanner
Bonfort et al. General specular surface triangulation
KR100681320B1 Method for modelling the three-dimensional shape of objects using level-set solutions of a partial differential equation derived from the Helmholtz reciprocity condition
EP3382645B1 (en) Method for generation of a 3d model based on structure from motion and photometric stereo of 2d sparse images
Mousavi et al. The performance evaluation of multi-image 3D reconstruction software with different sensors
WO1999060525A1 (en) Method and apparatus for 3d representation
TWI752905B (en) Image processing device and image processing method
Aliaga et al. Photogeometric structured light: A self-calibrating and multi-viewpoint framework for accurate 3d modeling
Mahdy et al. Projector calibration using passive stereo and triangulation
WO2020075252A1 (en) Information processing device, program, and information processing method
US20040095484A1 (en) Object segmentation from images acquired by handheld cameras
Wu et al. Unsupervised texture reconstruction method using bidirectional similarity function for 3-D measurements
Grammatikopoulos et al. Automatic multi-image photo-texturing of 3d surface models obtained with laser scanning
JPH04130587A (en) Three-dimensional picture evaluation device
Gu et al. 3dunderworld-sls: an open-source structured-light scanning system for rapid geometry acquisition
CN115205491A (en) Method and device for handheld multi-view three-dimensional reconstruction
Yamazaki et al. Coplanar shadowgrams for acquiring visual hulls of intricate objects

Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)