US20060017720A1 - System and method for 3D measurement and surface reconstruction
 Publication number
 US20060017720A1 (application US 10/891,632)
 Authority
 US
 United States
 Prior art keywords
 surface
 pattern
 φ
 camera
 object
 Legal status: Abandoned
Classifications

 G—PHYSICS
 G01—MEASURING; TESTING
 G01B—MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
 G01B11/00—Measuring arrangements characterised by the use of optical means
 G01B11/24—Measuring arrangements characterised by the use of optical means for measuring contours or curvatures
 G01B11/25—Measuring arrangements characterised by the use of optical means for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object
 G01B11/2504—Calibration devices

 G—PHYSICS
 G01—MEASURING; TESTING
 G01B—MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
 G01B11/00—Measuring arrangements characterised by the use of optical means
 G01B11/24—Measuring arrangements characterised by the use of optical means for measuring contours or curvatures
 G01B11/25—Measuring arrangements characterised by the use of optical means for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object
 G01B11/2509—Color coding

 G—PHYSICS
 G06—COMPUTING; CALCULATING; COUNTING
 G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
 G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
 G06T17/10—Constructive solid geometry [CSG] using solid primitives, e.g. cylinders, cubes

 G—PHYSICS
 G06—COMPUTING; CALCULATING; COUNTING
 G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
 G06T7/00—Image analysis
 G06T7/50—Depth or shape recovery
 G06T7/521—Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light

 G—PHYSICS
 G06—COMPUTING; CALCULATING; COUNTING
 G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
 G06T7/00—Image analysis
 G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
Abstract
A system and method for measuring and surface reconstruction of a 3D image of an object comprises a projector arranged to project a pattern onto a surface of an object to be imaged; and a processor stage arranged to examine distortion or distortions produced in the pattern by the surface. The processor stage is arranged to convert by, for example, a triangulation process the distortion or distortions produced in the pattern by the surface to a distance representation representative of the shape of the surface. The processor stage is also arranged to reconstruct electronically the surface shape of the object.
Description
 The present invention relates to a system and method for 3D measurement and surface reconstruction of an image of an object, and in particular to a reconfigurable vision system and method.
 In many practical applications, such as reverse engineering, robotic exploration/navigation in cluttered environments, model construction for virtual reality, human body measurements, and advanced product inspection and manipulation by robots, the automatic measurement and reconstruction of 3D shapes with high speed and accuracy is of critical importance. Currently, the devices widely used in industry for obtaining 3D measurements involve the mechanical scanning of a scene, for example in a laser scanning digitizer, which inevitably makes the measurement a slow process. Some advanced active vision systems using structured lighting have been explored and built. However, the existing systems lack the ability to change their settings, to calibrate themselves and to reconstruct the 3D scene automatically.
 To reconstruct a complete and accurate 3D model of an unknown object, two fundamental issues must be addressed. The first issue is how to acquire the 3D data for reconstructing the object surface. Currently, a laser range finder/scanner [1] is widely used for 3D surface data acquisition in industry. However, due to the mechanical scanning involved, the acquisition speed is limited. To increase the efficiency of 3D imaging, pattern projections can be employed [2]. Portable 3D imaging systems based on a similar principle have also been designed recently.
 The second issue is how to determine the next viewpoint for each view so that all the information about the object surface can be acquired in an optimal way. This is also known as the NBV (Next Best View) problem, which determines the sensor direction (or pose) in the reconstruction process. The problem of viewpoint planning [3] for digitalization of 3D objects can be treated in different ways depending on whether or not the object's geometry is known beforehand [4,5]. For an unknown object, since the number of viewpoints and their viewing directions are unknown or cannot be determined prior to data acquisition, conventional 3D reconstruction processes typically involve an incremental iterative cycle of viewpoint planning, digitizing, registration and view integration, based on the partial model reconstructed thus far. Given this partial model, the NBV algorithm provides quantitative evaluations of the suitability of the remaining viewpoints. The evaluation for each viewpoint is based on all visible surface elements of the object that can be observed. The viewpoint with the highest visibility (evaluation score) is selected as the NBV.
 In general, there are two fundamental problems to be solved when determining the Next Best View. The first problem is to determine the areas of the object which need to be sensed next and the second is to determine how to position the sensor to sample those areas. As there is no prior knowledge about the object, it is impossible to obtain a complete description of an object when occlusion occurs. Therefore, it is not generally possible to obtain precisely the invisible portions from either the current viewpoint or the acquired partial description of the object, so only an estimation of the Next Best View may be derived.
 Various Next Best View algorithms have been proposed to date. For example, Connolly [6] uses an octree to represent the object space; regions that have been scanned are labeled as seen, regions between the sensor and the surface are labeled as empty, and all other regions are labeled as unseen. A set of candidate viewpoints is enumerated at fixed increments around the object. The Next Best View is calculated based on the evaluation of the visibility of each candidate viewpoint. This algorithm is computationally expensive and does not incorporate the sensor geometry.
 Maver and Bajcsy [7] presented a solution to the NBV problem for a specific scanning setup consisting of an active optical range scanner and a turntable. In this work, unseen regions of the objects are represented as polygons. Visibility constraints for the sensor to view the unseen region are computed from the polygon boundaries. However, this solution is limited to a particular sensor configuration.
 Pito [8] proposes an approach based on an intermediate position space representation of both sensor visibility constraints and unseen portions of the viewing volume. The NBV is determined as the sensor position that maximizes the unseen portion of the object volume. This approach has been demonstrated to achieve automatic viewpoint planning for a range sensor constrained to move on a cylindrical path around the object.
 Whaite and Ferrie [9] use the superellipsoid model to represent an object and define a shell of uncertainty. The Next Best View is selected at the sensor position where the uncertainty of the current model fitted to the partial data points is the largest. This algorithm enables uncertainty-driven exploration of an object to build a model. However, the superellipsoid cannot accurately represent objects with a complex surface shape. Furthermore, surface visibility constraints were not incorporated in the viewpoint planning process.
 Reed and Allen [10] propose a target-driven viewpoint planning method. The volume model is used to represent the object by extrusion and intersection operations. The constraints, such as sensor imaging constraints, model occlusion constraints and sensor placement constraints, are also represented as solid modeling volumes and are incorporated into the viewpoint planning. The algorithm involves expensive computation on the solid modeling and intersection operations.
 Scott [11] considers viewpoint planning as integer programming. Given a rough model of an unknown object, a sequential set of viewpoints is calculated to cover all surface patches of the object with a registration constraint. However, the object must be scanned before viewpoint planning to obtain prior knowledge about the unknown object.
 In many applications, a vision sensor often needs to move from one place to another and change its configuration for perception of different object features. A dynamic reconfigurable vision sensor is useful in such applications to provide an active view of the features.
 Active robot vision, in which a vision sensor can move from one place to another for performing a multiview vision task, is an active research area. A traditional vision sensor with fixed structure is often inadequate for the robot to perceive the object's features in an uncertain environment as the object distance and size are unknown before the robot sees the object. A dynamically reconfigurable sensor may assist the robot in controlling the configuration and gaze at the object surfaces. For example, with a structured light system, the camera needs to see the object surface illuminated by the projector, to perform the 3D measurement and reconstruction task.
 The system must be calibrated and traditionally, the calibration task is accomplished statically by manual operations. A calibration target/device is conventionally designed with a precision calibration fixture to provide a number of points whose world coordinates are precisely known [12]-[14]. With a planar calibration pattern, the target needs to be placed at several accurately known positions in front of the vision sensor. For dynamically reconfigurable vision systems, the vision system needs to have the ability of self-recalibration without requiring external 3D data provided by a precision calibration device.
 Self-calibration of vision sensors has been actively researched in the last decade. However, most of the conventionally available methods were developed for calibration of passive vision systems such as stereo vision and depth-from-motion [15]-[22]. Conventionally these systems require dedicated devices for calibrating the intrinsic and extrinsic parameters of the cameras. Due to the special calibration target needed, such a calibration is normally carried out offline before a task begins. In many practical applications, online calibration during the execution of a task is needed. Over the years, efforts have been made in research to achieve efficient online calibrations.
 Maybank and Faugeras [23] suggested the calibration of a camera using image correspondences in a sequence of images from a moving camera. The kinds of constructions that could be achieved from a binocular stereo rig were further addressed in [24]. It was found that a unique projective representation of the scene up to an arbitrary projective transformation could be constructed if five arbitrary correspondences were chosen and an affine representation of the scene up to an arbitrary affine transformation could be constructed if four arbitrary correspondences were adopted.
 Hartley [25] gave a practical algorithm for Euclidean reconstruction from several views with the same camera based on Levenberg-Marquardt minimization. A new approach based on stratification was introduced in [26].
 In this context, much work has been conducted in Euclidean reconstruction up to a transformation. Pollefeys et al [27] proposed a method to obtain a Euclidean reconstruction from images taken with an uncalibrated camera with variable focal lengths. This method is based on an assumption that although the focal length is varied, the principal point of the camera remains unchanged. This assumption limits the range of applications of this method.
 A similar assumption was also made in the investigations in [28,29]. In practice, when the focal length is changed (e.g. by zooming), the principal point may vary as well. In the work by Heyden and Astrom [30], they proved that it is possible to obtain Euclidean reconstruction up to a scale using an uncalibrated camera with known aspect ratio and skew parameters of the camera. A special case of a camera with Euclidean image plane was used for their study. A crucial step in the algorithm is the initialization, which will affect the convergence. How to obtain a suitable initialization was still an issue to solve [31]. Kahl [32] presented an approach to self-calibration and Euclidean reconstruction of a scene, assuming an affine model with zero skew for the camera. Other parameters such as the intrinsic parameters could be unknown or varied. The reconstruction, which needed a minimum of three images, was an approximation and was up to a scale. Pollefeys et al gave the minimum number of images needed for achieving metric reconstruction, i.e. to restrict the projective ambiguity to a metric one according to the set of constraints available from each view [31].
 The abovementioned reconstruction methods are based on passive vision systems. As a result, they suffer from the ambiguity of correspondences between the camera images, which is a difficult problem to solve especially when freeform surfaces [33] are involved in the scene. However, to avoid this problem, active vision may be adopted. Structured light or pattern projection systems have been used for this purpose. To reconstruct precisely a 3D shape with such a system, the active vision system consisting of a projector and a camera needs to be carefully calibrated [34, 35]. The traditional calibration procedure normally involves two separate stages: camera calibration and projector calibration. These individual calibrations are carried out offline and they have to be repeated each time the setting is changed. As a result, the applications of active vision systems are limited, since the system configuration and parameters must be kept unchanged during the entire measurement process.
 For active vision systems using structured light, the existing calibration methods are mostly based on static and manual operations. The available camera self-calibration methods cannot be applied directly to structured-light systems as they need more than two views for the calibration. Recently, there has been some work on self-calibration [36]-[40] of structured-light systems. Fofi et al. [36] investigated the self-calibration of structured-light systems, but the work was based on the assumption that "a square projected onto a planar surface will most generally give a quadrilateral shape in the form of a parallelogram".
 Jokinen [37] studied a selfcalibration method based on multiple views, where the object is moved by steps. Several maps were acquired for the registration and calibration. The limitation of this method is that the object must be placed on a special device so that it can be precisely moved.
 Using a cube frame, Chu et al. [38] proposed a calibration-free approach for recovering unified world coordinates.
 Chen and Li [39, 40] recently proposed a self-recalibration method for a structured-light system allowing changes in the system configuration in two degrees of freedom.
 In some applications, such as seabed metric reconstruction with an underwater robot, when the size or distance of the scene changes, the configuration and parameters of the vision system need to be changed to optimize the measurement. In such applications, uncalibrated reconstruction is needed. In this regard, efforts have been made in recent research. Fofi et al [41] studied Euclidean reconstruction by means of an uncalibrated structured light system with a colour-coded grid pattern. They modeled the pattern projector as a pseudo camera and then the whole system as a two-camera system. Uncalibrated Euclidean reconstruction was performed with varying focus, zoom and aperture of the camera. The parameters of the structured light sensor were computed according to the stratified algorithm [26], [42]. However, it was not clear how many of the parameters of the camera and projector could be self-determined in the uncalibrated reconstruction process.
 Thus, there is a need for a reconfigurable vision system and method for 3D measurement and reconstruction in which recalibration may be conducted without having to use special calibration apparatus as required by traditional calibration methods.
 In general terms, the present invention provides a method and system for the measurement and surface reconstruction of a 3D image of an object comprising projecting a pattern onto the surface to be imaged, examining distortion produced in the pattern by the surface, converting for example by a triangulation process the distortions produced in the pattern by the surface to a distance representation representative of the shape of the surface. The surface shape of the 3D image may then be reconstructed, for example electronically such as digitally, for further processing.
 In a preferred embodiment, the object is first sliced into a number of cross section curves, with each cross-section to be reconstructed by a closed B-spline curve. Then, a Bayesian information criterion (BIC) is applied for selecting the control point number of the B-spline models. Based on the selected model, entropy is used as the measure of uncertainty of the B-spline model to predict the information gain for each cross section curve. After obtaining the predicted information gain of all the B-spline models, the information gain of the B-spline models may be mapped into a view space. The viewpoint that contains maximal information gain for the object is then selected as the Next Best View. A 3D surface reconstruction may then be carried out.
 An advantage of one or more preferred embodiments of the invention is that the 3D information of a scene may be acquired at high speed by taking a single picture of the scene.
 With this method, a complex 3D shape may be divided into a series of cross section curves, each of which represents the local geometrical features of the object. These cross section curves may be described by a set of parametric equations. For reconstruction purposes using parametric equations, the most common methods include the spline function (e.g. B-spline) [43], the implicit polynomial [44], [45] and the superquadric (e.g. superellipsoid) [46]. Compared with the implicit polynomial and the superquadric, the B-spline has the following main advantages:
 1. Smoothness and continuity, which allows a curve to consist of a concatenation of curve segments, yet be treated as a single unit;
 2. Built-in boundedness, a property which is lacking in implicit or explicit polynomial representations whose zero set can shoot to infinity;
 3. Parameterized representation, which decouples the x, y coordinates enabling them to be treated separately.
 Preferred embodiments of the invention will now be described by way of example and with reference to the accompanying drawings, in which:

FIG. 1 is a schematic block diagram of an active vision system according to an embodiment of the invention;
FIG. 2 is a schematic diagram showing the geometrical relationships between the components of the vision system of the embodiment of FIG. 1;
FIG. 3 is a diagram illustrating the illumination projection in the embodiment of FIG. 1;
FIG. 4 is a block schematic illustrating the color encoding for identification of coordinates on the projector of the system of FIG. 1;
FIG. 5a illustrates an ideal step illumination curve of the blur area and its irradiant flux in the system of FIG. 1;
FIG. 5b illustrates a graph of illumination against distance showing an out-of-focus blur area and an irradiated area for the system of FIG. 1;
FIG. 6 illustrates a graph of illumination against distance showing the determination of the blur radius for the system of FIG. 1;
FIG. 7 illustrates a graph of the variation with distance of the point spread function showing the determination of the best-focused location;
FIG. 8 is a schematic diagram of an apparatus incorporating the system of FIG. 1;
FIG. 9 is a flow diagram of a view planning strategy according to an embodiment of the invention;
FIG. 10 is a flow diagram of information entropy calculation for viewpoint planning according to an embodiment of the invention;
FIG. 11a is a schematic illustration of a view space with Q=16; and
FIG. 11b is a schematic illustration of a viewpoint representation.
FIG. 1 shows an active vision system according to a preferred embodiment of the invention. The system comprises an LCD projector 1 adapted to cast a pattern of light onto an object 2 which is then viewed by a camera and processor unit 3. The relative position between the projector 1 and the camera in the camera and processing unit 3 has six degrees of freedom (DOF). When a beam of light is cast from the projector 1 and viewed obliquely by the camera, the distortions in the beam line may be translated into height variations via triangulation if the system is calibrated, including the relative position between the projector 1 and camera. The vision system may be self-recalibrated automatically if and when this relative position is changed. The camera and processor unit 3 preferably includes a processor stage, as well as the camera, for processing the observed distortions in the projected pattern caused by the object 2 and associated data and for enabling and carrying out reconstruction. In a further preferred embodiment, the processor stage may be remotely located from the camera and may be connectable thereto to receive the data for processing and carrying out the reconstruction process.
FIG. 2 shows the geometrical relationship between the projector 1, the object 2 and the camera of the system of FIG. 1, and, in particular, the pattern projected by the projector 1 onto the object 2 and viewed by the camera 3. For the camera,

x_c = P_c w_c,

where x_c = [λx_c, λy_c, λ]^T are the coordinates on the image sensor plane, λ ∈ R is an uncertain scalar, w_c = [X_c, Y_c, Z_c, 1]^T are the 3D coordinates of an object point in the view of the camera (FIG. 2), and P_c is the 3×4 perspective matrix

$P_c = \begin{bmatrix} v_c & k_{xy} & x_c^0 & 0 \\ 0 & s_{xy} v_c & y_c^0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}_{3 \times 4}$

where v_c is the distance between the image plane and the camera optical center, s_{xy} is the ratio between the horizontal and vertical pixel cell sizes, k_{xy} represents the placement perpendicularity of the cell grids, and (x_c^0, y_c^0) is the center offset on the camera sensor.
 Similarly, for the projector,

x_p = P_p w_p,

where x_p = [κx_p, κy_p, κ]^T are the coordinates on the projector plane, κ ∈ R is also an uncertain scalar, w_p = [X_p, Y_p, Z_p, 1]^T are the 3D coordinates of the object point based on the view of the projector (see FIG. 2), and P_p is the inverse perspective matrix of the projector

$P_p = \begin{bmatrix} v_p & 0 & x_p^0 & 0 \\ 0 & v_p & y_p^0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}_{3 \times 4}.$

The relationship between the camera coordinate system and the projector coordinate system is

$w_p = M w_c = \begin{bmatrix} R & t \\ o^T & 1 \end{bmatrix} w_c,$

where M is a 4×4 matrix, R is the rotation matrix and t is the translation vector, t = s[t_x, t_y, t_z]^T. Here s is a scaling factor used to normalize t so that t_x^2 + t_y^2 + t_z^2 = 1.
x_p = P_p w_p = P_p M w_c. Let

$H = P_p M = \begin{bmatrix} h_1 \\ h_2 \\ h_3 \end{bmatrix}_{3 \times 4},$

where h_1, h_2 and h_3 are 4-dimensional row vectors. We have

κx_p = h_1 w_c, κy_p = h_2 w_c and κ = h_3 w_c. So

(x_p h_3 − h_1) w_c = 0. Then the following can be derived:

$\begin{bmatrix} P_c \\ x_p h_3 - h_1 \end{bmatrix} w_c = \begin{bmatrix} x_c \\ 0 \end{bmatrix}.$

Denote x_{c+} = [x_c^T, 0]^T and

$Q = \begin{bmatrix} P_c \\ x_p h_3 - h_1 \end{bmatrix} = \begin{bmatrix} q_{11} & q_{12} & q_{13} & 0 \\ 0 & q_{22} & q_{23} & 0 \\ 0 & 0 & 1 & 0 \\ q_{41} & q_{42} & q_{43} & q_{44} \end{bmatrix}.$

The 3-dimensional world position of a point on the object surface can then be determined by

w_c = Q^{-1} x_{c+}.
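As an illustration of this triangulation step, the short sketch below (not part of the patent; function and variable names are illustrative) assembles Q from the camera matrix P_c, the combined matrix H = P_p M, one camera pixel and the matching projector stripe coordinate, and solves for the 3D point:

```python
import numpy as np

def triangulate_point(P_c, H, x_c, y_c, x_p):
    """Recover the 3D point w_c from one camera pixel (x_c, y_c) and the
    matching projector coordinate x_p, via Q w_c = x_c+.

    P_c : 3x4 camera perspective matrix
    H   : 3x4 matrix H = P_p @ M (projector matrix times relative pose)
    """
    h1, h3 = H[0], H[2]
    # The fourth row encodes the projector constraint (x_p*h3 - h1) . w_c = 0.
    Q = np.vstack([P_c, x_p * h3 - h1])        # 4x4
    rhs = np.array([x_c, y_c, 1.0, 0.0])       # x_c+ up to the unknown scalar lambda
    w = np.linalg.solve(Q, rhs)                # proportional to [X_c, Y_c, Z_c, 1]
    return w[:3] / w[3]                        # Euclidean (X, Y, Z)
```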
 As mentioned above, the relative position of the camera 3 and the projector 1 may be changed dynamically during run-time of the system. As the camera (which is acting as a sensor) is reconfigurable during run-time, it should be automatically recalibrated for 3D perception tasks. Here, recalibration means that the camera (sensor) has been calibrated before installation in the system, but needs to be calibrated again whenever the relative configuration changes. It is assumed for present purposes that the intrinsic parameters such as the focal lengths, scale factors and distortion coefficients remain unchanged, whereas the extrinsic parameters, i.e. the positions and orientations between the camera and projector, have to be determined during the run-time of the system.
 System Reconfiguration and Automatic Recalibration
 The whole calibration of the structured light system of FIG. 1 may be divided into two parts. The first part concerns the calibration of the intrinsic parameters, including the focal lengths and optical centers; this is called static calibration and may be performed offline in a static manner. The second part deals with calibration of the extrinsic parameters describing the relative position of the camera 3 and the projector 1, and this is hereinafter referred to as self-recalibration. The static calibration needs to be performed only once. The self-recalibration is thus more important and needs to be performed online whenever the system configuration is changed during a measurement task. Once the static calibration has been carried out, P_c and P_p, the perspective projection matrices of the camera and the projector relative to a global coordinate frame, are known. The dynamic self-recalibration task requires the determination of the relative position M between the camera 3 and the projector 1. There are 6 unknown parameters, three for the 3-axis rotation and three for the 3-dimensional translation (as shown in FIG. 1).
 For a point on an object surface, it is known that its coordinates on the camera's sensor plane x_c = [λx_c, λy_c, λ]^T and on the projector's source plane x_p = [κx_p, κy_p, κ]^T are related via the following:

x_p^T F x_c = 0,

where F is a 3×3 essential matrix:

$F = sRS = s \begin{bmatrix} F_{11} & F_{12} & F_{13} \\ F_{21} & F_{22} & F_{23} \\ F_{31} & F_{32} & F_{33} \end{bmatrix}.$

Here R is the 3-axis rotation matrix and S is a skew-symmetric matrix

$S = \begin{bmatrix} 0 & -t_z & t_y \\ t_z & 0 & -t_x \\ -t_y & t_x & 0 \end{bmatrix}$

based on the translation vector t. The recalibration task is to determine the 6 independent parameters in R and t (FIG. 2). For each surface point, x_p^T F x_c = 0 may be expressed as:
a_i^T f = 0. Here

f = [F_{11}, F_{21}, F_{31}, F_{12}, F_{22}, F_{32}, F_{13}, F_{23}, F_{33}]^T,

a_i = [x_c x_p, x_c y_p, x_c, y_c x_p, y_c y_p, y_c, x_p, y_p, 1]^T,

where (x_c, y_c) are the coordinates on the camera's sensor and (x_p, y_p) are the coordinates on the projector's LCD/DMD. The projected patterns can be in black/white (b/w) or in colors. In either case, a coding method is in general needed. For b/w projections, gray codes can be used with the stripe light planes, which allows robust identification of the stripe index (FIG. 3).

 In a preferred embodiment of the invention, if an illumination pattern with colour-encoded grids is used, a cell's coordinates on the projector 1 can be immediately determined by the colours of its adjacent neighbouring cells in addition to its own when projecting a source pattern. Via a table look-up, each cell's position can be uniquely identified. An example of such a coded color pattern is shown in FIG. 4. The method of computing the values of blur diameters from an image has been proposed in [39,40], which is incorporated herein by reference. In a preferred embodiment, the system may comprise a Color Coded Pattern Projection system, a CCD camera, and a mini-platform for housing the components and providing the relative motion in 6 DOF between the projector 1 and the camera 3. The method for automatic system recalibration and uncalibrated 3D reconstruction according to one or more preferred embodiments of the present invention may be implemented using such a system. With the adaptively adjustable sensor settings, this system will provide enhanced automation and performance for the measurement and surface reconstruction of 3D objects.
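To make the table look-up idea concrete, the following sketch is purely illustrative (the actual colour alphabet and pattern layout are not specified here); it identifies a cell from the colour word formed by the cell and two of its neighbours, assuming the coding makes every such word unique:

```python
def build_code_table(pattern):
    """Build a look-up table from a projected colour-grid pattern.

    pattern : 2-D list of colour labels, one per projector cell.
    Returns a dict mapping the colour word (cell, right neighbour, lower
    neighbour) to the cell's (row, col) coordinates on the projector.
    """
    table = {}
    for i in range(len(pattern) - 1):
        for j in range(len(pattern[i]) - 1):
            word = (pattern[i][j], pattern[i][j + 1], pattern[i + 1][j])
            table[word] = (i, j)
    return table

def identify_cell(table, cell, right, below):
    """Return the projector coordinates of an observed cell, or None if the
    observed colour word does not occur in the source pattern."""
    return table.get((cell, right, below))
```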
For n points observed, an n×9 matrix A can be obtained as the calibration data:

A = [a_1, a_2, . . . , a_n]^T,  Af = 0.

It is assumed that the structured light vision system has 6 DOF in its relative pose, i.e. three position parameters and three orientation parameters, between the camera 3 and the projector 1. The focal lengths of the projector 1 and the camera 3 are assumed to have been obtained in a previous static calibration stage. The optical centers are fixed or can be described by a function of the focal length. The projector 1 generates grid patterns with horizontal and vertical coordinates so that the projector's LCD/DMD can be considered an image of the scene. The relative position between the camera 3 and the projector 1 may be described by

$\begin{bmatrix} X_p \\ Y_p \\ Z_p \end{bmatrix} = R \begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix} - Rt.$

If there are n (n>5) points observed on a plane and they do not lie on the same line, we have proved that the rank of the calibration matrix A is six, i.e.
Rank(A) = 6.

 Consider the following 6-by-6 submatrix of A:

A_6 = [r_{a1}, r_{a2}, r_{a3}, r_{a4}, r_{a5}, r_{a6}]^T,

where r_{ai} = [1, x_{ci}, y_{ci}, x_{pi}, y_{pi}, x_{ci} y_{ci}]^T and x_{ci} is the x value of the i-th point projected on the camera coordinate system. The matrix A_6 can be diagonalized by basic row operations:

D(A_6) = diag(1, x_{c,2} − x_{c,1}, . . . ).

Since x_{c,i} ≠ x_{c,j}, y_{c,i} ≠ y_{c,j}, x_{p,i} ≠ x_{p,j}, y_{p,i} ≠ y_{p,j}, it can be proved that every element in D(A_6) is nonzero if no four points of the sampled data lie on the same line. Therefore det(A_6) ≠ 0 and

Rank(A) ≧ Rank(A_6) = 6.

 On the other hand, based on the projection models of the camera 3 and the projector 1, the coordinates of a surface point projected on the camera (sensor) may be given by (X/Z, Y/Z). For a point, (x_c, y_c) and (x_p, y_p) are related by:
$Z_p \begin{bmatrix} x_p \\ y_p \\ 1 \end{bmatrix} = Z_c R \begin{bmatrix} x_c \\ y_c \\ 1 \end{bmatrix} - Rt,$

where Z_c and Z_p are the depth values based on the views of the camera 3 and the projector 1, respectively. For the camera 3 and the projector 1, the scene plane may be defined as follows:

$Z = \frac{C_3}{1 - C_1 x - C_2 y}.$

 Let r_1, r_2 and r_3 be the three rows in R, and then
$\frac{C_{3c}}{1 - C_{1c} x_p - C_{2c} y_p} \begin{bmatrix} x_p \\ y_p \\ 1 \end{bmatrix} = \frac{C_{3c}}{1 - C_{1c} x_c - C_{2c} y_c} \begin{bmatrix} r_1 \\ r_2 \\ r_3 \end{bmatrix} \begin{bmatrix} x_c \\ y_c \\ 1 \end{bmatrix} - \begin{bmatrix} r_1 \\ r_2 \\ r_3 \end{bmatrix} t,$

which contains three equations; this equation may therefore be considered equivalent to the following system:

$\begin{cases} \tau_{11} x_c x_p + \tau_{12} x_c y_p + \tau_{13} x_p y_c + \tau_{14} y_c y_p + \tau_{15} x_c + \tau_{16} x_p + \tau_{17} y_c + \tau_{18} y_p + \tau_{19} = 0 \\ \tau_{21} x_c x_p + \tau_{22} x_c y_p + \tau_{23} x_p y_c + \tau_{24} y_c y_p + \tau_{25} x_c + \tau_{26} x_p + \tau_{27} y_c + \tau_{28} y_p + \tau_{29} = 0 \\ \tau_{31} x_c x_p + \tau_{32} x_c y_p + \tau_{33} x_p y_c + \tau_{34} y_c y_p + \tau_{35} x_c + \tau_{36} x_p + \tau_{37} y_c + \tau_{38} y_p + \tau_{39} = 0 \end{cases}$

or

Γa = 0,

where Γ is a 3-by-9 matrix, a is as described above, and the {τ_{ij}} are constants. It can be proved that there is no linear relationship among the above three equations, i.e. rank(Γ) = 3.
 Considering 9 points as the calibration data, the matrix A is 9-by-9 in size. Since every row a_i^T of A satisfies Γa_i = 0 with rank(Γ) = 3, each a_i lies in a 6-dimensional subspace, so the maximum rank of A is 6.

 Therefore the rank of matrix A must be 6.
 The general solution of the equation a_i^T f = 0 has the form

f = ξ_1 f_1 + ξ_2 f_2 + ξ_3 f_3,

where ξ_1, ξ_2 and ξ_3 are real numbers, each f_i is a 9-dimensional vector, and [f_1, f_2, f_3] is the null-basis of A. Using singular value decomposition (SVD), we have

B = svd(A^T A) = U D V^T,

where A^T A is a 9×9 matrix, D is a non-decreasing diagonal matrix, and U and V are orthogonal matrices. Then f_1, f_2 and f_3 are the three vectors in V corresponding to the least eigenvalues. Theoretically, if there is no noise, matrix B is exactly as just described, i.e. of rank 6. In such a case, there would be three vanishing singular values in the diagonal matrix D and the sum and/or mean of squared errors (SSE and/or MSE) would be zero, since the vector f lies exactly in the null-space of B.

 However, in a practical system there may be fewer than 3 vanishing singular values in the matrix D, as the matrix B can be perturbed. Since the data are from real measurements, B may have a rank of 7 or even higher. In such a case it is still possible to take the three column vectors from the matrix V corresponding to the least values in D as the basis vectors. This is still the best choice in the sense that B (with rank B > 6) will have been tailored to some other matrix C with rank C = 6 in such a way that C is the "nearest" to B among all the n-by-9 matrices of rank 6 in terms of the spectral norm and Frobenius norm.
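As a brief illustration of this step (not from the patent; note that numpy returns singular values in descending order, so the last three right-singular vectors span the approximate null-space):

```python
import numpy as np

def null_basis(A):
    """Return f1, f2, f3: the right-singular vectors of B = A^T A associated
    with its three smallest singular values (the approximate null-space)."""
    B = A.T @ A                      # 9x9 matrix built from the calibration data
    _, _, Vt = np.linalg.svd(B)      # singular values in descending order
    f1, f2, f3 = Vt[-3], Vt[-2], Vt[-1]
    return f1, f2, f3
```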
 Define

f = Hk,

where k = [ξ_1, ξ_2, ξ_3]^T and H = [f_1, f_2, f_3] = [H_u^T, H_m^T, H_l^T]^T. H_u, H_m and H_l are 3×3 matrices, each holding three rows of H. The above can be written as

H_u k = F_{c1}, H_m k = F_{c2}, and H_l k = F_{c3},

where F_{c1}, F_{c2} and F_{c3} are the three columns in F. Therefore,

$G = F^T F = \begin{bmatrix} k^T H_u^T H_u k & k^T H_u^T H_m k & k^T H_u^T H_l k \\ k^T H_m^T H_u k & k^T H_m^T H_m k & k^T H_m^T H_l k \\ k^T H_l^T H_u k & k^T H_l^T H_m k & k^T H_l^T H_l k \end{bmatrix}.$

As R is orthogonal, F^T F can also be expressed as

$G = S^T R^T R S = S^T S = \begin{bmatrix} 1 - t_x^2 & -t_x t_y & -t_x t_z \\ -t_x t_y & 1 - t_y^2 & -t_y t_z \\ -t_x t_z & -t_y t_z & 1 - t_z^2 \end{bmatrix}.$
 The three unknowns of k = [ξ_1, ξ_2, ξ_3]^T can be determined. The normalized relative position t_n = [t_x, t_y, t_z]^T can then be solved:

$\begin{cases} t_x = \pm\sqrt{1 - k^T H_u^T H_u k} \\ t_y = \pm\sqrt{1 - k^T H_m^T H_m k} \\ t_z = \pm\sqrt{1 - k^T H_l^T H_l k} \end{cases}$

It should be noted that multiple solutions exist. In fact, if [k, t]^T is a solution of the system, [±k, ±t]^T must also be solutions. One of these solutions is correct for a real system setup. To find it, the reprojection method can be used.
 When k and t are known, the rotation matrix R can be determined by

$R = \begin{bmatrix} r_1 \\ r_2 \\ r_3 \end{bmatrix} = \begin{bmatrix} F_{c1} \times t + (F_{c2} \times t) \times (F_{c3} \times t) \\ F_{c2} \times t + (F_{c3} \times t) \times (F_{c1} \times t) \\ F_{c3} \times t + (F_{c1} \times t) \times (F_{c2} \times t) \end{bmatrix},$

where "×" is the cross product of two vectors. Among the six unknown parameters, the five in R and t_n have been determined so far and reconstruction can be performed, but only up to a scaling factor. The last unknown, s in t, may be determined by a method using a constraint derived from the best-focused location (BFL). This is based on the fact that for a lens with a specified focal length, an object surface point is perfectly focused only at a particular distance.
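A compact numeric sketch of this pose-decomposition step (assuming the coefficient vector k has already been found from the off-diagonal constraints of G; names are illustrative, and the sign ambiguity noted above still has to be resolved by reprojection):

```python
import numpy as np

def recover_pose(k, H_u, H_m, H_l):
    """Recover one candidate normalized translation t_n and rotation R from
    the null-space coefficients k and the 3x3 blocks H_u, H_m, H_l of H."""
    F_c1, F_c2, F_c3 = H_u @ k, H_m @ k, H_l @ k          # columns of F
    # Magnitudes of t from the diagonal of G = S^T S = I - t t^T.
    t = np.sqrt(np.clip(1.0 - np.array([F_c1 @ F_c1,
                                        F_c2 @ F_c2,
                                        F_c3 @ F_c3]), 0.0, None))
    r1 = np.cross(F_c1, t) + np.cross(np.cross(F_c2, t), np.cross(F_c3, t))
    r2 = np.cross(F_c2, t) + np.cross(np.cross(F_c3, t), np.cross(F_c1, t))
    r3 = np.cross(F_c3, t) + np.cross(np.cross(F_c1, t), np.cross(F_c2, t))
    return t, np.vstack([r1, r2, r3])
```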
 For an imaging system, the mathematical model of the standard linear degradation caused by blurring and additive noise is usually described by

$g(i,j) = \sum_{k=1}^{m} \sum_{l=1}^{n} h(i-k, j-l)\, f(k,l) + n(i,j), \quad \text{or} \quad g = h \otimes f + n,$

where f is the original image, h is the point spread function, n is the additive noise and m×n is the image size. The operation "⊗" represents two-dimensional convolution. The blur can be used as a cue to find the perfectly focused distance.

 For the projector in such a system, the most significant blur is the out-of-focus blur. This results from the fact that for a lens with a specific focal length, the illumination pattern will be blurred on the object surface unless it is projected at the perfectly focused distance. Since the noise n only affects the accuracy of the result, it will not be considered in the following deduction. The illumination pattern on the source plane (LCD or DMD) to be projected is described as

$I_s(x) = \begin{cases} L_a, & \left(-\dfrac{T}{2} < x - 2nT < \dfrac{T}{2}\right), \; n \in N \\ 0, & \text{otherwise,} \end{cases}$

where T is the stripe width of the source pattern.
$I=\frac{{C}_{c}{C}_{l}}{{l}^{2}},$
where C_{c }is the sensing constant of the camera and C_{l }is the irradiant constant of light projection. They are related to many factors, such as the diameter of sensor's aperture, the focal length, and properties of surface materials.  Assume that the intensity at the point where the bestfocused plane intersects the principal axis of the lens is l_{0}(x=0, z=z_{0}), where z_{0 }is the bestfocused distance. That is
${I}_{0}=\frac{{C}_{c}{C}_{I}}{{\left({z}_{0}+{v}_{p}\right)}^{2}},$
where v_{p }is the focal length, i.e. the distance between the source plane and optical center of the projector lens.  Consider a straightline in the scene projected on the XZ coordinate system:
Consider a straight line in the scene projected on the XZ coordinate system:

z = c_1 x + c_0.

For an arbitrary point we have $l = \sqrt{x^2 + (z + v_p)^2}$ and thus

$I = \frac{(z_0 + v_p)^2 I_0}{l^2}.$

In the view of the projector, when the illumination pattern is cast on such a line, the intensity distribution becomes nonlinear and is given by

$I_i(x) = \frac{(z_0 + v_p)^2}{(z + v_p)^2 + x^2 \left(1 + \dfrac{v_p}{z}\right)^2} I_0, \quad \left(-\frac{T}{2} < \frac{v_p}{z} x - 2nT < \frac{T}{2}\right), \; n \in N. \qquad (8)$

Transforming the x-axis to align with the observed line, we have

$x_l = \sqrt{1 + c_1^2}\; x,$

which gives

$I_i(x_l) = \frac{(z_0 + v_p)^2}{\left[1 + \left(\dfrac{c_2 x_l}{c_2 x_l + c_0}\right)^2\right] \left[(c_2 + 1) x_l + c_0\right]^2} I_0, \quad \text{where } c_2 = \frac{c_1}{\sqrt{1 + c_1^2}}.$
The illumination will be blurred unless it is projected on a plane at the perfectly focused distance

$z_0 = \frac{v_p f_p}{v_p - f_p},$

where f_p is the intrinsic focal length of the projector and v_p is the distance from the image plane to the optical center. For all other points in the scene, the z-displacement of a surface point from the best-focused location is

$\Delta z = |z - z_0| = \left| z - \frac{v_p f_p}{v_p - f_p} \right|.$
The corresponding blur radius is proportional to Δz:

$\sigma = \frac{v_p - f_p}{v_p F_{num}} \Delta z,$

where

$F_{num} = \frac{f_p}{r}$

is the f-number of the lens setting.
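These two relations are easy to evaluate numerically; a minimal sketch (illustrative names; consistent length units are assumed):

```python
def blur_radius(z, v_p, f_p, F_num):
    """Out-of-focus blur radius for a surface point at depth z, using
    z_0 = v_p*f_p/(v_p - f_p) and sigma = (v_p - f_p)/(v_p * F_num) * |z - z_0|."""
    z_0 = v_p * f_p / (v_p - f_p)            # best-focused distance
    return (v_p - f_p) / (v_p * F_num) * abs(z - z_0)
```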
 With our light stripes, the onedimensional PSF is
${h}_{\sigma}\left(x\right)=\frac{1}{\sqrt{2\pi}\sigma}{e}^{\frac{{x}^{2}}{2{\sigma}^{2}}}.$  The brightness of the illumination from the projector is the convolution of the ideal illumination intensity curve with the PSF blur model:
The brightness of the illumination from the projector is the convolution of the ideal illumination intensity curve with the PSF blur model:

$I(x) = I_i(x) \otimes h_\sigma(x) = \int_{-\infty}^{+\infty} I_i(u)\, h_\sigma(x - u)\, du.$

The Fourier transform of the above is

$I_F(\omega) = I_F^i(\omega)\, H_\sigma(\omega), \qquad (18)$

where H_σ(ω) is the Fourier transform of the Gaussian function

$H_\sigma(\omega) = \int_{-\infty}^{+\infty} \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{x^2}{2\sigma^2}}\, e^{-j\omega x}\, dx = e^{-\frac{\sigma^2 \omega^2}{2}}.$

Without significant loss of accuracy, I_i(x) may be approximated by averaging the intensity on a light stripe to simplify the Fourier transform, I_i(x) ≈ Ī(x). If a coordinate system with its origin at the center of the bright stripe is used, this intensity can be written as

$\bar{I}(x) = I_0 \left[ \varepsilon\left(x + \frac{T}{2}\right) - \varepsilon\left(x - \frac{T}{2}\right) \right],$

where ε is the unit step function. The Fourier transform of the above is

$I_F^i(\omega) = I_0 T\, \frac{\sin\left(\frac{\omega T}{2}\right)}{\frac{\omega T}{2}} = I_0 T\, S_a\!\left(\frac{\omega T}{2}\right).$
Since I(x) is measured by the camera, its Fourier transform I_F(ω) can be calculated. Using integration, we have

$\int e^{-\frac{\sigma^2 \omega^2}{2}}\, d\omega = I_0 T \int \frac{I_F(\omega)}{S_a\left(\frac{\omega T}{2}\right)}\, d\omega.$

The left side is found to be

$\text{Left} = \frac{\sqrt{2}}{\sigma} \int_{-\infty}^{+\infty} e^{-\left(\frac{\sigma\omega}{\sqrt{2}}\right)^2} d\!\left(\frac{\sigma\omega}{\sqrt{2}}\right) = \frac{\sqrt{2}\sqrt{\pi}}{\sigma}.$

Therefore the blur radius can be computed by

$\sigma = \left[ \frac{I_0 T}{\sqrt{2\pi}} \int \frac{I_F(\omega)}{S_a\left(\frac{\omega T}{2}\right)}\, d\omega \right]^{-1}.$

 Neglecting the effect of blurring caused by multiple illumination stripes, we have the following theorem to determine the blur radius with low computational cost and high precision.
 Theorem 1. With the projection of a step illumination on the object surface, the blur radius is proportional to the time rate of flow of irradiant light energy in the blurred area:

$\sigma = \frac{\sqrt{2\pi}}{I_0} S,$

where I_0 is the ideal intensity and S is the area size as illustrated in FIG. 5b. This means that the blur radius σ [m] is proportional to the area size under the blurring curve: σ = √(2π) S / I_0. The time rate of flow of radiant light energy, i.e. the irradiant power or irradiant flux Φ [watt], is also the area size S [watt] under the blurring curve (or surface, in the case of two-dimensional analysis) illustrated in FIG. 5b.

 Therefore, we only need to compute the area size S for every stripe to determine the blur radius. In a simple way, the edge position (x = 0) can be detected by a gradient method, and S is then determined by summing the intensity function from 0 to x_1. However, even using a sub-pixel method for the edge detection, errors are still considerable since I(x) changes sharply near the origin.
 To solve this problem, we propose an accurate method in the sense of energy minimization. As illustrated in FIG. 6, we have

$F_s(x_o) = S_1(x_o) + S_2(x_o) = \frac{I_0}{\sqrt{\pi}} \int_{\frac{x_o}{\sqrt{2}\sigma}}^{+\infty} e^{-y^2}\, dy + I_0 - \frac{I_0}{\sqrt{\pi}} \int_{-\infty}^{\frac{x_o}{\sqrt{2}\sigma}} e^{-y^2}\, dy.$

It can be proved that the derivative of the above function satisfies

F_s′(x_o) ≧ 0, if x_o ≧ 0,

where "=" holds if and only if x_o = 0. The same situation occurs when x_o ≦ 0. Therefore, at x_o = 0, we have

S = min(F_s)/2.

This means that the same quantity of light energy flows from S_2 to S_1. This method for computing S is more stable than traditional methods and it yields high accuracy.
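A discrete sketch of this area computation (illustrative only; the exact definitions of S_1 and S_2 follow FIG. 6, which is not reproduced here). It scans candidate edge positions, takes the minimum of F_s, and converts the resulting area S into a blur radius via Theorem 1:

```python
import numpy as np

def stripe_blur_radius(I, I0, dx=1.0):
    """Estimate the blur radius of one stripe edge from intensity samples I
    (bright side first), using S = min(F_s)/2 and sigma = sqrt(2*pi) * S / I0.

    I  : 1-D array of measured intensities across the blurred edge
    I0 : ideal (unblurred) stripe intensity
    dx : sample spacing
    """
    I = np.asarray(I, float)
    F_s = np.empty(len(I))
    for o in range(len(I)):
        S2 = np.sum(I0 - I[:o]) * dx   # energy missing on the bright side of x_o
        S1 = np.sum(I[o:]) * dx        # energy leaked onto the dark side of x_o
        F_s[o] = S1 + S2
    S = F_s.min() / 2.0
    return np.sqrt(2.0 * np.pi) * S / I0
```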
 Now the best-focused location can be computed by analyzing the blur information in an image. With Theorem 1, by integrating the blurring curve on each stripe edge, the blur radius σ_i can be calculated. These blur radii are recorded as a set

D = {(i, σ_i) | i ∈ N},

where i is the stripe index on the projector's source pattern. The blur size is proportional to the displacement of a scene point from the BFL, σ = k_f Δz. Since the blur diameters are unsigned, a minimum value σ_min in the data set D can be found. For a line in the scene, in order to obtain the straight lines corresponding to the linearly changing depth in the scene, we separate D into two parts and apply a linear best-fit to obtain two straight lines:

σ_l(x) = k_1 x + k_2 and σ_r(x) = k_3 x + k_4.

Finding the intersection of the two lines gives the best-focused location (as shown in FIG. 7)

$x_b = \frac{k_4 - k_2}{k_1 - k_3},$

which corresponds to Δz = 0, or

$z(x_b) = \frac{v_p f_p}{v_p - f_p}.$

The corresponding coordinates on the image are (x_b, y_b), where y_b is determined according to the scanning line on the image.

 From the above analysis, there exists a point which represents the best-focused location for a line in the scene. Now consider the blur distribution on a plane in the scene. This happens when we analyze multiple scan lines crossing the light stripes or when the source illumination is a grid pattern.
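Returning to the single scan line case, the two-line fit and intersection just described can be sketched as follows (illustrative names; the blur radii σ_i are assumed to have been computed per stripe as above, with at least two stripes on each side of the minimum):

```python
import numpy as np

def best_focused_location(x, sigma):
    """Best-focused position along one scan line from per-stripe blur radii.

    x     : stripe positions (e.g. stripe indices or image coordinates)
    sigma : unsigned blur radii, forming a valley around the best focus
    """
    x, sigma = np.asarray(x, float), np.asarray(sigma, float)
    split = int(np.argmin(sigma))                              # minimum-blur stripe
    k1, k2 = np.polyfit(x[:split + 1], sigma[:split + 1], 1)   # left branch
    k3, k4 = np.polyfit(x[split:], sigma[split:], 1)           # right branch
    return (k4 - k2) / (k1 - k3)                               # intersection x_b
```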
 For a plane in the scene, z = c_1 x + c_2 y + c_3, the blur radius is

$\sigma(x,y) = \frac{v_p - f_p}{v_p F_{num}} \left| z - \frac{v_p f_p}{v_p - f_p} \right| = \left| \frac{(v_p - f_p)(c_1 x + c_2 y + c_3) - v_p f_p}{v_p F_{num}} \right|.$

The best-focused locations form a straight line which is the intersection of two planes. A valley line can be found since the blur radius is unsigned.

 For a free-form surface in the scene, the best-focused location can be determined by extending the above method with some minor modifications. For each light stripe, we can also compute its blur diameter and obtain a pair (z_{ri}, σ_i), where i ∈ N is the stripe index and z_{ri} ∈ [0, 1] is its relative depth in the camera view. Plotting these pairs in a coordinate system with σ as the vertical axis and z_r as the horizontal axis, we can also find a valley, which gives the best-focused distance z_{rb}.
 The point with the minimum blur value, i.e. the best-focused location (Δz = 0), is constrained by

$Z_c \begin{bmatrix} x_c \\ y_c \\ 1 \end{bmatrix} - s \begin{bmatrix} t_x \\ t_y \\ t_z \end{bmatrix} = \frac{v_p f_p}{v_p - f_p} R^{-1} \begin{bmatrix} x_p \\ y_p \\ 1 \end{bmatrix} = \begin{bmatrix} b_1 \\ b_2 \\ b_3 \end{bmatrix}.$

The scaling factor s can thus be determined (a brief numeric sketch of this step is given after the procedure list below):

$s = \frac{x_c b_2 - y_c b_1}{y_c t_x - x_c t_y}.$

The procedures for self-recalibration of a structured light vision sensor are given as follows:

 Step 1: projecting grid encoded patterns onto the scene;
 Step 2: determining t_{n }and R for 5 unknowns;
 Step 3: computing the blur distribution and determining the bestfocused location;
 Step 4: determining the scaling factor s; and
 Step 5: combining the relative matrix for 3D reconstruction.
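As the sketch promised above for Step 4, the scale s follows directly from the best-focused image point once R, t_n and the projector coordinates of that point are known (illustrative names, not from the patent):

```python
import numpy as np

def scaling_factor(x_c, y_c, x_p, y_p, R, t_n, v_p, f_p):
    """Translation scale s from the best-focused point, using
    b = (v_p*f_p/(v_p - f_p)) * R^-1 [x_p, y_p, 1]^T and
    s = (x_c*b2 - y_c*b1) / (y_c*t_x - x_c*t_y)."""
    b = (v_p * f_p / (v_p - f_p)) * (np.linalg.inv(R) @ np.array([x_p, y_p, 1.0]))
    b1, b2, _ = b
    t_x, t_y, _ = t_n
    return (x_c * b2 - y_c * b1) / (y_c * t_x - x_c * t_y)
```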
 The method presented here automatically resolves six parameters of a colorencoded structured light system. When the internal parameters of the camera 3 and the projector 1 are known via precalibration, the 6 DOF relative placement of the active vision system can be automatically recalibrated with neither manual operations nor assistance of a calibration device. This feature is very important for many situations when the vision sensor needs to be reconfigured online during a measurement or reconstruction task.
 The method itself does not require all six parameters to be external ones. In fact, as long as the number of unknown parameters of the system does not exceed six, the method can still be applied. For example, if the two focal lengths, v_c and v_p, are variable during the system reconfiguration and the relative structure has 4 DOF, we may solve them in a similar way, by replacing the matrix F by F = (P_c^{-1})^T R S P_p^{-1} and modifying the decomposition method accordingly.

 When the unknown parameters exceed six, the above-described method may not solve them directly. However, a 2-step method may be used to solve this problem. That is, before the internal parameters are reconfigured, we first take an image to obtain the 6 DOF external parameters. Then, after changing the sensor's internal parameters, we take an image again to recalibrate them.
 Special features of a method according to a preferred embodiment of the invention include:

 The single-image-based recalibration allows measurement or reconstruction to be performed immediately after reconfiguration in the software, without any extra requirements.
 Metric measurement of the absolute geometry of the 3D shape may be obtained by replacing r(x_{b}, y_{b}) with z_{rb}. This is different from most of the currently available conventional methods where the 3D reconstruction supported is up to a certain transformation.
Automatic viewpoint planning
 To obtain a complete 3D model of an object 2 with the vision system as shown in FIG. 1, multiple views may be required, e.g. via robot vision as shown in FIG. 8, which illustrates the components of FIG. 1 housed in a robot apparatus. Viewpoint planning is charged with the task of determining the position, orientation and system configuration parameters for each view to be taken. It is assumed for the purposes of this description that we are dealing with an unknown object, i.e. assuming no prior knowledge about the object model. It is also assumed here that the object has a general free-form surface. The approach is preferably to model a 3D object via a series of cross section curves. These cross section curves can be described by a set of parametric equations of B-spline curves. A criterion is proposed to select the optimal model structure for the available data points on the cross section curve.

 For object reconstruction, two conventional approaches are volume reconstruction and surface reconstruction. The volume based technique is concerned with the manipulation of the volumetric objects stored in a volume raster of voxels. Surface reconstruction may be approached in one of two ways: 1) representing the surface with a mosaic of flat polygon tiles, usually triangles; and 2) representing the surface with a series of curved patches joined with some order of continuity. However, as mentioned above, a preferred reconstruction method for embodiments of the present invention is to model a 3D object via a series of cross section curves.
 Model Selection for Reconstruction
 The object 2 is sliced into a number of cross section curves, each of which represents the local geometrical features of the object. These cross section curves may be described by a set of parametric equations. For the reconstruction of cross section curves, compared with the implicit polynomial [47] and the superquadric, the B-spline has the following main advantages:

 1) Smoothness and continuity, which allows any curve to be composed of a concatenation of curve segments and yet be treated as a single unit;

 2) Built-in boundedness, a property which is lacking in implicit or explicit polynomial representations whose zero set can shoot to infinity; and

 3) Parameterized representation, which decouples the x, z coordinates so that they can be treated separately.
 2.1 Closed B-Spline Curve Approximation
 A closed cubic B-spline curve consisting of n+1 curve segments may be defined by

$p(t) = \sum_{j=0}^{n+3} B_{j,4}(t) \cdot \Phi_j \qquad (1)$

where p(t) = [x(t), z(t)] is a point on the B-spline curve with location parameter t. In this section we use the chord length method for parameterization. In (1), B_{j,4}(t) is the j-th normalized cubic B-spline basis function, which is defined over the uniform knot vector

[u_{-3}, u_{-2}, . . . , u_{n+4}] = [-3, -2, . . . , n+4].

In addition, the amplitude of B_{j,4}(t) is in the range (0.0, 1.0), and the support region of B_{j,4}(t) is compact and nonzero for t ∈ [u_j, u_{j+4}]. The (Φ_j)_{j=0}^{n+3} are cyclical control points satisfying the following conditions:

Φ_{n+1} = Φ_0, Φ_{n+2} = Φ_1, Φ_{n+3} = Φ_2.

By factorization of the B-spline model, the parameters of the B-spline model can be represented as

[Φ_x^T, Φ_z^T]^T = [Φ_{x0}, . . . , Φ_{xn}, Φ_{z0}, . . . , Φ_{zn}]^T.

For a set of m data points r = (r_i)_{i=1}^m = ([x_i, y_i])_{i=1}^m, let d^2 be the sum of the squared residual errors between the data points and their corresponding points on the B-spline curve, i.e.

$d^2 = \sum_{i=1}^{m} \| r_i - p(t_i) \|^2 = \sum_{i=1}^{m} \left[ x_i - \sum_{j=0}^{n+3} B_{j,4}(t_i) \cdot \Phi_{xj} \right]^2 + \sum_{i=1}^{m} \left[ y_i - \sum_{j=0}^{n+3} B_{j,4}(t_i) \cdot \Phi_{yj} \right]^2.$
 The following estimation of Φ may then be obtained by factorization of the Bspline:
 The following estimate of Φ may then be obtained by factorization of the B-spline:

Φ_x = [B^T B]^{-1} B^T x,  Φ_y = [B^T B]^{-1} B^T y,   (2)

where x = [x_1, . . . , x_m]^T, y = [y_1, . . . , y_m]^T,

$B = \begin{bmatrix} \bar{B}_{0,4}^1 + \bar{B}_{n+1,4}^1 & \bar{B}_{1,4}^1 + \bar{B}_{n+2,4}^1 & \bar{B}_{2,4}^1 + \bar{B}_{n+3,4}^1 & \cdots & \bar{B}_{n,4}^1 \\ \bar{B}_{0,4}^2 + \bar{B}_{n+1,4}^2 & \bar{B}_{1,4}^2 + \bar{B}_{n+2,4}^2 & \bar{B}_{2,4}^2 + \bar{B}_{n+3,4}^2 & \cdots & \bar{B}_{n,4}^2 \\ \vdots & \vdots & \vdots & & \vdots \\ \bar{B}_{0,4}^m + \bar{B}_{n+1,4}^m & \bar{B}_{1,4}^m + \bar{B}_{n+2,4}^m & \bar{B}_{2,4}^m + \bar{B}_{n+3,4}^m & \cdots & \bar{B}_{n,4}^m \end{bmatrix}$

and $\bar{B}_{j,4}^i = B_{j,4}(t_i)$. The chord length method may preferably be used for the parameterization of the B-spline. The chord length L of a curve may be calculated as follows:
$L=\sum_{i=2}^{m+1}\|r_{i}-r_{i-1}\|$
where $r_{m+1}=r_{1}$ for a closed curve. The $t_{i}$ associated with point $r_{i}$ may be given as $t_{i}=t_{i-1}+\frac{\|r_{i}-r_{i-1}\|}{L}\cdot t_{\max}$
where $t_{1}=0$ and $t_{\max}=n+1$.
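By way of illustration, the closed B-spline fitting described above may be sketched in a few lines of numerical code. The following is a minimal sketch only (written in Python with the numpy library, which is an assumption of this example and not part of the described system); it uses an equivalent periodic formulation of the folded matrix B, and all function names are illustrative:

import numpy as np

def chord_length_params(points, t_max):
    # Chord-length parameterization of a closed contour, with t_1 = 0
    closed = np.vstack([points, points[:1]])
    seg = np.linalg.norm(np.diff(closed, axis=0), axis=1)   # ||r_i - r_{i-1}||, wrapped
    L = seg.sum()
    t = np.zeros(len(points))
    t[1:] = np.cumsum(seg[:-1]) / L * t_max
    return t

def periodic_cubic_design_matrix(t, n_ctrl):
    # Row i holds the four nonzero uniform cubic B-spline basis values at t[i];
    # columns are wrapped modulo n_ctrl, playing the role of the folded matrix B
    m = len(t)
    B = np.zeros((m, n_ctrl))
    idx = np.floor(t).astype(int) % n_ctrl
    u = t - np.floor(t)
    blend = np.stack([(1 - u) ** 3 / 6.0,
                      (3 * u ** 3 - 6 * u ** 2 + 4) / 6.0,
                      (-3 * u ** 3 + 3 * u ** 2 + 3 * u + 1) / 6.0,
                      u ** 3 / 6.0], axis=1)
    for l in range(4):
        B[np.arange(m), (idx + l) % n_ctrl] += blend[:, l]
    return B

def fit_closed_bspline(points, n_ctrl):
    # Least-squares control points: Phi = (B^T B)^{-1} B^T r, solved per coordinate
    t = chord_length_params(points, t_max=n_ctrl)
    B = periodic_cubic_design_matrix(t, n_ctrl)
    Phi, *_ = np.linalg.lstsq(B, points, rcond=None)        # columns: Phi_x, Phi_y
    return Phi, B, t

# Usage: fit 8 control points to noisy samples of a closed contour
angles = np.linspace(0.0, 2.0 * np.pi, 60, endpoint=False)
r = np.c_[np.cos(angles), 0.6 * np.sin(angles)] + 0.01 * np.random.randn(60, 2)
Phi, B, t = fit_closed_bspline(r, n_ctrl=8)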
Model Selection with Improved BIC Criterion
For a given set of measurement data, there exists a model of optimal complexity corresponding to the smallest prediction (generalization) error for further data. The complexity of a B-spline model of a surface is related to its number of control points (parameters) [43],[48]. If the B-spline model is too complicated, the approximated B-spline surface tends to overfit noisy measurement data. If the model is too simple, it is not capable of fitting the measurement data, making the approximation result underfitted. In general, both over- and underfitted surfaces have poor generalization capability. Therefore, the problem of finding an appropriate model, referred to as model selection, is important for achieving a high level of generalization capability.
Model selection has been studied from various standpoints in the field of statistics. Examples include information statistics [49]-[51], Bayesian statistics [52]-[54], and structural risk minimization [55]. The Bayesian approach is a preferred model selection method. Based on posterior model probabilities, the Bayesian approach estimates a probability distribution over an ensemble of models. The prediction is accomplished by averaging over the ensemble of models. Accordingly, the uncertainty of the models is taken into account, and complex models with more degrees of freedom are penalized.
For a given set of models {M_k, k=1,2, . . . } and data r, there exists a model of optimal structure corresponding to the smallest generalization error for further data, and the Bayesian approach may be used to select the model with the largest (maximum) posterior probability to account for the data acquired so far.
 In a first preferred method, the model M may be denoted by:
$M=\arg\max_{M_{k},\,k=1,\ldots,k_{\max}}\{p(r \mid M_{k})\}$
where the posterior probability of model $M_{k}$ may be denoted by
$p(r \mid M_{k})=\int_{\Phi_{k}} p(r \mid \Phi_{k},M_{k})\,p(\Phi_{k} \mid M_{k})\,d\Phi_{k} \cong (2\pi)^{d_{k}/2}\,|H(\hat{\Phi}_{k})|^{-1/2}\,p(r \mid \hat{\Phi}_{k},M_{k})\,p(\hat{\Phi}_{k} \mid M_{k})$
Neglecting the term $p(\hat{\Phi}_{k} \mid M_{k})$, the posterior probability of model $M_{k}$ becomes [11]:
$M=\arg\max_{M_{k},\,k=1,\ldots,k_{\max}}\Big\{\log p(r \mid \hat{\Phi}_{k},M_{k})-\tfrac{1}{2}\log|H(\hat{\Phi}_{k})|\Big\}$
where $\hat{\Phi}_{k}$ is the maximum likelihood estimate of $\Phi_{k}$, $d_{k}$ is the number of parameters of model $M_{k}$, and $H(\hat{\Phi}_{k})$ is the Hessian matrix of $-\log p(r \mid \hat{\Phi}_{k},M_{k})$ evaluated at $\hat{\Phi}_{k}$. The likelihood function $p(r \mid \hat{\Phi}_{k},M_{k})$ of closed B-spline cross section curves can be factored into x and y components as
$p(r \mid \hat{\Phi}_{k},M_{k})=p(x \mid \hat{\Phi}_{kx},M_{k})\cdot p(y \mid \hat{\Phi}_{ky},M_{k})$
where $\hat{\Phi}_{kx}$ and $\hat{\Phi}_{ky}$ can be calculated by $\Phi_{x}=[B^{T}B]^{-1}B^{T}x$ and $\Phi_{y}=[B^{T}B]^{-1}B^{T}y$. Consider, for example, the x component. Assuming the residual error sequence to be zero mean and white Gaussian with variance $\sigma_{kx}^{2}(\hat{\Phi}_{kx})$, we have the following likelihood function:
$p(x \mid \hat{\Phi}_{kx},M_{k})=\left(\frac{1}{2\pi\sigma_{kx}^{2}(\hat{\Phi}_{kx})}\right)^{m/2}\cdot\exp\left\{-\frac{1}{2\sigma_{kx}^{2}(\hat{\Phi}_{kx})}\sum_{k=0}^{m-1}\big[x_{k}-B_{k}\hat{\Phi}_{kx}\big]^{2}\right\}$
where $\sigma_{kx}^{2}(\hat{\Phi}_{kx})$ is estimated by
$\hat{\sigma}_{kx}^{2}(\hat{\Phi}_{kx})=\frac{1}{m}\sum_{k=0}^{m-1}\big[x_{k}-B_{k}\hat{\Phi}_{kx}\big]^{2}$
In a similar way, the likelihood function of the y component can also be obtained. The corresponding Hessian matrix $\hat{H}_{k}$ of $-\log p(r \mid \Phi_{k},M_{k})$ evaluated at $\hat{\Phi}_{k}$ may be denoted by:
$H(\hat{\Phi}_{k})=\begin{bmatrix} \dfrac{B^{T}B}{\hat{\sigma}_{kx}^{2}(\hat{\Phi}_{kx})} & 0 \\ 0 & \dfrac{B^{T}B}{\hat{\sigma}_{ky}^{2}(\hat{\Phi}_{ky})} \end{bmatrix}$
By approximating
$\tfrac{1}{2}\log|H(\hat{\Phi}_{k})|$
by the asymptotic expected value of the Hessian, $\tfrac{1}{2}(d_{kx}+d_{ky})\log(m)$,
we can obtain the BIC criterion for B-spline model selection as follows:
$M=\arg\max_{M_{k},\,k=1,\ldots,k_{\max}}\Big\{-\frac{m}{2}\log\hat{\sigma}_{kx}^{2}(\hat{\Phi}_{kx})-\frac{m}{2}\log\hat{\sigma}_{ky}^{2}(\hat{\Phi}_{ky})-\frac{1}{2}(d_{kx}+d_{ky})\log(m)\Big\}$
where $d_{kx}$ and $d_{ky}$ are the number of control points in the x and y directions respectively, and m is the number of data points. In the above equation, the first two terms, which involve $\hat{\sigma}_{kx}^{2}$ and $\hat{\sigma}_{ky}^{2}$, measure the prediction accuracy of the B-spline model, which increases with the complexity of the model.
In contrast, the last term decreases and acts as a penalty for using additional parameters to model the data. However, since $\hat{\sigma}_{kx}^{2}$ and $\hat{\sigma}_{ky}^{2}$ depend only on the training sample used for model estimation, they are insensitive to underfitting or overfitting; in the above equation, only the penalty term prevents the occurrence of overfitting. In fact, an honest estimate of $\sigma_{kx}^{2}$ and $\sigma_{ky}^{2}$ should be based on a resampling procedure. Here, the available data may be divided into a training sample and a prediction sample. The training sample is used only for model estimation, whereas the prediction sample is used only for estimating the prediction data noise $\hat{\sigma}_{kx}^{2}$ and $\hat{\sigma}_{ky}^{2}$. That is, the training sample is used to estimate the model parameter $\hat{\Phi}_{k}$ by $\Phi_{x}=[B^{T}B]^{-1}B^{T}x$ and $\Phi_{y}=[B^{T}B]^{-1}B^{T}y$, while the prediction sample is used to predict the data noise $\sigma_{k}^{2}$ by
$\hat{\sigma}_{kx}^{2}(\hat{\Phi}_{kx})=\frac{1}{m}\sum_{k=0}^{m-1}\big[x_{k}-B_{k}\hat{\Phi}_{kx}\big]^{2}.$
In fact, if the model $\hat{\Phi}_{k}$ fitted to the training data is valid, then the estimated variance $\hat{\sigma}_{k}^{2}$ from the prediction sample should also be a valid estimate of the data noise. In another preferred embodiment, for a given set of models $\{M_{k}, k=1,2,\ldots,k_{\max}\}$ and data r, the Bayesian approach selects the model with the largest posterior probability. The posterior probability of model $M_{k}$ may be denoted by:
$p(M_{k} \mid r)=\frac{p(r \mid M_{k})\,p(M_{k})}{\sum_{L=1}^{k_{\max}} p(r \mid M_{L})\,p(M_{L})}$
where $p(r \mid M_{k})$ is the integrated likelihood of model $M_{k}$ and $p(M_{k})$ is the prior probability of model $M_{k}$. To find the model with the largest posterior probability, evaluate
$p(M_{k} \mid r)$ for $k=1,2,\ldots,k_{\max}$
and select the model that has the maximum $p(M_{k} \mid r)$, that is
$M=\arg\max_{M_{k},\,k=1,\ldots,k_{\max}}\{p(M_{k} \mid r)\}=\arg\max_{M_{k},\,k=1,\ldots,k_{\max}}\left\{\frac{p(r \mid M_{k})\,p(M_{k})}{\sum_{L=1}^{k_{\max}} p(r \mid M_{L})\,p(M_{L})}\right\}$
Here, we assume that the models have the same likelihood a priori, so that $p(M_{k})=1/k_{\max}$, $(k=1,\ldots,k_{\max})$. Therefore, the model selection in
$p(M_{k} \mid r)=\frac{p(r \mid M_{k})\,p(M_{k})}{\sum_{L=1}^{k_{\max}} p(r \mid M_{L})\,p(M_{L})}$
will not be affected by $p(M_{k})$. This is also the case with $\sum_{L=1}^{k_{\max}} p(r \mid M_{L})\,p(M_{L})$ since it is not a function of $M_{k}$. Consequently, the factors $p(M_{k})$ and $\sum_{L=1}^{k_{\max}} p(r \mid M_{L})\,p(M_{L})$ may be ignored in computing the model criteria. Equation
$M=\arg\max_{M_{k},\,k=1,\ldots,k_{\max}}\{p(M_{k} \mid r)\}=\arg\max_{M_{k},\,k=1,\ldots,k_{\max}}\left\{\frac{p(r \mid M_{k})\,p(M_{k})}{\sum_{L=1}^{k_{\max}} p(r \mid M_{L})\,p(M_{L})}\right\}$
then becomes
$M=\arg\max_{M_{k},\,k=1,\ldots,k_{\max}}\{p(r \mid M_{k})\}$
To calculate the posterior probability of model $M_{k}$, we need to evaluate the marginal density of the data for each model, $p(r \mid M_{k})$, which requires multidimensional integration:
$p(r \mid M_{k})=\int_{\Phi_{k}} p(r \mid \Phi_{k},M_{k})\,p(\Phi_{k} \mid M_{k})\,d\Phi_{k}$
where $\Phi_{k}$ is the parameter vector for model $M_{k}$, $p(r \mid \Phi_{k},M_{k})$ is the likelihood, and $p(\Phi_{k} \mid M_{k})$ is the prior distribution for model $M_{k}$. In practice, calculating the multidimensional integral is very difficult, especially if a closed-form analytical solution is sought. Research in this area has resulted in many approximation methods. Laplace's approximation method for the integration appears to be a simple one and has become a standard method for calculating the integral of multivariable Gaussians [53]. This gives:
$p(r \mid M_{k})=\int_{\Phi_{k}} p(r \mid \Phi_{k},M_{k})\,p(\Phi_{k} \mid M_{k})\,d\Phi_{k} \cong (2\pi)^{d_{k}/2}\,|H(\hat{\Phi}_{k})|^{-1/2}\,p(r \mid \hat{\Phi}_{k},M_{k})\,p(\hat{\Phi}_{k} \mid M_{k})$
where $\hat{\Phi}_{k}$ is the maximum likelihood estimate of $\Phi_{k}$, $d_{k}$ denotes the number of parameters (control points for the B-spline model) in model $M_{k}$, and $H(\hat{\Phi}_{k})$ is the Hessian matrix of $-\log p(r \mid \Phi_{k},M_{k})$ evaluated at $\hat{\Phi}_{k}$,
$H(\hat{\Phi}_{k})=-\left.\frac{\partial^{2}\log p(r \mid \Phi_{k},M_{k})}{\partial\Phi_{k}\,\partial\Phi_{k}^{T}}\right|_{\Phi_{k}=\hat{\Phi}_{k}}$
This approximation is particularly good when the likelihood function is highly peaked around $\hat{\Phi}_{k}$. This is usually the case when the number of data samples is large. Neglecting the term $p(\hat{\Phi}_{k} \mid M_{k})$ and taking logarithms, the posterior probability of model $M_{k}$ becomes:
$M=\arg\max_{M_{k},\,k=1,\ldots,k_{\max}}\Big\{\log p(r \mid \hat{\Phi}_{k},M_{k})-\tfrac{1}{2}\log|H(\hat{\Phi}_{k})|\Big\}$
The likelihood function $p(r \mid \hat{\Phi}_{k},M_{k})$ of a closed B-spline cross section curve may be factored into x and y components as
$p(r \mid \hat{\Phi}_{k},M_{k})=p(x \mid \hat{\Phi}_{kx},M_{k})\cdot p(y \mid \hat{\Phi}_{ky},M_{k})$
where $\hat{\Phi}_{kx}$ and $\hat{\Phi}_{ky}$ may be calculated by $\Phi_{x}=[B^{T}B]^{-1}B^{T}x$ and $\Phi_{y}=[B^{T}B]^{-1}B^{T}y$. Consider the x component. Assuming that the residual error sequence is zero mean and white Gaussian with variance $\sigma_{kx}^{2}(\hat{\Phi}_{kx})$, the likelihood function may be denoted as follows:
$p(x \mid \hat{\Phi}_{kx},M_{k})=\left(\frac{1}{2\pi\sigma_{kx}^{2}(\hat{\Phi}_{kx})}\right)^{m/2}\exp\left\{-\frac{1}{2\sigma_{kx}^{2}(\hat{\Phi}_{kx})}\sum_{k=0}^{m-1}\big[x_{k}-B_{k}\hat{\Phi}_{kx}\big]^{2}\right\}$
with $\sigma_{kx}^{2}(\hat{\Phi}_{kx})$ estimated by
$\hat{\sigma}_{kx}^{2}(\hat{\Phi}_{kx})=\frac{1}{m}\sum_{k=0}^{m-1}\big[x_{k}-B_{k}\hat{\Phi}_{kx}\big]^{2}$
Similarly, the likelihood function of the y component may also be obtained. The corresponding Hessian matrix $\hat{H}_{k}$ of $-\log p(r \mid \Phi_{k},M_{k})$ evaluated at $\hat{\Phi}_{k}$ is
$H(\hat{\Phi}_{k})=\begin{bmatrix} \dfrac{B^{T}B}{\hat{\sigma}_{kx}^{2}(\hat{\Phi}_{kx})} & 0 \\ 0 & \dfrac{B^{T}B}{\hat{\sigma}_{ky}^{2}(\hat{\Phi}_{ky})} \end{bmatrix}$
Approximating
$\tfrac{1}{2}\log|H(\hat{\Phi}_{k})|$
by the asymptotic expected value of the Hessian, $\tfrac{1}{2}(d_{kx}+d_{ky})\log(m)$,
the Bayesian information criterion (BIC) for selecting the structure of a B-spline curve is
$M=\arg\max_{M_{k},\,k=1,\ldots,k_{\max}}\Big\{-\frac{m}{2}\log\hat{\sigma}_{kx}^{2}(\hat{\Phi}_{kx})-\frac{m}{2}\log\hat{\sigma}_{ky}^{2}(\hat{\Phi}_{ky})-\frac{1}{2}(d_{kx}+d_{ky})\log(m)\Big\}$
where $d_{kx}$ and $d_{ky}$ are the number of control points in the x and y directions respectively, and m is the number of data points. In the conventional BIC criterion as shown in the above equation, the first two terms measure the estimation accuracy of the B-spline model. In general, the variance $\hat{\sigma}_{k}^{2}$ estimated from
$\hat{\sigma}_{kx}^{2}(\hat{\Phi}_{kx})=\frac{1}{m}\sum_{k=0}^{m-1}\big[x_{k}-B_{k}\hat{\Phi}_{kx}\big]^{2}$
tends to decrease with the increase in the number of control points. The smaller the variance $\hat{\sigma}_{k}^{2}$, the bigger the value of the first two terms (as the variance is much smaller than one) and therefore the higher the order (i.e. the more control points) of the model resulting from
$M=\arg\max_{M_{k},\,k=1,\ldots,k_{\max}}\Big\{-\frac{m}{2}\log\hat{\sigma}_{kx}^{2}(\hat{\Phi}_{kx})-\frac{m}{2}\log\hat{\sigma}_{ky}^{2}(\hat{\Phi}_{ky})-\frac{1}{2}(d_{kx}+d_{ky})\log(m)\Big\}$
In theory, the third term in the above equation could penalize overfitting as it is directly proportional to the number of control points used. In practice, however, the effect of this penalty term is insignificant compared with that of the first two terms. As a result, the conventional BIC criterion is rather insensitive to the occurrence of overfitting and tends to select more control points in the B-spline model to approximate the data points, which normally results in a model with poor generalization capability.
The reason for the occurrence of overfitting with the conventional BIC criterion lies in the way the variances $\sigma_{kx}^{2}$ and $\sigma_{ky}^{2}$ are obtained. A reliable estimate of $\sigma_{kx}^{2}$ and $\sigma_{ky}^{2}$ should be based on resampling of the data; in other words, the generalization capability of a B-spline model should be validated using another set of data points rather than the same data used in obtaining the model.
 To achieve this, the available data may be divided into two sets: a training sample and a prediction sample. The training sample may be used only for model estimation, whereas the prediction sample may be used only for estimating data noise σ_{kx} ^{2 }and σ_{ky} ^{2}.
For a candidate B-spline model $M_{k}$ with $d_{kx}$ and $d_{ky}$ control points in the x and y directions, the BIC may be evaluated via the following steps:
1) Estimate the model parameter $\hat{\Phi}_{k}$ using the training sample by
$\Phi_{x}=[B^{T}B]^{-1}B^{T}x, \qquad \Phi_{y}=[B^{T}B]^{-1}B^{T}y$
2) Estimate the data noise $\sigma_{k}^{2}$ using the prediction sample by
$\hat{\sigma}_{kx}^{2}(\hat{\Phi}_{kx})=\frac{1}{m}\sum_{k=0}^{m-1}\big[x_{k}-B_{k}\hat{\Phi}_{kx}\big]^{2}$
If the model $\hat{\Phi}_{k}$ fitted to the training data is valid, then the estimated variance $\hat{\sigma}_{k}^{2}$ from the prediction sample should also be a valid estimate of the data noise. It may be seen that the data noise $\sigma_{k}^{2}$ estimated from the prediction sample may be more sensitive to the quality of the model than one directly estimated from a training sample, as the variance $\sigma_{k}^{2}$ estimated from the prediction sample may also have the capability of detecting the occurrence of overfitting.
Thus, in one or more preferred embodiments, a Bayesian based approach may be adopted as the model selection method. Based on the posterior model probabilities, the Bayesian based approach estimates a probability distribution over an ensemble of models. The prediction is accomplished by averaging over the ensemble of models. Accordingly, the uncertainty of the models is taken into account, and complex models with more degrees of freedom are penalized. Given a set of models $\{M_{k}, k=1,2,\ldots,k_{\max}\}$ and data r, the Bayesian approach selects the model with the largest posterior probability. To find the model with the largest posterior probability, we evaluate $p(M_{k} \mid r)$ for $k=1,2,\ldots,k_{\max}$ and select the model that has the maximum $p(M_{k} \mid r)$, that is
$M=\arg\max_{M_{k},\,k=1,\ldots,k_{\max}}\{p(M_{k} \mid r)\}=\arg\max_{M_{k},\,k=1,\ldots,k_{\max}}\left\{\frac{p(r \mid M_{k})\,p(M_{k})}{\sum_{L=1}^{k_{\max}} p(r \mid M_{L})\,p(M_{L})}\right\}$
Assuming that the models have the same likelihood a priori, so that $p(M_{k})=1/k_{\max}$ $(k=1,\ldots,k_{\max})$, the model selection will not be affected by $p(M_{k})$. This is also the case with $\sum_{L=1}^{k_{\max}} p(r \mid M_{L})\,p(M_{L})$ since it is not a function of $M_{k}$. Consequently, we have
$M=\arg\max_{M_{k},\,k=1,\ldots,k_{\max}}\{p(r \mid M_{k})\}$
Using Laplace's approximation for calculating the integral of multivariable Gaussians, we can obtain the Bayesian information criterion (BIC) for selecting the structure of a B-spline curve:
$M=\arg\max_{M_{k},\,k=1,\ldots,k_{\max}}\Big\{-\frac{m}{2}\log\hat{\sigma}_{kx}^{2}(\hat{\Phi}_{kx})-\frac{m}{2}\log\hat{\sigma}_{ky}^{2}(\hat{\Phi}_{ky})-\frac{1}{2}(d_{kx}+d_{ky})\log(m)\Big\}$
where $d_{kx}$ and $d_{ky}$ are the number of control points in the x and y directions respectively, and m is the number of data points. Here we divide the available data into two sets: a training sample and a prediction sample. The training sample is used only for model estimation, whereas the prediction sample is used only for estimating the data noise. For a candidate B-spline model with its control points, the BIC is evaluated via the following steps:
 1) Estimate the model parameter using the training sample;
 2) Estimate the data noise using the prediction sample.
If the model fitted to the training data is valid, then the estimated variances from the prediction sample should also be a valid estimate of the data noise. If the variances found from the prediction sample are unexpectedly large, we have reason to believe that the candidate model fits the data badly. It is seen that the data noise estimated from the prediction sample will thus be more sensitive to the quality of the model than one directly estimated from the training sample, as the variance estimated from the prediction sample also has the capability of detecting the occurrence of overfitting.
 We further define an entropy function which measures the information about the model, given the available data points. The entropy can be used as the measurement of the uncertainty of the model parameter.
 Uncertainty Analysis
In this section, we will analyze the uncertainty of the B-spline model for guiding data selection so that new data points will maximize information on the B-spline model's parameter Φ. Here Φ_k is replaced by Φ to simplify the descriptions and to show that we may deal with the selected “best” B-spline model with d_kx and d_ky control points.
To obtain the approximate B-spline model, we will predict the distribution of the information gain about the model's parameter Φ along each cross section curve. A measure of the information gain will be obtained whose expected value will be maximal when the new measurement data are acquired. The measurement is based on Shannon's entropy, whose properties make it a sensible information measure here. We will describe the information entropy of the B-spline model and how to use it to achieve maximal information gain about the parameters of the B-spline model Φ.
Information Entropy of a B-Spline Model
In a first preferred embodiment, given $\Phi$, the data points $r=(r_{i})_{i=1}^{m}$ are assumed to be statistically independent, with Gaussian noise of zero mean and variance $\sigma^{2}$; the joint probability of $r=(r_{i})_{i=1}^{m}$ may be denoted by
$p(r \mid \Phi)=\frac{1}{(2\pi\sigma^{2})^{m/2}}\cdot\exp\left[-\frac{1}{2\sigma^{2}}(r-B\cdot\Phi)^{T}(r-B\cdot\Phi)\right]$
The above equation has an asymptotic approximation representation defined by [27]
$p(r \mid \Phi)\approx p(r \mid \hat{\Phi})\exp\left[-\frac{1}{2}(\Phi-\hat{\Phi})^{T}H_{m}(\Phi-\hat{\Phi})\right]$
where $\hat{\Phi}$ is the maximum likelihood estimate of $\Phi$ given the data points and $H_{m}$ is the Hessian matrix of $-\log p(r \mid \Phi)$ evaluated at $\hat{\Phi}$ given the data points $r=(r_{i})_{i=1}^{m}$. The posterior distribution $p(\Phi \mid r)$ given the data is approximately proportional to
$p(\Phi \mid r)\approx p(r \mid \hat{\Phi})\cdot\exp\left[-\frac{1}{2}(\Phi-\hat{\Phi})^{T}H_{m}(\Phi-\hat{\Phi})\right]p(\Phi)$
where $p(\Phi)$ is the prior probability of the B-spline model parameters. If the prior has a Gaussian distribution with mean $\hat{\Phi}$ and covariance $H_{m}^{-1}$, we have
$p(\Phi \mid r)\propto\exp\left[-\frac{1}{2}(\Phi-\hat{\Phi})^{T}H_{m}(\Phi-\hat{\Phi})\right]$
From Shannon's information entropy, the conditional entropy of $p(\Phi \mid r)$ is defined by
$E_{m}(\Phi)=-\int p(\Phi \mid r)\cdot\log p(\Phi \mid r)\,d\Phi$
If $p(\Phi \mid r)$ obeys a Gaussian distribution, the corresponding entropy is [28]
$E_{m}=\Delta+\frac{1}{2}\log(\det H_{m}^{-1})$
where $\Delta$ is a constant. The entropy measures the information about the B-spline model parameters, given the data points $(r_{1}, \ldots, r_{m})$. The more information about $\Phi$, the smaller the entropy will be. In this work, we use the entropy as the measurement of the uncertainty of the model parameter $\Phi$.
Thus, to minimize $E_{m}$, we will make $\det H_{m}^{-1}$ as small as possible.
 In a further preferred embodiment, for parameter Φ, the joint probability of r=(r_{i})_{i=1} ^{m }has an asymptotic approximation representation
$p(r \mid \Phi)\approx p(r \mid \hat{\Phi})\exp\left[-\frac{1}{2}(\Phi-\hat{\Phi})^{T}H_{m}(\Phi-\hat{\Phi})\right]$
where $H_{m}$ is the Hessian matrix given the points $r=(r_{i})_{i=1}^{m}$.
Therefore, the posterior distribution $p(\Phi \mid r)$ given the data may be approximately given as
$p(\Phi \mid r)\approx p(r \mid \hat{\Phi})\cdot\exp\left[-\frac{1}{2}(\Phi-\hat{\Phi})^{T}H_{m}(\Phi-\hat{\Phi})\right]p(\Phi)$
where $p(\Phi)$ is the prior probability of the B-spline model parameters. If we assume that the prior probability over the B-spline model parameters is initialized as a uniform distribution over the interval in which they lie, we have
$p(\Phi \mid r)\propto\exp\left[-\frac{1}{2}(\Phi-\hat{\Phi})^{T}H_{m}(\Phi-\hat{\Phi})\right]$
It is easy to confirm that if $p(\Phi \mid r)$ obeys a Gaussian distribution, the corresponding entropy is [12]
$E_{m}=\Delta+\frac{1}{2}\log(\det H_{m}^{-1})$
where $\Delta$ is a constant. The entropy measures the information about the B-spline model parameters, given the data points $(r_{1}, \ldots, r_{m})$.
Thus, in a preferred embodiment, we select the entropy as the measurement of the uncertainty of the model parameter $\Phi$.
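The entropy above is straightforward to evaluate numerically once the design matrix and the residual variances are available. The following minimal sketch (Python with numpy; the additive constant Δ is passed in as an argument and all names are illustrative) computes E_m from the block-diagonal Hessian used above:

import numpy as np

def model_entropy(B, var_x, var_y, delta=0.0):
    # H_m is block diagonal: diag(B^T B / var_x, B^T B / var_y)
    BtB = B.T @ B
    _, logdet_x = np.linalg.slogdet(BtB / var_x)
    _, logdet_y = np.linalg.slogdet(BtB / var_y)
    # E_m = Delta + 0.5 * log det(H_m^{-1}) = Delta - 0.5 * log det(H_m)
    return delta - 0.5 * (logdet_x + logdet_y)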
 Information Gain
 In order to predict the distribution of the information gain, a new data point r_{m+1 }may be assumed to have been collected along a contour. The potential information gain is determined by incorporating the new data point r_{m+1}. If we move the new point r_{m+1 }along the contour, the distribution of the potential information gain along the whole contour may be obtained.
To derive the relationship between the information gain and the new data point $r_{m+1}$, we first assume that a new data point $r_{m+1}$ has been collected. Then, let $p(\Phi \mid r_{1},\ldots,r_{m},r_{m+1})$ be the probability distribution of the model parameter $\Phi$ after the new point $r_{m+1}$ is added. Its corresponding entropy is
$E_{m+1}=\Delta+\frac{1}{2}\log(\det\hat{H}_{m+1}^{-1})$
The information gain then is
$\Delta E=E_{m}-E_{m+1}=\frac{1}{2}\log\frac{\det H_{m}^{-1}}{\det H_{m+1}^{-1}}$
From
$H(\hat{\Phi}_{k})=\begin{bmatrix} \dfrac{B^{T}B}{\hat{\sigma}_{kx}^{2}(\hat{\Phi}_{kx})} & 0 \\ 0 & \dfrac{B^{T}B}{\hat{\sigma}_{ky}^{2}(\hat{\Phi}_{ky})} \end{bmatrix}$
the new data point $r_{m+1}$ will incrementally update the Hessian matrix as follows:
$H_{m+1}\approx H_{m}+\begin{bmatrix} \dfrac{1}{\sigma_{x}^{2}}\cdot\bar{B}_{m+1}^{T}\bar{B}_{m+1} & 0 \\ 0 & \dfrac{1}{\sigma_{y}^{2}}\cdot\bar{B}_{m+1}^{T}\bar{B}_{m+1} \end{bmatrix}$
where $\hat{\sigma}_{m+1}^{2}\approx\hat{\sigma}_{m}^{2}$, and $\bar{B}_{m+1}$ is defined by
$\bar{B}_{m+1}=[\bar{B}_{0,4}^{m+1}+\bar{B}_{n+1,4}^{m+1},\ \bar{B}_{1,4}^{m+1}+\bar{B}_{n+2,4}^{m+1},\ \ldots,\ \bar{B}_{n,4}^{m+1}]$
The determinant of $H_{m+1}$,
$\det H_{m+1}\approx\det\left[I+\begin{bmatrix} \dfrac{1}{\hat{\sigma}_{x}^{2}}\cdot\bar{B}_{m+1}^{T}\bar{B}_{m+1} & 0 \\ 0 & \dfrac{1}{\hat{\sigma}_{y}^{2}}\cdot\bar{B}_{m+1}^{T}\bar{B}_{m+1} \end{bmatrix} H_{m}^{-1}\right]\cdot\det H_{m}$
can be simplified to
$\det H_{m+1}\approx\big(1+\bar{B}_{m+1}\cdot[B^{T}B]^{-1}\cdot\bar{B}_{m+1}^{T}\big)^{2}\cdot\det H_{m}$
Since $\det H^{-1}=1/\det H$,
$\Delta E=E_{m}-E_{m+1}=\frac{1}{2}\log\frac{\det H_{m}^{-1}}{\det H_{m+1}^{-1}}$
can be simplified to
$\Delta E=\log\big(1+\bar{B}_{m+1}\cdot[B^{T}B]^{-1}\cdot\bar{B}_{m+1}^{T}\big)$
Assuming that the new additional data point $r_{m+1}$ travels along the contour, the resulting potential information gain of the B-spline model will change according to $\Delta E$ above. In order to reduce the uncertainty of the model, it may be desirable to have the new data point at such a location that the attainable potential information gain is largest. Therefore, after reconstructing the section curve by fitting the partial data acquired from previous viewpoints, the Next Best Viewpoint should be selected as the one that senses those new data points which give the largest possible potential information gain for the B-spline model.
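The predicted gain ΔE may be evaluated for a hypothetical new point at any parameter position along the closed curve; the location where ΔE peaks is where a new measurement would be most informative. A minimal sketch follows (Python with numpy, reusing the illustrative design-matrix helper from the fitting sketch; the number of candidate positions is an assumption of the example):

import numpy as np

def information_gain_along_contour(B, n_ctrl, num_samples=360):
    # Delta E(t) = log(1 + b(t) [B^T B]^{-1} b(t)^T) for a candidate new point at t
    BtB_inv = np.linalg.inv(B.T @ B)
    t_cand = np.linspace(0.0, n_ctrl, num_samples, endpoint=False)
    rows = periodic_cubic_design_matrix(t_cand, n_ctrl)   # one basis row per candidate
    gains = np.log1p(np.einsum('ij,jk,ik->i', rows, BtB_inv, rows))
    return t_cand, gains

# The candidate parameter value with the largest gain marks where new data
# would reduce the model uncertainty the most.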
 Thus, in order to predict the distribution of the information gain, we assume a new data point collected along a contour. The potential information gain is determined by incorporating the new data point. If we move the new point along the contour, the distribution of the potential information gain along the whole contour can be obtained. Now, we will derive the relationship between the information gain and the new data point.
As mentioned above, the new data points will incrementally update the Hessian matrix. In order to reduce the uncertainty of the model, we would like to have the new data point at such a location that the attainable potential information gain is largest. Therefore, after reconstructing the section curve by fitting the partial data acquired from previous viewpoints, the Next Best Viewpoint should be selected as the one that senses those new data points which give the largest possible potential information gain for the model.
 Next Best View
 The task in the view planning here is to obtain the visibility regions in the viewing space that contain the candidate viewpoints where the missing information about the 3D object can be obtained. The NBV should be the viewpoint that can give maximum information about the object. We need to map the predicted information gain to the view space for viewpoint planning. For a viewpoint, we say that a data point on the object is visible if the angle between its normal and the view direction is smaller than a breakdown angle of the sensor. The view space for each data point is the set of all possible viewpoints that can see it. The view space can be calculated via the following procedure:
 1) Calculating the normal vector of a point on the object, using a least square error fitting of a local surface patch in its neighbourhood.
 2) Extracting viewpoints from which the point is visible. These viewpoints are denoted as view space.
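By way of illustration, the two steps of this procedure may be realized as in the following minimal sketch (Python with numpy; the plane-fit normal is taken as the smallest singular vector of the centered neighborhood patch, and the candidate view directions are assumed to point from the surface point toward the viewpoints; all names are illustrative):

import numpy as np

def estimate_normal(neighborhood, toward_sensor=None):
    # Least-squares plane fit: the normal is the right singular vector with the
    # smallest singular value of the centered neighborhood patch
    pts = np.asarray(neighborhood, dtype=float)
    centered = pts - pts.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    n = vt[-1]
    if toward_sensor is not None and np.dot(n, toward_sensor) < 0:
        n = -n                                  # orient the normal toward the sensor
    return n / np.linalg.norm(n)

def view_space(normal, view_dirs, breakdown_angle_deg):
    # Indices of candidate view directions whose angle with the normal is below
    # the sensor's breakdown angle, i.e. viewpoints from which the point is visible
    d = view_dirs / np.linalg.norm(view_dirs, axis=1, keepdims=True)
    return np.nonzero(d @ normal >= np.cos(np.radians(breakdown_angle_deg)))[0]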
After the view space is extracted, we construct a measurement matrix. The components of the measurement matrix are given as
$m_{k,j}=\begin{cases}\langle n_{k}\cdot v_{j}\rangle & \text{if } r_{k} \text{ is visible to } v_{j}\\ 0 & \text{otherwise}\end{cases}$    (30)
where $v_{j}$ is the direction vector of viewpoint $v_{j}$. Then, for each view, we define a global measure of the information gain as the criterion to be summed over all visible surface points seen under this view of the sensor. This measure is defined by
$I_{j}(p_{j})=\sum_{k\in R_{j}} m_{k,j}\cdot\Delta E_{k}$
where $p_{j}$ contains the location parameters at a viewpoint, and $\Delta E_{k}$ is the information gain at surface point $r_{k}$, which is weighted by $m_{k,j}$. Therefore, the Next Best View $p^{*}$ is the one that maximizes the information gain function $I_{j}(p_{j})$:
$p^{*}=\arg\max_{p_{j}} I_{j}(p_{j})$
At the new viewpoint, another set of data is acquired, registered, and integrated with the previous partial model. This process is repeated until all data are acquired to build a complete model of the 3D surface. The terminating condition is defined via the information gain. When there are missing data, the information gain will have outstanding peaks where data are missing. When all data are obtained, there will be no obvious peaks; rather, the information gain will appear noise-like, indicating that the terminating condition is satisfied.
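A compact sketch of this selection step, combining the measurement matrix with the per-point information gains, is given below (Python with numpy; the breakdown angle value and the convention that view directions point from the object toward the candidate viewpoints are assumptions of the example):

import numpy as np

def next_best_view(normals, view_dirs, gains, breakdown_angle_deg=60.0):
    # Measurement matrix: m[k, j] = <n_k, v_j> if point k is visible from view j, else 0
    N = normals / np.linalg.norm(normals, axis=1, keepdims=True)      # (K, 3)
    V = view_dirs / np.linalg.norm(view_dirs, axis=1, keepdims=True)  # (J, 3)
    dots = N @ V.T
    m = np.where(dots >= np.cos(np.radians(breakdown_angle_deg)), dots, 0.0)
    # Global measure per view: I_j = sum_k m[k, j] * DeltaE_k
    info = m.T @ gains
    return int(np.argmax(info)), info                                 # NBV index and I_j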
 In planning the viewpoint, we can also specify the vision system's configuration parameters. The configuration parameters can include optical settings of the camera and projector as well as the relative position and orientation between the camera and projector. The planning needs to satisfy multiple constraints including visibility, focus, field of view, viewing angle, resolution, overlap, occlusion, and some operational constraints such as kinematic reachability of the sensor pose and robotenvironment collision. A complete cycle in the incremental modeling process is illustrated in
FIG. 9. As shown in FIG. 9, in a first stage, static calibration and first view acquisition are carried out. In a second stage, 3D reconstruction from a single view is performed. Next, 3D model registration and fusion are performed, followed by the determination of the next viewpoint and the terminating condition. Sensor reconfiguration follows this step and recalibration is performed. The process may then be repeated from the 3D reconstruction stage.
FIG. 10 shows a flow diagram of information entropy based viewpoint planning for digitization of a 3D object according to a preferred embodiment. In a first stage, 3D data is acquired from another viewpoint. Next, multiple view range images are registered. In the next stage, a B-spline model is selected and the model parameters of each cross section curve are estimated. Following this, the uncertainty of each cross section B-spline curve is analyzed and the information gain of the object is predicted. Next, the information gain about the object is mapped into a view space. Candidate viewpoints are then evaluated and the NBV selected. The process may then be repeated. In a preferred embodiment, the candidate viewpoints may be represented in a tessellated spherical view space obtained by subdividing recursively each triangular facet of an icosahedron. If we assume that the view space is centered at the object, and its radius is equal to an a priori specified distance from the sensor to the object, each viewpoint may be represented by pan-tilt angles φ([−180°, 180°]) and θ([−90°, 90°]), denoted as v(θ,φ).
For a viewpoint v(θ,φ), a point on the object may be considered visible if the angle between its normal and the view direction is smaller than a breakdown angle α of the range sensor being used. The view space $V_{k}$ for each point $r_{k}$ (k=1,2, . . . ) to be sensed by the range sensor is the set of all possible directions that have access to $r_{k}$. The view space $V_{k}$ may be calculated via the following procedure:
1) Calculating the normal vector $n_{k}$ of a point $r_{k}$ (k=1,2, . . . ) on the object, using a least square error fitting of a 3×3 local surface patch in its neighborhood.
2) Extracting viewpoints from which $r_{k}$ is visible. These viewpoints are denoted as view space $V_{k}$.
After the view space $V_{k}$, (k=1,2, . . . ), has been extracted, the measurement matrix M may be constructed. The column vector $M_{R_{j}}$ of the measurement matrix corresponds to the set $R_{j}$ of points visible from viewpoint $v_{j}$, while the row vector $M_{k,V}$ corresponds to the view space $V_{k}$ of point $r_{k}$. The components $m_{k,j}$ of the l-by-w measurement matrix may be defined as follows:
$m_{k,j}=\begin{cases}\langle n_{k}\cdot v_{j}\rangle & \text{if } r_{k} \text{ is visible to } v_{j}\\ 0 & \text{otherwise}\end{cases}$
where $v_{j}$ is the direction vector of viewpoint $v_{j}$. Then, for each view v(θ,φ), a View Space Visibility measure may be defined which gives the global information gain I(θ,φ):
$I_{j}(\theta_{j},\varphi_{j})=\sum_{k\in R_{j}} m_{k,j}\cdot\Delta E_{k}$
where $\Delta E_{k}$ is the information gain at surface point $r_{k}$, which is weighted by $m_{k,j}$. Therefore, the Next Best View (θ*,φ*) may be considered to be the one that maximizes the information gain function I(θ,φ):
$(\theta^{*},\varphi^{*})=\arg\max_{\theta_{j},\varphi_{j}} I_{j}(\theta_{j},\varphi_{j})$
View Space Representation
View space is a set of 3D positions where the sensor (vision system) takes measurements. If we assume that the 3D object is within the field of view and the depth of view of the vision system, and that the optical settings of the vision system are fixed, then the parameters of the vision system to be planned are the viewing positions of the sensor. As in the embodiment described above, in this embodiment the candidate viewpoints are represented in a spherical viewing space. The viewing space is usually a continuous spherical surface. To reduce the number of viewpoints used in practice, it is necessary to discretize the surface by some kind of tessellation.
In general, there are two methods for tessellating a view sphere, namely latitude-longitude based methods and icosahedron based methods. For a latitude-longitude based tessellation, the distribution of viewpoints varies considerably from the poles to the equator. For this reason, uniformly segmented geodesic tessellation is widely used [29,30,31]. This method tessellates the sphere by recursively subdividing each triangular facet of the icosahedron. Using the geodesic dome construction technique, the constructed dome contains 20×Q² triangles and 10×Q²+2 vertices, where Q is the frequency of the geodesic division. The vertices of the triangles represent the candidate viewpoints.
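One simple way to generate such a candidate set is recursive midpoint subdivision of an icosahedron, which corresponds to frequencies Q that are powers of two (a general frequency-Q division as in [29]-[31] would instead split each edge into Q parts). The following minimal sketch (Python with numpy; illustrative only) returns the vertices as unit view directions and converts them to pan-tilt angles:

import numpy as np

def icosphere(levels=2):
    # Recursive midpoint subdivision of an icosahedron: Q = 2**levels,
    # giving 20*Q**2 triangles and 10*Q**2 + 2 vertices on the unit sphere
    p = (1.0 + 5.0 ** 0.5) / 2.0
    verts = [np.array(v, float) for v in
             [(-1, p, 0), (1, p, 0), (-1, -p, 0), (1, -p, 0),
              (0, -1, p), (0, 1, p), (0, -1, -p), (0, 1, -p),
              (p, 0, -1), (p, 0, 1), (-p, 0, -1), (-p, 0, 1)]]
    verts = [v / np.linalg.norm(v) for v in verts]
    faces = [(0, 11, 5), (0, 5, 1), (0, 1, 7), (0, 7, 10), (0, 10, 11),
             (1, 5, 9), (5, 11, 4), (11, 10, 2), (10, 7, 6), (7, 1, 8),
             (3, 9, 4), (3, 4, 2), (3, 2, 6), (3, 6, 8), (3, 8, 9),
             (4, 9, 5), (2, 4, 11), (6, 2, 10), (8, 6, 7), (9, 8, 1)]
    for _ in range(levels):
        cache, new_faces = {}, []
        def midpoint(i, j):
            key = (min(i, j), max(i, j))
            if key not in cache:
                m = verts[i] + verts[j]
                verts.append(m / np.linalg.norm(m))
                cache[key] = len(verts) - 1
            return cache[key]
        for a, b, c in faces:
            ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
            new_faces += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
        faces = new_faces
    return np.array(verts)

def to_pan_tilt(viewpoints):
    # Pan phi in [-180, 180] and tilt theta in [-90, 90] degrees for each unit vertex
    x, y, z = viewpoints.T
    return np.degrees(np.arctan2(y, x)), np.degrees(np.arcsin(np.clip(z, -1.0, 1.0)))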
By way of example, a rhombus-shaped array data structure may be used [30]. For example, we may calculate the view space with Q=16 as shown in
FIG. 11(a). In addition, if we assume that the view space is centered around the object, and its radius is equal to an a priori specified distance from the sensor to the object, as shown in FIG. 11(b), then, since the optical axis of the sensor passes through the center of the object, each viewpoint may be represented by pan-tilt angles φ([−180°, 180°]) and θ([−90°, 90°]). According to this representation of the viewing space, the fundamental task in the view planning here is to obtain the visibility regions in the viewing space that contain the candidate viewpoints where the missing information about the 3D object can be obtained without occlusions. The NBV should be the viewpoint that can give maximum information about the object.
With the above view space representation, we can now map the predicted information gain to the view space for viewpoint planning. For a viewpoint v(θ,φ), we say that a data point on the object is visible if the angle between its normal and the view direction is smaller than a breakdown angle α of the sensor. The view space $V_{k}$ for each data point $r_{k}$ (k=1,2, . . . ) is the set of all possible viewpoints that can see $r_{k}$. The view space $V_{k}$ can be calculated via the following procedure:
 1) Calculating the normal vector n_{k }of a point r_{k }(k=1,2, . . . ) on the object, using a least square error fitting of a 3×3 local surface patch in its neighborhood.
 2) Extracting viewpoints from which r_{k }is visible. These viewpoints are denoted as view space V_{k}.
After the view space $V_{k}$, (k=1,2, . . . ), is extracted, we construct a measurement matrix M. The components $m_{k,j}$ of the l-by-w measurement matrix may be given as
$m_{k,j}=\begin{cases}\langle n_{k}\cdot v_{j}\rangle & \text{if } r_{k} \text{ is visible to } v_{j}\\ 0 & \text{otherwise}\end{cases}$
where $v_{j}$ is the direction vector of viewpoint $v_{j}$. Then, for each view v(θ,φ), we define a global measure of the information gain I(θ,φ) as the criterion to be summed over all visible surface points seen under this view of the sensor. I(θ,φ) is defined by
$I_{j}(\theta_{j},\varphi_{j})=\sum_{k\in R_{j}} m_{k,j}\cdot\Delta E_{k}$
where $\Delta E_{k}$ is the information gain at surface point $r_{k}$, which is weighted by $m_{k,j}$. Therefore, the Next Best View (θ*,φ*) is the one that maximizes the information gain function I(θ,φ):
$(\theta^{*},\varphi^{*})=\arg\max_{\theta_{j},\varphi_{j}} I_{j}(\theta_{j},\varphi_{j})$
In summary, one or more preferred embodiments of the present invention provide a viewpoint planning method that incrementally reduces the uncertainty of a closed B-spline curve. Also proposed is an improved BIC criterion for model selection, which accounts well for the acquired data. By representing the object with a series of relatively simple cross section curves, it is possible to define entropy as a measurement of uncertainty to predict the information gain for a cross section B-spline model. Based on that, it is possible to establish the View Space Visibility and select the viewpoint with maximum visibility as the Next Best View.
 One or more embodiments of the present invention may find particular application in the following fields but application of the invention is not to be considered limited to the following:

 in reverse engineering, to obtain a digitized 3D data/model of a physical product;
 human body measurements for the apparel industry or for tailor made clothing design;
 advanced object recognition, product inspection and manipulation;
 environment model construction for virtual reality;
as a 3D sensor for robotic exploration/navigation in cluttered environments.
One or more preferred embodiments of the invention may have particular advantages in that, by using encoded patterns projected over an area on the object surface, high speed 3D imaging may be achieved. Also, automated self-recalibration of the system may be performed when the system's configuration is changed or perturbed. In a further preferred embodiment, uncalibrated 3D reconstruction may be performed. Furthermore, in a preferred embodiment, real Euclidean reconstruction of a 3D surface may be achieved.
 It will be appreciated that the scope of the present invention is not restricted to the described embodiments. For example, whilst the embodiments have been described in terms of four sensors and four variable gain control components, a different number of such components may be used. Numerous other modifications, changes, variations, substitutions and equivalents will therefore occur to those skilled in the art without departing from the spirit and scope of the present invention.
The results of a series of experiments conducted in respect of a number of preferred embodiments according to the present invention are set out in the attached Schedule 1, the contents of which are incorporated herein in total. Furthermore, details of the application of a number of preferred embodiments of the present invention to uncalibrated Euclidean 3D reconstruction using an active vision system according to an embodiment of the present invention are set out in Schedule 2, the contents of which are incorporated herein in total.
 The contents of the following documents which have been referred to throughout the specification are hereby incorporated herein by reference:
 [1] Y. F. Li and S. Chen, Automatic Recalibration of an Active Structured Light Vision System, IEEE Transactions on Robotics and Automation. 19(2): 259268, April 2003.
[2] R. Pito, A Solution to the Next Best View Problem for Automated Surface Acquisition, IEEE Trans. Pattern Analysis and Machine Intelligence, 21(10):10161030, October 1999.
[3] C. I. Connolly, The Determination of Next Best Views, Proc. IEEE Intl. Conf. on Robotics and Automation, pp. 432435, 1985.
 [4] J. Maver and R. Bajcsy, Occlusions as a Guide for Planning the Next View, IEEE Trans. Pattern Analysis and Machine Intelligence, 15(2): 417433, February 1993.
 [5] P. Whaite and F. P. Ferrie, Autonomous Exploration: Driven by Uncertainty, IEEE Trans. Pattern Analysis and Machine Intelligence, 19(3):193205, March 1997.
[6] C. I. Connolly, The Determination of Next Best Views, Proc. IEEE Intl. Conf. on Robotics and Automation, pp. 432435, 1985.
 [7] J. Maver and R. Bajcsy, Occlusions as a Guide for Planning the Next View, IEEE Trans. Pattern Analysis and Machine Intelligence, 15(2): 417433,February 1993.
 [8] R. Pito, A Solution to the Next Best View Problem for Automated Surface Acquisition, IEEE Trans. Pattern Analysis and Machine Intelligence, 21(10):10161030, October 1999.
 [9] P. Whaite and F. P. Ferrie, Autonomous Exploration: Driven by Uncertainty, IEEE Trans. Pattern Analysis and Machine Intelligence, 19(3):193205, March 1997.
 [10] M. K. Reed, P. K. Allen, 3D Modeling from Range Imagery: An Incremental Method with a Planning Component, Image and Vision Computing, 17(2):99111, 1999.
 [11] W. Scott, G. Roth and J. Rivest, View Planning with a Registration Constraint, IEEE Int. Conf. Recent Advances in 3D Digital Imaging and Modeling, pages 127134, 2001.
 [12] A. M. Mclvor, “Nonlinear Calibration of a Laser Stripe Profiler”, Optical Engineering, vol. 41, no. 1, January 2002, pp. 205212.
 [13] D. Q. Huynh, “Calibration of a Structured Light System: A Projective Approach”, Proc. IEEE Conf. on Computer Vision and Pattern Recognition, 1997, pp. 225 230.
 [14] R J. Valkenburg and A. M. Mclvor, “Accurate 3D measurement using a structured light system”, Image and Vision Computing, vol. 16, no. 2, February 1998, pp. 99110.
[15] A. Zomet, L. Wolf and A. Shashua, “Omnirig: linear self-recalibration of a rig with varying internal and external parameters”, Proc. Eighth IEEE Int. Conf. on Computer Vision, vol. 1, 2001, pp. 135141.
 [16] J. Dias, A. de Almeida, H. Araújo and J. Batista, “Camera Recalibration with HandEye Robotic System”, IECON 91, Kobe, Japan, October 1991.
 [17] C. T. Huang and O. R. Mitchell, “Dynamic camera calibration”, Proc. Int. Symposium on Computer Vision, 1995, pp. 169174.
 [18] S. B. Kang, “Catadioptric selfcalibration”, Proc. IEEE Conf. on Computer Vision and Pattern Recognition, vol. 1, 2000, pp. 201 207.
 [19] Y. Seo and K. S. Hong, “Theory and practice on the selfcalibration of a rotating and zooming camera from two views”, IEE proc. on Vision, Image and Signal Processing, vol. 148, no. 3, June 2001, pp. 166172.
 [20] Y. Ma , R. Vidal, J. Kosecka and S. Sastry, “Kruppa's Equations Revisited: its Degeneracy, Renormalization and Relations to Chirality”, Proc. of European Conf. on Computer Vision, Trinity College Dublin, Ireland, 2000.
[21] A. Bartoli, P. Sturm and R. Horaud, “Structure and Motion from Two Uncalibrated Views Using Points on Planes”, Proc. of the third Int. Conf. on 3D Digital Imaging and Modeling, Quebec City, Canada, pp. 8390, June 2001.
 [22] A. Fusiello, “Uncalibrated Euclidean reconstruction: A review”, Image and Vision Computing, 18(67), May 2000, pp. 555563.
 [23] S. J. Maybank and O. D. Faugeras, A theory of selfcalibration of a moving camera, International Journal of Computer Vision, Vol. 8, No. 2, pp. 123151, November 1992.
[24] O. Faugeras, What can be seen in three dimensions with an uncalibrated stereo rig? Computer Vision—ECCV'92, Lecture Notes in Computer Science, Proc. of the Second European Conference on Computer Vision, Santa Margherita Ligure, Italy, pp. 563578, May 1992.
 [25] R. I. Hartley, Euclidean reconstruction from uncalibrated views, Applications of Invariance in Computer Vision, Lecture Notes in Computer Science, 852, Springer, Berlin, pp. 237256, 1993.
 [26] O. Faugeras, Stratification of threedimensional vision: projective, affine, and metric representations, Journal of the Optical Society of America A, Vol. 12, No. 3, pp. 465484, March 1994.
 [27] M. Pollefeys, L. Van Gool and M. Proesmans, Euclidean 3D reconstruction from image sequences with variable focal lengths, Proc. European Conference on Computer Vision, Cambridge, UK, Vol. 1, pp. 3142, 1996.
 [28] Y. Seo and K. S. Hong, About the selfcalibration of a rotating and zooming camera: Theory and practice, Proceedings of the Seventh IEEE International Conference on Computer Vision, Corfu, Greece, Vol. 1, pp. 183 189, September 1999.
 [29] H. Kim and K. S. Hong, A practical selfcalibration method of rotating and zooming cameras, Proceedings 15th International Conference on Pattern Recognition, Barcelona, Spain, Vol. 1, pp. 354 357, September 2000,
 [30] A. Heyden and K. Astrom, Euclidean reconstruction from image sequences with varying and unknown focal length and principal point, Proc. of IEEE Conference on Computer Vision and Pattern Recognition, San Juan, Puerto Rico, pp. 438443, June 1997.
 [31] M. Pollefeys, R. Koch and L. V. Gool, Selfcalibration and metric reconstruction in spite of varying and unknown intrinsic camera parameters, International Journal of Computer Vision, Vol. 32, No. 1, pp. 725, August 1999.
 [32] F. Kahl and A. Heyden, Robust selfcalibration and Euclidean reconstruction via affine approximation, Proceedings Fourteenth International Conference on Pattern Recognition, Brisbane, Australia, Vol. 1, pp. 5658, August 1998.
 [33] Y. F. Li and Z. Liu, Method for Determining the Probing Points for Efficient Measurement and Reconstruction of Freeform Surfaces, Measurement Science and Technology, Vol. 14, No. 8, August 2003.
[34] Y. F. Li and S. Chen, Automatic Recalibration of an Active Structured Light Vision System, IEEE Transactions on Robotics and Automation, Vol. 19, No. 2, pp. 259268, April 2003.
 [35] S. Chen and Y. F. Li, Dynamically Reconfigurable Visual Sensing for 3D Perception, Proc. IEEE International Conference on Robotics and Automation, Taipei, Taiwan, September 2003.
 [36] D. Fofi, J. Salvi and E. Mouaddib, “Uncalibrated Vision based on Structured Light”, IEEE Int. Conf. on Robotics and Automation, Seoul, Korea, May 2001.
 [37] O. Jokinen, “Selfcalibration of a light striping system by matching multiple 3D profile maps”, Proc. Second Int. Conf. on 3D Digital Imaging and Modeling, Ottawa, 1999, pp. 180190.
 [38] C. W. Chu, S. Hwang and S. K. Jung, “Calibrationfree Approach to 3D Reconstruction Using Light Stripe Projections on a Cube Frame”, Proc. IEEE 3rd Int. Conf. on 3D Digital Imaging and Modeling, Quebec City, Canada, June 2001, pp. 1319.
 [39] S. Y. Chen and Y. F. Li, “Self Recalibration of a Structured Light Vision System from a Single View”, Proc. 2002 IEEE Int. Conf. on Robotics and Automation, Washington D.C., May 2002, pp. 25392544.
 [40] Y. F. Li and S.Y. Chen, “Automatic Recalibration of an Active Vision System Using a Single View”, IEEE Trans. on Robotics and Automation, vol. 19, no. 2, April 2003.
 [41] D. Fofi, E. M. Mouaddib and J. Salvi, How to selfcalibrate a structured light sensor, Proc. 9th International Symposium on Intelligent Robotic System, Toulouse, France, July 2001.
 [42] A. Fusiello, Uncalibrated Euclidean reconstruction: a review, Image and Vision Computing, Vol. 18, No. 67, pp. 555563, May 2000.
 [43]: S. Fernand, and Y. Wang, “Part 1: Modeling Image Curves Using Invariant 3D Object Curve Models: A Path to 3D Recognition and Shape Estimation from Image Contours”, IEEE Trans. Pattern Analysis and Machine Intelligence, Vol. 16, No. 1, pp. 112, 1994.
 [44] D. Keren, D. B. Cooper and J. Subrahmonia, “Describing Complicated Objects by Implicit Polynomials”, IEEE Trans. Pattern Analysis and Machine Intelligence, Vol. 16, No. 2, pp. 3853, 1994.
 [45] G. Taubin, “Estimation of Planar Curves, Surfaces and Nonplanar Space Curves Defined by Implicit Equations, with Application to Edge and Range Image Segmentation”, IEEE Trans. Pattern Analysis and Machine Intelligence, Vol. 13, No. 11, pp. 11151138, 1991.
 [46] P. Whaite and P. P. Ferrie, “Autonomous Exploration: Driven by Uncertainty”, IEEE Trans. Pattern Analysis and Machine Intelligence, Vol. 19, No. 3, pp. 193205, 1997.
 [47] D. Keren, D. B. Cooper and J. Subrahmonia, Describing Complicated Objects by Implicit Polynomials, IEEE Trans. Pattern Analysis and Machine Intelligence, 16(2):3853, February 1994.
 [48] Z. Yan, B. Yang, and C. Menq, “Uncertainty Analysis and Variation Reduction of Three Dimensional Coordinate Metrology. Part 1: Geometric Error Decomposition,” International Journal of Machine Tools and Manufacture, Vol. 39, No. 8, pp. 11991217, 1999.
[49] H. Akaike, “A New Look at the Statistical Model Identification,” IEEE Trans. Automatic Control, Vol. 19, No. 6, pp. 716726, 1974.
 [50] S. Konishi and G. Kitagawa, “Generalized Information Criterion in Model Selection”, Biometrika, Vol. 83, pp. 875890, 1996.
 [51] M. Sugiyama and H. Ogawa, “Subspace Information Criterion for Model Selection,” Neural Computation, Vol. 13, No. 8, pp. 18631889, 2001.
[52] G. Schwarz, “Estimating the Dimension of a Model”, Annals of Statistics, Vol. 6, pp. 461464, 1978.
 [53] P. Torr, “Bayesian Model Estimation and Selection for Epipolar Geometry and Generic Manifold Fitting,” International Journal of Computer Vision, Vol. 50, No. 1, pp. 3561, 2002.
 [54] P. M. Djuric, “Asymptotic MAP Criteria for Model Selection”, IEEE Trans. Signal Processing, Vol. 46, No. 10, pp. 27262734, 1998.
 [55] V. Cherkassky, X. Shao, F. Mulier, and V. Vapnik, “Model Complexity Control for Regression Using VC Generalization Bounds,” IEEE Trans. Neural Networks, Vol. 10, No. 5, pp. 10751089, 1999.
Claims (44)
1. A method for measuring and surface reconstruction of a 3D image of an object comprising:
projecting a pattern onto a surface of an object to be imaged;
examining in a processor stage distortion or distortions produced in said pattern by said surface;
converting in said processor stage said distortion or distortions produced in said pattern by said surface to a distance representation representative of the shape of the surface; and
reconstructing electronically said surface shape of said object.
2. A method according to claim 1 , wherein the step of projecting a pattern comprises projecting a pattern of rectangles onto a surface of an object to be imaged.
3. A method according to claim 1 , wherein the step of projecting a pattern comprises projecting a striped pattern onto a surface of an object to be imaged.
4. A method according to claim 1 , wherein the step of projecting a pattern comprises projecting a pattern of squares onto a surface of an object to be imaged.
5. A method according to claim 1 , wherein the step of projecting a pattern comprises projecting a pattern using an LCD projector.
6. A method according to claim 1 , wherein the step of projecting a pattern comprises projecting a colour-coded array pattern onto a surface of an object to be imaged.
7. A method according to claim 1 , further comprising viewing using a camera said pattern projected onto said surface and passing one or more signals from said camera representative of said pattern to said processing stage.
8. A method according to claim 7 , wherein the step of viewing using a camera comprises viewing using a CCD camera.
9. A method according to claim 7 , wherein said step of projecting comprises projecting using a projector, said method further comprising arranging said camera and said projector to have 6 degrees of freedom relative to each other.
10. A method according to claim 9 , wherein said step of arranging comprises arranging said camera and said projector to have 3 linear degrees of freedom and 3 rotational degrees of freedom relative to each other.
11. A method according to claim 1 , wherein said step of projecting comprises projecting using a projector, the method further comprising calibrating said projector prior to projecting said pattern.
12. A method according to claim 9 , further comprising automatically reconfiguring one or more settings of said degrees of freedom if said one or more settings are varied during operation.
13. A method according to claim 12 , wherein said step of reconfiguring comprises taking a single image of said surface for reconfiguring one or more external parameters of said camera and/or said projector.
14. A method according to claim 13 , wherein said step of reconfiguring comprises taking a further image of said surface for reconfiguring one or more internal parameters of said camera and/or said projector.
15. A method according to claim 1 , further comprising viewing said surface obliquely to monitor distortion or distortions in said pattern.
16. A method according to claim 1 , wherein said step of reconstructing comprises reconstructing said surface from a single image.
17. A method according to claim 1 , wherein said step of reconstructing comprises reconstructing said surface from two or more images taken from different positions if one or more portions of said image are obscured in a first image taken.
18. A method according to claim 1 , wherein said step of examining comprises:
slicing in said processor stage said pattern as distorted by said surface into a number of cross-section curves;
reconstructing one or more of said cross-section curves by a closed B-spline curve technique;
selecting a control point number of B-spline models from said one or more curves;
determining, using entropy techniques, a representation of uncertainty in said selected B-spline models to predict the information gain for each cross-section curve;
mapping said information gain of said B-spline models into a view space; and
selecting as the Next Best View a view point in said view space containing maximum information gain for said object.
19. A method according to claim 18 , wherein a Bayesian information criterion (BIC) is applied for selecting the control point number of B-spline models from said one or more curves.
20. A method according to claim 18 , further comprising terminating said method when said entire surface of said object has been examined and it has been determined that there is no further information to be gained from said surface.
21. A method according to claim 1 , further comprising taking metric readings from said reconstructed surface shape.
22. A method according to claim 1 , wherein said step of converting said distortion or distortions comprises converting using a triangulation process.
23. A system for measuring and surface reconstruction of a 3D image of an object comprising:
a projector arranged to project a pattern onto a surface of an object to be imaged;
a processor stage arranged to examine distortion or distortions produced in said pattern by said surface;
said processor stage further being arranged to convert said distortion or distortions produced in said pattern by said surface to a distance representation representative of the shape of the surface; and
said processor stage being arranged to reconstruct electronically said surface shape of said object.
24. A system according to claim 23 , wherein said pattern comprises an array of rectangles.
25. A system according to claim 23 , wherein said pattern comprises an array of stripes.
26. A system according to claim 23 , wherein said pattern comprises an array of squares.
27. A system according to claim 23 , wherein said projector comprises an LCD projector.
28. A system according to claim 23 , wherein said pattern comprises a colour-coded array pattern.
29. A system according to claim 23 , further comprising a camera arranged to view said pattern projected onto said surface; said camera being arranged to pass one or more signals representative of said pattern to said processor.
30. A system according to claim 29 , wherein said camera comprises a CCD camera.
31. A system according to claim 29 , wherein said projector and said camera are arranged to have 6 degrees of freedom relative to each other.
32. A system according to claim 31 , wherein said projector and said camera are arranged to have 3 linear degrees of freedom and 3 rotational degrees of freedom relative to each other.
33. A system according to claim 23 , wherein said projector is calibrated prior to projecting said pattern.
34. A system according to claim 29 , wherein said processor is arranged to automatically reconfigure one or more settings of said degrees of freedom if said one or more settings are varied during operation.
35. A system according to claim 34 , wherein said processor is arranged to reconfigure said one or more settings by taking a single image of said surface for reconfiguring one or more external parameters of said camera and/or said projector.
36. A system according to claim 35 , wherein said processor is arranged to reconfigure said one or more settings by taking a further image of said surface for reconfiguring one or more internal parameters of said camera and/or said projector.
37. A system according to claim 29 , wherein said camera is arranged to view said surface obliquely to monitor distortion or distortions in said pattern.
38. A system according to claim 23 , wherein said processor is arranged to reconstruct said surface from a single image.
39. A system according to claim 23 , wherein said processor is arranged to reconstruct said surface from two or more images taken from different positions if one or more portions of said image are obscured in a first image taken.
40. A system according to claim 23 , wherein said processor is arranged to:
slice, in said processor stage, said pattern as distorted by said surface into a number of cross-section curves;
reconstruct one or more of said cross-section curves by a closed B-spline curve technique;
select a control point number of B-spline models from said one or more curves;
determine, using entropy techniques, a representation of uncertainty in said selected B-spline models to predict the information gain for each cross-section curve;
map said information gain of said B-spline models into a view space; and
select as the Next Best View a view point in said view space containing maximum information gain for said object.
41. A system according to claim 40 , wherein said processor is arranged to apply a Bayesian information criterion (BIC) for selecting the control point number of B-spline models from said one or more curves.
42. A system according to claim 40 , wherein said processor is arranged to terminate one or more processing steps when said entire surface of said object has been examined and it has been determined that there is no further information to be gained from said surface.
43. A system according to claim 23 , wherein said processor is arranged to convert said distortion or distortions using a triangulation process.
44. An active vision system comprising the system according to claim 23.
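The claims above recite the measurement pipeline only in functional terms; the three sketches that follow are editorial illustrations added for clarity, not part of the patent disclosure. The first illustrates the triangulation step of claims 1 and 22: the lateral shift a projected pattern feature shows in the camera image is converted into a distance value. It assumes a simplified camera-projector pair with parallel optical axes, a known baseline and a known focal length; every name and number below is an illustrative assumption.

```python
# Minimal triangulation sketch (editorial assumption, not the patented method):
# depth from the shift between where a stripe is emitted by the projector and
# where the camera observes it on the object surface.
import numpy as np

def depth_from_disparity(x_cam, x_proj, baseline, focal_length):
    """Triangulate depth for matched camera/projector coordinates.

    x_cam, x_proj : horizontal pixel coordinates of the same pattern feature
                    as observed by the camera and as emitted by the projector.
    baseline      : camera-projector separation (metres), assumed known.
    focal_length  : camera focal length in pixel units, assumed known.
    """
    disparity = x_cam - x_proj
    # Guard against division by zero where no measurable shift is present.
    disparity = np.where(np.abs(disparity) < 1e-9, np.nan, disparity)
    return baseline * focal_length / disparity

# Example: three matched stripe features (illustrative values).
z = depth_from_disparity(np.array([412.0, 418.5, 407.2]),
                         np.array([400.0, 400.0, 400.0]),
                         baseline=0.25, focal_length=1200.0)
print(z)  # depth of each matched feature, in metres
```

In the described system the camera and projector may be placed with the full 6 degrees of freedom of claims 9 and 31, so a general ray-intersection triangulation using the calibrated extrinsic parameters would replace this parallel-axis simplification.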
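The second sketch concerns claims 18 and 19: each cross-section curve is modelled by a B-spline whose control point number is chosen with a Bayesian information criterion, which for Gaussian residuals reduces (up to constants) to BIC = n·ln(RSS/n) + p·ln(n), with n sampled points, residual sum of squares RSS and p free coefficients. For brevity the sketch fits an open, clamped cubic B-spline with SciPy rather than the closed B-spline of the claims; the candidate counts and the noisy ellipse data are assumptions made for the example.

```python
# Hedged sketch of BIC-based control-point selection for a cross-section curve.
import numpy as np
from scipy.interpolate import make_lsq_spline

def fit_bspline_bic(points, candidate_counts=(6, 8, 10, 14, 18)):
    """Return (bic, m, x_spline, y_spline) for the candidate control-point
    count m per coordinate that minimises BIC."""
    pts = np.asarray(points, dtype=float)
    # Chord-length parameterisation of the sampled cross-section curve.
    d = np.r_[0.0, np.cumsum(np.linalg.norm(np.diff(pts, axis=0), axis=1))]
    u = d / d[-1]
    n = len(u)
    best = None
    for m in candidate_counts:              # m coefficients per coordinate
        k = 3
        # Clamped knot vector with m - k - 1 uniform interior knots.
        interior = np.linspace(0.0, 1.0, m - k + 1)[1:-1]
        t = np.r_[[0.0] * (k + 1), interior, [1.0] * (k + 1)]
        sx = make_lsq_spline(u, pts[:, 0], t, k)
        sy = make_lsq_spline(u, pts[:, 1], t, k)
        rss = np.sum((sx(u) - pts[:, 0]) ** 2 + (sy(u) - pts[:, 1]) ** 2)
        p = 2 * m                           # free coefficients over x and y
        bic = n * np.log(rss / n + 1e-12) + p * np.log(n)
        if best is None or bic < best[0]:
            best = (bic, m, sx, sy)
    return best

# Example: noisy samples of an ellipse-like cross section.
np.random.seed(0)
theta = np.linspace(0.0, 2.0 * np.pi, 200)
curve = np.c_[30 * np.cos(theta), 18 * np.sin(theta)]
curve += np.random.normal(scale=0.2, size=curve.shape)
bic, m, sx, sy = fit_bspline_bic(curve)
print("selected control points per coordinate:", m)
```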
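The third sketch concerns claims 18 and 20: the predicted information gain of each poorly modelled region is mapped into a view space and the Next Best View is the view point with maximum expected gain, with scanning terminated once no view promises further information. The azimuth-only view space and the visibility-weighted gain model below are assumptions made purely for illustration, not the formulation in the specification.

```python
# Illustrative Next Best View selection over a discretised azimuth view space.
import numpy as np

def next_best_view(region_angles, region_gains, n_views=72, fov_deg=60.0,
                   stop_gain=1e-3):
    """region_angles : azimuth (radians) at which each cross-section region lies.
    region_gains  : predicted information gain of each region.
    Returns (best view azimuth, accumulated gain), or None when no view is
    expected to add information (termination condition of claim 20)."""
    views = np.linspace(0.0, 2.0 * np.pi, n_views, endpoint=False)
    half_fov = np.deg2rad(fov_deg) / 2.0
    gain_map = np.zeros(n_views)
    for i, v in enumerate(views):
        # Wrapped angular distance between the view direction and each region.
        delta = np.abs(np.angle(np.exp(1j * (region_angles - v))))
        visible = delta < half_fov          # regions inside this view's field
        gain_map[i] = np.sum(region_gains[visible])
    best = int(np.argmax(gain_map))
    if gain_map[best] < stop_gain:          # nothing left to learn: stop
        return None
    return views[best], gain_map[best]

# Example: only the sector around 200 degrees still has high predicted gain,
# so a view pointing at that sector is returned as the Next Best View.
angles = np.deg2rad(np.arange(0, 360, 10))
gains = np.where((angles > np.deg2rad(180)) & (angles < np.deg2rad(220)), 0.8, 0.01)
print(next_best_view(angles, gains))
```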
Priority Applications (1)
Application Number  Priority Date  Filing Date  Title 

US10/891,632 US20060017720A1 (en)  20040715  20040715  System and method for 3D measurement and surface reconstruction 
Applications Claiming Priority (2)
Application Number  Priority Date  Filing Date  Title 

US10/891,632 US20060017720A1 (en)  20040715  20040715  System and method for 3D measurement and surface reconstruction 
US12/269,124 US8213707B2 (en)  20040715  20081112  System and method for 3D measurement and surface reconstruction 
Related Child Applications (1)
Application Number  Title  Priority Date  Filing Date 

US12/269,124 Continuation US8213707B2 (en)  20040715  20081112  System and method for 3D measurement and surface reconstruction 
Publications (1)
Publication Number  Publication Date 

US20060017720A1 true US20060017720A1 (en)  20060126 
Family
ID=35656644
Family Applications (2)
Application Number  Title  Priority Date  Filing Date 

US10/891,632 Abandoned US20060017720A1 (en)  20040715  20040715  System and method for 3D measurement and surface reconstruction 
US12/269,124 Active US8213707B2 (en)  20040715  20081112  System and method for 3D measurement and surface reconstruction 
Family Applications After (1)
Application Number  Title  Priority Date  Filing Date 

US12/269,124 Active US8213707B2 (en)  20040715  20081112  System and method for 3D measurement and surface reconstruction 
Country Status (1)
Country  Link 

US (2)  US20060017720A1 (en) 
Cited By (77)
Publication number  Priority date  Publication date  Assignee  Title 

US20070132763A1 (en) *  20051208  20070614  Electronics And Telecommunications Research Institute  Method for creating 3D curved suface by using corresponding curves in a plurality of images 
US20070253617A1 (en) *  20060427  20071101  Mako Surgical Corp.  Contour triangulation system and method 
US20080144973A1 (en) *  20061213  20080619  Hailin Jin  Rendering images under cylindrical projections 
WO2008104082A1 (en) *  20070301  20080904  Titan Medical Inc.  Methods, systems and devices for threedimensional input, and control methods and systems based thereon 
US20090140926A1 (en) *  20071204  20090604  Elden Douglas Traster  System and method for localization utilizing dynamically deployable beacons 
US20090184961A1 (en) *  20051216  20090723  Ihi Corporation  Threedimensional shape data recording/display method and device, and threedimensional shape measuring method and device 
CN100533487C (en) *  20070419  20090826  北京理工大学  Smooth symmetrical surface 3D solid model rebuilding method based on single image 
WO2009112895A1 (en) *  20080310  20090917  Timothy Webster  Position sensing of a piston in a hydraulic cylinder using a photo image sensor 
US20090287450A1 (en) *  20080516  20091119  Lockheed Martin Corporation  Vision system for scan planning of ultrasonic inspection 
US20090284593A1 (en) *  20080516  20091119  Lockheed Martin Corporation  Accurate image acquisition for structuredlight system for optical shape and positional measurements 
US20090287427A1 (en) *  20080516  20091119  Lockheed Martin Corporation  Vision system and method for mapping of ultrasonic data into cad space 
US20100034429A1 (en) *  20080523  20100211  Drouin MarcAntoine  Deconvolutionbased structured light system with geometrically plausible regularization 
US20100114374A1 (en) *  20081103  20100506  Samsung Electronics Co., Ltd.  Apparatus and method for extracting feature information of object and apparatus and method for creating feature map 
WO2010072912A1 (en)  20081222  20100701  Noomeo  Device for threedimensional scanning with dense reconstruction 
US20110170767A1 (en) *  20070928  20110714  Noomeo  Threedimensional (3d) imaging method 
US8116558B2 (en)  20051216  20120214  Ihi Corporation  Threedimensional shape data position matching method and device 
US8121399B2 (en)  20051216  20120221  Ihi Corporation  Selfposition identifying method and device, and threedimensional shape measuring method and device 
CN102436676A (en) *  20110927  20120502  夏东  Threedimensional reestablishing method for intelligent video monitoring 
US20130155417A1 (en) *  20100819  20130620  Canon Kabushiki Kaisha  Threedimensional measurement apparatus, method for threedimensional measurement, and computer program 
US8533967B2 (en)  20100120  20130917  Faro Technologies, Inc.  Coordinate measurement machines with removable accessories 
US8537374B2 (en)  20100120  20130917  Faro Technologies, Inc.  Coordinate measuring machine having an illuminated probe end and method of operation 
CN103389048A (en) *  20120510  20131113  康耐视公司  Laser profiling attachment for a vision system camera 
US8601702B2 (en)  20100120  20131210  Faro Technologies, Inc.  Display for coordinate measuring machine 
WO2013184340A1 (en) *  20120607  20131212  Faro Technologies, Inc.  Coordinate measurement machines with removable accessories 
US8607536B2 (en)  20110114  20131217  Faro Technologies, Inc.  Case for a device 
US8615893B2 (en)  20100120  20131231  Faro Technologies, Inc.  Portable articulated arm coordinate measuring machine having integrated software controls 
WO2013155379A3 (en) *  20120412  20140103  Smart Picture Technologies Inc.  Orthographic image capture system 
US8630314B2 (en)  20100111  20140114  Faro Technologies, Inc.  Method and apparatus for synchronizing measurements taken by multiple metrology devices 
US8638446B2 (en)  20100120  20140128  Faro Technologies, Inc.  Laser scanner or laser tracker having a projector 
US8677643B2 (en)  20100120  20140325  Faro Technologies, Inc.  Coordinate measurement machines with removable accessories 
CN103810700A (en) *  20140114  20140521  燕山大学  Method for determining next optimal observation orientation by occlusion information based on depth image 
US8744763B2 (en)  20111117  20140603  Honeywell International Inc.  Using structured light to update inertial navigation systems 
US8773526B2 (en)  20101217  20140708  Mitutoyo Corporation  Edge detection using structured illumination 
US8832954B2 (en)  20100120  20140916  Faro Technologies, Inc.  Coordinate measurement machines with removable accessories 
US20140277731A1 (en) *  20130318  20140918  Kabushiki Kaisha Yaskawa Denki  Robot picking system, control device, and method of manufacturing a workpiece 
US8875409B2 (en)  20100120  20141104  Faro Technologies, Inc.  Coordinate measurement machines with removable accessories 
ITPI20130041A1 (en) *  20130514  20141115  Benedetto Allotta  Method for determining the orientation of a submerged surface and apparatus that carries out this method 
US8898919B2 (en)  20100120  20141202  Faro Technologies, Inc.  Coordinate measurement machine with distance meter used to establish frame of reference 
CN104240214A (en) *  20120313  20141224  湖南领创智能科技有限公司  Depth camera rapid calibration method for threedimensional reconstruction 
WO2015026636A1 (en) *  20130821  20150226  Faro Technologies, Inc.  Realtime inspection guidance of triangulation scanner 
US8970693B1 (en) *  20111215  20150303  Rawles Llc  Surface modeling with structured light 
US8997362B2 (en)  20120717  20150407  Faro Technologies, Inc.  Portable articulated arm coordinate measuring machine with optical communications bus 
US9074883B2 (en)  20090325  20150707  Faro Technologies, Inc.  Device for optically scanning and measuring an environment 
US9113023B2 (en)  20091120  20150818  Faro Technologies, Inc.  Threedimensional scanner with spectroscopic energy detector 
US9163922B2 (en)  20100120  20151020  Faro Technologies, Inc.  Coordinate measurement machine with distance meter and camera to determine dimensions within camera images 
US9168654B2 (en)  20101116  20151027  Faro Technologies, Inc.  Coordinate measuring machines with dual layer arm 
JP2015195576A (en) *  20140325  20151105  パナソニックＩｐマネジメント株式会社  Imaging method of multiviewpoint image and image display method 
US9185364B1 (en) *  20141120  20151110  Robert Odierna  Subsurface marine light unit with variable wavelength light emission and an integrated camera 
US9210288B2 (en)  20091120  20151208  Faro Technologies, Inc.  Threedimensional scanner with dichroic beam splitters to capture a variety of signals 
CN105160700A (en) *  20150618  20151216  上海工程技术大学  Cross section curve reconstruction method for threedimensional model reconstruction 
US20150369593A1 (en) *  20140619  20151224  Kari MYLLYKOSKI  Orthographic image capture system 
US9329271B2 (en)  20100510  20160503  Faro Technologies, Inc.  Method for optically scanning and measuring an environment 
US9372265B2 (en)  20121005  20160621  Faro Technologies, Inc.  Intermediate twodimensional scanning with a threedimensional scanner to speed registration 
US9417316B2 (en)  20091120  20160816  Faro Technologies, Inc.  Device for optically scanning and measuring an environment 
US9417056B2 (en)  20120125  20160816  Faro Technologies, Inc.  Device for optically scanning and measuring an environment 
JP2016197127A (en) *  20160802  20161124  キヤノン株式会社  Measurement device, control method of measurement device, and program 
US9513107B2 (en)  20121005  20161206  Faro Technologies, Inc.  Registration calculation between threedimensional (3D) scans based on twodimensional (2D) scan data from a 3D scanner 
US9529083B2 (en)  20091120  20161227  Faro Technologies, Inc.  Threedimensional scanner with enhanced spectroscopic energy detector 
CN106323241A (en) *  20160612  20170111  广东警官学院  Method for measuring threedimensional information of person or object through monitoring video or vehiclemounted camera 
US9551575B2 (en)  20090325  20170124  Faro Technologies, Inc.  Laser scanner having a multicolor light source and realtime color receiver 
US9607239B2 (en)  20100120  20170328  Faro Technologies, Inc.  Articulated arm coordinate measurement machine having a 2D camera and method of obtaining 3D representations 
US9628775B2 (en)  20100120  20170418  Faro Technologies, Inc.  Articulated arm coordinate measurement machine having a 2D camera and method of obtaining 3D representations 
WO2017095580A1 (en) *  20151202  20170608  Qualcomm Incorporated  Active camera movement determination for object position and extent in threedimensional space 
US20170278221A1 (en) *  20160322  20170928  Samsung Electronics Co., Ltd.  Method and apparatus of image representation and processing for dynamic vision sensor 
WO2017174791A1 (en) *  20160408  20171012  Carl Zeiss Ag  Device and method for measuring a surface topography, and calibration method 
US9846940B1 (en) *  20160815  20171219  Canon U.S.A., Inc.  Spectrally encoded endoscopic image process 
US9879976B2 (en)  20100120  20180130  Faro Technologies, Inc.  Articulated arm coordinate measurement machine that uses a 2D camera to determine 3D coordinates of smoothly continuous edge features 
US10067231B2 (en)  20121005  20180904  Faro Technologies, Inc.  Registration calculation of threedimensional scanner data performed between scans based on measurements by twodimensional scanner 
US10068344B2 (en)  20140305  20180904  Smart Picture Technologies Inc.  Method and system for 3D capture based on structure from motion with simplified pose detection 
US10074191B1 (en)  20150705  20180911  Cognex Corporation  System and method for determination of object volume with multiple threedimensional sensors 
US10083522B2 (en)  20150619  20180925  Smart Picture Technologies, Inc.  Image based measurement system 
US10115035B2 (en) *  20150108  20181030  Sungkyunkwan University Foundation For Corporation Collaboration  Vision system and analytical method for planar surface segmentation 
US10119805B2 (en)  20110415  20181106  Faro Technologies, Inc.  Threedimensional coordinate scanner and method of operation 
WO2018217911A1 (en) *  20170524  20181129  Augustyn + Company  Method, system, and apparatus for rapidly measuaring incident solar irradiance on multiple planes of differing angular orientations 
US10175037B2 (en)  20151227  20190108  Faro Technologies, Inc.  3D measuring device with battery pack 
US10209059B2 (en)  20100421  20190219  Faro Technologies, Inc.  Method and apparatus for following an operator and locking onto a retroreflector with a laser tracker 
US10222607B2 (en)  20161214  20190305  Canon U.S.A., Inc.  Threedimensional endoscope 
Families Citing this family (26)
Publication number  Priority date  Publication date  Assignee  Title 

WO2008062407A2 (en)  20061121  20080529  Mantisvision Ltd.  3d geometric modeling and 3d video content creation 
US8090194B2 (en)  20061121  20120103  Mantis Vision Ltd.  3D geometric modeling and motion capture using both single and dual imaging 
US8564502B2 (en) *  20090402  20131022  GM Global Technology Operations LLC  Distortion and perspective correction of vector projection display 
US8547374B1 (en) *  20090724  20131001  Lockheed Martin Corporation  Detection and reconstruction of 3D objects with passive imaging sensors 
US9366772B2 (en)  20091105  20160614  Exxonmobil Upstream Research Company  Method for creating a hierarchically layered earth model 
CN101986347B (en) *  20101028  20121212  浙江工业大学  Method for reconstructing stereoscopic vision sequence 
US8941651B2 (en) *  20110908  20150127  Honeywell International Inc.  Object alignment from a 2dimensional image 
US9070019B2 (en) *  20120117  20150630  Leap Motion, Inc.  Systems and methods for capturing motion in threedimensional space 
US8638989B2 (en)  20120117  20140128  Leap Motion, Inc.  Systems and methods for capturing motion in threedimensional space 
US8693731B2 (en)  20120117  20140408  Leap Motion, Inc.  Enhanced contrast for object detection and characterization by optical imaging 
US9679215B2 (en)  20120117  20170613  Leap Motion, Inc.  Systems and methods for machine control 
US8662676B1 (en) *  20120314  20140304  Rawles Llc  Automatic projector calibration 
US9188433B2 (en)  20120524  20151117  Qualcomm Incorporated  Code in affineinvariant spatial mask 
US9285893B2 (en)  20121108  20160315  Leap Motion, Inc.  Object detection and tracking with variablefield illumination devices 
US9465461B2 (en)  20130108  20161011  Leap Motion, Inc.  Object detection and tracking with audio and optical signals 
US9501152B2 (en)  20130115  20161122  Leap Motion, Inc.  Freespace user interface and control using virtual constructs 
US10241639B2 (en)  20130115  20190326  Leap Motion, Inc.  Dynamic user interactions for display control and manipulation of display objects 
JP6037901B2 (en) *  20130311  20161207  日立マクセル株式会社  Operation detection device, the operation detection method and a display control data generating method 
WO2014200589A2 (en)  20130315  20141218  Leap Motion, Inc.  Determining positional information for an object in space 
US9916009B2 (en)  20130426  20180313  Leap Motion, Inc.  Nontactile interface systems and methods 
US9747696B2 (en)  20130517  20170829  Leap Motion, Inc.  Systems and methods for providing normalized parameters of motions of objects in threedimensional space 
CN103530907B (en) *  20131021  20170201  深圳市易尚展示股份有限公司  Complex threedimensional model image drawing method based on 
US9996638B1 (en)  20131031  20180612  Leap Motion, Inc.  Predictive information for free space gesture control and communication 
US9613262B2 (en)  20140115  20170404  Leap Motion, Inc.  Object detection and tracking for providing a virtual device experience 
US9773302B2 (en) *  20151008  20170926  HewlettPackard Development Company, L.P.  Threedimensional object model tagging 
US9996944B2 (en)  20160706  20180612  Qualcomm Incorporated  Systems and methods for mapping an environment 
Citations (1)
Publication number  Priority date  Publication date  Assignee  Title 

US5831621A (en) *  19961021  19981103  The Trustees Of The University Of Pennsylvania  Positional space solution to the next best view problem 

2004
 20040715 US US10/891,632 patent/US20060017720A1/en not_active Abandoned

2008
 20081112 US US12/269,124 patent/US8213707B2/en active Active
Patent Citations (1)
Publication number  Priority date  Publication date  Assignee  Title 

US5831621A (en) *  19961021  19981103  The Trustees Of The University Of Pennsylvania  Positional space solution to the next best view problem 
Cited By (116)
Publication number  Priority date  Publication date  Assignee  Title 

US20070132763A1 (en) *  20051208  20070614  Electronics And Telecommunications Research Institute  Method for creating 3D curved suface by using corresponding curves in a plurality of images 
US7812839B2 (en) *  20051208  20101012  Electronics And Telecommunications Research Institute  Method for creating 3D curved suface by using corresponding curves in a plurality of images 
US8121399B2 (en)  20051216  20120221  Ihi Corporation  Selfposition identifying method and device, and threedimensional shape measuring method and device 
US8300048B2 (en) *  20051216  20121030  Ihi Corporation  Threedimensional shape data recording/display method and device, and threedimensional shape measuring method and device 
US20090184961A1 (en) *  20051216  20090723  Ihi Corporation  Threedimensional shape data recording/display method and device, and threedimensional shape measuring method and device 
US8116558B2 (en)  20051216  20120214  Ihi Corporation  Threedimensional shape data position matching method and device 
US7623702B2 (en) *  20060427  20091124  Mako Surgical Corp.  Contour triangulation system and method 
US20070253617A1 (en) *  20060427  20071101  Mako Surgical Corp.  Contour triangulation system and method 
US8023772B2 (en)  20061213  20110920  Adobe System Incorporated  Rendering images under cylindrical projections 
US20080144973A1 (en) *  20061213  20080619  Hailin Jin  Rendering images under cylindrical projections 
US20110058753A1 (en) *  20061213  20110310  Adobe Systems Incorporated  Rendering images under cylindrical projections 
US7822292B2 (en) *  20061213  20101026  Adobe Systems Incorporated  Rendering images under cylindrical projections 
US9421068B2 (en) *  20070301  20160823  Titan Medical Inc.  Methods, systems and devices for three dimensional input and control methods and systems based thereon 
WO2008104082A1 (en) *  20070301  20080904  Titan Medical Inc.  Methods, systems and devices for threedimensional input, and control methods and systems based thereon 
US8792688B2 (en) *  20070301  20140729  Titan Medical Inc.  Methods, systems and devices for three dimensional input and control methods and systems based thereon 
US20100036393A1 (en) *  20070301  20100211  Titan Medical Inc.  Methods, systems and devices for threedimensional input, and control methods and systems based thereon 
CN100533487C (en) *  20070419  20090826  北京理工大学  Smooth symmetrical surface 3D solid model rebuilding method based on single image 
US8483477B2 (en)  20070928  20130709  Noomeo  Method of constructing a digital image of a threedimensional (3D) surface using a mask 
US20110170767A1 (en) *  20070928  20110714  Noomeo  Threedimensional (3d) imaging method 
US20090140926A1 (en) *  20071204  20090604  Elden Douglas Traster  System and method for localization utilizing dynamically deployable beacons 
WO2009112895A1 (en) *  20080310  20090917  Timothy Webster  Position sensing of a piston in a hydraulic cylinder using a photo image sensor 
CN102084214A (en) *  20080516  20110601  洛伊马汀公司  Accurate image acquisition for structuredlight system for optical shape and positional measurements 
US20090287427A1 (en) *  20080516  20091119  Lockheed Martin Corporation  Vision system and method for mapping of ultrasonic data into cad space 
JP2011521231A (en) *  20080516  20110721  ロッキード・マーチン・コーポレーション  Accurate image acquisition related structured light system for optical measurement of the shape and position 
WO2009140461A1 (en)  20080516  20091119  Lockheed Martin Corporation  Accurate image acquisition for structuredlight system for optical shape and positional measurements 
US20090284593A1 (en) *  20080516  20091119  Lockheed Martin Corporation  Accurate image acquisition for structuredlight system for optical shape and positional measurements 
US20090287450A1 (en) *  20080516  20091119  Lockheed Martin Corporation  Vision system for scan planning of ultrasonic inspection 
KR101489030B1 (en) *  20080516  20150202  록히드 마틴 코포레이션  Accurate Image Acqusition for structuredlight System For Optical Shape And Positional Measurements 
TWI464365B (en) *  20080516  20141211  Lockheed Corp  Method of providing a three dimensional representation of an article and apparatus for providing a threedimensional representation of an object 
US8220335B2 (en) *  20080516  20120717  Lockheed Martin Corporation  Accurate image acquisition for structuredlight system for optical shape and positional measurements 
AU2009246265B2 (en) *  20080516  20140403  Lockheed Martin Corporation  Accurate image acquisition for structuredlight system for optical shape and positional measurements 
US8411995B2 (en) *  20080523  20130402  National Research Council Of Canada  Deconvolutionbased structured light system with geometrically plausible regularization 
US20100034429A1 (en) *  20080523  20100211  Drouin MarcAntoine  Deconvolutionbased structured light system with geometrically plausible regularization 
JP2010107495A (en) *  20081103  20100513  Samsung Electronics Co Ltd  Apparatus and method for extracting characteristic information of object and apparatus and method for producing characteristic map using the same 
US20100114374A1 (en) *  20081103  20100506  Samsung Electronics Co., Ltd.  Apparatus and method for extracting feature information of object and apparatus and method for creating feature map 
US8352075B2 (en) *  20081103  20130108  Samsung Electronics Co., Ltd.  Apparatus and method for extracting feature information of object and apparatus and method for creating feature map 
WO2010072912A1 (en)  20081222  20100701  Noomeo  Device for threedimensional scanning with dense reconstruction 
US9074883B2 (en)  20090325  20150707  Faro Technologies, Inc.  Device for optically scanning and measuring an environment 
US9551575B2 (en)  20090325  20170124  Faro Technologies, Inc.  Laser scanner having a multicolor light source and realtime color receiver 
US9113023B2 (en)  20091120  20150818  Faro Technologies, Inc.  Threedimensional scanner with spectroscopic energy detector 
US9210288B2 (en)  20091120  20151208  Faro Technologies, Inc.  Threedimensional scanner with dichroic beam splitters to capture a variety of signals 
US9529083B2 (en)  20091120  20161227  Faro Technologies, Inc.  Threedimensional scanner with enhanced spectroscopic energy detector 
US9417316B2 (en)  20091120  20160816  Faro Technologies, Inc.  Device for optically scanning and measuring an environment 
US8630314B2 (en)  20100111  20140114  Faro Technologies, Inc.  Method and apparatus for synchronizing measurements taken by multiple metrology devices 
US9628775B2 (en)  20100120  20170418  Faro Technologies, Inc.  Articulated arm coordinate measurement machine having a 2D camera and method of obtaining 3D representations 
US8677643B2 (en)  20100120  20140325  Faro Technologies, Inc.  Coordinate measurement machines with removable accessories 
US8683709B2 (en)  20100120  20140401  Faro Technologies, Inc.  Portable articulated arm coordinate measuring machine with multibus arm technology 
US8537374B2 (en)  20100120  20130917  Faro Technologies, Inc.  Coordinate measuring machine having an illuminated probe end and method of operation 
US8533967B2 (en)  20100120  20130917  Faro Technologies, Inc.  Coordinate measurement machines with removable accessories 
US8638446B2 (en)  20100120  20140128  Faro Technologies, Inc.  Laser scanner or laser tracker having a projector 
US8763266B2 (en)  20100120  20140701  Faro Technologies, Inc.  Coordinate measurement device 
US8615893B2 (en)  20100120  20131231  Faro Technologies, Inc.  Portable articulated arm coordinate measuring machine having integrated software controls 
US9879976B2 (en)  20100120  20180130  Faro Technologies, Inc.  Articulated arm coordinate measurement machine that uses a 2D camera to determine 3D coordinates of smoothly continuous edge features 
US10060722B2 (en)  20100120  20180828  Faro Technologies, Inc.  Articulated arm coordinate measurement machine having a 2D camera and method of obtaining 3D representations 
US9163922B2 (en)  20100120  20151020  Faro Technologies, Inc.  Coordinate measurement machine with distance meter and camera to determine dimensions within camera images 
US8875409B2 (en)  20100120  20141104  Faro Technologies, Inc.  Coordinate measurement machines with removable accessories 
US9607239B2 (en)  20100120  20170328  Faro Technologies, Inc.  Articulated arm coordinate measurement machine having a 2D camera and method of obtaining 3D representations 
US9009000B2 (en)  20100120  20150414  Faro Technologies, Inc.  Method for evaluating mounting stability of articulated arm coordinate measurement machine using inclinometers 
US8601702B2 (en)  20100120  20131210  Faro Technologies, Inc.  Display for coordinate measuring machine 
US8942940B2 (en)  20100120  20150127  Faro Technologies, Inc.  Portable articulated arm coordinate measuring machine and integrated electronic data processing system 
US8832954B2 (en)  20100120  20140916  Faro Technologies, Inc.  Coordinate measurement machines with removable accessories 
US8898919B2 (en)  20100120  20141202  Faro Technologies, Inc.  Coordinate measurement machine with distance meter used to establish frame of reference 
US10209059B2 (en)  20100421  20190219  Faro Technologies, Inc.  Method and apparatus for following an operator and locking onto a retroreflector with a laser tracker 
US9684078B2 (en)  20100510  20170620  Faro Technologies, Inc.  Method for optically scanning and measuring an environment 
US9329271B2 (en)  20100510  20160503  Faro Technologies, Inc.  Method for optically scanning and measuring an environment 
US20130155417A1 (en) *  20100819  20130620  Canon Kabushiki Kaisha  Threedimensional measurement apparatus, method for threedimensional measurement, and computer program 
US8964189B2 (en) *  20100819  20150224  Canon Kabushiki Kaisha  Threedimensional measurement apparatus, method for threedimensional measurement, and computer program 
US9168654B2 (en)  20101116  20151027  Faro Technologies, Inc.  Coordinate measuring machines with dual layer arm 
US8773526B2 (en)  20101217  20140708  Mitutoyo Corporation  Edge detection using structured illumination 
US8607536B2 (en)  20110114  20131217  Faro Technologies, Inc.  Case for a device 
US10119805B2 (en)  20110415  20181106  Faro Technologies, Inc.  Threedimensional coordinate scanner and method of operation 
CN102436676A (en) *  20110927  20120502  夏东  Threedimensional reestablishing method for intelligent video monitoring 
US8744763B2 (en)  20111117  20140603  Honeywell International Inc.  Using structured light to update inertial navigation systems 
US8970693B1 (en) *  20111215  20150303  Rawles Llc  Surface modeling with structured light 
US9417056B2 (en)  20120125  20160816  Faro Technologies, Inc.  Device for optically scanning and measuring an environment 
CN104240214A (en) *  20120313  20141224  湖南领创智能科技有限公司  Depth camera rapid calibration method for threedimensional reconstruction 
WO2013155379A3 (en) *  20120412  20140103  Smart Picture Technologies Inc.  Orthographic image capture system 
CN103389048A (en) *  20120510  20131113  康耐视公司  Laser profiling attachment for a vision system camera 
US8675208B2 (en) *  20120510  20140318  Cognex Corporation  Laser profiling attachment for a vision system camera 
WO2013184340A1 (en) *  20120607  20131212  Faro Technologies, Inc.  Coordinate measurement machines with removable accessories 
GB2517621A (en) *  20120607  20150225  Faro Tech Inc  Coordinate measurement machines with removable accessories 
CN104380033A (en) *  20120607  20150225  法罗技术股份有限公司  Coordinate measurement machines with removable accessories 
US8997362B2 (en)  20120717  20150407  Faro Technologies, Inc.  Portable articulated arm coordinate measuring machine with optical communications bus 
US9372265B2 (en)  20121005  20160621  Faro Technologies, Inc.  Intermediate twodimensional scanning with a threedimensional scanner to speed registration 
US10203413B2 (en)  20121005  20190212  Faro Technologies, Inc.  Using a twodimensional scanner to speed registration of threedimensional scan data 
US9746559B2 (en)  20121005  20170829  Faro Technologies, Inc.  Using twodimensional camera images to speed registration of threedimensional scans 
US10067231B2 (en)  20121005  20180904  Faro Technologies, Inc.  Registration calculation of threedimensional scanner data performed between scans based on measurements by twodimensional scanner 
US9513107B2 (en)  20121005  20161206  Faro Technologies, Inc.  Registration calculation between threedimensional (3D) scans based on twodimensional (2D) scan data from a 3D scanner 
US9618620B2 (en)  20121005  20170411  Faro Technologies, Inc.  Using depthcamera images to speed registration of threedimensional scans 
US9739886B2 (en)  20121005  20170822  Faro Technologies, Inc.  Using a twodimensional scanner to speed registration of threedimensional scan data 
US20140277731A1 (en) *  20130318  20140918  Kabushiki Kaisha Yaskawa Denki  Robot picking system, control device, and method of manufacturing a workpiece 
US9149932B2 (en) *  20130318  20151006  Kabushiki Kaisha Yaskawa Denki  Robot picking system, control device, and method of manufacturing a workpiece 
ITPI20130041A1 (en) *  20130514  20141115  Benedetto Allotta  Method for determining the orientation of a submerged surface and apparatus that carries out this method 
WO2014184748A1 (en)  20130514  20141120  Universita' Degli Studi Di Firenze  Method for determining the orientation of a submerged surface and apparatus that carries out this method 
US20150054946A1 (en) *  20130821  20150226  Faro Technologies, Inc.  Realtime inspection guidance of triangulation scanner 
WO2015026636A1 (en) *  20130821  20150226  Faro Technologies, Inc.  Realtime inspection guidance of triangulation scanner 
CN103810700A (en) *  20140114  20140521  燕山大学  Method for determining next optimal observation orientation by occlusion information based on depth image 
US10068344B2 (en)  20140305  20180904  Smart Picture Technologies Inc.  Method and system for 3D capture based on structure from motion with simplified pose detection 
JP2015195576A (en) *  20140325  20151105  パナソニックＩｐマネジメント株式会社  Imaging method of multiviewpoint image and image display method 
US20150369593A1 (en) *  20140619  20151224  Kari MYLLYKOSKI  Orthographic image capture system 
US9185364B1 (en) *  20141120  20151110  Robert Odierna  Subsurface marine light unit with variable wavelength light emission and an integrated camera 
US10115035B2 (en) *  20150108  20181030  Sungkyunkwan University Foundation For Corporation Collaboration  Vision system and analytical method for planar surface segmentation 
CN105160700A (en) *  20150618  20151216  上海工程技术大学  Cross section curve reconstruction method for threedimensional model reconstruction 
US10083522B2 (en)  20150619  20180925  Smart Picture Technologies, Inc.  Image based measurement system 
US10074191B1 (en)  20150705  20180911  Cognex Corporation  System and method for determination of object volume with multiple threedimensional sensors 
WO2017095580A1 (en) *  20151202  20170608  Qualcomm Incorporated  Active camera movement determination for object position and extent in threedimensional space 
US10175037B2 (en)  20151227  20190108  Faro Technologies, Inc.  3D measuring device with battery pack 
US9934557B2 (en) *  20160322  20180403  Samsung Electronics Co., Ltd  Method and apparatus of image representation and processing for dynamic vision sensor 
US20170278221A1 (en) *  20160322  20170928  Samsung Electronics Co., Ltd.  Method and apparatus of image representation and processing for dynamic vision sensor 
WO2017174791A1 (en) *  20160408  20171012  Carl Zeiss Ag  Device and method for measuring a surface topography, and calibration method 
EP3462129A1 (en) *  20160408  20190403  Carl Zeiss AG  Device and method for measuring a surface topography and calibration method 
CN106323241A (en) *  20160612  20170111  广东警官学院  Method for measuring threedimensional information of person or object through monitoring video or vehiclemounted camera 
JP2016197127A (en) *  20160802  20161124  キヤノン株式会社  Measurement device, control method of measurement device, and program 
US9846940B1 (en) *  20160815  20171219  Canon U.S.A., Inc.  Spectrally encoded endoscopic image process 
US10222607B2 (en)  20161214  20190305  Canon U.S.A., Inc.  Threedimensional endoscope 
WO2018217911A1 (en) *  20170524  20181129  Augustyn + Company  Method, system, and apparatus for rapidly measuaring incident solar irradiance on multiple planes of differing angular orientations 
Also Published As
Publication number  Publication date 

US8213707B2 (en)  20120703 
US20090102840A1 (en)  20090423 
Similar Documents
Publication  Publication Date  Title 

Montiel et al.  Unified inverse depth parametrization for monocular SLAM  
Bakstein et al.  Panoramic mosaicing with a 180° field of view lens  
US6078701A (en)  Method and apparatus for performing local to global multiframe alignment to construct mosaic images  
Hartley  Selfcalibration of stationary cameras  
Teller et al.  Calibrated, registered images of an extended urban area  
US6819318B1 (en)  Method and apparatus for modeling via a threedimensional image mosaic system  
Mendonca et al.  Epipolar geometry from profiles under circular motion  
Winkelbach et al.  Lowcost laser range scanner and fast surface registration approach  
US8773508B2 (en)  3D imaging system  
JP4435867B2 (en)  The image processing device for generating normal vector information, methods, computer programs, and, the viewpoint conversion image generating device  
Stoykova et al.  3D timevarying scene capture technologies—A survey  
Champleboux et al.  From accurate range imaging sensor calibration to accurate modelbased 3D object localization  
US9189856B1 (en)  Reduced homography for recovery of pose parameters of an optical apparatus producing image data with structural uncertainty  
KR101601331B1 (en)  System and method for threedimensional measurment of the shape of material object  
Godin et al.  Threedimensional registration using range and intensity information  
JP4245963B2 (en)  Method and system for calibrating multiple cameras using calibration object  
Micusik et al.  Structure from motion with wide circular field of view cameras  
Scaramuzza et al.  A flexible technique for accurate omnidirectional camera calibration and structure from motion  
Dold et al.  Registration of terrestrial laser scanning data using planar patches and image data  
US5559334A (en)  Epipolar reconstruction of 3D structures  
US7256899B1 (en)  Wireless methods and systems for threedimensional noncontact shape sensing  
US7271377B2 (en)  Calibration ring for developing and aligning view dependent image maps with 3D surface data  
EP1882895A1 (en)  3dimensional shape measuring method and device thereof  
EP2104365A1 (en)  Method and apparatus for rapid threedimensional restoration  
US20110249117A1 (en)  Imaging device, distance measuring method, and nontransitory computerreadable recording medium storing a program 
Legal Events
Date  Code  Title  Description 

AS  Assignment 
Owner name: CITY UNIVERSITY OF HONG KONG, HONG KONG Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:LI, YOU FU;REEL/FRAME:015315/0964 Effective date: 20041025 