US20160094830A1 — System and Methods for Shape Measurement Using Dual Frequency Fringe Patterns
 Publication number: US 2016/0094830 A1 (application US 14/866,245)
 Authority: United States
 Legal status: Abandoned (as listed by Google Patents; the status is an assumption and is not a legal conclusion)
Classifications

 G01B11/2536 — Measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object, using several gratings with variable grating pitch, projected on the object with the same angle of incidence
 H04N13/0048
 G06T17/00 — Three dimensional [3D] modelling, e.g. data description of 3D objects
 G06T7/0075
 G06T7/521 — Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light
 H04N13/0253
 G06T2207/10004 — Still image; Photographic image
 G06T2207/10012 — Stereo images
 H04N2013/0074 — Stereoscopic image analysis
 H04N2013/0081 — Depth or disparity estimation from stereoscopic image signals
Abstract
A method obtains the shape of a target by projecting and recording images of dual frequency fringe patterns. Locations in each projector image plane are encoded into the patterns and projected onto the target while images are recorded. The resulting images show the patterns superimposed onto the target. The images are decoded to recover relative phase values for the patterns' primary and dual frequencies. The relative phases are unwrapped into absolute phases and converted back to projector image plane locations. The relation between camera pixels and decoded projector locations is saved as a correspondence image representing the measured shape of the target. Correspondence images together with a geometric triangulation method create a 3D model of the target. Dual frequency fringe patterns have a low frequency embedded into a high frequency sinusoid; both frequencies are recovered in closed form by the decoding method, thus enabling direct phase unwrapping.
Description
 This application claims priority to U.S. Provisional Patent Application No. 62/055,835, filed Sep. 26, 2014, which is incorporated herein by reference.
 The subject technology relates generally to measuring 3D shapes using structured light patterns, and more particularly, to computing depth values using dual frequency sinusoidal fringe patterns.
 Structured light methods are widely used in noncontact 3D scanners. Common applications of this technology are industrial inspection, medical imaging, and cultural heritage preservation. These scanners use one or more cameras to image the scene while it is illuminated by a sequence of known patterns. A typical setup consists of one projector and a single camera, where the projector projects a fixed pattern sequence while the camera records one image for each projected pattern. The pattern sequence helps to establish correspondences between projector and camera coordinates. Such correspondences, in conjunction with a triangulation method, allow recovery of the scene shape. The pattern set determines many properties of a structured light 3D scanner, such as precision and scanning time.
 A general purpose 3D scanner must produce high quality results for a variety of materials and shapes to be of practical use. In particular, the general purpose 3D scanner must be robust to global illumination effects and source illumination defocus; otherwise, measurement errors would render it unsuitable for scanning non-Lambertian surfaces. Global illumination is defined as all light contributions measured at a surface point that are not received directly from the primary light source. Common examples are interreflections and subsurface scattering. Illumination defocus is caused by the light source's finite depth of field. It is known that high frequency structured light patterns are robust to such issues.
 However, most existing structured light based scanners are not robust to global illumination and defocus effects, and the few that are robust either use pattern sequences of hundreds of images or fail to provide a closed form decoding algorithm. In both cases, the existing structured light based scanners cannot measure scene shapes as fast as required by many applications.
 In view of the above, a new shape measurement system and method, based on structured light patterns robust to global illumination effects and source illumination defocus, including fast encoding and decoding algorithms, is required.
 The subject technology provides a 3D measurement method and system for real world scenes comprising a variety of materials.
 The subject technology has a measurement speed which is significantly faster than existing structured light 3D shape measurement techniques without loss of precision.
 The subject technology provides a closed form decoding method of the projected structured light patterns which improves upon state of the art methods.
 The subject technology also allows for simultaneous measurements of multiple objects.
 One embodiment of the subject technology is directed to a method for measuring shapes using structured light. The method includes encoding light source coordinates of a scene using dual frequency sinusoidal fringe patterns, modulating the light source with the dual frequency sinusoidal fringe patterns, and recording images of the scene while the scene is being illuminated by the modulated light source. The method further includes extracting the encoded coordinates from the recorded images by using a closed form decoding algorithm.
 Another embodiment of the subject technology is directed to a system for three-dimensional shape measurement including a first module for encoding light source coordinates using dual frequency sinusoidal fringe patterns, a light source for projecting such fringe patterns onto a scene while recording images of the scene under this illumination, a second module for extracting the encoded coordinates from the recorded images, and a third module for computing the scene shape using a geometric triangulation method.
 Yet another embodiment of the subject technology is directed to a system for three-dimensional shape measurement including at least one projector, at least one camera, and at least one system processor. The system is configured to generate dual frequency sinusoidal fringe pattern sequences, project the generated fringe patterns onto a scene, capture images of the scene illuminated by the dual frequency fringe patterns, and decode the images to provide for three-dimensional shape measurement of the scene. In a preferred embodiment, the system processor includes one or more GPUs (Graphics Processing Units).
 Still another embodiment of the subject technology is directed to a system for three-dimensional shape measurement of a single object including at least one projector, at least one camera, at least one system processor, and a turntable. The system is configured to generate dual frequency sinusoidal fringe pattern sequences, project the generated fringe patterns onto the object, and capture images of the object sitting on top of the turntable illuminated by the fringe patterns. The system projects fringe patterns and records images of the object under this illumination at different rotations of the turntable while keeping the object fixed on top of the turntable. The system also decodes all recorded images, computes the object shape from all captured turntable rotations, and generates a 3D model of the object.
 Additional aspects and/or advantages will be set forth in part in the description and claims which follow and, in part, will be apparent from the description and claims, or may be learned by practice of the invention. No single embodiment need exhibit each and every object, feature, or advantage, as it is contemplated that different embodiments may have different objects, features, and advantages.
 For a more complete understanding of the invention, reference is made to the following description and accompanying drawings.

FIG. 1A is a schematic diagram illustrating one embodiment of the shape measurement system. 
FIG. 1B is a schematic diagram illustrating the system processor of one embodiment of the shape measurement system. 
FIG. 2 is a flow chart of a preferred embodiment of the subject technology. 
FIG. 3 is a flow chart of the acquisition step of FIG. 2 . 
FIG. 4 is a plot of acquisition timings in accordance with the subject technology. 
FIG. 5 is a flow chart of the decoding step of FIG. 2 . 
FIG. 6 is a flow chart of the decoding step in an alternative embodiment. 
FIG. 7 is an example of index assignment to a square pixel array in accordance with the subject technology. 
FIG. 8 is an example of index assignment to a diamond pixel array in accordance with the subject technology. 
FIG. 9 is an example of a dual frequency pattern sequence in accordance with the subject technology. 
FIG. 10 is an example of an image captured while the scene is illuminated by a dual frequency fringe pattern in accordance with the subject technology. 
FIG. 11 shows the relation between a correspondence value and a projector index to illustrate the concept of triangulation in accordance with the subject technology. 
FIG. 12 is a flow chart of single pixel decoding in accordance with the subject technology. 
FIG. 13 is an example of a correspondence image generated by the subject technology. 
FIG. 14 is an example of a 3D model generated by the subject technology. 

 The subject technology overcomes many of the prior art problems associated with generating 3D models. The advantages and other features of the technology disclosed herein will become more readily apparent to those having ordinary skill in the art from the following detailed description of certain preferred embodiments taken in conjunction with the drawings, which set forth representative embodiments of the present invention and wherein like reference numerals identify similar structural elements.
 In brief overview, the subject technology includes a system that obtains the shape of a target object or scene by projecting and recording images of dual frequency fringe patterns. The system determines locations in projector image planes that are encoded into patterns and projected onto the target while images are being recorded. The resulting images show the patterns superimposed onto the target. The images are decoded to recover relative phase values for the patterns' primary and dual frequencies. The relative phases are unwrapped into absolute phases and converted back to projector image plane locations. The relation between camera pixels and decoded projector locations is saved as a correspondence image representing the measured shape of the target. Correspondence images together with a geometric triangulation method create a 3D model of the target. Dual frequency fringe patterns have a low frequency embedded into a high frequency sinusoid. Both frequencies are recovered in closed form by the decoding method, thus enabling direct phase unwrapping. Only high frequency fringes are visible in the pattern images, making the result more robust, for example, with respect to source illumination defocus and global illumination effects. Thus, the subject technology is applicable to shape measurement of targets of a variety of materials.
 Referring now to
FIG. 1A , a schematic diagram illustrates a shape measurement system 100. The shape measurement system 100 includes a system processor 102 connected to a light source 104, such as a projector, and a camera 106. The light source 104 is controlled by the system processor 102 to project fringe patterns onto one or more objects, herein referred to as a scene 108. The camera 106 is controlled by the system processor 102 to record scene images. Both camera 106 and light source 104 are oriented towards the target scene 108 for which the 3D shape is being measured. Preferably, the camera 106, the light source 104, and the scene 108 remain static while the shape measurement is being performed.  Referring now to
FIG. 1B , a schematic diagram illustrating the system processor 102 is shown. As an illustration, the system processor 102 typically includes a central processing unit 110 including one or more microprocessors in communication with memory 112 such as random access memory (RAM) and a magnetic hard disk drive. An operating system is stored on the memory 112 for execution on the central processing unit 110. A hard disk drive is typically used for storing data, applications, and the like. Although not shown for simplicity, the system processor 102 includes mechanisms and structures for performing I/O operations and other typical functions. It is envisioned that the system processor 102 can utilize multiple servers in cooperation to facilitate greater performance and stability by distributing memory and processing, as is well known.  The memory 112 includes several modules for performing the operations of the subject technology. An encoding module 114, an acquisition module 116, and a decoding module 118 all interact with data stored in a dual frequency patterns database 120, an image database 122, and other places.
 The flow charts herein illustrate the structure or the logic of the present technology, possibly as embodied in computer program software for execution on particular device such as the system processor 102 or a modified computer, digital processor or microprocessor. Those skilled in the art will appreciate that the flow charts illustrate the structures of the computer program code elements, including logic circuits on an integrated circuit as the case may be, that function according to the present technology. As such, the present technology may be practiced by a machine component that renders the program code elements in a form that instructs equipment to perform a sequence of function steps corresponding to those shown in the flow charts.
 Referring now to
FIG. 2 , there is illustrated a flowchart 200 depicting a process for measuring the shape of a scene 108. Generally, once the process begins, a set of sinusoidal fringe patterns is generated by the encoding module 114 of the system processor 102 in an encoding step 202. Second, during an acquisition step 204, the light source 104 is used to project the generated patterns, one by one, onto the scene 108. The camera 106 records an image of the scene 108 for each projected fringe pattern, which is acquired by the acquisition module 116 for storage. Preferably, the pattern projection and image capture times are synchronized in such a way that a single pattern is projected while each image is being captured. No more than one image is captured during each pattern projection time. Finally, all the captured images are processed by the decoding module 118 in a decoding step 206 to generate a digital representation of the scene 108 related to the measured shape.  Encoding Step 202
 The encoding step 202 encodes light source coordinates into dual frequency fringe patterns. In a preferred embodiment, the light source 104 projects 2-dimensional grayscale images. In this case, the light source coordinates are integer index values which identify a line in the light source 104 image plane. There exist different pixel array organizations among commercial projectors.
FIG. 7 and FIG. 8 are examples of square and diamond pixel arrays respectively. In the square pixel array case, a projector index 120 identifies a pixel column (shown in bold in FIG. 7 ). In the diamond pixel array case, a projector index 120 identifies a diagonal line from the top-left corner to the bottom-right corner of the array (shown in bold in FIG. 8 ). The encoding step 202 generates a sequence of 2-dimensional grayscale images where the value at each pixel encodes the corresponding projector index 120 in the array.  Each projector index 120 is encoded using Equation (1) below, where r is an output vector and o and a are constant values. S and A are matrices, T and s are also vectors, p being the projector index 120. The length of vector r is equal to the number of images to be generated in the sequence. The first component of r is the encoded pixel value of index p in the first image in the sequence, the second component corresponds to the encoded pixel value in the second image in the sequence, and so forth. Computing a vector r for each projector index 120 in the projector pixel array, and filling the image sequence pixel values using the components of r as described, concludes the encoding step 202. The output of encoding step 202 is the sequence of images generated. The values required to evaluate Equation (1) are explained in detail in the following paragraphs.
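 Equation (1) itself is not reproduced in this text. A reconstruction consistent with the definitions of T, A, s, and S in Equations (2) through (6), and with the primary frequency values of Equation (16), would be the following; this form is an assumption inferred from those definitions, not the verbatim formula:

```latex
r \;=\; o\,\mathbf{1} \;+\; a\,\cos\!\bigl(2\pi\,(S A T\, p + s)\bigr) \;\in\; \mathbb{R}^{N} \qquad (1)
```

 Under this reading, component k of r equals o + a cos(2π(f_i p + s_k)), where f_i = (AT)_i is the primary frequency selected for pattern k by the block structure of S; this matches the cos/sin columns of the decoding matrix in Equations (9) and (10).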
 Referring now to
FIG. 9 , a dual frequency pattern sequence 900 is shown. The dual frequency pattern sequence 900 is made of sinusoidal fringe patterns of F primary frequencies and phase shifts of the primary frequencies. A primary pattern frequency is the spatial frequency of the fringes in the pattern image. Each pattern also has an embedded dual frequency which is not visible in the image but is extracted by the pattern decode step 602 (see FIG. 6 and operation 1210 in FIG. 12 ). The value F and the number of shifts are chosen by the designer. However, F should be greater than one. The designer must also choose F real values {T_1, T_2, . . . , T_F} which are used to compute vector T using Equation (2) below. All T_i values must be greater than one. The length of vector T is equal to the number of primary frequencies. 
$T = \left[ \frac{1}{T_1}, \frac{1}{T_1 T_2}, \ldots, \frac{1}{T_1 T_2 \cdots T_F} \right]^T \in \mathbb{R}^F \quad (2)$  Matrix A is the mixing matrix given in Equation (3) below. Matrix A has F columns and F rows.

$A = \begin{bmatrix} 1 & & \\ \vdots & \ddots & \\ 1 & \cdots & 1 \end{bmatrix} \quad (3)$  The designer must also choose a set {N_1, N_2, . . . , N_F} of frequency shifts. Each integer N_i in the set must be equal to or greater than 2, and the set must satisfy Equation (4) below. A typical selection is to make N_1=3 and N_i=2 for i>1.

$N \equiv \sum_{i=1}^{F} N_i \ge 2F + 1 \quad (4)$  Vector s is built by stacking together the shifts of each frequency as follows: N_1 shifts of the first frequency, N_2 shifts of the second, and so forth. The length of vector s is N. Let s_i be a vector of length N_i containing the shifts of the i-th frequency; then, vector s and each s_i are defined as shown in Equation (5) below.

$s = \begin{bmatrix} s_1 \\ s_2 \\ \vdots \\ s_F \end{bmatrix} \in \mathbb{R}^N, \quad s_i = \begin{bmatrix} \frac{0}{\max(N_i,3)} \\ \frac{1}{\max(N_i,3)} \\ \vdots \\ \frac{N_i - 1}{\max(N_i,3)} \end{bmatrix} \in \mathbb{R}^{N_i} \quad (5)$  S is a block diagonal matrix matching the shift vector s. Matrix S has F columns and N rows and is given in Equation (6) below.

$S = \begin{bmatrix} S_1 & & & \\ & S_2 & & \\ & & \ddots & \\ & & & S_F \end{bmatrix}, \quad S_i = \begin{bmatrix} 1 \\ 1 \\ \vdots \\ 1 \end{bmatrix} \in \mathbb{R}^{N_i} \quad (6)$  Finally, the offset value o and the amplitude value a are constants proportional to the light source 104 dynamic range. For instance, values o=127 and a=127 would generate pattern images in the range [0, 255].
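 Putting Equations (2) through (6) together, the whole encoding step can be sketched in a few lines. The sketch assumes Equation (1) has the form r = o + a·cos(2π(S A T p + s)), which is consistent with the decoding matrix of Equations (9) and (10); the function name and image width are illustrative only.

```python
import numpy as np

def dual_frequency_patterns(width, T_vals, N_shifts, o=127.0, a=127.0):
    """Generate the dual frequency fringe pattern sequence for a projector
    with `width` indexed lines (one row per pattern, one column per index)."""
    F = len(T_vals)
    T = 1.0 / np.cumprod(np.asarray(T_vals, dtype=float))   # Equation (2)
    A = np.tril(np.ones((F, F)))                            # Equation (3)
    f = A @ T                      # pattern frequencies, cf. Equation (16)
    # Equation (5): stacked shift vector s
    s = np.concatenate([np.arange(Ni) / max(Ni, 3) for Ni in N_shifts])
    # Equation (6): S repeats each frequency N_i times, so apply it by
    # expansion; assumed Equation (1): r = o + a*cos(2*pi*(S A T p + s))
    f_per_pattern = np.repeat(f, N_shifts)                  # S @ (A T)
    p = np.arange(width)                                    # projector indices
    return o + a * np.cos(2.0 * np.pi * (np.outer(f_per_pattern, p) + s[:, None]))

# The Equation (7) example: F=3, T=(64, 5, 5), shifts (3, 2, 2) -> 7 patterns.
patterns = dual_frequency_patterns(608, [64.0, 5.0, 5.0], [3, 2, 2])
```

 With these parameters the visible fringe periods are all close (64, about 53.3, and about 51.6 lines) while the embedded periods of 320 and 1600 lines never appear directly, matching the property that only high frequency fringes are visible in the pattern images.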
 An example sequence generated using this method is shown in
FIG. 9 . In the example, the following parameters were used: 
F=3, T_1=64, T_2=5, T_3=5, N_1=3, N_2=2, N_3=2 (7)  Still referring to
FIG. 9 , patterns 902a-902c are the 3 shifts of the first primary frequency, patterns 902d and 902e are the 2 shifts of the second primary frequency, and patterns 902f and 902g are the 2 shifts of the third primary frequency. The 7 patterns 902a-902g comprise the whole sequence.  Acquisition Step 204
 Referring again to FIG. 2 , the acquisition step 204 is described in more detail below. Referring additionally to
FIG. 3 , a flowchart 300 implementing the acquisition step 204 is shown. Once started, at step 302 the light source 104 projects a dual frequency pattern onto the scene 108. The dual frequency patterns are stored in the dual frequency patterns database 120 in the memory 112. At step 304, the system processor 102 commands the camera 106 to capture one image and saves the image in the image database 122 in the memory 112 for later processing. At step 306, the system processor 102 determines whether the projected pattern is the last in the sequence, in which case the acquisition step 204 ends. If not, the system processor 102 advances to the next pattern in the sequence and jumps back to step 302. In other words, the first time step 302 is executed, the first pattern in the sequence is projected; each additional time, the next fringe pattern in the sequence is projected.  Steps 302 and 304 must be executed with precise timings, as shown graphically in
FIG. 4 , which illustrates a projector and camera timing plot 400. The timing plot 400 includes the projector timing 402 synchronized with the camera timing 404. The moment that the projection of the first pattern in the sequence begins is designated t_0. Each pattern is projected for a period of time t_p (i.e., projection of the first pattern stops at time t_0+t_p). Capture by the camera must begin after a small delay t_d and continue for a time t_c (i.e., the first image capture begins at t_0+t_d and finishes at t_0+t_d+t_c). Preferably, there is a delay t_d also between the projection end of a pattern and the projection beginning of the next one (i.e., projection of the second pattern begins at t_1=t_0+t_p+t_d). The projection start of pattern i ∈ [1, N] is computed as t_i=t_0+(i−1)*(t_p+t_d); image i capture begins at t_i+t_d and ends at t_i+t_d+t_c; projection of pattern i finishes at t_i+t_p.  Referring now to
FIG. 10 , an exemplary image 1000 of a scene 108 , captured by acquisition step 204 while projecting a dual frequency fringe pattern 902a, is shown. Such images for dual frequency fringe patterns 902a-902g would be saved in the image database 122.  Decoding Step 206
 Referring now to
FIG. 5 , a flowchart 500 illustrating the details of the decoding step 206 is shown. During the decoding step 206, the system processor 102 processes the set of captured images at step 502 (pattern decoding), creates a correspondence image at step 504, and creates a 3D model at step 508. In a preferred embodiment, the decoding step 206 includes a pattern decode step 502 and a triangulation step 506.  Referring now to
FIG. 6 , another embodiment implements the decoding step 206 by performing only a pattern decode step 602 and a correspondence image step 604.  Referring again to
FIG. 5 , the correspondence image created at step 504 is a matrix with the same number of columns and rows as the input images. Referring additionally to FIG. 11 , an example 1100 of triangulation in accordance with the subject technology is shown. In FIG. 11 , the concept of triangulation is shown by the relationship between a correspondence value and a projector index.  Each location in the matrix, called a pixel, contains a correspondence value 1102. As shown in
FIG. 11 , the example correspondence value 1102 is five. The decoding step 206 creates a correspondence image at step 504 by setting each correspondence value equal to the projector index 120 (see FIGS. 7, 8 and 11 ), which is the index assigned to the projector pixel 1106 of the projector image which illuminated the point 1104 in the scene 108 imaged by the camera pixel 1102 in the same location as the correspondence value.  In other words with respect to the example in
FIG. 11 , a scene point 1104 is illuminated by a projector pixel 1106 with a projector index 120 equal to 5. The same scene point 1104 is being imaged by a camera pixel 1102 in column i and row j. Therefore, the correspondence value 1102 at column i and row j in a correspondence image 1108 of the camera 106 will be set to a value equal to 5. Some pixels in the correspondence image 1108 cannot be assigned a valid projector index 120, either because the decoding step 206 cannot reliably identify a corresponding projector pixel, or because the point imaged at that location is not illuminated by the light source 104. In both of these cases, the correspondence value is set to ‘unknown’.  The pattern decode step 502 computes a correspondence value for each pixel in the correspondence image 1108 independently. Referring additionally to
FIG. 12 , a flowchart 1200 detailing the pattern decode step 502 is illustrated.  Referring to
FIG. 12 , step 1202 computes a relative phase value of the primary frequencies, called raw phase. Subsequently step 1210 computes a relative phase value of the dual frequencies, called dual phase. Additionally step 1212 calculates an absolute phase value. The absolute phase value maps to a projector index.  Referring in more detail to
FIG. 12 , exemplary logic for generation of each correspondence is shown. The logic of FIG. 12 is executed in parallel by the pattern decode step 502 for each location in the correspondence image of step 504.  At step 1202 the raw phase value for the primary frequencies is computed by solving the linear system in Equation (8) below, where U and R are vectors and M is a fixed matrix.

$U = \underset{U}{\operatorname{argmin}} \left\| R - M U \right\| \quad (8)$  Vector R is called the radiance vector, a vector built from the pixel values of the captured image set. The length of R is N, the number of images in the set. The first component of the radiance vector has the pixel value at the pixel location being decoded in the first image of the sequence. The second component has the pixel value at the same location in the second image of the sequence, and so forth. The decoding matrix M is shown in Equations (9) and (10) below. Values F and N_i correspond to those used at the encoding step 202. Matrix M has at least 2F+1 columns and N rows.
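 Equation (8) is an ordinary least-squares problem and can be solved per pixel with a standard solver; the sketch below uses a small made-up system (not real pattern data) purely to illustrate the call.

```python
import numpy as np

def solve_pattern_system(M, R):
    """Equation (8): U = argmin_U ||R - M U||. np.linalg.lstsq also covers
    the overdetermined case, where M has N rows and only 2F+1 columns."""
    U, residuals, rank, singular_values = np.linalg.lstsq(M, R, rcond=None)
    return U

# Toy 3x3 system: the exact solution is U = [2, 1, 2].
M = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [1.0, 1.0, 1.0]])
R = np.array([3.0, 4.0, 5.0])
U = solve_pattern_system(M, R)
```

 Because M is fixed for the whole image set, a practical implementation would precompute its pseudoinverse once and apply it to every pixel's radiance vector instead of calling a solver per pixel.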

$M = \begin{bmatrix} 1 & M_1 & & & \\ 1 & & M_2 & & \\ \vdots & & & \ddots & \\ 1 & & & & M_F \end{bmatrix} \quad (9)$

$M_i = \begin{bmatrix} \cos\left(2\pi \frac{0}{\max(N_i,3)}\right) & \sin\left(2\pi \frac{0}{\max(N_i,3)}\right) \\ \cos\left(2\pi \frac{1}{\max(N_i,3)}\right) & \sin\left(2\pi \frac{1}{\max(N_i,3)}\right) \\ \vdots & \vdots \\ \cos\left(2\pi \frac{N_i-1}{\max(N_i,3)}\right) & \sin\left(2\pi \frac{N_i-1}{\max(N_i,3)}\right) \end{bmatrix} \quad (10)$  At step 1202, the raw phases ω_i corresponding to the primary frequencies are computed from vector U as in Equation (11) below. The notation U(n) means the n-th component of U.
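 The construction of Equations (9) and (10) can be sketched as follows; the function name is illustrative.

```python
import numpy as np

def decoding_matrix(N_shifts):
    """Decoding matrix M of Equations (9)-(10): a leading column of ones
    followed by one two-column cos/sin block per primary frequency."""
    F = len(N_shifts)
    N = sum(N_shifts)
    M = np.zeros((N, 2 * F + 1))
    M[:, 0] = 1.0                           # constant (offset) column
    row = 0
    for i, Ni in enumerate(N_shifts):
        k = np.arange(Ni)
        ang = 2.0 * np.pi * k / max(Ni, 3)  # the shifts of Equation (5)
        M[row:row + Ni, 2 * i + 1] = np.cos(ang)
        M[row:row + Ni, 2 * i + 2] = np.sin(ang)
        row += Ni
    return M

# Equation (7) parameters: N = 3+2+2 = 7 rows and 2F+1 = 7 columns.
M = decoding_matrix([3, 2, 2])
```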

$\omega_i = \arctan \frac{U(2i+2)}{U(2i+1)}, \quad i: 1 \ldots F \quad (11)$  At step 1204, the system processor 102 computes an amplitude value a_i for each primary frequency from vector U using Equation (12) below. At step 1206, each amplitude value a_i is compared with a threshold value T_Amp. If any of the amplitude values a_i is below T_Amp, the process proceeds to step 1208. At step 1208, the decoded value is deemed unreliable, the correspondence value is set to ‘unknown’ for the current pixel, and the decoding process for this location finishes. The threshold value T_Amp is set by the designer.

$a_i = \sqrt{U(2i+1)^2 + U(2i+2)^2}, \quad i: 1 \ldots F \quad (12)$  From step 1206, for the pixel locations where all amplitude values a_i are above T_Amp, decoding continues to step 1210 by computing relative phase values ω̃_i of the embedded frequencies using Equation (13) below.

$\tilde{\omega}_i = (\omega_i - \omega_1) \bmod 2\pi, \quad i: 1 \ldots F \quad (13)$  The process of calculating an absolute phase value for each relative phase value is called ‘phase unwrapping’. A relative phase value is the phase value ‘relative’ to the beginning of the current sine period, that is, a value in [0, 2π). An absolute phase value is the phase value measured from the origin of the signal. At this point, the set of relative phase values {ω_i} ∪ {ω̃_i} may be unwrapped using an algorithm.
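 The per-pixel computations of Equations (11) and (12), together with the step 1206 threshold test, can be sketched as below. np.arctan2 replaces the plain arctangent so the quadrant is resolved, and the solution vector and threshold are made-up values for illustration.

```python
import numpy as np

def phases_and_amplitudes(U):
    """Equations (11)-(12): for each primary frequency i, the raw phase is
    arctan(U(2i+2)/U(2i+1)) and the amplitude is the norm of the same pair
    (1-based indexing in the text; 0-based slicing below)."""
    F = (len(U) - 1) // 2
    cos_part = U[1::2][:F]                  # components U(2i+1)
    sin_part = U[2::2][:F]                  # components U(2i+2)
    omega = np.arctan2(sin_part, cos_part) % (2.0 * np.pi)  # in [0, 2*pi)
    amp = np.hypot(cos_part, sin_part)
    return omega, amp

U = np.array([10.0, 3.0, 4.0, 0.0, 2.0, -1.0, 0.0])  # made-up solution vector
omega, amp = phases_and_amplitudes(U)
T_Amp = 1.5                                   # designer-chosen threshold
reliable = bool(np.all(amp > T_Amp))          # step 1206; else 'unknown' (1208)
```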
 At step 1212, computation of the absolute phase unwraps both the raw phases and the dual phases as follows: raw phases ω_i and dual phases ω̃_i are put together in a single set and sorted by frequency, renaming to ν_0 the phase corresponding to the lowest frequency and increasing up to ν_{2F−1}, the phase corresponding to the highest frequency. Equation (14) is applied to obtain the absolute phase values p_i, which correspond to the projector indices.

$$\begin{cases} k_0 \equiv 0, \quad k_i = \left\lfloor \dfrac{t_i\, p_{i-1} - \nu_i}{2\pi} \right\rfloor \ \text{if}\ i > 0 \\[4pt] p_i = \dfrac{2 k_i \pi + \nu_i}{t_i} \end{cases} \qquad (14)$$

In Equation (14), the operator ⌊·⌋ takes the integer part of the argument, and the values i correspond to the frequency values. The values of the embedded frequencies F_i and the values of the primary frequencies f_i are given in Equations (15) and (16) respectively, where f_i corresponds to ω_i and F_i corresponds to $\tilde{\omega}_i$. The value t_i is the frequency of the relative phase that was renamed ν_i.

$$F_i = \begin{cases} 0 & \text{if}\ i = 1 \\ \dfrac{1}{T_1 T_2 \cdots T_i} & \text{if}\ i > 1 \end{cases} \qquad (15)$$

$$f_i = \frac{1}{T_1} + F_i \qquad (16)$$

The Pattern Decode step 502 ends by assigning a single projector index 120 (see FIGS. 7 and 6) to the correspondence value 1102 (see FIG. 11) at the current location in the correspondence image of step 604. The values p_i unwrapped above are already projector indices; if another unwrapping algorithm is used, the absolute phase values must be converted to projector indices by dividing them by the corresponding frequency value t_i. Two possible ways of obtaining a single correspondence value from the multiple projector indices are: use the mean p of the indices corresponding to the primary frequencies as in Equation (17), or set the correspondence to the index of the highest frequency as in Equation (18). In FIG. 12, step 1214 sets the correspondence to the index of the highest frequency.
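The frequency setup of Equations (15)-(16) and the unwrapping cascade of Equation (14) can be sketched as below. This is an illustrative reading, not code from the patent: function names are invented, the relative phases are assumed already sorted by increasing frequency so the lowest-frequency phase needs no unwrapping (k_0 = 0), the frequencies t_i are expressed in radians per projector pixel so the unwrapped values are directly projector indices, and the integer-part operator of Equation (14) is replaced by nearest-integer rounding so the sketch stays stable on noise-free synthetic phases:

```python
import math

def frequencies(T):
    """Equations (15)-(16): given periods T = [T_1, ..., T_F], return the
    embedded frequencies F_i and primary frequencies f_i (1-based i)."""
    F_emb, f_prim, prod = [], [], 1.0
    for i, Ti in enumerate(T, start=1):
        prod *= Ti
        Fi = 0.0 if i == 1 else 1.0 / prod   # Eq. (15)
        F_emb.append(Fi)
        f_prim.append(1.0 / T[0] + Fi)       # Eq. (16)
    return F_emb, f_prim

def unwrap(v, t):
    """Equation (14): unwrap relative phases v[0..2F-1], sorted lowest
    frequency first with frequencies t[0..2F-1], into absolute values p_i."""
    p = []
    for i, (vi, ti) in enumerate(zip(v, t)):
        if i == 0:
            k = 0  # k_0 = 0 by definition: the lowest period spans the pattern
        else:
            # whole periods of frequency t_i elapsed before v_i; Eq. (14)
            # writes the integer part, rounded here for numerical robustness
            k = round((ti * p[i - 1] - vi) / (2 * math.pi))
        p.append((2 * k * math.pi + vi) / ti)
    return p
```

For example, a projector column x observed through three frequencies yields wrapped phases (x·t_i) mod 2π, and the cascade recovers x from every frequency, each estimate sharpened by the finer period above it.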
$$p = \frac{1}{F} \sum_{i=F}^{2F-1} p_i \qquad (17)$$

$$p = p_{2F-1} \qquad (18)$$

The Pattern Decode step 502 of FIG. 5 stops once all pixels in the camera images have been decoded either as projector index values or as 'unknown'. Note that even though projector indices are integer values, correspondence values are usually not integers, because the scene point imaged by a camera pixel may lie anywhere between the discretized projector lines encoded by the indices.

Referring back to FIG. 2, the decoding step 206 continues by using the correspondence image to triangulate points at step 506 of FIG. 5 and generates a 3D Model at step 508 of FIG. 5. Step 506 calculates the intersection of a camera ray and a projector light plane. The camera ray begins at the origin of the camera coordinate system (as shown in FIG. 11) and passes through the current camera pixel center. The projector light plane is the plane that contains the projector line encoded by the projector index 120 and the origin of the projector coordinate system (as shown in FIG. 11). The camera ray extends in the direction of the scene 108 and intersects the indicated plane exactly at the scene point being imaged by the current camera pixel location, as can be seen in FIG. 11. The sought camera ray coincides with the dashed line of the camera light path but has the opposite direction. The projector light plane contains the dashed line representing the projector light path. The intersection 1104 between the projector and camera light paths lies on the scene 108. Once the intersection 1104 is calculated, the result is a 3D point which becomes part of the 3D Model of step 508 in FIG. 5. Step 506 performs this intersection for each correspondence index value 1102 with a value different from 'unknown'. After processing all pixel locations, the 3D Model of step 508 is complete and the decoding step 206 ends.

Another embodiment of the subject technology assigns two sets of projector indices, one for rows and one for columns. Each set is encoded, acquired, and decoded as in the preferred embodiment, but independently of each other, generating two correspondence images at step 504. In this case, 3D points are generated by computing the 'approximate intersection' of the camera ray and the ray defined as the intersection of the two light planes defined by the two correspondences assigned to each camera pixel location. The approximate intersection is defined as the point which minimizes the sum of the square distances to both rays.
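The two triangulation operations described above — intersecting the camera ray with a projector light plane (step 506), and the least-squares 'approximate intersection' of two rays used in the row-and-column embodiment — can be sketched as follows. The helper functions and parameter names are illustrative, not code from the patent:

```python
import numpy as np

def ray_plane(c, d, n, q):
    """Step 506 sketch: the camera ray x = c + s*d meets the projector
    light plane with normal n passing through point q."""
    s = np.dot(n, q - c) / np.dot(n, d)
    return c + s * d

def approximate_intersection(c1, d1, c2, d2):
    """Point minimizing the sum of squared distances to the two rays
    c1 + s*d1 and c2 + t*d2: the midpoint of their closest points."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    b = c2 - c1
    a = np.dot(d1, d2)                # cosine of the angle between the rays
    # Closed-form solution of the two normal equations for s and t
    s = (np.dot(d1, b) - a * np.dot(d2, b)) / (1.0 - a * a)
    t = (a * np.dot(d1, b) - np.dot(d2, b)) / (1.0 - a * a)
    return 0.5 * ((c1 + s * d1) + (c2 + t * d2))
```

When the two rays actually intersect, the approximate intersection coincides with the exact intersection point; with measurement noise it returns the midpoint of the shortest segment joining the rays.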

FIG. 13 is a sample correspondence image 1300 and FIG. 14 shows a sample 3D Model 1400, both generated at the decoding step 206 of FIG. 2.  It will be appreciated by those of ordinary skill in the pertinent art that the functions of several elements may, in alternative embodiments, be carried out by fewer elements, or a single element. Similarly, in some embodiments, any functional element may perform fewer, or different, operations than those described with respect to the illustrated embodiment. Also, functional elements (e.g., modules, databases, interfaces, hardware, computers, servers and the like) shown as distinct for purposes of illustration may be incorporated within other functional elements in a particular implementation.
 All patents, patent applications and other references disclosed herein are hereby expressly incorporated in their entireties by reference. While the subject technology has been described with respect to preferred embodiments, those skilled in the art will readily appreciate that various changes and/or modifications can be made to the subject technology without departing from the spirit or scope of the invention as defined by the appended claims.
Claims (7)
1. A method for measuring shapes comprising the steps of:
encoding light source coordinates of a scene using dual frequency sinusoidal fringe patterns;
modulating the light source with the dual frequency sinusoidal fringe patterns; and
recording images of the scene while the scene is being illuminated by the modulated light source.
2. A method as recited in claim 1 , further comprising the step of extracting coded coordinates from the recorded images by using a closed form decoding algorithm.
3. A system for three-dimensional shape measurement comprising:
a light source for projecting dual frequency sinusoidal fringe patterns onto a scene;
a camera for recording images of the scene under illumination; and
a system for processing the recorded images including: a first module for encoding light source coordinates using dual frequency sinusoidal fringe patterns; a second module for extracting the encoded coordinates from the recorded images; and a third module for computing the scene shape using a geometric triangulation method.
4. A system as recited in claim 3 , further comprising a turntable, wherein the scene is an object on top of the turntable, the light source projects fringe patterns and the camera records images of the object at different rotations of the turntable while keeping the object fixed on top of the turntable,
the second module decodes all recorded images, and
the third module computes a shape of the object from all captured turntable rotations and generates a 3D model of the object.
5. A method for obtaining a shape of a target object comprising the steps of:
projecting dual frequency fringe patterns on the target object;
recording images of the illuminated target object;
encoding locations in each projector image plane into the projected dual frequency fringe patterns while images are recorded;
decoding the images to recover relative phase values for the projected dual frequency fringe patterns' primary and dual frequencies;
unwrapping the relative phase values into absolute phases;
converting the absolute phases back to projector image plane locations; and
creating a correspondence image based on a relation between camera pixels and decoded projector locations, wherein the correspondence image represents a measured shape of the target object.
6. A method as recited in claim 5 , further comprising the step of creating a 3D model of the target object based on the correspondence images together with a geometric triangulation of the target object.
7. A method as recited in claim 5 , further comprising the step of using direct phase unwrapping.
Priority Applications (2)
Application Number  Priority Date  Filing Date  Title 

US201462055835P true  20140926  20140926  
US14/866,245 US20160094830A1 (en)  20140926  20150925  System and Methods for Shape Measurement Using Dual Frequency Fringe Patterns 
Related Child Applications (1)
Application Number  Title  Priority Date  Filing Date 

US16/019,000 Division US20180306577A1 (en)  20140926  20180626  System and Methods for Shape Measurement Using Dual Frequency Fringe Pattern 
Publications (1)
Publication Number  Publication Date 

US20160094830A1 true US20160094830A1 (en)  20160331 
Family
ID=55585890
Citations (7)
Publication number  Priority date  Publication date  Assignee  Title 

US6438148B1 (en) *  19981215  20020820  Nortel Networks Limited  Method and device for encoding data into high speed optical train 
US20050094700A1 (en) *  20031031  20050505  Industrial Technology Research Institute  Apparatus for generating a laser structured line having a sinusoidal intensity distribution 
US20070206204A1 (en) *  20051201  20070906  Peirong Jia  Fullfield threedimensional measurement method 
US20100103486A1 (en) *  20070418  20100429  Seereal Technologies S.A.  Device for the Production of Holographic Reconstructions with Light Modulators 
US20140018676A1 (en) *  20120711  20140116  Samsung Electronics Co., Ltd.  Method of generating temperature map showing temperature change at predetermined part of organ by irradiating ultrasound wave on moving organs, and ultrasound system using the same 
US9715216B2 (en) *  20101006  20170725  Aosys Limited  Holograms and fabrication processes 
US9791542B2 (en) *  20140812  20171017  Abl Ip Holding Llc  System and method for estimating the position and orientation of a mobile communications device in a beaconbased positioning system 

Cited By (11)
Publication number  Priority date  Publication date  Assignee  Title 

US20150070473A1 (en) *  20130912  20150312  Hong Kong Applied Science and Technology Research Institute Company Limited  ColorEncoded Fringe Pattern for ThreeDimensional Shape Measurement 
US9459094B2 (en) *  20130912  20161004  Hong Kong Applied Science and Technology Research Institute Company Limited  Colorencoded fringe pattern for threedimensional shape measurement 
US20160047890A1 (en) *  20140812  20160218  Abl Ip Holding Llc  System and method for estimating the position and orientation of a mobile communications device in a beaconbased positioning system 
US9989624B2 (en)  20140812  20180605  Abl Ip Holding Llc  System and method for estimating the position and orientation of a mobile communications device in a beaconbased positioning system 
US9594152B2 (en) *  20140812  20170314  Abl Ip Holding Llc  System and method for estimating the position and orientation of a mobile communications device in a beaconbased positioning system 
US9791542B2 (en)  20140812  20171017  Abl Ip Holding Llc  System and method for estimating the position and orientation of a mobile communications device in a beaconbased positioning system 
US9791543B2 (en)  20140812  20171017  Abl Ip Holding Llc  System and method for estimating the position and orientation of a mobile communications device in a beaconbased positioning system 
US9846222B2 (en)  20140812  20171219  Abl Ip Holding Llc  System and method for estimating the position and orientation of a mobile communications device in a beaconbased positioning system 
US10001547B2 (en)  20140812  20180619  Abl Ip Holding Llc  System and method for estimating the position and orientation of a mobile communications device in a beaconbased positioning system 
CN106157321A (en) *  20160729  20161123  长春理工大学  Real point light source position calculation method based on plane surface high-dynamic range image 
CN106767533A (en) *  20161228  20170531  深圳大学  Fringe projection profilometry-based efficient phase position three-dimensional mapping method and system 
Also Published As
Publication number  Publication date 

US20180306577A1 (en)  20181025 
Legal Events
Date  Code  Title  Description 

AS  Assignment 
Owner name: BROWN UNIVERSITY, RHODE ISLAND
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:TAUBIN, GABRIEL;MORENO, DANIEL;REEL/FRAME:036670/0004
Effective date: 20150924

STCB  Information on status: application discontinuation 
Free format text: ABANDONED  FAILURE TO RESPOND TO AN OFFICE ACTION 