US20160094830A1 - System and Methods for Shape Measurement Using Dual Frequency Fringe Patterns - Google Patents
- Publication number
- US20160094830A1 (U.S. application Ser. No. 14/866,245)
- Authority
- US
- United States
- Prior art keywords
- images
- scene
- projector
- patterns
- fringe patterns
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
- H04N13/0048—
- G—PHYSICS
- G01—MEASURING; TESTING
- G01B—MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
- G01B11/00—Measuring arrangements characterised by the use of optical techniques
- G01B11/24—Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
- G01B11/25—Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object
- G01B11/2536—Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object using several gratings with variable grating pitch, projected on the object with the same angle of incidence
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T7/0075—
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G06T7/521—Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light
- H04N13/0253—
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
- G06T2207/10012—Stereo images
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N2013/0074—Stereoscopic image analysis
- H04N2013/0081—Depth or disparity estimation from stereoscopic image signals
Landscapes
- Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Theoretical Computer Science (AREA)
- Optics & Photonics (AREA)
- Length Measuring Devices By Optical Means (AREA)
- Computer Graphics (AREA)
- Geometry (AREA)
- Software Systems (AREA)
Abstract
A method obtains the shape of a target by projecting and recording images of dual frequency fringe patterns. Locations in each projector image plane are encoded into the patterns and projected onto the target while images are recorded. The resulting images show the patterns superimposed onto the target. The images are decoded to recover relative phase values for the patterns' primary and dual frequencies. The relative phases are unwrapped into absolute phases and converted back to projector image plane locations. The relation between camera pixels and decoded projector locations is saved as a correspondence image representing the measured shape of the target. Correspondence images together with a geometric triangulation method create a 3D model of the target. Dual frequency fringe patterns have a low frequency embedded into a high frequency sinusoidal; both frequencies are recovered in closed form by the decoding method, thus enabling direct phase unwrapping.
Description
- This application claims priority to U.S. Provisional Patent Application No. 62/055,835, filed Sep. 26, 2014, which is incorporated herein by reference.
- The subject technology relates generally to measuring 3D shapes using structured light patterns, and more particularly, to computing depth values using dual frequency sinusoidal fringe patterns.
- Structured light methods are widely used as non-contact 3D scanners. Common applications of this technology are industrial inspection, medical imaging, and cultural heritage preservation. These scanners use one or more cameras to image the scene while it is being illuminated by a sequence of known patterns. One projector and a single camera is a typical setup, where the projector projects a fixed pattern sequence while the camera records one image for each projected pattern. The pattern sequence helps to establish correspondences between projector and camera coordinates. Such correspondences, in conjunction with a triangulation method, allow recovery of the scene shape. The pattern set determines many properties of a structured light 3D scanner, such as precision and scanning time.
- A general purpose 3D scanner must produce high quality results for a variety of materials and shapes to be of practical use. In particular, the general purpose 3D scanner must be robust to global illumination effects and source illumination defocus, or measurement errors would render it unsuitable for scanning non-Lambertian surfaces. Global illumination is defined as all light contributions measured at a surface point that are not directly received from the primary light source. Common examples are interreflections and subsurface scattering. Illumination defocus is caused by the light source's finite depth of field. It is known that high frequency structured light patterns are robust to such issues.
- However, most existing structured light based scanners are not robust to global illumination and defocus effects, and the few that are robust either use pattern sequences of hundreds of images or fail to provide a closed form decoding algorithm. In either case, the existing structured light based scanners cannot measure scene shapes as fast as required by many applications.
- In view of the above, a new shape measurement system and method, based on structured light patterns robust to global illumination effects and source illumination defocus, including fast encoding and decoding algorithms, is required.
- The subject technology provides a 3D measurement method and system for real world scenes comprising a variety of materials.
- The subject technology has a measurement speed which is significantly faster than existing structured light 3D shape measurement techniques, without loss of precision.
- The subject technology provides a closed form decoding method for the projected structured light patterns which improves on state of the art methods.
- The subject technology also allows for simultaneous measurements of multiple objects.
- One embodiment of the subject technology is directed to a method for measuring shapes using structured light. The method includes encoding light source coordinates of a scene using dual frequency sinusoidal fringe patterns, modulating the light source with the dual frequency sinusoidal fringe patterns, and recording images of the scene while the scene is being illuminated by the modulated light source. The method further includes extracting the coded coordinates from the recorded images by using a closed form decoding algorithm.
- Another embodiment of the subject technology is directed to a system for three-dimensional shape measurement including a first module for encoding light source coordinates using dual frequency sinusoidal fringe patterns, a light source for projecting such fringe patterns onto a scene while recording images of the scene under this illumination, a second module for extracting the encoded coordinates from the recorded images, and a third module for computing the scene shape using a geometric triangulation method.
- Yet another embodiment of the subject technology is directed to a system for three-dimensional shape measurement including at least one projector, at least one camera, and at least one system processor. The system is configured to generate dual frequency sinusoidal fringe pattern sequences, project the generated fringe patterns onto a scene, capture images of the scene illuminated by the dual frequency fringe patterns, and decode the images to provide for three-dimensional shape measurement of the scene. In a preferred embodiment, the system processor includes one or more GPUs (Graphical Processing Units).
- Still another embodiment of the subject technology is directed to a system for three-dimensional shape measurement of a single object including at least one projector, at least one camera, at least one system processor, and a turntable. The system is configured to generate dual frequency sinusoidal fringe pattern sequences, project the generated fringe patterns onto a scene, and capture images of the object sitting on top of the turntable illuminated by the fringe patterns. The system projects fringe patterns and records images of the object under this illumination at different rotations of the turntable while keeping the object fixed on top of the turntable. The system also decodes all recorded images, computes the object shape from all captured turntable rotations, and generates a 3D model of the object.
- Additional aspects and/or advantages will be set forth in part in the description and claims which follow and, in part, will be apparent from the description and claims, or may be learned by practice of the invention. No single embodiment need exhibit each and every object, feature, or advantage, as it is contemplated that different embodiments may have different objects, features, and advantages.
- For a more complete understanding of the invention, reference is made to the following description and accompanying drawings.
- FIG. 1A is a schematic diagram illustrating one embodiment of the shape measurement system.
- FIG. 1B is a schematic diagram illustrating the system processor of one embodiment of the shape measurement system.
- FIG. 2 is a flow chart of a preferred embodiment of the subject technology.
- FIG. 3 is a flow chart of the acquisition step of FIG. 2.
- FIG. 4 is a plot of acquisition timings in accordance with the subject technology.
- FIG. 5 is a flow chart of the decoding step of FIG. 2.
- FIG. 6 is a flow chart of the decoding step in an alternative embodiment.
- FIG. 7 is an example of index assignment to a square pixel array in accordance with the subject technology.
- FIG. 8 is an example of index assignment to a diamond pixel array in accordance with the subject technology.
- FIG. 9 is an example of a dual frequency pattern sequence in accordance with the subject technology.
- FIG. 10 is an example of an image captured while the scene is illuminated by a dual frequency fringe pattern in accordance with the subject technology.
- FIG. 11 shows the relation between a correspondence value and a projector index to illustrate the concept of triangulation in accordance with the subject technology.
- FIG. 12 is a flow chart of single pixel decoding in accordance with the subject technology.
- FIG. 13 is an example of a correspondence image generated by the subject technology.
- FIG. 14 is an example of a 3D model generated by the subject technology.
- The subject technology overcomes many of the prior art problems associated with generating 3D models. The advantages and other features of the technology disclosed herein will become more readily apparent to those having ordinary skill in the art from the following detailed description of certain preferred embodiments taken in conjunction with the drawings, which set forth representative embodiments of the present invention and wherein like reference numerals identify similar structural elements.
- In brief overview, the subject technology includes a system that obtains the shape of a target object or scene by projecting and recording images of dual frequency fringe patterns. The system determines locations in image planes that are encoded into patterns and projected onto the target while images are being recorded. The resulting images show the patterns superimposed onto the target. The images are decoded to recover relative phase values for the patterns' primary and dual frequencies. The relative phases are unwrapped into absolute phases and converted back to projector image plane locations. The relation between camera pixels and decoded projector locations is saved as a correspondence image representing the measured shape of the target. Correspondence images together with a geometric triangulation method create a 3D model of the target. Dual frequency fringe patterns have a low frequency embedded into a high frequency sinusoidal. Both frequencies are recovered in closed form by the decoding method, thus enabling direct phase unwrapping. Only high frequency fringes are visible in the pattern images, making the result more robust, for example, with respect to source illumination defocus and global illumination effects. Thus, the subject technology is applicable to shape measurement of targets of a variety of materials.
- Referring now to FIG. 1A, a schematic diagram illustrates a shape measurement system 100. The shape measurement system 100 includes a system processor 102 connected to a light source 104, such as a projector, and a camera 106. The light source 104 is controlled by the system processor 102 to project fringe patterns onto one or more objects, herein referred to as a scene 108. The camera 106 is controlled by the system processor 102 to record scene images. Both the camera 106 and the light source 104 are oriented towards the target scene 108 for which the 3D shape is being measured. Preferably, the camera 106, the light source 104, and the scene 108 remain static while the shape measurement is being performed.
- Referring now to FIG. 1B, a schematic diagram illustrating the system processor 102 is shown. As illustration, the system processor 102 typically includes a central processing unit 110 including one or more microprocessors in communication with memory 112 such as random access memory (RAM) and a magnetic hard disk drive. An operating system is stored on the memory 112 for execution on the central processing unit 110. The hard disk drive is typically used for storing data, applications and the like utilized by the applications. Although not shown for simplicity, the system processor 102 includes mechanisms and structures for performing I/O operations and other typical functions. It is envisioned that the system processor 102 can utilize multiple servers in cooperation to facilitate greater performance and stability by distributing memory and processing, as is well known.
- The memory 112 includes several modules for performing the operations of the subject technology. An encoding module 114, an acquisition module 116, and a decoding module 118 all interact with data stored in a dual frequency patterns database 120, an images database 122, and other places.
- The flow charts herein illustrate the structure or the logic of the present technology, possibly as embodied in computer program software for execution on a particular device such as the system processor 102 or a modified computer, digital processor or microprocessor. Those skilled in the art will appreciate that the flow charts illustrate the structures of the computer program code elements, including logic circuits on an integrated circuit as the case may be, that function according to the present technology. As such, the present technology may be practiced by a machine component that renders the program code elements in a form that instructs equipment to perform a sequence of function steps corresponding to those shown in the flow charts.
- Referring now to FIG. 2, there is illustrated a flowchart 200 depicting a process for measuring the shape of a scene 108. Generally, once the process begins, a set of sinusoidal fringe patterns is generated by the encoding module 114 of the system processor 102 in an encoding step 202. Second, during an acquisition step 204, the light source 104 is used to project the generated patterns, one by one, onto the scene 108. The camera 106 records an image of the scene 108 for each projected fringe pattern, which is acquired by the acquisition module 116 for storage. Preferably, the pattern projection and image capture times are synchronized in such a way that a single pattern is projected while each image is being captured. No more than one image is captured during each pattern projection time. Finally, all the captured images are processed by the decoding module 118 in a decoding step 206 to generate a digital representation of the scene 108 related to the measured shape.
- Encoding Step 202
- The encoding step 202 encodes light source coordinates into dual frequency fringe patterns. In a preferred embodiment, the light source 104 projects 2-dimensional grayscale images. In this case, the light source coordinates are integer index values which identify a line in the light source 104 image plane. There exist different pixel array organizations among commercial projectors. FIG. 7 and FIG. 8 are examples of square and diamond pixel arrays respectively. In the square pixel array case, a projector index 120 identifies a pixel column (shown in bold in FIG. 7). In the diamond pixel array case, a projector index 120 identifies a diagonal line from the top-left corner to the bottom-right corner of the array (shown in bold in FIG. 8). The encoding step 202 generates a sequence of 2-dimensional grayscale images where the value at each pixel encodes the corresponding projector index 120 in the array.
- Each projector index 120 is encoded using Equation (1) below, where r is an output vector, o and a are constant values, S and A are matrices, T and s are also vectors, and p is the projector index 120. The length of vector r is equal to the number of images to be generated in the sequence. The first component of r is the encoded pixel value of index p in the first image in the sequence, the second component corresponds to the encoded pixel value in the second image in the sequence, and so forth. Computing a vector r for each projector index 120 in the projector pixel array, and filling the image sequence pixel values using the components of r as described, concludes the encoding step 202. The output of encoding step 202 is the sequence of images generated. The values required to evaluate Equation (1) are explained in detail in the following paragraphs.
- Referring now to FIG. 9, a dual frequency pattern sequence 900 is shown. The dual frequency pattern sequence 900 is made of sinusoidal fringe patterns of F primary frequencies and phase shifts of the primary frequencies. A primary pattern frequency is the spatial frequency of the fringes in the pattern image. Each pattern also has an embedded dual frequency which is not visible in the image but is extracted by the pattern decode step 602 (see FIG. 6 and operation 1210 in FIG. 12). The value F and the number of shifts are chosen by the designer. However, F should be greater than one. The designer must also choose F real values {T1, T2, . . . , TF} which are used to compute vector T using Equation (2) below. All Ti values must be greater than one. The length of vector T is equal to the number of primary frequencies.

$$T = \left[ \frac{1}{T_1}, \frac{1}{T_1 T_2}, \ldots, \frac{1}{T_1 T_2 \cdots T_F} \right]^{\mathsf{T}} \in \mathbb{R}^F \qquad (2)$$
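- As an illustration only, a minimal NumPy sketch of Equation (2) follows; the function name and array layout are ours, and the reading of the components as the F primary pattern frequencies is an assumption based on the surrounding text.

```python
import numpy as np

def frequency_vector(T_values):
    """Equation (2): T = [1/T1, 1/(T1*T2), ..., 1/(T1*T2*...*TF)].

    T_values are the designer-chosen reals {T1, ..., TF}, all greater
    than one. Each component is presumably one of the F primary
    pattern frequencies, from 1/T1 downward.
    """
    T_values = np.asarray(T_values, dtype=float)
    assert np.all(T_values > 1.0), "all Ti values must be greater than one"
    return 1.0 / np.cumprod(T_values)

# Example with the parameters later listed in Equation (7):
print(frequency_vector([64.0, 5.0, 5.0]))  # [1/64, 1/320, 1/1600]
```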
-
- The designer must also choose a set {N1, N2, . . . NF} of frequency shifts. Each integer Ni in the set must be equal or greater than 2 and the set must satisfy Equation (4) below, A typical selection is to make N1=3 and Ni=2 for i>1.
-
- Vector s is built by stacking altogether the shifts of each frequency as follows: N1 shifts of F1, N2 shifts of, and so forth. The length of vector s is N. Let si be a vector of length Ni containing the shifts of Fi; then, vectors and each si are defined as shown in Equation (5) below.
-
- S is a block diagonal matrix matching the shift vector s. Matrix S has F columns and N rows and is given in Equation (6) below.
-
- Finally, the offset value a and the amplitude value a are constants proportional to the
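- Equations (3) through (6) are not reproduced in this text, so the sketch below only mirrors the written description of s and S: the Ni shifts of each frequency are stacked into s, and S is a block diagonal matrix with F columns and N rows. The equally spaced shift values k/max(Ni, 3) are our assumption, chosen to match the angles of the decoding matrix Mi in Equation (10); the condition that N = N1 + . . . + NF be at least 2F+1 (matrix M below is stated to have at least 2F+1 columns, so the linear system of Equation (8) needs at least that many images) is presumably what Equation (4) enforces.

```python
import numpy as np

def shift_vector_and_S(N_shifts):
    """Builds the stacked shift vector s (length N) and the block
    diagonal matrix S (N rows, F columns) described for Equations (5)
    and (6).

    Assumption: the Ni shifts of frequency i are the equally spaced
    values k / max(Ni, 3), k = 0..Ni-1, mirroring the angles of the
    decoding matrix Mi in Equation (10).
    """
    N = sum(N_shifts)
    s_parts, S_cols = [], []
    row = 0
    for Ni in N_shifts:
        assert Ni >= 2, "each Ni must be equal to or greater than 2"
        s_parts.append(np.arange(Ni) / max(Ni, 3))
        col = np.zeros(N)
        col[row:row + Ni] = 1.0  # rows belonging to frequency i
        S_cols.append(col)
        row += Ni
    return np.concatenate(s_parts), np.column_stack(S_cols)

s, S = shift_vector_and_S([3, 2, 2])  # N1=3, N2=2, N3=2, so N=7
print(s)        # [0, 1/3, 2/3, 0, 1/3, 0, 1/3]
print(S.shape)  # (7, 3)
```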
light source 104 dynamic range. For instance, values o=127 and a=127 would generate patterns images in the range [0, 255]. - An example sequence generated using this method is shown in
FIG. 9 . In the example, the following parameters were used: -
F=3, T 1=64, T 2=5, T3=5, N1=3, N2=2, N3=2 (7) - Still referring to
FIG. 9 , patterns 902 a-c are the 3 shifts of the first primary frequency,patterns patterns - Acquisition.
Step 204 - Referring again to FIG, 2, the
acquisition step 204 is described in more detail below. Referring additionally toFIG. 3 , aflowchart 300 implementing theacquisition step 204 is shown. Once started, thelight source 104 projects a dual frequency pattern onto thescene 108. The dual frequency patterns are stored in adual frequency database 120 in thememory 112. Atstep 304, thesystem processor 102 commands thecamera 106 to capture one image and saves the image in an image database in thememory 112 for later processing. Atstep 306, thesystem processor 102 determines whether the projected pattern is the last in the sequence, in which case theacquisition step 204 ends. If not, thesystem processor 102 jumps to step 302 and advances to the next pattern in the sequence. In other words, thefirst time step 302 is executed, the first pattern in the sequence is projected; each additional time, the next fringe pattern in the sequence is projected. -
Steps FIG. 4 , which illustrates a projector andcamera timing plot 400. Thetiming plot 400 includes theprojector timing 402 synchronized with thecamera timing 404. The moment that the projection of the first pattern in the sequence begins is designated t0. Each pattern is projected for a period of time tp (i.e., projection of the first pattern stops at time t0+tp). Capture by the camera must begin after a small delay td and continue for a time tc (i.e., the first image capture begins at t0−td and finish at t0+td+tc). Preferably, there is a delay td also between the projection end of a pattern and the projection beginning of the next one (i.e., projection of the second pattern begins at t1=t0+tp+td). The projection start of pattern i ∈ [1, N] is computed as ti=t0+(i−1)*(tp+td), image i capture begins at ti+td and ends at ti+td+te, projection of pattern i finishes at ti+tp. - Referring now to
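- The timing rules above translate directly into a schedule. A small sketch follows; the function name and the example timing values are illustrative only.

```python
def acquisition_schedule(t0, t_p, t_d, t_c, num_patterns):
    """Projector and camera timings of FIG. 4: pattern i (1-based) is
    projected from ti = t0 + (i-1)*(t_p + t_d) until ti + t_p, and
    image i is captured from ti + t_d until ti + t_d + t_c. For one
    pattern per exposure, t_d + t_c should not exceed t_p."""
    events = []
    for i in range(1, num_patterns + 1):
        ti = t0 + (i - 1) * (t_p + t_d)
        events.append({
            "pattern": i,
            "project": (ti, ti + t_p),
            "capture": (ti + t_d, ti + t_d + t_c),
        })
    return events

# Example: 7 patterns, 16 ms projection, 1 ms delay, 12 ms exposure.
for event in acquisition_schedule(0.0, 0.016, 0.001, 0.012, 7):
    print(event)
```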
FIG. 10 , anexemplary image 1000 captured byacquisition step 204 of ascene 108 by projecting a dualfrequency fringe pattern 902 a is shown. Such images for dual frequency fringe patterns 902 a-g would be saved inimage database 122. - Decoding
Step 206 - Referring now to
FIG. 5 , aflowchart 500 illustrating the details of thedecoding step 206 is shown. During thedecoding step 206, thesystem processor 102 processes a set of captured images at step 502 (e.g., pattern decoding), creates a correspondence image atstep 504, and a 3D Model atstep 508. In a preferred embodiment, thedecoding step 206 includes adecoding pattern step 502, and atriangulation step 506. - Referring now to
FIG. 6 , another embodiment implements thedecoding step 206 by only performing adecoding pattern step 602 and acorrespondence Image step 604. - Referring again to
FIG. 5 , the correspondence image created atstep 504 is a matrix with the same number of columns and rows as the input images. Referring additionally toFIG. 11 , an example 1100 of triangulation in accordance with the subject technology is shown. InFIG. 11 , the concept of triangulation is shown by the relationship between a correspondence value and a projector index, - Each location in the matrix, called a pixel, contains a
correspondence value 1102. As shown inFIG. 11 , theexample correspondence value 1102 is five. Thedecoding step 206 creates a correspondence image atstep 504 by setting each correspondence value equal to the projector index 120 (seeFIGS. 7, 8 and 11 ), which is the index assigned to the projector pixel of aprojector image 1104, which illuminated thepoint 1104 in thescene 104 imaged by thecamera pixel 1102 in the same location as the correspondence value. - In other words with respect to the example in
FIG. 11 , ascene point 1104 is illuminated by aprojector pixel 1106 with aprojector index 120 equal to 5. Thesame scene point 1104 is being imaged by acamera pixel 1102 in column i and row j. Therefore, thecorrespondence value 1102 at column i and row j in acorrespondence image 1108 of thecamera 106 will be set to a value equal to 5. Some pixels in thecorrespondence image 1108 cannot be assigned to avalid projector index 120, either because thedecoding step 206 cannot identify reliably a corresponding projector pixel, or, because the point imaged at that location is not illuminated by thelight source 104. In both of these cases, the correspondence value is set to ‘unknown’. - The pattern decode
step 502 computes a correspondence value for each pixel in thecorrespondence image 1108 independently. Referring additionally toFIG. 12 , aflowchart 1200 detailing thepattern decode step 502 is illustrated. - Referring to
FIG. 12 ,step 1202 computes a relative phase value of the primary frequencies, called raw phase. Subsequently step 1210 computes a relative phase value of the dual frequencies, called dual phase. Additionally step 1212 calculates an absolute phase value. The absolute phase value maps to a projector index. - Referring in more detail to
FIG. 12 , exemplary logic for generation of each correspondence is shown. The logic ofFIG. 12 is executed in parallel by thepattern decode step 502 for each location in the correspondence image ofstep 504. - At
step 1202 the raw phase value for the primary frequencies is computed by solving the linear system in Equation (8) below, where U and R are vectors and M is a fixed matrix. -
- Vector R is called radiance vector, a vector built from the pixel values from the captured image set. The length of R is N, the number of images in the set. The first component of the radiance vector has the pixel value at the pixel location being decoded in the first image of the sequence. The second component has the pixel value at the same location in the second image of the sequence, and so forth. The
decoding matrix 114 is shown in Equations (9) and (10) below. Values F and Ni corresponds to those used at the encoding step 2020. Matrix M has at least 2F+1 columns and N rows. -
- At
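- A sketch of building M and solving Equation (8) in the least-squares sense follows. The placement of the Mi blocks in disjoint column pairs beside a first column of ones is our reading of Equation (9); it yields the 2F+1 columns stated above.

```python
import numpy as np

def decoding_matrix(N_shifts):
    """Equations (9)-(10): the fixed matrix M with N rows and 2F+1
    columns. Row k of block Mi is
    [cos(2*pi*k / max(Ni, 3)), -sin(2*pi*k / max(Ni, 3))]."""
    F, N = len(N_shifts), sum(N_shifts)
    M = np.zeros((N, 2 * F + 1))
    M[:, 0] = 1.0  # the column of ones in Equation (9)
    row = 0
    for i, Ni in enumerate(N_shifts):
        ang = 2.0 * np.pi * np.arange(Ni) / max(Ni, 3)
        M[row:row + Ni, 1 + 2 * i] = np.cos(ang)
        M[row:row + Ni, 2 + 2 * i] = -np.sin(ang)
        row += Ni
    return M

def solve_U(M, R):
    """Equation (8): least-squares solution of M @ U = R, where R is
    the radiance vector of one pixel across the N captured images."""
    U, *_ = np.linalg.lstsq(M, R, rcond=None)
    return U
```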
step 1202, the raw phase's ωi corresponding to the primary frequencies are computed from vector U as in Equation (11) below. The notation U(n) means the n-component of U. -
- At
step 1204, thesystem processor 102 computes an amplitude value a; for each primary frequency from vector U using Equation (12) below. Atstep 1206, each amplitude value ai is compared with a threshold value TAmp. If some of the amplitude values ai are below TAmp, the process proceeds to step 1208. Atstep 1208, the decoded value is unreliable and the correspondence value is set to ‘unknown’ for the current pixel, and the decoding process for this location finishes. The threshold value Tamp is set by the designer. -
a i=√{square root over (U(2i+1)2 +U(2i+2)2)}{square root over (U(2i+1)2 +U(2i+2)2)}, i:1 . . . F (12) - From
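- A per-pixel sketch of Equations (11) and (12) together with the amplitude test of steps 1204 through 1208 follows. The indexing matches the column layout of decoding_matrix() above rather than the patent's own U(n) numbering, arctan2 is used so each raw phase lands in [0, 2π), and the threshold value is illustrative.

```python
import numpy as np

T_AMP = 5.0  # designer-chosen threshold; this value is illustrative

def raw_phases_and_amplitudes(U):
    """Equations (11)-(12) for one pixel. With decoding_matrix()
    above, U[0] is the offset and the cosine/sine coefficients of
    primary frequency i sit at U[2*i+1] and U[2*i+2] (0-based i)."""
    cos_c, sin_c = U[1::2], U[2::2]
    omega = np.mod(np.arctan2(sin_c, cos_c), 2.0 * np.pi)  # in [0, 2*pi)
    amplitude = np.hypot(cos_c, sin_c)
    return omega, amplitude

def decode_pixel(U):
    omega, amplitude = raw_phases_and_amplitudes(U)
    if np.any(amplitude < T_AMP):
        return None  # step 1208: mark this pixel 'unknown'
    return omega     # step 1210 continues from these raw phases
```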
step 1206, for the pixel locations where all amplitude value ai are above TAmp decoding continues to step 1210 by computing relative phase values {tilde over (ω)}i of the embedded frequencies using Equation (13) below. -
(13) - The process of calculating an absolute phase value for each relative phase value is called ‘phase unwrapping’. A relative phase value is the phase value ‘relative’ to the current sine period beginning that is a value in [0, 2π).|d2] An absolute phase value is the phase value measured from the origin of the signal. At this point, the set of relative phase values {ω} ∪ {{tilde over (ω)}1} may be unwrapped using an algorithm.
- At
step 1212, computation of the absolute phase unwraps both the raw phases and the dual phases as follows: raw phases ωi and dual phases {tilde over (ω)}i are put altogether in a single set and sorted by frequency, renaming to ν0 the phase corresponding to the lowest frequency and increasing until ν2F−1, the phase corresponding to the highest frequency. Equation (14) is applied to obtain the absolute phase values pi which correspond to the projector indices, -
- in Equation (14), the operator └·┘ takes the integer part of the argument, and the values i correspond to the frequency values. The values of the embedded frequencies Fi and the values of the primary frequencies fi are given in Equations (15) and (16) respectively, where fi corresponds to ωi and Fi corresponds to {tilde over (ω)}i. The value ti corresponds to the frequency of the relative phase that was renamed to νi.
-
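- Equations (13) through (16) are not reproduced in this text, so the exact formula of Equation (14) cannot be restated here. The following is a generic coarse-to-fine unwrapping sketch in the spirit of the description (phases sorted by frequency, the lowest frequency anchoring the absolute position, an integer part taken at each refinement); it is a stand-in, not the patent's closed form.

```python
import numpy as np

def unwrap_coarse_to_fine(nu, t):
    """Generic stand-in for the unwrapping of Equation (14).

    nu: relative phases nu_0..nu_{2F-1} in [0, 2*pi), sorted by
        increasing frequency as described at step 1212.
    t:  the matching frequencies t_i in cycles per projector pixel;
        t[0] must be low enough that a single period spans the whole
        projector index range, so its phase is already absolute.
    Returns one projector index estimate per frequency.
    """
    p = nu[0] / (2.0 * np.pi * t[0])
    estimates = [p]
    for i in range(1, len(nu)):
        frac = nu[i] / (2.0 * np.pi)         # position within one period
        k = np.floor(p * t[i] - frac + 0.5)  # whole periods elapsed
        p = (k + frac) / t[i]                # refined index estimate
        estimates.append(p)
    return np.array(estimates)
```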
Pattern Decode step 502 ends assigning a single projector index 120 (seeFIGS. 7 and 6 ) to the correspondence value 1102 (seeFIG. 11 ) at the current location in the correspondence image ofstep 604. The values pi unwrapped above are already projector indices, if another unwrapping algorithm is used the absolute phase values must be converted to projector indices dividing them by the corresponding frequency value ti. Two possible ways of getting a single correspondence value from the multiple projector indices are: use the mean p of the indices corresponding to the primary frequencies as in Equation (17), or, set the correspondence to the index of the highest frequency as in Equation (18). InFIG. 12 ,step 1214 sets the correspondence to the index of the highest frequency. -
- The
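- Equations (17) and (18) are described only in words above; a direct transcription follows. The assumption that the F primary frequencies, being the highest, occupy the tail of the sorted set is ours.

```python
import numpy as np

def final_correspondence(p, F):
    """p holds the per-frequency projector indices sorted by
    increasing frequency, so the last F entries are assumed to belong
    to the primary frequencies."""
    mean_of_primaries = float(np.mean(p[-F:]))  # Equation (17)
    highest_frequency = float(p[-1])            # Equation (18), step 1214
    return highest_frequency, mean_of_primaries
```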
Pattern Decode step 502 ofFIG. 5 stops once all pixels in the camera images have been decoded either as projector index values or as ‘unknown’. Note that even when projector indices are integer values, correspondence values are usually not integers because the scene point imaged by a camera pixel could be in any place between the discetized projector lines encoded by the indices. - Referring back to
FIG. 2 , thedecoding step 206 continues by using the correspondence image to triangulate points atstep 506 ofFIG. 5 and generates a 3D Model atstep 508 ofFIG. 5 . Step 506 calculates the intersection of a camera ray and as projector light plane. The camera ray begins at the origin of the camera coordinate system (as shown inFIG. 11 ) and passes through the current camera pixel center. The projector light plane is the plane that contains the projector line encoded by theprojector index 120 and the origin of the projector coordinate system (as shown inFIG. 11 ). - The camera ray extends in the direction of the
scene 108 and intersects the indicated plane exactly on the scene point being imaged by the current camera pixel location as can he seen inFIG. 11 . The sought camera ray coincides with the dashed line of the camera light path but it has opposite direction. The projector light plane contains the dashed line representing the projector light path. Theintersection 1104 between the projector and camera light paths is on thescene 108. Once theintersection 1104 is calculated, the result is a 3D point which becomes part of the 3D Model ofstep 508 inFIG. 5 . Step 506 performs this intersection for eachcorrespondence index value 1102 with a value different from ‘unknown’. After processing all pixel locations the 3D Model ofstep 508 is complete and thedecoding step 206 ends. - Another embodiment of the subject technology assigns two sets of projector indices, one for rows and one for columns. Each set is encoded, acquired, and decoded as in the preferred embodiment but independently of each other, generating two correspondence images at
Another embodiment of the subject technology assigns two sets of projector indices, one for rows and one for columns. Each set is encoded, acquired, and decoded as in the preferred embodiment, but independently of the other, generating two correspondence images at step 504. In this case, 3D points are generated by computing the 'approximate intersection' of the camera ray and the ray defined as the intersection of the two light planes given by the two correspondences assigned to each camera pixel location. The approximate intersection is defined as the point which minimizes the sum of the squared distances to both rays.
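This least-squares point has a simple closed form: summing, for each ray, the projector onto the plane orthogonal to its direction yields a small linear system. The sketch below solves it for two rays and assumes (our assumption, not the patent's) that the rays are not parallel, so the system is invertible.

```python
import numpy as np

def approximate_intersection(c1, d1, c2, d2):
    """Point minimizing the sum of squared distances to the lines
    c_i + s*d_i (illustrative sketch); directions are normalized internally.
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for c, d in ((np.asarray(c1, float), np.asarray(d1, float)),
                 (np.asarray(c2, float), np.asarray(d2, float))):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)  # projects onto the plane orthogonal to d
        A += P
        b += P @ c
    return np.linalg.solve(A, b)  # A is singular only for parallel rays
```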
FIG. 13 is a sample correspondence image 1300 and FIG. 14 shows a sample 3D Model generated by the decoding step 206 of FIG. 2. - It will be appreciated by those of ordinary skill in the pertinent art that the functions of several elements may, in alternative embodiments, be carried out by fewer elements, or a single element. Similarly, in some embodiments, any functional element may perform fewer, or different, operations than those described with respect to the illustrated embodiment. Also, functional elements (e.g., modules, databases, interfaces, hardware, computers, servers and the like) shown as distinct for purposes of illustration may be incorporated within other functional elements in a particular implementation.
All patents, patent applications and other references disclosed herein are hereby expressly incorporated in their entireties by reference. While the subject technology has been described with respect to preferred embodiments, those skilled in the art will readily appreciate that various changes and/or modifications can be made to the subject technology without departing from the spirit or scope of the invention as defined by the appended claims.
Claims (7)
1. A method for measuring shapes comprising the steps of:
encoding light source coordinates of a scene using dual frequency sinusoidal fringe patterns;
modulating the light source with the dual frequency sinusoidal fringe patterns; and
recording images of the scene while the scene is being illuminated by the modulated light source.
2. A method as recited in claim 1, further comprising the step of extracting coded coordinates from the recorded images by using a closed form decoding algorithm.
3. A system for three-dimensional shape measurement comprising:
a light source for projecting dual frequency sinusoidal fringe patterns onto a scene;
a camera for recording images of the scene under illumination; and
a system for processing the recorded images including: a first module for encoding light source coordinates using dual frequency sinusoidal fringe patterns; a second module for extracting the encoded coordinates from the recorded images; and a third module for computing the scene shape using a geometric triangulation method.
4. A system as recited in claim 3, further comprising a turntable, wherein the scene is an object on top of the turntable, the light source projects fringe patterns and the camera records images of the object at different rotations of the turntable while keeping the object fixed on top of the turntable,
the second module decodes all recorded images, and
the third module computes a shape of the object from all captured turntable rotations and generates a 3D model of the object.
5. A method for obtaining a shape of a target object comprising the steps of:
projecting dual frequency fringe patterns on the target object;
recording images of the illuminated target object;
encoding locations in each projector image plane into the projected dual frequency fringe patterns while images are recorded;
decoding the images to recover relative phase values for the projected dual frequency fringe patterns' primary and dual frequencies;
unwrapping the relative phase values into absolute phases;
converting the absolute phases back to projector image plane locations; and
creating a correspondence image based on a relation between camera pixels and decoded projector locations, wherein the correspondence image represents a measured shape of the target object.
6. A method as recited in claim 5, further comprising the step of creating a 3D model of the target object based on the correspondence image together with a geometric triangulation of the target object.
7. A method as recited in claim 5, further comprising the step of using direct phase unwrapping.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/866,245 US20160094830A1 (en) | 2014-09-26 | 2015-09-25 | System and Methods for Shape Measurement Using Dual Frequency Fringe Patterns |
US16/019,000 US10584963B2 (en) | 2014-09-26 | 2018-06-26 | System and methods for shape measurement using dual frequency fringe pattern |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201462055835P | 2014-09-26 | 2014-09-26 | |
US14/866,245 US20160094830A1 (en) | 2014-09-26 | 2015-09-25 | System and Methods for Shape Measurement Using Dual Frequency Fringe Patterns |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/019,000 Division US10584963B2 (en) | 2014-09-26 | 2018-06-26 | System and methods for shape measurement using dual frequency fringe pattern |
Publications (1)
Publication Number | Publication Date |
---|---|
US20160094830A1 true US20160094830A1 (en) | 2016-03-31 |
Family
ID=55585890
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/866,245 Abandoned US20160094830A1 (en) | 2014-09-26 | 2015-09-25 | System and Methods for Shape Measurement Using Dual Frequency Fringe Patterns |
US16/019,000 Active 2035-09-29 US10584963B2 (en) | 2014-09-26 | 2018-06-26 | System and methods for shape measurement using dual frequency fringe pattern |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/019,000 Active 2035-09-29 US10584963B2 (en) | 2014-09-26 | 2018-06-26 | System and methods for shape measurement using dual frequency fringe pattern |
Country Status (1)
Country | Link |
---|---|
US (2) | US20160094830A1 (en) |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150070473A1 (en) * | 2013-09-12 | 2015-03-12 | Hong Kong Applied Science and Technology Research Institute Company Limited | Color-Encoded Fringe Pattern for Three-Dimensional Shape Measurement |
US20160047890A1 (en) * | 2014-08-12 | 2016-02-18 | Abl Ip Holding Llc | System and method for estimating the position and orientation of a mobile communications device in a beacon-based positioning system |
CN106157321A (en) * | 2016-07-29 | 2016-11-23 | 长春理工大学 | True point source position based on plane surface high dynamic range images measuring method |
CN106767533A (en) * | 2016-12-28 | 2017-05-31 | 深圳大学 | Efficient phase three-dimensional mapping method and system based on fringe projection technology of profiling |
CN108489420A (en) * | 2018-03-01 | 2018-09-04 | 西安工业大学 | A kind of dual wavelength phase unwrapping package method that can effectively remove phase noise |
US10681331B2 (en) | 2017-02-06 | 2020-06-09 | MODit 3D, Inc. | System and method for 3D scanning |
US10812772B2 (en) | 2017-02-03 | 2020-10-20 | MODit 3D, Inc. | Three-dimensional scanning device and methods |
WO2020245130A1 (en) * | 2019-06-03 | 2020-12-10 | Inspecvision Limited | Projector assembly system and method |
CN113551617A (en) * | 2021-06-30 | 2021-10-26 | 南京理工大学 | Binocular and dual-frequency complementary three-dimensional surface measurement method based on fringe projection |
CN113658321A (en) * | 2021-07-26 | 2021-11-16 | 浙江大华技术股份有限公司 | Three-dimensional reconstruction method, system and related equipment |
CN116105632A (en) * | 2023-04-12 | 2023-05-12 | 四川大学 | A self-supervised phase unwrapping method and device for structured light three-dimensional imaging |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP6880512B2 (en) * | 2018-02-14 | 2021-06-02 | オムロン株式会社 | 3D measuring device, 3D measuring method and 3D measuring program |
KR102187211B1 (en) | 2018-04-23 | 2020-12-04 | 코그넥스코오포레이션 | Methods and apparatus for improved 3-d data reconstruction from stereo-temporal image sequences |
CN111174730B (en) * | 2020-01-07 | 2021-07-16 | 南昌航空大学 | A Fast Phase Unwrapping Method Based on Phase Encoding |
CN113008163B (en) * | 2021-03-01 | 2022-09-27 | 西北工业大学 | Encoding and decoding method based on frequency shift stripes in structured light three-dimensional reconstruction system |
CN114170345B (en) * | 2021-11-25 | 2024-08-27 | 南京信息工程大学 | Stripe pattern design method for structured light projection nonlinear correction |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB0915904D0 (en) * | 2009-09-11 | 2009-10-14 | Renishaw Plc | Non-contact object inspection |
TWI485361B (en) * | 2013-09-11 | 2015-05-21 | Univ Nat Taiwan | Measuring apparatus for three-dimensional profilometry and method thereof |
2015
- 2015-09-25 US US14/866,245 patent/US20160094830A1/en not_active Abandoned
2018
- 2018-06-26 US US16/019,000 patent/US10584963B2/en active Active
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6438148B1 (en) * | 1998-12-15 | 2002-08-20 | Nortel Networks Limited | Method and device for encoding data into high speed optical train |
US20050094700A1 (en) * | 2003-10-31 | 2005-05-05 | Industrial Technology Research Institute | Apparatus for generating a laser structured line having a sinusoidal intensity distribution |
US20070206204A1 (en) * | 2005-12-01 | 2007-09-06 | Peirong Jia | Full-field three-dimensional measurement method |
US20100103486A1 (en) * | 2007-04-18 | 2010-04-29 | Seereal Technologies S.A. | Device for the Production of Holographic Reconstructions with Light Modulators |
US9715216B2 (en) * | 2010-10-06 | 2017-07-25 | Aosys Limited | Holograms and fabrication processes |
US20140018676A1 (en) * | 2012-07-11 | 2014-01-16 | Samsung Electronics Co., Ltd. | Method of generating temperature map showing temperature change at predetermined part of organ by irradiating ultrasound wave on moving organs, and ultrasound system using the same |
US9791542B2 (en) * | 2014-08-12 | 2017-10-17 | Abl Ip Holding Llc | System and method for estimating the position and orientation of a mobile communications device in a beacon-based positioning system |
Cited By (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9459094B2 (en) * | 2013-09-12 | 2016-10-04 | Hong Kong Applied Science and Technology Research Institute Company Limited | Color-encoded fringe pattern for three-dimensional shape measurement |
US20150070473A1 (en) * | 2013-09-12 | 2015-03-12 | Hong Kong Applied Science and Technology Research Institute Company Limited | Color-Encoded Fringe Pattern for Three-Dimensional Shape Measurement |
US10578705B2 (en) | 2014-08-12 | 2020-03-03 | Abl Ip Holding Llc | System and method for estimating the position and orientation of a mobile communications device in a beacon-based positioning system |
US20160047890A1 (en) * | 2014-08-12 | 2016-02-18 | Abl Ip Holding Llc | System and method for estimating the position and orientation of a mobile communications device in a beacon-based positioning system |
US9594152B2 (en) * | 2014-08-12 | 2017-03-14 | Abl Ip Holding Llc | System and method for estimating the position and orientation of a mobile communications device in a beacon-based positioning system |
US9791542B2 (en) | 2014-08-12 | 2017-10-17 | Abl Ip Holding Llc | System and method for estimating the position and orientation of a mobile communications device in a beacon-based positioning system |
US9791543B2 (en) | 2014-08-12 | 2017-10-17 | Abl Ip Holding Llc | System and method for estimating the position and orientation of a mobile communications device in a beacon-based positioning system |
US9846222B2 (en) | 2014-08-12 | 2017-12-19 | Abl Ip Holding Llc | System and method for estimating the position and orientation of a mobile communications device in a beacon-based positioning system |
US9989624B2 (en) | 2014-08-12 | 2018-06-05 | Abl Ip Holding Llc | System and method for estimating the position and orientation of a mobile communications device in a beacon-based positioning system |
US10001547B2 (en) | 2014-08-12 | 2018-06-19 | Abl Ip Holding Llc | System and method for estimating the position and orientation of a mobile communications device in a beacon-based positioning system |
CN106157321A (en) * | 2016-07-29 | 2016-11-23 | 长春理工大学 | True point source position based on plane surface high dynamic range images measuring method |
CN106767533A (en) * | 2016-12-28 | 2017-05-31 | 深圳大学 | Efficient phase three-dimensional mapping method and system based on fringe projection technology of profiling |
US10812772B2 (en) | 2017-02-03 | 2020-10-20 | MODit 3D, Inc. | Three-dimensional scanning device and methods |
US10681331B2 (en) | 2017-02-06 | 2020-06-09 | MODit 3D, Inc. | System and method for 3D scanning |
US11330243B2 (en) | 2017-02-06 | 2022-05-10 | Riven, Inc. | System and method for 3D scanning |
CN108489420A (en) * | 2018-03-01 | 2018-09-04 | 西安工业大学 | A kind of dual wavelength phase unwrapping package method that can effectively remove phase noise |
WO2020245130A1 (en) * | 2019-06-03 | 2020-12-10 | Inspecvision Limited | Projector assembly system and method |
US20220237762A1 (en) * | 2019-06-03 | 2022-07-28 | Inspecvision Limited | Projector assembly system and method |
US12340502B2 (en) * | 2019-06-03 | 2025-06-24 | Inspecvision Limited | Projector assembly system and method |
CN113551617A (en) * | 2021-06-30 | 2021-10-26 | 南京理工大学 | Binocular and dual-frequency complementary three-dimensional surface measurement method based on fringe projection |
CN113658321A (en) * | 2021-07-26 | 2021-11-16 | 浙江大华技术股份有限公司 | Three-dimensional reconstruction method, system and related equipment |
CN116105632A (en) * | 2023-04-12 | 2023-05-12 | 四川大学 | A self-supervised phase unwrapping method and device for structured light three-dimensional imaging |
Also Published As
Publication number | Publication date |
---|---|
US20180306577A1 (en) | 2018-10-25 |
US10584963B2 (en) | 2020-03-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10584963B2 (en) | System and methods for shape measurement using dual frequency fringe pattern | |
US10699476B2 (en) | Generating a merged, fused three-dimensional point cloud based on captured images of a scene | |
EP2295932B1 (en) | Image processing device, image processing method, and program | |
Rocchini et al. | A low cost 3D scanner based on structured light | |
US10529076B2 (en) | Image processing apparatus and image processing method | |
US20120176478A1 (en) | Forming range maps using periodic illumination patterns | |
US20120176380A1 (en) | Forming 3d models using periodic illumination patterns | |
JP5633058B1 (en) | 3D measuring apparatus and 3D measuring method | |
WO2011145285A1 (en) | Image processing device, image processing method and program | |
US10664981B2 (en) | Data processing apparatus and method of controlling same | |
CN119478254B (en) | Three-dimensional reconstruction method and system based on polarization fringe projection structured light fusion | |
Hafeez et al. | Image based 3D reconstruction of texture-less objects for VR contents | |
Li et al. | Projective parallel single-pixel imaging to overcome global illumination in 3D structure light scanning | |
CN109741389A (en) | A Local Stereo Matching Method Based on Region-Based Matching | |
RU2573767C1 (en) | Three-dimensional scene scanning device with non-lambert lighting effects | |
Garbat et al. | Structured light camera calibration | |
KR20240071170A (en) | 3d calibration method and apparatus for multi-view phase shift profilometry | |
KR20190103833A (en) | Method for measuring 3-dimensional data in real-time | |
Portalés et al. | Calibration of a camera–projector monochromatic system | |
CN106815864B (en) | Depth Information Measurement Method Based on Single Frame Modulation Template | |
Khosravani et al. | Coregistration of kinect point clouds based on image and object space observations | |
Chien et al. | Adaptive 3d reconstruction system with improved recovery of miscoded region to automatically adjust projected light patterns | |
Fernández Navarro | One-shot pattern projection for dense and accurate 3d reconstruction in structured light | |
Werner et al. | Trigonometric Moments for Editable Structured Light Range Finding. | |
Lau et al. | Design of coded structured light based on square-shaped primitives |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: BROWN UNIVERSITY, RHODE ISLAND Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:TAUBIN, GABRIEL;MORENO, DANIEL;REEL/FRAME:036670/0004 Effective date: 20150924 |
|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |
|
AS | Assignment |
Owner name: NATIONAL SCIENCE FOUNDATION, VIRGINIA Free format text: CONFIRMATORY LICENSE;ASSIGNOR:BROWN UNIVERSITY;REEL/FRAME:070127/0832 Effective date: 20230130 |