WO2018171385A1 - Ultrafast three-dimensional shape measurement method and system based on improved Fourier transform profilometry - Google Patents

Ultrafast three-dimensional shape measurement method and system based on improved Fourier transform profilometry

Info

Publication number
WO2018171385A1
Authority
WO
WIPO (PCT)
Prior art keywords
phase
fourier transform
stripe
camera
projector
Prior art date
Application number
PCT/CN2018/077216
Other languages
English (en)
French (fr)
Inventor
陈钱
左超
冯世杰
孙佳嵩
张玉珍
顾国华
Original Assignee
南京理工大学
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 南京理工大学 filed Critical 南京理工大学
Priority to US16/496,815 priority Critical patent/US11029144B2/en
Publication of WO2018171385A1 publication Critical patent/WO2018171385A1/zh

Links

Images

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 Measuring arrangements characterised by the use of optical techniques
    • G01B11/24 Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G01B11/25 Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object
    • G01B11/2504 Calibration devices
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 Measuring arrangements characterised by the use of optical techniques
    • G01B11/24 Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G01B11/25 Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object
    • G01B11/2536 Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object using several gratings with variable grating pitch, projected on the object with the same angle of incidence
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 Measuring arrangements characterised by the use of optical techniques
    • G01B11/24 Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G01B11/25 Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object
    • G01B11/254 Projection of a pattern, viewing through a pattern, e.g. moiré
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G06T7/521 Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light

Definitions

  • The invention belongs to the field of three-dimensional imaging, and in particular relates to an ultrafast three-dimensional shape measurement method and system based on improved Fourier transform profilometry.
  • Fringe projection profilometry is a widely used technique for acquiring the three-dimensional shape of objects; it is non-contact, offers high resolution and is highly practical.
  • Fringe projection profilometry is generally divided into two categories: Fourier transform profilometry and phase-shifting profilometry.
  • Fourier transform profilometry (Fan Yujia, master's thesis: Measuring the three-dimensional shape of objects with Fourier transform profilometry, 2011) needs only a single fringe pattern to recover the three-dimensional information of an object, so its measurement speed is high, but because of problems such as spectral aliasing its accuracy is lower than that of phase-shifting profilometry.
  • Phase-shifting profilometry is highly accurate, but at least three fringe patterns are required to recover the three-dimensional information of an object, which limits its measurement speed.
  • The measurement speed of currently available three-dimensional shape measurement techniques cannot meet the needs of ultra-high-speed three-dimensional shape measurement.
  • Existing high-speed cameras can already acquire two-dimensional images at 10,000 frames per second.
  • If the resolution of the acquired images is reduced, the acquisition speed can be even higher.
  • The digital micromirror device (DMD) is the core component of the projector, and the rate at which binary patterns can be projected by optical switching can also reach 10,000 Hz. Hardware is therefore no longer the factor limiting the measurement speed of fringe projection profilometry; the key problem is how to reduce the number of fringe images required while maintaining measurement accuracy.
  • The object of the present invention is to provide an ultrafast three-dimensional shape measurement method and system based on improved Fourier transform profilometry, which significantly increases the speed of three-dimensional shape measurement while maintaining its accuracy.
  • The technical solution for achieving the object of the present invention is an ultrafast three-dimensional shape measurement method and system based on improved Fourier transform profilometry. The measurement system, consisting of a projector, a camera and a computer, is first calibrated to obtain calibration parameters. The projector cyclically projects 2n patterns onto the measured scene, n ≥ 2, of which n are binary high-frequency sinusoidal fringe patterns with different wavelengths and n are all-white images whose pixel values are all "1"; an all-white image is projected between every two binary high-frequency fringe patterns, and the camera acquires images synchronously.
  • The wrapped phase is then obtained with the background-normalized Fourier transform profilometry method, the initial absolute phase is obtained with the projection-distance-minimization temporal phase unwrapping method, and the initial absolute phase is corrected with the quality-guided map compensation method. Finally, the corrected absolute phase and the calibration coefficients are used to reconstruct the three-dimensional shape of the measured scene and to obtain its three-dimensional coordinates in the world coordinate system, completing the three-dimensional shape measurement of the object.
  • The background-normalized Fourier transform profilometry method encodes the three-dimensional information of the moving scene at the current moment in a single fringe image, and uses the all-white image to remove the zero-frequency component of the spectrum.
  • This eliminates the spectral aliasing caused by sharp edges, surface discontinuities and large reflectivity variations of the measured object, ensuring measurement accuracy.
  • The projection-distance-minimization temporal phase unwrapping method can unwrap the wrapped phase even when the measured scene contains spatially separated objects; the high-frequency sinusoidal fringes guarantee measurement accuracy, and the height information contained in every sinusoidal fringe image is exploited, which guarantees measurement speed.
  • The system achieves a reconstruction rate of 10,000 non-repeated three-dimensional frames per second; within a measurement volume of 400 mm × 275 mm × 400 mm the depth accuracy is 80 μm and the temporal error is below 75 μm. Three-dimensional shape measurement is achieved not only for ordinary static and dynamic scenes but also for ultrafast scenes such as a bullet leaving a toy gun and an exploding balloon.
  • FIG. 1 is a schematic flow chart of the measurement method of the present invention.
  • FIG. 2 shows the experimental measurement results for a static complex scene, namely a plaster statue and a hand.
  • FIG. 3 shows the experimental results of three-dimensional shape measurement of a scene in which a bullet is fired from a toy gun, hits a plastic plate and rebounds.
  • FIG. 4 shows the experimental results of three-dimensional shape measurement of a scene in which a flying dart punctures a balloon, causing it to explode.
  • FIG. 5 is a schematic diagram of the measurement subsystem of the present invention.
  • The present invention is an ultrafast three-dimensional shape measurement method based on improved Fourier transform profilometry.
  • First, the measurement subsystem is calibrated.
  • The measurement subsystem consists of a projector, a camera and a computer, and calibration parameters are obtained.
  • The projector cyclically projects 2n patterns onto the measured scene, n ≥ 2, of which n are binary high-frequency sinusoidal fringe patterns with different wavelengths and n are all-white images whose pixel values are all "1".
  • An all-white image is projected between every two binary high-frequency fringe patterns, and the camera acquires images synchronously; the wrapped phase is then obtained with the background-normalized Fourier transform profilometry method, and the initial absolute phase is obtained with the projection-distance-minimization temporal phase unwrapping method.
  • The initial absolute phase is corrected with the quality-guided map compensation method; finally, the corrected absolute phase and the calibration coefficients are used to reconstruct the three-dimensional shape of the measured scene and to obtain its three-dimensional coordinates in the world coordinate system, completing the three-dimensional shape measurement of the object.
  • Step 1: build the measurement subsystem.
  • The measurement subsystem includes a projector, a camera and a computer; the computer is connected to the projector and to the camera by signal lines, and the projector and the camera are connected by a trigger line.
  • The computer runs the manufacturer-supplied software for controlling the projector and the camera.
  • The software sets the parameters of the projector and the camera and controls the projection and image acquisition process.
  • MATLAB is also installed on the computer; after the images have been acquired, all image processing is implemented in MATLAB code.
  • The camera and the projector are calibrated with Zhang Zhengyou's camera calibration method (Z. Zhang, "A flexible new technique for camera calibration," IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(11), 1330-1334, 2000) and with the structured-light system calibration method of Zhang S (Zhang S, Huang P S, "Novel method for structured light system calibration," Optical Engineering, 2006, 45(8): 083601), yielding the calibration parameters, i.e. the intrinsic and extrinsic parameters of the camera and the projector.
  • Step 2: project and acquire images. The projector cyclically projects 2n patterns (n ≥ 2) onto the measured scene, comprising n binary high-frequency sinusoidal fringe patterns with different wavelengths and n all-white images whose pixel values are all "1"; an all-white image is projected between every two binary fringe patterns, and the camera acquires images synchronously.
  • The wavelengths of the n high-frequency sinusoidal fringe patterns projected by the projector must all be different and are denoted {λ_1, λ_2, ..., λ_n}. Two conditions are satisfied when choosing the wavelengths: ① the wavelength of the sinusoidal fringes is small enough (e.g. at least 50 fringes in one pattern) so that the phase can be recovered successfully by conventional Fourier transform profilometry; ② the least common multiple of this set of wavelengths is greater than or equal to the resolution of the projector along the direction in which the sinusoidal intensity varies, denoted W (if the projector's horizontal resolution is W and the projected fringes are vertical, i.e. the fringe intensity varies along the projector's horizontal direction, then the least common multiple of the fringe wavelengths must be greater than or equal to W), i.e.

    LCM(λ_1, λ_2, ..., λ_n) ≥ W      (1)

  • where LCM denotes the least-common-multiple operation.
  • The generated high-frequency sinusoidal fringes are expressed in projector space as

    I^p(x^p, y^p) = a^p + b^p cos(2π f^p x^p)      (2)

  • where the superscript p, the initial letter of "projector", denotes projector space, I^p is the fringe intensity, (x^p, y^p) are the projector pixel coordinates, a^p is the mean of the sinusoidal fringe intensity, b^p is the sinusoidal fringe amplitude, and f^p = 1/λ is the frequency of the sinusoidal fringes. The sinusoidal fringes are then converted into binary fringes by halftoning so that the projector can project at its fastest native rate; since the fringe pattern is binary, a^p and b^p are both 1/2.
  • The all-white image projected between the binary fringe patterns is an image in which every pixel has the value "1", i.e. all micromirrors of the digital micromirror device (DMD), the core component of the projector, are in the "on" state, expressed as

    I_w^p(x^p, y^p) = 1      (4)
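A minimal sketch of this pattern-generation step is given below in Python/NumPy (the patent itself only mentions MATLAB for processing; Python is used here purely for illustration). It builds one sinusoidal fringe with a^p = b^p = 1/2 and binarizes it by Floyd-Steinberg error diffusion, a standard halftoning technique of the kind cited in the description. The demonstration image size, wavelengths and function names are illustrative assumptions, not values fixed by the patent.

```python
import numpy as np

def make_sinusoidal_fringe(width, height, wavelength):
    """Grayscale fringe I^p = 1/2 + 1/2*cos(2*pi*x^p/lambda), vertical fringes."""
    x = np.arange(width)
    row = 0.5 + 0.5 * np.cos(2 * np.pi * x / wavelength)
    return np.tile(row, (height, 1))

def floyd_steinberg_binarize(img):
    """Halftone a [0,1] grayscale image into a binary (0/1) pattern by error diffusion."""
    f = img.astype(np.float64).copy()
    h, w = f.shape
    out = np.zeros_like(f)
    for y in range(h):
        for x in range(w):
            old = f[y, x]
            new = 1.0 if old >= 0.5 else 0.0
            out[y, x] = new
            err = old - new
            if x + 1 < w:
                f[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    f[y + 1, x - 1] += err * 3 / 16
                f[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    f[y + 1, x + 1] += err * 1 / 16
    return out.astype(np.uint8)

if __name__ == "__main__":
    # Small demo size; the experiments in the patent use a 1024x768 projector.
    W, H = 256, 64
    wavelengths = [14, 16, 18]                 # fringe wavelengths in projector pixels
    fringes = [floyd_steinberg_binarize(make_sinusoidal_fringe(W, H, lam))
               for lam in wavelengths]
    white = np.ones((H, W), dtype=np.uint8)    # all-white image, every DMD mirror "on"
    sequence = []                              # projection order: fringe, white, fringe, white, ...
    for f in fringes:
        sequence.extend([f, white])
    print(len(sequence), "patterns in one projection cycle")
```

The pure-Python error-diffusion loop is slow for full projector frames but is adequate to show the structure of the 2n-pattern cycle.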
  • Step 3: obtain the wrapped phase with the background-normalized Fourier transform profilometry method. After the camera images have been obtained, every pair of acquired images, i.e. one high-frequency sinusoidal fringe image and the corresponding all-white image, is processed in turn.
  • The high-frequency sinusoidal fringe image and the all-white image captured by the camera are expressed as

    I_1(x^c, y^c) = α(x^c, y^c) [1/2 + 1/2 cos(2π f_0 x^c + φ(x^c, y^c))]      (5)
    I_2(x^c, y^c) = α(x^c, y^c)      (6)

  • where the superscript c, the initial letter of "camera", denotes camera space,
  • I_1 is the image captured by the camera when the high-frequency sinusoidal fringe pattern is projected onto the measured scene,
  • I_2 is the image captured by the camera when the all-white image is projected onto the measured scene,
  • (x^c, y^c) are the camera pixel coordinates,
  • α(x^c, y^c) is the reflectivity of the measured object,
  • f_0 is the sinusoidal fringe frequency,
  • and φ(x^c, y^c) is the phase containing the depth information of the object.
  • The term α(x^c, y^c)/2 becomes the zero-frequency component after the Fourier transform and causes spectral aliasing; by using I_1 and I_2 of equations (5) and (6), the zero-frequency component and the influence of the surface reflectivity α(x^c, y^c) of the measured object can be removed before the Fourier transform, see equation (7):

    I_d(x^c, y^c) = I_1(x^c, y^c) / (I_2(x^c, y^c) + γ) − 1/2      (7)

  • where γ is a small constant (e.g. 0.01) whose main purpose is to prevent division by zero.
  • The background-normalized I_d is then Fourier transformed, a filter (e.g. a Hanning window) is used to extract the useful spectral information, and the selected spectrum is inverse Fourier transformed to obtain the wrapped phase. The all-white image thus removes, before the Fourier transform, the zero-frequency component and the influence of the non-uniform distribution of the reflectivity α(x^c, y^c) over the surface of the measured object, effectively solving the spectral aliasing problem.
  • This step yields the wrapped phase corresponding to every high-frequency fringe image captured by the camera; it contains the depth information of the scene at the instant at which each high-frequency fringe image was acquired.
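The following Python/NumPy sketch illustrates this background-normalization and Fourier filtering step. It normalizes the fringe image by the all-white image as in equation (7), takes a one-dimensional Fourier transform along the fringe direction, keeps the fundamental lobe with a Hann window centred on the carrier frequency, and recovers the wrapped phase from the inverse transform. The window half-width, the assumption that the fringes vary along the image columns, and the synthetic test scene are illustrative choices, not prescriptions from the patent.

```python
import numpy as np

def wrapped_phase_bnftp(I1, I2, f0, gamma=0.01, half_width=None):
    """
    Background-normalized FTP.  I1: fringe image, I2: all-white image,
    f0: carrier frequency in cycles per pixel along axis 1 (image columns).
    Returns the wrapped phase (still containing the carrier 2*pi*f0*x) in (-pi, pi].
    """
    Id = I1 / (I2 + gamma) - 0.5               # remove reflectivity and zero-frequency term, Eq. (7)
    h, w = Id.shape
    spec = np.fft.fft(Id, axis=1)              # row-wise spectrum

    # Band-pass filter: a Hann window centred on the +f0 carrier lobe.
    if half_width is None:
        half_width = max(3, int(round(f0 * w / 2)))   # heuristic half-width in bins
    center = int(round(f0 * w))
    mask = np.zeros(w)
    lo, hi = max(0, center - half_width), min(w, center + half_width + 1)
    mask[lo:hi] = np.hanning(hi - lo)

    analytic = np.fft.ifft(spec * mask[None, :], axis=1)   # keep only the fundamental lobe
    return np.angle(analytic)

if __name__ == "__main__":
    # Synthetic 256x256 scene: wavelength 16 px, smooth phase bump, varying reflectivity.
    h, w = 256, 256
    yy, xx = np.mgrid[0:h, 0:w]
    alpha = 0.5 + 0.4 * np.random.rand(h, w)
    phi = 2.0 * np.exp(-((xx - 128) ** 2 + (yy - 128) ** 2) / 4000.0)
    f0 = 1.0 / 16.0
    I1 = alpha * (0.5 + 0.5 * np.cos(2 * np.pi * f0 * xx + phi))
    I2 = alpha
    phi_wrapped = wrapped_phase_bnftp(I1, I2, f0)
    print(phi_wrapped.shape, float(phi_wrapped.min()), float(phi_wrapped.max()))
```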
  • Step 4: obtain the initial absolute phase with the projection-distance-minimization temporal phase unwrapping method. After step 3 has produced the phase corresponding to each high-frequency fringe image acquired by the camera, this phase is wrapped into (−π, π] and is therefore ambiguous; it must be unwrapped to obtain the absolute phase.
  • The wrapped phases corresponding to a group of high-frequency sinusoidal fringe patterns are used to unwrap each of them.
  • The high-frequency fringes projected in step 2 have different wavelengths, written as a wavelength vector λ = [λ_1, λ_2, ..., λ_n]^T, and the wrapped phases obtained for each fringe pattern by the background-normalized Fourier transform profilometry method of step 3 are written as a wrapped-phase vector φ = [φ_1, φ_2, ..., φ_n]^T.
  • Because the resolution of the projector along the direction of sinusoidal intensity variation is finite, the number of possible fringe-order combinations is also finite, and the fringe-order combinations can be enumerated one by one ("Temporal phase unwrapping using orthographic projection", Optics & Lasers in Engineering, 2017, 90: 34-47).
  • Each fringe-order vector is denoted k_i and contains, in order, the fringe order corresponding to each wrapped phase, k_i = [k_1, k_2, ..., k_n]^T. For each fringe-order vector k_i, the corresponding absolute phase Φ_i is computed as

    Φ_i = φ + 2π k_i      (8)

  • The projection-point vector of each candidate absolute-phase vector is then computed (equations (9) and (10), with P_i = t λ_i^{-1}), the distance between the absolute-phase vector and its projection point is evaluated (equation (11)), and the fringe-order vector with the minimum projection distance is selected as the optimal solution, yielding the initial absolute phase Φ.
  • The measurement range of the measurement subsystem is necessarily limited, which allows the range of enumerated fringe-order combinations to be narrowed further: the depth range of the measured scene is first estimated as [z_min, z_max], where z_min is the minimum and z_max the maximum depth of the measurement range in the world coordinate system. From the calibration coefficients and the method mentioned in (Liu K, "Real-time 3-D reconstruction by means of structured light illumination", 2010),
  • the range of the phase distribution [Φ_min, Φ_max] is obtained, where Φ_min is the minimum and Φ_max the maximum of the absolute phase, so that the range of fringe orders follows from

    k_min(x^c, y^c) = floor[(Φ_min(x^c, y^c) − φ(x^c, y^c)) / 2π]      (12)
    k_max(x^c, y^c) = ceil[(Φ_max(x^c, y^c) − φ(x^c, y^c)) / 2π]      (13)

  • where k_min is the minimum fringe order,
  • k_max is the maximum fringe order,
  • (x^c, y^c) are the camera pixel coordinates,
  • floor denotes rounding down,
  • Φ_min is the phase minimum,
  • ceil denotes rounding up,
  • and Φ_max is the phase maximum. Narrowing the range of fringe orders eliminates some erroneous fringe-order combinations, reduces error points and improves measurement accuracy.
  • Step 5: correct the initial absolute phase with the quality-guided map compensation method.
  • Because the images acquired by the camera may be of low quality (e.g. low fringe contrast) and because the fast motion of the measured object between frames cannot be neglected,
  • the absolute phase obtained in step 4 may contain pixels whose fringe order has been solved incorrectly; the quality-guided map compensation method therefore further corrects the absolute phase in the spatial domain, which removes these errors and improves measurement accuracy.
  • The two main questions in the quality-guided map compensation method are which quantity to choose as the reliability parameter, i.e. how to assess whether the absolute phase of a pixel is correct, and how to design the correction path.
  • The minimum projection distance of each pixel obtained in step 4 is taken as the basis for assessing the reliability of the absolute phase (a larger minimum projection distance means a lower reliability of the corresponding absolute phase).
  • The reliability at a pixel boundary is defined as the sum of the reliability values of the two adjacent pixels. The processing path is determined by comparing the reliability values at the pixel boundaries: correction starts from pixels with large reliability values, the reliability values of all pixel boundaries are stored in a queue and sorted by magnitude, and the boundaries with larger values are processed first, yielding the corrected absolute phase. The specific steps are:
  • (1) Compute the reliability value at each pixel boundary, i.e. add the minimum projection distances of the two pixels connected at the boundary and use the sum as the reliability value at that boundary. (2) Examine adjacent pixels in turn; if the absolute difference between their phase values is smaller than π, assign the two pixels to the same group, and group all pixels in this way.
  • (3) Correct the absolute phase boundary by boundary in the order of the reliability values at the pixel boundaries, processing boundaries with higher reliability first. If the two connected pixels belong to the same group, nothing is done. If they belong to different groups and the smaller group contains fewer pixels than a threshold T_h (the value of T_h is chosen according to the circumstances: a group with fewer than T_h pixels is regarded as erroneous points, a group with more than T_h pixels as a separate object), all phase values in the smaller group are corrected according to the larger group and the two groups are merged: letting Φ_L and Φ_S be the phase values of the pixels belonging to the larger and the smaller group respectively, the value Round((Φ_L − Φ_S)/2π) multiplied by 2π is added to the phase values of all pixels in the smaller group, and the two groups are merged into one; Round denotes rounding to the nearest integer.
  • (4) Step (3) is repeated until all pixel boundaries in the queue have been processed.
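The group-and-merge logic of this step can be sketched as follows (Python, illustrative). For brevity the sketch replaces the full reliability-sorted boundary queue with a simpler pass: pixels are first grouped by flood fill wherever the absolute phase difference between 4-connected neighbours is below π, and every group smaller than the threshold T_h is then shifted by an integer multiple of 2π towards the largest group, as in step (3). The threshold value, the use of group-mean phases for Φ_L and Φ_S, and the simplified processing order are assumptions of this sketch, not the patent's exact procedure.

```python
import numpy as np
from collections import deque

def group_and_correct(Phi, t_h=50):
    """Group pixels whose neighbouring phase difference is < pi, then snap every
    group smaller than t_h pixels onto the largest group by a multiple of 2*pi."""
    h, w = Phi.shape
    labels = -np.ones((h, w), dtype=int)
    groups = []
    for sy in range(h):
        for sx in range(w):
            if labels[sy, sx] != -1:
                continue
            gid = len(groups)
            members, q = [], deque([(sy, sx)])
            labels[sy, sx] = gid
            while q:                                  # flood fill one group
                y, x = q.popleft()
                members.append((y, x))
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if 0 <= ny < h and 0 <= nx < w and labels[ny, nx] == -1 \
                            and abs(Phi[ny, nx] - Phi[y, x]) < np.pi:
                        labels[ny, nx] = gid
                        q.append((ny, nx))
            groups.append(members)

    corrected = Phi.copy()
    largest = max(groups, key=len)
    phi_L = np.mean([corrected[p] for p in largest])  # reference phase of the largest group
    for members in groups:
        if members is largest or len(members) >= t_h:
            continue                                  # large group: treated as a separate object
        phi_S = np.mean([corrected[p] for p in members])
        shift = 2 * np.pi * np.round((phi_L - phi_S) / (2 * np.pi))
        for p in members:
            corrected[p] += shift                     # add Round((Phi_L - Phi_S)/2pi) * 2pi
    return corrected

if __name__ == "__main__":
    Phi = np.full((64, 64), 20.0)
    Phi[10:13, 10:13] -= 2 * np.pi                    # a small patch with a wrong fringe order
    fixed = group_and_correct(Phi)
    print("max deviation after correction:", float(np.abs(fixed - 20.0).max()))
```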
  • Step 6: perform the three-dimensional reconstruction with the calibration coefficients and the corrected absolute phase to complete the three-dimensional shape measurement.
  • The specific process is as follows:
  • the final three-dimensional world coordinates are obtained from equation (14), completing the reconstruction,
  • where E_X, F_X, E_Y, F_Y, M_Z, N_Z, C_Z are intermediate variables obtained from the calibration parameters by the method of (K. Liu, Y. Wang, et al., "Dual-frequency pattern scheme for high-speed 3-D shape measurement," Optics Express, 18(5), 5229-5244, 2010),
  • Φ is the absolute phase,
  • W is the resolution of the projector along the direction of fringe intensity variation,
  • N_L is the corresponding number of fringes,
  • x^p is the projector coordinate,
  • and X^p, Y^p, Z^p are the three-dimensional spatial coordinates of the measured object in the world coordinate system.
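Equation (14) itself (the matrix relation between the absolute phase, the calibration-derived intermediate variables E_X through C_Z and the world coordinates) is not reproduced in this excerpt. The small helper below only illustrates the part of the mapping that the symbol list does suggest, namely converting the corrected absolute phase into the projector coordinate x^p from W and N_L, under the assumed linear reading Φ = 2π N_L x^p / W; feeding x^p and the camera pixel into the calibrated triangulation of Liu et al. would then give the world coordinates. The default values correspond to the λ = 16 fringe of the reported experiment (N_L = 1024/16 = 64) and are illustrative.

```python
import math

def phase_to_projector_coordinate(Phi, W=1024, N_L=64):
    """Map absolute phase to the projector column x^p, assuming Phi = 2*pi*N_L*x^p/W,
    i.e. N_L fringe periods spread evenly over the W projector columns."""
    return Phi * W / (2.0 * math.pi * N_L)

# A phase of pi*N_L (half of the full 2*pi*N_L range) maps to the middle column, 512.
print(phase_to_projector_coordinate(math.pi * 64))
```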
  • The present invention also provides an ultrafast three-dimensional shape measurement system based on improved Fourier transform profilometry, comprising a measurement subsystem, a Fourier transform profilometry subsystem, a calibration unit, a projection and image acquisition unit, and a three-dimensional reconstruction unit;
  • the measurement subsystem consists of a projector, a camera and a computer;
  • the Fourier transform profilometry subsystem consists of a background-normalized Fourier transform profilometry module, a projection-distance-minimization temporal phase unwrapping module and a quality-guided map compensation module;
  • the calibration unit calibrates the measurement subsystem to obtain the calibration parameters;
  • in the projection and image acquisition unit, the projector cyclically projects 2n patterns onto the measured scene, n ≥ 2, of which n are binary high-frequency sinusoidal fringe patterns with different wavelengths and n are all-white images whose pixel values are all "1";
  • an all-white image is projected between every two binary high-frequency fringe patterns, and the camera acquires images synchronously;
  • the background-normalized Fourier transform profilometry module processes the images acquired by the camera to obtain the wrapped phase, the projection-distance-minimization temporal phase unwrapping module yields the initial absolute phase, and the quality-guided map compensation module corrects the initial absolute phase;
  • the three-dimensional reconstruction unit uses the corrected absolute phase and the calibration coefficients to reconstruct the three-dimensional shape of the measured scene and obtain its three-dimensional coordinates in the world coordinate system, thereby completing the three-dimensional shape measurement of the object.
  • The specific processing in the projection and image acquisition unit, the three-dimensional reconstruction unit, the background-normalized Fourier transform profilometry module, the projection-distance-minimization temporal phase unwrapping module and the quality-guided map compensation module is as described in the steps above.
  • The measurement accuracy and measurement speed of the ultrafast three-dimensional shape measurement method based on improved Fourier transform profilometry were verified experimentally.
  • A three-dimensional shape measurement system was built from a projector with a binary-pattern projection rate of 20,000 Hz, a camera with an image acquisition rate of 20,000 Hz and a computer; the resolution of the projector was 1024 × 768.
  • Six binary patterns were cyclically projected onto the measured scene: three binary high-frequency sinusoidal fringe patterns with wavelengths {λ_1, λ_2, λ_3} = {14, 16, 18} (in pixels) and three
  • all-white images with pixel value "1"; the all-white images were interleaved between the binary high-frequency fringe patterns, and the camera acquired images synchronously.
  • The experimental system achieved a reconstruction rate of 10,000 non-repeated three-dimensional frames per second; within the 400 mm × 275 mm × 400 mm measurement volume the depth accuracy was 80 μm and the temporal error was below 75 μm.
  • The drawings illustrate the experimental results in detail.
  • FIG. 2 shows the experimental results of three-dimensional shape measurement of a static scene containing a plaster statue and a hand using the ultrafast method based on improved Fourier transform profilometry.
  • (a1)-(a3) are the images acquired by the camera when the binary high-frequency sinusoidal fringe patterns are projected onto the measured scene;
  • (a4) is the image acquired by the camera when the all-white image is projected onto the measured scene;
  • (b1)-(b3) are the wrapped phase maps obtained with the background-normalized Fourier transform profilometry method;
  • (b4) is the minimum projection distance of each pixel obtained with the projection-distance-minimization temporal phase unwrapping method;
  • (c1)-(c3) are the initial absolute phase maps obtained with the projection-distance-minimization temporal phase unwrapping method;
  • (c4) is the three-dimensional measurement result reconstructed from the phase corresponding to (c2);
  • (d1)-(d3) are the absolute phase maps after the depth constraint has been added to limit the fringe-order search range, in which the error points are clearly reduced and
  • the measurement accuracy improved; (d4) is the three-dimensional measurement result reconstructed from the phase corresponding to (d2); (e1)-(e3) are the absolute phase maps after correction by the quality-guided map compensation method, in which the error points are further reduced; (e4) is the three-dimensional measurement result reconstructed from the phase corresponding to (e2). The three-dimensional measurement results obtained after these steps contain almost no error points, fully demonstrating the high measurement accuracy of the ultrafast three-dimensional shape measurement method based on improved Fourier transform profilometry.
  • FIG. 3 shows the results of three-dimensional shape measurement, with the ultrafast method based on improved Fourier transform profilometry, of a scene in which a bullet is fired from a toy gun, hits a plastic plate and rebounds.
  • (c) shows the measurement of the bullet as it leaves the muzzle (corresponding to the boxed region in (b)) and the three-dimensional measurements of the bullet at three instants (7.5 ms, 12.6 ms, 17.7 ms).
  • The inset of (c) shows the contour of the bullet at 17.7 ms viewed from different sides; (d) shows the three-dimensional measurement at the last instant (135 ms), the curve showing the motion
  • trajectory of the bullet over the 135 ms interval; the inset of (d) plots the bullet velocity as a function of time.
  • FIG. 4 shows the results of three-dimensional shape measurement, with the ultrafast method based on improved Fourier transform profilometry, of a scene in which a flying dart hits a balloon and causes it to explode.
  • (a) shows the images acquired by the camera at different time points; (b) shows the three-dimensional measurement results corresponding to the two-dimensional images in (a); (c) and (d) are the continuations of (a) and (b), respectively;
  • (e) shows the reconstructed three-dimensional contours along the dashed line on the balloon marked in (a) at 10.7 ms, 11.4 ms, 12.1 ms, 12.8 ms and 13.7 ms.

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Theoretical Computer Science (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Image Analysis (AREA)

Abstract

An ultrafast three-dimensional shape measurement method and system based on improved Fourier transform profilometry. The measurement system is first calibrated to obtain calibration parameters; the projector then cyclically projects 2n patterns onto the measured scene, n ≥ 2, of which n are binary high-frequency sinusoidal fringe patterns with different wavelengths and n are all-white images whose pixel values are all "1"; an all-white image is projected between every two binary high-frequency fringe patterns, and the camera acquires images synchronously. The wrapped phase is then unwrapped to obtain the initial absolute phase, the initial absolute phase is corrected, and finally the corrected absolute phase and calibration coefficients are used to reconstruct the three-dimensional shape of the measured scene and obtain its three-dimensional coordinates in the world coordinate system, thereby completing the three-dimensional shape measurement of the object.

Description

Ultrafast three-dimensional shape measurement method and system based on improved Fourier transform profilometry. Technical field
The present invention belongs to the field of three-dimensional imaging, and in particular relates to an ultrafast three-dimensional shape measurement method and system based on improved Fourier transform profilometry.
Background art
Over the past few decades, three-dimensional image acquisition technology has matured rapidly, benefiting from the fast development of electronic imaging sensors, optoelectronics and computer vision. In fields such as biomechanical analysis, industrial inspection, deformation analysis in solid mechanics and vehicle crash testing, however, one wishes to capture the three-dimensional shape of an object during a transient change and then play it back at a much slower speed for analysis. Fringe projection profilometry is a widely used method for acquiring the three-dimensional shape of objects; it is non-contact and offers high resolution and strong practicality. Fringe projection profilometry is generally divided into two categories: Fourier transform profilometry and phase-shifting profilometry. Fourier transform profilometry (Fan Yujia, master's thesis: Measuring the three-dimensional shape of objects with Fourier transform profilometry, 2011) needs only a single fringe pattern to obtain the three-dimensional information of the object, so its measurement speed is high, but because of problems such as spectral aliasing its accuracy is lower than that of phase-shifting profilometry. Phase-shifting profilometry is accurate, but at least three fringe patterns are needed to obtain the three-dimensional information of the object, which limits its measurement speed. The measurement speed of the three-dimensional shape measurement techniques realized so far simply cannot meet the needs of ultra-high-speed three-dimensional shape measurement.
At the same time, considering the hardware specifications needed for ultra-high-speed three-dimensional measurement with fringe projection profilometry: on the one hand, existing high-speed cameras can already acquire two-dimensional images at 10,000 frames per second, and even faster if the image resolution is reduced; on the other hand, the digital micromirror device (DMD), the main component of the projector, can project binary patterns at up to 10,000 Hz by optical switching. Hardware is therefore no longer the factor limiting the measurement speed of fringe projection profilometry; the key is how to reduce the number of fringe images required while maintaining measurement accuracy. Conventional Fourier transform profilometry needs only a single fringe pattern and is fast, but sharp edges, surface discontinuities and large variations of surface reflectivity of the measured object cause spectral aliasing and degrade accuracy. To address this, researchers have proposed π-phase-shifting Fourier transform profilometry (Guo L, Su X, Li J, "Improved Fourier transform profilometry for the automatic measurement of 3D object shapes", Optical Engineering, 1990, 29(12): 1439-1444) and background-subtraction Fourier transform profilometry (Guo H, Huang P, "3D shape measurement by use of a modified Fourier transform method", Proc. SPIE, 2008, 7066: 70660E). The former, however, spreads the height information over two sinusoidal fringe patterns, which increases the sensitivity to motion and is unfavourable for high-speed three-dimensional measurement; the fringe patterns required by the latter cannot be generated accurately in the projector's binary-pattern mode, and once the binary projection mode cannot be used the measurement speed drops greatly. Neither improvement can solve the spectral aliasing caused by large variations of the surface reflectivity of the measured object. For phase-shifting profilometry, which is accurate but needs many fringe patterns and is therefore slow, improvements have also been proposed, such as superimposing dual-frequency fringe patterns (Liu K, Wang Y, Lau D L, "Dual-frequency pattern scheme for high-speed 3-D shape measurement", Optics Express, 2010, 18(5): 5229-5244) and embedding speckle into the fringe pattern (Zhang Y, Xiong Z, Wu F, "Unambiguous 3D measurement from speckle-embedded fringe", Applied Optics, 2013, 52(32): 7797-7805). Even with these improvements, however, the three-dimensional measurement rate remains below 1,000 Hz, which cannot satisfy the need to measure ultra-high-speed scenes such as a bullet leaving the muzzle or a balloon exploding. There is thus currently no three-dimensional shape measurement method that reaches ultra-high speed, i.e. on the order of ten thousand frames per second, while maintaining measurement accuracy.
Summary of the invention
The object of the present invention is to provide an ultrafast three-dimensional shape measurement method and system based on improved Fourier transform profilometry, which significantly increases the speed of three-dimensional shape measurement of objects while maintaining its accuracy.
The technical solution for achieving the object of the present invention is an ultrafast three-dimensional shape measurement method and system based on improved Fourier transform profilometry. The measurement system, consisting of a projector, a camera and a computer, is first calibrated to obtain calibration parameters. The projector cyclically projects 2n patterns onto the measured scene, n ≥ 2, of which n are binary high-frequency sinusoidal fringe patterns with different wavelengths and n are all-white images whose pixel values are all "1"; an all-white image is projected between every two binary high-frequency fringe patterns, and the camera acquires images synchronously. The wrapped phase is then obtained with the background-normalized Fourier transform profilometry method, the initial absolute phase is obtained with the projection-distance-minimization temporal phase unwrapping method, and the initial absolute phase is corrected with the quality-guided map compensation method. Finally, the corrected absolute phase and the calibration coefficients are used to reconstruct the three-dimensional shape of the measured scene and to obtain its three-dimensional coordinates in the world coordinate system, completing the three-dimensional shape measurement of the object.
Compared with the prior art, the present invention has the following significant advantages. (1) The background-normalized Fourier transform profilometry method encodes the three-dimensional information of the moving scene at the current moment in a single fringe image and uses the all-white image to remove the zero-frequency component of the spectrum, eliminating the spectral aliasing caused by sharp edges, surface discontinuities and large reflectivity variations of the measured object and thereby ensuring measurement accuracy. (2) The projection-distance-minimization temporal phase unwrapping method can unwrap the wrapped phase even when the measured scene contains separated objects; the high-frequency sinusoidal fringes guarantee measurement accuracy, and the height information contained in every sinusoidal fringe pattern is exploited, which guarantees measurement speed. (3) The quality-guided map compensation method further corrects the absolute phase obtained by the projection-distance-minimization temporal phase unwrapping method, removing error points that may be caused by motion and further guaranteeing accuracy. (4) In the experiments, a three-dimensional shape measurement system built from a projector and a camera whose binary projection rate and image acquisition rate are both 20,000 Hz and a computer achieved, with the method of the present invention, a reconstruction rate of 10,000 non-repeated three-dimensional frames per second; within a measurement volume of 400 mm × 275 mm × 400 mm the depth accuracy is 80 μm and the temporal error is below 75 μm, and three-dimensional shape measurement was achieved not only for ordinary static and dynamic scenes but also for ultrafast scenes such as a bullet leaving the muzzle and an exploding balloon.
The present invention is described in further detail below with reference to the accompanying drawings.
Brief description of the drawings
FIG. 1 is a schematic flow chart of the measurement method of the present invention.
FIG. 2 shows the experimental measurement results of the present invention for a static complex scene, namely a plaster statue and a hand.
FIG. 3 shows the experimental results of three-dimensional shape measurement by the present invention of a scene in which a bullet is fired from a toy gun, hits a plastic plate and rebounds.
FIG. 4 shows the experimental results of three-dimensional shape measurement by the present invention of a scene in which a flying dart punctures a balloon, causing it to explode.
FIG. 5 is a schematic diagram of the measurement subsystem of the present invention.
Detailed description
With reference to FIG. 1 and FIG. 5, in the ultrafast three-dimensional shape measurement method based on improved Fourier transform profilometry of the present invention, the measurement subsystem, consisting of a projector, a camera and a computer, is first calibrated to obtain calibration parameters. The projector cyclically projects 2n patterns onto the measured scene, n ≥ 2, of which n are binary high-frequency sinusoidal fringe patterns with different wavelengths and n are all-white images whose pixel values are all "1"; an all-white image is projected between every two binary fringe patterns, and the camera acquires images synchronously. The wrapped phase is then obtained with the background-normalized Fourier transform profilometry method, the initial absolute phase is obtained with the projection-distance-minimization temporal phase unwrapping method and corrected with the quality-guided map compensation method, and finally the corrected absolute phase and the calibration coefficients are used to reconstruct the three-dimensional shape of the measured scene and obtain its three-dimensional coordinates in the world coordinate system, completing the three-dimensional shape measurement of the object. The specific implementation steps are as follows:
Step 1: build the measurement subsystem. The measurement subsystem includes a projector, a camera and a computer; the computer is connected to the projector and to the camera by signal lines, and the projector and the camera are connected by a trigger line. There is no strict requirement on the placement of the projector and the camera, as long as the projected and imaged field covers the scene to be measured. The computer runs the manufacturer-supplied software for controlling the projector and the camera; the software sets the parameters of both devices and controls the projection and image acquisition process. MATLAB is also installed on the computer, and all image processing after acquisition is implemented in MATLAB code. The camera and the projector are calibrated with Zhang Zhengyou's camera calibration method (Z. Zhang, "A flexible new technique for camera calibration", IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(11), 1330-1334, 2000) and with the structured-light system calibration method of Zhang S (Zhang S, Huang P S, "Novel method for structured light system calibration", Optical Engineering, 2006, 45(8): 083601), yielding the calibration parameters, including the intrinsic and extrinsic parameters of the camera and the projector.
Step 2: project and acquire images. The specific procedure is as follows. The projector cyclically projects 2n patterns (n ≥ 2) onto the measured scene, comprising n binary high-frequency sinusoidal fringe patterns with different wavelengths and n all-white images whose pixel values are all "1"; an all-white image is projected between every two binary fringe patterns, and the camera acquires images synchronously. The wavelengths of the n high-frequency sinusoidal fringe patterns projected by the projector must all be different and are denoted {λ_1, λ_2, ..., λ_n}. Two conditions must be satisfied when choosing the wavelengths: ① the wavelength of the sinusoidal fringes is small enough (e.g. at least 50 fringes in one pattern) so that the phase can be recovered successfully with conventional Fourier transform profilometry; ② the least common multiple of this set of wavelengths is greater than or equal to the resolution of the projector along the direction in which the sinusoidal intensity varies, denoted W (if the projector's horizontal resolution is W and the projected fringes are vertical, i.e. the fringe intensity varies along the projector's horizontal direction, then the least common multiple of the fringe wavelengths must be greater than or equal to W), i.e.

LCM(λ_1, λ_2, ..., λ_n) ≥ W      (1)

where LCM denotes the least-common-multiple operation. The generated high-frequency sinusoidal fringes are expressed in projector space as

I^p(x^p, y^p) = a^p + b^p cos(2π f^p x^p)      (2)

where the superscript p, the initial letter of "projector", denotes projector space, I^p is the fringe intensity, (x^p, y^p) are the projector pixel coordinates, a^p is the mean of the sinusoidal fringe intensity, b^p is the sinusoidal fringe amplitude, and f^p = 1/λ is the frequency of the sinusoidal fringes.
The high-frequency sinusoidal fringes are then converted into binary high-frequency sinusoidal fringes by halftoning (Floyd R W, "An adaptive algorithm for spatial gray-scale", Proc. Soc. Inf. Disp., 1976), so that the projector can project at its fastest native rate and the hardware does not limit the measurement speed. Since the fringe pattern is binary, a^p and b^p in equation (2) are both 1/2, and equation (2) becomes

I_1^p(x^p, y^p) = 1/2 + 1/2 cos(2π f_1^p x^p)      (3)

where I_1^p denotes the intensity of the first high-frequency sinusoidal fringe pattern. The all-white image projected between the binary fringe patterns is an image in which every pixel has the value "1", i.e. all micromirrors of the digital micromirror device (DMD), the core component of the projector, are in the "on" state:

I_w^p(x^p, y^p) = 1      (4)

where I_w^p denotes the intensity of the all-white image and (x^p, y^p) are again the projector pixel coordinates. The expressions of the remaining high-frequency sinusoidal fringes are the same as equation (3), differing only in the frequency f_i^p according to the wavelength. The 2n images are projected cyclically onto the measured scene by the projector, and the camera acquires images synchronously according to the projector's trigger signal.
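Condition (1) on the wavelengths can be checked in one line, as in the small Python sketch below (illustrative only; it requires Python 3.9+ for math.lcm with multiple arguments, and the example wavelengths are arbitrary values chosen to satisfy the condition, not the set used in the patent's experiments).

```python
from math import lcm

def wavelengths_ok(wavelengths, W):
    """Condition (1): the least common multiple of the fringe wavelengths must be
    at least the projector resolution W along the direction of fringe variation."""
    return lcm(*wavelengths) >= W

print(wavelengths_ok([16, 19, 21], 1024))   # True: lcm(16, 19, 21) = 6384 >= 1024
```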
Step 3: obtain the wrapped phase with the background-normalized Fourier transform profilometry method. The procedure is as follows. After the camera images have been obtained, every two acquired images, i.e. one high-frequency sinusoidal fringe image and the corresponding all-white image, are processed in turn. The high-frequency sinusoidal fringe image and the all-white image captured by the camera are expressed as

I_1(x^c, y^c) = α(x^c, y^c) [1/2 + 1/2 cos(2π f_0 x^c + φ(x^c, y^c))]      (5)
I_2(x^c, y^c) = α(x^c, y^c)      (6)

where the superscript c, the initial letter of "camera", denotes camera space, I_1 is the image captured by the camera when the high-frequency fringe pattern is projected onto the measured scene, I_2 is the image captured when the all-white image is projected onto the measured scene, (x^c, y^c) are the camera pixel coordinates, α(x^c, y^c) is the reflectivity of the measured object, f_0 is the sinusoidal fringe frequency, and φ(x^c, y^c) is the phase containing the depth information of the object. The term α(x^c, y^c)/2 becomes the zero-frequency component after the Fourier transform, and its presence causes spectral aliasing. By using I_1 and I_2 of equations (5) and (6), the zero-frequency component and the influence of the surface reflectivity α(x^c, y^c) of the measured object can be removed before the Fourier transform, see equation (7):

I_d(x^c, y^c) = I_1(x^c, y^c) / (I_2(x^c, y^c) + γ) − 1/2      (7)

where γ is a small constant (e.g. 0.01) whose main purpose is to prevent division by zero. The background-normalized I_d is then Fourier transformed, a filter (e.g. a Hanning window) is used to extract the useful spectral information, and the selected spectrum is inverse Fourier transformed to obtain the wrapped phase. The all-white image thus removes, before the Fourier transform, the zero-frequency component and the influence of the non-uniform distribution of the reflectivity α(x^c, y^c) over the surface of the measured object, effectively solving the spectral aliasing problem. This step yields the wrapped phase corresponding to every high-frequency fringe image captured by the camera, which contains the depth information of the scene at the instant at which that fringe image was acquired.
Step 4: obtain the initial absolute phase with the projection-distance-minimization temporal phase unwrapping method. The procedure is as follows. After step 3 has produced the phase corresponding to each high-frequency fringe image acquired by the camera, this phase is wrapped into (−π, π] and is therefore called the wrapped phase; it is ambiguous and must be unwrapped to obtain the absolute phase. The wrapped phases corresponding to a group of high-frequency fringe patterns are used to unwrap each of them. The high-frequency fringes projected in step 2 have different wavelengths, written as a wavelength vector λ = [λ_1, λ_2, ..., λ_n]^T, and the wrapped phases obtained for each high-frequency fringe pattern by the background-normalized Fourier transform profilometry method of step 3 are written as a wrapped-phase vector φ = [φ_1, φ_2, ..., φ_n]^T. Because the resolution of the projector along the direction of sinusoidal fringe intensity variation is finite, the number of possible fringe-order combinations is also finite, and the fringe-order combinations can be enumerated one by one ("Temporal phase unwrapping using orthographic projection", Optics & Lasers in Engineering, 2017, 90: 34-47). Each fringe-order vector is denoted k_i and contains, in order, the fringe order corresponding to each wrapped phase, k_i = [k_1, k_2, ..., k_n]^T. For each fringe-order vector k_i, the corresponding absolute phase Φ_i is computed as

Φ_i = φ + 2π k_i      (8)

where Φ_i is the absolute phase vector, φ is the wrapped-phase vector and k_i is the fringe-order vector. The projection-point vector of the absolute phase is then computed with equations (9) and (10): the scalar t is obtained from the absolute-phase vector and the wavelength vector (equation (9)), and

P_i = t λ_i^{-1}      (10)

where λ_i is the wavelength vector, Φ_i is the absolute phase vector, n is the number of projected sinusoidal fringe patterns and P_i is the projection-point vector. Finally, the distance between the two is obtained with equation (11),

d_i(x^c, y^c) = ||Φ_i − P_i||      (11)

The fringe-order vector corresponding to the minimum distance d_min(x^c, y^c) is selected as the optimal solution, which at the same time yields the absolute phase Φ corresponding to the optimal solution, taken as the initial absolute phase.
The measurement range of the measurement subsystem is necessarily limited, which allows the range of enumerated fringe-order combinations to be narrowed further. First, the depth range of the measured scene is estimated as [z_min, z_max], where z_min is the minimum and z_max the maximum depth of the measurement range in the world coordinate system. From the calibration coefficients and the method mentioned in (Liu K, "Real-time 3-D reconstruction by means of structured light illumination", 2010), a real-time structured-light-illumination three-dimensional shape measurement method, the range of the phase distribution [Φ_min, Φ_max] is obtained, where Φ_min is the minimum and Φ_max the maximum of the absolute phase. The range of fringe orders is then obtained from

k_min(x^c, y^c) = floor[(Φ_min(x^c, y^c) − φ(x^c, y^c)) / 2π]      (12)
k_max(x^c, y^c) = ceil[(Φ_max(x^c, y^c) − φ(x^c, y^c)) / 2π]      (13)

where k_min is the minimum fringe order, k_max the maximum fringe order, (x^c, y^c) are the camera pixel coordinates, floor denotes rounding down, Φ_min is the phase minimum, ceil denotes rounding up, and Φ_max is the phase maximum. Narrowing the range of fringe orders eliminates some erroneous fringe-order combinations, reduces error points and improves measurement accuracy.
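The per-pixel bound on the fringe order implied by the depth constraint can be computed as below (Python/NumPy, illustrative). The assignment of floor to the lower bound and ceil to the upper bound follows the reading of equations (12) and (13) given above, which is an assumption made from the symbol list; the numerical values in the example are hypothetical.

```python
import numpy as np

def fringe_order_bounds(phi, Phi_min, Phi_max):
    """Admissible fringe-order range per pixel from the depth constraint, assuming
    k_min = floor((Phi_min - phi)/2pi) and k_max = ceil((Phi_max - phi)/2pi)."""
    k_min = np.floor((Phi_min - phi) / (2 * np.pi)).astype(int)
    k_max = np.ceil((Phi_max - phi) / (2 * np.pi)).astype(int)
    return k_min, k_max

# Example: a pixel with wrapped phase 1.2 rad whose absolute phase, given the
# measurement volume, must lie between 40 rad and 90 rad.
print(fringe_order_bounds(1.2, 40.0, 90.0))   # -> (6, 15)
```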
Step 5: correct the initial absolute phase with the quality-guided map compensation method. The specific procedure is as follows. Because the images acquired by the camera may be of low quality (e.g. low fringe contrast) and because the non-negligible fast motion of the measured object between frames affects the result, the absolute phase obtained in step 4 may contain places where the fringe order has been solved incorrectly. The quality-guided map compensation method therefore further corrects the absolute phase in the spatial domain, which removes these errors and improves measurement accuracy. The two main questions in the quality-guided map compensation method are which quantity to choose as the reliability parameter, i.e. how to assess whether the absolute phase of a pixel is correct, and how to design the correction path. The minimum projection distance d_min obtained for each pixel in step 4 is taken as the basis for assessing the reliability of the absolute phase (a larger d_min means a lower reliability of the corresponding absolute phase). The reliability at a pixel boundary is defined as the sum of the reliability values of the two adjacent pixels, and the processing path is determined by comparing the reliability values at the pixel boundaries: correction starts from the pixels with large reliability values, the reliability values of all pixel boundaries are stored in a queue and sorted by magnitude, and the boundaries with larger values are processed first, yielding the corrected absolute phase.
The specific steps of this processing are:
(1) Compute the reliability value at each pixel boundary, i.e. add the minimum projection distances d_min, obtained in the previous step, of the two pixels connected at the boundary, and use the sum as the reliability value at that boundary.
(2) Examine adjacent pixels in turn; if the absolute value of the difference between the phase values of two adjacent pixels is smaller than π, assign the two pixels to the same group; group all pixels in this way.
(3) Correct the absolute phase boundary by boundary in the order of the reliability values at the pixel boundaries, processing boundaries with higher reliability first. If the two connected pixels belong to the same group, do nothing. If the two connected pixels belong to different groups and the group with fewer pixels contains fewer than a threshold T_h pixels (the value of T_h is chosen according to the circumstances: a group with fewer than T_h pixels is regarded as error points, a group with more than T_h pixels as a separate object), correct all phase values in the smaller group according to the larger group and merge the two groups: letting Φ_L and Φ_S be the phase values of the pixels belonging to the larger and the smaller group respectively, add Round((Φ_L − Φ_S)/2π) multiplied by 2π to the phase values of all pixels in the smaller group and merge the two groups into one, where Round denotes rounding to the nearest integer.
(4) Repeat step (3) until all pixel boundaries in the queue have been processed.
After the above steps, the correction of the absolute phase by the quality-guided map compensation method is complete; erroneous absolute phases are corrected and the measurement accuracy is further improved.
Step 6: perform three-dimensional reconstruction with the calibration coefficients and the corrected absolute phase to complete the three-dimensional shape measurement. The specific procedure is as follows.
From the calibration parameters obtained in step 1 (the intrinsic and extrinsic parameters of the camera and the projector) and the corrected absolute phase Φ obtained in step 5, the final three-dimensional world coordinates are obtained with equation (14), completing the reconstruction.
Here E_X, F_X, E_Y, F_Y, M_Z, N_Z, C_Z are intermediate variables obtained from the calibration parameters by the method of (K. Liu, Y. Wang, et al., "Dual-frequency pattern scheme for high-speed 3-D shape measurement", Optics Express, 18(5), 5229-5244, 2010); Φ is the absolute phase, W is the resolution of the projector along the direction of fringe intensity variation, N_L is the corresponding number of fringes, x^p is the projector coordinate, and X^p, Y^p, Z^p are the three-dimensional spatial coordinates of the measured object. This gives the three-dimensional data of the measured scene at the current moment. The acquired sequence of two-dimensional patterns is then processed repeatedly according to the above steps with a sliding window of 2n images, yielding the three-dimensional shape measurement of the ultrafast moving scene over the entire measurement period.
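The sliding-window organisation of the whole pipeline can be summarised in the Python sketch below. The four callables stand for the operations of steps 3 to 6 and are deliberately left as injected placeholders rather than implementations of the patent's exact algorithms; the demo values only show the data flow, namely that every new fringe/white pair yields one new 3D frame once wrapped phases for all n wavelengths are available.

```python
from collections import deque

def sliding_window_pipeline(frames, wavelengths, wrap_fn, unwrap_fn, correct_fn, rebuild_fn):
    """Pair each fringe frame with the following all-white frame, keep the wrapped
    phases of the last n wavelengths, and yield one reconstruction per new pair."""
    n = len(wavelengths)
    recent = deque(maxlen=n)                      # (wavelength, wrapped phase) of last n fringes
    for i in range(0, len(frames) - 1, 2):
        fringe, white = frames[i], frames[i + 1]
        lam = wavelengths[(i // 2) % n]
        recent.append((lam, wrap_fn(fringe, white)))          # step 3
        if len(recent) == n:
            yield rebuild_fn(correct_fn(unwrap_fn(list(recent))))   # steps 4-6

if __name__ == "__main__":
    # Dummy stand-ins just to show the data flow; the real steps are those described above.
    frames = list(range(20))                      # 10 fringe/white pairs
    wavelengths = [14, 16, 18]
    out = list(sliding_window_pipeline(
        frames, wavelengths,
        wrap_fn=lambda f, w: f,                                 # step 3 placeholder
        unwrap_fn=lambda window: sum(p for _, p in window),     # step 4 placeholder
        correct_fn=lambda Phi: Phi,                             # step 5 placeholder
        rebuild_fn=lambda Phi: Phi))                            # step 6 placeholder
    print(len(out), "3D frames from", len(frames) // 2, "fringe/white pairs")
```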
With reference to FIG. 1 and FIG. 5, the ultrafast three-dimensional shape measurement system based on improved Fourier transform profilometry of the present invention comprises a measurement subsystem, a Fourier transform profilometry subsystem, a calibration unit, a projection and image acquisition unit, and a three-dimensional reconstruction unit.
The measurement subsystem consists of a projector, a camera and a computer.
The Fourier transform profilometry subsystem consists of a background-normalized Fourier transform profilometry module, a projection-distance-minimization temporal phase unwrapping module and a quality-guided map compensation module.
The calibration unit calibrates the measurement subsystem and obtains the calibration parameters.
In the projection and image acquisition unit, the projector cyclically projects 2n patterns onto the measured scene, n ≥ 2, of which n are binary high-frequency sinusoidal fringe patterns with different wavelengths and n are all-white images whose pixel values are all "1"; an all-white image is projected between every two binary fringe patterns, and the camera acquires images synchronously.
The background-normalized Fourier transform profilometry module processes the images acquired by the camera to obtain the wrapped phase, the projection-distance-minimization temporal phase unwrapping module yields the initial absolute phase, and the quality-guided map compensation module corrects the initial absolute phase.
The three-dimensional reconstruction unit uses the corrected absolute phase and the calibration coefficients to reconstruct the three-dimensional shape of the measured scene and obtain its three-dimensional coordinates in the world coordinate system, completing the three-dimensional shape measurement of the object.
The specific processing in the projection and image acquisition unit, the three-dimensional reconstruction unit, the background-normalized Fourier transform profilometry module, the projection-distance-minimization temporal phase unwrapping module and the quality-guided map compensation module is as described in the steps above.
The measurement accuracy and measurement speed of the ultrafast three-dimensional shape measurement method based on improved Fourier transform profilometry were verified experimentally. In the experiments, a three-dimensional shape measurement system was built from a projector and a camera whose binary-pattern projection rate and image acquisition rate are both 20,000 Hz and a computer; the resolution of the projector was 1024 × 768. Six binary patterns were cyclically projected onto the measured scene: three binary high-frequency sinusoidal fringe patterns with wavelengths {λ_1, λ_2, λ_3} = {14, 16, 18} (in pixels) and three all-white images with pixel value "1"; the all-white images were interleaved between the binary high-frequency fringe patterns, and the camera acquired images synchronously. The experimental system achieved a reconstruction rate of 10,000 non-repeated three-dimensional frames per second; within the 400 mm × 275 mm × 400 mm measurement volume the depth accuracy was 80 μm and the temporal error was below 75 μm. The experiments measured a complex static scene, consisting of a plaster statue and a hand, and two high-speed scenes: a bullet fired from a toy gun that hits a plastic plate and rebounds, and a flying dart that hits a balloon and causes it to explode. The drawings describe the experimental results in detail.
FIG. 2 shows the experimental results of three-dimensional shape measurement of a static scene containing a plaster statue and a hand using the ultrafast three-dimensional shape measurement method based on improved Fourier transform profilometry. (a1)-(a3) are the images acquired synchronously by the camera when the binary high-frequency sinusoidal fringe patterns are projected onto the measured scene; (a4) is the image acquired when the all-white image is projected; (b1)-(b3) are the wrapped phase maps obtained with the background-normalized Fourier transform profilometry method; (b4) is the minimum projection distance d_min of each pixel obtained in the projection-distance-minimization temporal phase unwrapping method; (c1)-(c3) are the initial absolute phase maps obtained with the projection-distance-minimization temporal phase unwrapping method; (c4) is the three-dimensional measurement result reconstructed from the phase corresponding to (c2); (d1)-(d3) are the absolute phase maps after the depth constraint has been added to limit the fringe-order search range, in which the error points are clearly reduced and the measurement accuracy improved; (d4) is the three-dimensional measurement result reconstructed from the phase corresponding to (d2); (e1)-(e3) are the absolute phase maps after correction of the initial absolute phase by the quality-guided map compensation method, in which the error points are further reduced and the measurement accuracy improved again; (e4) is the three-dimensional measurement result reconstructed from the phase corresponding to (e2). The three-dimensional measurement results obtained after these steps contain almost no error points, fully demonstrating the high measurement accuracy of the ultrafast three-dimensional shape measurement method based on improved Fourier transform profilometry.
FIG. 3 shows the results of three-dimensional shape measurement, with the ultrafast three-dimensional shape measurement method based on improved Fourier transform profilometry, of a scene in which a bullet is fired from a toy gun, hits a plastic plate and rebounds. (a) shows the images acquired by the camera at different time points; (b) shows the three-dimensional measurement results corresponding to the two-dimensional images in (a); (c) shows the measurement of the bullet as it leaves the muzzle (the boxed region in (b)) and the three-dimensional measurements of the bullet at three instants (7.5 ms, 12.6 ms, 17.7 ms); the inset of (c) shows the contour of the bullet at 17.7 ms viewed from different sides; (d) shows the measurement at the last instant (135 ms), the curve in the figure showing the trajectory of the bullet over the 135 ms interval, and the inset of (d) plots the bullet velocity as a function of time. The experimental results fully demonstrate that the method can accurately recover the three-dimensional shape of the whole process in which the bullet leaves the toy gun, hits the plastic plate and rebounds, proving the high speed and accuracy of this three-dimensional shape measurement method.
FIG. 4 shows the results of three-dimensional shape measurement, with the ultrafast three-dimensional shape measurement method based on improved Fourier transform profilometry, of a scene in which a flying dart hits a balloon and causes it to explode. (a) shows the images acquired by the camera at different time points; (b) shows the three-dimensional measurement results corresponding to the two-dimensional images in (a); (c) and (d) are the continuations of (a) and (b), respectively; (e) shows the reconstructed three-dimensional contours along the dashed line on the balloon marked in (a) at 10.7 ms, 11.4 ms, 12.1 ms, 12.8 ms and 13.7 ms. The experimental results fully demonstrate that the method can accurately recover the three-dimensional shape of the whole process in which the flying dart hits the balloon and causes it to explode, proving the high speed and accuracy of this three-dimensional shape measurement method.

Claims (10)

  1. An ultrafast three-dimensional shape measurement method based on improved Fourier transform profilometry, characterized in that the measurement system, consisting of a projector, a camera and a computer, is first calibrated to obtain calibration parameters; the projector then cyclically projects 2n patterns onto the measured scene, n ≥ 2, of which n are binary high-frequency sinusoidal fringe patterns with different wavelengths and n are all-white images whose pixel values are all "1", an all-white image being projected between every two binary high-frequency sinusoidal fringe patterns, and the camera acquires images synchronously; the wrapped phase is then obtained with the background-normalized Fourier transform profilometry method, the initial absolute phase is obtained with the projection-distance-minimization temporal phase unwrapping method, and the initial absolute phase is corrected with the quality-guided map compensation method; finally, the corrected absolute phase and the calibration coefficients are used to reconstruct the three-dimensional shape of the measured scene and to obtain its three-dimensional coordinates in the world coordinate system, thereby completing the three-dimensional shape measurement of the object.
  2. The ultrafast three-dimensional shape measurement method based on improved Fourier transform profilometry according to claim 1, characterized in that the specific processing for projecting and acquiring images is as follows:
    the wavelengths of the n high-frequency sinusoidal fringe patterns projected by the projector must all be different and are denoted {λ_1, λ_2, ..., λ_n}; two conditions must be satisfied when choosing the wavelengths: ① the wavelength of the sinusoidal fringes is small enough that the phase can be recovered successfully with conventional Fourier transform profilometry; ② the least common multiple of this set of wavelengths is greater than or equal to the resolution of the projector along the direction of sinusoidal intensity variation, denoted W, i.e.
    LCM(λ_1, λ_2, ..., λ_n) ≥ W      (1)
    where LCM denotes the least-common-multiple operation; the generated high-frequency sinusoidal fringes are expressed in projector space as
    I^p(x^p, y^p) = a^p + b^p cos(2π f^p x^p)      (2)
    where the superscript p denotes projector space, I^p is the fringe intensity, (x^p, y^p) are the projector pixel coordinates, a^p is the mean of the sinusoidal fringe intensity, b^p is the sinusoidal fringe amplitude, and f^p = 1/λ is the frequency of the sinusoidal fringes;
    the high-frequency sinusoidal fringes are then converted into binary high-frequency sinusoidal fringes by halftoning, so that the projector can project at its fastest native rate and the hardware does not limit the measurement speed; since the fringe pattern is binary, a^p and b^p in equation (2) are both 1/2 and equation (2) becomes
    I_1^p(x^p, y^p) = 1/2 + 1/2 cos(2π f_1^p x^p)      (3)
    where I_1^p denotes the intensity of the first high-frequency sinusoidal fringe pattern; the all-white image projected between the binary fringe patterns is an image in which every pixel has the value "1", i.e. all micromirrors of the digital micromirror device DMD, the core component of the projector, are in the "on" state:
    I_w^p(x^p, y^p) = 1      (4)
    where I_w^p denotes the intensity of the all-white image and (x^p, y^p) are again the projector pixel coordinates; the expressions of the remaining high-frequency sinusoidal fringes are the same as equation (3), differing only in the frequency f_i^p according to the wavelength; the 2n images are projected cyclically onto the measured scene by the projector, and the camera acquires images synchronously according to the projector's trigger signal.
  3. The ultrafast three-dimensional shape measurement method based on improved Fourier transform profilometry according to claim 1, characterized in that the wrapped phase is obtained with the background-normalized Fourier transform profilometry method as follows: after the camera images have been obtained, every two acquired images, i.e. one high-frequency sinusoidal fringe image and the corresponding all-white image, are processed in turn; the high-frequency sinusoidal fringe image and the all-white image captured by the camera are expressed as
    I_1(x^c, y^c) = α(x^c, y^c) [1/2 + 1/2 cos(2π f_0 x^c + φ(x^c, y^c))]      (5)
    I_2(x^c, y^c) = α(x^c, y^c)      (6)
    where the superscript c denotes camera space, I_1 is the image captured by the camera when the high-frequency fringe pattern is projected onto the measured scene, I_2 is the image captured when the all-white image is projected onto the measured scene, (x^c, y^c) are the camera pixel coordinates, α(x^c, y^c) is the reflectivity of the measured object, f_0 is the sinusoidal fringe frequency, and φ(x^c, y^c) is the phase containing the depth information of the object; the term α(x^c, y^c)/2 becomes the zero-frequency component after the Fourier transform; by using I_1 and I_2, the zero-frequency component and the influence of the surface reflectivity α(x^c, y^c) of the measured object can be removed before the Fourier transform, see equation (7):
    I_d(x^c, y^c) = I_1(x^c, y^c) / (I_2(x^c, y^c) + γ) − 1/2      (7)
    where γ is a constant; the background-normalized I_d is then Fourier transformed, a filter is used to extract the useful information, and the inverse Fourier transform yields the wrapped phase; this step yields the wrapped phase corresponding to every high-frequency fringe image captured by the camera, which contains the depth information of the scene at each instant at which the camera acquired a high-frequency fringe image.
  4. The ultrafast three-dimensional shape measurement method based on improved Fourier transform profilometry according to claim 1, characterized in that the initial absolute phase is obtained with the projection-distance-minimization temporal phase unwrapping method as follows:
    the wrapped phases corresponding to a group of high-frequency sinusoidal fringe patterns are used to unwrap each of them; the high-frequency fringes projected by the projector have different wavelengths, written as a wavelength vector λ = [λ_1, λ_2, ..., λ_n]^T, and the wrapped phases obtained for each high-frequency fringe pattern by the background-normalized Fourier transform profilometry method are written as a wrapped-phase vector φ = [φ_1, φ_2, ..., φ_n]^T; the fringe-order combinations are enumerated one by one, each fringe-order vector being denoted k_i and containing, in order, the fringe order corresponding to each wrapped phase, k_i = [k_1, k_2, ..., k_n]^T; for each fringe-order vector k_i the corresponding absolute phase Φ_i is computed as
    Φ_i = φ + 2π k_i      (8)
    where Φ_i is the absolute phase vector, φ is the wrapped-phase vector and k_i is the fringe-order vector; the projection-point vector of the absolute phase is then computed with equations (9) and (10), the scalar t being obtained from the absolute-phase vector and the wavelength vector (equation (9)) and
    P_i = t λ_i^{-1}      (10)
    where λ_i is the wavelength vector, Φ_i is the absolute phase vector, n is the number of projected sinusoidal fringe patterns and P_i is the projection-point vector; finally the distance between the two is obtained with equation (11),
    d_i(x^c, y^c) = ||Φ_i − P_i||      (11)
    and the fringe-order vector corresponding to the minimum distance d_min is selected as the optimal solution, which at the same time yields the absolute phase Φ corresponding to the optimal solution, taken as the initial absolute phase.
  5. The ultrafast three-dimensional shape measurement method based on improved Fourier transform profilometry according to claim 4, characterized in that the range of the enumerated fringe-order combinations is narrowed further by a depth constraint: the depth range of the measured scene is first estimated as [z_min, z_max], where z_min is the minimum and z_max the maximum depth of the measurement range in the world coordinate system; from the calibration coefficients and the depth-constraint method, the range of the phase distribution [Φ_min, Φ_max] is obtained, where Φ_min is the minimum and Φ_max the maximum of the absolute phase; the range of fringe orders is then obtained from
    k_min(x^c, y^c) = floor[(Φ_min(x^c, y^c) − φ(x^c, y^c)) / 2π]      (12)
    k_max(x^c, y^c) = ceil[(Φ_max(x^c, y^c) − φ(x^c, y^c)) / 2π]      (13)
    where k_min is the minimum fringe order, k_max the maximum fringe order, (x^c, y^c) are the camera pixel coordinates, floor denotes rounding down, Φ_min is the phase minimum, ceil denotes rounding up, and Φ_max is the phase maximum.
  6. The ultrafast three-dimensional shape measurement method based on improved Fourier transform profilometry according to claim 1, characterized in that the initial absolute phase is corrected with the quality-guided map compensation method as follows: the minimum projection distance d_min of each pixel is taken as the basis for assessing the reliability of the absolute phase; the reliability at a pixel boundary is defined as the sum of the reliability values of the two adjacent pixels; the processing path is determined by comparing the reliability values at the pixel boundaries, i.e. correction starts from the pixels with large reliability values, the reliability values of all pixel boundaries are stored in a queue and sorted by magnitude, and the boundaries with larger values are processed first, yielding the corrected absolute phase.
  7. The ultrafast three-dimensional shape measurement method based on improved Fourier transform profilometry according to claim 6, characterized in that the specific steps of the quality-guided map compensation processing are:
    (1) compute the reliability value at each pixel boundary, i.e. add the minimum projection distances d_min, obtained in the projection-distance-minimization temporal phase unwrapping method, of the two pixels connected at the boundary, and use the sum as the reliability value at that boundary;
    (2) examine adjacent pixels in turn; if the absolute value of the difference between the phase values of two adjacent pixels is smaller than π, assign the two pixels to the same group, and group all pixels in this way;
    (3) correct the absolute phase boundary by boundary in the order of the reliability values at the pixel boundaries, processing boundaries with higher reliability first; if the two connected pixels belong to the same group, do nothing; if the two connected pixels belong to different groups and the group with fewer pixels contains fewer than a threshold T_h pixels, correct all phase values in the smaller group according to the larger group and merge the two groups, i.e. letting Φ_L and Φ_S be the phase values of the pixels belonging to the larger and the smaller group respectively, add Round((Φ_L − Φ_S)/2π) multiplied by 2π to the phase values Φ_S of all pixels in the smaller group and merge the two groups into one, where Round denotes rounding to the nearest integer;
    (4) repeat step (3) until all pixel boundaries in the queue have been processed.
  8. The ultrafast three-dimensional shape measurement method based on improved Fourier transform profilometry according to claim 1, characterized in that three-dimensional reconstruction is performed with the calibration coefficients and the corrected absolute phase to complete the three-dimensional shape measurement as follows:
    from the calibration parameters and the corrected absolute phase Φ, the final three-dimensional world coordinates are obtained with equation (14), completing the reconstruction,
    where E_X, F_X, E_Y, F_Y, M_Z, N_Z, C_Z are intermediate variables, Φ is the absolute phase, W is the resolution of the projector along the direction of fringe intensity variation, N_L is the corresponding number of fringes, x^p is the projector coordinate, and X^p, Y^p, Z^p are the three-dimensional spatial coordinates of the measured object in the world coordinate system; this gives the three-dimensional data of the measured scene at the current moment, and the acquired sequence of two-dimensional patterns is then processed repeatedly to obtain the three-dimensional shape reconstruction of the ultrafast moving scene over the entire measurement period.
  9. An ultrafast three-dimensional shape measurement system based on improved Fourier transform profilometry, characterized by comprising a measurement subsystem, a Fourier transform profilometry subsystem, a calibration unit, a projection and image acquisition unit and a three-dimensional reconstruction unit;
    the measurement subsystem consists of a projector, a camera and a computer;
    the Fourier transform profilometry subsystem consists of a background-normalized Fourier transform profilometry module, a projection-distance-minimization temporal phase unwrapping module and a quality-guided map compensation module;
    the calibration unit calibrates the measurement subsystem and obtains the calibration parameters;
    in the projection and image acquisition unit, the projector cyclically projects 2n patterns onto the measured scene, n ≥ 2, of which n are binary high-frequency sinusoidal fringe patterns with different wavelengths and n are all-white images whose pixel values are all "1", an all-white image being projected between every two binary fringe patterns, and the camera acquires images synchronously;
    the background-normalized Fourier transform profilometry module processes the images acquired by the camera to obtain the wrapped phase, the projection-distance-minimization temporal phase unwrapping module yields the initial absolute phase, and the quality-guided map compensation module corrects the initial absolute phase;
    the three-dimensional reconstruction unit uses the corrected absolute phase and the calibration coefficients to reconstruct the three-dimensional shape of the measured scene and obtain its three-dimensional coordinates in the world coordinate system, thereby completing the three-dimensional shape measurement of the object.
  10. The ultrafast three-dimensional shape measurement system based on improved Fourier transform profilometry according to claim 9, characterized in that in the background-normalized Fourier transform profilometry module, after the camera images have been obtained, every two acquired images, i.e. one high-frequency sinusoidal fringe image and the corresponding all-white image, are processed in turn; the high-frequency sinusoidal fringe image and the all-white image captured by the camera are expressed as
    I_1(x^c, y^c) = α(x^c, y^c) [1/2 + 1/2 cos(2π f_0 x^c + φ(x^c, y^c))]      (5)
    I_2(x^c, y^c) = α(x^c, y^c)      (6)
    where the superscript c denotes camera space, I_1 is the image captured by the camera when the high-frequency fringe pattern is projected onto the measured scene, I_2 is the image captured when the all-white image is projected onto the measured scene, (x^c, y^c) are the camera pixel coordinates, α(x^c, y^c) is the reflectivity of the measured object, f_0 is the sinusoidal fringe frequency, and φ(x^c, y^c) is the phase containing the depth information of the object; the term α(x^c, y^c)/2 becomes the zero-frequency component after the Fourier transform; by using I_1 and I_2, the zero-frequency component and the influence of the surface reflectivity α(x^c, y^c) of the measured object can be removed before the Fourier transform, see equation (7):
    I_d(x^c, y^c) = I_1(x^c, y^c) / (I_2(x^c, y^c) + γ) − 1/2      (7)
    where γ is a constant; the background-normalized I_d is then Fourier transformed, a filter is used to extract the useful information, and the inverse Fourier transform yields the wrapped phase; this step yields the wrapped phase corresponding to every high-frequency fringe image captured by the camera, which contains the depth information of the scene at each instant at which the camera acquired a high-frequency fringe image;
    in the projection-distance-minimization temporal phase unwrapping module, the wrapped phases corresponding to a group of high-frequency sinusoidal fringe patterns are used to unwrap each of them; the high-frequency fringes projected by the projector have different wavelengths, written as a wavelength vector λ = [λ_1, λ_2, ..., λ_n]^T, and the wrapped phases obtained for each high-frequency fringe pattern by the background-normalized Fourier transform profilometry method are written as a wrapped-phase vector φ = [φ_1, φ_2, ..., φ_n]^T; the fringe-order combinations are enumerated one by one, each fringe-order vector being denoted k_i and containing, in order, the fringe order corresponding to each wrapped phase, k_i = [k_1, k_2, ..., k_n]^T; for each fringe-order vector k_i the corresponding absolute phase Φ_i is computed as
    Φ_i = φ + 2π k_i      (8)
    where Φ_i is the absolute phase vector, φ is the wrapped-phase vector and k_i is the fringe-order vector; the projection-point vector of the absolute phase is then computed with equations (9) and (10), the scalar t being obtained from the absolute-phase vector and the wavelength vector (equation (9)) and
    P_i = t λ_i^{-1}      (10)
    where λ_i is the wavelength vector, Φ_i is the absolute phase vector, n is the number of projected sinusoidal fringe patterns and P_i is the projection-point vector; finally the distance between the two is obtained with equation (11),
    d_i(x^c, y^c) = ||Φ_i − P_i||      (11)
    and the fringe-order vector corresponding to the minimum distance is selected as the optimal solution, which at the same time yields the absolute phase Φ corresponding to the optimal solution, taken as the initial absolute phase;
    in the quality-guided map compensation module, the minimum projection distance d_min of each pixel is taken as the basis for assessing the reliability of the absolute phase; the reliability at a pixel boundary is defined as the sum of the reliability values of the two adjacent pixels; the processing path is determined by comparing the reliability values at the pixel boundaries, i.e. correction starts from the pixels with large reliability values, the reliability values of all pixel boundaries are stored in a queue and sorted by magnitude, and the boundaries with larger values are processed first, yielding the corrected absolute phase.
PCT/CN2018/077216 2017-03-24 2018-02-26 Ultrafast three-dimensional shape measurement method and system based on improved Fourier transform profilometry WO2018171385A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/496,815 US11029144B2 (en) 2017-03-24 2018-02-26 Super-rapid three-dimensional topography measurement method and system based on improved fourier transform contour technique

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710182456.8 2017-03-24
CN201710182456.8A CN107044833B (zh) 2017-03-24 2017-08-15 Ultrafast three-dimensional shape measurement method and system based on improved Fourier transform profilometry

Publications (1)

Publication Number Publication Date
WO2018171385A1 true WO2018171385A1 (zh) 2018-09-27

Family

ID=59545094

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/077216 WO2018171385A1 (zh) 2017-03-24 2018-02-26 Ultrafast three-dimensional shape measurement method and system based on improved Fourier transform profilometry

Country Status (3)

Country Link
US (1) US11029144B2 (zh)
CN (1) CN107044833B (zh)
WO (1) WO2018171385A1 (zh)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111462081A (zh) * 2020-03-31 2020-07-28 西安工程大学 一种用于工件表面质量检测的特征区域快速提取方法
CN113532330A (zh) * 2021-08-28 2021-10-22 哈尔滨理工大学 一种相位格雷码三维测量方法
CN116935181A (zh) * 2023-09-19 2023-10-24 中国空气动力研究与发展中心低速空气动力研究所 一种完全二值散斑嵌入脉宽调制模式三维测量方法
CN117351137A (zh) * 2023-08-30 2024-01-05 华中科技大学 一种结构光系统在隧道工作中的应用及其应用方法

Families Citing this family (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107044833B (zh) 2017-03-24 2019-03-05 南京理工大学 一种基于改进的傅立叶变换轮廓技术的超快三维形貌测量方法及其系统
JP7156794B2 (ja) * 2018-01-16 2022-10-19 株式会社サキコーポレーション 検査装置の高さ情報の取得方法及び検査装置
CN109579741B (zh) * 2018-11-01 2020-06-26 南京理工大学 一种基于多视角的全自动多模态三维彩色测量方法
CN110120096A (zh) * 2019-05-14 2019-08-13 东北大学秦皇岛分校 一种基于显微单目视觉的单细胞三维重建方法
CN110823129A (zh) * 2019-10-17 2020-02-21 湖北大学 基于π相移提升调制度图像质量的方法
CN111174730B (zh) * 2020-01-07 2021-07-16 南昌航空大学 一种基于相位编码的快速相位解包裹方法
US20210262787A1 (en) * 2020-02-21 2021-08-26 Hamamatsu Photonics K.K. Three-dimensional measurement device
CN111667441A (zh) * 2020-05-15 2020-09-15 成都飞机工业(集团)有限责任公司 一种多频相移图像序列振动检测方法
CN111879257A (zh) * 2020-07-21 2020-11-03 南昌航空大学 一种基于傅里叶变换轮廓术的高动态范围实时三维测量方法
CN112802084B (zh) * 2021-01-13 2023-07-07 广州大学 基于深度学习的三维形貌测量方法、系统和存储介质
CN113029042B (zh) * 2021-05-25 2021-08-03 四川大学 一种高温熔融态金属表面形貌动态测量装置及方法
CN113358062B (zh) * 2021-05-31 2022-08-09 湖北工业大学 三维重建相位误差补偿方法
CN113551617B (zh) * 2021-06-30 2023-03-31 南京理工大学 基于条纹投影的双目双频互补三维面型测量方法
CN113776456B (zh) * 2021-08-31 2023-08-08 中国铁道科学研究院集团有限公司 基于双线激光的曲线段钢轨轮廓测量误差修正方法及装置
CN114018175B (zh) * 2021-10-22 2023-03-24 湖南长步道光学科技有限公司 一种实时三维形貌测量方法和系统
CN114119747B (zh) * 2021-11-23 2023-04-04 四川大学 一种基于pmd波前检测的三维流场流动显示方法
CN114234847B (zh) * 2021-12-08 2024-01-30 苏州恒视智能科技有限公司 一种光栅投影系统及光栅相移高度测量自动校正补偿方法
CN114688995A (zh) * 2022-04-27 2022-07-01 河北工程大学 一种条纹投影三维测量中的相位误差补偿方法
CN115014242B (zh) * 2022-05-26 2023-03-10 华中科技大学 基于并行多狭缝结构照明的微观表面形貌测量方法及装置
CN115115587A (zh) * 2022-06-14 2022-09-27 苏州大学 基于傅里叶变换干涉函数的集料形貌特征表征方法及系统
CN115393507B (zh) * 2022-08-17 2023-12-26 梅卡曼德(北京)机器人科技有限公司 三维重建方法、装置及电子设备、存储介质
CN115329256B (zh) * 2022-10-13 2023-02-28 南京理工大学 一种基于fpp的水下光学测量误差补偿方法
CN116105632B (zh) * 2023-04-12 2023-06-23 四川大学 一种结构光三维成像的自监督相位展开方法及装置
CN116608922B (zh) * 2023-05-17 2024-04-05 小儒技术(深圳)有限公司 一种基于雷达的水位流速的测量方法及系统
CN116614713B (zh) * 2023-07-14 2023-10-27 江南大学 一种用于三维形貌测量的自适应多重曝光方法
CN116883291B (zh) * 2023-09-06 2023-11-17 山东科技大学 一种基于二元傅里叶级数的畸变校正方法
CN117252913B (zh) * 2023-11-14 2024-02-06 南京信息工程大学 基于等间距的二值条纹编码投影方法及系统
CN117589302A (zh) * 2023-11-22 2024-02-23 西湖大学 一种三维高速压缩成像方法及系统

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005091325A (ja) * 2003-09-19 2005-04-07 Wakayama Univ エイリアシングを利用した干渉縞又は投影格子の位相解析方法
US20050280831A1 (en) * 2004-06-17 2005-12-22 Konica Minolta Sensing, Inc. Phase measurement system
CN101451826B (zh) * 2008-12-17 2010-06-09 中国科学院上海光学精密机械研究所 物体三维轮廓测量装置及测量方法
CN104315996A (zh) * 2014-10-20 2015-01-28 四川大学 用二进制编码策略实现傅里叶变换轮廓术的方法
CN107044833A (zh) * 2017-03-24 2017-08-15 南京理工大学 一种基于改进的傅立叶变换轮廓技术的超快三维形貌测量方法及其系统

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7703069B1 (en) * 2007-08-14 2010-04-20 Brion Technologies, Inc. Three-dimensional mask model for photolithography simulation
CN101936718B (zh) * 2010-03-23 2012-07-18 上海复蝶智能科技有限公司 正弦条纹投影装置以及三维轮廓测量方法
CN101975558B (zh) * 2010-09-03 2012-04-11 东南大学 基于彩色光栅投影的快速三维测量方法
CA2822661A1 (en) * 2010-12-24 2012-06-28 The Australian National University Reconstruction of dynamic multi-dimensional image data
US20150292875A1 (en) * 2011-11-23 2015-10-15 The Trustees of Columbia University in the Coty of New York Systems, methods, and media for performing shape measurement
CN103994732B (zh) * 2014-05-29 2016-08-17 南京理工大学 一种基于条纹投影的三维测量方法
JP6800597B2 (ja) * 2016-03-30 2020-12-16 キヤノン株式会社 制御装置、制御方法およびプログラム
CN106595522B (zh) * 2016-12-15 2018-11-09 东南大学 一种光栅投影三维测量系统的误差校正方法
US10681331B2 (en) * 2017-02-06 2020-06-09 MODit 3D, Inc. System and method for 3D scanning

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005091325A (ja) * 2003-09-19 2005-04-07 Wakayama Univ エイリアシングを利用した干渉縞又は投影格子の位相解析方法
US20050280831A1 (en) * 2004-06-17 2005-12-22 Konica Minolta Sensing, Inc. Phase measurement system
CN101451826B (zh) * 2008-12-17 2010-06-09 中国科学院上海光学精密机械研究所 物体三维轮廓测量装置及测量方法
CN104315996A (zh) * 2014-10-20 2015-01-28 四川大学 用二进制编码策略实现傅里叶变换轮廓术的方法
CN107044833A (zh) * 2017-03-24 2017-08-15 南京理工大学 一种基于改进的傅立叶变换轮廓技术的超快三维形貌测量方法及其系统

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZUO, C. ET AL.: "Micro Fourier Transform Profilometry (FTP): 3D Shape Measurement at 10, 000 Frames per Second", OPTICS AND LASERS IN ENGINEERING, vol. 102, 6 November 2017 (2017-11-06), pages 70 - 91, XP085253261, ISSN: 0143-8166 *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111462081A (zh) * 2020-03-31 2020-07-28 西安工程大学 一种用于工件表面质量检测的特征区域快速提取方法
CN111462081B (zh) * 2020-03-31 2023-04-14 西安工程大学 一种用于工件表面质量检测的特征区域快速提取方法
CN113532330A (zh) * 2021-08-28 2021-10-22 哈尔滨理工大学 一种相位格雷码三维测量方法
CN113532330B (zh) * 2021-08-28 2022-10-04 哈尔滨理工大学 一种相位格雷码三维测量方法
CN117351137A (zh) * 2023-08-30 2024-01-05 华中科技大学 一种结构光系统在隧道工作中的应用及其应用方法
CN117351137B (zh) * 2023-08-30 2024-06-11 华中科技大学 一种结构光系统在隧道工作中的应用及其应用方法
CN116935181A (zh) * 2023-09-19 2023-10-24 中国空气动力研究与发展中心低速空气动力研究所 一种完全二值散斑嵌入脉宽调制模式三维测量方法
CN116935181B (zh) * 2023-09-19 2023-11-28 中国空气动力研究与发展中心低速空气动力研究所 一种完全二值散斑嵌入脉宽调制模式三维测量方法

Also Published As

Publication number Publication date
US20210102801A1 (en) 2021-04-08
US11029144B2 (en) 2021-06-08
CN107044833A (zh) 2017-08-15
CN107044833B (zh) 2019-03-05

Similar Documents

Publication Publication Date Title
WO2018171385A1 (zh) Ultrafast three-dimensional shape measurement method and system based on improved Fourier transform profilometry
CN109253708B (zh) 一种基于深度学习的条纹投影时间相位展开方法
Feng et al. Calibration of fringe projection profilometry: A comparative review
Zuo et al. Micro Fourier transform profilometry (μFTP): 3D shape measurement at 10,000 frames per second
CN110514143B (zh) 一种基于反射镜的条纹投影系统标定方法
JP6244407B2 (ja) 深度測定の品質の向上
Zuo et al. Temporal phase unwrapping algorithms for fringe projection profilometry: A comparative review
CN105783775B (zh) 一种镜面及类镜面物体表面形貌测量装置与方法
KR20150121179A (ko) 실시간 스테레오 정합
CN104111038B (zh) 利用相位融合算法修复饱和产生的相位误差的方法
CN108195313A (zh) 一种基于光强响应函数的高动态范围三维测量方法
JP2021520008A (ja) 車両検査システムとその方法
CN108036740A (zh) 一种基于多视角的高精度实时三维彩色测量系统及其方法
ES2316413T3 (es) Procedimiento para el reconocimiento de desviaciones de la forma superficial de una forma predeterminada.
CN104318234B (zh) 一种采用点云数据表示的人脸皱纹三维提取方法及其设备
JP2018179577A (ja) 位置計測装置
Chen et al. Dynamic 3D surface profilometry using a novel colour pattern encoded with a multiple triangular model
KR102129916B1 (ko) 이벤트 정보를 포함하는 영상을 분석하는 장치 및 방법
EP3582183B1 (en) Deflectometric techniques
CN111582310A (zh) 隐含结构光的解码方法及装置
CN113551617B (zh) 基于条纹投影的双目双频互补三维面型测量方法
CN116518869A (zh) 基于光度立体和双目结构光的金属表面测量方法及系统
Boukamcha et al. Robust technique for 3D shape reconstruction
Garbat et al. Structured light camera calibration
Huang et al. Line laser based researches on a three-dimensional measuring system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18771582

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18771582

Country of ref document: EP

Kind code of ref document: A1