WO2021088481A1 - High-precision dynamic real-time 360-degree omnidirectional point cloud acquisition method based on fringe projection - Google Patents

High-precision dynamic real-time 360-degree omnidirectional point cloud acquisition method based on fringe projection

Info

Publication number
WO2021088481A1
Authority
WO
WIPO (PCT)
Prior art keywords
dimensional
camera
registration
point
phase
Prior art date
Application number
PCT/CN2020/111545
Other languages
English (en)
French (fr)
Inventor
左超
钱佳铭
陈钱
冯世杰
陶天阳
胡岩
尹维
张良
刘凯
吴帅杰
许明珠
王嘉业
Original Assignee
南京理工大学
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 南京理工大学 filed Critical 南京理工大学
Priority to US17/769,230 priority Critical patent/US11961244B2/en
Publication of WO2021088481A1 publication Critical patent/WO2021088481A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G06T7/337Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving reference images or patches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/003Reconstruction from projections, e.g. tomography
    • G06T3/14
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/521Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/55Depth or shape recovery from multiple images
    • G06T7/593Depth or shape recovery from multiple images from stereo images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T7/85Stereo camera calibration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/46Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462Salient features, e.g. scale invariant feature transforms [SIFT]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/90Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • G06T2207/10012Stereo images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds

Definitions

  • The invention belongs to the field of three-dimensional imaging technology, and in particular relates to a high-precision dynamic real-time 360-degree omnidirectional point cloud acquisition method based on fringe projection.
  • In order to obtain and align the 3D topography data of objects from different perspectives, auxiliary equipment is usually needed to perform point cloud data registration (X. Liu, X. Peng, H. Chen, D. He, and B. Z. Gao, Opt. Letters 37, 3126 (2012); M. Nießner, M. Zollhöfer, S. Izadi, and M. Stamminger, ACM Transactions on Graphics (ToG) 32, 169 (2013); B. Chen and B. Pan, Measurement 132, 350 (2019)).
  • Common auxiliary equipment includes rotating platforms, robotic arms, and plane mirrors. Through the coordinate relationship between the rotation axis of the rotating platform and the imaging system, point cloud data from multiple viewing angles can be transformed into the same coordinate system.
  • However, a turntable system can only obtain three-dimensional data of the object in directions perpendicular to the rotation axis, and it is difficult to obtain information about the top and bottom of the object (X. Liu, X. Peng, H. Chen, D. He, and B. Z. Gao, Opt. Letters 37, 3126 (2012)).
  • A robotic arm can obtain three-dimensional information from more viewing angles by presetting its moving path (M. Nießner, M. Zollhöfer, S. Izadi, and M. Stamminger, ACM Transactions on Graphics (ToG) 32, 169 (2013)).
  • However, the cost of a robotic arm is very high, and an arm-based system requires complex hardware connections, so such a system is extremely difficult to implement.
  • A measurement system with two mirrors can reconstruct the panoramic three-dimensional information of an object in one measurement, because the system captures the three-dimensional shape of the object from three different perspectives at the same time (B. Chen and B. Pan, Measurement 132, 350 (2019)).
  • However, due to limited angle information, such a system still cannot achieve complete 3D measurement.
  • Because of expensive hardware, complex structures, and limited angle information, the above instrument-assisted 3D registration methods are not ideal ways to obtain a 360° omnidirectional 3D model.
  • The ideal way is to rotate the object arbitrarily to obtain 3D data from enough viewing angles while sequentially performing real-time 3D registration.
  • However, this approach faces two major technical problems: (1) due to the limitations of hardware technology and real-time algorithms, it is difficult to construct a real-time high-precision FPP-based 3D measurement system for dynamic objects (J. Qian, T. Tao, S. Feng, Q. Chen, and C. Zuo, Opt. Express 27, 2713 (2019)); (2) uninterrupted real-time high-precision 3D registration without auxiliary instruments is very challenging (S. Kahn, U. Bockholt, A. Kuijper, and D. W. Fellner, Comput. Industry 64, 1115 (2013)).
  • In 2002, Rusinkiewicz et al. (S. Rusinkiewicz, O. Hall-Holt, and M. Levoy, "Real-time 3d model acquisition," in ACM Transactions on Graphics (TOG), vol. 21 (ACM, 2002), pp. 438-446) proposed a real-time 3D model acquisition method based on a structured-light rangefinder and an improved iterative closest point (ICP) algorithm. Compared with previous methods, this method allows the user to rotate the object by hand and view the continuously updated model, thereby providing instant feedback on the scanned surface and remaining holes.
  • However, the point cloud obtained by the stripe-boundary coding strategy in this method is quite sparse, which makes the 3D data accumulation process inefficient.
  • In addition, the accuracy of its registration method is low, because it skips the traditional instrument-assisted coarse registration and directly performs ICP-based fine registration without a good pose estimate; reconstructing a high-precision 3D model therefore also requires extremely time-consuming post-processing.
  • The purpose of the present invention is to provide a high-precision dynamic real-time 360-degree omnidirectional point cloud acquisition method based on fringe projection which, on the premise of acquiring high-precision three-dimensional data of the object from a single viewing angle in real time, performs coarse registration and fine registration in parallel in dual threads and achieves omnidirectional, unconstrained, real-time, high-precision 360-degree point cloud acquisition without any additional auxiliary equipment.
  • The technical solution achieving the objective of the present invention is a high-precision dynamic real-time 360-degree omnidirectional point cloud acquisition method based on fringe projection, with the following steps:
  • Step 1: build a quad-camera fringe projection system based on the stereo phase unwrapping (SPU) method and complete the system calibration;
  • Step 2: use SPU to obtain the absolute phase of the object and acquire real-time high-precision three-dimensional data of the object from a single viewing angle;
  • Step 3: in the coarse registration thread, use the scale-invariant feature transform (SIFT) method to quickly obtain the two-dimensional matching points of adjacent images, realize coarse registration based on simultaneous localization and mapping (SLAM) technology by solving the "perspective-n-point" (PnP) problem, and quantify the amount of object motion at each coarse registration;
  • Step 4: in the fine registration thread, when the cumulative motion of the coarse registrations reaches a threshold, execute fine registration based on the iterative closest point (ICP) method.
  • Compared with existing methods, the present invention has significant advantages: (1) the proposed method does not rely on any complicated and expensive auxiliary registration equipment, and innovatively uses SLAM technology to realize automatic, fast coarse registration of three-dimensional point clouds; (2) there is no need for the complicated pre-processing of traditional instrument-assisted registration or the time-consuming post-processing otherwise required to obtain high-precision three-dimensional registration data, since the present invention proposes a coarse-to-fine dual-thread parallel processing mechanism in which coarse registration and fine registration are performed simultaneously, in real time, in two separate threads; (3) the point cloud stitching process is real-time, high-precision, and unconstrained, so the user can directly rotate the measured object by hand, observe a three-dimensional surface shape updated in real time, and sequentially complete real-time, high-precision, omnidirectional 360° point cloud data acquisition.
  • Figure 1 is a flow chart of the present invention.
  • Figure 2 shows the placement of four cameras and one projector in the built four-eye fringe projection system.
  • Figure 3 is a description of the improved PnP problem.
  • Figure 4 shows the scenes and results of measuring an arbitrarily rotated David model at different times using the proposed method.
  • Figure 5 is the complete David model measured by the proposed method.
  • Figure 5(a) is the point cloud result.
  • Figure 5(b) is the triangulation result of the point cloud.
  • Figure 6 shows the results of measuring the David model and the ceramic spheres with the proposed method.
  • Figures 6(a)-(b) show the measured objects from two different viewing angles.
  • Figures 6(c)-(d) show the point cloud registration results from different viewing angles.
  • Figures 6(e)-(f) show the error distribution of the ceramic-sphere registration results.
  • The high-precision dynamic real-time 360-degree omnidirectional point cloud acquisition method based on fringe projection of the present invention first adopts stereo phase unwrapping (SPU) technology: without any additional fringe images or embedded auxiliary patterns, only single-frequency three-step phase-shifting fringes are used to obtain the absolute phase of the object.
  • At the same time, the present invention uses adaptive depth constraint (ADC) technology to improve the robustness of phase unwrapping, so as to realize the acquisition of high-precision real-time three-dimensional data of the object from a single viewing angle.
  • Then, after quickly obtaining the 2D matching points of adjacent images with the scale-invariant feature transform (SIFT) algorithm, the present invention solves the "perspective-n-point" (PnP) problem to realize coarse registration based on simultaneous localization and mapping (SLAM) technology.
  • For each coarse registration, the relative movement of the object is quantified by the sum of the 2-norms of the rotation and translation vectors; after the cumulative movement reaches a certain threshold, fine registration based on the iterative closest point (ICP) technique is performed.
  • In addition, to improve registration efficiency, the present invention executes coarse registration and fine registration in parallel in dual threads to achieve real-time high-precision three-dimensional point cloud registration.
  • In the fringe projection-based high-precision dynamic real-time 360-degree omnidirectional point cloud acquisition method of the invention, the object's motion area is determined pixel by pixel from the change of the object's phase information at two adjacent moments, and a fusion of the Fourier method and the phase-shifting method is applied to regions of the object in different motion states; the flow diagram is shown in Figure 1.
  • The implementation steps are as follows:
  • Step 1: build a quad-camera fringe projection system based on the stereo phase unwrapping (SPU) method and complete the system calibration.
  • The whole system includes a computer, four cameras, and a projector; the projector and the four cameras are connected by four trigger wires, and the four cameras and the computer are connected by four data wires.
  • The specific steps are as follows. First, the positions of the cameras and the projector in the quad-camera fringe projection system are optimized.
  • The optimization strategy is: the closer two viewing angles are, the shorter the projected line segment, within a certain depth range, of any point of one view in the other view; on the other hand, the greater the distance between two viewing angles, the more accurate the three-dimensional reconstruction between them (T. Tao, Q. Chen, S. Feng, Y. Hu, M. Zhang, and C. Zuo, "High-precision real-time 3D shape measurement based on a quad-camera system," Journal of Optics 20, 014009 (2018)).
  • The optimized placement is: the first camera 1 is kept relatively far from the projector 5, leaving a gap between them large enough for another camera; the second camera 2 is placed between the first camera 1 and the projector 5, close to the first camera 1; the fourth camera 4 and the first camera 1 are placed symmetrically with respect to the projector 5; and the third camera 3 and the second camera 2 are placed symmetrically with respect to the projector 5. The placement of the components is shown in Figure 2.
  • Then the whole system is calibrated to a unified world coordinate system with Zhang Zhengyou's calibration algorithm (Z. Zhang, "A flexible new technique for camera calibration," IEEE Transactions on Pattern Analysis and Machine Intelligence 22(11), 1330-1334 (2000)), yielding the intrinsic and extrinsic parameters of the four cameras and the projector, which are then converted into 2D-to-3D, 3D-to-2D, and 2D-to-2D mapping parameters (K. Liu, Y. Wang, D. L. Lau, et al., "Dual-frequency pattern scheme for high-speed 3-D shape measurement," Optics Express 18(5), 5229-5244 (2010)).
  • Step 2: use the stereo phase unwrapping (SPU) method to obtain the absolute phase of the object and acquire real-time high-precision three-dimensional data from a single viewing angle.
  • The projector 5 projects three three-step phase-shifting fringe patterns onto the object and synchronously triggers the four cameras to capture them.
  • The three-step phase-shifting fringes captured by the first camera 1 can be expressed as:
  • I_1^C(u_C, v_C) = A_C(u_C, v_C) + B_C(u_C, v_C) cos(Φ_C(u_C, v_C) − 2π/3)
  • I_2^C(u_C, v_C) = A_C(u_C, v_C) + B_C(u_C, v_C) cos(Φ_C(u_C, v_C))
  • I_3^C(u_C, v_C) = A_C(u_C, v_C) + B_C(u_C, v_C) cos(Φ_C(u_C, v_C) + 2π/3)
  • where (u_C, v_C) denotes a pixel coordinate on the first camera 1, I_1^C, I_2^C, I_3^C denote the three fringe images captured by the camera, A_C denotes the average intensity, B_C the modulation intensity, and Φ_C the absolute phase of the fringe pattern.
  • The wrapped phase of the object is obtained from the three fringe images captured by the first camera 1:
  • φ_C(u_C, v_C) = arctan[√3 (I_1^C − I_3^C) / (2 I_2^C − I_1^C − I_3^C)]
  • where φ_C(u_C, v_C) denotes the solved wrapped phase. The relationship between the wrapped phase and the absolute phase is:
  • Φ_C(u_C, v_C) = φ_C(u_C, v_C) + 2π k_C(u_C, v_C), k_C(u_C, v_C) ∈ [0, N−1]
  • where k_C denotes the fringe order and N the number of fringes.
  • The process of solving the fringe order is called unwrapping.
  • The traditional unwrapping method is temporal phase unwrapping, but it needs to project additional auxiliary fringes of different frequencies, which increases the sensitivity of the phase-shifting method to object motion.
  • The present invention therefore adopts the stereo phase unwrapping (SPU) method, assisted by an adaptive dynamic depth constraint (ADC) mechanism; without any additional auxiliary fringes, the absolute phase of the object can be obtained from only three three-step phase-shifting fringe images.
  • For any pixel in the first camera 1 there are N possible absolute phases. Using the 2D-to-3D mapping parameters obtained in step 1, these N possible absolute phases can be reconstructed into N three-dimensional candidate points, and some incorrect candidates can then be excluded by the ADC technique.
  • The N_1 remaining 3D candidate points (0 < N_1 < N) are projected into the second camera 2 through the 3D-to-2D mapping parameters obtained in step 1, giving N_1 corresponding two-dimensional candidate points.
  • Among these there must be one correct matching point, and it should have a wrapped phase similar to that of the pixel in the first camera 1.
  • The correct matching point can therefore be found through a phase consistency check.
  • However, owing to environmental noise and systematic errors, this assumption may not always hold, and the wrapped phase of some wrong candidate points may be even closer. Therefore only the two-dimensional candidate points whose wrapped-phase difference from the first-camera pixel is within 0.6 rad are retained, and the other candidates are excluded.
  • The remaining candidates are projected into the third camera 3 through the 2D-to-2D mapping parameters obtained in step 1.
  • The third camera 3 thus receives the corresponding N_2 two-dimensional candidate points (0 < N_2 < N_1 < N), which are again checked for phase consistency against the pixel in the first camera 1; candidates whose phase difference is within 0.8 rad are retained (because the first camera 1 is farther from the third camera 3 than from the second camera 2, the projected points are less accurate, so a larger phase threshold is set; this will not be repeated later).
  • The remaining two-dimensional candidate points are then projected into the fourth camera 4 for a phase consistency check, and candidates whose phase difference is within 1 rad are retained. After three rounds of phase consistency checks, the unique correct candidate point is confirmed, and with it the fringe order of the pixel.
  • Performing the above operations in parallel on the GPU for every pixel in the first camera 1 yields the absolute phase of the object. Finally, the high-precision three-dimensional shape of the object from a single viewing angle can be reconstructed through the calibration parameters obtained in step 1.
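As a concrete illustration of the phase recovery just described, the following minimal NumPy sketch computes the wrapped phase from three fringe images and lifts it to the absolute phase given a fringe order. It assumes the symmetric shift convention (−2π/3, 0, +2π/3) used in the formulas above; the function names are ours, not the patent's.

```python
import numpy as np

def wrapped_phase(i1, i2, i3):
    """Wrapped phase of a three-step phase-shifting sequence.

    i1, i2, i3: H x W arrays holding fringe images with phase shifts
    of -2*pi/3, 0 and +2*pi/3. Returns values wrapped to (-pi, pi].
    """
    return np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)

def absolute_phase(phi, k):
    """Absolute phase from wrapped phase phi and fringe order k in [0, N-1]."""
    return phi + 2.0 * np.pi * k
```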
  • Step 3: in the coarse registration thread, use the scale-invariant feature transform (SIFT) algorithm to quickly obtain the two-dimensional matching points of adjacent images, and solve the "perspective-n-point" (PnP) problem to achieve coarse registration based on simultaneous localization and mapping (SLAM) technology, while quantifying the amount of object motion at each coarse registration. The details are as follows.
  • Point cloud registration includes two processes, coarse registration and fine registration, which would normally be performed serially.
  • To improve efficiency, the present invention proposes a coarse-to-fine dual-thread parallel mechanism that performs coarse registration and fine registration in parallel in two independent threads.
  • The coarse registration process in the coarse registration thread is described in step 3, and the fine registration process in the fine registration thread is described in step 4.
  • Because the 2D matching points found by SIFT still contain some wrong pairs that would degrade the coarse registration, the present invention proposes a method for eliminating false matching points using three-dimensional information, as follows.
  • The 3D matching point sets corresponding to the two-dimensional matching points found by SIFT can be recorded as P = {P_1, P_2, ..., P_i, ..., P_n} and Q = {Q_1, Q_2, ..., Q_i, ..., Q_n}, where P and Q denote the 3D point sets of the previous and current frames, n is the number of matching pairs, and i ∈ [1, n]. The distances between each P_i and all points of the set P are computed:
  • H_i(P_i) = {h_i(P_i, P_1), h_i(P_i, P_2), ..., h_i(P_i, P_j), ..., h_i(P_i, P_n)}
  • where H_i(P_i) denotes the set of distances between P_i and all points of the set P,
  • and h_i(P_i, P_j) denotes the distance between P_i and P_j, j ∈ [1, n].
  • Similarly, the distance set of Q_i is denoted H_i(Q_i). The differences between corresponding elements are then D_i = {D_i,1, ..., D_i,j, ..., D_i,n} = {h_i(P_i, P_1) − h_i(Q_i, Q_1), ..., h_i(P_i, P_j) − h_i(Q_i, Q_j), ..., h_i(P_i, P_n) − h_i(Q_i, Q_n)},
  • where D_i denotes the set of differences between corresponding elements of H_i(P_i) and H_i(Q_i),
  • and D_i,j denotes the difference between h_i(P_i, P_j) and h_i(Q_i, Q_j).
  • The absolute value of each element D_i,j is compared with a threshold D_thre (D_thre may be 5-6 mm). If the number of elements of D_i whose absolute value exceeds D_thre is greater than half the number of matching points, n/2, the pair (P_i, Q_i) is considered unreliable and is discarded. This method basically eliminates the wrong matching point pairs.
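The 3D distance-consistency test above can be written compactly. This is our own NumPy rendering, not the patent's code; the default threshold follows the 5-6 mm range quoted in the text and must be expressed in the units of the point clouds.

```python
import numpy as np

def filter_matches_3d(P, Q, d_thre=5.5):
    """Reject unreliable SIFT match pairs by 3D distance consistency.

    P, Q: (n, 3) arrays of the 3D points behind the 2D matches in the
    previous and current frame. Returns a boolean mask of pairs to keep.
    """
    # H_i(P_i) and H_i(Q_i): pairwise distances within each point set
    HP = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=2)
    HQ = np.linalg.norm(Q[:, None, :] - Q[None, :, :], axis=2)
    D = np.abs(HP - HQ)                       # |D_{i,j}|
    n = len(P)
    bad = (D > d_thre).sum(axis=1) > n / 2    # more than half exceed D_thre
    return ~bad
```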
  • To quickly obtain the transformation, the present invention treats the problem of solving the rotation and translation matrices between adjacent 3D frames as a perspective-n-point (PnP) problem.
  • The traditional PnP problem is improved.
  • The improved PnP problem can be described as follows (as shown in Figure 3): with the quad-camera fringe projection system stationary and the object moving from position 1 to position 2, given the 3D data of the object at position 1 (obtained in step 2) and the 2D matching points between the images captured by the first camera 1 before and after the movement (obtained by the SIFT algorithm and optimized with 3D information), find the pose transformation of the object; the problem of obtaining the object's rotation and translation matrices is thereby converted into a PnP problem.
  • The present invention uses the EPnP method to solve the PnP problem.
  • The EPnP method is a non-iterative solution to the PnP problem and is widely used in visual SLAM systems because of its balance of accuracy and efficiency.
  • The present invention uses the sum of the 2-norms of the rotation and translation vectors at each coarse registration to quantify the amount of object movement at each coarse registration.
  • When the movement is less than 30, the result of this coarse registration is discarded, to avoid the data redundancy caused by too small a movement, and the next coarse registration is performed; when the movement is greater than 50, the registration error is considered excessive and coarse registration is suspended until the user rotates the object back to a position where the movement relative to the last valid 3D frame is between 30 and 50, after which coarse registration continues.
  • Step 4: in the fine registration thread, when the cumulative motion of the coarse registrations reaches a certain threshold, execute fine registration based on the iterative closest point (ICP) algorithm, as follows.
  • After coarse registration, the ICP algorithm is used for fine registration.
  • ICP is the main method for aligning 3D models when an initial pose estimate between two views is known.
  • However, because of its iterative computation, the ICP process is time-consuming and not suitable for real-time scenarios.
  • The present invention therefore proposes a dual-thread parallel processing mechanism that simultaneously performs SLAM-based coarse registration (step 3) and ICP-based fine registration (step 4) in two separate threads, and performs one fine registration only after multiple coarse registrations. The specific steps are as follows.
  • Determine whether the cumulative motion of the valid coarse registrations in step 3 has reached the motion threshold of 200. Only when the cumulative motion reaches this threshold is ICP-based fine registration executed in the fine registration thread, using the time of the next cycle's coarse registrations.
  • That is, the fine registration is performed simultaneously with the multiple coarse registrations of the next cycle.
  • To further improve efficiency, the present invention first downsamples the coarsely registered 3D point clouds, then processes the downsampled data with the ICP algorithm; after the precise transformation matrix is obtained, the operation is applied to the original dense three-dimensional point cloud data, finally completing the fine 3D registration.
  • The present invention downsamples the point clouds with the voxelized grid method: the two coarsely registered three-dimensional point clouds are divided into multiple three-dimensional voxel grids, and the centroid of the points in each voxel grid is used as the downsampled value, giving the downsampled 3D point sets of Fram1 and Fram2.
  • The ICP method is one of the most commonly used methods for aligning 3D models when an initial pose estimate between two views is known.
  • The first step is to find, for every point of one downsampled point set, the corresponding closest point in the other; the second step is to find the rigid transformation that minimizes the average distance between the corresponding point pairs obtained in the first step, and to obtain the rotation and translation matrices of this rigid transformation; the third step is to transform the point set with the rotation and translation matrices obtained in the second step, giving a new point set; the fourth step is to stop the computation if the average distance between the new point set and the other set is less than 0.01, and otherwise to take the new set as input and continue iterating the above steps until one of the following two conditions is met: (1) the average distance between the new transformed point set and the other set is less than 0.01; (2) the total number of iterations reaches 10.
  • ICP then terminates, and the rotation and translation matrices obtained in the last iteration form the transformation matrix required for fine registration.
  • With this matrix, the original dense 3D point cloud data is transformed, finally completing the precise 3D registration.
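For readers who want to prototype this step, the fragment below sketches it with the Open3D library rather than PCL (which the embodiment actually uses): voxel-grid downsampling, point-to-point ICP capped at 10 iterations as in the text, then application of the refined transform to the dense cloud. The voxel size and correspondence distance are illustrative assumptions, not values from the patent.

```python
import open3d as o3d

def fine_registration(dense_src, dense_tgt, coarse_T,
                      voxel=2.0, max_corr=5.0):
    """ICP on downsampled copies, then transform the original dense cloud.

    dense_src, dense_tgt: open3d.geometry.PointCloud after coarse registration.
    coarse_T: 4x4 initial transform supplied by the coarse thread.
    voxel, max_corr: illustrative values, not taken from the patent.
    """
    src = dense_src.voxel_down_sample(voxel)   # voxelized-grid downsampling
    tgt = dense_tgt.voxel_down_sample(voxel)
    reg = o3d.pipelines.registration.registration_icp(
        src, tgt, max_corr, coarse_T,
        o3d.pipelines.registration.TransformationEstimationPointToPoint(),
        o3d.pipelines.registration.ICPConvergenceCriteria(max_iteration=10))
    dense_src.transform(reg.transformation)    # apply to the dense cloud in place
    return dense_src
```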
  • To verify the effectiveness of the present invention, a quad-camera fringe projection system was constructed, comprising a LightCrafter 4500Pro projector and four Basler acA640-750um cameras (resolution 640×480) for SPU unwrapping.
  • An HP Z230 computer (Intel Xeon E3-1226 v3 CPU, NVIDIA Quadro K2200 GPU) was used, and the proposed algorithms were developed with libraries such as OpenCV and PCL.
  • The projection speed of the projector is 100 Hz, and all cameras are triggered by the projector.
  • The imaging speed of the built system is 45 Hz, and the single-scan accuracy is 45 microns.
  • In the first experiment, a plaster David model was rotated arbitrarily and without constraint, and real-time 360-degree omnidirectional modeling of it was achieved. Figure 4 shows the point cloud registration scenes and results in different time periods.
  • A single coarse registration takes 0.4 seconds and a single fine registration 2 seconds.
  • The reconstruction time of the entire model is 70 seconds.
  • Figure 5 shows the registered omnidirectional 360° three-dimensional model.
  • Figure 5(a) is the complete point cloud result obtained.
  • Figure 5(b) is the triangulation result of the point cloud. All the 3D topography of the David model has been reconstructed, and the first and last point clouds are well aligned.
  • This experiment shows that the proposed method can obtain a high-quality 360° model of a handheld object in real time.
  • In the second experiment, two ceramic spheres with radii of 25.3989 mm and 25.4038 mm and a center distance of 100.0532 mm were measured; since the spheres have no 2D features, the David model, which has rich 2D features, was rotated together with them to complete the point cloud registration. The radii of the reconstructed spheres are 25.3580 mm and 25.3543 mm, with deviations of 40.9 μm and 49.5 μm, respectively.
  • The measured center distance is 99.9345 mm, with an error of 118.7 μm.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Optics & Photonics (AREA)
  • Image Processing (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention discloses a high-precision dynamic real-time 360-degree omnidirectional point cloud acquisition method based on fringe projection. First, a fringe projection technique based on the stereo phase unwrapping method, assisted by an adaptive dynamic depth constraint mechanism, acquires high-precision three-dimensional data of the object in real time without any additional auxiliary fringe patterns. Then, after two-dimensional matching points optimized by the corresponding three-dimensional information are quickly obtained, a dual-thread parallel mechanism performs coarse registration based on simultaneous localization and mapping (SLAM) technology and fine registration based on the iterative closest point (ICP) technique. The invention makes low-cost, high-speed, high-precision, unconstrained, fast-feedback omnidirectional real-time 3D modeling possible, and will open new doors in fields such as 360-degree 3D surface defect detection of workpieces and rapid reverse molding.

Description

High-precision dynamic real-time 360-degree omnidirectional point cloud acquisition method based on fringe projection

Technical Field

The invention belongs to the field of three-dimensional imaging technology, and in particular relates to a high-precision dynamic real-time 360-degree omnidirectional point cloud acquisition method based on fringe projection.

Background Art

In recent years, non-contact three-dimensional (3D) surface measurement techniques have been widely used in many fields, such as industrial inspection, cultural heritage protection, reverse modeling, and human-computer interaction (J. Salvi, S. Fernandez, T. Pribanic, and X. Llado, Pattern Recognition 43, 2666 (2010)). Among the many 3D imaging techniques, fringe projection profilometry (FPP) has become one of the most popular thanks to its high measurement accuracy, simple hardware, and ease of implementation (J. Geng, Adv. Opt. Photonics 3, 128 (2011)). At present, most research on FPP focuses on 3D measurement from a single viewing angle (S. Zhang, Opt. Lasers Eng. 48, 149 (2010)). In applications such as industrial inspection and reverse modeling, however, it is essential to obtain the complete 3D model of the object, and a traditional FPP system, with its limited field of view, cannot obtain it in a single measurement. To acquire the full 3D topography, the object's three-dimensional data must be scanned from multiple viewing angles.
In order to obtain and align the 3D topography data of objects from different viewing angles, auxiliary instruments are usually needed to register the point cloud data (X. Liu, X. Peng, H. Chen, D. He, and B. Z. Gao, Opt. Letters 37, 3126 (2012); M. Nießner, M. Zollhöfer, S. Izadi, and M. Stamminger, ACM Transactions on Graphics (ToG) 32, 169 (2013); B. Chen and B. Pan, Measurement 132, 350 (2019)). Common auxiliary equipment includes rotating platforms, robotic arms, and plane mirrors. Through the coordinate relationship between the rotation axis of a rotating platform and the imaging system, point cloud data from multiple viewing angles can be transformed into the same coordinate system, but a turntable system can only obtain three-dimensional data in directions perpendicular to the rotation axis, and information about the top and bottom of the object is difficult to obtain (X. Liu, X. Peng, H. Chen, D. He, and B. Z. Gao, Opt. Letters 37, 3126 (2012)). A robotic arm can obtain three-dimensional information from more viewing angles by presetting its motion path (M. Nießner, M. Zollhöfer, S. Izadi, and M. Stamminger, ACM Transactions on Graphics (ToG) 32, 169 (2013)), but robotic arms are expensive, and arm-based systems require complex hardware connections, so implementing such a system is extremely difficult. A measurement system with two mirrors can reconstruct the panoramic three-dimensional information of an object in a single measurement, because it captures the object's three-dimensional shape from three different viewing angles at the same time (B. Chen and B. Pan, Measurement 132, 350 (2019)); however, owing to the limited angle information, such a system still cannot achieve complete 3D measurement.

Because of expensive hardware, complex structures, and limited angle information, the above instrument-assisted three-dimensional registration methods are not ideal ways to obtain a 360° omnidirectional 3D model. The ideal way is to rotate the object arbitrarily to obtain 3D data from enough viewing angles while performing real-time 3D registration sequentially. This approach, however, faces two major technical difficulties: (1) owing to the limitations of hardware technology and real-time algorithms, it is difficult to build an FPP-based real-time high-precision 3D measurement system for dynamic objects (J. Qian, T. Tao, S. Feng, Q. Chen, and C. Zuo, Opt. Express 27, 2713 (2019)); (2) uninterrupted real-time high-precision 3D registration without auxiliary instruments is very challenging (S. Kahn, U. Bockholt, A. Kuijper, and D. W. Fellner, Comput. Industry 64, 1115 (2013)). In 2002, Rusinkiewicz et al. (S. Rusinkiewicz, O. Hall-Holt, and M. Levoy, "Real-time 3d model acquisition," in ACM Transactions on Graphics (TOG), vol. 21 (ACM, 2002), pp. 438-446) proposed a real-time 3D model acquisition method based on a structured-light rangefinder and an improved iterative closest point (ICP) algorithm. Compared with previous methods, it allows the user to rotate the object by hand and view a continuously updated model, providing instant feedback on the scanned surface and remaining holes. However, the point cloud obtained by its stripe-boundary coding strategy is quite sparse, making the 3D data accumulation process inefficient; moreover, its registration accuracy is low, because it skips traditional instrument-assisted coarse registration and performs ICP-based fine registration directly without a good pose estimate, so reconstructing a high-precision 3D model still requires extremely time-consuming post-processing.
Summary of the Invention

The purpose of the present invention is to provide a high-precision dynamic real-time 360-degree omnidirectional point cloud acquisition method based on fringe projection which, on the premise of acquiring high-precision three-dimensional data of the object from a single viewing angle in real time, performs coarse registration and fine registration in parallel in dual threads and achieves omnidirectional, unconstrained, real-time, high-precision 360-degree point cloud acquisition without any additional auxiliary equipment.

The technical solution achieving the object of the present invention is a high-precision dynamic real-time 360-degree omnidirectional point cloud acquisition method based on fringe projection, with the following steps:

Step 1: build a quad-camera fringe projection system based on the stereo phase unwrapping (SPU) method and complete the system calibration;

Step 2: use SPU to obtain the absolute phase of the object and acquire real-time high-precision three-dimensional data of the object from a single viewing angle;

Step 3: in the coarse registration thread, use the scale-invariant feature transform (SIFT) method to quickly obtain the two-dimensional matching points of adjacent images, realize coarse registration based on simultaneous localization and mapping (SLAM) technology by solving the "perspective-n-point" (PnP) problem, and quantify the amount of object motion at each coarse registration;

Step 4: in the fine registration thread, when the cumulative motion of the coarse registrations reaches a threshold, execute fine registration based on the iterative closest point (ICP) method.

Compared with existing methods, the present invention has significant advantages. (1) The proposed method does not rely on any complicated, expensive auxiliary registration instrument, and innovatively uses SLAM technology to realize automatic, fast coarse registration of three-dimensional point clouds. (2) It needs neither the complicated pre-processing of traditional instrument-assisted registration nor the time-consuming post-processing otherwise required for high-precision three-dimensional registration data: the present invention proposes a coarse-to-fine dual-thread parallel processing mechanism in which coarse registration and fine registration are performed simultaneously, in real time, in two separate threads. (3) The point cloud stitching process is real-time, high-precision, and unconstrained: the user can rotate the measured object arbitrarily by hand, observe a three-dimensional surface shape updated in real time, and sequentially complete real-time high-precision omnidirectional 360° point cloud data acquisition.
The present invention is described in further detail below with reference to the accompanying drawings.
Brief Description of the Drawings

Figure 1 is the flow chart of the present invention.

Figure 2 shows the placement of the four cameras and one projector in the constructed quad-camera fringe projection system.

Figure 3 is a description of the improved PnP problem.

Figure 4 shows the scenes and results of measuring an arbitrarily rotated David model at different times using the proposed method.

Figure 5 is the complete David model measured by the proposed method, where Figure 5(a) is the point cloud result and Figure 5(b) is the triangulation result of the point cloud.

Figure 6 shows the results of measuring the David model and the ceramic spheres with the proposed method, where Figures 6(a)-(b) show the measured objects from two different viewing angles, Figures 6(c)-(d) the point cloud registration results from different viewing angles, and Figures 6(e)-(f) the error distribution of the ceramic-sphere registration results.
Detailed Description

The high-precision dynamic real-time 360-degree omnidirectional point cloud acquisition method based on fringe projection of the present invention first adopts stereo phase unwrapping (SPU) technology: without any additional fringe images or embedded auxiliary patterns, the absolute phase of the object is obtained using only single-frequency three-step phase-shifting fringes. At the same time, adaptive depth constraint (ADC) technology is used to improve the robustness of phase unwrapping, so that high-precision real-time three-dimensional data of the object can be acquired from a single viewing angle. Then, after the 2D matching points of adjacent images are quickly obtained with the scale-invariant feature transform (SIFT) algorithm, coarse registration based on simultaneous localization and mapping (SLAM) technology is realized by solving the "perspective-n-point" (PnP) problem. For each coarse registration, the relative motion of the object is quantified by the sum of the 2-norms of the rotation and translation vectors; once the cumulative motion reaches a certain threshold, fine registration based on the iterative closest point (ICP) technique is performed. In addition, to improve registration efficiency, coarse registration and fine registration are executed in parallel in dual threads, achieving real-time high-precision three-dimensional point cloud registration.
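Before the step-by-step description, the coarse-to-fine dual-thread mechanism can be summarized in a short sketch. The Python fragment below is illustrative only: grab_3d_frame, coarse_register, and icp_refine are hypothetical stand-ins for the acquisition, SLAM/EPnP coarse registration, and ICP routines described in the following steps, while the 30-50 per-frame motion window and the cumulative-motion threshold of 200 are the values given in the text.

```python
import threading
import queue

MOTION_MIN, MOTION_MAX = 30.0, 50.0   # per-registration motion window (text)
MOTION_FINE = 200.0                   # cumulative-motion threshold (text)

batches = queue.Queue()               # hand-off from coarse to fine thread

def coarse_thread():
    pending, accum = [], 0.0
    while True:
        frame = grab_3d_frame()                 # hypothetical acquisition
        pose, motion = coarse_register(frame)   # hypothetical SLAM/EPnP step
        if not (MOTION_MIN <= motion <= MOTION_MAX):
            continue                            # discard too-small/too-large motion
        pending.append((frame, pose))
        accum += motion
        if accum >= MOTION_FINE:                # enough motion accumulated:
            batches.put(pending)                # hand the batch to the fine thread
            pending, accum = [], 0.0            # coarse registration continues

def fine_thread():
    while True:
        batch = batches.get()                   # blocks until a batch arrives
        icp_refine(batch)                       # hypothetical ICP fine registration

threading.Thread(target=coarse_thread, daemon=True).start()
threading.Thread(target=fine_thread, daemon=True).start()
```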
In the method, the object's motion area is determined pixel by pixel from the change of the object's phase information at two adjacent moments, and a fusion of the Fourier method and the phase-shifting method is applied to regions of the object in different motion states; the flow diagram is shown in Figure 1. The implementation steps are as follows.

Step 1: build a quad-camera fringe projection system based on the stereo phase unwrapping (SPU) method and complete the system calibration. The whole system comprises a computer, four cameras, and a projector; the projector and the four cameras are connected by four trigger wires, and the four cameras and the computer by four data wires. The specific steps are as follows. First, the positions of the cameras and the projector in the quad-camera fringe projection system are optimized. The optimization strategy is this: the closer two viewing angles are, the shorter the projected line segment, within a certain depth range, of any point of one view in the other view; on the other hand, the greater the distance between two viewing angles, the more accurate the three-dimensional reconstruction between them (T. Tao, Q. Chen, S. Feng, Y. Hu, M. Zhang, and C. Zuo, "High-precision real-time 3D shape measurement based on a quad-camera system," Journal of Optics 20, 014009 (2018)). The optimized placement of the components is: the first camera 1 is kept relatively far from the projector 5, leaving a gap between them large enough for another camera; the second camera 2 is placed between the first camera 1 and the projector 5, close to the first camera 1; the fourth camera 4 and the first camera 1 are placed symmetrically with respect to the projector 5; and the third camera 3 and the second camera 2 are placed symmetrically with respect to the projector 5. The placement of the components is shown in Figure 2. Then the whole system is calibrated to a unified world coordinate system with Zhang Zhengyou's calibration algorithm (Z. Zhang, "A flexible new technique for camera calibration," IEEE Transactions on Pattern Analysis and Machine Intelligence 22(11), 1330-1334 (2000)), yielding the intrinsic and extrinsic parameters of the four cameras and the projector, which are converted into 2D-to-3D, 3D-to-2D, and 2D-to-2D mapping parameters (K. Liu, Y. Wang, D. L. Lau, et al., "Dual-frequency pattern scheme for high-speed 3-D shape measurement," Optics Express 18(5), 5229-5244 (2010)).
Step 2: use the stereo phase unwrapping (SPU) method to obtain the absolute phase of the object and acquire real-time high-precision three-dimensional data from a single viewing angle. The specific steps are as follows.

1. The projector 5 projects images and the cameras capture them.

The projector 5 projects three three-step phase-shifting fringe patterns onto the object and synchronously triggers the four cameras to capture them. The three-step phase-shifting fringes captured by the first camera 1 can be expressed as:

I_1^C(u_C, v_C) = A_C(u_C, v_C) + B_C(u_C, v_C) cos(Φ_C(u_C, v_C) − 2π/3)

I_2^C(u_C, v_C) = A_C(u_C, v_C) + B_C(u_C, v_C) cos(Φ_C(u_C, v_C))

I_3^C(u_C, v_C) = A_C(u_C, v_C) + B_C(u_C, v_C) cos(Φ_C(u_C, v_C) + 2π/3)

where (u_C, v_C) denotes a pixel coordinate on the first camera 1, I_1^C, I_2^C, I_3^C denote the three fringe images captured by the camera, A_C denotes the average intensity, B_C the modulation intensity, and Φ_C the absolute phase of the fringe pattern.

2. Obtain the wrapped phase of the object.

The wrapped phase of the object is obtained from the three fringe images captured by the first camera 1:

φ_C(u_C, v_C) = arctan[√3 (I_1^C − I_3^C) / (2 I_2^C − I_1^C − I_3^C)]

where φ_C(u_C, v_C) denotes the solved wrapped phase. The relationship between the wrapped phase and the absolute phase is

Φ_C(u_C, v_C) = φ_C(u_C, v_C) + 2π k_C(u_C, v_C), k_C(u_C, v_C) ∈ [0, N−1]

where k_C denotes the fringe order and N the number of fringes. The process of solving the fringe order is called unwrapping.

3. Unwrapping.

The traditional approach is temporal phase unwrapping, but it must project additional auxiliary fringes of different frequencies, which increases the sensitivity of the phase-shifting method to object motion. To improve unwrapping efficiency and reduce the number of fringe images per reconstruction (and thus the motion sensitivity of the phase-shifting method), the present invention adopts the stereo phase unwrapping (SPU) method assisted by an adaptive dynamic depth constraint (ADC) mechanism; without any additional auxiliary fringes, the absolute phase of the object is obtained from only three three-step phase-shifting fringe images. The specific steps are as follows.

(1) Find the N 3D candidate points corresponding to a pixel of the first camera 1. Any pixel of the first camera 1 has N possible absolute phases. Using the 2D-to-3D mapping parameters obtained in step 1, these N possible absolute phases are reconstructed into N three-dimensional candidate points, and some incorrect candidates are then excluded with the ADC technique.

(2) Find the 2D candidate points in the second camera 2. Assuming N_1 (0 < N_1 < N) three-dimensional candidate points remain after exclusion, they are projected into the second camera 2 with the 3D-to-2D mapping parameters from step 1, giving N_1 corresponding two-dimensional candidate points. Among these there must be one correct matching point, and it should have a wrapped phase similar to that of the pixel in the first camera 1, so the correct match can be found by a phase consistency check. Owing to environmental noise and systematic errors, however, this assumption may not always hold, and the wrapped phases of some wrong candidates may be even closer. Therefore only the two-dimensional candidates whose wrapped-phase difference from the first-camera pixel is within 0.6 rad are retained; the others are excluded.

(3) Find the 2D candidate points in the third camera 3. Assuming N_2 (0 < N_2 < N_1 < N) two-dimensional candidates remain, they are projected into the third camera 3 with the 2D-to-2D mapping parameters from step 1, giving N_2 corresponding two-dimensional candidates, which are again checked for phase consistency against the first-camera pixel; candidates whose phase difference is within 0.8 rad are retained (because the first camera 1 is farther from the third camera 3 than from the second camera 2, the projected points are less accurate, so a larger phase threshold is set; this is not repeated below).

(4) Find the 2D candidate points in the fourth camera 4 and determine the fringe order of the first-camera pixel. The remaining two-dimensional candidates are projected into the fourth camera 4 for a further phase consistency check, and candidates whose phase difference is within 1 rad are retained. After these three rounds of phase consistency checks, the unique correct candidate point is confirmed, and with it the fringe order of the pixel.

(5) Obtain the absolute phase of the object and the three-dimensional information from a single viewing angle. Performing the above operations in parallel on the computer's GPU for every pixel of the first camera 1 yields the absolute phase of the object. Finally, the high-precision three-dimensional topography of the object from a single viewing angle is reconstructed with the calibration parameters obtained in step 1.
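To make the candidate-elimination cascade concrete, here is a schematic Python rendering. It simplifies the patent's pipeline in one respect: all candidates are projected with a single generic projection callable rather than the separate 3D-to-2D and 2D-to-2D mappings, and project_to and wrapped_phase_of are hypothetical functions; the 0.6/0.8/1.0 rad thresholds are the ones given above.

```python
import numpy as np

def select_fringe_order(phi_ref, candidates, project_to, wrapped_phase_of,
                        thresholds=((2, 0.6), (3, 0.8), (4, 1.0))):
    """Phase-consistency cascade over cameras 2, 3 and 4.

    phi_ref: wrapped phase of the pixel in camera 1.
    candidates: list of (k, xyz) pairs surviving the ADC, one per fringe order.
    project_to(cam, xyz): hypothetical mapping of a 3D point into camera cam.
    wrapped_phase_of(cam, uv): hypothetical lookup in camera cam's phase map.
    Returns the fringe order k of the unique surviving candidate, or None.
    """
    for cam, thr in thresholds:
        kept = []
        for k, xyz in candidates:
            uv = project_to(cam, xyz)
            diff = wrapped_phase_of(cam, uv) - phi_ref
            diff = np.angle(np.exp(1j * diff))     # wrap difference to (-pi, pi]
            if abs(diff) < thr:
                kept.append((k, xyz))
        candidates = kept
    return candidates[0][0] if len(candidates) == 1 else None
```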
Step 3: in the coarse registration thread, use the scale-invariant feature transform (SIFT) algorithm to quickly obtain the 2D matching points of adjacent images, realize coarse registration based on simultaneous localization and mapping (SLAM) technology by solving the "perspective-n-point" (PnP) problem, and quantify the object's motion at each coarse registration. The details are as follows.

Point cloud registration consists of coarse registration and fine registration, which would normally be performed serially. To improve efficiency, the present invention proposes a coarse-to-fine dual-thread parallel mechanism in which coarse and fine registration run in two independent threads. The coarse registration flow of the coarse registration thread is described in this step; the fine registration flow of the fine registration thread is described in step 4.

1. Find the 2D matching points of adjacent images.

Consider two adjacent 3D frames, Fram1 and Fram2, with their corresponding 2D texture maps I_1 and I_2 (captured by the first camera 1) and their 3D data in the coordinate system of the first camera 1; let the current 3D frame be Fram2 and the previous one Fram1. First, the SIFT method finds and matches the feature points of the two texture maps I_1 and I_2; then some wrong 2D matches are excluded using the Euclidean distance.

2. Optimize the 2D matching points.

The 2D matching points obtained above still contain some wrong pairs, which would degrade the coarse registration and should be avoided as far as possible. The present invention therefore proposes a method that uses three-dimensional information to eliminate wrong matches, as follows.

(1) Compute the distance from every matching point in Fram1 and Fram2 to all the other matching points. The 3D matching point sets corresponding to the 2D matching points found by SIFT can be recorded as

P = {P_1, P_2, ..., P_i, ..., P_n}
Q = {Q_1, Q_2, ..., Q_i, ..., Q_n}

where P and Q denote the 3D point sets corresponding to the 2D matching points of Fram1 and Fram2, respectively, n is the number of matching pairs, and i ∈ [1, n]. The distances between each P_i and all points of the set P are computed:

H_i(P_i) = {h_i(P_i, P_1), h_i(P_i, P_2), ..., h_i(P_i, P_j), ..., h_i(P_i, P_n)}

where H_i(P_i) denotes the set of distances between P_i and all points of P, and h_i(P_i, P_j) the distance between P_i and P_j, j ∈ [1, n]. Similarly, the distance set of Q_i is denoted H_i(Q_i).

(2) Compute the differences between corresponding elements of the two distance sets obtained in (1):

D_i = {D_i,1, ..., D_i,j, ..., D_i,n}
    = {h_i(P_i, P_1) − h_i(Q_i, Q_1), ..., h_i(P_i, P_j) − h_i(Q_i, Q_j), ..., h_i(P_i, P_n) − h_i(Q_i, Q_n)}

where D_i denotes the set of differences between corresponding elements of H_i(P_i) and H_i(Q_i), and D_i,j the difference between h_i(P_i, P_j) and h_i(Q_i, Q_j).

(3) Further exclude wrong 2D matches. The absolute value of every element D_i,j of D_i is compared with a threshold D_thre (D_thre may be 5-6 mm). If the number of elements of D_i whose absolute value exceeds D_thre is greater than half the number of matching points, n/2, the pair (P_i, Q_i) is considered unreliable and discarded. This basically eliminates the wrong matching pairs.

3. Obtain the rotation and translation matrices of the object.

After the 2D matching points are identified, the rotation and translation matrices between adjacent 3D frames must be obtained.

(1) Convert the rotation/translation estimation into a PnP problem. To obtain the transformation matrices quickly, the present invention treats the estimation of the rotation and translation matrices between adjacent 3D frames as a perspective-n-point (PnP) problem. The traditional PnP problem is modified as follows (as shown in Figure 3): with the quad-camera fringe projection system stationary and the object moving from position 1 to position 2, given the 3D data of the object at position 1 (obtained in step 2) and the 2D matching points between the images captured by the first camera 1 before and after the movement (found by the SIFT algorithm and optimized with 3D information), find the pose transformation of the object; the estimation of the object's rotation and translation matrices is thereby turned into a PnP problem.

(2) Solve the PnP problem with the EPnP method to obtain the rotation and translation matrices. The present invention solves the PnP problem with the EPnP method, a non-iterative solution widely used in visual SLAM systems for its balance of accuracy and efficiency. The solvePnPRansac function (which implements pose estimation) of OpenCV, an open-source cross-platform computer vision library, is called directly with the following inputs: P = {P_1, P_2, ..., P_i, ..., P_n}, the 2D points in I_2 corresponding to Q, and the intrinsic matrix of the first camera 1 (obtained in step 1); the computation model of the function's minimal subset is set to SOLVE_EPnP. Running the function yields the rotation and translation matrices between the two adjacent 3D frames. The 3D data of Fram1 is then transformed to coarsely register it with the 3D data of Fram2.

4. Quantify the motion at each coarse registration and keep results with motion between 30 and 50.

The present invention quantifies the object's motion at each coarse registration by the sum of the 2-norms of the rotation and translation vectors. When the motion is less than 30, the result of this coarse registration is discarded, to avoid the data redundancy caused by too small a motion, and the next coarse registration is performed. When the motion is greater than 50, coarse registration is suspended because the registration error would be too large; it resumes only once the user has rotated the object back to a position whose motion relative to the last valid 3D frame lies between 30 and 50.
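The OpenCV call described above can be sketched as follows. One detail for anyone reproducing it: in current OpenCV the EPnP model flag is spelled cv2.SOLVEPNP_EPNP (the patent writes SOLVE_EPnP). The motion measure implemented at the end is the sum of the 2-norms of the rotation and translation vectors, as the text specifies; the function name is ours.

```python
import cv2
import numpy as np

def coarse_pose_and_motion(p3d, q2d, K):
    """EPnP coarse pose between adjacent 3D frames, plus the motion measure.

    p3d: (n, 3) 3D points of the previous frame (from step 2).
    q2d: (n, 2) matching pixels in the current image of camera 1 (from SIFT).
    K:   3x3 intrinsic matrix of camera 1 (from the calibration of step 1).
    """
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        p3d.astype(np.float32), q2d.astype(np.float32), K, None,
        flags=cv2.SOLVEPNP_EPNP)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)                   # rotation vector -> 3x3 matrix
    motion = np.linalg.norm(rvec) + np.linalg.norm(tvec)
    return R, tvec, motion                       # keep only 30 <= motion <= 50
```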
Step 4: in the fine registration thread, when the cumulative motion of the coarse registrations reaches a certain threshold, execute fine registration based on the iterative closest point (ICP) algorithm, as follows.

After coarse registration, the ICP algorithm is used for fine registration. ICP is the main method for aligning 3D models when an initial pose estimate between two views is known, but because of its iterative computation it is time-consuming and ill-suited to real-time scenarios. To improve registration efficiency while reducing the accumulation of registration errors, the present invention proposes a dual-thread parallel processing mechanism that executes SLAM-based coarse registration (step 3) and ICP-based fine registration (step 4) simultaneously in two separate threads, and performs one fine registration only after several coarse registrations. The specific steps are as follows.

1. Compare the cumulative motion of the coarse registrations with the motion threshold to decide whether fine registration can proceed.

Determine whether the cumulative motion of the valid coarse registrations of step 3 has reached the motion threshold of 200. Only when it has is ICP-based fine registration executed in the fine registration thread, using the time of the next cycle's coarse registrations; that is, the fine registration runs simultaneously with the multiple coarse registrations of the next cycle.

2. Perform the fine registration.

To further improve the efficiency of fine registration, the present invention first downsamples the coarsely registered 3D point clouds, then processes the downsampled data with the ICP algorithm; after the precise transformation matrix is obtained, the operation is applied to the original dense three-dimensional point cloud data, finally completing the fine 3D registration.

(1) Downsample the coarsely registered 3D point clouds. The point clouds are downsampled with the voxelized grid method: the two coarsely registered three-dimensional point clouds are divided into multiple three-dimensional voxel grids, and the centroid of the points in each voxel grid is used as the downsampled value. This yields the downsampled 3D point sets of Fram1 and Fram2.

(2) Process the downsampled data with the ICP algorithm to obtain the precise transformation matrix. The ICP method is one of the most commonly used methods for aligning 3D models when an initial pose estimate between two views is known. After the point cloud data is downsampled, the first step is to find, for every point of one downsampled point set, the corresponding closest point in the other; the second step is to find the rigid transformation that minimizes the average distance between the corresponding point pairs obtained in the first step, and to obtain the rotation and translation matrices of this rigid transformation; the third step is to transform the point set with the rotation and translation matrices obtained in the second step, giving a new point set; the fourth step is to stop the computation if the average distance between the new point set and the other set is less than 0.01, and otherwise to take the new set as input and continue iterating the above steps until one of the following two conditions is met: (1) the average distance between the new transformed point set and the other set is less than 0.01; (2) the total number of iterations reaches 10. After several iterations, once one of these two termination conditions is met, ICP terminates, and the rotation and translation matrices obtained in the last iteration constitute the transformation matrix required for fine registration. With this fine-registration transformation matrix, the original dense three-dimensional point cloud data is transformed, finally completing the fine 3D registration.
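The four ICP steps enumerated above map directly onto a short reference implementation. The sketch below is ours, not the patent's code: it uses a SciPy k-d tree for the closest-point search and the SVD (Kabsch) solution for the optimal rigid transform, with the 0.01 mean-distance tolerance and the 10-iteration cap from the text.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(src, tgt, max_iter=10, tol=0.01):
    """Point-to-point ICP following the four steps in the text.

    src, tgt: (n, 3) and (m, 3) downsampled point sets.
    Returns the accumulated rotation R and translation t mapping src to tgt.
    """
    R, t = np.eye(3), np.zeros(3)
    tree = cKDTree(tgt)
    cur = src.copy()
    for _ in range(max_iter):                 # cap: 10 iterations (condition 2)
        d, idx = tree.query(cur)              # step 1: closest points in tgt
        if d.mean() < tol:                    # step 4: mean distance < 0.01
            break
        matched = tgt[idx]
        # step 2: rigid transform minimizing the mean pair distance (SVD)
        mu_s, mu_t = cur.mean(axis=0), matched.mean(axis=0)
        U, _, Vt = np.linalg.svd((cur - mu_s).T @ (matched - mu_t))
        if np.linalg.det(Vt.T @ U.T) < 0:     # guard against a reflection
            Vt[-1] *= -1
        Ri = Vt.T @ U.T
        ti = mu_t - Ri @ mu_s
        cur = cur @ Ri.T + ti                 # step 3: transform the point set
        R, t = Ri @ R, Ri @ t + ti            # accumulate the total pose
    return R, t
```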
Embodiment

To verify the effectiveness of the present invention, a quad-camera fringe projection system was constructed, comprising a LightCrafter 4500Pro projector and four Basler acA640-750um cameras (resolution 640×480) for SPU unwrapping. An HP Z230 computer (Intel Xeon E3-1226 v3 CPU, NVIDIA Quadro K2200 GPU) was used, and the proposed algorithms were developed with libraries such as OpenCV and PCL. In the experiments, the projector runs at 100 Hz, and all cameras are triggered by the projector. The imaging speed of the built system is 45 Hz, and the single-scan accuracy is 45 microns.

In the first experiment, a plaster David model was rotated arbitrarily and without constraint, and real-time 360-degree omnidirectional modeling of it was achieved. Figure 4 shows the point cloud registration scenes and results in different time periods. A single coarse registration takes 0.4 seconds and a single fine registration 2 seconds; the reconstruction of the entire model takes 70 seconds. Figure 5 shows the registered omnidirectional 360° three-dimensional model, where Figure 5(a) is the complete point cloud obtained and Figure 5(b) is its triangulation. All the 3D topography of the David model has been reconstructed, and the first and last point clouds are well aligned. This experiment shows that the proposed method can obtain a high-quality 360° model of a handheld object in real time.

In the second experiment, two ceramic spheres with radii of 25.3989 mm and 25.4038 mm and a center distance of 100.0532 mm were measured. Since the spheres have no 2D features, the David model, which has rich 2D features, was needed to complete the point cloud registration: the David model and the spheres were rotated together to register the spheres in real time. The experimental results are shown in Figure 6, where Figures 6(a)-(b) show the measured objects from two different viewing angles and Figures 6(c)-(d) the point cloud registration results from different viewing angles. The measured spheres were fitted, and the error distributions between the fitted and measured sphere data are shown in Figures 6(e) and 6(f). The radii of the reconstructed spheres are 25.3580 mm and 25.3543 mm, with deviations of 40.9 μm and 49.5 μm, respectively; the measured center distance is 99.9345 mm, with an error of 118.7 μm. This experiment shows that the overall accuracy of the complete 3D models reconstructed by the proposed method reaches the hundred-micron level, which makes the method a candidate for 360° industrial surface defect detection and other fields.

Claims (5)

  1. A high-precision dynamic real-time 360-degree omnidirectional point cloud acquisition method based on fringe projection, characterized by the following steps:
    Step 1: build a quad-camera fringe projection system based on the stereo phase unwrapping (SPU) method and complete the system calibration;
    Step 2: use SPU to obtain the absolute phase of the object and acquire real-time high-precision three-dimensional data of the object from a single viewing angle;
    Step 3: in the coarse registration thread, use the scale-invariant feature transform (SIFT) method to quickly obtain the two-dimensional matching points of adjacent images, realize coarse registration based on simultaneous localization and mapping (SLAM) technology by solving the "perspective-n-point" (PnP) problem, and quantify the amount of object motion at each coarse registration;
    Step 4: in the fine registration thread, when the cumulative motion of the coarse registrations reaches a threshold, execute fine registration based on the iterative closest point (ICP) method.
  2. The method according to claim 1, characterized in that in step 1 the quad-camera fringe projection system comprises a computer, four cameras, and a projector; the projector and the four cameras are connected by four trigger wires, and the four cameras and the computer are connected by four data wires; a gap large enough for one camera is left between the first camera (1) and the projector (5); the second camera (2) is placed between the first camera (1) and the projector (5), close to the first camera (1); the fourth camera (4) and the first camera (1) are placed symmetrically with respect to the projector (5); and the third camera (3) and the second camera (2) are placed symmetrically with respect to the projector (5); the whole system is then calibrated to a unified world coordinate system with Zhang Zhengyou's calibration algorithm, yielding the intrinsic and extrinsic parameters of the four cameras and the projector, and these parameters are converted into 2D-to-3D, 3D-to-2D, and 2D-to-2D mapping parameters.
  3. The method according to claim 1, characterized in that the real-time high-precision acquisition of three-dimensional data from a single viewing angle in step 2 proceeds as follows:
    (1) the projector (5) projects images and the cameras capture them:
    the projector (5) projects three-step phase-shifting fringe patterns onto the object and synchronously triggers the four cameras to capture them, the three-step phase-shifting fringes captured by the first camera (1) being:
    I_1^C(u_C, v_C) = A_C(u_C, v_C) + B_C(u_C, v_C) cos(Φ_C(u_C, v_C) − 2π/3)
    I_2^C(u_C, v_C) = A_C(u_C, v_C) + B_C(u_C, v_C) cos(Φ_C(u_C, v_C))
    I_3^C(u_C, v_C) = A_C(u_C, v_C) + B_C(u_C, v_C) cos(Φ_C(u_C, v_C) + 2π/3)
    where (u_C, v_C) denotes a pixel coordinate on the first camera (1), I_1^C, I_2^C, I_3^C denote the three fringe images captured by the first camera (1), A_C denotes the average intensity, B_C the modulation intensity, and Φ_C the absolute phase of the fringe pattern;
    (2) obtain the wrapped phase of the object:
    the wrapped phase of the object is obtained from the three fringe images captured by the first camera (1):
    φ_C(u_C, v_C) = arctan[√3 (I_1^C − I_3^C) / (2 I_2^C − I_1^C − I_3^C)]
    where φ_C(u_C, v_C) denotes the solved wrapped phase, and the relationship between the wrapped phase and the absolute phase is
    Φ_C(u_C, v_C) = φ_C(u_C, v_C) + 2π k_C(u_C, v_C), k_C(u_C, v_C) ∈ [0, N−1]
    where k_C denotes the fringe order and N the number of fringes, the process of solving the fringe order being called unwrapping;
    (3) unwrapping:
    the stereo phase unwrapping (SPU) method, assisted by an adaptive dynamic depth constraint (ADC) mechanism, obtains the absolute phase of the object from only three three-step phase-shifting fringe images without any additional auxiliary fringes, as follows:
    (a) find the N 3D candidate points corresponding to a pixel of the first camera (1):
    any pixel of the first camera (1) has N possible absolute phases; using the 2D-to-3D mapping parameters obtained in step 1, these N possible absolute phases are reconstructed into N three-dimensional candidate points, and some incorrect candidates are excluded by the ADC technique;
    (b) find the 2D candidate points in the second camera (2):
    assuming N_1 (0 < N_1 < N) three-dimensional candidate points remain after exclusion, they are projected into the second camera (2) with the 3D-to-2D mapping parameters obtained in step 1, giving N_1 corresponding two-dimensional candidate points; among these there must be one correct matching point, which should have a wrapped phase similar to that of the pixel in the first camera (1), so the correct match can be found by a phase consistency check: only the two-dimensional candidates whose wrapped-phase difference from the first-camera pixel is within 0.6 rad are retained, and the other candidates are excluded;
    (c) find the 2D candidate points in the third camera (3):
    assuming N_2 (0 < N_2 < N_1 < N) two-dimensional candidate points remain, they are projected into the third camera (3) with the 2D-to-2D mapping parameters obtained in step 1, giving N_2 corresponding two-dimensional candidates, which are again checked for phase consistency against the first-camera pixel, candidates whose phase difference is within 0.8 rad being retained;
    (d) find the 2D candidate points in the fourth camera (4) and determine the fringe order of the first-camera pixel:
    the remaining two-dimensional candidate points are projected into the fourth camera (4) for a phase consistency check, candidates whose phase difference is within 1 rad being retained; after three rounds of phase consistency checks the unique correct candidate point is confirmed, and with it the fringe order of the pixel;
    (e) obtain the absolute phase of the object and the three-dimensional information from a single viewing angle:
    performing the above operations in parallel on the computer's GPU for every pixel of the first camera (1) yields the absolute phase of the object, and the high-precision three-dimensional topography of the object from a single viewing angle is finally reconstructed with the calibration parameters obtained in step 1.
  4. The method according to claim 1, characterized in that step 3 proceeds as follows:
    (1) find the 2D matching points of adjacent images:
    consider two adjacent 3D frames Fram1 and Fram2 with their corresponding 2D texture maps I_1 and I_2 and their 3D data in the coordinate system of the first camera (1), the current 3D frame being Fram2 and the previous one Fram1; first find and match the feature points of the two texture maps I_1 and I_2 by the scale-invariant feature transform (SIFT) method, then exclude some wrong two-dimensional matches using the Euclidean distance;
    (2) optimize the 2D matching points, eliminating wrong matches with the three-dimensional information corresponding to the two-dimensional matching points, as follows:
    (a) compute the distance from every matching point of Fram1 and Fram2 to all the other matching points:
    the 3D matching point sets corresponding to the two-dimensional matching points found by SIFT are recorded as
    P = {P_1, P_2, ..., P_i, ..., P_n}
    Q = {Q_1, Q_2, ..., Q_i, ..., Q_n}
    where P and Q denote the three-dimensional point sets corresponding to the two-dimensional matching points of Fram1 and Fram2, respectively, n is the number of matching pairs, and i ∈ [1, n]; the distances between each P_i and all points of the set P are computed:
    H_i(P_i) = {h_i(P_i, P_1), h_i(P_i, P_2), ..., h_i(P_i, P_j), ..., h_i(P_i, P_n)}
    where H_i(P_i) denotes the set of distances between P_i and all points of P and h_i(P_i, P_j) the distance between P_i and P_j, j ∈ [1, n]; the distance set of Q_i is denoted H_i(Q_i);
    (b) compute the differences between corresponding elements of the two distance sets obtained in (a):
    D_i = {D_i,1, ..., D_i,j, ..., D_i,n}
        = {h_i(P_i, P_1) − h_i(Q_i, Q_1), ..., h_i(P_i, P_j) − h_i(Q_i, Q_j), ..., h_i(P_i, P_n) − h_i(Q_i, Q_n)}
    where D_i denotes the set of differences between corresponding elements of H_i(P_i) and H_i(Q_i) and D_i,j the difference between h_i(P_i, P_j) and h_i(Q_i, Q_j);
    (c) further exclude wrong 2D matches:
    compare the absolute value of every element D_i,j of D_i with a threshold D_thre; if the number of elements of D_i whose absolute value exceeds D_thre is greater than half the number of matching points, n/2, the pair (P_i, Q_i) is considered unreliable and is discarded, eliminating the wrong matching pairs;
    (3) obtain the rotation and translation matrices of the object:
    after the 2D matching points are identified, the rotation and translation matrices between adjacent 3D frames are obtained;
    (a) convert the rotation/translation estimation into a PnP problem:
    the traditional PnP problem is modified as follows: with the quad-camera fringe projection system stationary and the object moving from position 1 to position 2, given the 3D data of the object at position 1 and the 2D matching points between the images captured by the first camera (1) before and after the movement, find the pose transformation of the object, thereby turning the estimation of the object's rotation and translation matrices into a PnP problem;
    (b) solve the PnP problem with the EPnP method to obtain the rotation and translation matrices:
    call the solvePnPRansac function of the OpenCV library, which implements pose estimation, with inputs P = {P_1, P_2, ..., P_i, ..., P_n}, the 2D points in I_2 corresponding to Q, and the intrinsic matrix of the first camera (1); set the computation model of the function's minimal subset to SOLVE_EPnP; running the function yields the rotation and translation matrices between the two adjacent 3D frames, and the 3D data of Fram1 is transformed to coarsely register it with the 3D data of Fram2;
    (4) quantify the motion at each coarse registration and keep the results whose motion lies between 30 and 50:
    the object's motion at each coarse registration is quantified by the sum of the 2-norms of the rotation and translation vectors; when the motion is less than 30, the result of this coarse registration is discarded, to avoid the data redundancy caused by too small a motion, and the next coarse registration is performed; when the motion is greater than 50, coarse registration is no longer performed, owing to the excessive registration error, until the user rotates the object back to a position whose motion relative to the last valid 3D frame lies between 30 and 50.
  5. The method according to claim 1, characterized in that step 4 proceeds as follows:
    (1) compare the cumulative motion of the coarse registrations with the motion threshold to decide whether fine registration can proceed:
    determine whether the cumulative motion of the valid coarse registrations of step 3 has reached the motion threshold of 200; only when it has is ICP-based fine registration executed in the fine registration thread, using the time of the next cycle's coarse registrations, i.e. the fine registration runs simultaneously with the multiple coarse registrations of the next cycle;
    (2) perform the fine registration:
    downsample the coarsely registered 3D point clouds, process the downsampled data with the ICP algorithm, and, after the precise transformation matrix is obtained, operate on the original dense three-dimensional point cloud data to finally complete the fine 3D registration, namely
    (a) downsample the coarsely registered 3D point clouds:
    downsample the point clouds with the voxelized grid method, dividing the two coarsely registered three-dimensional point clouds into multiple three-dimensional voxel grids and using the centroid of the points in each voxel grid as the downsampled value, which yields the downsampled 3D point sets of Fram1 and Fram2;
    (b) process the downsampled data with the ICP algorithm to obtain the precise transformation matrix:
    after the point cloud data is downsampled, the first step is to find, for every point of one downsampled point set, the corresponding closest point in the other; the second step is to find the rigid transformation that minimizes the average distance between the corresponding point pairs obtained in the first step, and to obtain the rotation and translation matrices of this rigid transformation; the third step is to transform the point set with the rotation and translation matrices obtained in the second step, giving a new point set; the fourth step is to stop the computation if the average distance between the new point set and the other set is less than 0.01, and otherwise to take the new set as input and continue iterating the above steps until one of the following two conditions is met: (1) the average distance between the new transformed point set and the other set is less than 0.01; (2) the total number of iterations reaches 10; after several iterations, once one of these two termination conditions is met, ICP terminates, the rotation and translation matrices obtained in the last iteration being the transformation matrix required for fine registration; with this fine-registration transformation matrix, the original dense three-dimensional point cloud data is transformed, finally completing the fine 3D registration.
PCT/CN2020/111545 2019-11-08 2020-08-27 High-precision dynamic real-time 360-degree omnidirectional point cloud acquisition method based on fringe projection WO2021088481A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/769,230 US11961244B2 (en) 2019-11-08 2020-08-27 High-precision dynamic real-time 360-degree omnidirectional point cloud acquisition method based on fringe projection

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911089522.2A 2019-11-08 2019-11-08 High-precision dynamic real-time 360-degree omnidirectional point cloud acquisition method based on fringe projection
CN201911089522.2 2019-11-08

Publications (1)

Publication Number Publication Date
WO2021088481A1 (zh) 2021-05-14

Family

ID=69729198

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/111545 WO2021088481A1 (zh) 2019-11-08 2020-08-27 High-precision dynamic real-time 360-degree omnidirectional point cloud acquisition method based on fringe projection

Country Status (3)

Country Link
US (1) US11961244B2 (zh)
CN (1) CN110880185B (zh)
WO (1) WO2021088481A1 (zh)

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113160096A (zh) * 2021-05-27 2021-07-23 山东中医药大学 一种基于视网膜模型的低光图像增强方法
CN113284170A (zh) * 2021-05-26 2021-08-20 北京智机科技有限公司 一种点云快速配准方法
CN113470084A (zh) * 2021-05-18 2021-10-01 西安电子科技大学 一种基于外轮廓粗匹配的点集配准方法
CN113536232A (zh) * 2021-06-28 2021-10-22 上海科技大学 用于无人驾驶中激光点云定位的正态分布变换方法
CN113587816A (zh) * 2021-08-04 2021-11-02 天津微深联创科技有限公司 一种阵列式大场景结构光三维扫描测量方法及其装置
CN113592961A (zh) * 2021-08-18 2021-11-02 易思维(杭州)科技有限公司 一种基于精度控制场和点云特征相似性的点云拼接方法
CN113643273A (zh) * 2021-08-24 2021-11-12 凌云光技术股份有限公司 一种基于点云数据的缺陷检测方法及装置
CN113673371A (zh) * 2021-07-30 2021-11-19 河南工业大学 基于振动信号的滚珠丝杠副正反向换向点识别方法及系统
CN113706591A (zh) * 2021-07-30 2021-11-26 华东理工大学 一种基于点云的表面弱纹理卫星三维重建方法
CN113808273A (zh) * 2021-09-14 2021-12-17 大连海事大学 一种船行波数值模拟的无序式增量稀疏点云重构方法
CN114119995A (zh) * 2021-11-08 2022-03-01 山东科技大学 一种基于物方面元的空地影像匹配方法
CN114742776A (zh) * 2022-03-30 2022-07-12 西安交通大学 一种机床加工刀具三维磨破损在线监测方法
CN114897907A (zh) * 2022-07-14 2022-08-12 北京远舢智能科技有限公司 一种烟支外观缺陷检测方法、装置及电子设备
CN115147540A (zh) * 2022-05-07 2022-10-04 南方科技大学 一种改进的三维重构方法
CN115442584A (zh) * 2022-08-30 2022-12-06 中国传媒大学 一种多传感器融合的异形面动态投影方法
WO2023109960A1 (zh) * 2021-12-17 2023-06-22 先临三维科技股份有限公司 三维扫描的处理方法、装置和三维扫描设备
CN116863086A (zh) * 2023-09-04 2023-10-10 武汉国遥新天地信息技术有限公司 一种光学动作捕捉系统刚体稳定重建方法
CN116883471A (zh) * 2023-08-04 2023-10-13 天津大学 面向胸腹部经皮穿刺的线结构光无接触点云配准方法
CN117036622A (zh) * 2023-10-08 2023-11-10 海纳云物联科技有限公司 融合航拍图像和地面扫描的三维重建方法、装置和设备
CN114022526B (zh) * 2021-11-29 2024-04-26 合肥工业大学 一种基于三维形状上下文的sac-ia点云配准方法

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110880185B (zh) 2019-11-08 2022-08-12 High-precision dynamic real-time 360-degree omnidirectional point cloud acquisition method based on fringe projection
CN113538587A (zh) * 2020-04-16 2021-10-22 深圳先进技术研究院 一种相机坐标变换方法、终端以及存储介质
CN114283046B (zh) * 2021-11-19 2022-08-19 广州市城市规划勘测设计研究院 基于icp算法的点云文件配准方法、装置及存储介质
CN115546255B (zh) * 2022-11-28 2023-02-28 南京理工大学 基于sift流单帧条纹投影高动态范围误差补偿方法

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102288131A (zh) * 2011-05-12 2011-12-21 上海大学 物体360°轮廓误差的自适应条纹测量装置和方法
CN107240067A (zh) * 2017-05-11 2017-10-10 同济大学 一种基于三维重建的序列图像自动拼接方法
US20190068954A1 (en) * 2015-06-13 2019-02-28 Alberto Daniel Lacaze Senising on uavs for mapping and obstacle avoidance
CN110044301A (zh) * 2019-03-29 2019-07-23 易思维(天津)科技有限公司 基于单双目混合测量的三维点云计算方法
CN110880185A (zh) * 2019-11-08 2020-03-13 南京理工大学 基于条纹投影的高精度动态实时360度全方位点云获取方法

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2007061632A2 (en) * 2005-11-09 2007-05-31 Geometric Informatics, Inc. Method and apparatus for absolute-coordinate three-dimensional surface imaging
CN103914874B (zh) * 2014-04-08 2017-02-01 中山大学 一种无特征提取的紧致sfm三维重建方法
CN107240129A (zh) * 2017-05-10 2017-10-10 同济大学 基于rgb‑d相机数据的物体及室内小场景恢复与建模方法

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102288131A (zh) * 2011-05-12 2011-12-21 上海大学 物体360°轮廓误差的自适应条纹测量装置和方法
US20190068954A1 (en) * 2015-06-13 2019-02-28 Alberto Daniel Lacaze Senising on uavs for mapping and obstacle avoidance
CN107240067A (zh) * 2017-05-11 2017-10-10 同济大学 一种基于三维重建的序列图像自动拼接方法
CN110044301A (zh) * 2019-03-29 2019-07-23 易思维(天津)科技有限公司 基于单双目混合测量的三维点云计算方法
CN110880185A (zh) * 2019-11-08 2020-03-13 南京理工大学 基于条纹投影的高精度动态实时360度全方位点云获取方法

Cited By (34)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113470084A (zh) * 2021-05-18 2021-10-01 西安电子科技大学 一种基于外轮廓粗匹配的点集配准方法
CN113470084B (zh) * 2021-05-18 2024-01-30 西安电子科技大学 一种基于外轮廓粗匹配的点集配准方法
CN113284170A (zh) * 2021-05-26 2021-08-20 北京智机科技有限公司 一种点云快速配准方法
CN113160096A (zh) * 2021-05-27 2021-07-23 山东中医药大学 一种基于视网膜模型的低光图像增强方法
CN113160096B (zh) * 2021-05-27 2023-12-08 山东中医药大学 一种基于视网膜模型的低光图像增强方法
CN113536232A (zh) * 2021-06-28 2021-10-22 上海科技大学 用于无人驾驶中激光点云定位的正态分布变换方法
CN113536232B (zh) * 2021-06-28 2023-03-21 上海科技大学 用于无人驾驶中激光点云定位的正态分布变换方法
CN113706591A (zh) * 2021-07-30 2021-11-26 华东理工大学 一种基于点云的表面弱纹理卫星三维重建方法
CN113673371B (zh) * 2021-07-30 2024-04-05 河南工业大学 基于振动信号的滚珠丝杠副正反向换向点识别方法及系统
CN113673371A (zh) * 2021-07-30 2021-11-19 河南工业大学 基于振动信号的滚珠丝杠副正反向换向点识别方法及系统
CN113706591B (zh) * 2021-07-30 2024-03-19 华东理工大学 一种基于点云的表面弱纹理卫星三维重建方法
CN113587816A (zh) * 2021-08-04 2021-11-02 天津微深联创科技有限公司 一种阵列式大场景结构光三维扫描测量方法及其装置
CN113592961A (zh) * 2021-08-18 2021-11-02 易思维(杭州)科技有限公司 一种基于精度控制场和点云特征相似性的点云拼接方法
CN113592961B (zh) * 2021-08-18 2023-08-01 易思维(杭州)科技有限公司 一种基于精度控制场和点云特征相似性的点云拼接方法
CN113643273B (zh) * 2021-08-24 2024-05-03 凌云光技术股份有限公司 一种基于点云数据的缺陷检测方法及装置
CN113643273A (zh) * 2021-08-24 2021-11-12 凌云光技术股份有限公司 一种基于点云数据的缺陷检测方法及装置
CN113808273A (zh) * 2021-09-14 2021-12-17 大连海事大学 一种船行波数值模拟的无序式增量稀疏点云重构方法
CN113808273B (zh) * 2021-09-14 2023-09-12 大连海事大学 一种船行波数值模拟的无序式增量稀疏点云重构方法
CN114119995B (zh) * 2021-11-08 2024-03-15 山东科技大学 一种基于物方面元的空地影像匹配方法
CN114119995A (zh) * 2021-11-08 2022-03-01 山东科技大学 一种基于物方面元的空地影像匹配方法
CN114022526B (zh) * 2021-11-29 2024-04-26 合肥工业大学 一种基于三维形状上下文的sac-ia点云配准方法
WO2023109960A1 (zh) * 2021-12-17 2023-06-22 先临三维科技股份有限公司 三维扫描的处理方法、装置和三维扫描设备
CN114742776B (zh) * 2022-03-30 2024-03-15 西安交通大学 一种机床加工刀具三维磨破损在线监测方法
CN114742776A (zh) * 2022-03-30 2022-07-12 西安交通大学 一种机床加工刀具三维磨破损在线监测方法
CN115147540A (zh) * 2022-05-07 2022-10-04 南方科技大学 一种改进的三维重构方法
CN114897907A (zh) * 2022-07-14 2022-08-12 北京远舢智能科技有限公司 一种烟支外观缺陷检测方法、装置及电子设备
CN115442584B (zh) * 2022-08-30 2023-08-18 中国传媒大学 一种多传感器融合的异形面动态投影方法
CN115442584A (zh) * 2022-08-30 2022-12-06 中国传媒大学 一种多传感器融合的异形面动态投影方法
CN116883471A (zh) * 2023-08-04 2023-10-13 天津大学 面向胸腹部经皮穿刺的线结构光无接触点云配准方法
CN116883471B (zh) * 2023-08-04 2024-03-15 天津大学 面向胸腹部经皮穿刺的线结构光无接触点云配准方法
CN116863086A (zh) * 2023-09-04 2023-10-10 武汉国遥新天地信息技术有限公司 一种光学动作捕捉系统刚体稳定重建方法
CN116863086B (zh) * 2023-09-04 2023-11-24 武汉国遥新天地信息技术有限公司 一种光学动作捕捉系统刚体稳定重建方法
CN117036622B (zh) * 2023-10-08 2024-02-23 海纳云物联科技有限公司 融合航拍图像和地面扫描的三维重建方法、装置和设备
CN117036622A (zh) * 2023-10-08 2023-11-10 海纳云物联科技有限公司 融合航拍图像和地面扫描的三维重建方法、装置和设备

Also Published As

Publication number Publication date
US11961244B2 (en) 2024-04-16
CN110880185B (zh) 2022-08-12
US20240037765A1 (en) 2024-02-01
CN110880185A (zh) 2020-03-13

Similar Documents

Publication Publication Date Title
WO2021088481A1 (zh) 一种基于条纹投影的高精度动态实时360度全方位点云获取方法
CN110288642B (zh) 基于相机阵列的三维物体快速重建方法
CN110514143B (zh) 一种基于反射镜的条纹投影系统标定方法
WO2021140886A1 (ja) 三次元モデル生成方法、情報処理装置およびプログラム
US20060017720A1 (en) System and method for 3D measurement and surface reconstruction
JP6483832B2 (ja) Rgb−dセンサを使用して物体を走査する方法とシステム
CN109307483A (zh) 一种基于结构光系统几何约束的相位展开方法
Wu et al. A novel high precise laser 3D profile scanning method with flexible calibration
CN111060006A (zh) 一种基于三维模型的视点规划方法
JP5220144B2 (ja) 低密度の反射対応から鏡面物体の表面を再構築するための方法
CN111353997B (zh) 一种基于条纹投影的实时三维面型缺陷检测方法
Liu et al. CurveFusion: reconstructing thin structures from RGBD sequences
Jokinen Area-based matching for simultaneous registration of multiple 3-D profile maps
CN116433841A (zh) 一种基于全局优化的实时模型重建方法
Arold et al. Hand-guided 3D surface acquisition by combining simple light sectioning with real-time algorithms
Albouy et al. Accurate 3D structure measurements from two uncalibrated views
Khalfaoui et al. View planning approach for automatic 3d digitization of unknown objects
Liu et al. Interactive 3D model acquisition and registration
Wang et al. High-speed 360° 3D shape measurement based on visual stereo phase unwrapping method
Urquhart et al. Toward real-time dynamic close-range photogrammetry
Qian et al. Real-time 3D point cloud registration
LIU et al. High-Quality Reconstruction of 3D Model
Li et al. A method of 3D reconstruction from image sequence
Li et al. A view planning method incorporating self-termination for automated surface measurement
CN116958233A (zh) 基于多波段红外结构光系统的皮肤烧伤面积计算方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20886057

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 17769230

Country of ref document: US

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20886057

Country of ref document: EP

Kind code of ref document: A1