LU503375B1 - Measuring method and system for structured light 3d scanning - Google Patents


Info

Publication number
LU503375B1
Authority
LU
Luxembourg
Prior art keywords
point cloud
scanning
cloud data
measurement
data information
Prior art date
Application number
LU503375A
Other languages
French (fr)
Inventor
Long Cao
Zimei Tu
Qin Qin
Wenchen Li
Original Assignee
Univ Shanghai Polytech
Priority date
Filing date
Publication date
Application filed by Univ Shanghai Polytech
Application granted
Publication of LU503375B1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/30 - Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/35 - Determination of transform parameters for the alignment of images, i.e. image registration using statistical methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 - Geometric image transformation in the plane of the image
    • G06T3/40 - Scaling the whole image or part thereof
    • G06T3/4038 - Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/23 - Clustering techniques
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/10 - Image enhancement or restoration by non-spatial domain filtering
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/30 - Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 - Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/10 - Image acquisition
    • G06V10/16 - Image acquisition using multiple overlapping images; Image stitching
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 - Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/7715 - Feature extraction, e.g. by transforming the feature space, e.g. multi-dimensional scaling [MDS]; Mappings, e.g. subspace methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10028 - Range image; Depth image; 3D point clouds
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20016 - Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform

Abstract

The invention relates to a measuring method and a measuring system for structured light 3D scanning. The scanning measurement method comprises the following steps: collecting 3D point cloud data information of the measured object in different postures; carrying out rotational coarse registration on the 3D point cloud data information of adjacent postures; optimizing the ICP algorithm by using the robust estimation principle to obtain a robust estimation algorithm, registering the 3D point cloud data information by using the robust estimation algorithm, outputting the point cloud data containing new position information, and carrying out fine registration on the point cloud data. The scanning measurement system comprises a structured light 3D scanning camera, a scanning turntable coated in black, and a 3D measurement processing unit.

Description

DESCRIPTION
MEASURING METHOD AND SYSTEM FOR STRUCTURED LIGHT 3D SCANNING
TECHNICAL FIELD
The invention relates to the field of measuring technique, and in particular to a measuring method and a measuring system for structured light 3D scanning.
BACKGROUND
With the development of manufacturing technology and non-standard automation technology, the accuracy and efficiency requirements for measuring the external dimensions of parts are constantly increasing. The conventional 2D detecting method is time-consuming and labor-intensive; besides, it is difficult to achieve high-precision detection for some complex curved surfaces, and the detection accuracy and efficiency can no longer meet the detection standards and requirements of modern industrial manufacturing for various parts. Modern 3D digital detection technology is gradually replacing the traditional detection technology, has been widely used, and has become the core technology for detecting various high-precision complex parts.
The existing measuring methods for ordinary 3D measuring devices need to register 3D point cloud data information. Generally, the existing 3D measuring devices only use the ICP algorithm for registration, which requires many iterations and has low registration efficiency.
In addition, in order to ensure a correct registration rate, the overlapping rate is generally required to be greater than 30%; when the overlapping rate is lower than 30%, the correct registration rate deteriorates seriously.
In the prior art there is no better method that can ensure the correct registration rate while solving the problems of many iterations and low registration efficiency in the existing ICP algorithm.
SUMMARY
The purpose of the present invention is to solve the technological deficiency existing in the prior art.
To achieve the above aim, in a first aspect, an embodiment of the present invention describes a measuring method for structured light 3D scanning, which comprises the following steps: collecting 3D point cloud data information of the measured object in different postures; carrying out rotational coarse registration on the 3D point cloud data information of adjacent postures; optimizing the ICP algorithm by using the robust estimation principle to obtain a robust estimation algorithm, registering the 3D point cloud data information by using the robust estimation algorithm, outputting the point cloud data containing new position information, and carrying out fine registration on the point cloud data.
The registration algorithm of the present invention combines coarse registration with fine registration. The coarse registration algorithm calculates the Euler angles of the collected 3D point cloud from the movement of the turntable, so as to provide initial data for the fine registration. The fine registration extracts the overlapping part of the 3D point cloud data after coarse registration, and performs robust estimation and alignment on the 3D point cloud data in the overlapping part. The extracted overlapping point cloud data is close to the real position to be aligned, so the robust estimation alignment algorithm only needs a few iterations to complete the alignment, which shortens its running time.
This registration algorithm can ensure that the overlapping area reaches more than 30%, improve the registration accuracy, reduce the point cloud data to be processed, and accelerate the processing speed of point clouds.
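The rotational coarse-registration step can be sketched as follows; this is a minimal illustration assuming the point cloud has already been transformed into a frame whose Z axis coincides with the calibrated turntable axis (the patent's full transform also involves the calibrated tilt angle).

```python
import math

def coarse_register(points, turntable_deg):
    """Rotate a scanned point cloud by the known turntable angle.

    points: list of (x, y, z) tuples in a frame whose origin lies on the
    turntable axis and whose Z axis is the rotation axis (an assumption
    made here for illustration; the patent derives the full Euler angles
    from the calibration parameters)."""
    b = math.radians(turntable_deg)
    cb, sb = math.cos(b), math.sin(b)
    # Rotation about the Z axis by the turntable angle.
    return [(cb * x - sb * y, sb * x + cb * y, z) for x, y, z in points]

# A point on the X axis rotated by 90 degrees lands on the Y axis.
p = coarse_register([(1.0, 0.0, 0.5)], 90.0)[0]
```

Applying this known rotation brings adjacent postures close to alignment before the fine registration runs, which is why the fine step then needs only a few iterations.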
Preferably, the robust estimation is performed by using a robust estimation model, which includes the Huber robust estimation model and/or the IGG robust estimation model.
The registration algorithm of robust estimation of the present invention optimizes the ICP algorithm by using the principle of robust estimation, and uses the weight reduction factor to perform robust estimation, thus improving the robustness of the algorithm. The robust model is selected as the Huber robust estimation model and/or the IGG robust estimation model. After the robust model is selected, the ICP algorithm is improved by the robust estimation method.
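A minimal sketch of the two weight functions named above; the thresholds k, k0 and k1 are common illustrative choices, not values from the patent.

```python
def huber_weight(r, k=1.345):
    """Huber weight: full weight for small residuals, down-weighted beyond k."""
    a = abs(r)
    return 1.0 if a <= k else k / a

def igg3_weight(r, k0=1.5, k1=3.0):
    """One common IGG-III form: keep, down-weight, then reject residuals."""
    a = abs(r)
    if a <= k0:
        return 1.0          # residual considered reliable
    if a <= k1:
        # smoothly reduced weight between the two thresholds
        return (k0 / a) * ((k1 - a) / (k1 - k0)) ** 2
    return 0.0              # residual rejected as an outlier
```

In a robust ICP iteration these weights multiply each correspondence's contribution to the error function, so correspondences with large residuals (likely outliers) influence the estimated transform little or not at all.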
Preferably, the scanning measurement method further comprises: down sampling the classified 3D point cloud data information by using the voxel grid method; and/or filtering the 3D point cloud data information.
The point cloud down sampling algorithm used in the present invention classifies the points in the point cloud by using a threshold on the included angle of the normal vectors, applies the KD-Tree algorithm to accelerate the search of the point cloud, and applies the voxel grid method to the classified point cloud to obtain the down sampled point cloud. This algorithm preserves the local characteristics of the point cloud and shortens the down sampling processing time as much as possible, thus ensuring the measurement accuracy and measurement speed of the automatic measurement software.
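The voxel-grid step of the down sampling algorithm can be sketched as follows; the normal-angle classification and KD-Tree acceleration described above are omitted for brevity, and the voxel size is an illustrative parameter.

```python
from collections import defaultdict

def voxel_downsample(points, voxel=0.05):
    """Voxel-grid downsampling: replace all points in a voxel by their centroid."""
    cells = defaultdict(list)
    for p in points:
        # Integer voxel index of the point along each axis.
        key = tuple(int(c // voxel) for c in p)
        cells[key].append(p)
    # One representative (the centroid) per occupied voxel.
    return [tuple(sum(coord) / len(pts) for coord in zip(*pts))
            for pts in cells.values()]

cloud = [(0.0, 0.0, 0.0), (0.01, 0.01, 0.0), (1.0, 1.0, 1.0)]
down = voxel_downsample(cloud, voxel=0.05)
```

The patent's refinement is to pick the voxel size per grid cell from the normal-vector entropy, so feature-rich regions keep more points than flat ones.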
The point cloud filtering algorithm used in the present invention combines the radius filtering algorithm, the bilateral filtering algorithm and the statistical filtering algorithm, retains the advantages of each algorithm, can better filter out noise and retain more detailed features. At the same time, the invention designs the visual adjustment of the filtering effect, which can visually display the filtering effect and facilitate the adjustment of various filtering parameters, so as to achieve a satisfactory filtering effect and improve the usability of the automatic measurement software.
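Of the three combined filters, the statistical stage can be sketched as follows; the radius and bilateral stages and the exact thresholds are omitted, and the brute-force neighbour search stands in for the KD-Tree.

```python
import math
import statistics

def statistical_filter(points, k=2, n_sigma=1.0):
    """Statistical outlier removal: drop points whose mean distance to their
    k nearest neighbours is more than n_sigma standard deviations above the
    global mean. k and n_sigma are illustrative parameters."""
    def mean_knn_dist(p):
        dists = sorted(math.dist(p, q) for q in points if q is not p)
        return sum(dists[:k]) / k

    d = [mean_knn_dist(p) for p in points]
    mu = statistics.mean(d)       # mathematical expectation of the distances
    sigma = statistics.pstdev(d)  # standard deviation of the distances
    return [p for p, di in zip(points, d) if di <= mu + n_sigma * sigma]

cloud = [(0, 0, 0), (0.1, 0, 0), (0, 0.1, 0), (0.1, 0.1, 0), (5.0, 5.0, 5.0)]
kept = statistical_filter(cloud)
```

In the patent's pipeline the statistical threshold additionally constrains the bilateral-filter parameters rather than acting only as a point rejector.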
In the second aspect, the embodiment of the present invention provides a measuring system for structured light 3D scanning, which is characterized by comprising: a structured light 3D scanning camera, which is used to collect 3D point cloud information of the measured object; a scanning turntable coated in black, which is used to collect 3D point cloud data of the measured object from multiple angles, and the black coating is used to remove the point cloud information of the non-measured object; a 3D measurement processing unit, which is used to implement the measurement method.
According to the invention, the measuring platform is designed with a black coating, and the black color absorbs the projection of the structured light source. Therefore, when the structured light 3D scanning camera collects 3D point cloud data, the measuring platform is not detected, it does not interfere with the 3D point cloud data collection of the measured object, and no manual post-processing of the collected 3D point cloud data is needed, which makes the automatic measurement software possible.
Preferably, the 3D measurement processing unit comprises: a camera acquisition control module, used for controlling the scanning camera to acquire 3D point cloud data information of the measured object; a coarse registration processing module, used for roughly registering 3D point cloud data information; a fine registration processing module, used for finely registering 3D point cloud data information.
Preferably, the 3D measurement processing unit further comprises: a down sampling processing module, used for down sampling the 3D point cloud data information; and/or a filtering processing module, used for filtering the 3D point cloud data information.
Preferably, the scanning measurement system further comprises: a 3D scanning camera position adjustment module, used for adjusting the relative position of the 3D scanning camera and the measured object.
The 3D scanning camera position adjustment module designed by the invention is used to adjust the relative position between the 3D scanning camera and the measured object, so that the 3D scanning camera can obtain more scanning area or higher measurement accuracy, so as to meet the measurement requirements of the measured objects of different scales, deal with different kinds of measurement scenes, and improve the compatibility and platform ability of the automatic measurement software.
Preferably, the scanning measurement system further comprises: a calibration module, used for calibrating the structured light 3D scanning camera during initialization.
The calibration function of the present invention is used to automatically calculate the relative distance and angle between the 3D scanning camera and the central axis of the turntable after adjusting the position of the 3D scanning camera. Manual measurement is not needed, which improves the automation degree of the automatic measurement software.
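The axis-finding idea behind the calibration (intersecting the perpendicular bisectors of chords traced by a calibration target as the turntable rotates) amounts to a circumcenter computation in the turntable plane; a minimal sketch with illustrative coordinates:

```python
def circumcenter(a, b, c):
    """Center of the circle through three 2D points, i.e. the intersection
    of the perpendicular bisectors of the chords. When a, b, c are scanned
    positions of one calibration target at different turntable angles, this
    is the turntable axis point (projected into the turntable plane)."""
    ax, ay = a
    bx, by = b
    cx, cy = c
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return ux, uy

# Three positions of a target on a unit circle around (2, 3):
center = circumcenter((3.0, 3.0), (2.0, 4.0), (1.0, 3.0))
```

The line through this point perpendicular to the turntable surface is then the axis line used as the rotation reference for coarse registration.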
Preferably, the scanning measurement system further comprises: a turntable motion control module, used for driving the turntable to move and collect 3D point cloud data information of the measured object in different postures.
Preferably, the scanning measurement system puts the processing module of the 3D point cloud data information on the computer end, for uniformly processing the 3D point cloud data on the computer end.
In order to improve the universality of the measurement method, the present invention puts the filtering and registration processing in the 3D point cloud data processing stage on the computer end. Therefore, any brand and model of 3D scanning camera, regardless of whether the 3D scanning camera has its own processing module or not, can apply this measurement method. After obtaining the original data of the 3D scanning camera, the 3D point cloud data can be uniformly processed on the computer end. The compatibility of automatic measurement software is improved.
Preferably, the scanning measurement system performs 3D point cloud data processing based on the hardware foundation of OpenMP and CUDA.
When processing 3D point cloud data, if only CPU is used for processing, the efficiency is very low; if only GPU is used for processing, there will be additional memory request and release time. The invention designs a 3D point cloud data processing method based on the combination of OpenMP and CUDA. OpenMP is a multi-core CPU parallel computing technology, and CUDA is a GPU computing technology. This 3D point cloud data processing method can optimize the processing speed of point cloud as much as possible. When the amount of point cloud data is small, OpenMP is used for processing, the processing speed is about 3 times higher than that of CPU serial processing. When the amount of point cloud data is large, CUDA is used for processing, and the processing speed is about 10 times faster than that of CPU serial processing.
Therefore, the 3D point cloud data processing speed of automatic measurement software is accelerated.
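A sketch of the size-based dispatch described above, using a Python thread pool as a stand-in for the OpenMP path and omitting the CUDA path; the thresholds are illustrative, and real OpenMP/CUDA code would be written in C/C++.

```python
from concurrent.futures import ThreadPoolExecutor

SMALL = 10_000  # illustrative cutoff, not a value from the patent

def squared_norms_serial(points):
    """Baseline serial computation over the cloud."""
    return [x * x + y * y + z * z for x, y, z in points]

def squared_norms_parallel(points, workers=4):
    """Stand-in for the OpenMP path: split the cloud into chunks and map
    them over a thread pool; map() preserves chunk order."""
    step = max(1, len(points) // workers)
    chunks = [points[i:i + step] for i in range(0, len(points), step)]
    with ThreadPoolExecutor(max_workers=workers) as ex:
        out = []
        for part in ex.map(squared_norms_serial, chunks):
            out.extend(part)
    return out

def squared_norms(points):
    """Dispatch by cloud size, mirroring the serial/OpenMP/CUDA choice
    (a CUDA path for very large clouds is omitted here)."""
    if len(points) < SMALL:
        return squared_norms_serial(points)
    return squared_norms_parallel(points)

cloud = [(float(i), 0.0, 1.0) for i in range(20_000)]
```

The point of the dispatch is that small clouds avoid the fixed cost of spinning up parallel workers (or of GPU memory allocation), while large clouds amortize it.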
Preferably, in order to avoid the situation that the registration process cannot be automatically executed due to registration errors caused by user operation errors and the like, the present invention designs semi-automatic registration as error handling. In the semi-automatic registration mode, users can perform the automatic registration by selecting at least three groups of corresponding points, and the selected corresponding points can be automatically registered only if they are roughly in the same position, which improves the usability, robustness and reliability of the automatic measurement software.
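The "roughly in the same position" check on user-picked correspondences can be sketched as a rigidity test: a rigid motion preserves pairwise distances, so the distances among the picked points must agree between the two clouds. The tolerance is an illustrative value.

```python
import math

def pairs_consistent(src_pts, dst_pts, tol=0.05):
    """Check that at least three user-picked correspondences are mutually
    consistent: under a rigid transform, the distance between any two picked
    points must match in both clouds (tol is an illustrative threshold)."""
    if len(src_pts) < 3 or len(src_pts) != len(dst_pts):
        return False
    n = len(src_pts)
    for i in range(n):
        for j in range(i + 1, n):
            if abs(math.dist(src_pts[i], src_pts[j])
                   - math.dist(dst_pts[i], dst_pts[j])) > tol:
                return False
    return True

# A pure translation preserves all pairwise distances:
good = pairs_consistent([(0, 0, 0), (1, 0, 0), (0, 1, 0)],
                        [(1, 1, 0), (2, 1, 0), (1, 2, 0)])
# A mis-picked point breaks the distance pattern:
bad = pairs_consistent([(0, 0, 0), (1, 0, 0), (0, 1, 0)],
                       [(0, 0, 0), (5, 0, 0), (0, 1, 0)])
```

Only when this check passes would the semi-automatic mode hand the picked pairs to the automatic registration.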
Preferably, before the point cloud is matched with the CAD model of the measured target, it is necessary to use imalign to convert the point cloud into pif format. Besides the original point cloud data, the pif format point cloud data also contains the grid characteristic, which will facilitate the subsequent model matching operation, so that the point cloud can be automatically aligned with the model without manually selecting points. The automation degree of the device and the robustness of the automatic measurement software are improved.
Preferably, the present invention uses LabVIEW as a development tool, which allows rapid secondary development to match various 3D cameras and mechanical motion devices. The measurement module of the automatic measurement software in this device uses the PolyWorks measurement tool, which eliminates the development of measurement tools and ensures high measurement accuracy. At the same time, other measurement software or self-developed measurement software can also be chosen. The compatibility of the automatic measurement software is improved.
The embodiment of the invention has the following beneficial effects: the invention provides a measuring method for structured light 3D scanning, which optimizes the ICP algorithm by using the robust estimation principle, so that the registration is faster and more accurate and the overlapping rate can easily reach more than 30%, which effectively prevents registration errors and ensures the registration accuracy rate. The invention also provides a measuring system for structured light 3D scanning, which realizes the measurement method and can meet the measurement requirements of parts with complex shapes.
BRIEF DESCRIPTION OF THE FIGURES
FIG. 1 is a flow chart of a measuring method for structured light 3D scanning according to an embodiment of the present invention;
FIG. 2 is an overall working flow chart of the embodiment of the present invention;
FIG. 3 is a flow chart of configuring a scanning platform according to an embodiment of the present invention;
FIG. 4 is a calibration flow chart according to an embodiment of the present invention;
FIG. 5 is a flowchart of a calibration algorithm according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of a calibration piece according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of the axis point of the calibration piece according to an embodiment of the invention;
FIG. 8 is a flow chart of measuring new workpieces according to an embodiment of the present invention;
FIG. 9 is a flowchart of the measurement operation of the embodiment of the present invention;
FIG. 10 is a specific flowchart of a filtering processing method according to an embodiment of the present invention;
FIG. 11 is a specific flowchart of a down-sampling processing method according to an embodiment of the present invention;
FIG. 12 is a specific flow chart of a coarse registration processing method according to an embodiment of the present invention;
FIG. 13 is a specific flow chart of point cloud position transformation according to the embodiment of the present invention;
FIG. 14 is a rotation parameter diagram of point cloud position transformation according to an embodiment of the present invention;
FIG. 15 is a translation parameter diagram of point cloud position transformation according to an embodiment of the present invention;
FIG. 16 is a specific flowchart of a fine registration processing method according to an embodiment of the present invention;
FIG. 17 is a specific flowchart of a robust estimation algorithm according to an embodiment of the present invention;
FIG. 18 is a specific flowchart of a semi-automatic registration processing method according to an embodiment of the present invention;
FIG. 19 is a specific flow chart of measurement and report output according to an embodiment of the present invention;
FIG. 20 is a front view of a structured light 3D scanning measuring device according to an embodiment of the present invention;
FIG. 21 is a front view of a black coated scanning platform according to an embodiment of the present invention;
FIG. 22 is a schematic diagram of hardware connection for 3D scanning measurement of structured light according to an embodiment of the present invention.
DESCRIPTION OF THE INVENTION
In order to make the object, technical scheme and advantages of the present invention clearer, the present invention will be described in further detail below with reference to the drawings and examples. Obviously, the described examples are part of the embodiments of the present invention, but not all of them. It should be understood that the specific embodiments described here are only for explaining the present invention, but not for limiting the present invention. Based on the embodiment of the present invention, all other embodiments obtained by the skilled person without creative labor belong to the scope of the present invention.
In order to make the public have a better understanding of the present invention, in the following detailed description of the present invention, some specific details are described in detail. For those skilled in the art, the present invention can be fully understood without the description of these details.
Embodiment 1
FIG. 1 is a flow chart of a measuring method for structured light 3D scanning according to an embodiment of the present invention. The specific working flow includes: the overall working flow design, the configuration scanning platform flow design, the calibration operation flow design, the calibration algorithm flow, the new measurement workpiece flow design, the measurement operation flow design, the filtering flow design, the 3D point cloud coarse registration flow design, the point cloud position transformation flow, the down sampling algorithm flow design, the 3D image fine registration flow design, the robust estimation algorithm flow, the 3D point cloud semi-automatic registration and conversion format flow, and the measurement and report output flow.
1. As shown in FIG. 2, the overall workflow design is as follows:
Step 1: selecting the current scanning platform.
Step 2: initializing and calibrating.
Step 3: creating a new measurement project.
Step 4: measuring.
Step 5: outputting the measurement report.
2. As shown in FIG. 3, the flow design of configuring the scanning platform is as follows:
Step 1: selecting the 3D scanning camera model and the turntable model.
Step 2: configuring 3D scanning camera parameters and turntable parameters.
Step 3: saving the current measuring device configuration.
Step 4: calibrating the measuring device.
3. As shown in FIG. 4, the flow design of the calibration operation is as follows:
Step 1: adjusting the angles of the Y-axis hand module, the Z-axis hand module and the 3D scanning camera bracket to the positions meeting the measurement requirements.
Step 2: placing the calibration piece.
Step 3: starting the calibration procedure.
Step 4: calibrating the size of the 3D scanning camera according to the known size and surface of the calibration piece.
Step 5: finding out the bottom edge and axis of the known calibration piece through the program, and calculating the distance between the origin of the 3D scanning camera and the intersection point of the Z-axis of the camera and the axis line of the turntable,
and the included angle between the 3D scanning camera and the Z-axis hand module.
Step 6: outputting by the calibrating procedure the distance between the origin of the 3D scanning camera and the intersection point of the Z-axis of the camera and the axis line of the turntable, and the included angle between the 3D scanning camera and the Z-axis hand module.
Step 7: saving the parameters in the configuration for the subsequent measurement.
4. As shown in FIG. 5, the flow design of the calibration algorithm is as follows:
Step 1: filtering the collected 3D point cloud data to remove outliers around the bottom contour.
Step 2: calculating the minimum bounding box of the point cloud, using the oriented bounding box (OBB) based on PCA (principal component analysis), so that the bottom surface (the surface where the collected point cloud contacts the turntable) is one of the six surfaces of the OBB.
Step 3: down sampling the 3D point cloud data, and then respectively obtaining the normal vectors of the centroid positions of six surfaces of the OBB, and determining six camera positions by taking the normal vector direction as the camera viewing angle direction.
Step 4: removing hidden points of the point cloud from the perspectives of the six camera positions, respectively, to obtain six point cloud images; calculating the number of points in each point cloud image and the average distance to the corresponding surface. The top surface has the most points and the longest average distance, and the surface parallel to the top surface is the bottom surface.
Step 5: as shown in FIG. 6, calculating the plane equation of the bottom surface from the bottom surface vertices: z = ax + by + c. The bottom normal vector is N = (a, b, 1), and the unit vector in the direction of the OZ axis is z = (0, 0, 1). Getting the bottom normal vector from the plane equation of the bottom surface, and calculating the included angle α1 between the normal vector and the OZ axis:
cos α1 = (N · z) / (|N| · |z|) = 1 / √(a² + b² + 1)
Step 6: setting the positions of the six center points of the calibration piece as the points A, B, C, D, E and F. When the bottom of the calibration piece covers the axis, as shown in FIG. 7, assuming that the center points scanned for the first time are the three points A, B and C, and that the center points scanned after the turntable rotates 180° are D', E' and F', connecting A-D', B-E' and C-F' and taking the midpoints A', B' and C', and then taking the perpendicular bisectors of A'-B' and B'-C'. The intersection point O of the two perpendicular bisectors is the axis point, and the line through the axis point perpendicular to the turntable surface is the axis line.
Step 7: the intersection of the Z axis of the 3D scanning camera and the axis of the turntable is the new coordinate origin.
Step 8: outputting the new coordinate origin and the included angle α2.
5. As shown in FIG. 8, the flow design of measuring a new workpiece is as follows:
Step 1: designing the movement process of the turntable.
Step 2: designing the measurement template, the report template and the automatic measurement script.
Step 3: configuring the corresponding measurement template, the measured object
CAD model, the automatic measurement script input path, and other parameters, such as the down sampling configuration parameters, the point cloud saving path, the report saving path, etc.
Step 4: testing whether the running path and templates meet the requirements, if not, modifying the running path and the measurement template.
Step 5: performing the measurement operation to carry out continuous automatic measurement.
6. As shown in FIG. 9, the flow design of the measurement operation is as follows:
Step 1: reading configuration parameters, such as the motion path, the number of samples, etc.
Step 2: calling the 3D camera control tool module to control the 3D camera to automatically collect point cloud images.
Step 3: using Gigabit Ethernet to import the collected 3D point cloud data into the automatic control software.
Step 4: calling the filter tool to filter.
Step 5: down sampling according to configuration.
Step 6: storing the 3D point cloud data to the configuration path.
Step 7: using the turntable motion control module to control the turntable for 3D image acquisition of the next posture.
Step 8: coarsely registering the collected second 3D point cloud with the first one.
Step 9: repeating the above operation until the acquisition is completed.
Step 10: registering after data acquisition is completed.
7. As shown in FIG. 10, the flow design of the filtering process is as follows:
Step 1: filtering the point cloud by radius filtering.
Step 2: using the principal component analysis method, converting the solution of the normal vector of the point cloud in the neighborhood into solving the eigenvalues and eigenvectors of the covariance matrix of the points in the neighborhood, and calculating the filtering factors of bilateral filtering according to the consistently oriented normal vectors.
Step 3: calculating the mathematical expectation μ and the standard deviation σ of all points in the neighborhood by the statistical filtering method, calculating the threshold ε of statistical filtering, and using the threshold of statistical filtering and a threshold of 1-2 times the global average distance to constrain the two attribute parameters calculated by bilateral filtering. By limiting the size of the spatial domain, the characteristics of the point cloud structure are preserved, and the influence of bilateral filtering of isolated points on the point cloud in the neighborhood is reduced.
Step 4: outputting the filtered point cloud.
8. As shown in FIG. 11, the flow of the down sampling algorithm is as follows:
Step 1: rasterizing the point cloud in space, and using a KD-Tree to accelerate the search of the k-neighborhood of each point.
Step 2: calculating the normal vector of each point in the point cloud: for any point P in the point cloud, the plane fitted to all points in its k-neighborhood is the best-fitting plane. To ensure that the fitting plane of this point is the least-squares plane, the calculation principle is as follows:
Σ = Σ_{i=1}^{k} (p_i − p̄)(p_i − p̄)^T,
where p̄ is the centroid of the k-neighborhood.
Proposing the concept of local entropy by using the included angles between the calculated normal vector and the normal vectors in the neighborhood:
H(θ_1, …, θ_k) = −Σ_{i=1}^{k} P_{θ_i} · log P_{θ_i}, where P_{θ_i} = θ_i / Σ_{j=1}^{k} θ_j.
In the formula, P_{θ_i} is the probability assigned to the included angle θ_i between the normal vector at the center of gravity and that of the i-th neighboring point.
Step 3: classifying the point clouds in the grid by using the information entropy of the included angles of the normal vectors. Grids with smaller normal-vector information entropy are directly voxelized, while grids with larger normal-vector information entropy are preserved and simplified with a smaller voxel size, obtaining the down sampled point cloud.
9. As shown in FIG. 12, the flow design of 3D point cloud coarse registration is as follows:
Step 1: performing coordinate position transformation on the point cloud according to the calibration parameters, and transforming to the coordinates with the intersection of the axis of the turntable and the Z-axis of the 3D camera as the origin.
Step 2: calculating the rotation angle according to the included angle α2 between the camera Z axis and the Z-axis hand module output by the calibration program and the angle of each movement, and obtaining the rotated point cloud.
Step 3: outputting the rotated point cloud.
10. As shown in FIG. 13, the point cloud position transformation process is as follows:
Step 1, according to the right-hand rule, establishing a Cartesian coordinate system with the intersection of the Z axis of the camera and the axis of the turntable as the origin, where, the X-axis and Z-axis are located on the turntable plane.
Step 2: setting the scanned point cloud coordinate system as OT-XTYTZT, and the Cartesian coordinate system whose origin is the intersection of the turntable axis and the 3D camera centerline as O-XYZ. The calibrated distance to this coordinate origin, which takes the axis of the turntable and the centerline of the 3D camera as the intersection point, is l, the angle with the Z axis is α, and the current rotation angle of the turntable is β. The conversion parameters are three translation parameters Δx, Δy and Δz, and three rotation parameters εx, εy, εz, as shown in FIG. 14.
Step 3: firstly, moving the scanned coordinate origin to the coordinate origin with the axis of the turntable and the centerline of the 3D camera as the intersection point. Since the camera center point and the turntable center point are on the same straight line along the X axis in the turntable design, Δx = 0. As shown in FIG. 15, since l and α are known from calibration, Δy and Δz can be obtained:
Δy = l·sin α,
Δz = l·cos α.
Step 4: because the scanned point cloud is accompanied by the elevation angle α and the current rotation angle β, and because the object cannot rotate along the Y axis, εx = α, εy = 0, εz = β can be obtained.
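The parameter assignments of Steps 3 and 4, together with the rotation matrix they produce for εy = 0, can be sketched as follows (l, α and β here are illustrative values, not values from the patent):

```python
import numpy as np

def turntable_params(l, alpha, beta):
    """Translation (Δx, Δy, Δz) and rotation angles (εx, εy, εz) for the
    scanned cloud: Δx = 0, Δy = l·sin α, Δz = l·cos α, εx = α, εy = 0, εz = β."""
    translation = np.array([0.0, l * np.sin(alpha), l * np.cos(alpha)])
    angles = np.array([alpha, 0.0, beta])
    return translation, angles

def simplified_rotation(alpha, beta):
    """Rotation matrix R(ε) with εx = α, εy = 0, εz = β substituted in."""
    ca, sa, cb, sb = np.cos(alpha), np.sin(alpha), np.cos(beta), np.sin(beta)
    return np.array([[cb,   ca * sb, sa * sb],
                     [-sb,  ca * cb, sa * cb],
                     [0.0,  -sa,     ca]])

t, e = turntable_params(l=2.0, alpha=np.pi / 6, beta=np.pi / 3)
```

Checking that the simplified matrix is orthonormal with determinant 1 is a quick way to validate the derivation.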
According to the formula XT = Δ + R(ε)·X, XT is the three-dimensional coordinate vector in OT-XTYTZT, X is the three-dimensional coordinate vector in O-XYZ, Δ = (Δx, Δy, Δz)ᵀ is the translation vector, and R(ε) is the rotation matrix composed of the elementary rotations

R(εz) = | cos εz   sin εz   0 |
        | −sin εz  cos εz   0 |
        | 0        0        1 |

R(εy) = | cos εy   0   −sin εy |
        | 0        1   0       |
        | sin εy   0   cos εy  |

R(εx) = | 1   0         0       |
        | 0   cos εx    sin εx  |
        | 0   −sin εx   cos εx  |

with R(ε) = R(εz)·R(εy)·R(εx).
So there is:

R(ε) = | R11  R12  R13 |
       | R21  R22  R23 |
       | R31  R32  R33 |
In the formula:
R11 = cos εy·cos εz = cos β
R12 = cos εx·sin εz + sin εx·sin εy·cos εz = cos α·sin β
R13 = sin εx·sin εz − cos εx·sin εy·cos εz = sin α·sin β
R21 = −cos εy·sin εz = −sin β
R22 = cos εx·cos εz − sin εx·sin εy·sin εz = cos α·cos β
R23 = sin εx·cos εz + cos εx·sin εy·sin εz = sin α·cos β
R31 = sin εy = 0
R32 = −sin εx·cos εy = −sin α
R33 = cos εx·cos εy = cos α 11. As shown in FIG. 16, the flow design of 3D image fine registration is as follows:
Step 1: importing successively two adjacent point clouds.
Step 2: extracting the point cloud data in the overlapping contour of the adjacent point clouds after coarse registration.
Step 3: performing ISS feature point extraction on the overlapped two pieces of point cloud data.
Step 4: carrying out feature description on the extracted ISS feature points by using the FPFH value, and matching the extracted feature points to form feature point pairs.
Step 5: purifying the matched feature point pairs by using RANSAC algorithm, and eliminating the mismatched feature point pairs.
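Step 5's RANSAC purification can be sketched as follows: repeatedly sample three correspondences, estimate a rigid transform from them, and keep the largest consistent set. The inlier threshold and iteration count below are assumptions for illustration:

```python
import numpy as np

def rigid_from_pairs(P, Q):
    """Least-squares rigid transform (R, T) mapping P onto Q (Kabsch method)."""
    Pc, Qc = P.mean(axis=0), Q.mean(axis=0)
    U, _, Vt = np.linalg.svd((P - Pc).T @ (Q - Qc))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                      # fix a reflection solution
        R = Vt.T @ np.diag([1.0, 1.0, -1.0]) @ U.T
    return R, Qc - R @ Pc

def ransac_prune(P, Q, iters=200, thresh=0.05, seed=0):
    """Keep only correspondence pairs consistent with the best rigid model."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(P), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(P), size=3, replace=False)
        R, T = rigid_from_pairs(P[idx], Q[idx])
        inliers = np.linalg.norm(P @ R.T + T - Q, axis=1) < thresh
        if inliers.sum() > best.sum():
            best = inliers
    return best
```

The returned boolean mask marks the feature point pairs retained after elimination of mismatches.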
Step 6: using the robust estimation algorithm to register the feature point pairs and realizing the registration of point clouds.
Step 7: calculating the positional transformation relationship between the registered point cloud and the original point cloud.
Step 8: performing position transformation on the whole point cloud according to the position transformation relationship to obtain the spliced point cloud.
Step 9: taking the previous point cloud as a reference in turn, and registering the subsequent point clouds.
Step 10: outputting the registered point cloud data containing new location information. 12. As shown in FIG. 17, the robust estimation algorithm flow:
Step 1: calculating the objective function according to M-estimation. The principle of M-estimation is:
Σᵢ ρ(vᵢ) = min, i = 1, …, n.
Applying the principle of M-estimation to the feature points where the source point cloud matches the target point cloud, the objective function g(R, T) is as follows:
g(R, T) = Σᵢ ωᵢ·||R·Pᵢ + T − Qᵢ||², i = 1, …, n.
Where, R is the rotation matrix, an orthogonal matrix R = {R ∈ R^(3×3) | RᵀR = E, det(R) = 1}, and T is the translation vector, a three-dimensional column vector.
Step 2: calculating the weight:
The key of the weight selection iteration method lies in the selection of the ρ function and the weight function w. The weight factor ωᵢ and the equivalent weight are constructed from these two relational expressions, and then the iterative adjustment solution is carried out. The weight of each pair of matched feature points is determined according to the residual distance, as follows:
ωᵢ = w(||R·Pᵢ + T − Qᵢ||₂).
Where, if a good rigid transformation estimate is available, it can be brought in. If not, the initial value of R can be brought in as the identity matrix, and the initial value of T can be brought in as a zero vector. Let
P̄ = Σᵢ ωᵢ·Pᵢ / Σᵢ ωᵢ, Q̄ = Σᵢ ωᵢ·Qᵢ / Σᵢ ωᵢ,
where P̄ is the weighted center of point cloud P and Q̄ is the weighted center of point cloud Q. The translation vector T is replaced by an independent column vector u, that is,
T = Q̄ − R·P̄ + u, and the objective function changes as follows:
g(R, u) = Σᵢ ωᵢ·||R·(Pᵢ − P̄) + u − (Qᵢ − Q̄)||², i = 1, …, n.
Step 3: calculating a new rotation and translation matrix R*, T*. Since the sum of all ωᵢ·(Pᵢ − P̄) and the sum of all ωᵢ·(Qᵢ − Q̄) are zero vectors, their products with the vector u are still zero vectors, and the objective function can be expanded as:
Σᵢ ωᵢ·||Pᵢ − P̄||² + Σᵢ ωᵢ·||Qᵢ − Q̄||² + (Σᵢ ωᵢ)·||u||² − 2·trace(R·C),
where C can be expressed as:
C = Σᵢ ωᵢ·(Pᵢ − P̄)·(Qᵢ − Q̄)ᵀ.
Obviously, when u* = 0 and trace(R·C) is maximum, the whole objective function is minimized. Here, C can be decomposed by the method based on the singular value decomposition: according to C = U·Σ·Vᵀ, R* = V·Uᵀ can be found. In some special cases, a reflection matrix with det(V·Uᵀ) = −1 will appear; at this time, R* = V·diag(1, 1, −1)·Uᵀ. After R* is solved, according to u* = 0 and the value of R*, T* = Q̄ − R*·P̄ can be solved.
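The closed-form solve of Step 3 can be sketched directly from the derivation: weighted centers, SVD of C, the reflection check on det(V·Uᵀ), then T* = Q̄ − R*·P̄. An illustrative NumPy version:

```python
import numpy as np

def weighted_rigid_solve(P, Q, w):
    """Closed-form weighted solve: R* = V·Uᵀ from the SVD of
    C = Σ wi·(Pi − P̄)(Qi − Q̄)ᵀ, then T* = Q̄ − R*·P̄."""
    w = np.asarray(w, dtype=float)
    w = w / w.sum()
    Pbar = w @ P                      # weighted center of P
    Qbar = w @ Q                      # weighted center of Q
    C = (P - Pbar).T @ ((Q - Qbar) * w[:, None])
    U, _, Vt = np.linalg.svd(C)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # reflection case det(V·Uᵀ) = −1
        R = Vt.T @ np.diag([1.0, 1.0, -1.0]) @ U.T
    return R, Qbar - R @ Pbar
```

On noise-free correspondences this recovers the exact rotation and translation in one step; inside the iteration it is re-run each time the weights ωᵢ change.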
Step 4: calculating the objective function: calculating the objective function g(R*, T*) according to the calculated R* and T*, and judging whether g(R*, T*) − g(R, T) < ε. If not, repeating the above steps.
Step 5: calculating the robust model:
The registration algorithm of robust estimation of the present invention optimizes the ICP algorithm by using the principle of robust estimation, and uses the weight reduction factor to perform robust estimation, thus improving the robustness of the algorithm.
Huber robust model and IGG robust model are selected as robust models. (1) Huber Robust Model
The weight function is:
ρ(v) = { v²/2,           |v| ≤ c
       { c·|v| − c²/2,   |v| > c
The weight factors are:
w(v) = { 1,       |v| ≤ c
       { c/|v|,   |v| > c
In the formula, when the correction number |v| is less than or equal to c, the weight is 1, and Huber's weight function reduces to the most classical least squares algorithm. When the correction number |v| is greater than c, the larger the correction number, the smaller the weight. Generally, the value of c is between 1 and 3. (2) The weight function of the IGG robust model is:
w(v) = { 1,           |v| ≤ 1.5σ
       { 1.5σ/|v|,    1.5σ < |v| ≤ 2.5σ
       { 0,           |v| > 2.5σ
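The two weight functions can be sketched as follows; the Huber form and the IGG middle-band factor follow the common formulations and are assumptions where the text is ambiguous:

```python
import numpy as np

def huber_weight(v, c=1.5):
    """Huber: full weight up to c, then the weight decays as c/|v|."""
    v = np.abs(v)
    return np.where(v <= c, 1.0, c / np.maximum(v, 1e-12))

def igg_weight(v, sigma=1.0):
    """IGG: full weight, a down-weighted band, then outright rejection."""
    v = np.abs(v)
    w = np.where(v <= 1.5 * sigma, 1.0, 1.5 * sigma / np.maximum(v, 1e-12))
    return np.where(v > 2.5 * sigma, 0.0, w)
```

Either function can supply the per-pair weights ωᵢ in the iteration above; IGG differs from Huber mainly in discarding residuals beyond 2.5σ entirely.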
After the robust model is selected, the ICP algorithm is improved by the robust estimation method. 13. As shown in FIG. 18, the flow of semi-automatic registration and format conversion of 3D point clouds is as follows:
Step 1: importing the aligned point cloud into a registering tool for automatic registering.
Step 2: if automatic registering fails, prompting the user that automatic registering fails, and starting semi-automatic registration.
Step 3: selecting at least 3 pairs of feature points.
Step 4: storing the registered point cloud in pif format and saving it to the file storage path specified by the configuration. 14. As shown in FIG. 19, the flow of the measurement and report output is as follows:
Step 1: importing the measured object CAD file, measurement template and measurement script according to the path configured by the project.
Step 2: importing pif point cloud according to the point cloud data storage path configuration configured by the project.
Step 3: measuring and calculating against the CAD model according to the template and script, obtaining parameters such as the feature matching degree, the edge length, the radius of the circle and other parameters.
Step 4: outputting the report containing the above measurement results and color map information, and saving in the configured file storage path.
Embodiment 2
As shown in FIG. 20 and FIG. 21, a measuring system for structured light 3D scanning of this embodiment includes: a structured light 3D scanning camera, a scanning turntable coated in black, and a 3D measurement processing unit.
As shown in FIG. 20 and FIG. 21, a structured light 3D scanning measurement system of this embodiment specifically includes: a computer 1, an exchanger 2, a cooling fan 3, a power supply 4, a vertical hand-cranked module 5, a 3D scanning camera 6, a horizontal hand-cranked module 7, an intake fan 8, a turnplate 9 on a turntable, a motor 10, a servo controller 11; a structure light 3D scanning camera bracket 12, a black coated scanning turntable 13.
The hardware connection principle of this embodiment is shown in FIG. 22, specifically including:
The scanning platform 13 coated in black is the main structure, and is used for carrying the power supply 4, the structured light 3D scanning camera 6, the intake fan 8, the turnplate 9 on the turntable, the motor 10 and the servo controller 11; the black coating of the scanning turntable 13 is used to remove redundant 3D point cloud information other than the measured object during scanning; the power supply 4 is used for supplying power to the structured light 3D scanning camera 6, the intake fan 8, the cooling fan 3 and the servo controller 11; the structured light 3D scanning camera 6 is used for collecting 3D point cloud data information; the intake fan 8 and the cooling fan 3 together complete the cooling of the scanning turntable; the motor 10 is used for driving the turnplate 9 to move, and completing the 3D point cloud collection of each angle of the measured object; the turnplate 9 is installed on the surface of the scanning turntable, and is used for bearing the tested target; the servo controller 11 is used for receiving the motion control instruction sent by the computer 1, and controlling the motor 10 to make corresponding actions to realize the control of the turntable motion by the turntable motion control module; the computer 1 is used for measuring and processing 3D point cloud, including the following modules: a camera acquisition control module, a coarse registration processing module, a fine registration processing module, a down sampling processing module, a filtering processing module, a 3D scanning camera position adjustment module, a calibration module and a turntable motion control module; the exchanger 2 is used for the communication between the computer 1 and the structured light 3D scanning camera 6.
The horizontal hand-cranked module 7, the vertical hand-cranked module 5, and the structured light 3D scanning camera bracket 12 are jointly used to realize the 3D scanning camera position adjustment module. The horizontal hand-cranked module 7 is used for horizontal position adjustment, the vertical hand-cranked module 5 is used for vertical position adjustment, and the camera bracket 12 is used for pitch angle adjustment.
The above-mentioned specific embodiments have further explained the purpose, technical scheme and beneficial effects of the present invention in detail, so it is understood that the above-mentioned embodiments are only one of the specific embodiments of the present invention, and are not intended to limit the scope of protection of the present invention. Any modifications, equivalent substitutions, improvements, etc. made within the spirit and principle of the present invention should be included in the scope of protection of the present invention.

Claims (9)

CLAIMS
1. A measuring method for structured light 3D scanning, characterized by comprising the following steps: collecting 3D point cloud data information of the measured object in different postures; carrying out rotational coarse registration on 3D point cloud data information in adjacent postures; optimizing the robust estimation algorithm by using the robust estimation principle, registering the 3D point cloud data information by using the robust estimation algorithm, outputting the point cloud data containing new position information, and carrying out the fine registration on the point cloud data.
2. The scanning measurement method according to claim 1, characterized in that the robust estimation principle comprises: performing the robust estimation by using a robust estimation model, which includes a Huber robust estimation model and/or an IGG robust estimation model.
3. The scanning measurement method according to claim 1, characterized by further comprising: down sampling the classified 3D point cloud data information by using the voxel grid method; and/or filtering the 3D point cloud data information.
4. A measuring system for structured light 3D scanning, characterized by adopting the scanning measurement method according to any one of claims 1 - 3, comprising: a structured light 3D scanning camera, which is used to collect 3D point cloud information of the measured object; a scanning turntable coated in black, which is used to collect 3D point cloud data of the measured object from multiple angles, and the black coating is used to remove the point cloud information of the non-measured object; a 3D measurement processing unit, which is used to implement the measurement method.
5. The scanning measurement system according to claim 4, characterized in that the 3D measurement processing unit comprises: a camera acquisition control module, used for controlling the scanning camera to acquire 3D point cloud data information of the measured object; a coarse registration processing module, used for roughly registering 3D point cloud data information; a fine registration processing module, used for finely registering 3D point cloud data information.
6. The scanning measurement system according to claim 5, characterized in that the 3D measurement processing unit further comprises: a down sampling processing module, used for down sampling the 3D point cloud data information; and/or a filtering processing module, used for filtering the 3D point cloud data information.
7. The scanning measurement system according to claim 5, characterized in that the 3D measurement processing unit further comprises: a 3D scanning camera position adjustment module, used for adjusting the relative position of the 3D scanning camera and the measured object; and/or a calibration module, used for calibrating the structured light 3D scanning camera during initialization; and/or a turntable motion control module, used for driving the turntable to move and collect 3D point cloud data information of the measured object in different postures.
8. The scanning measurement system according to any one of claims 4 - 7, characterized by further comprising: a computer, used for loading the 3D measurement processing unit.
9. The scanning measurement system according to any one of claims 4 - 7, characterized by further comprising: hardware combining OpenMP and CUDA, used for running the 3D measurement processing unit.
LU503375A 2022-06-27 2023-01-19 Measuring method and system for structured light 3d scanning LU503375B1 (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210737040.9A CN115131208A (en) 2022-06-27 2022-06-27 Structured light 3D scanning measurement method and system

Publications (1)

Publication Number Publication Date
LU503375B1 true LU503375B1 (en) 2023-07-19

Family

ID=83379460

Family Applications (1)

Application Number Title Priority Date Filing Date
LU503375A LU503375B1 (en) 2022-06-27 2023-01-19 Measuring method and system for structured light 3d scanning

Country Status (2)

Country Link
CN (1) CN115131208A (en)
LU (1) LU503375B1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117372489B (en) * 2023-12-07 2024-03-12 武汉工程大学 Point cloud registration method and system for double-line structured light three-dimensional measurement

Also Published As

Publication number Publication date
CN115131208A (en) 2022-09-30


Legal Events

Date Code Title Description
FG Patent granted

Effective date: 20230719