CN115131208A - Structured light 3D scanning measurement method and system - Google Patents

Structured light 3D scanning measurement method and system

Info

Publication number
CN115131208A
CN115131208A (application CN202210737040.9A)
Authority
CN
China
Prior art keywords
point cloud
scanning
cloud data
splicing
data information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210737040.9A
Other languages
Chinese (zh)
Inventor
秦琴
曹龙
宋伟江
汪光宇
谷文军
屠子美
李文辰
刘宏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Polytechnic University
Original Assignee
Shanghai Polytechnic University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Polytechnic University filed Critical Shanghai Polytechnic University
Priority to CN202210737040.9A
Publication of CN115131208A
Priority to LU503375A (LU503375B1)
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/23: Clustering techniques
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00: Geometric image transformations in the plane of the image
    • G06T3/40: Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038: Image mosaicing, e.g. composing plane images from plane sub-images
    • G06T5/00: Image enhancement or restoration
    • G06T5/10: Image enhancement or restoration using non-spatial domain filtering
    • G06T7/00: Image analysis
    • G06T7/30: Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33: Image registration using feature-based methods
    • G06T7/35: Image registration using statistical methods
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10028: Range image; Depth image; 3D point clouds
    • G06T2207/20: Special algorithmic details
    • G06T2207/20016: Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/10: Image acquisition
    • G06V10/16: Image acquisition using multiple overlapping images; Image stitching
    • G06V10/70: Arrangements using pattern recognition or machine learning
    • G06V10/764: Recognition using classification, e.g. of video objects
    • G06V10/77: Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/7715: Feature extraction, e.g. by transforming the feature space, e.g. multi-dimensional scaling [MDS]; Mappings, e.g. subspace methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Computing Systems (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Probability & Statistics with Applications (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention relates to a structured light 3D scanning measurement method and system. The scanning measurement method comprises the following steps: collecting 3D point cloud data information of the measured object under different postures; carrying out rotary rough splicing on the 3D point cloud data information under adjacent postures; and optimizing the ICP algorithm with the robust estimation principle, registering the 3D point cloud data information with the resulting robust estimation algorithm, outputting point cloud data containing new position information, and accurately splicing the point cloud data. The scanning measurement system comprises: a structured light 3D scanning camera, a black-coated scanning turntable, and a 3D measurement processing unit. According to the structured light 3D scanning measurement method and system, optimizing the ICP algorithm with the robust estimation principle makes splicing faster and more accurate, effectively prevents splicing errors, and ensures splicing accuracy.

Description

Structured light 3D scanning measurement method and system
Technical Field
The invention relates to the technical field of measurement, in particular to a structured light 3D scanning measurement method and a system.
Background
With the development of manufacturing technology and non-standard automation technology, the requirements on the precision and efficiency of measuring the external dimensions of parts are ever higher. The traditional 2D detection method is time-consuming and labor-intensive for size detection and can hardly achieve high-precision detection of complex curved surfaces; its detection precision and efficiency scarcely meet the detection standards and requirements that modern industrial manufacturing places on various parts. Modern 3D digital detection technology is gradually replacing traditional detection technology, is ever more widely applied, and has become a core technology for detecting various high-precision complex parts.
The measuring methods of existing common 3D measuring devices need to perform splicing processing on the 3D point cloud data information. This splicing processing generally uses only the ICP (Iterative Closest Point) algorithm, which suffers from a high number of iterations and low splicing efficiency. In addition, to ensure splicing accuracy the overlapping rate is generally required to be greater than 30%, and splicing accuracy deteriorates seriously when the overlapping rate falls below 30%.
The prior art offers no better method that ensures splicing accuracy while also solving the conventional ICP algorithm's problems of high iteration count and low splicing efficiency.
Disclosure of Invention
The invention aims to solve the defects in the prior art.
In order to achieve the above object, in a first aspect, an embodiment of the present invention describes a structured light 3D scanning measurement method, including the following steps:
collecting 3D point cloud data information of the measured object under different postures;
carrying out rotary rough splicing on the 3D point cloud data information under the adjacent postures;
and optimizing the ICP algorithm with the robust estimation principle, registering the 3D point cloud data information with the resulting robust estimation algorithm, outputting point cloud data containing new position information, and accurately splicing the point cloud data.
The splicing algorithm combines rough splicing and accurate splicing. The rough splicing algorithm calculates the Euler angles of the collected 3D point cloud from the movement of the turntable, providing initial data for accurate splicing. Accurate splicing extracts the overlapping part of the coarsely spliced 3D point cloud data and performs robust-estimation alignment on the point cloud data in that overlapping part. Because the extracted overlapping point cloud data is already close to the position it needs to be aligned to, the robust-estimation alignment algorithm completes the alignment in only a few iterations, shortening its running time. The splicing algorithm ensures that the overlapping area reaches more than 30%, improves splicing accuracy, reduces the amount of point cloud data to be processed, and accelerates point cloud processing.
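As an illustration of the rotary rough-splicing step (a sketch, not the patent's exact implementation), the following assumes turntable-centered coordinates and a vertical Z rotation axis; all names are illustrative:

```python
import numpy as np

def coarse_splice(cloud, angle_deg, axis_origin):
    """Rotate a point cloud about the turntable axis (assumed vertical, Z)
    by the known turntable step angle, bringing adjacent poses into a
    common coordinate frame for rough splicing."""
    a = np.radians(angle_deg)
    # rotation matrix about the Z axis
    R = np.array([[np.cos(a), -np.sin(a), 0.0],
                  [np.sin(a),  np.cos(a), 0.0],
                  [0.0,        0.0,       1.0]])
    # translate to the axis origin, rotate, translate back
    return (cloud - axis_origin) @ R.T + axis_origin
```

Each newly acquired cloud would be rotated by the accumulated turntable angle into the frame of the first cloud before accurate splicing takes over.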
Preferably, the robust estimation is performed using robust models, including a Huber robust model and/or an IGG robust model.
The robust-estimation splicing algorithm optimizes the ICP algorithm with the robust estimation principle, performing robust estimation with weight-reduction factors to improve the robustness of the algorithm. The robust model is selected as a Huber robust model and/or an IGG robust model; once the robust model is selected, the ICP algorithm is improved with the robust estimation method.
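A minimal sketch of the weight-reduction factors such robust models use; the Huber tuning constant k = 1.345 and the IGG-III style thresholds k0, k1 are conventional illustrative choices, not values given in this patent:

```python
def huber_weight(v, k=1.345):
    """Huber weight: full weight for small residuals, k/|v| beyond the threshold."""
    v = abs(v)
    return 1.0 if v <= k else k / v

def igg_weight(v, k0=1.0, k1=3.0):
    """An IGG-III style weight (assumed form; the patent does not give one):
    full weight for small residuals, tapered weight in the transition band,
    zero weight for gross outliers."""
    v = abs(v)
    if v <= k0:
        return 1.0
    if v > k1:
        return 0.0
    return (k0 / v) * ((k1 - v) / (k1 - k0)) ** 2
```

Inside each ICP iteration, every correspondence residual would be assigned such a weight before solving for the rigid transform, so outlier pairs contribute little or nothing.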
Preferably, the scanning measurement method further includes: carrying out down-sampling processing on the classified 3D point cloud data information by using a voxel grid method; and/or filtering the 3D point cloud data information.
The point cloud down-sampling algorithm used by the invention classifies points in the point cloud by a threshold on the normal-vector included angle, uses a KD-Tree to accelerate the point cloud search, and applies a voxel grid method to down-sample the classified point cloud. The algorithm preserves the local features of the point cloud, shortens the down-sampling processing time as far as possible, and ensures the measurement precision and running speed of the automatic measurement software.
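The voxel-grid stage of this down-sampling scheme can be sketched as follows (one centroid per occupied voxel; the normal-angle classification and KD-Tree acceleration described above are omitted for brevity):

```python
import numpy as np

def voxel_downsample(points, voxel_size):
    """Voxel-grid down-sampling: all points falling into the same voxel
    are replaced by their centroid."""
    keys = np.floor(points / voxel_size).astype(np.int64)
    buckets = {}
    # group points by their integer voxel index
    for key, p in zip(map(tuple, keys), points):
        buckets.setdefault(key, []).append(p)
    # one representative (centroid) per occupied voxel
    return np.array([np.mean(b, axis=0) for b in buckets.values()])
```

In the scheme above, feature-rich regions would simply be processed with a smaller `voxel_size` than flat regions.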
The point cloud filtering algorithm used by the invention combines a radius filtering algorithm, a bilateral filtering algorithm and a statistical filtering algorithm, retaining the advantages of each: it filters noise points better while preserving more detailed features. The invention also provides visual adjustment of the filtering effect, which displays the effect intuitively and makes it convenient to tune each filtering parameter until a satisfactory result is reached, improving the usability of the automatic measurement software.
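Of the three combined filters, the statistical one is the simplest to sketch: discard points whose mean distance to their k nearest neighbors exceeds the mean plus a multiple of the standard deviation. Brute-force pairwise distances are used here for clarity, and the parameter names are illustrative:

```python
import numpy as np

def statistical_filter(points, k=8, std_ratio=1.0):
    """Statistical filtering: remove points whose mean distance to their
    k nearest neighbors exceeds mu + std_ratio * sigma."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    d.sort(axis=1)
    mean_knn = d[:, 1:k + 1].mean(axis=1)   # column 0 is the zero self-distance
    mu, sigma = mean_knn.mean(), mean_knn.std()
    return points[mean_knn <= mu + std_ratio * sigma]
```

A production version would use a KD-Tree for the neighbor search instead of the O(n^2) distance matrix.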
In a second aspect, an embodiment of the present invention describes a structured light 3D scanning measurement system, including:
the structured light 3D scanning camera is used for acquiring 3D point cloud information of the detected object;
the scanning rotary table is used for collecting 3D point cloud data of a measured object at multiple angles, and the black coating is used for removing point cloud information of a non-measured object;
a 3D measurement processing unit for implementing the measurement method.
The invention applies a black coating to the measuring platform; the coating absorbs the projection of the structured light source, so the platform itself is not detected when the structured light 3D scanning camera captures the 3D point cloud data. Acquisition of the measured object's 3D point cloud data is therefore not interfered with, the acquired data needs no manual post-processing, and fully automatic measurement software becomes possible.
Preferably, the 3D measurement processing unit includes: the camera acquisition control module is used for controlling the scanning camera to acquire 3D point cloud data information of the measured object; the rough splicing processing module is used for roughly splicing the 3D point cloud data information; and the accurate splicing processing module is used for accurately splicing the 3D point cloud data information.
Preferably, the 3D measurement processing unit further includes: the down-sampling processing module is used for performing down-sampling processing on the 3D point cloud data information; and/or the filtering processing module is used for carrying out filtering processing on the 3D point cloud data information.
Preferably, the scanning measurement system further comprises: and the 3D scanning camera position adjusting module is used for adjusting the relative position of the 3D scanning camera and the measured object.
The 3D scanning camera position adjusting module designed by the invention adjusts the relative position of the 3D scanning camera and the measured object, so that the camera can obtain a larger scanning area or higher measuring precision. This meets the measurement requirements of measured objects of different scales and of different types of measurement scenes, improving the compatibility and platform capability of the automatic measurement software.
Preferably, the scanning measurement system further comprises: and the calibration module is used for calibrating the structured light 3D scanning camera during initialization.
The calibration function of the invention automatically calculates the relative distance and angle between the 3D scanning camera and the central axis of the turntable after the camera position is adjusted. No manual measurement is needed, which raises the degree of automation of the automatic measurement software.
Preferably, the scanning measurement system further comprises: and the turntable motion control module is used for driving the turntable to move and acquiring the 3D point cloud data information of the measured object under different postures.
Preferably, the scanning measurement system places the processing module of the 3D point cloud data information at a computer end for uniformly processing the 3D point cloud data at the computer end.
In order to improve the universality of the measuring method, the filtering and splicing processing in the 3D point cloud data processing stage is put on a computer terminal. Therefore, the measuring method can be applied to 3D scanning cameras of any brands and models, no matter whether the 3D scanning cameras are provided with processing modules or not, and after the original data of the 3D scanning cameras are obtained, the 3D point cloud data are processed uniformly at a computer terminal. The compatibility of automatic measurement software is improved.
Preferably, the scanning measurement system is configured to perform 3D point cloud data processing based on a hardware basis combining OpenMP and CUDA.
When processing 3D point cloud data, using only the CPU is very inefficient, while using only the GPU incurs additional video-memory allocation and release time. The invention designs a 3D point cloud data processing method combining OpenMP and CUDA, where OpenMP is a multi-core CPU parallel computing technology and CUDA is a GPU computing technology. This method optimizes the point cloud processing speed as far as possible: when the point cloud data volume is small, OpenMP is used, about 3 times faster than serial CPU processing; when the data volume is large, CUDA is used, about 10 times faster than serial CPU processing. The 3D point cloud data processing speed of the automatic measurement software is thereby increased.
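The size-based dispatch can be illustrated in Python under stated assumptions: a hypothetical 100 000-point threshold, with a thread pool standing in for the parallel path (the patent's actual paths are C/C++ OpenMP and CUDA kernels):

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def process_cloud(points, op, size_threshold=100_000):
    """Dispatch sketch analogous to the OpenMP/CUDA split described above.
    `op` is any per-point array function; the threshold is illustrative."""
    if len(points) < size_threshold:
        return op(points)                      # small cloud: avoid parallel overhead
    chunks = np.array_split(points, 8)         # large cloud: process chunks in parallel
    with ThreadPoolExecutor() as pool:
        return np.concatenate(list(pool.map(op, chunks)))
```

`pool.map` preserves chunk order, so the concatenated result matches the serial path exactly.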
Preferably, to prevent cases where splicing errors caused by user operation mistakes or the like keep the splicing process from running automatically, semi-automatic splicing is designed as the error-handling path. In the semi-automatic splicing mode, the user triggers automatic splicing by selecting at least 3 groups of corresponding points; the selected corresponding points only need to be at approximately the same position for automatic splicing to proceed, improving the usability, robustness and reliability of the automatic measurement software.
Preferably, before the point cloud is matched with the CAD model of the measurement target, the point cloud is converted into the pif format using IMAlign. Besides the original point cloud data, pif-format point cloud data also contains grid features that facilitate the subsequent model-matching operation, so the point cloud needs no manual point selection and can be aligned with the model automatically. This raises the degree of automation of the device and the robustness of the automatic measurement software.
Preferably, LabVIEW is used as the development tool, allowing rapid secondary development to adapt to various 3D cameras and mechanical motion devices. The measurement module of the automatic measurement software calls the PolyWorks measurement tool, which avoids developing a measurement tool from scratch while ensuring high measurement precision; other measurement software, or self-developed measurement software, can also be selected. The compatibility of the automatic measurement software is improved.
The embodiments of the invention have the following beneficial effects: the invention provides a structured light 3D scanning measurement method that optimizes the ICP algorithm with the robust estimation principle, making splicing faster and more accurate, allowing the overlapping rate to easily reach more than 30%, effectively preventing splicing errors, and ensuring splicing accuracy. The invention also provides a structured light 3D scanning measurement system that implements the measurement method and can meet the measurement requirements of parts with complex shapes.
Drawings
Fig. 1 is a flowchart of a structured light 3D scanning measurement method according to an embodiment of the present invention;
FIG. 2 is an overall workflow diagram of an embodiment of the present invention;
FIG. 3 is a flowchart of an embodiment of a configuration scanning platform;
FIG. 4 is a calibration flow chart of an embodiment of the present invention;
FIG. 5 is a flow chart of a calibration algorithm according to an embodiment of the present invention;
FIG. 6 is a schematic illustration of a calibration piece according to an embodiment of the present invention;
FIG. 7 is a schematic view of the axis point of the calibration piece according to the embodiment of the present invention;
FIG. 8 is a flow chart of a measurement of a newly added workpiece according to an embodiment of the present invention;
FIG. 9 is a flow chart of a measurement run of an embodiment of the present invention;
FIG. 10 is a flowchart illustrating a filtering method according to an embodiment of the present invention;
FIG. 11 is a flowchart illustrating a down-sampling processing method according to an embodiment of the present invention;
FIG. 12 is a flowchart illustrating a rough splicing method according to an embodiment of the present invention;
FIG. 13 is a flowchart illustrating a point cloud location transformation according to an embodiment of the present invention;
FIG. 14 is a rotation parameter map of a point cloud location transformation according to an embodiment of the present invention;
FIG. 15 is a translation parameter map of a point cloud location transformation according to an embodiment of the present invention;
FIG. 16 is a detailed flowchart of a precise stitching processing method according to an embodiment of the present invention;
FIG. 17 is a detailed flow chart of a robust estimation algorithm according to an embodiment of the present invention;
FIG. 18 is a detailed flowchart of a semi-automatic stitching processing method according to an embodiment of the present invention;
FIG. 19 is a flowchart illustrating measurement and report output according to an embodiment of the present invention;
FIG. 20 is a front view of a structured light 3D scanning measurement device according to an embodiment of the present invention;
FIG. 21 is a front view of a black painted scanning platform according to an embodiment of the present invention;
FIG. 22 is a schematic diagram of a hardware connection for structured light 3D scanning measurement according to an embodiment of the present invention;
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without any creative effort, shall fall within the protection scope of the present invention.
In the following detailed description of the present invention, certain specific details are set forth in order to provide a better understanding of the present invention. It will be apparent to one skilled in the art that the present invention may be practiced without these specific details.
Example one
Fig. 1 is a flowchart of a structured light 3D scanning measurement method according to an embodiment of the present invention. The specific workflow includes: the overall working process design, the scanning platform configuration process, the calibration operation process design, the calibration algorithm process, the newly added measurement workpiece process design, the measurement operation process design, the filtering process design, the 3D point cloud rough splicing process design, the point cloud position transformation process design, the down-sampling algorithm process design, the 3D image accurate splicing process design, the robust estimation algorithm process design, the 3D point cloud semi-automatic splicing and format conversion process design, and the measurement and report output.
1. As shown in fig. 2, the overall workflow is designed as follows:
step 1: the current scanning platform is selected.
Step 2: and (5) initializing and calibrating.
And 3, step 3: and (5) newly building a measurement project.
And 4, starting measurement.
And 5, outputting a measurement report.
2. As shown in fig. 3, the process of configuring the scanning platform is designed as follows:
step 1: and selecting the model of the 3D scanning camera and the model of the turntable.
Step 2: and configuring 3D scanning camera parameters and turntable parameters.
And step 3: the current measurement device configuration is saved.
And 4, calibrating the measuring device.
3. As shown in fig. 4, the calibration operation flow is designed as follows:
Step 1: adjust the Y-axis hand-crank module, the Z-axis hand-crank module and the angle of the 3D scanning camera bracket to a position that meets the measurement requirements.
Step 2: place the standard piece.
Step 3: start the calibration program.
Step 4: dimensionally calibrate the 3D scanning camera based on the known dimensions and surface of the target.
Step 5: search for the bottom edge and the axis of the known standard piece through the program, and calculate the distance from the 3D scanning camera origin to the intersection of the camera Z axis with the turntable axis, together with the included angle between the 3D scanning camera and the Z-axis hand-crank module.
Step 6: the calibration program outputs the distance from the 3D scanning camera origin to the intersection of the camera Z axis with the turntable axis, together with the included angle between the 3D scanning camera and the Z-axis hand-crank module.
Step 7: store these parameters in the configuration for use in subsequent measurements.
4. As shown in fig. 5, the calibration algorithm flow is designed as follows:
step 1: and filtering the acquired 3D point cloud data to remove outliers around the bottom contour.
Step 2: calculating a point cloud minimum bounding box: a PCA (principal component analysis) -based orientation bounding box obb (oriented bounding box) was used. Such that the bottom surface (the side of the acquired point cloud in contact with the turret) is the single one of the sides of the directional bounding box 6.
And step 3: and (3) performing downsampling on the 3D point cloud data, then respectively solving normal vectors of mass center positions of six surfaces of the directional bounding box, and determining 6 camera positions by taking the normal vector direction as a camera visual angle direction.
And 4, step 4: and removing the point cloud hidden points from the visual angles of the 6 camera positions respectively to obtain 6 point cloud images. And calculating the point number of each point cloud image and the average distance from the point cloud image to the corresponding plane. The number of the points is the largest, the average distance is the farthest, the top surface is the other surface parallel to the top surface, and the bottom surface is the other surface parallel to the top surface.
And 5: as shown in fig. 6, the bottom plane equation is calculated with the bottom vertices. And Z is ax + by + c, the bottom surface normal vector is N (a, b,1), the OZ axis direction unit vector is Z (0,0,1), the bottom surface normal vector is obtained by a bottom surface plane equation, and an included angle alpha Z between the normal vector and the OZ axis is calculated:
Figure RE-GDA0003817150970000071
step 6: the positions of 6 circle centers of the calibration piece are shown in a graph A, B, C, D, E, F, and under the condition that the bottom surface of the calibration piece covers the axis, as shown in fig. 7, if the circle center scanned for the first time is A, B, C points, the circle centers scanned after the turntable rotates 180 degrees are D ', E ' and F ', the center points are connected with A-D ', B-E ' and C-F ', the center points A ', B ' and C ' are taken, the perpendicular bisectors of A ' -B ' and B ' -C ' are taken, the intersection point O of the two perpendicular bisectors is an axis point, and the perpendicular line between the axis point and the surface of the turntable is an axis line.
And 7: and the intersection point of the Z axis of the 3D scanning camera and the axis line of the turntable is the new coordinate origin.
And 8: and outputting a new coordinate origin and an included angle alpha z.
5. As shown in fig. 8, the new measured workpiece flow design is as follows:
step 1: designing the motion process of the rotary table.
And 2, step: designing a measurement template, a report template and an automatic measurement script.
And step 3: configuring corresponding measurement template, CAD model of the measured object, input path of automatic measurement script, etc. and other parameters, such as down-sampling configuration parameter, point cloud storing path, report storing path, etc.
And 4, step 4: and testing whether the operation path and the template meet the requirements, and if not, modifying the operation path and the measurement template.
And 5: and executing the measurement operation to perform continuous automatic measurement.
6. As shown in fig. 9, the measurement operation flow is designed as follows:
step 1: reading configuration parameters such as a motion path, the number of sampling sheets and the like.
Step 2: and calling a 3D camera control tool module to control the 3D camera to automatically acquire the point cloud image.
And step 3: importing the collected 3D point cloud data into automatic control software by using gigabit Ethernet
And 4, step 4: and calling a filtering tool to filter.
And 5: and (5) descending and mining according to the configuration.
Step 6: storing the 3D point cloud data to the configuration path.
And 7: the turntable is controlled using the turntable motion control module to perform 3D image acquisition for the next pose.
And 8: and roughly splicing the collected second 3D point cloud and the first point cloud.
And step 9: and repeating the operation until the collection is completed.
Step 10: splicing after data acquisition is finished
7. As shown in fig. 10, the filtering flow is designed as follows:
step 1: the point clouds are filtered using radius filtering.
Step 2: the solution of the normal line and the normal vector of the point cloud in the neighborhood can be converted into the eigenvalue and the eigenvector of the covariance matrix of the points in the neighborhood of the point by utilizing a principal component analysis method, and the filter factor of bilateral filtering is calculated according to the calculated homodromous normal vector.
And step 3: calculating mathematical expectation mu and standard deviation sigma of all points in the critical domain by using a statistical filtering method, calculating a threshold epsilon of the statistical filtering, constraining two attribute parameters of bilateral filtering calculation by using the threshold of the statistical filtering and a threshold of 1-2 times of global average distance, ensuring the characteristics of a point cloud structure by limiting the size of a spatial domain, and reducing the influence of isolated points in the critical domain on bilateral filtering of the point cloud.
And 4, step 4: and outputting the filtered point cloud.
8. As shown in fig. 11, the flow of the down-sampling algorithm is as follows:
step 1: and (4) carrying out spatial rasterization on the point cloud, and accelerating the search of the k adjacent domain of the point cloud by using the KD-Tree.
Step 2: and calculating a normal vector in the point cloud, and fitting all points in k adjacent domains of any point P in the point cloud into a plane as a best fitting plane. In order to ensure that the point fitting plane is a least square plane, the calculation principle is as follows:
min Σᵢ₌₁ᵏ (n·pᵢ − d)²,  subject to ‖n‖ = 1
where n is the unit normal of the fitted plane, d its offset, and pᵢ the points of the k-neighborhood (the original equation appears only as an image; this is the standard least-squares plane condition it describes).
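As Step 2 of the filtering flow notes, this least-squares plane is commonly solved by principal component analysis: the plane normal is the eigenvector of the neighborhood covariance matrix with the smallest eigenvalue. A minimal numpy sketch (`fit_plane_pca` is an illustrative helper name):

```python
import numpy as np

def fit_plane_pca(pts: np.ndarray):
    """Least-squares plane through `pts` via principal component analysis:
    the normal is the eigenvector of the covariance matrix with the
    smallest eigenvalue and the plane passes through the centroid --
    equivalent to minimising the sum of squared point-plane distances
    under the constraint ||n|| = 1."""
    centroid = pts.mean(axis=0)
    cov = np.cov((pts - centroid).T)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    normal = eigvecs[:, 0]                   # smallest-eigenvalue direction
    d = normal @ centroid                    # plane equation: n . x = d
    return normal, d
```

For points lying exactly in the z = 0 plane the recovered normal is ±(0, 0, 1) with offset 0.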
Using the included angle between the normal vector and the vectors in the neighborhood, the concept of local entropy is put forward:

E = −Σⱼ₌₁ᵏ Pθⱼ·ln Pθⱼ

wherein Pθₖ and Pθⱼ are:

Pθₖ = θₖ / Σⱼ₌₁ᵏ θⱼ,  Pθⱼ = θⱼ / Σⱼ₌₁ᵏ θⱼ

In the formula, Pθₖ and Pθⱼ are respectively the probability distributions of the barycentric normal-vector angles of the two points (the formulas, shown only as images in the original, are reconstructed here in their standard form).
Step 3: Classify the point clouds within the grids by the information entropy of the normal-vector included angles: grids with smaller normal-vector information entropy are voxelized directly, while grids with larger entropy are retained and the retained point clouds are simplified with smaller voxel grids.
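The local-entropy measure used for this classification can be sketched as follows, assuming (as above) that the included angles θⱼ are normalized into a probability distribution before the entropy is taken; the exact normalization in the patent is in an equation image.

```python
import numpy as np

def local_entropy(angles) -> float:
    """Information entropy of the normal-vector included angles in one
    grid cell: normalise the angles into a probability distribution
    P_j = theta_j / sum(theta) and return E = -sum(P_j * ln P_j).
    Cells with small E are near-planar and can be voxelised coarsely;
    cells with large E carry detail and are kept denser."""
    theta = np.asarray(angles, dtype=float)
    theta = theta[theta > 0]          # ignore zero angles (ln undefined)
    p = theta / theta.sum()           # probability distribution of angles
    return float(-(p * np.log(p)).sum())
```

A cell with k equal angles gives the maximum entropy ln(k); a cell dominated by one angle gives entropy near zero.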
9. As shown in fig. 12, the 3D point cloud rough stitching process is designed as follows:
step 1: and carrying out coordinate position transformation on the point cloud according to the calibration parameters, and converting the point cloud into coordinates taking the intersection point of the axis of the turntable and the Z axis of the 3D camera as an origin.
Step 2: included angle alpha between Z axis of camera and hand-operated module output according to calibration program z And calculating the rotation angle according to the angle of each movement to obtain the rotated point cloud.
And step 3: and outputting the rotated point cloud.
10. As shown in fig. 13, the point cloud position transformation process is as follows:
step 1: and establishing a Cartesian coordinate system with the intersection point of the Z axis of the camera and the axis of the turntable as an origin according to the right-hand rule, wherein the X axis and the Z axis are positioned on the plane of the turntable.
Step 2: setting the coordinate of the scanned point cloud as O T -X T Y T Z T And taking the intersection point of the axis of the turntable and the central line of the 3D camera as a coordinate origin and the coordinate with the corrected angle as O-XYZ, wherein the distance between the coordinate origin obtained by calibration and the coordinate origin with the intersection point of the axis of the turntable and the central line of the 3D camera as a coordinate origin is l, the included angle between the coordinate origin and the z axis is alpha, and the current rotating angle of the turntable is beta. The transformation parameters are respectively 3 translation parameters Δ x 、Δ y 、Δ z Three rotation parameters ε x 、ε y 、ε z As shown in fig. 14.
And step 3: firstly, the scanned coordinate origin is moved to a turntableThe axis and the central line of the 3D camera are the origin of coordinates of a cross point, and the axis of the central point of the camera and the axis of the central point of the turntable are on the same straight line, so that the angle delta is delta x As shown in fig. 15, since l and α are known in the calibration, 0 can be obtained: delta y 、Δ z
Δ y =l·sinα,
Δ z =l·cosα。
And 4, step 4: because the scanned point cloud is accompanied by an elevation angle alpha and a current rotation angle beta, and because the object cannot rotate along the Y axis, the following can be obtained: epsilon x =α、ε y =0、ε z =β
According to formula X T =Δ x +R(ε)X,X T Is a tri O T -X T Y T Z T X is a three-dimensional coordinate vector of O-XYZ, R (epsilon) is a rotation matrix, R (epsilon) ═ R (epsilon) x )R(ε y )R(ε z )
Figure RE-GDA0003817150970000091
Figure RE-GDA0003817150970000101
Figure RE-GDA0003817150970000102
Thus, there are:
Figure RE-GDA0003817150970000103
in the formula:
R 11 =cosε y cosε z =cosβ
R 12 =cosε x sinε z +sinε x sin ε y cos ε z =cosα·sinβ
R 13 =sinε x sinε z -cos ε x sinε y cosε z =sinα·sinβ
R 21 =-cosε y sinε z =-sinβ
R 22 =cosε x cosε z -sinε x sinε y sinε z =cosβ
R 23 =sinε x cosε z +cosε x sinε y sinε z =sinα·cosβ
R 31 =sinε y =0
R 32 =-sinε x cos ε y =-sinα·cosβ
Figure RE-GDA0003817150970000104
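With εx = α, εy = 0, εz = β, the composite rotation can be assembled directly from the symbolic element formulas above and checked numerically. The function names below are illustrative; the sketch evaluates each element with εy = 0 substituted.

```python
import numpy as np

def rough_splice_rotation(alpha: float, beta: float) -> np.ndarray:
    """Rotation matrix R(eps) of the rough-splicing transform, with
    eps_x = alpha (camera elevation), eps_y = 0, eps_z = beta (current
    turntable angle), assembled from the element formulas above."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    return np.array([
        [cb,      ca * sb,  sa * sb],
        [-sb,     ca * cb,  sa * cb],
        [0.0,    -sa,       ca     ],
    ])

def rough_splice(points: np.ndarray, alpha: float, beta: float,
                 delta: np.ndarray) -> np.ndarray:
    """Apply X_T = Delta + R(eps) X to every point (one point per row)."""
    return delta + points @ rough_splice_rotation(alpha, beta).T
```

Since R(ε) is a pure rotation, R·Rᵀ = I and det(R) = 1 hold for any α, β, which is a quick sanity check on the element formulas.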
11. As shown in fig. 16, the 3D image accurate splicing flow is designed as follows:
Step 1: Import two adjacent point clouds in succession.
Step 2: Extract the point cloud data within the overlapping contour of the adjacent point clouds after rough splicing.
Step 3: Extract ISS feature points from the two overlapping point cloud data sets.
Step 4: Describe the extracted ISS feature points with FPFH values and match them to form feature point pairs.
Step 5: Purify the matched feature point pairs with the RANSAC algorithm, rejecting mismatched pairs.
Step 6: Register the feature point pairs with the robust estimation algorithm to achieve point cloud registration.
Step 7: Calculate the position transformation relation between the registered point cloud and the original point cloud.
Step 8: Apply the position transformation to the whole point cloud to obtain the spliced point cloud.
Step 9: Taking each previous point cloud as the reference in turn, splice the subsequent point clouds.
Step 10: Output the spliced point cloud data containing the new position information.
12. As shown in fig. 17, the robust estimation algorithm flow is as follows:
Step 1: Calculate the objective function according to the M-estimation principle:

min Σᵢ ρ(vᵢ)

Applying the M-estimation principle to the feature points of the source point cloud and the target point cloud, the objective function g(R, T) is:

g(R, T) = Σᵢ ρ(‖R·Pᵢ + T − Qᵢ‖)

where R is an orthogonal rotation matrix, R ∈ {R ∈ ℝ³ˣ³ | RᵀR = E, det(R) = 1}, and T, the translation, is a 3-dimensional column vector (both formulas, shown only as images in the original, are given in their standard M-estimation form).
Step 2: calculating the weight:
the key of the weight selection iterative method is that rho number and
Figure RE-GDA00038171509700001117
selection of a function by
Figure RE-GDA0003817150970000113
And
Figure RE-GDA0003817150970000114
two relations are used to construct the weight factor omega and the equivalent weight
Figure RE-GDA0003817150970000115
And then carrying out iterative adjustment solving, wherein the weight of each pair of matched feature points is determined according to the residual error of the distance, and the formula is as follows:
ω i =ω(||R*P i +T-Q i || 2 )
wherein, if there is better rigid estimation, it can be brought in, if there is no initial RThe values are in unit matrix, and the initial value of T is brought in by zero vector. Is provided with
Figure RE-GDA0003817150970000116
Is the weighted center of the point cloud P,
Figure RE-GDA0003817150970000117
is the weighted center of point cloud Q. Replacing the translation vector T by an independent column vector u, i.e.
Figure RE-GDA0003817150970000118
Figure RE-GDA0003817150970000119
The objective function varies as:
Figure RE-GDA00038171509700001110
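A quick numerical check of the substitution above: after subtracting the weighted centroid, the weighted sum of the centred coordinates is the zero vector, which is what lets u be optimised independently of R.

```python
import numpy as np

# Numerical check of the derivation step above: after subtracting the
# weighted centroid, the weighted sum of the centred coordinates is the
# zero vector. Cloud and weights are arbitrary random values.
rng = np.random.default_rng(0)
P = rng.normal(size=(10, 3))           # arbitrary point cloud
w = rng.uniform(0.1, 1.0, size=10)     # arbitrary positive weights
w = w / w.sum()                        # normalise the weights
p_bar = w @ P                          # weighted centre of the cloud
Pc = P - p_bar                         # centred coordinates P_i'
print(np.allclose(w @ Pc, 0.0))        # -> True
```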
and step 3: a new rotation and translation matrix R, T is calculated. Due to all of
Figure RE-GDA00038171509700001111
And all of
Figure RE-GDA00038171509700001113
The sum is a zero vector, the product of them with the scalar u is still a zero vector, and the IGG weighting function formula can be simplified as:
Figure RE-GDA00038171509700001114
wherein C can be represented as:
Figure RE-GDA00038171509700001115
it is obvious that when u * When the total objective function is minimized, the whole objective function can be minimized, where C may be decomposed by a method based on singular value decomposition, and R is the singular of C decompositionValue according to C ═ U ∑ V T Can determine R * =VU T In special cases, a reflection matrix det (VU) occurs T ) When R is ═ 1 * =Vdiag(1,1,-1)U T Can solve R * According to u * 0 and resolution of R * Can solve the value of
Figure RE-GDA00038171509700001116
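The closed-form update of Step 3 — weighted demeaning, cross-covariance, SVD, reflection check — can be sketched in numpy. The function name is illustrative; the cross-covariance is taken as C = Σωᵢ·Pᵢ′Qᵢ′ᵀ so that R* = V·D·Uᵀ, with D absorbing the diag(1, 1, −1) reflection fix.

```python
import numpy as np

def weighted_rigid_solve(P: np.ndarray, Q: np.ndarray, w: np.ndarray):
    """One closed-form weighted rigid-alignment step (the Step 3 update):
    centre both clouds at their weighted centroids, form the weighted
    cross-covariance C = sum_i w_i * P_i' Q_i'^T, decompose C = U S V^T,
    set R* = V D U^T with D = diag(1, 1, det(V U^T)) to rule out a
    reflection, and recover T* from the centroids using u* = 0."""
    w = np.asarray(w, dtype=float)
    w = w / w.sum()
    p_bar = w @ P                          # weighted centre of P
    q_bar = w @ Q                          # weighted centre of Q
    Pc, Qc = P - p_bar, Q - q_bar          # centred coordinates
    C = (w[:, None] * Pc).T @ Qc           # weighted cross-covariance
    U, _, Vt = np.linalg.svd(C)
    V = Vt.T
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(V @ U.T))])
    R = V @ D @ U.T                        # optimal rotation
    T = q_bar - R @ p_bar                  # optimal translation (u* = 0)
    return R, T
```

With exact correspondences and uniform weights, the solver recovers the generating rotation and translation exactly, which makes it easy to unit-test.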
And 4, step 4: calculating an objective function:
according to the calculated R * And T * Calculating an objective function g (R) * ,T * ) Judgment of g (R) * ,T * )-g(R,T)<ε if the above steps are not repeated.
And 5: calculating an robust model:
the splicing algorithm for robust estimation optimizes the ICP algorithm by using the principle of robust estimation, performs robust estimation by using the weight reduction factor and improves the robustness of the algorithm. The selection of the robust model is Huber robust model and IGG robust model.
(1) Huber robust model
The ρ function is:

ρ(v) = v²/2, |v| ≤ c;  ρ(v) = c·|v| − c²/2, |v| > c

The weight factor is:

ω(v) = 1, |v| ≤ c;  ω(v) = c/|v|, |v| > c

When the correction |v| is within ±c, the Huber robust estimation reduces to the classical least-squares algorithm; when |v| is greater than c, the larger the correction, the smaller the weight. The value of c is generally between 1 and 3. (Both formulas, shown only as images in the original, are given in their standard Huber form.)
(2) The IGG robust model weight function is:

ω(v) = 1, |v| ≤ k₀;  ω(v) = (k₀/|v|)·((k₁ − |v|)/(k₁ − k₀))², k₀ < |v| ≤ k₁;  ω(v) = 0, |v| > k₁

(standard IGG-III form; the original formula appears only as an image).
after the robust model is selected, the ICP algorithm is improved by using a robust estimation method.
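Both weight functions can be sketched in numpy. The IGG expression is an assumed standard IGG-III form since the patent's formula appears only as an image, and the `c`, `k0`, `k1` defaults are illustrative.

```python
import numpy as np

def huber_weight(v, c: float = 1.5) -> np.ndarray:
    """Huber equivalent weight: 1 inside +/- c (plain least squares),
    c/|v| beyond it, so larger residuals receive smaller weights."""
    v = np.abs(np.asarray(v, dtype=float))
    return np.where(v <= c, 1.0, c / np.maximum(v, 1e-12))

def igg_weight(v, k0: float = 1.5, k1: float = 3.0) -> np.ndarray:
    """IGG-III-style equivalent weight (assumed form): full weight below
    k0, a smooth down-weighting zone between k0 and k1, and zero weight
    -- outright rejection -- beyond k1."""
    v = np.abs(np.asarray(v, dtype=float))
    vsafe = np.maximum(v, 1e-12)           # avoid division by zero at v = 0
    mid = (k0 / vsafe) * ((k1 - v) / (k1 - k0)) ** 2
    return np.select([v <= k0, v <= k1], [np.ones_like(v), mid], default=0.0)
```

Plugging either function in as the ω of the weighted objective yields the robust ICP variant described above; the IGG model's hard cutoff at k₁ rejects gross mismatches entirely, while Huber only down-weights them.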
13. As shown in fig. 18, the semi-automatic splicing and format conversion flow for the 3D point cloud is as follows:
Step 1: Import the aligned point clouds into the splicing tool for automatic splicing.
Step 2: If automatic splicing fails, prompt the user that automatic splicing has failed and start semi-automatic splicing.
Step 3: Select at least 3 pairs of feature points.
Step 4: Store the spliced point cloud in pif format under the file storage path specified by the configuration.
14. As shown in fig. 19, the measurement and report output flow is as follows:
step 1: and importing a CAD file, a measurement template and a measurement script of the object to be measured according to the path configured by the project.
Step 2: and importing the pif point cloud according to the point cloud data storage path configuration of the project configuration.
And 3, step 3: and (4) comparing the CAD model according to the template and the script to measure and calculate parameters such as feature matching degree, edge length, circle radius and the like.
And 4, step 4: and outputting a report containing the measurement result and the color map information, and storing the report into a configured file storage path.
Example two
As shown in fig. 20 and 21, the structured light 3D scanning measurement system of this embodiment includes: a structured light 3D scanning camera, a black-coated scanning turntable, and a 3D measurement processing unit.
As shown in fig. 20 and 21, the structured light 3D scanning measurement system of this embodiment specifically includes: computer 1, switch 2, cooling fan 3, power supply 4, vertical manual adjustment module 5, 3D scanning camera 6, horizontal manual adjustment module 7, air-intake fan 8, turntable 9 on the rotary table, motor 10, servo motor controller 11, structured light 3D scanning camera support 12, and black-coated scanning rotary table 13.
The hardware connection principle of this embodiment is shown in fig. 22, and specifically includes:
The black-coated scanning platform 13 is the main structure and carries the power supply 4, the structured light 3D scanning camera 6, the air-intake fan 8, the turntable 9, the motor 10 and the servo controller 11. The black coating of the scanning rotary table 13 removes redundant 3D point cloud information other than the detected object during scanning. The power supply 4 powers the structured light 3D scanning camera 6, the air-intake fan 8, the cooling fan 3 and the servo controller 11. The structured light 3D scanning camera 6 acquires 3D point cloud data information. The air-intake fan 8 and the cooling fan 3 together cool the scanning turntable. The motor 10 drives the turntable 9 to complete 3D point cloud acquisition at each angle of the measured object. The turntable 9 is arranged on the surface of the scanning rotary table and bears the measured target. The servo motor controller 11 receives motion control instructions sent by the computer 1 and controls the motor 10 accordingly, so that the turntable motion control module controls the turntable motion. The computer 1 performs 3D point cloud measurement and processing and comprises the following modules: camera acquisition control module, rough splicing processing module, accurate splicing processing module, down-sampling processing module, filtering processing module, 3D scanning camera position adjusting module, calibration module, and turntable motion control module. The switch 2 handles communication between the computer 1 and the structured light 3D scanning camera 6.
The horizontal manual adjustment module 7, the vertical manual adjustment module 5, and the structured light 3D scanning camera support 12 together implement the 3D scanning camera position adjusting module: the horizontal module 7 adjusts the horizontal position, the vertical module 5 adjusts the vertical position, and the camera support 12 adjusts the pitch angle.
The above embodiments are intended to further illustrate the objects, technical solutions and advantages of the present invention. It should be understood that they are only embodiments of the present invention and are not intended to limit its scope; any modifications, equivalent substitutions, improvements and the like made within the spirit and principle of the present invention shall be included in the scope of the present invention.

Claims (9)

1. A structured light 3D scanning measurement method is characterized by comprising the following steps:
collecting 3D point cloud data information of the measured object under different postures;
carrying out rotary rough splicing on the 3D point cloud data information under the adjacent postures;
and optimizing the registration algorithm by using the robust estimation principle, registering the 3D point cloud data information by using the robust estimation algorithm, outputting point cloud data containing the new position information, and accurately splicing the point cloud data.
2. The scanning measurement method according to claim 1, wherein the robust estimation principle comprises:
the robust estimation is performed using robust models, including Huber robust models, and/or IGG robust models.
3. The scan measurement method of claim 1, further comprising:
carrying out down-sampling processing on the classified 3D point cloud data information by using a voxel grid method;
and/or filtering the 3D point cloud data information.
4. A structured light 3D scanning measurement system, wherein the scanning measurement method of any one of claims 1 to 3 is adopted, comprising:
the structured light 3D scanning camera is used for acquiring 3D point cloud information of the detected object;
the scanning rotary table, with a black coating, used for collecting 3D point cloud data of the measured object at multiple angles, the black coating removing point cloud information that does not belong to the measured object;
and the 3D measurement processing unit is used for implementing the scanning measurement method.
5. The scanning measurement system of claim 4, wherein the 3D measurement processing unit comprises:
the camera acquisition control module is used for controlling the scanning camera to acquire 3D point cloud data information of the measured object;
the rough splicing processing module is used for roughly splicing the 3D point cloud data information;
and the accurate splicing processing module is used for accurately splicing the 3D point cloud data information.
6. The scanning measurement system of claim 5, wherein the 3D measurement processing unit further comprises:
the down-sampling processing module is used for carrying out down-sampling processing on the 3D point cloud data information;
and/or the filtering processing module is used for carrying out filtering processing on the 3D point cloud data information.
7. The scanning measurement system of claim 5, wherein the 3D measurement processing unit further comprises:
and the 3D scanning camera position adjusting module is used for adjusting the relative position of the 3D scanning camera and the measured object.
And/or the calibration module is used for calibrating the structured light 3D scanning camera during initialization.
And/or the rotary table motion control module is used for driving the rotary table to move so as to realize the conversion of different collected postures of the object to be measured.
8. The scanning measurement system of any of claims 4 to 7, further comprising:
a computer for loading the 3D measurement processing unit.
9. The scanning measurement system of any of claims 4 to 7, further comprising:
hardware combining OpenMP and CUDA is used for operating the 3D measurement processing unit.
CN202210737040.9A 2022-06-27 2022-06-27 Structured light 3D scanning measurement method and system Pending CN115131208A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202210737040.9A CN115131208A (en) 2022-06-27 2022-06-27 Structured light 3D scanning measurement method and system
LU503375A LU503375B1 (en) 2022-06-27 2023-01-19 Measuring method and system for structured light 3d scanning


Publications (1)

Publication Number Publication Date
CN115131208A true CN115131208A (en) 2022-09-30

Family

ID=83379460

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210737040.9A Pending CN115131208A (en) 2022-06-27 2022-06-27 Structured light 3D scanning measurement method and system

Country Status (2)

Country Link
CN (1) CN115131208A (en)
LU (1) LU503375B1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117372489A (en) * 2023-12-07 2024-01-09 武汉工程大学 Point cloud registration method and system for double-line structured light three-dimensional measurement
CN117372489B (en) * 2023-12-07 2024-03-12 武汉工程大学 Point cloud registration method and system for double-line structured light three-dimensional measurement

Also Published As

Publication number Publication date
LU503375B1 (en) 2023-07-19


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination