CN114627275B - Whole machine measurement point cloud fusion method based on multi-source heterogeneous data - Google Patents

Whole machine measurement point cloud fusion method based on multi-source heterogeneous data

Info

Publication number
CN114627275B
Authority
CN
China
Prior art keywords
point cloud
straight line
point
edge
laser
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210315036.3A
Other languages
Chinese (zh)
Other versions
CN114627275A (en)
Inventor
汪俊
单忠德
李超
李子宽
张凯钧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Aeronautics and Astronautics
Original Assignee
Nanjing University of Aeronautics and Astronautics
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Aeronautics and Astronautics
Priority to CN202210315036.3A
Publication of CN114627275A
Application granted
Publication of CN114627275B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
        • G06T19/00: Manipulating 3D models or images for computer graphics
            • G06T19/20: Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
        • G06T5/00: Image enhancement or restoration
            • G06T5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
        • G06T7/00: Image analysis
            • G06T7/10: Segmentation; Edge detection
                • G06T7/13: Edge detection
            • G06T7/30: Determination of transform parameters for the alignment of images, i.e. image registration
                • G06T7/33: Determination of transform parameters for the alignment of images using feature-based methods
        • G06T2200/00: Indexing scheme for image data processing or generation, in general
            • G06T2200/04: Indexing scheme involving 3D image data
        • G06T2207/00: Indexing scheme for image analysis or image enhancement
            • G06T2207/10: Image acquisition modality
                • G06T2207/10028: Range image; Depth image; 3D point clouds
            • G06T2207/20: Special algorithmic details
                • G06T2207/20212: Image combination
                    • G06T2207/20221: Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Computer Graphics (AREA)
  • Architecture (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention discloses a whole machine measurement point cloud fusion method based on multi-source heterogeneous data, which comprises the following steps: extracting same-name (homonymous) edge straight-line segments from the whole machine images based on gray-level similarity, and reconstructing the three-dimensional point cloud corresponding to the whole machine images; extracting the edge straight lines in the whole machine laser three-dimensional point cloud from the variation of the point cloud normal vectors; extracting the straight-line intersection points in the reconstructed three-dimensional point cloud and in the whole machine laser point cloud respectively, and judging the similarity of the two sets of intersection points to complete the coarse matching of the two point clouds; and finely matching the reconstructed three-dimensional point cloud with the whole machine laser three-dimensional point cloud under scale change. By registering the point clouds with the straight-line features in the scene, the invention achieves automatic fusion and high-precision registration of the two heterogeneous data sources, whole machine images and laser point cloud, fuses the characteristics of the two kinds of heterogeneous data, alleviates the point cloud hole problem of registration with laser point cloud data alone, and ensures the completeness of the whole machine measurement point cloud data.

Description

Whole machine measurement point cloud fusion method based on multi-source heterogeneous data
Technical Field
The invention relates to the technical field of automatic three-dimensional data fusion, and in particular to a whole machine measurement point cloud fusion method based on multi-source heterogeneous data.
Background
In recent years, with the development and popularization of laser radar (LiDAR) and three-dimensional scanning technology, these techniques have been widely applied in surveying and mapping, power line inspection, digital cities, historic building preservation, military equipment measurement, digital twins and other fields. However, the fixed-station working mode of three-dimensional laser scanning makes data loss in the laser point cloud difficult to avoid; the resulting point cloud holes adversely affect subsequent data processing. Image data, by contrast, offers continuous features and rich texture information. Fusing the two kinds of data exploits their respective advantages, yields point clouds with richer information, and effectively fills the holes in the laser point cloud; such fusion has broad application in automatic generation of digital elevation models, city modeling, target recognition and other fields. However, the laser point cloud and the optical image have different geometric reference frames and cannot be aligned directly and accurately. To fuse and apply the two effectively, the geometric registration problem between them must first be solved.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a whole machine measurement point cloud fusion method based on multi-source heterogeneous data, which solves the problems that data are missing from the laser point cloud and that whole machine image data and the laser point cloud cannot be directly and accurately geometrically registered.
To achieve this purpose, the invention adopts the following technical scheme: a whole machine measurement point cloud fusion method based on multi-source heterogeneous data, comprising the following steps:
S1, acquiring images of the whole airplane with a camera, and acquiring the laser three-dimensional point cloud of the whole airplane with a scanner;
S2, extracting same-name (homonymous) edge straight-line segments of the whole machine images from the acquired images by straight-line matching based on gray-level similarity, and reconstructing the three-dimensional point cloud corresponding to the whole machine images from these segments with a structure-from-motion (SfM) point cloud reconstruction method;
S3, converting the acquired whole airplane laser three-dimensional point cloud into a depth image, calculating the normal vector of each point in the depth image, extracting edge points from the variation of the normal vectors, extracting edge straight lines from the edge points, and converting the depth image back into the whole machine laser point cloud;
S4, extracting the straight-line intersection points in the reconstructed three-dimensional point cloud and in the whole machine laser point cloud respectively, judging the similarity of the two types of intersection points, matching the intersection points when the similarity judgment condition is satisfied, and calculating the initial conversion parameters;
and S5, for each point in the reconstructed three-dimensional point cloud, searching for the closest point in the whole machine laser point cloud, feeding the initial conversion parameters into an ICP (Iterative Closest Point) method, and iterating the conversion parameters until the value of the conversion function reaches its minimum, at which time each point of the reconstructed three-dimensional point cloud has an accurately matched point in the laser point cloud; the optimal conversion parameters are then output.
Further, step S2 comprises the following sub-steps:
step S201, extracting continuous edge points in the whole machine images with the Canny edge detection algorithm, detecting the edge straight lines formed by these edge points in combination with the Hough transform, eliminating excessively thick edge straight lines by setting a high threshold, and eliminating excessively thin edge straight lines by setting a low threshold;
step S202, selecting two images from the whole machine images processed in step S201 to form a left-right image pair; for each edge straight line extracted from the left image, computing 11 equally spaced points on the line, including its start point and end point, and finding the 11 epipolar lines of these points in the right image; computing the intersection points of each candidate edge straight line in the right image with the 11 epipolar lines, computing the gray-level correlation coefficient between each intersection point and the corresponding equally spaced point in the left image, and taking the mean of these coefficients as the correlation coefficient between the right-image edge line and the left-image edge line; when this correlation coefficient is the largest among candidates and greater than a threshold, the right-image edge straight line is the same-name line of the left-image edge straight line;
and step S203, reconstructing the three-dimensional point cloud corresponding to the whole machine images from the same-name edge straight-line segments with the structure-from-motion (SfM) point cloud reconstruction method.
Further, the three-dimensional point cloud reconstructed in step S203 is densified with the PMVS (patch-based multi-view stereo) algorithm.
Further, step S3 comprises the following sub-steps:
step S301, converting the acquired whole airplane laser three-dimensional point cloud into a depth image, and calculating the gray value of the pixel corresponding to each scanning point of the point cloud in the depth image;
step S302, calculating the normal vector of each scanning point from the pixel neighbourhood relations in the depth image; when the angle between the normal vector of a scanning point and that of an adjacent scanning point exceeds 40 degrees, the scanning point is taken as an extracted edge point;
and step S303, after the edge straight lines in the depth image are extracted from the edge points by Hough transform, converting the depth image back into the whole machine laser point cloud.
Further, the calculation process of the gray value specifically includes:
Depth(i) = C · (S_i − S_min) / (S_max − S_min)
where i indexes the scanning points of the whole airplane laser three-dimensional point cloud, S_i is the distance from the i-th scanning point to the scanner center, S_max and S_min are respectively the maximum and minimum distances from a scanning point to the scanner center, C is a constant, and Depth(i) is the gray value assigned to the i-th scanning point.
Further, step S4 comprises the following sub-steps:
step S401, for the same-name (homonymous) edge straight-line segments in the reconstructed three-dimensional point cloud, sequentially selecting two segments whose intersection has not yet been computed; if the common perpendicular between them is shorter than 5 mm, its midpoint is regarded as the straight-line intersection point of the two segments; this continues until the straight-line intersections of all same-name edge straight-line segments have been computed;
step S402, for the edge straight lines in the whole machine laser point cloud, sequentially selecting two edge straight lines whose intersection has not yet been computed; if the common perpendicular between them is shorter than 5 mm, its midpoint is regarded as the straight-line intersection point of the two edge straight lines; this continues until the straight-line intersections of all edge straight lines have been computed;
step S403, arbitrarily selecting three intersection points P0, P1, P2 from the intersections of step S401 as the points to be matched, forming the triangle ΔP0P1P2; traversing the intersections of step S402 to select three points Q0, Q1, Q2 forming the triangle ΔQ0Q1Q2, with Q0 corresponding to P0, Q1 to P1 and Q2 to P2 by default;
step S404, judging the similarity of the triangles ΔP0P1P2 and ΔQ0Q1Q2; when the similarity judgment condition is satisfied, Q0 matches P0, Q1 matches P1 and Q2 matches P2, and the initial conversion parameters are calculated; otherwise, step S403 is repeated.
Further, the similarity determination condition is specifically:
(1) the scale ratios S0, S1 and S2 all lie in the interval (0.8, 1.2), the scale ratios being calculated as:
S0 = L(Q0Q1) / L(P0P1)
S1 = L(Q1Q2) / L(P1P2)
S2 = L(Q2Q0) / L(P2P0)
where L(Q0Q1) is the distance between Q0 and Q1, L(P0P1) is the distance between P0 and P1, and likewise for the remaining point pairs;
(2) the difference between ∠Q0Q1Q2 and ∠P0P1P2 lies in the interval (−5.0, 5.0) degrees.
Further, the initial conversion parameters include: the rotation matrix R_o, the translation vector t_o and the scale factor s_o.
Further, the output process of the optimal conversion parameter in step S5 is:
(s*, R*, t*) = argmin over (s, R, t) of Σ_{(i,j)∈C} ‖ b_j − (s·R·a_i + t) ‖²
where (i, j) ∈ C denotes that the i-th straight-line intersection in the reconstructed three-dimensional point cloud is matched with the j-th straight-line intersection in the whole machine laser point cloud, b_j is the j-th straight-line intersection in the whole machine laser point cloud, a_i is the i-th straight-line intersection in the reconstructed three-dimensional point cloud, s is the iterated scale factor, R is the iterated rotation matrix, and t is the iterated translation vector.
Compared with the prior art, the invention has the following beneficial effects: the disclosed whole machine measurement point cloud fusion method registers point clouds with the straight-line features in the scene, automatically fuses and precisely registers the heterogeneous whole machine image and laser point cloud data, and fuses the characteristics of the two kinds of heterogeneous data; its registration precision is high and its registration result reliable and stable, so that stereoscopic vision reconstruction can assist three-dimensional laser scanning in obtaining a relatively more complete scene point cloud.
Drawings
FIG. 1 is a flow chart of the whole machine measurement point cloud automatic fusion method based on multi-source heterogeneous data;
FIG. 2 is the three-dimensional point cloud image corresponding to the reconstructed whole machine images in the present invention;
FIG. 3 is a diagram illustrating the variation of normal vectors in the present invention;
FIG. 4 is the whole machine laser point cloud image after edge straight-line extraction in the present invention;
FIG. 5 is the fine matching result of the whole machine measurement point cloud fusion method based on multi-source heterogeneous data.
Detailed Description
The technical scheme of the invention is explained in further detail below with reference to the drawings.
Fig. 1 is a flow chart of the whole machine measurement point cloud automatic fusion method based on multi-source heterogeneous data; the fusion method comprises the following steps:
S1, acquiring images of the whole airplane with a camera, and acquiring the laser three-dimensional point cloud of the whole airplane with a scanner;
S2, extracting same-name (homonymous) edge straight-line segments of the whole machine images from the acquired images by straight-line matching based on gray-level similarity, and reconstructing the three-dimensional point cloud corresponding to the whole machine images from these segments with a structure-from-motion (SfM) point cloud reconstruction method; specifically, the method comprises the following substeps:
Step S201, continuous edge points in the whole machine images are extracted through the Canny edge detection algorithm, the edge straight lines formed by these edge points are detected in combination with the Hough transform, excessively thick edge straight lines are removed by setting a high threshold, and excessively thin edge straight lines are removed by setting a low threshold; here the high threshold is set to 100 and the low threshold to 20.
Step S202, two images are randomly selected from the whole machine image processed in the step S201 to form a left-right image pair, 11 equally-divided points on the edge straight line including a starting point and an end point are calculated for each edge straight line extracted from the left image in the left-right image pair, 11 polar lines of the 11 equally-divided points on the right image in the left-right image pair are found, intersection points of the edge straight line in the right image and the 11 polar lines are worked out, gray scale correlation coefficient values of the intersection points and the corresponding equally-divided points of the left image pair are worked out, the average value of the gray scale correlation coefficient values is used as a correlation coefficient between the edge straight line extracted from the right image and the edge straight line extracted from the left image, and when the correlation coefficient is the largest and is larger than a threshold value, the edge straight line extracted from the right image is the same-name straight line of the edge straight line extracted from the left image;
and Step S203, the three-dimensional point cloud corresponding to the whole machine images is reconstructed from the same-name edge straight-line segments with the structure-from-motion (SfM) point cloud reconstruction method.
In order to meet the requirement of subsequent fusion with the whole machine laser three-dimensional point cloud, the three-dimensional point cloud reconstructed in step S203 is densified with the PMVS algorithm. Fig. 2 shows the three-dimensional point cloud corresponding to the reconstructed whole machine images; it can be seen that the reconstructed point cloud has obvious straight-line features, which are used for the subsequent point cloud registration. When the whole machine images are converted into the three-dimensional point cloud, the edge straight lines extracted from the images are forcibly reconstructed, so that the reconstructed point cloud contains obvious straight-line features in addition to being dense. A sketch of the edge straight-line extraction of step S201 is given below.
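As an illustration of step S201 only, the edge straight-line extraction can be prototyped with OpenCV. The sketch below is a minimal, non-authoritative example: cv2 stands in for the patent's unspecified implementation, the thick/thin rejection is realized here as Canny hysteresis thresholds (an assumption) with the embodiment's values 100 and 20, and the Hough parameters and function names are illustrative.

import cv2
import numpy as np

def extract_edge_lines(image_path, low_thresh=20, high_thresh=100):
    # Continuous edge points; the high/low thresholds correspond to the
    # thick/thin edge rejection described in step S201.
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    edges = cv2.Canny(gray, low_thresh, high_thresh)
    # Probabilistic Hough transform groups the edge points into straight
    # segments; these parameter values are assumed, not from the patent.
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                            minLineLength=50, maxLineGap=5)
    return [] if lines is None else [tuple(l[0]) for l in lines]  # (x1, y1, x2, y2)

The same-name line matching of step S202 (epipolar lines plus gray-level correlation) builds on these segments and additionally requires the relative orientation of the image pair, which is omitted here.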
S3, converting the obtained laser three-dimensional point cloud of the complete airplane into a depth image, calculating a normal vector of each point in the depth image, extracting edge points according to the change of the normal vector, extracting edge straight lines from the edge points, and converting the depth image into the laser point cloud of the complete airplane; specifically, the method comprises the following substeps:
Step S301, the acquired whole airplane laser three-dimensional point cloud is converted into a depth image, and the gray value of the pixel corresponding to each scanning point of the point cloud is calculated in the depth image;
the calculation process of the gray value in the invention specifically comprises the following steps:
Depth(i) = C · (S_i − S_min) / (S_max − S_min)
where i indexes the scanning points of the whole airplane laser three-dimensional point cloud, S_i is the distance from the i-th scanning point to the scanner center, S_max and S_min are respectively the maximum and minimum distances from a scanning point to the scanner center, C is a constant, and Depth(i) is the gray value assigned to the i-th scanning point.
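In code, this mapping is a one-line normalization. The sketch below assumes C = 255 so that the result fits an 8-bit gray image; the patent leaves C as an unspecified constant, and all names are illustrative.

import numpy as np

def ranges_to_gray(distances, C=255.0):
    # distances: (N,) array of scanner-center-to-point ranges S_i.
    s_min, s_max = distances.min(), distances.max()
    # Depth(i) = C * (S_i - S_min) / (S_max - S_min)
    return C * (distances - s_min) / (s_max - s_min)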
Step S302, the normal vector of each scanning point is calculated from the pixel neighbourhood relations in the depth image; when the angle between the normal vector of a scanning point and that of an adjacent scanning point exceeds 40 degrees, the scanning point is taken as an extracted edge point. Fig. 3 gives an example of the normal vector variation: the normal vectors of points on the top surface, such as f1 and f2, theoretically point upward perpendicular to the top surface, while those of points on the side surface, such as f3 and f4, are perpendicular to the side surface. The normal vectors of the edge points e1, e2, e3 resemble a blend of the two but differ considerably from each. Along the edge direction, i.e. e1-e2-e3, the normal vector of an edge point changes little, whereas across the edge, i.e. along f4-f3-e2-f2-f1, the normal vector changes drastically at the edge point e2. If the angle between the normal vector of e2 and the normal vectors of f2 and f3 exceeds 40 degrees, e2 is determined to be an edge point.
Step S303, after the edge straight lines in the depth image are extracted from the edge points by Hough transform, the depth image is converted back into the whole machine laser point cloud, as shown in fig. 4.
Converting the laser three-dimensional point cloud into a depth image thus serves to extract edge straight lines with a method similar to that of step S2; converting the depth image back afterwards yields a whole machine laser point cloud that contains obvious straight-line features. A sketch of the normal-angle edge test of step S302 follows.
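The following minimal sketch assumes the per-point unit normals have already been estimated from the depth-image pixel neighbourhood (normal estimation itself is not shown), and all names are illustrative.

import numpy as np

def is_edge_point(normal, neighbour_normals, angle_thresh_deg=40.0):
    # normal: (3,) unit normal of the scanning point;
    # neighbour_normals: (K, 3) unit normals of its depth-image neighbours.
    cos_thresh = np.cos(np.radians(angle_thresh_deg))
    # The angle between two unit normals exceeds 40 degrees exactly when
    # their dot product falls below cos(40 degrees).
    return bool(np.any(neighbour_normals @ normal < cos_thresh))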
S4, respectively extracting straight line intersection points in the reconstructed three-dimensional point cloud and straight line intersection points in the complete machine laser point cloud, carrying out similarity judgment on the two types of straight line intersection points, matching the two types of straight line intersection points when the similarity judgment condition is met, and calculating initial conversion parameters; the method specifically comprises the following substeps:
Step S401, for the same-name (homonymous) edge straight-line segments in the reconstructed three-dimensional point cloud, two segments whose intersection has not yet been computed are selected in turn; if the common perpendicular between them is shorter than 5 mm, its midpoint is regarded as the straight-line intersection point of the two segments; this continues until the straight-line intersections of all same-name edge straight-line segments have been computed;
Step S402, for the edge straight lines in the whole machine laser point cloud, two edge straight lines whose intersection has not yet been computed are selected in turn; if the common perpendicular between them is shorter than 5 mm, its midpoint is regarded as the straight-line intersection point of the two edge straight lines; this continues until the straight-line intersections of all edge straight lines have been computed;
Step S403, three intersection points P0, P1, P2 are selected from the intersections of step S401 as the points to be matched, forming the triangle ΔP0P1P2; the intersections of step S402 are traversed to select three points Q0, Q1, Q2 forming the triangle ΔQ0Q1Q2, with Q0 corresponding to P0, Q1 to P1 and Q2 to P2 by default;
Step S404, the similarity of the triangles ΔP0P1P2 and ΔQ0Q1Q2 is judged; when the similarity judgment condition is satisfied, Q0 matches P0, Q1 matches P1 and Q2 matches P2, and the initial conversion parameters are calculated; otherwise, step S403 is repeated. The initial conversion parameters in the invention comprise: the rotation matrix R_o, the translation vector t_o and the scale factor s_o. A sketch of the common-perpendicular intersection test of steps S401 and S402 is given below.
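The common-perpendicular test of steps S401 and S402 reduces to the classic closest-points computation between two 3D lines. The sketch below assumes coordinates in millimetres and illustrative names; near-parallel lines are skipped because their perpendicular feet are numerically unstable.

import numpy as np

def line_pseudo_intersection(p1, d1, p2, d2, tol_mm=5.0):
    # p1, p2: (3,) points on the two lines; d1, d2: (3,) direction vectors.
    w0 = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b
    if abs(denom) < 1e-9:               # near-parallel: no stable feet
        return None
    t = (b * e - c * d) / denom         # foot of the common perpendicular on line 1
    s = (a * e - b * d) / denom         # foot on line 2
    c1, c2 = p1 + t * d1, p2 + s * d2
    if np.linalg.norm(c1 - c2) < tol_mm:
        return 0.5 * (c1 + c2)          # midpoint = pseudo straight-line intersection
    return None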
The similarity judgment conditions in the invention are specifically as follows:
(1) the scale ratios S0, S1 and S2 all lie in the interval (0.8, 1.2), the scale ratios being calculated as:
S0 = L(Q0Q1) / L(P0P1)
S1 = L(Q1Q2) / L(P1P2)
S2 = L(Q2Q0) / L(P2P0)
where L(Q0Q1) is the distance between Q0 and Q1, L(P0P1) is the distance between P0 and P1, and likewise for the remaining point pairs;
(2) the difference between ∠Q0Q1Q2 and ∠P0P1P2 lies in the interval (−5.0, 5.0) degrees.
This similarity judgment method is simple, intuitive and easy to understand; at the same time its small amount of computation speeds up recognition, and its memory footprint is small and easy to manage. A sketch of the test is given below.
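A minimal sketch of the S404 similarity judgment under conditions (1) and (2), with the intersection points as (3,) numpy arrays; all names are illustrative.

import numpy as np

def angle_deg(a, b, c):
    # Angle at vertex b of triangle abc, in degrees.
    u, v = a - b, c - b
    cosang = (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

def triangles_similar(P, Q):
    # P = [P0, P1, P2] from the reconstructed cloud, Q = [Q0, Q1, Q2]
    # from the laser cloud, in the assumed correspondence order.
    L = lambda x, y: np.linalg.norm(x - y)
    ratios = (L(Q[0], Q[1]) / L(P[0], P[1]),
              L(Q[1], Q[2]) / L(P[1], P[2]),
              L(Q[2], Q[0]) / L(P[2], P[0]))
    if not all(0.8 < r < 1.2 for r in ratios):          # condition (1)
        return False
    return abs(angle_deg(Q[0], Q[1], Q[2]) -
               angle_deg(P[0], P[1], P[2])) < 5.0       # condition (2)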
S5, searching a point closest to the laser point cloud of the whole machine at each point in the reconstructed three-dimensional point cloud, inputting the initial conversion parameters into an ICP (inductively coupled plasma) method to iterate conversion parameters until the value of the conversion function is minimum, at the moment, finding a point which is accurately matched with the point of the reconstructed three-dimensional point cloud in the laser point cloud, and outputting the optimal conversion parameters:
(s*, R*, t*) = argmin over (s, R, t) of Σ_{(i,j)∈C} ‖ b_j − (s·R·a_i + t) ‖²
where (i, j) ∈ C denotes that the i-th straight-line intersection in the reconstructed three-dimensional point cloud is matched with the j-th straight-line intersection in the whole machine laser point cloud, b_j is the j-th straight-line intersection in the whole machine laser point cloud, a_i is the i-th straight-line intersection in the reconstructed three-dimensional point cloud, s is the iterated scale factor, R is the iterated rotation matrix, and t is the iterated translation vector.
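The inner minimization above is the classic similarity-transform estimation (scale, rotation, translation). One well-known closed-form solution for fixed correspondences is Umeyama's SVD method, sketched below; a full ICP loop alternates this estimation with the nearest-point search of step S5, starting from the coarse parameters (s_o, R_o, t_o). The sketch is an assumed realization, not the patent's own code.

import numpy as np

def estimate_similarity(A, B):
    # A, B: (N, 3) matched points (A: reconstructed cloud, B: laser cloud).
    # Returns s, R, t minimizing sum_k ||B[k] - (s * R @ A[k] + t)||^2.
    mu_a, mu_b = A.mean(axis=0), B.mean(axis=0)
    Ac, Bc = A - mu_a, B - mu_b
    sigma2 = (Ac ** 2).sum() / len(A)        # variance of the source set
    U, D, Vt = np.linalg.svd(Bc.T @ Ac / len(A))
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0                       # keep R a proper rotation
    R = U @ S @ Vt
    s = np.trace(np.diag(D) @ S) / sigma2    # optimal scale factor
    t = mu_b - s * (R @ mu_a)
    return s, R, t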
Fig. 5 shows the fine matching result of the whole machine measurement point cloud fusion method based on multi-source heterogeneous data; it can be seen from fig. 5 that the reconstructed three-dimensional point cloud and the whole machine laser point cloud are accurately registered. For optical image and laser point cloud data, straight lines are chosen as the connecting feature for registering the two kinds of heterogeneous data; methods are studied for identifying and matching straight-line features between related image pairs, and for identifying straight-line features in three-dimensional point clouds from normal vector variation. A forced reconstruction step for the matched straight lines is added to the sparse reconstruction stage of the structure-from-motion algorithm, so that the sparse reconstruction point cloud carries obvious three-dimensional straight-line features, and the registration of the edge-enhanced sparse reconstruction points with the laser point cloud is completed by a purpose-designed cross-scale three-dimensional point cloud automatic matching method based on intersecting straight lines. Experiments show that, for scenes rich in line structures, the whole machine measurement point cloud fusion method achieves high registration accuracy with a reliable and stable result, so stereoscopic vision reconstruction can assist three-dimensional laser scanning in obtaining a relatively more complete scene point cloud.
The above is only a preferred embodiment of the present invention, and the protection scope of the present invention is not limited to the above embodiment; all technical solutions under the idea of the present invention belong to the protection scope of the present invention. It should be noted that modifications and refinements made by those skilled in the art without departing from the principle of the present invention are also within the protection scope of the present invention.

Claims (9)

1. A whole machine measurement point cloud fusion method based on multi-source heterogeneous data, characterized by comprising the following steps:
S1, acquiring images of the whole airplane with a camera, and acquiring the laser three-dimensional point cloud of the whole airplane with a scanner;
S2, extracting same-name (homonymous) edge straight-line segments of the whole machine images from the acquired images by straight-line matching based on gray-level similarity, and reconstructing the three-dimensional point cloud corresponding to the whole machine images from these segments with a structure-from-motion (SfM) point cloud reconstruction method;
S3, converting the acquired whole airplane laser three-dimensional point cloud into a depth image, calculating the normal vector of each point in the depth image, extracting edge points from the variation of the normal vectors, extracting edge straight lines from the edge points, and converting the depth image back into the whole machine laser point cloud;
S4, extracting the straight-line intersection points in the reconstructed three-dimensional point cloud and in the whole machine laser point cloud respectively, judging the similarity of the two types of intersection points, matching the intersection points when the similarity judgment condition is satisfied, and calculating the initial conversion parameters;
and S5, for each point in the reconstructed three-dimensional point cloud, searching for the closest point in the whole machine laser point cloud, feeding the initial conversion parameters into an ICP (Iterative Closest Point) method, and iterating the conversion parameters until the value of the conversion function reaches its minimum, at which time each point of the reconstructed three-dimensional point cloud has an accurately matched point in the laser point cloud; the optimal conversion parameters are then output.
2. The whole machine measurement point cloud fusion method based on multi-source heterogeneous data according to claim 1, wherein step S2 comprises the following substeps:
step S201, extracting continuous edge points in the whole machine image through a Canny edge detection algorithm, detecting edge straight lines formed by the edge points by combining Hough transformation, eliminating excessively thick edge straight lines by setting a high threshold value, and eliminating excessively thin edge straight lines by setting a low threshold value;
step S202, selecting two images from the whole machine images processed in step S201 to form a left-right image pair; for each edge straight line extracted from the left image, computing 11 equally spaced points on the line, including its start point and end point, and finding the 11 epipolar lines of these points in the right image; computing the intersection points of each candidate edge straight line in the right image with the 11 epipolar lines, computing the gray-level correlation coefficient between each intersection point and the corresponding equally spaced point in the left image, and taking the mean of these coefficients as the correlation coefficient between the right-image edge line and the left-image edge line; when this correlation coefficient is the largest among candidates and greater than a threshold, the right-image edge straight line is the same-name line of the left-image edge straight line;
and step S203, reconstructing the three-dimensional point cloud corresponding to the whole machine images from the same-name edge straight-line segments with the structure-from-motion (SfM) point cloud reconstruction method.
3. The whole machine measurement point cloud fusion method based on multi-source heterogeneous data according to claim 2, wherein the three-dimensional point cloud reconstructed in step S203 is densified with the PMVS (patch-based multi-view stereo) algorithm.
4. The whole machine measurement point cloud fusion method based on multi-source heterogeneous data according to claim 1, wherein step S3 comprises the following substeps:
step S301, converting the acquired whole airplane laser three-dimensional point cloud into a depth image, and calculating the gray value of the pixel corresponding to each scanning point of the point cloud in the depth image;
step S302, calculating the normal vector of each scanning point from the pixel neighbourhood relations in the depth image; when the angle between the normal vector of a scanning point and that of an adjacent scanning point exceeds 40 degrees, the scanning point is taken as an extracted edge point;
and step S303, after the edge straight lines in the depth image are extracted from the edge points by Hough transform, converting the depth image back into the whole machine laser point cloud.
5. The whole machine measurement point cloud fusion method based on multi-source heterogeneous data according to claim 4, wherein the gray value is specifically calculated as:
Depth(i) = C · (S_i − S_min) / (S_max − S_min)
where i indexes the scanning points of the whole airplane laser three-dimensional point cloud, S_i is the distance from the i-th scanning point to the scanner center, S_max and S_min are respectively the maximum and minimum distances from a scanning point to the scanner center, C is a constant, and Depth(i) is the gray value assigned to the i-th scanning point.
6. The whole machine measurement point cloud fusion method based on multi-source heterogeneous data according to claim 1, wherein step S4 comprises the following substeps:
step S401, for the same-name (homonymous) edge straight-line segments in the reconstructed three-dimensional point cloud, sequentially selecting two segments whose intersection has not yet been computed; if the common perpendicular between them is shorter than 5 mm, its midpoint is regarded as the straight-line intersection point of the two segments; this continues until the straight-line intersections of all same-name edge straight-line segments have been computed;
step S402, for the edge straight lines in the whole machine laser point cloud, sequentially selecting two edge straight lines whose intersection has not yet been computed; if the common perpendicular between them is shorter than 5 mm, its midpoint is regarded as the straight-line intersection point of the two edge straight lines; this continues until the straight-line intersections of all edge straight lines have been computed;
step S403, selecting three intersection points P0, P1, P2 from the intersections of step S401 as the points to be matched, forming the triangle ΔP0P1P2; traversing the intersections of step S402 to select three points Q0, Q1, Q2 forming the triangle ΔQ0Q1Q2, with Q0 corresponding to P0, Q1 to P1 and Q2 to P2 by default;
step S404, judging the similarity of the triangles ΔP0P1P2 and ΔQ0Q1Q2; when the similarity judgment condition is satisfied, Q0 matches P0, Q1 matches P1 and Q2 matches P2, and the initial conversion parameters are calculated; otherwise, step S403 is repeated.
7. The whole machine measurement point cloud fusion method based on multi-source heterogeneous data according to claim 6, wherein the similarity judgment condition is specifically:
(1) the scale ratios S0, S1 and S2 all lie in the interval (0.8, 1.2), the scale ratios being calculated as:
S0 = L(Q0Q1) / L(P0P1)
S1 = L(Q1Q2) / L(P1P2)
S2 = L(Q2Q0) / L(P2P0)
where L(Q0Q1) is the distance between Q0 and Q1, L(P0P1) is the distance between P0 and P1, and likewise for the remaining point pairs;
(2) the difference between ∠Q0Q1Q2 and ∠P0P1P2 lies in the interval (−5.0, 5.0) degrees.
8. The whole machine measurement point cloud fusion method based on multi-source heterogeneous data according to claim 1, wherein the initial conversion parameters comprise: the rotation matrix R_o, the translation vector t_o and the scale factor s_o.
9. The whole machine measurement point cloud fusion method based on multi-source heterogeneous data according to claim 1, wherein the output process of the optimal conversion parameters in step S5 is as follows:
(s*, R*, t*) = argmin over (s, R, t) of Σ_{(i,j)∈C} ‖ b_j − (s·R·a_i + t) ‖²
where (i, j) ∈ C denotes that the i-th straight-line intersection in the reconstructed three-dimensional point cloud is matched with the j-th straight-line intersection in the whole machine laser point cloud, b_j is the j-th straight-line intersection in the whole machine laser point cloud, a_i is the i-th straight-line intersection in the reconstructed three-dimensional point cloud, s is the iterated scale factor, R is the iterated rotation matrix, and t is the iterated translation vector.
CN202210315036.3A 2022-03-29 2022-03-29 Whole machine measurement point cloud fusion method based on multi-source heterogeneous data Active CN114627275B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210315036.3A CN114627275B (en) 2022-03-29 2022-03-29 Whole machine measurement point cloud fusion method based on multi-source heterogeneous data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210315036.3A CN114627275B (en) 2022-03-29 2022-03-29 Whole machine measurement point cloud fusion method based on multi-source heterogeneous data

Publications (2)

Publication Number Publication Date
CN114627275A (en) 2022-06-14
CN114627275B (en) 2022-11-29

Family

ID=81904592

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210315036.3A Active CN114627275B (en) 2022-03-29 2022-03-29 Whole machine measurement point cloud fusion method based on multi-source heterogeneous data

Country Status (1)

Country Link
CN (1) CN114627275B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115619835B (en) * 2022-09-13 2023-09-01 浙江大学 Heterogeneous three-dimensional observation registration method, medium and equipment based on depth phase correlation

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103744086A (en) * 2013-12-23 2014-04-23 北京建筑大学 High-precision registration method for ground laser radar and close-range photography measurement data
CN104268935A (en) * 2014-09-18 2015-01-07 华南理工大学 Feature-based airborne laser point cloud and image data fusion system and method
WO2020067751A1 (en) * 2018-09-28 2020-04-02 재단법인대구경북과학기술원 Device and method for data fusion between heterogeneous sensors
CN111640158A (en) * 2020-06-11 2020-09-08 武汉斌果科技有限公司 End-to-end camera based on corresponding mask and laser radar external reference calibration method
CN112102458A (en) * 2020-08-31 2020-12-18 湖南盛鼎科技发展有限责任公司 Single-lens three-dimensional image reconstruction method based on laser radar point cloud data assistance
CN112164145A (en) * 2020-10-30 2021-01-01 武汉大学 Method for rapidly extracting indoor three-dimensional line segment structure based on point cloud data
CN113052954A (en) * 2019-12-28 2021-06-29 深圳先进技术研究院 Three-dimensional reconstruction method, device, terminal and storage medium based on line segment matching

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113362247B (en) * 2021-06-11 2023-08-15 山东大学 Semantic real scene three-dimensional reconstruction method and system for laser fusion multi-view camera
CN113506376A (en) * 2021-07-27 2021-10-15 刘秀萍 Ground three-dimensional point cloud multi-scale closure error checking and splicing method

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103744086A (en) * 2013-12-23 2014-04-23 北京建筑大学 High-precision registration method for ground laser radar and close-range photography measurement data
CN104268935A (en) * 2014-09-18 2015-01-07 华南理工大学 Feature-based airborne laser point cloud and image data fusion system and method
WO2020067751A1 (en) * 2018-09-28 2020-04-02 재단법인대구경북과학기술원 Device and method for data fusion between heterogeneous sensors
CN113052954A (en) * 2019-12-28 2021-06-29 深圳先进技术研究院 Three-dimensional reconstruction method, device, terminal and storage medium based on line segment matching
CN111640158A (en) * 2020-06-11 2020-09-08 武汉斌果科技有限公司 End-to-end camera based on corresponding mask and laser radar external reference calibration method
CN112102458A (en) * 2020-08-31 2020-12-18 湖南盛鼎科技发展有限责任公司 Single-lens three-dimensional image reconstruction method based on laser radar point cloud data assistance
CN112164145A (en) * 2020-10-30 2021-01-01 武汉大学 Method for rapidly extracting indoor three-dimensional line segment structure based on point cloud data

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"点、线相似不变性的城区航空影像与机载激光雷达点云自动配准";张良 等;《测绘学报》;20140417;第43卷(第4期);372-379 *
"点云与图像融合关键技术及在点云孔洞修复中的应用";蒙浩;《中国优秀硕士学位论文全文数据库 信息科技辑》;20200815(第08期);I138-381 *

Also Published As

Publication number Publication date
CN114627275A (en) 2022-06-14

Similar Documents

Publication Publication Date Title
Furukawa et al. Accurate, dense, and robust multiview stereopsis
CN112102458A (en) Single-lens three-dimensional image reconstruction method based on laser radar point cloud data assistance
CN107590827A (en) A kind of indoor mobile robot vision SLAM methods based on Kinect
CN108597009B (en) Method for detecting three-dimensional target based on direction angle information
CN106485690A (en) Cloud data based on a feature and the autoregistration fusion method of optical image
CN107818598B (en) Three-dimensional point cloud map fusion method based on visual correction
CN113658337B (en) Multi-mode odometer method based on rut lines
Kumari et al. A survey on stereo matching techniques for 3D vision in image processing
CN110310331B (en) Pose estimation method based on combination of linear features and point cloud features
CN103308000B (en) Based on the curve object measuring method of binocular vision
CN111612728A (en) 3D point cloud densification method and device based on binocular RGB image
CN112419497A (en) Monocular vision-based SLAM method combining feature method and direct method
CN114627275B (en) Whole machine measurement point cloud fusion method based on multi-source heterogeneous data
CN102036094A (en) Stereo matching method based on digital score delay technology
CN112184792A (en) Road slope calculation method and device based on vision
CN116309813A (en) Solid-state laser radar-camera tight coupling pose estimation method
CN113538569A (en) Weak texture object pose estimation method and system
CN113052880A (en) SFM sparse reconstruction method, system and application
Intwala et al. A review on process of 3D Model Reconstruction
CN117115336A (en) Point cloud reconstruction method based on remote sensing stereoscopic image
CN114463396B (en) Point cloud registration method utilizing plane shape and topological graph voting
CN111260712B (en) Depth estimation method and device based on refocusing polar line graph neighborhood distribution
CN117315518A (en) Augmented reality target initial registration method and system
Shen et al. A 3D modeling method of indoor objects using Kinect sensor
CN117197333A (en) Space target reconstruction and pose estimation method and system based on multi-view vision

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant