CN116152310A - Point cloud registration method, system, equipment and storage medium based on multi-source fusion

Info

Publication number
CN116152310A
Authority
CN
China
Prior art keywords
point cloud
dimensional
registration
point
feature
Prior art date
Legal status
Pending
Application number
CN202211500712.0A
Other languages
Chinese (zh)
Inventor
Miao Hongzhi
Luo Man
Wang Yameng
Li Pengfei
Wei Wei
Luo Shuanghua
Xia Boping
Current Assignee
HONG KONG-ZHUHAI-MACAO BRIDGE AUTHORITY
Wuhan Ship Communication Research Institute (722 Research Institute of China Shipbuilding Corp.)
Original Assignee
722th Research Institute of CSIC
Priority date
Filing date
Publication date
Application filed by 722th Research Institute of CSIC
Priority to CN202211500712.0A
Publication of CN116152310A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 Image registration using feature-based methods
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T7/90 Determination of colour characteristics
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The application discloses a point cloud registration method based on multi-source fusion, which comprises the following steps: fixing a camera and a laser radar and performing joint calibration to obtain camera parameters; acquiring two-dimensional images of a plurality of target objects through the camera, acquiring three-dimensional point cloud data of the target objects through the laser radar, obtaining the corresponding points of the three-dimensional point cloud in the two-dimensional images according to the camera parameters, and endowing the three-dimensional point cloud with the color information of the corresponding points; extracting feature points from the two-dimensional images to be stitched by a feature matching algorithm, matching the extracted feature points, mapping the matched feature points into three-dimensional space, and performing rough registration of the source point cloud and the target point cloud; and performing fine registration of the rough registration point cloud and the target point cloud. The application also discloses a point cloud registration system based on multi-source fusion, corresponding equipment and a storage medium. The method achieves a good registration effect in scenes where the three-dimensional structural features of the target change little.

Description

Point cloud registration method, system, equipment and storage medium based on multi-source fusion
Technical Field
The present application relates to the field of point cloud registration technologies, and in particular, to a point cloud registration method, system, device, and storage medium based on multi-source fusion.
Background
With the development of sensor technology, three-dimensional structure acquisition devices such as laser radar and sonar are increasingly used in many fields. These devices record the three-dimensional surface structure of a target object in the form of a point cloud, from which a digital model of the object can be restored by reverse engineering. In actual engineering, however, because the field of view of the acquisition device is limited or spatial structures occlude one another, a single acquired point cloud is often not the complete three-dimensional structure of the target. To obtain the complete three-dimensional structure, the multiple acquired point clouds need to be stitched together, so point cloud registration is required to merge multiple local point clouds into one complete point cloud.
Point cloud registration is typically split into two steps. The first step is coarse registration: with the initial pose of the point clouds completely unknown, the source point cloud and the target point cloud are transformed so as to bring them as close together as possible, usually by extracting and matching the three-dimensional structural features of the point clouds. The second step is fine registration, which performs further calibration on the basis of the coarse registration and iteratively refines the details, thereby obtaining more accurate pose information.
Civil structures in actual scenes, such as building groups and wall surfaces, often exhibit similar three-dimensional features, and large scenes usually cannot be acquired in one pass, so multiple point clouds must be matched together by point cloud registration. The point cloud registration methods in common use, especially rough registration, depend entirely on the three-dimensional structural features of the point clouds and therefore fail very easily in such scenes. Rough registration is nevertheless critical to the whole process: when it produces a large deviation, fine registration can hardly pull the point clouds to the correct position, and mis-registration readily occurs. For a scene in which the three-dimensional structural features of the target change little, two point clouds are difficult to register together using a general registration method.
Disclosure of Invention
In view of at least one defect or improvement requirement of the prior art, the present invention provides a point cloud registration method, system, equipment and storage medium based on multi-source fusion, which have a good registration effect on scenes where the three-dimensional structural features of the target change little.
To achieve the above object, according to a first aspect of the present invention, there is provided a point cloud registration method based on multi-source fusion, the method comprising:
fixing a camera and a laser radar and performing joint calibration to obtain camera parameters;
acquiring two-dimensional images of a plurality of target objects through a camera, acquiring three-dimensional point cloud data of the target objects through a laser radar, acquiring corresponding points of the three-dimensional point cloud in the two-dimensional images according to camera parameters, and endowing the three-dimensional point cloud with color information of the corresponding points;
extracting feature points in two-dimensional images to be spliced by adopting a feature matching algorithm, matching the extracted feature points, mapping the matched feature points into a three-dimensional space respectively, and performing point cloud rough registration on a source point cloud and a target point cloud;
and carrying out point cloud fine registration on the rough registration point cloud after the point cloud rough registration and the target point cloud.
Further, the method for registering point clouds based on multi-source fusion, wherein the obtaining corresponding points of the three-dimensional point clouds in the two-dimensional image according to the camera parameters and giving color information of the corresponding points to the three-dimensional point clouds specifically includes:
constructing a world coordinate system to obtain coordinate information of feature points in the three-dimensional point cloud;
according to the conversion relation among the world coordinate system, the camera coordinate system, the plane coordinate system and the pixel coordinate system, and combining the internal parameters and the external parameters of the camera, converting the coordinates of the feature points in the three-dimensional point cloud into the corresponding feature point coordinates of the two-dimensional image;
and acquiring pixel information of the feature points in the two-dimensional image, and endowing RGB color values of the feature points to the three-dimensional point cloud.
Further, in the point cloud registration method based on multi-source fusion, the feature matching algorithm is adopted to extract feature points in two-dimensional images to be spliced, the extracted feature points are matched, and the matched feature points are respectively mapped to a three-dimensional space, and the method specifically comprises the following steps:
respectively extracting feature points of two images to be pieced together by adopting a feature matching algorithm;
matching the feature points through a rapid nearest neighbor algorithm, and finding out a plurality of pairs of best matching feature point pairs;
and mapping the optimal matching characteristic points to a three-dimensional space respectively to obtain two groups of three-dimensional characteristic points corresponding to the optimal matching characteristic point pairs, and obtaining a space transformation matrix between the characteristic points in the two-dimensional image and the corresponding three-dimensional characteristic points.
Further, the method for registering point clouds based on multi-source fusion, wherein the obtaining a spatial transformation matrix between the feature points in the two-dimensional image and the corresponding three-dimensional feature points specifically includes:
obtaining, for any feature point p in the best matching feature point pairs, its k neighboring points {p_1, p_2, ..., p_k} in the search point set P, the depth value of the three-dimensional feature point corresponding to the feature point being

depth(p) = \frac{\sum_{i=1}^{k} depth(p_i)/dist(p_i)}{\sum_{i=1}^{k} 1/dist(p_i)}

wherein depth(p_i) is the depth value of the two-dimensional feature point p_i, and dist(p_i) is the distance from the feature point p to the two-dimensional feature point p_i;

and calculating the coordinate information of the three-dimensional feature point corresponding to the feature point by using the depth information and the internal parameters and external parameters of the camera, which is realized by the following formula:

depth(p) \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = K \, [R \mid t] \begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix}, \qquad K = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}

wherein K is the internal reference (intrinsic) matrix, [R | t] is the external reference (extrinsic) matrix, (u, v) are the coordinates of the feature point, and (x, y, z) are the coordinates of the three-dimensional feature point corresponding to the feature point.
Further, the point cloud registration method based on multi-source fusion, wherein the performing point cloud rough registration on the source point cloud and the target point cloud specifically includes:
respectively obtaining coordinate information of two groups of three-dimensional feature points according to a space transformation matrix between the feature points in the two-dimensional image and the corresponding three-dimensional feature points;
and obtaining a space transformation matrix of the two groups of three-dimensional characteristic points by adopting a singular value decomposition method, and aligning the source point cloud with the target point cloud according to the space transformation matrix.
Further, the method for registering point clouds based on multi-source fusion, wherein the performing point cloud fine registration on the rough registration point clouds after the rough registration of the point clouds and the target point clouds specifically includes:
obtaining a rough registration point cloud after rough registration of the point cloud, calculating corresponding near points of the rough registration point cloud in the target point cloud set, and calculating translation parameters and rotation parameters between characteristic points in the rough registration point cloud and the corresponding near points;
calculating a new transformation point set according to the translation parameters and the rotation parameters; if the average distance between the new transformation point set and the rough registration point cloud is greater than a preset threshold value, continuing the iterative calculation; and if the average distance between the new transformation point set and the rough registration point cloud is smaller than or equal to the preset threshold value, finishing the fine registration of the point cloud.
According to a second aspect of the present invention, there is further provided a point cloud registration system based on multi-source fusion, the system including an image acquisition module, a three-dimensional point cloud data acquisition module, a point cloud coarse registration module, and a point cloud fine registration module.
The image acquisition module comprises a camera and is used for acquiring two-dimensional images of a plurality of target objects through the camera;
the three-dimensional point cloud data acquisition module is connected with the image acquisition module and comprises a laser radar; it is used for acquiring three-dimensional point cloud data of the target object through the laser radar, acquiring the corresponding points of the three-dimensional point cloud in the two-dimensional image according to the camera parameters, and endowing the color information of the corresponding points to the three-dimensional point cloud;
the point cloud rough registration module is respectively connected with the image acquisition module and the three-dimensional point cloud data acquisition module, and is used for extracting characteristic points in two-dimensional images to be spliced, matching the extracted characteristic points, mapping the matched characteristic points into a three-dimensional space respectively, and carrying out point cloud rough registration on source point clouds and target point clouds;
the point cloud fine registration module performs point cloud fine registration on the rough registration point cloud and the target point cloud after rough registration of the point cloud.
According to a third aspect of the present invention there is also provided a point cloud registration device based on multi-source fusion comprising at least one processing unit and at least one storage unit, wherein the storage unit stores a computer program which, when executed by the processing unit, causes the processing unit to perform the steps of any of the methods described above.
According to a fourth aspect of the present invention there is also provided a storage medium storing a computer program executable by a multi-source fusion based point cloud registration device, which when run on the multi-source fusion based point cloud registration device causes the multi-source fusion based point cloud registration device to perform the steps of any of the methods described above.
In general, compared with the prior art, the above technical solutions conceived by the present invention achieve the following beneficial effects:
(1) According to the point cloud registration method based on multi-source fusion, the source point cloud and the target point cloud are roughly registered by solving the spatial transformation matrix between the two-dimensional feature points and the corresponding three-dimensional feature points, so the method has a good registration effect on scenes where the three-dimensional structural features of the target change little.
(2) According to the point cloud registration method based on multi-source fusion, the color information of the target object is fused into the point cloud to colorize it, which can enhance the point cloud registration effect.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are required to be used in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic flow chart of a point cloud registration method based on multi-source fusion according to an embodiment of the present application;
fig. 2 is a schematic diagram of two point clouds before stitching provided in the embodiment of the present application;
fig. 3 is a schematic diagram of a feature point alignment result provided in an embodiment of the present application;
fig. 4 is a schematic diagram of coarse registration of point clouds according to an embodiment of the present application;
fig. 5 is a schematic diagram of point cloud fine registration provided in an embodiment of the present application.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention. In addition, the technical features of the embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
The terms first, second, third and the like in the description and in the claims of the application and in the above-described figures, are used for distinguishing between different objects and not necessarily for describing a particular sequential or chronological order. Furthermore, the terms "comprise" and "have," as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those listed steps or elements but may include other steps or elements not listed or inherent to such process, method, article, or apparatus.
In one aspect, the present application provides a point cloud registration method based on multi-source fusion, and fig. 1 is a schematic flow chart of a point cloud registration method based on multi-source fusion provided in an embodiment of the present application, referring to fig. 1, the method includes the following steps:
(1) Fixing a camera and a laser radar and performing joint calibration to obtain camera parameters;
specifically, the camera and the laser radar are fixed on the same fixing device, so that the relative pose of the camera and the laser radar is fixed. In a specific embodiment, the fixing device is a tripod or other fixing device, the camera is a ZED2 camera, or other types of cameras, which are not specifically limited herein, and the lidar is an lipox lidar, or other types of lidar, which are not specifically limited herein.
Further, a checkerboard calibration pattern is made to jointly calibrate the camera and the laser radar: the PnP (Perspective-n-Point) problem is solved from the correspondence between the positions of the checkerboard corner points in the two-dimensional image of the camera and their positions in the three-dimensional point cloud of the laser radar, thereby obtaining the internal parameters and external parameters of the camera.
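By way of illustration, the following is a minimal Python sketch of this joint-calibration step, assuming OpenCV (cv2) and NumPy are available. The checkerboard geometry, the intrinsic matrix K, and the pose used to synthesize the corner detections are illustrative assumptions standing in for real detections from the camera image and the lidar point cloud.

```python
# Minimal sketch of checkerboard-based PnP calibration (illustrative values).
import numpy as np
import cv2

# 3D positions of the 7x5 inner checkerboard corners (0.1 m squares),
# as they would be measured in the lidar point cloud.
objp = np.zeros((7 * 5, 3), np.float64)
objp[:, :2] = 0.1 * np.mgrid[0:7, 0:5].T.reshape(-1, 2)

# Assumed camera intrinsic matrix K (focal lengths and principal point in pixels).
K = np.array([[700.0, 0.0, 640.0],
              [0.0, 700.0, 360.0],
              [0.0, 0.0, 1.0]])

# Synthesize 2D corner detections from a known pose; in practice these would
# come from cv2.findChessboardCorners on the camera image.
rvec_true = np.array([0.05, -0.10, 0.02])
tvec_true = np.array([0.30, -0.20, 2.00])
imgp, _ = cv2.projectPoints(objp, rvec_true, tvec_true, K, None)

# Solve the PnP problem: recover the extrinsics (rotation, translation)
# that map lidar coordinates into the camera frame.
ok, rvec, tvec = cv2.solvePnP(objp, imgp, K, None)
R, _ = cv2.Rodrigues(rvec)
print(ok, rvec.ravel(), tvec.ravel())  # should reproduce rvec_true / tvec_true
```

In practice the intrinsics would themselves be estimated, for example with cv2.calibrateCamera, rather than assumed.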
(2) Acquiring two-dimensional images of a plurality of target objects through a camera, acquiring three-dimensional point cloud data of the target objects through a laser radar, acquiring corresponding points of the three-dimensional point cloud in the two-dimensional images according to camera parameters, and endowing the three-dimensional point cloud with color information of the corresponding points;
specifically, two-dimensional image data and three-dimensional point cloud data of a target object in a target scene are respectively acquired by using a fixed camera and a laser radar, and in a specific embodiment, in order to verify the registration effect in a scene with insignificant three-dimensional characteristic change, a wall surface is selected as the target scene.
Firstly, constructing a world coordinate system to obtain coordinate information of feature points in a three-dimensional point cloud; converting coordinates of feature points in the three-dimensional point cloud into corresponding feature point coordinates of the two-dimensional image according to the conversion relation among the world coordinate system, the camera coordinate system, the plane coordinate system and the pixel coordinate system and combining the internal parameters and the external parameters of the camera; and finding out the two-dimensional image characteristic points corresponding to the characteristic points in the three-dimensional point cloud according to the coordinate information, acquiring pixel information of the characteristic points in the two-dimensional image, and endowing the RGB color values of the characteristic points to the three-dimensional point cloud.
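As an illustration of this projection-and-coloring step, the hedged Python sketch below projects each lidar point into the image with calibrated intrinsics K and extrinsics (R, t) and copies the RGB value of the pixel it lands on; the function and variable names are illustrative, not taken from the patent.

```python
# Sketch: colorize a lidar point cloud from a registered camera image.
import numpy as np

def colorize_point_cloud(points, image, K, R, t):
    """points: N x 3 world coordinates; image: H x W x 3 RGB; returns N x 3 colors."""
    cam = points @ R.T + t                      # world -> camera coordinates
    in_front = cam[:, 2] > 0                    # keep points in front of the camera
    uvw = cam[in_front] @ K.T                   # camera -> homogeneous pixel coordinates
    uv = (uvw[:, :2] / uvw[:, 2:3]).round().astype(int)
    h, w = image.shape[:2]
    inside = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    colors = np.zeros((points.shape[0], 3), dtype=image.dtype)
    visible = np.flatnonzero(in_front)[inside]
    colors[visible] = image[uv[inside, 1], uv[inside, 0]]   # image is indexed (row v, col u)
    return colors
```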
(3) Extracting feature points in two-dimensional images to be spliced by adopting a feature matching algorithm, matching the extracted feature points, mapping the matched feature points into a three-dimensional space respectively, and performing point cloud rough registration on a source point cloud and a target point cloud;
fig. 2 is a schematic diagram of two point clouds before stitching, fig. 3 is a schematic diagram of a feature point alignment result provided in an embodiment of the present application, and fig. 4 is a schematic diagram of rough point cloud registration provided in an embodiment of the present application. Respectively extracting characteristic points of two-dimensional images to be pieced together by adopting a characteristic matching algorithm (SURF algorithm); matching the feature points through a quick nearest neighbor algorithm (FLANN algorithm), and finding out a plurality of pairs of best matching feature point pairs; and mapping the optimal matching characteristic points to a three-dimensional space respectively to obtain two groups of three-dimensional characteristic points corresponding to the optimal matching characteristic point pairs, and obtaining a space transformation matrix between the characteristic points in the two-dimensional image and the corresponding three-dimensional characteristic points.
Specifically, for any feature point p in the best matching feature point pairs, its k neighboring points {p_1, p_2, ..., p_k} are searched in the point set P, and the depth value of the three-dimensional feature point corresponding to the feature point is taken as the inverse-distance-weighted average

depth(p) = \frac{\sum_{i=1}^{k} depth(p_i)/dist(p_i)}{\sum_{i=1}^{k} 1/dist(p_i)}

wherein depth(p_i) is the depth value of the two-dimensional feature point p_i, and dist(p_i) is the distance from the feature point p to the two-dimensional feature point p_i.

The coordinate information of the three-dimensional feature point corresponding to the feature point is then calculated by using the depth information and the internal parameters and external parameters of the camera, which is realized by the following formula:

depth(p) \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = K \, [R \mid t] \begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix}, \qquad K = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}

wherein K is the internal reference (intrinsic) matrix, [R | t] is the external reference (extrinsic) matrix, (u, v) are the coordinates of the feature point in the two-dimensional image, and (x, y, z) are the coordinates of the corresponding three-dimensional feature point.
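The two formulas above can be exercised with the small numeric Python sketch below: inverse-distance-weighted depth interpolation over the k nearest neighbors, then back-projection of the pixel (u, v) to a three-dimensional point. The function names, the toy coordinates, and the identity extrinsics are illustrative assumptions.

```python
# Sketch: depth interpolation (inverse-distance weighting) and back-projection.
import numpy as np

def interpolate_depth(uv, neighbor_uv, neighbor_depth, eps=1e-9):
    """uv: (2,) feature point; neighbor_uv: k x 2 pixels; neighbor_depth: (k,)."""
    dist = np.linalg.norm(neighbor_uv - uv, axis=1) + eps   # distances dist(p_i)
    w = 1.0 / dist                                          # inverse-distance weights
    return float(np.sum(w * neighbor_depth) / np.sum(w))

def back_project(uv, depth, K, R, t):
    """Invert depth * [u, v, 1]^T = K (R X + t) to recover the 3D point X."""
    pix = np.array([uv[0], uv[1], 1.0])
    x_cam = depth * np.linalg.solve(K, pix)   # pixel -> camera coordinates
    return np.linalg.solve(R, x_cam - t)      # camera -> world coordinates

# Toy usage with identity extrinsics.
K = np.array([[700.0, 0.0, 640.0], [0.0, 700.0, 360.0], [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)
z = interpolate_depth(np.array([320.0, 240.0]),
                      np.array([[318.0, 241.0], [322.0, 238.0], [321.0, 243.0]]),
                      np.array([2.00, 2.10, 2.05]))
print(back_project((320.0, 240.0), z, K, R, t))
```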
The coordinate information of the two groups of three-dimensional feature points is thus obtained respectively from the spatial transformation matrix between the feature points in the two-dimensional image and the corresponding three-dimensional feature points;
and solving a space transformation matrix of the two groups of three-dimensional characteristic points by adopting a Singular Value Decomposition (SVD) method, and aligning the source point cloud with the target point cloud according to the space transformation matrix.
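For illustration, a minimal Python sketch of the SVD solution (the classic Kabsch/Umeyama construction) for the rigid transform between the two groups of corresponding three-dimensional feature points follows; src and dst are assumed to be N x 3 arrays of matched points.

```python
# Sketch: rigid transform (R, t) between matched 3D point sets via SVD.
import numpy as np

def rigid_transform_svd(src, dst):
    """Return R (3x3) and t (3,) minimizing sum ||R @ src_i + t - dst_i||^2."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)     # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t
```

Applying the returned (R, t) to every point of the source point cloud aligns it with the target point cloud, completing the rough registration.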
(4) And carrying out point cloud fine registration on the rough registration point cloud after the point cloud rough registration and the target point cloud.
Fig. 5 is a schematic diagram of point cloud fine registration provided in the embodiment of the present application, and a process of performing point cloud fine registration by using a colored iterative closest point algorithm (ICP algorithm) is specifically as follows:
obtaining a rough registration point cloud after rough registration of the point cloud, calculating corresponding near points of the rough registration point cloud in a target point cloud set, and calculating translation parameters and rotation parameters between characteristic points in the rough registration point cloud and the corresponding near points;
calculating a new transformation point set according to the translation parameters and the rotation parameters; if the average distance between the new transformation point set and the rough registration point cloud is greater than a preset threshold value, continuing the iterative calculation; and if the average distance between the new transformation point set and the rough registration point cloud is smaller than or equal to the preset threshold value, finishing the fine registration of the point cloud.
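A hedged sketch of this fine-registration step using Open3D is shown below. It seeds point-to-point ICP with the 4x4 transform from the rough registration; the patent describes a colored ICP variant, for which Open3D's registration_colored_icp offers a similar interface, so the distance threshold, iteration limit, and file names here are illustrative.

```python
# Sketch: ICP fine registration seeded with the coarse transform (Open3D).
import numpy as np
import open3d as o3d

source = o3d.io.read_point_cloud("source.pcd")   # source cloud (to be refined)
target = o3d.io.read_point_cloud("target.pcd")
T_coarse = np.eye(4)                             # would come from the SVD step above

result = o3d.pipelines.registration.registration_icp(
    source, target,
    max_correspondence_distance=0.05,            # preset distance threshold (metres)
    init=T_coarse,
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(),
    criteria=o3d.pipelines.registration.ICPConvergenceCriteria(max_iteration=50))

source.transform(result.transformation)          # apply the refined pose
print(result.fitness, result.inlier_rmse)
```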
Further, more data sets were used to verify the effectiveness of the method herein under different scenarios. Three groups of data, box, room and wall, were used as different target scenes, the results were compared with those of a general method (SAC-IA + ICP), and the final mean Root Mean Square Error (RMSE) was calculated; the results are shown in Table 1.
Table 1 results of comparison with general procedure
[Table 1 appears only as an image in the published text; it reports the mean RMSE of the proposed method and of the general method (SAC-IA + ICP) on the box, room and wall data sets.]
A smaller RMSE indicates a better registration result. The results demonstrate that the registration effect of the method herein in scenes with insignificant three-dimensional feature change is superior to that of the general method, while in general scenes it does not surpass the general method.
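For reference, a common reading of the RMSE used in such comparisons is the root mean square of nearest-neighbor distances from the registered source cloud to the target cloud; the sketch below implements that reading, which is an assumption, since the published text does not spell out the exact protocol.

```python
# Sketch: registration RMSE as RMS of nearest-neighbor distances (assumed protocol).
import numpy as np
from scipy.spatial import cKDTree

def registration_rmse(registered_pts, target_pts):
    """Both arguments: N x 3 arrays of point coordinates."""
    d, _ = cKDTree(target_pts).query(registered_pts)   # nearest-neighbor distances
    return float(np.sqrt(np.mean(d ** 2)))
```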
On the other hand, the application also provides a point cloud registration system based on multi-source fusion, which comprises an image acquisition module, a three-dimensional point cloud data acquisition module, a point cloud coarse registration module and a point cloud fine registration module.
The image acquisition module comprises a camera and is used for acquiring two-dimensional images of a plurality of target objects through the camera;
the three-dimensional point cloud data acquisition module is connected with the image acquisition module, and comprises a laser radar, and is used for acquiring three-dimensional point cloud data of a target object through the laser radar, acquiring corresponding points of the three-dimensional point cloud in a two-dimensional image according to camera parameters, and endowing the color information of the corresponding points to the three-dimensional point cloud;
the point cloud rough registration module is respectively connected with the image acquisition module and the three-dimensional point cloud data acquisition module, and is used for extracting characteristic points in two-dimensional images to be spliced, matching the extracted characteristic points, mapping the matched characteristic points to a three-dimensional space respectively, and carrying out point cloud rough registration on the source point cloud and the target point cloud;
and the point cloud fine registration module performs point cloud fine registration on the rough registration point cloud and the target point cloud after rough registration of the point cloud.
The present application also provides a computer readable storage medium having stored thereon a computer program which when executed by a processor performs the steps of the above method. The computer readable storage medium may include, among other things, any type of disk including floppy disks, optical disks, DVDs, CD-ROMs, micro-drives, and magneto-optical disks, ROM, RAM, EPROM, EEPROM, DRAM, VRAM, flash memory devices, magnetic or optical cards, nanosystems (including molecular memory ICs), or any type of media or device suitable for storing instructions and/or data.
It should be noted that, for simplicity of description, the foregoing method embodiments are all expressed as a series of action combinations, but it should be understood by those skilled in the art that the present application is not limited by the order of actions described, as some steps may be performed in other order or simultaneously in accordance with the present application. Further, those skilled in the art will also appreciate that the embodiments described in the specification are all preferred embodiments, and that the acts and modules referred to are not necessarily required in the present application.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and for parts of one embodiment that are not described in detail, reference may be made to related descriptions of other embodiments.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative, such as the division of the units, merely a logical function division, and there may be additional manners of dividing the actual implementation, such as multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be through some service interface, device or unit indirect coupling or communication connection, electrical or otherwise.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable memory. Based on such understanding, the technical solution of the present application may be embodied in essence or a part contributing to the prior art or all or part of the technical solution in the form of a software product stored in a memory, including several instructions for causing a computer device (which may be a personal computer, a server or a network device, etc.) to perform all or part of the steps of the method described in the embodiments of the present application. And the aforementioned memory includes: a U-disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a removable hard disk, a magnetic disk, or an optical disk, or other various media capable of storing program codes.
Those of ordinary skill in the art will appreciate that all or a portion of the steps in the various methods of the above embodiments may be performed by hardware associated with a program that is stored in a computer readable memory, which may include: flash disk, read-Only Memory (ROM), random-access Memory (Random Access Memory, RAM), magnetic or optical disk, and the like.
The foregoing is merely exemplary embodiments of the present disclosure and is not intended to limit the scope of the present disclosure. That is, equivalent changes and modifications are contemplated by the teachings of this disclosure, which fall within the scope of the present disclosure. Embodiments of the present disclosure will be readily apparent to those skilled in the art from consideration of the specification and practice of the disclosure herein. This application is intended to cover any adaptations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a scope and spirit of the disclosure being indicated by the claims.
The technical features of the above embodiments may be arbitrarily combined, and all possible combinations of the technical features in the above embodiments are not described for brevity of description, however, as long as there is no contradiction between the combinations of the technical features, they should be considered as the scope of the description.
It will be readily appreciated by those skilled in the art that the foregoing description is merely a preferred embodiment of the invention and is not intended to limit the invention, but any modifications, equivalents, improvements or alternatives falling within the spirit and principles of the invention are intended to be included within the scope of the invention.

Claims (9)

1. The point cloud registration method based on the multi-source fusion is characterized by comprising the following steps of:
fixing a camera and a laser radar and performing joint calibration to obtain camera parameters;
acquiring two-dimensional images of a plurality of target objects through a camera, acquiring three-dimensional point cloud data of the target objects through a laser radar, acquiring corresponding points of the three-dimensional point cloud in the two-dimensional images according to camera parameters, and endowing the three-dimensional point cloud with color information of the corresponding points;
extracting feature points in two-dimensional images to be spliced by adopting a feature matching algorithm, matching the extracted feature points, mapping the matched feature points into a three-dimensional space respectively, and performing point cloud rough registration on a source point cloud and a target point cloud;
and carrying out point cloud fine registration on the rough registration point cloud after the point cloud rough registration and the target point cloud.
2. The point cloud registration method based on multi-source fusion according to claim 1, wherein the obtaining the corresponding point of the three-dimensional point cloud in the two-dimensional image according to the camera parameter and giving the color information of the corresponding point to the three-dimensional point cloud specifically comprises:
constructing a world coordinate system to obtain coordinate information of feature points in the three-dimensional point cloud;
according to the conversion relation among the world coordinate system, the camera coordinate system, the plane coordinate system and the pixel coordinate system, and combining the internal parameters and the external parameters of the camera, converting the coordinates of the feature points in the three-dimensional point cloud into the corresponding feature point coordinates of the two-dimensional image;
and acquiring pixel information of the feature points in the two-dimensional image, and endowing RGB color values of the feature points to the three-dimensional point cloud.
3. The point cloud registration method based on multi-source fusion according to claim 1, wherein the feature matching algorithm is adopted to extract feature points in two-dimensional images to be spliced, the extracted feature points are matched, and the matched feature points are mapped to a three-dimensional space respectively, and the method specifically comprises the following steps:
respectively extracting feature points of two images to be pieced together by adopting a feature matching algorithm;
matching the feature points through a rapid nearest neighbor algorithm, and finding out a plurality of pairs of best matching feature point pairs;
and mapping the optimal matching characteristic points to a three-dimensional space respectively to obtain two groups of three-dimensional characteristic points corresponding to the optimal matching characteristic point pairs, and obtaining a space transformation matrix between the characteristic points in the two-dimensional image and the corresponding three-dimensional characteristic points.
4. The point cloud registration method based on multi-source fusion according to claim 2 or 3, wherein the obtaining a spatial transformation matrix between the feature points in the two-dimensional image and the corresponding three-dimensional feature points specifically includes:
obtaining, for any feature point p in the best matching feature point pairs, its k neighboring points {p_1, p_2, ..., p_k} in the search point set P, wherein the depth value of the three-dimensional feature point corresponding to the feature point is

depth(p) = \frac{\sum_{i=1}^{k} depth(p_i)/dist(p_i)}{\sum_{i=1}^{k} 1/dist(p_i)}

wherein depth(p_i) is the depth value of the two-dimensional feature point p_i, and dist(p_i) is the distance from the feature point p to the two-dimensional feature point p_i;

and calculating the coordinate information of the three-dimensional feature point corresponding to the feature point by using the depth information and the internal parameters and external parameters of the camera, which is realized by the following formula:

depth(p) \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = K \, [R \mid t] \begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix}, \qquad K = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}

wherein K is the internal reference (intrinsic) matrix, [R | t] is the external reference (extrinsic) matrix, (u, v) are the coordinates of the feature point, and (x, y, z) are the coordinates of the three-dimensional feature point corresponding to the feature point.
5. The point cloud registration method based on multi-source fusion according to claim 4, wherein the performing point cloud rough registration on the source point cloud and the target point cloud specifically comprises:
respectively obtaining coordinate information of two groups of three-dimensional feature points according to a space transformation matrix between the feature points in the two-dimensional image and the corresponding three-dimensional feature points;
and obtaining a space transformation matrix of the two groups of three-dimensional characteristic points by adopting a singular value decomposition method, and aligning the source point cloud with the target point cloud according to the space transformation matrix.
6. The point cloud registration method based on multi-source fusion according to claim 1 or 5, wherein the performing point cloud fine registration on the rough registration point cloud after the point cloud rough registration and the target point cloud specifically comprises:
obtaining a rough registration point cloud after rough registration of the point cloud, calculating corresponding near points of the rough registration point cloud in the target point cloud set, and calculating translation parameters and rotation parameters between characteristic points in the rough registration point cloud and the corresponding near points;
calculating a new transformation point set according to the translation parameters and the rotation parameters; if the average distance between the new transformation point set and the rough registration point cloud is greater than a preset threshold value, continuing the iterative calculation; and if the average distance between the new transformation point set and the rough registration point cloud is smaller than or equal to the preset threshold value, finishing the fine registration of the point cloud.
7. The point cloud registration system based on the multi-source fusion is characterized by comprising an image acquisition module, a three-dimensional point cloud data acquisition module, a point cloud coarse registration module and a point cloud fine registration module;
the image acquisition module comprises a camera and is used for acquiring two-dimensional images of a plurality of target objects through the camera;
the three-dimensional point cloud data acquisition module is connected with the image acquisition module and comprises a laser radar; it is used for acquiring three-dimensional point cloud data of the target object through the laser radar, acquiring the corresponding points of the three-dimensional point cloud in the two-dimensional image according to the camera parameters, and endowing the color information of the corresponding points to the three-dimensional point cloud;
the point cloud rough registration module is respectively connected with the image acquisition module and the three-dimensional point cloud data acquisition module, and is used for extracting characteristic points in two-dimensional images to be spliced, matching the extracted characteristic points, mapping the matched characteristic points into a three-dimensional space respectively, and carrying out point cloud rough registration on source point clouds and target point clouds;
the point cloud fine registration module performs point cloud fine registration on the rough registration point cloud and the target point cloud after rough registration of the point cloud.
8. A point cloud registration device based on multi-source fusion, characterized by comprising at least one processing unit and at least one storage unit, wherein the storage unit stores a computer program which, when executed by the processing unit, causes the processing unit to perform the steps of the method of any of claims 1-6.
9. A storage medium storing a computer program executable by a point cloud registration device based on multi-source fusion, which when run on the point cloud registration device based on multi-source fusion causes the point cloud registration device based on multi-source fusion to perform the steps of the method according to any of claims 1-6.
CN202211500712.0A 2022-11-28 2022-11-28 Point cloud registration method, system, equipment and storage medium based on multi-source fusion Pending CN116152310A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211500712.0A CN116152310A (en) 2022-11-28 2022-11-28 Point cloud registration method, system, equipment and storage medium based on multi-source fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211500712.0A CN116152310A (en) 2022-11-28 2022-11-28 Point cloud registration method, system, equipment and storage medium based on multi-source fusion

Publications (1)

Publication Number Publication Date
CN116152310A true CN116152310A (en) 2023-05-23

Family

ID=86372613

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211500712.0A Pending CN116152310A (en) 2022-11-28 2022-11-28 Point cloud registration method, system, equipment and storage medium based on multi-source fusion

Country Status (1)

Country Link
CN (1) CN116152310A (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116797803A (en) * 2023-07-13 2023-09-22 南京埃斯顿自动化股份有限公司 Point cloud matching method based on sectional decoupling, electronic equipment and medium
CN116797803B (en) * 2023-07-13 2024-02-06 南京埃斯顿自动化股份有限公司 Point cloud matching method based on sectional decoupling, electronic equipment and medium
CN116952988A (en) * 2023-09-21 2023-10-27 斯德拉马机械(太仓)有限公司 2D line scanning detection method and system for ECU (electronic control Unit) product
CN116952988B (en) * 2023-09-21 2023-12-08 斯德拉马机械(太仓)有限公司 2D line scanning detection method and system for ECU (electronic control Unit) product
CN117351156A (en) * 2023-12-01 2024-01-05 深圳市云鲸视觉科技有限公司 City real-time digital content generation method and system and electronic equipment thereof
CN117351156B (en) * 2023-12-01 2024-03-22 深圳市云鲸视觉科技有限公司 City real-time digital content generation method and system and electronic equipment thereof
CN117456146A (en) * 2023-12-21 2024-01-26 绘见科技(深圳)有限公司 Laser point cloud splicing method, device, medium and equipment
CN117456146B (en) * 2023-12-21 2024-04-12 绘见科技(深圳)有限公司 Laser point cloud splicing method, device, medium and equipment
CN118365821A (en) * 2024-06-19 2024-07-19 江西省送变电工程有限公司 Project progress visual management method and system

Similar Documents

Publication Publication Date Title
CN116152310A (en) Point cloud registration method, system, equipment and storage medium based on multi-source fusion
US11288492B2 (en) Method and device for acquiring 3D information of object
CN110853075B (en) Visual tracking positioning method based on dense point cloud and synthetic view
US8447099B2 (en) Forming 3D models using two images
US20120177284A1 (en) Forming 3d models using multiple images
CN111524168B (en) Point cloud data registration method, system and device and computer storage medium
Khoshelham Automated localization of a laser scanner in indoor environments using planar objects
CN110322492B (en) Space three-dimensional point cloud registration method based on global optimization
WO2021195939A1 (en) Calibrating method for external parameters of binocular photographing device, movable platform and system
Bastanlar et al. Multi-view structure-from-motion for hybrid camera scenarios
TW201523510A (en) System and method for combining point clouds
Alsadik et al. Robust extraction of image correspondences exploiting the image scene geometry and approximate camera orientation
CN113920267A (en) Three-dimensional scene model construction method, device, equipment and storage medium
Miyama Fast stereo matching with super-pixels using one-way check and score filter
Zhang et al. Guided feature matching for multi-epoch historical image blocks pose estimation
CN112837217A (en) Outdoor scene image splicing method based on feature screening
Guan et al. Estimation of camera pose with respect to terrestrial LiDAR data
Weinmann et al. Fast and accurate point cloud registration by exploiting inverse cumulative histograms (ICHs)
Yun et al. Cluster-wise removal of reflection artifacts in large-scale 3D point clouds using superpixel-based glass region estimation
He et al. Automatic orientation estimation of multiple images with respect to laser data
Ju et al. Panoramic image generation with lens distortions
Deliś et al. 3D modeling of architectural objects from video data obtained with the fixed focal length lens geometry
Chen et al. Fine registration of mobile and airborne LiDAR data based on common ground points
Chu et al. A systematic registration method for cross-source point clouds based on cross-view image matching
Kulkarni et al. Vote based correspondence for 3D point-set registration

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Zheng Shunchao

Inventor after: Dai Bolan

Inventor after: Li Guangjun

Inventor after: Miao Hongzhi

Inventor after: Jing Qiang

Inventor after: Luo Man

Inventor after: Wang Yameng

Inventor after: Li Pengfei

Inventor after: Wei Wei

Inventor after: Luo Shuanghua

Inventor after: Xia Boping

Inventor before: Miao Hongzhi

Inventor before: Luo Man

Inventor before: Wang Yameng

Inventor before: Li Pengfei

Inventor before: Wei Wei

Inventor before: Luo Shuanghua

Inventor before: Xia Boping

TA01 Transfer of patent application right

Effective date of registration: 20240206

Address after: No. 368, Henglong Road, Nanping Town, Xiangzhou District, Zhuhai City, Guangdong Province 519060

Applicant after: HONG KONG-ZHUHAI-MACAO BRIDGE AUTHORITY

Country or region after: China

Applicant after: Wuhan Ship Communication Research Institute (722 Research Institute of China Shipbuilding Corp.)

Address before: 430205 Institute of CSIC 722, No.3, zanglongdao, Jiangxia District, Wuhan City, Hubei Province

Applicant before: WUHAN SHIP COMMUNICATION Research Institute (722 RESEARCH INSTITUTE OF CHINA SHIPBUILDING INDUSTRY CORPORATION)

Country or region before: China
