CN113793370B - Three-dimensional point cloud registration method and device, electronic equipment and readable medium - Google Patents


Publication number
CN113793370B
CN113793370B
Authority
CN
China
Prior art keywords
point cloud
point
pixel
registered
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110040721.5A
Other languages
Chinese (zh)
Other versions
CN113793370A
Inventor
孙晓峰
Current Assignee
Beijing Jingdong Three Hundred And Sixty Degree E Commerce Co ltd
Original Assignee
Beijing Jingdong Three Hundred And Sixty Degree E Commerce Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Jingdong Three Hundred And Sixty Degree E Commerce Co ltd filed Critical Beijing Jingdong Three Hundred And Sixty Degree E Commerce Co ltd
Priority to CN202110040721.5A priority Critical patent/CN113793370B/en
Publication of CN113793370A publication Critical patent/CN113793370A/en
Application granted granted Critical
Publication of CN113793370B publication Critical patent/CN113793370B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/30: Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33: Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G06T3/14
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10028: Range image; Depth image; 3D point clouds
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/20: Special algorithmic details
    • G06T2207/20068: Projection on vertical or horizontal image axis

Abstract

Embodiments of the present disclosure provide a three-dimensional point cloud registration method and apparatus, an electronic device, and a readable medium. The three-dimensional point cloud registration method includes: acquiring a point cloud to be registered and a reference point cloud; determining a first projection image of the point cloud to be registered on a two-dimensional plane, the first projection image including a plurality of first pixel points; determining a second projection image of the reference point cloud on the two-dimensional plane, the second projection image including a plurality of second pixel points; determining matched pixel point pairs according to the first pixel points of the first projection image and the second pixel points of the second projection image, each matched pixel point pair including one first pixel point and one second pixel point; and determining a transformation matrix according to the at least one point cloud to be registered corresponding to the first pixel point of a matched pixel point pair and the at least one reference point cloud corresponding to the second pixel point of that pair, so as to perform a point cloud registration operation on the point cloud to be registered according to the transformation matrix. The technical solution provided by the embodiments of the present disclosure enables fast and accurate registration of large-scale point clouds without an initial value.

Description

Three-dimensional point cloud registration method and device, electronic equipment and readable medium
Technical Field
The present disclosure relates to the technical field of point cloud registration, and in particular to a three-dimensional point cloud registration method and apparatus, an electronic device, and a computer-readable medium.
Background
At present, with the popularization of laser radars and various depth sensors, acquiring three-dimensional point cloud data has become much more convenient. Large volumes of three-dimensional point clouds are becoming a novel data source in research fields such as three-dimensional reconstruction, pose estimation, and target recognition. However, in most cases the acquisition of a three-dimensional point cloud must be completed cooperatively across time periods, viewing angles, and devices, and only after the partial point cloud data acquired from different viewpoints, at different times, and by different devices are registered together into complete three-dimensional data covering the whole scene can subsequent higher-level algorithmic analysis be performed. Therefore, as a basic algorithm of point cloud data processing, three-dimensional point cloud registration has always been a research hotspot in fields such as surveying and geographic information, computer vision, and computer graphics. According to the target precision of registration, point cloud registration techniques are divided into two main categories: coarse registration and fine registration. Coarse registration refers to registering the point clouds when the relative pose of the two point clouds to be registered is completely unknown, and its result can provide a good initial value for fine registration. The currently prevailing coarse registration algorithms include registration algorithms based on exhaustive search and registration algorithms based on feature matching. A registration algorithm based on exhaustive search traverses the entire transformation space, either choosing the transformation that minimizes an error function or enumerating transformations to find the one satisfied by the most point pairs.
Examples include the random sample consensus (RANSAC) registration algorithm, the 4-Point Congruent Set (4PCS) algorithm, the Super4PCS algorithm, etc. Registration algorithms based on feature matching construct matching correspondences between point clouds by computing morphological characteristics of the measured object or of the scene itself, and then estimate the transformation with a related algorithm. Examples include sample consensus initial alignment (SAC-IA) over feature points described by Fast Point Feature Histograms (FPFH), the FGR algorithm, AO algorithms based on the Signature of Histograms of Orientations (SHOT) feature, the 3DSmoothNet method based on features extracted by deep learning, the ICL algorithm based on line features, etc. The purpose of fine registration is to minimize the spatial position difference between the point clouds, starting from the coarse registration result. Common fine registration algorithms mainly include ICP and its many variants (e.g., point-to-plane ICP, point-to-line ICP, GICP, NICP), NDT, the deep-learning-based DeepVCP, etc.
Although numerous point cloud registration algorithms exist, many problems remain to be solved compared with image registration. Most of the better-performing point cloud registration algorithms are demonstrated on small specific objects (such as the classical Stanford bunny model) or on indoor scenes within a range of a few meters. Registering indoor and outdoor point clouds of large-scale scenes remains a great challenge. Due to algorithmic complexity and memory limitations, existing registration algorithms designed for small local scenes are difficult to extend to large-scale ones.
Therefore, a new three-dimensional point cloud registration method, apparatus, electronic device, and computer-readable medium are needed.
The above information disclosed in this background section is only for enhancement of understanding of the background of the disclosure, and therefore it may contain information that does not form part of the prior art already known to a person of ordinary skill in the art.
Disclosure of Invention
In view of this, embodiments of the present disclosure provide a three-dimensional point cloud registration method and apparatus, an electronic device, and a computer-readable medium, which enable fast and accurate registration of large-scale point clouds without an initial value.
Other features and advantages of the present disclosure will be apparent from the following detailed description, or may be learned in part by the practice of the disclosure.
According to a first aspect of the embodiments of the present disclosure, a three-dimensional point cloud registration method is provided, the method including: acquiring a point cloud to be registered and a reference point cloud; determining a first projection image of the point cloud to be registered on a two-dimensional plane, the first projection image including a plurality of first pixel points, each first pixel point corresponding to at least one point cloud to be registered; determining a second projection image of the reference point cloud on the two-dimensional plane, the second projection image including a plurality of second pixel points, each second pixel point corresponding to at least one reference point cloud; determining matched pixel point pairs according to the first pixel points of the first projection image and the second pixel points of the second projection image, each matched pixel point pair including one first pixel point and one second pixel point; and determining a transformation matrix according to the at least one point cloud to be registered corresponding to the first pixel point of a matched pixel point pair and the at least one reference point cloud corresponding to the second pixel point of that pair, so as to perform a point cloud registration operation on the point cloud to be registered according to the transformation matrix.
In an exemplary embodiment of the present disclosure, determining the first projection image of the point cloud to be registered on the two-dimensional plane includes: dividing the two-dimensional plane according to the projection positions of the point cloud to be registered on the two-dimensional plane to obtain a plurality of first pixel points; determining a gray value of each first pixel point according to reflection intensity information of the at least one point cloud to be registered corresponding to the first pixel point; and generating the first projection image according to the gray values of the first pixel points.
In an exemplary embodiment of the present disclosure, determining the gray value of the first pixel point according to the reflection intensity information of the at least one point cloud to be registered corresponding to the first pixel point includes: acquiring the reflection intensity information of the point cloud to be registered; determining the minimum value in the reflection intensity information of the point cloud to be registered as the global minimum reflection intensity of the point cloud to be registered, and determining the maximum value as the global maximum reflection intensity of the point cloud to be registered; determining a representative reflection intensity of the first pixel point according to the reflection intensity information of the at least one point cloud to be registered corresponding to the first pixel point; and determining the gray value of the first pixel point according to the global minimum reflection intensity, the global maximum reflection intensity, and the representative reflection intensity of the first pixel point.
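A minimal sketch of this gray-value computation, assuming min-max normalization of a pixel's representative intensity into the 0-255 range and assuming the representative intensity is the mean of the intensities of the points falling in the pixel (both normalization and aggregate are assumptions; the embodiment only states that the gray value is derived from these three quantities):

```python
import numpy as np

def intensity_to_gray(cell_intensities, global_min, global_max):
    """Map a pixel's reflection intensities to an 8-bit gray value.

    cell_intensities: intensities of the points projected into this pixel.
    global_min / global_max: global minimum / maximum reflection intensity
    over the whole point cloud to be registered.
    The mean is used as the pixel's representative intensity (an assumption).
    """
    rep = float(np.mean(cell_intensities))
    if global_max == global_min:  # degenerate case: all intensities equal
        return 0
    # min-max normalize the representative intensity into [0, 255]
    return int(round(255.0 * (rep - global_min) / (global_max - global_min)))
```

Applying this per pixel, with the gray value written at the pixel's row-column position, yields the grayscale projection image.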
In an exemplary embodiment of the present disclosure, determining a matched pixel point pair according to a first pixel point of the first projection image and a second pixel point of the second projection image includes: performing feature extraction on the first pixel points to obtain a first feature vector of each first pixel point; performing feature extraction on the second pixel points to obtain a second feature vector of each second pixel point; determining the feature distance between the first feature vector of a first pixel point and the second feature vector of each second pixel point; and determining the first pixel point and the second pixel point with the minimum feature distance as a matched point pair.
In an exemplary embodiment of the present disclosure, the method further includes: determining the ratio of the minimum value to the second-smallest value among the feature distances; and if the feature distance ratio is larger than a feature distance ratio threshold, eliminating the matched point pair.
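The nearest-neighbor matching and ratio filtering described above can be sketched as follows (the 0.8 threshold is an assumed value; the embodiment leaves the threshold unspecified):

```python
import numpy as np

def match_with_ratio_test(desc1, desc2, ratio_thresh=0.8):
    """Match feature vectors by nearest neighbor, filtered by the ratio test.

    desc1: (N, D) feature vectors of the first projection image's pixels.
    desc2: (M, D) feature vectors of the second projection image's pixels
           (M >= 2, so a second-smallest distance exists).
    Returns a list of (i, j) index pairs that survive the ratio test.
    """
    matches = []
    for i, f in enumerate(desc1):
        dists = np.linalg.norm(desc2 - f, axis=1)  # feature distances
        order = np.argsort(dists)
        best, second = dists[order[0]], dists[order[1]]
        # keep the pair only if the best match clearly beats the runner-up
        if second > 0 and best / second <= ratio_thresh:
            matches.append((i, int(order[0])))
    return matches
```

When the smallest and second-smallest distances are nearly equal, the match is ambiguous and is discarded, which is exactly the elimination criterion of this embodiment.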
In an exemplary embodiment of the present disclosure, the method further includes: determining a homography transformation between the first projection image and the second projection image; and eliminating matched point pairs whose error after the homography transformation is larger than an error pixel threshold.
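A sketch of this homography-based mismatch filter, assuming the homography H mapping first-image pixels into the second image has already been estimated (e.g., by a RANSAC solver, which is not shown here), and an assumed 3-pixel error threshold:

```python
import numpy as np

def filter_by_homography(pts1, pts2, H, err_thresh=3.0):
    """Keep matched pixel pairs whose reprojection error under H is small.

    pts1, pts2: (N, 2) pixel coordinates of the matched pairs.
    H: 3x3 homography mapping pts1 into the second image.
    Returns a boolean mask: True where the pair is kept.
    """
    ones = np.ones((len(pts1), 1))
    proj = (H @ np.hstack([pts1, ones]).T).T  # project in homogeneous coords
    proj = proj[:, :2] / proj[:, 2:3]         # dehomogenize
    errors = np.linalg.norm(proj - pts2, axis=1)
    return errors <= err_thresh
```

Pairs for which the mask is False are the ones eliminated as mismatches.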
In an exemplary embodiment of the present disclosure, determining the transformation matrix according to the at least one point cloud to be registered corresponding to the first pixel point of the matched pixel point pair and the at least one reference point cloud corresponding to the second pixel point of the matched pixel point pair includes: determining the point with the lowest elevation value among the point clouds to be registered corresponding to the first pixel point as a first point cloud; determining the point with the lowest elevation value among the reference point clouds corresponding to the second pixel point as a second point cloud; and determining the transformation matrix by taking the first point cloud and the second point cloud as a matching point cloud pair.
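Per this embodiment, each matched pixel contributes its lowest-elevation point to the 3D correspondence; a minimal sketch:

```python
import numpy as np

def lowest_point(points_in_pixel):
    """Return the (X, Y, Z) point with the minimum elevation Z among the
    points that project into one matched pixel."""
    pts = np.asarray(points_in_pixel, dtype=float)
    return pts[np.argmin(pts[:, 2])]
```

Applying this to the first pixel's points yields the first point cloud of the matching pair, and applying it to the second pixel's points yields the second.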
According to a second aspect of the embodiments of the present disclosure, a three-dimensional point cloud registration apparatus is provided, the apparatus including: a point cloud acquisition module configured to acquire a point cloud to be registered and a reference point cloud; a first image module configured to determine a first projection image of the point cloud to be registered on a two-dimensional plane, the first projection image including a plurality of first pixel points, each first pixel point corresponding to at least one point cloud to be registered; a second image module configured to determine a second projection image of the reference point cloud on the two-dimensional plane, the second projection image including a plurality of second pixel points, each second pixel point corresponding to at least one reference point cloud; a pixel point pair module configured to determine matched pixel point pairs according to the first pixel points of the first projection image and the second pixel points of the second projection image, each matched pixel point pair including one first pixel point and one second pixel point; and a point cloud registration module configured to determine a transformation matrix according to the at least one point cloud to be registered corresponding to the first pixel point of a matched pixel point pair and the at least one reference point cloud corresponding to the second pixel point of that pair, so as to perform a point cloud registration operation on the point cloud to be registered according to the transformation matrix.
According to a third aspect of embodiments of the present disclosure, there is provided an electronic device including: one or more processors; a storage means for storing one or more programs; the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the three-dimensional point cloud registration method of any of the above.
According to a fourth aspect of embodiments of the present disclosure, a computer readable medium is presented, on which a computer program is stored, which program, when being executed by a processor, implements a three-dimensional point cloud registration method as described in any of the above.
According to the three-dimensional point cloud registration method and apparatus, electronic device, and computer-readable medium of the present disclosure, the point cloud to be registered is projected onto a two-dimensional plane to form first pixel points, from which a first projection image is generated; the reference point cloud is projected onto the two-dimensional plane to form second pixel points, from which a second projection image is generated; and matched pixel point pairs are determined using the first and second projection images. In this way, the three-dimensional point cloud registration problem is reduced to a two-dimensional image registration problem, which lowers the complexity and memory requirements of the algorithm and greatly improves registration efficiency, so that the registration algorithm can be extended to large-scale scenes and achieve fast registration of hundreds of millions of points without an initial value.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure. The drawings described below are merely examples of the present disclosure and other drawings may be obtained from these drawings without inventive effort for a person of ordinary skill in the art.
Fig. 1 is a system block diagram illustrating a three-dimensional point cloud registration method and apparatus according to an exemplary embodiment.
Fig. 2 is a flow chart illustrating a three-dimensional point cloud registration method according to an exemplary embodiment.
Fig. 3 is a flow chart illustrating a three-dimensional point cloud registration method according to an exemplary embodiment.
Fig. 4 is a diagram illustrating a point cloud registration result, according to an exemplary embodiment.
Fig. 5 is a schematic diagram illustrating a projection image generation manner according to an exemplary embodiment.
Fig. 6 is a schematic diagram of a projected image shown according to an exemplary embodiment.
Fig. 7 is an exemplary diagram of the effect of feature matching and mismatch filtering, according to an exemplary embodiment.
Fig. 8 is a flow chart illustrating a three-dimensional point cloud registration method according to an exemplary embodiment.
Fig. 9 is a flowchart illustrating a three-dimensional point cloud registration method according to an example embodiment.
Fig. 10 is a flowchart illustrating a three-dimensional point cloud registration method according to an example embodiment.
Fig. 11 is a block diagram illustrating a three-dimensional point cloud registration apparatus according to an example embodiment.
Fig. 12 schematically illustrates a block diagram of an electronic device in an exemplary embodiment of the present disclosure.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. However, the exemplary embodiments can be embodied in many forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of the example embodiments to those skilled in the art. The same reference numerals in the drawings denote the same or similar parts, and thus a repetitive description thereof will be omitted.
The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the invention. One skilled in the relevant art will recognize, however, that the invention may be practiced without one or more of the specific details, or with other methods, components, devices, steps, etc. In other instances, well-known methods, devices, implementations, or operations are not shown or described in detail to avoid obscuring aspects of the invention.
The drawings are merely schematic illustrations of the present invention, in which like reference numerals denote the same or similar parts, and thus a repetitive description thereof will be omitted. Some of the block diagrams shown in the figures do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in software or in one or more hardware modules or integrated circuits or in different networks and/or processor devices and/or microcontroller devices.
The flow diagrams depicted in the figures are exemplary only, and not necessarily all of the elements or steps are included or performed in the order described. For example, some steps may be decomposed, and some steps may be combined or partially combined, so that the order of actual execution may be changed according to actual situations.
The following describes example embodiments of the invention in detail with reference to the accompanying drawings.
Fig. 1 is a system block diagram illustrating a three-dimensional point cloud registration method and apparatus according to an exemplary embodiment.
The server 105 may be a server providing various services, such as a background management server (by way of example only) providing support for a three-dimensional point cloud registration system operated by a user with the terminal devices 101, 102, 103. The background management server may analyze and process the received data such as the three-dimensional point cloud registration request, and feed back the processing result (for example, a transformation matrix—only an example) to the terminal device.
The server 105 may, for example, obtain a point cloud and a reference point cloud to be registered; the server 105 may, for example, determine a first projection image of the point cloud to be registered in a two-dimensional plane, where the first projection image includes a plurality of first pixel points, and each first pixel point corresponds to at least one point cloud to be registered; server 105 may, for example, determine a second projected image of the reference point cloud in a two-dimensional plane, the second projected image comprising a plurality of second pixel points, each second pixel point corresponding to at least one reference point cloud. The server 105 may determine matched pixel point pairs, each comprising a first pixel point and a second pixel point, for example, from a first pixel point of the first projection image and a second pixel point in the second projection image. The server 105 may determine a transformation matrix according to at least one point cloud to be registered corresponding to a first pixel point in the matched pixel points and at least one reference point cloud corresponding to a second pixel point in the matched pixel points, for example, so as to perform a point cloud registration operation on the point cloud to be registered according to the transformation matrix.
The server 105 may be a physical server and may also be composed of a plurality of servers. For example, part of the server 105 may serve as the three-dimensional point cloud registration task submission system of the present disclosure, for obtaining tasks to be executed that carry a three-dimensional point cloud registration command; and part of the server 105 may serve as the three-dimensional point cloud registration system of the present disclosure, for acquiring the point cloud to be registered and the reference point cloud; determining a first projection image of the point cloud to be registered on a two-dimensional plane, the first projection image including a plurality of first pixel points, each first pixel point corresponding to at least one point cloud to be registered; determining a second projection image of the reference point cloud on the two-dimensional plane, the second projection image including a plurality of second pixel points, each second pixel point corresponding to at least one reference point cloud; determining matched pixel point pairs according to the first pixel points of the first projection image and the second pixel points of the second projection image, each matched pixel point pair including one first pixel point and one second pixel point; and determining a transformation matrix according to the at least one point cloud to be registered corresponding to the first pixel point of a matched pixel point pair and the at least one reference point cloud corresponding to the second pixel point of that pair, so as to perform a point cloud registration operation on the point cloud to be registered according to the transformation matrix.
Fig. 2 is a flow chart illustrating a three-dimensional point cloud registration method according to an exemplary embodiment. The three-dimensional point cloud registration method provided in the embodiments of the present disclosure may be performed by any electronic device having computing processing capability, for example, the terminal devices 101, 102, 103 and/or the server 105, and in the following embodiments, the server execution method is exemplified, but the present disclosure is not limited thereto. The three-dimensional point cloud registration method 20 provided by the embodiment of the present disclosure may include steps S202 to S208.
As shown in fig. 2, in step S202, a point cloud to be registered and a reference point cloud are acquired.
In the embodiment of the disclosure, the point cloud to be registered and the reference point cloud may be local point cloud data acquired by different perspectives and/or different periods and/or different devices. For example, a point cloud and a reference point cloud to be registered of a wide range of indoor and outdoor scenes can be acquired by an acquisition device. The point cloud to be registered and the reference point cloud may include coordinate data and reflection intensity information in a three-dimensional coordinate system, respectively.
In step S204, a first projection image of the point cloud to be registered in the two-dimensional plane is determined, where the first projection image includes a plurality of first pixel points, and each first pixel point corresponds to at least one point cloud to be registered.
In the embodiment of the present disclosure, the point cloud to be registered has three-dimensional coordinate information and can be projected onto a two-dimensional plane according to this coordinate information to form a plurality of first pixel points. Each point of the point cloud to be registered is projected onto one first pixel point, and each first pixel point can correspond to at least one point cloud to be registered. In one implementation, the plane spanned by two of the three coordinate dimensions of the point cloud to be registered may be taken as the two-dimensional plane. For example, the coordinate information of the point cloud to be registered includes three dimensions X, Y, and Z, where the Z coordinate direction tends to be consistent with the gravity direction, and the X and Y coordinate directions are perpendicular to it. The plane in which the X and Y coordinate dimensions lie can then be determined as the two-dimensional plane (hereinafter the XY plane), and the point cloud to be registered is orthographically projected onto the XY plane, thereby reducing its dimensionality.
In step S206, a second projection image of the reference point cloud on the two-dimensional plane is determined, where the second projection image includes a plurality of second pixel points, and each of the second pixel points corresponds to at least one reference point cloud.
In the embodiment of the present disclosure, the second projection image may be determined from the reference point cloud in a manner similar to the generation of the first projection image, which is not repeated here.
In step S208, a pair of matched pixels is determined according to the first pixel of the first projection image and the second pixel of the second projection image, where each pair of matched pixels includes a first pixel and a second pixel.
In the embodiment of the present disclosure, feature extraction and feature matching are performed on the pixel points (the first pixel points and the second pixel points) of the projection images (the first projection image and the second projection image) to obtain successfully matched pixel point pairs.
In step S210, a transformation matrix is determined according to at least one point cloud to be registered corresponding to a first pixel point in the matched pixel points and at least one reference point cloud corresponding to a second pixel point in the matched pixel points, so as to perform a point cloud registration operation on the point cloud to be registered according to the transformation matrix.
In the embodiment of the present disclosure, the first pixel point and the second pixel point in a matched pixel point pair can be restored to a target point cloud to be registered and a target reference point cloud, respectively, and the target point cloud to be registered and the target reference point cloud are determined as a matching point cloud pair, so that the transformation matrix is determined using the matching point cloud pair. When determining the target point cloud to be registered from the first pixel point, the coordinate information of the at least one point cloud to be registered corresponding to the first pixel point can be taken as the basis; likewise, when restoring the second pixel point to the target reference point cloud, the coordinate information of the at least one reference point cloud corresponding to the second pixel point can be taken as the basis. Point cloud acquisition devices are generally set up so that the Z coordinate direction tends to be consistent with the gravity direction. Based on this prior, and taking the determination of the target point cloud to be registered from a first pixel point in the XY plane as an example, the X and Y coordinates of the center of the first pixel point can be taken as the X and Y coordinate values of the restored target point cloud to be registered, and the Z coordinate of the lowest point (i.e., the minimum Z value) among the at least one point cloud to be registered corresponding to the first pixel point can be taken as the elevation value of the restored target point cloud to be registered. The transformation matrix is the rotation-translation matrix between the target point cloud to be registered and the target reference point cloud; its purpose is to transform the target point cloud to be registered into the coordinate system of the target reference point cloud.
The transformed coordinates can be expressed by the following equation:

p_t = R · p_s + T (1)

where p_t and p_s are the target reference point cloud and the target point cloud to be registered in the same matching point cloud pair, respectively, and R and T are the rotation matrix and the translation matrix, respectively, collectively referred to as the transformation matrix described above.
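Equation (1) can be sketched in a few lines of NumPy. The rotation and translation values below are illustrative only, not values from the disclosure:

```python
import numpy as np

# Equation (1): p_t = R @ p_s + T, applied to every point at once.
# R (3x3 rotation) and T (3-vector translation) are made-up example values.
def apply_transform(points, R, T):
    """Transform an (N, 3) array of source points into the target frame."""
    return points @ R.T + T

# 90-degree rotation about the Z axis plus a unit shift along X.
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
T = np.array([1.0, 0.0, 0.0])

p_s = np.array([[1.0, 0.0, 0.0]])
p_t = apply_transform(p_s, R, T)   # -> [[1.0, 1.0, 0.0]]
```

Applying the inverse transform `apply_transform(p_t, R.T, -R.T @ T)` returns the original source point, which is a quick sanity check for the sign conventions.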
According to the three-dimensional point cloud registration method provided by the embodiment of the disclosure, the point cloud to be registered is projected onto a two-dimensional plane as first pixel points, from which a first projection image is generated, and the reference point cloud is projected onto the two-dimensional plane as second pixel points, from which a second projection image is generated; matched pixel point pairs are then determined from the first projection image and the second projection image. In this way the three-dimensional point cloud registration problem is reduced to a two-dimensional image registration problem, which lowers the complexity and memory requirements of the algorithm and greatly improves registration efficiency, so that the registration algorithm can be extended to large-scale scenes and initial-value-free rapid registration of hundreds of millions of points can be realized.
Fig. 3 is a flow chart illustrating a three-dimensional point cloud registration method according to an exemplary embodiment. The three-dimensional point cloud registration method 30 may include steps S302 to S306 when determining a first projection image of a point cloud to be registered in a two-dimensional plane.
As shown in fig. 3, in step S302, a two-dimensional plane is divided according to a projection position of a point cloud to be registered on the two-dimensional plane, so as to obtain a plurality of first pixel points.
In the embodiment of the present disclosure, fig. 5 may be taken as an example. When determining the first projection image, each grid cell containing three-dimensional points in fig. 5 (the indicated pixel points) is a first pixel point; likewise, when determining the second projection image, each such pixel point represents a second pixel point. To obtain the plurality of first pixel points, each point cloud to be registered can be projected onto the XY plane according to its (X, Y) coordinates, and then the minimum values X_min, Y_min and the maximum values X_max, Y_max of the projection points along the X axis and the Y axis are counted. After this bounding extent is obtained, the XY plane is divided evenly within it, starting from (X_min, Y_min) with interval s, to form a plurality of first pixel points. Each first pixel point can be identified by a row-column number (i, j). For example, the pixel point in the lower left corner of fig. 5 is denoted (0, 0), and the geographic range it covers is (X_min, Y_min) → (X_min + s, Y_min + s). The pixel point (i, j) in which each three-dimensional point (X, Y, Z) of the point cloud to be registered lies (a first pixel point in this embodiment) can be calculated in turn according to the following formula (2), and the correspondence between each spatial three-dimensional point and the pixel point in which it lies is established, so as to obtain an index table of three-dimensional points and pixel points:

i = ⌊(X - X_min) / s⌋, j = ⌊(Y - Y_min) / s⌋ (2)

where ⌊·⌋ represents rounding down.
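The grid division and the index table of formula (2) can be sketched as follows; the point coordinates and grid interval s are invented for illustration:

```python
import numpy as np

# Formula (2): a point (X, Y, Z) falls into pixel
#   i = floor((X - Xmin) / s),  j = floor((Y - Ymin) / s).
def build_pixel_index(points, s):
    """Map each 3-D point to its (i, j) pixel and build the index table."""
    xy_min = points[:, :2].min(axis=0)            # (Xmin, Ymin)
    ij = np.floor((points[:, :2] - xy_min) / s).astype(int)
    index = {}                                    # (i, j) -> list of point ids
    for k, key in enumerate(map(tuple, ij)):
        index.setdefault(key, []).append(k)
    return xy_min, index

pts = np.array([[0.2, 0.3, 5.0],
                [0.8, 0.4, 4.0],
                [2.6, 2.1, 6.0]])
xy_min, index = build_pixel_index(pts, s=1.0)
# the first two points share pixel (0, 0); the third lands in pixel (2, 1)
```

The same routine applies unchanged to the reference point cloud when building the second projection image.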
In step S304, a gray value of the first pixel is determined according to the reflection intensity information of at least one point cloud to be registered corresponding to the first pixel.
In the embodiment of the disclosure, for each first pixel point, the reflection intensity information of the point cloud to be registered that has the lowest elevation value among the at least one point cloud to be registered corresponding to that pixel point may be determined as the gray value of the first pixel point. When the two-dimensional plane is the XY plane, the lowest point cloud to be registered among the at least one point cloud to be registered corresponding to the first pixel point is the one with the minimum Z coordinate value.
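A minimal sketch of this lowest-point selection; the tuple layout and sample values are illustrative, not from the disclosure:

```python
# For each pixel, take the reflection intensity of the point with the
# smallest Z among the points mapped to it.
def pixel_intensity(points, point_ids):
    """points: list of (x, y, z, intensity); point_ids: indices in one pixel."""
    lowest = min(point_ids, key=lambda k: points[k][2])   # smallest Z wins
    return points[lowest][3]

pts = [(0.1, 0.2, 5.0, 30.0),
       (0.3, 0.4, 2.0, 80.0),   # lowest point in the pixel
       (0.5, 0.1, 7.0, 10.0)]
gray_source = pixel_intensity(pts, [0, 1, 2])   # -> 80.0
```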
In step S306, a first projection image is generated according to the gray value of the first pixel point.
In the embodiment of the disclosure, the gray value of each first pixel point can be used as its gray expression in the first projection image, so as to generate the first projection image. An example of the first projection image may be as shown in fig. 6 (b).
In an exemplary embodiment, in performing step S304, the following steps S3041-S3044 may be included:
step S3041, obtaining reflection intensity information of the point cloud to be registered.
Step S3042, determining the minimum value in the reflection intensity information of the point cloud to be registered as the global minimum reflection intensity of the point cloud to be registered, and determining the maximum value in the reflection intensity information of the point cloud to be registered as the global maximum reflection intensity of the point cloud to be registered.
Step S3043, determining the reflection intensity of the first pixel point according to the reflection intensity information of the at least one point cloud to be registered corresponding to the first pixel point.

Step S3044, determining the gray value of the first pixel point according to the global minimum reflection intensity of the point cloud to be registered, the global maximum reflection intensity of the point cloud to be registered, and the reflection intensity of the first pixel point.
Because the reflection intensities of different lasers differ considerably, which is unfavorable for subsequent feature matching, after the reflection intensity information of each first pixel point is obtained, the reflection intensity information of the at least one point cloud to be registered corresponding to the first pixel point can be normalized according to formula (3):

v_(i,j) = 255 · (I_(i,j) - I_min) / (I_max - I_min) (3)

where I_(i,j) is the reflection intensity of the first pixel point before normalization, I_max and I_min are the maximum and minimum reflection intensity values over all point clouds to be registered (i.e. the point clouds to be registered in the whole acquired range), and v_(i,j) is the reflection intensity of the first pixel point after normalization.
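Formula (3) maps the raw intensities into the 0–255 gray range; a sketch with invented sample intensities:

```python
# Formula (3): v = 255 * (I - Imin) / (Imax - Imin), with Imin/Imax taken
# over the whole acquired point cloud, not per pixel.
def normalize_intensity(I, I_min, I_max):
    return 255.0 * (I - I_min) / (I_max - I_min)

intensities = [12.0, 30.0, 57.0]
I_min, I_max = min(intensities), max(intensities)
grays = [normalize_intensity(I, I_min, I_max) for I in intensities]
# -> [0.0, 102.0, 255.0]
```

Using the global extrema (rather than per-pixel extrema) keeps the gray scale consistent across the whole projection image, which is what makes the subsequent feature matching meaningful.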
The embodiment shown in fig. 3 of the present disclosure provides a manner of generating the first projection image; the second projection image may be generated by steps similar to steps S302 to S306 or steps S3041 to S3044 of the embodiment of the present disclosure. An example of the second projection image may be as shown in fig. 6 (a). For example, when determining the second projection image of the reference point cloud on the two-dimensional plane, the two-dimensional plane may be divided according to the projection positions of the reference point cloud on the two-dimensional plane to obtain a plurality of second pixel points; the gray value of each second pixel point is determined according to the reflection intensity information of the at least one reference point cloud corresponding to the second pixel point; and the second projection image is generated according to the gray values of the second pixel points.

For another example, when the gray value of a second pixel point is determined according to the reflection intensity information of the at least one reference point cloud corresponding to that second pixel point, the reflection intensity information of the reference point cloud can first be obtained; the minimum value in the reflection intensity information of the reference point cloud is determined as the global minimum reflection intensity of the reference point cloud, and the maximum value as the global maximum reflection intensity of the reference point cloud; the reflection intensity of the second pixel point is determined according to the reflection intensity information of the at least one reference point cloud corresponding to the second pixel point; and the gray value of the second pixel point is determined according to the global minimum reflection intensity of the reference point cloud, the global maximum reflection intensity of the reference point cloud, and the reflection intensity of the second pixel point.
Fig. 8 is a flow chart illustrating a three-dimensional point cloud registration method according to an exemplary embodiment. The three-dimensional point cloud registration method 80 may include steps S802 to S808 when determining a matching pixel point pair from a first pixel point of a first projection image and a second pixel point of a second projection image.
In step S802, feature extraction is performed on the first pixel point, so as to obtain a first feature vector of the first pixel point.
In step S804, feature extraction is performed on the second pixel point, so as to obtain a second feature vector of the second pixel point.
In step S806, the feature distance between the first feature vector of the first pixel and the second feature vector of each second pixel is determined.
In the embodiment of the disclosure, suppose that for the first pixel point A1, all the second pixel points in the second projection image are A2, B2, and C2; then the feature distance S1 between A1 and A2, the feature distance S2 between A1 and B2, and the feature distance S3 between A1 and C2 may be determined in turn.
In step S808, the first pixel point and the second pixel point that have the smallest feature distance are determined as the matching point pair.
In the embodiment of the disclosure, following the foregoing example, the first pixel point and the second pixel point corresponding to the minimum feature distance in S1, S2, and S3 may be determined as the matching point pair. For example, when S1 is the minimum of S1, S2, S3, A1 and A2 may be determined as the matching point pair.
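The nearest-neighbour step of S806–S808 can be sketched as follows; the descriptors are toy 2-D vectors, not real feature output:

```python
import numpy as np

# For each first-image descriptor, pick the second-image descriptor at the
# smallest Euclidean feature distance (steps S806 and S808).
def match_features(desc1, desc2):
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)
        matches.append((i, int(np.argmin(dists))))
    return matches

A1 = np.array([[1.0, 1.0]])
second = np.array([[1.1, 0.9],    # A2 (closest to A1)
                   [5.0, 5.0],    # B2
                   [9.0, 0.0]])   # C2
matches = match_features(A1, second)   # -> [(0, 0)], i.e. A1 matched with A2
```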
In the embodiment of the disclosure, the Speeded Up Robust Features (SURF) algorithm, which performs feature point detection and descriptor calculation while remaining invariant to image scaling, rotation, and affine transformation, can be adopted. SURF is an improvement on the Scale-Invariant Feature Transform (SIFT) algorithm: on the premise of ensuring matching precision and robustness, it is several times more efficient than SIFT. The whole matching flow comprises six parts: Hessian matrix construction, scale space construction, feature point localization, feature point main direction calculation, feature descriptor generation, and KD-tree-based feature vector matching. In the exemplary embodiment, a bidirectional matching strategy can further be adopted to obtain more matching point pairs, thereby improving matching robustness.
In an exemplary embodiment, the feature distance ratio of the minimum value to the next-smallest value of the feature distances may be determined; if the feature distance ratio is larger than a feature distance ratio threshold, the matching point pair is eliminated. Following the foregoing example, when S1 < S2 < S3, S1 is the minimum feature distance, S2 is the next-smallest, and the feature distance ratio is S1/S2. When the ratio S1/S2 is greater than the threshold, it is considered that, for the first pixel point A1, both second pixel points A2 and B2 could plausibly form the matching point pair; in this case the reliability of the matching point pair formed directly by A1 and A2 is low, so the pair (A1, A2) can be eliminated to improve matching robustness.
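This ratio test can be sketched as below. The 0.8 threshold is a commonly used default, not a value stated in the disclosure:

```python
import numpy as np

# Ratio test: reject a match when the smallest feature distance is not
# clearly below the second smallest (ambiguous candidates).
def passes_ratio_test(dists, ratio_threshold=0.8):
    s1, s2 = np.sort(dists)[:2]      # minimum and next-smallest distance
    return s1 / s2 <= ratio_threshold

keep = passes_ratio_test([0.2, 0.9, 1.4])     # True  (unambiguous: keep)
drop = passes_ratio_test([0.8, 0.9, 1.4])     # False (ambiguous: reject)
```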
In an exemplary embodiment, a homography transformation between the first projection image and the second projection image may also be determined, and matching point pairs whose error after the homography transformation is larger than an error pixel threshold are eliminated. For example, a random sample consensus (RANSAC) algorithm may be used to establish the homography transformation between the first projection image and the second projection image, and matching point pairs whose errors after homography projection exceed the error pixel threshold are determined to be outliers and filtered out. Fig. 7 is an exemplary diagram of the effect of feature matching and mismatch filtering according to an exemplary illustration. In this embodiment, the elimination is performed using a RANSAC algorithm based on a three-dimensional Euclidean transformation. Finally, based on all inlier matching pairs remaining after mismatch elimination, through this effective feature matching and multi-level gross-error elimination mechanism, centimeter-level accurate registration of a large-range point cloud can be realized.
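The RANSAC inlier/outlier logic can be sketched as follows. For brevity the model here is a pure 2-D translation rather than the full homography used in the disclosure; the sampling, residual, and consensus steps are the same in shape. The threshold and sample points are invented:

```python
import numpy as np

# Minimal RANSAC sketch with a 1-point translation model: repeatedly fit the
# model from a random sample, count inliers within the pixel threshold, and
# keep the model with the largest consensus set.
def ransac_translation(src, dst, threshold=1.0, iterations=50, seed=0):
    rng = np.random.default_rng(seed)
    best_t, best_inliers = None, np.zeros(len(src), dtype=bool)
    for _ in range(iterations):
        k = rng.integers(len(src))            # minimal sample: one pair
        t = dst[k] - src[k]
        residuals = np.linalg.norm(src + t - dst, axis=1)
        inliers = residuals < threshold
        if inliers.sum() > best_inliers.sum():
            best_t, best_inliers = t, inliers
    return best_t, best_inliers

src = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 1.0], [5.0, 5.0]])
dst = src + np.array([3.0, 4.0])
dst[3] = [0.0, 0.0]                           # one gross outlier
t, inliers = ransac_translation(src, dst)
# t -> [3.0, 4.0]; the outlier pair is flagged False in `inliers`
```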
Fig. 9 is a flowchart illustrating a three-dimensional point cloud registration method according to an example embodiment. The three-dimensional point cloud registration method 90 may include steps S902 to S906 when determining a transformation matrix according to at least one point cloud to be registered corresponding to a first pixel point of the matched pixel points and at least one reference point cloud corresponding to a second pixel point of the matched pixel points.
In step S902, a point cloud to be registered having the lowest elevation value in point clouds to be registered corresponding to a first pixel point in the matched pixel points is determined as the first point cloud.
In step S904, a reference point cloud having the lowest elevation value among the reference point clouds corresponding to the second pixel points among the matched pixel points is determined as the second point cloud.
In step S906, a transformation matrix is determined using the first point cloud and the second point cloud as a matching point cloud pair.
A Levenberg-Marquardt (LM) algorithm may be adopted to iteratively calculate the 6-degree-of-freedom transformation matrix between the two point clouds in each matching point cloud pair, and the calculated transformation matrix is then applied to all three-dimensional points in the point cloud to be registered to realize three-dimensional registration between the point clouds.
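The disclosure uses LM iteration for this estimate. As an illustrative stand-in, the closed-form Kabsch/SVD solution below recovers the same kind of 6-degree-of-freedom rigid transform from matched point cloud pairs; the sample points and transform are invented:

```python
import numpy as np

# Closed-form rigid-transform estimation (Kabsch): solve for R, T minimizing
# sum ||R @ src_i + T - dst_i||^2 over the matching point cloud pairs.
def estimate_rigid_transform(src, dst):
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)       # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                  # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    T = c_dst - R @ c_src
    return R, T

src = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0],
                [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
true_R = np.array([[0.0, -1.0, 0.0],
                   [1.0,  0.0, 0.0],
                   [0.0,  0.0, 1.0]])
true_T = np.array([2.0, -1.0, 0.5])
dst = src @ true_R.T + true_T                 # noise-free matched pairs
R, T = estimate_rigid_transform(src, dst)     # recovers true_R, true_T
```

With noise-free correspondences the closed form is exact; with noisy matches it serves as a good initialization for an LM refinement.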
The embodiment of the disclosure lifts the two-dimensional first pixel point and second pixel point to the first point cloud and the second point cloud in three-dimensional space, respectively, and determines the transformation matrix based on the first point cloud and the second point cloud. The matching point pairs obtained in the two-dimensional image registration can thus be converted into three-dimensional matching point cloud pairs, so that the three-dimensional point cloud registration problem is converted into a two-dimensional image registration problem and rapid registration without an initial value is realized.
In an exemplary embodiment, when determining the transformation matrix according to the at least one point cloud to be registered corresponding to a first pixel point among the matched pixel points and the at least one reference point cloud corresponding to a second pixel point among the matched pixel points, a matching point cloud pair may further be obtained from the matching point pair according to formula (4):

X_(i,j) = X_min + (i + 0.5) · s, Y_(i,j) = Y_min + (j + 0.5) · s (4)

where (i, j) is the coordinate of the pixel point (the first pixel point when determining the first point cloud, and the second pixel point when determining the second point cloud), and (X_(i,j), Y_(i,j)) are the X and Y coordinates of the corresponding point in three-dimensional space, i.e. the center of the pixel point. The third coordinate Z_(i,j) may be the elevation value (i.e., the Z coordinate value) of the three-dimensional point with the smallest Z coordinate value among the point clouds to be registered or reference point clouds corresponding to the pixel point, as shown in fig. 5.
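Lifting a pixel back to a three-dimensional point per formula (4) can be sketched as below; the grid origin, interval s, and sample Z values are illustrative:

```python
# Formula (4) sketch: lift pixel (i, j) to a 3-D point at the pixel centre,
# using the lowest point in the pixel for the Z value.
def lift_pixel(i, j, x_min, y_min, s, z_values):
    X = x_min + (i + 0.5) * s
    Y = y_min + (j + 0.5) * s
    Z = min(z_values)              # elevation of the lowest point in the pixel
    return (X, Y, Z)

lifted = lift_pixel(2, 1, x_min=0.0, y_min=0.0, s=1.0,
                    z_values=[6.0, 4.5, 5.2])
# -> (2.5, 1.5, 4.5)
```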
Fig. 10 is a flowchart illustrating a three-dimensional point cloud registration method according to an example embodiment. The three-dimensional point cloud registration method 100 provided in the embodiments of the present disclosure may include three technical links: point cloud projection 1002, feature extraction and matching 1004, and three-dimensional spatial transformation 1006. Point cloud projection 1002 mainly completes the mapping from the three-dimensional point clouds (point cloud to be registered and reference point cloud) to the two-dimensional images (first projection image and second projection image), where the grid index is the correspondence between each grid cell (i.e., first pixel point or second pixel point) and the point cloud to be registered or the reference point cloud. Feature extraction and matching 1004 calculates corresponding (same-name) pixel point pairs from the mapped two-dimensional images, and registration of the two groups of point clouds is finally completed through three-dimensional mapping and transformation of these corresponding pixel point pairs. As a specific example, fig. 4 shows the effect of registering the point cloud to be registered with the reference point cloud: fig. 4 (a) is the reference point cloud, fig. 4 (b) is the point cloud to be registered, and fig. 4 (c) is the point cloud after registration. Aiming at the registration problem of large-scale three-dimensional dense point clouds, the three-dimensional point cloud registration method provided by the embodiment of the disclosure converts the complex three-dimensional point cloud registration problem into a two-dimensional image registration problem, providing an orthographic-projection-based three-dimensional point cloud registration method that requires no initial value.
A large number of experiments prove that the method can effectively solve the problem of registering three-dimensional dense point clouds over a large range (kilometer level). In particular, compared with the prior art, the method provided by the invention has obvious advantages in registration efficiency and registration accuracy. In terms of efficiency, converting the three-dimensional point cloud registration problem into a two-dimensional image registration problem greatly improves registration efficiency and realizes rapid, initial-value-free registration of hundreds of millions of points. In terms of registration accuracy, the invention designs an effective feature matching and multi-level gross-error elimination mechanism based on the RANSAC algorithm, realizing centimeter-level accurate registration of large-range point clouds.
It should be clearly understood that this disclosure describes how to make and use particular examples, but the principles of this disclosure are not limited to any details of these examples. Rather, these principles can be applied to many other embodiments based on the teachings of the present disclosure.
Those skilled in the art will appreciate that all or part of the steps implementing the above embodiments are implemented as computer programs executed by a central processing unit (CPU). When such a computer program is executed by the CPU, the above-described functions defined by the method provided by the present disclosure are performed. The program may be stored in a computer-readable storage medium, which may be a read-only memory, a magnetic disk, an optical disk, or the like.
Furthermore, it should be noted that the above-described figures are merely illustrative of the processes involved in the method according to the exemplary embodiments of the present disclosure, and are not intended to be limiting. It will be readily appreciated that the processes shown in the above figures do not indicate or limit the temporal order of these processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, for example, among a plurality of modules.
The following are device embodiments of the present disclosure that may be used to perform method embodiments of the present disclosure. For details not disclosed in the embodiments of the apparatus of the present disclosure, please refer to the embodiments of the method of the present disclosure.
Fig. 11 is a block diagram illustrating a three-dimensional point cloud registration apparatus according to an example embodiment. Referring to fig. 11, a three-dimensional point cloud registration apparatus 1100 provided by an embodiment of the present disclosure may include: a point cloud acquisition module 1102, a first image module 1104, a second image module 1106, a pixel point module 1108, and a point cloud registration module 1110.
In the three-dimensional point cloud registration apparatus 1100, the point cloud acquisition module 1102 may be configured to acquire a point cloud and a reference point cloud to be registered.
The first image module 1104 may be configured to determine a first projection image of the point cloud to be registered in the two-dimensional plane, the first projection image including a plurality of first pixel points, each corresponding to at least one point cloud to be registered.
The second image module 1106 may be configured to determine a second projected image of the reference point cloud in the two-dimensional plane, the second projected image including a plurality of second pixels, each second pixel corresponding to at least one reference point cloud.
The pixel pair module 1108 may be configured to determine matched pixel pairs from the first pixel of the first projection image and the second pixel of the second projection image, each matched pixel pair including a first pixel and a second pixel.
The point cloud registration module 1110 may be configured to determine a transformation matrix according to at least one point cloud to be registered corresponding to a first pixel point in the matched pixel points and at least one reference point cloud corresponding to a second pixel point in the matched pixel points, so as to perform a point cloud registration operation on the point cloud to be registered according to the transformation matrix.
According to the three-dimensional point cloud registration device provided by the embodiment of the disclosure, the point cloud to be registered is projected onto a two-dimensional plane as first pixel points, from which a first projection image is generated, and the reference point cloud is projected onto the two-dimensional plane as second pixel points, from which a second projection image is generated; matched pixel point pairs are then determined from the first projection image and the second projection image. In this way the three-dimensional point cloud registration problem is reduced to a two-dimensional image registration problem, which lowers the complexity and memory requirements of the algorithm and greatly improves registration efficiency, so that the registration algorithm can be extended to large-scale scenes and initial-value-free rapid registration of hundreds of millions of points can be realized.
In an exemplary embodiment, the first image module may include: the plane dividing sub-module can be configured to divide the two-dimensional plane according to the projection position of the point cloud to be registered on the two-dimensional plane to obtain a plurality of first pixel points; the gray value sub-module can be configured to determine the gray value of the first pixel point according to the reflection intensity information of at least one point cloud to be registered corresponding to the first pixel point; and a first image sub-module configurable to generate a first projection image from the gray value of the first pixel point.
In an exemplary embodiment, the gray value sub-module may include: a reflection intensity acquisition unit, configurable to acquire reflection intensity information of the point cloud to be registered; a global reflection intensity unit, configurable to determine the minimum value in the reflection intensity information of the point cloud to be registered as the global minimum reflection intensity of the point cloud to be registered, and the maximum value as the global maximum reflection intensity of the point cloud to be registered; a pixel reflection intensity unit, configurable to determine the reflection intensity of the first pixel point according to the reflection intensity information of the at least one point cloud to be registered corresponding to the first pixel point; and a gray value unit, configurable to determine the gray value of the first pixel point according to the global minimum reflection intensity of the point cloud to be registered, the global maximum reflection intensity of the point cloud to be registered, and the reflection intensity of the first pixel point.
In an exemplary embodiment, the pixel point pair module may include: a first feature vector sub-module, configurable to perform feature extraction on the first pixel point to obtain a first feature vector of the first pixel point; a second feature vector sub-module, configurable to perform feature extraction on the second pixel points to obtain second feature vectors of the second pixel points; a feature distance sub-module, configurable to determine the feature distance between the first feature vector of a first pixel point and the second feature vector of each second pixel point; and a matching point pair sub-module, configurable to determine the first pixel point and the second pixel point with the minimum feature distance as a matching point pair.
In an exemplary embodiment, the three-dimensional point cloud registration apparatus may further include: the characteristic distance ratio module is configured to determine a characteristic distance ratio of a minimum value and a second minimum value in the characteristic distances; the first matching point pair eliminating module may be configured to eliminate the matching point pair if the feature distance ratio is greater than the feature distance ratio threshold.
In an exemplary embodiment, the three-dimensional point cloud registration apparatus may further include: a homography transformation module configurable to determine homography transformations of the first projected image and the second projected image; and the second matching point pair eliminating module can be configured to eliminate matching point pairs with errors greater than the error pixel threshold value after homography conversion.
In an exemplary embodiment, the point cloud registration module may include: the first point cloud submodule can be configured to determine a point cloud to be registered, which has the lowest elevation value, in point clouds to be registered corresponding to a first pixel point in the matched pixel points as a first point cloud; the second point cloud submodule can be configured to determine a reference point cloud with the lowest elevation value in the reference point clouds corresponding to the second pixel points in the matched pixel points as a second point cloud; the transformation matrix sub-module may be configured to determine the transformation matrix using the first point cloud and the second point cloud as a matching point cloud pair.
An electronic device 1200 according to this embodiment of the present invention is described below with reference to fig. 12. The electronic device 1200 shown in fig. 12 is merely an example, and should not be construed as limiting the functionality and scope of use of embodiments of the present invention.
As shown in fig. 12, the electronic device 1200 is in the form of a general purpose computing device. Components of electronic device 1200 may include, but are not limited to: the at least one processing unit 1210, the at least one memory unit 1220, and a bus 1230 connecting the different system components (including the memory unit 1220 and the processing unit 1210).
Wherein the storage unit stores program code that is executable by the processing unit 1210 such that the processing unit 1210 performs steps according to various exemplary embodiments of the present invention described in the above-described "exemplary methods" section of the present specification. For example, the processing unit 1210 may perform step S202 shown in fig. 1, to obtain a point cloud to be registered and a reference point cloud; step S204, determining a first projection image of a point cloud to be registered in a two-dimensional plane, wherein the first projection image comprises a plurality of first pixel points, and each first pixel point corresponds to at least one point cloud to be registered; step S206, determining a second projection image of the datum point cloud on the two-dimensional plane, wherein the second projection image comprises a plurality of second pixel points, and each second pixel point corresponds to at least one datum point cloud; step S208, determining matched pixel point pairs according to the first pixel point of the first projection image and the second pixel point of the second projection image, wherein each matched pixel point pair comprises a first pixel point and a second pixel point; step S210, determining a transformation matrix according to at least one point cloud to be registered corresponding to a first pixel point in the matched pixel points and at least one datum point cloud corresponding to a second pixel point in the matched pixel points, so as to perform point cloud registration operation on the point cloud to be registered according to the transformation matrix.
The storage unit 1220 may include a readable medium in the form of a volatile storage unit, such as a Random Access Memory (RAM) 12201 and/or a cache memory 12202, and may further include a Read Only Memory (ROM) 12203.
Storage unit 1220 may also include a program/utility 12204 having a set (at least one) of program modules 12205, such program modules 12205 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment.
Bus 1230 may be a local bus representing one or more of several types of bus structures including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, a processing unit, or using any of a variety of bus architectures.
The electronic device 1200 may also communicate with one or more external devices 1300 (e.g., keyboard, pointing device, bluetooth device, etc.), one or more devices that enable a user to interact with the electronic device 1200, and/or any device (e.g., router, modem, etc.) that enables the electronic device 1200 to communicate with one or more other computing devices. Such communication may occur through an input/output (I/O) interface 1250. Also, the electronic device 1200 may communicate with one or more networks such as a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network such as the internet through the network adapter 1260. As shown, the network adapter 1260 communicates with other modules of the electronic device 1200 over bus 1230. It should be appreciated that although not shown, other hardware and/or software modules may be used in connection with electronic device 1200, including, but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
From the above description of the embodiments, those skilled in the art will readily appreciate that the example embodiments described herein may be implemented in software, or in software in combination with the necessary hardware. Thus, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a USB flash drive, a portable hard disk, etc.) or on a network, and which includes several instructions to cause a computing device (which may be a personal computer, a server, a terminal device, a network device, etc.) to perform the method according to the embodiments of the present disclosure.
In an exemplary embodiment of the present disclosure, a computer-readable storage medium having stored thereon a program product capable of implementing the method described above in the present specification is also provided. In some possible embodiments, the various aspects of the invention may also be implemented in the form of a program product comprising program code for causing a terminal device to carry out the steps according to the various exemplary embodiments of the invention as described in the "exemplary methods" section of this specification, when said program product is run on the terminal device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium can be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include the following: an electrical connection having one or more wires, a portable disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The computer readable signal medium may include a data signal propagated in baseband or as part of a carrier wave with readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device, partly on a remote computing device, or entirely on the remote computing device or server. In the case of remote computing devices, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., connected via the Internet using an Internet service provider).
Furthermore, the above-described drawings are only schematic illustrations of processes included in the method according to the exemplary embodiment of the present invention, and are not intended to be limiting. It will be readily appreciated that the processes shown in the above figures do not indicate or limit the temporal order of these processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, for example, among a plurality of modules.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.

Claims (9)

1. A method of three-dimensional point cloud registration, comprising:
Acquiring a point cloud to be registered and a reference point cloud;
Determining a first projection image of the point cloud to be registered in a two-dimensional plane, wherein the first projection image comprises a plurality of first pixel points, and each first pixel point corresponds to at least one point cloud to be registered;
Determining a second projection image of the reference point cloud on a two-dimensional plane, wherein the second projection image comprises a plurality of second pixel points, and each second pixel point corresponds to at least one reference point cloud;
Determining matched pixel point pairs according to the first pixel points of the first projection image and the second pixel points of the second projection image, wherein each matched pixel point pair comprises a first pixel point and a second pixel point;
Determining a transformation matrix according to at least one point cloud to be registered corresponding to the first pixel point in the matched pixel point pair and at least one reference point cloud corresponding to the second pixel point in the matched pixel point pair, so as to perform a point cloud registration operation on the point cloud to be registered according to the transformation matrix;
wherein determining the transformation matrix according to the at least one point cloud to be registered corresponding to the first pixel point in the matched pixel point pair and the at least one reference point cloud corresponding to the second pixel point in the matched pixel point pair comprises:
Determining, as a first point cloud, the point cloud to be registered having the lowest elevation value among the point clouds to be registered corresponding to the first pixel point in the matched pixel point pair;
Determining, as a second point cloud, the reference point cloud having the lowest elevation value among the reference point clouds corresponding to the second pixel point in the matched pixel point pair;
And determining the transformation matrix by taking the first point cloud and the second point cloud as a matching point cloud pair.
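The last step of claim 1, estimating the transformation matrix from matching point cloud pairs, is commonly realized with the SVD-based Kabsch/Procrustes solution sketched below. The claim itself does not name an estimator, so treating it as Kabsch is an assumption; the lowest-elevation selection follows the claim wording.

```python
import numpy as np

def lowest_point(points):
    """Per claim 1: pick the point with the lowest elevation (z) in a pixel's list."""
    return min(points, key=lambda p: p[2])

def rigid_transform(src, dst):
    """Least-squares rigid transform with dst ~= R @ src + t (Kabsch algorithm).

    The SVD estimator is an assumed concrete choice; the claim only requires
    that a transformation matrix be determined from the matching pairs.
    """
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)   # 3x3 cross-covariance
    U, _S, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:              # avoid an improper (reflected) rotation
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t
```

With exact correspondences the estimator recovers the ground-truth rotation and translation; with noisy pairs it minimizes the squared registration residual.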
2. The method of claim 1, wherein determining a first projected image of the point cloud to be registered in a two-dimensional plane comprises:
dividing the two-dimensional plane according to the projection positions of the point cloud to be registered on the two-dimensional plane to obtain a plurality of first pixel points;
Determining a gray value of the first pixel point according to reflection intensity information of at least one point cloud to be registered corresponding to the first pixel point;
And generating the first projection image according to the gray value of the first pixel point.
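Claim 2 above can be sketched as a simple rasterization: divide the plane into grid cells by projected position, then fill each cell from the reflection intensities of the points that land in it. The mean aggregation and the unit pixel size below are illustrative assumptions, not fixed by the claim.

```python
import numpy as np

def intensity_image(points, intensities, pixel=1.0):
    """Rasterize (x, y, z) points onto a 2-D grid; each pixel's value is the
    mean reflection intensity of the points projecting into that cell."""
    points = np.asarray(points, float)
    xs = (points[:, 0] // pixel).astype(int)
    ys = (points[:, 1] // pixel).astype(int)
    w, h = xs.max() + 1, ys.max() + 1
    img = np.zeros((h, w))
    count = np.zeros((h, w))
    for x, y, v in zip(xs, ys, intensities):
        img[y, x] += v
        count[y, x] += 1
    # Empty cells stay 0; occupied cells hold the mean intensity.
    return np.divide(img, count, out=np.zeros_like(img), where=count > 0)
```

The resulting array can then be scaled to gray values (claim 3) and fed to any 2-D feature detector.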
3. The method of claim 2, wherein determining the gray value of the first pixel point according to the reflection intensity information of at least one point cloud to be registered corresponding to the first pixel point comprises:
Acquiring reflection intensity information of the point cloud to be registered;
Determining the minimum value in the reflection intensity information of the point cloud to be registered as the global minimum reflection intensity of the point cloud to be registered, and determining the maximum value in the reflection intensity information of the point cloud to be registered as the global maximum reflection intensity of the point cloud to be registered;
determining the reflection intensity of the first pixel point according to the reflection intensity information of at least one point cloud to be registered corresponding to the first pixel point;
and determining the gray value of the first pixel point according to the global minimum reflection intensity of the point cloud to be registered, the global maximum reflection intensity of the point cloud to be registered, and the reflection intensity of the first pixel point.
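Claim 3 amounts to a global min-max normalization of the per-pixel reflection intensity into a gray range. The linear mapping to [0, 255] below is an assumed concrete form; the claim only states that the gray value is determined from the global minimum, the global maximum, and the pixel's reflection intensity.

```python
def gray_value(pixel_intensity, global_min, global_max):
    """Map a pixel's reflection intensity to an 8-bit gray value by global
    min-max normalization (assumed linear form of the claim 3 computation)."""
    if global_max == global_min:
        return 0                      # degenerate cloud: a flat gray image
    scale = (pixel_intensity - global_min) / (global_max - global_min)
    return int(round(255 * scale))
```

For example, with a global intensity range of [0, 100], a pixel intensity of 25 maps to gray value 64.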
4. The method of claim 1, wherein determining a matching pixel pair from a first pixel of the first projected image and a second pixel in the second projected image comprises:
extracting features of the first pixel points to obtain first feature vectors of the first pixel points;
extracting features of the second pixel points to obtain second feature vectors of the second pixel points;
determining the feature distance between the first feature vector of the first pixel point and the second feature vector of each second pixel point;
and determining the first pixel point and the second pixel point having the minimum feature distance as a matching point pair.
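The matching step of claim 4 is, in essence, brute-force nearest-neighbour search in descriptor space: for each first-pixel feature vector, find the second-pixel feature vector at minimum Euclidean distance. The descriptor extraction itself (e.g. SIFT or ORB on the projection images) is outside this sketch and is not specified by the claim.

```python
import numpy as np

def match_features(desc1, desc2):
    """Brute-force nearest neighbour: for each row of desc1, return the index
    of the closest row of desc2 and the corresponding feature distance."""
    desc1, desc2 = np.asarray(desc1, float), np.asarray(desc2, float)
    # Pairwise Euclidean distance matrix, shape (len(desc1), len(desc2)).
    d = np.linalg.norm(desc1[:, None, :] - desc2[None, :, :], axis=2)
    return [(i, int(np.argmin(d[i])), float(d[i].min()))
            for i in range(len(desc1))]
```

Each returned triple (query index, best match index, distance) is a candidate matched pixel point pair; claims 5 and 6 then prune the unreliable ones.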
5. The method as recited in claim 4, further comprising:
Determining a feature distance ratio of the minimum value to the second-smallest value among the feature distances;
And eliminating the matching point pair if the feature distance ratio is larger than a feature distance ratio threshold.
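Claim 5 is the classic Lowe-style ratio test: if the best feature distance is not clearly smaller than the second-best, the match is ambiguous and is discarded. The 0.8 default threshold below is an illustrative assumption; the claim leaves the threshold open.

```python
def keep_match(distances, ratio_threshold=0.8):
    """Return True when the match survives the ratio test of claim 5: the
    ratio of the smallest to the second-smallest feature distance must not
    exceed the threshold, otherwise the matching point pair is eliminated."""
    d = sorted(distances)
    if len(d) < 2 or d[1] == 0:
        return False                  # cannot form a meaningful ratio: reject
    return d[0] / d[1] <= ratio_threshold
```

A pair with distances (0.2, 1.0, ...) is kept (ratio 0.2), while (0.9, 1.0) is eliminated (ratio 0.9 exceeds the threshold).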
6. The method as recited in claim 4, further comprising:
determining a homography transformation between the first projection image and the second projection image;
and eliminating matching point pairs whose error after the homography transformation is larger than an error pixel threshold.
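Claim 6 prunes outliers by checking each matched pixel pair against a homography between the two projection images: pairs whose reprojection error exceeds a pixel threshold are eliminated. Below, the homography is taken as given (in practice it could be estimated robustly, e.g. with OpenCV's `cv2.findHomography`); the 3-pixel threshold is an assumption.

```python
import numpy as np

def reprojection_error(H, p1, p2):
    """Pixel distance between H applied to p1 (in homogeneous form) and p2."""
    q = H @ np.array([p1[0], p1[1], 1.0])
    q = q[:2] / q[2]                   # back from homogeneous coordinates
    return float(np.linalg.norm(q - np.asarray(p2, dtype=float)))

def filter_matches(H, pairs, max_error=3.0):
    """Keep only matched pixel pairs within max_error pixels after the
    homography transformation (claim 6's elimination rule)."""
    return [(p1, p2) for p1, p2 in pairs
            if reprojection_error(H, p1, p2) <= max_error]
```

With a pure-translation homography shifting by (5, 7), the pair ((2, 2), (50, 50)) is far off the predicted position (7, 9) and is eliminated, while consistent pairs survive.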
7. A three-dimensional point cloud registration apparatus, comprising:
the point cloud acquisition module is configured to acquire point clouds to be registered and reference point clouds;
the first image module is configured to determine a first projection image of the point cloud to be registered on a two-dimensional plane, wherein the first projection image comprises a plurality of first pixel points, and each first pixel point corresponds to at least one point cloud to be registered;
a second image module configured to determine a second projection image of the reference point cloud in a two-dimensional plane, the second projection image including a plurality of second pixel points, each second pixel point corresponding to at least one reference point cloud;
The pixel point pair module is configured to determine matched pixel point pairs according to a first pixel point of the first projection image and a second pixel point of the second projection image, wherein each matched pixel point pair comprises a first pixel point and a second pixel point;
The point cloud registration module is configured to determine a transformation matrix according to at least one point cloud to be registered corresponding to the first pixel point in the matched pixel point pair and at least one reference point cloud corresponding to the second pixel point in the matched pixel point pair, so as to perform a point cloud registration operation on the point cloud to be registered according to the transformation matrix;
the point cloud registration module is further configured to determine a point cloud to be registered, which has a lowest elevation value, in point clouds to be registered corresponding to a first pixel point in the matched pixel points as a first point cloud; determining a reference point cloud with the lowest elevation value in the reference point clouds corresponding to the second pixel points in the matched pixel points as a second point cloud; and determining the transformation matrix by taking the first point cloud and the second point cloud as matching point cloud pairs.
8. An electronic device, comprising:
one or more processors;
A storage means for storing one or more programs;
The one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any of claims 1-6.
9. A computer readable medium, on which a computer program is stored, characterized in that the program, when being executed by a processor, implements the method according to any of claims 1-6.
CN202110040721.5A 2021-01-13 2021-01-13 Three-dimensional point cloud registration method and device, electronic equipment and readable medium Active CN113793370B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110040721.5A CN113793370B (en) 2021-01-13 2021-01-13 Three-dimensional point cloud registration method and device, electronic equipment and readable medium


Publications (2)

Publication Number Publication Date
CN113793370A CN113793370A (en) 2021-12-14
CN113793370B true CN113793370B (en) 2024-04-19

Family

ID=78876819

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110040721.5A Active CN113793370B (en) 2021-01-13 2021-01-13 Three-dimensional point cloud registration method and device, electronic equipment and readable medium

Country Status (1)

Country Link
CN (1) CN113793370B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115239776B (en) * 2022-07-14 2023-07-28 阿波罗智能技术(北京)有限公司 Point cloud registration method, device, equipment and medium
CN115409880B (en) * 2022-08-31 2024-03-22 深圳前海瑞集科技有限公司 Workpiece data registration method and device, electronic equipment and storage medium

Citations (12)

Publication number Priority date Publication date Assignee Title
CN103295239A (en) * 2013-06-07 2013-09-11 北京建筑工程学院 Laser-point cloud data automatic registration method based on plane base images
CN107316325A (en) * 2017-06-07 2017-11-03 华南理工大学 A kind of airborne laser point cloud based on image registration and Image registration fusion method
CN109272537A (en) * 2018-08-16 2019-01-25 清华大学 A kind of panorama point cloud registration method based on structure light
CN109410256A (en) * 2018-10-29 2019-03-01 北京建筑大学 Based on mutual information cloud and image automatic, high precision method for registering
CN109493375A (en) * 2018-10-24 2019-03-19 深圳市易尚展示股份有限公司 The Data Matching and merging method of three-dimensional point cloud, device, readable medium
CN110111414A (en) * 2019-04-10 2019-08-09 北京建筑大学 A kind of orthography generation method based on three-dimensional laser point cloud
CN110852979A (en) * 2019-11-12 2020-02-28 广东省智能机器人研究院 Point cloud registration and fusion method based on phase information matching
CN110853081A (en) * 2019-11-18 2020-02-28 武汉数智云绘技术有限公司 Ground and airborne LiDAR point cloud registration method based on single-tree segmentation
CN110930443A (en) * 2019-11-27 2020-03-27 中国科学院深圳先进技术研究院 Image registration method and device and terminal equipment
CN110942476A (en) * 2019-10-17 2020-03-31 湖南大学 Improved three-dimensional point cloud registration method and system based on two-dimensional image guidance and readable storage medium
CN112001955A (en) * 2020-08-24 2020-11-27 深圳市建设综合勘察设计院有限公司 Point cloud registration method and system based on two-dimensional projection plane matching constraint
CN112184783A (en) * 2020-09-22 2021-01-05 西安交通大学 Three-dimensional point cloud registration method combined with image information

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US9972067B2 (en) * 2016-10-11 2018-05-15 The Boeing Company System and method for upsampling of sparse point cloud for 3D registration


Non-Patent Citations (1)

Title
Point cloud registration method based on two-dimensional image features; Zhao Fuqun; Zhou Mingquan; Bulletin of Surveying and Mapping (Issue 10); 39-42 *

Also Published As

Publication number Publication date
CN113793370A (en) 2021-12-14

Similar Documents

Publication Publication Date Title
CN110322500B (en) Optimization method and device for instant positioning and map construction, medium and electronic equipment
CN107330439B (en) Method for determining posture of object in image, client and server
JP7221324B2 (en) Method and device, electronic device, storage medium and computer program for detecting obstacles
WO2019170164A1 (en) Depth camera-based three-dimensional reconstruction method and apparatus, device, and storage medium
CN109582880B (en) Interest point information processing method, device, terminal and storage medium
US10204423B2 (en) Visual odometry using object priors
CN112771573A (en) Depth estimation method and device based on speckle images and face recognition system
CN114550177B (en) Image processing method, text recognition method and device
CN110349212B (en) Optimization method and device for instant positioning and map construction, medium and electronic equipment
WO2022262160A1 (en) Sensor calibration method and apparatus, electronic device, and storage medium
CN113793370B (en) Three-dimensional point cloud registration method and device, electronic equipment and readable medium
US20240029297A1 (en) Visual positioning method, storage medium and electronic device
CN114399588B (en) Three-dimensional lane line generation method and device, electronic device and computer readable medium
CN112016638A (en) Method, device and equipment for identifying steel bar cluster and storage medium
CN111914756A (en) Video data processing method and device
CN114565668A (en) Instant positioning and mapping method and device
CN112116655B (en) Target object position determining method and device
CN113409340A (en) Semantic segmentation model training method, semantic segmentation device and electronic equipment
CN112085842B (en) Depth value determining method and device, electronic equipment and storage medium
WO2023231435A1 (en) Visual perception method and apparatus, and storage medium and electronic device
CN116798027A (en) Three-dimensional point cloud real-time cloud matching method and device based on multi-scale feature extraction
CN113763468B (en) Positioning method, device, system and storage medium
CN112184766B (en) Object tracking method and device, computer equipment and storage medium
Arnaud et al. On the fly plane detection and time consistency for indoor building wall recognition using a tablet equipped with a depth sensor
CN112131902A (en) Closed loop detection method and device, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant