CN113793370A - Three-dimensional point cloud registration method and device, electronic equipment and readable medium - Google Patents


Info

Publication number
CN113793370A
CN113793370A
Authority
CN
China
Prior art keywords
point cloud
point
pixel
registered
pixel point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110040721.5A
Other languages
Chinese (zh)
Other versions
CN113793370B (en)
Inventor
Sun Xiaofeng (孙晓峰)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jingdong Three Hundred And Sixty Degree E Commerce Co ltd
Original Assignee
Beijing Jingdong Three Hundred And Sixty Degree E Commerce Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Jingdong Three Hundred And Sixty Degree E Commerce Co ltd
Priority to CN202110040721.5A
Publication of CN113793370A
Application granted
Publication of CN113793370B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/30: Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/33: Determination of transform parameters for the alignment of images using feature-based methods
    • G06T 3/00: Geometric image transformations in the plane of the image
    • G06T 3/14: Transformations for image registration, e.g. adjusting or mapping for alignment of images
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10028: Range image; Depth image; 3D point clouds
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20068: Projection on vertical or horizontal image axis

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of the disclosure provide a three-dimensional point cloud registration method and apparatus, an electronic device, and a readable medium. The method includes: acquiring a point cloud to be registered and a reference point cloud; determining a first projection image of the point cloud to be registered on a two-dimensional plane, the first projection image comprising a plurality of first pixel points; determining a second projection image of the reference point cloud on the two-dimensional plane, the second projection image comprising a plurality of second pixel points; determining matched pixel point pairs from the first pixel points of the first projection image and the second pixel points of the second projection image, each matched pixel point pair comprising one first pixel point and one second pixel point; and determining a transformation matrix according to at least one point of the cloud to be registered corresponding to the first pixel point of a matched pair and at least one reference point corresponding to the second pixel point of that pair, so as to perform point cloud registration on the point cloud to be registered according to the transformation matrix. The technical scheme of the embodiments enables fast, accurate, initial-value-free registration of large-scale point clouds.

Description

Three-dimensional point cloud registration method and device, electronic equipment and readable medium
Technical Field
The present disclosure relates to the field of point cloud registration technologies, and in particular, to a three-dimensional point cloud registration method and apparatus, an electronic device, and a computer-readable medium.
Background
At present, with the popularization of laser radars and various depth sensors, acquiring three-dimensional point cloud data has become increasingly convenient. Large volumes of three-dimensional point clouds are becoming a novel data source in research fields such as three-dimensional reconstruction, pose estimation, and target recognition. In most cases, however, acquisition of a three-dimensional point cloud requires cooperation across time, viewpoints, and devices: only by registering local point cloud data acquired by different devices, from different viewpoints, and at different times into complete three-dimensional data covering the whole scene can subsequent higher-level algorithmic analysis be performed. Therefore, as a basic algorithm for processing point cloud data, three-dimensional point cloud registration has long been a research hotspot in fields such as surveying and geographic information, computer vision, and computer graphics. Depending on the target accuracy, point cloud registration techniques fall into two main categories: coarse registration and fine registration. Coarse registration aligns two point clouds whose relative poses are completely unknown, and its result can provide a good initial value for fine registration. Current mainstream coarse registration algorithms include registration based on exhaustive search and registration based on feature matching. Registration algorithms based on exhaustive search traverse the entire transformation space to select the transformation that minimizes an error function, or enumerate the transformations that satisfy the most point pairs.
Examples include the random sample consensus (RANSAC) registration algorithm, the 4-Point Congruent Sets (4PCS) algorithm, and the Super4PCS algorithm. Registration algorithms based on feature matching construct matching correspondences between point clouds by computing morphological characteristics of the measured object or scene, and then estimate the transformation with a related algorithm. Examples include sample consensus initial alignment (SAC-IA) based on Fast Point Feature Histogram (FPFH) features, the FGR algorithm, AO algorithms based on Signature of Histograms of Orientations (SHOT) features, the 3DSmoothNet method that extracts features by deep learning, and the ICL algorithm based on line features. The purpose of fine registration is to minimize the spatial position difference between the point clouds, starting from the coarse registration result. Common fine registration algorithms mainly include ICP and its variants (e.g., point-to-plane ICP, point-to-line ICP, GICP, NICP), NDT, and the deep-learning-based DeepVCP.
Although current point cloud registration algorithms are numerous, compared with image registration many problems remain to be solved. Most point cloud registration algorithms that currently work well operate mainly on small specific objects (such as the classical Stanford Bunny model) or indoor scenes within a range of several meters. Registering indoor and outdoor point clouds of large-scale scenes remains a major challenge: due to algorithmic complexity and memory limitations, existing registration algorithms designed for small local ranges are difficult to extend to large-scale scenes.
Therefore, a new three-dimensional point cloud registration method, apparatus, electronic device and computer readable medium are needed.
The above information disclosed in this background section is only for enhancement of understanding of the background of the disclosure and therefore it may contain information that does not constitute prior art that is already known to a person of ordinary skill in the art.
Disclosure of Invention
In view of this, the embodiments of the present disclosure provide a three-dimensional point cloud registration method, apparatus, electronic device and computer readable medium, which can implement rapid and accurate registration without initial value for a large range of point clouds.
Additional features and advantages of the disclosure will be set forth in the detailed description which follows, or in part will be obvious from the description, or may be learned by practice of the disclosure.
According to a first aspect of the embodiments of the present disclosure, a three-dimensional point cloud registration method is provided, which includes: acquiring point cloud to be registered and reference point cloud; determining a first projection image of the point cloud to be registered on a two-dimensional plane, wherein the first projection image comprises a plurality of first pixel points, and each first pixel point corresponds to at least one point cloud to be registered; determining a second projection image of the reference point cloud on the two-dimensional plane, wherein the second projection image comprises a plurality of second pixel points, and each second pixel point corresponds to at least one reference point cloud; determining matched pixel point pairs according to a first pixel point of the first projection image and a second pixel point of the second projection image, wherein each matched pixel point pair comprises a first pixel point and a second pixel point; and determining a transformation matrix according to at least one point cloud to be registered corresponding to a first pixel point in the matching pixel points and at least one reference point cloud corresponding to a second pixel point in the matching pixel points, so as to perform point cloud registration operation on the point cloud to be registered according to the transformation matrix.
In an exemplary embodiment of the present disclosure, determining a first projection image of the point cloud to be registered on the two-dimensional plane includes: dividing the two-dimensional plane according to the projection position of the point cloud to be registered on the two-dimensional plane to obtain a plurality of first pixel points; determining the gray value of the first pixel point according to the reflection intensity information of at least one point cloud to be registered corresponding to the first pixel point; and generating the first projection image according to the gray value of the first pixel point.
In an exemplary embodiment of the present disclosure, determining the gray value of the first pixel point according to the reflection intensity information of at least one point cloud to be registered corresponding to the first pixel point includes: acquiring reflection intensity information of the point cloud to be registered; determining the minimum value in the reflection intensity information of the point cloud to be registered as the global minimum reflection intensity of the point cloud to be registered, and determining the maximum value in the reflection intensity information of the point cloud to be registered as the global maximum reflection intensity of the point cloud to be registered; determining the reflection sub-intensity of the first pixel point according to the reflection intensity information of at least one point cloud to be registered corresponding to the first pixel point; and determining the gray value of the first pixel point according to the global minimum reflection intensity of the point cloud to be registered, the global maximum reflection intensity of the point cloud to be registered and the reflection sub-intensity of the first pixel point.
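The normalization described above can be sketched in a few lines. This is an illustrative assumption of the patent's grayscale step: the per-pixel "reflection sub-intensity" is taken here as the mean intensity of the pixel's member points, scaled against the global minimum and maximum into a 0-255 gray value; the mean and the 8-bit range are not specified by the text.

```python
import numpy as np

# Hedged sketch: min-max normalize a pixel's reflection sub-intensity
# (assumed here to be the mean of its member points' intensities)
# against the global intensity range of the whole point cloud.
def pixel_gray(cell_intensities, global_min, global_max):
    sub = np.mean(cell_intensities)                    # reflection sub-intensity
    return int(round(255 * (sub - global_min) / (global_max - global_min)))

intensities = np.array([10.0, 20.0, 40.0, 50.0])       # all points' intensities
g = pixel_gray(intensities[:2], intensities.min(), intensities.max())
print(g)  # 32  (mean 15 mapped into the global range [10, 50])
```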
In an exemplary embodiment of the present disclosure, determining a matching pixel point pair according to a first pixel point of the first projection image and a second pixel point of the second projection image includes: extracting features of the first pixel points to obtain first feature vectors of the first pixel points; extracting features of the second pixel points to obtain second feature vectors of the second pixel points; determining the feature distance between the first feature vector of a first pixel point and the second feature vector of each second pixel point; and determining the first pixel point and the second pixel point with the minimum feature distance as a matching point pair.
In an exemplary embodiment of the present disclosure, the method further comprises: determining the ratio of the minimum value to the second-minimum value among the feature distances; and if the feature distance ratio is larger than a feature distance ratio threshold, rejecting the matching point pair.
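The minimum-distance matching and the ratio-based rejection above can be combined into one small routine. This is a non-authoritative sketch: Euclidean distance and the 0.8 threshold are assumptions, since the patent only names a generic "feature distance" and a "feature distance ratio threshold".

```python
import numpy as np

# Sketch: for each first-image feature vector, take the second-image
# vector at minimum Euclidean distance, then apply the ratio test
# (minimum / second-minimum distance) to drop ambiguous matches.
def match_features(desc1, desc2, ratio_thresh=0.8):
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)      # feature distances
        order = np.argsort(dists)
        best, second = dists[order[0]], dists[order[1]]
        if best / second <= ratio_thresh:              # keep unambiguous matches
            matches.append((i, int(order[0])))
    return matches

desc1 = np.array([[0.0, 0.0], [5.0, 5.0]])
desc2 = np.array([[0.1, 0.0], [5.0, 5.1], [5.0, 5.2]])
print(match_features(desc1, desc2))  # [(0, 0), (1, 1)]
```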
In an exemplary embodiment of the present disclosure, the method further comprises: determining a homography transformation between the first projection image and the second projection image; and rejecting matched point pairs whose error after the homography transformation exceeds a pixel error threshold.
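The homography-based filtering above reduces to reprojecting one image's points and thresholding the pixel error. The sketch below assumes the 3x3 homography H has already been estimated (e.g. robustly with RANSAC, which the patent does not mandate); the identity H and the 2-pixel threshold are illustrative assumptions.

```python
import numpy as np

# Sketch: reproject first-image points through a given homography H and
# keep only pairs whose pixel error is within the threshold.
def filter_by_homography(pts1, pts2, H, err_thresh=2.0):
    homog = np.hstack([pts1, np.ones((len(pts1), 1))])  # to homogeneous coords
    proj = (H @ homog.T).T
    proj = proj[:, :2] / proj[:, 2:3]                   # back to pixel coords
    err = np.linalg.norm(proj - pts2, axis=1)           # reprojection error
    return [i for i, e in enumerate(err) if e <= err_thresh]

H = np.eye(3)                                 # identity homography, for illustration
pts1 = np.array([[0.0, 0.0], [10.0, 10.0]])
pts2 = np.array([[0.5, 0.0], [15.0, 10.0]])   # second pair is an outlier
print(filter_by_homography(pts1, pts2, H))    # [0]
```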
In an exemplary embodiment of the present disclosure, determining a transformation matrix according to at least one point cloud to be registered corresponding to a first pixel point in the matching pixel points and at least one reference point cloud corresponding to a second pixel point in the matching pixel points comprises: determining a point cloud to be registered with the lowest elevation value in point clouds to be registered corresponding to a first pixel point in the matching pixel points as a first point cloud; determining the reference point cloud with the lowest elevation value in the reference point clouds corresponding to the second pixel points in the matching pixel points as a second point cloud; determining the transformation matrix using the first point cloud and the second point cloud as a matching point cloud pair.
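The lowest-elevation selection above is a simple argmin over the Z coordinate of a pixel's member points. A minimal sketch, assuming the points are stored as an N x 3 array with an index list per pixel (a data layout not prescribed by the patent):

```python
import numpy as np

# Sketch: among the 3D points that fell into a matched pixel, keep the
# one with the lowest elevation (smallest Z) as that pixel's
# representative 3D point for the matching point cloud pair.
def lowest_point(points, member_idx):
    members = points[member_idx]
    return members[np.argmin(members[:, 2])]  # point with minimum Z

pts = np.array([[1.0, 1.0, 9.0], [1.1, 1.0, 2.0], [1.2, 1.1, 5.0]])
print(lowest_point(pts, [0, 1, 2]))  # the point with elevation 2.0
```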
According to a second aspect of the embodiments of the present disclosure, a three-dimensional point cloud registration apparatus is provided, the apparatus including: the point cloud acquisition module is configured to acquire a point cloud to be registered and a reference point cloud; the first image module is configured to determine a first projection image of the point cloud to be registered on the two-dimensional plane, wherein the first projection image comprises a plurality of first pixel points, and each first pixel point corresponds to at least one point cloud to be registered; a second image module configured to determine a second projected image of the reference point cloud on the two-dimensional plane, the second projected image including a plurality of second pixel points, each second pixel point corresponding to at least one reference point cloud; the pixel point pair module is configured to determine matched pixel point pairs according to a first pixel point of the first projection image and a second pixel point of the second projection image, wherein each matched pixel point pair comprises a first pixel point and a second pixel point; and the point cloud registration module is configured to determine a transformation matrix according to at least one point cloud to be registered corresponding to a first pixel point in the matching pixel points and at least one reference point cloud corresponding to a second pixel point in the matching pixel points so as to perform point cloud registration operation on the point cloud to be registered according to the transformation matrix.
According to a third aspect of the embodiments of the present disclosure, an electronic device is provided, which includes: one or more processors; and storage means for storing one or more programs; the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the three-dimensional point cloud registration method of any of the above.
According to a fourth aspect of embodiments of the present disclosure, a computer-readable medium is proposed, on which a computer program is stored, which when executed by a processor, implements the three-dimensional point cloud registration method according to any one of the above.
According to the three-dimensional point cloud registration method and apparatus, electronic device, and computer readable medium of the embodiments of the disclosure, the point cloud to be registered is projected to first pixel points on a two-dimensional plane to generate a first projection image, the reference point cloud is projected to second pixel points on the two-dimensional plane to generate a second projection image, and matching pixel point pairs are determined using the two projection images. The three-dimensional point cloud registration problem is thereby reduced to a two-dimensional image registration problem, lowering algorithmic complexity and memory requirements, greatly improving registration efficiency, allowing the registration algorithm to scale to large scenes, and achieving fast, initial-value-free registration of point clouds with hundreds of millions of points.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure. The drawings described below are merely some embodiments of the present disclosure, and other drawings may be derived from those drawings by those of ordinary skill in the art without inventive effort.
Fig. 1 is a system block diagram illustrating a three-dimensional point cloud registration method and apparatus according to an exemplary embodiment.
Fig. 2 is a flow chart illustrating a three-dimensional point cloud registration method according to an exemplary embodiment.
Fig. 3 is a flow chart illustrating a three-dimensional point cloud registration method according to an exemplary embodiment.
FIG. 4 is a diagram illustrating a point cloud registration effect according to an exemplary embodiment.
FIG. 5 is a schematic diagram illustrating a manner in which a projected image is generated in accordance with an exemplary embodiment.
FIG. 6 is a schematic diagram of a projected image shown in accordance with an exemplary embodiment.
FIG. 7 is an exemplary diagram illustrating an effect of feature matching and error point filtering according to an example.
FIG. 8 is a flowchart illustrating a method of three-dimensional point cloud registration, according to an example embodiment.
FIG. 9 is a flowchart illustrating a method of three-dimensional point cloud registration, according to an example embodiment.
FIG. 10 is a flow chart illustrating a method of three-dimensional point cloud registration in accordance with an exemplary embodiment.
Fig. 11 is a block diagram illustrating a three-dimensional point cloud registration apparatus according to an exemplary embodiment.
Fig. 12 schematically illustrates a block diagram of an electronic device in an exemplary embodiment of the disclosure.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The same reference numerals denote the same or similar parts in the drawings, and thus, a repetitive description thereof will be omitted.
The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to provide a thorough understanding of embodiments of the invention. One skilled in the relevant art will recognize, however, that the invention may be practiced without one or more of the specific details, or with other methods, components, devices, steps, and so forth. In other instances, well-known methods, devices, implementations or operations have not been shown or described in detail to avoid obscuring aspects of the invention.
The drawings are merely schematic illustrations of the present invention, in which the same reference numerals denote the same or similar parts, and thus, a repetitive description thereof will be omitted. Some of the block diagrams shown in the figures do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
The flow charts shown in the drawings are merely illustrative and do not necessarily include all of the contents and steps, nor do they necessarily have to be performed in the order described. For example, some steps may be decomposed, and some steps may be combined or partially combined, so that the actual execution sequence may be changed according to the actual situation.
The following detailed description of exemplary embodiments of the invention refers to the accompanying drawings.
Fig. 1 is a system block diagram illustrating a three-dimensional point cloud registration method and apparatus according to an exemplary embodiment.
The server 105 may be a server providing various services, such as a background management server (for example only) providing support for a three-dimensional point cloud registration system operated by a user with the terminal devices 101, 102, 103. The background management server may analyze and otherwise process the received data such as the three-dimensional point cloud registration request, and feed back a processing result (e.g., a transformation matrix — just an example) to the terminal device.
The server 105 may, for example, obtain a point cloud to be registered and a reference point cloud; the server 105 may, for example, determine a first projection image of the point cloud to be registered on the two-dimensional plane, where the first projection image includes a plurality of first pixel points, and each first pixel point corresponds to at least one point cloud to be registered; the server 105 may, for example, determine a second projection image of the reference point cloud in the two-dimensional plane, the second projection image including a plurality of second pixel points, each corresponding to at least one reference point cloud. Server 105 may determine pairs of matched pixels, for example, from first pixels of the first projected image and second pixels of the second projected image, each pair of matched pixels including a first pixel and a second pixel. The server 105 may determine a transformation matrix according to at least one point cloud to be registered corresponding to a first pixel point in the matching pixel points and at least one reference point cloud corresponding to a second pixel point in the matching pixel points, for example, to perform a point cloud registration operation on the point cloud to be registered according to the transformation matrix.
The server 105 may be a single physical server, or may be composed of a plurality of servers. For example, a part of the server 105 may serve as a three-dimensional point cloud registration task submitting system in the present disclosure, for obtaining a task to be executed with a three-dimensional point cloud registration command; and a part of the server 105 may serve as a three-dimensional point cloud registration system in the present disclosure, for obtaining a point cloud to be registered and a reference point cloud; determining a first projection image of the point cloud to be registered on a two-dimensional plane, wherein the first projection image comprises a plurality of first pixel points, and each first pixel point corresponds to at least one point of the cloud to be registered; determining a second projection image of the reference point cloud on the two-dimensional plane, wherein the second projection image comprises a plurality of second pixel points, and each second pixel point corresponds to at least one reference point; determining matched pixel point pairs according to the first pixel points of the first projection image and the second pixel points of the second projection image, wherein each matched pixel point pair comprises a first pixel point and a second pixel point; and determining a transformation matrix according to at least one point of the cloud to be registered corresponding to the first pixel point of a matched pair and at least one reference point corresponding to the second pixel point of that pair, so as to perform point cloud registration on the point cloud to be registered according to the transformation matrix.
Fig. 2 is a flow chart illustrating a three-dimensional point cloud registration method according to an exemplary embodiment. The three-dimensional point cloud registration method provided by the embodiments of the present disclosure may be executed by any electronic device with computing processing capability, for example, the terminal devices 101, 102, and 103 and/or the server 105, and in the following embodiments, the server execution method is taken as an example for illustration, but the present disclosure is not limited thereto. The three-dimensional point cloud registration method 20 provided by the embodiment of the present disclosure may include steps S202 to S208.
As shown in fig. 2, in step S202, a point cloud to be registered and a reference point cloud are acquired.
In the embodiment of the present disclosure, the point cloud to be registered and the reference point cloud may be local point cloud data acquired from different viewing angles and/or different periods and/or different devices. For example, a point cloud to be registered and a reference point cloud for a wide range of indoor and outdoor scenes may be acquired by an acquisition device. The point cloud to be registered and the reference point cloud may respectively include coordinate data and reflection intensity information in a three-dimensional coordinate system.
In step S204, a first projection image of the point cloud to be registered on the two-dimensional plane is determined, where the first projection image includes a plurality of first pixel points, and each first pixel point corresponds to at least one point cloud to be registered.
In the embodiment of the disclosure, the point cloud to be registered has three-dimensional coordinate information, and may be projected onto a two-dimensional plane according to that information to form a plurality of first pixel points. Each point of the cloud to be registered is projected onto one first pixel point, and each first pixel point may correspond to at least one point. For convenience of implementation, the plane spanned by two of the three coordinate dimensions of the point cloud may be chosen as the two-dimensional plane. For example, the coordinate information of the point cloud to be registered includes the three dimensions X, Y, and Z, where the Z coordinate direction tends to be consistent with the direction of gravity, and the X and Y coordinate directions are perpendicular to it. The plane spanned by the X and Y dimensions can then be chosen as the two-dimensional plane (hereinafter, the XY plane), and the point cloud to be registered is orthographically projected onto the XY plane, reducing its dimensionality.
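The projection and binning just described can be sketched as follows. This is a minimal illustrative implementation, not the patent's own code: the grid cell size `s`, the dict-of-lists cell index, and the use of numpy are all assumptions.

```python
import numpy as np

# Sketch of step S204: orthographically project an N x 3 point cloud
# onto the XY plane (drop Z), then bin the projected points into
# s x s grid cells; each non-empty cell plays the role of a pixel point.
def project_to_grid(points, s=0.5):
    xy = points[:, :2]                            # orthographic projection
    origin = xy.min(axis=0)                       # (X_min, Y_min)
    ij = np.floor((xy - origin) / s).astype(int)  # grid cell of each point
    cells = {}                                    # (i, j) -> member point indices
    for idx, key in enumerate(map(tuple, ij)):
        cells.setdefault(key, []).append(idx)
    return cells, origin

pts = np.array([[0.1, 0.2, 5.0], [0.3, 0.1, 4.0], [1.2, 0.2, 3.0]])
cells, origin = project_to_grid(pts, s=0.5)
print(sorted(cells))  # [(0, 0), (2, 0)]
```

The index returned per cell preserves the pixel-to-points correspondence that later steps (grayscale assignment, lowest-point lifting) rely on.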
In step S206, a second projection image of the reference point cloud on the two-dimensional plane is determined, where the second projection image includes a plurality of second pixel points, and each second pixel point corresponds to at least one reference point cloud.
In the embodiment of the present disclosure, a similar generation manner to the first projection image may be adopted for determining the second projection image according to the reference point cloud, and details are not repeated here.
In step S208, matching pixel point pairs are determined according to a first pixel point of the first projection image and a second pixel point of the second projection image, where each matching pixel point pair includes a first pixel point and a second pixel point.
In the embodiment of the present disclosure, a mode of performing two-dimensional image feature extraction and feature matching on pixel points (first pixel points and second pixel points) in projection images (in a first projection image and a second projection image) is adopted to obtain pixel point pairs successfully matched.
In step S210, a transformation matrix is determined according to at least one point cloud to be registered corresponding to a first pixel point in the matching pixel points and at least one reference point cloud corresponding to a second pixel point in the matching pixel points, so as to perform point cloud registration on the point cloud to be registered according to the transformation matrix.
In the embodiment of the disclosure, a target point cloud to be registered and a target reference point cloud may be determined for the first pixel point and the second pixel point of a matching pixel point pair, respectively, and taken together as a matching point cloud pair used to determine the transformation matrix. When determining the target point cloud to be registered from the first pixel point, the coordinate information of the at least one point corresponding to that pixel serves as the basis; likewise, when lifting the second pixel point to the target reference point cloud, the coordinate information of the at least one reference point corresponding to it serves as the basis. Point cloud acquisition devices are usually set up so that the Z coordinate direction tends to be consistent with the direction of gravity. Based on this prior, and taking the XY plane as an example, the X and Y coordinates of the center position of the first pixel point are taken as the X and Y coordinate values of the lifted target point cloud to be registered, and the Z coordinate of the lowest point (i.e., minimum Z value) among the points corresponding to the first pixel point is taken as its elevation value. The transformation matrix is the rotation-translation matrix between the target point cloud to be registered and the target reference point cloud, and its purpose is to transform the former into the same coordinate system as the latter. The transformed coordinates may be expressed as the following equation:
p_t = R · p_s + T    (1)

wherein p_t and p_s are respectively the target reference point cloud and the target point cloud to be registered in the same matching point cloud pair, and R and T are the rotation matrix and the translation vector, collectively referred to as the transformation matrix as previously described.
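As a minimal illustration of equation (1), the rotation-translation transform can be applied to every point of the point cloud to be registered as follows (a sketch in plain Python with hypothetical helper names and made-up sample values, not tied to any particular point cloud library):

```python
def apply_transform(points, R, T):
    """Apply p_t = R . p_s + T to each 3D point p_s in `points`.

    points: list of (x, y, z) tuples; R: 3x3 rotation matrix as a list of
    rows; T: translation vector (tx, ty, tz).
    """
    transformed = []
    for x, y, z in points:
        px = R[0][0] * x + R[0][1] * y + R[0][2] * z + T[0]
        py = R[1][0] * x + R[1][1] * y + R[1][2] * z + T[1]
        pz = R[2][0] * x + R[2][1] * y + R[2][2] * z + T[2]
        transformed.append((px, py, pz))
    return transformed

# Identity rotation plus a pure translation simply shifts the point.
R_identity = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
T_shift = (1.0, 2.0, 3.0)
print(apply_transform([(0.0, 0.0, 0.0)], R_identity, T_shift))  # → [(1.0, 2.0, 3.0)]
```

Once R and T have been estimated from the matching point cloud pairs, the same loop is run over all points of the point cloud to be registered to bring it into the reference coordinate system.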
According to the three-dimensional point cloud registration method provided by the embodiment of the disclosure, a point cloud to be registered is projected to be a first pixel point on a two-dimensional plane based on a projection mode, a first projection image is generated by using the first pixel point, a reference point cloud is projected to be a second pixel point on the two-dimensional plane, a second projection image is generated by using the second pixel point, and a matching pixel point pair is determined by using the first projection image and the second projection image, so that the three-dimensional point cloud registration problem can be reduced to be a two-dimensional image registration problem, the algorithm complexity and the memory requirement are reduced, the registration efficiency is greatly improved, the registration algorithm can be expanded to a large-range scene, and the quick registration without initial values of hundreds of millions of level point clouds is realized.
Fig. 3 is a flow chart illustrating a three-dimensional point cloud registration method according to an exemplary embodiment. The three-dimensional point cloud registration method 30 provided by the embodiment of the present disclosure may include steps S302 to S306 when determining the first projection image of the point cloud to be registered on the two-dimensional plane.
As shown in fig. 3, in step S302, the two-dimensional plane is divided according to the projection position of the point cloud to be registered on the two-dimensional plane, so as to obtain a plurality of first pixel points.
In the embodiment of the present disclosure, fig. 5 may be taken as an example. When determining the first projection image, each grid cell containing three-dimensional points in fig. 5 (the pixel points indicated in fig. 5) is a first pixel point; likewise, when determining the second projection image, each pixel point containing three-dimensional points in fig. 5 represents a second pixel point. To obtain the plurality of first pixel points, the point clouds to be registered may be projected onto the XY plane according to the (X, Y) coordinates of each point cloud to be registered, and then the minimum values X_min, Y_min and the maximum values X_max, Y_max of the projection points of the point clouds to be registered on the X axis and the Y axis are counted. After this bounding extent is obtained, the XY plane is divided equally at intervals of s within the extent, with (X_min, Y_min) as the starting point, to form a plurality of first pixel points. Each first pixel point can be identified by a row-column number (i, j). For example, the pixel point at the lower left corner in fig. 5 is marked as (0, 0), and the geographic range it covers is (X_min, Y_min) → (X_min + s, Y_min + s). The pixel point (i, j) (the first pixel point in this embodiment) in which each spatial three-dimensional point (X, Y, Z) (the point cloud to be registered in this embodiment) is located can be calculated in turn according to the following formula (2), establishing the correspondence between each spatial three-dimensional point and the pixel point in which it is located, so as to obtain an index relationship table between three-dimensional points and pixel points:

i = ⌊(X − X_min) / s⌋,  j = ⌊(Y − Y_min) / s⌋    (2)

wherein ⌊·⌋ represents rounding down.
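The grid division and index table of formula (2) can be sketched as follows (function names and the sample coordinates are illustrative only; the patent itself does not prescribe an implementation):

```python
import math

def pixel_index(x, y, x_min, y_min, s):
    """Formula (2): map a 3D point's (X, Y) onto its grid cell (i, j)."""
    i = math.floor((x - x_min) / s)
    j = math.floor((y - y_min) / s)
    return i, j

def build_grid_index(points, s):
    """Group 3D points (x, y, z) into grid cells of size s, returning the
    index table {(i, j): [points...]} and the extent origin (x_min, y_min)."""
    x_min = min(p[0] for p in points)
    y_min = min(p[1] for p in points)
    grid = {}
    for p in points:
        key = pixel_index(p[0], p[1], x_min, y_min, s)
        grid.setdefault(key, []).append(p)
    return grid, (x_min, y_min)

points = [(0.2, 0.3, 5.0), (0.8, 0.1, 4.0), (1.4, 0.2, 6.0)]
grid, origin = build_grid_index(points, s=1.0)
print(sorted(grid))  # → [(0, 0), (1, 0)]
```

The resulting dictionary plays the role of the index relationship table between three-dimensional points and the pixel points in which they fall.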
In step S304, a gray value of the first pixel point is determined according to the reflection intensity information of the at least one point cloud to be registered corresponding to the first pixel point.
In the embodiment of the present disclosure, the reflection intensity information of the lowest point cloud to be registered among the at least one point cloud to be registered corresponding to each first pixel point may be determined as the gray value of that first pixel point. When the two-dimensional plane is the XY plane, the lowest point cloud to be registered among the at least one point cloud to be registered corresponding to the first pixel point is the point cloud to be registered with the minimum Z coordinate value.
In step S306, a first projection image is generated according to the gray-scale value of the first pixel point.
In the embodiment of the present disclosure, the gray value of each pixel point may be used as the gray expression in the first projection image of the first pixel point, so as to generate the first projection image. An example diagram of the first projection image may be shown, for example, in fig. 6 (b).
In an exemplary embodiment, in performing step S304, the following steps S3041-S3044 may be included:
step S3041, obtaining reflection intensity information of the point cloud to be registered.
Step S3042, determining the minimum value in the reflection intensity information of the point cloud to be registered as the global minimum reflection intensity of the point cloud to be registered, and determining the maximum value in the reflection intensity information of the point cloud to be registered as the global maximum reflection intensity of the point cloud to be registered.
Step S3043, determining the reflection sub-intensity of the first pixel point according to the reflection intensity information of the at least one point cloud to be registered corresponding to the first pixel point.
Step S3044, determining a gray value of the first pixel point according to the global minimum reflection intensity of the point cloud to be registered, the global maximum reflection intensity of the point cloud to be registered, and the reflection sub-intensity of the first pixel point.
After the reflection intensity information of each first pixel point is obtained, normalization processing can be performed on the reflection intensity information of the at least one point cloud to be registered corresponding to that first pixel point according to formula (3).
v_(i,j) = 255 · (I_(i,j) − I_min) / (I_max − I_min)    (3)

wherein I_(i,j) is the reflection intensity information of the first pixel point before normalization, I_max and I_min are the maximum and minimum reflection intensity values over all point clouds to be registered (i.e., over the entire acquired range), and v_(i,j) is the normalized reflection intensity information of the first pixel point.
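Formula (3) amounts to a global min-max normalization into the 8-bit gray range; a minimal sketch (the intensity values are made-up sample data):

```python
def gray_value(intensity, i_min, i_max):
    """Formula (3): normalize a reflection intensity into a 0-255 gray value
    using the global minimum and maximum reflection intensities."""
    return 255 * (intensity - i_min) / (i_max - i_min)

# Per step S304, each cell's intensity would come from the lowest
# (minimum-Z) point in that cell; here we just use sample intensities.
all_intensities = [10.0, 40.0, 90.0, 10.0]
i_min, i_max = min(all_intensities), max(all_intensities)
print(gray_value(10.0, i_min, i_max))  # → 0.0
print(gray_value(90.0, i_min, i_max))  # → 255.0
```

Writing these gray values into their (i, j) cells yields the first (or second) projection image directly.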
The embodiment shown in fig. 3 of the present disclosure provides a manner of generating the first projection image; the second projection image may be generated through steps similar to steps S302 to S306 or steps S3041 to S3044 of the embodiments of the present disclosure. An example diagram of the second projection image may be as shown in fig. 6(a), for example. For example, when the second projection image of the reference point cloud on the two-dimensional plane is determined, the two-dimensional plane may be divided according to the projection positions of the reference point cloud on the two-dimensional plane to obtain a plurality of second pixel points; the gray value of each second pixel point is determined according to the reflection intensity information of the at least one reference point cloud corresponding to that second pixel point; and the second projection image is generated according to the gray values of the second pixel points.
For another example, when the gray value of the second pixel point is determined according to the reflection intensity information of at least one reference point cloud corresponding to the second pixel point, the reflection intensity information of the reference point cloud can be obtained; determining the minimum value in the reflection intensity information of the reference point cloud as the global minimum reflection intensity of the reference point cloud, and determining the maximum value in the reflection intensity information of the reference point cloud as the global maximum reflection intensity of the reference point cloud; determining the reflection sub-intensity of the second pixel point according to the reflection intensity information of at least one reference point cloud corresponding to the second pixel point; and determining the gray value of the second pixel point according to the global minimum reflection intensity of the reference point cloud, the global maximum reflection intensity of the reference point cloud and the reflection sub-intensity of the second pixel point.
FIG. 8 is a flowchart illustrating a method of three-dimensional point cloud registration, according to an example embodiment. The three-dimensional point cloud registration method 80 provided in the embodiment of the present disclosure may include steps S802 to S808 when determining a matching pixel pair according to a first pixel point of a first projection image and a second pixel point of a second projection image.
In step S802, feature extraction is performed on the first pixel to obtain a first feature vector of the first pixel.
In step S804, feature extraction is performed on the second pixel point to obtain a second feature vector of the second pixel point.
In step S806, the feature distance between the first feature vector of the first pixel and the second feature vector of each second pixel is determined.
In the embodiment of the disclosure, suppose that for the first pixel point A1, all the second pixel points in the second projection image are A2, B2 and C2 in turn; then the feature distance S1 between A1 and A2, the feature distance S2 between A1 and B2, and the feature distance S3 between A1 and C2 can be determined in turn.
In step S808, the first pixel point and the second pixel point with the minimum feature distance are determined as a matching point pair.
In the embodiment of the present disclosure, in connection with the foregoing example, the first pixel point and the second pixel point corresponding to the minimum feature distance among S1, S2, and S3 may be determined as a matching point pair. For example, when S1 is the minimum of S1, S2, and S3, A1 and A2 may be determined as a matching point pair.
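Steps S806 to S808 can be sketched as a nearest-neighbor search over feature vectors (Euclidean distance and the toy 2-D descriptors below are illustrative assumptions; the embodiment's actual descriptors are SURF vectors):

```python
import math

def feature_distance(v1, v2):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(v1, v2)))

def match_min_distance(first_vec, second_vecs):
    """Steps S806-S808: return the index of the second pixel point whose
    feature vector is closest to `first_vec`, plus all sorted distances."""
    dists = [feature_distance(first_vec, v) for v in second_vecs]
    best = min(range(len(dists)), key=lambda k: dists[k])
    return best, sorted(dists)

a1 = [1.0, 0.0]                                  # descriptor of A1
seconds = [[1.0, 0.1], [0.0, 1.0], [5.0, 5.0]]   # descriptors of A2, B2, C2
best, dists = match_min_distance(a1, seconds)
print(best)  # → 0 (A1 matches A2)
```

In practice the linear scan would be replaced by the KD-tree search mentioned below, but the matching criterion is the same.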
In the embodiment of the present disclosure, a Speeded Up Robust Features (SURF) algorithm, which maintains invariance to image scaling, rotation, and affine transformation, may be adopted to perform feature point detection and descriptor computation. SURF is an improvement on the Scale-Invariant Feature Transform (SIFT) algorithm and is several times more efficient than SIFT on the premise of ensuring matching accuracy and robustness. The whole matching process comprises 6 parts: Hessian matrix construction, scale space construction, feature point positioning, feature point main direction calculation, feature descriptor generation, and KD-tree-based feature vector matching. In an exemplary embodiment, a bidirectional matching strategy can further be adopted to obtain more matching point pairs, thereby improving the matching robustness.
In an exemplary embodiment, the feature distance ratio of the minimum value to the second smallest value among the feature distances may be determined, and if the feature distance ratio is greater than a feature distance ratio threshold, the matching point pair is rejected. In connection with the foregoing example, when S1 < S2 < S3, S1 is the minimum value among the feature distances and S2 is the second smallest value, so the feature distance ratio is S1/S2. When the feature distance ratio S1/S2 is greater than the feature distance ratio threshold, it is considered that, for the first pixel point A1, both second pixel points A2 and B2 could plausibly form a matching point pair with A1. In this case, the reliability of directly determining A1 and A2 as a matching point pair is low, so the matching point pair (A1, A2) can be removed to improve the matching robustness.
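The rejection rule above is the classic Lowe-style ratio test; a minimal sketch (the 0.8 default threshold is an illustrative assumption, not a value mandated by this disclosure):

```python
def ratio_test(distances, ratio_threshold=0.8):
    """Keep a match only if the smallest feature distance is clearly smaller
    than the second smallest; otherwise the match is ambiguous and rejected."""
    d = sorted(distances)
    if len(d) < 2:
        return True  # no competitor, nothing to compare against
    return d[0] / d[1] <= ratio_threshold

print(ratio_test([0.1, 1.4, 6.4]))  # → True  (unambiguous match, kept)
print(ratio_test([0.9, 1.0, 6.4]))  # → False (ambiguous match, rejected)
```

A smaller threshold rejects more aggressively, trading match count for robustness.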
In an exemplary embodiment, a homography transformation between the first projection image and the second projection image may also be determined, and the matching point pairs whose projection error after the homography transformation is greater than an error pixel threshold are eliminated. For example, a random sample consensus (RANSAC) algorithm may be used to establish the homography between the first and second projection images, and any matching point pair whose projection error after the homography exceeds the error pixel threshold may be identified as an outlier and filtered out. FIG. 7 is an exemplary diagram illustrating an effect of feature matching and error point filtering according to an example. In this embodiment, the culling is performed using the RANSAC algorithm based on a three-dimensional Euclidean transform. Finally, based on all the inlier matching pairs remaining after the false points are eliminated, centimeter-level large-range point cloud accurate registration can be realized through this effective feature matching and multi-level gross error elimination mechanism.
FIG. 9 is a flowchart illustrating a method of three-dimensional point cloud registration, according to an example embodiment. The three-dimensional point cloud registration method 90 provided in the embodiment of the present disclosure may include steps S902 to S906 when determining a transformation matrix according to at least one point cloud to be registered corresponding to a first pixel point in a matching pixel point and at least one reference point cloud corresponding to a second pixel point in the matching pixel point.
In step S902, the point cloud to be registered having the lowest elevation value in the point clouds to be registered corresponding to the first pixel point in the matching pixel points is determined as the first point cloud.
In step S904, the reference point cloud having the lowest elevation value in the reference point clouds corresponding to the second pixel point in the matching pixel points is determined as the second point cloud.
In step S906, a transformation matrix is determined using the first point cloud and the second point cloud as a matching point cloud pair.
A Levenberg-Marquardt (LM) algorithm may be adopted to iteratively calculate the 6-degree-of-freedom transformation matrix between the two point clouds of the matching point cloud pairs, and the calculated transformation matrix is then applied to all three-dimensional points in the point cloud to be registered, realizing three-dimensional registration between the point clouds.
According to the embodiment of the disclosure, the two-dimensional first pixel point and the two-dimensional second pixel point are respectively promoted to the first point cloud and the second point cloud in the three-dimensional space, and the transformation matrix is determined based on the first point cloud and the second point cloud. The method can convert the matching point pairs obtained by two-dimensional image registration into three-dimensional matching point cloud pairs, further realize the conversion of the three-dimensional point cloud registration problem into the two-dimensional image registration problem, and realize quick registration without initial values.
In an exemplary embodiment, when the transformation matrix is determined according to at least one point cloud to be registered corresponding to a first pixel point in the matching pixel points and at least one reference point cloud corresponding to a second pixel point in the matching pixel points, a matching point cloud pair can be further obtained according to the matching point pair in formula (4).
X_(i,j) = X_min + (i + 1/2) · s,  Y_(i,j) = Y_min + (j + 1/2) · s    (4)

wherein (i, j) are the coordinates of a pixel point (the first pixel point when the first point cloud is determined, the second pixel point when the second point cloud is determined), and (X_(i,j), Y_(i,j)) are the corresponding X and Y coordinates in three-dimensional space of that pixel point, i.e., of the cell center. The three-dimensional Z coordinate Z_(i,j) may take the elevation value (i.e., the Z coordinate value) of the three-dimensional point with the minimum Z coordinate value among the point clouds to be registered or the reference point clouds corresponding to the pixel point shown in fig. 5.
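The lifting of a matched pixel point back into three-dimensional space per formula (4) can be sketched as follows (function name and sample cell data are illustrative assumptions):

```python
def lift_pixel(i, j, x_min, y_min, s, cell_points):
    """Formula (4): lift pixel (i, j) back into 3D, taking the cell center
    for X and Y and the lowest contained point's Z as the elevation value."""
    x = x_min + (i + 0.5) * s
    y = y_min + (j + 0.5) * s
    z = min(p[2] for p in cell_points)  # minimum-Z point in the cell
    return x, y, z

cell = [(0.2, 0.3, 5.0), (0.8, 0.1, 4.0)]  # 3D points falling in pixel (0, 0)
print(lift_pixel(0, 0, 0.0, 0.0, 1.0, cell))  # → (0.5, 0.5, 4.0)
```

Applying this to the first and second pixel points of each matching pixel point pair yields the matching point cloud pairs fed into the transformation estimation.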
FIG. 10 is a flow chart illustrating a method of three-dimensional point cloud registration in accordance with an exemplary embodiment. The three-dimensional point cloud registration method 100 provided by the embodiment of the disclosure may include three technical links: point cloud projection 1002, feature extraction and matching 1004, and three-dimensional space transformation 1006. Point cloud projection 1002 mainly performs the mapping from the three-dimensional point clouds (the point cloud to be registered and the reference point cloud) to the two-dimensional images (the first projection image and the second projection image), where the grid index records the correspondence between each grid cell (i.e., first pixel point or second pixel point) and the point clouds to be registered or reference point clouds it contains. Feature extraction and matching 1004 calculates corresponding (same-name) pixel point pairs based on the mapped two-dimensional images, and three-dimensional space transformation 1006 finally completes the registration of the two groups of point clouds through the three-dimensional mapping and transformation of the corresponding pixel point pairs. Fig. 4 shows an effect diagram of the point cloud to be registered and the reference point cloud after registration by using a specific example: fig. 4(a) is an effect diagram of the reference point cloud, fig. 4(b) is an effect diagram of the point cloud to be registered, and fig. 4(c) is an effect diagram of the point cloud after registration. According to the three-dimensional point cloud registration method provided by the embodiment of the disclosure, aiming at the registration problem of large-range three-dimensional dense point clouds, the complicated three-dimensional point cloud registration problem is converted into a two-dimensional image registration problem, providing a three-dimensional point cloud registration method based on orthographic projection that requires no initial value.
A large number of experiments prove that the method provided by the invention can effectively solve the problem of large-range (kilometer level) three-dimensional dense point cloud registration. In particular, the method provided by the invention has obvious advantages in the aspects of registration efficiency and registration accuracy compared with the prior art. In the aspect of efficiency, the three-dimensional point cloud registration problem is converted into the two-dimensional image registration problem, the registration efficiency is greatly improved, and the quick registration without initial values of hundreds of millions of levels of point clouds is realized; in the aspect of registration precision, the invention designs an effective feature matching and multi-level gross error rejection mechanism on the basis of RANSAC algorithm, and realizes centimeter-level large-range point cloud precise registration.
It should be clearly understood that this disclosure describes how to make and use particular examples, but the principles of this disclosure are not limited to any details of these examples. Rather, these principles can be applied to many other embodiments based on the teachings of the present disclosure.
Those skilled in the art will appreciate that all or part of the steps for implementing the above embodiments may be implemented as a computer program executed by a Central Processing Unit (CPU). When executed by the CPU, the computer program performs the above-described functions defined by the methods provided by the present disclosure. The program may be stored in a computer-readable storage medium, which may be a read-only memory, a magnetic disk, an optical disk, or the like.
Furthermore, it should be noted that the above-mentioned figures are only schematic illustrations of the processes involved in the methods according to exemplary embodiments of the present disclosure, and are not intended to be limiting. It will be readily understood that the processes shown in the above figures are not intended to indicate or limit the chronological order of the processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, e.g., in multiple modules.
The following are embodiments of the disclosed apparatus that may be used to perform embodiments of the disclosed methods. For details not disclosed in the embodiments of the apparatus of the present disclosure, refer to the embodiments of the method of the present disclosure.
Fig. 11 is a block diagram illustrating a three-dimensional point cloud registration apparatus according to an exemplary embodiment. Referring to fig. 11, a three-dimensional point cloud registration apparatus 1100 provided by an embodiment of the present disclosure may include: a point cloud acquisition module 1102, a first image module 1104, a second image module 1106, a pixel point pair module 1108, and a point cloud registration module 1110.
In the three-dimensional point cloud registration apparatus 1100, a point cloud obtaining module 1102 may be configured to obtain a point cloud to be registered and a reference point cloud.
The first image module 1104 may be configured to determine a first projection image of the point cloud to be registered on the two-dimensional plane, the first projection image including a plurality of first pixel points, each corresponding to at least one point cloud to be registered.
The second image module 1106 may be configured to determine a second projected image of the reference point cloud in the two-dimensional plane, the second projected image including a plurality of second pixel points, each corresponding to at least one reference point cloud.
The pixel point pair module 1108 may be configured to determine matched pixel point pairs from a first pixel point of the first projected image and a second pixel point of the second projected image, each matched pixel point pair including a first pixel point and a second pixel point.
The point cloud registration module 1110 may be configured to determine a transformation matrix according to at least one point cloud to be registered corresponding to a first pixel point in the matching pixel points and at least one reference point cloud corresponding to a second pixel point in the matching pixel points, so as to perform a point cloud registration operation on the point cloud to be registered according to the transformation matrix.
According to the three-dimensional point cloud registration device provided by the embodiment of the disclosure, a point cloud to be registered is projected to be a first pixel point on a two-dimensional plane based on a projection mode, a first projection image is generated by using the first pixel point, a reference point cloud is projected to be a second pixel point on the two-dimensional plane, a second projection image is generated by using the second pixel point, and a matching pixel point pair is determined by using the first projection image and the second projection image, so that the three-dimensional point cloud registration problem can be reduced to be a two-dimensional image registration problem, the algorithm complexity and the memory requirement are reduced, the registration efficiency is greatly improved, the registration algorithm can be expanded to a large-range scene, and the quick registration without initial values of hundreds of millions of level point clouds is realized.
In an exemplary embodiment, the first image module may include: the plane division submodule can be configured to divide the two-dimensional plane according to the projection position of the point cloud to be registered on the two-dimensional plane to obtain a plurality of first pixel points; the gray value sub-module can be configured to determine the gray value of the first pixel point according to the reflection intensity information of at least one point cloud to be registered corresponding to the first pixel point; and a first image sub-module configurable to generate a first projected image according to the gray value of the first pixel point.
In an exemplary embodiment, the gray value sub-module may include: a reflection intensity acquisition unit configured to acquire reflection intensity information of the point cloud to be registered; the global reflection intensity unit can be configured to determine the minimum value in the reflection intensity information of the point cloud to be registered as the global minimum reflection intensity of the point cloud to be registered, and determine the maximum value in the reflection intensity information of the point cloud to be registered as the global maximum reflection intensity of the point cloud to be registered; the reflection sub-intensity unit can be configured to determine the reflection sub-intensity of the first pixel point according to the reflection intensity information of at least one point cloud to be registered corresponding to the first pixel point; and the gray value unit can be configured to determine the gray value of the first pixel point according to the global minimum reflection intensity of the point cloud to be registered, the global maximum reflection intensity of the point cloud to be registered and the reflection sub-intensity of the first pixel point.
In an exemplary embodiment, the pixel point pair module may include: the first feature vector submodule can be configured to extract features of the first pixel point to obtain a first feature vector of the first pixel point; the second feature vector submodule can be configured to extract features of the second pixel point to obtain a second feature vector of the second pixel point; a feature distance submodule configurable to determine a feature distance of a first feature vector of the first pixel point and a second feature vector of each second pixel point; and the matching point pair submodule can be configured to determine the first pixel point and the second pixel point with the minimum characteristic distance as a matching point pair.
In an exemplary embodiment, the three-dimensional point cloud registration apparatus may further include: a feature distance ratio module configured to determine a feature distance ratio of a minimum value and a next minimum value in the feature distances; the first matching point pair rejection module may be configured to reject the matching point pair if the feature distance ratio is greater than the feature distance ratio threshold.
In an exemplary embodiment, the three-dimensional point cloud registration apparatus may further include: a homographic transformation module configurable to determine a homographic transformation of the first projection image and the second projection image; and the second matching point pair elimination module can be configured to eliminate the matching point pairs with the error larger than the error pixel threshold value after homography transformation.
In an exemplary embodiment, the point cloud registration module may include: the first point cloud submodule can be configured to determine a point cloud to be registered with the lowest elevation value in point clouds to be registered corresponding to the first pixel point in the matching pixel points as a first point cloud; the second point cloud submodule can be configured to determine the reference point cloud with the lowest elevation value in the reference point clouds corresponding to the second pixel points in the matching pixel points as the second point cloud; a transformation matrix sub-module configurable to determine a transformation matrix using the first point cloud and the second point cloud as a matching point cloud pair.
An electronic device 1200 according to this embodiment of the invention is described below with reference to fig. 12. The electronic device 1200 shown in fig. 12 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present invention.
As shown in fig. 12, the electronic device 1200 is embodied in the form of a general purpose computing device. The components of the electronic device 1200 may include, but are not limited to: the at least one processing unit 1210, the at least one memory unit 1220, and a bus 1230 connecting the various system components including the memory unit 1220 and the processing unit 1210.
Wherein the memory unit stores program code that is executable by the processing unit 1210 such that the processing unit 1210 performs steps according to various exemplary embodiments of the present invention as described in the above section "exemplary methods" of the present specification. For example, the processing unit 1210 may execute step S202 shown in fig. 1, and acquire a point cloud to be registered and a reference point cloud; step S204, determining a first projection image of the point cloud to be registered on the two-dimensional plane, wherein the first projection image comprises a plurality of first pixel points, and each first pixel point corresponds to at least one point cloud to be registered; step S206, determining a second projection image of the reference point cloud on the two-dimensional plane, wherein the second projection image comprises a plurality of second pixel points, and each second pixel point corresponds to at least one reference point cloud; step S208, determining matched pixel point pairs according to a first pixel point of the first projection image and a second pixel point of the second projection image, wherein each matched pixel point pair comprises a first pixel point and a second pixel point; step S210, determining a transformation matrix according to at least one point cloud to be registered corresponding to a first pixel point in a matching pixel point and at least one reference point cloud corresponding to a second pixel point in the matching pixel point, and performing point cloud registration operation on the point cloud to be registered according to the transformation matrix.
The storage unit 1220 may include a readable medium in the form of a volatile memory unit, such as a random access memory unit (RAM) 12201 and/or a cache memory unit 12202, and may further include a read-only memory unit (ROM) 12203.
Storage unit 1220 may also include a program/utility 12204 having a set (at least one) of program modules 12205, such program modules 12205 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
Bus 1230 may be one or more of several types of bus structures, including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, a processing unit, or a local bus using any of a variety of bus architectures.
The electronic device 1200 may also communicate with one or more external devices 1300 (e.g., keyboard, pointing device, bluetooth device, etc.), with one or more devices that enable a user to interact with the electronic device 1200, and/or with any devices (e.g., router, modem, etc.) that enable the electronic device 1200 to communicate with one or more other computing devices. Such communication may occur via input/output (I/O) interfaces 1250. Also, the electronic device 1200 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network such as the Internet) via the network adapter 1260. As shown, the network adapter 1260 communicates with the other modules of the electronic device 1200 via the bus 1230. It should be appreciated that although not shown, other hardware and/or software modules may be used in conjunction with the electronic device 1200, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a USB flash drive, a removable hard disk, etc.) or on a network, and includes several instructions to enable a computing device (which may be a personal computer, a server, a terminal device, or a network device, etc.) to execute the method according to the embodiments of the present disclosure.
In an exemplary embodiment of the present disclosure, there is also provided a computer-readable storage medium having stored thereon a program product capable of implementing the above-described method of the present specification. In some possible embodiments, aspects of the invention may also be implemented in the form of a program product comprising program code means for causing a terminal device to carry out the steps according to various exemplary embodiments of the invention described in the above section "exemplary methods" of the present description, when said program product is run on the terminal device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
A computer readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user's computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., through the Internet using an Internet service provider).
Furthermore, the above-described figures are merely schematic illustrations of processes involved in methods according to exemplary embodiments of the invention, and are not intended to be limiting. It will be readily understood that the processes shown in the above figures are not intended to indicate or limit the chronological order of the processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, e.g., in multiple modules.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.

Claims (10)

1. A three-dimensional point cloud registration method, characterized by comprising:
acquiring a point cloud to be registered and a reference point cloud;
determining a first projection image of the point cloud to be registered on a two-dimensional plane, wherein the first projection image comprises a plurality of first pixel points, and each first pixel point corresponds to at least one point of the point cloud to be registered;
determining a second projection image of the reference point cloud on the two-dimensional plane, wherein the second projection image comprises a plurality of second pixel points, and each second pixel point corresponds to at least one point of the reference point cloud;
determining matched pixel point pairs according to the first pixel points of the first projection image and the second pixel points of the second projection image, wherein each matched pixel point pair comprises one first pixel point and one second pixel point; and
determining a transformation matrix according to the at least one point corresponding to the first pixel point in a matched pixel point pair and the at least one point corresponding to the second pixel point in the matched pixel point pair, so as to perform a point cloud registration operation on the point cloud to be registered according to the transformation matrix.
2. The method of claim 1, wherein determining the first projection image of the point cloud to be registered on the two-dimensional plane comprises:
dividing the two-dimensional plane according to the projection positions of the point cloud to be registered on the two-dimensional plane to obtain the plurality of first pixel points;
determining a gray value of each first pixel point according to reflection intensity information of the at least one point corresponding to the first pixel point; and
generating the first projection image according to the gray values of the first pixel points.
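As an illustration only (code is not part of the patent text), the plane division in this claim amounts to quantizing the projected XY coordinates of the points into pixel cells; the NumPy data layout and the pixel size below are assumptions:

```python
import numpy as np

def project_to_pixels(points, pixel_size=1.0):
    """Project 3D points onto the XY plane and group them by pixel cell.

    points: (N, 3) array of XYZ coordinates (layout assumed for this sketch).
    Returns a dict mapping (row, col) pixel indices to lists of point
    indices, so each pixel corresponds to at least one projected point.
    """
    cells = {}
    # Quantize XY coordinates into integer pixel indices.
    idx = np.floor(points[:, :2] / pixel_size).astype(int)
    for i, (c, r) in enumerate(idx):
        cells.setdefault((r, c), []).append(i)
    return cells

pts = np.array([[0.2, 0.3, 5.0],   # point 0
                [0.4, 0.1, 6.0],   # point 1: same pixel as point 0
                [1.5, 0.2, 4.0]])  # point 2: neighbouring pixel
pixels = project_to_pixels(pts, pixel_size=1.0)
```

A second projection image for the reference point cloud would be produced the same way, so that both images share the same pixel grid.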
3. The method of claim 2, wherein determining the gray value of the first pixel point according to the reflection intensity information of the at least one point corresponding to the first pixel point comprises:
acquiring the reflection intensity information of the point cloud to be registered;
determining the minimum value of the reflection intensity information as the global minimum reflection intensity of the point cloud to be registered, and determining the maximum value of the reflection intensity information as the global maximum reflection intensity of the point cloud to be registered;
determining a reflection sub-intensity of the first pixel point according to the reflection intensity information of the at least one point corresponding to the first pixel point; and
determining the gray value of the first pixel point according to the global minimum reflection intensity, the global maximum reflection intensity, and the reflection sub-intensity of the first pixel point.
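A minimal sketch of this gray-value computation (not the patent's implementation; taking the pixel's reflection sub-intensity as the mean of its points' intensities is an assumed choice, since the claim leaves the aggregation rule open):

```python
import numpy as np

def pixel_gray_values(intensities, pixel_to_points):
    """Turn per-point reflection intensities into per-pixel gray values.

    intensities: (N,) reflection intensity of each point.
    pixel_to_points: dict mapping a pixel index to the indices of the
    points projected into it.
    The pixel's reflection sub-intensity is computed here as the mean of
    its points' intensities, then min-max normalized between the global
    minimum and maximum reflection intensities and scaled to 0-255.
    """
    lo, hi = float(np.min(intensities)), float(np.max(intensities))
    gray = {}
    for pix, ids in pixel_to_points.items():
        sub = float(np.mean(intensities[ids]))
        gray[pix] = int(round(255 * (sub - lo) / (hi - lo))) if hi > lo else 0
    return gray

g = pixel_gray_values(np.array([10.0, 30.0, 20.0]),
                      {(0, 0): [0], (0, 1): [1], (0, 2): [2]})
```

Normalizing by the global extremes makes the two projection images comparable even when their absolute intensity ranges differ.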
4. The method of claim 1, wherein determining the matched pixel point pairs according to the first pixel points of the first projection image and the second pixel points of the second projection image comprises:
extracting characteristics of the first pixel points to obtain a first characteristic vector of each first pixel point;
extracting characteristics of the second pixel points to obtain a second characteristic vector of each second pixel point;
determining the characteristic distance between the first characteristic vector of a first pixel point and the second characteristic vector of each second pixel point; and
determining the first pixel point and the second pixel point with the minimum characteristic distance as a matched point pair.
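The nearest-neighbour matching step can be sketched as follows (illustration only; the patent does not specify the feature extractor, so the code assumes feature vectors are already given as NumPy arrays and uses Euclidean distance):

```python
import numpy as np

def match_pixels(feats_a, feats_b):
    """For every first pixel, pick the second pixel whose characteristic
    vector lies at minimum Euclidean distance, yielding candidate
    matched pixel point pairs as (i, j) index tuples.

    feats_a: (M, D) characteristic vectors of the first pixel points.
    feats_b: (K, D) characteristic vectors of the second pixel points.
    """
    # Pairwise distance matrix of shape (M, K) via broadcasting.
    d = np.linalg.norm(feats_a[:, None, :] - feats_b[None, :, :], axis=2)
    return [(i, int(np.argmin(d[i]))) for i in range(len(feats_a))]

a = np.array([[0.0, 0.0], [1.0, 1.0]])  # first-image characteristic vectors
b = np.array([[1.1, 0.9], [0.1, 0.0]])  # second-image characteristic vectors
pairs = match_pixels(a, b)  # each first pixel paired with its nearest match
```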
5. The method of claim 4, further comprising:
determining a characteristic distance ratio of the minimum value to the second minimum value among the characteristic distances; and
rejecting the matched point pair if the characteristic distance ratio is greater than a characteristic distance ratio threshold.
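This rejection rule can be illustrated as below (a sketch, not the patent's code; the 0.8 default is an assumed value, since the claim only names a threshold):

```python
import numpy as np

def passes_ratio_test(distances, ratio_threshold=0.8):
    """If the ratio of the smallest to the second-smallest characteristic
    distance exceeds the threshold, the best match is not clearly better
    than the runner-up, so the matched point pair is rejected."""
    d = np.sort(np.asarray(distances, dtype=float))
    return d[0] / d[1] <= ratio_threshold

keep_a = passes_ratio_test([0.2, 1.0, 1.5])  # distinct best match: kept
keep_b = passes_ratio_test([0.9, 1.0, 1.5])  # ambiguous match: rejected
```

The intuition is that a reliable match should be much closer than the next-best candidate; a ratio near 1 signals an ambiguous, likely wrong, correspondence.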
6. The method of claim 4, further comprising:
determining a homography transformation between the first projection image and the second projection image; and
rejecting matched point pairs whose error after the homography transformation is greater than an error pixel threshold.
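The homography-based rejection can be sketched as follows (illustration only; in practice the homography H would be estimated from the candidate pairs, e.g. with RANSAC, and the 3-pixel threshold is an assumed value):

```python
import numpy as np

def reject_by_homography(H, pts_a, pts_b, pixel_threshold=3.0):
    """Keep only matched pixel pairs whose reprojection error under the
    homography H (mapping image 1 coordinates to image 2) stays within
    pixel_threshold; pairs above the threshold are rejected as outliers.

    pts_a, pts_b: (N, 2) matched pixel coordinates in the two images.
    """
    ones = np.ones((len(pts_a), 1))
    proj = np.hstack([pts_a, ones]) @ H.T   # apply H in homogeneous form
    proj = proj[:, :2] / proj[:, 2:3]       # back to Cartesian pixels
    err = np.linalg.norm(proj - pts_b, axis=1)
    return err <= pixel_threshold

H = np.eye(3)  # identity homography, for illustration only
keep = reject_by_homography(H,
                            np.array([[0.0, 0.0], [5.0, 5.0]]),
                            np.array([[1.0, 0.0], [50.0, 5.0]]))
```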
7. The method of claim 1, wherein determining the transformation matrix according to the at least one point corresponding to the first pixel point in the matched pixel point pair and the at least one point corresponding to the second pixel point in the matched pixel point pair comprises:
determining, among the points of the point cloud to be registered corresponding to the first pixel point in the matched pixel point pair, the point with the lowest elevation value as a first point;
determining, among the points of the reference point cloud corresponding to the second pixel point in the matched pixel point pair, the point with the lowest elevation value as a second point; and
determining the transformation matrix using the first point and the second point as a matched point pair.
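Claim 7 reduces each matched pixel pair to one 3D point pair (the lowest-elevation points); from a set of such correspondences a rigid transformation can be estimated. The patent does not name a solver, so the Kabsch/SVD method below is one standard choice, shown purely as an illustration:

```python
import numpy as np

def rigid_transform(src, dst):
    """Estimate rotation R and translation t minimizing ||R @ s + t - d||
    over matched 3D point pairs, via the Kabsch/SVD method.

    src, dst: (N, 3) arrays of corresponding points.
    """
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)     # centroids
    H = (src - cs).T @ (dst - cd)                   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Correction term guards against a reflection (det = -1) solution.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cd - R @ cs
    return R, t

src = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
R, t = rigid_transform(src, src + [2.0, -1.0, 0.5])  # pure translation
```

With the transformation recovered, the registration operation of claim 1 is simply applying `R @ p + t` to every point of the point cloud to be registered.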
8. A three-dimensional point cloud registration apparatus, characterized by comprising:
a point cloud acquisition module configured to acquire a point cloud to be registered and a reference point cloud;
a first image module configured to determine a first projection image of the point cloud to be registered on a two-dimensional plane, wherein the first projection image comprises a plurality of first pixel points, and each first pixel point corresponds to at least one point of the point cloud to be registered;
a second image module configured to determine a second projection image of the reference point cloud on the two-dimensional plane, wherein the second projection image comprises a plurality of second pixel points, and each second pixel point corresponds to at least one point of the reference point cloud;
a pixel point pair module configured to determine matched pixel point pairs according to the first pixel points of the first projection image and the second pixel points of the second projection image, wherein each matched pixel point pair comprises one first pixel point and one second pixel point; and
a point cloud registration module configured to determine a transformation matrix according to the at least one point corresponding to the first pixel point in a matched pixel point pair and the at least one point corresponding to the second pixel point in the matched pixel point pair, so as to perform a point cloud registration operation on the point cloud to be registered according to the transformation matrix.
9. An electronic device, comprising:
one or more processors;
storage means for storing one or more programs;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-7.
10. A computer-readable medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method of any one of claims 1-7.
CN202110040721.5A 2021-01-13 2021-01-13 Three-dimensional point cloud registration method and device, electronic equipment and readable medium Active CN113793370B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110040721.5A CN113793370B (en) 2021-01-13 2021-01-13 Three-dimensional point cloud registration method and device, electronic equipment and readable medium

Publications (2)

Publication Number Publication Date
CN113793370A true CN113793370A (en) 2021-12-14
CN113793370B CN113793370B (en) 2024-04-19

Family

ID=78876819

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110040721.5A Active CN113793370B (en) 2021-01-13 2021-01-13 Three-dimensional point cloud registration method and device, electronic equipment and readable medium

Country Status (1)

Country Link
CN (1) CN113793370B (en)

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103295239A (en) * 2013-06-07 2013-09-11 北京建筑工程学院 Laser-point cloud data automatic registration method based on plane base images
CN107316325A (en) * 2017-06-07 2017-11-03 华南理工大学 A kind of airborne laser point cloud based on image registration and Image registration fusion method
US20180101932A1 (en) * 2016-10-11 2018-04-12 The Boeing Company System and method for upsampling of sparse point cloud for 3d registration
CN109272537A (en) * 2018-08-16 2019-01-25 清华大学 A kind of panorama point cloud registration method based on structure light
CN109410256A (en) * 2018-10-29 2019-03-01 北京建筑大学 Based on mutual information cloud and image automatic, high precision method for registering
CN109493375A (en) * 2018-10-24 2019-03-19 深圳市易尚展示股份有限公司 The Data Matching and merging method of three-dimensional point cloud, device, readable medium
CN110111414A (en) * 2019-04-10 2019-08-09 北京建筑大学 A kind of orthography generation method based on three-dimensional laser point cloud
CN110852979A (en) * 2019-11-12 2020-02-28 广东省智能机器人研究院 Point cloud registration and fusion method based on phase information matching
CN110853081A (en) * 2019-11-18 2020-02-28 武汉数智云绘技术有限公司 Ground and airborne LiDAR point cloud registration method based on single-tree segmentation
CN110930443A (en) * 2019-11-27 2020-03-27 中国科学院深圳先进技术研究院 Image registration method and device and terminal equipment
CN110942476A (en) * 2019-10-17 2020-03-31 湖南大学 Improved three-dimensional point cloud registration method and system based on two-dimensional image guidance and readable storage medium
CN112001955A (en) * 2020-08-24 2020-11-27 深圳市建设综合勘察设计院有限公司 Point cloud registration method and system based on two-dimensional projection plane matching constraint
CN112184783A (en) * 2020-09-22 2021-01-05 西安交通大学 Three-dimensional point cloud registration method combined with image information

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHAO Fuqun; ZHOU Mingquan: "Point cloud registration method based on two-dimensional image features", Bulletin of Surveying and Mapping, no. 10, pages 39-42 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115239776A (en) * 2022-07-14 2022-10-25 阿波罗智能技术(北京)有限公司 Point cloud registration method, device, equipment and medium
CN115409880A (en) * 2022-08-31 2022-11-29 深圳前海瑞集科技有限公司 Workpiece data registration method and device, electronic equipment and storage medium
CN115409880B (en) * 2022-08-31 2024-03-22 深圳前海瑞集科技有限公司 Workpiece data registration method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN113793370B (en) 2024-04-19

Similar Documents

Publication Publication Date Title
CN107330439B (en) Method for determining posture of object in image, client and server
CN111523414B (en) Face recognition method, device, computer equipment and storage medium
CN109582880B (en) Interest point information processing method, device, terminal and storage medium
KR101548928B1 (en) Invariant visual scene and object recognition
US11145080B2 (en) Method and apparatus for three-dimensional object pose estimation, device and storage medium
CN111539428A (en) Rotating target detection method based on multi-scale feature integration and attention mechanism
JP2013025799A (en) Image search method, system, and program
WO2021217924A1 (en) Method and apparatus for identifying vehicle type at traffic checkpoint, and device and storage medium
JP2015504215A (en) Method and system for comparing images
CN111932577B (en) Text detection method, electronic device and computer readable medium
WO2023142602A1 (en) Image processing method and apparatus, and computer-readable storage medium
CN113793370B (en) Three-dimensional point cloud registration method and device, electronic equipment and readable medium
Huang et al. A coarse-to-fine algorithm for registration in 3D street-view cross-source point clouds
CN114782499A (en) Image static area extraction method and device based on optical flow and view geometric constraint
CN111914756A (en) Video data processing method and device
CN110910375A (en) Detection model training method, device, equipment and medium based on semi-supervised learning
CN112907569A (en) Head image area segmentation method and device, electronic equipment and storage medium
CN112767478A (en) Appearance guidance-based six-degree-of-freedom pose estimation method
CN111738319A (en) Clustering result evaluation method and device based on large-scale samples
CN111709269B (en) Human hand segmentation method and device based on two-dimensional joint information in depth image
CN117237681A (en) Image processing method, device and related equipment
CN113160258B (en) Method, system, server and storage medium for extracting building vector polygon
CN113763468B (en) Positioning method, device, system and storage medium
Zhang et al. Edge detection from RGB-D image based on structured forests
CN110399892B (en) Environmental feature extraction method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant