CN110991526A - Non-iterative point cloud matching method, medium, terminal and device - Google Patents


Info

Publication number
CN110991526A
CN110991526A (application CN201911207626.9A)
Authority
CN
China
Prior art keywords: point cloud, matrix, amplitude value, grid, reference point
Prior art date
Legal status: Granted
Application number
CN201911207626.9A
Other languages
Chinese (zh)
Other versions
CN110991526B (en)
Inventor
蔡龙生
Current Assignee
Shanghai Yogo Robot Co Ltd
Original Assignee
Shanghai Yogo Robot Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Yogo Robot Co Ltd filed Critical Shanghai Yogo Robot Co Ltd
Priority to CN201911207626.9A priority Critical patent/CN110991526B/en
Publication of CN110991526A publication Critical patent/CN110991526A/en
Application granted granted Critical
Publication of CN110991526B publication Critical patent/CN110991526B/en
Current legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V 10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V 10/751 Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10028 Range image; Depth image; 3D point clouds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20048 Transform domain processing

Abstract

The invention discloses a non-iterative point cloud matching method, medium, terminal and device. The method comprises the following steps: acquiring two adjacent frames of point cloud data; performing coordinate transformation on the two frames to generate a first grid matrix and a second grid matrix; performing Fourier transform on the first and second grid matrices to generate a first amplitude value matrix and a second amplitude value matrix; and transforming the two amplitude value matrices with a phase correlation method to generate an impulse function characterizing the translation and rotation between the two point cloud frames, from whose peak position the relative transformation of the two frames is obtained. By treating the laser point cloud as a binary image, the translation and rotation between two frames can be computed with the Fourier transform and phase correlation algorithms from image processing. The method is fast and readily recovers the true relative transformation, providing an effective guarantee for rapid computation of the global position of an autonomous mobile robot in a scene.

Description

Non-iterative point cloud matching method, medium, terminal and device
[ technical field ]
The invention relates to the field of navigation positioning, in particular to a non-iterative point cloud matching method, medium, terminal and device.
[ background of the invention ]
As research on autonomous mobile robots matures, their deployment in commercial environments has become practical. One indispensable capability for operating an autonomous mobile robot in a variety of scenarios is determining where the robot is located in the scene. The positioning method currently popular in industry matches the point cloud scanned by the laser against a previously built grid map to obtain global position information. Because some scenes are very large while the laser scanning range is limited, matching the current laser point cloud against the grid map incurs an enormous search cost. Note that a point cloud obtained by laser scanning describes the contour of the environment around the robot; it essentially gives the distances from the robot to the edges of the environment. Since the scene, and hence the grid map, does not move while the robot does, the relative displacement between two adjacent frames of laser point cloud can be computed. From the starting point and this sequence of relative displacements, the robot's trajectory can be recovered. Making full use of the point cloud information is therefore a basic capability for autonomous movement.
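The dead-reckoning idea above, chaining relative displacements onto a known starting pose, can be sketched as follows (the (x, y, θ) pose convention and all names are illustrative, not taken from the patent):

```python
import numpy as np

def compose(pose, delta):
    """Chain a relative motion (dx, dy, dtheta), expressed in the robot's
    current body frame, onto a global pose (x, y, theta)."""
    x, y, th = pose
    dx, dy, dth = delta
    return (x + dx * np.cos(th) - dy * np.sin(th),
            y + dx * np.sin(th) + dy * np.cos(th),
            th + dth)

# Starting at the origin, two relative motions (as scan matching would
# output them) are chained into a global trajectory.
trajectory = [(0.0, 0.0, 0.0)]
for delta in [(1.0, 0.0, np.pi / 2), (1.0, 0.0, 0.0)]:
    trajectory.append(compose(trajectory[-1], delta))
```

The first motion drives 1 m forward and turns 90 degrees; the second drives 1 m forward again, which in the global frame is now a move along the Y axis.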
The algorithm that computes the relative displacement from two adjacent frames of laser point cloud is the so-called point cloud matching algorithm. Point cloud matching is also an important component of robot Simultaneous Localization and Mapping (SLAM). Two successive point cloud frames have very similar shapes. Owing to the rigid nature of the environment contour, point cloud matching outputs the rotation and translation between the two frames. Given the position coordinates of one frame, the position of the other frame can then be calculated; by induction, the robot's trajectory can be obtained from its laser-scanned point cloud sequence using a point cloud matching algorithm. The basic idea of point cloud matching is to compute a rigid transformation (rotation and translation) that makes the two frames overlap as much as possible. A typical scheme labels the two frames as the reference frame and the current frame, selects point pairs in the current frame corresponding to the reference frame, builds an error function over all point pairs, solves the error function for the rigid transformation with an optimization algorithm, applies the resulting transformation to the current frame, and repeats this process until the computed rigid transformation no longer changes.
Point cloud matching is essentially an optimization problem. Such problems are generally solved with iterative algorithms, and the iteration depends strongly on the initial value: if the initial value is poor, the algorithm easily falls into a local optimum. Local optima are a source of accumulated error in robot trajectory estimation, so an initial value close to the global optimum is important, yet hard to obtain without auxiliary information. Moreover, the error function is built from corresponding point pairs found by the closest-point rule, and this nearest-neighbour search occupies most of the computation time, making it the dominant factor limiting point cloud matching efficiency.
[ summary of the invention ]
The invention provides a non-iterative point cloud matching method, medium, terminal and device that address the technical problems described above.
The technical scheme for solving the technical problems is as follows: a non-iterative point cloud matching method comprises the following steps:
step 1, acquiring two adjacent frames of point cloud data, wherein the first frame is a reference point cloud, and the second frame is a current point cloud;
step 2, performing coordinate transformation on two adjacent frames of point cloud data to generate a first grid matrix corresponding to the reference point cloud and a second grid matrix corresponding to the current point cloud;
step 3, respectively carrying out Fourier transform on the first grid matrix and the second grid matrix to generate a first amplitude value matrix corresponding to the reference point cloud and a second amplitude value matrix corresponding to the current point cloud;
and 4, transforming the first amplitude value matrix and the second amplitude value matrix by adopting a phase correlation method to generate an impulse function for representing the translation amount and the rotation amount between the reference point cloud and the current point cloud, and acquiring the relative transformation of the reference point cloud and the current point cloud according to the coordinate values corresponding to the impulse function.
In a preferred embodiment, coordinate transformation is performed on two adjacent frames of point cloud data to generate a first grid matrix corresponding to a reference point cloud and a second grid matrix corresponding to a current point cloud, specifically:
s201, converting the polar coordinates of the reference point cloud and the current point cloud into a Cartesian coordinate system, and rounding the converted Cartesian coordinate values;
s202, generating a first grid matrix and a second grid matrix according to the integer coordinate position of the laser endpoint, wherein corresponding matrix elements at the endpoint coordinate position in the first grid matrix and the second grid matrix are 1, and the rest are 0;
and S203, padding the smaller of the first grid matrix and the second grid matrix with zero elements so that the first grid matrix and the second grid matrix have the same size.
In a preferred embodiment, the fourier transform is performed on the first grid matrix and the second grid matrix to generate a first amplitude value matrix corresponding to the reference point cloud and a second amplitude value matrix corresponding to the current point cloud, specifically:
s301, performing Fourier-Mellin transform on the first grid matrix and the second grid matrix respectively to generate a first frequency spectrum matrix corresponding to a reference point cloud and a second frequency spectrum matrix corresponding to a current point cloud;
s302, performing complex modulus operation on the first frequency spectrum matrix and the second frequency spectrum matrix respectively to generate a first amplitude value matrix and a second amplitude value matrix;
and S303, performing high-pass filtering on the first amplitude value matrix and the second amplitude value matrix.
In a preferred embodiment, a phase correlation method is used to transform the first amplitude value matrix and the second amplitude value matrix to generate an impulse function representing the amount of translation and the amount of rotation between the reference point cloud and the current point cloud, and a relative transform between the reference point cloud and the current point cloud is obtained according to coordinate values corresponding to the impulse function, specifically:
s401, converting the first amplitude value matrix and the second amplitude value matrix to a polar coordinate system;
s402, after Fourier transformation is carried out on the polar coordinate transformation result, a first pulse function representing the rotation angle between the reference point cloud and the current point cloud is generated by adopting a phase correlation method;
s403, substituting the solving result of the first pulse function into the first amplitude value matrix and the second amplitude value matrix, and generating a second pulse function representing the translation amount between the reference point cloud and the current point cloud by using the phase correlation method again;
s404, solving the second impulse function to generate the translation amount of the reference point cloud and the current point cloud.
A second aspect of the embodiments of the present invention provides a computer-readable storage medium, which stores a computer program, and when the computer program is executed by a processor, the computer program implements the non-iterative point cloud matching method described above.
A third aspect of the embodiments of the present invention provides a non-iterative point cloud matching terminal, including the computer-readable storage medium and a processor, where the processor implements the steps of the non-iterative point cloud matching method when executing a computer program on the computer-readable storage medium.
A fourth aspect of an embodiment of the present invention provides a non-iterative point cloud matching apparatus, including a data acquisition module, a coordinate transformation module, a fourier transformation module, and a pulse transformation module,
the data acquisition module is used for acquiring two adjacent frames of point cloud data, wherein the first frame is a reference point cloud, and the second frame is a current point cloud;
the coordinate transformation module is used for carrying out coordinate transformation on two adjacent frames of point cloud data to generate a first grid matrix corresponding to the reference point cloud and a second grid matrix corresponding to the current point cloud;
the Fourier transform module is used for respectively carrying out Fourier transform on the first grid matrix and the second grid matrix to generate a first amplitude value matrix corresponding to the reference point cloud and a second amplitude value matrix corresponding to the current point cloud;
the pulse transformation module is used for transforming the first amplitude value matrix and the second amplitude value matrix by adopting a phase correlation method to generate a pulse function for representing the translation amount and the rotation amount between the reference point cloud and the current point cloud, and acquiring the relative transformation of the reference point cloud and the current point cloud according to the coordinate values corresponding to the pulse function.
In a preferred embodiment, the coordinate transformation module specifically includes:
the first coordinate transformation unit is used for transforming the polar coordinates of the reference point cloud and the current point cloud into a Cartesian coordinate system and carrying out rounding processing on the transformed Cartesian coordinate values;
the matrix transformation unit is used for generating a first grid matrix and a second grid matrix according to the integer coordinate position of the laser endpoint, wherein the corresponding matrix elements at the endpoint coordinate position in the first grid matrix and the second grid matrix are 1, and the rest matrix elements are 0;
and the padding unit is used for padding the smaller of the first grid matrix and the second grid matrix with zero elements so that the first grid matrix and the second grid matrix have the same size.
In a preferred embodiment, the fourier transform module specifically includes:
the Fourier transform unit is used for respectively carrying out Fourier-Mellin transform on the first grid matrix and the second grid matrix to generate a first frequency spectrum matrix corresponding to the reference point cloud and a second frequency spectrum matrix corresponding to the current point cloud;
the complex number modulus taking unit is used for respectively carrying out complex number modulus taking on the first frequency spectrum matrix and the second frequency spectrum matrix to generate a first amplitude value matrix and a second amplitude value matrix;
a filtering unit, configured to perform high-pass filtering on the first amplitude value matrix and the second amplitude value matrix.
In a preferred embodiment, the pulse transformation module specifically includes:
the second coordinate transformation unit is used for transforming the first amplitude value matrix and the second amplitude value matrix to a polar coordinate system;
the first pulse transformation unit is used for carrying out Fourier transformation on the polar coordinate transformation result and then generating a first pulse function representing the rotation angle between the reference point cloud and the current point cloud by adopting a phase correlation method;
the second pulse transformation unit is used for substituting the solving result of the first pulse function into the first amplitude value matrix and the second amplitude value matrix and generating a second pulse function representing the translation amount between the reference point cloud and the current point cloud by utilizing the phase correlation method again;
and the calculating unit is used for solving the second pulse function to generate the translation amount of the reference point cloud and the current point cloud.
The invention provides a non-iterative point cloud matching method, medium, terminal and device, in which the laser point cloud is treated as a binary image and the translation and rotation between two point cloud frames, i.e. their relative displacement, are calculated with the mature Fourier transform and phase correlation algorithms from the image processing field. The algorithm requires neither a given initial relative transformation for iteratively solving the optimal transformation nor corresponding-point matching between the two frames, so it avoids the traditional point cloud matching algorithm's tendency to fall into local minima as well as the huge computational cost of searching for corresponding point pairs. It is fast and readily recovers the true relative transformation, thereby effectively guaranteeing that the autonomous mobile robot can quickly calculate its global position in a scene.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
[ description of the drawings ]
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained according to the drawings without inventive efforts.
Fig. 1 is a schematic flow chart of a non-iterative point cloud matching method provided in embodiment 1;
FIG. 2 is a schematic structural diagram of a non-iterative point cloud matching apparatus provided in embodiment 2;
fig. 3 is a schematic structural diagram of a non-iterative point cloud matching terminal provided in embodiment 3.
[ detailed description ]
In order to make the objects, technical solutions and advantageous effects of the present invention more clearly apparent, the present invention is further described in detail below with reference to the accompanying drawings and the detailed description. It should be understood that the detailed description and specific examples, while indicating the preferred embodiment of the invention, are intended for purposes of illustration only and are not intended to limit the scope of the invention.
Fig. 1 is a schematic flow chart of a non-iterative point cloud matching method provided in embodiment 1 of the present invention, and as shown in fig. 1, the method includes the following steps:
step 1, acquiring two adjacent frames of point cloud data, wherein the first frame is a reference point cloud, and the second frame is a current point cloud;
step 2, performing coordinate transformation on two adjacent frames of point cloud data to generate a first grid matrix corresponding to the reference point cloud and a second grid matrix corresponding to the current point cloud;
step 3, respectively carrying out Fourier transform on the first grid matrix and the second grid matrix to generate a first amplitude value matrix corresponding to the reference point cloud and a second amplitude value matrix corresponding to the current point cloud;
and 4, transforming the first amplitude value matrix and the second amplitude value matrix by adopting a phase correlation method to generate an impulse function for representing the translation amount and the rotation amount between the reference point cloud and the current point cloud, and acquiring the relative transformation of the reference point cloud and the current point cloud according to the coordinate values corresponding to the impulse function.
In this embodiment, the laser point cloud is converted into a raster image, and the translation and rotation between two point cloud frames, i.e. their relative displacement, are calculated with the mature Fourier transform and phase correlation algorithms from the image processing field. The algorithm requires neither a given initial relative transformation for iteratively solving the optimal transformation nor corresponding-point matching between the two frames, so it avoids the traditional point cloud matching algorithm's tendency to fall into local minima as well as the huge computational cost of searching for corresponding point pairs. It is fast and readily recovers the true relative transformation, thereby effectively guaranteeing that the autonomous mobile robot can quickly calculate its global position in a scene.
The steps of the above examples are explained in detail below.
S01, acquire two adjacent frames of point cloud data, where the first frame is the reference point cloud and the second frame is the current point cloud. The reference point cloud and the current point cloud are projected from polar coordinates into a Cartesian coordinate system, and the converted Cartesian coordinate values are rounded, so that each laser endpoint has an integer coordinate value. The laser emits beams at fixed angular increments, which can be read from the parameters of the laser device; the range data of the laser point cloud are transformed into the Cartesian coordinate system, according to the corresponding beam angle, by the following formula:
x = ρ cos θ, y = ρ sin θ,
where ρ represents the distance of the scanning point from the lidar and θ represents the scanning angle. The Cartesian coordinate values are then rounded; the column extent of the matrix is determined by the minimum and maximum X coordinates, and the row extent by the minimum and maximum Y coordinates.
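As a sketch of S01 in NumPy, the polar-to-Cartesian conversion and rounding might look as follows (the ranges, beam angles, and the cells-per-metre resolution factor are made-up example values; the patent itself rounds the raw coordinates):

```python
import numpy as np

# Hypothetical scan: ranges rho measured at fixed beam angles theta.
rho = np.array([2.0, 2.5, 3.0])
theta = np.array([0.0, np.pi / 4, np.pi / 2])

# x = rho*cos(theta), y = rho*sin(theta), then round to integer grid cells.
# A grid resolution of 10 cells per metre is assumed for illustration.
resolution = 10.0
x = np.rint(rho * np.cos(theta) * resolution).astype(int)
y = np.rint(rho * np.sin(theta) * resolution).astype(int)
```

Each laser endpoint now has an integer (x, y) coordinate, ready to be written into a grid matrix.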
S02, extract the first grid matrix I1 corresponding to the reference point cloud and the second grid matrix I2 corresponding to the current point cloud from the extent of the laser endpoints. Although I1 and I2 are two-dimensional, assigning their elements requires only a single loop over the laser points. When the matrices for the reference point cloud and the current point cloud are computed, the same size is used; if the two matrices differ in size, the smaller one is padded with zero elements so that the first grid matrix and the second grid matrix have the same size.
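S02 can be sketched as below; the single vectorised assignment `grid[iy, ix] = 1` is what makes the nominally two-dimensional construction a one-pass operation (all helper names are illustrative):

```python
import numpy as np

def to_grid(ix, iy):
    """Binary occupancy matrix: 1 at each laser endpoint, 0 elsewhere.
    Row/column extents come from the min/max integer coordinates."""
    ix = ix - ix.min()
    iy = iy - iy.min()
    grid = np.zeros((iy.max() + 1, ix.max() + 1), dtype=np.uint8)
    grid[iy, ix] = 1  # one vectorised pass over the points, no nested loops
    return grid

def pad_to_match(a, b):
    """Zero-pad the smaller matrix so both have the same shape."""
    rows = max(a.shape[0], b.shape[0])
    cols = max(a.shape[1], b.shape[1])
    pa = np.zeros((rows, cols), dtype=a.dtype); pa[:a.shape[0], :a.shape[1]] = a
    pb = np.zeros((rows, cols), dtype=b.dtype); pb[:b.shape[0], :b.shape[1]] = b
    return pa, pb

g1 = to_grid(np.array([0, 2]), np.array([0, 1]))
g2 = to_grid(np.array([0, 1, 3]), np.array([0, 2, 2]))
g1, g2 = pad_to_match(g1, g2)
```

After padding, both grids share one shape and can be Fourier-transformed together.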
S03, assume the translation and rotation between the reference point cloud and the current point cloud are (Δx, Δy) and Δθ respectively, where (Δx, Δy) is the relative translation and Δθ the relative angle. The first grid matrix I1 and the second grid matrix I2 then satisfy the following equation:
I1(x, y) = I2(x cosΔθ + y sinΔθ - Δx, -x sinΔθ + y cosΔθ - Δy).
S04, taking the Fourier transform of both sides of the above formula gives:
F(u, v) = e^(-2πi(uΔx + vΔy)) · G(u cosΔθ + v sinΔθ, -u sinΔθ + v cosΔθ),
where, in this and the subsequent Fourier transforms, u and v denote frequency-domain variables, i is the imaginary unit, and F and G (and later P, Q, M, N, S, T) denote the Fourier-transformed matrices. The matrix obtained by the Fourier(-Mellin) transform is the frequency spectrum matrix of the original point cloud. The value of the spectrum at each frequency point is a complex number uniquely determined by its modulus and argument, so the spectrum can be decomposed into an amplitude spectrum (the modulus as a function of frequency) and a phase spectrum (the argument as a function of frequency).
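The amplitude/phase decomposition is exactly what the method exploits: a pure (circular) shift of a binary image changes only the phase spectrum, leaving the amplitude spectrum untouched. A minimal NumPy check:

```python
import numpy as np

# Two binary images differing only by a circular shift: their FFT
# magnitude spectra are identical, the shift lives entirely in the phase.
img = np.zeros((32, 32))
img[8:12, 10:15] = 1.0
shifted = np.roll(img, shift=(3, 5), axis=(0, 1))

mag1 = np.abs(np.fft.fft2(img))
mag2 = np.abs(np.fft.fft2(shifted))
```

This is why taking the complex modulus in S05 removes the translation and leaves only the rotation to be recovered.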
S05, taking the amplitude spectrum of both sides of the equation gives:
P(u, v) = Q(u cosΔθ + v sinΔθ, -u sinΔθ + v cosΔθ).
The two amplitude value matrices obtained by taking the complex modulus are thus identical up to a rotation; the translation has been eliminated.
S06, apply high-pass filtering to the two amplitude spectrum matrices obtained above to sharpen the correlation peak and reduce spectral aliasing. The high-pass filter function is:
H(x, y) = (1 - X(x, y)) · (2 - X(x, y)), with X(x, y) = cos(πx) · cos(πy),
where H and X are two-dimensional matrices and (x, y) range over the normalized frequency coordinates.
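Such a filter can be evaluated on a normalised frequency grid as follows (assuming the emphasis form H = (1 - X)(2 - X), X = cos(πx)cos(πy), which is the filter commonly used in Fourier-Mellin image registration):

```python
import numpy as np

def highpass(shape):
    """Emphasis filter H = (1 - X)(2 - X) with X = cos(pi x) cos(pi y),
    evaluated on normalised frequency coordinates in [-0.5, 0.5)."""
    ys = np.linspace(-0.5, 0.5, shape[0], endpoint=False)
    xs = np.linspace(-0.5, 0.5, shape[1], endpoint=False)
    X = np.outer(np.cos(np.pi * ys), np.cos(np.pi * xs))
    return (1.0 - X) * (2.0 - X)

H = highpass((64, 64))
```

The filter is zero at the spectrum centre (DC) and largest at the corners, so low-frequency energy is suppressed before the rotation is estimated.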
S07, substituting u = ρ cos θ, v = ρ sin θ into the formula of S05 transforms the amplitude spectrum into polar coordinate space, i.e.:
M(ρ, θ) = N(ρ, θ - Δθ).
the rotation relation of the two amplitude value matrixes can be converted into the translation relation by performing polar coordinate transformation on the amplitude value matrixes.
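A nearest-neighbour sketch of this polar resampling is shown below (the grid sizes and sampling scheme are illustrative; a production implementation would interpolate rather than round):

```python
import numpy as np

def to_polar(mag, n_rho=64, n_theta=64):
    """Resample a magnitude spectrum onto a polar grid by nearest-neighbour
    lookup, so that a rotation of the input becomes a shift along theta."""
    mag = np.fft.fftshift(mag)           # put DC at the array centre
    rows, cols = mag.shape
    cy, cx = rows / 2.0, cols / 2.0
    rho = np.linspace(0, min(cy, cx) - 1, n_rho)
    theta = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    yy = np.rint(cy + rho[:, None] * np.sin(theta)[None, :]).astype(int)
    xx = np.rint(cx + rho[:, None] * np.cos(theta)[None, :]).astype(int)
    return mag[yy, xx]

spectrum = np.abs(np.fft.fft2(np.random.default_rng(0).random((64, 64))))
polar = to_polar(spectrum)
```

With both amplitude matrices in this form, the rotation Δθ appears as a pure translation along the θ axis, which phase correlation can recover.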
S08, taking the Fourier transform of the above formula gives:
S(s, t) = e^(-2πi·tΔθ) · T(s, t),
and from the basic theory of the complex field it follows that:
S(s, t) · T̄(s, t) / |S(s, t) · T̄(s, t)| = e^(-2πi·tΔθ),
where, since the matrix obtained by the Fourier transform is complex, T̄(s, t) denotes the complex conjugate of T(s, t).
S09, apply the phase correlation algorithm to the above formula, i.e., take the inverse Fourier transform of both sides, to obtain an impulse function that is zero everywhere except at the shift position. Because of the polar parameterisation, the shift recovered here is in fact the relative rotation angle between the two point cloud frames.
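The phase correlation step, i.e., the inverse Fourier transform of the normalised cross-power spectrum, can be sketched and checked on a synthetic shift (a small epsilon guards the division where the spectrum is zero):

```python
import numpy as np

def phase_correlate(a, b):
    """Inverse FFT of the normalised cross-power spectrum; the peak of the
    resulting impulse gives the circular shift taking b onto a."""
    F = np.fft.fft2(a)
    G = np.fft.fft2(b)
    cross = F * np.conj(G)
    r = np.fft.ifft2(cross / (np.abs(cross) + 1e-12))
    return np.unravel_index(np.argmax(np.abs(r)), r.shape)

img = np.zeros((32, 32))
img[5:9, 7:12] = 1.0
shifted = np.roll(img, shift=(4, 6), axis=(0, 1))
dy, dx = phase_correlate(shifted, img)
```

Applied to the polar-resampled amplitude spectra, the shift along the θ axis found this way is the rotation Δθ.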
S10, substitute the rotation amount Δθ obtained above into the formula of S04; by the same complex-field argument the result simplifies to:
F(u, v) · Ḡ(u, v) / |F(u, v) · Ḡ(u, v)| = e^(-2πi(uΔx + vΔy)).
S11, apply the phase correlation algorithm to the above formula again to obtain the relative translation (Δx, Δy) between the reference point cloud and the current point cloud. Solving for the relative translation and angle requires neither a given initial relative transformation nor a search for corresponding point pairs between the point clouds, which improves both the speed and the accuracy of point cloud matching. In a preferred embodiment, when the impulse function is obtained by the phase correlation algorithm, i.e. the inverse Fourier transform, the result is centered during the Fourier transform so that the coordinates of the impulse can be read off conveniently.
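The centring remark can be illustrated by shifting the impulse to the array centre with `fftshift`, so that signed displacements are read off directly (a sketch; the function and variable names are illustrative):

```python
import numpy as np

def phase_correlate_centred(a, b):
    """Phase correlation with the impulse shifted to the array centre, so
    positive and negative displacements are read off symmetrically."""
    F, G = np.fft.fft2(a), np.fft.fft2(b)
    cross = F * np.conj(G)
    r = np.abs(np.fft.ifft2(cross / (np.abs(cross) + 1e-12)))
    r = np.fft.fftshift(r)                       # impulse centred
    peak = np.unravel_index(np.argmax(r), r.shape)
    centre = (r.shape[0] // 2, r.shape[1] // 2)
    return (int(peak[0] - centre[0]), int(peak[1] - centre[1]))

img = np.zeros((32, 32))
img[10:14, 9:15] = 1.0
moved = np.roll(img, shift=(-3, 5), axis=(0, 1))
dy, dx = phase_correlate_centred(moved, img)
```

Without the centring, a negative displacement would wrap around and appear near the far edge of the correlation array; with it, both signs are recovered directly from the peak's offset from the centre.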
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present invention.
The embodiment of the invention also provides a computer readable storage medium, which stores a computer program, and when the computer program is executed by a processor, the non-iterative point cloud matching method is realized.
Fig. 2 is a schematic structural diagram of a non-iterative point cloud matching apparatus according to embodiment 2 of the present invention, as shown in fig. 2, including a data acquisition module 100, a coordinate transformation module 200, a fourier transformation module 300, and an impulse transformation module 400,
the data acquisition module 100 is configured to acquire two adjacent frames of point cloud data, where a first frame is a reference point cloud and a second frame is a current point cloud;
the coordinate transformation module 200 is configured to perform coordinate transformation on two adjacent frames of point cloud data to generate a first grid matrix corresponding to a reference point cloud and a second grid matrix corresponding to a current point cloud;
the fourier transform module 300 is configured to perform fourier transform on the first grid matrix and the second grid matrix respectively to generate a first amplitude value matrix corresponding to a reference point cloud and a second amplitude value matrix corresponding to a current point cloud;
the pulse transformation module 400 is configured to transform the first amplitude value matrix and the second amplitude value matrix by using a phase correlation method to generate a pulse function representing a translation amount and a rotation amount between the reference point cloud and the current point cloud, and obtain a relative transformation between the reference point cloud and the current point cloud according to a coordinate value corresponding to the pulse function.
In a preferred embodiment, the coordinate transformation module 200 specifically includes:
a first coordinate transformation unit 201, configured to transform polar coordinates of the reference point cloud and the current point cloud to a cartesian coordinate system, and perform rounding processing on the transformed cartesian coordinate values;
the matrix transformation unit 202 is configured to generate a first grid matrix and a second grid matrix according to the integer coordinate position of the laser endpoint, where matrix elements corresponding to the endpoint coordinates in the first grid matrix and the second grid matrix are 1, and the others are 0;
and a padding unit 203, configured to zero-pad the smaller of the first grid matrix and the second grid matrix so that the two matrices have the same size.
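The behaviour of units 201–203 can be sketched in NumPy as follows. This is an illustrative sketch, not the embodiment's exact implementation: the grid `resolution`, the index-shifting strategy, and the helper names `scan_to_grid` and `pad_to_match` are assumptions chosen for the example.

```python
import numpy as np

def scan_to_grid(ranges, angles, resolution=0.05):
    """Units 201-202: polar laser endpoints -> binary grid matrix.

    Endpoints are converted from polar (r, theta) to Cartesian coordinates,
    rounded to integer cell indices, and the corresponding grid cells are
    set to 1; all other cells stay 0.
    """
    x = np.asarray(ranges) * np.cos(angles)
    y = np.asarray(ranges) * np.sin(angles)
    ix = np.round(x / resolution).astype(int)
    iy = np.round(y / resolution).astype(int)
    ix -= ix.min()                      # shift so all indices are >= 0
    iy -= iy.min()
    grid = np.zeros((iy.max() + 1, ix.max() + 1))
    grid[iy, ix] = 1.0                  # endpoint cells -> 1, the rest 0
    return grid

def pad_to_match(a, b):
    """Unit 203: zero-pad the smaller matrix so both have the same size."""
    rows, cols = max(a.shape[0], b.shape[0]), max(a.shape[1], b.shape[1])
    pa, pb = np.zeros((rows, cols)), np.zeros((rows, cols))
    pa[:a.shape[0], :a.shape[1]] = a
    pb[:b.shape[0], :b.shape[1]] = b
    return pa, pb
```

Padding with zeros (rather than cropping) matters here because the subsequent Fourier transform requires both grid matrices to share one common size.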
In a preferred embodiment, the fourier transform module 300 specifically includes:
a Fourier transform unit 301, configured to perform Fourier-Mellin transform on the first grid matrix and the second grid matrix respectively to generate a first spectrum matrix corresponding to the reference point cloud and a second spectrum matrix corresponding to the current point cloud;
a complex modulus unit 302, configured to take the complex modulus of the first spectrum matrix and the second spectrum matrix respectively to generate a first amplitude value matrix and a second amplitude value matrix;
a filtering unit 303, configured to perform high-pass filtering on the first amplitude value matrix and the second amplitude value matrix.
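A minimal NumPy sketch of units 301–303 follows. The key property it demonstrates is that the amplitude spectrum is invariant to (circular) translation of the grid, which is what later allows rotation to be estimated separately from translation. The specific high-pass emphasis filter is an illustrative assumption — a form commonly used in Fourier-Mellin registration (after Reddy and Chatterji) — not a filter mandated by the embodiment.

```python
import numpy as np

def amplitude_matrix(grid):
    """Units 301-302: 2-D Fourier transform, then complex modulus.

    Circularly shifting `grid` changes only the phase of its FFT, so the
    returned amplitude matrix is translation-invariant.
    """
    spectrum = np.fft.fft2(grid)                 # spectrum matrix
    return np.abs(np.fft.fftshift(spectrum))     # complex modulus, DC centred

def high_pass(amplitude):
    """Unit 303: emphasise high frequencies, suppressing the dominant
    low-frequency energy before the log-polar / phase-correlation stage."""
    rows, cols = amplitude.shape
    u = np.linspace(-0.5, 0.5, rows)[:, None]
    v = np.linspace(-0.5, 0.5, cols)[None, :]
    x = np.cos(np.pi * u) * np.cos(np.pi * v)
    return amplitude * (1.0 - x) * (2.0 - x)     # ~0 at DC, grows with frequency
```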
In a preferred embodiment, the pulse transformation module 400 specifically includes:
a second coordinate transformation unit 401, configured to transform the first amplitude value matrix and the second amplitude value matrix to a polar coordinate system;
a first pulse transformation unit 402, configured to perform Fourier transform on the polar coordinate transformation result and then generate a first pulse function representing the rotation angle between the reference point cloud and the current point cloud by using a phase correlation method;
a second pulse transformation unit 403, configured to substitute the solution result of the first pulse function into the first amplitude value matrix and the second amplitude value matrix, and generate a second pulse function representing the amount of translation between the reference point cloud and the current point cloud by using the phase correlation method again;
and a calculating unit 404, configured to solve the second impulse function to generate the translation amount of the reference point cloud and the current point cloud.
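The core operation shared by units 402–404 is phase correlation itself. The NumPy sketch below shows only the translation-recovery step of units 403–404: the normalized cross-power spectrum of two grids is inverse-transformed into an (ideally) single impulse whose peak coordinates are the shift. The rotation branch (units 401–402) would apply the same operation after resampling the amplitude matrices to polar coordinates; that resampling is omitted here, and the function name and sub-pixel-free integer output are illustrative assumptions.

```python
import numpy as np

def phase_correlation(ref, cur):
    """Return the integer (dy, dx) such that `cur` equals `ref` circularly
    shifted by (dy, dx); the location of the impulse peak encodes the shift."""
    F = np.fft.fft2(ref)
    G = np.fft.fft2(cur)
    cross = np.conj(F) * G
    cross /= np.abs(cross) + 1e-12           # normalized cross-power spectrum
    impulse = np.abs(np.fft.ifft2(cross))    # ideally a single sharp peak
    dy, dx = np.unravel_index(np.argmax(impulse), impulse.shape)
    # Wrap peaks in the upper half of the index range to negative shifts.
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return int(dy), int(dx)
```

Because the spectrum is normalized to unit magnitude before the inverse transform, the result depends only on phase differences, which is why the correlation surface collapses to a near-delta impulse instead of a broad cross-correlation peak.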
The embodiment of the invention also provides a non-iterative point cloud matching terminal, which comprises the above computer-readable storage medium and a processor, wherein the processor implements the steps of the non-iterative point cloud matching method when executing the computer program on the computer-readable storage medium. Fig. 3 is a schematic structural diagram of a non-iterative point cloud matching terminal provided in embodiment 3 of the present invention. As shown in fig. 3, the non-iterative point cloud matching terminal 8 of this embodiment includes: a processor 80, a readable storage medium 81, and a computer program 82 stored in the readable storage medium 81 and executable on the processor 80. When executing the computer program 82, the processor 80 implements the steps in the various method embodiments described above, such as steps 1 to 4 shown in fig. 1. Alternatively, when executing the computer program 82, the processor 80 implements the functions of the modules in the above-described apparatus embodiments, such as the functions of modules 100 to 400 shown in fig. 2.
Illustratively, the computer program 82 may be partitioned into one or more modules that are stored in the readable storage medium 81 and executed by the processor 80 to implement the present invention. The one or more modules may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution process of the computer program 82 in the non-iterative point cloud matching terminal 8.
The non-iterative point cloud matching terminal 8 may include, but is not limited to, a processor 80 and a readable storage medium 81. Those skilled in the art will appreciate that fig. 3 is merely an example of the non-iterative point cloud matching terminal 8 and does not constitute a limitation of it; the terminal may include more or fewer components than those shown, combine some components, or use different components. For example, the non-iterative point cloud matching terminal may further include a power management module, an arithmetic processing module, an input/output device, a network access device, a bus, and the like.
The processor 80 may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The readable storage medium 81 may be an internal storage unit of the non-iterative point cloud matching terminal 8, such as a hard disk or a memory of the terminal. The readable storage medium 81 may also be an external storage device of the non-iterative point cloud matching terminal 8, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash memory card (Flash Card) equipped on the terminal. Further, the readable storage medium 81 may include both an internal storage unit and an external storage device of the non-iterative point cloud matching terminal 8. The readable storage medium 81 is used for storing the computer program and other programs and data required by the non-iterative point cloud matching terminal, and may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and method steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the above-described embodiments of the apparatus/terminal device are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The invention is not limited to what has been described in the specification and embodiments; additional advantages and modifications will readily occur to those skilled in the art. The invention is therefore not limited to the specific details, representative apparatus, and illustrative examples shown and described herein, without departing from the spirit and scope of the general concept as defined by the appended claims and their equivalents.

Claims (10)

1. A non-iterative point cloud matching method is characterized by comprising the following steps:
step 1, acquiring two adjacent frames of point cloud data, wherein the first frame is a reference point cloud, and the second frame is a current point cloud;
step 2, performing coordinate transformation on two adjacent frames of point cloud data to generate a first grid matrix corresponding to the reference point cloud and a second grid matrix corresponding to the current point cloud;
step 3, respectively carrying out Fourier transform on the first grid matrix and the second grid matrix to generate a first amplitude value matrix corresponding to the reference point cloud and a second amplitude value matrix corresponding to the current point cloud;
and 4, transforming the first amplitude value matrix and the second amplitude value matrix by adopting a phase correlation method to generate an impulse function for representing the translation amount and the rotation amount between the reference point cloud and the current point cloud, and acquiring the relative transformation of the reference point cloud and the current point cloud according to the coordinate values corresponding to the impulse function.
2. The non-iterative point cloud matching method according to claim 1, wherein coordinate transformation is performed on two adjacent frames of point cloud data to generate a first grid matrix corresponding to a reference point cloud and a second grid matrix corresponding to a current point cloud, specifically:
s201, converting the polar coordinates of the reference point cloud and the current point cloud into a Cartesian coordinate system, and rounding the converted Cartesian coordinate values;
s202, generating a first grid matrix and a second grid matrix according to the integer coordinate position of the laser endpoint, wherein in the first grid matrix and the second grid matrix, the matrix elements corresponding to the endpoint coordinate position are 1, and the rest are 0;
and S203, padding the smaller of the first grid matrix and the second grid matrix, wherein the padding elements are 0, so that the first grid matrix and the second grid matrix have the same size.
3. The non-iterative point cloud matching method according to claim 1 or 2, wherein the fourier transform is performed on the first grid matrix and the second grid matrix to generate a first amplitude value matrix corresponding to a reference point cloud and a second amplitude value matrix corresponding to a current point cloud, specifically:
s301, performing Fourier-Mellin transform on the first grid matrix and the second grid matrix respectively to generate a first frequency spectrum matrix corresponding to a reference point cloud and a second frequency spectrum matrix corresponding to a current point cloud;
s302, performing complex modulus operation on the first frequency spectrum matrix and the second frequency spectrum matrix respectively to generate a first amplitude value matrix and a second amplitude value matrix;
and S303, performing high-pass filtering on the first amplitude value matrix and the second amplitude value matrix.
4. The non-iterative point cloud matching method according to claim 3, wherein the first amplitude value matrix and the second amplitude value matrix are transformed by a phase correlation method to generate an impulse function representing a translation amount and a rotation amount between the reference point cloud and the current point cloud, and a relative transformation between the reference point cloud and the current point cloud is obtained according to a coordinate value corresponding to the impulse function, specifically:
s401, converting the first amplitude value matrix and the second amplitude value matrix to a polar coordinate system;
s402, after Fourier transformation is carried out on the polar coordinate transformation result, a first pulse function representing the rotation angle between the reference point cloud and the current point cloud is generated by adopting a phase correlation method;
s403, substituting the solving result of the first pulse function into the first amplitude value matrix and the second amplitude value matrix, and generating a second pulse function representing the translation amount between the reference point cloud and the current point cloud by using the phase correlation method again;
s404, solving the second impulse function to generate the translation amount of the reference point cloud and the current point cloud.
5. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the non-iterative point cloud matching method according to any one of claims 1 to 4.
6. A non-iterative point cloud matching terminal comprising the computer-readable storage medium of claim 5 and a processor, the processor implementing the steps of the non-iterative point cloud matching method of any one of claims 1-4 when executing the computer program on the computer-readable storage medium.
7. A non-iterative point cloud matching device is characterized by comprising a data acquisition module, a coordinate transformation module, a Fourier transformation module and a pulse transformation module,
the data acquisition module is used for acquiring two adjacent frames of point cloud data, wherein the first frame is a reference point cloud, and the second frame is a current point cloud;
the coordinate transformation module is used for carrying out coordinate transformation on two adjacent frames of point cloud data to generate a first grid matrix corresponding to the reference point cloud and a second grid matrix corresponding to the current point cloud;
the Fourier transform module is used for respectively carrying out Fourier transform on the first grid matrix and the second grid matrix to generate a first amplitude value matrix corresponding to the reference point cloud and a second amplitude value matrix corresponding to the current point cloud;
the pulse transformation module is used for transforming the first amplitude value matrix and the second amplitude value matrix by adopting a phase correlation method to generate a pulse function for representing the translation amount and the rotation amount between the reference point cloud and the current point cloud, and acquiring the relative transformation of the reference point cloud and the current point cloud according to the coordinate values corresponding to the pulse function.
8. The non-iterative point cloud matching device of claim 7, wherein the coordinate transformation module specifically comprises:
the first coordinate transformation unit is used for transforming the polar coordinates of the reference point cloud and the current point cloud into a Cartesian coordinate system and carrying out rounding processing on the transformed Cartesian coordinate values;
the matrix transformation unit is used for generating a first grid matrix and a second grid matrix according to the integer coordinate position of the laser endpoint, wherein the corresponding matrix elements at the endpoint coordinate position in the first grid matrix and the second grid matrix are 1, and the rest matrix elements are 0;
and the padding unit is used for padding the smaller of the first grid matrix and the second grid matrix, wherein the padding elements are 0, so that the first grid matrix and the second grid matrix have the same size.
9. The non-iterative point cloud matching device according to claim 7 or 8, wherein the fourier transform module specifically comprises:
the Fourier transform unit is used for respectively carrying out Fourier-Mellin transform on the first grid matrix and the second grid matrix to generate a first frequency spectrum matrix corresponding to the reference point cloud and a second frequency spectrum matrix corresponding to the current point cloud;
the complex modulus unit is used for taking the complex modulus of the first spectrum matrix and the second spectrum matrix respectively to generate a first amplitude value matrix and a second amplitude value matrix;
a filtering unit, configured to perform high-pass filtering on the first amplitude value matrix and the second amplitude value matrix.
10. The non-iterative point cloud matching device of claim 9, wherein the pulse transformation module specifically comprises:
the second coordinate transformation unit is used for transforming the first amplitude value matrix and the second amplitude value matrix to a polar coordinate system;
the first pulse transformation unit is used for carrying out Fourier transformation on the polar coordinate transformation result and then generating a first pulse function representing the rotation angle between the reference point cloud and the current point cloud by adopting a phase correlation method;
the second pulse transformation unit is used for substituting the solving result of the first pulse function into the first amplitude value matrix and the second amplitude value matrix and generating a second pulse function representing the translation amount between the reference point cloud and the current point cloud by utilizing the phase correlation method again;
and the calculating unit is used for solving the second pulse function to generate the translation amount of the reference point cloud and the current point cloud.
CN201911207626.9A 2019-11-29 2019-11-29 Non-iterative point cloud matching method, medium, terminal and device Active CN110991526B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911207626.9A CN110991526B (en) 2019-11-29 2019-11-29 Non-iterative point cloud matching method, medium, terminal and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911207626.9A CN110991526B (en) 2019-11-29 2019-11-29 Non-iterative point cloud matching method, medium, terminal and device

Publications (2)

Publication Number Publication Date
CN110991526A true CN110991526A (en) 2020-04-10
CN110991526B CN110991526B (en) 2023-11-28

Family

ID=70088807

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911207626.9A Active CN110991526B (en) 2019-11-29 2019-11-29 Non-iterative point cloud matching method, medium, terminal and device

Country Status (1)

Country Link
CN (1) CN110991526B (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3506203A1 (en) * 2017-12-29 2019-07-03 Baidu Online Network Technology (Beijing) Co., Ltd. Method and apparatus for fusing point cloud data
CN110009667A (en) * 2018-12-19 2019-07-12 南京理工大学 Multi-viewpoint cloud global registration method based on Rodrigues transformation
CN110345936A (en) * 2019-07-09 2019-10-18 上海有个机器人有限公司 Trajectory data processing method and processing system for a motion device

Non-Patent Citations (1)

Title
QIN, Xujia; WANG, Jianqi; ZHENG, Hongbo; LIANG, Zhenhua: "Point cloud registration based on three-dimensional invariant moment feature estimation" *

Cited By (6)

Publication number Priority date Publication date Assignee Title
CN112861595A (en) * 2020-07-31 2021-05-28 北京京东乾石科技有限公司 Method and device for identifying data points and computer-readable storage medium
WO2022022186A1 (en) * 2020-07-31 2022-02-03 北京京东乾石科技有限公司 Data point identification method, identification device, and computer readable storage medium
CN112649814A (en) * 2021-01-14 2021-04-13 北京斯年智驾科技有限公司 Matching method, device, equipment and storage medium for laser positioning
CN112649814B (en) * 2021-01-14 2022-12-23 北京斯年智驾科技有限公司 Matching method, device, equipment and storage medium for laser positioning
CN117292140A (en) * 2023-10-17 2023-12-26 小米汽车科技有限公司 Point cloud data processing method and device, vehicle and storage medium
CN117292140B (en) * 2023-10-17 2024-04-02 小米汽车科技有限公司 Point cloud data processing method and device, vehicle and storage medium

Also Published As

Publication number Publication date
CN110991526B (en) 2023-11-28

Similar Documents

Publication Publication Date Title
CN110991526B (en) Non-iterative point cloud matching method, medium, terminal and device
US20210110599A1 (en) Depth camera-based three-dimensional reconstruction method and apparatus, device, and storage medium
Garro et al. Solving the pnp problem with anisotropic orthogonal procrustes analysis
CN111160298B (en) Robot and pose estimation method and device thereof
CN111079801A (en) Method, medium, terminal and device for quickly searching closest point based on point cloud matching
CN111311632B (en) Object pose tracking method, device and equipment
CN110969649A (en) Matching evaluation method, medium, terminal and device of laser point cloud and map
US20200211293A1 (en) Three-dimensional point data alignment with pre-alignment
CN110930444B (en) Point cloud matching method, medium, terminal and device based on bilateral optimization
WO2022016942A1 (en) Target detection method and apparatus, electronic device, and storage medium
CN110599586A (en) Semi-dense scene reconstruction method and device, electronic equipment and storage medium
CN110887493A (en) Trajectory estimation method, medium, terminal and device based on local map matching
CN114593737A (en) Control method, control device, robot and storage medium
CN111881985A (en) Stereo matching method, device, terminal and storage medium
JP2022014921A (en) Three-dimensional sensing information acquisition method and road side apparatus based on external parameter of road side camera
CN110738730A (en) Point cloud matching method and device, computer equipment and storage medium
CN114089316A (en) Combined calibration system, method and medium for laser radar-inertial navigation
CN111221934B (en) Unmanned aerial vehicle operation boundary determination method and device
CN109859313B (en) 3D point cloud data acquisition method and device, and 3D data generation method and system
CN110991085B (en) Method, medium, terminal and device for constructing robot image simulation data
KR102333768B1 (en) Hand recognition augmented reality-intraction apparatus and method
CN115267724B (en) Position re-identification method of mobile robot capable of estimating pose based on laser radar
CN110706288A (en) Target detection method, device, equipment and readable storage medium
CN116415652A (en) Data generation method and device, readable storage medium and terminal equipment
Malm et al. A new approach to hand-eye calibration

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant