CN114549608B - Point cloud fusion method and device, electronic equipment and storage medium - Google Patents
Point cloud fusion method and device, electronic equipment and storage medium Download PDFInfo
- Publication number
- CN114549608B CN114549608B CN202210426803.8A CN202210426803A CN114549608B CN 114549608 B CN114549608 B CN 114549608B CN 202210426803 A CN202210426803 A CN 202210426803A CN 114549608 B CN114549608 B CN 114549608B
- Authority
- CN
- China
- Prior art keywords
- point cloud
- cloud data
- fusion
- matrix
- weight
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Links
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/048—Activation functions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20024—Filtering details
Abstract
The application belongs to the technical field of data processing, and provides a point cloud fusion method, a point cloud fusion device, electronic equipment and a storage medium, wherein the point cloud fusion method comprises the following steps: acquiring first point cloud data and second point cloud data; performing preliminary fusion on the first point cloud data and the second point cloud data based on a multi-head self-attention mechanism to obtain a preliminary fusion result; calculating the weight attenuation of the first point cloud data and the weight attenuation of the second point cloud data; and updating the fusion parameters of the first point cloud data and the fusion parameters of the second point cloud data respectively according to the weight attenuation of the first point cloud data and the weight attenuation of the second point cloud data in a back propagation mode until the update change rate of the first point cloud data fusion weight and the update change rate of the second point cloud data fusion weight are respectively smaller than a preset threshold value, and stopping iteration to obtain the optimal fusion parameters. The point cloud fusion method, the point cloud fusion device, the electronic equipment and the storage medium reduce the calculation complexity in the fusion process and improve the efficiency and the precision of point cloud fusion.
Description
Technical Field
The application relates to the technical field of data processing, in particular to a point cloud fusion method and device, electronic equipment and a storage medium.
Background
Three-dimensional reconstruction refers to the establishment of a mathematical model suitable for computer representation and processing of a three-dimensional object, is the basis for processing, operating and analyzing the properties of the three-dimensional object in a computer environment, and is also a key technology for establishing virtual reality expressing an objective world in a computer.
At present, the existing three-dimensional reconstruction point cloud fusion method mainly adopts a fusion method based on space voxels and a point cloud fusion method based on a clustering idea, but the fusion method based on the space voxels and the point cloud fusion method based on the clustering idea have the problems of low fusion efficiency and low precision.
In view of the above problems, no effective technical solution exists at present.
Disclosure of Invention
The application aims to provide a point cloud fusion method, a point cloud fusion device, an electronic device and a storage medium, which can reduce the calculation complexity in the point cloud fusion process and improve the efficiency and the precision of the point cloud fusion.
In a first aspect, the present application provides a point cloud fusion method for fusing first point cloud data acquired by an unmanned aerial vehicle and second point cloud data acquired by a quadruped robot, the method comprising the steps of:
acquiring the first point cloud data and the second point cloud data;
performing preliminary fusion on the first point cloud data and the second point cloud data based on a multi-head self-attention mechanism to obtain a preliminary fusion result;
calculating the weight attenuation of the first point cloud data and the weight attenuation of the second point cloud data;
updating the fusion parameters of the first point cloud data and the fusion parameters of the second point cloud data respectively according to the weight attenuation of the first point cloud data and the weight attenuation of the second point cloud data in a back propagation mode until the update change rate of the fusion weight of the first point cloud data and the update change rate of the fusion weight of the second point cloud data are respectively smaller than a preset threshold value, and stopping iteration to obtain optimal fusion parameters;
and fusing the first point cloud data and the second point cloud data according to the optimal fusion parameters to obtain a final fusion result.
According to the point cloud fusion method, the computation complexity in the point cloud fusion process can be reduced through a parallel computation method, the efficiency and the precision of point cloud fusion are improved, moreover, self-adaptive adjustment can be carried out on the condition that the input dimensions are not uniform, and the time complexity is reduced.
Optionally, in the point cloud fusion method described in the present application, before performing preliminary fusion on the first point cloud data and the second point cloud data based on the multi-head self-attention mechanism, the method includes the following steps:
and randomly sampling the first point cloud data and the second point cloud data according to a set proportion.
According to the method and the device, the first point cloud data and the second point cloud data are subjected to random sampling and then are subjected to preliminary fusion, and the calculation complexity in the fusion process is further reduced.
Optionally, in the point cloud fusion method described in the present application, the calculating the weight attenuation amount of the first point cloud data and the weight attenuation amount of the second point cloud data each includes the following steps:
solving a fusion matrix;
calculating to obtain a fusion matrix weight vector according to the fusion matrix;
calculating to obtain a fusion matrix output vector according to the fusion matrix and the fusion matrix weight vector;
calculating by using a feedforward neural network based on the weight matrix and the bias to obtain an output;
calculating according to the output to obtain a loss function;
and solving a partial derivative according to the loss function to obtain a weight attenuation amount.
According to the method, the weight attenuation of the preliminary fusion result is calculated through the steps, so that each point coordinate of the output fusion point cloud depends on all input point coordinates, and the fusion is more accurate.
Optionally, in the point cloud fusion method described in the present application, the solving of the fusion matrix is calculated by the following formula:
wherein the fusion matrix comprises、And,in order to enter the key fusion matrix, the key fusion matrix is entered,in order to fuse the matrices for the input values,in order to output the fusion matrix, the fusion matrix is output,、andthe initial value of (2) is in accordance with a random number matrix of 0-1 range of Gaussian distribution;represent the first in the first point cloud dataPoint cloud data or the first point cloud data of the second point cloud dataThe point cloud data is obtained by the point cloud data,,is the total number of point cloud data in the first point cloud data or in the second point cloud data,、andrespectively represent fusion matrix by input keyInput value fusion matrixAnd output fusion matrixFor the first point in the first point cloud dataPoint cloud dataOr the first point cloud dataPoint cloud dataThe preliminary fusion result of (1).
Optionally, in the point cloud fusion method described in the present application, the weight vector of the fusion matrix obtained by calculation according to the fusion matrix is calculated by the following formula:
wherein the content of the first and second substances,represents the weight vector of the fusion matrix and,representing fusion matrices through input keysFor the first point in the first point cloud dataPoint cloud dataOr the first point cloud dataPoint cloud dataAs a result of the preliminary fusion of (a),representing input value fusion matricesFor the first point in the first point cloud dataPoint cloud dataOr the first point cloud dataPoint cloud dataThe preliminary fusion result of (1).
Optionally, in the point cloud fusion method described in the present application, the fusion matrix output vector obtained by calculating according to the fusion matrix and the fusion matrix weight vector is calculated by the following formula:
wherein the content of the first and second substances,represents the output vector of the fusion matrix and,weight vector for the fusion matrixTo (1)The number of the elements is one,is shown to pass throughAnd withThe further fusion result obtained by the calculation is obtained,representing input value fusion matricesFor the first point in the first point cloud dataPoint cloud dataOr the first point cloud dataPoint cloud dataAs a result of the preliminary fusion of (a),representing output fusion matricesFor the first point in the first point cloud dataPoint cloud dataOr the first point cloud dataPoint cloud dataThe preliminary fusion result of (1).
Optionally, in the point cloud fusion method described in the present application, the preliminary fusion of the first point cloud data and the second point cloud data based on the multi-head self-attention mechanism is calculated by the following formula:
wherein the content of the first and second substances,for the preliminary fusion result output from the attention layer,andall are multi-head weighting matrixes with the same row value and column value,representing the first point cloud data and the second point cloud data,representing the second point cloud data and,,representing the first in the first point cloud dataThe number of the data is one,,representing the first point cloud data based on a first mechanism of self-attentionnThe output of the first and second processors is,representing the second point cloud data as being based on a self-attention mechanismnThe output of the first and second processors is,representing the first point cloud dataAnd (4) the data.
In a second aspect, the present application further provides a point cloud fusion device for fusing first point cloud data collected by an unmanned aerial vehicle and second point cloud data collected by a quadruped robot, the device comprising:
the acquisition module is used for acquiring the first point cloud data and the second point cloud data;
the initial fusion module is used for performing initial fusion on the first point cloud data and the second point cloud data based on a multi-head self-attention mechanism to obtain an initial fusion result;
the computing module is used for computing the weight attenuation of the first point cloud data and the weight attenuation of the second point cloud data;
the updating module is used for respectively updating the fusion parameters of the first point cloud data and the fusion parameters of the second point cloud data according to the weight attenuation of the first point cloud data and the weight attenuation of the second point cloud data in a back propagation mode until the update change rate of the fusion weight of the first point cloud data and the update change rate of the fusion weight of the second point cloud data are respectively smaller than a preset threshold value, and stopping iteration to obtain the optimal fusion parameters;
and the fusion module is used for fusing the first point cloud data and the second point cloud data according to the optimal fusion parameters to obtain a final fusion result.
The point cloud fusion device provided by the application can reduce the computation complexity in the point cloud fusion process through a parallel computation method, improve the efficiency and the precision of point cloud fusion, and can also perform self-adaptive adjustment on the condition that the input dimensions are not uniform, so that the time complexity is reduced.
In a third aspect, the present application provides an electronic device comprising a processor and a memory, wherein the memory stores computer readable instructions, and the computer readable instructions, when executed by the processor, perform the steps of the method as provided in the first aspect.
In a fourth aspect, the present application provides a storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method as provided in the first aspect above.
As can be seen from the above, the point cloud fusion method, device, electronic device and storage medium provided by the present application acquire the first point cloud data and the second point cloud data; performing preliminary fusion on the first point cloud data and the second point cloud data based on a multi-head self-attention mechanism to obtain a preliminary fusion result; calculating the weight attenuation of the first point cloud data and the weight attenuation of the second point cloud data; updating the fusion parameters of the first point cloud data and the fusion parameters of the second point cloud data respectively according to the weight attenuation of the first point cloud data and the weight attenuation of the second point cloud data in a back propagation mode until the update change rate of the fusion weight of the first point cloud data and the update change rate of the fusion weight of the second point cloud data are smaller than a preset threshold respectively, and stopping iteration to obtain optimal fusion parameters; fusing the first point cloud data and the second point cloud data according to the optimal fusion parameters to obtain a final fusion result; therefore, the integration of the first point cloud data acquired by the unmanned aerial vehicle and the second point cloud data acquired by the quadruped robot is realized, the calculation complexity in the point cloud integration process can be reduced by a parallel calculation method, the point cloud integration efficiency and precision are improved, in addition, the self-adaptive adjustment can be carried out on the condition of non-uniform input dimensions, and the time complexity is reduced.
Additional features and advantages of the present application will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the embodiments of the present application. The objectives and other advantages of the application may be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
Fig. 1 is a flowchart of a point cloud fusion method provided in an embodiment of the present application.
Fig. 2 is a schematic structural diagram of a point cloud fusion apparatus provided in the embodiment of the present application.
Fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only some embodiments of the present application, and not all embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present application, as presented in the figures, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present application without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures. Meanwhile, in the description of the present application, the terms "first", "second", and the like are used only for distinguishing the description, and are not construed as indicating or implying relative importance.
In recent years, unmanned aerial vehicle three-dimensional scene reconstruction is widely applied to the fields of unmanned aerial vehicle self-positioning navigation, urban digital twinning, topographic mapping and the like, but is limited by the flight height of the unmanned aerial vehicle and the limited resolution of a holder camera, under complex scenes (intricate buildings, jungles, dense grasslands and the like), the unmanned aerial vehicle cannot carry out three-dimensional reconstruction on the complex scenes on the ground, the unmanned vehicle is also limited by the motion performance of the unmanned aerial vehicle and cannot run under the complex scenes, and the quadruped robot can be suitable for various complex terrains, including jungles, dense vegetation, forests, ramps, stairs and the like. Therefore, under the complex scene environment, the point cloud fusion is carried out through the three-dimensional reconstruction map of the aerial unmanned aerial vehicle and the fine three-dimensional reconstruction map of the ground quadruped robot, the accuracy of the three-dimensional scene reconstruction under the existing complex environment can be effectively improved, and the map scene construction is facilitated.
However, the existing three-dimensional reconstruction point cloud fusion method mainly adopts a fusion method based on space voxels and a point cloud fusion method based on a clustering idea. The fusion method based on the spatial voxels, such as the TSDF method, needs to divide the point cloud space into tiny voxels, the subdivision degree is related to the precision, and when the method is applied to an application scene with higher precision requirement and larger point cloud space distribution, a large amount of memory resources are consumed, so the point cloud fusion method based on the spatial voxels is only suitable for point cloud fusion with low precision requirement and rapid reconstruction of a three-dimensional scene; the point cloud fusion method based on the clustering idea needs to input point cloud and normal direction at the same time. The method comprises the steps of utilizing a clustering mode to position an overlapping area, utilizing a least square method to project a point set of the overlapping area to a fitting plane, selecting an intersection point of a straight line formed by points of the overlapping area and a normal direction and the fitting plane as data of data after fusion, and when the data volume is large, the clustering process and the fitting process are time-consuming and the fusion efficiency is low.
Therefore, the current point cloud fusion algorithm is difficult to meet the requirements of large data volume and high precision of point cloud fusion of three-dimensional reconstruction of complex scenes. Based on the point cloud fusion method and device, the electronic equipment and the storage medium.
In a first aspect, please refer to fig. 1, fig. 1 is a flowchart of a point cloud fusion method in some embodiments of the present application. The point cloud fusion method is used for fusing first point cloud data acquired by an unmanned aerial vehicle and second point cloud data acquired by a quadruped robot, and comprises the following steps:
s101, obtaining first point cloud data and second point cloud data.
S102, performing preliminary fusion on the first point cloud data and the second point cloud data based on a multi-head self-attention mechanism to obtain a preliminary fusion result.
S103, calculating the weight attenuation of the first point cloud data and the weight attenuation of the second point cloud data.
And S104, updating the fusion parameters of the first point cloud data and the fusion parameters of the second point cloud data respectively according to the weight attenuation of the first point cloud data and the weight attenuation of the second point cloud data in a back propagation mode until the update change rate of the first point cloud data fusion weight and the update change rate of the second point cloud data fusion weight are smaller than a preset threshold respectively, and stopping iteration to obtain the optimal fusion parameters.
And S105, fusing the first point cloud data and the second point cloud data according to the optimal fusion parameters to obtain a final fusion result.
According to the point cloud fusion method, preliminary fusion is carried out based on a multi-head self-attention mechanism, then the fusion parameters are updated according to weight attenuation back propagation in the preliminary fusion result to obtain the optimal fusion parameters, fusion is carried out according to the optimal fusion parameters to obtain the optimal fusion result, the calculation complexity in the point cloud fusion process is reduced, and the efficiency and the precision of point cloud fusion are improved.
In step S101, the acquired first point cloud data may be obtained by converting after being shot by a high-definition pan-tilt camera with depth information carried by the unmanned aerial vehicle, and the acquired second point cloud data may be obtained by converting after being shot by a high-definition RGB-D camera.
In step S102, the preliminary fusion of the first point cloud data and the second point cloud data based on the multi-head self-attention mechanism is calculated by the following formula:
wherein the content of the first and second substances,for the preliminary fusion result output from the attention layer,andall are multi-head weighting matrixes with the same row value and column value,indicating that the first point cloud data is based on the output of a multi-headed self-attentive mechanism,,representing the first in the first point cloud dataThe number of the data is one,representing the output of the second point cloud data based on a multi-headed self-attentive mechanism,,representing the first point cloud dataAnd (4) data.
Specifically, in some embodiments, before the preliminary fusion of the first point cloud data and the second point cloud data based on the multi-head self-attention mechanism in step S102, the following steps are further included: and respectively carrying out random sampling on the first point cloud data and the second point cloud data according to a set proportion. The first point cloud data and the second point cloud data are respectively subjected to random sampling and then are subjected to preliminary fusion, so that the computational complexity in the fusion process is further reduced.
Specifically, in some embodiments, step S103 includes the following sub-steps: s1031, solving a fusion matrix; s1032, calculating according to the fusion matrix to obtain a fusion matrix weight vector; s1033, calculating to obtain a fusion matrix output vector according to the fusion matrix and the fusion matrix weight vector; s1034, further calculating by utilizing a feedforward neural network based on the weight matrix and the bias to obtain output; s1035, obtaining a loss function according to output calculation; s1036, obtaining weight attenuation quantity by calculating partial derivatives according to the loss function.
Wherein, in step S1031, the fusion matrix is calculated by the following formula:
wherein the fusion matrix comprises、And,in order to enter the key fusion matrix, the key fusion matrix is entered,in order to fuse the matrices for the input values,in order to output the fusion matrix,、andthe initial value of (a) is in accordance with a random number matrix of 0-1 range of Gaussian distribution;representing the first in the first point cloud dataFirst of the individual point cloud data or the second point cloud dataThe point cloud data is stored in a memory of the computer,,the total number of point cloud data in the first point cloud data or the second point cloud data,、andrespectively represent fusion matrix by input keyInput value fusion matrixAnd output fusion matrixTo the first in the first point cloud dataPoint cloud dataOr the first in the second point cloud dataPoint cloud dataThe preliminary fusion result of (1).
In step S1032, the weight vector of the fusion matrix obtained by calculation according to the fusion matrix is calculated by the following formula:
in the formula (I), the compound is shown in the specification,in order to fuse the weight vectors of the matrix,representing fusion matrices through input keysTo the first in the first point cloud dataPoint cloud dataOr the first point cloud dataPoint cloud dataAs a result of the preliminary fusion of (a),representing input value fusion matricesTo the first in the first point cloud dataPoint cloud dataOr the first in the second point cloud dataPoint cloud dataThe preliminary fusion result of (2).
In step S1033, the fusion matrix output vector is calculated by the following formula:
wherein the content of the first and second substances,represents the output vector of the fusion matrix and,to fuse matrix weight vectorsToThe number of the elements is one,is shown to pass throughAndthe further fusion result obtained by the calculation is obtained,representing input value fusion matricesTo the first in the first point cloud dataPoint cloud dataOr the first in the second point cloud dataPoint cloud dataAs a result of the preliminary fusion of (a),representing output fusion matricesTo the first in the first point cloud dataPoint cloud dataOr the first point cloud dataPoint cloud dataThe preliminary fusion result of (1).
In step S1034, the output is calculated by the following formula:
wherein the content of the first and second substances,in order to be output, the output is,,an output weight matrix representing the output layer,of dimension of,In order to be offset,has a dimension of。
In step S1035, the loss function is calculated by the following formula:
wherein the content of the first and second substances,in order to be a function of the loss,the accurate labels in the single-head point cloud are obtained by calibration after manual selection.
In step S104, updating the fusion parameters of the first point cloud data and the fusion parameters of the second point cloud data according to the weight attenuation of the first point cloud data and the weight attenuation of the second point cloud data, respectively, and calculating according to the following formulas:
wherein the content of the first and second substances,、、andin order to obtain the fusion parameters before the update,in order to enter the key fusion matrix, the key fusion matrix is entered,in order to fuse the matrices for the input values,in order to output the fusion matrix, the fusion matrix is output,、、andare the fusion parameters after the corresponding update respectively,it is shown that the derivation is calculated,it is indicated that the learning rate is,the value is 0.005.
By pair fusion matrix、、Andand updating the fusion parameters by continuous back propagation, and gradually converting the input data (the first point cloud data and the second point cloud data) into the same dimension.
As can be seen from the above, the point cloud fusion method provided by the embodiment of the present application obtains the first point cloud data and the second point cloud data; performing preliminary fusion on the first point cloud data and the second point cloud data based on a multi-head self-attention mechanism to obtain a preliminary fusion result; calculating the weight attenuation of the first point cloud data and the weight attenuation of the second point cloud data; updating the fusion parameters of the first point cloud data and the fusion parameters of the second point cloud data respectively according to the weight attenuation of the first point cloud data and the weight attenuation of the second point cloud data in a back propagation mode until the update change rate of the fusion weight of the first point cloud data and the update change rate of the fusion weight of the second point cloud data are smaller than a preset threshold respectively, and stopping iteration to obtain optimal fusion parameters; fusing the first point cloud data and the second point cloud data according to the optimal fusion parameters to obtain a final fusion result; therefore, the first point cloud data acquired by the unmanned aerial vehicle and the second point cloud data acquired by the quadruped robot are fused, the calculation complexity in the point cloud fusion process can be reduced through a parallel calculation method, the point cloud fusion efficiency and precision are improved, in addition, the self-adaptive adjustment can be carried out on the condition that the input dimensions are not uniform, and the time complexity is reduced.
In a second aspect, please refer to fig. 2, fig. 2 is a point cloud fusion apparatus for fusing first point cloud data acquired by an unmanned aerial vehicle and second point cloud data acquired by a quadruped robot, the point cloud fusion apparatus being integrated in a ground station device in wireless communication connection with the unmanned aerial vehicle and the quadruped robot in the form of a computer program, the point cloud fusion apparatus comprising: the system comprises an acquisition module 201, a preliminary fusion module 202, a calculation module 203, an update module 204 and a fusion module 205.
The acquiring module 201 is configured to acquire first point cloud data and second point cloud data; the acquired first point cloud data can be obtained by converting after being shot by a high-definition cloud platform camera with depth information carried by an unmanned aerial vehicle, and the acquired second point cloud data can be obtained by converting after being shot by a high-definition RGB-D camera.
The initial fusion module 202 is configured to perform initial fusion on the first point cloud data and the second point cloud data based on a multi-head self-attention mechanism, so as to obtain an initial fusion result. The preliminary fusion of the first point cloud data and the second point cloud data based on the multi-head self-attention mechanism is calculated by the following formula:
wherein the content of the first and second substances,for the preliminary fusion result output from the attention layer,andall are multi-head weighting matrixes with the same row value and column value,indicating that the first point cloud data is based on the output of a multi-headed self-attentive mechanism,,representing the first in the first point cloud dataThe number of the data is one,representing the output of the second point cloud data based on a multi-headed self-attentive mechanism,,representing the first point cloud dataAnd (4) data.
Specifically, in some embodiments, the point cloud fusion device further comprises a random sampling module. The random sampling module is configured to, before the preliminary fusion module 202 performs preliminary fusion on the first point cloud data and the second point cloud data based on the multi-head self-attention mechanism, respectively perform random sampling on the first point cloud data and the second point cloud data according to a set proportion, so as to reduce the computational complexity in the fusion process.
The calculating module 203 is configured to calculate a weight attenuation amount of the first point cloud data and a weight attenuation amount of the second point cloud data. And calculating the weight attenuation of the first point cloud data and the weight attenuation of the second point cloud data to ensure that each point coordinate of the output fusion point cloud depends on all input point coordinates, so that the fusion is more accurate.
Specifically, in some embodiments, the calculation module 203 includes: the first calculation unit is used for solving a fusion matrix; the second calculation unit is used for calculating to obtain a fusion matrix weight vector according to the fusion matrix; the third calculation unit is used for calculating to obtain a fusion matrix output vector according to the fusion matrix and the fusion matrix weight vector; the fourth calculation unit is used for further calculating by utilizing a feedforward neural network based on the weight matrix and the bias to obtain output; a fifth calculating unit, configured to calculate a loss function according to the output; and the sixth calculating unit is used for solving the partial derivative according to the loss function to obtain the weight attenuation.
Specifically, the first calculation unit calculates a fusion matrix of the preliminary fusion result by the following formula:
in the formula, the fusion matrix comprises、And,in order to enter the key fusion matrix, the key fusion matrix is entered,in order to fuse the matrices for the input values,in order to output the fusion matrix, the fusion matrix is output,、andthe initial value of (a) is in accordance with a random number matrix of 0-1 range of Gaussian distribution;representing the first in the first point cloud dataFirst of the individual point cloud data or the second point cloud dataThe point cloud data is stored in a memory of the computer,,to the total number of point cloud data in the preliminary fusion result,、andrespectively representing fusion matrices through input keysInput value fusion matrixAnd output fusion matrixTo the first in the first point cloud dataPoint cloud dataOr the first in the second point cloud dataPoint cloud dataThe preliminary fusion result of (1).
Specifically, the second calculation unit calculates the fusion matrix weight vector by the following formula:
in the formula (I), the compound is shown in the specification,in order to fuse the weight vectors of the matrix,representing fusion matrices through input keysTo the first in the first point cloud dataPoint cloud dataOr the first point cloud dataPoint cloud dataAs a result of the preliminary fusion of (a),representing input value fusion matricesTo the first in the first point cloud dataPoint cloud dataOr the first in the second point cloud dataPoint cloud dataThe preliminary fusion result of (1).
Specifically, the third calculation unit calculates the fusion matrix output vector by the following formula:
in the formula (I), the compound is shown in the specification,represents the output vector of the fusion matrix and,to fuse matrix weight vectorsTo (1)The number of the elements is one,is shown to pass throughAndthe further fusion result obtained by the calculation is obtained,representing input value fusion matricesTo the first in the first point cloud dataPoint cloud dataOr the first in the second point cloud dataPoint cloud dataAs a result of the preliminary fusion of (a),representing output fusion matricesTo the first in the first point cloud dataPoint cloud dataOr the first in the second point cloud dataPoint cloud dataThe preliminary fusion result of (1).
Specifically, the fourth calculation unit calculates the output by the following formula:
in the formula (I), the compound is shown in the specification,in order to output the information,,an output weight matrix representing the output layer,has a dimension of,In order to be offset,has a dimension of。
Specifically, the fifth calculation unit calculates the loss function by the following formula:
in the formula (I), the compound is shown in the specification,in order to be a function of the loss,the accurate labels in the single-head point cloud are obtained by calibration after manual selection.
The updating module 204 is configured to update the fusion parameters of the first point cloud data and the fusion parameters of the second point cloud data according to the weight attenuation of the first point cloud data and the weight attenuation of the second point cloud data in a back propagation manner, respectively, until the update change rate of the fusion weight of the first point cloud data and the update change rate of the fusion weight of the second point cloud data are respectively smaller than a preset threshold, and stop the iteration to obtain the optimal fusion parameters.
Specifically, updating the fusion parameters of the first point cloud data and the fusion parameters of the second point cloud data respectively according to the weight attenuation of the first point cloud data and the weight attenuation of the second point cloud data through the following formulas:
in the formula (I), the compound is shown in the specification,、、andin order to obtain the fusion parameters before the update,in order to enter the key fusion matrix, the key fusion matrix is entered,in order to fuse the matrices for the input values,in order to output the fusion matrix, the fusion matrix is output,、、andare respectively corresponding toThe new parameters of the fusion are obtained by the method,it is shown that the derivation is calculated,it is indicated that the learning rate is,the value is 0.005.
By pair fusion matrix、、Andand updating the fusion parameters by continuously back-propagating, so that the input data (the first point cloud data and the second point cloud data) can be gradually converted into the same dimension.
As can be seen from the above, the point cloud fusion device provided in the embodiment of the present application obtains the first point cloud data and the second point cloud data; performing preliminary fusion on the first point cloud data and the second point cloud data based on a multi-head self-attention mechanism to obtain a preliminary fusion result; calculating the weight attenuation of the first point cloud data and the weight attenuation of the second point cloud data; updating the fusion parameters of the first point cloud data and the fusion parameters of the second point cloud data respectively according to the weight attenuation of the first point cloud data and the weight attenuation of the second point cloud data in a back propagation mode until the update change rate of the fusion weight of the first point cloud data and the update change rate of the fusion weight of the second point cloud data are smaller than a preset threshold respectively, and stopping iteration to obtain optimal fusion parameters; fusing the first point cloud data and the second point cloud data according to the optimal fusion parameters to obtain a final fusion result; therefore, the integration of the first point cloud data acquired by the unmanned aerial vehicle and the second point cloud data acquired by the quadruped robot is realized, the calculation complexity in the point cloud integration process can be reduced by a parallel calculation method, the point cloud integration efficiency and precision are improved, in addition, the self-adaptive adjustment can be carried out on the condition of non-uniform input dimensions, and the time complexity is reduced.
In a third aspect, referring to fig. 3, fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present application, where the present application provides an electronic device, including: the processor 301 and the memory 302, the processor 301 and the memory 302 being interconnected and communicating with each other via a communication bus 303 and/or other form of connection mechanism (not shown), the memory 302 storing a computer program executable by the processor 301, the computer program being executable by the processor 301 when the computing device is running to perform the method in any of the alternative implementations of the above embodiments when the processor 301 executes the computer program to perform the following functions: acquiring first point cloud data and second point cloud data; performing preliminary fusion on the first point cloud data and the second point cloud data based on a multi-head self-attention mechanism to obtain a preliminary fusion result; calculating the weight attenuation of the first point cloud data and the weight attenuation of the second point cloud data; updating the fusion parameters of the first point cloud data and the fusion parameters of the second point cloud data respectively according to the weight attenuation of the first point cloud data and the weight attenuation of the second point cloud data in a back propagation mode until the update change rate of the fusion weight of the first point cloud data and the update change rate of the fusion weight of the second point cloud data are smaller than a preset threshold respectively, and stopping iteration to obtain optimal fusion parameters; and fusing the first point cloud data and the second point cloud data according to the optimal fusion parameters to obtain a final fusion result.
In a fourth aspect, the present application provides a storage medium, on which a computer program is stored, and when the computer program is executed by a processor, the computer program performs the method in any optional implementation manner of the foregoing embodiment to implement the following functions: acquiring first point cloud data and second point cloud data; performing preliminary fusion on the first point cloud data and the second point cloud data based on a multi-head self-attention mechanism to obtain a preliminary fusion result; calculating the weight attenuation of the first point cloud data and the weight attenuation of the second point cloud data; updating the fusion parameters of the first point cloud data and the fusion parameters of the second point cloud data respectively according to the weight attenuation of the first point cloud data and the weight attenuation of the second point cloud data in a back propagation mode until the update change rate of the fusion weight of the first point cloud data and the update change rate of the fusion weight of the second point cloud data are smaller than a preset threshold respectively, and stopping iteration to obtain optimal fusion parameters; and fusing the first point cloud data and the second point cloud data according to the optimal fusion parameters to obtain a final fusion result. The storage medium may be implemented by any type of volatile or nonvolatile storage device or combination thereof, such as a Static Random Access Memory (SRAM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), an Erasable Programmable Read-Only Memory (EPROM), a Programmable Read-Only Memory (PROM), a Read-Only Memory (ROM), a magnetic Memory, a flash Memory, a magnetic disk, or an optical disk.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described apparatus embodiments are merely illustrative, and for example, the division of the units into only one type of logical function may be implemented in other ways, and for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
In addition, units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
Furthermore, the functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
In this document, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions.
The above description is only an example of the present application and is not intended to limit the scope of the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.
Claims (10)
1. A point cloud fusion method is used for fusing first point cloud data acquired by an unmanned aerial vehicle and second point cloud data acquired by a quadruped robot, and is characterized by comprising the following steps:
acquiring the first point cloud data and the second point cloud data;
performing preliminary fusion on the first point cloud data and the second point cloud data based on a multi-head self-attention mechanism to obtain a preliminary fusion result;
calculating the weight attenuation of the first point cloud data and the weight attenuation of the second point cloud data;
updating the fusion parameters of the first point cloud data and the fusion parameters of the second point cloud data respectively according to the weight attenuation of the first point cloud data and the weight attenuation of the second point cloud data in a back propagation mode until the update change rate of the fusion weight of the first point cloud data and the update change rate of the fusion weight of the second point cloud data are respectively smaller than a preset threshold value, and stopping iteration to obtain optimal fusion parameters;
and fusing the first point cloud data and the second point cloud data according to the optimal fusion parameters to obtain a final fusion result.
2. The point cloud fusion method of claim 1, wherein the preliminary fusion of the first point cloud data and the second point cloud data based on a multi-head self-attention mechanism comprises the following steps:
and randomly sampling the first point cloud data and the second point cloud data according to a set proportion.
3. The point cloud fusion method of claim 1, wherein the calculating the weight attenuation of the first point cloud data and the weight attenuation of the second point cloud data each comprises:
solving a fusion matrix;
calculating to obtain a fusion matrix weight vector according to the fusion matrix;
calculating to obtain a fusion matrix output vector according to the fusion matrix and the fusion matrix weight vector;
calculating by using a feedforward neural network based on the weight matrix and the bias to obtain an output;
calculating according to the output to obtain a loss function;
and obtaining the weight attenuation amount by solving the partial derivative according to the loss function.
4. The point cloud fusion method of claim 3, wherein the solving of the fusion matrix is calculated by the following formula:
wherein the fusion matrix comprises、And,in order to input the key fusion matrix,in order to fuse the matrices for the input values,in order to output the fusion matrix, the fusion matrix is output,、andthe initial value of (a) is in accordance with a random number matrix of 0-1 range of Gaussian distribution;representing the first in the first point cloud dataPoint cloud data or the first point cloud data of the second point cloud dataThe point cloud data is stored in a memory of the computer,,is the total number of point cloud data in the first point cloud data or in the second point cloud data,、andrespectively representing fusion matrices through input keysInput value fusion matrixAnd output fusion matrixFor the first point in the first point cloud dataPoint cloud dataOr the first point cloud dataPoint cloud dataThe preliminary fusion result of (1).
5. The point cloud fusion method of claim 3, wherein the fusion matrix weight vector calculated from the fusion matrix is calculated by the following formula:
wherein the content of the first and second substances,represents the weight vector of the fusion matrix and,representing fusion matrices through input keysFor the first point in the first point cloud dataPoint cloud dataOr the first point cloud dataPoint cloud dataAs a result of the preliminary fusion of (a),representing input value fusion matricesFor the first point in the first point cloud dataPoint cloud dataOr the first point cloud dataPoint cloud dataThe preliminary fusion result of (1).
6. The point cloud fusion method of claim 3, wherein the fusion matrix output vector calculated from the fusion matrix and the fusion matrix weight vector is calculated by the following formula:
wherein the content of the first and second substances,represents the output vector of the fusion matrix and,weight vector for the fusion matrixTo (1)The number of the elements is one,is shown to pass throughAndthe further fusion result obtained by the calculation is obtained,representing fusion matrices through input keysFor the first point in the first point cloud dataPoint cloud dataOr the first point cloud dataPoint cloud dataAs a result of the preliminary fusion of (a),representing input value fusion matricesFor the first point in the first point cloud dataPoint cloud dataOr the first point cloud dataPoint cloud dataAs a result of the preliminary fusion of (a),representing output fusion matricesFor the first point in the first point cloud dataPoint cloud dataOr the first point cloud dataPoint cloud dataThe preliminary fusion result of (1).
7. The point cloud fusion method of claim 1, wherein the preliminary fusion of the first point cloud data and the second point cloud data based on the multi-head self-attention mechanism is calculated by the following formula:
wherein the content of the first and second substances,for preliminary fusion of outputs from attention layersAs a result of which,andall are multi-head weighting matrixes with the same row value and column value,representing the first point cloud data based on an output of a multi-headed self-attention mechanism,,representing the first in the first point cloud dataThe number of the data is one,representing an output of the second point cloud data based on a multi-headed self-attention mechanism,,representing the first point cloud dataAnd (4) data.
8. The utility model provides a point cloud fuses device for the second point cloud data that gathers to the first point cloud data that unmanned aerial vehicle gathered and four-footed robot fuses, its characterized in that, the device includes:
the acquisition module is used for acquiring the first point cloud data and the second point cloud data;
the initial fusion module is used for performing initial fusion on the first point cloud data and the second point cloud data based on a multi-head self-attention mechanism to obtain an initial fusion result;
the computing module is used for computing the weight attenuation of the first point cloud data and the weight attenuation of the second point cloud data;
the updating module is used for respectively updating the fusion parameters of the first point cloud data and the fusion parameters of the second point cloud data according to the weight attenuation of the first point cloud data and the weight attenuation of the second point cloud data in a back propagation mode until the update change rate of the fusion weight of the first point cloud data and the update change rate of the fusion weight of the second point cloud data are respectively smaller than a preset threshold value, and stopping iteration to obtain the optimal fusion parameters;
and the fusion module is used for fusing the first point cloud data and the second point cloud data according to the optimal fusion parameters to obtain a final fusion result.
9. An electronic device comprising a processor and a memory, the memory storing computer readable instructions which, when executed by the processor, perform the steps of the method of any one of claims 1-7.
10. A storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, performs the steps of the method according to any one of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210426803.8A CN114549608B (en) | 2022-04-22 | 2022-04-22 | Point cloud fusion method and device, electronic equipment and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210426803.8A CN114549608B (en) | 2022-04-22 | 2022-04-22 | Point cloud fusion method and device, electronic equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114549608A CN114549608A (en) | 2022-05-27 |
CN114549608B true CN114549608B (en) | 2022-10-18 |
Family
ID=81666948
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210426803.8A Active CN114549608B (en) | 2022-04-22 | 2022-04-22 | Point cloud fusion method and device, electronic equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114549608B (en) |
Family Cites Families (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11556777B2 (en) * | 2017-11-15 | 2023-01-17 | Uatc, Llc | Continuous convolution and fusion in neural networks |
US11221413B2 (en) * | 2018-03-14 | 2022-01-11 | Uatc, Llc | Three-dimensional object detection |
CN109919893B (en) * | 2019-03-20 | 2021-04-23 | 湖北亿咖通科技有限公司 | Point cloud correction method and device and readable storage medium |
CN109978165A (en) * | 2019-04-04 | 2019-07-05 | 重庆大学 | A kind of generation confrontation network method merged from attention mechanism |
CN109978808B (en) * | 2019-04-25 | 2022-02-01 | 北京迈格威科技有限公司 | Method and device for image fusion and electronic equipment |
CN112184603B (en) * | 2019-07-04 | 2022-06-24 | 浙江商汤科技开发有限公司 | Point cloud fusion method and device, electronic equipment and computer storage medium |
US20210122045A1 (en) * | 2019-10-24 | 2021-04-29 | Nvidia Corporation | In-hand object pose tracking |
US11928873B2 (en) * | 2020-03-04 | 2024-03-12 | Magic Leap, Inc. | Systems and methods for efficient floorplan generation from 3D scans of indoor scenes |
US20210374345A1 (en) * | 2020-06-01 | 2021-12-02 | Google Llc | Processing large-scale textual inputs using neural networks |
US11941875B2 (en) * | 2020-07-27 | 2024-03-26 | Waymo Llc | Processing perspective view range images using neural networks |
CN111860666A (en) * | 2020-07-27 | 2020-10-30 | 湖南工程学院 | 3D target detection method based on point cloud and image self-attention mechanism fusion |
CN111950467B (en) * | 2020-08-14 | 2021-06-25 | 清华大学 | Fusion network lane line detection method based on attention mechanism and terminal equipment |
CN113487739A (en) * | 2021-05-19 | 2021-10-08 | 清华大学 | Three-dimensional reconstruction method and device, electronic equipment and storage medium |
CN113269147B (en) * | 2021-06-24 | 2022-07-05 | 浙江海康智联科技有限公司 | Three-dimensional detection method and system based on space and shape, and storage and processing device |
CN113345106A (en) * | 2021-06-24 | 2021-09-03 | 西南大学 | Three-dimensional point cloud analysis method and system based on multi-scale multi-level converter |
CN113658100A (en) * | 2021-07-16 | 2021-11-16 | 上海高德威智能交通系统有限公司 | Three-dimensional target object detection method and device, electronic equipment and storage medium |
CN113989340A (en) * | 2021-10-29 | 2022-01-28 | 天津大学 | Point cloud registration method based on distribution |
CN114004871B (en) * | 2022-01-04 | 2022-04-15 | 山东大学 | Point cloud registration method and system based on point cloud completion |
CN114066960B (en) * | 2022-01-13 | 2022-04-22 | 季华实验室 | Three-dimensional reconstruction method, point cloud fusion method, device, equipment and storage medium |
CN114078151B (en) * | 2022-01-19 | 2022-04-22 | 季华实验室 | Point cloud fusion method and device, electronic equipment and storage medium |
-
2022
- 2022-04-22 CN CN202210426803.8A patent/CN114549608B/en active Active
Also Published As
Publication number | Publication date |
---|---|
CN114549608A (en) | 2022-05-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109658445A (en) | Network training method, increment build drawing method, localization method, device and equipment | |
CN110383340A (en) | Path planning is carried out using sparse volume data | |
CN108204814B (en) | Unmanned aerial vehicle three-dimensional scene path navigation platform and three-dimensional improved path planning method thereof | |
US11302105B2 (en) | Grid map obstacle detection method fusing probability and height information | |
CN111750857B (en) | Route generation method, route generation device, terminal and storage medium | |
WO2016029348A1 (en) | Measuring traffic speed in a road network | |
CN110347971A (en) | Particle filter method, device and storage medium based on TSK fuzzy model | |
CN114066960B (en) | Three-dimensional reconstruction method, point cloud fusion method, device, equipment and storage medium | |
US20240037844A1 (en) | 3d structure engine-based computation platform | |
CN110181508A (en) | Underwater robot three-dimensional Route planner and system | |
CN113916130B (en) | Building position measuring method based on least square method | |
Gan et al. | Research on role modeling and behavior control of virtual reality animation interactive system in Internet of Things | |
Yuan et al. | Feature preserving multiresolution subdivision and simplification of point clouds: A conformal geometric algebra approach | |
CN116518960B (en) | Road network updating method, device, electronic equipment and storage medium | |
CN112241676A (en) | Method for automatically identifying terrain sundries | |
CN116720632B (en) | Engineering construction intelligent management method and system based on GIS and BIM | |
CN114549608B (en) | Point cloud fusion method and device, electronic equipment and storage medium | |
CN115393542B (en) | Generalized building three-dimensional geometric reconstruction method | |
CN116482711A (en) | Local static environment sensing method and device for autonomous selection of landing zone | |
WO2023164933A1 (en) | Building modeling method and related apparatus | |
CN115797256A (en) | Unmanned aerial vehicle-based tunnel rock mass structural plane information processing method and device | |
CN115375836A (en) | Point cloud fusion three-dimensional reconstruction method and system based on multivariate confidence filtering | |
Hou et al. | Poisson disk sampling in geodesic metric for DEM simplification | |
CN107247833A (en) | A kind of CAE mass data light weight methods under cloud computing | |
CN114511571A (en) | Point cloud data semantic segmentation method and system and related components |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |