CN114549608B - Point cloud fusion method and device, electronic equipment and storage medium - Google Patents

Point cloud fusion method and device, electronic equipment and storage medium

Info

Publication number
CN114549608B
CN114549608B (application CN202210426803.8A)
Authority
CN
China
Prior art keywords
point cloud
cloud data
fusion
matrix
weight
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210426803.8A
Other languages
Chinese (zh)
Other versions
CN114549608A (en)
Inventor
邓涛
张晟东
李志建
古家威
霍震
陈海龙
黄秀韦
何昊名
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to CN202210426803.8A priority Critical patent/CN114549608B/en
Publication of CN114549608A publication Critical patent/CN114549608A/en
Application granted granted Critical
Publication of CN114549608B publication Critical patent/CN114549608B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/048 Activation functions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20024 Filtering details

Abstract

The application belongs to the technical field of data processing, and provides a point cloud fusion method, a point cloud fusion device, electronic equipment and a storage medium, wherein the point cloud fusion method comprises the following steps: acquiring first point cloud data and second point cloud data; performing preliminary fusion on the first point cloud data and the second point cloud data based on a multi-head self-attention mechanism to obtain a preliminary fusion result; calculating the weight attenuation of the first point cloud data and the weight attenuation of the second point cloud data; and updating the fusion parameters of the first point cloud data and the fusion parameters of the second point cloud data respectively according to the weight attenuation of the first point cloud data and the weight attenuation of the second point cloud data in a back propagation mode until the update change rate of the first point cloud data fusion weight and the update change rate of the second point cloud data fusion weight are respectively smaller than a preset threshold value, and stopping iteration to obtain the optimal fusion parameters. The point cloud fusion method, the point cloud fusion device, the electronic equipment and the storage medium reduce the calculation complexity in the fusion process and improve the efficiency and the precision of point cloud fusion.

Description

Point cloud fusion method and device, electronic equipment and storage medium
Technical Field
The application relates to the technical field of data processing, in particular to a point cloud fusion method and device, electronic equipment and a storage medium.
Background
Three-dimensional reconstruction refers to establishing a mathematical model of a three-dimensional object that is suitable for computer representation and processing; it is the basis for processing, operating on and analyzing the properties of three-dimensional objects in a computer environment, and is also a key technology for building, in a computer, a virtual reality that expresses the objective world.
At present, the existing three-dimensional reconstruction point cloud fusion method mainly adopts a fusion method based on space voxels and a point cloud fusion method based on a clustering idea, but the fusion method based on the space voxels and the point cloud fusion method based on the clustering idea have the problems of low fusion efficiency and low precision.
In view of the above problems, no effective technical solution exists at present.
Disclosure of Invention
The application aims to provide a point cloud fusion method, a point cloud fusion device, an electronic device and a storage medium, which can reduce the calculation complexity in the point cloud fusion process and improve the efficiency and the precision of the point cloud fusion.
In a first aspect, the present application provides a point cloud fusion method for fusing first point cloud data acquired by an unmanned aerial vehicle and second point cloud data acquired by a quadruped robot, the method comprising the steps of:
acquiring the first point cloud data and the second point cloud data;
performing preliminary fusion on the first point cloud data and the second point cloud data based on a multi-head self-attention mechanism to obtain a preliminary fusion result;
calculating the weight attenuation of the first point cloud data and the weight attenuation of the second point cloud data;
updating the fusion parameters of the first point cloud data and the fusion parameters of the second point cloud data respectively according to the weight attenuation of the first point cloud data and the weight attenuation of the second point cloud data in a back propagation mode until the update change rate of the fusion weight of the first point cloud data and the update change rate of the fusion weight of the second point cloud data are respectively smaller than a preset threshold value, and stopping iteration to obtain optimal fusion parameters;
and fusing the first point cloud data and the second point cloud data according to the optimal fusion parameters to obtain a final fusion result.
According to the point cloud fusion method, the computational complexity of the point cloud fusion process can be reduced through parallel computation, the efficiency and precision of point cloud fusion are improved, adaptive adjustment can be performed when the input dimensions are not uniform, and the time complexity is reduced.
Optionally, in the point cloud fusion method described in the present application, before performing preliminary fusion on the first point cloud data and the second point cloud data based on the multi-head self-attention mechanism, the method includes the following steps:
and randomly sampling the first point cloud data and the second point cloud data according to a set proportion.
According to the method and the device, the first point cloud data and the second point cloud data are subjected to random sampling and then are subjected to preliminary fusion, and the calculation complexity in the fusion process is further reduced.
Optionally, in the point cloud fusion method described in the present application, the calculating the weight attenuation amount of the first point cloud data and the weight attenuation amount of the second point cloud data each includes the following steps:
solving a fusion matrix;
calculating to obtain a fusion matrix weight vector according to the fusion matrix;
calculating to obtain a fusion matrix output vector according to the fusion matrix and the fusion matrix weight vector;
calculating by using a feedforward neural network based on the weight matrix and the bias to obtain an output;
calculating according to the output to obtain a loss function;
and solving a partial derivative according to the loss function to obtain a weight attenuation amount.
According to the method, the weight attenuation of the preliminary fusion result is calculated through the steps, so that each point coordinate of the output fusion point cloud depends on all input point coordinates, and the fusion is more accurate.
Optionally, in the point cloud fusion method described in the present application, the solving of the fusion matrix is calculated by the following formula:
k_i = W^K · a_i, v_i = W^V · a_i, q_i = W^Q · a_i
wherein the fusion matrices comprise W^K, W^V and W^Q; W^K is the input key fusion matrix, W^V is the input value fusion matrix, and W^Q is the output fusion matrix; the initial values of W^K, W^V and W^Q are random number matrices in the range of 0-1 conforming to a Gaussian distribution; a_i represents the i-th point cloud data in the first point cloud data or the i-th point cloud data in the second point cloud data, i = 1, 2, …, N, where N is the total number of point cloud data in the first point cloud data or in the second point cloud data; k_i, v_i and q_i respectively represent the preliminary fusion results obtained by applying the input key fusion matrix W^K, the input value fusion matrix W^V and the output fusion matrix W^Q to the i-th point cloud data a_i in the first point cloud data or the i-th point cloud data a_i in the second point cloud data.
Optionally, in the point cloud fusion method described in the present application, the fusion matrix weight vector is calculated from the fusion matrix, wherein α_i represents the fusion matrix weight vector; k_i represents the preliminary fusion result obtained through the input key fusion matrix W^K for the i-th point cloud data a_i in the first point cloud data or in the second point cloud data; and v_N represents the preliminary fusion result obtained through the input value fusion matrix W^V for the N-th point cloud data a_N in the first point cloud data or in the second point cloud data.
Optionally, in the point cloud fusion method described in the present application, the fusion matrix output vector is calculated from the fusion matrix and the fusion matrix weight vector, wherein b_i represents the fusion matrix output vector; α_i,j, the j-th element of the fusion matrix weight vector α_i, represents the further fusion result obtained by calculation from k_i and v_j; v_j represents the preliminary fusion result obtained through the input value fusion matrix W^V for the j-th point cloud data a_j in the first point cloud data or in the second point cloud data; and q_j represents the preliminary fusion result obtained through the output fusion matrix W^Q for the j-th point cloud data a_j in the first point cloud data or in the second point cloud data.
Optionally, in the point cloud fusion method described in the present application, in the preliminary fusion of the first point cloud data and the second point cloud data based on the multi-head self-attention mechanism, C denotes the preliminary fusion result output from the self-attention layer; W_1 and W_2 are both multi-head weighting matrices with the same row and column dimensions; P^A denotes the output of the first point cloud data based on the multi-head self-attention mechanism, and p^A_n denotes the n-th data in the first point cloud data; P^B denotes the output of the second point cloud data based on the multi-head self-attention mechanism, and p^B_n denotes the n-th data in the second point cloud data.
In a second aspect, the present application further provides a point cloud fusion device for fusing first point cloud data collected by an unmanned aerial vehicle and second point cloud data collected by a quadruped robot, the device comprising:
the acquisition module is used for acquiring the first point cloud data and the second point cloud data;
the initial fusion module is used for performing initial fusion on the first point cloud data and the second point cloud data based on a multi-head self-attention mechanism to obtain an initial fusion result;
the computing module is used for computing the weight attenuation of the first point cloud data and the weight attenuation of the second point cloud data;
the updating module is used for respectively updating the fusion parameters of the first point cloud data and the fusion parameters of the second point cloud data according to the weight attenuation of the first point cloud data and the weight attenuation of the second point cloud data in a back propagation mode until the update change rate of the fusion weight of the first point cloud data and the update change rate of the fusion weight of the second point cloud data are respectively smaller than a preset threshold value, and stopping iteration to obtain the optimal fusion parameters;
and the fusion module is used for fusing the first point cloud data and the second point cloud data according to the optimal fusion parameters to obtain a final fusion result.
The point cloud fusion device provided by the application can reduce the computation complexity in the point cloud fusion process through a parallel computation method, improve the efficiency and the precision of point cloud fusion, and can also perform self-adaptive adjustment on the condition that the input dimensions are not uniform, so that the time complexity is reduced.
In a third aspect, the present application provides an electronic device comprising a processor and a memory, wherein the memory stores computer readable instructions, and the computer readable instructions, when executed by the processor, perform the steps of the method as provided in the first aspect.
In a fourth aspect, the present application provides a storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method as provided in the first aspect above.
As can be seen from the above, the point cloud fusion method, device, electronic device and storage medium provided by the present application acquire the first point cloud data and the second point cloud data; performing preliminary fusion on the first point cloud data and the second point cloud data based on a multi-head self-attention mechanism to obtain a preliminary fusion result; calculating the weight attenuation of the first point cloud data and the weight attenuation of the second point cloud data; updating the fusion parameters of the first point cloud data and the fusion parameters of the second point cloud data respectively according to the weight attenuation of the first point cloud data and the weight attenuation of the second point cloud data in a back propagation mode until the update change rate of the fusion weight of the first point cloud data and the update change rate of the fusion weight of the second point cloud data are smaller than a preset threshold respectively, and stopping iteration to obtain optimal fusion parameters; fusing the first point cloud data and the second point cloud data according to the optimal fusion parameters to obtain a final fusion result; therefore, the integration of the first point cloud data acquired by the unmanned aerial vehicle and the second point cloud data acquired by the quadruped robot is realized, the calculation complexity in the point cloud integration process can be reduced by a parallel calculation method, the point cloud integration efficiency and precision are improved, in addition, the self-adaptive adjustment can be carried out on the condition of non-uniform input dimensions, and the time complexity is reduced.
Additional features and advantages of the present application will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the embodiments of the present application. The objectives and other advantages of the application may be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
Fig. 1 is a flowchart of a point cloud fusion method provided in an embodiment of the present application.
Fig. 2 is a schematic structural diagram of a point cloud fusion apparatus provided in the embodiment of the present application.
Fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only some embodiments of the present application, and not all embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present application, as presented in the figures, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present application without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures. Meanwhile, in the description of the present application, the terms "first", "second", and the like are used only for distinguishing the description, and are not construed as indicating or implying relative importance.
In recent years, three-dimensional scene reconstruction by unmanned aerial vehicles has been widely applied in fields such as unmanned aerial vehicle self-positioning and navigation, urban digital twinning and topographic mapping. However, limited by the flight height of the unmanned aerial vehicle and the limited resolution of its pan-tilt camera, the unmanned aerial vehicle cannot perform three-dimensional reconstruction of complex scenes on the ground (intricate buildings, jungles, dense grasslands and the like); unmanned ground vehicles are likewise limited by their motion performance and cannot travel in such complex scenes, whereas a quadruped robot can adapt to various complex terrains, including jungles, dense vegetation, forests, ramps, stairs and the like. Therefore, in a complex scene environment, performing point cloud fusion between the three-dimensional reconstruction map of the aerial unmanned aerial vehicle and the fine three-dimensional reconstruction map of the ground quadruped robot can effectively improve the accuracy of three-dimensional scene reconstruction in complex environments and facilitate the construction of map scenes.
However, the existing three-dimensional reconstruction point cloud fusion methods mainly comprise fusion methods based on spatial voxels and point cloud fusion methods based on the clustering idea. A fusion method based on spatial voxels, such as the TSDF method, needs to divide the point cloud space into tiny voxels, and the degree of subdivision is related to the precision; when such a method is applied to scenes with a high precision requirement and a large spatial distribution of the point cloud, it consumes a large amount of memory, so the spatial-voxel-based point cloud fusion method is only suitable for point cloud fusion with a low precision requirement and for rapid reconstruction of a three-dimensional scene. The point cloud fusion method based on the clustering idea requires the point cloud and its normal directions as input at the same time: the overlapping area is located by clustering, the point set of the overlapping area is projected onto a fitting plane by the least square method, and the intersection point between the fitting plane and the straight line formed by a point of the overlapping area and its normal direction is selected as the fused data. When the data volume is large, the clustering and fitting processes are time-consuming and the fusion efficiency is low.
Therefore, the current point cloud fusion algorithms can hardly meet the requirements of large data volume and high precision in point cloud fusion for three-dimensional reconstruction of complex scenes. On this basis, the present application provides a point cloud fusion method and device, an electronic device and a storage medium.
In a first aspect, please refer to fig. 1, fig. 1 is a flowchart of a point cloud fusion method in some embodiments of the present application. The point cloud fusion method is used for fusing first point cloud data acquired by an unmanned aerial vehicle and second point cloud data acquired by a quadruped robot, and comprises the following steps:
s101, obtaining first point cloud data and second point cloud data.
S102, performing preliminary fusion on the first point cloud data and the second point cloud data based on a multi-head self-attention mechanism to obtain a preliminary fusion result.
S103, calculating the weight attenuation of the first point cloud data and the weight attenuation of the second point cloud data.
And S104, updating the fusion parameters of the first point cloud data and the fusion parameters of the second point cloud data respectively according to the weight attenuation of the first point cloud data and the weight attenuation of the second point cloud data in a back propagation mode until the update change rate of the first point cloud data fusion weight and the update change rate of the second point cloud data fusion weight are smaller than a preset threshold respectively, and stopping iteration to obtain the optimal fusion parameters.
And S105, fusing the first point cloud data and the second point cloud data according to the optimal fusion parameters to obtain a final fusion result.
According to the point cloud fusion method, preliminary fusion is carried out based on a multi-head self-attention mechanism, then the fusion parameters are updated according to weight attenuation back propagation in the preliminary fusion result to obtain the optimal fusion parameters, fusion is carried out according to the optimal fusion parameters to obtain the optimal fusion result, the calculation complexity in the point cloud fusion process is reduced, and the efficiency and the precision of point cloud fusion are improved.
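For illustration only, the following is a minimal, self-contained Python/NumPy sketch of the iteration scheme of steps S101-S105. The point clouds, the surrogate loss and its gradients, and the stopping tolerance are made-up placeholders rather than the patent's formulas; only the learning rate of 0.005 and the rule of stopping when the update change rate falls below a preset threshold follow the description given later in this application.

```python
import numpy as np

rng = np.random.default_rng(0)
cloud_a = rng.random((128, 3))   # stand-in for the UAV (first) point cloud
cloud_b = rng.random((96, 3))    # stand-in for the quadruped-robot (second) point cloud

d = 16
params = {n: rng.normal(0.5, 0.2, (3, d)).clip(0, 1) for n in ("Wk", "Wv", "Wq")}
lr, threshold = 0.005, 1e-4      # learning rate and preset change-rate threshold

def loss_and_grads(params, pts):
    """Toy surrogate loss; returns the loss and a 'weight attenuation' per parameter."""
    proj = {n: pts @ W for n, W in params.items()}
    loss = sum(float(np.mean(p ** 2)) for p in proj.values())
    grads = {n: 2.0 * pts.T @ proj[n] / proj[n].size for n in params}
    return loss, grads

for step in range(5000):
    _, grads_a = loss_and_grads(params, cloud_a)   # weight attenuation, first cloud
    _, grads_b = loss_and_grads(params, cloud_b)   # weight attenuation, second cloud
    change = 0.0
    for n in params:
        update = lr * (grads_a[n] + grads_b[n])    # back-propagated parameter update
        change = max(change, float(np.abs(update).max()))
        params[n] -= update
    if change < threshold:                         # update change rate below the threshold
        break                                      # params now hold the 'optimal' values
```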
In step S101, the acquired first point cloud data may be obtained by converting after being shot by a high-definition pan-tilt camera with depth information carried by the unmanned aerial vehicle, and the acquired second point cloud data may be obtained by converting after being shot by a high-definition RGB-D camera.
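For illustration, one common way of obtaining such point clouds from a depth-capable camera is pinhole back-projection of the depth image; the sketch below shows this conversion, where the camera intrinsics fx, fy, cx, cy, the image resolution and the depth values are example figures and not taken from the patent.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (in metres) into an N x 3 point cloud
    using a pinhole camera model with known intrinsics."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.reshape(-1)
    x = (u.reshape(-1) - cx) * z / fx
    y = (v.reshape(-1) - cy) * z / fy
    pts = np.stack([x, y, z], axis=1)
    return pts[z > 0]                      # drop pixels without a depth reading

# first cloud from the UAV's depth-capable pan-tilt camera, second from the robot's RGB-D camera
first_cloud = depth_to_point_cloud(np.random.rand(480, 640) * 10.0, 525.0, 525.0, 319.5, 239.5)
```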
In step S102, the first point cloud data and the second point cloud data are preliminarily fused based on the multi-head self-attention mechanism, wherein C denotes the preliminary fusion result output from the self-attention layer; W_1 and W_2 are both multi-head weighting matrices with the same row and column dimensions; P^A denotes the output of the first point cloud data based on the multi-head self-attention mechanism, and p^A_n denotes the n-th data in the first point cloud data; P^B denotes the output of the second point cloud data based on the multi-head self-attention mechanism, and p^B_n denotes the n-th data in the second point cloud data.
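The exact formula of this step is reproduced in the original publication only as an image. The sketch below is therefore a plausible reconstruction under standard multi-head self-attention: each cloud is passed through several scaled dot-product attention heads, and the two results P^A and P^B are combined with the weighting matrices W_1 and W_2. The softmax scoring, the head count and the additive combination C = W_1·P^A + W_2·P^B are assumptions, not the patent's stated formula.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_head(pts, Wq, Wk, Wv):
    """One scaled dot-product self-attention head over an N x 3 point cloud."""
    q, k, v = pts @ Wq, pts @ Wk, pts @ Wv
    return softmax(q @ k.T / np.sqrt(k.shape[1])) @ v

def multi_head(pts, heads):
    """Concatenate the outputs of several heads; `heads` is a list of (Wq, Wk, Wv) triples."""
    return np.concatenate([attention_head(pts, *h) for h in heads], axis=1)

rng = np.random.default_rng(0)
heads = [tuple(rng.normal(0.5, 0.2, (3, 8)).clip(0, 1) for _ in range(3)) for _ in range(4)]
P_A = multi_head(rng.random((100, 3)), heads)   # first point cloud (UAV), 100 points here
P_B = multi_head(rng.random((100, 3)), heads)   # second point cloud (robot), same size here
W1, W2 = rng.random((100, 100)), rng.random((100, 100))   # square weighting matrices
C = W1 @ P_A + W2 @ P_B                         # assumed combination into the preliminary result
```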
Specifically, in some embodiments, before the preliminary fusion of the first point cloud data and the second point cloud data based on the multi-head self-attention mechanism in step S102, the following steps are further included: and respectively carrying out random sampling on the first point cloud data and the second point cloud data according to a set proportion. The first point cloud data and the second point cloud data are respectively subjected to random sampling and then are subjected to preliminary fusion, so that the computational complexity in the fusion process is further reduced.
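As a small illustration of this preprocessing step (the proportion value below is an arbitrary example; the text does not specify one):

```python
import numpy as np

def random_sample(points, proportion, seed=0):
    """Randomly keep a set proportion of the points before preliminary fusion."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(points), size=int(len(points) * proportion), replace=False)
    return points[idx]

sampled = random_sample(np.random.rand(10000, 3), 0.25)   # keep 25% of the points
```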
Specifically, in some embodiments, step S103 includes the following sub-steps: s1031, solving a fusion matrix; s1032, calculating according to the fusion matrix to obtain a fusion matrix weight vector; s1033, calculating to obtain a fusion matrix output vector according to the fusion matrix and the fusion matrix weight vector; s1034, further calculating by utilizing a feedforward neural network based on the weight matrix and the bias to obtain output; s1035, obtaining a loss function according to output calculation; s1036, obtaining weight attenuation quantity by calculating partial derivatives according to the loss function.
Wherein, in step S1031, the fusion matrix is calculated by the following formula:
k_i = W^K · a_i, v_i = W^V · a_i, q_i = W^Q · a_i
wherein the fusion matrices comprise W^K, W^V and W^Q; W^K is the input key fusion matrix, W^V is the input value fusion matrix, and W^Q is the output fusion matrix; the initial values of W^K, W^V and W^Q are random number matrices in the range of 0-1 conforming to a Gaussian distribution; a_i represents the i-th point cloud data in the first point cloud data or the i-th point cloud data in the second point cloud data, i = 1, 2, …, N, where N is the total number of point cloud data in the first point cloud data or in the second point cloud data; k_i, v_i and q_i respectively represent the preliminary fusion results obtained by applying the input key fusion matrix W^K, the input value fusion matrix W^V and the output fusion matrix W^Q to the i-th point cloud data a_i in the first point cloud data or the i-th point cloud data a_i in the second point cloud data.
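A minimal sketch of this step, assuming that the matrices act on each point (or preliminary-fusion vector) a_i by ordinary matrix-vector multiplication and that the stated "Gaussian, range 0-1" initialisation can be realised by clipping normal samples; the projection dimension d is a placeholder.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16
# Fusion matrices W^K, W^V, W^Q, initialised with Gaussian values clipped to [0, 1]
Wk = rng.normal(0.5, 0.2, (3, d)).clip(0.0, 1.0)   # input key fusion matrix
Wv = rng.normal(0.5, 0.2, (3, d)).clip(0.0, 1.0)   # input value fusion matrix
Wq = rng.normal(0.5, 0.2, (3, d)).clip(0.0, 1.0)   # output fusion matrix

a = rng.random((200, 3))            # a_1 .. a_N stacked row-wise (N = 200 here)
k, v, q = a @ Wk, a @ Wv, a @ Wq    # row i gives k_i = W^K a_i, v_i = W^V a_i, q_i = W^Q a_i
```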
In step S1032, the fusion matrix weight vector is calculated from the fusion matrix, where α_i is the weight vector of the fusion matrix; k_i denotes the preliminary fusion result obtained through the input key fusion matrix W^K for the i-th point cloud data a_i in the first point cloud data or in the second point cloud data; and v_N denotes the preliminary fusion result obtained through the input value fusion matrix W^V for the N-th point cloud data a_N in the first point cloud data or in the second point cloud data.
In step S1033, the fusion matrix output vector is calculated from the fusion matrix and the fusion matrix weight vector, where b_i denotes the fusion matrix output vector; α_i,j, the j-th element of the fusion matrix weight vector α_i, denotes the further fusion result obtained by calculation from k_i and v_j; v_j denotes the preliminary fusion result obtained through the input value fusion matrix W^V for the j-th point cloud data a_j in the first point cloud data or in the second point cloud data; and q_j denotes the preliminary fusion result obtained through the output fusion matrix W^Q for the j-th point cloud data a_j in the first point cloud data or in the second point cloud data.
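Steps S1032 and S1033 state only which quantities enter each calculation (k_i and v_j for the weight α_i,j; α_i,j and q_j for the output vector b_i); the scoring and normalisation appear only as formula images in the original. The sketch below therefore assumes dot-product scores with a softmax normalisation followed by a weighted sum, which is one common instantiation and not necessarily the patent's exact formula.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
N, d = 200, 16
k, v, q = (rng.random((N, d)) for _ in range(3))   # projections from the previous step

alpha = softmax(k @ v.T / np.sqrt(d))   # alpha[i, j]: assumed weight of point j for point i
b = alpha @ q                           # b[i]: fusion matrix output vector for point i
```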
In step S1034, the output is calculated with the feedforward neural network, where y denotes the output, W_o denotes the output weight matrix of the output layer, and β denotes the bias.
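A sketch of an output layer consistent with this description; the layer widths and the choice of activation (ReLU here) are assumptions, since the formula and the dimensions of W_o and the bias appear only as images in the original.

```python
import numpy as np

rng = np.random.default_rng(0)
b = rng.random((200, 16))                            # fusion matrix output vectors
W_o = rng.normal(0.5, 0.2, (16, 3)).clip(0, 1)       # output weight matrix of the output layer
bias = np.zeros(3)                                   # bias term
y = np.maximum(b @ W_o + bias, 0.0)                  # feed-forward output (ReLU assumed)
```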
In step S1035, the loss function is calculated from the output, where L denotes the loss function and the accurate labels in the single-head point cloud, denoted ŷ, are obtained by calibration after manual selection.
In step S104, updating the fusion parameters of the first point cloud data and the fusion parameters of the second point cloud data according to the weight attenuation of the first point cloud data and the weight attenuation of the second point cloud data, respectively, and calculating according to the following formulas:
W^K' = W^K - η·∂L/∂W^K
W^V' = W^V - η·∂L/∂W^V
W^Q' = W^Q - η·∂L/∂W^Q
W_o' = W_o - η·∂L/∂W_o
wherein W^K, W^V, W^Q and W_o are the fusion parameters before updating; W^K is the input key fusion matrix, W^V is the input value fusion matrix, and W^Q is the output fusion matrix; W^K', W^V', W^Q' and W_o' are the correspondingly updated fusion parameters; ∂ denotes taking the partial derivative, and η denotes the learning rate, which takes the value 0.005.
By continuously back-propagating and updating the fusion parameters of the fusion matrices W^K, W^V, W^Q and W_o, the input data (the first point cloud data and the second point cloud data) are gradually converted into the same dimension.
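In code, the update rule above is plain gradient descent on each fusion parameter with the stated learning rate; a minimal sketch follows, in which the parameter values and "weight attenuation" entries are scalar placeholders rather than real matrices.

```python
def update_fusion_parameters(params, weight_attenuation, lr=0.005):
    """W <- W - lr * dL/dW for each fusion parameter (W^K, W^V, W^Q, W_o)."""
    return {name: W - lr * weight_attenuation[name] for name, W in params.items()}

params = {"Wk": 0.8, "Wv": 0.6, "Wq": 0.7, "Wo": 0.5}         # scalar placeholders
attenuation = {"Wk": 0.1, "Wv": -0.2, "Wq": 0.05, "Wo": 0.0}  # dL/dW from step S1036
params = update_fusion_parameters(params, attenuation)
```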
As can be seen from the above, the point cloud fusion method provided by the embodiment of the present application obtains the first point cloud data and the second point cloud data; performing preliminary fusion on the first point cloud data and the second point cloud data based on a multi-head self-attention mechanism to obtain a preliminary fusion result; calculating the weight attenuation of the first point cloud data and the weight attenuation of the second point cloud data; updating the fusion parameters of the first point cloud data and the fusion parameters of the second point cloud data respectively according to the weight attenuation of the first point cloud data and the weight attenuation of the second point cloud data in a back propagation mode until the update change rate of the fusion weight of the first point cloud data and the update change rate of the fusion weight of the second point cloud data are smaller than a preset threshold respectively, and stopping iteration to obtain optimal fusion parameters; fusing the first point cloud data and the second point cloud data according to the optimal fusion parameters to obtain a final fusion result; therefore, the first point cloud data acquired by the unmanned aerial vehicle and the second point cloud data acquired by the quadruped robot are fused, the calculation complexity in the point cloud fusion process can be reduced through a parallel calculation method, the point cloud fusion efficiency and precision are improved, in addition, the self-adaptive adjustment can be carried out on the condition that the input dimensions are not uniform, and the time complexity is reduced.
In a second aspect, please refer to fig. 2, fig. 2 is a point cloud fusion apparatus for fusing first point cloud data acquired by an unmanned aerial vehicle and second point cloud data acquired by a quadruped robot, the point cloud fusion apparatus being integrated in a ground station device in wireless communication connection with the unmanned aerial vehicle and the quadruped robot in the form of a computer program, the point cloud fusion apparatus comprising: the system comprises an acquisition module 201, a preliminary fusion module 202, a calculation module 203, an update module 204 and a fusion module 205.
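Purely as an illustration of how the listed modules could be composed in software (the class and method names below are hypothetical and not taken from the patent):

```python
class PointCloudFusionDevice:
    """Sketch of the module layout described above, running on a ground station
    that is wirelessly connected to the UAV and the quadruped robot."""

    def __init__(self, acquisition, preliminary_fusion, calculation, update, fusion):
        self.acquisition = acquisition                # obtains first/second point cloud data
        self.preliminary_fusion = preliminary_fusion  # multi-head self-attention fusion
        self.calculation = calculation                # weight attenuation of both clouds
        self.update = update                          # back-propagation until the change rate is small
        self.fusion = fusion                          # final fusion with the optimal parameters

    def run(self, uav_frames, robot_frames):
        first, second = self.acquisition(uav_frames, robot_frames)
        preliminary = self.preliminary_fusion(first, second)
        attenuation = self.calculation(preliminary)
        best_params = self.update(attenuation)
        return self.fusion(first, second, best_params)
```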
The acquiring module 201 is configured to acquire first point cloud data and second point cloud data; the acquired first point cloud data can be obtained by converting after being shot by a high-definition cloud platform camera with depth information carried by an unmanned aerial vehicle, and the acquired second point cloud data can be obtained by converting after being shot by a high-definition RGB-D camera.
The initial fusion module 202 is configured to perform initial fusion on the first point cloud data and the second point cloud data based on a multi-head self-attention mechanism, so as to obtain an initial fusion result. In this preliminary fusion, C denotes the preliminary fusion result output from the self-attention layer; W_1 and W_2 are both multi-head weighting matrices with the same row and column dimensions; P^A denotes the output of the first point cloud data based on the multi-head self-attention mechanism, and p^A_n denotes the n-th data in the first point cloud data; P^B denotes the output of the second point cloud data based on the multi-head self-attention mechanism, and p^B_n denotes the n-th data in the second point cloud data.
Specifically, in some embodiments, the point cloud fusion device further comprises a random sampling module. The random sampling module is configured to, before the preliminary fusion module 202 performs preliminary fusion on the first point cloud data and the second point cloud data based on the multi-head self-attention mechanism, respectively perform random sampling on the first point cloud data and the second point cloud data according to a set proportion, so as to reduce the computational complexity in the fusion process.
The calculating module 203 is configured to calculate a weight attenuation amount of the first point cloud data and a weight attenuation amount of the second point cloud data. And calculating the weight attenuation of the first point cloud data and the weight attenuation of the second point cloud data to ensure that each point coordinate of the output fusion point cloud depends on all input point coordinates, so that the fusion is more accurate.
Specifically, in some embodiments, the calculation module 203 includes: the first calculation unit is used for solving a fusion matrix; the second calculation unit is used for calculating to obtain a fusion matrix weight vector according to the fusion matrix; the third calculation unit is used for calculating to obtain a fusion matrix output vector according to the fusion matrix and the fusion matrix weight vector; the fourth calculation unit is used for further calculating by utilizing a feedforward neural network based on the weight matrix and the bias to obtain output; a fifth calculating unit, configured to calculate a loss function according to the output; and the sixth calculating unit is used for solving the partial derivative according to the loss function to obtain the weight attenuation.
Specifically, the first calculation unit calculates a fusion matrix of the preliminary fusion result by the following formula:
k_i = W^K · a_i, v_i = W^V · a_i, q_i = W^Q · a_i
wherein the fusion matrices comprise W^K, W^V and W^Q; W^K is the input key fusion matrix, W^V is the input value fusion matrix, and W^Q is the output fusion matrix; the initial values of W^K, W^V and W^Q are random number matrices in the range of 0-1 conforming to a Gaussian distribution; a_i denotes the i-th point cloud data in the first point cloud data or the i-th point cloud data in the second point cloud data, i = 1, 2, …, N, where N is the total number of point cloud data in the preliminary fusion result; k_i, v_i and q_i respectively denote the preliminary fusion results obtained by applying the input key fusion matrix W^K, the input value fusion matrix W^V and the output fusion matrix W^Q to the i-th point cloud data a_i in the first point cloud data or the i-th point cloud data a_i in the second point cloud data.
Specifically, the second calculation unit calculates the fusion matrix weight vector α_i from the fusion matrix, where k_i denotes the preliminary fusion result obtained through the input key fusion matrix W^K for the i-th point cloud data a_i in the first point cloud data or in the second point cloud data, and v_N denotes the preliminary fusion result obtained through the input value fusion matrix W^V for the N-th point cloud data a_N in the first point cloud data or in the second point cloud data.
Specifically, the third calculation unit calculates the fusion matrix output vector b_i from the fusion matrix and the fusion matrix weight vector, where α_i,j, the j-th element of the fusion matrix weight vector α_i, denotes the further fusion result obtained by calculation from k_i and v_j; v_j denotes the preliminary fusion result obtained through the input value fusion matrix W^V for the j-th point cloud data a_j in the first point cloud data or in the second point cloud data; and q_j denotes the preliminary fusion result obtained through the output fusion matrix W^Q for the j-th point cloud data a_j in the first point cloud data or in the second point cloud data.
Specifically, the fourth calculation unit calculates the output y with the feedforward neural network, where W_o denotes the output weight matrix of the output layer and β denotes the bias.
Specifically, the fifth calculation unit calculates the loss function from the output, where L denotes the loss function and the accurate labels in the single-head point cloud, denoted ŷ, are obtained by calibration after manual selection.
The updating module 204 is configured to update the fusion parameters of the first point cloud data and the fusion parameters of the second point cloud data according to the weight attenuation of the first point cloud data and the weight attenuation of the second point cloud data in a back propagation manner, respectively, until the update change rate of the fusion weight of the first point cloud data and the update change rate of the fusion weight of the second point cloud data are respectively smaller than a preset threshold, and stop the iteration to obtain the optimal fusion parameters.
Specifically, updating the fusion parameters of the first point cloud data and the fusion parameters of the second point cloud data respectively according to the weight attenuation of the first point cloud data and the weight attenuation of the second point cloud data through the following formulas:
W^K' = W^K - η·∂L/∂W^K
W^V' = W^V - η·∂L/∂W^V
W^Q' = W^Q - η·∂L/∂W^Q
W_o' = W_o - η·∂L/∂W_o
wherein W^K, W^V, W^Q and W_o are the fusion parameters before updating; W^K is the input key fusion matrix, W^V is the input value fusion matrix, and W^Q is the output fusion matrix; W^K', W^V', W^Q' and W_o' are the correspondingly updated fusion parameters; ∂ denotes taking the partial derivative, and η denotes the learning rate, which takes the value 0.005.
By continuously back-propagating and updating the fusion parameters of the fusion matrices W^K, W^V, W^Q and W_o, the input data (the first point cloud data and the second point cloud data) can be gradually converted into the same dimension.
As can be seen from the above, the point cloud fusion device provided in the embodiment of the present application obtains the first point cloud data and the second point cloud data; performing preliminary fusion on the first point cloud data and the second point cloud data based on a multi-head self-attention mechanism to obtain a preliminary fusion result; calculating the weight attenuation of the first point cloud data and the weight attenuation of the second point cloud data; updating the fusion parameters of the first point cloud data and the fusion parameters of the second point cloud data respectively according to the weight attenuation of the first point cloud data and the weight attenuation of the second point cloud data in a back propagation mode until the update change rate of the fusion weight of the first point cloud data and the update change rate of the fusion weight of the second point cloud data are smaller than a preset threshold respectively, and stopping iteration to obtain optimal fusion parameters; fusing the first point cloud data and the second point cloud data according to the optimal fusion parameters to obtain a final fusion result; therefore, the integration of the first point cloud data acquired by the unmanned aerial vehicle and the second point cloud data acquired by the quadruped robot is realized, the calculation complexity in the point cloud integration process can be reduced by a parallel calculation method, the point cloud integration efficiency and precision are improved, in addition, the self-adaptive adjustment can be carried out on the condition of non-uniform input dimensions, and the time complexity is reduced.
In a third aspect, referring to fig. 3, fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present application. The present application provides an electronic device comprising a processor 301 and a memory 302, the processor 301 and the memory 302 being interconnected and communicating with each other via a communication bus 303 and/or another form of connection mechanism (not shown), and the memory 302 storing a computer program executable by the processor 301. When the electronic device is running, the processor 301 executes the computer program to perform the method in any of the optional implementations of the above embodiments, so as to realize the following functions: acquiring first point cloud data and second point cloud data; performing preliminary fusion on the first point cloud data and the second point cloud data based on a multi-head self-attention mechanism to obtain a preliminary fusion result; calculating the weight attenuation of the first point cloud data and the weight attenuation of the second point cloud data; updating the fusion parameters of the first point cloud data and the fusion parameters of the second point cloud data respectively according to the weight attenuation of the first point cloud data and the weight attenuation of the second point cloud data in a back propagation mode until the update change rate of the fusion weight of the first point cloud data and the update change rate of the fusion weight of the second point cloud data are respectively smaller than a preset threshold, and stopping iteration to obtain optimal fusion parameters; and fusing the first point cloud data and the second point cloud data according to the optimal fusion parameters to obtain a final fusion result.
In a fourth aspect, the present application provides a storage medium, on which a computer program is stored, and when the computer program is executed by a processor, the computer program performs the method in any optional implementation manner of the foregoing embodiment to implement the following functions: acquiring first point cloud data and second point cloud data; performing preliminary fusion on the first point cloud data and the second point cloud data based on a multi-head self-attention mechanism to obtain a preliminary fusion result; calculating the weight attenuation of the first point cloud data and the weight attenuation of the second point cloud data; updating the fusion parameters of the first point cloud data and the fusion parameters of the second point cloud data respectively according to the weight attenuation of the first point cloud data and the weight attenuation of the second point cloud data in a back propagation mode until the update change rate of the fusion weight of the first point cloud data and the update change rate of the fusion weight of the second point cloud data are smaller than a preset threshold respectively, and stopping iteration to obtain optimal fusion parameters; and fusing the first point cloud data and the second point cloud data according to the optimal fusion parameters to obtain a final fusion result. The storage medium may be implemented by any type of volatile or nonvolatile storage device or combination thereof, such as a Static Random Access Memory (SRAM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), an Erasable Programmable Read-Only Memory (EPROM), a Programmable Read-Only Memory (PROM), a Read-Only Memory (ROM), a magnetic Memory, a flash Memory, a magnetic disk, or an optical disk.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described apparatus embodiments are merely illustrative, and for example, the division of the units into only one type of logical function may be implemented in other ways, and for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
In addition, units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
Furthermore, the functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
In this document, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions.
The above description is only an example of the present application and is not intended to limit the scope of the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (10)

1. A point cloud fusion method for fusing first point cloud data acquired by an unmanned aerial vehicle and second point cloud data acquired by a quadruped robot, characterized by comprising the following steps:
acquiring the first point cloud data and the second point cloud data;
performing preliminary fusion on the first point cloud data and the second point cloud data based on a multi-head self-attention mechanism to obtain a preliminary fusion result;
calculating the weight attenuation of the first point cloud data and the weight attenuation of the second point cloud data;
updating the fusion parameters of the first point cloud data and the fusion parameters of the second point cloud data respectively according to the weight attenuation of the first point cloud data and the weight attenuation of the second point cloud data in a back propagation mode until the update change rate of the fusion weight of the first point cloud data and the update change rate of the fusion weight of the second point cloud data are respectively smaller than a preset threshold value, and stopping iteration to obtain optimal fusion parameters;
and fusing the first point cloud data and the second point cloud data according to the optimal fusion parameters to obtain a final fusion result.
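For orientation only (this sketch is not part of the claims), the iteration of claim 1 can be illustrated in Python as follows. The quadratic surrogate loss, the learning rate, the maximum iteration count and the shared target used as a stand-in for the preliminary fusion result are assumptions introduced purely for readability; the actual attenuation quantities are those defined in claims 3 to 6.

    import numpy as np

    rng = np.random.default_rng(0)
    pc1 = rng.normal(size=(200, 3))            # stand-in for the UAV point cloud
    pc2 = rng.normal(size=(200, 3))            # stand-in for the quadruped-robot point cloud
    params1 = rng.normal(size=(3, 3))          # fusion parameters of the first point cloud (assumed shape)
    params2 = rng.normal(size=(3, 3))          # fusion parameters of the second point cloud (assumed shape)
    lr, threshold = 1e-2, 1e-4                 # assumed learning rate and preset threshold

    target = (pc1 + pc2) / 2                   # toy stand-in for the preliminary fusion result

    def weight_attenuation(params, pc, target):
        # Toy stand-in for the claim-3 pipeline: gradient of a quadratic mismatch
        # between the transformed cloud and the shared target.
        diff = pc @ params - target
        return pc.T @ diff / len(pc)

    for _ in range(10000):
        g1 = weight_attenuation(params1, pc1, target)
        g2 = weight_attenuation(params2, pc2, target)
        new1, new2 = params1 - lr * g1, params2 - lr * g2   # back-propagation style update
        rate1 = np.linalg.norm(new1 - params1) / (np.linalg.norm(params1) + 1e-12)
        rate2 = np.linalg.norm(new2 - params2) / (np.linalg.norm(params2) + 1e-12)
        params1, params2 = new1, new2
        if rate1 < threshold and rate2 < threshold:         # both change rates below the preset threshold
            break

    fused = np.vstack([pc1 @ params1, pc2 @ params2])       # final fusion with the converged parameters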
2. The point cloud fusion method of claim 1, wherein the preliminary fusion of the first point cloud data and the second point cloud data based on a multi-head self-attention mechanism comprises the following steps:
randomly sampling the first point cloud data and the second point cloud data according to a set proportion.
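A minimal sketch of the random sampling at a set proportion described in claim 2; the proportion value, sampling without replacement and the function name are illustrative assumptions.

    import numpy as np

    def sample_clouds(pc1, pc2, proportion=0.3, seed=0):
        # Randomly keep the same set proportion of points from each cloud.
        rng = np.random.default_rng(seed)
        idx1 = rng.choice(len(pc1), size=int(len(pc1) * proportion), replace=False)
        idx2 = rng.choice(len(pc2), size=int(len(pc2) * proportion), replace=False)
        return pc1[idx1], pc2[idx2]

    pc1 = np.random.default_rng(1).normal(size=(1000, 3))   # stand-in for the UAV point cloud
    pc2 = np.random.default_rng(2).normal(size=(800, 3))    # stand-in for the quadruped-robot point cloud
    sampled1, sampled2 = sample_clouds(pc1, pc2, proportion=0.3)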
3. The point cloud fusion method of claim 1, wherein calculating the weight attenuation of the first point cloud data and calculating the weight attenuation of the second point cloud data each comprise:
solving a fusion matrix;
calculating a fusion matrix weight vector according to the fusion matrix;
calculating a fusion matrix output vector according to the fusion matrix and the fusion matrix weight vector;
calculating an output by using a feedforward neural network based on a weight matrix and a bias;
calculating a loss function according to the output;
and obtaining the weight attenuation amount by taking partial derivatives of the loss function.
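Claims 4 to 6 below spell out the first three steps of this pipeline; the last three steps (feedforward network, loss, partial derivatives) might look like the sketch below. The single linear layer, the sigmoid activation, the mean-squared loss and all dimensions are assumptions; the partial derivatives d_W and d_bias play the role of the weight attenuation amount.

    import numpy as np

    rng = np.random.default_rng(0)
    b = rng.normal(size=(128, 64))            # fusion matrix output vectors (one row per point, assumed sizes)
    target = rng.normal(size=(128, 16))       # stand-in supervision signal

    W = 0.1 * rng.normal(size=(64, 16))       # feedforward weight matrix
    bias = np.zeros(16)                       # feedforward bias

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    out = sigmoid(b @ W + bias)               # output of the feedforward neural network
    loss = np.mean((out - target) ** 2)       # assumed mean-squared loss

    d_out = 2.0 * (out - target) / out.size   # dLoss/dOut
    d_pre = d_out * out * (1.0 - out)         # chain rule through the sigmoid
    d_W = b.T @ d_pre                         # partial derivative w.r.t. the weight matrix
    d_bias = d_pre.sum(axis=0)                # partial derivative w.r.t. the bias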
4. The point cloud fusion method of claim 3, wherein the fusion matrix is solved by the following formula:
k_i = W^K a_i,  v_i = W^V a_i,  q_i = W^Q a_i
wherein the fusion matrix comprises W^K, W^V and W^Q; W^K is the input key fusion matrix, W^V is the input value fusion matrix, and W^Q is the output fusion matrix; the initial values of W^K, W^V and W^Q are random number matrices in the 0-1 range following a Gaussian distribution; a_i represents the i-th point cloud data in the first point cloud data or the i-th point cloud data in the second point cloud data, i = 1, 2, ..., n, where n is the total number of point cloud data in the first point cloud data or in the second point cloud data; and k_i, v_i and q_i respectively represent the preliminary fusion results obtained by applying the input key fusion matrix W^K, the input value fusion matrix W^V and the output fusion matrix W^Q to the i-th point cloud data a_i in the first point cloud data or the i-th point cloud data a_i in the second point cloud data.
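An illustrative reading of claim 4 in Python: each point's preliminary fusion result a_i is multiplied by the three fusion matrices to give k_i, v_i and q_i. The feature dimension, the number of points and the clipping used to keep the Gaussian initial values inside the 0-1 range are assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    n, d = 128, 64                              # assumed number of points and feature dimension
    a = rng.normal(size=(n, d))                 # preliminary fusion results a_i, i = 1..n

    def init_fusion_matrix(shape):
        # Gaussian random values restricted to the 0-1 range (clipping is an assumed realisation).
        return np.clip(rng.normal(loc=0.5, scale=0.15, size=shape), 0.0, 1.0)

    W_K = init_fusion_matrix((d, d))            # input key fusion matrix
    W_V = init_fusion_matrix((d, d))            # input value fusion matrix
    W_Q = init_fusion_matrix((d, d))            # output fusion matrix

    K = a @ W_K.T                               # k_i = W^K a_i
    V = a @ W_V.T                               # v_i = W^V a_i
    Q = a @ W_Q.T                               # q_i = W^Q a_i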
5. The point cloud fusion method of claim 3, wherein the fusion matrix weight vector is calculated from the fusion matrix by the following formula:
[equation image]
wherein α represents the fusion matrix weight vector; k_i represents the preliminary fusion result obtained by applying the input key fusion matrix W^K to the i-th point cloud data a_i in the first point cloud data or the i-th point cloud data a_i in the second point cloud data; and v_n represents the preliminary fusion result obtained by applying the input value fusion matrix W^V to the n-th point cloud data a_n in the first point cloud data or the n-th point cloud data a_n in the second point cloud data.
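The combining formula of claim 5 appears only as an equation image in the source; the sketch below assumes a scaled dot product between each k_i and v_n, in the spirit of standard self-attention, purely for illustration.

    import numpy as np

    rng = np.random.default_rng(0)
    n, d = 128, 64
    K = rng.normal(size=(n, d))                 # k_i vectors from claim 4
    V = rng.normal(size=(n, d))                 # v_i vectors from claim 4

    alpha = (K @ V[-1]) / np.sqrt(d)            # assumed weight vector: similarity of every k_i with v_n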
6. The point cloud fusion method of claim 3, wherein the fusion matrix output vector is calculated from the fusion matrix and the fusion matrix weight vector by the following formulas:
[equation images]
wherein b represents the fusion matrix output vector; α_m is the m-th element of the fusion matrix weight vector α and represents the further fusion result calculated from k_i and v_m; k_i represents the preliminary fusion result obtained by applying the input key fusion matrix W^K to the i-th point cloud data a_i in the first point cloud data or the i-th point cloud data a_i in the second point cloud data; v_m represents the preliminary fusion result obtained by applying the input value fusion matrix W^V to the m-th point cloud data a_m in the first point cloud data or the m-th point cloud data a_m in the second point cloud data; and q_m represents the preliminary fusion result obtained by applying the output fusion matrix W^Q to the m-th point cloud data a_m in the first point cloud data or the m-th point cloud data a_m in the second point cloud data.
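Both formulas of claim 6 are likewise equation images in the source; the sketch below is consistent with the stated dependencies (α_m built from k_i and v_m, the output vector built from α_m and q_m), with the dot product and the softmax normalisation being assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    n, d = 128, 64
    K = rng.normal(size=(n, d))                 # k_i
    V = rng.normal(size=(n, d))                 # v_m
    Q = rng.normal(size=(n, d))                 # q_m

    i = 0                                       # point whose output vector is computed
    alpha = (K[i] @ V.T) / np.sqrt(d)           # alpha_m from k_i and v_m (assumed dot product)
    alpha = np.exp(alpha - alpha.max())
    alpha = alpha / alpha.sum()                 # assumed softmax normalisation
    b_i = alpha @ Q                             # fusion matrix output vector built from alpha_m and q_m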
7. The point cloud fusion method of claim 1, wherein the preliminary fusion of the first point cloud data and the second point cloud data based on the multi-head self-attention mechanism is calculated by the following formula:
[equation image]
wherein Y is the preliminary fusion result output by the self-attention layer; W_1 and W_2 are both multi-head weighting matrices having the same numbers of rows and columns; O_1 represents the output of the first point cloud data based on the multi-head self-attention mechanism, x_1^j representing the j-th data in the first point cloud data; and O_2 represents the output of the second point cloud data based on the multi-head self-attention mechanism, x_2^j representing the j-th data in the second point cloud data.
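The combining formula of claim 7 is an equation image in the source; the additive weighting below, with W_1 and W_2 of identical shape as stated, is an assumed reading shown for illustration only.

    import numpy as np

    rng = np.random.default_rng(0)
    n, d = 128, 64
    O1 = rng.normal(size=(n, d))                # multi-head self-attention output of the first point cloud
    O2 = rng.normal(size=(n, d))                # multi-head self-attention output of the second point cloud
    W1 = rng.normal(size=(d, d))                # multi-head weighting matrix
    W2 = rng.normal(size=(d, d))                # multi-head weighting matrix with the same shape as W1
    Y = O1 @ W1 + O2 @ W2                       # assumed preliminary fusion result of the self-attention layer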
8. A point cloud fusion device for fusing first point cloud data acquired by an unmanned aerial vehicle and second point cloud data acquired by a quadruped robot, characterized in that the device comprises:
the acquisition module is used for acquiring the first point cloud data and the second point cloud data;
the preliminary fusion module is used for performing preliminary fusion on the first point cloud data and the second point cloud data based on a multi-head self-attention mechanism to obtain a preliminary fusion result;
the computing module is used for computing the weight attenuation of the first point cloud data and the weight attenuation of the second point cloud data;
the updating module is used for respectively updating the fusion parameters of the first point cloud data and the fusion parameters of the second point cloud data according to the weight attenuation of the first point cloud data and the weight attenuation of the second point cloud data in a back propagation mode until the update change rate of the fusion weight of the first point cloud data and the update change rate of the fusion weight of the second point cloud data are respectively smaller than a preset threshold value, and stopping iteration to obtain the optimal fusion parameters;
and the fusion module is used for fusing the first point cloud data and the second point cloud data according to the optimal fusion parameters to obtain a final fusion result.
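A possible module decomposition matching claim 8; the method bodies are placeholders, since the claim fixes only the responsibilities of the five modules.

    import numpy as np

    class PointCloudFusionDevice:
        # Hypothetical skeleton mirroring the five modules of claim 8.

        def acquire(self):
            # Acquisition module: obtain the UAV and quadruped-robot point clouds (stub data).
            rng = np.random.default_rng(0)
            return rng.normal(size=(100, 3)), rng.normal(size=(100, 3))

        def preliminary_fusion(self, pc1, pc2):
            # Preliminary fusion module: multi-head self-attention (stub).
            return np.vstack([pc1, pc2])

        def weight_attenuation(self, pc, prelim):
            # Calculation module: weight attenuation of one point cloud (stub).
            return np.zeros((3, 3))

        def update(self, pc1, pc2):
            # Updating module: back-propagation until both change rates fall below a threshold (stub).
            return np.eye(3), np.eye(3)

        def fuse(self, pc1, pc2, params1, params2):
            # Fusion module: final fusion with the optimal fusion parameters.
            return np.vstack([pc1 @ params1, pc2 @ params2])

    device = PointCloudFusionDevice()
    pc1, pc2 = device.acquire()
    params1, params2 = device.update(pc1, pc2)
    final = device.fuse(pc1, pc2, params1, params2)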
9. An electronic device comprising a processor and a memory, the memory storing computer readable instructions which, when executed by the processor, perform the steps of the method of any one of claims 1-7.
10. A storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, performs the steps of the method according to any one of claims 1-7.
CN202210426803.8A 2022-04-22 2022-04-22 Point cloud fusion method and device, electronic equipment and storage medium Active CN114549608B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210426803.8A CN114549608B (en) 2022-04-22 2022-04-22 Point cloud fusion method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210426803.8A CN114549608B (en) 2022-04-22 2022-04-22 Point cloud fusion method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN114549608A (en) 2022-05-27
CN114549608B (en) 2022-10-18

Family

ID=81666948

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210426803.8A Active CN114549608B (en) 2022-04-22 2022-04-22 Point cloud fusion method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114549608B (en)

Family Cites Families (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11556777B2 (en) * 2017-11-15 2023-01-17 Uatc, Llc Continuous convolution and fusion in neural networks
US11221413B2 (en) * 2018-03-14 2022-01-11 Uatc, Llc Three-dimensional object detection
CN109919893B (en) * 2019-03-20 2021-04-23 湖北亿咖通科技有限公司 Point cloud correction method and device and readable storage medium
CN109978165A (en) * 2019-04-04 2019-07-05 重庆大学 A kind of generation confrontation network method merged from attention mechanism
CN109978808B (en) * 2019-04-25 2022-02-01 北京迈格威科技有限公司 Method and device for image fusion and electronic equipment
CN112184603B (en) * 2019-07-04 2022-06-24 浙江商汤科技开发有限公司 Point cloud fusion method and device, electronic equipment and computer storage medium
US20210122045A1 (en) * 2019-10-24 2021-04-29 Nvidia Corporation In-hand object pose tracking
US11928873B2 (en) * 2020-03-04 2024-03-12 Magic Leap, Inc. Systems and methods for efficient floorplan generation from 3D scans of indoor scenes
US20210374345A1 (en) * 2020-06-01 2021-12-02 Google Llc Processing large-scale textual inputs using neural networks
US11941875B2 (en) * 2020-07-27 2024-03-26 Waymo Llc Processing perspective view range images using neural networks
CN111860666A (en) * 2020-07-27 2020-10-30 湖南工程学院 3D target detection method based on point cloud and image self-attention mechanism fusion
CN111950467B (en) * 2020-08-14 2021-06-25 清华大学 Fusion network lane line detection method based on attention mechanism and terminal equipment
CN113487739A (en) * 2021-05-19 2021-10-08 清华大学 Three-dimensional reconstruction method and device, electronic equipment and storage medium
CN113269147B (en) * 2021-06-24 2022-07-05 浙江海康智联科技有限公司 Three-dimensional detection method and system based on space and shape, and storage and processing device
CN113345106A (en) * 2021-06-24 2021-09-03 西南大学 Three-dimensional point cloud analysis method and system based on multi-scale multi-level converter
CN113658100A (en) * 2021-07-16 2021-11-16 上海高德威智能交通系统有限公司 Three-dimensional target object detection method and device, electronic equipment and storage medium
CN113989340A (en) * 2021-10-29 2022-01-28 天津大学 Point cloud registration method based on distribution
CN114004871B (en) * 2022-01-04 2022-04-15 山东大学 Point cloud registration method and system based on point cloud completion
CN114066960B (en) * 2022-01-13 2022-04-22 季华实验室 Three-dimensional reconstruction method, point cloud fusion method, device, equipment and storage medium
CN114078151B (en) * 2022-01-19 2022-04-22 季华实验室 Point cloud fusion method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN114549608A (en) 2022-05-27

Similar Documents

Publication Publication Date Title
CN109658445A (en) Network training method, increment build drawing method, localization method, device and equipment
CN110383340A (en) Path planning is carried out using sparse volume data
CN108204814B (en) Unmanned aerial vehicle three-dimensional scene path navigation platform and three-dimensional improved path planning method thereof
US11302105B2 (en) Grid map obstacle detection method fusing probability and height information
CN111750857B (en) Route generation method, route generation device, terminal and storage medium
WO2016029348A1 (en) Measuring traffic speed in a road network
CN110347971A (en) Particle filter method, device and storage medium based on TSK fuzzy model
CN114066960B (en) Three-dimensional reconstruction method, point cloud fusion method, device, equipment and storage medium
US20240037844A1 (en) 3d structure engine-based computation platform
CN110181508A (en) Underwater robot three-dimensional Route planner and system
CN113916130B (en) Building position measuring method based on least square method
Gan et al. Research on role modeling and behavior control of virtual reality animation interactive system in Internet of Things
Yuan et al. Feature preserving multiresolution subdivision and simplification of point clouds: A conformal geometric algebra approach
CN116518960B (en) Road network updating method, device, electronic equipment and storage medium
CN112241676A (en) Method for automatically identifying terrain sundries
CN116720632B (en) Engineering construction intelligent management method and system based on GIS and BIM
CN114549608B (en) Point cloud fusion method and device, electronic equipment and storage medium
CN115393542B (en) Generalized building three-dimensional geometric reconstruction method
CN116482711A (en) Local static environment sensing method and device for autonomous selection of landing zone
WO2023164933A1 (en) Building modeling method and related apparatus
CN115797256A (en) Unmanned aerial vehicle-based tunnel rock mass structural plane information processing method and device
CN115375836A (en) Point cloud fusion three-dimensional reconstruction method and system based on multivariate confidence filtering
Hou et al. Poisson disk sampling in geodesic metric for DEM simplification
CN107247833A (en) A kind of CAE mass data light weight methods under cloud computing
CN114511571A (en) Point cloud data semantic segmentation method and system and related components

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant