CN114549608A - Point cloud fusion method and device, electronic equipment and storage medium - Google Patents

Point cloud fusion method and device, electronic equipment and storage medium

Info

Publication number
CN114549608A
Authority
CN
China
Prior art keywords
point cloud
cloud data
fusion
matrix
weight
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210426803.8A
Other languages
Chinese (zh)
Other versions
CN114549608B (en)
Inventor
邓涛
张晟东
李志建
古家威
霍震
陈海龙
黄秀韦
何昊名
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to CN202210426803.8A priority Critical patent/CN114549608B/en
Publication of CN114549608A publication Critical patent/CN114549608A/en
Application granted granted Critical
Publication of CN114549608B publication Critical patent/CN114549608B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/50 Depth or shape recovery
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/048 Activation functions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10028 Range image; Depth image; 3D point clouds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20024 Filtering details

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Laser Beam Processing (AREA)
  • Numerical Control (AREA)

Abstract

The application belongs to the technical field of data processing, and provides a point cloud fusion method, a point cloud fusion device, electronic equipment and a storage medium, wherein the point cloud fusion method comprises the following steps: acquiring first point cloud data and second point cloud data; performing preliminary fusion on the first point cloud data and the second point cloud data based on a multi-head self-attention mechanism to obtain a preliminary fusion result; calculating the weight attenuation of the first point cloud data and the weight attenuation of the second point cloud data; and updating the fusion parameters of the first point cloud data and the fusion parameters of the second point cloud data respectively according to the weight attenuation of the first point cloud data and the weight attenuation of the second point cloud data in a back propagation mode until the update change rate of the first point cloud data fusion weight and the update change rate of the second point cloud data fusion weight are respectively smaller than a preset threshold value, and stopping iteration to obtain the optimal fusion parameters. The point cloud fusion method, the point cloud fusion device, the electronic equipment and the storage medium reduce the calculation complexity in the fusion process and improve the efficiency and the precision of point cloud fusion.

Description

Point cloud fusion method and device, electronic equipment and storage medium
Technical Field
The application relates to the technical field of data processing, in particular to a point cloud fusion method and device, electronic equipment and a storage medium.
Background
Three-dimensional reconstruction refers to the establishment of a mathematical model suitable for computer representation and processing of a three-dimensional object, is the basis for processing, operating and analyzing the properties of the three-dimensional object in a computer environment, and is also a key technology for establishing virtual reality expressing an objective world in a computer.
At present, the existing three-dimensional reconstruction point cloud fusion method mainly adopts a fusion method based on space voxels and a point cloud fusion method based on a clustering idea, but the fusion method based on the space voxels and the point cloud fusion method based on the clustering idea have the problems of low fusion efficiency and low precision.
In view of the above problems, no effective technical solution exists at present.
Disclosure of Invention
The application aims to provide a point cloud fusion method, a point cloud fusion device, an electronic device and a storage medium, which can reduce the calculation complexity in the point cloud fusion process and improve the efficiency and the precision of the point cloud fusion.
In a first aspect, the present application provides a point cloud fusion method for fusing first point cloud data acquired by an unmanned aerial vehicle and second point cloud data acquired by a quadruped robot, the method including the following steps:
acquiring the first point cloud data and the second point cloud data;
performing preliminary fusion on the first point cloud data and the second point cloud data based on a multi-head self-attention mechanism to obtain a preliminary fusion result;
calculating the weight attenuation of the first point cloud data and the weight attenuation of the second point cloud data;
updating the fusion parameters of the first point cloud data and the fusion parameters of the second point cloud data respectively according to the weight attenuation of the first point cloud data and the weight attenuation of the second point cloud data in a back propagation mode until the update change rate of the fusion weight of the first point cloud data and the update change rate of the fusion weight of the second point cloud data are respectively smaller than a preset threshold value, and stopping iteration to obtain optimal fusion parameters;
and fusing the first point cloud data and the second point cloud data according to the optimal fusion parameters to obtain a final fusion result.
According to the point cloud fusion method, the computation complexity in the point cloud fusion process can be reduced through a parallel computation method, the efficiency and the precision of point cloud fusion are improved, moreover, self-adaptive adjustment can be carried out on the condition that the input dimensions are not uniform, and the time complexity is reduced.
Optionally, in the point cloud fusion method described in the present application, before performing preliminary fusion on the first point cloud data and the second point cloud data based on the multi-head self-attention mechanism, the method includes the following step:
randomly sampling the first point cloud data and the second point cloud data according to a set proportion.
According to the method and the device, the first point cloud data and the second point cloud data are subjected to random sampling and then are subjected to preliminary fusion, and the calculation complexity in the fusion process is further reduced.
Optionally, in the point cloud fusion method described in the present application, the calculating the weight attenuation amount of the first point cloud data and the weight attenuation amount of the second point cloud data each includes the following steps:
solving a fusion matrix;
calculating to obtain a fusion matrix weight vector according to the fusion matrix;
calculating to obtain a fusion matrix output vector according to the fusion matrix and the fusion matrix weight vector;
calculating by using a feedforward neural network based on the weight matrix and the bias to obtain an output;
calculating according to the output to obtain a loss function;
and obtaining the weight attenuation amount by solving the partial derivative according to the loss function.
According to the method, the weight attenuation of the preliminary fusion result is calculated through the steps, so that each point coordinate of the output fusion point cloud depends on all input point coordinates, and the fusion is more accurate.
Optionally, in the point cloud fusion method described in the present application, the solving of the fusion matrix is calculated by the following formula:
k_i = W^K a_i,  v_i = W^V a_i,  q_i = W^Q a_i

wherein the fusion matrices comprise W^K, W^V and W^Q; W^K is the input key fusion matrix, W^V is the input value fusion matrix, and W^Q is the output fusion matrix; the initial values of W^K, W^V and W^Q are random number matrices in the range 0-1 that follow a Gaussian distribution; a_i represents the i-th point cloud data in the first point cloud data or the i-th point cloud data in the second point cloud data, i = 1, 2, ..., n, and n is the total number of point cloud data in the first point cloud data or in the second point cloud data; k_i, v_i and q_i respectively represent the results obtained by applying the input key fusion matrix W^K, the input value fusion matrix W^V and the output fusion matrix W^Q to the preliminary fusion result of the i-th point cloud data a_i in the first point cloud data or in the second point cloud data.
Optionally, in the point cloud fusion method described in the present application, the fusion matrix weight vector calculated according to the fusion matrix is calculated by the following formula:
α_{i,j} = k_i^T v_j,  j = 1, 2, ..., n

wherein α_i = (α_{i,1}, α_{i,2}, ..., α_{i,n}) represents the fusion matrix weight vector; k_i represents the result obtained by applying the input key fusion matrix W^K to the preliminary fusion result of the i-th point cloud data a_i in the first point cloud data or in the second point cloud data; and v_j represents the result obtained by applying the input value fusion matrix W^V to the preliminary fusion result of the j-th point cloud data a_j in the first point cloud data or in the second point cloud data.
Optionally, in the point cloud fusion method described in the present application, the fusion matrix output vector obtained by calculating according to the fusion matrix and the fusion matrix weight vector is calculated by the following formula:
α'_{i,j} = exp(α_{i,j}) / Σ_{j'=1}^{n} exp(α_{i,j'})

b_i = Σ_{j=1}^{n} α'_{i,j} q_j

wherein b_i represents the fusion matrix output vector; α_{i,j} is the j-th element of the fusion matrix weight vector α_i, i.e. the further fusion result obtained by calculation from k_i and v_j, and α'_{i,j} is its normalized value; v_j represents the result obtained by applying the input value fusion matrix W^V to the preliminary fusion result of the j-th point cloud data a_j in the first point cloud data or in the second point cloud data; and q_j represents the result obtained by applying the output fusion matrix W^Q to the preliminary fusion result of the j-th point cloud data a_j in the first point cloud data or in the second point cloud data.
Optionally, in the point cloud fusion method described in the present application, the preliminary fusion of the first point cloud data and the second point cloud data based on the multi-head self-attention mechanism is calculated by the following formula:
S = W_x X + W_z Z

wherein S is the preliminary fusion result output by the self-attention layer; W_x and W_z are both multi-head weighting matrices with the same number of rows and columns; X represents the output of the first point cloud data based on the multi-head self-attention mechanism, X = (x_1, x_2, ..., x_m, ...), where x_m represents the m-th data in that output; and Z represents the output of the second point cloud data based on the multi-head self-attention mechanism, Z = (z_1, z_2, ..., z_m, ...), where z_m represents the m-th data in that output.
In a second aspect, the present application further provides a point cloud fusion device for fusing first point cloud data collected by an unmanned aerial vehicle and second point cloud data collected by a quadruped robot, the device comprising:
the acquisition module is used for acquiring the first point cloud data and the second point cloud data;
the initial fusion module is used for performing initial fusion on the first point cloud data and the second point cloud data based on a multi-head self-attention mechanism to obtain an initial fusion result;
the computing module is used for computing the weight attenuation of the first point cloud data and the weight attenuation of the second point cloud data;
the updating module is used for updating the fusion parameters of the first point cloud data and the fusion parameters of the second point cloud data respectively according to the weight attenuation of the first point cloud data and the weight attenuation of the second point cloud data in a back propagation mode until the update change rate of the fusion weight of the first point cloud data and the update change rate of the fusion weight of the second point cloud data are respectively smaller than a preset threshold value, and stopping iteration to obtain the optimal fusion parameters;
and the fusion module is used for fusing the first point cloud data and the second point cloud data according to the optimal fusion parameters to obtain a final fusion result.
The point cloud fusion device provided by the application can reduce the computation complexity in the point cloud fusion process through a parallel computation method, improve the efficiency and the precision of point cloud fusion, and can also perform self-adaptive adjustment on the condition that the input dimensions are not uniform, so that the time complexity is reduced.
In a third aspect, the present application provides an electronic device comprising a processor and a memory, wherein the memory stores computer readable instructions, and the computer readable instructions, when executed by the processor, perform the steps of the method as provided in the first aspect.
In a fourth aspect, the present application provides a storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method as provided in the first aspect above.
As can be seen from the above, the point cloud fusion method, device, electronic device and storage medium provided by the present application acquire the first point cloud data and the second point cloud data; performing preliminary fusion on the first point cloud data and the second point cloud data based on a multi-head self-attention mechanism to obtain a preliminary fusion result; calculating the weight attenuation of the first point cloud data and the weight attenuation of the second point cloud data; updating the fusion parameters of the first point cloud data and the fusion parameters of the second point cloud data respectively according to the weight attenuation of the first point cloud data and the weight attenuation of the second point cloud data in a back propagation mode until the update change rate of the fusion weight of the first point cloud data and the update change rate of the fusion weight of the second point cloud data are smaller than a preset threshold respectively, and stopping iteration to obtain optimal fusion parameters; fusing the first point cloud data and the second point cloud data according to the optimal fusion parameters to obtain a final fusion result; therefore, the integration of the first point cloud data acquired by the unmanned aerial vehicle and the second point cloud data acquired by the quadruped robot is realized, the calculation complexity in the point cloud integration process can be reduced by a parallel calculation method, the point cloud integration efficiency and precision are improved, in addition, the self-adaptive adjustment can be carried out on the condition of non-uniform input dimensions, and the time complexity is reduced.
Additional features and advantages of the present application will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the embodiments of the present application. The objectives and other advantages of the application may be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
Fig. 1 is a flowchart of a point cloud fusion method provided in an embodiment of the present application.
Fig. 2 is a schematic structural diagram of a point cloud fusion apparatus provided in the embodiment of the present application.
Fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present application without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures. Meanwhile, in the description of the present application, the terms "first", "second", and the like are used only for distinguishing the description, and are not construed as indicating or implying relative importance.
In recent years, three-dimensional scene reconstruction by unmanned aerial vehicles has been widely applied in fields such as unmanned aerial vehicle self-positioning and navigation, urban digital twins, and topographic mapping. However, limited by the flight height of the unmanned aerial vehicle and the resolution of its pan-tilt camera, the unmanned aerial vehicle cannot perform three-dimensional reconstruction of complex scenes on the ground (intricate buildings, jungles, dense grasslands and the like); an unmanned ground vehicle, in turn, is limited by its motion performance and cannot travel in such complex scenes, whereas a quadruped robot can adapt to a variety of complex terrains, including jungles, dense vegetation, forests, ramps and stairs. Therefore, in complex scene environments, fusing the point clouds of the aerial unmanned aerial vehicle's three-dimensional reconstruction map and the fine three-dimensional reconstruction map of the ground quadruped robot can effectively improve the accuracy of three-dimensional scene reconstruction in existing complex environments and facilitate map scene construction.
However, the existing three-dimensional reconstruction point cloud fusion methods are mainly fusion methods based on spatial voxels and point cloud fusion methods based on the clustering idea. A fusion method based on spatial voxels, such as the TSDF method, needs to divide the point cloud space into tiny voxels, and the degree of subdivision determines the precision; when applied to scenes with high precision requirements and widely distributed point clouds, it consumes a large amount of memory, so the spatial-voxel-based point cloud fusion method is only suitable for point cloud fusion with low precision requirements and for rapid reconstruction of three-dimensional scenes. The point cloud fusion method based on the clustering idea requires the point cloud and its normals as simultaneous inputs: the overlapping area is located by clustering, the point set of the overlapping area is projected onto a fitting plane by the least squares method, and the intersection of the line formed by the points of the overlapping area along the normal direction with the fitting plane is selected as the fused data; when the data volume is large, the clustering and fitting processes are time-consuming and the fusion efficiency is low.
Therefore, the current point cloud fusion algorithm is difficult to meet the requirements of large data volume and high precision of point cloud fusion of three-dimensional reconstruction of complex scenes. Based on the point cloud fusion method and device, the electronic equipment and the storage medium are provided.
In a first aspect, please refer to fig. 1, fig. 1 is a flowchart of a point cloud fusion method in some embodiments of the present application. The point cloud fusion method is used for fusing first point cloud data acquired by an unmanned aerial vehicle and second point cloud data acquired by a quadruped robot, and comprises the following steps:
s101, obtaining first point cloud data and second point cloud data.
S102, performing preliminary fusion on the first point cloud data and the second point cloud data based on a multi-head self-attention mechanism to obtain a preliminary fusion result.
S103, calculating the weight attenuation of the first point cloud data and the weight attenuation of the second point cloud data.
And S104, updating the fusion parameters of the first point cloud data and the fusion parameters of the second point cloud data respectively according to the weight attenuation of the first point cloud data and the weight attenuation of the second point cloud data in a back propagation mode until the update change rate of the first point cloud data fusion weight and the update change rate of the second point cloud data fusion weight are smaller than a preset threshold respectively, and stopping iteration to obtain the optimal fusion parameters.
And S105, fusing the first point cloud data and the second point cloud data according to the optimal fusion parameters to obtain a final fusion result.
According to the point cloud fusion method, preliminary fusion is carried out on the basis of a multi-head self-attention mechanism, then the fusion parameters are updated according to weight attenuation back propagation in a preliminary fusion result to obtain the optimal fusion parameters, fusion is carried out according to the optimal fusion parameters to obtain the optimal fusion result, the calculation complexity in the point cloud fusion process is reduced, and the efficiency and the precision of point cloud fusion are improved.
In step S101, the acquired first point cloud data may be obtained by conversion from images captured by a high-definition pan-tilt camera with depth information carried by the unmanned aerial vehicle, and the acquired second point cloud data may be obtained by conversion from images captured by a high-definition RGB-D camera carried by the quadruped robot.
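For illustration only, the following is a minimal sketch of how a depth image can be converted into a point cloud with a standard pinhole camera model; the function name and the intrinsic parameters fx, fy, cx, cy are assumptions and are not taken from the patent.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project an H x W depth image (in metres) into an N x 3 point cloud."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx              # pinhole model: X = (u - cx) * Z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]    # drop pixels with no valid depth

# example with a synthetic 4 x 4 depth map and illustrative intrinsics
cloud = depth_to_point_cloud(np.full((4, 4), 2.0), fx=500.0, fy=500.0, cx=2.0, cy=2.0)
```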
In step S102, the preliminary fusion of the first point cloud data and the second point cloud data based on the multi-head self-attention mechanism is calculated by the following formula:
S = W_x X + W_z Z

wherein S is the preliminary fusion result output by the self-attention layer; W_x and W_z are both multi-head weighting matrices with the same number of rows and columns; X represents the output of the first point cloud data based on the multi-head self-attention mechanism, X = (x_1, x_2, ..., x_m, ...), where x_m represents the m-th data in that output; and Z represents the output of the second point cloud data based on the multi-head self-attention mechanism, Z = (z_1, z_2, ..., z_m, ...), where z_m represents the m-th data in that output.
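To make the combination above concrete, here is a minimal sketch assuming a standard scaled dot-product self-attention for each head (the patent does not spell out the per-head computation); all function names, the head-averaging step and the example dimensions are illustrative assumptions.

```python
import numpy as np

def self_attention(points, w_q, w_k, w_v):
    """Single-head scaled dot-product self-attention over an (n, d) point array."""
    q, k, v = points @ w_q, points @ w_k, points @ w_v
    scores = q @ k.T / np.sqrt(k.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)        # row-wise softmax
    return weights @ v

def preliminary_fusion(cloud1, cloud2, heads, w_x, w_z, rng):
    """S = W_x X + W_z Z, where X and Z are the attention outputs of the two clouds."""
    d = cloud1.shape[1]

    def multi_head(cloud):
        outs = [self_attention(cloud, *(rng.standard_normal((d, d)) for _ in range(3)))
                for _ in range(heads)]
        return np.mean(outs, axis=0)   # assumption: head outputs are averaged back to d dims

    X, Z = multi_head(cloud1), multi_head(cloud2)
    return X @ w_x + Z @ w_z           # weighting matrices act on the feature dimension

rng = np.random.default_rng(0)
# sketch assumes both clouds were sampled to the same number of points
c1, c2 = rng.random((100, 3)), rng.random((100, 3))
S = preliminary_fusion(c1, c2, heads=4, w_x=np.eye(3), w_z=np.eye(3), rng=rng)
```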
Specifically, in some embodiments, before the preliminary fusion of the first point cloud data and the second point cloud data based on the multi-head self-attention mechanism in step S102, the method further includes the following step: randomly sampling the first point cloud data and the second point cloud data respectively according to a set proportion. Performing preliminary fusion after the first point cloud data and the second point cloud data have each been randomly sampled further reduces the computational complexity of the fusion process.
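A minimal sketch of this sampling step, assuming the set proportion is a fixed fraction of each cloud; the ratio value and function name are illustrative.

```python
import numpy as np

def random_sample(cloud, ratio, rng):
    """Keep a random subset of the points according to the set proportion."""
    keep = max(1, int(len(cloud) * ratio))
    idx = rng.choice(len(cloud), size=keep, replace=False)
    return cloud[idx]

rng = np.random.default_rng(0)
first_sampled = random_sample(rng.random((10000, 3)), ratio=0.2, rng=rng)
second_sampled = random_sample(rng.random((8000, 3)), ratio=0.2, rng=rng)
```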
Specifically, in some embodiments, step S103 includes the following sub-steps: s1031, solving a fusion matrix; s1032, calculating according to the fusion matrix to obtain a fusion matrix weight vector; s1033, calculating to obtain a fusion matrix output vector according to the fusion matrix and the fusion matrix weight vector; s1034, further calculating by utilizing a feedforward neural network based on the weight matrix and the bias to obtain output; s1035, obtaining a loss function according to output calculation; s1036, obtaining weight attenuation quantity by calculating partial derivatives according to the loss function.
Wherein, in step S1031, the fusion matrix is calculated by the following formula:
k_i = W^K a_i,  v_i = W^V a_i,  q_i = W^Q a_i

wherein the fusion matrices comprise W^K, W^V and W^Q; W^K is the input key fusion matrix, W^V is the input value fusion matrix, and W^Q is the output fusion matrix; the initial values of W^K, W^V and W^Q are random number matrices in the range 0-1 that follow a Gaussian distribution; a_i represents the i-th point cloud data in the first point cloud data or the i-th point cloud data in the second point cloud data, i = 1, 2, ..., n, and n is the total number of point cloud data in the first point cloud data or in the second point cloud data; k_i, v_i and q_i respectively represent the results obtained by applying the input key fusion matrix W^K, the input value fusion matrix W^V and the output fusion matrix W^Q to the preliminary fusion result of the i-th point cloud data a_i in the first point cloud data or in the second point cloud data.
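A sketch of step S1031 in the notation above; the clipping of the Gaussian initial values to the 0-1 range is one plausible reading of the initialisation described, and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_fusion_matrix(dim, rng):
    """Gaussian random matrix clipped to the 0-1 range (assumed initialisation)."""
    return np.clip(rng.normal(0.5, 0.25, size=(dim, dim)), 0.0, 1.0)

def solve_fusion_matrices(points, rng):
    """Compute k_i = W^K a_i, v_i = W^V a_i, q_i = W^Q a_i for every point a_i."""
    d = points.shape[1]
    w_k, w_v, w_q = (init_fusion_matrix(d, rng) for _ in range(3))
    k, v, q = points @ w_k.T, points @ w_v.T, points @ w_q.T
    return (w_k, w_v, w_q), (k, v, q)

points = rng.random((50, 3))           # preliminary fusion results a_1 .. a_n
(w_k, w_v, w_q), (k, v, q) = solve_fusion_matrices(points, rng)
```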
In step S1032, the weight vector of the fusion matrix obtained by calculation according to the fusion matrix is calculated by the following formula:
α_{i,j} = k_i^T v_j,  j = 1, 2, ..., n

in the formula, α_i = (α_{i,1}, α_{i,2}, ..., α_{i,n}) is the fusion matrix weight vector; k_i represents the result obtained by applying the input key fusion matrix W^K to the preliminary fusion result of the i-th point cloud data a_i in the first point cloud data or in the second point cloud data; and v_j represents the result obtained by applying the input value fusion matrix W^V to the preliminary fusion result of the j-th point cloud data a_j in the first point cloud data or in the second point cloud data.
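Continuing the previous sketch, a minimal illustration of step S1032 under the reconstructed weight definition (dot products of k_i with every value vector v_j); the names are illustrative.

```python
import numpy as np

def fusion_weight_vector(k_i, v):
    """alpha_i = (k_i . v_1, ..., k_i . v_n): unnormalised fusion weights for point i."""
    return v @ k_i                      # shape (n,)

# continuing the previous sketch: weight vector of point 0 against all n points
alpha_0 = fusion_weight_vector(k[0], v)
```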
In step S1033, the output vector of the fusion matrix is calculated by the following formula:
α'_{i,j} = exp(α_{i,j}) / Σ_{j'=1}^{n} exp(α_{i,j'})

b_i = Σ_{j=1}^{n} α'_{i,j} q_j

in the formula, b_i represents the fusion matrix output vector; α_{i,j} is the j-th element of the fusion matrix weight vector α_i, i.e. the further fusion result obtained by calculation from k_i and v_j, and α'_{i,j} is its normalized value; v_j represents the result obtained by applying the input value fusion matrix W^V to the preliminary fusion result of the j-th point cloud data a_j in the first point cloud data or in the second point cloud data; and q_j represents the result obtained by applying the output fusion matrix W^Q to the preliminary fusion result of the j-th point cloud data a_j in the first point cloud data or in the second point cloud data.
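A sketch of step S1033, assuming the weight vector is softmax-normalised before the weighted sum over the q_j vectors; the normalisation form is an assumption.

```python
import numpy as np

def fusion_output_vector(alpha_i, q):
    """b_i = sum_j softmax(alpha_i)_j * q_j."""
    w = np.exp(alpha_i - alpha_i.max())
    w /= w.sum()                        # softmax over the n points
    return w @ q                        # shape (d,)

b_0 = fusion_output_vector(alpha_0, q)
```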
In step S1034, the output is calculated by the following formula:
o_i = W^O b_i + c,  i = 1, 2, ..., n

wherein o_i is the output; b_i is the fusion matrix output vector; W^O represents the output weight matrix of the output layer; and c represents the bias.
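A sketch of the feedforward output of step S1034; the single affine layer without an activation mirrors the reconstructed formula above and is an assumption.

```python
import numpy as np

def feedforward_output(b, w_o, c):
    """o_i = W^O b_i + c applied row-wise to the fusion output vectors."""
    return b @ w_o.T + c

w_o = np.eye(3)                         # illustrative output weight matrix
c = np.zeros(3)                         # illustrative bias
o = feedforward_output(np.stack([b_0]), w_o, c)
```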
In step S1035, the loss function is calculated by the following formula:
L = Σ_{i=1}^{n} || o_i - t_i ||^2

wherein L is the loss function, o_i is the output, and t_i is the accurate label in the single-head point cloud, obtained by calibration after manual selection.
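A sketch of step S1035, assuming a squared-error loss against the manually calibrated labels; the exact loss form used in the patent figure is not recoverable here.

```python
import numpy as np

def fusion_loss(outputs, labels):
    """L = sum_i || o_i - t_i ||^2 against the manually calibrated labels t_i."""
    return float(np.sum((outputs - labels) ** 2))

labels = np.zeros_like(o)               # placeholder for calibrated labels
loss = fusion_loss(o, labels)
```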
In step S104, updating the fusion parameters of the first point cloud data and the fusion parameters of the second point cloud data according to the weight attenuation of the first point cloud data and the weight attenuation of the second point cloud data, respectively, and calculating according to the following formulas:
W^K' = W^K - η ∂L/∂W^K

W^V' = W^V - η ∂L/∂W^V

W^Q' = W^Q - η ∂L/∂W^Q

W^O' = W^O - η ∂L/∂W^O

wherein W^K, W^V, W^Q and W^O are the fusion parameters before updating, W^K being the input key fusion matrix, W^V the input value fusion matrix, W^Q the output fusion matrix and W^O the output weight matrix; W^K', W^V', W^Q' and W^O' are the corresponding updated fusion parameters; ∂ denotes taking the partial derivative; and η denotes the learning rate, which takes the value 0.005.
By continuously back-propagating to update the fusion parameters W^K, W^V, W^Q and W^O, the input data (the first point cloud data and the second point cloud data) can be gradually converted into the same dimension.
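A sketch of the update loop of step S104, using the stated learning rate and stopping once the relative change of every fusion weight falls below a preset threshold; the gradient computation itself is left abstract because it depends on the full network, so the loop is shown only in outline.

```python
import numpy as np

def update_parameters(params, grads, lr=0.005):
    """One back-propagation step: W' = W - lr * dL/dW for every fusion parameter."""
    return {name: w - lr * grads[name] for name, w in params.items()}

def converged(old, new, threshold=1e-4):
    """Stop once the update change rate of every fusion weight is below the threshold."""
    return all(np.linalg.norm(new[n] - old[n]) / (np.linalg.norm(old[n]) + 1e-12) < threshold
               for n in old)

# illustrative loop; compute_gradients stands in for the back-propagation pass
# params = {"W_K": w_k, "W_V": w_v, "W_Q": w_q, "W_O": w_o}
# while True:
#     grads = compute_gradients(params, first_cloud, second_cloud, labels)
#     new_params = update_parameters(params, grads)
#     if converged(params, new_params):
#         break
#     params = new_params
```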
As can be seen from the above, the point cloud fusion method provided by the embodiment of the present application obtains the first point cloud data and the second point cloud data; performing preliminary fusion on the first point cloud data and the second point cloud data based on a multi-head self-attention mechanism to obtain a preliminary fusion result; calculating the weight attenuation of the first point cloud data and the weight attenuation of the second point cloud data; updating the fusion parameters of the first point cloud data and the fusion parameters of the second point cloud data respectively according to the weight attenuation of the first point cloud data and the weight attenuation of the second point cloud data in a back propagation mode until the update change rate of the fusion weight of the first point cloud data and the update change rate of the fusion weight of the second point cloud data are smaller than a preset threshold respectively, and stopping iteration to obtain optimal fusion parameters; fusing the first point cloud data and the second point cloud data according to the optimal fusion parameters to obtain a final fusion result; therefore, the integration of the first point cloud data acquired by the unmanned aerial vehicle and the second point cloud data acquired by the quadruped robot is realized, the calculation complexity in the point cloud integration process can be reduced by a parallel calculation method, the point cloud integration efficiency and precision are improved, in addition, the self-adaptive adjustment can be carried out on the condition of non-uniform input dimensions, and the time complexity is reduced.
In a second aspect, please refer to fig. 2, fig. 2 is a point cloud fusion apparatus for fusing first point cloud data acquired by an unmanned aerial vehicle and second point cloud data acquired by a quadruped robot, the point cloud fusion apparatus being integrated in a ground station device in wireless communication connection with the unmanned aerial vehicle and the quadruped robot in the form of a computer program, the point cloud fusion apparatus comprising: the system comprises an acquisition module 201, a preliminary fusion module 202, a calculation module 203, an update module 204 and a fusion module 205.
The acquiring module 201 is configured to acquire first point cloud data and second point cloud data; the acquired first point cloud data can be obtained by converting after being shot by a high-definition cloud platform camera with depth information carried by an unmanned aerial vehicle, and the acquired second point cloud data can be obtained by converting after being shot by a high-definition RGB-D camera.
The initial fusion module 202 is configured to perform initial fusion on the first point cloud data and the second point cloud data based on a multi-head self-attention mechanism, so as to obtain an initial fusion result. The preliminary fusion of the first point cloud data and the second point cloud data based on the multi-head self-attention mechanism is calculated by the following formula:
S = W_x X + W_z Z

wherein S is the preliminary fusion result output by the self-attention layer; W_x and W_z are both multi-head weighting matrices with the same number of rows and columns; X represents the output of the first point cloud data based on the multi-head self-attention mechanism, X = (x_1, x_2, ..., x_m, ...), where x_m represents the m-th data in that output; and Z represents the output of the second point cloud data based on the multi-head self-attention mechanism, Z = (z_1, z_2, ..., z_m, ...), where z_m represents the m-th data in that output.
Specifically, in some embodiments, the point cloud fusion device further comprises a random sampling module. The random sampling module is configured to perform random sampling on the first point cloud data and the second point cloud data according to a set ratio before the initial fusion module 202 performs initial fusion on the first point cloud data and the second point cloud data based on a multi-head self-attention mechanism, so as to reduce the computational complexity in the fusion process.
The calculating module 203 is configured to calculate the weight attenuation amount of the first point cloud data and the weight attenuation amount of the second point cloud data. Calculating these weight attenuation amounts ensures that each point coordinate of the output fused point cloud depends on all input point coordinates, so that the fusion is more accurate.
Specifically, in some embodiments, the calculation module 203 includes: the first calculation unit is used for solving a fusion matrix; the second calculation unit is used for calculating to obtain a fusion matrix weight vector according to the fusion matrix; the third calculation unit is used for calculating to obtain a fusion matrix output vector according to the fusion matrix and the fusion matrix weight vector; the fourth calculation unit is used for further calculating by utilizing a feedforward neural network based on the weight matrix and the bias to obtain output; a fifth calculating unit, configured to calculate a loss function according to the output; and the sixth calculating unit is used for solving the partial derivative according to the loss function to obtain the weight attenuation.
Specifically, the first calculation unit calculates a fusion matrix of the preliminary fusion result by the following formula:
k_i = W^K a_i,  v_i = W^V a_i,  q_i = W^Q a_i

in the formula, the fusion matrices comprise W^K, W^V and W^Q; W^K is the input key fusion matrix, W^V is the input value fusion matrix, and W^Q is the output fusion matrix; the initial values of W^K, W^V and W^Q are random number matrices in the range 0-1 that follow a Gaussian distribution; a_i represents the i-th point cloud data in the first point cloud data or the i-th point cloud data in the second point cloud data, i = 1, 2, ..., n, and n is the total number of point cloud data in the first point cloud data or in the second point cloud data; k_i, v_i and q_i respectively represent the results obtained by applying the input key fusion matrix W^K, the input value fusion matrix W^V and the output fusion matrix W^Q to the preliminary fusion result of the i-th point cloud data a_i in the first point cloud data or in the second point cloud data.
Specifically, the second calculation unit calculates the fusion matrix weight vector by the following formula:
α_{i,j} = k_i^T v_j,  j = 1, 2, ..., n

in the formula, α_i = (α_{i,1}, α_{i,2}, ..., α_{i,n}) is the fusion matrix weight vector; k_i represents the result obtained by applying the input key fusion matrix W^K to the preliminary fusion result of the i-th point cloud data a_i in the first point cloud data or in the second point cloud data; and v_j represents the result obtained by applying the input value fusion matrix W^V to the preliminary fusion result of the j-th point cloud data a_j in the first point cloud data or in the second point cloud data.
Specifically, the third calculation unit calculates the fusion matrix output vector by the following formula:
α'_{i,j} = exp(α_{i,j}) / Σ_{j'=1}^{n} exp(α_{i,j'})

b_i = Σ_{j=1}^{n} α'_{i,j} q_j

in the formula, b_i represents the fusion matrix output vector; α_{i,j} is the j-th element of the fusion matrix weight vector α_i, i.e. the further fusion result obtained by calculation from k_i and v_j, and α'_{i,j} is its normalized value; v_j represents the result obtained by applying the input value fusion matrix W^V to the preliminary fusion result of the j-th point cloud data a_j in the first point cloud data or in the second point cloud data; and q_j represents the result obtained by applying the output fusion matrix W^Q to the preliminary fusion result of the j-th point cloud data a_j in the first point cloud data or in the second point cloud data.
Specifically, the fourth calculation unit calculates the output by the following formula:
o_i = W^O b_i + c,  i = 1, 2, ..., n

in the formula, o_i is the output; b_i is the fusion matrix output vector; W^O represents the output weight matrix of the output layer; and c represents the bias.
Specifically, the fifth calculation unit calculates the loss function by the following formula:
L = Σ_{i=1}^{n} || o_i - t_i ||^2

in the formula, L is the loss function, o_i is the output, and t_i is the accurate label in the single-head point cloud, obtained by calibration after manual selection.
The updating module 204 is configured to update the fusion parameters of the first point cloud data and the fusion parameters of the second point cloud data according to the weight attenuation of the first point cloud data and the weight attenuation of the second point cloud data in a back propagation manner, respectively, until the update change rate of the fusion weight of the first point cloud data and the update change rate of the fusion weight of the second point cloud data are respectively smaller than a preset threshold, and stop the iteration to obtain the optimal fusion parameters.
Specifically, updating the fusion parameters of the first point cloud data and the fusion parameters of the second point cloud data respectively according to the weight attenuation of the first point cloud data and the weight attenuation of the second point cloud data through the following formulas:
W^K' = W^K - η ∂L/∂W^K

W^V' = W^V - η ∂L/∂W^V

W^Q' = W^Q - η ∂L/∂W^Q

W^O' = W^O - η ∂L/∂W^O

in the formula, W^K, W^V, W^Q and W^O are the fusion parameters before updating, W^K being the input key fusion matrix, W^V the input value fusion matrix, W^Q the output fusion matrix and W^O the output weight matrix; W^K', W^V', W^Q' and W^O' are the corresponding updated fusion parameters; ∂ denotes taking the partial derivative; and η denotes the learning rate, which takes the value 0.005.
By continuously back-propagating to update the fusion parameters W^K, W^V, W^Q and W^O, the input data (the first point cloud data and the second point cloud data) can be gradually converted into the same dimension.
As can be seen from the above, the point cloud fusion device provided in the embodiment of the present application obtains the first point cloud data and the second point cloud data; performing preliminary fusion on the first point cloud data and the second point cloud data based on a multi-head self-attention mechanism to obtain a preliminary fusion result; calculating the weight attenuation of the first point cloud data and the weight attenuation of the second point cloud data; updating the fusion parameters of the first point cloud data and the fusion parameters of the second point cloud data respectively according to the weight attenuation of the first point cloud data and the weight attenuation of the second point cloud data in a back propagation mode until the update change rate of the fusion weight of the first point cloud data and the update change rate of the fusion weight of the second point cloud data are smaller than a preset threshold respectively, and stopping iteration to obtain optimal fusion parameters; fusing the first point cloud data and the second point cloud data according to the optimal fusion parameters to obtain a final fusion result; therefore, the integration of the first point cloud data acquired by the unmanned aerial vehicle and the second point cloud data acquired by the quadruped robot is realized, the calculation complexity in the point cloud integration process can be reduced by a parallel calculation method, the point cloud integration efficiency and precision are improved, in addition, the self-adaptive adjustment can be carried out on the condition of non-uniform input dimensions, and the time complexity is reduced.
In a third aspect, referring to fig. 3, fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present application. The present application provides an electronic device comprising a processor 301 and a memory 302, the processor 301 and the memory 302 being interconnected and communicating with each other via a communication bus 303 and/or another form of connection mechanism (not shown), the memory 302 storing a computer program executable by the processor 301. When the electronic device is running, the processor 301 executes the computer program to perform the method in any optional implementation of the above embodiments, so as to realize the following functions: acquiring first point cloud data and second point cloud data; performing preliminary fusion on the first point cloud data and the second point cloud data based on a multi-head self-attention mechanism to obtain a preliminary fusion result; calculating the weight attenuation of the first point cloud data and the weight attenuation of the second point cloud data; updating the fusion parameters of the first point cloud data and the fusion parameters of the second point cloud data respectively according to the weight attenuation of the first point cloud data and the weight attenuation of the second point cloud data in a back propagation mode until the update change rate of the fusion weight of the first point cloud data and the update change rate of the fusion weight of the second point cloud data are respectively smaller than a preset threshold, and stopping iteration to obtain optimal fusion parameters; and fusing the first point cloud data and the second point cloud data according to the optimal fusion parameters to obtain a final fusion result.
In a fourth aspect, the present application provides a storage medium, on which a computer program is stored, and when the computer program is executed by a processor, the computer program performs the method in any optional implementation manner of the foregoing embodiment to implement the following functions: acquiring first point cloud data and second point cloud data; performing preliminary fusion on the first point cloud data and the second point cloud data based on a multi-head self-attention mechanism to obtain a preliminary fusion result; calculating the weight attenuation of the first point cloud data and the weight attenuation of the second point cloud data; updating the fusion parameters of the first point cloud data and the fusion parameters of the second point cloud data respectively according to the weight attenuation of the first point cloud data and the weight attenuation of the second point cloud data in a back propagation mode until the update change rate of the fusion weight of the first point cloud data and the update change rate of the fusion weight of the second point cloud data are smaller than a preset threshold respectively, and stopping iteration to obtain optimal fusion parameters; and fusing the first point cloud data and the second point cloud data according to the optimal fusion parameters to obtain a final fusion result. The storage medium may be implemented by any type of volatile or nonvolatile storage device or combination thereof, such as a Static Random Access Memory (SRAM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), an Erasable Programmable Read-Only Memory (EPROM), a Programmable Read-Only Memory (PROM), a Read-Only Memory (ROM), a magnetic Memory, a flash Memory, a magnetic disk, or an optical disk.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
In addition, units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
Furthermore, the functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
In this document, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions.
The above description is only an example of the present application and is not intended to limit the scope of the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (10)

1. A point cloud fusion method for fusing first point cloud data acquired by an unmanned aerial vehicle with second point cloud data acquired by a quadruped robot, characterized by comprising the following steps:
acquiring the first point cloud data and the second point cloud data;
performing preliminary fusion on the first point cloud data and the second point cloud data based on a multi-head self-attention mechanism to obtain a preliminary fusion result;
calculating the weight attenuation of the first point cloud data and the weight attenuation of the second point cloud data;
updating the fusion parameters of the first point cloud data and the fusion parameters of the second point cloud data respectively according to the weight attenuation of the first point cloud data and the weight attenuation of the second point cloud data in a back propagation mode until the update change rate of the fusion weight of the first point cloud data and the update change rate of the fusion weight of the second point cloud data are respectively smaller than a preset threshold value, and stopping iteration to obtain optimal fusion parameters;
and fusing the first point cloud data and the second point cloud data according to the optimal fusion parameters to obtain a final fusion result.
2. The point cloud fusion method of claim 1, wherein the preliminary fusion of the first point cloud data and the second point cloud data based on a multi-head self-attention mechanism comprises the following steps:
randomly sampling the first point cloud data and the second point cloud data according to a set proportion.
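A minimal sketch of this sampling step, assuming the point clouds are stored as NumPy arrays of shape (n, d) and that the "set proportion" means keeping a fixed fraction of the points; the fraction value below is illustrative only.

```python
import numpy as np

def random_sample(points, proportion=0.5, seed=0):
    # Keep `proportion` of the rows of an (n, d) point cloud, chosen uniformly at random.
    rng = np.random.default_rng(seed)
    keep = max(1, int(len(points) * proportion))
    idx = rng.choice(len(points), size=keep, replace=False)
    return points[idx]

# e.g. sampled_uav = random_sample(first_point_cloud, 0.5)
#      sampled_robot = random_sample(second_point_cloud, 0.5)
```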
3. The point cloud fusion method of claim 1, wherein calculating the weight attenuation of the first point cloud data and the weight attenuation of the second point cloud data each comprises:
solving a fusion matrix;
calculating a fusion matrix weight vector according to the fusion matrix;
calculating a fusion matrix output vector according to the fusion matrix and the fusion matrix weight vector;
obtaining an output through calculation with a feedforward neural network based on a weight matrix and a bias;
obtaining a loss function through calculation according to the output;
and obtaining the weight attenuation amount by taking partial derivatives of the loss function.
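For the feedforward, loss and partial-derivative steps, a compact sketch follows; the single-layer network, the ReLU activation and the squared-error loss are assumptions chosen only to make the derivative concrete.

```python
import numpy as np

def weight_attenuation(b, target, W, bias):
    # b: fusion matrix output vector (d,); target: reference vector (d,);
    # W: weight matrix (d, d) and bias: bias vector (d,) of the feedforward layer.
    z = W @ b + bias
    y = np.maximum(z, 0.0)                  # feedforward output (assumed ReLU activation)
    loss = 0.5 * np.sum((y - target) ** 2)  # assumed squared-error loss
    dy = y - target                         # d(loss)/d(y)
    dz = dy * (z > 0)                       # back through the ReLU
    dW = np.outer(dz, b)                    # partial derivative of the loss w.r.t. W
    return loss, dW                         # dW plays the role of the weight attenuation amount
```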
4. The point cloud fusion method of claim 3, wherein the solving of the fusion matrix is calculated by the following formula:

k_i = W_K · a_i,  v_i = W_V · a_i,  o_i = W_O · a_i,  i = 1, 2, …, n

wherein the fusion matrix comprises W_K, W_V and W_O, W_K being an input key fusion matrix, W_V being an input value fusion matrix and W_O being an output fusion matrix; the initial values of W_K, W_V and W_O are random number matrices in the 0-1 range conforming to a Gaussian distribution; a_i represents the i-th point cloud data in the first point cloud data or the i-th point cloud data in the second point cloud data, and n is the total number of point cloud data in the first point cloud data or in the second point cloud data; k_i, v_i and o_i respectively represent the preliminary fusion results of the input key fusion matrix W_K, the input value fusion matrix W_V and the output fusion matrix W_O for the i-th point cloud data a_i in the first point cloud data or the i-th point cloud data a_i in the second point cloud data.
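An illustrative NumPy sketch of these projections; the way the "Gaussian-distributed random numbers in the 0-1 range" are generated (a clipped normal) and the data dimensionality are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 3                                             # dimensionality of one point cloud datum (assumed)

def init_fusion_matrix():
    # Gaussian-distributed random matrix, clipped into the 0-1 range as the claim describes.
    return np.clip(rng.normal(loc=0.5, scale=0.2, size=(d, d)), 0.0, 1.0)

W_K, W_V, W_O = init_fusion_matrix(), init_fusion_matrix(), init_fusion_matrix()

a_i = np.array([1.0, 2.0, 0.5])                   # the i-th point cloud datum
k_i, v_i, o_i = W_K @ a_i, W_V @ a_i, W_O @ a_i   # preliminary fusion results of the three matrices
```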
5. The point cloud fusion method of claim 3, wherein the fusion matrix weight vector calculated according to the fusion matrix is calculated by the following formula:

w = (w_1, w_2, …, w_n) = softmax(k_i^T · [v_1, v_2, …, v_n])

wherein w represents the fusion matrix weight vector; k_i represents the preliminary fusion result of the input key fusion matrix W_K for the i-th point cloud data a_i in the first point cloud data or the i-th point cloud data a_i in the second point cloud data; v_n represents the preliminary fusion result of the input value fusion matrix W_V for the n-th point cloud data a_n in the first point cloud data or the n-th point cloud data a_n in the second point cloud data.
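A sketch of how such a weight vector could be formed from the key projection of the i-th point and the value projections of all n points; the softmax normalisation mirrors the reconstruction above and is an assumption, since the original formula is published only as an image.

```python
import numpy as np

def fusion_weight_vector(k_i, V):
    # k_i: key projection of the i-th point (d,); V: value projections of all n points, stacked as (n, d).
    scores = V @ k_i                    # one score per point, from k_i and each v_j
    e = np.exp(scores - scores.max())
    return e / e.sum()                  # fusion matrix weight vector w, shape (n,)
```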
6. The point cloud fusion method of claim 3, wherein the fusion matrix output vector calculated according to the fusion matrix and the fusion matrix weight vector is calculated by the following formulas:

w_j = exp(k_i^T · v_j) / Σ_m exp(k_i^T · v_m),  m = 1, 2, …, n

b_i = Σ_j w_j · o_j,  j = 1, 2, …, n

wherein b_i represents the fusion matrix output vector; w_j is the j-th element of the fusion matrix weight vector w and represents the further fusion result obtained by calculation from k_i and v_j; v_j represents the preliminary fusion result of the input value fusion matrix W_V for the j-th point cloud data a_j in the first point cloud data or the j-th point cloud data a_j in the second point cloud data; o_j represents the preliminary fusion result of the output fusion matrix W_O for the j-th point cloud data a_j in the first point cloud data or the j-th point cloud data a_j in the second point cloud data.
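And a one-line sketch of the corresponding output vector as the weighted combination of the output projections, under the same assumptions.

```python
import numpy as np

def fusion_output_vector(w, O):
    # w: fusion matrix weight vector (n,); O: output projections o_1 ... o_n stacked as an (n, d) array.
    return w @ O   # b_i = sum over j of w_j * o_j

# e.g. b_i = fusion_output_vector(np.array([0.2, 0.3, 0.5]), np.eye(3))
```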
7. The point cloud fusion method of claim 1, wherein the preliminary fusion of the first point cloud data and the second point cloud data based on the multi-head self-attention mechanism is calculated by the following formula:

F = B_1 · M_1 + B_2 · M_2

wherein F is the preliminary fusion result output by the attention layer; M_1 and M_2 are both multi-head weighting matrices having the same number of rows and the same number of columns; B_1 represents the output of the first point cloud data based on the multi-head self-attention mechanism, B_1 = (b_1^(1), b_2^(1), …, b_n^(1)), where b_i^(1) corresponds to the i-th data in the first point cloud data; B_2 represents the output of the second point cloud data based on the multi-head self-attention mechanism, B_2 = (b_1^(2), b_2^(2), …, b_n^(2)), where b_i^(2) corresponds to the i-th data of the second point cloud data.
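A sketch of this combination step; whether the weighting matrices multiply the attention outputs on the left or the right is not recoverable from the published text, so the right-multiplication below is an assumption.

```python
import numpy as np

def preliminary_fusion(B1, B2, M1, M2):
    # B1, B2: (n, d) self-attention outputs of the first and second point cloud data;
    # M1, M2: (d, d) multi-head weighting matrices with identical numbers of rows and columns.
    assert M1.shape == M2.shape
    return B1 @ M1 + B2 @ M2   # F, the preliminary fusion result output by the attention layer

# e.g.
rng = np.random.default_rng(0)
F = preliminary_fusion(rng.random((4, 3)), rng.random((4, 3)), rng.random((3, 3)), rng.random((3, 3)))
```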
8. A point cloud fusion device for fusing first point cloud data acquired by an unmanned aerial vehicle with second point cloud data acquired by a quadruped robot, characterized in that the device comprises:
the acquisition module is used for acquiring the first point cloud data and the second point cloud data;
the preliminary fusion module is used for performing preliminary fusion on the first point cloud data and the second point cloud data based on a multi-head self-attention mechanism to obtain a preliminary fusion result;
the computing module is used for computing the weight attenuation of the first point cloud data and the weight attenuation of the second point cloud data;
the updating module is used for respectively updating the fusion parameters of the first point cloud data and the fusion parameters of the second point cloud data according to the weight attenuation of the first point cloud data and the weight attenuation of the second point cloud data in a back propagation mode until the update change rate of the fusion weight of the first point cloud data and the update change rate of the fusion weight of the second point cloud data are respectively smaller than a preset threshold value, and stopping iteration to obtain the optimal fusion parameters;
and the fusion module is used for fusing the first point cloud data and the second point cloud data according to the optimal fusion parameters to obtain a final fusion result.
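Purely as an organisational sketch, the modules of claim 8 could be arranged as follows; the class and method names are hypothetical.

```python
class PointCloudFusionDevice:
    # Organisational skeleton mirroring the modules of claim 8.
    def acquire(self):                        # acquisition module
        raise NotImplementedError
    def preliminary_fuse(self, pc1, pc2):     # preliminary fusion module (multi-head self-attention)
        raise NotImplementedError
    def weight_attenuation(self, pc):         # computing module
        raise NotImplementedError
    def update(self, decay1, decay2):         # updating module (back propagation until convergence)
        raise NotImplementedError
    def fuse(self, pc1, pc2, params):         # fusion module (final fusion with optimal parameters)
        raise NotImplementedError
```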
9. An electronic device comprising a processor and a memory, said memory storing computer readable instructions which, when executed by said processor, perform the steps of the method according to any one of claims 1 to 7.
10. A storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, performs the steps of the method according to any one of claims 1-7.
CN202210426803.8A 2022-04-22 2022-04-22 Point cloud fusion method and device, electronic equipment and storage medium Active CN114549608B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210426803.8A CN114549608B (en) 2022-04-22 2022-04-22 Point cloud fusion method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210426803.8A CN114549608B (en) 2022-04-22 2022-04-22 Point cloud fusion method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN114549608A true CN114549608A (en) 2022-05-27
CN114549608B CN114549608B (en) 2022-10-18

Family

ID=81666948

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210426803.8A Active CN114549608B (en) 2022-04-22 2022-04-22 Point cloud fusion method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114549608B (en)

Patent Citations (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190147335A1 (en) * 2017-11-15 2019-05-16 Uber Technologies, Inc. Continuous Convolution and Fusion in Neural Networks
US20200025931A1 (en) * 2018-03-14 2020-01-23 Uber Technologies, Inc. Three-Dimensional Object Detection
CN109919893A (en) * 2019-03-20 2019-06-21 湖北亿咖通科技有限公司 Point cloud modification method, device and readable storage medium storing program for executing
CN109978165A (en) * 2019-04-04 2019-07-05 重庆大学 A kind of generation confrontation network method merged from attention mechanism
US20200342580A1 (en) * 2019-04-25 2020-10-29 Megvii (Beijing) Technology Co., Ltd. A method, apparatus and electric device for image fusion
US20210241435A1 (en) * 2019-07-04 2021-08-05 Zhejiang Sense Time Technology Development Co., Ltd. Point cloud fusion method, electronic device, and computer storage medium
US20210122045A1 (en) * 2019-10-24 2021-04-29 Nvidia Corporation In-hand object pose tracking
US20210279950A1 (en) * 2020-03-04 2021-09-09 Magic Leap, Inc. Systems and methods for efficient floorplan generation from 3d scans of indoor scenes
US20210374345A1 (en) * 2020-06-01 2021-12-02 Google Llc Processing large-scale textual inputs using neural networks
CN111860666A (en) * 2020-07-27 2020-10-30 湖南工程学院 3D target detection method based on point cloud and image self-attention mechanism fusion
US20220044068A1 (en) * 2020-07-27 2022-02-10 Waymo Llc Processing perspective view range images using neural networks
US11222217B1 (en) * 2020-08-14 2022-01-11 Tsinghua University Detection method using fusion network based on attention mechanism, and terminal device
CN113487739A (en) * 2021-05-19 2021-10-08 清华大学 Three-dimensional reconstruction method and device, electronic equipment and storage medium
CN113345106A (en) * 2021-06-24 2021-09-03 西南大学 Three-dimensional point cloud analysis method and system based on multi-scale multi-level converter
CN113269147A (en) * 2021-06-24 2021-08-17 浙江海康智联科技有限公司 Three-dimensional detection method and system based on space and shape, and storage and processing device
CN113658100A (en) * 2021-07-16 2021-11-16 上海高德威智能交通系统有限公司 Three-dimensional target object detection method and device, electronic equipment and storage medium
CN113989340A (en) * 2021-10-29 2022-01-28 天津大学 Point cloud registration method based on distribution
CN114004871A (en) * 2022-01-04 2022-02-01 山东大学 Point cloud registration method and system based on point cloud completion
CN114066960A (en) * 2022-01-13 2022-02-18 季华实验室 Three-dimensional reconstruction method, point cloud fusion method, device, equipment and storage medium
CN114078151A (en) * 2022-01-19 2022-02-22 季华实验室 Point cloud fusion method and device, electronic equipment and storage medium

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
XUE-YAO GAO ET AL: "Multi-Head Self-Attention for 3D Point Cloud Classification", 《IEEE ACCESS》 *
ZHONG CHENG ET AL: "A 3D Point Cloud Object Recognition Method Based on an Attention Mechanism", 《Computer Technology and Development》 *
CHEN LIANG ET AL: "A Fast 3D Terrain and Landform Reconstruction Method Based on UAV Sequence Images", 《Beijing Surveying and Mapping》 *

Also Published As

Publication number Publication date
CN114549608B (en) 2022-10-18

Similar Documents

Publication Publication Date Title
CN110383340A (en) Path planning is carried out using sparse volume data
CN108204814B (en) Unmanned aerial vehicle three-dimensional scene path navigation platform and three-dimensional improved path planning method thereof
US11302105B2 (en) Grid map obstacle detection method fusing probability and height information
WO2016029348A1 (en) Measuring traffic speed in a road network
CN111750857B (en) Route generation method, route generation device, terminal and storage medium
US20240037844A1 (en) 3d structure engine-based computation platform
CN110347971A (en) Particle filter method, device and storage medium based on TSK fuzzy model
CN110181508A (en) Underwater robot three-dimensional Route planner and system
CN112348867A (en) Method and system for constructing city high-precision three-dimensional terrain based on LiDAR point cloud data
CN116310219A (en) Three-dimensional foot shape generation method based on conditional diffusion model
CN115661374A (en) Rapid retrieval method based on space division and model voxelization
CN116518960A (en) Road network updating method, device, electronic equipment and storage medium
CN112241676A (en) Method for automatically identifying terrain sundries
CN116720632B (en) Engineering construction intelligent management method and system based on GIS and BIM
CN114549608B (en) Point cloud fusion method and device, electronic equipment and storage medium
WO2023164933A1 (en) Building modeling method and related apparatus
CN112926681B (en) Target detection method and device based on deep convolutional neural network
CN115393542A (en) Generalized building three-dimensional geometric reconstruction method
CN111898819B (en) Space grid dividing method and device
CN114511571A (en) Point cloud data semantic segmentation method and system and related components
CN107247833A (en) A kind of CAE mass data light weight methods under cloud computing
CN111414802A (en) Protein data feature extraction method
CN115631320B (en) Pre-calculation cell display method, pre-calculation cell generation method and device
CN114663524B (en) Multi-camera online calibration method and device, electronic equipment and computer readable medium
CN115576424B (en) Method for enhancing real-time performance of VR teaching interactive operation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant