CN117574314A - Information fusion method, device and equipment of sensor and storage medium - Google Patents


Info

Publication number
CN117574314A
CN117574314A
Authority
CN
China
Prior art keywords
fusion
vector
information
detection
sensor
Prior art date
Legal status
Pending
Application number
CN202311603877.5A
Other languages
Chinese (zh)
Inventor
许恩永
谭雪峰
何水龙
彭吉优
李超
林长波
李慧
展新
冯海波
王善超
冯高山
许家毅
邓聚才
陈乾
唐荣江
鲍家定
郑伟光
胡超凡
陶林
王方圆
陈钰烨
赵德平
吴佳英
张释天
梁明运
庞凤
Current Assignee
Natural Resources And Planning Bureau Of Xiangxi Tujia And Miao Autonomous Prefecture
Guilin University of Electronic Technology
Dongfeng Liuzhou Motor Co Ltd
Original Assignee
Natural Resources And Planning Bureau Of Xiangxi Tujia And Miao Autonomous Prefecture
Guilin University of Electronic Technology
Dongfeng Liuzhou Motor Co Ltd
Priority date
Filing date
Publication date
Application filed by Natural Resources And Planning Bureau Of Xiangxi Tujia And Miao Autonomous Prefecture, Guilin University of Electronic Technology, and Dongfeng Liuzhou Motor Co Ltd
Priority to CN202311603877.5A
Publication of CN117574314A
Legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/0464 Convolutional networks [CNN, ConvNet]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention belongs to the technical field of decision-level fusion and discloses an information fusion method, device and equipment of a sensor, and a storage medium. When data to be fused of a plurality of fusion sensors are received, the three-dimensional detection vector of each fusion sensor is determined; the three-dimensional detection vectors of the fusion sensors are respectively input into a target twin neural model to obtain a similarity information vector; the similarity information vector and the three-dimensional detection vector of each fusion sensor are unified in size according to a target size unification model to obtain the detection unified vector of each fusion sensor and the similarity unified vector; and the target fusion information is determined according to the similarity unified vector, the detection unified vector of each fusion sensor and a target decision fusion model. By this method, the validity and integrity of the information are guaranteed during feature fusion, the limitations of single sensors are compensated, and the precision and efficiency of decision-level fusion of a plurality of sensors are greatly improved.

Description

Information fusion method, device and equipment of sensor and storage medium
Technical Field
The present invention relates to the field of decision-level fusion technologies, and in particular, to a method, an apparatus, a device, and a storage medium for information fusion of a sensor.
Background
Multi-sensor fusion is a basic task in the field of autonomous driving perception. Because each sensor has scenes in which it works well and scenes in which it works poorly, perception based on single-modality data alone has certain defects, and fusing the data of the individual sensors achieves the effect of complementary advantages. At present, the final characteristic information is mostly obtained by decision-level fusion, but in the prior art decision-level fusion is mostly implemented with traditional methods. The performance of multi-sensor decision-level fusion based on traditional algorithms is poor: the fusion result depends heavily on the individual detection results of the sensors, and the complementary advantages between the sensors cannot be realized.
The foregoing is provided merely for the purpose of facilitating understanding of the technical solutions of the present invention and is not intended to represent an admission that the foregoing is prior art.
Disclosure of Invention
The invention mainly aims to provide a sensor information fusion method, device, equipment and storage medium, aiming to solve the technical problem in the prior art of how to improve the accuracy and robustness of decision-level fusion across a plurality of sensors.
In order to achieve the above object, the present invention provides an information fusion method of a sensor, the method comprising the steps of:
when data to be fused of a plurality of fusion sensors are received, determining three-dimensional detection vectors of the fusion sensors;
respectively inputting three-dimensional detection vectors of the fusion sensors to the target twin neural model to obtain similarity information vectors;
performing size unification on the similarity information vector and the three-dimensional detection vector of each fusion sensor according to a target size unification model to obtain a detection unification vector and a similarity unification vector of each fusion sensor;
and determining target fusion information according to the similarity unified vector, the detection unified vector of each fusion sensor and the target decision fusion model.
Optionally, the determining the three-dimensional detection vector of each fusion sensor includes:
respectively carrying out target detection on data to be fused of each fusion sensor according to a target detection mode, and determining basic information of detection frames and the number of detection frames corresponding to each data to be fused;
determining three-dimensional detection vectors of the fusion sensors according to the basic information of the detection frames and the number of the detection frames corresponding to the data to be fused.
Optionally, the inputting the three-dimensional detection vectors of the fusion sensors to the target twin neural model to obtain the similarity information vector includes:
respectively inputting three-dimensional detection vectors of the fusion sensors to a convolutional neural network in a target twin neural model to obtain extracted feature vectors of the three-dimensional detection vectors;
performing difference calculation on the extracted feature vectors of the three-dimensional detection vectors to obtain feature difference values;
and obtaining a similarity information vector according to a preset activation function in the target twin neural model and the characteristic difference value.
Optionally, before the three-dimensional detection vectors of the fusion sensors are respectively input to the target twin neural model to obtain the similarity information vector, the method further includes:
determining a sample detection vector of each sample sensor according to the sample detection data set;
calculating the similarity between the sample detection vectors according to the sample detection vectors and a preset matching mode;
training the initial twin neural network according to the similarity among the detection vectors of all the samples and the detection vectors of all the samples to obtain a target twin neural model.
Optionally, the performing size unification on the similarity information vector and the three-dimensional detection vector of each fusion sensor according to the target size unification model to obtain a detection unification vector and a similarity unification vector of each fusion sensor includes:
respectively carrying out size adjustment on the similarity information vector and the three-dimensional detection vector of each fusion sensor according to a target size unified model;
obtaining feature vectors of a plurality of sizes corresponding to the similarity information vector and feature vectors of a plurality of sizes corresponding to the three-dimensional detection vectors according to the size adjustment result;
and respectively carrying out feature stitching on the feature vectors of a plurality of sizes corresponding to the similarity information vectors and the feature vectors of a plurality of sizes corresponding to the three-dimensional detection vectors to obtain detection unified vectors and similarity unified vectors of the fusion sensors.
Optionally, the determining the target fusion information according to the similarity unified vector, the detection unified vector of each fusion sensor and the target decision fusion model includes:
respectively inputting the similarity unified vector and the detection unified vector of each fusion sensor to a target feature extraction model for local feature extraction to obtain a similarity extraction feature vector of the similarity unified vector and a detection extraction feature vector of each detection unified vector;
inputting the similarity extraction feature vector and the detection extraction feature vector of each detection unified vector to a target decision fusion model for global feature extraction to obtain a decision fusion feature vector;
and carrying out information screening according to the decision fusion feature vector to obtain target fusion information.
Optionally, the information screening according to the decision fusion feature vector to obtain target fusion information includes:
determining a plurality of decision fusion detection information according to the decision fusion feature vector;
carrying out confidence calculation according to the decision fusion detection information, and determining the confidence of the decision fusion detection information;
and sequencing the confidence degrees of the decision fusion detection information, and determining target fusion information in the multiple decision fusion detection information according to the sequencing result.
In addition, in order to achieve the above object, the present invention also provides an information fusion device of a sensor, including:
the processing module is used for determining three-dimensional detection vectors of the fusion sensors when receiving data to be fused of the fusion sensors;
the input module is used for respectively inputting the three-dimensional detection vectors of the fusion sensors to the target twin neural model to obtain similarity information vectors;
the adjustment module is used for carrying out size unification on the similarity information vector and the three-dimensional detection vector of each fusion sensor according to the target size unification model to obtain a detection unification vector and a similarity unification vector of each fusion sensor;
and the processing module is also used for determining target fusion information according to the similarity unified vector, the detection unified vector of each fusion sensor and the target decision fusion model.
In addition, to achieve the above object, the present invention also proposes an information fusion apparatus of a sensor, the information fusion apparatus of the sensor comprising: a memory, a processor, and an information fusion program of the sensor stored on the memory and runnable on the processor, wherein the information fusion program of the sensor is configured to implement the steps of the information fusion method of the sensor described above.
In addition, in order to achieve the above object, the present invention also proposes a storage medium having stored thereon an information fusion program of a sensor, which when executed by a processor, implements the steps of the information fusion method of a sensor as described above.
When data to be fused of a plurality of fusion sensors are received, the three-dimensional detection vector of each fusion sensor is determined; the three-dimensional detection vectors of the fusion sensors are respectively input into the target twin neural model to obtain a similarity information vector; the similarity information vector and the three-dimensional detection vector of each fusion sensor are unified in size according to the target size unification model to obtain the detection unified vector of each fusion sensor and the similarity unified vector; and the target fusion information is determined according to the similarity unified vector, the detection unified vector of each fusion sensor and the target decision fusion model. In this method, the similarity information vector is obtained from the three-dimensional detection vector of each fusion sensor and the target twin neural model; the target size unification model is used to unify the sizes of the similarity information vector and the three-dimensional detection vector of each fusion sensor; and the resulting similarity unified vector and the detection unified vectors of the fusion sensors are combined with the target decision fusion model to obtain the target fusion information of the plurality of fusion sensors. This guarantees the validity and integrity of the information during feature fusion, compensates for the limitations of single sensors, and greatly improves the precision, efficiency and robustness of decision-level fusion of a plurality of sensors.
Drawings
FIG. 1 is a schematic diagram of a device for information fusion method of a sensor of a hardware running environment according to an embodiment of the present invention;
FIG. 2 is a flowchart of a first embodiment of an information fusion method of a sensor according to the present invention;
FIG. 3 is a schematic diagram of a twin neural network according to an embodiment of the information fusion method of the sensor of the present invention;
FIG. 4 is a schematic diagram of a convolutional neural network in a twin neural network according to an embodiment of the information fusion method of the sensor of the present invention;
FIG. 5 is a schematic diagram of a unified model network structure of target dimensions according to an embodiment of the information fusion method of the sensor of the present invention;
FIG. 6 is a schematic overall flow chart of an embodiment of an information fusion method of the sensor of the present invention;
FIG. 7 is a flowchart of a second embodiment of an information fusion method of the sensor of the present invention;
FIG. 8 is a schematic structural diagram of the blockConvA network of an embodiment of the information fusion method of the sensor of the present invention;
FIG. 9 is a schematic diagram of residual connection of an embodiment of a method for information fusion of a sensor according to the present invention;
FIG. 10 is a schematic diagram of a depth separable convolution operation of an embodiment of an information fusion method of a sensor according to the present invention;
FIG. 11 is a schematic diagram of an inverse residual structure of an embodiment of an information fusion method of a sensor according to the present invention;
FIG. 12 is a schematic diagram of an SE attention module according to an embodiment of the sensor information fusion method of the present invention;
FIG. 13 is a schematic structural diagram of the blockConvB network of an embodiment of the information fusion method of the sensor of the present invention;
FIG. 14 is a schematic structural diagram of the blockConvC network of an embodiment of the information fusion method of the sensor of the present invention;
FIG. 15 is a schematic diagram of a channel attention module according to an embodiment of a sensor information fusion method of the present invention;
FIG. 16 is a schematic diagram of a spatial attention module of an embodiment of a method for information fusion of a sensor according to the present invention;
FIG. 17 is a schematic view of a feature map of an embodiment of a method for information fusion of a sensor according to the present invention;
FIG. 18 is a diagram illustrating a multi-head attention mechanism according to an embodiment of a sensor information fusion method of the present invention;
FIG. 19 is a block diagram showing the construction of a first embodiment of the information fusion device of the sensor of the present invention.
The achievement of the objects, functional features and advantages of the present invention will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
Referring to fig. 1, fig. 1 is a schematic structural diagram of an information fusion device of a sensor of a hardware running environment according to an embodiment of the present invention.
As shown in fig. 1, the information fusion device of the sensor may include: a processor 1001, such as a central processing unit (Central Processing Unit, CPU), a communication bus 1002, a user interface 1003, a network interface 1004 and a memory 1005. The communication bus 1002 is used to enable connected communication between these components. The user interface 1003 may include a display (Display) and an input unit such as a keyboard (Keyboard), and may optionally further include a standard wired interface and a wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (e.g., a Wireless-Fidelity (Wi-Fi) interface). The memory 1005 may be a high-speed random access memory (Random Access Memory, RAM) or a stable nonvolatile memory (NVM), such as a disk memory. The memory 1005 may also optionally be a storage device separate from the processor 1001 described above.
It will be appreciated by those skilled in the art that the structure shown in fig. 1 does not constitute a limitation of the information fusion device of the sensor, which may include more or fewer components than shown, may combine certain components, or may have a different arrangement of components.
As shown in fig. 1, an operating system, a network communication module, a user interface module, and an information fusion program of a sensor may be included in the memory 1005 as one storage medium.
In the information fusion device of the sensor shown in fig. 1, the network interface 1004 is mainly used for data communication with a network server, and the user interface 1003 is mainly used for data interaction with a user. The processor 1001 and the memory 1005 of the present invention may be disposed in the information fusion device of the sensor, and the information fusion device of the sensor invokes, through the processor 1001, the information fusion program of the sensor stored in the memory 1005 and executes the information fusion method of the sensor provided by the embodiment of the present invention.
The embodiment of the invention provides a sensor information fusion method, and referring to fig. 2, fig. 2 is a flow chart of a first embodiment of the sensor information fusion method.
In this embodiment, the information fusion method of the sensor includes the following steps:
step S10: and when the data to be fused of the fusion sensors are received, determining the three-dimensional detection vector of each fusion sensor.
It should be noted that, the execution body of the embodiment is an information fusion device of a sensor, where the information fusion device of the sensor has functions of data processing, data communication, program running, and the like, and the information fusion device of the sensor may be an integrated controller, a control computer, and other devices with similar functions, and the embodiment is not limited to this.
It can be understood that a fusion sensor is a sensor whose information needs to be fused. The data acquired by each fusion sensor is that sensor's data to be fused, and target detection is performed on each set of data to be fused to obtain the three-dimensional detection vector of each fusion sensor. In this embodiment the three-dimensional detection vector of each fusion sensor can be represented as 1×N×9, where N is the number of detection frames and 9 is the nine pieces of detection frame information (the center point coordinates x, y, z of the detection frame, the length, width and height l, w, h, the yaw angle yaw, the confidence conf, and the class of the detection frame). The fusion sensors include, but are not limited to, cameras, laser radars and the like; this embodiment takes the data fusion of two fusion sensors as an example.
In a specific implementation, to ensure that an accurate three-dimensional detection vector is obtained, further, the determining a three-dimensional detection vector of each fusion sensor includes: respectively carrying out target detection on data to be fused of each fusion sensor according to a target detection mode, and determining basic information of detection frames and the number of detection frames corresponding to each data to be fused; and determining the three-dimensional detection vector of each fusion sensor according to the detection frame basic information and the detection frame quantity corresponding to each data to be fused.
It should be noted that the target detection method is used to perform target detection on the data to be fused of each fusion sensor and to determine the detection frame basic information corresponding to each set of data to be fused, where the detection frame basic information includes, but is not limited to, the center point coordinates x, y, z of the detection frame, the length, width and height l, w, h, the yaw angle yaw, the confidence conf, and the class; the target detection method refers to a preset program for detecting targets.
It can be understood that the 1×N×9 three-dimensional detection vector of each fusion sensor can be determined based on the detection frame basic information and the number of detection frames corresponding to each set of data to be fused.
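By way of illustration only, a minimal Python sketch of assembling such a 1×N×9 detection vector is given below; the dictionary field names and the helper function build_detection_vector are assumptions introduced here and are not part of the disclosed detection program.

```python
import numpy as np

def build_detection_vector(boxes):
    """Stack per-frame detection results into a 1 x N x 9 vector.

    `boxes` is assumed to be an iterable of dicts holding the nine fields named
    in the description: center x, y, z; size l, w, h; yaw; conf; cls.
    """
    rows = [[b["x"], b["y"], b["z"],
             b["l"], b["w"], b["h"],
             b["yaw"], b["conf"], b["cls"]] for b in boxes]
    return np.asarray(rows, dtype=np.float32).reshape(1, len(rows), 9)

# Example: two detection frames from one sensor -> shape (1, 2, 9)
demo = build_detection_vector([
    {"x": 1.0, "y": 0.5, "z": 0.0, "l": 4.2, "w": 1.8, "h": 1.5,
     "yaw": 0.1, "conf": 0.9, "cls": 0},
    {"x": 8.3, "y": -2.0, "z": 0.1, "l": 0.8, "w": 0.6, "h": 1.7,
     "yaw": 1.2, "conf": 0.7, "cls": 1},
])
print(demo.shape)  # (1, 2, 9)
```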
Step S20: and respectively inputting the three-dimensional detection vectors of the fusion sensors to the target twin neural model to obtain a similarity information vector.
It should be noted that the target twin neural model is obtained by training a twin neural network and is a model that can calculate the similarity information between the fusion sensors. The three-dimensional detection vectors of the fusion sensors are respectively input into the target twin neural model to obtain a similarity list of length N×M, which is expressed as a 1×NM×1 similarity information vector, where N and M are the numbers of detection frames of the fusion sensors.
It may be appreciated that, to accurately obtain the similarity information vector, further, the step of respectively inputting the three-dimensional detection vectors of the fusion sensors to the target twin neural model to obtain the similarity information vector includes: respectively inputting three-dimensional detection vectors of the fusion sensors to a convolutional neural network in a target twin neural model to obtain extracted feature vectors of the three-dimensional detection vectors; performing difference calculation on the extracted feature vectors of the three-dimensional detection vectors to obtain feature difference values; and obtaining a similarity information vector according to a preset activation function in the target twin nerve model and the characteristic difference value.
In a specific implementation, the twin neural network (Siamese neural network) is a neural network structure for measuring and comparing similarity. Its design is inspired by twin siblings, who are highly similar overall even though they differ in small details. A twin neural network is a coupled framework built from two artificial neural networks: the two inputs are fed into two neural networks with identical structure and shared weights, and each network maps its input into a new feature space to form a representation in that space. By comparing the difference between the two feature representations, the similarity of the two inputs can be measured; the structure of the network is shown in fig. 3.
It should be noted that, in a twin neural network, a convolutional neural network is commonly used to extract features from the input vectors, and the difference between the inputs can be represented by comparing the outputs of the convolutional neural networks. In this embodiment the input is one-dimensional data, so the CNN is designed overall as feature extraction by several one-dimensional convolutions. In image recognition, convolutional neural networks commonly use two-dimensional convolution, which slides a window over the feature map in both the width and height directions and performs a weighted summation at each position. One-dimensional convolution is slightly different: the convolution kernel is still two-dimensional, but the window slides in only one direction before the weighted summation. The detection frame information vector, with a network input of 1×9, has its features extracted by two 1×3 convolution layers, two 1×5 convolution layers and one 1×8 convolution layer. By setting appropriate stride and padding values for the convolution layers, the length and width of the input vector are kept unchanged while the number of channels is increased. The output of each convolution layer is processed by an activation function and passed to the next convolution layer. Each convolution layer is equivalent to a linear transformation, and the activation function is introduced to add nonlinearity to the neural network model. In this embodiment, to increase the convergence rate, the ReLU activation function is selected; the network structure is shown in fig. 4.
It can be understood that the three-dimensional detection vectors of the fusion sensors are respectively input into the convolutional neural network in the target twin neural model; after the five-layer convolution operation the size of the feature vector is still 1×9 and the number of channels has been increased to 256. At this point the length and width are compressed to 1×1 by a convolution kernel of size 1×9 with 256 channels, and the result is output to the subsequent Loss calculation module to calculate the difference between the two inputs. After passing through the convolutional neural network, the three-dimensional detection vectors of the fusion sensors respectively output 1×1×256 feature vectors, namely the extracted feature vectors of the three-dimensional detection vectors.
In a specific implementation, the Loss calculation module in the target twin neural model calculates the difference between the extracted feature vectors and takes its absolute value to represent the difference between the input feature spaces; this absolute difference is the feature difference value.
After the feature difference value is obtained, it is passed to a fully connected layer in the target twin neural model, which outputs one-dimensional data; this output is adjusted to lie between 0 and 1 by the preset activation function, yielding the similarity information vector between the fusion sensors.
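A minimal PyTorch-style sketch of such a weight-shared branch with the absolute-difference head is given below; the framework, the exact kernel sizes, the channel widths and the class names are assumptions for illustration and do not fix the network of this embodiment.

```python
import torch
import torch.nn as nn

class SiameseBranch(nn.Module):
    """One weight-shared 1-D conv branch; kernel sizes and channels are illustrative."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(64, 128, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(128, 256, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(256, 256, kernel_size=9),  # compress length 9 -> 1
        )

    def forward(self, x):                         # x: (batch, 1, 9), one detection frame
        return self.features(x).flatten(1)        # (batch, 256)

class SiameseSimilarity(nn.Module):
    """Shared branch, absolute feature difference, FC + sigmoid -> score in [0, 1]."""
    def __init__(self):
        super().__init__()
        self.branch = SiameseBranch()              # the same weights see both inputs
        self.head = nn.Linear(256, 1)

    def forward(self, a, b):
        diff = torch.abs(self.branch(a) - self.branch(b))
        return torch.sigmoid(self.head(diff))      # similarity score per pair

# Pairing each detection frame of one sensor with each frame of the other
# sensor yields the N*M similarity list described above.
model = SiameseSimilarity()
score = model(torch.randn(1, 1, 9), torch.randn(1, 1, 9))
print(score.shape)  # torch.Size([1, 1])
```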
It may be understood that, in order to obtain an accurate target twin neural model, further, before the three-dimensional detection vectors of the fusion sensors are respectively input to the target twin neural model to obtain the similarity information vector, the method further includes: determining a sample detection vector of each sample sensor according to the sample detection data set; calculating the similarity between the sample detection vectors according to the sample detection vectors and a preset matching mode; training the initial twin neural network according to the similarity among the detection vectors of all the samples and the detection vectors of all the samples to obtain a target twin neural model.
In a specific implementation, the sample detection data set includes a plurality of sample sensors and the sample data collected by each sample sensor. Target detection is performed on the sample data of each sample sensor to obtain the sample detection vectors of each sample sensor, the sample detection vectors are matched using a preset matching mode, and the similarity between the sample detection vectors is determined: the similarity between two matched sample detection vectors is 1, otherwise it is 0. The initial twin neural network is trained with the sample detection vectors and the similarities between them, and the network parameters are trained by a back-propagation algorithm to obtain the target twin neural model. In this embodiment, the preset matching mode may use the Hungarian matching algorithm or other methods, which is not limited in this embodiment.
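As one possible illustration of this label-generation step, the sketch below pairs the detections of two sample sensors with the Hungarian algorithm via scipy.optimize.linear_sum_assignment; the center-distance cost and the gating threshold are assumptions, since this embodiment only requires some preset matching mode.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def pairwise_similarity_labels(det_a, det_b, max_center_dist=2.0):
    """Label matched detection pairs with 1 and all other pairs with 0.

    det_a: (N, 9) detections of sensor A; det_b: (M, 9) detections of sensor B.
    The cost used here (center-point distance) and the gating threshold are
    illustrative assumptions.
    """
    centers_a, centers_b = det_a[:, :3], det_b[:, :3]
    cost = np.linalg.norm(centers_a[:, None, :] - centers_b[None, :, :], axis=-1)

    rows, cols = linear_sum_assignment(cost)       # Hungarian assignment
    labels = np.zeros((len(det_a), len(det_b)), dtype=np.float32)
    for r, c in zip(rows, cols):
        if cost[r, c] <= max_center_dist:          # reject far-apart "matches"
            labels[r, c] = 1.0
    return labels                                  # flattened later into the N*M list
```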
Step S30: and carrying out size unification on the similarity information vector and the three-dimensional detection vector of each fusion sensor according to a target size unification model to obtain a detection unification vector and a similarity unification vector of each fusion sensor.
It should be noted that the target size unification model converts each three-dimensional detection vector and the similarity information vector to the same size so that features can be conveniently extracted afterwards. The three-dimensional detection vectors of the fusion sensors and the similarity information vector are therefore respectively input into the target size unification model, which outputs size-unified vectors: the size-unified three-dimensional detection vector of each fusion sensor is that sensor's detection unified vector, and the size-unified similarity information vector is the similarity unified vector.
It can be understood that, in order to ensure accuracy of size unification, further, the size unification is performed on the similarity information vector and the three-dimensional detection vector of each fusion sensor according to the target size unification model to obtain a detection unification vector and a similarity unification vector of each fusion sensor, which includes: respectively carrying out size adjustment on the similarity information vector and the three-dimensional detection vector of each fusion sensor according to a target size unified model; obtaining feature vectors of a plurality of sizes corresponding to the similarity information vector and feature vectors of a plurality of sizes corresponding to the three-dimensional detection vectors according to the size adjustment result; and respectively carrying out feature stitching on the feature vectors of a plurality of sizes corresponding to the similarity information vectors and the feature vectors of a plurality of sizes corresponding to the three-dimensional detection vectors to obtain detection unified vectors and similarity unified vectors of the fusion sensors.
In a specific implementation, the target size unification model follows the idea of an SPP (Spatial Pyramid Pooling) network and uses pooling operations to integrate inputs of different sizes into the same size. SPP is implemented by adding an SPP layer after a convolution layer; the SPP layer pools the feature map into a feature vector of a preset fixed length, which is then fed into the subsequent fully connected layer. SPP divides the convolved feature map into regions of different sizes at different scales and then combines them into a fixed shape. SPP has three main advantages in convolutional neural networks: it can ignore the input size and produce a fixed-length output; it pools with multi-level spatial bins rather than a sliding window of a single scale; and pooling at different scales can effectively improve the recognition accuracy. The network structure of the target size unification model in this embodiment is therefore as shown in fig. 5.
It should be noted that blockA in the target size unification model is the first module and blockB is the second module. The first module is used to adjust the three-dimensional detection vector of each fusion sensor: first, one 1×3 convolution layer and one 1×5 convolution layer extract features along the one-dimensional direction and increase the number of channels; then four pooling layers at different levels produce feature vectors of the different sizes 1×1, 1×2, 1×4 and 1×8, which are spliced into a 1×15×128 feature vector. blockB is used to adjust the size of the similarity vector; it differs from blockA in that, because the similarity vector is larger, more feature information is extracted by convolution operations with more layers.
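A minimal sketch of a blockA-style size unification block is given below, assuming PyTorch and treating the nine detection-frame values as input channels; the convolution widths and class name are illustrative assumptions, while the pooled output lengths 1, 2, 4 and 8 follow the description.

```python
import torch
import torch.nn as nn

class BlockA(nn.Module):
    """Unify any input length to 1 + 2 + 4 + 8 = 15 pooled positions (SPP idea)."""
    def __init__(self, in_ch=9):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(in_ch, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(64, 128, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.pools = nn.ModuleList([nn.AdaptiveMaxPool1d(s) for s in (1, 2, 4, 8)])

    def forward(self, x):                  # x: (batch, in_ch, N) for any N
        feat = self.conv(x)                # (batch, 128, N)
        pooled = [pool(feat) for pool in self.pools]
        return torch.cat(pooled, dim=2)    # (batch, 128, 15) regardless of N

block = BlockA()
print(block(torch.randn(1, 9, 12)).shape)   # torch.Size([1, 128, 15])
print(block(torch.randn(1, 9, 40)).shape)   # torch.Size([1, 128, 15])
```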
Step S40: and determining target fusion information according to the similarity unified vector, the detection unified vector of each fusion sensor and the target decision fusion model.
It should be noted that the similarity unified vector and the detection unified vectors of the fusion sensors, now all of the same size, are each passed through a convolution block for feature extraction so that high-dimensional features are mined. The feature vectors corresponding to the similarity unified vector and to the detection unified vectors of the fusion sensors are then assembled into a sequence and input into the multi-head self-attention module of the target decision fusion model to further extract global correlation information. The extracted feature vectors are passed to a fully connected layer which, after activation-function processing, outputs the information of 50 fused detection frames, from which the detection frames with high confidence are screened out as the target fusion information, as shown in fig. 6.
In this embodiment, when data to be fused of a plurality of fusion sensors are received, the three-dimensional detection vector of each fusion sensor is determined; the three-dimensional detection vectors of the fusion sensors are respectively input into the target twin neural model to obtain a similarity information vector; the similarity information vector and the three-dimensional detection vector of each fusion sensor are unified in size according to the target size unification model to obtain the detection unified vector of each fusion sensor and the similarity unified vector; and the target fusion information is determined according to the similarity unified vector, the detection unified vector of each fusion sensor and the target decision fusion model. In this method, the similarity information vector is obtained from the three-dimensional detection vector of each fusion sensor and the target twin neural model; the target size unification model is used to unify the sizes of the similarity information vector and the three-dimensional detection vector of each fusion sensor; and the resulting similarity unified vector and the detection unified vectors of the fusion sensors are combined with the target decision fusion model to obtain the target fusion information of the plurality of fusion sensors. This guarantees the validity and integrity of the information during feature fusion, compensates for the limitations of single sensors, and greatly improves the precision, efficiency and robustness of decision-level fusion of a plurality of sensors.
Referring to fig. 7, fig. 7 is a flowchart of a second embodiment of a method for information fusion of a sensor according to the present invention.
Based on the first embodiment, in the information fusion method of the sensor of this embodiment, step S40 includes:
step S41: and respectively inputting the similarity unified vector and the detection unified vector of each fusion sensor to a target feature extraction model for local feature extraction to obtain a similarity extraction feature vector of the similarity unified vector and a detection extraction feature vector of each detection unified vector.
It should be noted that the target feature extraction model refers to a feature extraction network based on a convolutional neural network, and performs local feature extraction on the similarity unified vector and the detection unified vector of each fusion sensor through a plurality of convolutional blocks, so as to obtain a similarity extracted feature vector of the similarity unified vector and a detection extracted feature vector of each detection unified vector, and provide richer feature information for a multi-head self-attention mechanism network.
It can be understood that in this embodiment, in order to fuse the data to be fused of two fusion sensors, there are three convolution blocks, which respectively perform feature extraction on the detection unified vector of each fusion sensor and on the similarity unified vector. The network structure of the convolution block blockConvA is shown in fig. 8: the block is broadened in width so that convolution or pooling is carried out at several different scales, and the results are finally spliced together in the feature dimension. Aggregating filters of different sizes along the width direction not only enlarges the receptive field of the network but also improves the robustness of the neural network. Superimposing several filters in one layer means that a single convolution layer can produce features at several different scales, and these richer features allow the network itself to learn during training which features are more important, which improves the feature expression capability of the final network. blockConvA also replaces a large convolution kernel with several small convolution kernels, which reduces the parameter count of the network while deepening the network and increasing its complexity. In this network the 1×5 convolution kernel is replaced by two 1×3 convolution kernels, which effectively reduces the amount of computation in the convolution operation and improves the calculation speed. Compared with a conventional deep neural network, a residual block is defined over two interconnected layers, with the structure shown in fig. 9: a nonlinear transformation function describes the input and output of the block, i.e. the input is X and the output is F(X), where F generally includes operations such as convolution and activation. The input is then added to the output, so that the original output becomes the superposition of the input and a nonlinear transformation of the input. This makes the transfer of information progressive, allows errors to be propagated backward throughout training, and makes the training process easier.
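A minimal sketch of the residual connection described above, written for the one-dimensional features used in this embodiment and assuming PyTorch, is given below; the body F chosen here (two convolutions with an activation) is an assumption for illustration.

```python
import torch
import torch.nn as nn

class Residual1d(nn.Module):
    """y = F(x) + x for 1-D features, in the spirit of the residual block of fig. 9."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv1d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return torch.relu(self.body(x) + x)   # the skip connection eases gradient flow

print(Residual1d(128)(torch.randn(1, 128, 15)).shape)  # torch.Size([1, 128, 15])
```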
In a specific implementation, blockConvA uses depthwise separable convolution instead of the traditional convolution operation, processing the input with the less computationally intensive combination of Depthwise Convolution (DW) and Pointwise Convolution (PW). In a traditional convolution operation the number of kernels in each filter equals the number of input channels and the number of filters equals the number of output channels, whereas depthwise separable convolution decomposes the convolution into the two steps DW and PW. Unlike traditional convolution, each filter of the DW convolution has only one kernel and processes one input channel, and the number of filters equals the number of channels of the input vector. A 1×15×256 vector therefore keeps its size and channel count after the DW convolution; however, the DW convolution does not effectively use the feature information of different channels at the same spatial location, so a PW convolution is needed to combine these feature maps into new feature maps. The PW convolution is similar to traditional convolution and performs a weighted summation of the previous feature maps in the channel direction to generate new feature maps. Taking a 1×15×256 vector as an example, the parameter count of a traditional 1×3 convolution over 256 channels is 256×256×1×3 = 196,608, while the parameter count of the depthwise separable convolution is 256×1×3 + 256×256×1×1 = 66,304, only about 33.7% of the former; the depthwise separable convolution operation is shown in fig. 10. The depthwise separable convolution uses an inverted residual structure to accelerate the convergence of the model: because the channel count of the DW convolution equals the number of input channels, which is inherently small, compressing the channels first, as in the bottleneck of a residual network, would lose part of the feature information. This embodiment therefore adopts an inverted residual structure, in which the dimension is first raised by a 1×1 convolution and the DW convolution is applied afterwards, as shown in fig. 11.
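The following sketch, assuming PyTorch, reproduces the depthwise-plus-pointwise decomposition and checks the parameter counts quoted above; the module name and the bias-free layers are assumptions for illustration.

```python
import torch.nn as nn

class DepthwiseSeparable1d(nn.Module):
    """DW conv (one kernel per channel) followed by a PW (1x1) conv, as described above."""
    def __init__(self, channels=256, kernel_size=3):
        super().__init__()
        self.dw = nn.Conv1d(channels, channels, kernel_size,
                            padding=kernel_size // 2, groups=channels, bias=False)
        self.pw = nn.Conv1d(channels, channels, kernel_size=1, bias=False)

    def forward(self, x):                       # x: (batch, 256, 15)
        return self.pw(self.dw(x))

sep = DepthwiseSeparable1d()
dense = nn.Conv1d(256, 256, kernel_size=3, padding=1, bias=False)

count = lambda m: sum(p.numel() for p in m.parameters())
print(count(dense))               # 196608 = 256*256*1*3
print(count(sep))                 # 66304  = 256*1*3 + 256*256*1*1
print(count(sep) / count(dense))  # about 0.337
```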
It should be noted that blockConvA adds an SE channel attention module to the depthwise separable convolution operation so as to attend to the more important channels in the feature vector. The core idea of the SE channel attention module is to apply an adaptive importance weight to each channel (each dimension of the feature map) in order to enhance the transfer of useful information and suppress unimportant information. The module comprises two main steps: Squeeze and Excitation. In this network, the output of the DW convolution is compressed by average pooling to obtain the global information of each channel; two fully connected layers then learn how strongly each channel should be enhanced or suppressed, and after the last fully connected layer a 1×1 weight vector with one value per input channel is output. Multiplying this weight vector with the output of the DW convolution yields a new feature vector. The SE channel attention module thus strengthens the transmission of useful information and weakens the influence of unimportant information, so that the model can focus more on the features most helpful to the task, improving its representation capability and performance. A structure diagram of the depthwise separable convolution with the SE channel attention module added is shown in fig. 12.
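A minimal sketch of such an SE channel attention module for one-dimensional features is given below, assuming PyTorch; the reduction ratio and class name are assumptions, while the squeeze (average pooling), the two fully connected layers and the sigmoid re-weighting follow the description.

```python
import torch
import torch.nn as nn

class SEModule1d(nn.Module):
    """Squeeze-and-Excitation for 1-D features: per-channel weights in (0, 1)."""
    def __init__(self, channels=256, reduction=16):
        super().__init__()
        self.squeeze = nn.AdaptiveAvgPool1d(1)          # global information per channel
        self.excite = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):                               # x: (batch, C, L)
        w = self.squeeze(x).flatten(1)                  # (batch, C)
        w = self.excite(w).unsqueeze(-1)                # (batch, C, 1)
        return x * w                                    # re-weight each channel

print(SEModule1d()(torch.randn(2, 256, 15)).shape)  # torch.Size([2, 256, 15])
```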
It will be appreciated that the network structure of blockConvB is similar to that of blockConvA except that the output channels are different; the network structure of blockConvB is shown in fig. 13, and the network structure of blockConvC is shown in fig. 14. A further difference between blockConvB and blockConvA is that the SE channel attention module is replaced with a CBAM attention module. The CBAM attention module consists of two main components, channel attention and spatial attention, which work together to enhance the representation of the model by weighting the feature map channel by channel and position by position. Channel attention enhances the representation by learning the importance of each channel; unlike SE, the CBAM attention module introduces two pooling layers at this step, one max pooling and one average pooling, which aggregate the spatial information of the feature map and compress its spatial dimension. During back-propagation, average pooling affects every feature point on the feature map, whereas max pooling only produces gradient updates at the locations of the maxima in the feature map. The channel attention part is shown in fig. 15.
In a specific implementation, the spatial attention part performs max pooling and average pooling over the channels at each feature point, concatenates the two results, reduces them to a single channel through a convolution operation, generates the spatial attention feature through an activation function, and multiplies it with the input features to obtain the final module output; the spatial attention part is shown in fig. 16. After the three convolution blocks, the feature vectors of the three input vectors are obtained and sent to the Multi-Head Self-Attention network for global feature extraction, after which the detection frame information is regressed.
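A minimal sketch of the CBAM-style channel and spatial attention for one-dimensional features is given below, assuming PyTorch; the reduction ratio, the spatial kernel size and the class names are assumptions, while the paired max/average pooling in both branches follows the description.

```python
import torch
import torch.nn as nn

class ChannelAttention1d(nn.Module):
    """CBAM channel attention: max pooling and average pooling feed a shared MLP."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):                                    # x: (batch, C, L)
        avg = self.mlp(x.mean(dim=2))
        mx = self.mlp(x.amax(dim=2))
        return x * torch.sigmoid(avg + mx).unsqueeze(-1)

class SpatialAttention1d(nn.Module):
    """CBAM spatial attention: per-position max/avg over channels, conv to one map."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv1d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        stat = torch.cat([x.mean(dim=1, keepdim=True),
                          x.amax(dim=1, keepdim=True)], dim=1)   # (batch, 2, L)
        return x * torch.sigmoid(self.conv(stat))                # (batch, C, L)

class CBAM1d(nn.Module):
    def __init__(self, channels=256):
        super().__init__()
        self.channel = ChannelAttention1d(channels)
        self.spatial = SpatialAttention1d()

    def forward(self, x):
        return self.spatial(self.channel(x))

print(CBAM1d()(torch.randn(2, 256, 15)).shape)  # torch.Size([2, 256, 15])
```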
Step S42: and inputting the similar extraction feature vector and the detection extraction feature vector of each detection unified vector to a target decision fusion model for global feature extraction to obtain a decision fusion feature vector.
It should be noted that the target decision fusion model is a data fusion network based on Multi-Head Self-Attention. The label files of the KITTI data set are used as ground truth, the detection results of the sample sensors, for example the detection results of a camera and a laser radar, are used as training samples, and the network parameters are trained by a back-propagation algorithm.
It will be appreciated that the self-attention mechanism is a method for extracting relationships and features between elements in a sequence, and multi-head self-attention extends it by introducing multiple attention heads so that the model can learn attention weights from different representation subspaces. The network forms the three feature vectors extracted by the preceding convolutional neural networks into a sequence and inputs the sequence into the self-attention network. For each attention head, the input feature vectors are mapped into three different subspaces by different linear transformations: query, key and value; this process is shown in fig. 17. The product of the query vector and all the keys is calculated and divided by a scaling factor to obtain the attention scores, which represent the correlation between the query vector and each key vector. The attention scores are normalized by a softmax function to obtain the attention weight of the query with respect to each key, and the values are weighted and summed by these attention weights to obtain the attention representation of that head for the query. The representations of all attention heads are stitched together in the feature dimension to form the final multi-head attention representation, whose structure is shown in fig. 18.
In a specific implementation, the target decision fusion model can learn the relationships among the three input features, capture information of different scales and semantic levels, and extract richer feature representations for the following fully connected layer to regress accurate detection frames. The multi-head self-attention network outputs a 3×850 feature vector; after normalization by a Layer-Norm layer it is input to a fully connected layer, which outputs a vector of dimension 1×50×9, where 50 is the number of fused detection frames and 9 is the information of each detection frame, comprising the center point coordinates x, y, z, the length, width and height l, w, h, the yaw angle yaw, the confidence conf, and the class.
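A minimal sketch of such a fusion head is given below, assuming PyTorch's nn.MultiheadAttention; the number of heads and the use of that built-in module are assumptions, while the three-token 850-dimensional sequence, the Layer-Norm, the fully connected layer and the 1×50×9 output follow the description.

```python
import torch
import torch.nn as nn

class DecisionFusionHead(nn.Module):
    """Multi-head self-attention over the three feature tokens, then regression of
    50 fused detection frames with 9 values each."""
    def __init__(self, token_dim=850, num_heads=5, num_boxes=50):
        super().__init__()
        self.attn = nn.MultiheadAttention(token_dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(token_dim)
        self.fc = nn.Linear(3 * token_dim, num_boxes * 9)
        self.num_boxes = num_boxes

    def forward(self, tokens):                            # tokens: (batch, 3, 850)
        attn_out, _ = self.attn(tokens, tokens, tokens)   # global correlation info
        x = self.norm(attn_out).flatten(1)                # (batch, 3*850)
        return self.fc(x).view(-1, self.num_boxes, 9)     # (batch, 50, 9)

head = DecisionFusionHead()
print(head(torch.randn(1, 3, 850)).shape)  # torch.Size([1, 50, 9])
```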
It should be noted that the similarity extraction feature vector and the detection extraction feature vectors of the detection unified vectors are input into the target decision fusion model for global feature extraction, and the resulting 1×50×9 vector is the decision fusion feature vector.
Step S43: and carrying out information screening according to the decision fusion feature vector to obtain target fusion information.
It should be noted that, in order to obtain accurate target fusion information based on the decision fusion feature vector, further, the information screening is performed according to the decision fusion feature vector to obtain target fusion information, including: determining a plurality of decision fusion detection information according to the decision fusion feature vector; carrying out confidence calculation according to the decision fusion detection information, and determining the confidence of the decision fusion detection information; and sequencing the confidence degrees of the decision fusion detection information, and determining target fusion information in the multiple decision fusion detection information according to the sequencing result.
It can be understood that a plurality of fused output detection frames are determined from the decision fusion feature vector; these fused output detection frames and their corresponding detection frame information are the decision fusion detection information. The confidence of each fused output detection frame is calculated, the confidences of the fused output detection frames are sorted, and the fused output detection frame with the highest confidence, together with its corresponding detection frame information, is taken as the target fusion information.
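A minimal sketch of this confidence screening is given below; placing the confidence at index 7 of each 9-value detection frame and keeping only the single best frame are assumptions consistent with the description above.

```python
import torch

def screen_by_confidence(decision_boxes, conf_index=7, top_k=1):
    """Sort fused detection frames by confidence and keep the best ones.

    decision_boxes: (50, 9) decision fusion detection information, assuming the
    field order x, y, z, l, w, h, yaw, conf, class.
    """
    order = torch.argsort(decision_boxes[:, conf_index], descending=True)
    return decision_boxes[order[:top_k]]           # the target fusion information

fused = torch.rand(50, 9)
print(screen_by_confidence(fused, top_k=1).shape)  # torch.Size([1, 9])
```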
In the embodiment, the similarity unified vector and the detection unified vector of each fusion sensor are respectively input into a target feature extraction model to perform local feature extraction, so that a similarity extraction feature vector of the similarity unified vector and a detection extraction feature vector of each detection unified vector are obtained; inputting the similar extraction feature vector and the detection extraction feature vector of each detection unified vector to a target decision fusion model for global feature extraction to obtain a decision fusion feature vector; and carrying out information screening according to the decision fusion feature vector to obtain target fusion information. By the method, accuracy, efficiency and robustness of decision-level fusion are guaranteed.
In addition, the embodiment of the invention also provides a storage medium, wherein the storage medium stores an information fusion program of the sensor, and the information fusion program of the sensor realizes the steps of the information fusion method of the sensor when being executed by a processor.
Referring to fig. 19, fig. 19 is a block diagram showing the structure of a first embodiment of the information fusion device of the sensor of the present invention.
As shown in fig. 19, an information fusion device of a sensor according to an embodiment of the present invention includes:
the processing module 10 is configured to determine a three-dimensional detection vector of each fusion sensor when receiving data to be fused of the plurality of fusion sensors.
The input module 20 is configured to input the three-dimensional detection vectors of the fusion sensors to the target twin neural model, respectively, to obtain a similarity information vector.
And the adjustment module 30 is configured to perform size unification on the similarity information vector and the three-dimensional detection vector of each fusion sensor according to a target size unification model, so as to obtain a detection unification vector and a similarity unification vector of each fusion sensor.
The processing module 10 is further configured to determine target fusion information according to the similarity unified vector, the detection unified vector of each fusion sensor, and a target decision fusion model.
In this embodiment, when data to be fused of a plurality of fusion sensors are received, the three-dimensional detection vector of each fusion sensor is determined; the three-dimensional detection vectors of the fusion sensors are respectively input into the target twin neural model to obtain a similarity information vector; the similarity information vector and the three-dimensional detection vector of each fusion sensor are unified in size according to the target size unification model to obtain the detection unified vector of each fusion sensor and the similarity unified vector; and the target fusion information is determined according to the similarity unified vector, the detection unified vector of each fusion sensor and the target decision fusion model. In this method, the similarity information vector is obtained from the three-dimensional detection vector of each fusion sensor and the target twin neural model; the target size unification model is used to unify the sizes of the similarity information vector and the three-dimensional detection vector of each fusion sensor; and the resulting similarity unified vector and the detection unified vectors of the fusion sensors are combined with the target decision fusion model to obtain the target fusion information of the plurality of fusion sensors. This guarantees the validity and integrity of the information during feature fusion, compensates for the limitations of single sensors, and greatly improves the precision, efficiency and robustness of decision-level fusion of a plurality of sensors.
In an embodiment, the processing module 10 is further configured to perform target detection on to-be-fused data of each fusion sensor according to a target detection manner, and determine basic detection frame information and the number of detection frames corresponding to each to-be-fused data;
and determining the three-dimensional detection vector of each fusion sensor according to the detection frame basic information and the detection frame quantity corresponding to each data to be fused.
In an embodiment, the input module 20 is further configured to input three-dimensional detection vectors of each fusion sensor to a convolutional neural network in the target twin neural model, so as to obtain extracted feature vectors of each three-dimensional detection vector;
performing difference calculation on the extracted feature vectors of the three-dimensional detection vectors to obtain feature difference values;
and obtaining a similarity information vector according to a preset activation function in the target twin neural model and the feature difference value.
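As a minimal sketch of such a twin structure, assuming a shared one-dimensional convolutional encoder and a sigmoid as the preset activation function (both assumptions of this sketch, not requirements of the embodiment), the similarity information vector could be computed roughly as follows in PyTorch:

    import torch
    import torch.nn as nn

    class TwinSimilarityNet(nn.Module):
        # One shared CNN encoder is applied to two detection vectors; the absolute
        # difference of the extracted feature vectors is passed through an
        # activation function to produce the similarity information vector.
        def __init__(self, in_channels=8, feat_dim=64):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv1d(in_channels, 32, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.AdaptiveAvgPool1d(1),
                nn.Flatten(),
                nn.Linear(32, feat_dim),
            )
            self.head = nn.Sequential(nn.Linear(feat_dim, feat_dim), nn.Sigmoid())

        def forward(self, vec_a, vec_b):
            feat_a = self.encoder(vec_a)       # extracted feature vector of sensor A
            feat_b = self.encoder(vec_b)       # extracted feature vector of sensor B
            diff = torch.abs(feat_a - feat_b)  # feature difference value
            return self.head(diff)             # similarity information vector

    net = TwinSimilarityNet()
    vec_a = torch.randn(1, 8, 32)  # e.g. camera detection vector (batch, fields, boxes)
    vec_b = torch.randn(1, 8, 32)  # e.g. lidar detection vector
    print(net(vec_a, vec_b).shape)  # torch.Size([1, 64])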
In an embodiment, the input module 20 is further configured to determine a sample detection vector of each sample sensor according to the sample detection data set;
calculating the similarity between the sample detection vectors according to the sample detection vectors and a preset matching mode;
training the initial twin neural network according to the similarity among the detection vectors of all the samples and the detection vectors of all the samples to obtain a target twin neural model.
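A correspondingly minimal training loop, assuming the TwinSimilarityNet sketch above, pairwise similarity labels produced by the preset matching mode (for example an IoU-style score), and a mean-squared-error objective (all assumptions of this sketch), might look as follows:

    import torch
    import torch.nn as nn

    def train_twin(model, loader, epochs=10, lr=1e-3):
        # loader yields (sample_vec_a, sample_vec_b, similarity_label) triples built
        # from the sample detection data set and the preset matching mode.
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        loss_fn = nn.MSELoss()
        for _ in range(epochs):
            for vec_a, vec_b, sim_label in loader:
                sim_pred = model(vec_a, vec_b).mean(dim=1)  # collapse to one score per pair
                loss = loss_fn(sim_pred, sim_label)
                opt.zero_grad()
                loss.backward()  # backpropagation through the shared encoder
                opt.step()
        return model

    # Tiny synthetic example: four random pairs with random similarity labels.
    pairs = [(torch.randn(1, 8, 32), torch.randn(1, 8, 32), torch.rand(1)) for _ in range(4)]
    trained = train_twin(TwinSimilarityNet(), pairs, epochs=2)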
In an embodiment, the adjustment module 30 is further configured to adjust the sizes of the similarity information vector and the three-dimensional detection vector of each fusion sensor according to the target size unification model;
obtaining feature vectors of a plurality of sizes corresponding to the similarity information vector and feature vectors of a plurality of sizes corresponding to the three-dimensional detection vectors according to the size adjustment result;
and respectively carrying out feature stitching on the feature vectors of a plurality of sizes corresponding to the similarity information vectors and the feature vectors of a plurality of sizes corresponding to the three-dimensional detection vectors to obtain detection unified vectors and similarity unified vectors of the fusion sensors.
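One possible reading of this size-unification step, interpreting it as multi-scale resampling followed by feature stitching by concatenation (an assumption of this sketch), is illustrated below:

    import torch
    import torch.nn.functional as F

    def unify_size(vec, sizes=(16, 32, 64)):
        # Resample a (batch, channels, length) vector to several fixed lengths and
        # stitch the resulting feature vectors together into one unified vector.
        pieces = [F.interpolate(vec, size=s, mode="linear", align_corners=False)
                  for s in sizes]
        return torch.cat([p.flatten(start_dim=1) for p in pieces], dim=1)

    detection_vec = torch.randn(1, 8, 32)     # a sensor's three-dimensional detection vector
    similarity_vec = torch.randn(1, 1, 64)    # similarity information vector
    det_unified = unify_size(detection_vec)   # detection unified vector, shape (1, 896)
    sim_unified = unify_size(similarity_vec)  # similarity unified vector, shape (1, 112)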
In an embodiment, the processing module 10 is further configured to input the similarity unified vector and the detection unified vectors of the fusion sensors to a target feature extraction model to perform local feature extraction, so as to obtain a similarity extracted feature vector of the similarity unified vector and a detection extracted feature vector of the detection unified vectors;
inputting the similarity extracted feature vector and the detection extracted feature vector of each detection unified vector to a target decision fusion model for global feature extraction to obtain a decision fusion feature vector;
and carrying out information screening according to the decision fusion feature vector to obtain target fusion information.
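The following sketch shows one way such a two-stage extraction could be wired up, with a shared fully connected "local" extractor standing in for the target feature extraction model and a "global" extractor standing in for the target decision fusion model; the layer sizes and the use of plain linear layers are assumptions of this sketch.

    import torch
    import torch.nn as nn

    class DecisionFusion(nn.Module):
        # Local feature extraction is applied to the similarity unified vector and to
        # each sensor's detection unified vector; the concatenated local features are
        # then passed through a global extractor to give the decision fusion feature vector.
        def __init__(self, in_dim=896, num_sensors=2, local_dim=128, fused_dim=256):
            super().__init__()
            self.local = nn.Sequential(nn.Linear(in_dim, local_dim), nn.ReLU())
            self.global_ = nn.Sequential(
                nn.Linear(local_dim * (num_sensors + 1), fused_dim), nn.ReLU())

        def forward(self, sim_unified, det_unified_list):
            feats = [self.local(sim_unified)] + [self.local(v) for v in det_unified_list]
            return self.global_(torch.cat(feats, dim=1))

    fusion = DecisionFusion()
    sim_unified = torch.randn(1, 896)
    det_unified_list = [torch.randn(1, 896), torch.randn(1, 896)]
    print(fusion(sim_unified, det_unified_list).shape)  # torch.Size([1, 256])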
In an embodiment, the processing module 10 is further configured to determine a plurality of decision fusion detection information according to the decision fusion feature vector;
carrying out confidence calculation according to the decision fusion detection information, and determining the confidence of the decision fusion detection information;
and sequencing the confidence degrees of the decision fusion detection information, and determining target fusion information in the multiple decision fusion detection information according to the sequencing result.
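Read at its simplest, the screening step could amount to ranking candidate fusion results by confidence and keeping the highest-scoring ones; the sketch below makes that assumption explicit and is not the only possible reading.

    def screen_fusion_results(candidates, keep=3):
        # candidates: decision fusion detection information, each entry carrying a
        # confidence value; keep the highest-confidence entries as target fusion information.
        ranked = sorted(candidates, key=lambda c: c["confidence"], reverse=True)
        return ranked[:keep]

    candidates = [
        {"box": [1.0, 2.0, 0.5], "label": "car",        "confidence": 0.92},
        {"box": [8.1, -1.0, 0.4], "label": "pedestrian", "confidence": 0.35},
        {"box": [8.0, -1.1, 0.4], "label": "car",        "confidence": 0.61},
    ]
    print(screen_fusion_results(candidates, keep=2))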
It should be understood that the foregoing is illustrative only and is not limiting; in specific applications, those skilled in the art may make settings as needed, and the present invention is not limited thereto.
It should be understood that, although the steps in the flowcharts in the embodiments of the present application are shown in the order indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated herein, the order of the steps is not strictly limited, and they may be performed in other orders. Moreover, at least some of the steps in the figures may include multiple sub-steps or stages that are not necessarily performed at the same time but may be performed at different times, and their order of execution is not necessarily sequential; they may be performed in turn or alternately with other steps or with at least a portion of the sub-steps or stages of other steps.
It should be noted that the above-described working procedure is merely illustrative, and does not limit the scope of the present invention, and in practical application, a person skilled in the art may select part or all of them according to actual needs to achieve the purpose of the embodiment, which is not limited herein.
Furthermore, it should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements includes not only those elements but may also include other elements not expressly listed, or elements inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or system that comprises the element.
The foregoing embodiment numbers of the present invention are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
From the above description of the embodiments, it will be clear to a person skilled in the art that the methods of the above embodiments may be implemented by means of software plus a necessary general-purpose hardware platform, and may of course also be implemented by hardware, although in many cases the former is the preferred implementation. Based on such an understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium (e.g., read-only memory (ROM)/RAM, magnetic disk, optical disk) and comprising several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a network device, or the like) to perform the method according to the embodiments of the present invention.
The foregoing description is only of the preferred embodiments of the present invention and is not intended to limit the scope of the invention; any equivalent structural or process transformation made using the contents of this description and the accompanying drawings, whether applied directly or indirectly in other related technical fields, is likewise included within the scope of patent protection of the present invention.

Claims (10)

1. An information fusion method of a sensor, characterized by comprising the following steps:
when data to be fused of a plurality of fusion sensors are received, determining three-dimensional detection vectors of the fusion sensors;
respectively inputting three-dimensional detection vectors of the fusion sensors to the target twin neural model to obtain similarity information vectors;
performing size unification on the similarity information vector and the three-dimensional detection vector of each fusion sensor according to a target size unification model to obtain a detection unification vector and a similarity unification vector of each fusion sensor;
and determining target fusion information according to the similarity unified vector, the detection unified vector of each fusion sensor and the target decision fusion model.
2. The method of information fusion of sensors of claim 1, wherein determining a three-dimensional detection vector for each fused sensor comprises:
respectively carrying out target detection on the data to be fused of each fusion sensor according to a target detection mode, and determining the detection frame basic information and the number of detection frames corresponding to each piece of data to be fused;
and determining the three-dimensional detection vector of each fusion sensor according to the detection frame basic information and the number of detection frames corresponding to each piece of data to be fused.
3. The method for fusing information of sensors according to claim 1, wherein the step of inputting three-dimensional detection vectors of the fused sensors to the target twin neural model to obtain similarity information vectors comprises:
respectively inputting three-dimensional detection vectors of the fusion sensors to a convolutional neural network in a target twin neural model to obtain extracted feature vectors of the three-dimensional detection vectors;
performing difference calculation on the extracted feature vectors of the three-dimensional detection vectors to obtain feature difference values;
and obtaining a similarity information vector according to a preset activation function in the target twin neural model and the feature difference value.
4. The method for fusing information of sensors according to claim 1, wherein before inputting three-dimensional detection vectors of the fused sensors to the target twin neural model to obtain similarity information vectors, the method further comprises:
determining a sample detection vector of each sample sensor according to the sample detection data set;
calculating the similarity between the sample detection vectors according to the sample detection vectors and a preset matching mode;
training the initial twin neural network according to the similarity among the detection vectors of all the samples and the detection vectors of all the samples to obtain a target twin neural model.
5. The method for information fusion of sensors according to claim 1, wherein the step of performing size unification on the similarity information vector and the three-dimensional detection vector of each fusion sensor according to a target size unification model to obtain a detection unification vector and a similarity unification vector of each fusion sensor comprises:
respectively carrying out size adjustment on the similarity information vector and the three-dimensional detection vector of each fusion sensor according to the target size unification model;
obtaining feature vectors of a plurality of sizes corresponding to the similarity information vector and feature vectors of a plurality of sizes corresponding to the three-dimensional detection vectors according to the size adjustment result;
and respectively carrying out feature stitching on the feature vectors of a plurality of sizes corresponding to the similarity information vectors and the feature vectors of a plurality of sizes corresponding to the three-dimensional detection vectors to obtain detection unified vectors and similarity unified vectors of the fusion sensors.
6. The method for fusing information of sensors according to claim 1, wherein determining target fusion information based on the similarity unified vector, the detection unified vector of each fusion sensor, and a target decision fusion model comprises:
respectively inputting the similarity unified vector and the detection unified vector of each fusion sensor to a target feature extraction model for local feature extraction to obtain a similarity extraction feature vector of the similarity unified vector and a detection extraction feature vector of each detection unified vector;
inputting the similarity extraction feature vector and the detection extraction feature vector of each detection unified vector to a target decision fusion model for global feature extraction to obtain a decision fusion feature vector;
and carrying out information screening according to the decision fusion feature vector to obtain target fusion information.
7. The method for information fusion of sensors according to claim 6, wherein the step of performing information screening according to the decision fusion feature vector to obtain target fusion information comprises:
determining a plurality of decision fusion detection information according to the decision fusion feature vector;
carrying out confidence calculation according to the decision fusion detection information, and determining the confidence of the decision fusion detection information;
and sequencing the confidence degrees of the decision fusion detection information, and determining target fusion information in the multiple decision fusion detection information according to the sequencing result.
8. An information fusion device of a sensor, characterized in that the information fusion device of the sensor comprises:
the processing module is used for determining three-dimensional detection vectors of the fusion sensors when receiving data to be fused of the fusion sensors;
the input module is used for respectively inputting the three-dimensional detection vectors of the fusion sensors to the target twin neural model to obtain similarity information vectors;
the adjustment module is used for carrying out size unification on the similarity information vector and the three-dimensional detection vector of each fusion sensor according to the target size unification model to obtain a detection unification vector and a similarity unification vector of each fusion sensor;
and the processing module is also used for determining target fusion information according to the similarity unified vector, the detection unified vector of each fusion sensor and the target decision fusion model.
9. An information fusion device of a sensor, the device comprising: a memory, a processor, and an information fusion program of the sensor stored on the memory and executable on the processor, wherein the information fusion program of the sensor is configured to implement the information fusion method of the sensor according to any one of claims 1 to 7.
10. A storage medium, wherein an information fusion program of a sensor is stored on the storage medium, and the information fusion program of the sensor, when executed by a processor, implements the information fusion method of the sensor according to any one of claims 1 to 7.
CN202311603877.5A 2023-11-28 2023-11-28 Information fusion method, device and equipment of sensor and storage medium Pending CN117574314A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311603877.5A CN117574314A (en) 2023-11-28 2023-11-28 Information fusion method, device and equipment of sensor and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311603877.5A CN117574314A (en) 2023-11-28 2023-11-28 Information fusion method, device and equipment of sensor and storage medium

Publications (1)

Publication Number Publication Date
CN117574314A true CN117574314A (en) 2024-02-20

Family

ID=89886059

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311603877.5A Pending CN117574314A (en) 2023-11-28 2023-11-28 Information fusion method, device and equipment of sensor and storage medium

Country Status (1)

Country Link
CN (1) CN117574314A (en)

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20170117723A (en) * 2016-04-14 2017-10-24 국방과학연구소 Apparatus and Method for multi-sensor information fusion based on feature information
CN111353510A (en) * 2018-12-20 2020-06-30 长沙智能驾驶研究院有限公司 Multi-sensor target detection method and device, computer equipment and storage medium
CN109677341A (en) * 2018-12-21 2019-04-26 深圳市元征科技股份有限公司 A kind of information of vehicles blending decision method and device
CN111582399A (en) * 2020-05-15 2020-08-25 吉林省森祥科技有限公司 Multi-sensor information fusion method for sterilization robot
CN112348054A (en) * 2020-10-12 2021-02-09 北京国电通网络技术有限公司 Data processing method, device, medium and system for multi-type sensor
CN113537287A (en) * 2021-06-11 2021-10-22 北京汽车研究总院有限公司 Multi-sensor information fusion method and device, storage medium and automatic driving system
CN116433712A (en) * 2021-12-30 2023-07-14 魔门塔(苏州)科技有限公司 Fusion tracking method and device based on pre-fusion of multi-sensor time sequence sensing results
CN114677655A (en) * 2022-02-15 2022-06-28 上海芯物科技有限公司 Multi-sensor target detection method and device, electronic equipment and storage medium
WO2023155387A1 (en) * 2022-02-15 2023-08-24 上海芯物科技有限公司 Multi-sensor target detection method and apparatus, electronic device and storage medium
CN114528940A (en) * 2022-02-18 2022-05-24 深圳海星智驾科技有限公司 Multi-sensor target fusion method and device
CN114898319A (en) * 2022-05-25 2022-08-12 山东大学 Vehicle type recognition method and system based on multi-sensor decision-level information fusion

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
SHIVASHANKARAPPA, N: "Kalman filter based multiple sensor data fusion in systems with time delayed state", 2016 3RD INTERNATIONAL CONFERENCE ON SIGNAL PROCESSING AND INTEGRATED NETWORKS (SPIN), 6 October 2016 (2016-10-06) *
WU, XIA; ZHOU, YAN: "Information fusion method of fuzzy sensors and interval-type multi-attribute decision making", Journal of Astronautics (宇航学报), no. 06, 30 June 2011 (2011-06-30) *

Similar Documents

Publication Publication Date Title
CN110276316B (en) Human body key point detection method based on deep learning
CN109934282B (en) SAGAN sample expansion and auxiliary information-based SAR target classification method
CN110135267B (en) Large-scene SAR image fine target detection method
EP3971772B1 (en) Model training method and apparatus, and terminal and storage medium
CN108229381B (en) Face image generation method and device, storage medium and computer equipment
WO2021249255A1 (en) Grabbing detection method based on rp-resnet
CN111507378A (en) Method and apparatus for training image processing model
CN110070107B (en) Object recognition method and device
CN109377530A (en) A kind of binocular depth estimation method based on deep neural network
CN109886066A (en) Fast target detection method based on the fusion of multiple dimensioned and multilayer feature
CN112364931A (en) Low-sample target detection method based on meta-feature and weight adjustment and network model
CN115100574A (en) Action identification method and system based on fusion graph convolution network and Transformer network
CN112149590A (en) Hand key point detection method
CN113688765A (en) Attention mechanism-based action recognition method for adaptive graph convolution network
CN113743417A (en) Semantic segmentation method and semantic segmentation device
CN116740422A (en) Remote sensing image classification method and device based on multi-mode attention fusion technology
CN112149662A (en) Multi-mode fusion significance detection method based on expansion volume block
CN115830596A (en) Remote sensing image semantic segmentation method based on fusion pyramid attention
CN112668421B (en) Attention mechanism-based rapid classification method for hyperspectral crops of unmanned aerial vehicle
CN115861595B (en) Multi-scale domain self-adaptive heterogeneous image matching method based on deep learning
CN111814804A (en) Human body three-dimensional size information prediction method and device based on GA-BP-MC neural network
CN116758419A (en) Multi-scale target detection method, device and equipment for remote sensing image
CN115761332A (en) Smoke and flame detection method, device, equipment and storage medium
CN117574314A (en) Information fusion method, device and equipment of sensor and storage medium
CN115761888A (en) Tower crane operator abnormal behavior detection method based on NL-C3D model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination