CN115049996A - Dual-sensor target detection fusion method and system based on evidence reasoning rule - Google Patents


Info

Publication number
CN115049996A
Authority
CN
China
Prior art keywords
sensor
target
data
evidence
importance weight
Prior art date
Legal status
Pending
Application number
CN202210590810.1A
Other languages
Chinese (zh)
Inventor
Ren Minglun (任明仑)
He Pei (何佩)
Zhou Junjie (周俊杰)
Current Assignee
Hefei University of Technology
Original Assignee
Hefei University of Technology
Priority date
Filing date
Publication date
Application filed by Hefei University of Technology
Priority to CN202210590810.1A
Publication of CN115049996A

Classifications

    • G06V 20/56 — Scenes; scene-specific elements: context or environment of the image exterior to a vehicle, using sensors mounted on the vehicle
    • G01S 17/86 — Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • G01S 17/931 — Lidar systems specially adapted for anti-collision purposes of land vehicles
    • G06N 5/04 — Computing arrangements using knowledge-based models: inference or reasoning models
    • G06V 10/764 — Image or video recognition using pattern recognition or machine learning: classification, e.g. of video objects
    • G06V 10/80 — Fusion, i.e. combining data from various sources at the sensor, preprocessing, feature-extraction or classification level


Abstract

The invention provides a dual-sensor target detection fusion method and system based on evidence reasoning rules, relating to the technical field of target fusion. The method fuses the target-category recognition results of two sensors based on an improved evidence reasoning rule to obtain the accurate category of the target. The improved evidence reasoning rule comprises: acquiring the importance weight of each sensor from its historical accuracy and real-time detection results; acquiring the credibility of each sensor from the change of its values within the current time window; and re-performing belief assignment in the evidence reasoning process using the sensor credibility and importance weights. Because the credibility and importance weights are derived from the sensors' monitoring data rather than from subjective experience or from differences between pieces of evidence, the detection results of the two sensors can be fused effectively, and the objectivity and accuracy of the fused result are ensured.

Description

Dual-sensor target detection fusion method and system based on evidence reasoning rule
Technical Field
The invention relates to the technical field of target fusion, in particular to a dual-sensor target detection fusion method and system based on evidence reasoning rules.
Background
Target detection is a fundamental and important task in many fields of computer vision; in unmanned driving, its results directly influence the vehicle's driving-behavior decisions. Target detection covers the acquisition of a target's category, position, state, and so on. The target category, as important semantic information, can provide relevant knowledge about the target's likely speed, direction, and similar properties, and is significant for forming driving decisions. Determining the target category requires abundant detailed information about the target, so in a vehicle-mounted sensor system it relies mainly on the lidar and the on-board camera. Considering the uncertainty that environmental factors, sensor faults, and the accuracy of the detection algorithm introduce into a single sensor's detection result, combining the detection results of the lidar and the vehicle-mounted camera yields a more accurate description of the target category and effectively improves the accuracy of the detection result.
Fusing the recognition results of target categories is a decision-level fusion problem, commonly solved with evidence-theory methods. Existing research mostly uses support degree, evidence distance, belief entropy, and the like to describe the reliability of evidence. These methods measure the differences between pieces of evidence: evidence that differs greatly from the others is considered relatively unreliable, while evidence that differs little is considered relatively reliable. For fusion tasks with many evidence sources, this approach can characterize the reliability of evidence and effectively reduce the influence of unreliable evidence on the fusion result. However, the task of fusing the target detection results of a lidar and a vehicle-mounted camera has only two evidence sources, and the difference between the two pieces of evidence does not reflect their reliability.
In summary, existing methods can characterize the reliability of evidence well when there are many evidence sources, but they are unsuitable for information fusion problems with few evidence sources, so the accuracy of the fusion result is low.
Disclosure of Invention
Technical problem to be solved
Aiming at the defects of the prior art, the invention provides a dual-sensor target detection fusion method and system based on evidence reasoning rules, solving the technical problem that existing methods achieve low fusion accuracy when the number of evidence sources is small.
(II) technical scheme
In order to achieve the purpose, the invention is realized by the following technical scheme:
In a first aspect, the invention provides a dual-sensor target detection fusion method based on evidence reasoning rules, comprising the following steps:
S1, acquiring two kinds of sensor data and processing them to obtain the recognition results of the target categories of the two sensors;
S2, fusing the recognition results of the target categories of the two sensors based on an improved evidence reasoning rule to obtain the accurate category of the target; wherein the improved evidence reasoning rule comprises: acquiring the importance weight of each sensor based on its detection results; acquiring the credibility of each sensor according to the change of its values within the current time window; and re-performing belief assignment in the evidence reasoning process using the sensor credibility and importance weights.
Preferably, the obtaining of the importance weight of the sensor based on the detection results includes:
acquiring the adaptive importance weight of the sensor according to the sensor's detection results on historical data and its real-time detection results.
Preferably, the obtaining of the importance weight of the sensor according to the detection result of the sensor on the historical data and the real-time detection result of the sensor includes:
S201a, calculating the initial importance weight of the sensor from historical data;
S201b, calculating the coefficient of variation of the sensor from its real-time detection result;
S201c, adjusting the initial importance weight by the coefficient of variation to obtain the importance weight of the sensor.
Preferably, the acquiring of the two kinds of sensor data and the processing of the two kinds of vehicle-mounted sensor data to obtain the recognition results of the target categories of the two sensors includes:
S101, acquiring first sensor data and second sensor data within a certain time window;
S102, processing the first sensor data and the second sensor data to obtain the recognition result of the target category of the first sensor and the recognition result of the target category of the second sensor.
Preferably, the processing of the first sensor data and the second sensor data to obtain the recognition result of the target category of the first sensor and the recognition result of the target category of the second sensor includes:
the first sensor is a vehicle-mounted lidar and the second sensor is a vehicle-mounted camera;
analyzing the vehicle-mounted lidar data within a certain time window with the SECOND deep-learning framework to obtain the recognition result of the detected target category

$$e_{l,t} = \{p_{l,t}^{1}, p_{l,t}^{2}, \ldots, p_{l,t}^{N}\}$$

wherein $\{\theta_1, \theta_2, \ldots, \theta_N\}$ is the set of target classes and $p_{l,t}^{1}$ is the observation probability assigned by the vehicle-mounted lidar to target class $\theta_1$ at time $t$;
performing target detection on the original images in the vehicle-mounted camera data with the YOLOv3 deep-learning framework to obtain the recognition result of the target category

$$e_{c,t} = \{p_{c,t}^{1}, p_{c,t}^{2}, \ldots, p_{c,t}^{N}\}$$

wherein $p_{c,t}^{1}$ is the observation probability assigned by the vehicle-mounted camera to target class $\theta_1$ at time $t$.
Preferably, the obtaining of the credibility of the sensor according to the change of its values within the current time window includes:
setting the difference between classification results at different times as $\Delta p$ and describing the credibility with a Logistic model based on $\Delta p$:

$$f(\Delta p) = \frac{1}{1 + e^{\lambda(\Delta p - \mu)}}$$

with the time window set to $l$, $\Delta p_{i,t}$ represents the average difference of the $N$-class classification result of sensor $S_i$ at time $t$ with respect to the other instants within the window:

$$\Delta p_{i,t} = \frac{1}{l}\sum_{j=t-l}^{t-1}\frac{1}{N}\sum_{k=1}^{N}\left|p_{i,t}^{k}-p_{i,j}^{k}\right|$$

then $r_{i,t}$ represents the credibility of sensor $S_i$ at time $t$:

$$r_{i,t} = f(\Delta p_{i,t}) = \frac{1}{1 + e^{\lambda(\Delta p_{i,t} - \mu)}}$$

wherein $\lambda$ and $\mu$ are the parameters that adjust how the sensor credibility varies with the data difference within the time window, and $p_{i,t}^{k}$ is the score given by the $i$-th sensor to class $\theta_k$ at time $t$.
Preferably, the re-performing of belief assignment in the evidence reasoning process using the sensor credibility and importance weight includes:
with the basic belief assignment of evidence $e_i$ being $m_{\theta,i}$, the weighted belief distribution containing the credibility $r_i$ and importance weight $w_i$ is expressed as

$$\tilde m_{\theta,i} = \begin{cases} 0, & \theta = \emptyset \\ c_{rw,i}\, w_i\, m_{\theta,i}, & \theta \subseteq \Theta,\ \theta \neq \emptyset \\ c_{rw,i}\,(1 - r_i), & \theta = P(\Theta) \end{cases}$$

wherein $c_{rw,i} = 1/(1 + w_i - r_i)$ is a normalization factor ensuring that:

$$0 \le \tilde m_{\theta,i} \le 1, \qquad \sum_{\theta \subseteq \Theta} \tilde m_{\theta,i} + \tilde m_{P(\Theta),i} = 1$$

Using the sensor credibility and importance weights, belief assignment over the $N$ categories $\Theta = \{\theta_1, \theta_2, \ldots, \theta_N\}$ to which the target may belong is performed again in the evidence reasoning process, yielding the basic belief assignments $m_{\theta,l}$ and $m_{\theta,c}$ of the first sensor $S_l$ and the second sensor $S_c$. The weighted belief distributions at time $t$, containing the credibilities $r_{l,t}$, $r_{c,t}$ and importance weights $W_{l,t}$, $W_{c,t}$, are expressed as

$$\tilde m_{\theta,l} = \begin{cases} 0, & \theta = \emptyset \\ c_{rw,l}\, W_{l,t}\, m_{\theta,l}, & \theta \subseteq \Theta,\ \theta \neq \emptyset \\ c_{rw,l}\,(1 - r_{l,t}), & \theta = P(\Theta) \end{cases}$$

$$\tilde m_{\theta,c} = \begin{cases} 0, & \theta = \emptyset \\ c_{rw,c}\, W_{c,t}\, m_{\theta,c}, & \theta \subseteq \Theta,\ \theta \neq \emptyset \\ c_{rw,c}\,(1 - r_{c,t}), & \theta = P(\Theta) \end{cases}$$

In the formulas,

$$c_{rw,l} = 1/(1 + W_{l,t} - r_{l,t}), \qquad c_{rw,c} = 1/(1 + W_{c,t} - r_{c,t})$$

wherein $\Theta = \{\theta_1, \theta_2, \ldots, \theta_N\}$ is the set of $N$ target classes, with $\theta_i \cap \theta_j = \emptyset$ for any $i, j \in \{1, 2, \ldots, N\}$ and $i \neq j$, i.e., the $N$ classes are mutually exclusive pairwise; the set consisting of $\Theta$ and all its subsets is called the power set of $\Theta$, denoted $P(\Theta)$;
$W_{l,t}$ and $r_{l,t}$ are the importance weight and credibility of the first sensor at time $t$;
$W_{c,t}$ and $r_{c,t}$ are the importance weight and credibility of the second sensor at time $t$.
In a second aspect, the present invention provides a dual-sensor target detection fusion system based on evidence reasoning rules, including:
the data processing module is used for acquiring two kinds of sensor data and processing them to obtain the recognition results of the target categories of the two vehicle-mounted sensors;
the target category fusion module is used for fusing the recognition results of the target categories of the two sensors based on the improved evidence reasoning rule to obtain the accurate category of the target; wherein the improved evidence reasoning rule comprises: acquiring the importance weight of each sensor based on its historical accuracy and real-time detection results; acquiring the credibility of each sensor according to the change of its values within the current time window; and re-performing belief assignment in the evidence reasoning process using the sensor credibility and importance weights.
In a third aspect, the present invention provides a computer-readable storage medium storing a computer program for evidence inference rule based dual-sensor target detection fusion, wherein the computer program causes a computer to execute the evidence inference rule based dual-sensor target detection fusion method as described above.
In a fourth aspect, the present invention provides an electronic device comprising:
one or more processors;
a memory; and
one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the programs comprising instructions for performing the dual-sensor target detection fusion method based on evidence reasoning rules as described above.
(III) advantageous effects
The invention provides a dual-sensor target detection fusion method and system based on evidence reasoning rules. Compared with the prior art, the method has the following beneficial effects:
the method comprises the steps of obtaining data of two sensors, and processing the data of the two vehicle-mounted sensors to obtain identification results of target categories of the two sensors; fusing the recognition results of the target categories of the two sensors based on an improved evidence reasoning rule to obtain the accurate category of the target; wherein the improved evidence reasoning rules comprise: acquiring importance weight of the sensor based on the historical accuracy rate and the real-time detection result of the sensor; obtaining the credibility of the sensor according to the change of the value in the current time window of the sensor; and (4) carrying out credibility distribution again in the evidence reasoning rule process through the credibility and the importance weight of the sensor. According to the invention, the credibility and importance weight of the sensor are obtained through the monitoring data of the sensor, rather than through subjective experience or depending on the difference between evidences, the detection results of the two sensors can be effectively fused, and the objectivity and accuracy of the fused result are ensured.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a block diagram of a dual-sensor target detection fusion method based on evidence reasoning rules in an embodiment of the present invention;
FIG. 2 is a graph of the sensor credibility versus the difference between detection results at different instants in an embodiment of the present invention;
FIG. 3 is a graph showing how the curve varies with μ in an embodiment of the present invention;
FIG. 4 is a graph showing how the curve varies with the value of λ in an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments are described clearly and completely below. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. All other embodiments obtained by those skilled in the art from the given embodiments without creative effort fall within the protection scope of the present invention.
The embodiments of the present application provide a dual-sensor target detection fusion method and system based on evidence reasoning rules, solve the technical problem that existing methods achieve low fusion accuracy when the number of evidence sources is small, describe the reliability of evidence through the characteristics of the evidence itself rather than the differences between pieces of evidence, and improve the accuracy of the fusion result.
In order to solve the technical problems, the general idea of the embodiment of the application is as follows:
In evidence reliability calculation, most existing methods rely on the principle that the minority of evidence sources obeys the majority. This works well when there are many evidence sources, but it is unsuitable when there are few, for example when unmanned-driving target detection fuses the results of only two kinds of sensors, a lidar and a camera. A new method is therefore needed that does not depend on the differences between pieces of evidence but instead describes the credibility of evidence through the evidence's own characteristics. On the basis of the evidence reasoning rule, the embodiments of the invention propose a new way to calculate evidence credibility and importance, solve the fusion problem of two evidence sources, and apply it in a vehicle-mounted sensor system to fuse the target categories reported by a lidar and a vehicle-mounted camera. The reliability and importance of each sensor are obtained from the characteristics of its data rather than from subjective experience, which effectively guarantees the objectivity and accuracy of the fusion result.
In order to better understand the technical solution, the technical solution will be described in detail with reference to the drawings and the specific embodiments.
The embodiment of the invention provides a dual-sensor target detection fusion method based on evidence reasoning rules, comprising the following steps:
S1, acquiring the data of two sensors and processing the two kinds of vehicle-mounted sensor data to obtain the recognition results of the target categories of the two sensors;
S2, fusing the recognition results of the target categories of the two sensors based on an improved evidence reasoning rule to obtain the accurate category of the target; wherein the improved evidence reasoning rule comprises: acquiring the importance weight of each sensor based on its detection results; acquiring the credibility of each sensor according to the change of its values within the current time window; and re-performing belief assignment in the evidence reasoning process using the sensor credibility and importance weights.
According to the embodiment of the invention, the credibility and importance weight of each sensor are obtained from the sensor's monitoring data rather than from subjective experience or from the differences between pieces of evidence, which effectively ensures the objectivity and accuracy of the fusion result.
The following describes each step in detail:
In step S1, the two kinds of sensor data are acquired and processed to obtain the recognition results of the target categories of the two sensors. The specific implementation process is as follows:
In the embodiment of the invention, the sensors comprise a vehicle-mounted lidar and a vehicle-mounted camera.
S101, vehicle-mounted lidar data and vehicle-mounted camera data within a certain time window are acquired.
S102, the vehicle-mounted lidar data and the vehicle-mounted camera data are processed to obtain the recognition result of the target category of the vehicle-mounted lidar and that of the vehicle-mounted camera. The method specifically comprises the following steps:
For the lidar data, the embodiment of the invention uses the SECOND deep-learning framework to analyze and process the lidar data within a certain time window and obtain the recognition result, position, and other parameters of the detected target. The recognition result of the target categories is:

$$e_{l,t} = \{p_{l,t}^{1}, p_{l,t}^{2}, \ldots, p_{l,t}^{N}\}$$
For the vehicle-mounted camera data, the embodiment of the invention uses the YOLOv3 deep-learning framework to perform target detection on the original images and obtain parameters such as the category and position of the target. The recognition result of the target categories of the vehicle-mounted camera is:

$$e_{c,t} = \{p_{c,t}^{1}, p_{c,t}^{2}, \ldots, p_{c,t}^{N}\}$$
It should be noted that, in a specific implementation, recognition results of the sensors' target categories obtained by other methods are also contemplated. For example, for the lidar data, algorithms such as PointPillars, VoxelNet, 3DSSD, STD, and Point R-CNN may be used to obtain parameters such as the category and position of the target; for the vehicle-mounted camera data, algorithms such as Fast R-CNN and SSD may be used.
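Whichever detector produces them, the downstream fusion only needs each sensor's per-class scores normalized into a basic belief assignment. The following is a minimal Python sketch; the class names and raw scores are illustrative assumptions, not values from the patent:

import numpy as np

# Hypothetical class universe; the experiments later use 6 classes.
CLASSES = ["car", "truck/bus", "pedestrian", "bicycle", "motorcycle", "other"]

def scores_to_bpa(raw_scores):
    """Normalize a detector's raw per-class scores into a basic belief
    assignment: non-negative masses over the classes that sum to 1."""
    s = np.clip(np.asarray(raw_scores, dtype=float), 0.0, None)
    total = s.sum()
    if total == 0.0:
        # Degenerate detection: fall back to complete ignorance.
        return np.full(len(s), 1.0 / len(s))
    return s / total

# Hypothetical raw scores for one associated target at time t.
lidar_bpa = scores_to_bpa([8.1, 1.9, 0.0, 0.0, 0.0, 0.0])
camera_bpa = scores_to_bpa([3.9, 6.0, 0.1, 0.0, 0.1, 0.0])
print(lidar_bpa)   # [0.81 0.19 0.   0.   0.   0.  ]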
In step S2, the recognition results of the target categories of the two sensors are fused based on the improved evidence reasoning rule to obtain the accurate category of the target; wherein the improved evidence reasoning rule comprises: acquiring the importance weight of each sensor based on its historical accuracy and real-time detection results; acquiring the credibility of each sensor according to the change of its values within the current time window; and re-performing belief assignment in the evidence reasoning process using the sensor credibility and importance weights. The specific implementation process is as follows:
S201, acquiring the importance weight of the sensor according to the sensor's detection results on historical data and its real-time detection results, comprising:
S201a, calculating the initial importance weight of the sensor from historical data, comprising:
The weight of a sensor depends on its ability to resolve the problem: in historical events, a sensor that solves the problem more accurately, or describes it more accurately, is more important, i.e., carries more weight.
Therefore, historical data of a certain scale is taken as a test set for analysis to obtain the initial importance weights $w_l$ and $w_c$ of the sensors.
Suppose that, on the historical data, the percentages of instants at which the lidar and the vehicle-mounted camera classify the targets accurately are $a_l$ and $a_c$. The initial importance weights $w_l$ and $w_c$ of the lidar and the vehicle-mounted camera are expressed by these percentages of accurate classifications over all instants:

$$w_l = a_l, \qquad w_c = a_c$$
S201b, calculating the coefficient of variation of the sensor from its real-time detection result, comprising:
During automatic driving, the importance weight of a sensor is not constant, owing to the limitations of its working principle and the influence of the environment. To let the fusion algorithm better adapt to the driving environment, an adaptive factor is set to adjust the weight adaptively.
Analyzing the detection results on the test set shows that when a sensor's working conditions are good and it obtains accurate results, the score of the true category is far higher than the scores of the other categories, i.e., the category scores are more dispersed. The embodiment of the invention therefore uses the coefficient of variation to describe the dispersion of a single sensor's detection result at the current instant, and a sensor with higher dispersion receives a higher weight.
The coefficient of variation, which measures the dispersion of data, is calculated from the sensor's real-time detection result and defined as:

$$v_{i,t} = \frac{\sigma_{i,t}}{\bar p_{i,t}}$$

where

$$\bar p_{i,t} = \frac{1}{N}\sum_{k=1}^{N} p_{i,t}^{k}, \qquad \sigma_{i,t} = \sqrt{\frac{1}{N}\sum_{k=1}^{N}\left(p_{i,t}^{k} - \bar p_{i,t}\right)^{2}}$$

where $N$ is the number of categories and $p_{i,t}^{k}$ is the score given by the $i$-th sensor to class $\theta_k$ at time $t$.
S201c, adjusting the initial importance weight by the coefficient of variation to obtain the importance weight of the sensor, comprising:
Using the coefficient of variation as the adaptive factor, the weight of the sensor is adjusted dynamically; the final weights of the sensors are calculated as:

$$W_{l,t} = \frac{w_l\, v_{l,t}}{w_l\, v_{l,t} + w_c\, v_{c,t}}, \qquad W_{c,t} = \frac{w_c\, v_{c,t}}{w_l\, v_{l,t} + w_c\, v_{c,t}}$$
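A small Python sketch of steps S201a-S201c follows. The adjustment formula itself is an image in the source, so the normalized product below is one plausible reading, not the patent's verbatim formula:

import numpy as np

def coefficient_of_variation(p):
    """v = sigma / mean of one sensor's class-score vector at time t."""
    p = np.asarray(p, dtype=float)
    mean = p.mean()
    return float(p.std() / mean) if mean > 0 else 0.0

def adaptive_weights(w_l, w_c, p_l, p_c):
    """Scale the historical (initial) weights by each sensor's current
    coefficient of variation and renormalize (assumed combination)."""
    v_l, v_c = coefficient_of_variation(p_l), coefficient_of_variation(p_c)
    z = w_l * v_l + w_c * v_c
    return w_l * v_l / z, w_c * v_c / z

# Historical accuracies 0.82 / 0.88 (reported in the verification section)
# and illustrative class-score vectors at time t.
W_l, W_c = adaptive_weights(0.82, 0.88,
                            [0.798, 0.202, 0, 0, 0, 0],
                            [0.394, 0.603, 0.001, 0.000, 0.001, 0.0])
print(W_l, W_c)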
S202, acquiring the credibility of the sensor according to the change of its values within the current time window, comprising:
Because the sensor is influenced by environmental factors during target detection, its detection result carries great uncertainty. To eliminate the influence of this uncertainty on the fusion result, the reliability of the sensor's time-series characteristics is considered and regarded as the sensor's credibility.
In practice, a sensor's observation of a target is continuous from the target's appearance until its disappearance, so the sensor's time-series characteristics within a certain time window can serve as the basis for judging its reliability.
Taking the vehicle-mounted lidar as an example, assume that within a certain time window $l$ its observation results for a certain target are:

$$e_{l,t-l},\ e_{l,t-l+1},\ \ldots,\ e_{l,t}$$

In the absence of uncertainty, the observed values of the vehicle-mounted lidar at different instants within the window should be identical, that is:

$$p_{l,t-l}^{k} = p_{l,t-l+1}^{k} = \cdots = p_{l,t}^{k}, \qquad k = 1, 2, \ldots, N$$
In practical applications, however, environmental disturbance and the detection algorithm prevent the sensor's observed values from remaining constant. When the sensor's recognition result changes little, its credibility can be considered relatively high; when it changes greatly, its credibility is considered relatively low, especially when the class with the highest probability in the detection result changes greatly. It is therefore necessary to calculate the data difference of the sensor within a certain time window and define the sensor's credibility according to the degree of difference.
Assume the credibility of the sensor lies in [0, 1]. Its definition should satisfy the following conditions:
(1) It decreases overall as the difference in the sensor's detection results grows.
(2) Since the detection result itself has normal fluctuation, the credibility remains high when the difference is small.
(3) Since the credibility lies in [0, 1], it approaches 0 once the data difference is large to some extent.
To satisfy the above conditions, the credibility of the sensor should vary with the difference between detection results at different instants as shown in FIG. 2.
The analysis above shows that this variation resembles a Logistic model. The embodiment of the invention therefore defines the credibility over the difference between detection results at different instants: with the difference between classification results at different instants denoted $\Delta p$, the credibility is calculated as:

$$f(\Delta p) = \frac{1}{1 + e^{\lambda(\Delta p - \mu)}}$$

where $\lambda$ and $\mu$ are the parameters that adjust how the sensor credibility varies with the data difference within the time window.
With the time window set to $l$, $\Delta p_{i,t}$ represents the average difference of the $N$-class classification result of sensor $S_i$ at time $t$ with respect to the other instants within the window:

$$\Delta p_{i,t} = \frac{1}{l}\sum_{j=t-l}^{t-1}\frac{1}{N}\sum_{k=1}^{N}\left|p_{i,t}^{k}-p_{i,j}^{k}\right|$$

Then $r_{i,t}$ represents the credibility of sensor $S_i$ at time $t$:

$$r_{i,t} = f(\Delta p_{i,t}) = \frac{1}{1 + e^{\lambda(\Delta p_{i,t} - \mu)}}$$
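A minimal Python sketch of the credibility computation, using one reading of the window-average difference and the Logistic model above (λ = 45 and μ = 0.19 are the values selected later in the verification):

import numpy as np

def window_avg_difference(window):
    """Average absolute difference of the time-t score vector against the
    other vectors in the window (rows = instants, columns = N classes)."""
    window = np.asarray(window, dtype=float)
    return float(np.mean(np.abs(window[:-1] - window[-1])))

def reliability(window, lam=45.0, mu=0.19):
    """Logistic credibility r = 1 / (1 + exp(lam * (dp - mu)))."""
    dp = window_avg_difference(window)
    return 1.0 / (1.0 + np.exp(lam * (dp - mu)))

# A stable window yields credibility near 1; a jumpy one near 0.
stable = [[0.81, 0.19, 0, 0, 0, 0]] * 5 + [[0.80, 0.20, 0, 0, 0, 0]]
print(reliability(stable))  # ~1.0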
S203, at time t, fusing the detection results of the lidar and the vehicle-mounted camera according to the improved evidence reasoning rule to obtain the accurate category of the target, specifically:
$\Theta = \{\theta_1, \theta_2, \ldots, \theta_N\}$ is the set of $N$ target classes, where for any $i, j \in \{1, 2, \ldots, N\}$ with $i \neq j$, $\theta_i \cap \theta_j = \emptyset$, i.e., the $N$ classes are mutually exclusive pairwise. The set consisting of $\Theta$ and all its subsets is called the power set of $\Theta$, denoted $P(\Theta)$ or $2^{\Theta}$.
A basic belief assignment (BPA) defines, for any $\theta \subseteq \Theta$, a mass $m(\theta)$ such that:

$$m(\emptyset) = 0, \qquad 0 \le m(\theta) \le 1, \qquad \sum_{\theta \subseteq \Theta} m(\theta) = 1$$
each evidence will have a basic confidence assignment for P (Θ). In the evidence reasoning rules, for evidence e i Defines a confidence level r i And an importance weight w i . Wherein the degree of confidence r i Embodies the evidence e i Importance weight w for the ability to provide an accurate assessment or solution to a question i Embodies the evidence e i Importance compared to other evidence. For evidence e i Basic confidence is assigned m θ,i Then, include confidence level r i And an importance weight w i Is expressed as
Figure BDA0003667276970000152
Figure BDA0003667276970000153
Wherein, c rw,i =1/(1+w i -r i ) Is a normalization factor, to ensure that:
Figure BDA0003667276970000154
Figure BDA0003667276970000155
If two pieces of evidence $e_1$ and $e_2$ are mutually independent, fusing them with the evidence reasoning rule yields the degree of belief $p_{\theta,e(2)}$ with which $e_1$ and $e_2$ jointly support proposition $\theta$, defined as:

$$p_{\theta,e(2)} = \begin{cases} 0, & \theta = \emptyset \\ \dfrac{\hat m_{\theta,e(2)}}{\sum_{D \subseteq \Theta,\, D \neq \emptyset} \hat m_{D,e(2)}}, & \theta \subseteq \Theta,\ \theta \neq \emptyset \end{cases}$$

$$\hat m_{\theta,e(2)} = \left[(1 - r_2)\, m_{\theta,1} + (1 - r_1)\, m_{\theta,2}\right] + \sum_{B \cap C = \theta} m_{B,1}\, m_{C,2}$$
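The following Python sketch implements the weighted belief distribution and the two-evidence combination above for the case used throughout this method, where each sensor assigns mass only to singleton classes; the example inputs are illustrative, not the patent's values:

import numpy as np

def weighted_bpa(m, r, w):
    """Weighted belief distribution of one evidence: masses on the N
    singleton classes plus residual mass on the power set P(Theta)."""
    c = 1.0 / (1.0 + w - r)                 # normalization factor c_rw
    m = np.asarray(m, dtype=float)
    return c * w * m, c * (1.0 - r)

def er_fuse(m1, r1, w1, m2, r2, w2):
    """ER-rule combination of two independent evidences whose mass sits
    only on singletons, so B ∩ C = theta_k only when B = C = theta_k."""
    mt1, mP1 = weighted_bpa(m1, r1, w1)
    mt2, mP2 = weighted_bpa(m2, r2, w2)
    m_hat = mt1 * mt2 + mt1 * mP2 + mP1 * mt2   # unnormalized masses
    return m_hat / m_hat.sum()                  # normalized joint belief

# Illustrative fusion: the camera momentarily disagrees but carries a
# lower (hypothetical) credibility, so the lidar dominates.
p = er_fuse([0.80, 0.20, 0, 0, 0, 0], 0.99, 0.80,
            [0.39, 0.60, 0.01, 0, 0, 0], 0.60, 0.40)
print(p.round(4), "->", int(np.argmax(p)))   # predicted class index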
According to the detection results of the two sensors, basic belief assignment is performed over the $N$ categories $\Theta = \{\theta_1, \theta_2, \ldots, \theta_N\}$ to which the target may belong, yielding the basic belief assignments $m_{\theta,l}$ and $m_{\theta,c}$ of the sensors $S_l$ and $S_c$. The weighted belief distributions at time $t$, containing the credibilities $r_{l,t}$, $r_{c,t}$ and importance weights $W_{l,t}$, $W_{c,t}$, are then expressed as

$$\tilde m_{\theta,l} = \begin{cases} 0, & \theta = \emptyset \\ c_{rw,l}\, W_{l,t}\, m_{\theta,l}, & \theta \subseteq \Theta,\ \theta \neq \emptyset \\ c_{rw,l}\,(1 - r_{l,t}), & \theta = P(\Theta) \end{cases}$$

$$\tilde m_{\theta,c} = \begin{cases} 0, & \theta = \emptyset \\ c_{rw,c}\, W_{c,t}\, m_{\theta,c}, & \theta \subseteq \Theta,\ \theta \neq \emptyset \\ c_{rw,c}\,(1 - r_{c,t}), & \theta = P(\Theta) \end{cases}$$

where

$$c_{rw,l} = 1/(1 + W_{l,t} - r_{l,t}), \qquad c_{rw,c} = 1/(1 + W_{c,t} - r_{c,t})$$
Since the two sensors observe independently, the fusion result of the sensor system for each category at time $t$ is the joint belief

$$p_{\theta_n, e(2)}, \qquad n = 1, 2, \ldots, N$$

where $p_{\theta_n,e(2)}$ denotes the belief that the target at time $t$ belongs to category $\theta_n$; it is computed with the combination formulas given above.
Finally, the post-fusion class of the observed target is:

$$\theta^{*} = \arg\max_{\theta_n}\, p_{\theta_n, e(2)}$$
To verify the feasibility and effectiveness of the method, the embodiment of the invention runs experiments on NuScenes, a large-scale autonomous-driving dataset built by the autonomous-driving company nuTonomy. NuScenes is a typical multimodal driving dataset containing mainstream vehicle-mounted sensor data such as lidar, millimeter-wave radar, and image data, and supports research on information fusion for unmanned vehicles well. Because NuScenes stores the sensor data by scene, the verification uses the data of 10 scenes for parameter analysis and the data of 100 scenes for experimental verification, fusing the lidar data and vehicle-mounted camera data of the acquired scenes.
For the laser radar data, a Second deep learning framework is adopted in the verification process, and the original data is analyzed and processed to obtain parameters such as the type and the position of a detection target. The Second deep learning framework is a mature laser radar point cloud target identification framework, the position, the category and the like of a target can be obtained, and a better performance is obtained on a KITTI data set.
For vehicle-mounted camera data, a Yolov3 deep learning framework is adopted in the text, target detection is performed on an original image, and parameters such as the category and the position of a target are obtained. Yolov3 is a common deep learning framework for target detection, can well meet the requirement of real-time performance while ensuring accuracy, and has good performance on common data sets such as KITTI and Nuscenes.
For the detection results of the laser radar and the vehicle-mounted camera, a nearest neighbor method is adopted to realize data association after the detection results of the laser radar and the vehicle-mounted camera are in the same coordinate system, observation targets at different moments are associated, and the detection results of different sensors on the same target at the same moment are associated.
Finally, time series about target classes observed by the two sensors for the same target are obtained, and fusion experiments are further carried out on the time series.
In the method, a plurality of parameters are designed, wherein the parameters which have important influence on the result are parameters which describe the variation of the credibility of the sensor along with the data difference in a time window when the real-time credibility calculation of the sensor is carried out, and the parameters comprise lambda and mu. Where λ determines the degree of curve bending and μ determines the curve position.
As fig. 3 shows the variation of the curve with μ, μ should be determined by the distribution interval [ a, b ] of most values in the difference between the recognition results of the target classes of the previous and next frames.
Figure BDA0003667276970000171
By analyzing data in 10 scenes, in order to make the method of the embodiment of the present invention reflect changes in sensor reliability due to data differences as much as possible on the premise of reflecting 99% of the data, differences in recognition results of target classes between previous and next frames are concentrated between [0,0.38], and therefore, it is considered that:
Figure BDA0003667276970000181
With μ = 0.19 fixed, the analysis of λ begins. As FIG. 4 shows, the curve varies with λ: the larger λ is, the more sharply the credibility falls as Δp grows.
Assume that when the recognition rate of the correct class is at least α (α is a recognition-rate threshold), the sensor detects the target class well and is reliable at the current instant. The value of α determines the value of λ.
Differentiating the credibility function f(x) gives:

$$f'(x) = \frac{-\lambda\, e^{\lambda (x - \mu)}}{\left(1 + e^{\lambda (x - \mu)}\right)^{2}}$$

Compute the average detection-result difference $a$ over all instants in the sample at which the recognition rate is at least α. When x = a, λ satisfies:

$$f'(a) = \frac{-\lambda\, e^{\lambda (a - \mu)}}{\left(1 + e^{\lambda (a - \mu)}\right)^{2}} = \beta$$
In the verification process, β is set to −0.1.
Therefore, according to the above formula, the λ values corresponding to different α and β are calculated. Each candidate λ is then substituted into the sensor-credibility calculation for information fusion, the fusion accuracies under the different λ values are compared, and the λ with the highest accuracy is selected.
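A small Python sketch of this parameter search; the value of a is an illustrative assumption, and since f′(a) = β can admit more than one λ, the final choice in the text rests on comparing fusion accuracies across candidates:

import numpy as np

def f_prime(lam, x, mu=0.19):
    """Derivative of f(x) = 1 / (1 + exp(lam * (x - mu)))."""
    e = np.exp(lam * (x - mu))
    return -lam * e / (1.0 + e) ** 2

def solve_lambda(a, beta=-0.1, mu=0.19):
    """Grid-search the lam whose slope at x = a is closest to beta."""
    lams = np.linspace(0.5, 200.0, 4000)
    vals = f_prime(lams, a, mu)
    return float(lams[np.argmin(np.abs(vals - beta))])

print(solve_lambda(a=0.05))  # 'a': hypothetical average difference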
TABLE 1 Comparison of results for different λ (the table values are an image in the source and are not reproduced here)

The calculation shows that the accuracy is highest when λ = 45, so the verification process sets λ = 45.
The improved evidence reasoning rules in the embodiment of the invention are used for fusing the obtained identification results of the target categories of the laser radar and the camera, and in the verification process, the detected targets are divided into 6 categories as shown in the following table:
TABLE 2 object classes
Category class1 class2 class3 class4 class5 class6
Description car truck/bus pedestrian bicycle motorcycle other

Among these, class1 and class2 are easily confused with each other, and classes 3, 4, and 5 are easily confused with one another.
For the recognition result of the sensor target categories at the current instant t, the specific operation steps are as follows (see the sketch after this list):
1) Normalize the recognition results of the sensors' target categories to obtain each sensor's basic belief assignment.
2) Calculate the credibility of each sensor according to the change of its values within the current time window.
3) Calculate the importance weight of each sensor according to its performance on the test set and its real-time detection result.
4) Re-perform belief assignment using the acquired sensor credibilities and importance weights.
5) Fuse the recognition results of the target categories of the two sensors according to the evidence reasoning rule to obtain the final recognition result of the target category.
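A compact Python orchestration of steps 1)-5), reusing the helper sketches given earlier (reliability/window_avg_difference, adaptive_weights, er_fuse); it sketches the flow under those earlier assumptions rather than the patent's verbatim procedure:

import numpy as np

def fuse_at_time_t(lidar_window, camera_window, w_l0, w_c0,
                   lam=45.0, mu=0.19):
    # Windows: one row per instant of already-normalized class scores
    # (step 1), last row = time t; w_l0, w_c0 are the test-set weights.
    lidar_window = np.asarray(lidar_window, dtype=float)
    camera_window = np.asarray(camera_window, dtype=float)
    m_l, m_c = lidar_window[-1], camera_window[-1]
    r_l = reliability(lidar_window, lam, mu)            # step 2
    r_c = reliability(camera_window, lam, mu)
    W_l, W_c = adaptive_weights(w_l0, w_c0, m_l, m_c)   # step 3
    return er_fuse(m_l, r_l, W_l, m_c, r_c, W_c)        # steps 4-5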
In order to show the process of the method specifically, the actual data at a certain moment is taken as an example to be demonstrated.
1) At the time t, after the identification results of the target categories of the laser radar and the vehicle-mounted camera sensor in the time window L are obtained through experiments and normalized, the following results are obtained:
TABLE 3 Lidar basic belief assignment

time class1 class2 class3 class4 class5 class6
t-5 0.810856 0.189144 0 0 0 0
t-4 0.809976 0.190024 0 0 0 0
t-3 0.870176 0.129824 0 0 0 0
t-2 0.866913 0.133087 0 0 0 0
t-1 0.819288 0.180713 0 0 0 0
t 0.797979 0.202021 0 0 0 0
TABLE 4 Vehicle-mounted camera basic belief assignment

time class1 class2 class3 class4 class5 class6
t-5 0.98745 0.007307 0.001093 0 0.00415 0
t-4 0.993874 0.001737 0.001727 0.000211 0.00245 0
t-3 0.992457 0.003448 0.001633 0.00024 0.002221 0
t-2 0.996662 0.001483 0.000657 0.000182 0.001015 0
t-1 0.997024 0.001159 0.000753 0.000128 0.000936 0
t 0.394346 0.603464 0.00067 0.000225 0.001295 0
2) Calculate the credibility of each sensor at instant t using the λ and μ obtained from parameter optimization.
For the lidar sensor, the credibility $r_{l,t}$ at instant t is computed from the window in Table 3 as $r_{l,t} = f(\Delta p_{l,t})$; for the vehicle-mounted camera sensor, $r_{c,t}$ is computed likewise from the window in Table 4.
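Using the Table 3 window and the selected parameters λ = 45, μ = 0.19, the lidar credibility can be reproduced in a few lines of Python (under the same reading of the window-average difference as above; the patent's exact numeric result is an image in the source):

import numpy as np

table3 = np.array([          # lidar assignments, rows t-5 ... t (Table 3)
    [0.810856, 0.189144, 0, 0, 0, 0],
    [0.809976, 0.190024, 0, 0, 0, 0],
    [0.870176, 0.129824, 0, 0, 0, 0],
    [0.866913, 0.133087, 0, 0, 0, 0],
    [0.819288, 0.180713, 0, 0, 0, 0],
    [0.797979, 0.202021, 0, 0, 0, 0]])

dp = np.mean(np.abs(table3[:-1] - table3[-1]))   # window-average difference
r_l = 1.0 / (1.0 + np.exp(45.0 * (dp - 0.19)))   # lam = 45, mu = 0.19
print(round(float(r_l), 4))   # close to 1: the lidar window is stable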
3) Calculate the importance weight of each sensor according to its performance on the test set and its real-time detection result.
Analyzing the detection accuracy on the test set gives accuracy rates of $a_l = 0.82$ for the lidar and $a_c = 0.88$ for the vehicle-mounted camera, so the initial weights are $w_l = 0.82$ and $w_c = 0.88$.
The coefficients of variation $v_{l,t}$ and $v_{c,t}$ of the lidar and the vehicle-mounted camera are then computed from the time-t rows of Tables 3 and 4, and the original weights are adjusted by these coefficients to obtain the adaptive weights $W_{l,t}$ and $W_{c,t}$.
4) The basic belief assignments of the lidar and the vehicle-mounted camera are adjusted according to the credibilities and weights:

$$c_{rw,l} = 1/(1 + W_{l,t} - r_{l,t}) = 1.24, \qquad c_{rw,c} = 1/(1 + W_{c,t} - r_{c,t}) = 1.88$$

Applying the weighted belief distribution

$$\tilde m_{\theta,i} = c_{rw,i}\, W_{i,t}\, m_{\theta,i}, \qquad \tilde m_{P(\Theta),i} = c_{rw,i}\,(1 - r_{i,t}), \qquad i \in \{l, c\},$$

the adjusted belief assignments of the lidar and the vehicle-mounted camera at instant t are obtained as follows:
TABLE 5 Adjusted sensor belief assignment

sensor class1 class2 class3 class4 class5 class6
lidar 0.79674 0.201707 0 0 0 0
camera 0.230679 0.353006 0.000392 0.000132 0.000758 0
5) The recognition result of the fused sensor target category is obtained according to the combination principle of the evidence reasoning rule:

TABLE 6 Recognition result of the fused target category

 class1 class2 class3 class4 class5 class6
after fusion 0.764187 0.235811 5.49E-07 1.84E-07 1.06E-06 0
From the fusion results, the target observed by the sensor belongs to class 1.
It can be seen from the above that, the method provided by the embodiment of the invention can effectively cope with the situation that errors or errors occur in the detection of each sensor, so that the identification result of the fused target category is more accurate.
To further verify the effectiveness of the embodiment of the invention, the verification compares the detection accuracy of multiple methods on the dataset. The accuracy metrics include:
(1) The mean classification accuracy MA, the percentage of fusion instants on the dataset at which the fusion algorithm's detection is accurate; the larger the MA, the better the fusion algorithm:

$$MA = \frac{c_{acc}}{c_{all}}$$

where $c_{acc}$ is the number of fusion instants at which the algorithm classifies accurately on the dataset and $c_{all}$ is the algorithm's total number of fusions on the dataset.
(2) The mean accurate-class belief MP, the average belief that the fusion algorithm assigns to the accurate class on the dataset; the larger the MP, the better the fusion algorithm:

$$MP = \frac{1}{c_{all}} \sum \tau \cdot p_{acc}$$

where τ is a Boolean variable (0 indicates an inaccurate fusion result, 1 an accurate one) and $p_{acc}$ denotes the belief assigned to the accurate class.
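A short Python sketch of the two metrics; the MP denominator is an image in the source, so dividing by the total number of fusions c_all is an assumption, and the flags and beliefs below are illustrative:

import numpy as np

def ma_mp(tau, p_acc):
    """tau: 1 if a fusion instant was classified correctly, else 0;
    p_acc: belief the fused result assigns to the true class."""
    tau = np.asarray(tau, dtype=float)
    p_acc = np.asarray(p_acc, dtype=float)
    return float(tau.mean()), float((tau * p_acc).sum() / len(tau))

ma, mp = ma_mp([1, 1, 0, 1], [0.76, 0.81, 0.40, 0.69])
print(ma, mp)   # illustrative values only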
The method provided by the embodiment of the invention is compared with the following algorithm, which is specifically shown in the following table:
TABLE 7 comparison Algorithm
Figure BDA0003667276970000232
Experiments on the dataset yield the results of the method proposed by the embodiment of the invention (improved_ER) and the other three methods, shown in the following table:

TABLE 8 Accuracy comparison of the fusion methods

metric DS ED ER improved_ER
MA 0.89741 0.91140 0.94145 0.96214
MP 0.63471 0.65147 0.70624 0.73808
The experiments show that the method provided by the embodiment of the invention has better performance on average classification accuracy and average accuracy class reliability than other methods. Therefore, the method provided by the embodiment of the invention can better depict the reliability and importance weight of the sensor, so that the sensor can better implement the current driving environment and obtain better effect. Compared with an evidence theory method (DS) and an evidence discount method (ED), the method provided by the invention has higher classification accuracy, and shows that an evidence reasoning rule is indeed more suitable for solving the problem of fusion of recognition results of target classes in the unmanned process, and the consideration of the real-time reliability of the sensor and the importance weight of the sensor is significant; compared with an evidence reasoning rule method (ER), the method provided by the text has higher classification accuracy, which indicates that the fixed weight can not adapt to the change of the environment, and the fusion accuracy can be effectively improved according to the adaptive weight defined by the real-time detection result of the sensor.
The embodiment of the invention also provides a dual-sensor target detection fusion system based on the evidence reasoning rule, which comprises:
the data processing module is used for acquiring two kinds of sensor data and processing them to obtain the recognition results of the target categories of the two vehicle-mounted sensors;
the target category fusion module is used for fusing the recognition results of the target categories of the two sensors based on the improved evidence reasoning rule to obtain the accurate category of the target; wherein the improved evidence reasoning rule comprises: acquiring the importance weight of each sensor based on its detection results; acquiring the credibility of each sensor according to the change of its values within the current time window; and re-performing belief assignment in the evidence reasoning process using the sensor credibility and importance weights.
It can be understood that, the dual-sensor target detection fusion system based on the evidence reasoning rule provided by the embodiment of the present invention corresponds to the dual-sensor target detection fusion method based on the evidence reasoning rule, and the explanation, examples, and beneficial effects of the relevant contents thereof may refer to the corresponding contents in the dual-sensor target detection fusion method based on the evidence reasoning rule, and are not described herein again.
Embodiments of the present invention further provide a computer-readable storage medium storing a computer program for dual-sensor target detection fusion based on evidence reasoning rules, where the computer program enables a computer to execute the dual-sensor target detection fusion method based on evidence reasoning rules as described above.
An embodiment of the present invention further provides an electronic device, including:
one or more processors;
a memory; and
one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the programs comprising instructions for performing the dual-sensor target detection fusion method based on evidence reasoning rules as described above.
In summary, compared with the prior art, the method has the following beneficial effects:
1. According to the embodiment of the invention, the credibility and importance weight of each sensor are obtained from the sensor's monitoring data rather than from subjective experience or from the differences between pieces of evidence, which effectively ensures the objectivity and accuracy of the fusion result.
2. Unlike the prior art, the importance-weight calculation considers not only the sensor's historical performance but also its real-time detection result, so the calculated importance weight adapts effectively to environmental change, further improving the accuracy of the fusion result.
It is noted that, herein, relational terms such as first and second may be used solely to distinguish one entity or action from another without necessarily requiring or implying any actual such relationship or order between them. The terms "comprises," "comprising," and any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element introduced by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. A dual-sensor target detection fusion method based on evidence reasoning rules is characterized by comprising the following steps:
S1, acquiring two kinds of sensor data and processing them to obtain the recognition results of the target categories of the two sensors;
S2, fusing the recognition results of the target categories of the two sensors based on an improved evidence reasoning rule to obtain the accurate category of the target; wherein the improved evidence reasoning rule comprises: acquiring the importance weight of each sensor based on its detection results; acquiring the credibility of each sensor according to the change of its values within the current time window; and re-performing belief assignment in the evidence reasoning process using the sensor credibility and importance weights.
2. The dual-sensor target detection fusion method based on evidence reasoning rules of claim 1, wherein the obtaining of importance weights of the sensors based on the detection results comprises:
acquiring the adaptive importance weight of the sensor according to the sensor's detection results on historical data and its real-time detection results.
3. The evidence reasoning rule-based dual-sensor target detection fusion method of claim 2, wherein the obtaining of the importance weight of the sensor according to the detection result of the sensor on the historical data and the real-time detection result of the sensor comprises:
S201a, calculating the initial importance weight of the sensor from historical data;
S201b, calculating the coefficient of variation of the sensor from its real-time detection result;
S201c, adjusting the initial importance weight by the coefficient of variation to obtain the importance weight of the sensor.
4. The evidence reasoning rule-based dual-sensor target detection fusion method of claim 1, wherein the acquiring of the two kinds of sensor data and the processing of the two kinds of vehicle-mounted sensor data to obtain the recognition results of the target categories of the two sensors comprises:
S101, acquiring first sensor data and second sensor data within a certain time window;
S102, processing the first sensor data and the second sensor data to obtain the recognition result of the target category of the first sensor and the recognition result of the target category of the second sensor.
5. The evidence-reasoning-rule-based dual-sensor target detection fusion method of claim 4, wherein the processing of the first sensor data and the second sensor data to obtain the target-category recognition results of the first sensor and the second sensor comprises:
the first sensor is a vehicle-mounted laser radar, and the second sensor is a vehicle-mounted camera;
analyzing the vehicle-mounted laser radar data within a certain time window with the SECOND deep-learning network to obtain the recognition result of the detected target's category

$$p_{l,t} = \{p_{l,t}(\theta_1), p_{l,t}(\theta_2), \ldots, p_{l,t}(\theta_N)\}$$

wherein $\{\theta_1, \theta_2, \ldots, \theta_N\}$ is the set of target categories and $p_{l,t}(\theta_1)$ is the observation probability of the vehicle-mounted laser radar for target category $\theta_1$ at time t;
performing target detection on the raw images in the vehicle-mounted camera data with the YOLOv3 deep-learning network to obtain the recognition result of the target category

$$p_{c,t} = \{p_{c,t}(\theta_1), p_{c,t}(\theta_2), \ldots, p_{c,t}(\theta_N)\}$$

wherein $p_{c,t}(\theta_1)$ is the observation probability of the vehicle-mounted camera for target category $\theta_1$ at time t.
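
As a data-structure illustration for claim 5, the sketch below represents each sensor's recognition result as an observation-probability vector over a shared category set; the detector outputs are stubbed placeholders, since actual SECOND or YOLOv3 inference is outside the scope of the claim, and the category names are hypothetical.

```python
from typing import Dict, List

CLASSES: List[str] = ["car", "pedestrian", "cyclist"]  # hypothetical θ_1..θ_N

def to_observation_probs(raw_scores: Dict[str, float]) -> List[float]:
    """Normalize raw per-class detector scores into p_t(θ_1..θ_N)."""
    total = sum(raw_scores.get(c, 0.0) for c in CLASSES)
    return [raw_scores.get(c, 0.0) / total for c in CLASSES]

# Stubbed detector outputs at time t (placeholders, not real inference).
lidar_raw  = {"car": 8.0, "pedestrian": 1.0, "cyclist": 1.0}   # lidar-branch scores
camera_raw = {"car": 6.0, "pedestrian": 3.0, "cyclist": 1.0}   # camera-branch scores

p_l_t = to_observation_probs(lidar_raw)   # recognition result of the first sensor
p_c_t = to_observation_probs(camera_raw)  # recognition result of the second sensor
print(p_l_t, p_c_t)
```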
6. The evidence-reasoning-rule-based dual-sensor target detection fusion method of any one of claims 1 to 5, wherein the obtaining of the reliability of the sensor according to the change of its values within the current time window comprises:
denoting the difference between the classification results at different moments by $\Delta p$ and describing the reliability with a Logistic model based on $\Delta p$, calculated as follows:
with the time window set to $l$, the average difference of the N-class classification result of sensor $S_i$ at time t with respect to the other moments within the time window is

$$\Delta p_{i,t} = \frac{1}{l-1} \sum_{\substack{t' = t-l+1 \\ t' \neq t}}^{t} \frac{1}{N} \sum_{k=1}^{N} \bigl| p_{i,t}(\theta_k) - p_{i,t'}(\theta_k) \bigr|$$

and the reliability $r_{i,t}$ of sensor $S_i$ at time t is then

$$r_{i,t} = \frac{1}{1 + e^{\lambda (\Delta p_{i,t} - \mu)}}$$

where $\lambda$ and $\mu$ are parameters that condition the sensor reliability on the variance of the data within the time window, and $p_{i,t}(\theta_k)$ is the rating of the i-th sensor for category $\theta_k$ at time t.
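
A minimal Python sketch of the reliability computation in claim 6, under the average-difference and Logistic forms given above; the window contents and the parameter values λ = 10 and μ = 0.2 are illustrative assumptions, not values prescribed by the patent.

```python
import numpy as np

def reliability(window_probs: np.ndarray, lam: float = 10.0, mu: float = 0.2) -> float:
    """Reliability r_{i,t} of a sensor at the last moment of the time window.

    window_probs: shape (l, N) array of the sensor's class probabilities
    p_{i,t'}(θ_k) at each of the l moments in the window.
    """
    current = window_probs[-1]            # classification result at time t
    others = window_probs[:-1]            # the other moments in the window
    # Average per-class absolute difference of the result at t vs. the others:
    # (1/(l-1)) Σ_{t'≠t} (1/N) Σ_k |p_{i,t}(θ_k) - p_{i,t'}(θ_k)|
    delta_p = float(np.abs(others - current).mean())
    # Logistic model: the more the outputs vary in the window, the lower r.
    return 1.0 / (1.0 + np.exp(lam * (delta_p - mu)))

# Example: a stable sensor over a window of l = 4 moments, N = 3 classes.
window = np.array([[0.70, 0.20, 0.10],
                   [0.68, 0.22, 0.10],
                   [0.72, 0.18, 0.10],
                   [0.70, 0.20, 0.10]])
print(reliability(window))  # high, since the outputs barely change
```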
7. The evidence-reasoning-rule-based dual-sensor target detection fusion method of any one of claims 1 to 6, wherein the re-performing of the belief assignment in the evidence reasoning rule process through the sensor reliability and importance weight comprises:
for evidence $e_i$ with basic belief assignment $m_{\theta,i}$, the weighted belief distribution incorporating reliability $r_i$ and importance weight $w_i$ is expressed as

$$\hat{m}_{\theta,i} = \begin{cases} 0, & \theta = \emptyset \\ c_{rw,i}\, w_i\, m_{\theta,i}, & \theta \subseteq \Theta,\ \theta \neq \emptyset \\ c_{rw,i}\,(1 - r_i), & \theta = P(\Theta) \end{cases}$$

wherein $c_{rw,i} = 1/(1 + w_i - r_i)$ is a normalization factor ensuring that

$$\sum_{\theta \subseteq \Theta} \hat{m}_{\theta,i} + \hat{m}_{P(\Theta),i} = 1;$$

through the sensor reliability and importance weight, the belief assignment over the N categories $\Theta = \{\theta_1, \theta_2, \ldots, \theta_N\}$ to which the target may belong is re-performed in the evidence reasoning rule process: for the first sensor $S_l$ and the second sensor $S_c$ with basic beliefs $m_{\theta,l}$ and $m_{\theta,c}$, the weighted belief distributions at time t, incorporating reliabilities $r_{l,t}$, $r_{c,t}$ and importance weights $W_{l,t}$, $W_{c,t}$, are expressed as

$$\hat{m}_{\theta,l} = \begin{cases} 0, & \theta = \emptyset \\ c_{rw,l}\, W_{l,t}\, m_{\theta,l}, & \theta \subseteq \Theta,\ \theta \neq \emptyset \\ c_{rw,l}\,(1 - r_{l,t}), & \theta = P(\Theta) \end{cases}$$

$$\hat{m}_{\theta,c} = \begin{cases} 0, & \theta = \emptyset \\ c_{rw,c}\, W_{c,t}\, m_{\theta,c}, & \theta \subseteq \Theta,\ \theta \neq \emptyset \\ c_{rw,c}\,(1 - r_{c,t}), & \theta = P(\Theta) \end{cases}$$

in the formulas,

$$c_{rw,l} = 1/(1 + W_{l,t} - r_{l,t})$$
$$c_{rw,c} = 1/(1 + W_{c,t} - r_{c,t})$$

wherein $\Theta = \{\theta_1, \theta_2, \ldots, \theta_N\}$ is the set of N targets with $\theta_i \cap \theta_j = \emptyset$ for any $i, j \in \{1, 2, \ldots, N\}$, $i \neq j$, i.e., the N targets are mutually exclusive pairwise; the set consisting of $\Theta$ and all of its subsets is called the power set of $\Theta$, denoted $P(\Theta)$;
$W_{l,t}$ and $r_{l,t}$ are respectively the importance weight and reliability of the first sensor at time t;
$W_{c,t}$ and $r_{c,t}$ are respectively the importance weight and reliability of the second sensor at time t.
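
For illustration, the sketch below implements the weighted belief assignment of claim 7 and then combines the two sensors; the combination step is the standard evidential reasoning rule specialized to mutually exclusive singleton categories (so the only non-empty intersections are θ∩θ and θ∩P(Θ)), which the claim presupposes but does not spell out, and all numeric inputs are placeholders.

```python
import numpy as np

def weighted_belief(m: np.ndarray, w: float, r: float):
    """Claim 7: weighted belief distribution of one sensor.
    m: basic beliefs over the N mutually exclusive categories (sums to 1).
    Returns the scaled class beliefs and the mass left on the power set P(Θ)."""
    c = 1.0 / (1.0 + w - r)               # normalization factor c_rw
    return c * w * m, c * (1.0 - r)

def er_fuse(m_l, W_l, r_l, m_c, W_c, r_c):
    """ER-rule combination of the two sensors. With singleton categories the
    only non-empty intersections are θ∩θ and θ∩P(Θ), so the unnormalized
    fused mass for each θ is m̂_l(θ)m̂_c(θ) + m̂_l(θ)m̂_c(P) + m̂_l(P)m̂_c(θ)."""
    ml, ml_P = weighted_belief(m_l, W_l, r_l)
    mc, mc_P = weighted_belief(m_c, W_c, r_c)
    fused = ml * mc + ml * mc_P + ml_P * mc
    return fused / fused.sum()            # normalized fused beliefs over the classes

# Placeholder beliefs, weights, and reliabilities for lidar (l) and camera (c).
m_l = np.array([0.7, 0.2, 0.1])
m_c = np.array([0.6, 0.3, 0.1])
fused = er_fuse(m_l, W_l=0.5, r_l=0.9, m_c=m_c, W_c=0.5, r_c=0.8)
print(fused, int(fused.argmax()))         # fused class probabilities and the winner
```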
8. A dual-sensor target detection fusion system based on evidence reasoning rules, characterized by comprising:
a data processing module for acquiring two kinds of sensor data and processing them to obtain the target-category recognition results of the two vehicle-mounted sensors;
a target category fusion module for fusing the target-category recognition results of the two sensors based on the improved evidence reasoning rule to obtain the accurate category of the target, wherein the improved evidence reasoning rule comprises: acquiring the importance weight of the sensor based on its historical accuracy and real-time detection results; acquiring the reliability of the sensor according to the change of its values within the current time window; and re-performing the belief assignment in the evidence reasoning rule process through the sensor reliability and importance weight.
9. A computer-readable storage medium storing a computer program for evidence-reasoning-rule-based dual-sensor target detection fusion, wherein the computer program causes a computer to execute the evidence-reasoning-rule-based dual-sensor target detection fusion method according to any one of claims 1 to 7.
10. An electronic device, comprising:
one or more processors;
a memory; and
one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the programs comprising instructions for performing the evidence-reasoning-rule-based dual-sensor target detection fusion method of any one of claims 1 to 7.
CN202210590810.1A 2022-05-27 2022-05-27 Dual-sensor target detection fusion method and system based on evidence reasoning rule Pending CN115049996A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210590810.1A CN115049996A (en) 2022-05-27 2022-05-27 Dual-sensor target detection fusion method and system based on evidence reasoning rule

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210590810.1A CN115049996A (en) 2022-05-27 2022-05-27 Dual-sensor target detection fusion method and system based on evidence reasoning rule

Publications (1)

Publication Number Publication Date
CN115049996A (en) 2022-09-13

Family

ID=83159653

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210590810.1A Pending CN115049996A (en) 2022-05-27 2022-05-27 Dual-sensor target detection fusion method and system based on evidence reasoning rule

Country Status (1)

Country Link
CN (1) CN115049996A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115996503A (en) * 2023-03-23 2023-04-21 深圳市森辉智能自控技术有限公司 Self-optimizing building illumination sensor energy-saving control system


Similar Documents

Publication Publication Date Title
CN108469806B (en) Driving right transfer method in alternating type man-machine common driving
US20190012548A1 (en) Unified deep convolutional neural net for free-space estimation, object detection and object pose estimation
Peng et al. Uncertainty evaluation of object detection algorithms for autonomous vehicles
CN111709975B (en) Multi-target tracking method, device, electronic equipment and storage medium
US11210559B1 (en) Artificial neural networks having attention-based selective plasticity and methods of training the same
US20220004782A1 (en) Method for generating control settings for a motor vehicle
US11748593B2 (en) Sensor fusion target prediction device and method for vehicles and vehicle including the device
US20220230536A1 (en) Method and device for analyzing a sensor data stream and method for guiding a vehicle
CN115049996A (en) Dual-sensor target detection fusion method and system based on evidence reasoning rule
US20220324470A1 (en) Monitoring of an ai module of a vehicle driving function
CN112613617A (en) Uncertainty estimation method and device based on regression model
Ren et al. Decision fusion of two sensors object classification based on the evidential reasoning rule
CN114137526A (en) Label-based vehicle-mounted millimeter wave radar multi-target detection method and system
CN113487223A (en) Risk assessment method and risk assessment system based on information fusion
Zhao et al. Road friction estimation based on vision for safe autonomous driving
JP4284322B2 (en) A method for rating and temporal stabilization of classification results
US11562184B2 (en) Image-based vehicle classification
CN111368792B (en) Feature point labeling model training method and device, electronic equipment and storage medium
CN114758270A (en) Follow-up driving auxiliary system based on deep learning
US20240095595A1 (en) Device and method for training a variational autoencoder
US20230004757A1 (en) Device, memory medium, computer program and computer-implemented method for validating a data-based model
US20230410490A1 (en) Deep Association for Sensor Fusion
CN113705786B (en) Model-based data processing method, device and storage medium
JP7314965B2 (en) Object detection device and program
EP4181088A1 (en) Clustering track pairs for multi-sensor track association

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination