CN118280109A - Collaborative perception reliability assessment method for complex road scene - Google Patents

Collaborative perception reliability assessment method for complex road scene

Info

Publication number: CN118280109A
Application number: CN202410379723.0A
Authority: CN (China)
Prior art keywords: collaborative, model, scene, data, point cloud
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 柴晨, 任昊岩
Current and original assignee: Tongji University
Priority date / filing date: 2024-03-29
Publication date: 2024-07-02
Classification (Landscapes): Image Analysis (AREA)
Abstract

The invention relates to the technical field of collaborative perception and provides a collaborative perception reliability assessment method for complex road scenes, which comprises: obtaining scene road topology information and host-vehicle point cloud data and preprocessing the data; obtaining a bounding-box image based on the host-vehicle point cloud data and the road topology information, and representing the bounding boxes by a set of points in a two-dimensional Euclidean plane; screening out the extreme conditions, i.e. scene parameters, that degrade the collaborative perception model in a complex road scene; and evaluating the reliability of the collaborative perception model under the screened extreme conditions. By comparing the actual behaviors of vehicles affected by perception failure, the invention intuitively displays how reliability fluctuates with data and time; by screening out the conditions or scenes most challenging for the collaborative perception algorithm, it evaluates the algorithm more comprehensively and indicates the direction for its optimization.

Description

Collaborative perception reliability assessment method for complex road scene
Technical Field
The application relates to the technical field of collaborative perception, in particular to a collaborative perception reliability assessment method for complex road scenes.
Background
Collaborative perception can greatly expand the perception range of a single vehicle and improve its perception capability; by introducing new intelligent elements represented by high-dimensional data, it realizes group intelligence, thereby ensuring automated-driving safety and expanding the operational design domain of automated driving. Evaluating the reliability of a collaborative perception algorithm is important not only for reducing test cost but also for indicating the direction in which the perception algorithm should be optimized.
Most existing evaluations of collaborative perception consider only a single influencing factor, such as vehicle speed, the penetration rate of intelligent connected vehicles, or traffic volume, and lack a reliability assessment that jointly considers the vehicle state and the state of the external traffic environment. Moreover, existing collaborative perception algorithms face a long-tail effect: in complex road scenes with poor illumination, occlusion by other traffic participants, or dense traffic flow, their performance can degrade markedly. Finding the scene parameters that are most challenging for a collaborative perception algorithm allows the algorithm to be evaluated more comprehensively and indicates its optimization direction.
Disclosure of Invention
To address these problems, the invention provides a collaborative perception reliability assessment method for complex road scenes, which comprises: obtaining scene road topology information and host-vehicle point cloud data; inputting the host-vehicle point cloud data and the road topology information into a collaborative perception model to obtain a multi-vehicle perception image and radar point cloud data, and obtaining a bounding-box image by integrating the feature information of the radar point cloud data; based on the bounding-box image, obtaining the stationary distribution of a Markov decision process through policy iteration and screening, via a threshold, the extreme conditions that degrade the collaborative perception model; and extracting features from the multi-vehicle perception image and radar point cloud data output by the collaborative perception model under the screened extreme conditions, inputting the feature data into a Bayesian neural network (BNN) to obtain the uncertainty of the model, and modeling reliability based on this uncertainty so as to evaluate the reliability of the collaborative perception model.
The collaborative perception reliability assessment method for complex road scenes disclosed by the invention specifically comprises the following steps:
S1: acquiring scene road topology information and host-vehicle point cloud data and performing data preprocessing;
S2: obtaining a bounding-box image based on the host-vehicle point cloud data and the road topology information, and representing the bounding boxes by a set of points in a two-dimensional Euclidean plane;
S3: screening out the extreme conditions that degrade the collaborative perception model in a complex road scene;
S4: evaluating the reliability of the collaborative perception model under the screened extreme conditions.
Further, in step S1, the host vehicle is any vehicle selected in the scene.
Step S1 specifically comprises:
S11, acquiring map information of the scene and generating road topology information from it;
S12, obtaining host-vehicle point cloud data from the on-board sensors of the host vehicle and preprocessing the data for input to the collaborative perception model, wherein: the preprocessing comprises cleaning the raw point cloud data, filling default values, and removing outliers and redundant values, and the collaborative perception model is a model that perceives and predicts road conditions and the traffic environment by integrating information from multiple sensors, from infrastructure, and from other vehicles.
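As an illustration of the preprocessing in S12, the following sketch cleans a raw LiDAR point array; the point format, the 3-sigma outlier rule, and the 1 cm deduplication grid are assumptions for illustration, not values specified by the application:

```python
import numpy as np

def preprocess_point_cloud(points: np.ndarray, fill_value: float = 0.0) -> np.ndarray:
    """Clean raw LiDAR points of shape (N, 4): x, y, z, intensity."""
    # Fill default values: replace NaNs with a configurable default.
    points = np.where(np.isnan(points), fill_value, points)
    # Remove outliers: drop points beyond 3 standard deviations of the range.
    ranges = np.linalg.norm(points[:, :3], axis=1)
    mask = np.abs(ranges - ranges.mean()) < 3 * ranges.std()
    points = points[mask]
    # Remove redundant values: deduplicate points quantized to a 1 cm grid.
    _, unique_idx = np.unique(np.round(points[:, :3], 2), axis=0, return_index=True)
    return points[np.sort(unique_idx)]
```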
Further, step S2 specifically comprises:
S21, inputting the host-vehicle point cloud data and the road topology information into the collaborative perception model for perception fusion to obtain a multi-vehicle perception image and radar point cloud data;
S22, obtaining a bounding-box image by integrating the feature information of the host-vehicle point cloud data with the multi-vehicle perception image and radar point cloud data obtained in S21;
S23, mapping the scene and the bounding-box image into a two-dimensional Euclidean plane, and representing the bounding-box image by a set of points in the plane;
wherein the mapping is an affine transformation, so points on the edges of a bounding box remain collinear.
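The affine mapping of S23 can be sketched as follows; the bird's-eye-view projection matrix is an assumed example, since the application does not fix a particular transformation:

```python
import numpy as np

def map_box_to_plane(corners_3d: np.ndarray, A: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Apply an affine map x -> A @ x + b to bounding-box corners.

    corners_3d: (8, 3) box corners; A: (2, 3); b: (2,).
    Affine maps preserve collinearity, so corners lying on one box
    edge remain collinear in the plane.
    """
    return corners_3d @ A.T + b

# Example: a bird's-eye-view projection that simply drops the z coordinate.
A = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
b = np.zeros(2)
```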
Further, in step S3, through a Markov decision process, the overlap rate of the predicted box and the real box is defined as the precision of the collaborative perception model, and the difference in model precision between two states is used as the reward function to screen out the extreme conditions, i.e. the scene parameters.
The measure of degradation of the collaborative perception model is its precision z under the current scene condition, defined as the overlap rate of the predicted box and the real box:
z = Area(Pre_i ∩ Real_i) / Area(Pre_i ∪ Real_i)
where Real_i is the circumscribed rectangle of the i-th object in the scene, and Pre_i is the box output by the collaborative perception model from the point cloud data of the i-th object.
If no coordinate component of any point in Pre_i falls within the corresponding dimension range of Real_i, the boxes do not overlap and z = 0.
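Because z is the overlap rate of the predicted and real boxes, it is the standard intersection-over-union; a sketch, assuming axis-aligned 2D boxes after the mapping of S23:

```python
def iou_2d(real: tuple, pre: tuple) -> float:
    """Overlap rate z of an axis-aligned real box and predicted box,
    each given as (x_min, y_min, x_max, y_max)."""
    ix1, iy1 = max(real[0], pre[0]), max(real[1], pre[1])
    ix2, iy2 = min(real[2], pre[2]), min(real[3], pre[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda box: (box[2] - box[0]) * (box[3] - box[1])
    union = area(real) + area(pre) - inter
    # If no point of Pre falls inside Real, the intersection is empty
    # and the precision z is 0.
    return inter / union if union > 0 else 0.0
```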
Step S3 specifically comprises:
S31, based on a Markov decision process, obtaining the stationary distribution of the Markov process through policy iteration;
the Markov process takes the conditions in the scene as the state set S, the probability of the agent selecting each state as the state transition probability T of each action in the current state, the conditions to be changed at the next moment as the action set A, and the reward obtained by the agent from the environment as the reward function R;
at the initial moment, the agent traverses the state set and calculates the reward function R(s, a, s') in each state, where s, s' ∈ S and a ∈ A; the mean of the initial reward values represents the performance of the collaborative perception model under normal conditions, and the sequence of reward values is normalized and divided by Card(S) to obtain the initial probability distribution, where Card(S) denotes the number of elements of the set S;
the reward function is calculated as:
R(s, a, s') = z(s) - z(s')
where z(s) is the precision of the collaborative perception model in state s, and z(s') is its precision in state s';
the reward function at the initial moment is:
R(a, s) = z(s);
at step i+1, the state set is traversed according to the state transition probability T_i(s, a, s') of step i, the reward function R(s, a, s') is calculated in each state, and the value function V_{i+1}(s) in each state is calculated as:
V_{i+1}(s) = Σ_{s'} T_i(s, a, s') [ R(s, a, s') + γ V_i(s') ]
where γ is the discount rate of the feedback; the sequence of value-function values is normalized and divided by Card(S) to obtain the state transition probabilities T_{i+1}(s, a, s');
whether the current value function has converged is then judged, i.e. whether there exist ε > 0 and N > 0 such that for all n > N
|V_{n+1} - V_n| < ε,
at which point the value function has converged. If it has not converged, the reward function and value function of the next step are calculated according to the state transition. After finitely many iterations the value function converges, and the state transition probabilities converge to a Card(S)-dimensional vector π = (p_1, p_2, …, p_n), i.e. the stationary distribution of the Markov process;
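A compact iteration sketch of S31 follows. The application only loosely specifies how the value sequence is renormalized into transition probabilities, so the shift-and-normalize step and the single shared transition vector below are assumptions:

```python
import numpy as np

def stationary_distribution(z: np.ndarray, gamma: float = 0.9,
                            eps: float = 1e-5, max_iter: int = 1000) -> np.ndarray:
    """Sketch of S31: iterate reward/value updates until the value
    function converges, returning the limiting transition vector pi.

    z: array of model precisions, one entry per state in S.
    """
    n = len(z)                        # Card(S)
    R = z[:, None] - z[None, :]       # R(s, a, s') = z(s) - z(s')
    T = np.full(n, 1.0 / n)           # initial probability distribution
    V = np.zeros(n)
    for _ in range(max_iter):
        # Value update: expected reward plus discounted future value.
        V_new = (T[None, :] * (R + gamma * V[None, :])).sum(axis=1)
        # Normalize the value sequence into the next transition
        # probabilities (the exact normalization is an assumption).
        shifted = V_new - V_new.min()
        T = shifted / shifted.sum() if shifted.sum() > 0 else np.full(n, 1.0 / n)
        if np.max(np.abs(V_new - V)) < eps:   # |V_{n+1} - V_n| < epsilon
            return T                           # stationary distribution pi
        V = V_new
    return T
```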
S32, the components of the stationary distribution vector are arranged in ascending order as π' = (p_(1), p_(2), …, p_(n)); the third quartile Q_3 of the p_(i) is used as the threshold, and the components larger than the threshold, S_s = {p_i ∈ π | p_i > Q_3}, are screened out; the states of the state set corresponding to the subscripts of these components are the extreme conditions (see the sketch after this step).
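The quartile screening of S32 is direct to express in code; the state labels are illustrative (those of the embodiment below):

```python
import numpy as np

def screen_extreme_conditions(pi: np.ndarray, states: list) -> list:
    """S32: keep the states whose stationary probability exceeds the
    third quartile Q3 of the components of pi."""
    q3 = np.quantile(pi, 0.75)                 # third quartile as threshold
    return [s for s, p in zip(states, pi) if p > q3]

# Example with the state set used in the embodiment.
states = ["V_max", "A_max", "Vol_0", "Light", "Latency"]
```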
Further, step S4 specifically comprises:
S41, extracting features from the multi-vehicle perception image and radar point cloud data output by the collaborative perception model under the screened extreme conditions, and defining the resulting spatiotemporal data as D_f = (s_i, t_i), where s_i denotes spatial features and t_i denotes temporal features;
S42, inputting the spatiotemporal data obtained in step S41 into a Bayesian neural network (BNN) to obtain the uncertainty parameters of the model;
specifically: define the training set D = (x_i, y_i) and let the weights w_i of the collaborative perception algorithm follow a normal distribution, where x_i is the input data and y_i is the predicted data; the uncertainty of the collaborative perception model on data x_i is described by its predictive distribution over the real data Y, obtained with a BNN-based algorithm;
given the data set, the posterior distribution of the algorithm is
P(w | D) = P(D | w) P(w) / P(D)
where w are the perception weights;
the loss function of the algorithm is the variational free energy
L(θ) = KL[ q(w | θ) ‖ P(w) ] - E_{q(w|θ)}[ log P(D | w) ]
and the posterior is approximated by variational inference:
θ* = argmin_θ KL[ q(w | θ) ‖ P(w | D) ],
obtaining the uncertainty parameters of the model.
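A minimal Bayes-by-Backprop layer sketches how such a BNN yields uncertainty parameters. The unit-normal prior, the layer granularity, and the use of repeated stochastic forward passes to estimate predictive uncertainty are assumptions for illustration, not the application's exact network:

```python
import torch
import torch.nn as nn

class BayesianLinear(nn.Module):
    """One Bayes-by-Backprop layer: weights follow a learned normal
    q(w | theta) with theta = (mu, rho); the training loss is the
    variational free energy KL[q(w|theta) || P(w)] - E_q[log P(D|w)].
    The unit-normal prior P(w) = N(0, 1) is an assumed choice."""

    def __init__(self, n_in: int, n_out: int):
        super().__init__()
        self.mu = nn.Parameter(torch.zeros(n_out, n_in))
        self.rho = nn.Parameter(torch.full((n_out, n_in), -3.0))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        sigma = torch.log1p(torch.exp(self.rho))        # softplus keeps sigma > 0
        w = self.mu + sigma * torch.randn_like(sigma)   # reparameterized weight sample
        # Closed-form KL of q(w | mu, sigma) against the N(0, 1) prior;
        # add self.kl to the data-fit loss to form the free energy.
        self.kl = (-torch.log(sigma) + (sigma**2 + self.mu**2 - 1.0) / 2.0).sum()
        return x @ w.t()

# Predictive uncertainty: variance of repeated stochastic forward passes.
# preds = torch.stack([layer(x) for _ in range(50)]); sigma_hat_sq = preds.var(0)
```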
S43, modeling reliability based on the uncertainty to evaluate the reliability of the collaborative perception model;
the reliability modeling method comprises the following steps:
fault events of different degrees are quantified against the normal state through the KL divergence
D_KL( P̂ ‖ P ) = Σ P̂ log( P̂ / P )
where P̂ is the collaborative perception information under the processed extreme conditions, and P is obtained directly from the simulation scene;
the probability of prediction failure at the temporal level is calculated by comparing P̂(D_f | t_i), the perception information based on t_i on the spatiotemporal dataset D_f, with P(D_f | t_i), the t_i-based information obtained from the simulation scene at the temporal level;
the probability of prediction failure at the spatial level is calculated by comparing P̂(D_f | s_i), the perception information based on s_i on the spatiotemporal dataset D_f, with P(D_f | s_i), the s_i-based information obtained from the simulation scene at the spatial level;
taking the uncertainty parameter of step S42 into account, the reliability of the collaborative perception model can then be characterized as a combination of the temporal failure probability, the spatial failure probability, and the uncertainty parameter obtained in step S42.
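In code, the KL quantification might look as below. The product form used in `reliability` is a labeled assumption: the text states that the two failure probabilities and the S42 uncertainty parameter are combined, but does not reproduce the closed-form expression:

```python
import numpy as np

def kl_divergence(p_hat: np.ndarray, p: np.ndarray, eps: float = 1e-12) -> float:
    """D_KL(p_hat || p): p_hat is the perceived information under the
    extreme condition, p the ground truth from the simulation scene."""
    p_hat = p_hat / p_hat.sum()
    p = p / p.sum()
    return float(np.sum(p_hat * np.log((p_hat + eps) / (p + eps))))

def reliability(p_time: float, p_space: float, sigma_hat_sq: float) -> float:
    """Hypothetical combination of the temporal/spatial failure
    probabilities with the BNN uncertainty (assumed product form)."""
    return (1.0 - p_time) * (1.0 - p_space) * (1.0 - min(sigma_hat_sq, 1.0))
```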
The invention has the following advantages:
(1) The collaborative perception reliability assessment method for complex road scenes intuitively displays how reliability fluctuates with data and time by comparing the actual behaviors of vehicles affected by perception failure.
(2) Through a Markov decision process, the bounding-box image is mapped onto a two-dimensional Euclidean plane by an affine transformation, the overlap rate of the predicted box and the real box is defined as the precision of the collaborative perception model, and the difference in model precision between two states is used as the reward function to screen out the extreme conditions.
(3) The invention screens out the conditions or scenes that are most challenging for the collaborative perception algorithm, so the algorithm can be evaluated more comprehensively and its optimization direction indicated.
Drawings
FIG. 1 is a schematic flow chart of the collaborative perception reliability assessment method for complex road scenes;
FIG. 2 is a schematic diagram of the radar point cloud data input to the collaborative perception model provided by an embodiment of the invention;
FIG. 3 is a schematic diagram of a bounding box represented as a set of points in a two-dimensional Euclidean plane, provided by an embodiment of the invention;
FIG. 4 is a schematic diagram of the state set, i.e. the scene parameter settings, of the Markov decision process provided by an embodiment of the invention;
FIG. 5 is a schematic flow chart of the Markov decision process provided by an embodiment of the invention.
Detailed Description
The technical scheme provided by the application is further described below with reference to specific embodiments and the accompanying drawings; the advantages and features of the application will become clearer from this description.
The collaborative perception reliability assessment method for complex road scenes disclosed by the invention, shown in FIG. 1, comprises in sequence:
acquiring scene road topology information and host-vehicle point cloud data; obtaining a bounding-box image based on the host-vehicle point cloud data and the road topology information, and representing the bounding boxes by a set of points in a two-dimensional Euclidean plane; screening out the extreme conditions that degrade the collaborative perception model in a complex road scene; and evaluating the reliability of the collaborative perception model under the screened extreme conditions.
The embodiments are described in detail below.
The invention aims to evaluate the reliability of a collaborative perception model in a complex road scene, taking an unprotected left turn at a signalized intersection as an example.
Step S1: acquiring scene road topology information and host-vehicle point cloud data and performing data preprocessing; the host vehicle is any vehicle selected in the scene.
S11, generating a signalized intersection, a host vehicle and surrounding vehicles in simulation software, acquiring the map information of the signalized intersection, and generating road topology information from it;
S12, while the host vehicle performs the unprotected left turn, obtaining host-vehicle point cloud data from its on-board sensors and preprocessing the data for input to the collaborative perception model, as shown in FIG. 2, wherein: the preprocessing comprises cleaning the raw point cloud data, filling default values, and removing outliers and redundant values, and the collaborative perception model is a model that perceives and predicts road conditions and the traffic environment by integrating information from multiple sensors, from infrastructure, and from other vehicles.
Step S2: obtaining a bounding-box image based on the host-vehicle point cloud data and the road topology information, and representing the bounding boxes by a set of points in a two-dimensional Euclidean plane.
S21, inputting the host-vehicle point cloud data and the road topology information into the collaborative perception model for perception fusion to obtain a multi-vehicle perception image and radar point cloud data;
S22, obtaining a bounding-box image by integrating the feature information of the host-vehicle point cloud data with the multi-vehicle perception image and radar point cloud data obtained in S21, as shown in FIG. 3;
S23, mapping the scene into a two-dimensional Euclidean plane and representing the bounding-box image by a set of points in the plane; the mapping is an affine transformation, so points on the edges of a bounding box remain collinear.
Step S3: screening out the extreme conditions, i.e. scene parameters, that degrade the collaborative perception model in a complex road scene.
The measure of degradation of the collaborative perception model is its precision z under the current scene condition, defined as the overlap rate of the predicted box and the real box:
z = Area(Pre_i ∩ Real_i) / Area(Pre_i ∪ Real_i)
where Real_i is the circumscribed rectangle of the i-th object in the scene, and Pre_i is the box output by the collaborative perception model from the point cloud data of the i-th object.
If no coordinate component of any point in Pre_i falls within the corresponding dimension range of Real_i, the boxes do not overlap and z = 0.
S31, based on a Markov decision process, obtaining the stationary distribution of the Markov process through policy iteration;
the Markov process takes the conditions in the scene as the state set S. In this embodiment the state set is defined as S = {V_max, A_max, Vol_0, Light, Latency}, as shown in FIG. 4, where V_max is the maximum speed of the host vehicle, A_max is the maximum acceleration of all vehicles, Vol_0 is the traffic volume, Light is the illumination condition of the scene, and Latency is the communication delay of the connected vehicles. The probability of the agent selecting each state is the state transition probability T of each action in the current state, the conditions to be changed at the next moment form the action set A, and the reward obtained by the agent from the environment is the reward function R;
the flow for obtaining the stationary distribution of the Markov process is shown in FIG. 5;
at the initial moment, the agent traverses the state set and calculates the reward function R(s, a, s') in each state, where s, s' ∈ S and a ∈ A; s denotes the state at the current moment, s' the state at the next moment, and a the action at the current moment. The mean of the initial reward values represents the performance of the collaborative perception model under normal conditions, and the sequence of reward values is normalized and divided by Card(S) to obtain the initial probability distribution, where Card(S) denotes the number of elements of the set S;
the reward function is calculated as
R(s, a, s') = z(s) - z(s')
where z(s) is the precision of the collaborative perception model in state s, and z(s') is its precision in state s'. When the model precision in the next state is higher than in the current state, the reward is negative; when it is lower, the reward is positive, which means the agent tends to select states with lower model precision, i.e. the conditions that degrade the model;
the reward function at the initial moment is
R(a, s) = z(s);
at step i+1, the state set is traversed according to the state transition probability T_i(s, a, s') of step i, the reward function R(s, a, s') is calculated in each state, and the value function V_{i+1}(s) in each state is calculated as
V_{i+1}(s) = Σ_{s'} T_i(s, a, s') [ R(s, a, s') + γ V_i(s') ]
where γ is the discount rate of the feedback; γ is taken as 0.5, 0.7 and 0.9 in this embodiment, and the sequence of value-function values is normalized and divided by Card(S) to obtain the state transition probabilities T_{i+1}(s, a, s');
whether the current value function has converged is then judged, i.e. whether there exist ε > 0 and N > 0 such that for all n > N
|V_{n+1} - V_n| < ε,
at which point the value function has converged. If it has not converged, the reward function and value function of the next step are calculated according to the state transition. Taking ε = 10⁻⁵, the value function converges after finitely many iterations, and the state transition probabilities converge to a vector π of dimension Card(S), which is taken as the stationary distribution of the Markov process;
S32, the components of the stationary distribution vector are arranged in ascending order as π' = (p_(1), p_(2), …, p_(n)); the third quartile Q_3 of the p_(i) is used as the threshold, and the components larger than the threshold, S_s = {p_i ∈ π | p_i > Q_3}, are screened out; the states of the state set corresponding to the subscripts of these components are the extreme conditions.
Step S4: evaluating the reliability of the collaborative perception model under the screened extreme conditions.
S41, extracting features from the multi-vehicle perception image and radar point cloud data output by the collaborative perception model under the extreme conditions S_e, and defining the resulting spatiotemporal data as D_f = (s_i, t_i), where s_i is the spatial feature observed in the current state and t_i is the temporal feature observed over k sliding time windows of the history; k = 5 in this embodiment, and i ∈ {1, 2, …, N} indexes the timestamps. t_i comprises speed, acceleration, MTTC, DARC and speed difference. The standard deviation (S_dev), coefficient of variation (C_v), j-th percentile (Q_j), quartile coefficient of variation (Q_cv), and magnitude of variation (A_v) are used to describe the volatility of the driving data at time t_i, as in the sketch below;
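A sketch of these volatility descriptors, assuming a plain signal window as input and j = 85 as an illustrative percentile (the application does not fix j):

```python
import numpy as np

def volatility_features(x: np.ndarray, j: int = 85) -> dict:
    """Volatility descriptors of one driving-signal window (speed,
    acceleration, ...) as listed in S41."""
    q1, qj, q3 = np.percentile(x, [25, j, 75])
    return {
        "S_dev": x.std(),                     # standard deviation
        "C_v": x.std() / abs(x.mean()),       # coefficient of variation
        f"Q_{j}": qj,                         # j-th percentile
        "Q_cv": (q3 - q1) / (q3 + q1),        # quartile coefficient of variation
        "A_v": x.max() - x.min(),             # magnitude of variation
    }
```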
S42, inputting the spatiotemporal data obtained in step S41 into a Bayesian neural network (BNN) to obtain the uncertainty parameters of the model;
specifically: define the training set D = (x_i, y_i) and let the weights w_i of the collaborative perception algorithm follow a normal distribution, where x_i is the input data and y_i is the predicted data; the uncertainty of the collaborative perception model on data x_i is described by its predictive distribution over the real data Y, obtained with a BNN-based algorithm;
given the data set, the posterior distribution of the algorithm is
P(w | D) = P(D | w) P(w) / P(D)
where w are the perception weights;
the loss function of the algorithm is the variational free energy
L(θ) = KL[ q(w | θ) ‖ P(w) ] - E_{q(w|θ)}[ log P(D | w) ]
and the posterior is approximated by variational inference:
θ* = argmin_θ KL[ q(w | θ) ‖ P(w | D) ],
obtaining the uncertainty of the model.
S43, modeling reliability based on the uncertainty to evaluate the reliability of the collaborative perception model;
the reliability modeling method comprises the following steps:
fault events of different degrees are quantified against the normal state through the KL divergence
D_KL( P̂ ‖ P ) = Σ P̂ log( P̂ / P )
where P̂ is the collaborative perception information output by the algorithm in the unprotected left-turn scene under the extreme conditions, and P is obtained directly from the simulation scene;
the probability of prediction failure at the temporal level is calculated by comparing P̂(D_f | t_i), the perception information based on t_i on the spatiotemporal dataset D_f, with P(D_f | t_i), the t_i-based information obtained from the simulation scene at the temporal level;
the probability of prediction failure at the spatial level is calculated by comparing P̂(D_f | s_i), the perception information based on s_i on the spatiotemporal dataset D_f, with P(D_f | s_i), the s_i-based information obtained from the simulation scene at the spatial level;
taking the uncertainty of step S42 into account, the reliability of the collaborative perception model can be characterized as a combination of the temporal failure probability, the spatial failure probability, and the uncertainty obtained in step S42.
The above description is only illustrative of the preferred embodiments of the application and is not intended to limit the scope of the application in any way. Any alterations or modifications of the application, which are obvious to those skilled in the art based on the teachings disclosed above, are intended to be equally effective embodiments, and are intended to be within the scope of the appended claims.

Claims (9)

1. A collaborative perception reliability assessment method for complex road scenes, characterized by comprising the following steps:
step S1: acquiring scene road topology information and host-vehicle point cloud data and performing data preprocessing;
step S2: obtaining a bounding-box image based on the host-vehicle point cloud data and the road topology information, and representing the bounding boxes by a set of points in a two-dimensional Euclidean plane;
step S3: screening out the extreme conditions that degrade the collaborative perception model in a complex road scene;
step S4: evaluating the reliability of the collaborative perception model under the screened extreme conditions.
2. The collaborative perception reliability assessment method of claim 1, wherein the host vehicle is any vehicle selected in the scene;
step S1 specifically comprises:
S11, acquiring map information of the scene and generating road topology information from it;
S12, obtaining host-vehicle point cloud data from the on-board sensors of the host vehicle and preprocessing the data for input to the collaborative perception model, wherein: the preprocessing comprises cleaning the raw point cloud data, filling default values, and removing outliers and redundant values, and the collaborative perception model is a model that perceives and predicts road conditions and the traffic environment by integrating information from multiple sensors, from infrastructure, and from other vehicles.
3. The collaborative perception reliability assessment method of claim 1, wherein representing the bounding boxes by a set of points in a two-dimensional Euclidean plane in step S2 comprises:
S21: inputting the host-vehicle point cloud data and the road topology information into the collaborative perception model for perception fusion to obtain a multi-vehicle perception image and radar point cloud data;
S22: obtaining a bounding-box image by integrating the feature information of the host-vehicle point cloud data with the multi-vehicle perception image and the radar point cloud data;
S23: mapping the scene and the bounding-box image into a two-dimensional Euclidean plane and representing the bounding-box image by a set of points in the plane.
4. The collaborative perception reliability assessment method of claim 1, wherein the measure of degradation of the collaborative perception model in step S3 is the precision z of the collaborative perception model under the current scene condition, defined as the overlap rate of the predicted box and the real box:
z = Area(Pre_i ∩ Real_i) / Area(Pre_i ∪ Real_i)
where Real_i is the circumscribed rectangle of the i-th object in the scene, and Pre_i is the box output by the collaborative perception model from the point cloud data of the i-th object.
5. The collaborative perception reliability assessment method of claim 4, wherein screening out the extreme conditions that degrade the collaborative perception model in a complex road scene in step S3 comprises:
S31, based on a Markov decision process, obtaining the stationary distribution of the Markov process through policy iteration:
the Markov process takes the conditions in the scene as the state set S, the probability of the agent selecting each state as the state transition probability T of each action in the current state, the conditions to be changed at the next moment as the action set A, and the reward obtained by the agent from the environment as the reward function R;
at the initial moment, the agent traverses the state set and calculates the reward function R(s, a, s') in each state, where s, s' ∈ S and a ∈ A; the mean of the initial reward values represents the performance of the collaborative perception model under normal conditions, and the sequence of reward values is normalized and divided by Card(S) to obtain the initial probability distribution, where Card(S) denotes the number of elements of the set S;
at step i+1, the state set is traversed according to the state transition probability T_i(s, a, s') of step i, and the reward function R(s, a, s') and the value function V_{i+1}(s) in each state are calculated as
V_{i+1}(s) = Σ_{s'} T_i(s, a, s') [ R(s, a, s') + γ V_i(s') ]
where γ is the discount rate of the feedback;
the sequence of value-function values is normalized and divided by Card(S) to obtain the next state transition probabilities T_{i+1}(s, a, s');
whether the current value function has converged is judged; if not, the reward function and value function of the next step are calculated according to the state transition; after finitely many iterations the value function converges, and the state transition probabilities converge to a Card(S)-dimensional vector π = (p_1, p_2, …, p_n), i.e. the stationary distribution of the Markov process;
S32, the components of the stationary distribution vector are arranged in ascending order as π' = (p_(1), p_(2), …, p_(n)); the third quartile Q_3 of the p_(i) is taken as the threshold, and the components larger than the threshold, S_s = {p_i ∈ π | p_i > Q_3}, are screened out; the states of the state set corresponding to the subscripts of these components are the extreme conditions.
6. The collaborative perception reliability assessment method of claim 5, wherein the reward function of the Markov decision process is calculated as
R(s, a, s') = z(s) - z(s')
where z(s) is the precision of the collaborative perception model in state s, z(s') is its precision in state s', and a is the action at the current moment;
the reward function at the initial moment is:
R(a, s) = z(s).
7. The collaborative perception reliability assessment method of claim 1, wherein in step S4 the reliability of the collaborative perception model is evaluated under the screened extreme conditions by:
S41, extracting features from the multi-vehicle perception image and radar point cloud data output by the collaborative perception model under the extreme conditions;
S42, inputting the spatiotemporal data obtained in step S41 into a Bayesian neural network to obtain the uncertainty parameters of the model;
S43, modeling reliability based on the uncertainty parameters of step S42 to evaluate the reliability of the collaborative perception model.
8. The collaborative perception reliability assessment method of claim 7, wherein step S42 specifically comprises:
defining the training set D = (x_i, y_i) and letting the weights w_i of the collaborative perception algorithm follow a normal distribution, where x_i is the input data and y_i is the predicted data; the uncertainty of the collaborative perception model on data x_i is described by its predictive distribution over the real data Y, obtained with a BNN-based algorithm;
given the data set, the posterior distribution of the algorithm is
P(w | D) = P(D | w) P(w) / P(D)
where w are the perception weights;
the loss function of the algorithm is the variational free energy, and the posterior is approximated by variational inference:
θ* = argmin_θ KL[ q(w | θ) ‖ P(w | D) ],
obtaining the uncertainty parameters of the model.
9. The collaborative perception reliability assessment method of claim 7, wherein the reliability modeling based on the uncertainty parameters in step S43 comprises:
quantifying fault events of different degrees against the normal state through the KL divergence
D_KL( P̂ ‖ P ) = Σ P̂ log( P̂ / P )
where P̂ is the collaborative perception information under the processed extreme conditions, and P is obtained directly from the simulation scene;
calculating the probability of prediction failure at the temporal level and the probability of prediction failure at the spatial level,
where D_f = (s_i, t_i) is the processed spatiotemporal data, s_i denotes spatial features, and t_i denotes temporal features;
P̂(D_f | t_i) is the perception information based on t_i on the spatiotemporal dataset D_f at the temporal level, and P(D_f | t_i) is the t_i-based information obtained from the simulation scene at the temporal level;
P̂(D_f | s_i) is the perception information based on s_i on the spatiotemporal dataset D_f at the spatial level, and P(D_f | s_i) is the s_i-based information obtained from the simulation scene at the spatial level;
taking the uncertainty parameter of step S42 into account, the reliability of the collaborative perception model is characterized as a combination of the temporal failure probability, the spatial failure probability, and the uncertainty parameter obtained in step S42.
Priority Applications (1)

Application CN202410379723.0A, priority date 2024-03-29, filing date 2024-03-29: Collaborative perception reliability assessment method for complex road scene (pending).

Publications (1)

Publication number CN118280109A, published 2024-07-02.

Family

ID: 91648293

Country Status (1)

Country: CN
