CN115544888A - Dynamic scene boundary assessment method based on physical mechanism and machine learning hybrid theory

Info

Publication number
CN115544888A
CN115544888A (application CN202211269248.9A)
Authority
CN
China
Prior art keywords
scene
vehicle
boundary
risk
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211269248.9A
Other languages
Chinese (zh)
Inventor
孙博华
张宇飞
吴官朴
马芳武
赵帅
翟洋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jilin University
Original Assignee
Jilin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jilin University filed Critical Jilin University
Priority to CN202211269248.9A
Publication of CN115544888A
Legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 30/00 Computer-aided design [CAD]
    • G06F 30/20 Design optimisation, verification or simulation
    • G06F 30/27 Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01M TESTING STATIC OR DYNAMIC BALANCE OF MACHINES OR STRUCTURES; TESTING OF STRUCTURES OR APPARATUS, NOT OTHERWISE PROVIDED FOR
    • G01M 17/00 Testing of vehicles
    • G01M 17/007 Wheeled or endless-tracked vehicles
    • G01M 17/0078 Shock-testing of vehicles
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 Machine learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Informatics (AREA)
  • Computer Hardware Design (AREA)
  • Geometry (AREA)
  • Data Mining & Analysis (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses a dynamic scene boundary evaluation method based on a hybrid theory of physical mechanisms and machine learning, which comprises the following steps: first, solving the dynamic scene boundary through physical modeling; second, solving the boundary between the danger domain and the safety domain of the dynamic scene with a support vector machine; third, solving the boundaries between the different risk levels within the danger domain with support vector regression; fourth, evaluating the dynamic scene boundary on the basis of the hybrid theory. The advantages are: a complete and standardized process for constructing the physical model of the target scene is established; collision detection between vehicles and calculation of the scene risk degree are made convenient; the dynamic scene boundary is solved in a simple, fast and efficient manner; dangerous scene conditions are separated from safe scene conditions; and the scene danger domain and safety domain are divided according to the training samples.

Description

Dynamic scene boundary assessment method based on physical mechanism and machine learning hybrid theory
Technical Field
The invention relates to a scene boundary assessment method, in particular to a dynamic scene boundary assessment method based on a physical mechanism and machine learning hybrid theory.
Background
At present, intelligent connected driving has become one of the mainstream directions of automobile technology development, and intelligent networked automobiles are the trend of the future. Before an intelligent networked automobile may drive on public roads, its driving safety must be fully verified against the relevant standards. To fully test the safety of an intelligent networked automobile and the reliability of its intelligent algorithms, the vehicle must undergo simulation tests, proving-ground tests and road tests in sequence. Although these three test methods differ in stage, test flow and implementation, all of them require specifically designed test scenarios to execute the test process. A test scenario should, first, be challenging enough to probe the vehicle's ability to handle dangerous driving situations and, second, have a perceivable level of difficulty, so that this ability can be quantified. It is therefore necessary to determine the boundaries of the dynamic scene within the test scenario space, such as the boundary between the danger domain and the safety domain and the boundaries between the different risk levels within the danger domain, and to select test scenarios of known danger and known risk degree to test the intelligent networked automobile. Doing so greatly improves the efficiency and credibility of the tests and accelerates the deployment of intelligent networked automobiles.
Methods that solve dynamic scene boundaries from a physical mechanism analyze the motion and interaction of the vehicle under test and the surrounding target vehicles by establishing a physical model of the target scene and combining it with vehicle kinematics, studying the problem at the theoretical level from the viewpoint of kinematics and geometry. Their advantage is that the resulting dynamic scene boundaries are accurate, and misjudgment between danger-domain and safety-domain scenes essentially does not occur. Their drawback is that, because real road driving conditions are not considered, many scenes in the danger domain occur with extremely low probability on real roads, so testing an intelligent networked automobile with such scenes has little practical significance. Methods that solve dynamic scene boundaries with machine learning, in contrast, start from real road driving data in a natural driving database and obtain the dynamic scene boundary through model self-learning with a machine learning algorithm; the resulting boundary reflects real traffic, but its accuracy depends on the quantity and quality of the data. The two kinds of method thus each have their advantages and can complement each other; combining them into a hybrid theory based on physical mechanisms and machine learning is highly conducive to solving the scene boundary evaluation problem.
Chinese patent CN202210941004.4 discloses an automatic driving test scenario generation method, apparatus, device and storage medium that can generate high-risk automatic driving test scenarios from target traffic participant data, but it neither classifies the risk of these test scenarios nor finds the danger-domain boundary of the scenario space. Chinese patent CN202210804060.3 discloses an automatic driving test scenario generation method, apparatus, vehicle and storage medium that can generate a simulation test scenario by loading the key parameter intervals of the simulation test scenario into a scenario design document and using a preset script. Chinese patent CN202210741420.X discloses an automatic simulation test system for intelligent driving and related equipment, whose scenario generation module can create test scenarios from the driving parameters of the host vehicle, generalize them, and generate one or more test scenarios for testing an intelligent driving algorithm; however, the risk of these scenarios and the boundary of the whole test scenario space remain unknown.
Disclosure of Invention
The object of the invention is to determine the boundaries of the dynamic scene in the test scenario space, such as the boundary between the danger domain and the safety domain and the boundaries between the different risk levels within the danger domain, by means of a hybrid theory based on physical mechanisms and machine learning. This lays a foundation for selecting scenes of known danger and known risk degree to test intelligent networked automobiles, greatly improves the efficiency and credibility of such tests, and accelerates the deployment of intelligent networked automobiles.
To achieve this object, the invention provides a dynamic scene boundary evaluation method based on a hybrid theory of physical mechanisms and machine learning, which comprises the following steps:
firstly, solving the dynamic scene boundary through physical modeling, wherein the specific process is as follows:
Step 1: establish a physical model of the target scene. First, determine the type of target scene to be studied, such as a cut-in scene, a cut-out scene, a car-following scene, a highway exit scene or a ramp merge scene. Then, for the selected target scene type, design its operational design domain (ODD), i.e. specify exactly what kind of scene the target scene is. In this scene research, "the vehicle" refers to the intelligent networked automobile under test, and "target vehicles" refers to all other vehicles in the scene. When designing the ODD, all static scene elements of the target scene are determined: the static scene elements related to the vehicles include the number and type of target vehicles and the initial positions of the vehicle and the target vehicles; the static scene elements related to the lanes include the number and type of lanes and the lanes to which the vehicle and the target vehicles belong; and the static scene elements related to the environment include the number and type of traffic signs, the illumination intensity and the weather conditions. Finally, according to the interaction mode between the vehicle and the surrounding target vehicles specified by the target scene type, discretize the dynamic scene elements of the target scene, such as the speeds and accelerations of the vehicle and the target vehicles and the trigger mode, trigger distance and trigger time of the target scene; the permutations and combinations of the discretized dynamic scene elements then yield the scene space of the target scene, and the physical model of the target scene is established, as illustrated by the sketch below;
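As a minimal, purely illustrative sketch of this last operation (all element names, ranges and step sizes are assumptions, not values from the patent), the scene space can be generated as the Cartesian product of the discretized dynamic scene elements:

```python
# Hypothetical sketch: build a cut-in scene space by discretizing dynamic
# scene elements and taking their Cartesian product. Names and ranges are
# illustrative, not taken from the patent.
from itertools import product

import numpy as np

# Discretized dynamic scene elements (illustrative values)
ego_speed = np.arange(10.0, 30.1, 5.0)          # m/s, speed of the vehicle
target_speed = np.arange(10.0, 30.1, 5.0)       # m/s, speed of the target vehicle
trigger_distance = np.arange(10.0, 50.1, 10.0)  # m, distance at which the cut-in triggers
lateral_accel = np.arange(0.5, 2.01, 0.5)       # m/s^2, lateral acceleration of the cut-in

# Permutation/combination of the discretized elements yields the scene space
scene_space = [
    {"v_ego": v_e, "v_tgt": v_t, "d_trig": d, "a_lat": a}
    for v_e, v_t, d, a in product(ego_speed, target_speed, trigger_distance, lateral_accel)
]
print(f"scene space size: {len(scene_space)}")  # 5 * 5 * 5 * 4 = 500 scenes
```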
Step 2: model the driving trajectories of the vehicle and the target vehicle based on vehicle kinematics. On the basis of the physical model of the target scene obtained in step 1, the driving trajectories of the vehicle and the target vehicle are solved by combining the vehicle kinematic relationships: the static scene elements of the target scene contain the initial position information of the vehicle and the target vehicle, the dynamic scene elements contain the time-varying speed and acceleration sequences of both vehicles, and from these the driving trajectories are calculated. The trajectory modeling method is the same for the vehicle and the target vehicle and can take two forms: the first establishes, with the center of the vehicle as reference point, trajectory models of the left front, right front, left rear and right rear contour points; the second takes one of those contour points as the reference point and establishes trajectory models of the remaining contour points. The two forms differ only in the chosen reference point; the modeling principle and method are the same, and in practice the reference point that is most convenient for calculation is selected according to the specific problem;
Taking the right front contour point of the vehicle as the reference point for modeling as an example, let the position of this point at the initial time t_0 be the coordinate origin. After the vehicle has driven to time t_1, the coordinates of the right front contour point are expressed as:
p_ex^rf = ∫_{t0}^{t1} v_e(t)·cos θ_e(t) dt,  p_ey^rf = ∫_{t0}^{t1} v_e(t)·sin θ_e(t) dt    (1)
where v_e(t) is the speed of the vehicle at time t, θ_e(t) is the heading angle of the vehicle at time t, p_ex^rf is the abscissa of the right front contour point at time t_1, and p_ey^rf is its ordinate at time t_1;
Let the length of the vehicle be a_e and its width b_e, and approximate the vehicle contour as a rectangle. Then the coordinates (p_ex^lf, p_ey^lf) of the left front contour point of the vehicle at time t_1 are:

p_ex^lf = p_ex^rf − b_e·sin θ_e(t_1),  p_ey^lf = p_ey^rf + b_e·cos θ_e(t_1)    (2)
The coordinates (p_ex^rr, p_ey^rr) of the right rear contour point of the vehicle at time t_1 are:

p_ex^rr = p_ex^rf − a_e·cos θ_e(t_1),  p_ey^rr = p_ey^rf − a_e·sin θ_e(t_1)    (3)
The coordinates (p_ex^lr, p_ey^lr) of the left rear contour point of the vehicle at time t_1 are:

p_ex^lr = p_ex^rf − a_e·cos θ_e(t_1) − b_e·sin θ_e(t_1),  p_ey^lr = p_ey^rf − a_e·sin θ_e(t_1) + b_e·cos θ_e(t_1)    (4)
After the coordinates of each contour point of the vehicle are solved in this way, connecting the coordinates of each contour point at successive moments of the cut-in process gives the driving trajectory of the vehicle during the cut-in; a sketch of this computation follows;
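A minimal sketch of equations (1)-(4) as reconstructed above, assuming the speed and heading sequences are sampled at a fixed step dt (function and variable names are illustrative):

```python
import numpy as np

def contour_trajectories(v, theta, dt, a_e, b_e):
    """Integrate equations (1)-(4): trajectories of the four contour points,
    taking the right front contour point at t0 as the coordinate origin.
    v, theta: speed [m/s] and heading [rad] sequences sampled every dt seconds;
    a_e, b_e: vehicle length and width [m]."""
    # Equation (1): right front point by integrating the kinematics
    x_rf = np.cumsum(v * np.cos(theta)) * dt
    y_rf = np.cumsum(v * np.sin(theta)) * dt
    # Equations (2)-(4): remaining corners from the rectangle geometry
    x_lf, y_lf = x_rf - b_e * np.sin(theta), y_rf + b_e * np.cos(theta)
    x_rr, y_rr = x_rf - a_e * np.cos(theta), y_rf - a_e * np.sin(theta)
    x_lr, y_lr = x_rr - b_e * np.sin(theta), y_rr + b_e * np.cos(theta)
    return {"rf": (x_rf, y_rf), "lf": (x_lf, y_lf),
            "rr": (x_rr, y_rr), "lr": (x_lr, y_lr)}
```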
Step 3: solve the dynamic scene boundary through the separating axis theorem. Once the driving trajectories of the vehicle and the target vehicle are obtained, the interaction between them is further analyzed through the separating axis theorem to solve for their boundary states, and the dynamic scene boundary is determined from the scene risk indexes in the boundary state, such as the relative distance, relative speed and relative acceleration of the vehicle and the target vehicle;
On the basis of the established physical model and driving trajectory model, the dynamic scene boundary is solved through the separating axis theorem. Assuming that driving is a motion in the two-dimensional plane, each vehicle is modeled as an oriented bounding box, i.e. both the shape and the driving direction of the vehicle are taken into account. According to the separating axis theorem, for any two disjoint convex polyhedra there exists a separating axis such that the two polyhedra are spaced apart along that axis, and their projections onto the separating axis are also disjoint. For a single oriented bounding box, at most its two edge direction vectors need to be tested as candidate separating axes; for two oriented bounding boxes, at most four edge direction vectors need to be tested. As soon as any one of the four is a separating axis, the two oriented bounding boxes do not intersect, i.e. the vehicles do not collide. Let A denote the target vehicle and B the vehicle under test; according to the separating axis theorem, the two vehicles are judged not to collide when the following relation is satisfied:
|s · l| > d_A + d_B,  l ∈ {a_u, a_v, b_u, b_v}    (5)
where s is the distance vector between the center of the vehicle and the center of the target vehicle, l is a candidate projection axis (a normalized direction vector), a_u and a_v are the normalized direction vectors of the two sides of the target vehicle, b_u and b_v are the normalized direction vectors of the two sides of the vehicle, d_A is the projection half-length of the target vehicle's bounding box about its center point on the projection axis, and d_B is the corresponding projection half-length of the vehicle's bounding box about its center point;
d_A can be obtained from:

d_A = h_au·|a_u · l| + h_av·|a_v · l|    (6)

where h_au and h_av denote the positive half side lengths of the target vehicle in the a_u and a_v directions, respectively;
d_B can be obtained from:

d_B = h_bu·|b_u · l| + h_bv·|b_v · l|    (7)

where h_bu and h_bv denote the positive half side lengths of the vehicle in the b_u and b_v directions, respectively; a sketch of the resulting collision test follows;
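A minimal sketch of this collision test, i.e. the standard separating-axis check for two oriented bounding boxes of equations (5)-(7) (function and parameter names are assumptions):

```python
import numpy as np

def obb_separated(center_a, heading_a, half_a, center_b, heading_b, half_b):
    """Separating-axis test for two oriented bounding boxes in the plane
    (equation (5)). half_* = (half length, half width). Returns True when a
    separating axis exists, i.e. the two vehicles do not collide."""
    s = np.asarray(center_b, float) - np.asarray(center_a, float)

    def axes(heading):
        u = np.array([np.cos(heading), np.sin(heading)])   # longitudinal edge direction
        v = np.array([-np.sin(heading), np.cos(heading)])  # lateral edge direction
        return u, v

    a_u, a_v = axes(heading_a)
    b_u, b_v = axes(heading_b)
    for l in (a_u, a_v, b_u, b_v):  # the four candidate separating axes
        d_a = half_a[0] * abs(a_u @ l) + half_a[1] * abs(a_v @ l)  # equation (6)
        d_b = half_b[0] * abs(b_u @ l) + half_b[1] * abs(b_v @ l)  # equation (7)
        if abs(s @ l) > d_a + d_b:   # projections separated on this axis
            return True
    return False
```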
According to the separating axis theorem, the scene boundary corresponding to the boundary state in which the vehicle cuts in behind the target vehicle and the right front contour point of the vehicle just avoids colliding with the right rear contour point of the target vehicle is obtained as:
p_ex^rf(t_p1) = D_1 + v_o·(t_p1 − t_0)    (8)

where t_p1 is the moment at which the right front contour point of the vehicle and the right rear contour point of the target vehicle lie on the same straight line, obtained from the longitudinal displacement of the trajectory of the right front contour point of the vehicle, v_o is the longitudinal speed of the target vehicle, and D_1 is the initial distance between the vehicle and the target vehicle at time t_0;
Similarly, the scene boundary corresponding to the boundary state in which the vehicle cuts in ahead of the target vehicle and the left rear contour point of the vehicle just avoids colliding with the right front contour point of the target vehicle is obtained as:
p_ex^lr(t_p2) = D_1 + v_o·(t_p2 − t_0)    (9)

where t_p2 is the moment at which the left rear contour point of the vehicle and the right front contour point of the target vehicle lie on the same straight line, obtained from the longitudinal displacement of the trajectory of the left rear contour point of the vehicle;
therefore, the scene boundary of the target dynamic scene under the set ODD is obtained in a physical modeling mode;
secondly, solving the boundary of the dangerous domain and the safety domain of the dynamic scene through a support vector machine, wherein the specific process is as follows:
Step 1: scene data acquisition and data preprocessing. The scene data acquisition vehicle is equipped with lidar, millimeter-wave radar, high-precision GPS/inertial navigation, high-definition cameras, an on-board CAN bus, a lane line sensor, a rainfall sensor and an illumination sensor, and all sensor data are collected at a fixed acquisition period. The collected sensor data take the following forms: frame-based spatial 3D point clouds generated by the lidar; frame-based obstacle state lists generated by the millimeter-wave radar; time-series positioning and attitude data generated by the high-precision GPS/inertial navigation; frame-based color images generated by the high-definition cameras and the lane line sensor; time-series vehicle operation and motion state data generated by the on-board CAN bus; and time-series voltage data generated by the rainfall and illumination sensors. The data preprocessing specifically comprises: time alignment and spatial alignment of the sensor data; validity checking of the sensor data; and generation of the on-board bus alignment signal, vehicle state alignment signal and multi-modal environment sensor alignment signal used in the subsequent steps;
Step 2: extract dangerous scene conditions and safe scene conditions. According to the target scene type and its ODD selected in the first step, segments of scene data consistent with the target scene and its ODD are cut out of all the preprocessed data by manually watching the videos. Each complete segment of target scene data is called a scene condition and represents one complete target scene event on a real road. In the scene condition extraction stage, the danger of a scene is characterized by several vehicle state quantities, for example the longitudinal speed, longitudinal acceleration, lateral acceleration and yaw rate of the vehicle in the on-board bus alignment signal and vehicle state alignment signal obtained in step 1. Because the same driving maneuver produces scenes of different danger at different longitudinal speeds, the longitudinal speed of the vehicle is divided into several intervals, the longitudinal acceleration, lateral acceleration and yaw rate are taken as scene risk indexes, and extraction standards for dangerous scene conditions are established for these indexes in the different speed intervals;
Taking the cut-in scene and the ODD designed for it as the target scene, the longitudinal speed of the vehicle is divided into several intervals, and dangerous-condition standards for the longitudinal acceleration, lateral acceleration and yaw rate of the vehicle in the different longitudinal speed intervals are designed according to the characteristics of the cut-in scene;
Step 3: solve the boundary between the danger domain and the safety domain of the dynamic scene through a support vector machine. First, a moment capable of characterizing the danger of the target scene is selected as the key moment, and the interaction between the vehicle and the target vehicle at that moment is studied as the basis for dividing the scene danger domain and safety domain. To improve the classification performance of the support vector machine, instead of the single vehicle state quantities of step 2, comprehensive physical quantities that fuse several state quantities of the vehicle and the target vehicle are selected as scene risk indexes, namely the time to collision (TTC), the time headway (THW), and the relative speed and relative acceleration of the vehicle and the target vehicle. TTC represents the time needed for the two vehicles to collide if they maintain their current motion states from the current moment; the smaller its value, the more dangerous the scene. It is calculated as:
TTC = ΔR / (v_r − v_f)    (10)
where ΔR is the relative distance between the two vehicles, v_r is the speed of the rear vehicle, and v_f is the speed of the front vehicle;
THW represents the time needed for the rear vehicle to reach the current position of the front vehicle if it maintains its current motion state; the smaller its value, the more dangerous the scene. It is calculated as:
THW = ΔR / v_r    (11)
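A minimal sketch of equations (10) and (11) (function names are assumptions; returning infinity when the rear vehicle is not closing in is an added convention, not from the patent):

```python
def ttc(delta_r, v_rear, v_front):
    """Time to collision, equation (10): finite only when the rear vehicle
    is closing in on the front vehicle."""
    closing_speed = v_rear - v_front
    return delta_r / closing_speed if closing_speed > 0 else float("inf")

def thw(delta_r, v_rear):
    """Time headway, equation (11)."""
    return delta_r / v_rear if v_rear > 0 else float("inf")
```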
Next, the TTC, THW, relative speed and relative acceleration of the vehicle and the surrounding vehicles at the key moment are calculated for all dangerous and safe scene conditions. Finally, the calculated scene risk indexes, together with the risk label of each scene condition (dangerous or safe), are input as training samples into the support vector machine algorithm, and the boundary between the danger domain and the safety domain of the dynamic scene is output through model self-learning;
Taking the cut-in scene and the ODD designed for it as the target scene, and according to the characteristics of the cut-in scene, the moment at which the center of the vehicle coincides with the lane line is selected as the key moment, and the scene risk indexes of all scene conditions at the key moment are calculated, including TTC, THW, and the relative speed and relative acceleration of the vehicle and the target vehicle. Let the training samples be (x_i, y_i), i = 1, ..., n, where n is the total number of training samples, i.e. the total number of dangerous and safe scene conditions, x_i is the feature vector composed of the scene risk indexes of the i-th scene condition, and y_i is the risk label of the i-th scene condition; the label of a dangerous scene condition is set to +1 and that of a safe scene condition to −1. The equation of a decision surface capable of separating the danger domain and the safety domain is:
ξ·x + c = 0    (12)
where ξ is the normal vector of the decision surface, which determines its direction; c is the displacement term, which determines the distance between the decision surface and the origin; and x is the feature vector composed of the scene risk indexes of a scene condition;
The distance r from a feature vector x to the decision surface is:

r = |ξ·x + c| / ‖ξ‖    (13)
The decision surface must classify the training samples correctly, so every training sample satisfies:

ξ·x_i + c ≥ +1 when y_i = +1;  ξ·x_i + c ≤ −1 when y_i = −1    (14)
The training samples on either side of the decision surface that are closest to it are called support vectors, and the sum γ of the distances from two heterogeneous support vectors to the decision surface is the classification margin:

γ = 2 / ‖ξ‖    (15)
To find the decision surface with the best classification performance, the classification margin is maximized, i.e. the following is solved:

max_{ξ,c} 2 / ‖ξ‖, equivalently min_{ξ,c} (1/2)‖ξ‖²    (16)
meanwhile, the following conditions are satisfied:
y_i(ξ·x_i + c) ≥ 1,  i = 1, 2, ..., n    (17)
Finding the optimal decision surface is a convex quadratic programming problem; its dual is obtained with the Lagrange multiplier method. Adding a Lagrange multiplier α_i ≥ 0 to each constraint of equation (17), the Lagrangian function is constructed as:

L(ξ, c, α) = (1/2)‖ξ‖² + Σ_{i=1..n} α_i·(1 − y_i(ξ·x_i + c))    (18)
Setting the partial derivatives of the Lagrangian L with respect to ξ and c to zero yields:

ξ = Σ_{i=1..n} α_i y_i x_i    (19)

0 = Σ_{i=1..n} α_i y_i    (20)
The optimization problem of equation (18) is then further transformed into the convex quadratic dual problem in the parameters α_i:

max_α Σ_{i=1..n} α_i − (1/2) Σ_{i=1..n} Σ_{j=1..n} α_i α_j y_i y_j (x_i · x_j)
subject to Σ_{i=1..n} α_i y_i = 0, α_i ≥ 0, i = 1, 2, ..., n    (21)
where α_j, x_j and y_j are indexed in the same way as α_i, x_i and y_i in the dual formulation;
Solving yields the optimal solution α* = (α_1*, α_2*, ..., α_n*) of α_i; the optimal solution ξ* of ξ is then obtained from equation (19), and the optimal solution c* of c from equation (17).
The boundary equation f(x) between the danger domain and the safety domain of the dynamic scene output by the support vector machine is therefore:

f(x) = ξ*·x + c*    (22)
When the training samples are not linearly separable, they are mapped from the original space to a higher-dimensional feature space in which they become linearly separable. Let φ(x) denote the feature vector obtained by mapping x to the high-dimensional feature space. The solution process involves computing φ(x_i)·φ(x_j), i.e. the inner product of x_i and x_j after mapping to the feature space; a kernel function is therefore introduced to evaluate the dot product between any two feature vectors mapped into the high-dimensional space:

κ(x_i, x_j) = φ(x_i) · φ(x_j)    (23)

where κ(x_i, x_j) denotes the kernel function. Commonly used kernel functions include the following:
Linear kernel:

κ(x_i, x_j) = x_i · x_j    (24)

Polynomial kernel:

κ(x_i, x_j) = (x_i · x_j)^d    (25)

where d ≥ 1 is the degree of the polynomial;

Gaussian kernel:

κ(x_i, x_j) = exp(−‖x_i − x_j‖² / (2σ²))    (26)

where σ > 0 is the bandwidth of the Gaussian kernel;

Laplace kernel:

κ(x_i, x_j) = exp(−‖x_i − x_j‖ / σ)    (27)

where σ > 0;

Sigmoid kernel:

κ(x_i, x_j) = tanh(β·x_i·x_j + θ)    (28)

where tanh is the hyperbolic tangent function, β > 0 and θ < 0;
After the kernel function is introduced, the inner product need not be computed directly in the high-dimensional (possibly infinite-dimensional) feature space, and equation (21) becomes:

max_α Σ_{i=1..n} α_i − (1/2) Σ_{i=1..n} Σ_{j=1..n} α_i α_j y_i y_j κ(x_i, x_j)
subject to Σ_{i=1..n} α_i y_i = 0, α_i ≥ 0, i = 1, 2, ..., n    (29)
After solving, the boundary equation F(x) between the danger domain and the safety domain of the dynamic scene output by the support vector machine is:

F(x) = Σ_{i=1..n} α_i* y_i κ(x, x_i) + c*    (30)
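As a minimal, purely illustrative sketch of this step (scikit-learn's SVC stands in for the hand-derived solver above; the feature values below are invented, not measured data):

```python
# Hypothetical sketch: classify scene conditions into danger/safety domains.
import numpy as np
from sklearn.svm import SVC

# x: one row per scene condition -> [TTC, THW, relative speed, relative accel.]
# y: risk label, +1 dangerous scene condition, -1 safe scene condition
x = np.array([[1.2, 0.8, 5.1, 1.3],
              [6.5, 2.9, 0.7, 0.2],
              [0.9, 0.6, 6.0, 1.8],
              [8.0, 3.5, 0.3, 0.1]])   # illustrative values only
y = np.array([+1, -1, +1, -1])

clf = SVC(kernel="rbf", gamma="scale")  # Gaussian kernel, equation (26)
clf.fit(x, y)

# The learned decision function corresponds to F(x) of equation (30); its
# zero level set is the boundary between the danger domain and safety domain.
print(clf.decision_function(x))
print(clf.predict([[2.0, 1.0, 4.0, 1.0]]))
```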
thirdly, solving the risk degree boundaries of different grades in the dynamic scene risk domain through support vector regression, wherein the specific process is as follows:
Step 1: pre-divide the boundaries between the different risk levels in the danger domain according to TTC. To obtain accurate and complete scene risk-degree boundaries, the TTC at the key moment of every dangerous scene condition is calculated and a pre-division standard for the risk-degree boundaries of danger-domain scenes is formulated;
Taking the cut-in scene and its ODD as the target scene, and according to the characteristics of the cut-in scene, the moment at which the center of the vehicle coincides with the lane line is selected as the key moment, the TTC of all dangerous scene conditions at the key moment is calculated, and the following pre-division standard for the danger-domain scene risk-degree boundaries is formulated:
when TTC ∈ [0 s, 1 s], the risk level of the scene is a collision scene; when TTC ∈ (1 s, 3 s], it is an emergency scene; when TTC ∈ (3 s, 5 s], it is a conflict scene; when TTC ∈ (5 s, +∞), the scene is judged, because of its large TTC, to be a safe scene wrongly included among the dangerous scene conditions, and it is discarded directly. A sketch of this standard follows;
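A minimal sketch of the pre-division standard (the interval endpoints follow the text above; the function name is an assumption):

```python
def risk_level(ttc_s):
    """Pre-division standard for the cut-in target scene (TTC in seconds)."""
    if ttc_s <= 1.0:
        return "collision scene"
    if ttc_s <= 3.0:
        return "emergency scene"
    if ttc_s <= 5.0:
        return "conflict scene"
    return "safe scene (wrongly included, discard)"
```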
Step 2: select appropriate scene risk indexes to construct joint distributions. The scene risk indexes of the dangerous scene conditions at the key moment are calculated, including TTC, THW, and the relative speed and relative acceleration of the vehicle and the target vehicle. The indexes are combined in pairs to construct joint distributions; combining the pre-division of the scene risk-degree boundaries from step 1, the distribution of the dangerous scene conditions is inspected, and the joint distribution of scene risk indexes with the best separation is selected as the input of the support vector regression algorithm;
Step 3: solve the boundaries between the different risk levels in the danger domain of the dynamic scene through support vector regression. First, the scene boundary points that lack a boundary line in the selected joint distribution of scene risk indexes obtained in step 2 are extracted and used as training samples for the support vector regression algorithm. During extraction, the boundary points belonging to the same boundary line are taken as one training sample set and input to the support vector regression algorithm separately, so that each training sample set corresponds to exactly one scene risk-degree boundary. Finally, all scene risk-degree boundary lines are collected to obtain the boundaries between the different risk levels in the danger domain of the dynamic scene;
Let the training samples in a training sample set be (t_i, g_i), i = 1, ..., m, where m is the number of training samples in the set, i.e. the number of boundary points, t_i is the abscissa of the i-th boundary point in the joint distribution of the scene risk indexes, and g_i is its ordinate. The support vector regression algorithm tolerates a maximum deviation of ε between the model output f(t) and the ordinate g of a boundary point, i.e. a loss is incurred only when the absolute difference between f(t) and g exceeds ε. A tolerance band of width 2ε is constructed centered on f(t); a training sample that falls inside this band is considered correctly predicted;
let the output f (t) of the support vector regression algorithm be:
f(t) = η·t + q    (31)
where η is the normal vector of the decision surface, which determines its direction; q is the displacement term, which determines the distance between the decision surface and the origin; and t is the abscissa of a boundary point in the joint distribution of the scene risk indexes;
The solution problem is then formalized as:

min_{η,q} (1/2)‖η‖² + D Σ_{i=1..m} l_ε(f(t_i) − g_i)    (32)
where D is a regularization constant and l_ε is the ε-insensitive loss function:

l_ε(z) = 0 if |z| ≤ ε;  l_ε(z) = |z| − ε otherwise    (33)
Introducing the slack variables δ_i and δ_i′, equation (32) becomes:

min_{η,q,δ,δ′} (1/2)‖η‖² + D Σ_{i=1..m} (δ_i + δ_i′)
subject to f(t_i) − g_i ≤ ε + δ_i, g_i − f(t_i) ≤ ε + δ_i′, δ_i ≥ 0, δ_i′ ≥ 0, i = 1, 2, ..., m    (34)
Introducing the Lagrange multipliers μ_i ≥ 0, μ_i′ ≥ 0, α_i ≥ 0 and α_i′ ≥ 0, the Lagrangian function is constructed as:

L(η, q, α, α′, δ, δ′, μ, μ′) = (1/2)‖η‖² + D Σ_{i=1..m} (δ_i + δ_i′) − Σ_{i=1..m} μ_i δ_i − Σ_{i=1..m} μ_i′ δ_i′
 + Σ_{i=1..m} α_i (f(t_i) − g_i − ε − δ_i) + Σ_{i=1..m} α_i′ (g_i − f(t_i) − ε − δ_i′)    (35)
Setting the partial derivatives of the Lagrangian L with respect to η, q, δ_i and δ_i′ to zero yields:

η = Σ_{i=1..m} (α_i′ − α_i) t_i    (36)

0 = Σ_{i=1..m} (α_i′ − α_i)    (37)

α_i + μ_i = D    (38)

α_i′ + μ_i′ = D    (39)
The dual problem of equation (35) is then:

max_{α,α′} Σ_{i=1..m} [ g_i(α_i′ − α_i) − ε(α_i′ + α_i) ] − (1/2) Σ_{i=1..m} Σ_{j=1..m} (α_i′ − α_i)(α_j′ − α_j) t_i t_j
subject to Σ_{i=1..m} (α_i′ − α_i) = 0, 0 ≤ α_i ≤ D, 0 ≤ α_i′ ≤ D    (40)
where t_j, α_j and α_j′ are indexed in the same way as t_i, α_i and α_i′ in the dual formulation;
Solving the above for α_i and α_i′, and choosing any training sample with 0 < α_i < D, q is obtained as:

q = g_i + ε − Σ_{j=1..m} (α_j′ − α_j) t_j t_i    (41)
In practice, a more robust q is obtained by selecting several training samples satisfying 0 < α_i < D, solving for q with each of them, and averaging the results;
The scene risk-degree boundary equation f(t) output by support vector regression is thus:

f(t) = Σ_{i=1..m} (α_i′ − α_i) t_i t + q    (42)
If the training samples are not linearly separable, a kernel function is introduced as in step 3 of the second step. With t mapped to φ(t) in the high-dimensional feature space, η becomes:

η = Σ_{i=1..m} (α_i′ − α_i) φ(t_i)    (43)
After solving, the scene risk-degree boundary equation F(t) output by support vector regression is:

F(t) = Σ_{i=1..m} (α_i′ − α_i) κ(t, t_i) + q    (44)
wherein, κ (t) i ,t j )=φ(t i )φ(t j ) Representing a kernel function;
The scene risk-degree boundaries corresponding to all training sample sets are solved in this way, and together with the boundaries pre-divided in step 1 they give the boundaries between the different risk levels in the danger domain of the dynamic scene; a sketch of this fitting step follows;
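A minimal sketch of this fitting step (scikit-learn's epsilon-SVR stands in for the solver derived above; the boundary-point coordinates are invented for illustration):

```python
# Hypothetical sketch: fit one scene risk-degree boundary line from boundary
# points (t_i, g_i) in the chosen joint distribution of two risk indexes.
import numpy as np
from sklearn.svm import SVR

t = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0]).reshape(-1, 1)  # abscissas t_i
g = np.array([4.8, 3.9, 3.2, 2.8, 2.5, 2.3])                 # ordinates g_i

# epsilon is the half-width of the 2*epsilon tolerance band, and C plays the
# role of the regularization constant D in equation (32)
boundary = SVR(kernel="rbf", C=10.0, epsilon=0.1)
boundary.fit(t, g)

t_grid = np.linspace(0.5, 3.0, 50).reshape(-1, 1)
g_fit = boundary.predict(t_grid)   # the fitted boundary, cf. F(t) in (44)
```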
fourthly, evaluating the dynamic scene boundary based on the hybrid theory of the physical mechanism and the machine learning, wherein the specific process is as follows:
Step 1: consistency analysis between the physics-based and the learning-based dynamic scene risk-degree boundaries. The scene risk indexes at the dynamic scene boundary obtained from the physical mechanism and at that obtained from machine learning are calculated, the difference degree between the scene risk indexes corresponding to the two methods is computed, and a consistency standard is set according to the average difference degree over all risk indexes;
Taking the cut-in scene and the ODD designed for it as the target scene, the scene boundaries are described by functional relations among the scene risk indexes. The consistency analysis compares the difference degree of each scene risk index value at the dynamic scene boundary: each scene risk index is calculated at the boundary obtained in the first step (physical mechanism) and in the third step (machine learning), the difference degree of each index and the average difference degree over all indexes are computed, and, according to the characteristics of the cut-in scene, the consistency standard is set as follows:
when the average difference degree of all scene risk indexes is below 15%, the consistency is judged good; when it is between 15% and 30%, the consistency is judged general; when it is above 30%, the consistency is judged poor. A sketch of this rating follows;
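A minimal sketch of the consistency rating, assuming the difference degree of an index is its relative deviation between the two boundaries (the exact difference-degree formula is not specified in the text, so this is an assumption):

```python
import numpy as np

def consistency(physical_idx, learned_idx):
    """Average difference degree between the scene risk indexes at the
    physics-based and the learning-based boundary; the relative-deviation
    formula below is an illustrative assumption."""
    physical_idx = np.asarray(physical_idx, float)
    learned_idx = np.asarray(learned_idx, float)
    diff = np.abs(physical_idx - learned_idx) / np.abs(physical_idx)
    mean_diff = diff.mean()
    if mean_diff < 0.15:
        return mean_diff, "good"
    if mean_diff < 0.30:
        return mean_diff, "general"
    return mean_diff, "poor"
```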
Step 2: generate the high-confidence dynamic scene boundary based on the hybrid theory of physical mechanisms and machine learning. According to the consistency standard set in step 1, and on the same basis as that standard, the high-confidence scene boundary is generated according to the following principles:
when the consistency is good, offset weights are freely allocated to the dynamic scene boundaries obtained by the two solution methods according to the actual research needs: when the problem under study must give more weight to real road traffic conditions, the high-confidence dynamic scene boundary is biased toward the boundary obtained by machine learning, and when the problem is focused on theoretical analysis, it is biased toward the boundary obtained from the physical mechanism. When the consistency is general, this indicates that the machine learning solution deviates considerably because the amount of training data is small or its quality is low, and more offset weight is allocated to the physics-based method according to the average difference degree. When the consistency is poor, the deviation of the machine-learning boundary is very large, and the high-confidence dynamic scene boundary is obtained from the physics-based method alone;
Step 3: verify the dynamic scene boundary based on the hybrid theory of physical mechanisms and machine learning. According to the generated high-confidence dynamic scene boundary, the scene spaces of the three risk levels of the danger domain and the scene space of the safety domain are taken as four independent test scenario libraries, and scenes are sampled from the four libraries in turn to test the intelligent networked automobile until the accident rate of each library converges. If the converged accident rates of the four libraries increase with the scene risk level and show a clear step-like distribution, the generated dynamic scene boundary is effective. A sketch of this sampling loop follows.
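A minimal sketch of the verification loop (the convergence criterion, tolerance values and the run_test interface are assumptions, not from the patent):

```python
# Hypothetical sketch: sample scenes from one scenario library until the
# running accident rate converges; run_test stands for one closed-loop test
# of the vehicle under test and is assumed to return 1 (accident) or 0.
import random

def accident_rate(library, run_test, tol=1e-3, window=50, max_runs=10000):
    crashes, rate_prev, rate = 0, 0.0, 0.0
    for n in range(1, max_runs + 1):
        crashes += run_test(random.choice(library))
        rate = crashes / n
        if n % window == 0:
            if abs(rate - rate_prev) < tol:  # converged
                return rate
            rate_prev = rate
    return rate

# Expected result: the converged accident rates of the safety-domain,
# conflict, emergency and collision libraries increase step-wise with risk.
```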
The invention has the beneficial effects that:
The dynamic scene boundary evaluation method based on the hybrid theory of physical mechanisms and machine learning solves for the boundaries of the dynamic scene, such as the boundary between the danger domain and the safety domain and the boundaries between the different risk levels within the danger domain. It lays a foundation for selecting scenes of known danger and known risk degree to test intelligent networked automobiles, greatly improves the efficiency and credibility of such tests, and accelerates the deployment of intelligent networked automobiles. The specific beneficial effects are as follows:
1) The invention provides a method for establishing a physical model of a target scene. Firstly, designing an operation design domain of a target scene according to the type of the target scene, then discretizing dynamic scene elements of the target scene, and finally establishing a physical model of the target scene in a permutation and combination mode, thereby establishing a set of complete and standard construction process of the physical model of the target scene.
2) The invention provides a vehicle kinematics relationship-based method for modeling the driving tracks of a vehicle and a target vehicle. The method is based on a vehicle kinematic model, the driving track of each contour point of the vehicle in a target scene is solved, and compared with a common method for calculating the driving track of the vehicle center, the method is more convenient for collision detection of the vehicle and solving of the scene risk.
3) The invention provides a method for solving dynamic scene boundaries through a separation axis theorem. According to the method, the vehicles are simplified into the directed bounding boxes, whether the vehicles collide during interaction is detected through the separation axis theorem, so that the dynamic scene boundary is solved, and the method is simple, rapid and efficient.
4) The invention provides a method for extracting dangerous scene working conditions and safe scene working conditions. The method takes the driving state quantity of the vehicle as an index, and establishes a set of stepped dangerous scene working condition evaluation standards according to the characteristics of a target scene, so that the division of dangerous scene working conditions and safe scene working conditions is realized.
5) The invention provides a method for solving the boundary of a dangerous domain and a safe domain of a dynamic scene through a support vector machine. According to the method, by utilizing the characteristic that a support vector machine in machine learning is good at solving the two classification problems, the boundary of the dangerous domain and the safety domain of the dynamic scene is solved in a model self-learning mode, and the classification of the dangerous domain and the safety domain of the scene according to training samples is realized.
6) The invention provides a method for selecting a proper scene risk index to construct joint distribution. The method adopts a mode of combining risk indexes two by two to construct joint distribution, selects the joint distribution which is beneficial to classification of scene working condition distribution, provides effective input for a support vector regression algorithm, and provides basis for the boundary division of the risk degrees of different grades in the risk domain.
7) The invention provides a method for solving different-grade danger degree boundaries in a dynamic scene danger domain through support vector regression. According to the method, the characteristics that support vector regression in machine learning is good at solving the boundary fitting problem are utilized, the risk degree boundaries of different levels in the dynamic scene risk domain are solved in a model self-learning mode, and the division of the risk degree scenes of different levels in the scene risk domain according to training samples is achieved.
8) The invention provides a method for analyzing consistency of a dynamic scene risk boundary based on a physical mechanism and a dynamic scene risk boundary based on machine learning. According to the method, a set of consistency standard of the risk degree of the dynamic scene is established according to the average difference degree of the risk indexes of the scene and aiming at the characteristics of the target scene, so that the consistency analysis of the boundary of the dynamic scene is realized.
9) The invention provides a method for generating a high-confidence dynamic scene boundary based on a physical mechanism and a machine learning hybrid theory. The method is based on the consistency analysis result of the dynamic scene boundary, integrates a method based on a physical mechanism and machine learning, and establishes a hybrid theory based on the physical mechanism and the machine learning, so that the dynamic scene boundary with high confidence level is obtained.
10) The invention provides a method for verifying dynamic scene boundaries based on the hybrid theory of physical mechanisms and machine learning. By sampling scenes of different risk levels for intelligent networked automobile tests and taking the converged accident rate after a large number of tests as the index, the method effectively evaluates the quality of the dynamic scene boundaries generated by the hybrid theory and verifies the boundary generation method.
Drawings
Fig. 1 is a schematic overall flow chart of the dynamic scene boundary assessment method according to the present invention.
Fig. 2 is a block diagram of a method architecture of the dynamic scene boundary estimation method according to the present invention.
Fig. 3 is a schematic diagram of an exemplary embodiment of step one of the first step of the present invention.
Fig. 4 is a schematic diagram of an exemplary embodiment of step two of the first step of the present invention.
Fig. 5 is a schematic diagram of an exemplary embodiment of step three of the first step of the present invention.
Fig. 6 is an exemplary block diagram of a first step of the second step of the present invention.
Fig. 7 is an exemplary block diagram of step three of the second step of the present invention.
Fig. 8 is an exemplary operation result of step two of the third step according to the present invention.
Fig. 9 is an exemplary operation result of step three of the third step according to the present invention.
Detailed Description
Please refer to fig. 1 to 9:
the invention provides a dynamic scene boundary evaluation method based on a physical mechanism and machine learning hybrid theory, which comprises the following steps:
firstly, solving a dynamic scene boundary through physical modeling;
secondly, solving the boundary of a dangerous domain and a safety domain of the dynamic scene through a support vector machine;
thirdly, solving danger degree boundaries of different grades in the danger domain of the dynamic scene through support vector regression;
and fourthly, evaluating the boundary of the dynamic scene based on the hybrid theory of the physical mechanism and the machine learning.
The process of solving the dynamic scene boundary through physical modeling in the first step is as follows:
step one, establishing a physical model of a target scene. Firstly, determining the type of a target scene to be researched, such as a cut-in scene, a cut-out scene, a following scene, a highway exit scene, an interchange ramp scene and the like, and then further designing an Operation Design Domain (ODD) of the selected target scene type, namely determining what kind of scene the target scene is. In the scene research, the vehicle generally refers to an intelligent internet vehicle to be tested, and the target vehicle refers to other vehicles except the vehicle in the scene. When the ODD is designed, all static scene elements of the target scene should be determined, the static scene elements related to the vehicle include the number and the type of the target vehicles, the initial positions of the vehicle and the target vehicles, etc., the static scene elements related to the lanes include the number and the type of the lanes, the lanes to which the vehicle and the target vehicles belong, etc., and the static scene elements related to the environment include the number and the type of traffic signs, the illumination intensity, the weather condition, etc. And finally, discretizing the dynamic scene elements of the target scene according to the interaction mode of the vehicle and the surrounding target vehicles specified by the type of the target scene, such as the speed and the acceleration of the vehicle and the target vehicles, the trigger mode, the trigger distance, the trigger time and the like of the target scene, arranging and combining the discretized dynamic scene elements to obtain a scene space of the target scene, and establishing a physical model of the target scene.
An exemplary embodiment of step 1 of the first step is shown in fig. 3, which takes the cut-in scene as the target scene type and designs the ODD shown in fig. 3: the vehicle and the target vehicle are in two adjacent lanes, the target vehicle always keeps driving in a straight line, and after the cut-in behavior is triggered the vehicle leaves its current lane and enters the lane of the target vehicle. Here A_1 represents the initial state of the vehicle when the cut-in behavior is triggered and B_1 the state of the target vehicle at that moment; A_2 and A_3 represent two boundary states of the vehicle during the cut-in, and B_2 the state of the target vehicle (the green car) in the boundary state; the black dotted lines represent the cut-in trajectories corresponding to the two boundary states of the vehicle, and the white dotted line the trajectory of the target vehicle; the area enclosed by the black dotted lines is the dangerous area in which the two vehicles collide, and the area outside it is the safe area. In this way the physical model of the scene with the cut-in scene as target can be established.
And step two, modeling the driving tracks of the vehicle and the target vehicle based on the vehicle kinematic relationship. And on the basis of the physical model of the target scene obtained in the step one, the driving tracks of the vehicle and the target vehicle are solved by combining the kinematic relationship of the vehicle. The static scene elements of the target scene comprise the initial position information of the vehicle and the target vehicle, and the dynamic scene elements comprise the information of the speed sequence, the acceleration sequence and the like of the vehicle and the target vehicle along with the change of time, so the driving tracks of the vehicle and the target vehicle can be calculated by combining a vehicle kinematics model. The modeling method of the driving track of the vehicle is the same as that of the target vehicle, and the two modes can be divided into: a track model taking the center of a vehicle as a reference point is established, wherein the track model comprises a left front end contour point, a right front end contour point, a left rear end contour point and a right rear end contour point; and the second method is to establish a track model of other contour points by taking one of a left front end contour point, a right front end contour point, a left rear end contour point or a right rear end contour point of the vehicle as a reference point. The datum points selected by the two modes are different, but the modeling principle and the modeling method are consistent, and the datum points convenient for calculation and solution can be selected for modeling according to specific problems during actual modeling.
FIG. 4 shows an exemplary embodiment of step 2 of the first step, in which the right front contour point of the vehicle is taken as the reference point and its position at the initial time t_0 is the coordinate origin. After the vehicle has driven to time t_1, the coordinates of the right front contour point may be expressed as:

p_ex^rf = ∫_{t0}^{t1} v_e(t)·cos θ_e(t) dt,  p_ey^rf = ∫_{t0}^{t1} v_e(t)·sin θ_e(t) dt    (1)
where v_e(t) is the speed of the vehicle at time t, θ_e(t) is the heading angle of the vehicle at time t, p_ex^rf is the abscissa of the right front contour point at time t_1, and p_ey^rf is its ordinate at time t_1.
Let the length of the vehicle be a_e and its width b_e, and approximate the vehicle contour as a rectangle. Then the coordinates (p_ex^lf, p_ey^lf) of the left front contour point of the vehicle at time t_1 are:

p_ex^lf = p_ex^rf − b_e·sin θ_e(t_1),  p_ey^lf = p_ey^rf + b_e·cos θ_e(t_1)    (2)
The coordinates (p_ex^rr, p_ey^rr) of the right rear contour point of the vehicle at time t_1 are:

p_ex^rr = p_ex^rf − a_e·cos θ_e(t_1),  p_ey^rr = p_ey^rf − a_e·sin θ_e(t_1)    (3)
The coordinates (p_ex^lr, p_ey^lr) of the left rear contour point of the vehicle at time t_1 are:

p_ex^lr = p_ex^rf − a_e·cos θ_e(t_1) − b_e·sin θ_e(t_1),  p_ey^lr = p_ey^rf − a_e·sin θ_e(t_1) + b_e·cos θ_e(t_1)    (4)
after the coordinates of each contour point of the vehicle are solved, the coordinates of each contour point of the vehicle at different moments in the cutting process are connected to obtain the running track of the vehicle in the cutting process.
And step three, solving the dynamic scene boundary through a separation axis theorem. After the driving tracks of the vehicle and the target vehicle are obtained, the interaction situation of the vehicle and the target vehicle can be further analyzed through a separation axis theorem, the boundary state of the vehicle and the target vehicle in fig. 3 is solved, and the dynamic scene boundary is determined according to scene risk indexes such as the relative distance, the relative speed and the relative acceleration of the vehicle and the target vehicle in the boundary state.
Fig. 5 shows an exemplary embodiment of step 3 of the first step, which still takes the cut-in scene as the target scene type and solves the dynamic scene boundary through the separating axis theorem on the basis of the physical model of fig. 3 and the trajectory model of fig. 4. Assuming that the vehicle travels in a two-dimensional plane, it is modeled as an oriented bounding box, i.e. both its shape and its driving direction are taken into account. According to the separating axis theorem, for any two disjoint convex polyhedra there exists a separating axis such that the two polyhedra are spaced apart along that axis, and their projections onto the separating axis are also separated from each other. For a single oriented bounding box, at most its two edge direction vectors need to be tested as candidate separating axes; for two oriented bounding boxes, at most four edge direction vectors need to be tested. In fig. 5, A represents the target vehicle and B the host vehicle; according to the separating axis theorem, the two vehicles can be judged not to collide when the following relation is satisfied:
|s · l| > d_A + d_B,  l ∈ {a_u, a_v, b_u, b_v}    (5)
where s is the distance vector between the centers of the host vehicle and the target vehicle, l is a candidate projection axis given by a normalized direction vector, a_u and a_v are the normalized direction vectors of the two sides of the target vehicle, b_u and b_v are the normalized direction vectors of the two sides of the host vehicle, d_A is the projected half-length of the target vehicle about its center on the projection axis, and d_B is the projected half-length of the host vehicle about its center on the projection axis.
d_A can be obtained by the following formula:

d_A = h_u^A·|a_u·l| + h_v^A·|a_v·l|   (6)

where h_u^A and h_v^A denote the half side lengths of the target vehicle in the a_u and a_v directions, respectively.
d_B can be obtained by the following formula:

d_B = h_u^B·|b_u·l| + h_v^B·|b_v·l|   (7)

where h_u^B and h_v^B denote the half side lengths of the host vehicle in the b_u and b_v directions, respectively.
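The following sketch implements the oriented-bounding-box test of equations (5)-(7) for two vehicles in the plane; the function and variable names are illustrative, and the example poses are arbitrary.

```python
import numpy as np

def obb_separated(center_a, axes_a, half_a, center_b, axes_b, half_b):
    """2-D oriented-bounding-box test per equation (5).

    center_* : (2,) box centers; axes_* : (2, 2) rows are unit edge directions
    half_*   : (2,) half side lengths along those directions
    Returns True if a separating axis exists, i.e. the vehicles do not collide.
    """
    s = center_b - center_a  # distance vector between the two centers
    for l in np.vstack([axes_a, axes_b]):  # at most four candidate axes
        d_a = abs(half_a[0] * np.dot(axes_a[0], l)) + abs(half_a[1] * np.dot(axes_a[1], l))
        d_b = abs(half_b[0] * np.dot(axes_b[0], l)) + abs(half_b[1] * np.dot(axes_b[1], l))
        if abs(np.dot(s, l)) > d_a + d_b:  # |s·l| > d_A + d_B
            return True
    return False

def box_axes(heading):
    """Unit edge directions of a vehicle with the given heading angle [rad]."""
    u = np.array([np.cos(heading), np.sin(heading)])
    v = np.array([-np.sin(heading), np.cos(heading)])
    return np.stack([u, v])

# Target vehicle A straight ahead, host vehicle B angled during the cut-in.
A = (np.array([12.0, 0.0]), box_axes(0.0), np.array([2.4, 0.95]))
B = (np.array([0.0, 3.5]), box_axes(np.deg2rad(8.0)), np.array([2.4, 0.95]))
print("no collision" if obb_separated(*A, *B) else "collision")
```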
According to the separating axis theorem, the scene boundary corresponding to the boundary state of fig. 3 in which the vehicle cuts in behind the target vehicle and the vehicle's right-front contour point just does not collide with the target vehicle's right-rear contour point can be obtained as:
∫[t_0, t_p1] v_e(t)·cos θ_e(t) dt = D_1 + v_o·(t_p1 − t_0)   (8)
where t_p1 is the moment at which the vehicle's right-front contour point and the target vehicle's right-rear contour point just lie on the same straight line, easily obtained from the longitudinal displacement of the right-front contour point's running track; v_o is the longitudinal speed of the target vehicle; and D_1 is the initial distance between the vehicle and the target vehicle at time t_0.
Similarly, the scene boundary corresponding to the boundary state of fig. 3 in which the vehicle cuts in ahead of the target vehicle and the vehicle's left-rear contour point just does not collide with the target vehicle's right-front contour point can be obtained as:
∫[t_0, t_p2] v_e(t)·cos θ_e(t) dt − a_e·cos θ_e(t_p2) = D_1 + v_o·(t_p2 − t_0)   (9)
where t_p2 is the moment at which the vehicle's left-rear contour point and the target vehicle's right-front contour point just lie on the same straight line, easily obtained from the longitudinal displacement of the left-rear contour point's running track.
Thus, the scene boundary of the target dynamic scene under the set ODD is obtained by means of physical modeling.
In the second step, the process of solving the boundary between the dangerous domain and the safe domain of the dynamic scene through the support vector machine is as follows:
Step one: scene data acquisition and data preprocessing. The scene data acquisition vehicle is equipped with sensors including lidar, millimeter-wave radar, high-precision GPS/inertial navigation, high-definition cameras, the vehicle CAN bus, lane line sensors, a rainfall sensor and a light intensity sensor, and all sensor data are collected at a fixed acquisition cycle. The collected data take the following forms: frame-based spatial three-dimensional point clouds generated by the lidar; frame-based obstacle state lists generated by the millimeter-wave radar; time-series positioning and attitude data generated by the GPS/inertial navigation; frame-based color images generated by the cameras and lane line sensors; time-series vehicle operation and motion state data generated by the CAN bus; and time-series voltage data generated by the rainfall and light intensity sensors. Data preprocessing specifically comprises: time alignment and space alignment of the sensor data; verification of sensor data validity; and generation of vehicle bus alignment signals, vehicle state alignment signals and multi-modal environment sensor alignment signals for the subsequent steps. An exemplary architectural block diagram of step one of the second step is shown in fig. 6.
Step two: extract dangerous and safe scene working conditions. According to the target scene type and its ODD selected in the second step, several segments of scene data consistent with the target scene and its ODD are extracted from all preprocessed data by manually reviewing the video; each complete segment of target scene data is called a scene working condition and represents one complete target scene event occurring on a real road. In the extraction stage, the danger of a scene can be characterized by certain vehicle state quantities, such as the longitudinal speed, longitudinal acceleration, lateral acceleration and yaw rate of the vehicle in the bus and vehicle state alignment signals obtained in step one. Because the same driving operation causes different levels of scene danger at different longitudinal speeds, the longitudinal speed is divided into several intervals, the longitudinal acceleration, lateral acceleration and yaw rate are used as scene danger indexes, and dangerous-condition extraction standards for these indexes are established per speed interval.
Table 1: Scene risk index list

[Table 1 appears only as an image in the source; it lists, for each longitudinal speed interval, the dangerous-condition thresholds of longitudinal acceleration, lateral acceleration and yaw rate.]
As shown in Table 1, the embodiment again targets the cut-in scene and the ODD designed for it. The longitudinal speed of the vehicle is divided into several intervals, and dangerous-condition standards for the longitudinal acceleration, lateral acceleration and yaw rate in each interval are designed according to the characteristics of the cut-in scene. If any one of the longitudinal acceleration, lateral acceleration or yaw rate reaches the dangerous-scene standard at some moment within a scene working condition, that working condition is judged to be a dangerous scene working condition; in this way all scene working conditions can be classified and the dangerous and safe scene working conditions extracted, as sketched below.
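Because Table 1 survives only as an image, the threshold numbers in the sketch below are placeholders invented for illustration; only the extraction logic (speed-bucketed limits, any-index trip rule) follows the description.

```python
# Illustrative sketch of the extraction rule: a scene working condition is
# labelled dangerous if any index exceeds its threshold for the host's speed
# interval. The numbers are assumptions; Table 1 of the source is an image.
THRESHOLDS = [  # (v_min, v_max) [m/s] -> (|ax|, |ay|, |yaw rate|) limits
    ((0.0, 10.0), (3.0, 2.5, 0.30)),
    ((10.0, 20.0), (2.5, 2.0, 0.20)),
    ((20.0, 40.0), (2.0, 1.5, 0.15)),
]

def is_dangerous(v_x, a_x, a_y, yaw_rate):
    """Check one time sample against the speed-bucketed limits."""
    for (lo, hi), (ax_lim, ay_lim, yr_lim) in THRESHOLDS:
        if lo <= v_x < hi:
            return abs(a_x) > ax_lim or abs(a_y) > ay_lim or abs(yaw_rate) > yr_lim
    return False

def label_condition(samples):
    """A condition is dangerous if any sampled moment trips a threshold."""
    return any(is_dangerous(*s) for s in samples)

print(label_condition([(15.0, 1.0, 0.5, 0.05), (15.2, 2.8, 0.4, 0.06)]))  # True
```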
Step three: solve the boundary between the dangerous domain and the safe domain of the dynamic scene through a support vector machine. First, a moment that can represent the danger of the target scene is selected as the key moment, and the interaction between the host vehicle and the target vehicle at that moment is studied as the basis for dividing the scene danger domain and safety domain. To improve the classification performance of the support vector machine, the individual vehicle state quantities of step two are not used as scene risk indexes; instead, comprehensive physical quantities that fuse several host and target vehicle state quantities are selected to characterize the scene risk, such as time to collision (TTC), time headway (THW), and the relative speed and relative acceleration between the two vehicles. TTC is the time required for the two vehicles to collide from the current moment if both maintain their current motion states; the smaller the TTC, the higher the scene risk. TTC can be calculated by:
TTC = ΔR / (v_r − v_f)   (10)
where ΔR is the relative distance between the two vehicles, v_r is the speed of the rear vehicle, and v_f is the speed of the front vehicle.
THW is the time the rear vehicle needs to reach the current position of the front vehicle if its motion state remains unchanged; the smaller the THW, the higher the scene risk. THW can be calculated by:
THW = ΔR / v_r   (11)
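Equations (10) and (11) transcribe directly into code; returning an infinite TTC when the gap is not closing is an added convention, not part of the source.

```python
def ttc(delta_r, v_rear, v_front):
    """Time to collision per equation (10); finite only while the rear car closes in."""
    closing = v_rear - v_front
    return delta_r / closing if closing > 0 else float("inf")

def thw(delta_r, v_rear):
    """Time headway per equation (11)."""
    return delta_r / v_rear

print(ttc(25.0, 22.0, 17.0), thw(25.0, 22.0))  # 5.0 s, ~1.14 s
```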
Then the TTC, THW, relative speed and relative acceleration between the host vehicle and surrounding vehicles, and the other scene risk indexes, are calculated at the key moment for all dangerous and safe scene working conditions. Finally, the calculated scene risk indexes together with the risk label of each scene working condition (dangerous or safe) are input as training samples into the support vector machine algorithm, and the boundary between the dangerous domain and the safe domain of the dynamic scene is output through model self-learning. A minimal sketch follows.
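As a hedged illustration of this training step, the sketch below fits a linear support vector machine (scikit-learn's SVC) to synthetic feature vectors; the feature values and class centers are fabricated for demonstration only.

```python
import numpy as np
from sklearn.svm import SVC

# Feature vectors x_i = [TTC, THW, relative speed, relative acceleration] at the
# key moment, with labels y_i = +1 (dangerous) / -1 (safe). Data are synthetic.
rng = np.random.default_rng(0)
X_danger = rng.normal([2.0, 0.8, 4.0, 1.5], 0.5, size=(40, 4))
X_safe = rng.normal([6.0, 2.5, 1.0, 0.3], 0.5, size=(40, 4))
X = np.vstack([X_danger, X_safe])
y = np.array([1] * 40 + [-1] * 40)

clf = SVC(kernel="linear")  # linear decision surface xi·x + c = 0
clf.fit(X, y)
print("xi* =", clf.coef_[0], "c* =", clf.intercept_[0])
print(clf.predict([[2.5, 1.0, 3.5, 1.2]]))  # -> [1], i.e. risk domain
```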
Fig. 7 shows an exemplary implementation of step three of the second step; the embodiment again targets the cut-in scene and the ODD designed for it. According to the characteristics of the cut-in scene, the moment when the center of the host vehicle coincides with the lane line is selected as the key moment, and the scene risk indexes of all scene working conditions at that moment are calculated, including TTC, THW, and the relative speed and relative acceleration between the host and target vehicles. Let the training samples be (x_i, y_i), i = 1, ..., n, where n is the total number of training samples, i.e. the total number of dangerous and safe scene working conditions, x_i is the feature vector of scene risk indexes of the i-th scene working condition, and y_i is the risk label of the i-th scene working condition: +1 for a dangerous scene working condition and −1 for a safe one. The decision surface equation dividing the danger domain and the safety domain is set as:
ξ·x + c = 0   (12)
where ξ is the normal vector of the decision surface and determines its direction; c is the displacement term and determines the distance between the decision surface and the origin; and x is the feature vector of scene risk indexes of a scene working condition.
The distance r from the feature vector x to the decision surface is:
r = |ξ·x + c| / ||ξ||   (13)
the decision surface should satisfy the requirement of correctly classifying the training samples, so for any training sample, there are:
ξ·x_i + c ≥ +1, if y_i = +1;  ξ·x_i + c ≤ −1, if y_i = −1   (14)
the training samples on both sides of the decision surface closest to the decision surface are called support vectors, and the sum gamma of the distances from two heterogeneous support vectors to the decision surface represents the classification interval:
γ = 2 / ||ξ||   (15)
in order to find the decision surface with the best classification effect, the classification interval needs to be maximized, that is, the following formula is solved:
max_{ξ,c} 2/||ξ||, equivalently min_{ξ,c} (1/2)||ξ||²   (16)
meanwhile, the following conditions need to be satisfied:
y_i(ξ·x_i + c) ≥ 1, i = 1, 2, ..., n   (17)
Because solving for the optimal decision surface is a convex quadratic programming problem, its dual problem is obtained via the Lagrange multiplier method. Adding a Lagrange multiplier α_i to each constraint of equation (17), the Lagrange function is constructed as:
L(ξ, c, α) = (1/2)||ξ||² + Σ_{i=1}^{n} α_i (1 − y_i(ξ·x_i + c))   (18)
Setting the partial derivatives of the Lagrange function L with respect to ξ and c to zero gives:
ξ = Σ_{i=1}^{n} α_i y_i x_i   (19)

0 = Σ_{i=1}^{n} α_i y_i   (20)
Thus the optimization problem of equation (18) can be further converted into the following dual convex quadratic optimization problem in the parameters α_i:
max_α Σ_{i=1}^{n} α_i − (1/2) Σ_{i=1}^{n} Σ_{j=1}^{n} α_i α_j y_i y_j (x_i·x_j)
s.t. Σ_{i=1}^{n} α_i y_i = 0,  α_i ≥ 0, i = 1, 2, ..., n   (21)
where α_j, x_j and y_j denote the same multipliers, feature vectors and labels as α_i, x_i and y_i, indexed by j.
Solving yields the optimal solution α_i* of α_i; the optimal solution ξ* of ξ then follows from equation (19), and the optimal solution c* of c from equation (17).
Therefore, the boundary equation f (x) of the dangerous domain and the safe domain of the dynamic scene output by the support vector machine is as follows:
f(x) = ξ*·x + c*   (22)
When the training samples are not linearly separable, they can be mapped from the original space to a higher-dimensional feature space in which they are linearly separable. Let φ(x) denote the feature vector obtained by mapping x into the high-dimensional feature space; the solving process then involves computing φ(x_i)·φ(x_j), the inner product of x_i and x_j after mapping. Direct computation is often difficult because the feature space dimension can be very high, even infinite. To circumvent this obstacle, a kernel function is introduced to compute the dot product between any two feature vectors mapped into the high-dimensional space:
κ(x_i, x_j) = φ(x_i)·φ(x_j)   (23)
where κ(x_i, x_j) denotes the kernel function. Commonly used kernel functions include the following types:
linear kernel function:
κ(x_i, x_j) = x_i·x_j   (24)
polynomial kernel function:
κ(x_i, x_j) = (x_i·x_j)^d   (25)
wherein d is the degree of the polynomial and is more than or equal to 1.
Gaussian kernel function:
κ(x_i, x_j) = exp(−||x_i − x_j||² / (2σ²))   (26)
where σ is the bandwidth of the Gaussian kernel, and σ > 0.
Laplace kernel function:
κ(x_i, x_j) = exp(−||x_i − x_j|| / σ)   (27)
Sigmoid kernel function:
κ(x_i, x_j) = tanh(β·x_i·x_j + θ)   (28)
wherein, tanh is hyperbolic tangent function, beta is more than 0, and theta is less than 0.
After the kernel function is introduced, it is not necessary to directly calculate the inner product in the high-dimensional or even infinite-dimensional feature space, and then equation (21) can be transformed into:
max_α Σ_{i=1}^{n} α_i − (1/2) Σ_{i=1}^{n} Σ_{j=1}^{n} α_i α_j y_i y_j κ(x_i, x_j)
s.t. Σ_{i=1}^{n} α_i y_i = 0,  α_i ≥ 0, i = 1, 2, ..., n   (29)
Solving then gives the boundary equation F(x) of the dangerous domain and the safe domain of the dynamic scene output by the support vector machine:
F(x) = Σ_{i=1}^{n} α_i* y_i κ(x, x_i) + c*   (30)
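A brief sketch of the kernelized case: scikit-learn's SVC with the Gaussian (RBF) kernel of equation (26) learns a non-linear boundary and evaluates the decision function of equation (30). The toy data are synthetic.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
# A non-linearly-separable toy set: dangerous conditions inside a ring of safe ones.
X = rng.normal(0.0, 1.5, size=(200, 2))
y = np.where(np.linalg.norm(X, axis=1) < 1.5, 1, -1)

clf = SVC(kernel="rbf", gamma=0.5)  # Gaussian kernel; gamma = 1/(2*sigma^2)
clf.fit(X, y)
# decision_function evaluates F(x) = sum_i alpha_i* y_i k(x, x_i) + c* (equation (30))
print(clf.decision_function([[0.0, 0.0], [3.0, 3.0]]))
```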
in the third step, the process of solving the risk degree boundaries of different levels in the dynamic scene risk domain through support vector regression is as follows:
Step one: pre-divide the danger degree boundaries of different grades in the danger domain according to TTC. TTC, as the most common scene risk index, evaluates the risk of simple low-dimensional scenes well. For complex high-dimensional scenes, however, TTC only considers the relative distance and relative speed of the two vehicles, so an accurate, comprehensive and complete scene risk degree boundary cannot be obtained from TTC alone. Moreover, dividing the boundary purely by TTC has a further defect: once the probability of occurrence on real roads is considered, some dangerous scenes with very small TTC values hardly ever occur on real roads and are not suitable for inclusion in the danger domain for intelligent connected vehicle testing. Therefore, to obtain an accurate and complete scene risk degree boundary, the boundaries of different grades within the danger domain are only pre-divided according to TTC: the TTC of every dangerous scene working condition at the key moment is calculated, and a pre-division standard for the danger-domain scene risk degree boundaries is formulated.
Table 2: Scene risk level table

TTC value (s)   Scene risk level
[0, 1]          Collision scene (crash scenarios)
(1, 3]          Emergency scene (emergency scenarios)
(3, 5]          Conflict scene (conflict scenarios)
(5, +∞)         Safe scene, discarded
As shown in Table 2, the embodiment again targets the cut-in scene and its ODD. According to the characteristics of the cut-in scene, the moment when the center of the host vehicle coincides with the lane line is selected as the key moment, the TTC of every dangerous scene working condition at that moment is calculated, and the pre-division standard of the danger-domain scene risk degree boundaries is set as follows: when TTC ∈ [0 s, 1 s], the risk level is a collision scene (crash scenarios); when TTC ∈ (1 s, 3 s], the risk level is an emergency scene (emergency scenarios); when TTC ∈ (3 s, 5 s], the risk level is a conflict scene (conflict scenarios); when TTC ∈ (5 s, +∞), the working condition is judged to be a safe scene that was wrongly classified as dangerous because of its large TTC, and it is directly discarded. A sketch of this mapping follows.
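A direct transcription of the Table 2 pre-division rule; returning None for discarded safe scenes is an illustrative convention.

```python
def risk_level(ttc_s):
    """Pre-division of Table 2: map TTC at the key moment to a risk level."""
    if 0.0 <= ttc_s <= 1.0:
        return "crash scenario"
    if ttc_s <= 3.0:
        return "emergency scenario"
    if ttc_s <= 5.0:
        return "conflict scenario"
    return None  # safe scene mistakenly included in the risk domain; discard

print([risk_level(v) for v in (0.5, 2.0, 4.0, 8.0)])
```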
Step two: select appropriate scene risk indexes to construct joint distributions. The scene risk indexes of the dangerous scene working conditions at the key moment, such as TTC, THW, and the relative speed and relative acceleration between the host and target vehicles, are calculated and combined pairwise into joint distributions. Combining the pre-division of the scene risk degree boundaries from step one, the distribution of the dangerous scene working conditions is inspected, and the joint distribution of scene risk indexes with the better classification effect is selected as input to the support vector regression algorithm.
Fig. 8 shows an exemplary result of step two of the third step, again for the cut-in scene and the ODD designed for it. According to the characteristics of the cut-in scene, the moment when the center of the host vehicle coincides with the lane line is selected as the key moment; the method of step one pre-divides the risk degree boundaries of different grades in the danger domain according to TTC; the scene risk indexes are combined pairwise into joint distributions and the distribution of the dangerous scene working conditions is inspected; finally, the joint distribution of TTC and the relative acceleration of the two vehicles shows the best classification effect.
Step three: solve the risk degree boundaries of different grades in the dynamic scene danger domain through support vector regression. First, scene boundary points are extracted at the positions of the selected joint distribution from step two where no boundary line yet exists, and input as training samples into the support vector regression algorithm. During extraction, the boundary points belonging to one boundary line are taken as one training sample set and input separately, so that each training sample set corresponds to exactly one scene risk degree boundary. Finally, all scene risk degree boundary lines are collected to obtain the risk degree boundaries of different grades in the dynamic scene danger domain.
Since the support vector regression solution is essentially the same for every training sample set, the method is described below for one set. Let the training samples in the set be (t_i, g_i), i = 1, ..., m, where m is the number of training samples in the set, i.e. the number of boundary points, t_i is the abscissa of the i-th boundary point in the joint distribution of scene risk indexes, and g_i its ordinate. The support vector regression algorithm tolerates a maximum deviation of ε between the model output f(t) and the ordinate g of a boundary point, i.e. a loss is incurred only when |f(t) − g| > ε. This is equivalent to constructing a band of width 2ε centered on f(t); a training sample falling inside the band is considered correctly predicted.
Let the output f (t) of the support vector regression algorithm be:
f(t)=ηt+q (31)
in the formula, eta represents a normal vector of the decision surface and determines the direction of the decision surface; q represents a displacement term and determines the distance between the decision surface and the origin; t represents the abscissa of the boundary point in the joint distribution of the scene risk indicator.
The solution problem can then be formalized as:
min_{η,q} (1/2)||η||² + D Σ_{i=1}^{m} l_ε(f(t_i) − g_i)   (32)
where D is a regularization constant and l_ε is the ε-insensitive loss function:
l_ε(z) = 0, if |z| ≤ ε;  l_ε(z) = |z| − ε, otherwise   (33)
Introducing slack variables δ_i and δ_i', equation (32) can be converted into:
min_{η,q,δ,δ'} (1/2)||η||² + D Σ_{i=1}^{m} (δ_i + δ_i')
s.t. f(t_i) − g_i ≤ ε + δ_i;  g_i − f(t_i) ≤ ε + δ_i';  δ_i ≥ 0, δ_i' ≥ 0, i = 1, 2, ..., m   (34)
Introducing the Lagrange multipliers μ_i, μ_i', α_i and α_i', the Lagrange function is constructed:
L(η, q, α, α', δ, δ', μ, μ') = (1/2)||η||² + D Σ_{i=1}^{m} (δ_i + δ_i') − Σ_{i=1}^{m} μ_i δ_i − Σ_{i=1}^{m} μ_i' δ_i'
+ Σ_{i=1}^{m} α_i (f(t_i) − g_i − ε − δ_i) + Σ_{i=1}^{m} α_i' (g_i − f(t_i) − ε − δ_i')   (35)
Setting the partial derivatives of the Lagrange function L with respect to η, q, δ_i and δ_i' to zero gives:
η = Σ_{i=1}^{m} (α_i' − α_i) t_i   (36)

0 = Σ_{i=1}^{m} (α_i' − α_i)   (37)
α_i + μ_i = D   (38)
α_i' + μ_i' = D   (39)
then, the dual problem with equation (35) is:
max_{α,α'} Σ_{i=1}^{m} [g_i(α_i' − α_i) − ε(α_i' + α_i)] − (1/2) Σ_{i=1}^{m} Σ_{j=1}^{m} (α_i' − α_i)(α_j' − α_j) t_i t_j
s.t. Σ_{i=1}^{m} (α_i' − α_i) = 0,  0 ≤ α_i, α_i' ≤ D   (40)
where t_j, α_j and α_j' denote the same quantities as t_i, α_i and α_i', indexed by j.
Solving the above gives α_i; choosing any training sample with 0 < α_i < D, q can then be solved:
q = g_i + ε − Σ_{j=1}^{m} (α_j' − α_j) t_j t_i   (41)
Alternatively, q can be obtained by selecting several training samples satisfying 0 < α_i < D, solving for each, and averaging.
Thus, the scene risk boundary equation f (t) for the support vector regression output is:
f(t) = Σ_{i=1}^{m} (α_i' − α_i) t_i t + q   (42)
If the training samples are not linearly separable, the kernel function method introduced in step three of the second step can be adopted. Assuming that t corresponds to φ(t) after mapping to the high-dimensional feature space, η becomes:
η = Σ_{i=1}^{m} (α_i' − α_i) φ(t_i)   (43)
and after solving, the scene risk degree boundary equation F(t) output by support vector regression is:
F(t) = Σ_{i=1}^{m} (α_i' − α_i) κ(t, t_i) + q   (44)
where κ(t_i, t_j) = φ(t_i)·φ(t_j) denotes the kernel function.
The scene risk degree boundaries corresponding to all training sample sets are solved in this way and combined with the boundaries pre-divided in step one to obtain the risk degree boundaries of different grades in the dynamic scene danger domain; a sketch of such a fit follows.
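As a hedged illustration, the sketch below fits one training sample set of boundary points with scikit-learn's SVR using a linear kernel and an ε-insensitive band; the boundary points are synthetic placeholders.

```python
import numpy as np
from sklearn.svm import SVR

# Boundary points (t_i, g_i) extracted from one side of the joint distribution
# of two risk indices; the points here are synthetic placeholders.
t = np.linspace(0.2, 5.0, 25).reshape(-1, 1)
g = 1.8 * t.ravel() + 0.4 + np.random.default_rng(2).normal(0, 0.05, 25)

# epsilon is the half-width of the insensitive band; C plays the role of D.
reg = SVR(kernel="linear", epsilon=0.1, C=10.0)
reg.fit(t, g)
print("eta* =", reg.coef_[0][0], "q* =", reg.intercept_[0])  # boundary f(t) = eta*t + q
```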
Fig. 9 shows an exemplary result of step three of the third step; the embodiment again uses the cut-in scene and the ODD designed for it as the target scene. According to the pre-division of the scene risk degree boundaries in step one, the joint distribution with the best classification effect obtained in step two, i.e. the joint distribution of TTC and the relative acceleration of the two vehicles, effectively carries four risk degree boundaries determined by TTC = 0, TTC = 1, TTC = 3 and TTC = 5. However, because the joint distribution contains a further scene risk index and because of the distribution characteristics of the scene working conditions themselves, the scenes in the danger domain cannot be completely separated by the TTC-determined boundaries alone, and some scene boundary points have no boundary line, such as the boundary points on the left and right sides of the joint distribution diagram in fig. 8; the scene risk degree boundaries at such points must be further solved through support vector regression. The boundary points on the left and right sides of the joint distribution diagram are each taken as a separate training sample set and input into the support vector regression algorithm for learning; since the boundary of the joint distribution is close to linear, a linear kernel function is selected for the solution, giving the scene risk degree boundaries on the two sides of the diagram. Combining these with the boundaries pre-divided according to TTC, the risk degree boundaries of different grades in the danger domain are obtained as:
{ l_l, l_r } ∪ { l_m1: TTC = 0, l_m2: TTC = 1, l_m3: TTC = 3, l_m4: TTC = 5 }   (45)
where l_l and l_r are the risk degree boundaries on the left and right sides of the joint distribution diagram obtained by support vector regression learning, and l_m1, l_m2, l_m3 and l_m4 are the risk degree boundaries obtained by pre-division according to TTC.
In the fourth step, the process of evaluating the dynamic scene boundary based on the hybrid theory of the physical mechanism and the machine learning is as follows:
Step one: consistency analysis between the physics-based and the machine-learning-based dynamic scene risk degree boundaries. Each scene risk index is calculated at the dynamic scene boundaries obtained by the physical mechanism and by machine learning, the degree of difference between the corresponding indexes of the two methods is computed, and a consistency standard is set according to the average degree of difference over all risk indexes.
Table 3: Consistency analysis result list

Average degree of difference   Consistency analysis result
[0, 15%]                       Good
(15%, 30%]                     General
(30%, 100%]                    Poor
As shown in Table 3, the embodiment again targets the cut-in scene and the ODD designed for it. Although the physics-based and machine-learning-based solution methods differ, their final results, i.e. the expressions of the dynamic scene boundary, are essentially consistent: both describe the scene boundary by establishing functional relationships between scene risk indexes. Consistency analysis therefore compares the degree of difference of each scene risk index value at the dynamic scene boundary. Each scene risk index is calculated at the boundaries obtained by the physical mechanism of the first step and the machine learning of the third step, the degree of difference of each corresponding index and the average degree of difference over all indexes are computed, and the consistency standard is set according to the characteristics of the cut-in scene as follows: when the average degree of difference is at most 15%, consistency is considered good; when it lies between 15% and 30%, consistency is considered general; when it exceeds 30%, consistency is considered poor. A sketch of this computation follows.
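A minimal sketch of the consistency check; normalizing each difference by the physics-based value is an assumption, since the source does not define the degree of difference precisely.

```python
def consistency(physics_idx, learned_idx):
    """Average relative degree of difference between corresponding boundary indices."""
    diffs = [abs(p - m) / abs(p) for p, m in zip(physics_idx, learned_idx)]
    avg = sum(diffs) / len(diffs)
    if avg <= 0.15:
        return avg, "good"
    if avg <= 0.30:
        return avg, "general"
    return avg, "poor"

# TTC, THW, relative speed, relative acceleration at the two boundaries.
print(consistency([2.0, 1.0, 4.0, 1.5], [2.2, 1.1, 3.7, 1.3]))
```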
This consistency standard reflects the solving principles of the two approaches, which embody the two mainstream research ideas in scene research: one derives the boundary purely theoretically, the other learns rules from real road driving data. The physics-based method analyzes the motion and interaction of the host vehicle and the surrounding target vehicles by building a physical model of the target scene combined with vehicle kinematics, obtaining the dynamic scene risk degree boundary at the theoretical level from kinematics and geometry. Its advantage is that the resulting boundary is more accurate, with essentially no misjudgment between danger-domain and safety-domain scenes; its drawback is that, because real road driving conditions are not considered, many scenes in the danger domain occur with extremely low probability on real roads and have little practical significance for testing intelligent connected vehicles. The machine-learning-based method starts from real road driving data in a natural driving database and obtains the dynamic scene risk degree boundary by model self-learning with the support vector machine and support vector regression algorithms. The boundaries obtained by the two methods therefore deviate somewhat from each other, which is why a rather loose consistency threshold is set; still, provided both solutions are correct, the two boundaries agree over a large range.
Step two: generate the high-confidence dynamic scene boundary based on the physical mechanism and machine learning hybrid theory. Following the consistency standard of step one and its rationale, the high-confidence scene boundary can be generated by the following principle. When the consistency result is good, the weights given to the boundaries from the two solution modes can be assigned freely according to the actual research needs: if the problem under study must reflect real road traffic conditions, the high-confidence boundary can be biased toward the machine-learning boundary; if the problem is analyzed mainly at the theoretical level, it can be biased toward the physics-based boundary. When the consistency result is general, the machine-learning boundary likely deviates considerably, perhaps because the training sample size is small or the data quality low, and more weight is assigned to the physics-based boundary according to the average degree of difference. When the consistency result is poor, the machine-learning boundary deviates too much, and the high-confidence dynamic scene boundary is taken from the physics-based solution alone. A sketch of this weighting follows.
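A minimal sketch of one possible weighting scheme under the stated principle; the exact weight formula, including the linear taper in the general band, is an assumption.

```python
def fuse_boundaries(physics_idx, learned_idx, avg_diff, prefer_data=0.5):
    """Blend the two boundaries according to the consistency result.

    prefer_data is the weight given to the machine-learning boundary when
    consistency is good; it is a research-dependent free parameter.
    """
    if avg_diff <= 0.15:        # good: weights assigned per research need
        w = prefer_data
    elif avg_diff <= 0.30:      # general: shift weight toward the physics boundary
        w = prefer_data * (0.30 - avg_diff) / 0.15
    else:                       # poor: fall back to the physics-based boundary
        w = 0.0
    return [(1 - w) * p + w * m for p, m in zip(physics_idx, learned_idx)]

print(fuse_boundaries([2.0, 1.0, 4.0], [2.2, 1.1, 3.7], avg_diff=0.08))
```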
Step three: verify the dynamic scene boundary based on the physical mechanism and machine learning hybrid theory. According to the generated high-confidence dynamic scene boundary, the scene spaces of the three risk levels of the danger domain and the scene space of the safety domain are used as four independent test scene libraries, and scenes are sampled in turn from the four libraries to test the intelligent connected vehicle until the accident rate of each library converges. If the converged accident rates of the four libraries increase with scene risk level and exhibit a clear stepwise distribution, the generated dynamic scene boundary is considered effective; a sketch of this check follows.
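A toy sketch of the verification loop: scenes are drawn from each library until the running accident rate stabilizes, and the converged rates should fall stepwise from the collision library to the safe library. The convergence tolerance, window size and accident probabilities are invented for illustration.

```python
import random

def accident_rate_converged(sample_scene, test_vehicle, tol=0.005, window=200):
    """Sample scenes from one library until the running accident rate converges."""
    crashes, total, prev = 0, 0, None
    while True:
        crashes += test_vehicle(sample_scene())
        total += 1
        if total % window == 0:
            rate = crashes / total
            if prev is not None and abs(rate - prev) < tol:
                return rate
            prev = rate

# Toy stand-ins: higher-risk libraries trigger accidents more often.
for name, p in [("crash", 0.30), ("emergency", 0.15), ("conflict", 0.05), ("safe", 0.01)]:
    rate = accident_rate_converged(lambda: None, lambda s, p=p: random.random() < p)
    print(name, round(rate, 3))  # rates should fall stepwise with risk level
```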

Claims (1)

1. A dynamic scene boundary assessment method based on a physical mechanism and machine learning hybrid theory, characterized by comprising the following steps:
firstly, solving the dynamic scene boundary through physical modeling, wherein the specific process is as follows:
firstly, establishing a physical model of a target scene: determining the type of the target scene to be studied, including a cut-in scene, a cut-out scene, a following scene, a highway exit scene and a merge ramp scene; then further designing the operational design domain (ODD) of the target scene for the selected type, i.e. determining what specific kind of scene the target scene is, wherein in scene research the vehicle refers to the intelligent connected vehicle under test and the target vehicles refer to all other vehicles in the scene except the vehicle; when designing the ODD, all static scene elements of the target scene are determined: the static scene elements related to the vehicles comprise the number and type of target vehicles and the initial positions of the vehicle and the target vehicles, the static scene elements related to the road comprise the number and type of lanes and the lanes to which the vehicle and the target vehicles belong, and the static scene elements related to the environment comprise the number and type of traffic signs, illumination intensity and weather conditions; finally, according to the interaction mode between the vehicle and the surrounding target vehicles specified by the target scene type, discretizing the dynamic scene elements of the target scene, such as the speeds and accelerations of the vehicle and the target vehicles and the trigger mode, trigger distance and trigger time of the target scene, and arranging and combining the discretized dynamic scene elements to obtain the scene space of the target scene and establish its physical model;
secondly, modeling the driving tracks of the vehicle and the target vehicle based on vehicle kinematics: on the basis of the physical model of the target scene obtained in the first step, the driving tracks of the vehicle and the target vehicle are solved by combining the vehicle kinematic relationships; the static scene elements of the target scene include the initial position information of the vehicle and the target vehicle, and the dynamic scene elements include the time-varying speed and acceleration sequences of the vehicle and the target vehicle, from which the driving tracks are calculated through the kinematic relationships; the track modeling methods of the vehicle and the target vehicle are the same and take one of two forms: in the first, the center of the vehicle is taken as the reference point and track models of the left-front, right-front, left-rear and right-rear contour points are established; in the second, one of the left-front, right-front, left-rear or right-rear contour points is taken as the reference point and track models of the other contour points are established; the reference points of the two modes differ, but the modeling principle and method are the same, and in practice the reference point most convenient for calculation and solution is selected according to the specific problem;
taking the right-front contour point of the vehicle as the reference point for modeling, with the position of the right-front contour point at the initial time t_0 as the coordinate origin, after the vehicle has travelled for a period of time to t_1 the coordinates of the right-front contour point are expressed as:
p_ex^rf = ∫[t_0, t_1] v_e(t)·cos θ_e(t) dt,  p_ey^rf = ∫[t_0, t_1] v_e(t)·sin θ_e(t) dt   (1)
where v_e(t) is the speed of the vehicle at time t, θ_e(t) is the heading angle of the vehicle at time t, and p_ex^rf and p_ey^rf are the abscissa and ordinate of the vehicle's right-front contour point at time t_1;
let the length of the vehicle be a_e and its width b_e, and approximate the vehicle contour as a rectangle; the coordinates (p_ex^lf, p_ey^lf) of the vehicle's left-front contour point at time t_1 are then:
p_ex^lf = p_ex^rf − b_e·sin θ_e(t_1),  p_ey^lf = p_ey^rf + b_e·cos θ_e(t_1)   (2)
the coordinates (p_ex^rr, p_ey^rr) of the vehicle's right-rear contour point at time t_1 are:
p_ex^rr = p_ex^rf − a_e·cos θ_e(t_1),  p_ey^rr = p_ey^rf − a_e·sin θ_e(t_1)   (3)
the coordinates (p_ex^lr, p_ey^lr) of the vehicle's left-rear contour point at time t_1 are:
p_ex^lr = p_ex^rf − a_e·cos θ_e(t_1) − b_e·sin θ_e(t_1),  p_ey^lr = p_ey^rf − a_e·sin θ_e(t_1) + b_e·cos θ_e(t_1)   (4)
after the coordinates of each contour point of the vehicle are solved, the coordinates of each contour point of the vehicle at different moments in the cut-in process are connected to obtain the running track of the vehicle in the cut-in process;
step three, solving the dynamic scene boundary through the separating axis theorem: having obtained the driving tracks of the vehicle and the target vehicle, the interaction of the two vehicles is further analyzed through the separating axis theorem to solve their boundary state, and the dynamic scene boundary is determined according to scene risk indexes such as the relative distance, relative speed and relative acceleration of the vehicle and the target vehicle in the boundary state;
on the basis of the established physical model and driving trajectory model, the dynamic scene boundary is solved through the separating axis theorem; assuming the vehicle moves in a two-dimensional plane, it is modeled as an oriented bounding box, i.e. both its shape and its driving direction are taken into account; according to the separating axis theorem, for any two disjoint convex polyhedra there exists a separating axis such that the two polyhedra are spaced apart along that axis and their projections onto it are separated from each other; for a single oriented bounding box at most its two edge direction vectors need to be tested as separating axis candidates, and for two oriented bounding boxes at most four edge direction vectors need to be tested; if any of the four candidates is a separating axis, the two oriented bounding boxes are judged not to intersect, i.e. the vehicles do not collide; A denotes the target vehicle and B the vehicle, and by the separating axis theorem the two vehicles are judged not to collide when the following relation holds:
|s·l| > d_A + d_B,  l ∈ {a_u, a_v, b_u, b_v}   (5)
where s is the distance vector between the centers of the vehicle and the target vehicle, l is a candidate projection axis given by a normalized direction vector, a_u and a_v are the normalized direction vectors of the two sides of the target vehicle, b_u and b_v are the normalized direction vectors of the two sides of the vehicle, d_A is the projected half-length of the target vehicle about its center on the projection axis, and d_B is the projected half-length of the vehicle about its center on the projection axis;
d_A can be obtained by the following formula:

d_A = h_u^A·|a_u·l| + h_v^A·|a_v·l|   (6)

where h_u^A and h_v^A denote the half side lengths of the target vehicle in the a_u and a_v directions, respectively;
d_B can be obtained by the following formula:

d_B = h_u^B·|b_u·l| + h_v^B·|b_v·l|   (7)

where h_u^B and h_v^B denote the half side lengths of the vehicle in the b_u and b_v directions, respectively;
according to the separating axis theorem, the scene boundary corresponding to the boundary state in which the vehicle cuts in behind the target vehicle and the vehicle's right-front contour point just does not collide with the target vehicle's right-rear contour point is obtained as:
∫[t_0, t_p1] v_e(t)·cos θ_e(t) dt = D_1 + v_o·(t_p1 − t_0)   (8)
where t_p1 is the moment when the vehicle's right-front contour point and the target vehicle's right-rear contour point lie on the same straight line, obtained from the longitudinal displacement of the right-front contour point's running track, v_o is the longitudinal speed of the target vehicle, and D_1 is the initial distance between the vehicle and the target vehicle at time t_0;
similarly, the scene boundary corresponding to the boundary state in which the vehicle cuts in ahead of the target vehicle and the vehicle's left-rear contour point just does not collide with the target vehicle's right-front contour point is obtained as:
∫[t_0, t_p2] v_e(t)·cos θ_e(t) dt − a_e·cos θ_e(t_p2) = D_1 + v_o·(t_p2 − t_0)   (9)
in the formula, t p2 The moment when the left rear end contour point of the vehicle and the right front end contour point of the target vehicle are positioned on the same straight line is represented, and the moment is obtained through the longitudinal displacement calculation of the running track of the left rear end contour point of the vehicle;
therefore, the scene boundary of the target dynamic scene under the set ODD is obtained in a physical modeling mode;
secondly, solving the boundary of the dangerous domain and the safety domain of the dynamic scene through a support vector machine, wherein the specific process is as follows:
step one, scene data acquisition and data preprocessing: the scene data acquisition vehicle carries lidar, millimeter-wave radar, high-precision GPS/inertial navigation, high-definition cameras, the vehicle CAN bus, lane line sensors, a rainfall sensor and an illumination sensor, and all sensor data are collected at a fixed acquisition cycle; the collected data comprise frame-based spatial three-dimensional point clouds generated by the lidar, frame-based obstacle state lists generated by the millimeter-wave radar, time-series positioning and attitude data generated by the GPS/inertial navigation, frame-based color images generated by the cameras and lane line sensors, time-series vehicle operation and motion state data generated by the CAN bus, and time-series voltage data generated by the rainfall and illumination sensors; data preprocessing specifically comprises: time alignment and space alignment of the sensor data; verification of sensor data validity; and generation of vehicle bus alignment signals, vehicle state alignment signals and multi-modal environmental sensor alignment signals for use in subsequent steps;
step two, extracting dangerous scene working conditions and safe scene working conditions: according to the target scene type and its ODD selected in the second step, several segments of scene data consistent with the target scene and its ODD are extracted from all preprocessed data by manually reviewing the video; each complete segment of target scene data is called a scene working condition and represents one complete target scene event on a real road; in the extraction stage, the danger of a scene is characterized by several vehicle state quantities, for example the longitudinal speed, longitudinal acceleration, lateral acceleration and yaw rate of the vehicle in the bus and vehicle state alignment signals obtained in step one; because the same driving operation causes different scene danger at different longitudinal speeds, the longitudinal speed of the vehicle is divided into several intervals, the longitudinal acceleration, lateral acceleration and yaw rate are taken as scene danger indexes, and dangerous scene working condition extraction standards for these indexes are established per speed interval;
taking the cut-in scene and the ODD designed for it as the target scene, the longitudinal speed of the vehicle is divided into several intervals, and dangerous working condition standards for the longitudinal acceleration, lateral acceleration and yaw rate in the different longitudinal speed intervals are designed according to the characteristics of the cut-in scene;
step three, solving the boundary between the dangerous domain and the safety domain of the dynamic scene through a support vector machine: first, a moment that can represent the danger of the target scene is selected as the key moment, and the interaction between the vehicle and the target vehicle at that moment is studied as the basis for dividing the scene danger domain and safety domain; to improve the classification effect of the support vector machine, comprehensive physical quantities fusing several vehicle and target vehicle state quantities are selected to characterize the scene risk instead of the single vehicle state quantities of step two, comprising the time to collision, the time headway, and the relative speed and relative acceleration of the vehicle and the target vehicle; TTC represents the time required for the two vehicles to collide from the current moment if the current motion states are maintained, and the smaller its value, the higher the scene risk; TTC is calculated by the following formula:
TTC = ΔR / (v_r − v_f)   (10)
where ΔR is the relative distance between the two vehicles, v_r is the speed of the rear vehicle, and v_f is the speed of the front vehicle;
THW represents the time the rear vehicle needs to reach the current position of the front vehicle with the current motion state unchanged, and the smaller its value, the higher the scene risk; THW is calculated by the following formula:
THW = ΔR / v_r   (11)
secondly, the TTC and THW and the scene risk indexes of relative speed and relative acceleration between the vehicle and surrounding vehicles are calculated at the key moment for all dangerous and safe scene working conditions; finally, the calculated scene risk indexes and the risk label of each scene working condition, i.e. dangerous or safe, are input together as training samples into the support vector machine algorithm, and the boundary of the dangerous domain and the safety domain of the dynamic scene is output through model self-learning;
taking the cut-in scene and the ODD designed for it as the target scene, the moment when the center of the vehicle coincides with the lane line is selected as the key moment according to the characteristics of the cut-in scene, and the scene risk indexes of all scene working conditions at the key moment are calculated, including TTC, THW, and the relative speed and relative acceleration between the vehicle and the target vehicle; let the training samples be (x_i, y_i), i = 1, ..., n, where n represents the total number of training samples, i.e. the total number of dangerous and safe scene working conditions, x_i represents the feature vector of scene risk indexes of the i-th scene working condition, and y_i represents the risk label of the i-th scene working condition, set to +1 for a dangerous scene working condition and −1 for a safe one; the decision surface equation dividing the danger domain and the safety domain is set as:
ξ·x + c = 0   (12)
where ξ is the normal vector of the decision surface and determines its direction; c is the displacement term and determines the distance between the decision surface and the origin; and x is the feature vector of scene risk indexes of a scene working condition;
the distance r from the feature vector x to the decision surface is:
r = |ξ·x + c| / ||ξ||   (13)
the decision surface needs to be satisfied to correctly classify the training samples, so for any training sample, there are:
ξ·x_i + c ≥ +1, if y_i = +1;  ξ·x_i + c ≤ −1, if y_i = −1   (14)
the training samples on two sides of the decision surface closest to the decision surface are called support vectors, and the sum gamma of the distances from two heterogeneous support vectors to the decision surface represents the classification interval:
γ = 2 / ||ξ||   (15)
to find the decision surface with the best classification effect, the classification interval is maximized, that is, the following formula is solved:
max_{ξ,c} 2/||ξ||, equivalently min_{ξ,c} (1/2)||ξ||²   (16)
meanwhile, the following conditions are satisfied:
y_i(ξ·x_i + c) ≥ 1, i = 1, 2, ..., n   (17)
solving for the optimal decision surface is a convex quadratic programming problem, whose dual is obtained via the Lagrange multiplier method; adding a Lagrange multiplier α_i to each constraint of equation (17), the Lagrange function is constructed as:
L(ξ, c, α) = (1/2)||ξ||² + Σ_{i=1}^{n} α_i (1 − y_i(ξ·x_i + c))   (18)
setting the partial derivatives of the Lagrange function L with respect to ξ and c to zero gives:
ξ = Σ_{i=1}^{n} α_i y_i x_i   (19)

0 = Σ_{i=1}^{n} α_i y_i   (20)
the optimization problem of equation (18) then converts further into the following dual convex quadratic optimization problem in the parameters α_i:
max_α Σ_{i=1}^{n} α_i − (1/2) Σ_{i=1}^{n} Σ_{j=1}^{n} α_i α_j y_i y_j (x_i·x_j)
s.t. Σ_{i=1}^{n} α_i y_i = 0,  α_i ≥ 0, i = 1, 2, ..., n   (21)
where α_j, x_j and y_j denote the same multipliers, feature vectors and labels as α_i, x_i and y_i, indexed by j;
solving yields the optimal solution α_i* of α_i; the optimal solution ξ* of ξ is then obtained from equation (19), and the optimal solution c* of c from equation (17);
Therefore, the boundary equation f (x) of the dangerous domain and the safe domain of the dynamic scene output by the support vector machine is as follows:
f(x) = ξ*·x + c*   (22)
when the training samples are not linearly separable, they are mapped from the original space to a higher-dimensional feature space in which they are linearly separable; let φ(x) represent the feature vector obtained by mapping x into the high-dimensional feature space; the solving process involves computing φ(x_i)·φ(x_j), i.e. the inner product of x_i and x_j after mapping into the feature space, and a kernel function is introduced to compute the dot product between any two feature vectors mapped into the high-dimensional space:
κ(x_i, x_j) = φ(x_i)·φ(x_j)   (23)
where κ(x_i, x_j) represents the kernel function; commonly used kernel functions include the following types:
linear kernel function:
κ(x_i, x_j) = x_i·x_j   (24)
polynomial kernel function:
κ(x_i, x_j) = (x_i·x_j)^d   (25)
wherein d is the degree of a polynomial and is more than or equal to 1;
gaussian kernel function:
κ(x_i, x_j) = exp(−||x_i − x_j||² / (2σ²))   (26)
wherein, sigma is the bandwidth of the Gaussian kernel function, and sigma is more than 0;
laplace kernel function:
κ(x_i, x_j) = exp(−||x_i − x_j|| / σ)   (27)
Sigmoid kernel function:
κ(x_i, x_j) = tanh(β·x_i·x_j + θ)   (28)
wherein, tanh is hyperbolic tangent function, beta is more than 0, theta is less than 0;
after the kernel function is introduced, it is not necessary to directly calculate the inner product in the high-dimensional or even infinite-dimensional feature space, and then equation (21) is transformed into:
max_α Σ_{i=1}^{n} α_i − (1/2) Σ_{i=1}^{n} Σ_{j=1}^{n} α_i α_j y_i y_j κ(x_i, x_j)
s.t. Σ_{i=1}^{n} α_i y_i = 0,  α_i ≥ 0, i = 1, 2, ..., n   (29)
and solving to obtain a boundary equation F (x) of the dangerous domain and the safe domain of the dynamic scene output by the support vector machine, wherein the boundary equation F (x) comprises the following steps:
F(x) = Σ_{i=1}^{n} α_i* y_i κ(x, x_i) + c*   (30)
thirdly, solving the risk degree boundaries of different grades in the dynamic scene risk domain through support vector regression, wherein the specific process is as follows:
firstly, pre-dividing the risk degree boundaries of different grades in the danger domain according to TTC: to obtain accurate and complete scene risk degree boundaries, the boundaries of different grades in the danger domain are pre-divided according to TTC, the TTCs of all dangerous scene working conditions at the key moment are calculated, and a pre-division standard for the danger-domain scene risk degree boundaries is formulated;
taking a cut-in scene and an ODD designed for the cut-in scene as target scenes, selecting the moment when the center of the vehicle coincides with a lane line as a key moment according to the characteristics of the cut-in scene, calculating TTCs of all dangerous scene working conditions at the key moment, and establishing a pre-division standard of dangerous domain scene danger degree boundaries as follows:
when TTC ∈ [0 s, 1 s], the risk level of the scene is a collision scene; when TTC ∈ (1 s, 3 s], the risk level is an emergency scene; when TTC ∈ (3 s, 5 s], the risk level is a conflict scene; when TTC ∈ (5 s, +∞), the scene is judged to be a safe scene wrongly included among the dangerous scene working conditions because of its large TTC value, and it is directly discarded;
selecting appropriate scene risk indexes to construct joint distributions: the scene risk indexes of the dangerous scene working conditions at the key moment are calculated, comprising TTC, THW, and the relative speed and relative acceleration of the vehicle and the target vehicle; the scene risk indexes are combined pairwise to construct joint distributions, the distribution of the dangerous scene working conditions is observed in combination with the pre-division of the scene risk degree boundaries in step one, and the joint distribution of scene risk indexes with the better classification effect is selected as input to the support vector regression algorithm;
thirdly, solving the risk degree boundaries of different grades in the dynamic scene danger domain through support vector regression: firstly, scene boundary points at the positions of the selected joint distribution where no boundary line exists are extracted and input as training samples into the support vector regression algorithm; during extraction, the boundary points belonging to one boundary line are taken as one training sample set and input separately, so that each training sample set corresponds to exactly one scene risk degree boundary; finally, all scene risk degree boundary lines are collected to obtain the risk degree boundaries of different grades in the dynamic scene danger domain;
let the training samples in the training sample set be (t_i, g_i), i = 1, ..., m, where m represents the number of training samples in the set, i.e. the number of boundary points, t_i represents the abscissa of the i-th boundary point in the joint distribution of scene risk indexes, and g_i the ordinate of the i-th boundary point; the support vector regression algorithm tolerates a deviation of at most ε between the model output f(t) and the ordinate g of a boundary point, i.e. a loss is counted only when |f(t) − g| > ε, which is equivalent to constructing a band of width 2ε centered on f(t); if a training sample falls inside the band, the prediction is considered correct;
let the output f (t) of the support vector regression algorithm be:
f(t)=ηt+q (31)
in the formula, eta represents a normal vector of the decision surface and determines the direction of the decision surface; q represents a displacement term and determines the distance between the decision surface and the origin; t represents the abscissa of the boundary point in the joint distribution of the scene risk indicators;
the solution problem is then formalized as:
min_{η,q} (1/2)||η||² + D Σ_{i=1}^{m} l_ε(f(t_i) − g_i)   (32)
where D represents a regularization constant and l_ε represents the ε-insensitive loss function, whose expression is:
Figure FDA0003894491700000121
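Equation (33) is a one-liner in code; the sketch below is our own transcription.

```python
def eps_insensitive(z: float, eps: float) -> float:
    """epsilon-insensitive loss of equation (33): zero inside the
    epsilon-tube, growing linearly outside it."""
    return max(0.0, abs(z) - eps)
```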
Introducing slack variables $\delta_i$ and $\delta_i'$, equation (32) is transformed into:

$$\begin{aligned} \min_{\eta,\,q,\,\delta_i,\,\delta_i'}\ & \frac{1}{2}\lVert\eta\rVert^{2} + D\sum_{i=1}^{m}(\delta_i + \delta_i') \\ \text{s.t.}\ & f(t_i) - g_i \le \varepsilon + \delta_i, \\ & g_i - f(t_i) \le \varepsilon + \delta_i', \\ & \delta_i \ge 0,\ \delta_i' \ge 0, \quad i = 1, \ldots, m \end{aligned} \tag{34}$$

Introducing Lagrange multipliers $\mu_i \ge 0$, $\mu_i' \ge 0$, $\alpha_i \ge 0$, and $\alpha_i' \ge 0$, the Lagrangian function is constructed:

$$\begin{aligned} L(\eta, q, \alpha, \alpha', \delta, \delta', \mu, \mu') = {} & \frac{1}{2}\lVert\eta\rVert^{2} + D\sum_{i=1}^{m}(\delta_i + \delta_i') - \sum_{i=1}^{m}\mu_i\delta_i - \sum_{i=1}^{m}\mu_i'\delta_i' \\ & + \sum_{i=1}^{m}\alpha_i\bigl(f(t_i) - g_i - \varepsilon - \delta_i\bigr) + \sum_{i=1}^{m}\alpha_i'\bigl(g_i - f(t_i) - \varepsilon - \delta_i'\bigr) \end{aligned} \tag{35}$$
Setting the partial derivatives of the Lagrangian function $L$ with respect to $\eta$, $q$, $\delta_i$, and $\delta_i'$ to zero yields:

$$\eta = \sum_{i=1}^{m}(\alpha_i' - \alpha_i)\,t_i \tag{36}$$

$$0 = \sum_{i=1}^{m}(\alpha_i' - \alpha_i) \tag{37}$$

$$\alpha_i + \mu_i = D \tag{38}$$

$$\alpha_i' + \mu_i' = D \tag{39}$$
Substituting these into equation (35), the dual problem is obtained:
$$\begin{aligned} \max_{\alpha,\,\alpha'}\ & \sum_{i=1}^{m}\bigl[g_i(\alpha_i' - \alpha_i) - \varepsilon(\alpha_i' + \alpha_i)\bigr] - \frac{1}{2}\sum_{i=1}^{m}\sum_{j=1}^{m}(\alpha_i' - \alpha_i)(\alpha_j' - \alpha_j)\,t_i t_j \\ \text{s.t.}\ & \sum_{i=1}^{m}(\alpha_i' - \alpha_i) = 0, \quad 0 \le \alpha_i,\ \alpha_i' \le D \end{aligned} \tag{40}$$

where $t_j$, $\alpha_j$, and $\alpha_j'$ are the same quantities as $t_i$, $\alpha_i$, and $\alpha_i'$ under the second summation index $j$;
Solving the above dual problem yields $\alpha_i$ and $\alpha_i'$. Selecting any training sample satisfying $0 < \alpha_i < D$, $q$ is obtained as:

$$q = g_i + \varepsilon - \sum_{j=1}^{m}(\alpha_j' - \alpha_j)\,t_j t_i \tag{41}$$

In practice, $q$ is determined more robustly by selecting several training samples satisfying $0 < \alpha_i < D$, solving for $q$ with each, and taking the average value;
Thus, the scene danger-degree boundary equation $f(t)$ output by support vector regression is:

$$f(t) = \sum_{i=1}^{m}(\alpha_i' - \alpha_i)\,t_i t + q \tag{42}$$
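To make equations (40)-(42) concrete, the sketch below solves the linear-kernel dual numerically with SciPy's SLSQP solver and then recovers $\eta$ via (36) and $q$ via (41). The helper name and the choice of a general-purpose solver are ours; a production system would use a dedicated QP/SMO solver.

```python
import numpy as np
from scipy.optimize import minimize

def fit_linear_svr_dual(t, g, D=10.0, eps=0.1):
    """Fit f(t) = eta*t + q by solving dual (40) for a linear kernel,
    then recovering eta from (36) and q from (41). Illustrative only."""
    t, g = np.asarray(t, float), np.asarray(g, float)
    m = len(t)

    def neg_dual(x):                       # x = [alpha, alpha']
        a, ap = x[:m], x[m:]
        d = ap - a
        # for a linear kernel the double sum collapses to (t @ d)**2
        return -(g @ d - eps * np.sum(a + ap) - 0.5 * (t @ d) ** 2)

    cons = [{"type": "eq", "fun": lambda x: np.sum(x[m:] - x[:m])}]
    res = minimize(neg_dual, np.zeros(2 * m), method="SLSQP",
                   bounds=[(0.0, D)] * (2 * m), constraints=cons)
    a, ap = res.x[:m], res.x[m:]
    eta = (ap - a) @ t                     # equation (36)
    sv = (a > 1e-6) & (a < D - 1e-6)       # samples with 0 < alpha_i < D
    q = float(np.mean(g[sv] + eps - eta * t[sv])) if sv.any() \
        else float(np.mean(g - eta * t))   # averaged q, per the text above
    return eta, q

# Usage on a handful of fabricated boundary points:
# eta, q = fit_linear_svr_dual([0.5, 1.0, 2.0, 3.0], [0.4, 0.8, 1.4, 2.1])
```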
If the training samples are linearly inseparable, the kernel-function method introduced in the third step of the second step is adopted: mapping $t$ into a high-dimensional feature space as $\phi(t)$, $\eta$ becomes:

$$\eta = \sum_{i=1}^{m}(\alpha_i' - \alpha_i)\,\phi(t_i) \tag{43}$$
and solving gives the scene danger-degree boundary equation $F(t)$ output by support vector regression:

$$F(t) = \sum_{i=1}^{m}(\alpha_i' - \alpha_i)\,\kappa(t, t_i) + q \tag{44}$$

where $\kappa(t_i, t_j) = \phi(t_i)^{\mathsf{T}}\phi(t_j)$ denotes the kernel function;
The scene danger-degree boundaries corresponding to all training sample sets are solved according to the above method and combined with the boundaries pre-divided in step one, yielding the danger-degree boundaries of different grades in the dynamic-scene danger domain;
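In practice the kernelized fit of equation (44) is available off the shelf. The sketch below uses scikit-learn's SVR, whose C parameter plays the role of $D$ and whose epsilon plays the role of $\varepsilon$; the sample coordinates are fabricated for illustration.

```python
import numpy as np
from sklearn.svm import SVR

# Boundary points (t_i, g_i) of one training sample set: made-up
# coordinates in a (TTC, THW)-style joint-distribution plane.
t = np.array([0.5, 1.0, 1.8, 2.5, 3.2, 4.0]).reshape(-1, 1)
g = np.array([0.4, 0.7, 1.1, 1.3, 1.6, 1.9])

# The RBF kernel stands in for kappa(., .) of equation (44);
# kernel="linear" recovers the separable case of equation (42).
model = SVR(kernel="rbf", C=10.0, epsilon=0.1).fit(t, g)

t_grid = np.linspace(0.5, 4.0, 8).reshape(-1, 1)
boundary = model.predict(t_grid)   # one danger-degree boundary F(t)
```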
Fourthly, evaluating the dynamic scene boundary based on the physical-mechanism and machine-learning hybrid theory; the specific process is as follows:
Step one, consistency analysis of the physics-based and machine-learning-based dynamic-scene danger-degree boundaries: the scene risk indicators at the dynamic scene boundaries obtained by the two methods are calculated, the difference degree between the corresponding indicators is computed, and a consistency standard is set according to the average difference degree over all indicators;
Taking the cut-in scene and the ODD (operational design domain) designed for it as the target scene, the scene boundary is described by the functional relationship established between the scene risk indicators. Consistency analysis therefore compares the difference degree of each scene risk indicator value at the dynamic scene boundary: each indicator is calculated at the boundary obtained by the physics-based method of the first part and by the machine-learning method of the third part, the difference degree of each indicator and the average difference degree over all indicators are computed for the two methods, and, according to the characteristics of the cut-in scene, the consistency standard is set as follows:
when the average difference degree over all scene risk indicators is less than 15%, the consistency is judged good; when it is not less than 15% but less than 30%, the consistency is judged moderate; when it exceeds 30%, the consistency is judged poor;
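One way to read this standard as code is shown below. The patent does not spell out the difference-degree formula, so its definition here as a symmetric relative deviation is our assumption.

```python
def consistency_rating(phys_vals, ml_vals):
    """Rate consistency from the average difference degree of paired
    scene-risk-indicator values at the two boundaries. The symmetric
    relative deviation used as 'difference degree' is an assumption."""
    diffs = [abs(p - m) / (((abs(p) + abs(m)) / 2) or 1.0)
             for p, m in zip(phys_vals, ml_vals)]
    avg = sum(diffs) / len(diffs)
    if avg < 0.15:
        return "good", avg
    if avg < 0.30:
        return "moderate", avg
    return "poor", avg
```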
Step two, generating the high-confidence dynamic scene boundary based on the physical-mechanism and machine-learning hybrid theory. Following the consistency standard set in step one, i.e., the basis on which that standard was set, the high-confidence scene boundary is generated according to the following principles:
when the consistency analysis result is good, the bias weights are assigned freely between the boundaries obtained by the two solving methods according to the actual research need: if the problem under study must reflect real road traffic conditions more closely, the high-confidence dynamic boundary is biased toward the machine-learning boundary; if the problem focuses on theoretical analysis, it is biased toward the physics-based boundary. When the consistency analysis result is moderate, the machine-learning solution is taken to deviate noticeably, owing to a small amount of training data or low data quality, and more weight is assigned to the physics-based boundary in proportion to the average difference degree. When the consistency analysis result is poor, the machine-learning boundary deviates too much to be trusted, and the high-confidence dynamic scene boundary is obtained from the physics-based method alone;
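These weighting rules can be condensed as below; $w$ is the bias toward the physics-based boundary, and the default weight for the good case and the weight-shift rule for the moderate case are our illustrative choices.

```python
def blend_boundaries(f_phys, f_ml, rating, avg_diff, w_good=0.5):
    """Return a high-confidence boundary as a weighted blend of the
    physics-based boundary f_phys(t) and the ML boundary f_ml(t).
    w_good and the moderate-case rule are illustrative placeholders."""
    if rating == "good":
        w = w_good                     # freely assigned per research need
    elif rating == "moderate":
        w = min(1.0, 0.5 + avg_diff)   # shift weight toward physics
    else:                              # "poor": trust physics only
        w = 1.0
    return lambda t: w * f_phys(t) + (1.0 - w) * f_ml(t)
```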
Step three, verifying the dynamic scene boundary based on the physical-mechanism and machine-learning hybrid theory. According to the generated high-confidence dynamic scene boundary, the scene spaces of the three danger grades of the danger domain and the scene space of the safety domain are taken as four independent test scene libraries. Scenes are sampled in turn from the four libraries to test the intelligent connected vehicle until the accident rate of each library converges. If the converged accident rates of the four libraries increase with the scene danger degree and show a clear step-shaped distribution, the generated dynamic scene boundary is judged effective.
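The verification loop can be sketched as follows; sample_scene, run_test, and the convergence tolerance are hypothetical placeholders standing in for the simulation of the vehicle under test.

```python
def accident_rate(library, run_test, sample_scene, tol=1e-3, window=200):
    """Sample scenes from one library and test until the running accident
    rate converges: its change over the last `window` trials falls below
    tol. run_test and sample_scene are hypothetical simulation hooks."""
    crashes, n, rates = 0, 0, []
    while True:
        n += 1
        crashes += int(run_test(sample_scene(library)))   # True on accident
        rates.append(crashes / n)
        if n > 2 * window and abs(rates[-1] - rates[-window]) < tol:
            return rates[-1]

# Expected after convergence: rate(safe) < rate(conflict) < rate(emergency)
# < rate(collision), i.e., a step-shaped increase with scene danger degree.
```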
CN202211269248.9A 2022-10-17 2022-10-17 Dynamic scene boundary assessment method based on physical mechanism and machine learning hybrid theory Pending CN115544888A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211269248.9A CN115544888A (en) 2022-10-17 2022-10-17 Dynamic scene boundary assessment method based on physical mechanism and machine learning hybrid theory

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211269248.9A CN115544888A (en) 2022-10-17 2022-10-17 Dynamic scene boundary assessment method based on physical mechanism and machine learning hybrid theory

Publications (1)

Publication Number Publication Date
CN115544888A true CN115544888A (en) 2022-12-30

Family

ID=84736274

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211269248.9A Pending CN115544888A (en) 2022-10-17 2022-10-17 Dynamic scene boundary assessment method based on physical mechanism and machine learning hybrid theory

Country Status (1)

Country Link
CN (1) CN115544888A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116401111A (en) * 2023-05-26 2023-07-07 中国第一汽车股份有限公司 Function detection method and device of brain-computer interface, electronic equipment and storage medium
CN116401111B (en) * 2023-05-26 2023-09-05 中国第一汽车股份有限公司 Function detection method and device of brain-computer interface, electronic equipment and storage medium
CN116628584A (en) * 2023-07-21 2023-08-22 国网智能电网研究院有限公司 Power sensitive data processing method and device, electronic equipment and storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination