CN115662166A - Automatic driving data processing method and automatic driving traffic system - Google Patents

Automatic driving data processing method and automatic driving traffic system

Info

Publication number
CN115662166A
CN115662166A (application CN202211137920.9A)
Authority
CN
China
Prior art keywords
traffic
automatic driving
traffic environment
model
current
Prior art date
Legal status
Granted
Application number
CN202211137920.9A
Other languages
Chinese (zh)
Other versions
CN115662166B (en)
Inventor
董是
袁长伟
王建伟
徐婷
齐玉亮
毛新华
李淑梅
高超
Current Assignee
Hebei Expressway Jingxiong Management Center
Changan University
Original Assignee
Hebei Expressway Jingxiong Management Center
Changan University
Priority date
Filing date
Publication date
Application filed by Hebei Expressway Jingxiong Management Center, Changan University filed Critical Hebei Expressway Jingxiong Management Center
Priority to CN202211137920.9A priority Critical patent/CN115662166B/en
Publication of CN115662166A publication Critical patent/CN115662166A/en
Application granted granted Critical
Publication of CN115662166B publication Critical patent/CN115662166B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Traffic Control Systems (AREA)

Abstract

The invention discloses an automatic driving data processing method and an automatic driving traffic system, which are applied to road sides and/or vehicle ends and comprise the following steps: s1: acquiring the current traffic environment by using a multivariate fusion perception method; s2: identifying traffic participants in the current traffic environment, and predicting the action tracks of the traffic participants to obtain a prediction result; s3: according to the current traffic environment and the prediction result, a decision model and a constraint rule of the automatic driving vehicle in the current traffic environment are established; s4: determining an envelope interval of a decision model of the autonomous vehicle; s5: determining a passenger evaluation model of an envelope interval by using a data driving method; s6: judging whether the passenger evaluation model is a target passenger evaluation model or not, and if so, entering a step S7; otherwise, returning to the step S2; s7: and optimizing a decision model and a constraint rule of the automatic driving vehicle by using the passenger evaluation model, and sending the decision model and the constraint rule to the automatic driving vehicle.

Description

Automatic driving data processing method and automatic driving traffic system
Technical Field
The invention relates to the technical field of automatic driving, in particular to an automatic driving data processing method and an automatic driving traffic system.
Background
The construction of digital traffic infrastructure can make up for the shortfall in computing power and sensing capability that automatic driving vehicles face in complex traffic scenes. Through the vehicle-road cooperation mode it significantly reduces the cost of upgrading vehicle hardware and improves the overall safety of the system, playing an important supporting role in the commercial deployment of the automatic driving industry.
In recent years, vehicle-road coordination has been promoted both domestically and abroad at the administrative and application levels. V2X communication for vehicle-road cooperation is mainly based on three standards, namely the DSRC standard of IEEE 802.11p, the LTE-V technical standard based on the LTE cellular network, and the 5G-V2X (NR) standard, which can meet the requirements of different service scenarios. At present, adoption of the DSRC standard has been slow, LTE-V2X already has an industrial application base, and the 5G-V2X (NR) standard has been frozen but not yet released. LTE-V2X, which precedes 5G-V2X (NR), can address most of the basic safety-warning and efficiency-improvement applications in the scenarios defined so far and is mainly used for driving-assistance scenarios, whereas 5G-V2X (NR) is primarily intended to meet the requirements of high-level automatic driving application scenarios.
At present, academia and industry have carried out a great deal of research on V2X communication, focusing on the following aspects: obtaining real-time road conditions, road information, pedestrian information, and other traffic information through cooperative communication among vehicles, thereby improving driving safety, reducing congestion, improving traffic efficiency, and providing rich in-vehicle entertainment information. However, the urban traffic environment involves complex mixed traffic scenes in which pedestrians and vehicles share the road, which increases the difficulty of researching urban road automatic driving technology. Meanwhile, the application scenarios of current multi-sensor fusion algorithms are relatively simple, so research on complex dynamic scenes still needs to be strengthened, and engineering problems such as time synchronization and online calibration remain and urgently need solutions.
Disclosure of Invention
The invention aims to provide an automatic driving data processing method and an automatic driving traffic system that simultaneously address the problems of multi-source fusion perception, traffic participant identification and trajectory prediction, and intelligent decision-making and path planning for automatic driving.
The technical scheme for solving the technical problems is as follows:
the invention provides an automatic driving data processing method which is applied to road sides and/or vehicle ends, and comprises the following steps:
s1: acquiring the current traffic environment by using a multivariate fusion perception method;
s2: identifying the traffic participants in the current traffic environment, and predicting the action tracks of the traffic participants to obtain a prediction result;
s3: according to the current traffic environment and the prediction result, a decision model and a constraint rule of the automatic driving vehicle in the current traffic environment are established;
s4: determining an envelope interval of the autonomous vehicle decision model;
s5: determining a passenger evaluation model of the envelope interval by using a data driving method;
s6: judging whether the passenger evaluation model is a target passenger evaluation model or not, and if so, entering a step S7; otherwise, returning to the step S2;
s7: optimizing a decision model and a constraint rule of the autonomous vehicle by using the passenger evaluation model, and sending the decision model and the constraint rule of the autonomous vehicle to the autonomous vehicle.
Optionally, the current traffic environment includes a current road section traffic environment and a current intersection traffic environment, and the step S1 includes:
s11: acquiring the traffic environment of the current road section by using the multivariate fusion perception method;
s12: acquiring the traffic environment of the current intersection by using the multivariate fusion perception method; and
s13: and performing multi-sensor information fusion on the current road section traffic environment and the current intersection traffic environment by using the two-dimensional candidate area to obtain the current traffic environment.
Optionally, the step S11 includes:
s111: acquiring the information of a current road section traffic environment image;
s112: acquiring target key points in the current traffic environment image information by adopting an anchor-free target detection and identification method;
s113: predicting the size of a bounding box of the current traffic environment image information by using the target key point;
s114: obtaining the current road section traffic environment according to the size of the surrounding frame of the current traffic environment image information and the target key point;
the step S12 includes:
s121: acquiring laser point cloud data of a current intersection traffic environment;
s122: fusing the laser point cloud data of the current intersection traffic environment by adopting a multi-size representation extraction algorithm of a sparse convolutional neural network to obtain fused data;
s123: distributing the parameter weight of the fusion data by using a self-adaptive weight regulation mechanism to obtain the processed fusion data;
s124: and acquiring key information of the laser point cloud data of the current intersection traffic environment by using the processed fusion data.
Optionally, in step S112, the anchor-free target detecting and identifying method includes:
a1: extracting features of different scales in the current road section traffic environment image information by using a full convolution neural network;
a2: extracting and mining the interaction of the characteristics among different scales by utilizing high-dimensional convolution;
a3: according to the interaction, fusing the characteristics among different scales to obtain fused image information;
a4: and obtaining target key points in the current traffic environment image information by using key point detection and a logistic regression loss model according to the fused image information.
Optionally, in step A2, the high-dimensional convolution structure is:
y = upsample(w₁ * x_{l+1}) + w₀ * x_l + w₋₁ * s²(x_{l−1})
wherein y is the result of the convolution operation; upsample(·) is an upsampling operation that expands a low-resolution image into a high-resolution image; w₁ denotes the elements of convolution kernel 1; x_{l+1} is the value of the resolution-layer l+1 matrix; x_l is the value of the resolution-layer l matrix; w₀ denotes the elements of convolution kernel 0; w₋₁ denotes the elements of convolution kernel -1; s² is the sampling rule that scales a high-resolution image down to a low-resolution image; x_{l−1} is the value of the resolution-layer l-1 matrix; and * denotes the convolution operation;
in the step A4, the logistic regression loss model is:
[Formula image in the original filing: the logistic regression loss model L, expressed in terms of a, Y, Y', and γ.]
wherein L is the logistic regression loss model; a is a regression parameter; Y is the label of the target in machine learning; Y' is the predicted value of Y; and γ is a regression parameter.
Optionally, the step S2 includes:
performing attitude intent prediction on the traffic participant; and/or
And predicting the track of the traffic participant.
Optionally, the pose intent prediction for the traffic participant comprises:
acquiring key points of each traffic participant and an affinity field coding graph between the key points;
connecting the key points through the affinity field coding graph to obtain the attitude estimation fitting characteristics of the traffic participants;
according to the attitude estimation fitting characteristics, respectively carrying out mode matching on all traffic participants in the current traffic environment to obtain matching results;
performing attitude intention prediction on the traffic participants by using the matching result;
the predicting the trajectory of the traffic participant comprises:
constructing a pedestrian trajectory prediction model according to the position coordinates of the traffic participants and the pedestrian anisotropic data; constructing a sequence-to-sequence neural network model according to the position coordinates of the traffic participants, the pedestrian anisotropy data and the traffic participant coding information; the traffic participant coding information is obtained through a convolutional neural network generated by image data of the current traffic environment;
obtaining a first pedestrian track prediction result according to the pedestrian track prediction model; obtaining a second pedestrian track prediction result according to the sequence-to-sequence neural network model;
constructing a stack generalization model according to the first pedestrian trajectory prediction result and the second pedestrian trajectory prediction result;
and obtaining a track prediction result of the traffic participant according to the stack generalization model.
Optionally, the step S3 includes:
s31: constructing physical constraint conditions of the automatic driving vehicle according to the current traffic environment;
s32: establishing a physical model based on micro kinematics and dynamics according to a typical extreme traffic scene;
s33: establishing a corresponding collision risk expression according to the physical model and the physical constraint condition;
s34: constructing a safety data field around the automatic driving vehicle according to the collision risk expression;
s35: according to the track prediction of the traffic participants, obtaining the change trend of a safety data field around the automatic driving vehicle;
s36: and generating a path planning strategy of the automatic driving vehicle by utilizing the variation trend.
The invention also provides an automatic driving traffic system, which applies the automatic driving data processing method and further comprises the following steps:
a plurality of perception modules: the plurality of sensing modules are used for sensing a plurality of single characteristics in the current traffic environment;
a fusion module for fusing a plurality of the single features to form the complete current traffic environment;
the identification module is used for identifying traffic participants in the current traffic environment and generating action tracks of the traffic participants;
the processing module is used for processing the action track, predicting the action track according to the action track and generating a prediction result; according to the current traffic environment and the prediction result, a decision model and a constraint rule of the automatic driving vehicle in the current traffic environment are established; determining an envelope interval of the autonomous vehicle decision model; determining a passenger evaluation model of the envelope interval by using a data driving method; judging whether the passenger evaluation model is a target passenger evaluation model or not; and optimizing a decision model and a constraint rule of the autonomous vehicle by using the passenger evaluation model, and sending the decision model and the constraint rule of the autonomous vehicle to the autonomous vehicle.
The invention has the following beneficial effects:
(1) The complex urban road automatic driving technology based on digital infrastructure targets the safety of vulnerable traffic participants and the comfort of automatic driving vehicle passengers. Comprehensive automatic driving perception and decision technologies built on digital road traffic infrastructure are explored through multi-source fusion perception by the roadside mobile edge computing unit and the on-board sensors, traffic participant identification and trajectory prediction, and intelligent decision-making and path planning, thereby improving the road traffic capacity in complex urban road traffic scenes.
(2) The infrastructure is digitized through data modeling of the air-ground road infrastructure, deployment of roadside sensors, and the like. The infrastructure data, traffic signs and markings, intersection control information, individual-vehicle information, and so on are uploaded to the roadside computing unit, where multi-view spatio-temporal coupling of vehicle and road information is performed at the edge to assist the automatic driving vehicle in completing its path planning decisions.
(3) Based on V2X communication technology, the invention focuses on the vehicle-road fusion communication architecture and cooperative operation mechanism, and develops a vehicle-road cooperative information transmission architecture, a cloud-edge-end cooperative operation and decision mechanism, and a virtual-real combined vehicle-road cooperative test technology oriented to typical urban traffic scenes. It breaks through the theoretical and technical bottlenecks in guaranteeing the reliability, accessibility, and low latency of communication data, realizes efficient and balanced distribution of cloud-edge cooperative computing power, and explores a virtual simulation accelerated test method based on meta-scene particle extraction and rapid construction, providing support for the practical application and deployment of vehicle-road cooperative systems.
Drawings
FIG. 1 is a diagram illustrating an overall architecture of traffic participant identification and trajectory prediction in an automatic driving data processing method according to the present invention;
FIG. 2 is a flow chart of a method for processing autopilot data in accordance with the present invention;
FIG. 3 is a flowchart illustrating the sub-steps of step S11;
FIG. 4 is a flowchart illustrating the sub-steps of step S12;
FIG. 5 is a flow chart of a method for anchor-free target detection and identification in an autopilot data processing method provided by the present invention;
fig. 6 is a flowchart illustrating the steps of step S3 in fig. 2.
Detailed Description
The principles and features of this invention are described below in conjunction with the following drawings, which are set forth by way of illustration only and are not intended to limit the scope of the invention.
Examples
The urban traffic environment involves complex mixed traffic scenes in which pedestrians and vehicles share the road, which increases the difficulty of researching urban road automatic driving technology. To improve the traffic capacity of urban roads and improve the traffic environment, this project solves the technical problem of urban road automatic driving in multiple dimensions, namely multi-source fusion perception, traffic participant identification and trajectory prediction, and automatic driving intelligent decision and path planning, on the basis of urban digital road infrastructure. The specific content includes the following:
Multi-source fusion perception of the complex automatic driving traffic environment in the vehicle-road cooperative mode
In the prior art, all-weather reliable perception of the environment is difficult to achieve with a single sensor in complex traffic environments such as intersections. To address the multi-scale target problem and the detection efficiency problem that arise when the vision cameras at the vehicle end and the road end are applied to target detection and identification in complex scenes under the vehicle-road cooperative mode, a feature fusion method based on high-dimensional convolution and an anchor-free target detection method are designed and established.
Specifically, the steps include:
acquiring current road section traffic environment image information;
acquiring target key points in the current traffic environment image information by adopting an anchor-free target detection and identification method;
predicting the size of a bounding box of the current traffic environment image information by using the target key point;
and obtaining the current road section traffic environment according to the size of the surrounding frame of the current traffic environment image information and the target key point.
The anchor-free target detection and identification method comprises the following steps:
firstly, extracting features by utilizing a full convolution neural network. Secondly, interaction among different scales is extracted and excavated by using high-dimensional convolution, the change of the feature scale can be adapted through aligning an inner core, the scale balance among the layers is kept, and the features of different scales are fused. Wherein the high-dimensional convolution structure is:
y = upsample(w₁ * x_{l+1}) + w₀ * x_l + w₋₁ * s²(x_{l−1})
wherein y is the result of the convolution operation; upsample(·) is an upsampling operation that expands a low-resolution image into a high-resolution image; w₁ denotes the elements of convolution kernel 1; x_{l+1} is the value of the resolution-layer l+1 matrix; x_l is the value of the resolution-layer l matrix; w₀ denotes the elements of convolution kernel 0; w₋₁ denotes the elements of convolution kernel -1; s² is the sampling rule that scales a high-resolution image down to a low-resolution image; x_{l−1} is the value of the resolution-layer l-1 matrix; and * denotes the convolution operation;
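For illustration, a minimal sketch of such a cross-scale convolution is given below in PyTorch-style Python; the layer and channel sizes, the use of bilinear interpolation for upsample() and of stride-2 average pooling for s², and the 3x3 kernels are assumptions made here and are not specified by the patent.

import torch
import torch.nn.functional as F
from torch import nn

class CrossScaleConv(nn.Module):
    """Fuses three adjacent resolution levels:
    y = upsample(w1 * x_{l+1}) + w0 * x_l + w_{-1} * s2(x_{l-1})."""
    def __init__(self, channels: int):
        super().__init__()
        self.w1 = nn.Conv2d(channels, channels, 3, padding=1)   # kernel for the coarser level l+1
        self.w0 = nn.Conv2d(channels, channels, 3, padding=1)   # kernel for the current level l
        self.w_1 = nn.Conv2d(channels, channels, 3, padding=1)  # kernel for the finer level l-1

    def forward(self, x_coarse, x_mid, x_fine):
        # upsample(): expand the low-resolution map to the resolution of level l
        up = F.interpolate(self.w1(x_coarse), size=x_mid.shape[-2:],
                           mode="bilinear", align_corners=False)
        # s2: scale the high-resolution map down to the resolution of level l
        down = F.avg_pool2d(x_fine, kernel_size=2)
        return up + self.w0(x_mid) + self.w_1(down)

# usage: three feature maps at neighbouring resolution levels
fuse = CrossScaleConv(channels=64)
x_coarse = torch.randn(1, 64, 16, 16)   # level l+1 (low resolution)
x_mid = torch.randn(1, 64, 32, 32)      # level l
x_fine = torch.randn(1, 64, 64, 64)     # level l-1 (high resolution)
y = fuse(x_coarse, x_mid, x_fine)       # shape (1, 64, 32, 32)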
and detecting and identifying the extracted multi-scale fusion features by using key points and regression of the size of the bounding box on the premise of not using preset anchor points. And inputting the image information into a full convolution network to obtain a thermodynamic diagram, wherein a peak point of the thermodynamic diagram is a key point of the target, and then predicting the width and height information of the target enclosing frame through the key point. Wherein, the logistic regression loss function is adopted as follows:
[Formula image in the original filing: the logistic regression loss model L, expressed in terms of a, Y, Y', and γ.]
wherein L is the logistic regression loss model; a is a regression parameter; Y is the label of the target in machine learning; Y' is the predicted value of Y; and γ is a regression parameter.
This method does not need to enumerate potential target positions or perform post-processing such as non-maximum suppression, which effectively improves the efficiency and cross-domain adaptability of target detection and identification.
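As a hedged illustration of reading detections directly from the heatmap without anchors or non-maximum suppression, the sketch below extracts local peaks with a 3x3 max-pooling comparison and pairs them with regressed box sizes; the tensor layout, class count, and the peak-extraction trick are assumptions, not details from the patent.

import torch
import torch.nn.functional as F

def decode_centers(heatmap: torch.Tensor, wh: torch.Tensor, k: int = 100):
    """heatmap: (1, C, H, W) class scores after sigmoid;
    wh: (1, 2, H, W) predicted box width/height at every location.
    Returns the top-k detections as (x, y, w, h, score, class)."""
    # a location is a peak if it equals its 3x3 local maximum
    pooled = F.max_pool2d(heatmap, kernel_size=3, stride=1, padding=1)
    peaks = heatmap * (pooled == heatmap).float()

    _, C, H, W = peaks.shape
    scores, idx = peaks.view(-1).topk(k)
    cls = idx // (H * W)
    ys = (idx % (H * W)) // W
    xs = idx % W

    dets = []
    for s, c, y, x in zip(scores, cls, ys, xs):
        w = wh[0, 0, y, x]
        h = wh[0, 1, y, x]
        dets.append((x.item(), y.item(), w.item(), h.item(), s.item(), c.item()))
    return dets

# usage with random tensors standing in for network outputs
hm = torch.rand(1, 3, 128, 128)      # 3 traffic-participant classes, assumed
wh = torch.rand(1, 2, 128, 128) * 50
detections = decode_centers(hm, wh, k=10)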
Specifically, the multisource fusion perception method based on sensors such as vision and point cloud information comprises the following steps:
acquiring laser point cloud data of a current intersection traffic environment;
fusing the laser point cloud data of the current intersection traffic environment by adopting a multi-size representation extraction algorithm of a sparse convolutional neural network to obtain fused data;
distributing the parameter weight of the fusion data by using a self-adaptive weight regulating mechanism to obtain the processed fusion data;
and acquiring key information of the laser point cloud data of the current intersection traffic environment by using the processed fusion data.
To address the accurate-detection and occlusion problems of single-sensor targets that arise when vehicle-end and roadside lidars are applied to complex scenes in the vehicle-road cooperative setting, the point clouds P_i (i = 1, 2, …, n) from the n lidars are taken as input, and the adaptively matched and fused point cloud can be expressed as:
[Formula image in the original filing: the fused point cloud expressed in terms of the transformation matrices H_i and the input point clouds P_i.]
H_i is a transformation matrix continuously optimized through progressive consistent sampling; the most widely distributed point cloud P_t is selected according to a farthest-point random sampling mechanism:
P_t = max{ P_i | distance(P_i, P_{t−1}) }
This further compresses the transmitted data volume while keeping the spatial representation complete, thereby improving point cloud performance.
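A minimal NumPy sketch of this fusion and of the selection rule P_t = max{P_i | distance(P_i, P_{t−1})} follows; representing the progressive consistent sampling step by pre-computed 4x4 transforms and using centroid distance as the inter-cloud distance are simplifying assumptions made for illustration.

import numpy as np

def fuse_point_clouds(clouds, transforms):
    """clouds: list of (N_i, 3) arrays from n lidars;
    transforms: list of 4x4 homogeneous matrices H_i mapping each cloud
    into a common frame (assumed to come from an upstream registration step)."""
    fused = []
    for P, H in zip(clouds, transforms):
        homo = np.hstack([P, np.ones((P.shape[0], 1))])
        fused.append((homo @ H.T)[:, :3])
    return np.vstack(fused)

def select_farthest_cloud(clouds, prev_centroid):
    """Pick the most widely distributed cloud: the one whose centroid lies
    farthest from the previously selected cloud's centroid (an illustrative
    stand-in for the farthest-point random sampling rule)."""
    centroids = [P.mean(axis=0) for P in clouds]
    dists = [np.linalg.norm(c - prev_centroid) for c in centroids]
    return clouds[int(np.argmax(dists))]

# usage with two synthetic clouds
rng = np.random.default_rng(0)
clouds = [rng.normal(size=(100, 3)), rng.normal(loc=5.0, size=(100, 3))]
transforms = [np.eye(4), np.eye(4)]
merged = fuse_point_clouds(clouds, transforms)
chosen = select_farthest_cloud(clouds, prev_centroid=np.zeros(3))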
A multi-size representation extraction algorithm based on a sparse convolutional network is studied: cross-layer network connections are established on the fused point cloud input, features of different levels and scales are jointly fused, and an adaptive weight adjustment mechanism is introduced to distribute the parameter weights so as to extract more of the key information in the point cloud view. The loss function is set as follows:
[Formula image in the original filing: the total loss, combining the classification term L_cls with a regression term.]
wherein L_cls is the focal loss obtained from the classification computation, and the regression term is the smooth-L1 error between the predicted and actual values; through optimized learning, the lidar identification performance for multiple target objects in complex intersection scenes is ultimately improved. To address the problems of variable target scale, detection efficiency, and occlusion that arise in target detection and identification at the vehicle end and the road end in complex scenes under the vehicle-road cooperation mode, a multi-source sensor fusion perception method based on vision, point cloud information, and other sources is provided.
In this way, the accurate-detection and occlusion problems of single-sensor targets that arise when vehicle-end and roadside point cloud sensors are applied in the vehicle-road cooperative scene can be studied: a redundancy-adaptive multi-sensor point cloud fusion technique refines the combined roadside point clouds, a multi-size sparse convolutional neural network is designed for feature extraction, and cross-layer network connections are established to jointly fuse features of different levels and scales, ultimately improving the lidar identification performance for multiple targets in complex intersection scenes.
The sensors at the vehicle end and the road end are interconnected over Ethernet to achieve clock synchronization and a unified spatial reference. Candidate regions generated by a two-dimensional detector form a viewing-frustum search space in the point cloud, and the three-dimensional detection result is obtained by regression. For urban road sections where the GPS signal is occluded, the road-end sensors are treated as beacon nodes with known positions and the vehicle position as the unknown node; the position of the moving vehicle is computed with a single-sided synchronous two-way ranging algorithm, based on the beacon-node positions and the information exchanged between the vehicle-end and road-end sensors in the vehicle-road cooperative environment.
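Assuming the single-sided synchronous two-way ranging follows the standard SS-TWR timing (the patent does not spell it out), the sketch below converts round-trip and reply times to ranges and then estimates the vehicle position from several roadside beacons by linear least squares; the beacon layout and timing values are illustrative.

import numpy as np

C = 299_792_458.0  # speed of light, m/s

def ss_twr_distance(t_round: float, t_reply: float) -> float:
    """Single-sided two-way ranging: the initiator measures the round-trip
    time t_round, the responder reports its processing delay t_reply."""
    return C * (t_round - t_reply) / 2.0

def multilaterate(beacons: np.ndarray, distances: np.ndarray) -> np.ndarray:
    """Least-squares position from >= 3 beacon positions (K, 2) and ranges (K,).
    Linearizes by subtracting the first beacon's range equation."""
    x0, y0 = beacons[0]
    d0 = distances[0]
    A, b = [], []
    for (xi, yi), di in zip(beacons[1:], distances[1:]):
        A.append([2 * (xi - x0), 2 * (yi - y0)])
        b.append(d0**2 - di**2 + xi**2 - x0**2 + yi**2 - y0**2)
    sol, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return sol

# usage: three roadside units at known positions, simulated SS-TWR timings
beacons = np.array([[0.0, 0.0], [30.0, 0.0], [0.0, 30.0]])
true_pos = np.array([12.0, 8.0])
t_reply = 200e-9                       # responder processing delay, s
t_round = 2 * np.linalg.norm(beacons - true_pos, axis=1) / C + t_reply
ranges = np.array([ss_twr_distance(tr, t_reply) for tr in t_round])
est = multilaterate(beacons, ranges)   # ~ [12.0, 8.0]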
A traffic participant identification and trajectory prediction algorithm jointly driven by V2I and local data is used to establish a CNN-based pedestrian intention estimation model and a traffic participant trajectory prediction model, and accurate identification and prediction of multiple traffic participants is achieved by fusing V2I and on-board sensor data.
The two-dimensional pose of pedestrians is estimated with a CNN. In this work, pose estimation is used to detect pedestrian intent (stopping, walking, running); skeleton-fitting feature extraction is combined with neural network training and classification to output pedestrian intent accurately. The traffic participant identification results are solidified onto a V2I hardware carrier and sent to the automatic driving vehicle in real time, filling the target-detection blind zones of the on-board sensors and realizing comprehensive vehicle-road-human communication.
A Stacking method is used to deeply fuse the improved social force model with a long short-term memory network, yielding a pedestrian and non-motor-vehicle trajectory prediction model that combines the social nature of pedestrians with big data and can accurately predict the trajectories of pedestrians and non-motor vehicles in special and complex scenes. The method specifically comprises the following steps:
acquiring key points of each traffic participant and an affinity field coding pattern between the key points;
connecting the key points through the affinity field coding graph to obtain the attitude estimation fitting characteristics of each traffic participant;
according to the attitude estimation fitting characteristics, performing mode matching on all traffic participants in the current traffic environment respectively to obtain matching results;
carrying out posture intention prediction on the traffic participants by utilizing the matching result;
the predicting the trajectory of the traffic participant comprises:
constructing a pedestrian trajectory prediction model according to the position coordinates of the traffic participants and the pedestrian anisotropic data; constructing a sequence-to-sequence neural network model according to the position coordinates of the traffic participants, the pedestrian anisotropy data and the traffic participant coding information; the traffic participant coding information is obtained through a convolutional neural network generated by image data of the current traffic environment;
obtaining a first pedestrian track prediction result according to the pedestrian track prediction model; obtaining a second pedestrian track prediction result according to the sequence-to-sequence neural network model;
constructing a stack generalization model according to the first pedestrian trajectory prediction result and the second pedestrian trajectory prediction result;
and obtaining a track prediction result of the traffic participant according to the stack generalization model.
A dual-branch recurrent convolutional neural network simultaneously detects the key points of human body parts and the affinity fields connecting them from the input image. The key points in the image are connected through the affinity field coding graph to obtain the pose-estimation fitting features of pedestrians; the overall detection process is as shown in the figure. Pattern matching is performed on the pose estimate of each pedestrian in the image by a multilayer perceptron: the joints in the fitted human skeleton that reflect movement intent, such as arms and legs, are taken as key points, their positions are normalized by height, and the positions of and joint angles between the key points are extracted as feature vectors, which the multilayer perceptron matches to detect the pedestrian's current movement intent. Real traffic environment information is collected with V2I sensing equipment, on-board sensors, and the like, and a vehicle-to-road communication mode is established for data transmission and interaction, with V2I and the on-board terminal as the core carriers of driving-environment perception and interaction. A unified coordinate system for data conversion is established, the data collected by the V2I sensors and the on-board sensors are associated, and a feature-level data fusion model is built; the fused traffic participant data are structurally integrated to complete accurate identification of the traffic participants, and the identification results are simultaneously published in real time to the on-board unit (OBU) of the automatic driving vehicle.
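The height-normalized keypoint positions and joint angles fed to a multilayer perceptron could be assembled roughly as in the following sketch; the keypoint indices, the three intent classes, and the network width are illustrative assumptions, not the patent's specification.

import numpy as np
import torch
from torch import nn

def pose_features(keypoints: np.ndarray) -> np.ndarray:
    """keypoints: (K, 2) image coordinates of one pedestrian's fitted skeleton.
    Returns height-normalized positions plus a few joint angles."""
    top, bottom = keypoints[:, 1].min(), keypoints[:, 1].max()
    height = max(bottom - top, 1e-6)
    normalized = (keypoints - keypoints.mean(axis=0)) / height

    def angle(a, b, c):  # angle at joint b formed by points a-b-c
        v1, v2 = keypoints[a] - keypoints[b], keypoints[c] - keypoints[b]
        cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-6)
        return np.arccos(np.clip(cos, -1.0, 1.0))

    # assumed indices: 2=hip, 3=knee, 4=ankle, 5=shoulder, 6=elbow, 7=wrist
    angles = np.array([angle(2, 3, 4), angle(5, 6, 7)])
    return np.concatenate([normalized.ravel(), angles])

class IntentMLP(nn.Module):
    """Multilayer perceptron mapping pose features to intent classes."""
    def __init__(self, in_dim: int, classes=("stop", "walk", "run")):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                                 nn.Linear(64, len(classes)))
        self.classes = classes

    def forward(self, x):
        return self.net(x)

# usage on one synthetic 8-keypoint skeleton
kp = np.random.rand(8, 2) * 100
feat = torch.tensor(pose_features(kp), dtype=torch.float32)
model = IntentMLP(in_dim=feat.numel())
intent = model.classes[int(model(feat).argmax())]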
Kinematic characteristics such as the desired speed, maximum speed, and reaction time of pedestrians and non-motor vehicles of different ages and genders are quantified, the influence of the traffic environment and of vehicles on traffic participants is considered, and an improved social-force pedestrian trajectory prediction model is established. A stacked generalization (Stacking) model is built to overcome the shortcoming of predicting traffic participant trajectories with the social force model or a long short-term memory network alone; the meta-model uses a fully connected network, and the base models are the improved social force model and an LSTM-based sequence-to-sequence neural network model, as shown in FIG. 1.
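A compact sketch of this stacked arrangement is given below, with the social force base model reduced to a goal-attraction-plus-repulsion placeholder and the meta-model to a small fully connected network; both reductions, and all dimensions, are assumptions made for illustration.

import numpy as np
import torch
from torch import nn

def social_force_step(pos, vel, goal, neighbors, dt=0.4, tau=0.5):
    """Very reduced social-force base predictor: relaxation toward the goal
    velocity plus exponential repulsion from nearby participants."""
    desired = (goal - pos) / (np.linalg.norm(goal - pos) + 1e-6) * 1.3
    force = (desired - vel) / tau
    for n in neighbors:
        d = pos - n
        force += 2.0 * np.exp(-np.linalg.norm(d) / 0.3) * d / (np.linalg.norm(d) + 1e-6)
    vel = vel + force * dt
    return pos + vel * dt, vel

class Seq2SeqLSTM(nn.Module):
    """LSTM encoder-decoder base predictor over (x, y) sequences."""
    def __init__(self, hidden=32):
        super().__init__()
        self.enc = nn.LSTM(2, hidden, batch_first=True)
        self.dec = nn.Linear(hidden, 2)

    def forward(self, obs):            # obs: (B, T, 2)
        _, (h, _) = self.enc(obs)
        return self.dec(h[-1])         # next (x, y)

class StackingMeta(nn.Module):
    """Fully connected meta-model combining the two base predictions."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 2))

    def forward(self, p_sf, p_lstm):
        return self.net(torch.cat([p_sf, p_lstm], dim=-1))

# usage: combine one step from each base model
obs = torch.randn(1, 8, 2)
sf_pos, _ = social_force_step(np.zeros(2), np.ones(2) * 0.5,
                              np.array([5.0, 0.0]), [np.array([1.0, 1.0])])
p_sf = torch.tensor(sf_pos, dtype=torch.float32).unsqueeze(0)
p_lstm = Seq2SeqLSTM()(obs)
fused = StackingMeta()(p_sf, p_lstm)   # meta-model's final (x, y) prediction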
In the complex urban road environment, with the macroscopic traffic map in the ego vehicle's correlation domain as the background, an intelligent decision-making and path planning technology for automatic driving vehicles that integrates riding-experience evaluation and optimization is studied, balancing macroscopic traffic efficiency against microscopic vehicle control quality and improving the traffic efficiency of the various road vehicles in mixed traffic.
When rule-constraint-based methods are used for path planning and obstacle avoidance, the extensibility of the algorithm is poor, while deep reinforcement learning methods plan with poor interpretability. A method coupling rules with data-driven learning is therefore studied: the vehicle is treated as an agent, the environmental features are abstracted, the optimal solution is searched locally within the feasible space, and the optimal decision and planning strategy is obtained through online interaction under the physical constraints of the automatic driving vehicle and the driving environment.
To address the limited sensing range of a single vehicle, and the fact that a single vehicle in an urban road network cannot perceive the traffic situation in other directions of an intersection or at adjacent intersections through its own sensing system, a traffic model combining the macroscopic traffic state with microscopic driving rules is established on the basis of the macroscopic fundamental diagram, and the mutual influence and mapping relation between driving-behavior parameters and macroscopic traffic parameters is analyzed, so that road network traffic is regulated macroscopically while vehicle behavior is improved microscopically.
Current automatic driving vehicles take functional realization as the primary goal, yet extensive technical progress shows that the subjective and objective recognition (confidence) of in-vehicle passengers toward the vehicle's behavior has become a key factor restricting the degree of intelligence of automatic driving vehicles. Therefore, in the invention, a decision model and constraint rules of the automatic driving vehicle in the current environment are constructed according to the current traffic environment and the predicted action trajectories of traffic participants, specifically comprising the following steps:
constructing physical constraint conditions of the automatic driving vehicle according to the current traffic environment;
establishing a physical model based on micro kinematics and dynamics according to a typical extreme traffic scene;
establishing a corresponding collision risk expression according to the physical model and the physical constraint condition;
constructing a safety data field around the automatic driving vehicle according to the collision risk expression;
obtaining the change trend of a safety data field around the automatic driving vehicle according to the track prediction of the traffic participants;
and generating a path planning strategy of the automatic driving vehicle by utilizing the variation trend.
A rule-based method establishes the physical constraints imposed on the automatic driving vehicle by the road traffic environment. These constraints serve as boundary conditions for the vehicle's decision behavior and guarantee the safety of the automatic driving vehicle in a strongly constrained manner: under them the vehicle always travels within a safe driving region and avoids endangering the surrounding traffic participants. At the rule-constraint layer, typical extreme scenes are analyzed, the mechanisms of extreme working conditions are examined under longitudinal, lateral, and parallel conditions, a physical model based on microscopic kinematics and dynamics is established, and, by fusing the collision time interval with the kinematic analysis of the braking process, corresponding collision risk expressions are established to form a comprehensive collision risk evaluation method. A safety data field is constructed around the vehicle; based on the trajectory prediction of the surrounding vehicles, the superposed changes of the safety data field around the intelligent vehicle at different moments are obtained, the change trend of the field is judged with the comprehensive collision risk evaluation method, and a situational assessment of the safety around the intelligent vehicle is finally achieved. At the data-driven layer, with reinforcement learning and deep reinforcement learning decision models as the optimization means, the model can find the optimal decision and planning strategy within the physical constraints of the automatic driving vehicle.
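One way to picture the safety data field and its trend judgment is the Gaussian superposition sketch below; the exponential field shape, its parameters, and the per-time-step evaluation are assumptions chosen for illustration and are not the patent's collision-risk expressions.

import numpy as np

def risk_field(ego_xy, obstacles, amplitude=1.0, sigma=2.0):
    """Scalar risk at ego_xy as a superposition of Gaussian fields,
    one per predicted obstacle position (x, y)."""
    ego = np.asarray(ego_xy, dtype=float)
    risk = 0.0
    for obs in obstacles:
        d2 = np.sum((ego - np.asarray(obs, dtype=float)) ** 2)
        risk += amplitude * np.exp(-d2 / (2.0 * sigma**2))
    return risk

def field_trend(ego_xy, predicted_tracks):
    """Risk at each future step, using each traffic participant's predicted
    trajectory; a rising sequence signals a deteriorating safety field."""
    horizon = len(predicted_tracks[0])
    return [risk_field(ego_xy, [trk[t] for trk in predicted_tracks])
            for t in range(horizon)]

# usage: two participants, three predicted steps each
tracks = [[(5.0, 0.0), (4.0, 0.0), (3.0, 0.0)],
          [(0.0, 6.0), (0.0, 5.5), (0.0, 5.0)]]
trend = field_trend((0.0, 0.0), tracks)   # increasing values signal rising risk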
For the interaction between traffic and automatic driving vehicles, the urban road network is divided into two regions, a central region and a peripheral region, by analyzing its operating state; a macroscopic fundamental diagram (MFD) mathematical model is established for each region, and the model parameters are obtained by fitting real data and simulation results. Considering the influence of regional perimeter control and the automatic driving vehicle decision method, control factors and influence factors are introduced to establish an MFD macroscopic traffic flow model based on the two urban regions, with maximization of regional trip completion as the control objective, and the objective function is solved with a genetic algorithm. A microscopic traffic flow model that accounts for the regional macroscopic traffic parameters is then established, and the stability and safety of microscopic driving behavior under different traffic influence parameters are obtained through numerical experiments and simulation analysis.
Mathematical expression for MFD:
G(N(t)) = a·N³(t) + b·N²(t) + c·N(t) + d
where G(N(t)) is the trip-completion flow of the road network and N(t) is the cumulative number of vehicles in the road network.
According to relevant medical research results, changes in indicators such as respiration, pulse, and skin conductance of in-vehicle passengers accurately reflect their stress changes over a test interval and can serve as important reference indicators of the passengers' physiological comfort in the current test scenario. In addition to collecting objective physiological indicators, in the different scene tests the psychological confidence of in-vehicle passengers is recorded by subjective scoring as the vehicle passes through the relevant scenes, serving as the subjective psychological evaluation indicator for the test interval. With a time-series neural network model as the framework, pre-collected vehicle states under different driving conditions and information about the vehicle's surroundings are used as model inputs, the passengers' subjective/objective evaluations are used as the quantitative rating of the vehicle passing through the scene, and a long short-term memory network / gated recurrent unit (LSTM/GRU) model is trained to establish the mapping between the automatic driving vehicle's dynamic parameters and passenger comfort (at both the psychological and physiological levels), so that the comfort of automatic driving passengers can be ensured.
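A minimal sketch of the LSTM/GRU mapping from vehicle dynamics to a passenger comfort rating follows; the chosen feature set (speed, accelerations, yaw rate, headway), window length, and scalar comfort target are assumptions, not specifications from the patent.

import torch
from torch import nn

class ComfortGRU(nn.Module):
    """GRU regressor: a window of vehicle-dynamics features to a comfort score."""
    def __init__(self, n_features: int = 5, hidden: int = 32):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):              # x: (B, T, n_features)
        _, h = self.gru(x)
        return self.head(h[-1]).squeeze(-1)

# one training step on synthetic data: features could be speed, longitudinal and
# lateral acceleration, yaw rate, and headway sampled over a short window
model = ComfortGRU()
optim = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(16, 50, 5)                 # batch of 16 windows, 50 time steps
y = torch.rand(16)                         # subjective/physiological rating in [0, 1]
loss = nn.functional.mse_loss(model(x), y)
optim.zero_grad()
loss.backward()
optim.step()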
Example 2
The technical scheme for solving the technical problems is as follows:
the invention provides an automatic driving data processing method, which is applied to road sides and/or vehicle ends and is shown in figure 2, and the automatic driving data processing method comprises the following steps:
s1: acquiring the current traffic environment by using a multivariate fusion perception method;
s2: identifying traffic participants in the current traffic environment, and predicting the action tracks of the traffic participants to obtain a prediction result;
s3: according to the current traffic environment and the prediction result, a decision model and a constraint rule of the automatic driving vehicle in the current traffic environment are established;
s4: determining an envelope interval of the autonomous vehicle decision model;
s5: determining a passenger evaluation model of the envelope interval by using a data driving method;
S6: judging whether the passenger evaluation model is a target passenger evaluation model or not, and if so, entering a step S7; otherwise, returning to the step S2;
s7: optimizing a decision model and a constraint rule of the autonomous vehicle by using the passenger evaluation model, and sending the decision model and the constraint rule of the autonomous vehicle to the autonomous vehicle.
Optionally, the current traffic environment includes a current road section traffic environment and a current intersection traffic environment, and the step S1 includes:
s11: acquiring the traffic environment of the current road section by using the multivariate fusion perception method;
s12: acquiring the traffic environment of the current intersection by using the multivariate fusion perception method; and
s13: and performing multi-sensor information fusion on the current road section traffic environment and the current intersection traffic environment by using the two-dimensional candidate area to obtain the current traffic environment.
Optionally, referring to fig. 3, the step S11 includes:
s111: acquiring current road section traffic environment image information;
s112: acquiring target key points in the current traffic environment image information by adopting an anchor-free target detection and identification method;
s113: predicting the size of a bounding box of the current traffic environment image information by using the target key point;
s114: obtaining the current road section traffic environment according to the size of the surrounding frame of the current traffic environment image information and the target key point;
referring to fig. 4, the step S12 includes:
s121: acquiring laser point cloud data of a current intersection traffic environment;
s122: fusing the laser point cloud data of the current intersection traffic environment by adopting a multi-size representation extraction algorithm of a sparse convolutional neural network to obtain fused data;
a multi-size representation extraction algorithm of a sparse convolution network is utilized, cross-layer network connection is established based on the fused point cloud input, features of different levels and scales are combined and fused, and a self-adaptive weight adjusting mechanism is introduced to distribute parameter weights so as to extract more key information in a point cloud view.
The multi-size characterization extraction algorithm is an algorithm based on a sparse convolutional neural network, and aims to extract key feature points in traffic environment laser point cloud model data obtained by three-dimensional laser scanning, such as: road boundaries, building or structure outlines, signs, markings, instructional information, and the like.
S123: distributing the parameter weight of the fusion data by using a self-adaptive weight regulation mechanism to obtain the processed fusion data;
s124: and acquiring key information of the laser point cloud data of the current intersection traffic environment by using the processed fusion data.
Optionally, referring to fig. 5, in step S112, the anchor-free target detection and identification method includes:
a1: extracting features among different scales in the current road section traffic environment image information by using a full convolution neural network;
a2: extracting and excavating the interaction of the features among different scales by utilizing high-dimensional convolution; wherein the high-dimensional convolution structure is:
y = upsample(w₁ * x_{l+1}) + w₀ * x_l + w₋₁ * s²(x_{l−1})
wherein y is the result of the convolution operation; upsample(·) is an upsampling operation that expands a low-resolution image into a high-resolution image; w₁ denotes the elements of convolution kernel 1; x_{l+1} is the value of the resolution-layer l+1 matrix; x_l is the value of the resolution-layer l matrix; w₀ denotes the elements of convolution kernel 0; w₋₁ denotes the elements of convolution kernel -1; s² is the sampling rule that scales a high-resolution image down to a low-resolution image; x_{l−1} is the value of the resolution-layer l-1 matrix; and * denotes the convolution operation.
A3: according to the interaction, fusing the features of different scales to obtain fused image information;
a4: obtaining a target key point in the current traffic environment image information by using key point detection and a logistic regression loss model according to the fused image information, wherein the logistic regression loss model is as follows:
[Formula image in the original filing: the logistic regression loss model L, expressed in terms of a, Y, Y', and γ.]
wherein L is the logistic regression loss model; a is a regression parameter; Y is the label of the target in machine learning; Y' is the predicted value of Y; and γ is a regression parameter.
Optionally, the step S2 includes:
performing posture intention prediction on the traffic participants; and/or
And predicting the track of the traffic participant.
Optionally, the pose intent prediction for the traffic participant comprises:
acquiring key points of each traffic participant and an affinity field coding graph between the key points;
connecting the key points through the affinity field coding graph to obtain the attitude estimation fitting characteristics of each traffic participant;
according to the attitude estimation fitting characteristics, respectively carrying out mode matching on all traffic participants in the current traffic environment to obtain matching results;
performing attitude intention prediction on the traffic participants by using the matching result;
the predicting the trajectory of the traffic participant comprises:
constructing a pedestrian trajectory prediction model according to the position coordinates of the traffic participants and the pedestrian anisotropic data; constructing a sequence-to-sequence neural network model according to the position coordinates of the traffic participants, the pedestrian anisotropy data and the traffic participant coding information; the traffic participant coding information is obtained through a convolutional neural network generated by image data of the current traffic environment;
obtaining a first pedestrian track prediction result according to the pedestrian track prediction model; obtaining a second pedestrian track prediction result according to the sequence-to-sequence neural network model;
constructing a stack generalization model according to the first pedestrian trajectory prediction result and the second pedestrian trajectory prediction result;
and obtaining a track prediction result of the traffic participant according to the stack generalization model.
Optionally, referring to fig. 6, the step S3 includes:
s31: constructing physical constraint conditions of the automatic driving vehicle according to the current traffic environment;
s32: establishing a physical model based on micro kinematics and dynamics according to a typical extreme traffic scene;
s33: establishing a corresponding collision danger expression according to the physical model and the physical constraint condition;
s34: constructing a safety data field around the automatic driving vehicle according to the collision risk expression;
s35: according to the track prediction of the traffic participants, obtaining the change trend of a safety data field around the automatic driving vehicle;
s36: and generating a path planning strategy of the automatic driving vehicle by utilizing the variation trend.
The invention also provides an automatic driving traffic system, which applies the automatic driving data processing method and further comprises the following steps:
a plurality of perception modules: the plurality of sensing modules are used for sensing a plurality of single characteristics in the current traffic environment;
a fusion module for fusing the plurality of single features to form the complete current traffic environment;
the identification module is used for identifying traffic participants in the current traffic environment and generating action tracks of the traffic participants;
the processing module is used for processing the action track, predicting the action track according to the action track and generating a prediction result; according to the current traffic environment and the prediction result, a decision model and a constraint rule of the automatic driving vehicle in the current traffic environment are established; determining an envelope interval of the autonomous vehicle decision model; determining a passenger evaluation model of the envelope interval by using a data driving method; judging whether the passenger evaluation model is a target passenger evaluation model or not; and optimizing a decision model and a constraint rule of the autonomous vehicle by using the passenger evaluation model, and sending the decision model and the constraint rule of the autonomous vehicle to the autonomous vehicle.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and should not be taken as limiting the scope of the present invention, which is intended to cover any modifications, equivalents, improvements, etc. within the spirit and scope of the present invention.

Claims (10)

1. An automatic driving data processing method is applied to a roadside and/or a vehicle end, and comprises the following steps:
s1: acquiring the current traffic environment by using a multivariate fusion perception method;
s2: identifying the traffic participants in the current traffic environment, and predicting the action tracks of the traffic participants to obtain a prediction result;
s3: according to the current traffic environment and the prediction result, a decision model and a constraint rule of the automatic driving vehicle in the current traffic environment are established;
s4: determining an envelope interval of the autonomous vehicle decision model;
s5: determining a passenger evaluation model of the envelope interval by using a data driving method;
s6: judging whether the passenger evaluation model is a target passenger evaluation model or not, and if so, entering a step S7; otherwise, returning to the step S2;
s7: and optimizing a decision model and a constraint rule of the automatic driving vehicle by using the passenger evaluation model, and sending the decision model and the constraint rule of the automatic driving vehicle to the automatic driving vehicle.
2. The automatic driving data processing method according to claim 1, wherein the current traffic environment includes a current road section traffic environment and a current intersection traffic environment, and the step S1 includes:
s11: acquiring the traffic environment of the current road section by using the multivariate fusion perception method;
s12: acquiring the traffic environment of the current intersection by using the multivariate fusion perception method;
s13: and performing multi-sensor information fusion on the current road section traffic environment and the current intersection traffic environment by using the two-dimensional candidate area to obtain the current traffic environment.
3. The automatic driving data processing method according to claim 2, wherein the step S11 includes:
s111: acquiring current road section traffic environment image information;
s112: acquiring target key points in the current traffic environment image information by adopting an anchor-free target detection and identification method;
s113: predicting the size of a bounding box of the current traffic environment image information by using the target key point;
s114: and obtaining the current road section traffic environment according to the size of the surrounding frame of the current traffic environment image information and the target key point.
4. The automatic driving data processing method of claim 3, wherein in the step S112, the anchor-free target detection and identification method comprises:
a1: extracting features among different scales in the current road section traffic environment image information by using a full convolution neural network;
a2: extracting and mining the interaction of the characteristics among different scales by utilizing high-dimensional convolution;
a3: according to the interaction, fusing the features of different scales to obtain fused image information;
a4: and obtaining target key points in the current traffic environment image information by using key point detection and a logistic regression loss model according to the fused image information.
5. The automatic driving data processing method according to claim 4, wherein in the step A2, the high-dimensional convolution structure is:
y = upsample(w₁ * x_{l+1}) + w₀ * x_l + w₋₁ * s²(x_{l−1})
wherein y is the result of the convolution operation; upsample(·) is an upsampling operation that expands a low-resolution image into a high-resolution image; w₁ denotes the elements of convolution kernel 1; x_{l+1} is the value of the resolution-layer l+1 matrix; x_l is the value of the resolution-layer l matrix; w₀ denotes the elements of convolution kernel 0; w₋₁ denotes the elements of convolution kernel -1; s² is the sampling rule that scales a high-resolution image down to a low-resolution image; x_{l−1} is the value of the resolution-layer l-1 matrix; and * denotes the convolution operation;
in the step A4, the logistic regression loss model is:
[Formula image in the original filing: the logistic regression loss model L, expressed in terms of a, Y, Y', and γ.]
wherein L is the logistic regression loss model; a is a regression parameter; Y is the label of the target in machine learning; Y' is the predicted value of Y; and γ is a regression parameter.
6. The automatic driving data processing method according to claim 2, wherein the step S12 includes:
S121: acquiring laser point cloud data of the current intersection traffic environment;
S122: fusing the laser point cloud data of the current intersection traffic environment by adopting a multi-scale representation extraction algorithm based on a sparse convolutional neural network to obtain fused data;
S123: assigning parameter weights to the fused data by using an adaptive weight adjustment mechanism to obtain processed fused data;
S124: acquiring key information of the laser point cloud data of the current intersection traffic environment by using the processed fused data.
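For steps S122-S123, the sketch below assumes that a sparse convolutional backbone (not shown) has already produced point-cloud features at several scales with a common shape; an adaptive, softmax-gated weighting then assigns a data-dependent weight to each scale before fusion. The gated form of the adaptive weight adjustment mechanism is an assumption for illustration.

```python
import torch
import torch.nn as nn

class AdaptiveScaleFusion(nn.Module):
    """Fuse same-shaped multi-scale point-cloud features with data-dependent scale weights."""

    def __init__(self, channels, num_scales=3):
        super().__init__()
        # One scalar gate per scale, computed from globally pooled features.
        self.gate = nn.Linear(channels * num_scales, num_scales)

    def forward(self, scale_features):
        """scale_features: list of tensors of shape (batch, channels, num_points), one per scale."""
        pooled = torch.cat([f.mean(dim=-1) for f in scale_features], dim=1)  # (B, C * num_scales)
        weights = torch.softmax(self.gate(pooled), dim=1)                    # (B, num_scales)
        fused = sum(w.unsqueeze(-1).unsqueeze(-1) * f
                    for w, f in zip(weights.unbind(dim=1), scale_features))
        return fused  # the processed fused data of step S123
```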
7. The automatic driving data processing method according to claim 1, wherein the step S2 includes:
performing posture intention prediction on the traffic participants; and/or
predicting the trajectories of the traffic participants.
8. The automatic driving data processing method according to claim 7, wherein the posture intention prediction of the traffic participants comprises:
acquiring the key points of each traffic participant and an affinity field coding graph between the key points;
connecting the key points through the affinity field coding graph to obtain posture estimation fitting features of the traffic participants;
performing pattern matching on all traffic participants in the current traffic environment according to the posture estimation fitting features to obtain matching results;
performing posture intention prediction on the traffic participants by using the matching results;
the predicting the trajectories of the traffic participants comprises:
constructing a pedestrian trajectory prediction model according to the position coordinates of the traffic participants and pedestrian anisotropy data; constructing a sequence-to-sequence neural network model according to the position coordinates of the traffic participants, the pedestrian anisotropy data and traffic participant coding information, wherein the traffic participant coding information is obtained through a convolutional neural network from image data of the current traffic environment;
obtaining a first pedestrian trajectory prediction result according to the pedestrian trajectory prediction model; obtaining a second pedestrian trajectory prediction result according to the sequence-to-sequence neural network model;
constructing a stacked generalization model according to the first pedestrian trajectory prediction result and the second pedestrian trajectory prediction result;
obtaining the trajectory prediction result of the traffic participants according to the stacked generalization model.
9. The automatic driving data processing method according to claim 1, wherein the step S3 includes:
S31: constructing physical constraint conditions of the automatic driving vehicle according to the current traffic environment;
S32: establishing a physical model based on microscopic kinematics and dynamics for typical extreme traffic scenes;
S33: establishing a corresponding collision risk expression according to the physical model and the physical constraint conditions;
S34: constructing a safety data field around the automatic driving vehicle according to the collision risk expression;
S35: obtaining the change trend of the safety data field around the automatic driving vehicle according to the trajectory prediction of the traffic participants;
S36: generating a path planning strategy for the automatic driving vehicle by using the change trend.
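A toy sketch of steps S34-S36: a safety data field is built as the sum of risk contributions around each predicted traffic-participant position, and the candidate path with the lowest accumulated risk is selected. The Gaussian risk kernel and the discrete candidate paths are illustrative assumptions, not the patent's collision risk expressions.

```python
import math

def risk_field(position, predicted_obstacles, spread=4.0):
    """Collision risk at a 2-D position, summed over all predicted obstacle positions."""
    x, y = position
    return sum(math.exp(-((x - ox) ** 2 + (y - oy) ** 2) / (2 * spread ** 2))
               for ox, oy in predicted_obstacles)

def plan_path(candidate_paths, predicted_obstacles):
    """candidate_paths: list of waypoint lists; pick the path with the lowest accumulated risk."""
    return min(candidate_paths,
               key=lambda path: sum(risk_field(p, predicted_obstacles) for p in path))
```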
10. An automatic driving traffic system, characterized in that the automatic driving traffic system applies the automatic driving data processing method according to any one of claims 1-9, the automatic driving traffic system further comprising:
a plurality of perception modules, each configured to perceive one single feature in the current traffic environment;
a fusion module, configured to fuse the plurality of single features into the complete current traffic environment;
an identification module, configured to identify the traffic participants in the current traffic environment and generate the action tracks of the traffic participants;
a processing module, configured to predict the action tracks to generate a prediction result; establish, according to the current traffic environment and the prediction result, a decision model and a constraint rule of the automatic driving vehicle in the current traffic environment; determine an envelope interval of the decision model of the automatic driving vehicle; determine a passenger evaluation model of the envelope interval by using a data-driven method; judge whether the passenger evaluation model is the target passenger evaluation model; and optimize the decision model and the constraint rule of the automatic driving vehicle by using the passenger evaluation model, and send the optimized decision model and constraint rule to the automatic driving vehicle.
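Purely as a structural illustration, the modules of claim 10 could be wired together as below; every class and method name is a hypothetical placeholder for the roadside and/or vehicle-side components, not an interface defined by the patent.

```python
class AutomaticDrivingTrafficSystem:
    """Hypothetical wiring of the claim-10 modules; names are placeholders, not a defined API."""

    def __init__(self, perception_modules, fusion_module, identification_module, processing_module):
        self.perception_modules = perception_modules      # each perceives one single feature
        self.fusion_module = fusion_module                # fuses single features into the environment
        self.identification_module = identification_module
        self.processing_module = processing_module

    def step(self, vehicle):
        features = [m.perceive() for m in self.perception_modules]
        environment = self.fusion_module.fuse(features)
        participants = self.identification_module.identify(environment)
        decision_model, rule = self.processing_module.process(environment, participants)
        vehicle.receive(decision_model, rule)             # send the optimized model to the vehicle
```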
CN202211137920.9A 2022-09-19 2022-09-19 Automatic driving data processing method and automatic driving traffic system Active CN115662166B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211137920.9A CN115662166B (en) 2022-09-19 2022-09-19 Automatic driving data processing method and automatic driving traffic system

Publications (2)

Publication Number Publication Date
CN115662166A (en) 2023-01-31
CN115662166B (en) 2024-04-09

Family

ID=84984134

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211137920.9A Active CN115662166B (en) 2022-09-19 2022-09-19 Automatic driving data processing method and automatic driving traffic system

Country Status (1)

Country Link
CN (1) CN115662166B (en)

Patent Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120316717A1 (en) * 2011-06-13 2012-12-13 Wolfgang Daum System and method for controlling and powering a vehicle
CN106710242A (en) * 2017-02-20 2017-05-24 广西交通科学研究院有限公司 Method for recognizing vehicle quantity of motorcade based on dynamic strain of bridge
JPWO2021028708A1 (en) * 2019-08-13 2021-02-18
WO2021192771A1 (en) * 2020-03-26 2021-09-30 Mitsubishi Electric Corporation Adaptive optimization of decision making for vehicle control
CN113330497A (en) * 2020-06-05 2021-08-31 曹庆恒 Automatic driving method and device based on intelligent traffic system and intelligent traffic system
CN112201070A (en) * 2020-09-29 2021-01-08 上海交通大学 Deep learning-based automatic driving expressway bottleneck section behavior decision method
CN112309122A (en) * 2020-11-19 2021-02-02 北京清研宏达信息科技有限公司 Intelligent bus grading decision-making system based on multi-system cooperation
CN114612999A (en) * 2020-12-04 2022-06-10 丰田自动车株式会社 Target behavior classification method, storage medium and terminal
CN112622937A (en) * 2021-01-14 2021-04-09 长安大学 Pass right decision method for automatically driving automobile in face of pedestrian
CN112896186A (en) * 2021-01-30 2021-06-04 同济大学 Automatic driving longitudinal decision control method under cooperative vehicle and road environment
CN112896170A (en) * 2021-01-30 2021-06-04 同济大学 Automatic driving transverse control method under vehicle-road cooperative environment
CN112948984A (en) * 2021-05-13 2021-06-11 西南交通大学 Vehicle-mounted track height irregularity peak interval detection method
CN113487855A (en) * 2021-05-25 2021-10-08 浙江工业大学 Traffic flow prediction method based on EMD-GAN neural network structure
CN113468670A (en) * 2021-07-20 2021-10-01 合肥工业大学 Method for evaluating performance of whole vehicle grade of automatic driving vehicle
CN114446046A (en) * 2021-12-20 2022-05-06 上海智能网联汽车技术中心有限公司 LSTM model-based weak traffic participant track prediction method
CN114462667A (en) * 2021-12-20 2022-05-10 上海智能网联汽车技术中心有限公司 SFM-LSTM neural network model-based street pedestrian track prediction method
CN114386826A (en) * 2022-01-10 2022-04-22 湖南工业大学 Fuzzy network DEA packaging material evaluation method based on sense experience fusion

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Editorial Department of China Journal of Highway and Transport (《中国公路学报》编辑部): "Review of Academic Research on Automotive Engineering in China · 2017", China Journal of Highway and Transport, No. 06 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115782835A (en) * 2023-02-09 2023-03-14 江苏天一航空工业股份有限公司 Automatic parking remote driving control method for passenger boarding vehicle
CN115782835B (en) * 2023-02-09 2023-04-28 江苏天一航空工业股份有限公司 Automatic parking remote driving control method for passenger boarding vehicle
CN117076816A (en) * 2023-07-19 2023-11-17 清华大学 Response prediction method, response prediction apparatus, computer device, storage medium, and program product
CN117894181A (en) * 2024-03-14 2024-04-16 北京动视元科技有限公司 Global traffic abnormal condition integrated monitoring method and system
CN117894181B (en) * 2024-03-14 2024-05-07 北京动视元科技有限公司 Global traffic abnormal condition integrated monitoring method and system

Also Published As

Publication number Publication date
CN115662166B (en) 2024-04-09

Similar Documents

Publication Publication Date Title
US11500099B2 (en) Three-dimensional object detection
Bachute et al. Autonomous driving architectures: insights of machine learning and deep learning algorithms
US11217012B2 (en) System and method for identifying travel way features for autonomous vehicle motion control
US11651240B2 (en) Object association for autonomous vehicles
US11755018B2 (en) End-to-end interpretable motion planner for autonomous vehicles
US11531346B2 (en) Goal-directed occupancy prediction for autonomous driving
US11768292B2 (en) Three-dimensional object detection
US20200379461A1 (en) Methods and systems for trajectory forecasting with recurrent neural networks using inertial behavioral rollout
Laugier et al. Probabilistic analysis of dynamic scenes and collision risks assessment to improve driving safety
CN115662166B (en) Automatic driving data processing method and automatic driving traffic system
CN107310550B (en) Road vehicles travel control method and device
CN113313154A (en) Integrated multi-sensor integrated automatic driving intelligent sensing device
Duan et al. V2I based environment perception for autonomous vehicles at intersections
US11521396B1 (en) Probabilistic prediction of dynamic object behavior for autonomous vehicles
US11891087B2 (en) Systems and methods for generating behavioral predictions in reaction to autonomous vehicle movement
Zhang et al. A cognitively inspired system architecture for the Mengshi cognitive vehicle
US20220153314A1 (en) Systems and methods for generating synthetic motion predictions
CN110356412A (en) The method and apparatus that automatically rule for autonomous driving learns
Zhang et al. Collision avoidance predictive motion planning based on integrated perception and V2V communication
Shangguan et al. Interactive perception-based multiple object tracking via CVIS and AV
Rezaei et al. A deep learning-based approach for vehicle motion prediction in autonomous driving
YU et al. Vehicle Intelligent Driving Technology
Zyner Naturalistic driver intention and path prediction using machine learning
ZeHao et al. Motion prediction for autonomous vehicles using ResNet-based model
Jin et al. An Object Association Matching Method Based on V2I System

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant