CN115662166B - Automatic driving data processing method and automatic driving traffic system


Info

Publication number
CN115662166B
Authority
CN
China
Prior art keywords
traffic; model; traffic environment; current; automatic driving
Prior art date
Legal status
Active
Application number
CN202211137920.9A
Other languages
Chinese (zh)
Other versions
CN115662166A (en)
Inventor
董是
袁长伟
王建伟
徐婷
齐玉亮
毛新华
李淑梅
高超
Current Assignee
Hebei Expressway Jingxiong Management Center
Changan University
Original Assignee
Hebei Expressway Jingxiong Management Center
Changan University
Priority date
Filing date
Publication date
Application filed by Hebei Expressway Jingxiong Management Center and Changan University
Priority to CN202211137920.9A
Publication of CN115662166A
Application granted
Publication of CN115662166B
Legal status: Active
Anticipated expiration

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00: Road transport of goods or passengers
    • Y02T 10/10: Internal combustion engine [ICE] based vehicles
    • Y02T 10/40: Engine management systems

Landscapes

  • Traffic Control Systems (AREA)

Abstract

The invention discloses an automatic driving data processing method and an automatic driving traffic system, applied at the road side and/or the vehicle end, comprising the following steps: S1: acquiring the current traffic environment by using a multi-source fusion sensing method; S2: identifying the traffic participants in the current traffic environment, and predicting their action trajectories to obtain a prediction result; S3: constructing a decision model and constraint rules for the automatic driving vehicle in the current traffic environment according to the current traffic environment and the prediction result; S4: determining an envelope interval of the automatic driving vehicle decision model; S5: determining a passenger evaluation model for the envelope interval by using a data-driven method; S6: judging whether the passenger evaluation model is the target passenger evaluation model; if so, entering step S7, otherwise returning to step S2; S7: optimizing the decision model and constraint rules of the automatic driving vehicle by using the passenger evaluation model, and sending them to the automatic driving vehicle.

Description

Automatic driving data processing method and automatic driving traffic system
Technical Field
The invention relates to the technical field of automatic driving, in particular to an automatic driving data processing method and an automatic driving traffic system.
Background
The construction of digital traffic infrastructure can compensate for the shortfalls in computing power and perception capability that safe driving of automatic driving vehicles requires in complex traffic scenes. Through the vehicle-road cooperative mode, it significantly reduces vehicle hardware upgrade costs and improves overall system safety, and it plays an important supporting role in the commercial deployment of the automatic driving industry.
In recent years, vehicle-road cooperation has been advanced at both the policy and application levels in China and abroad. Vehicle-road cooperative V2X communication is currently based mainly on three families of standards: the DSRC standard based on IEEE 802.11p, the LTE-V technical standard based on LTE cellular networks, and the 5G-V2X (NR) standard, which together address different service-scenario requirements. At present, adoption of the DSRC standard has been slow, LTE-V2X has an industrial application base, and the 5G-V2X (NR) standard has been frozen but is yet to be released. LTE-V2X, the stage preceding 5G-V2X (NR), can already satisfy most basic safety early-warning and efficiency-improvement applications in the scenes defined above, and is mainly used in assisted-driving scenarios; 5G-V2X (NR) is mainly intended to meet the requirements of high-level automatic driving application scenarios.
Academia and industry are currently conducting extensive research on V2X communication, much of it focused on obtaining real-time traffic information (road conditions, road information, pedestrian information, and the like) through cooperative communication among vehicles, so as to improve driving safety, reduce congestion, raise traffic efficiency, and provide rich in-vehicle entertainment. However, the urban traffic environment involves complex mixed scenes of people and vehicles, which increases the difficulty of research on urban-road automatic driving technology. Meanwhile, the application scenarios of current multi-sensor fusion algorithms are relatively simple, so research on complex dynamic scenes still needs to be strengthened, and the engineering problems of time synchronization and online calibration remain to be solved.
Disclosure of Invention
The invention aims to provide an automatic driving data processing method and an automatic driving traffic system, so as to solve the problems of multi-source fusion perception, traffic participant identification and trajectory prediction, and intelligent decision-making and path planning for automatic driving.
The technical scheme for solving the technical problems is as follows:
The invention provides an automatic driving data processing method, applied at the road side and/or the vehicle end, comprising the following steps:
S1: acquiring the current traffic environment by using a multi-source fusion sensing method;
S2: identifying the traffic participants in the current traffic environment, and predicting the action trajectories of the traffic participants to obtain a prediction result;
S3: constructing a decision model and constraint rules for the automatic driving vehicle in the current traffic environment according to the current traffic environment and the prediction result;
S4: determining an envelope interval of the automatic driving vehicle decision model;
S5: determining a passenger evaluation model for the envelope interval by using a data-driven method;
S6: judging whether the passenger evaluation model is the target passenger evaluation model; if so, entering step S7, otherwise returning to step S2;
S7: optimizing the decision model and constraint rules of the automatic driving vehicle by using the passenger evaluation model, and transmitting them to the automatic driving vehicle.
Optionally, the current traffic environment includes a current road section traffic environment and a current intersection traffic environment, and step S1 includes:
S11: acquiring the current road section traffic environment by using the multi-source fusion sensing method;
S12: acquiring the current intersection traffic environment by using the multi-source fusion sensing method; and
S13: performing multi-sensor information fusion of the current road section traffic environment and the current intersection traffic environment by using two-dimensional candidate regions, to obtain the current traffic environment.
Optionally, step S11 includes:
S111: acquiring image information of the current road section traffic environment;
S112: obtaining target key points in the current traffic environment image information by using an anchor-free target detection and identification method;
S113: predicting the bounding-box size of the current traffic environment image information by using the target key points;
S114: obtaining the current road section traffic environment according to the bounding-box size of the current traffic environment image information and the target key points.
Step S12 includes:
S121: acquiring laser point cloud data of the current intersection traffic environment;
S122: fusing the laser point cloud data of the current intersection traffic environment by using a multi-scale representation extraction algorithm based on a sparse convolutional neural network, to obtain fused data;
S123: assigning parameter weights to the fused data by using an adaptive weight adjustment mechanism, to obtain processed fused data;
S124: acquiring key information of the laser point cloud data of the current intersection traffic environment by using the processed fused data.
Optionally, in step S112, the anchor-free target detection and identification method includes:
A1: extracting features at different scales from the current road section traffic environment image information by using a fully convolutional neural network;
A2: extracting and mining the interactions among the features at different scales by using high-dimensional convolution;
A3: fusing the features at different scales according to the interactions, to obtain fused image information;
A4: obtaining the target key points in the current traffic environment image information from the fused image information by using key-point detection and a logistic regression loss model.
Optionally, in step A2, the high-dimensional convolution structure is:
y = upsample(w_1 * x^(l+1)) + w_0 * x^l + w_(-1) * s^2 x^(l-1)
wherein y is the result of the convolution operation; upsample(·) is an upsampling operation that expands a low-resolution image to high resolution; w_1, w_0 and w_(-1) are the values of the elements in convolution kernels 1, 0 and -1, respectively; x^(l+1), x^l and x^(l-1) are the values of the feature matrices at resolution layers l+1, l and l-1; s^2 is the sampling rule that reduces a high-resolution image to low resolution; and * denotes the convolution operation.
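Reading x^(l+1) as the lower-resolution layer (hence upsampled) and x^(l-1) as the higher-resolution layer (hence reduced by the sampling rule s^2), the structure can be sketched in one dimension. Kernels and signals below are toy values chosen so the result is easy to check; this is a sketch of the combination rule, not of the patented network.

```python
import numpy as np

def conv1d(x, w):
    # 'same' correlation with a 3-tap kernel w, zero-padded at the edges
    xp = np.pad(x, 1)
    return np.array([xp[i:i + 3] @ w for i in range(len(x))])

def upsample2(x):
    return np.repeat(x, 2)     # nearest-neighbour upsample()

def downsample2(x):
    return x[::2]              # stride-2 sampling rule s^2

def high_dim_conv(x_lo, x_l, x_hi, w1, w0, wm1):
    """y = upsample(w_1 * x^(l+1)) + w_0 * x^l + w_(-1) * s^2 x^(l-1)"""
    return (upsample2(conv1d(x_lo, w1))
            + conv1d(x_l, w0)
            + conv1d(downsample2(x_hi), wm1))

ident = np.array([0.0, 1.0, 0.0])   # identity kernel for the demo
# layer l has 4 samples, layer l+1 has 2, layer l-1 has 8
y = high_dim_conv(np.ones(2), np.ones(4), np.ones(8), ident, ident, ident)
```

With identity kernels each branch passes its (resampled) input through unchanged, so the three resolution layers simply sum, which makes the cross-scale alignment of the formula visible.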
In step A4, the logistic regression loss model is:
L = -a(1 - y')^gamma log(y'), if y = 1
L = -(1 - a)(y')^gamma log(1 - y'), otherwise
wherein L is the logistic regression loss model; a is a regression parameter; y is the label of the target in machine learning; y' is the predicted value for y; and gamma is a regression parameter.
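The parameters listed for the loss (a, gamma, label y, prediction y') match the standard focal-loss family used by anchor-free detectors; the sketch below assumes that form, and the exact patented expression may differ. The modulating factor down-weights easy examples so training focuses on hard ones.

```python
import math

def focal_loss(y, y_pred, a=0.25, gamma=2.0):
    """Binary focal-style loss using the symbols from the text:
    a and gamma are the regression parameters, y the label,
    y_pred the predicted probability for y."""
    eps = 1e-12                       # guard against log(0)
    if y == 1:
        return -a * (1.0 - y_pred) ** gamma * math.log(y_pred + eps)
    return -(1.0 - a) * y_pred ** gamma * math.log(1.0 - y_pred + eps)

# A confident correct prediction contributes far less than a wrong one.
easy = focal_loss(1, 0.9)
hard = focal_loss(1, 0.1)
```

Raising gamma sharpens this effect, which is why key-point heatmaps (mostly background pixels) train well under it.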
Optionally, step S2 includes:
predicting the posture intention of the traffic participants; and/or
performing trajectory prediction on the traffic participants.
Optionally, the predicting the posture intention of the traffic participants includes:
acquiring affinity-field coding maps between the key points of each traffic participant;
connecting the key points through the affinity-field coding maps to obtain posture-estimation fitting features for each traffic participant;
performing pattern matching for each traffic participant in the current traffic environment according to the posture-estimation fitting features, to obtain matching results;
predicting the posture intention of the traffic participants by using the matching results.
The trajectory prediction of the traffic participants includes:
constructing a pedestrian trajectory prediction model according to the position coordinates of the traffic participants and pedestrian anisotropy data; constructing a sequence-to-sequence neural network model according to the position coordinates of the traffic participants, the pedestrian anisotropy data and traffic participant coding information, wherein the traffic participant coding information is generated from image data of the current traffic environment by a convolutional neural network;
obtaining a first pedestrian trajectory prediction result from the pedestrian trajectory prediction model, and a second pedestrian trajectory prediction result from the sequence-to-sequence neural network model;
constructing a stack generalization model according to the first pedestrian trajectory prediction result and the second pedestrian trajectory prediction result;
and obtaining the trajectory prediction result of the traffic participants from the stack generalization model.
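The stacking scheme above (two base predictors combined by a meta-model) can be sketched with toy stand-ins: constant-velocity and average-velocity extrapolation replace the social force and sequence-to-sequence models, and a fixed linear blend replaces the fully connected meta-network. All models here are hypothetical simplifications for illustration only.

```python
import numpy as np

def social_force_pred(track):
    # Base model 1 (stand-in): constant-velocity extrapolation.
    v = track[-1] - track[-2]
    return track[-1] + v

def seq2seq_pred(track):
    # Base model 2 (stand-in): average-velocity extrapolation.
    v = (track[-1] - track[0]) / (len(track) - 1)
    return track[-1] + v

def stacked_pred(track, meta_w):
    """Stack generalization: a meta-model (here a fixed linear blend
    standing in for the fully connected network) combines the two
    base predictions into the final trajectory point."""
    p1 = social_force_pred(track)
    p2 = seq2seq_pred(track)
    return meta_w * p1 + (1.0 - meta_w) * p2

track = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.1]])
pred = stacked_pred(track, meta_w=0.5)
```

In a real stacking setup the blend weights would be learned on held-out predictions of the base models rather than fixed.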
Optionally, step S3 includes:
S31: constructing physical constraint conditions of the automatic driving vehicle according to the current traffic environment;
S32: establishing a physical model based on microscopic kinematics and dynamics for typical extreme traffic scenes;
S33: establishing a corresponding collision risk expression according to the physical model and the physical constraint conditions;
S34: constructing a safety data field around the automatic driving vehicle according to the collision risk expression;
S35: obtaining the change trend of the safety data field around the automatic driving vehicle from the trajectory prediction of the traffic participants;
S36: generating a path planning strategy for the automatic driving vehicle by using the change trend.
The invention also provides an automatic driving traffic system applying the above automatic driving data processing method, the automatic driving traffic system further comprising:
a plurality of sensing modules, used for sensing a plurality of individual features in the current traffic environment;
a fusion module, used for fusing the plurality of individual features to form a complete current traffic environment;
an identification module, used for identifying the traffic participants in the current traffic environment and generating the action trajectories of the traffic participants;
a processing module, used for processing the action trajectories and predicting them to generate a prediction result; constructing a decision model and constraint rules for the automatic driving vehicle in the current traffic environment according to the current traffic environment and the prediction result; determining an envelope interval of the automatic driving vehicle decision model; determining a passenger evaluation model for the envelope interval by using a data-driven method; judging whether the passenger evaluation model is the target passenger evaluation model; and optimizing the decision model and constraint rules of the automatic driving vehicle by using the passenger evaluation model, and transmitting them to the automatic driving vehicle.
The invention has the following beneficial effects:
(1) The complex-urban-road automatic driving technology based on digital infrastructure focuses on the safety of vulnerable traffic participants and the comfort of automatic driving vehicle passengers. Through multi-source fusion perception by roadside mobile edge computing units and on-board sensors, it studies traffic participant identification and trajectory prediction together with intelligent decision-making and path planning for automatic driving, so as to explore a comprehensive perception and decision technology for automatic driving based on digital road traffic infrastructure, thereby improving road capacity in complex urban traffic scenes.
(2) Digitization of the infrastructure is realized through air-ground road infrastructure data modeling, roadside sensor deployment, and the like. Infrastructure data, traffic signs and markings, intersection control information, single-vehicle information, and so on are uploaded to a roadside computing unit, multi-view spatio-temporal coupling of the single-vehicle information is performed at the edge, and the automatic driving vehicle is assisted in completing path-planning decisions.
(3) Based on V2X communication technology, research is carried out on a vehicle-road cooperative information transmission architecture, a cloud-edge-end cooperative operation decision mechanism, and a virtual-real combined vehicle-road cooperative test technology for typical urban traffic scenes. Theoretical and technical bottlenecks in guaranteeing communication data reliability, accessibility and low latency are overcome, efficient and balanced distribution of cloud-edge-end computing power is realized, and a virtual-simulation accelerated test method based on meta-scene particle extraction and rapid construction is explored, providing support for the practical application and deployment of vehicle-road cooperative systems.
Drawings
FIG. 1 is a diagram of an overall architecture for traffic participant identification and trajectory prediction in an autopilot data processing method provided by the present invention;
FIG. 2 is a flow chart of an autopilot data processing method provided by the present invention;
FIG. 3 is a substep flow chart of step S11;
FIG. 4 is a substep flow chart of step S12;
FIG. 5 is a flow chart of an anchor-free target detection and recognition method of the autopilot data processing method provided by the present invention;
FIG. 6 is a partial flow chart of step S3 in FIG. 2.
Detailed Description
The principles and features of the present invention are described below with reference to the drawings; the examples are given only to illustrate the invention and are not to be construed as limiting its scope.
Examples
The urban traffic environment involves complex mixed traffic scenes of people and vehicles, which increases the difficulty of research on urban-road automatic driving technology. In order to improve urban road capacity and the traffic environment, the technical problem of urban-road automatic driving is addressed, on the basis of urban digital road infrastructure, across the dimensions of multi-source fusion perception, traffic participant identification and trajectory prediction, and intelligent decision-making and path planning for automatic driving. The specific contents include:
Multi-source fusion perception of complex automatic driving traffic environments in the vehicle-road cooperative mode
Existing automatic driving vehicles can hardly achieve all-weather reliable perception of the environment with a single sensor in complex traffic environments such as intersections. The problems of multi-scale targets and detection efficiency encountered by vehicle-end and road-end camera sensors during target detection and identification in complex scenes under the vehicle-road cooperative mode are studied, and a feature fusion method based on high-dimensional convolution and an anchor-free target detection method are designed and established.
Specifically, the method comprises the following steps:
acquiring image information of the current road section traffic environment;
detecting target key points in the current traffic environment image information by using the anchor-free target detection and identification method;
predicting the bounding-box size of the current traffic environment image information by using the target key points;
and obtaining the current road section traffic environment according to the bounding-box size of the current traffic environment image information and the target key points.
The anchor-free target detection and identification method comprises the following steps:
Features are first extracted by using a fully convolutional neural network. High-dimensional convolution is then used to extract and mine the interactions among different scales; by aligning the kernels, inter-layer scale balance can be maintained to adapt to changes in feature scale, and the features of different scales are fused. The high-dimensional convolution structure is:
y = upsample(w_1 * x^(l+1)) + w_0 * x^l + w_(-1) * s^2 x^(l-1)
wherein y is the result of the convolution operation; upsample(·) is an upsampling operation that expands a low-resolution image to high resolution; w_1, w_0 and w_(-1) are the values of the elements in convolution kernels 1, 0 and -1, respectively; x^(l+1), x^l and x^(l-1) are the values of the feature matrices at resolution layers l+1, l and l-1; s^2 is the sampling rule that reduces a high-resolution image to low resolution; and * denotes the convolution operation.
Target detection and identification are then performed on the extracted multi-scale fusion features by using key-point detection and bounding-box size regression, without any preset anchor points. The image information is input into a fully convolutional network to obtain a heat map, whose peak points are the key points of the targets; the width and height of the target bounding box are then predicted from the key points. The logistic regression loss adopted is:
L = -a(1 - y')^gamma log(y'), if y = 1
L = -(1 - a)(y')^gamma log(1 - y'), otherwise
wherein L is the logistic regression loss model; a is a regression parameter; y is the label of the target in machine learning; y' is the predicted value for y; and gamma is a regression parameter.
The method requires no post-processing operations such as exhaustively enumerating potential target positions or non-maximum suppression, and effectively improves the efficiency and cross-domain adaptability of target detection and identification.
Specifically, the multi-source sensor fusion perception method based on vision, point cloud information and the like comprises the following steps:
acquiring laser point cloud data of the current intersection traffic environment;
fusing the laser point cloud data of the current intersection traffic environment by using a multi-scale representation extraction algorithm based on a sparse convolutional neural network, to obtain fused data;
assigning parameter weights to the fused data by using an adaptive weight adjustment mechanism, to obtain processed fused data;
and acquiring key information of the laser point cloud data of the current intersection traffic environment by using the processed fused data.
To address accurate detection and occlusion of single-sensor targets during the application of vehicle-end and roadside lidars in complex vehicle-road cooperative scenes, for multiple lidars with input point clouds P_i (i = 1, 2, …, n), the adaptively matched fused point cloud can be expressed as:
P = ∪_{i=1}^{n} H_i P_i
wherein H_i is a transformation matrix continuously optimized through progressive consistent sampling. The most widely distributed point cloud P_t is selected according to a farthest-point random sampling mechanism:
P_t = max{P_i | distance(P_i, P_(t-1))}
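The selection rule P_t = max{P_i | distance(P_i, P_(t-1))} is the farthest-point sampling recursion: each pick maximizes the distance to what has already been chosen. A minimal sketch over 2-D points follows; the incremental minimum-distance bookkeeping is a standard implementation detail, not taken from the patent.

```python
import numpy as np

def farthest_point_order(points, k):
    """Farthest-point sampling: starting from a seed, each subsequent
    pick is the point farthest from the already-chosen set, matching
    P_t = max{P_i | distance(P_i, P_(t-1))}."""
    pts = np.asarray(points, dtype=float)
    chosen = [0]                                   # seed with the first point
    dmin = np.linalg.norm(pts - pts[0], axis=1)    # distance to chosen set
    for _ in range(k - 1):
        nxt = int(np.argmax(dmin))                 # farthest remaining point
        chosen.append(nxt)
        dmin = np.minimum(dmin, np.linalg.norm(pts - pts[nxt], axis=1))
    return chosen

pts = [(0.0, 0.0), (0.1, 0.0), (5.0, 0.0), (10.0, 0.0)]
order = farthest_point_order(pts, 3)
```

Because near-duplicate points are picked last, keeping only the first k picks compresses the cloud while preserving its spatial spread, which is the compression property the text describes next.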
This compresses the transmitted data volume while keeping the spatial representation complete, thereby improving the expressive capacity of the point cloud.
A multi-scale representation extraction algorithm based on a sparse convolutional network is studied: cross-layer network connections are established on the fused point cloud input, features of different levels and scales are jointly fused, and an adaptive weight adjustment mechanism is introduced to assign parameter weights so as to extract the more critical information in the point cloud view. The loss function is set as:
L = L_cls + L_reg
wherein L_cls is the focal loss obtained from the classification branch and L_reg is the smooth-L1 error between the predicted and actual values. Through optimization learning, the lidar recognition performance for multiple targets in complex intersection scenes is finally improved. Aiming at the problems of variable target scale, detection efficiency and occlusion in complex vehicle-end and road-end scenes under the vehicle-road cooperative mode, a multi-source sensor fusion perception method based on vision, point cloud information and the like is provided.
In this way, the problem of accurate detection and occlusion of single-sensor targets during the application of vehicle-end and roadside point cloud sensors in vehicle-road cooperative scenes can be studied: a redundant adaptive multi-sensor point cloud fusion technique realizes combined refinement of roadside point clouds, a multi-scale sparse convolutional neural network is designed for feature extraction, cross-layer network connections are established to jointly fuse features of different levels and scales, and the lidar recognition performance for multiple targets in complex intersection scenes is finally improved.
The multiple sensors at the vehicle end and road end are interconnected via Ethernet to achieve clock synchronization and spatial unification. Candidate regions are generated by a two-dimensional detector, a viewing-frustum search space is formed in the point cloud space, and a three-dimensional detection result is obtained by regression. For urban road sections where GPS signals are blocked, the road-end sensor is treated as a beacon node with a known position, and the vehicle position is the node to be determined. The position of the moving vehicle is calculated by a single-sided synchronous two-way ranging algorithm, using the position information of the beacon node and the information interaction between the vehicle-end and road-end sensors in the vehicle-road cooperative environment.
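The single-sided synchronous two-way ranging step recovers range from one round-trip exchange: the beacon's measured round-trip time, minus the responder's known reply delay, is twice the time of flight. A minimal sketch with illustrative timings (the timing values are made up for the example):

```python
def ss_twr_distance(t_round, t_reply, c=299_792_458.0):
    """Single-sided two-way ranging: the roadside beacon timestamps the
    round-trip time t_round, subtracts the vehicle's known reply delay
    t_reply, and halves the remaining time of flight. Times in seconds,
    result in metres."""
    tof = (t_round - t_reply) / 2.0
    return c * tof

# Example: a 1 microsecond round trip with a 0.4 microsecond reply delay.
d = ss_twr_distance(1.0e-6, 0.4e-6)
```

Ranges to several beacon nodes of known position would then be combined (e.g. by trilateration) to fix the vehicle position on GPS-blocked road sections.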
Based on a traffic participant identification and trajectory prediction algorithm driven by V2I and local data, a pedestrian intention estimation model using a CNN convolutional neural network and a traffic participant trajectory prediction model based on social factors are established, and the fusion of V2I and on-board sensor data is used to achieve accurate identification and accurate prediction of multiple traffic participants.
Two-dimensional pedestrian pose estimation is based on a CNN convolutional neural network. Pose estimation is used to detect pedestrian intention (stopping, walking, running); skeleton-fitting feature extraction, neural network training and classification are applied together to achieve accurate output of pedestrian intention. The traffic participant identification results are solidified onto the V2I hardware carrier and sent to the automatic driving vehicle in real time, filling the target-detection blind zone of the on-board sensors and realizing comprehensive communication among vehicles, roads and people.
A pedestrian and non-motor-vehicle trajectory prediction model is built by deeply fusing an improved social force model with a long short-term memory network, combining the social nature of pedestrians with big data, so that the trajectories of pedestrians and non-motor vehicles in special and complex scenes can be accurately predicted. The method specifically comprises the following steps:
acquiring affinity-field coding maps between the key points of each traffic participant;
connecting the key points through the affinity-field coding maps to obtain posture-estimation fitting features for each traffic participant;
performing pattern matching for each traffic participant in the current traffic environment according to the posture-estimation fitting features, to obtain matching results;
predicting the posture intention of the traffic participants by using the matching results.
The trajectory prediction of the traffic participants includes:
constructing a pedestrian trajectory prediction model according to the position coordinates of the traffic participants and pedestrian anisotropy data; constructing a sequence-to-sequence neural network model according to the position coordinates of the traffic participants, the pedestrian anisotropy data and traffic participant coding information, wherein the traffic participant coding information is generated from image data of the current traffic environment by a convolutional neural network;
obtaining a first pedestrian trajectory prediction result from the pedestrian trajectory prediction model, and a second pedestrian trajectory prediction result from the sequence-to-sequence neural network model;
constructing a stack generalization model according to the first pedestrian trajectory prediction result and the second pedestrian trajectory prediction result;
and obtaining the trajectory prediction result of the traffic participants from the stack generalization model.
A dual-branch recurrent convolutional neural network is used to detect, simultaneously from the input image, the key points of human body parts and the affinity fields of the connections between them. The key points in the image are connected through the affinity-field coding map to obtain the pose-estimation fitting features of the pedestrian; the overall detection flow is shown in the drawings. Pattern matching is performed on the pose estimate of each pedestrian in the image by a multi-layer perceptron: joints that reflect pedestrian motion intention, such as the arms and legs in the fitted human skeleton obtained from pose estimation, are taken as key points, the joint positions are normalized by height, the relative positions between key points are extracted as feature vectors, and the feature vectors are matched by the multi-layer perceptron to detect the pedestrian's current motion intention. Real traffic environment information is collected by V2I sensing devices, on-board sensors and the like, and a vehicle-road communication mode is established for data transmission and interaction. With V2I and on-board terminals as the core carriers of traffic environment perception and interaction, the construction of a unified coordinate system for data conversion is completed, the data collected by the V2I sensors and the on-board sensors are correlated, a feature-level data fusion model is established, the fused traffic participant data are structurally integrated, accurate identification of the traffic participants is completed, and the identification results are published in real time to the OBU of the automatic driving vehicle.
The kinematic characteristics of pedestrians of different ages and sexes and of non-motor vehicles, such as desired speed, maximum speed and reaction time, are quantified, and an improved social-force pedestrian trajectory prediction model is established that considers the influence of the traffic environment and vehicles on traffic participants. A stack generalization (Stacking) model is established to overcome the shortcomings of predicting traffic participant trajectories with a social force model or a long short-term memory network alone; the meta-model is a fully connected network, and the base models are the improved social force model and an LSTM-based sequence-to-sequence neural network model, as shown in FIG. 1.
In a complex urban road environment, taking the macroscopic traffic map within the host vehicle's association domain as background, intelligent decision-making and path-planning technology for automatic driving vehicles that integrates evaluation and optimization of the riding experience is studied, balancing macroscopic traffic efficiency against microscopic vehicle control quality and improving the traffic efficiency of various road vehicles in a mixed traffic mode.
Rule-constrained methods for path planning and obstacle avoidance extrapolate poorly, while planning with deep reinforcement learning has poor interpretability. A method coupling rules with data driving is therefore studied: the vehicle is treated as an agent, the environment features are abstracted, a local optimum is sought within the feasible space, and the optimal decision and planning strategy is obtained through online interaction under the physical constraints of the automatic driving automobile and the driving environment.
A single vehicle's perception area is limited: a vehicle in an urban road network cannot acquire, through its own perception system, the traffic situation in other directions of an intersection or in adjacent intersections. A traffic model combining the macroscopic traffic state with microscopic driving rules is therefore established based on the macroscopic fundamental diagram, and the interaction and mapping relations between driving-behavior parameters and macroscopic traffic parameters are analyzed, so that road-network traffic can be regulated macroscopically and vehicle behavior improved microscopically.
The current automatic driving automobile takes function implementation as its primary aim, but much technical progress shows that the subjective and objective acceptance (confidence) of in-vehicle passengers toward the vehicle's behavior has become a key factor limiting the degree of intelligence of the automatic driving automobile. In the invention, therefore, a decision model and constraint rules for the automatic driving automobile in the current environment are constructed according to the current traffic environment and the predicted action trajectories of the traffic participants, specifically comprising the following steps:
constructing physical constraint conditions of the automatic driving vehicle according to the current traffic environment;
establishing a physical model based on microscopic kinematics and dynamics according to a typical extreme traffic scene;
establishing a corresponding collision risk expression according to the physical model and the physical constraint condition;
constructing the surrounding safety data field of the automatic driving vehicle according to the collision risk expression;
obtaining the change trend of the safety data field around the automatic driving vehicle according to the track prediction of the traffic participants;
and generating a path planning strategy of the automatic driving vehicle by utilizing the change trend.
The physical constraints imposed on the automatic driving automobile by the road traffic environment are constructed by a rule-based method, and these constraints serve as boundary conditions for the vehicle's decision actions. Safety is guaranteed in this strongly constrained manner: under the constraints, the automatic driving automobile always runs within a safe driving area and avoids endangering surrounding traffic participants. On the rule-constraint level, typical extreme scenes are analyzed, the mechanisms of extreme working conditions are examined under longitudinal, lateral and parallel working conditions, and a physical model based on microscopic kinematics and dynamics is established; by fusing collision time intervals with a dynamic analysis of the braking process, a corresponding collision risk expression is established to form a comprehensive collision risk evaluation method. A safety data field around the vehicle is then constructed; the superposed changes of the field around the intelligent vehicle at different moments are obtained from the trajectory prediction of surrounding vehicles; the trend of the field is judged with the comprehensive collision risk evaluation method; and situation evaluation of the safety around the intelligent vehicle is finally realized. On the data-driven level, using data-based methods with reinforcement learning and deep reinforcement learning decision models as the optimization means, the model can find the optimal decision and planning strategy within the physical constraints of the automatic driving automobile.
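The collision-time-interval and braking-process elements above can be sketched numerically (the maximum deceleration, reaction time, and the clipping of risk to [0, 1] are illustrative assumptions, not values from the patent):

```python
import math

def time_to_collision(gap, v_rel):
    """Collision time interval: headway gap divided by closing speed.
    Returns inf when the gap is not closing."""
    return gap / v_rel if v_rel > 0 else math.inf

def stopping_distance(v, a_max, t_react):
    """Braking-process kinematics: reaction distance plus braking distance."""
    return v * t_react + v * v / (2.0 * a_max)

def collision_risk(gap, v_ego, v_lead, a_max=6.0, t_react=0.8):
    """Combined longitudinal risk: the distance the ego vehicle needs,
    beyond what the lead vehicle's own braking provides, relative to the
    current gap, clipped to [0, 1]."""
    need = stopping_distance(v_ego, a_max, t_react) - v_lead * v_lead / (2.0 * a_max)
    return max(0.0, min(1.0, need / max(gap, 1e-6)))
```

Evaluating this risk over a grid of relative positions yields the safety data field around the vehicle.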
For the interaction of traffic and automatic driving vehicles, the urban road network is divided into two areas, a central region and a peripheral region, by analyzing its running state. A mathematical model of the Macroscopic Fundamental Diagram (MFD) is established for each region, and model parameters are obtained by fitting real data to simulation results. Considering the influence of regional perimeter control and the automatic driving vehicle decision method, control factors and influence factors are introduced, an MFD-based macroscopic traffic flow model of the two urban regions is established with maximization of regional traffic completion as the control target, and the objective function is solved by a genetic algorithm. A microscopic traffic flow model considering regional macroscopic traffic parameters is also established, and the stability and safety of microscopic driving behavior under different traffic influence parameters are obtained through numerical experiments and simulation analysis.
Mathematical expression of MFD:
G(N(t)) = a·N³(t) + b·N²(t) + c·N(t) + d
wherein: g (N (t)) is road network travel completion traffic, N (t) is road network cumulative vehicle number.
According to relevant medical research, changes in indexes such as respiration, pulse and skin conductivity of in-vehicle passengers accurately reflect changes in their tension over a test interval and can serve as important reference indexes of physiological comfort in the current test scene. In addition to this objective physiological index collection, psychological confidence indexes are obtained through subjective scores given by in-vehicle passengers as the vehicle passes through the relevant scenes in different tests, serving as the psychological subjective evaluation index for the test interval. With a time-series neural network model as the network framework, the pre-collected vehicle states under different running conditions and the vehicle's surrounding-environment information are taken as model input, the passengers' subjective/objective evaluations are taken as the quantitative evaluation value for passing through the scene, and training is performed with a long short-term memory network / gated recurrent unit (LSTM/GRU) model algorithm, thereby establishing a mapping between automatic driving automobile dynamics parameters and passenger comfort (at the psychological and physiological levels) so that the comfort of automatic driving passengers can be ensured.
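A bare-bones GRU step and the mapping from a vehicle-dynamics sequence to a scalar comfort score might look as follows (the gate-only structure, dimensions, and linear output head are assumptions; in the method described above a trained LSTM/GRU would supply the weights):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    """Single gated-recurrent-unit step over one vehicle-state sample."""
    z = sigmoid(x @ Wz + h @ Uz)                 # update gate
    r = sigmoid(x @ Wr + h @ Ur)                 # reset gate
    h_cand = np.tanh(x @ Wh + (r * h) @ Uh)      # candidate state
    return (1.0 - z) * h + z * h_cand

def comfort_score(seq, weights, Wo, bo):
    """Run the GRU over a sequence of dynamics samples (accel, jerk, ...)
    and map the final hidden state to a scalar comfort rating."""
    h = np.zeros(Wo.shape[0])
    for x in seq:
        h = gru_step(x, h, *weights)
    return float(h @ Wo + bo)
```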
Example 2
The technical scheme for solving the technical problems is as follows:
The invention provides an automatic driving data processing method, applied to the road side and/or the vehicle end; referring to FIG. 2, the method comprises the following steps:
S1: acquiring a current traffic environment by utilizing a multi-element fusion sensing method;
S2: identifying traffic participants in the current traffic environment, and predicting action tracks of the traffic participants to obtain a prediction result;
S3: constructing a decision model and constraint rules of the automatic driving vehicle in the current traffic environment according to the current traffic environment and the prediction result;
S4: determining an envelope interval of the autonomous vehicle decision model;
S5: determining a passenger evaluation model of the envelope section by using a data driving method;
S6: judging whether the passenger evaluation model is a target passenger evaluation model, if so, entering step S7; otherwise, returning to the step S2;
S7: optimizing the decision model and constraint rules of the autonomous vehicle using the passenger assessment model, and transmitting the decision model and constraint rules of the autonomous vehicle to the autonomous vehicle.
Optionally, the current traffic environment includes a current road traffic environment and a current intersection traffic environment, and the step S1 includes:
S11: acquiring the current road section traffic environment by using the multi-element fusion perception method;
S12: acquiring the current intersection traffic environment by using the multi-element fusion sensing method; and
S13: and carrying out multi-element sensor information fusion on the current road section traffic environment and the current intersection traffic environment by utilizing the two-dimensional candidate region to obtain the current traffic environment.
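One way to read the two-dimensional candidate-region fusion of step S13 is overlap-based merging of road-segment and intersection detections; the box format (x1, y1, x2, y2) and the IoU threshold are assumptions for illustration:

```python
def iou_2d(a, b):
    """Intersection-over-union of two axis-aligned candidate regions,
    each given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def fuse_candidates(road_boxes, intersection_boxes, thr=0.5):
    """Merge road-segment and intersection candidate regions: boxes whose
    2-D overlap exceeds thr are treated as one object; the rest are kept."""
    fused = list(road_boxes)
    used = set()
    for rb in road_boxes:
        for i, ib in enumerate(intersection_boxes):
            if i not in used and iou_2d(rb, ib) >= thr:
                used.add(i)        # same object seen by both sources
                break
    fused += [ib for i, ib in enumerate(intersection_boxes) if i not in used]
    return fused
```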
Optionally, referring to FIG. 3, the step S11 includes:
S111: acquiring current road section traffic environment image information;
S112: acquiring target key points in the current traffic environment image information by adopting an anchor-free target detection and identification method;
S113: predicting the bounding box size of the current traffic environment image information by utilizing the target key points;
S114: obtaining the current road section traffic environment according to the bounding box size of the current traffic environment image information and the target key point;
Referring to FIG. 4, the step S12 includes:
S121: acquiring laser point cloud data of the current intersection traffic environment;
S122: adopting a multi-size representation extraction algorithm of a sparse convolutional neural network to fuse laser point cloud data of the current intersection traffic environment to obtain fused data;
the multi-size characterization extraction algorithm of the sparse convolution network is researched and utilized, cross-layer network connection is established based on fusion point cloud input, the characteristics of different levels and scales are combined and fused, and a self-adaptive weight adjustment mechanism is introduced to distribute parameter weights so as to extract more critical information in the point cloud view.
The multi-size characterization extraction algorithm is based on a sparse convolutional neural network and aims to extract key feature points from the traffic-environment laser point cloud model data obtained by three-dimensional laser scanning, such as road boundaries, building or structure outlines, signs, markings and indicating information.
S123: distributing the parameter weight of the fusion data by utilizing a self-adaptive weight adjustment mechanism to obtain the processed fusion data;
S124: and acquiring key information of the laser point cloud data of the current intersection traffic environment by using the processed fusion data.
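A toy version of the adaptive weight adjustment mechanism over multi-scale point-cloud features (a softmax over mean activations stands in for the learned mechanism, and the maps are assumed pre-aligned to one size):

```python
import numpy as np

def adaptive_fuse(scale_features):
    """Assign a parameter weight to each scale's feature map via a
    softmax over its mean activation, then sum the weighted maps."""
    feats = [np.asarray(f, dtype=float) for f in scale_features]
    scores = np.array([f.mean() for f in feats])
    w = np.exp(scores - scores.max())
    w = w / w.sum()                           # per-scale parameter weights
    fused = sum(wi * fi for wi, fi in zip(w, feats))
    return fused, w
```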
Optionally, referring to fig. 5, in step S112, the anchor-free target detection and identification method includes:
A1: extracting features among different scales in the current road section traffic environment image information by using a full convolution neural network;
A2: extracting and excavating the interaction of the features among different scales by utilizing high-dimensional convolution; wherein the high-dimensional convolution structure is:
y = upsample(w₁ * x_{l+1}) + w₀ * x_l + w₋₁ * s₂(x_{l-1})

wherein y is the result of the convolution operation; upsample(·) is an upsampling operation that expands a low-resolution image into a high-resolution image; w₁, w₀ and w₋₁ are the values of the elements in convolution kernels 1, 0 and −1, respectively; x_{l+1}, x_l and x_{l−1} are the values of the matrices at resolution layers l+1, l and l−1, respectively; s₂ is the sampling rule that reduces the high-resolution image to a low-resolution image; and * represents a convolution operation.
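Assuming nearest-neighbour upsampling, a stride-2 subsampling rule for s₂, and scalar stand-ins for the three convolution kernels, the cross-scale combination above can be sketched as:

```python
import numpy as np

def upsample2(x):
    """Nearest-neighbour 2x upsampling of a 2-D feature map."""
    return np.kron(x, np.ones((2, 2)))

def s2(x):
    """Sampling rule s2: reduce a high-resolution map by keeping every
    second pixel in each direction."""
    return x[::2, ::2]

def high_dim_conv(x_coarse, x_cur, x_fine, w1=0.25, w0=0.5, wm1=0.25):
    """Cross-scale fusion: upsample the coarser layer x_{l+1}, keep the
    current layer x_l, subsample the finer layer x_{l-1}, and combine.
    Scalar weights stand in for the learned kernels w1, w0, w-1."""
    return w1 * upsample2(x_coarse) + w0 * x_cur + wm1 * s2(x_fine)
```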
A3: according to the interaction, fusing the features among different scales to obtain fused image information;
A4: obtaining target key points in the current traffic environment image information by using key point detection and a logistic regression loss model according to the fused image information, wherein the logistic regression loss model is as follows:
wherein L is the logistic regression loss model; a and γ are regression parameters; y is the label of the target in machine learning; and y′ is the predicted value of y.
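The exact loss expression is not reproduced here; given the regression parameters a and γ, a focal-style heatmap loss of the kind used by anchor-free keypoint detectors is one plausible form, assumed below purely for illustration:

```python
import numpy as np

def keypoint_loss(y, y_pred, a=2.0, gamma=4.0, eps=1e-7):
    """Focal-style logistic loss over a keypoint heatmap (an assumed
    form, not the patent's exact expression). y: ground truth in [0, 1]
    with 1 at key points; y_pred: predicted heatmap in (0, 1)."""
    y_pred = np.clip(np.asarray(y_pred, float), eps, 1.0 - eps)
    y = np.asarray(y, float)
    pos = (y == 1.0)
    pos_term = ((1.0 - y_pred) ** a) * np.log(y_pred) * pos
    neg_term = ((1.0 - y) ** gamma) * (y_pred ** a) * np.log(1.0 - y_pred) * (~pos)
    n_pos = max(pos.sum(), 1)
    return float(-(pos_term.sum() + neg_term.sum()) / n_pos)
```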
Optionally, the step S2 includes:
predicting gesture intention of the traffic participant; and/or
Track prediction is performed on the traffic participants.
Optionally, the predicting the gesture intent of the traffic participant includes:
acquiring affinity field coding graphs between key points of all traffic participants;
connecting the key points through the affinity field coding diagram to obtain the estimated fitting characteristics of the gesture of each traffic participant;
according to the gesture estimation fitting characteristics, respectively carrying out mode matching on all traffic participants in the current traffic environment to obtain matching results;
predicting the gesture intention of the traffic participant by utilizing the matching result;
the trajectory prediction of the traffic participant comprises:
constructing a pedestrian track prediction model according to the position coordinates of the traffic participants and the pedestrian anisotropy data; constructing a sequence-to-sequence neural network model according to the position coordinates of the traffic participants, the pedestrian anisotropy data and the traffic participant coding information; the traffic participant coding information is obtained through a convolutional neural network generated by image data of the current traffic environment;
Obtaining a first pedestrian track prediction result according to the pedestrian track prediction model; obtaining a second pedestrian track prediction result according to the sequence-to-sequence neural network model;
constructing a stack generalization model according to the first pedestrian track prediction result and the second pedestrian track prediction result;
and obtaining a track prediction result of the traffic participant according to the stack generalization model.
Optionally, referring to FIG. 6, the step S3 includes:
S31: constructing physical constraint conditions of the automatic driving vehicle according to the current traffic environment;
S32: establishing a physical model based on microscopic kinematics and dynamics according to a typical extreme traffic scene;
S33: establishing a corresponding collision risk expression according to the physical model and the physical constraint condition;
S34: constructing the surrounding safety data field of the automatic driving vehicle according to the collision risk expression;
S35: obtaining the change trend of the safety data field around the automatic driving vehicle according to the track prediction of the traffic participants;
S36: and generating a path planning strategy of the automatic driving vehicle by utilizing the change trend.
The invention also provides an automatic driving traffic system, which applies the automatic driving data processing method, and the automatic driving traffic system further comprises:
A plurality of perception modules: the sensing modules are used for sensing a plurality of single characteristics in the current traffic environment;
the fusion module is used for fusing the plurality of single features to form a complete current traffic environment;
the identifying module is used for identifying the traffic participants in the current traffic environment and generating action tracks of the traffic participants;
the processing module is used for processing the action track, predicting the action track according to the action track and generating a prediction result; constructing a decision model and constraint rules of the automatic driving vehicle in the current traffic environment according to the current traffic environment and the prediction result; determining an envelope interval of the autonomous vehicle decision model; determining a passenger evaluation model of the envelope section by using a data driving method; judging whether the passenger evaluation model is a target passenger evaluation model or not; and optimizing the decision model and constraint rules of the autonomous vehicle using the passenger assessment model and transmitting the decision model and constraint rules of the autonomous vehicle to the autonomous vehicle.
The foregoing description of the preferred embodiments of the invention is not intended to limit the invention to the precise form disclosed, and any such modifications, equivalents, and alternatives falling within the spirit and scope of the invention are intended to be included within the scope of the invention.

Claims (10)

1. An automatic driving data processing method, which is applied to a road side and/or a vehicle end, comprising:
S1: acquiring a current traffic environment by utilizing a multi-element fusion sensing method;
S2: identifying traffic participants in the current traffic environment, and predicting action tracks of the traffic participants to obtain a prediction result;
S3: constructing a decision model and constraint rules of the automatic driving vehicle in the current traffic environment according to the current traffic environment and the prediction result;
S4: determining an envelope interval of the autonomous vehicle decision model;
S5: determining a passenger evaluation model of the envelope section by using a data driving method;
S6: judging whether the passenger evaluation model is a target passenger evaluation model, if so, entering step S7; otherwise, returning to the step S2;
S7: optimizing the decision model and constraint rules of the autonomous vehicle using the passenger assessment model, and transmitting the decision model and constraint rules of the autonomous vehicle to the autonomous vehicle.
2. The automatic driving data processing method according to claim 1, wherein the current traffic environment includes a current road traffic environment and a current intersection traffic environment, and the step S1 includes:
S11: acquiring the current road section traffic environment by using the multi-element fusion perception method;
S12: acquiring the current intersection traffic environment by using the multi-element fusion sensing method;
S13: and carrying out multi-element sensor information fusion on the current road section traffic environment and the current intersection traffic environment by utilizing the two-dimensional candidate region to obtain the current traffic environment.
3. The automatic driving data processing method according to claim 2, characterized in that the step S11 includes:
S111: acquiring current road section traffic environment image information;
S112: acquiring target key points in the current traffic environment image information by adopting an anchor-free target detection and identification method;
S113: predicting the bounding box size of the current traffic environment image information by utilizing the target key points;
S114: and obtaining the current road section traffic environment according to the bounding box size of the current traffic environment image information and the target key point.
4. The automatic driving data processing method according to claim 3, wherein in the step S112, the anchor-free target detection and recognition method includes:
A1: extracting features among different scales in the current road section traffic environment image information by using a full convolution neural network;
A2: extracting and excavating the interaction of the features among different scales by utilizing high-dimensional convolution;
A3: according to the interaction, fusing the features among different scales to obtain fused image information;
A4: and obtaining target key points in the current traffic environment image information by using a key point detection and logistic regression loss model according to the fused image information.
5. The automatic driving data processing method according to claim 4, wherein in the step A2, the high-dimensional convolution structure is:
y = upsample(w₁ * x_{l+1}) + w₀ * x_l + w₋₁ * s₂(x_{l-1})

wherein y is the result of the convolution operation; upsample(·) is an upsampling operation that expands a low-resolution image into a high-resolution image; w₁, w₀ and w₋₁ are the values of the elements in convolution kernels 1, 0 and −1, respectively; x_{l+1}, x_l and x_{l−1} are the values of the matrices at resolution layers l+1, l and l−1, respectively; s₂ is the sampling rule that reduces the high-resolution image to a low-resolution image; and * represents a convolution operation;
in the step A4, the logistic regression loss model is:
wherein L is the logistic regression loss model; a and γ are regression parameters; y is the label of the target in machine learning; and y′ is the predicted value of y.
6. The automatic driving data processing method according to claim 2, characterized in that the step S12 includes:
S121: acquiring laser point cloud data of the current intersection traffic environment;
S122: adopting a multi-size representation extraction algorithm of a sparse convolutional neural network to fuse laser point cloud data of the current intersection traffic environment to obtain fused data;
S123: distributing the parameter weight of the fusion data by utilizing a self-adaptive weight adjustment mechanism to obtain the processed fusion data;
S124: and acquiring key information of the laser point cloud data of the current intersection traffic environment by using the processed fusion data.
7. The automatic driving data processing method according to claim 1, wherein the step S2 includes:
predicting gesture intention of the traffic participant; and/or
Track prediction is performed on the traffic participants.
8. The automated driving data processing method of claim 7, wherein the gesture intent prediction of the traffic participant comprises:
acquiring affinity field coding graphs between key points of all traffic participants;
connecting the key points through the affinity field coding diagram to obtain the estimated fitting characteristics of the gesture of each traffic participant;
according to the gesture estimation fitting characteristics, respectively carrying out mode matching on all traffic participants in the current traffic environment to obtain matching results;
predicting the gesture intention of the traffic participant by utilizing the matching result;
the trajectory prediction of the traffic participant comprises:
constructing a pedestrian track prediction model according to the position coordinates of the traffic participants and the pedestrian anisotropy data; constructing a sequence-to-sequence neural network model according to the position coordinates of the traffic participants, the pedestrian anisotropy data and the traffic participant coding information; the traffic participant coding information is obtained through a convolutional neural network generated by image data of the current traffic environment;
obtaining a first pedestrian track prediction result according to the pedestrian track prediction model; obtaining a second pedestrian track prediction result according to the sequence-to-sequence neural network model;
constructing a stack generalization model according to the first pedestrian track prediction result and the second pedestrian track prediction result;
and obtaining a track prediction result of the traffic participant according to the stack generalization model.
9. The automatic driving data processing method according to claim 1, wherein the step S3 includes:
S31: constructing physical constraint conditions of the automatic driving vehicle according to the current traffic environment;
S32: establishing a physical model based on microscopic kinematics and dynamics according to a typical extreme traffic scene;
S33: establishing a corresponding collision risk expression according to the physical model and the physical constraint condition;
S34: constructing the surrounding safety data field of the automatic driving vehicle according to the collision risk expression;
S35: obtaining the change trend of the safety data field around the automatic driving vehicle according to the track prediction of the traffic participants;
S36: and generating a path planning strategy of the automatic driving vehicle by utilizing the change trend.
10. An automated driving traffic system, wherein the automated driving traffic system applies the automated driving data processing method according to any one of claims 1 to 9, the automated driving traffic system further comprising:
A plurality of perception modules: the sensing modules are used for sensing a plurality of single characteristics in the current traffic environment;
the fusion module is used for fusing the plurality of single features to form a complete current traffic environment;
the identifying module is used for identifying the traffic participants in the current traffic environment and generating action tracks of the traffic participants;
the processing module is used for processing the action track, predicting the action track according to the action track and generating a prediction result; constructing a decision model and constraint rules of the automatic driving vehicle in the current traffic environment according to the current traffic environment and the prediction result; determining an envelope interval of the autonomous vehicle decision model; determining a passenger evaluation model of the envelope section by using a data driving method; judging whether the passenger evaluation model is a target passenger evaluation model or not; and optimizing the decision model and constraint rules of the autonomous vehicle using the passenger assessment model and transmitting the decision model and constraint rules of the autonomous vehicle to the autonomous vehicle.
CN202211137920.9A 2022-09-19 2022-09-19 Automatic driving data processing method and automatic driving traffic system Active CN115662166B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211137920.9A CN115662166B (en) 2022-09-19 2022-09-19 Automatic driving data processing method and automatic driving traffic system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211137920.9A CN115662166B (en) 2022-09-19 2022-09-19 Automatic driving data processing method and automatic driving traffic system

Publications (2)

Publication Number Publication Date
CN115662166A CN115662166A (en) 2023-01-31
CN115662166B true CN115662166B (en) 2024-04-09

Family

ID=84984134

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211137920.9A Active CN115662166B (en) 2022-09-19 2022-09-19 Automatic driving data processing method and automatic driving traffic system

Country Status (1)

Country Link
CN (1) CN115662166B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115782835B (en) * 2023-02-09 2023-04-28 江苏天一航空工业股份有限公司 Automatic parking remote driving control method for passenger boarding vehicle
CN117076816B (en) * 2023-07-19 2024-07-16 清华大学 Response prediction method, response prediction apparatus, computer device, storage medium, and program product
CN117894181B (en) * 2024-03-14 2024-05-07 北京动视元科技有限公司 Global traffic abnormal condition integrated monitoring method and system

Citations (14)

Publication number Priority date Publication date Assignee Title
CN106710242A (en) * 2017-02-20 2017-05-24 广西交通科学研究院有限公司 Method for recognizing vehicle quantity of motorcade based on dynamic strain of bridge
CN112201070A (en) * 2020-09-29 2021-01-08 上海交通大学 Deep learning-based automatic driving expressway bottleneck section behavior decision method
JPWO2021028708A1 (en) * 2019-08-13 2021-02-18
CN112622937A (en) * 2021-01-14 2021-04-09 长安大学 Pass right decision method for automatically driving automobile in face of pedestrian
CN112896186A (en) * 2021-01-30 2021-06-04 同济大学 Automatic driving longitudinal decision control method under cooperative vehicle and road environment
CN112948984A (en) * 2021-05-13 2021-06-11 西南交通大学 Vehicle-mounted track height irregularity peak interval detection method
CN113330497A (en) * 2020-06-05 2021-08-31 曹庆恒 Automatic driving method and device based on intelligent traffic system and intelligent traffic system
WO2021192771A1 (en) * 2020-03-26 2021-09-30 Mitsubishi Electric Corporation Adaptive optimization of decision making for vehicle control
CN113468670A (en) * 2021-07-20 2021-10-01 合肥工业大学 Method for evaluating performance of whole vehicle grade of automatic driving vehicle
CN113487855A (en) * 2021-05-25 2021-10-08 浙江工业大学 Traffic flow prediction method based on EMD-GAN neural network structure
CN114386826A (en) * 2022-01-10 2022-04-22 湖南工业大学 Fuzzy network DEA packaging material evaluation method based on sense experience fusion
CN114446046A (en) * 2021-12-20 2022-05-06 上海智能网联汽车技术中心有限公司 LSTM model-based weak traffic participant track prediction method
CN114462667A (en) * 2021-12-20 2022-05-10 上海智能网联汽车技术中心有限公司 SFM-LSTM neural network model-based street pedestrian track prediction method
CN114612999A (en) * 2020-12-04 2022-06-10 丰田自动车株式会社 Target behavior classification method, storage medium and terminal

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
US9545854B2 (en) * 2011-06-13 2017-01-17 General Electric Company System and method for controlling and powering a vehicle
CN112309122A (en) * 2020-11-19 2021-02-02 北京清研宏达信息科技有限公司 Intelligent bus grading decision-making system based on multi-system cooperation
CN112896170B (en) * 2021-01-30 2022-09-20 同济大学 Automatic driving transverse control method under vehicle-road cooperative environment

Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106710242A (en) * 2017-02-20 2017-05-24 广西交通科学研究院有限公司 Method for recognizing vehicle quantity of motorcade based on dynamic strain of bridge
JPWO2021028708A1 (en) * 2019-08-13 2021-02-18
WO2021192771A1 (en) * 2020-03-26 2021-09-30 Mitsubishi Electric Corporation Adaptive optimization of decision making for vehicle control
CN113330497A (en) * 2020-06-05 2021-08-31 曹庆恒 Automatic driving method and device based on intelligent traffic system and intelligent traffic system
CN112201070A (en) * 2020-09-29 2021-01-08 上海交通大学 Deep learning-based automatic driving expressway bottleneck section behavior decision method
CN114612999A (en) * 2020-12-04 2022-06-10 丰田自动车株式会社 Target behavior classification method, storage medium and terminal
CN112622937A (en) * 2021-01-14 2021-04-09 长安大学 Pass right decision method for automatically driving automobile in face of pedestrian
CN112896186A (en) * 2021-01-30 2021-06-04 同济大学 Automatic driving longitudinal decision control method under cooperative vehicle and road environment
CN112948984A (en) * 2021-05-13 2021-06-11 西南交通大学 Vehicle-mounted track height irregularity peak interval detection method
CN113487855A (en) * 2021-05-25 2021-10-08 浙江工业大学 Traffic flow prediction method based on EMD-GAN neural network structure
CN113468670A (en) * 2021-07-20 2021-10-01 合肥工业大学 Method for evaluating performance of whole vehicle grade of automatic driving vehicle
CN114446046A (en) * 2021-12-20 2022-05-06 上海智能网联汽车技术中心有限公司 LSTM model-based weak traffic participant track prediction method
CN114462667A (en) * 2021-12-20 2022-05-10 上海智能网联汽车技术中心有限公司 SFM-LSTM neural network model-based street pedestrian track prediction method
CN114386826A (en) * 2022-01-10 2022-04-22 湖南工业大学 Fuzzy network DEA packaging material evaluation method based on sense experience fusion

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Review of Academic Research on Automotive Engineering in China, 2017; Editorial Department of China Journal of Highway and Transport; China Journal of Highway and Transport (06); full text *

Also Published As

Publication number Publication date
CN115662166A (en) 2023-01-31

Similar Documents

Publication Publication Date Title
Bachute et al. Autonomous driving architectures: insights of machine learning and deep learning algorithms
CN115662166B (en) Automatic driving data processing method and automatic driving traffic system
CN110796856B (en) Vehicle lane change intention prediction method and training method of lane change intention prediction network
US20240144010A1 (en) Object Detection and Property Determination for Autonomous Vehicles
WO2022206942A1 (en) Laser radar point cloud dynamic segmentation and fusion method based on driving safety risk field
Duan et al. V2I based environment perception for autonomous vehicles at intersections
CN110531754A (en) Control system, control method and the controller of autonomous vehicle
CN114970321A (en) Scene flow digital twinning method and system based on dynamic trajectory flow
CN113705636B (en) Method and device for predicting track of automatic driving vehicle and electronic equipment
US20220153314A1 (en) Systems and methods for generating synthetic motion predictions
CN110356412A (en) The method and apparatus that automatically rule for autonomous driving learns
GB2621048A (en) Vehicle-road laser radar point cloud dynamic segmentation and fusion method based on driving safety risk field
Zhang et al. A cognitively inspired system architecture for the Mengshi cognitive vehicle
CN111045422A (en) Control method for automatically driving and importing 'machine intelligence acquisition' model
Zhang et al. Collision avoidance predictive motion planning based on integrated perception and V2V communication
Zhang et al. Predictive trajectory planning for autonomous vehicles at intersections using reinforcement learning
Guo et al. Intelligence-sharing vehicular networks with mobile edge computing and spatiotemporal knowledge transfer
CN115451987A (en) Path planning learning method for automatic driving automobile
Swain et al. Evolution of machine learning algorithms for enhancement of self-driving vehicles security
CN111038521A (en) Method for forming automatic driving consciousness decision model
CN111046897A (en) Method for defining fuzzy event probability measure spanning different spaces
YU et al. Vehicle Intelligent Driving Technology
Curiel-Ramirez et al. Interactive urban route evaluation system for smart electromobility
CN116348938A (en) Method and system for predicting dynamic object behavior
Oh et al. Towards defensive autonomous driving: Collecting and probing driving demonstrations of mixed qualities

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant