CN109927719B - Auxiliary driving method and system based on obstacle trajectory prediction - Google Patents

Info

Publication number
CN109927719B
CN109927719B (application CN201711351196.9A)
Authority
CN
China
Prior art keywords
vehicle
dynamic
obstacle
track
running track
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201711351196.9A
Other languages
Chinese (zh)
Other versions
CN109927719A (en)
Inventor
夏中谱
潘屹峰
徐宝强
朱振广
蒋菲怡
潘余昌
杨旭光
詹锟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Baidu Online Network Technology Beijing Co Ltd
Original Assignee
Baidu Online Network Technology Beijing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Baidu Online Network Technology Beijing Co Ltd filed Critical Baidu Online Network Technology Beijing Co Ltd
Priority to CN201711351196.9A priority Critical patent/CN109927719B/en
Publication of CN109927719A publication Critical patent/CN109927719A/en
Application granted granted Critical
Publication of CN109927719B publication Critical patent/CN109927719B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

The application provides an auxiliary driving method and system based on obstacle track prediction. The method includes: acquiring environmental data around the host vehicle collected by vehicle-mounted sensors; determining the travelable areas of the dynamic obstacles around the host vehicle based on the environmental data; predicting the driving track of a dynamic obstacle by using the historical state information and the travelable areas of the dynamic obstacle; and judging the risk condition that the running track of the dynamic obstacle conflicts with the running track of the host vehicle. The danger coefficient between the obstacle track and the host vehicle track can thus be calculated, and early warning can be carried out.

Description

Auxiliary driving method and system based on obstacle trajectory prediction
[ technical field ]
The application relates to the field of automatic control, in particular to an auxiliary driving method and system based on obstacle trajectory prediction.
[ background of the invention ]
An adaptive cruise control system (Adaptive Cruise Control, ACC) and an emergency automatic braking system (Automatic Emergency Braking, AEB) are common in existing vehicle driving assistance systems. Such a system detects the state of obstacles on the driving road and, once certain conditions are triggered, decides to adopt an acceleration strategy or a braking and deceleration strategy.
However, these existing vehicle driving assistance systems have disadvantages:
1. The early-warning time is short. Because a passive detection technique is used, an alarm or action is triggered only when certain conditions are met, for example when an obstacle has already entered the driving area; no corresponding strategy can be adopted before the obstacle enters the driving area. By the time these conditions are triggered, the vehicle is already in a dangerous state, and the time left to the driver or the driving assistance system is often insufficient to ensure safety, let alone comfort.
2. The applicable scenes are narrow. Such systems are only suitable for simple road scenes, such as expressways and urban ring roads, and are difficult to apply to ordinary, complex urban road scenes.
3. They lack active anticipation capability.
[ summary of the invention ]
Aspects of the application provide a driving assistance method and system based on obstacle trajectory prediction, which are used for calculating risk coefficients of an obstacle trajectory and a vehicle trajectory and performing early warning.
In one aspect of the present application, there is provided a driving assistance method based on obstacle trajectory prediction, including:
acquiring environmental data around the vehicle, which is acquired by a vehicle-mounted sensor;
determining a travelable region of a dynamic obstacle around the host vehicle based on the environmental data;
predicting the driving track of the dynamic obstacle by using the historical state information and the travelable area of the dynamic obstacle;
and judging the risk condition that the running track of the dynamic obstacle conflicts with the running track of the vehicle.
The above-described aspects and any possible implementations further provide an implementation in which the environmental data includes: dynamic obstacles, static obstacles, and traffic signals.
The above-described aspect and any possible implementation further provides an implementation in which determining, based on the environment data, a travelable region of a dynamic obstacle around the host vehicle includes:
analyzing the relations between dynamic obstacles, between the dynamic obstacles and static obstacles, and between the dynamic obstacles and traffic signals according to preset traffic rules, and extracting all travelable areas of the dynamic obstacles.
The above-described aspects and any possible implementation manner further provide an implementation manner, wherein the predicting the driving track of the dynamic obstacle by using the historical state information and the drivable area of the dynamic obstacle comprises:
and inputting the historical state information of the dynamic obstacle and the travelable area into an obstacle track prediction model to predict the traveling track of the dynamic obstacle.
The above-described aspects and any possible implementations further provide an implementation in which the obstacle trajectory prediction model is a deep neural network model.
The above-described aspect and any possible implementation further provide an implementation in which determining the risk condition that the running track of the dynamic obstacle conflicts with the running track of the host vehicle includes:
judging the danger coefficient according to the time difference at which the predicted track of the dynamic obstacle and the running track of the host vehicle reach the same position; or according to the speed difference and the distance difference at the same time point.
The above-described aspect and any possible implementation further provides an implementation in which the travel trajectory of the host vehicle is predicted based on the current state information of the host vehicle and the control instruction sent by the host vehicle control system.
In another aspect of the present application, there is provided a driving assistance system based on obstacle trajectory prediction, including:
the acquisition module is used for acquiring the environmental data around the vehicle, which is acquired by the vehicle-mounted sensor;
a travelable region determination module for determining a travelable region of the dynamic obstacle around the host vehicle based on the environmental data;
the obstacle track prediction module is used for predicting the driving track of the dynamic obstacle by utilizing the historical state information and the drivable area of the dynamic obstacle;
and the judging module is used for judging the risk condition that the running track of the dynamic obstacle conflicts with the running track of the vehicle.
The above-described aspects and any possible implementations further provide an implementation in which the environmental data includes: dynamic obstacles, static obstacles, and traffic signals.
The above-described aspect and any possible implementation manner further provide an implementation manner, where the travelable region determining module is specifically configured to:
analyzing the relations between dynamic obstacles, between the dynamic obstacles and static obstacles, and between the dynamic obstacles and traffic signals according to preset traffic rules, and extracting all travelable areas of the dynamic obstacles.
The above-described aspect and any possible implementation further provide an implementation, where the obstacle trajectory prediction module is specifically configured to:
and inputting the historical state information of the dynamic obstacle and the travelable area into an obstacle track prediction model to predict the traveling track of the dynamic obstacle.
The above-described aspects and any possible implementations further provide an implementation in which the obstacle trajectory prediction model is a deep neural network model.
As for the above-mentioned aspect and any possible implementation manner, there is further provided an implementation manner, where the determining module is specifically configured to:
judging the danger coefficient according to the time difference at which the predicted track of the dynamic obstacle and the running track of the host vehicle reach the same position; or according to the speed difference and the distance difference at the same time point.
The foregoing aspects and any possible implementations further provide an implementation in which the system further includes a vehicle trajectory prediction module configured to predict a vehicle trajectory according to current state information of the vehicle and a control instruction sent by the vehicle control system.
In another aspect of the present invention, a computer device is provided, comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the method as described above when executing the program.
In another aspect of the invention, a computer-readable storage medium is provided, on which a computer program is stored, which program, when being executed by a processor, is adapted to carry out the method as set forth above.
According to the technical scheme, the risk coefficient of the obstacle track and the vehicle track can be calculated, and early warning is carried out.
[ description of the drawings ]
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the embodiments or the prior art descriptions will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present application, and those skilled in the art can also obtain other drawings according to the drawings without inventive labor.
Fig. 1 is a schematic flowchart of a driving assistance method based on obstacle trajectory prediction according to an embodiment of the present application;
FIG. 2 is a diagram illustrating a specific scenario in an embodiment of the present application;
FIG. 3 is a schematic diagram of another embodiment of the present application;
fig. 4 is a schematic structural diagram of an assisted driving system based on obstacle trajectory prediction according to another embodiment of the present application;
fig. 5 illustrates a block diagram of an exemplary computer system/server 012 suitable for use in implementing embodiments of the invention.
[ detailed description ]
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Fig. 1 is a schematic diagram of a driving assistance method based on obstacle trajectory prediction according to an embodiment of the present application, as shown in fig. 1, including the following steps:
step S11, acquiring environmental data around the vehicle, which is acquired by a vehicle-mounted sensor;
step S12, determining a drivable area of a dynamic obstacle around the vehicle based on the environment data;
step S13, predicting the driving track of the dynamic obstacle by using the historical state information and the travelable area of the dynamic obstacle;
step S14, judging the risk condition that the driving track of the dynamic obstacle conflicts with the driving track of the host vehicle.
In one preferred implementation of step S11,
preferably, data collected by a vehicle-mounted sensor is acquired, and a dynamic obstacle, a static obstacle and a traffic signal around the vehicle are detected based on the data; and acquiring the position and type information of the dynamic barrier, the static barrier and the traffic signal.
The vehicle-mounted sensors include: cameras mounted at the upper edge of the front windshield, at the rear of the vehicle and on both sides of the vehicle body; millimeter-wave radars mounted at the center of the front bumper, on both sides of the front bumper and on both sides of the rear bumper; laser radars mounted at the center, front end, left side and right side of the vehicle roof; a GPS-IMU integrated navigation module mounted at the rear end of the roof support; and the like. Together they form a 360-degree perception range centered on the host vehicle.
Preferably, said step comprises the sub-steps of:
and a substep S111 of rapidly synchronizing the multiple sensors through the synchronous board card, or acquiring data information of the multiple sensors through a time line, so as to realize the joint acquisition of the data of the multiple sensors and detect the data based on the acquired data.
The detection results include: dynamic obstacles, including but not limited to pedestrians and vehicles (motor vehicles and non-motor vehicles); static obstacles, including but not limited to road barrier facilities, inter-lane guardrails, and the like; and traffic signals, including but not limited to traffic lights, traffic signs, traffic markings, and the like.
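By way of illustration only, the following Python sketch shows one possible nearest-timestamp alignment for the multi-sensor synchronization described in sub-step S111; the sensor names, sampling rates and tolerance value are assumptions for this sketch and are not taken from the embodiment above.

from bisect import bisect_left

def nearest(timestamps, t):
    """Return the timestamp in the sorted list closest to t."""
    i = bisect_left(timestamps, t)
    candidates = timestamps[max(i - 1, 0):i + 1]
    return min(candidates, key=lambda s: abs(s - t))

def align_frames(camera_ts, radar_ts, lidar_ts, tolerance=0.05):
    """For each camera frame, pick the radar and lidar samples closest in time.

    Frames whose best radar/lidar match is farther than `tolerance` seconds
    are dropped, so every fused frame mixes measurements of the same instant.
    """
    fused = []
    for t in camera_ts:
        tr = nearest(radar_ts, t)
        tl = nearest(lidar_ts, t)
        if abs(tr - t) <= tolerance and abs(tl - t) <= tolerance:
            fused.append({"camera": t, "radar": tr, "lidar": tl})
    return fused

# Example: a 10 Hz camera, a 20 Hz radar and a 10 Hz lidar stream (assumed rates)
camera_ts = [i * 0.10 for i in range(10)]
radar_ts = [i * 0.05 for i in range(20)]
lidar_ts = [0.01 + i * 0.10 for i in range(10)]
print(len(align_frames(camera_ts, radar_ts, lidar_ts)))  # -> 10 fused frames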
Preferably, the joint multi-sensor data acquisition includes:
controlling a camera to acquire an image and detecting the image;
preferably, the plurality of cameras are controlled to acquire images from different orientations. Images acquired from different orientations according to the plurality of cameras; carrying out feature point acquisition and feature point matching; and reconstructing three-dimensional points through a three-dimensional coordinate positioning algorithm on a plurality of spatial non-coplanar straight lines formed by a plurality of two-dimensional plane coordinates in all camera imaging planes to obtain the coordinates of the obstacle.
Preferably, image recognition is performed on the images, including recognizing vehicle types and vehicle turn signals, recognizing pedestrians, recognizing the position, type and color of signal lights, and recognizing traffic signs, road markings, road barrier facilities, inter-lane guardrails, and the like.
In the present embodiment, shooting is performed at a rate of 10 frames per second.
The millimeter-wave radar is controlled to acquire the reflected signal of a target. Preferably, FMCW (frequency-modulated continuous wave) chirps are used to detect the distance of an obstacle, and the direction of the obstacle is detected from the time delay, i.e. the phase difference, of the signals received by the multiple receiving antennas.
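As a purely numerical illustration of the quantities mentioned above, the sketch below computes range from the FMCW beat frequency and bearing from the inter-antenna phase difference; the chirp parameters and antenna spacing are assumed values.

import math

C = 3.0e8            # speed of light, m/s

def fmcw_range(beat_freq_hz, chirp_duration_s, bandwidth_hz):
    """Range of the reflecting target from the measured beat frequency."""
    return C * beat_freq_hz * chirp_duration_s / (2.0 * bandwidth_hz)

def bearing_from_phase(delta_phase_rad, antenna_spacing_m, carrier_freq_hz):
    """Direction of arrival from the phase difference between two RX antennas."""
    wavelength = C / carrier_freq_hz
    return math.asin(delta_phase_rad * wavelength / (2.0 * math.pi * antenna_spacing_m))

# Assumed 77 GHz radar, 4 GHz sweep over 40 us, half-wavelength antenna spacing
r = fmcw_range(beat_freq_hz=2.0e6, chirp_duration_s=40e-6, bandwidth_hz=4.0e9)
theta = bearing_from_phase(delta_phase_rad=0.5, antenna_spacing_m=0.5 * C / 77e9,
                           carrier_freq_hz=77e9)
print(f"range = {r:.2f} m, bearing = {math.degrees(theta):.1f} deg")   # 3.00 m, ~9.2 deg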
The GPS-IMU integrated navigation module acquires the GPS signal and the inertial navigation signal of the host vehicle and calculates the position and attitude of the host vehicle.
Preferably, the laser radar can also be controlled to collect laser point cloud data and perform detection on it, as a complement to the cameras. Preferably, the laser radar rotates at a constant angular speed, continuously emitting laser pulses and collecting the information of the reflection points during the rotation, so as to obtain omnidirectional environment information. While measuring the distance of each reflection point, the laser radar records its time and horizontal angle; each laser emitter has a number and a fixed vertical angle, and the coordinates of all reflection points can be calculated from these data. The set of reflection-point coordinates collected during each revolution of the laser radar forms a point cloud. A filter is used to remove interference from the laser point cloud, and targets are detected by a cluster analysis method according to their shape and spatial position features, which yields the obstacle type; the subgroups produced by clustering are then recombined by adjusting the distance threshold, and new cluster centers are determined to locate the targets, which yields the obstacle coordinates.
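By way of illustration only, the following sketch converts per-return polar measurements (range, horizontal angle, fixed vertical angle per emitter) into Cartesian points and groups them into obstacle candidates; DBSCAN stands in here for the cluster-analysis method described above, and its parameters are assumptions.

import numpy as np
from sklearn.cluster import DBSCAN

def to_cartesian(ranges, azimuths, elevations):
    """Convert per-return polar measurements to x, y, z in the sensor frame."""
    ranges, azimuths, elevations = map(np.asarray, (ranges, azimuths, elevations))
    xy = ranges * np.cos(elevations)
    return np.column_stack((xy * np.cos(azimuths),
                            xy * np.sin(azimuths),
                            ranges * np.sin(elevations)))

def cluster_obstacles(points, eps=0.5, min_points=5):
    """Return a list of point clusters, each a candidate obstacle."""
    labels = DBSCAN(eps=eps, min_samples=min_points).fit_predict(points)
    return [points[labels == k] for k in set(labels) if k != -1]   # label -1 = noise

# Toy scan (already in Cartesian form for brevity): two clumps of returns plus scattered noise
rng = np.random.default_rng(0)
pts = np.vstack([rng.normal([5, 0, 0], 0.1, (30, 3)),
                 rng.normal([2, 4, 0], 0.1, (30, 3)),
                 rng.uniform(-10, 10, (10, 3))])
clusters = cluster_obstacles(pts)
print(len(clusters), [c.mean(axis=0).round(1) for c in clusters])   # two obstacle candidates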
Sub-step S112: a reference coordinate system is selected, and the coordinates of the detection results in each sensor coordinate system are converted into the reference coordinate system.
the initial spatial arrangement of the sensors is known in advance and can be derived from the measurement data of a plurality of sensors on the body of the vehicle.
According to the relative position relationship between the obstacle and the vehicle and the position and posture information of the vehicle, the position information of the obstacle can be converted into a unified reference coordinate system. The reference coordinate system may be a geodetic coordinate system to facilitate further processing.
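By way of illustration only, the sketch below maps an obstacle position measured relative to the host vehicle into a fixed reference frame using the host pose from the GPS-IMU module; the two-dimensional simplification and the function names are assumptions.

import numpy as np

def to_reference_frame(obstacle_xy_vehicle, host_xy_world, host_yaw_rad):
    """Transform an obstacle position from the vehicle frame to the reference frame.

    obstacle_xy_vehicle: (x, y) of the obstacle relative to the host vehicle
    host_xy_world:       (x, y) of the host vehicle in the reference frame
    host_yaw_rad:        host heading measured in the reference frame
    """
    c, s = np.cos(host_yaw_rad), np.sin(host_yaw_rad)
    R = np.array([[c, -s],
                  [s,  c]])                        # rotation vehicle -> reference frame
    return R @ np.asarray(obstacle_xy_vehicle) + np.asarray(host_xy_world)

# Obstacle 10 m ahead and 2 m to the left while the host heads due north (90 deg)
print(to_reference_frame((10.0, 2.0), host_xy_world=(100.0, 200.0),
                         host_yaw_rad=np.pi / 2))   # -> [ 98. 210.]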
Preferably, the state information of each dynamic obstacle, including the current state information and the historical state information, may be obtained according to the position information of each dynamic obstacle at different time points obtained by continuous detection. The state information includes position, speed, and direction information of each dynamic obstacle.
Preferably, for the vehicles among the dynamic obstacles, the state information further includes turn-signal information obtained by image recognition.
In one preferred implementation of step S12,
preferably, according to a preset traffic rule base, the relations between the dynamic obstacles and the dynamic obstacles, between the dynamic obstacles and the static obstacles, and between the dynamic obstacles and the traffic signals are analyzed, and all travelable areas of the dynamic obstacles are extracted.
Preferably, the above operation may be performed by a scene analysis module of a driving assistance system provided in the host vehicle, or the position and type information of the dynamic obstacle, the static obstacle, and the traffic signal obtained in step S11 may be uploaded to a server, and the server may perform the above operation.
Preferably, a travelable area is an area in which a dynamic obstacle may travel under normal driving behavior and lies within the road markings, including but not limited to the areas passed through when a pedestrian crosses the road and when a vehicle goes straight, changes lanes, turns left, turns right, makes a U-turn, and the like.
The preset traffic rule base comprises:
a dynamic obstacle needs to travel within the corresponding road;
dynamic obstacles need to comply with traffic signals.
Preferably, when the relations of a dynamic obstacle are analyzed, for example the relation between that dynamic obstacle and the surrounding vehicles, the host vehicle itself is treated as one of the dynamic obstacles relative to it.
For example, as shown in fig. 2, the host vehicle 0 recognizes the dynamic obstacle vehicle 1, and the travelable areas of the dynamic obstacle vehicle 1 include two possibilities, namely going straight 2 and changing lanes to the right 3.
As shown in fig. 3, the host vehicle 0 recognizes the dynamic obstacle vehicle 1 and the dynamic obstacle pedestrian 5; the travelable areas of the dynamic obstacle vehicle 1 include three possibilities, namely turning right 6, going straight 7 and turning left 8, and the travelable areas of the pedestrian 5 include the two road-crossing possibilities 9 and 10.
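By way of illustration only, the following sketch shows one possible encoding of the rule filtering described above: candidate manoeuvres of a dynamic obstacle are kept only if they respect the road markings and the governing traffic signal; the data structures and the example junction are assumptions, not the scenes of figs. 2 and 3.

from dataclasses import dataclass

@dataclass
class LaneOption:
    manoeuvre: str          # "straight", "left_turn", "right_turn", "lane_change_right", ...
    allowed_by_road: bool   # the road marking permits this manoeuvre from the current lane
    signal_state: str       # traffic-signal phase governing this manoeuvre: "green"/"red"/"none"

def travelable_areas(options):
    """Keep only manoeuvres that respect the road markings and the traffic signal."""
    return [o.manoeuvre for o in options
            if o.allowed_by_road and o.signal_state in ("green", "none")]

# A vehicle approaching a junction with a green through/right phase (assumed scene)
options = [LaneOption("straight",   True,  "green"),
           LaneOption("right_turn", True,  "green"),
           LaneOption("left_turn",  True,  "red"),
           LaneOption("u_turn",     False, "red")]
print(travelable_areas(options))   # -> ['straight', 'right_turn']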
In one preferred implementation of step S13,
preferably, historical state information of the dynamic obstacle and a travelable area are input into an obstacle track prediction model to generate a predicted track of the dynamic obstacle;
for example, as shown in fig. 2, a predicted trajectory 4 of the dynamic obstacle vehicle 1 is generated; as shown in fig. 3, a predicted trajectory 12 of the dynamic obstacle vehicle 1 and a predicted trajectory 11 of the dynamic obstacle pedestrian 5 are generated.
The obstacle trajectory prediction model is pre-trained by:
For each moment in the historical state information of the dynamic obstacles, the state information of the preceding m seconds is acquired, with data sampled once every 0.1 second, and the travelable areas at that moment are analyzed; the state information of the following n seconds is taken as the output, where m and n are integers greater than or equal to 1;
and the obstacle track prediction model is trained on this training set, taking the state information of the preceding m seconds and the travelable areas at each moment as input and the state information of the following n seconds as output.
Preferably, the obstacle track prediction model is a deep neural network model. The deep neural network includes an input layer, hidden layers and an output layer; it receives the state information of the preceding m seconds and the travelable areas at the current moment, calculates the probability that the dynamic obstacle travels in each travelable area, performs the calculation within the travelable area with the highest probability, and outputs the corresponding state information of the following n seconds.
The model parameters of the deep neural network are adjusted by a back-propagation algorithm.
Preferably, m is 3 and n is 5.
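By way of illustration only, the following PyTorch sketch shows a feed-forward network of the kind described above, trained by back-propagation on samples mapping an m = 3 s state history plus the travelable areas to an n = 5 s future at a 0.1-second step; the layer sizes, the state encoding and the synthetic data are assumptions, not the model of this embodiment.

import torch
import torch.nn as nn

M_STEPS, N_STEPS = 30, 50          # 3 s history / 5 s horizon at 0.1 s sampling
STATE_DIM, N_AREAS = 4, 6          # assumed state (x, y, speed, heading) and area code size

model = nn.Sequential(             # input layer -> hidden layers -> output layer
    nn.Linear(M_STEPS * STATE_DIM + N_AREAS, 256),
    nn.ReLU(),
    nn.Linear(256, 256),
    nn.ReLU(),
    nn.Linear(256, N_STEPS * STATE_DIM),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Synthetic stand-in for a training set built from logged obstacle histories
history = torch.randn(512, M_STEPS * STATE_DIM)
areas = torch.zeros(512, N_AREAS)
areas[torch.arange(512), torch.randint(0, N_AREAS, (512,))] = 1
future = torch.randn(512, N_STEPS * STATE_DIM)

for epoch in range(10):            # back-propagation adjusts the model parameters
    optimizer.zero_grad()
    pred = model(torch.cat([history, areas], dim=1))
    loss = loss_fn(pred, future)
    loss.backward()
    optimizer.step()
print(f"final training loss: {loss.item():.4f}")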
In one preferred implementation of step S14,
preferably, the determining a risk condition, such as a risk coefficient, between the dynamic obstacle and the host vehicle in combination with the predicted trajectory of the dynamic obstacle and the traveling trajectory of the host vehicle includes:
judging the danger coefficient according to the time difference at the same position, namely the difference between the times at which the dynamic obstacle and the host vehicle reach the intersection point of the two tracks; or,
judging the danger coefficient according to the speed difference and the distance difference at the same time point, including: judging the danger coefficient according to the longitudinal distance difference and the longitudinal speed difference, and judging the danger coefficient according to the transverse distance difference and the transverse speed difference, where the speed difference is the closing speed between the dynamic obstacle and the host vehicle.
The driving assistance system receives the control instructions, such as steering, acceleration and braking commands, sent by the host vehicle control system, and the driving track of the host vehicle is predicted based on these instructions and the current state information of the host vehicle.
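By way of illustration only, the sketch below rolls the host-vehicle trajectory forward from its current state and a sequence of control commands using a kinematic bicycle model; the model and its parameters are assumptions for this sketch.

import math

def predict_host_trajectory(x, y, yaw, speed, commands, wheelbase=2.8, dt=0.1):
    """Integrate (steer_angle_rad, accel_mps2) commands into future poses.

    Returns a list of (t, x, y, yaw, speed) samples, one per 0.1 s step.
    """
    trajectory = []
    for k, (steer, accel) in enumerate(commands):
        speed = max(0.0, speed + accel * dt)                 # braking = negative acceleration
        x += speed * math.cos(yaw) * dt
        y += speed * math.sin(yaw) * dt
        yaw += speed / wheelbase * math.tan(steer) * dt
        trajectory.append(((k + 1) * dt, x, y, yaw, speed))
    return trajectory

# Host at the origin doing 10 m/s, gentle left steer while braking slightly
cmds = [(0.02, -0.5)] * 30                                   # 3 s of commands
path = predict_host_trajectory(0.0, 0.0, 0.0, 10.0, cmds)
print(path[-1])                                              # pose after 3 s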
Preferably, when the predicted trajectory of the dynamic obstacle and the traveling trajectory of the host vehicle are on the same lane,
if the longitudinal distance difference is larger than the braking safety distance under the current longitudinal speed difference, the danger coefficient is 0;
if the longitudinal distance difference is smaller than the braking safety distance under the current longitudinal speed difference, the danger coefficient is 1.
For example, when the predicted trajectory of the dynamic obstacle and the travel trajectory of the host vehicle are on adjacent lanes,
if the transverse distance difference is larger than the safe reaction distance under the current transverse speed difference, the danger coefficient is 0;
if the lateral distance difference is less than the safe reaction distance at the current lateral velocity difference, the risk factor is 1.
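By way of illustration only, the following sketch combines the rules above into simple danger-coefficient functions for the same-lane, adjacent-lane and track-intersection cases; the reaction time, deceleration and time margin are assumed thresholds.

REACTION_TIME_S = 1.5
MAX_DECEL_MPS2 = 6.0

def braking_safety_distance(closing_speed):
    """Distance needed to react and brake away the current closing speed."""
    return closing_speed * REACTION_TIME_S + closing_speed ** 2 / (2 * MAX_DECEL_MPS2)

def risk_same_lane(longitudinal_gap, longitudinal_closing_speed):
    return 0 if longitudinal_gap > braking_safety_distance(longitudinal_closing_speed) else 1

def risk_adjacent_lane(lateral_gap, lateral_closing_speed):
    safe_reaction_distance = lateral_closing_speed * REACTION_TIME_S
    return 0 if lateral_gap > safe_reaction_distance else 1

def risk_crossing(time_obstacle_at_crossing, time_host_at_crossing, margin_s=2.0):
    """Risk at a predicted track intersection from the arrival-time difference."""
    return 0 if abs(time_obstacle_at_crossing - time_host_at_crossing) > margin_s else 1

print(risk_same_lane(longitudinal_gap=40.0, longitudinal_closing_speed=8.0))  # 0: gap > safety distance
print(risk_adjacent_lane(lateral_gap=0.8, lateral_closing_speed=1.0))         # 1: gap < reaction distance
print(risk_crossing(4.2, 5.0))                                                # 1: arrivals within 2 s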
Preferably, the driver of the host vehicle may also be warned in advance of the determined possible collision event according to the danger coefficient between the dynamic obstacle and the host vehicle. An alarm is issued according to the danger coefficient, and a collision warning is given to the driver by voice prompt, an on-screen message or any other output method, so that the driver has a certain amount of collision-avoidance time and can take the correct action to avoid the collision.
According to the embodiment, the risk coefficient of the obstacle track and the vehicle track can be calculated, and early warning can be performed.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required in this application.
The above is a description of method embodiments, and the embodiments of the present invention are further described below by way of apparatus embodiments.
In the embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
Fig. 4 is a schematic structural diagram of a driving assistance system based on obstacle trajectory prediction according to an embodiment of the present application; as shown in fig. 4, the system includes:
an obtaining module 41, configured to obtain environmental data around the vehicle, which is acquired by a vehicle-mounted sensor;
a travelable region determination module 42 for determining a travelable region of the dynamic obstacle around the host vehicle based on the environmental data;
an obstacle trajectory prediction module 43, configured to predict a driving trajectory of the dynamic obstacle using the historical state information of the dynamic obstacle and the drivable area;
and the judging module 44 is used for judging the risk condition that the running track of the dynamic obstacle conflicts with the running track of the vehicle.
In a preferred implementation of the acquisition module 41,
preferably, data collected by a vehicle-mounted sensor are acquired, and a dynamic obstacle, a static obstacle and a traffic signal around the vehicle are detected based on the data; and acquiring the position and type information of the dynamic barrier, the static barrier and the traffic signal.
The vehicle-mounted sensors include: cameras mounted at the upper edge of the front windshield, at the rear of the vehicle and on both sides of the vehicle body; millimeter-wave radars mounted at the center of the front bumper, on both sides of the front bumper and on both sides of the rear bumper; laser radars mounted at the center, front end, left side and right side of the vehicle roof; a GPS-IMU integrated navigation module mounted at the rear end of the roof support; and the like. Together they form a 360-degree perception range centered on the host vehicle.
Preferably, the multiple sensors are synchronized through a synchronization board, or the data of the multiple sensors are aligned on a common timeline, so as to realize joint multi-sensor data acquisition, and detection is performed on the acquired data.
The detection results include: dynamic obstacles, including but not limited to pedestrians and vehicles (motor vehicles and non-motor vehicles); static obstacles, including but not limited to road barrier facilities, inter-lane guardrails, and the like; and traffic signals, including but not limited to traffic lights, traffic signs, traffic markings, and the like.
Preferably, the multi-sensor data joint acquisition comprises:
controlling a camera to acquire an image and detecting the image;
preferably, the plurality of cameras are controlled to acquire images from different orientations. Images acquired from different orientations according to the plurality of cameras; carrying out feature point acquisition and feature point matching; and reconstructing three-dimensional points through a three-dimensional coordinate positioning algorithm on a plurality of spatial non-coplanar straight lines formed by a plurality of two-dimensional plane coordinates in all camera imaging planes to obtain the coordinates of the obstacle.
Preferably, image recognition is performed on the images, including recognizing vehicle types and vehicle turn signals, recognizing pedestrians, recognizing the position, type and color of signal lights, and recognizing traffic signs, road markings, road barrier facilities, inter-lane guardrails, and the like.
In the present embodiment, shooting is performed at a rate of 10 frames per second.
The millimeter-wave radar is controlled to acquire the reflected signal of a target. Preferably, FMCW (frequency-modulated continuous wave) chirps are used to detect the distance of an obstacle, and the direction of the obstacle is detected from the time delay, i.e. the phase difference, of the signals received by the multiple receiving antennas.
The GPS-IMU integrated navigation module acquires the GPS signal and the inertial navigation signal of the host vehicle and calculates the position and attitude of the host vehicle.
Preferably, the laser radar can also be controlled to collect laser point cloud data and perform detection on it, as a complement to the cameras. Preferably, the laser radar rotates at a constant angular speed, continuously emitting laser pulses and collecting the information of the reflection points during the rotation, so as to obtain omnidirectional environment information. While measuring the distance of each reflection point, the laser radar records its time and horizontal angle; each laser emitter has a number and a fixed vertical angle, and the coordinates of all reflection points can be calculated from these data. The set of reflection-point coordinates collected during each revolution of the laser radar forms a point cloud. A filter is used to remove interference from the laser point cloud, and targets are detected by a cluster analysis method according to their shape and spatial position features, which yields the obstacle type; the subgroups produced by clustering are then recombined by adjusting the distance threshold, and new cluster centers are determined to locate the targets, which yields the obstacle coordinates.
Selecting a reference coordinate system, and converting the coordinates of the detection result in each sensor coordinate system into the reference coordinate system;
the initial spatial arrangement of the sensors is known in advance and can be derived from the measurement data of a plurality of sensors on the body of the vehicle.
According to the relative position relationship between the obstacle and the vehicle and the position and posture information of the vehicle, the position information of the obstacle can be converted into a unified reference coordinate system. The reference coordinate system may be a geodetic coordinate system to facilitate further processing.
Preferably, the state information of each dynamic obstacle, including the current state information and the historical state information, may be obtained according to the position information of each dynamic obstacle at different time points obtained by continuous detection. The state information includes position, speed, and direction information of each dynamic obstacle.
Preferably, for the vehicles among the dynamic obstacles, the state information further includes turn-signal information obtained by image recognition.
In one preferred implementation of the drivable region determination module 42,
preferably, according to a preset traffic rule base, the relations between the dynamic obstacles and the dynamic obstacles, between the dynamic obstacles and the static obstacles, and between the dynamic obstacles and the traffic signals are analyzed, and all travelable areas of the dynamic obstacles are extracted.
Preferably, the above operation may be performed by a scene analysis module of a driving assistance system provided in the host vehicle, or the position and type information of the dynamic obstacle, the static obstacle, and the traffic signal obtained in step S11 may be uploaded to a server, and the server may perform the above operation.
Preferably, a travelable area is an area in which a dynamic obstacle may travel under normal driving behavior and lies within the road markings, including but not limited to the areas passed through when a pedestrian crosses the road and when a vehicle goes straight, changes lanes, turns left, turns right, makes a U-turn, and the like.
The preset traffic rule base comprises:
a dynamic obstacle needs to travel within the corresponding road;
dynamic obstacles need to comply with traffic signals.
Preferably, when the relations of a dynamic obstacle are analyzed, for example the relation between that dynamic obstacle and the surrounding vehicles, the host vehicle itself is treated as one of the dynamic obstacles relative to it.
For example, as shown in fig. 2, the host vehicle 0 recognizes the dynamic obstacle vehicle 1, and the travelable areas of the dynamic obstacle vehicle 1 include two possibilities, namely going straight 2 and changing lanes to the right 3.
As shown in fig. 3, the host vehicle 0 recognizes the dynamic obstacle vehicle 1 and the dynamic obstacle pedestrian 5; the travelable areas of the dynamic obstacle vehicle 1 include three possibilities, namely turning right 6, going straight 7 and turning left 8, and the travelable areas of the pedestrian 5 include the two road-crossing possibilities 9 and 10.
In a preferred implementation of the obstacle trajectory prediction module 43,
preferably, historical state information of the dynamic obstacle and a travelable area are input into an obstacle track prediction model to generate a predicted track of the dynamic obstacle;
for example, as shown in fig. 2, a predicted trajectory 4 of the dynamic obstacle vehicle 1 is generated; as shown in fig. 3, a predicted trajectory 12 of the dynamic obstacle vehicle 1 and a predicted trajectory 11 of the dynamic obstacle pedestrian 5 are generated.
The obstacle trajectory prediction model is pre-trained by:
For each moment in the historical state information of the dynamic obstacles, the state information of the preceding m seconds is acquired, with data sampled once every 0.1 second, and the travelable areas at that moment are analyzed; the state information of the following n seconds is taken as the output, where m and n are integers greater than or equal to 1;
and the obstacle track prediction model is trained on this training set, taking the state information of the preceding m seconds and the travelable areas at each moment as input and the state information of the following n seconds as output.
Preferably, the obstacle track prediction model is a deep neural network model. The deep neural network includes an input layer, hidden layers and an output layer; it receives the state information of the preceding m seconds and the travelable areas at the current moment, calculates the probability that the dynamic obstacle travels in each travelable area, performs the calculation within the travelable area with the highest probability, and outputs the corresponding state information of the following n seconds.
The model parameters of the deep neural network are adjusted by a back-propagation algorithm.
Preferably, m is 3 and n is 5.
In one preferred implementation of the judging module 44,
preferably, the determining a risk condition, such as a risk coefficient, between the dynamic obstacle and the host vehicle in combination with the predicted trajectory of the dynamic obstacle and the traveling trajectory of the host vehicle includes:
judging the danger coefficient according to the time difference at the same position, namely the difference between the times at which the dynamic obstacle and the host vehicle reach the intersection point of the two tracks; or,
judging the danger coefficient according to the speed difference and the distance difference at the same time point, including: judging the danger coefficient according to the longitudinal distance difference and the longitudinal speed difference, and judging the danger coefficient according to the transverse distance difference and the transverse speed difference, where the speed difference is the closing speed between the dynamic obstacle and the host vehicle.
The driving track of the vehicle is predicted by a vehicle track prediction module according to the current state information of the vehicle and a control instruction sent by a vehicle control system.
Preferably, when the predicted trajectory of the dynamic obstacle and the traveling trajectory of the host vehicle are on the same lane,
if the longitudinal distance difference is larger than the braking safety distance under the current longitudinal speed difference, the danger coefficient is 0;
if the longitudinal distance difference is smaller than the braking safety distance under the current longitudinal speed difference, the danger coefficient is 1.
For example, when the predicted trajectory of the dynamic obstacle and the travel trajectory of the host vehicle are on adjacent lanes,
if the transverse distance difference is larger than the safe reaction distance under the current transverse speed difference, the danger coefficient is 0;
if the lateral distance difference is less than the safe reaction distance at the current lateral velocity difference, the risk factor is 1.
Preferably, the system further includes a prompt module, configured to warn the driver of the host vehicle in advance of the determined possible collision event according to the danger coefficient between the dynamic obstacle and the host vehicle. An alarm is issued according to the danger coefficient, and a collision warning is given to the driver by voice prompt, an on-screen message or any other output method, so that the driver has a certain amount of collision-avoidance time and can take the correct action to avoid the collision.
According to the embodiment, the risk coefficient of the obstacle track and the vehicle track can be calculated, and early warning can be performed.
In the several embodiments provided in the present application, it should be understood that the disclosed method and apparatus may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
Fig. 5 illustrates a block diagram of an exemplary computer system/server 012 suitable for use in implementing embodiments of the invention. The computer system/server 012 shown in fig. 5 is only an example, and should not bring any limitation to the function and the scope of use of the embodiment of the present invention.
As shown in fig. 5, the computer system/server 012 is embodied as a general purpose computing device. The components of computer system/server 012 may include, but are not limited to: one or more processors or processing units 016, a system memory 028, and a bus 018 that couples various system components including the system memory 028 and the processing unit 016.
Bus 018 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, a processor, or a local bus using any of a variety of bus architectures. By way of example, such architectures include, but are not limited to, an Industry Standard Architecture (ISA) bus, a Micro Channel Architecture (MCA) bus, an enhanced ISA bus, a Video Electronics Standards Association (VESA) local bus, and a Peripheral Component Interconnect (PCI) bus.
Computer system/server 012 typically includes a variety of computer system readable media. Such media may be any available media that is accessible by computer system/server 012 and includes both volatile and nonvolatile media, removable and non-removable media.
System memory 028 can include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM)030 and/or cache memory 032. The computer system/server 012 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 034 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in FIG. 5, commonly referred to as a "hard drive"). Although not shown in FIG. 5, a magnetic disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided. In such cases, each drive may be connected to bus 018 via one or more data media interfaces. Memory 028 can include at least one program product having a set (e.g., at least one) of program modules configured to carry out the functions of embodiments of the present invention.
Program/utility 040 having a set (at least one) of program modules 042 can be stored, for example, in memory 028, such program modules 042 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each of which examples or some combination thereof might include an implementation of a network environment. Program modules 042 generally perform the functions and/or methodologies of embodiments of the present invention as described herein.
The computer system/server 012 may also communicate with one or more external devices 014 (e.g., keyboard, pointing device, display 024, etc.). In the present invention, the computer system/server 012 communicates with an external radar device, and may also communicate with one or more devices that enable a user to interact with the computer system/server 012, and/or with any device (e.g., network card, modem, etc.) that enables the computer system/server 012 to communicate with one or more other computing devices. Such communication may occur through an input/output (I/O) interface 022. Also, the computer system/server 012 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network such as the Internet) via the network adapter 020. As shown in fig. 5, the network adapter 020 communicates with the other modules of the computer system/server 012 via bus 018. It should be appreciated that although not shown in fig. 5, other hardware and/or software modules may be used in conjunction with the computer system/server 012, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
The processing unit 016 executes the programs stored in the system memory 028, thereby performing the functions and/or methods of the described embodiments of the present invention.
The computer program described above may be provided in a computer storage medium encoded with a computer program that, when executed by one or more computers, causes the one or more computers to perform the method flows and/or apparatus operations shown in the above-described embodiments of the invention.
With the development of time and technology, the meaning of media is more and more extensive, and the propagation path of computer programs is not limited to tangible media any more, and can also be downloaded from a network directly and the like. Any combination of one or more computer-readable media may be employed. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More preferred examples (a non-exhaustive list) of the computer readable storage medium include: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.

Claims (14)

1. An assisted driving method based on obstacle trajectory prediction, characterized by comprising:
acquiring environmental data around the vehicle, which is acquired by a vehicle-mounted sensor;
determining a travelable region of a dynamic obstacle around the host vehicle based on the environmental data;
inputting the historical state information of the dynamic obstacle and a travelable area into an obstacle track prediction model to predict a running track of the dynamic obstacle;
judging a risk condition that the running track of the dynamic obstacle conflicts with the running track of the host vehicle, wherein the running track of the host vehicle is obtained by prediction according to current state information of the host vehicle, the risk condition is a danger coefficient, and the judging of the risk condition that the running track of the dynamic obstacle conflicts with the running track of the host vehicle comprises: when the running track of the dynamic obstacle and the running track of the host vehicle are on the same lane, determining the danger coefficient according to the longitudinal distance difference and the braking safety distance under the current longitudinal speed difference; or, when the running track of the dynamic obstacle and the running track of the host vehicle are on adjacent lanes, determining the danger coefficient according to the transverse distance difference and the safe reaction distance under the current transverse speed difference; and
early warning the host vehicle according to the danger coefficient.
2. The method of claim 1,
the environmental data includes: dynamic obstacles, static obstacles, and traffic signals.
3. The method of claim 2, wherein determining, based on the environmental data, a travelable region for dynamic obstacles around the host vehicle comprises:
analyzing the relations between dynamic obstacles, between the dynamic obstacles and static obstacles, and between the dynamic obstacles and traffic signals according to preset traffic rules, and extracting all travelable areas of the dynamic obstacles.
4. The method of claim 1,
the obstacle trajectory prediction model is a deep neural network model.
5. The method of claim 1, wherein determining a risk condition that the trajectory of the dynamic obstacle conflicts with the trajectory of the host vehicle comprises:
judging the danger coefficient according to the time difference at which the predicted track of the dynamic obstacle and the running track of the host vehicle reach the same position; or according to the speed difference and the distance difference at the same time point.
6. The method of claim 1, wherein the trajectory of the host vehicle is predicted based on current state information of the host vehicle and control commands sent by the host vehicle control system.
7. A driving assistance system based on obstacle trajectory prediction, characterized by comprising:
the acquisition module is used for acquiring the environmental data around the vehicle, which is acquired by the vehicle-mounted sensor;
a travelable region determination module for determining a travelable region of the dynamic obstacle around the host vehicle based on the environmental data;
the obstacle track prediction module is used for inputting the historical state information of the dynamic obstacle and the travelable area into an obstacle track prediction model and predicting the traveling track of the dynamic obstacle;
the judgment module is used for judging a risk condition that the running track of the dynamic obstacle conflicts with the running track of the host vehicle, wherein the running track of the host vehicle is obtained by prediction according to current state information of the host vehicle, the risk condition is a danger coefficient, and the judging of the risk condition that the running track of the dynamic obstacle conflicts with the running track of the host vehicle comprises: when the running track of the dynamic obstacle and the running track of the host vehicle are on the same lane, determining the danger coefficient according to the longitudinal distance difference and the braking safety distance under the current longitudinal speed difference; or, when the running track of the dynamic obstacle and the running track of the host vehicle are on adjacent lanes, determining the danger coefficient according to the transverse distance difference and the safe reaction distance under the current transverse speed difference; and early warning the host vehicle according to the danger coefficient.
8. The system of claim 7,
the environmental data includes: dynamic obstacles, static obstacles, and traffic signals.
9. The system of claim 8, wherein the drivable region determining module is specifically configured to:
analyzing the relations between dynamic obstacles, between the dynamic obstacles and static obstacles, and between the dynamic obstacles and traffic signals according to preset traffic rules, and extracting all travelable areas of the dynamic obstacles.
10. The system of claim 7,
the obstacle trajectory prediction model is a deep neural network model.
11. The system of claim 7, wherein the determining module is specifically configured to:
judging the danger coefficient according to the time difference at which the predicted track of the dynamic obstacle and the running track of the host vehicle reach the same position; or according to the speed difference and the distance difference at the same time point.
12. The system of claim 7, further comprising a vehicle trajectory prediction module for predicting the vehicle trajectory based on current state information of the vehicle and control commands sent by the vehicle control system.
13. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor when executing the program implements the method of any one of claims 1 to 6.
14. A computer-readable storage medium, on which a computer program is stored, which program, when being executed by a processor, carries out the method of any one of claims 1 to 6.
CN201711351196.9A 2017-12-15 2017-12-15 Auxiliary driving method and system based on obstacle trajectory prediction Active CN109927719B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711351196.9A CN109927719B (en) 2017-12-15 2017-12-15 Auxiliary driving method and system based on obstacle trajectory prediction

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711351196.9A CN109927719B (en) 2017-12-15 2017-12-15 Auxiliary driving method and system based on obstacle trajectory prediction

Publications (2)

Publication Number Publication Date
CN109927719A CN109927719A (en) 2019-06-25
CN109927719B true CN109927719B (en) 2022-03-25

Family

ID=66980214

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711351196.9A Active CN109927719B (en) 2017-12-15 2017-12-15 Auxiliary driving method and system based on obstacle trajectory prediction

Country Status (1)

Country Link
CN (1) CN109927719B (en)

Families Citing this family (54)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110371112B (en) * 2019-07-06 2021-10-01 深圳数翔科技有限公司 Intelligent obstacle avoidance system and method for automatic driving vehicle
US20210027629A1 (en) * 2019-07-25 2021-01-28 Baidu Usa Llc Blind area processing for autonomous driving vehicles
US11087477B2 (en) * 2019-07-29 2021-08-10 Honda Motor Co., Ltd. Trajectory prediction
US20210081843A1 (en) * 2019-09-17 2021-03-18 Seyed Ershad BANIJAMALI Methods and systems for observation prediction in autonomous vehicles
CN110758381B (en) * 2019-09-18 2021-05-04 北京汽车集团有限公司 Method and device for generating steering track, storage medium and electronic equipment
CN110615001B (en) * 2019-09-27 2021-04-27 汉纳森(厦门)数据股份有限公司 Driving safety reminding method, device and medium based on CAN data
WO2021062593A1 (en) * 2019-09-30 2021-04-08 Beijing Voyager Technology Co., Ltd. Systems and methods for predicting bicycle trajectory
CN110654380B (en) * 2019-10-09 2023-12-15 北京百度网讯科技有限公司 Method and device for controlling a vehicle
CN110654381B (en) * 2019-10-09 2021-08-31 北京百度网讯科技有限公司 Method and device for controlling a vehicle
KR20210044963A (en) * 2019-10-15 2021-04-26 현대자동차주식회사 Apparatus for determining lane change path of autonomous vehicle and method thereof
CN110834630A (en) * 2019-10-22 2020-02-25 中国第一汽车股份有限公司 Vehicle driving control method and device, vehicle and storage medium
CN112784628B (en) * 2019-11-06 2024-03-19 北京地平线机器人技术研发有限公司 Track prediction method, neural network training method and device for track prediction
CN110703770A (en) * 2019-11-12 2020-01-17 中科(徐州)人工智能研究院有限公司 Method and device for controlling automatic running of track inspection vehicle
CN112824832A (en) * 2019-11-20 2021-05-21 炬星科技(深圳)有限公司 Method, system, device and computer readable storage medium for predicting movement locus of obstacle
CN110843776B (en) * 2019-11-29 2022-04-15 深圳市元征科技股份有限公司 Vehicle anti-collision method and device
CN111027195B (en) * 2019-12-03 2023-02-28 阿波罗智能技术(北京)有限公司 Simulation scene generation method, device and equipment
CN111002980B (en) * 2019-12-10 2021-04-30 苏州智加科技有限公司 Road obstacle trajectory prediction method and system based on deep learning
US11860634B2 (en) * 2019-12-12 2024-01-02 Baidu Usa Llc Lane-attention: predicting vehicles' moving trajectories by learning their attention over lanes
CN111114554B (en) * 2019-12-16 2021-06-11 苏州智加科技有限公司 Method, device, terminal and storage medium for predicting travel track
CN111025297A (en) * 2019-12-24 2020-04-17 京东数字科技控股有限公司 Vehicle monitoring method and device, electronic equipment and storage medium
WO2021134354A1 (en) * 2019-12-30 2021-07-08 深圳元戎启行科技有限公司 Path prediction method and apparatus, computer device, and storage medium
CN113424121A (en) * 2019-12-31 2021-09-21 深圳元戎启行科技有限公司 Vehicle speed control method and device based on automatic driving and computer equipment
US11127142B2 (en) * 2019-12-31 2021-09-21 Baidu Usa Llc Vehicle trajectory prediction model with semantic map and LSTM
CN111177869B (en) * 2020-01-02 2023-09-01 北京百度网讯科技有限公司 Method, device and equipment for determining sensor layout scheme
CN111301404B (en) * 2020-02-06 2022-02-18 北京小马慧行科技有限公司 Vehicle control method and device, storage medium and processor
CN111469836B (en) * 2020-02-28 2022-12-20 广东中科臻恒信息技术有限公司 Obstacle avoidance method and device based on vehicle-mounted unit and road side unit, and storage medium
CN111626097A (en) * 2020-04-09 2020-09-04 吉利汽车研究院(宁波)有限公司 Method and device for predicting future trajectory of obstacle, electronic equipment and storage medium
CN111409631B (en) * 2020-04-10 2022-01-11 新石器慧通(北京)科技有限公司 Vehicle running control method and device, vehicle and storage medium
CN113537258B (en) * 2020-04-16 2024-04-05 北京京东乾石科技有限公司 Action track prediction method and device, computer readable medium and electronic equipment
CN112639821B (en) * 2020-05-11 2021-12-28 华为技术有限公司 Method and system for detecting vehicle travelable area and automatic driving vehicle adopting system
CN111710188B (en) * 2020-05-29 2024-03-29 腾讯科技(深圳)有限公司 Vehicle alarm prompting method, device, electronic equipment and storage medium
CN111982143B (en) * 2020-08-11 2024-01-19 北京汽车研究总院有限公司 Vehicle and vehicle path planning method and device
CN111976726B (en) * 2020-08-26 2022-01-18 中南大学 Steering auxiliary system of intelligent rail vehicle and control method thereof
CN112046494B (en) * 2020-09-11 2021-10-29 中国第一汽车股份有限公司 Vehicle control method, device, equipment and storage medium
CN112092809A (en) * 2020-09-15 2020-12-18 北京罗克维尔斯科技有限公司 Auxiliary reversing method, device and system and vehicle
CN111829545B (en) * 2020-09-16 2021-01-08 深圳裹动智驾科技有限公司 Automatic driving vehicle and dynamic planning method and system for motion trail of automatic driving vehicle
CN113819915A (en) * 2021-03-03 2021-12-21 京东鲲鹏(江苏)科技有限公司 Unmanned vehicle path planning method and related equipment
CN113753038B (en) * 2021-03-16 2023-09-01 京东鲲鹏(江苏)科技有限公司 Track prediction method and device, electronic equipment and storage medium
CN113075668B (en) * 2021-03-25 2024-03-08 广州小鹏自动驾驶科技有限公司 Dynamic obstacle object identification method and device
CN113022593B (en) * 2021-04-09 2022-10-14 长沙智能驾驶研究院有限公司 Obstacle processing method and device and traveling equipment
CN113147794A (en) * 2021-06-03 2021-07-23 北京百度网讯科技有限公司 Method, device and equipment for generating automatic driving early warning information and automatic driving vehicle
CN113264066B (en) * 2021-06-03 2023-05-23 阿波罗智能技术(北京)有限公司 Obstacle track prediction method and device, automatic driving vehicle and road side equipment
CN113205088B (en) * 2021-07-06 2021-09-24 禾多科技(北京)有限公司 Obstacle image presentation method, electronic device, and computer-readable medium
CN113479217B (en) * 2021-07-26 2022-07-29 惠州华阳通用电子有限公司 Lane changing and obstacle avoiding method and system based on automatic driving
CN113641734B (en) * 2021-08-12 2024-04-05 驭势科技(北京)有限公司 Data processing method, device, equipment and medium
CN113734190B (en) * 2021-09-09 2023-04-11 北京百度网讯科技有限公司 Vehicle information prompting method and device, electronic equipment, medium and vehicle
CN114018265B (en) * 2021-10-28 2024-02-02 山东新一代信息产业技术研究院有限公司 Method, equipment and medium for generating running track of inspection robot
CN114162133A (en) * 2021-11-10 2022-03-11 上海人工智能创新中心 Risk assessment method and device for driving scene and computer readable storage medium
CN114446092B (en) * 2022-01-19 2022-12-27 无锡学院 S-shaped road simulated obstacle early warning method based on three-dimensional camera networking
CN114663804A (en) * 2022-03-02 2022-06-24 小米汽车科技有限公司 Driving area detection method, device, mobile equipment and storage medium
CN114596553B (en) * 2022-03-11 2023-01-24 阿波罗智能技术(北京)有限公司 Model training method, trajectory prediction method and device and automatic driving vehicle
CN114670870A (en) * 2022-03-18 2022-06-28 北京智行者科技有限公司 Obstacle SLT space risk field environment modeling method and device and related products
CN115214719B (en) * 2022-05-30 2024-04-05 广州汽车集团股份有限公司 Obstacle track tracking method and device, intelligent driving equipment and storage medium
CN114863685B (en) * 2022-07-06 2022-09-27 北京理工大学 Traffic participant trajectory prediction method and system based on risk acceptance degree

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4207088B2 (en) * 2007-06-20 2009-01-14 トヨタ自動車株式会社 Vehicle travel estimation device
CN105216792A (en) * 2014-06-12 2016-01-06 株式会社日立制作所 Obstacle target in surrounding environment is carried out to the method and apparatus of recognition and tracking
CN104614733B (en) * 2015-01-30 2015-12-09 福州华鹰重工机械有限公司 A kind of dynamic disorder object detecting method
CN105549597B (en) * 2016-02-04 2018-06-26 同济大学 A kind of unmanned vehicle dynamic path planning method based on environmental uncertainty
KR101795250B1 (en) * 2016-05-03 2017-11-07 현대자동차주식회사 Path planning apparatus and method for autonomous vehicle
CN107346611B (en) * 2017-07-20 2021-03-23 北京纵目安驰智能科技有限公司 Obstacle avoidance method and obstacle avoidance system for autonomous driving vehicle

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106873580A (en) * 2015-11-05 2017-06-20 福特全球技术公司 Based on perception data autonomous driving at the intersection

Also Published As

Publication number Publication date
CN109927719A (en) 2019-06-25

Similar Documents

Publication Publication Date Title
CN109927719B (en) Auxiliary driving method and system based on obstacle trajectory prediction
CN109817021B (en) Method and device for avoiding traffic participants in roadside blind areas of laser radar
CN106873580B (en) Autonomous driving at intersections based on perception data
US10926763B2 (en) Recognition and prediction of lane constraints and construction areas in navigation
US10618519B2 (en) Systems and methods for autonomous vehicle lane change control
CN107031650B (en) Predicting vehicle motion based on driver body language
RU2767955C1 (en) Methods and systems for determining the presence of dynamic objects by a computer
JP5620147B2 (en) Movable object prediction apparatus and program
CN110060467B (en) Vehicle control device
CN114442101B (en) Vehicle navigation method, device, equipment and medium based on imaging millimeter wave radar
US11556127B2 (en) Static obstacle map based perception system
RU2769921C2 (en) Methods and systems for automated detection of the presence of objects
RU2750243C2 (en) Method and system for generating a trajectory for a self-driving car (sdc)
RU2757234C2 (en) Method and system for calculating data for controlling the operation of a self-driving car
RU2744012C1 (en) Methods and systems for automated determination of objects presence
RU2750118C1 (en) Methods and processors for controlling the operation of a self-driving car
Virdi Using deep learning to predict obstacle trajectories for collision avoidance in autonomous vehicles
RU2745804C1 (en) Method and processor for control of movement of autonomous vehicle in the traffic line
US10654453B2 (en) Systems and methods for low-latency braking action for an autonomous vehicle
CN110371025A (en) Method, system, device, and storage medium for forward collision detection under an overtaking condition
WO2018165199A1 (en) Planning for unknown objects by an autonomous vehicle
CN114093023A (en) Moving body obstruction detection device, system, method, and storage medium
CN112912810A (en) Control method and device
CN114236521A (en) Distance measuring method and device, terminal equipment and automobile
CN114563007B (en) Obstacle motion state prediction method, obstacle motion state prediction device, electronic device, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant