CN113283647A - Method and device for predicting obstacle track and automatic driving vehicle

Publication number: CN113283647A (granted as CN113283647B)
Application number: CN202110546368.8A
Authority: CN (China)
Original language: Chinese (zh)
Inventors: 林渲竺, 郭文城, 尹周建铖, 韩旭
Assignee (original and current): Guangzhou Weride Technology Co Ltd
Legal status: Granted; Active


Classifications

    • G06Q 10/047 Optimisation of routes or paths, e.g. travelling salesman problem
    • G06T 7/20 Image analysis: analysis of motion
    • G06T 7/70 Image analysis: determining position or orientation of objects or cameras
    • G08G 1/096725 Systems involving transmission of highway information, e.g. weather, speed limits, where the received information generates an automatic action on the vehicle control


Abstract

The application discloses a method and a device for predicting an obstacle track, and an autonomous vehicle. The method comprises: detecting first traffic light information and obstacle information; positioning the first traffic light information and the obstacle information in a semantic map, so as to determine the intersection position where the first traffic light information is located and the obstacle position corresponding to the obstacle information; predicting, in the semantic map and according to the obstacle position, estimated motion tracks of the corresponding obstacle in a plurality of motion directions; determining second traffic light information corresponding to each estimated motion track based on the first traffic light information and the intersection position; and processing each estimated motion track according to its corresponding second traffic light information to obtain a target motion track. The efficiency and accuracy of trajectory prediction are thereby improved, as is the rationality of the autonomous vehicle's path planning.

Description

Method and device for predicting obstacle track and automatic driving vehicle
Technical Field
Embodiments of the present application relate to the technical field of automatic driving, and in particular to a method and a device for predicting an obstacle track and an autonomous vehicle.
Background
An autonomous vehicle is a kind of intelligent automobile, also called a wheeled mobile robot, which relies mainly on an in-vehicle, computer-based unmanned driving system to perceive its environment, plan its path, and control the vehicle autonomously; that is, electronic technology is used to make the automobile drive itself in a human-like or fully automatic manner. Trajectory prediction is an important link in unmanned driving technology: by predicting the future tracks of obstacles around the autonomous vehicle, it provides a reasonable understanding of the vehicle's environment and thereby guides the decision-making, planning, and control of the autonomous vehicle.
Trajectory prediction methods in the related art generally use a vehicle dynamics model to predict an obstacle's track from its position, speed, and posture sensed over a period of time. However, because the sensing range of an autonomous vehicle is limited, the vehicle may have sensing blind areas when passing through a complex environment such as a road intersection. When the autonomous vehicle cannot sense the environment at a blind-area position, the accuracy of trajectory prediction is low, and the resulting path planning of the unmanned vehicle lacks rationality.
Disclosure of Invention
The application provides a method and a device for predicting an obstacle track, and an autonomous vehicle, aiming to solve the problem of low trajectory-prediction accuracy caused by environmental information that cannot be sensed during existing trajectory prediction.
In a first aspect, an embodiment of the present application provides a method for predicting an obstacle trajectory, where the method includes:
detecting first traffic light information and obstacle information;
positioning the first traffic light information and the obstacle information in a semantic map, so as to determine the intersection position where the first traffic light information is located and the obstacle position corresponding to the obstacle information;
predicting, in the semantic map and according to the obstacle position, estimated motion tracks of the corresponding obstacle in a plurality of motion directions;
determining second traffic light information corresponding to each estimated motion track based on the first traffic light information and the intersection position;
and processing each estimated motion track according to its corresponding second traffic light information to obtain the target motion track.
In a second aspect, an embodiment of the present application further provides an apparatus for predicting an obstacle trajectory, where the apparatus includes:
the sensing module is used for detecting first traffic light information and obstacle information;
the map positioning module is used for positioning the first traffic light information and the obstacle information in a semantic map, so as to determine the intersection position where the first traffic light information is located and the obstacle position corresponding to the obstacle information;
the track prediction module is used for predicting, in the semantic map and according to the obstacle position, the estimated motion tracks of the corresponding obstacle in a plurality of motion directions;
the track traffic light information determining module is used for determining second traffic light information corresponding to each estimated motion track based on the first traffic light information and the intersection position;
and the track processing module is used for processing each estimated motion track according to its corresponding second traffic light information to obtain the target motion track.
In a third aspect, embodiments of the present application further provide an autonomous vehicle, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor executes the program to implement the method described above.
In a fourth aspect, the present application further provides a computer-readable storage medium, on which a computer program is stored, where the computer program is executed by a processor to implement the method described above.
The application has the following beneficial effects:
in this embodiment, after the first traffic light information and the obstacle information are detected, they can be positioned in the semantic map to determine the intersection position where the first traffic light information is located and the obstacle position corresponding to the obstacle information, and the estimated motion tracks of the corresponding obstacle in a plurality of motion directions can then be predicted in the semantic map according to the obstacle position. Based on the first traffic light information and the intersection position, second traffic light information corresponding to each estimated motion track can be predicted; each estimated motion track is then processed according to its corresponding second traffic light information, finally yielding the target motion track of the obstacle under the influence of traffic light rules. This improves the efficiency and accuracy of trajectory prediction and the rationality of the autonomous vehicle's path planning.
Drawings
Fig. 1 is a flowchart of an embodiment of a method for predicting an obstacle trajectory according to an embodiment of the present application;
fig. 2 is an exemplary flowchart for determining second traffic light information corresponding to each predicted motion trajectory according to an embodiment of the present application;
fig. 3 is a block diagram of an embodiment of an apparatus for predicting an obstacle trajectory according to a second embodiment of the present application;
fig. 4 is a schematic structural diagram of an autonomous vehicle according to a third embodiment of the present application.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the application and are not limiting of the application. It should be further noted that, for the convenience of description, only some of the structures related to the present application are shown in the drawings, not all of the structures.
Example one
Fig. 1 is a flowchart of an embodiment of a method for predicting an obstacle track according to an embodiment of the present disclosure. The embodiment may be applied to an autonomous vehicle, which can sense its environment and navigate without human input. Autonomous vehicles may be equipped with a high-precision GPS navigation system and several laser scanners for detecting obstacles, and may also be configured to sense their surroundings using technologies such as cameras, radar, light detection and ranging (lidar), GPS, and other sensors.
The present embodiment may include the following steps:
step 110, detecting first traffic light information and obstacle information.
Illustratively, the first traffic light information may include, but is not limited to: the traffic light position, the traffic light shape, the traffic light color, and the like. The obstacle information may include, but is not limited to: the bounding boxes of one or more obstacles, position information, movement speed, and the like.
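As a concrete illustration, the two detection results might be carried in structures like the following (all field names here are hypothetical choices for the sketch, not taken from the patent):

```python
from dataclasses import dataclass

@dataclass
class TrafficLightInfo:
    position: tuple   # (x, y) coordinates of the light on the actual road
    shape: str        # e.g. "disc", "left_arrow", "straight_arrow"
    color: str        # e.g. "red", "yellow", "green", "flashing_green"

@dataclass
class ObstacleInfo:
    bounding_box: tuple   # (x_min, y_min, x_max, y_max) detection box
    position: tuple       # (x, y) of the obstacle on the actual road
    speed: float          # movement speed, m/s
    heading: float        # movement direction, radians
```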
In this embodiment, the autonomous vehicle may include a sensing module, the sensing module may include one or more sensors, and the autonomous vehicle may detect the first traffic light information and the obstacle information using the sensing module.
In one implementation, the sensing module may include a lidar; during the movement of the autonomous vehicle, obstacles can be detected by the lidar to obtain corresponding point cloud data. The point cloud data is then analyzed according to preset identification rules to obtain the obstacle information; alternatively, the point cloud data may be identified by a machine learning method, or by a hybrid approach combining machine learning with identification rules, which this embodiment does not limit.
In other implementations, the sensing module may further acquire an image of the obstacle or the traffic light through the camera, and then may obtain the first traffic light information and the obstacle information by analyzing the image.
And 120, positioning the first traffic light information and the obstacle information in a semantic map to determine the intersection position where the first traffic light information is located and the obstacle position corresponding to the obstacle information.
In this embodiment, the navigation system of the autonomous vehicle may include a high-definition semantic map, i.e., a high-precision map labeled with driving-assistance information such as road points and various actual road details; the semantic map may additionally carry rich semantic information such as traffic light colors, lane speed limits, and the positions where left turns begin.
In one embodiment, the positioning points of the first traffic light information and of the obstacle information in the semantic map can each be determined by a positioning-point algorithm, thereby obtaining the intersection position where the first traffic light information is located and the obstacle position corresponding to the obstacle information.
In one example, the process of determining the positioning point of the obstacle in the semantic map by the positioning-point algorithm may be as follows. Here, the obstacle information may include the coordinate position of the obstacle on the actual road and its movement speed. The coordinate position of the obstacle is first located in the semantic map, and an initial area is selected with this position as the center and a preset radius as the selection range; the initial area covers the road distribution of the region where the obstacle is located, which can be represented by a plurality of road points. A preset screening rule is then applied to these road points, finally yielding one or more target road points matching the obstacle's coordinate position in the semantic map; these target road points serve as the positioning points of the obstacle in the semantic map, i.e., the obstacle position.
Illustratively, the screening rule may filter road points according to at least one of: the movement speed of the obstacle, the direction of each road point, the distance between road points, the distance from a road point to the obstacle's coordinates, and the connection relationship between road points. For example, the target road points may be obtained by deleting road points whose direction deviates from the obstacle's movement-speed direction by more than a preset angle threshold, by deleting, among road points connected pairwise on each road, all points except the one closest to the obstacle, or by deleting road points whose distance to the obstacle in the semantic map exceeds the minimum such distance by more than half the road width.
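The screening rules above can be sketched as a small filter. This is a hypothetical illustration: the `RoadPoint` fields, the threshold values, and the exact combination of rules are assumptions, not taken from the patent.

```python
import math
from dataclasses import dataclass

@dataclass(frozen=True)
class RoadPoint:
    x: float
    y: float
    heading: float   # direction of travel at this waypoint, radians
    road_id: int     # road (lane) this waypoint belongs to

def filter_anchor_points(points, obs_x, obs_y, obs_heading,
                         max_angle=math.radians(60), max_dist=2.0):
    """Screen candidate waypoints around an obstacle (assumed rules
    modelled on the patent text: heading agreement, distance bound,
    one nearest point per road)."""
    # Rule 1: drop waypoints whose heading disagrees with the obstacle's
    # movement direction by more than the angle threshold.
    diff = lambda a, b: abs(math.atan2(math.sin(a - b), math.cos(a - b)))
    aligned = [p for p in points if diff(p.heading, obs_heading) <= max_angle]
    # Rule 2: drop waypoints farther than `max_dist` (roughly half a
    # road width, an assumed value) from the obstacle.
    near = [p for p in aligned
            if math.hypot(p.x - obs_x, p.y - obs_y) <= max_dist]
    # Rule 3: among connected waypoints, keep only the single closest
    # waypoint on each road.
    best = {}
    for p in near:
        d = math.hypot(p.x - obs_x, p.y - obs_y)
        if p.road_id not in best or d < best[p.road_id][0]:
            best[p.road_id] = (d, p)
    return [p for _, p in best.values()]
```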
In another example, the process of determining the traffic light positioning point corresponding to the first traffic light information in the semantic map may be as follows. Here, the first traffic light information may include the actual coordinate position of the traffic light, the traffic light shape (e.g., round disc or arrow), and the traffic light color (e.g., red, yellow, green). During positioning, the coordinate position of the current autonomous vehicle is obtained and, by reference to the determination method for obstacle positioning points, a search area corresponding to that coordinate position is determined in the semantic map. The traffic light points within the search area are taken as candidate positioning points, and the candidate point that lies in front of the autonomous vehicle and is closest to the actual coordinate position in the first traffic light information is selected as the positioning point corresponding to the first traffic light information; this positioning point represents the intersection position where the first traffic light information is located.
And step 130, predicting, in the semantic map and according to the obstacle position, the estimated motion tracks of the corresponding obstacle in a plurality of motion directions.
It should be noted that the obstacle information sensed by the sensing module may include one or more obstacles, the obstacle position of each obstacle in the semantic map needs to be determined, and then the estimated motion trajectories of each obstacle in multiple motion directions are predicted according to the obstacle position of the obstacle.
In implementation, the forward search function provided by the semantic map can be used: starting from the one or more positioning points of the obstacle, all tracks reachable from those positioning points on the semantic map are searched and taken as the estimated motion tracks of the obstacle. For example, if the obstacle is located on a lane ahead of the intersection from which one can go straight, turn left, or turn right, the estimated motion tracks generated from the obstacle's positioning point may include a straight track, a left-turn track, and a right-turn track.
In other implementations, the estimated motion tracks of the obstacle may instead be predicted by a deep learning model. For example, a trajectory prediction model for the semantic map may be generated, whose input is the information of a positioning point and whose output is the estimated motion tracks, in a plurality of motion directions, that can be generated from that positioning point. This embodiment does not limit the manner of generating such a trajectory prediction model.
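The forward search described above can be sketched as a recursive enumeration over a lane-successor graph. The `successors` mapping (waypoint id to the list of next waypoint ids) is an assumed representation of the semantic map's connectivity, introduced only for illustration.

```python
def forward_search(successors, start_points, max_depth=10):
    """Enumerate every waypoint path reachable from the anchor points by
    following lane successor links; each complete path is one estimated
    motion track. `max_depth` bounds the search horizon."""
    trajectories = []

    def expand(path):
        nxt = successors.get(path[-1], [])
        if not nxt or len(path) >= max_depth:
            trajectories.append(path)   # dead end or horizon reached
            return
        for n in nxt:
            expand(path + [n])          # branch at forks (straight/left/right)

    for s in start_points:
        expand([s])
    return trajectories
```

For an anchor on a lane that allows going straight, turning left, and turning right, the search branches into three tracks, matching the example in the text.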
And 140, determining second traffic light information corresponding to each estimated motion track based on the first traffic light information and the intersection position.
In this step, after the estimated motion tracks of the obstacle in a plurality of motion directions are obtained, each estimated motion track is analyzed to obtain its second traffic light information at the intersection. Illustratively, the second traffic light information may likewise include a traffic light shape, a traffic light color, and the like. For the example above, where the estimated motion tracks of the obstacle include a straight track, a left-turn track, and a right-turn track, the three tracks correspond respectively to the three traffic light signals at the intersection, namely the shapes and colors of the lights for going straight, turning left, and turning right.
In one embodiment, as shown in fig. 2, step 140 may further include the steps of:
and 140-1, predicting third traffic light information of other intersections at the intersection position according to the first traffic light information.
In this embodiment, if only the first traffic light information for one direction of the current intersection can be obtained, the third traffic light information for the other directions of the intersection can be inferred a priori by combining it with the semantic map. Illustratively, the third traffic light information may also include, but is not limited to: the traffic light shape (such as a round disc light or an arrow light, where arrow lights include left-turn, straight, right-turn, and U-turn arrow lights), the traffic light color, the position of the traffic light stop line, and the like.
In one embodiment, step 140-1 may further include the steps of:
and 140-1-1, judging whether the traffic light corresponding to the first traffic light information is a red light or not according to the color of the traffic light.
And 140-1-2, if the traffic light corresponding to the first traffic light information is not a red light, predicting third traffic light information of other intersections at the intersection position according to the shape of the traffic light and a preset traffic light rule, wherein the third traffic light information is the traffic light information of which the color of the traffic light is red.
At a crossroads, if the approach where the currently detected first traffic light information is located is taken as the current intersection, the approaches in the other directions of the intersection position can be divided into a left intersection, a right intersection, and an opposite intersection: the left intersection is the one reached from the current intersection by a left turn, the right intersection the one reached by a right turn, and the opposite intersection the one reached by going straight.
This embodiment applies to guessing, from the current intersection when its light is not red, which traffic lights in the other directions are red; the green states of the traffic lights in the other directions are never guessed.
Specifically, if the straight-ahead light of the current intersection is green, yellow, or flashing green, the third traffic light information of the intersections in the other directions can be predicted. The straight-ahead light may be a round disc light or a straight arrow light, which this embodiment does not limit.
If the straight-ahead light of the current intersection is green, yellow, or flashing green, the third traffic light information of the other intersections can be predicted as follows:
the straight-ahead light and the left-turn light of the left intersection are red (the straight-ahead and left-turn lights mentioned in this embodiment may be round disc lights or arrow lights, which this embodiment does not limit);
the straight-ahead light and the left-turn light of the right intersection are likewise red;
if the current intersection or the opposite intersection has a left-turn arrow light or a straight arrow light, the current intersection can be judged to be a protected-left-turn intersection (i.e., one where left turns have a dedicated signal phase), and the left-turn light and the U-turn light of the opposite intersection can be predicted to be red.
If the left-turn light or the U-turn light of the current intersection is green, yellow, or flashing green, the third traffic light information of the other intersections can be predicted as follows:
the straight-ahead light, the left-turn light, and the U-turn light of the left intersection are all red;
the straight-ahead light and the left-turn light of the right intersection are red;
if the current intersection or the opposite intersection has a left-turn arrow light or a straight arrow light, the current intersection can be judged to be a protected-left-turn intersection, and the straight-ahead light of the opposite intersection can be predicted to be red.
And 140-2, judging, in the semantic map, whether a traffic light stop line exists on each estimated motion track.
In this step, since traffic light stop lines are labeled and defined in the semantic map, it can be detected in the semantic map whether a traffic light stop line exists on each estimated motion track, i.e., whether each estimated motion track passes through the intersection.
And 140-3, for an estimated motion track on which a traffic light stop line exists, matching the traffic light position corresponding to that stop line with the traffic light positions corresponding to the third traffic light information.
Specifically, if a traffic light stop line exists on the estimated motion track, the position of the traffic light corresponding to that stop line can be obtained from the semantic map. This position is then matched against the traffic light positions of each intersection obtained in step 140-1, i.e., it is judged whether the traffic light at the stop line is one of the traffic lights predicted in step 140-1.
And 140-4, taking the third traffic light information corresponding to the matched traffic light as the second traffic light information of the traffic light corresponding to the traffic light stop line of the estimated motion track.
In this step, if the traffic light position corresponding to the stop line of the estimated motion track matches one of the traffic light positions predicted in step 140-1, the third traffic light information of that predicted traffic light can be used directly as the second traffic light information of the traffic light at the stop line. For example, if the intersection corresponding to the stop line of the estimated motion track is the left intersection of step 140-1, the traffic light information of the left intersection is used as the second traffic light information of the traffic light at that stop line.
In other embodiments, besides obtaining the traffic light information for the current stop line from the predictions of step 140-1, the actual traffic light information may be collected by a sensor; the traffic light position of the stop line is then matched against the collected traffic light positions to determine the traffic light information at the stop line.
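The position matching of steps 140-3 and 140-4 can be sketched as nearest-within-tolerance matching. The tolerance value and the data layout (`predicted_lights` mapping an approach name to a position and its light information) are assumptions for the sketch.

```python
import math

def match_stop_line_light(stop_line_light_pos, predicted_lights, tol=3.0):
    """Match the traffic light at a track's stop line against the
    intersection's predicted lights by position; on success, the matched
    third-light info is returned to serve as the second traffic light
    information. `predicted_lights`: {approach: ((x, y), light_info)}."""
    best_name, best_d = None, tol
    for name, (pos, _info) in predicted_lights.items():
        d = math.hypot(pos[0] - stop_line_light_pos[0],
                       pos[1] - stop_line_light_pos[1])
        if d <= best_d:
            best_name, best_d = name, d
    if best_name is None:
        return None   # stop line's light is not one of the predicted lights
    return predicted_lights[best_name][1]
```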
And 150, processing each estimated motion track according to the second traffic light information corresponding to each estimated motion track to obtain a target motion track.
In this step, after the second traffic light information corresponding to each estimated motion track is obtained, each estimated motion track can be processed according to a preset traffic light rule to finally obtain the target motion track of the obstacle.
In one embodiment, step 150 may further include the steps of:
150-1, obtaining the historical motion track of the obstacle from the semantic map, and combining the historical motion track and the estimated motion tracks into motion tracks.
In implementation, after the autonomous vehicle detects an obstacle, it may record the obstacle's position in the semantic map; the positions recorded over time constitute the obstacle's historical motion track. Combining the historical motion track with the estimated motion tracks yields a plurality of motion tracks for the obstacle.
In one implementation, the historical motion trajectory and the predicted motion trajectory of the obstacle may be combined to obtain a plurality of motion trajectories as follows:
traversing each historical motion track in the semantic map, testing the connectivity of the currently traversed historical motion track with each estimated motion track in the semantic map, and merging each estimated motion track that can be connected with the current historical motion track into a motion track.
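The connect-and-merge operation can be sketched as follows. Track points are modelled as coordinate tuples; the two connectivity tests (shared track points, or a small gap between the history's end point and the prediction's start point) follow the patent's description, while the distance threshold is an assumed value.

```python
import math

def can_connect(history, predicted, dist_threshold=1.0):
    """True if a historical track and an estimated track can be merged:
    either they share a track point, or the end of the history is within
    `dist_threshold` of the start of the prediction."""
    if set(history) & set(predicted):   # overlapping track points
        return True
    (ex, ey), (sx, sy) = history[-1], predicted[0]
    return math.hypot(ex - sx, ey - sy) <= dist_threshold

def merge_tracks(history, predicted):
    """Concatenate the two tracks, dropping prediction points that
    already appear in the history."""
    seen = set(history)
    return list(history) + [p for p in predicted if p not in seen]
```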
In implementation, the connectivity between the current historical motion track and a given estimated motion track can be judged as follows. First, judge whether the two tracks overlap in the semantic map, for example by checking whether they share any track points; if they do, the tracks overlap and can be judged connectable. If the current historical motion track and the estimated motion track do not overlap in the semantic map, obtain the end track point of the historical motion track and the start track point of the estimated motion track, and calculate the distance between them in the semantic map: if the distance is less than or equal to a preset distance threshold, the two tracks can be connected; if it is greater than the threshold, they cannot.
And 150-2, judging whether the obstacle has run a red light or is about to run a red light according to the motion tracks and the second traffic light information.
After the motion tracks of the obstacle are obtained, whether the obstacle has passed the red light or is about to pass the red light can be judged according to the second traffic light information corresponding to each motion track.
In one embodiment, step 150-2 may further include the steps of:
150-2-1, detecting the position of the traffic light stop line of the motion track, and determining the traffic light color corresponding to the position of the traffic light stop line according to the second traffic light information.
After the motion tracks are obtained, the semantic map can be searched along each motion track to determine whether a traffic light stop line exists on it, and if so, the position of the stop line is determined. If the traffic light corresponding to the stop line position is the one for which the second traffic light information was determined, the traffic light color is read from the second traffic light information and used as the traffic light color corresponding to the stop line position.
And 150-2-2, if the color of the traffic light corresponding to the position of the traffic light stop line is red, comparing the position of the obstacle with the position of the traffic light stop line to determine whether the obstacle passes through the traffic light stop line.
If the traffic light color corresponding to the stop line position is determined to be red, that is, the traffic light corresponding to the stop line is red, it is further judged whether the current obstacle has passed the red light. During implementation, the position of the obstacle can be compared with the stop line position, using the driving direction of the obstacle as the reference: if the stop line lies behind the obstacle, the obstacle is judged to have passed the stop line; conversely, if the stop line lies ahead of the obstacle, the obstacle is judged not to have passed it.
In practice, when judging whether the obstacle has passed the stop line, its movement speed can also be analyzed to determine whether it exhibits deceleration behavior before the stop line. If the movement speed shows that the obstacle decelerates sharply (e.g., with a deceleration exceeding a preset deceleration threshold) before the stop line but cannot stop in front of it, the obstacle can still be judged to have passed the red light. Conversely, if the obstacle decelerates sharply and can stop before the stop line, it can be judged not to have passed the red light yet, that is, it is about to pass the red light.
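The stop-line comparison and deceleration analysis above can be sketched as follows. Positions are simplified to one-dimensional arc lengths along the driving direction; the status labels, the constant-deceleration braking model, and the 2 m/s² threshold are assumptions for illustration:

```python
def red_light_status(obstacle_pos, obstacle_speed, deceleration,
                     stopline_pos, decel_threshold=2.0):
    """Classify an obstacle against a red stop line.

    Positions are 1-D arc lengths along the driving direction, so the
    stop line is "behind" the obstacle when stopline_pos < obstacle_pos.
    The threshold value and the braking model are illustrative."""
    if stopline_pos < obstacle_pos:
        return "passed_red_light"          # stop line already behind
    if deceleration > decel_threshold:
        # Braking distance under constant deceleration: v^2 / (2a).
        braking_distance = obstacle_speed ** 2 / (2.0 * deceleration)
        if obstacle_pos + braking_distance <= stopline_pos:
            return "about_to_pass_red_light"  # can stop before the line
        return "passed_red_light"             # braking, but cannot stop
    # No strong braking observed yet; treated here (an assumption) as
    # not yet having passed the light.
    return "about_to_pass_red_light"
```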
And 150-3, filtering out the motion tracks that have passed the red light, updating the estimated motion track within the motion tracks that are about to pass the red light, and finally obtaining the target motion track.
In one embodiment, if a motion track of the obstacle is judged to have passed the red light, that motion track can be filtered out directly. In other embodiments, the track that has passed the red light can also be processed in combination with the judgments on the obstacle's other motion tracks; for example, if the traffic lights of the other motion tracks are not detected or predicted to be red, the track that has passed the red light can be filtered out. Alternatively, if all motion tracks of the obstacle are judged to have passed the red light, the tracks are left unprocessed, but the current obstacle can be marked, for example as a special obstacle requiring attention. Likewise, if none of the obstacle's motion tracks involves passing a red light, the tracks are left unprocessed.
On the other hand, if a motion track of the obstacle is judged not to have passed the red light yet but to be about to do so, that track may be left unprocessed. If, however, the obstacle exhibits deceleration behavior before the red light, the estimated motion track within the current motion track is updated according to that deceleration behavior.
In an embodiment, the step of updating the estimated motion trail in the motion trail going through the red light further includes the following steps:
if the obstacle is judged, according to its movement speed, to decelerate before the traffic light stop line, determining deceleration information, wherein the deceleration information comprises a deceleration start position, a stop position, a deceleration time, and a deceleration; determining a deceleration distance according to the deceleration time and the deceleration; locating the deceleration start position and the stop position in the estimated motion track of the motion track to obtain a road section to be updated; and updating the road section to be updated, using the located deceleration start position as the starting point and the deceleration distance as the section length.
In this embodiment, if the movement speed of the obstacle recorded over a period of time shows that the obstacle decelerates before the stop line with a deceleration exceeding the preset deceleration threshold, the position on the actual road where the obstacle starts decelerating is taken as the deceleration start position, and the position where the obstacle finally comes to rest after decelerating is taken as the stop position; meanwhile, the deceleration of the obstacle is obtained, and the period from the moment it starts decelerating to the moment it stops is calculated as the deceleration time. The deceleration distance is then calculated from the deceleration time and the deceleration. The deceleration start position and the stop position are located in the corresponding estimated motion track of the semantic map, the two located points forming the road section to be updated. A new road section is then determined, using the located deceleration start position as the starting point and the deceleration distance as the section length, and this new section replaces the section to be updated in the estimated motion track. Updating the estimated motion track in this way makes the obstacle stop before the traffic light stop line in the prediction.
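A minimal sketch of the deceleration-distance calculation and section update, assuming uniform deceleration to a stop (so the deceleration distance is a·t²/2) and a trajectory represented as evenly sampled 1-D arc-length positions; the function names and sampling step are hypothetical:

```python
def deceleration_distance(decel_time, deceleration):
    """Distance covered while braking uniformly to a stop: the initial
    speed is v0 = a*t, so the distance is v0*t/2 = a*t^2/2."""
    return 0.5 * deceleration * decel_time ** 2

def update_section(trajectory, decel_start_idx, decel_dist, step=1.0):
    """Replace the section after the located deceleration start point
    with a new section of length decel_dist, so the estimated track
    ends where the obstacle actually stops.

    trajectory is a list of 1-D arc-length positions sampled every
    `step` metres -- an assumed representation."""
    start_pos = trajectory[decel_start_idx]
    n_points = int(decel_dist // step) + 1
    new_section = [start_pos + i * step for i in range(n_points)]
    # Keep the track up to the deceleration start, append the new
    # section, and drop the remainder of the old estimated track.
    return trajectory[:decel_start_idx] + new_section
```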
Through such screening, updating, and similar processing, the estimated motion tracks that finally remain (including any updated estimated motion tracks) can be used as the target motion tracks of the current obstacle under the influence of the traffic light rule.
In other embodiments, in order to reduce unnecessary maneuvering of the autonomous vehicle and lessen the influence of the obstacle on it, all estimated motion tracks of the obstacle may also be filtered out, thereby applying a stricter traffic light rule.
In this embodiment, after the first traffic light information and the obstacle information are detected, they can be located in the semantic map to determine the intersection position where the first traffic light information is located and the obstacle position corresponding to the obstacle information. According to the obstacle position, estimated motion tracks of the obstacle in a plurality of motion directions are predicted in the semantic map. From the first traffic light information and the intersection position where the traffic light is located, second traffic light information corresponding to each estimated motion track can be predicted; each estimated motion track is then processed according to its second traffic light information to finally obtain the target motion track of the obstacle under the influence of the traffic light rule. This improves the efficiency and accuracy of trajectory prediction and the reasonableness of the autonomous vehicle's path planning.
Example two
Fig. 3 is a structural block diagram of an embodiment of an apparatus for predicting an obstacle trajectory according to the second embodiment of the present application. The apparatus may be applied to an autonomous vehicle and may include the following modules:
the sensing module 310 is configured to detect first traffic light information and obstacle information;
the map positioning module 320 is configured to position the first traffic light information and the obstacle information in a semantic map to determine the intersection position where the first traffic light information is located and the obstacle position corresponding to the obstacle information;
the track prediction module 330 is configured to predict, according to the position of the obstacle, predicted motion tracks of the obstacle in multiple motion directions, where the obstacle corresponds to the position of the obstacle, in the semantic map;
a track traffic light information determining module 340, configured to determine, based on the first traffic light information and the intersection position, second traffic light information corresponding to each predicted movement track;
and the track processing module 350 is configured to process each estimated motion track according to the second traffic light information corresponding to each estimated motion track to obtain a target motion track.
In one embodiment, the track traffic light information determination module 340 may include the following sub-modules:
the other intersection traffic light information determining submodule is used for predicting third traffic light information of other intersections at the intersection position according to the first traffic light information;
the traffic light stop line judgment submodule is used for judging whether a traffic light stop line exists in each estimated motion track in the semantic map;
and the traffic light matching sub-module is used for, if an estimated motion track with a traffic light stop line exists, matching the traffic light position corresponding to that stop line with the traffic light position corresponding to the third traffic light information, and taking the third traffic light information of the matched traffic light as the second traffic light information of the traffic light corresponding to the stop line of the estimated motion track.
In one embodiment, the first traffic light information includes a traffic light shape and a traffic light color;
the other intersection traffic light information determination submodule is specifically configured to:
judging whether the traffic light corresponding to the first traffic light information is a red light or not according to the color of the traffic light;
and if the traffic light corresponding to the first traffic light information is not a red light, predicting third traffic light information of other intersections at the intersection position according to the shape of the traffic light and a preset traffic light rule, wherein the third traffic light information is the traffic light information of which the traffic light color is red.
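As an illustration of this sub-module, third traffic light information for the other approaches of the intersection can be derived from the detected light via a rule table; the mapping below is an assumed example, not the patent's actual preset traffic light rule:

```python
def infer_third_light_info(light_shape, light_color):
    """Predict third traffic light information (red lights for the other
    approaches of the intersection) from the first traffic light
    information.  The rule table below is an assumed example; a real
    system would use the intersection's signal-phase configuration."""
    if light_color == "red":
        # Per the sub-module above, inference only proceeds when the
        # detected light is not red.
        return []
    rules = {
        # shape of the non-red ego light -> approaches assumed to be red
        "round":       ["cross_left", "cross_right"],
        "arrow_left":  ["oncoming", "cross_left", "cross_right"],
        "arrow_right": [],
    }
    return [{"approach": a, "color": "red"}
            for a in rules.get(light_shape, [])]
```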
In one embodiment, the trajectory processing module 350 includes:
the motion track generation submodule is used for acquiring the historical motion track of the obstacle from the semantic map and combining the historical motion track and the estimated motion track into motion tracks;
the red light judgment sub-module is used for judging that the obstacle has passed the red light or is about to pass the red light according to the motion tracks and the second traffic light information;
and the track processing submodule is used for filtering out the motion tracks that have passed the red light, updating the estimated motion track within the motion tracks that are about to pass the red light, and finally obtaining the target motion track.
In one embodiment, the apparatus further comprises the following modules:
and the obstacle marking module is used for marking the obstacle when all the motion tracks are tracks that have passed the red light.
In one embodiment, the obstacle information includes a movement speed of the obstacle; the red light judgment submodule is specifically configured to:
detecting the position of a traffic light stop line of the motion track, and determining the color of a traffic light corresponding to the position of the traffic light stop line according to the second traffic light information;
and if the traffic light corresponding to the stop line position is red, comparing the position of the obstacle with the stop line position to determine whether the obstacle has passed the stop line.
In an embodiment, the trajectory processing submodule is specifically configured to:
determining deceleration information if the obstacle is judged, according to its movement speed, to decelerate before the traffic light stop line, wherein the deceleration information comprises a deceleration start position, a stop position, a deceleration time, and a deceleration;
determining a deceleration distance according to the deceleration time and the deceleration;
positioning the deceleration starting position and the stopping position in the estimated motion trail to obtain a road section to be updated;
and updating the road section to be updated by taking the positioning point of the deceleration starting position as a starting point and the deceleration distance as the length of the road section.
It should be noted that the apparatus for predicting an obstacle trajectory provided in the embodiment of the present application may perform the method for predicting an obstacle trajectory provided in the embodiment of the present application, and has corresponding functional modules and beneficial effects of the execution method.
EXAMPLE III
Fig. 4 is a schematic structural diagram of an autonomous vehicle according to the third embodiment of the present disclosure. As shown in fig. 4, the autonomous vehicle includes a processor 410, a memory 420, an input device 430, and an output device 440; the number of processors 410 in the autonomous vehicle may be one or more, with one processor 410 being illustrated in fig. 4; the processor 410, memory 420, input device 430, and output device 440 in the autonomous vehicle may be connected by a bus or other means, as exemplified by the bus connection in fig. 4.
The memory 420 serves as a computer-readable storage medium for storing software programs, computer-executable programs, and modules, such as program instructions/modules corresponding to the methods in the embodiments of the present application. The processor 410 executes various functional applications and data processing of the autonomous vehicle, i.e., implements the methods described above, by executing software programs, instructions and modules stored in the memory 420.
The memory 420 may mainly include a program storage area and a data storage area, wherein the program storage area can store an operating system and an application program required by at least one function, and the data storage area may store data created according to the use of the terminal, and the like. Further, the memory 420 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some examples, memory 420 may further include memory located remotely from processor 410, which may be connected to the autonomous vehicle via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input device 430 may be used to receive entered numerical or character information and to generate key signal inputs relating to user settings and function controls of the autonomous vehicle. The output device 440 may include a display device such as a display screen.
Example four
The fourth embodiment of the present application further provides a storage medium containing computer-executable instructions which, when executed by a processor, are configured to perform the method described in the first embodiment.
From the above description of the embodiments, it will be clear to those skilled in the art that the present application can be implemented by software plus necessary general-purpose hardware, and certainly can be implemented by hardware alone, although the former is generally the preferred implementation. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which may be stored in a computer-readable storage medium, such as a floppy disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a flash memory (FLASH), a hard disk, or an optical disk of a computer, and which includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device) to execute the methods described in the embodiments of the present application.
It should be noted that, in the embodiment of the apparatus, the included units and modules are merely divided according to functional logic, but are not limited to the above division as long as the corresponding functions can be implemented; in addition, specific names of the functional units are only used for distinguishing one functional unit from another, and are not used for limiting the protection scope of the application.
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present application and the technical principles employed. It will be understood by those skilled in the art that the present application is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the application. Therefore, although the present application has been described in more detail with reference to the above embodiments, the present application is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present application, and the scope of the present application is determined by the scope of the appended claims.

Claims (10)

1. A method of obstacle trajectory prediction, the method comprising:
detecting first traffic light information and obstacle information;
positioning the first traffic light information and the obstacle information in a semantic map so as to determine the intersection position where the first traffic light information is located and the obstacle position corresponding to the obstacle information;
predicting, in the semantic map according to the obstacle position, estimated motion tracks of the obstacle corresponding to the obstacle position in a plurality of motion directions;
determining second traffic light information corresponding to each estimated motion track based on the first traffic light information and the intersection position;
and processing each estimated motion track according to the second traffic light information corresponding to each estimated motion track to obtain the target motion track.
2. The method of claim 1, wherein predicting second traffic light information corresponding to each predicted motion trajectory based on the first traffic light information and the intersection location comprises:
according to the first traffic light information, predicting third traffic light information of other intersections at the intersection position;
judging whether traffic light stop lines exist in each estimated motion track or not in the semantic map;
if the estimated motion track of the traffic light stop line exists, matching the traffic light position corresponding to the traffic light stop line of the estimated motion track with the traffic light position corresponding to the third traffic light information;
and taking the third traffic light information corresponding to the matched traffic light as the second traffic light information of the traffic light corresponding to the traffic light stop line of the estimated motion trail.
3. The method of claim 2, wherein the first traffic light information comprises a traffic light shape and a traffic light color;
the predicting of the third traffic light information of other intersections at the intersection position according to the first traffic light information comprises the following steps:
judging whether the traffic light corresponding to the first traffic light information is a red light or not according to the color of the traffic light;
and if the traffic light corresponding to the first traffic light information is not a red light, predicting third traffic light information of other intersections at the intersection position according to the shape of the traffic light and a preset traffic light rule, wherein the third traffic light information is the traffic light information of which the traffic light color is red.
4. The method according to any one of claims 1 to 3, wherein the processing each predicted motion trail according to the second traffic light information corresponding to each predicted motion trail to obtain the target motion trail comprises:
obtaining a historical motion track of the obstacle from the semantic map, and combining the historical motion track and the estimated motion track into motion tracks;
judging that the obstacle has passed the red light or is about to pass the red light according to the motion tracks and the second traffic light information;
and filtering out the motion tracks that have passed the red light, updating the estimated motion track within the motion tracks that are about to pass the red light, and finally obtaining the target motion track.
5. The method of claim 4, further comprising:
and when all the motion tracks are the motion tracks which have passed through the red light, marking the obstacle.
6. The method of claim 4, wherein the obstacle information includes a movement speed of the obstacle; and the judging that the obstacle has passed the red light or is about to pass the red light according to the motion tracks and the second traffic light information comprises the following steps:
detecting the position of a traffic light stop line of the motion track, and determining the color of a traffic light corresponding to the position of the traffic light stop line according to the second traffic light information;
and if the traffic light corresponding to the stop line position is red, comparing the position of the obstacle with the stop line position to determine whether the obstacle has passed the stop line.
7. The method of claim 4, wherein updating the estimated motion trajectory of the motion trajectories that will pass through the red light comprises:
determining deceleration information if the obstacle is judged, according to its movement speed, to decelerate before the traffic light stop line, wherein the deceleration information comprises a deceleration start position, a stop position, a deceleration time, and a deceleration;
determining a deceleration distance according to the deceleration time and the deceleration;
positioning the deceleration starting position and the stopping position in the estimated motion trail to obtain a road section to be updated;
and updating the road section to be updated by taking the positioning point of the deceleration starting position as a starting point and the deceleration distance as the length of the road section.
8. An apparatus for obstacle trajectory prediction, the apparatus comprising:
the sensing module is used for detecting first traffic light information and obstacle information;
the map positioning module is used for positioning the first traffic light information and the obstacle information in a semantic map so as to determine the intersection position where the first traffic light information is located and the obstacle position corresponding to the obstacle information;
the track prediction module is used for predicting, in the semantic map according to the obstacle position, estimated motion tracks of the obstacle corresponding to the obstacle position in a plurality of motion directions;
the track traffic light information determining module is used for determining second traffic light information corresponding to each estimated motion track based on the first traffic light information and the intersection position;
and the track processing module is used for processing each estimated motion track according to the second traffic light information corresponding to each estimated motion track to obtain the target motion track.
9. An autonomous vehicle comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor, when executing the program, implements the method according to any of claims 1-7.
10. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1 to 7.
CN202110546368.8A 2021-05-19 2021-05-19 Method and device for predicting obstacle track and automatic driving vehicle Active CN113283647B (en)

Publications (2)

Publication Number Publication Date
CN113283647A true CN113283647A (en) 2021-08-20
CN113283647B CN113283647B (en) 2023-04-07


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113793520A (en) * 2021-09-15 2021-12-14 苏州挚途科技有限公司 Vehicle track prediction method and device and electronic equipment
CN113895456A (en) * 2021-09-08 2022-01-07 北京汽车研究总院有限公司 Intersection driving method and device for automatic driving vehicle, vehicle and medium
CN113968235A (en) * 2021-11-30 2022-01-25 广州文远知行科技有限公司 Method, device, equipment and medium for determining regional hierarchy of obstacle
CN115292435A (en) * 2022-10-09 2022-11-04 智道网联科技(北京)有限公司 High-precision map updating method and device, electronic equipment and storage medium
CN115790606A (en) * 2023-01-09 2023-03-14 深圳鹏行智能研究有限公司 Trajectory prediction method, trajectory prediction device, robot, and storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109760675A (en) * 2019-03-12 2019-05-17 百度在线网络技术(北京)有限公司 Predict method, apparatus, storage medium and the terminal device of track of vehicle
CN109801508A (en) * 2019-02-26 2019-05-24 百度在线网络技术(北京)有限公司 The motion profile prediction technique and device of barrier at crossing
CN110660256A (en) * 2019-10-22 2020-01-07 北京地平线机器人技术研发有限公司 Method and device for estimating state of signal lamp
CN111284485A (en) * 2019-10-10 2020-06-16 中国第一汽车股份有限公司 Method and device for predicting driving behavior of obstacle vehicle, vehicle and storage medium
CN111380555A (en) * 2020-02-28 2020-07-07 北京京东乾石科技有限公司 Vehicle behavior prediction method and device, electronic device, and storage medium
CN111626097A (en) * 2020-04-09 2020-09-04 吉利汽车研究院(宁波)有限公司 Method and device for predicting future trajectory of obstacle, electronic equipment and storage medium
CN112078592A (en) * 2019-06-13 2020-12-15 初速度(苏州)科技有限公司 Method and device for predicting vehicle behavior and/or vehicle track
CN112212874A (en) * 2020-11-09 2021-01-12 福建牧月科技有限公司 Vehicle track prediction method and device, electronic equipment and computer readable medium


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Jiang Fangtao et al., "Logistics Information System", Xidian University Press, 31 May 2019 *
Long Xingming et al., "Graphical Programming and Applications of Single-Chip Microcomputers", Chongqing University Press, 30 June 2020 *


Also Published As

Publication number Publication date
CN113283647B (en) 2023-04-07

Similar Documents

Publication Publication Date Title
CN113283647B (en) Method and device for predicting obstacle track and automatic driving vehicle
JP7050100B2 (en) Method, device, terminal, storage medium, and program for predicting the motion trajectory of intersection obstacles
US11462022B2 (en) Traffic signal analysis system
CN109863513B (en) Neural network system for autonomous vehicle control
CN109606354B (en) Automatic parking method and auxiliary system based on hierarchical planning
JP6443550B2 (en) Scene evaluation device, driving support device, and scene evaluation method
US9933781B1 (en) Data-driven planning for automated driving
KR20220054278A (en) Travelling track prediction method and device for vehicle
Tsugawa et al. An architecture for cooperative driving of automated vehicles
JP6800575B2 (en) Methods and systems to assist drivers in their own vehicles
CN111583715B (en) Vehicle track prediction method, vehicle collision early warning method, device and storage medium
CN111788102A (en) Odometer system and method for tracking traffic lights
Kohlhaas et al. Semantic state space for high-level maneuver planning in structured traffic scenes
JPWO2017013749A1 (en) Operation planning device, travel support device, and operation planning method
US11753012B2 (en) Systems and methods for controlling the operation of an autonomous vehicle using multiple traffic light detectors
CN113071487B (en) Automatic driving vehicle control method and device and cloud equipment
KR102501489B1 (en) Obstacle Avoidance Trajectory Planning for Autonomous Vehicles
JPWO2019106789A1 (en) Processing apparatus and processing method
CN112325898B (en) Path planning method, device, equipment and storage medium
US20210343143A1 (en) Method and system for traffic light signal detection and usage
Luettel et al. Combining multiple robot behaviors for complex off-road missions
US20230415767A1 (en) Protolanes for testing autonomous vehicle intent
JP2023085060A (en) Lighting state discrimination apparatus, lighting state discrimination method, and computer program for lighting state discrimination
JP2022123239A (en) Division line recognition device
CN112904843B (en) Automatic driving scene determining method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant