CN111832667A - Driving behavior decision-making system of park unmanned sightseeing vehicle - Google Patents


Info

Publication number
CN111832667A
CN111832667A (application CN202010977809.5A; granted as CN111832667B)
Authority
CN
China
Prior art keywords
vehicle
control point
module
obstacle
behavior decision
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010977809.5A
Other languages
Chinese (zh)
Other versions
CN111832667B (en
Inventor
华一丁
郭蓬
龚进峰
戎辉
唐风敏
李鑫慧
李长娟
王梦丹
郝晶晶
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Automotive Technology and Research Center Co Ltd
CATARC Tianjin Automotive Engineering Research Institute Co Ltd
Original Assignee
China Automotive Technology and Research Center Co Ltd
CATARC Tianjin Automotive Engineering Research Institute Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Automotive Technology and Research Center Co Ltd, CATARC Tianjin Automotive Engineering Research Institute Co Ltd filed Critical China Automotive Technology and Research Center Co Ltd
Priority to CN202010977809.5A priority Critical patent/CN111832667B/en
Publication of CN111832667A publication Critical patent/CN111832667A/en
Application granted granted Critical
Publication of CN111832667B publication Critical patent/CN111832667B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/86Combinations of radar systems with non-radar systems, e.g. sonar, direction finder
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/86Combinations of radar systems with non-radar systems, e.g. sonar, direction finder
    • G01S13/865Combination of radar systems with lidar systems
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/88Radar or analogous systems specially adapted for specific applications
    • G01S13/93Radar or analogous systems specially adapted for specific applications for anti-collision purposes
    • G01S13/931Radar or analogous systems specially adapted for specific applications for anti-collision purposes of land vehicles
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S19/00Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S19/38Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
    • G01S19/39Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S19/42Determining position
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10Complex mathematical operations
    • G06F17/11Complex mathematical operations for solving equations, e.g. nonlinear equations, general mathematical optimization problems

Abstract

The invention provides a driving behavior decision-making system for a park unmanned sightseeing vehicle, comprising an environment sensing module, a preprocessing module, a scene judgment module, and a behavior decision module. The environment sensing module acquires the vehicle's surrounding environment information through on-board sensors and outputs the information required by the unmanned vehicle; the preprocessing module processes the data output by the environment sensing module and filters out obstacles; the scene judgment module makes a scene judgment according to the vehicle positioning information acquired by the on-board sensors and sends the judgment result to the behavior decision module; and the behavior decision module calculates the vehicle's behavior state at the next moment from the information output by the preprocessing and scene judgment modules, combined with the current vehicle state. The method has low computational complexity and short algorithm execution time, and offers high practical value for low-speed unmanned vehicles in parks.

Description

Driving behavior decision-making system of park unmanned sightseeing vehicle
Technical Field
The invention belongs to the technical field of automatic driving, and particularly relates to a driving behavior decision system of a park unmanned sightseeing vehicle.
Background
The automobile of the 21st century will develop dramatically in safety, comfort, convenience, and efficiency. Autonomous-driving technology sketches a blueprint of road traffic without accidents and has become one of the driving forces of sustainable development in the automotive industry. The development of intelligent vehicles and intelligent transportation can play an irreplaceable role in further improving the traffic environment, reducing congestion, preventing accidents, and lowering the cost of social activity. However, because the intelligent connected vehicle integrates a large number of new technologies, such as information perception, intelligent decision-making, automatic vehicle control, and network communication, on top of traditional automotive technology, it poses great challenges to the development of the related technologies; at the same time, the complexity and diversity of the environment limit that development, making unmanned driving difficult to deploy. To address these problems, major technology companies and vehicle manufacturers have gradually developed unmanned-driving applications for specific scenarios such as industrial parks, highways, university campuses, and autonomous valet parking.
At present, behavior decision-making for unmanned driving in a park usually relies on supervised learning to train the decision model. Supervised training, however, requires collecting and labeling a large amount of sample data, which consumes enormous human resources, so model training is inefficient. Moreover, because the sample data are difficult to expand, the accuracy of the trained model's behavior decisions is low.
Disclosure of Invention
In view of the above, the present invention provides a driving behavior decision system for a park unmanned sightseeing vehicle, so as to solve the problems of the heavy workload of collecting and labeling sample data, the low efficiency of model training, and the low accuracy of behavior decisions when a supervised-learning method is used to train the behavior decision model. The system ensures the rapidity and real-time performance of behavior decision-making, so that the vehicle runs safely and efficiently on the road and completes the corresponding driving tasks.
In order to achieve the purpose, the technical scheme of the invention is realized as follows:
a behavior decision-making system for unmanned sightseeing vehicles in a park comprises an environment sensing module, a preprocessing module, a scene judging module and a behavior decision-making module;
the environment perception module outputs the relevant information required by the unmanned vehicle through sensing and processing of the vehicle's surroundings by the on-board sensors; the on-board sensors comprise a 16-line lidar sensor, a GPS (global positioning system) positioning sensor, a millimeter-wave radar sensor, and a high-precision map.
The 16-line lidar sensor is used for detecting obstacles around the unmanned vehicle and transmitting the detection results to the preprocessing module over a network connection. According to their moving speed, surrounding obstacles can be simply divided into static obstacles and moving obstacles. The obstacle information output by the 16-line lidar comprises: obstacle number, obstacle lateral distance, obstacle longitudinal distance, obstacle lateral velocity, obstacle longitudinal velocity, and obstacle attribute (moving/static);
the GPS positioning sensor is used for positioning the vehicle and transmitting the positioning information to the scene judgment module in a serial port mode; calculating the scene of the vehicle according to the position information of the vehicle coordinates in the high-precision map;
the millimeter-wave radar is used for detecting the running speed of an obstacle in front of the vehicle;
the high-precision map stores intersection information, lane-line information, and road-boundary information.
The preprocessing module is used for preprocessing output information of a 16-line laser radar sensor and a millimeter wave radar sensor in the environment sensing module and comprises an obstacle coordinate conversion unit and an obstacle filtering unit;
the coordinate conversion unit converts the coordinates of all the obstacles output by the environment sensing module into a vehicle body coordinate system through a coordinate conversion formula;
the obstacle preprocessing unit comprehensively considers information such as the current vehicle speed, the current steering wheel angle, the transverse distance and the longitudinal distance between the obstacle and the vehicle, the speed of the obstacle and the like, and filters the obstacle which has no influence on the decision of the vehicle behavior.
The scene judgment module makes a scene judgment according to the vehicle positioning information provided by the GPS positioning sensor in the environment sensing module and sends the judgment result to the behavior decision module. The scene judgment module distinguishes five scenes: in-road, intersection, pre-intersection, parking point, and departure point.
The behavior decision module calculates the vehicle's behavior state at the next moment according to the obstacle information output by the preprocessing module and the scene type output by the scene judgment module, comprehensively considering the current vehicle state (vehicle speed, steering-wheel angle, heading angle, and so on). The behavior states comprise seven types: normal tracking, speed-limited driving, car following, emergency braking, obstacle-avoidance detour, stopping at the parking point, and stopping and waiting in front of an obstacle.
Compared with the prior art, the driving behavior decision-making system of the park unmanned sightseeing vehicle has the following advantages:
compared with a deep learning calculation method adopted by a traditional unmanned automobile behavior decision making system, the method adopted by the invention has the advantages of lower calculation complexity and shorter algorithm execution time, and has higher popularization and use values for the low-speed unmanned automobile in the park.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate an embodiment of the invention and, together with the description, serve to explain the invention and not to limit the invention. In the drawings:
fig. 1 is a schematic diagram illustrating a front obstacle-free behavior decision result according to an embodiment of the present invention;
fig. 2 is a schematic diagram illustrating a decision result of a behavior of an obstacle in front according to an embodiment of the present invention;
FIG. 3 is a schematic diagram illustrating the operating principle of the driving behavior decision system of a park unmanned sightseeing vehicle according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a control point setting according to an embodiment of the present invention;
FIG. 5 is a flow chart of a behavior decision algorithm for determining the absence of obstacles according to an embodiment of the present invention;
fig. 6 is a flowchart of a behavior decision algorithm for determining the existence of an obstacle according to an embodiment of the present invention.
Detailed Description
It should be noted that the embodiments and features of the embodiments may be combined with each other without conflict.
In the description of the present invention, it is to be understood that the terms "center", "longitudinal", "lateral", "up", "down", "front", "back", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", and the like, indicate orientations or positional relationships based on those shown in the drawings, and are used only for convenience in describing the present invention and for simplicity in description, and do not indicate or imply that the referenced devices or elements must have a particular orientation, be constructed and operated in a particular orientation, and thus, are not to be construed as limiting the present invention. Furthermore, the terms "first", "second", etc. are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first," "second," etc. may explicitly or implicitly include one or more of that feature. In the description of the present invention, "a plurality" means two or more unless otherwise specified.
In the description of the present invention, it should be noted that, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "connected" are to be construed broadly, e.g., as meaning either a fixed connection, a removable connection, or an integral connection; can be mechanically or electrically connected; they may be connected directly or indirectly through intervening media, or they may be interconnected between two elements. The specific meaning of the above terms in the present invention can be understood by those of ordinary skill in the art through specific situations.
The present invention will be described in detail below with reference to the embodiments with reference to the attached drawings.
The working principle of the present invention is shown in fig. 3. The behavior decision system first reads the data measured by the on-board sensors and performs the corresponding coordinate conversion, then filters out obstacles that do not influence the behavior decision; next, it judges the current scene of the vehicle by combining the vehicle's GPS coordinates with the high-precision map information; finally, it integrates this information to make a behavior decision. The specific steps are as follows:
a. The environment sensing module reads the data of the 16-line lidar and the millimeter-wave radar and outputs, for each obstacle, the vector [ObNum, LidarCoorObLaDis, LidarCoorObLoDis, ObLaVel, ObLoVel, ObAtt],
where ObNum is the obstacle index (0 ≤ ObNum ≤ total number of obstacles), LidarCoorObLaDis is the obstacle's lateral distance in the lidar coordinate system, LidarCoorObLoDis is its longitudinal distance in the lidar coordinate system, ObLaVel is its lateral velocity, ObLoVel is its longitudinal velocity, and ObAtt is its attribute, encoded (the original shows the encoding only as an equation image) as ObAtt = 0 for a static obstacle and ObAtt = 1 for a moving obstacle.
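The obstacle record described above can be sketched as a small data structure. This is an illustrative Python sketch only; the patent specifies no code, and the field names are paraphrases of the vector entries:

```python
from dataclasses import dataclass

@dataclass
class Obstacle:
    """One lidar output record:
    [ObNum, LidarCoorObLaDis, LidarCoorObLoDis, ObLaVel, ObLoVel, ObAtt]."""
    num: int          # ObNum: obstacle index, 0 <= num <= total count
    lat_dist: float   # LidarCoorObLaDis: lateral distance, lidar frame
    lon_dist: float   # LidarCoorObLoDis: longitudinal distance, lidar frame
    lat_vel: float    # ObLaVel: lateral velocity
    lon_vel: float    # ObLoVel: longitudinal velocity
    moving: bool      # ObAtt: False = static obstacle, True = moving obstacle
```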
b. The obstacle coordinates output in the lidar coordinate system are converted into the vehicle-body coordinate system (the conversion formula appears only as an equation image in the original). The vehicle-body coordinate system takes the center point of the vehicle's rear axle as the coordinate origin, the axis perpendicular to the rear axle as the X axis, and the axis parallel to the rear axle as the Y axis. The transformed obstacle vector
[ObNum, CarCoorObLaDis, CarCoorObLoDis, ObLaVel, ObLoVel, ObAtt] is passed to the preprocessing module;
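Since the patent's conversion formula is given only as an image, the following is a generic 2-D rigid-transform sketch consistent with the stated frame definition (body origin at the rear-axle center). The mounting offsets `mount_x`, `mount_y`, `mount_yaw` are assumed illustrative values, not values from the patent:

```python
import math

def lidar_to_body(x_lidar, y_lidar, mount_x=2.5, mount_y=0.0, mount_yaw=0.0):
    """Transform a point from the lidar frame to the vehicle-body frame
    (origin at the rear-axle center).  mount_x/mount_y/mount_yaw describe
    the lidar's mounting pose in the body frame -- illustrative values."""
    cos_t, sin_t = math.cos(mount_yaw), math.sin(mount_yaw)
    x_body = cos_t * x_lidar - sin_t * y_lidar + mount_x  # rotate, then translate
    y_body = sin_t * x_lidar + cos_t * y_lidar + mount_y
    return x_body, y_body
```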
c. The environment sensing module reads the GPS sensor data and inputs it to the scene judgment module;
d. The scene judgment module computes the distances between the vehicle's GPS position, input by the environment sensing module, and the control points marked in the high-precision map to obtain the current scene of the vehicle. The control-point layout is shown in fig. 4. The scene judgment module distinguishes five scenes: in-road, intersection, pre-intersection, parking point, and departure point. The specific steps are as follows:
(1) Read the GPS coordinates of the current vehicle and the control-point information in the high-precision map. An example of the control-point setting is shown in fig. 4, in which control point 1 and control point 6 are respectively the vehicle's departure point and parking point, control points 2, 4, 7 and 9 are intersection entry points, and control points 3, 5, 8 and 10 are intersection exit points.
(2) Calculate in turn the distances between the current vehicle and the 10 control points, and take the two control points closest to the vehicle;
(3) scene judgment:
when the vehicle position meets any one of the following conditions, the scene judgment result is the intersection:
condition 1: the vehicle position being between control point 2 and control point 3
Condition 2: the vehicle position being between control point 4 and control point 5
Condition 3: the vehicle position being between control point 7 and control point 8
Condition 4: the vehicle position being between control point 9 and control point 10
When the vehicle position meets any one of the following conditions, the scene judgment result is a pre-intersection:
condition 1: the vehicle position is between control point 3 and control point 4 and the distance from control point 4 is less than a threshold;
condition 2: the vehicle position is between control point 6 and control point 7 and the distance from control point 6 is less than the threshold;
condition 3: the vehicle position is between control point 8 and control point 9 and the distance from control point 9 is less than a threshold;
condition 4: the vehicle position is between the control point 1 and the control point 2 and the distance from the control point 2 is less than the threshold value;
When the vehicle position meets the following condition, the scene judgment result is the departure point:
condition: the vehicle position is between control point 10 and control point 1 and the distance to control point 1 is less than the threshold;
When the vehicle position meets the following condition, the scene judgment result is the parking point:
condition: the vehicle position is between control point 5 and control point 6 and the distance to control point 6 is less than the threshold;
When the vehicle position meets none of the above conditions, the scene judgment result is in-road.
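The scene-judgment rules above can be sketched as a lookup over the segment between the two nearest control points. This is an illustrative Python sketch, not the patent's implementation: the 15 m threshold, the planar-projection assumption for GPS coordinates, and the function names are all assumptions:

```python
import math

def nearest_two(vehicle_xy, control_points):
    """Return the ids of the two control points closest to the vehicle.
    control_points: id -> (x, y) in a local planar frame (assumed already
    projected from GPS coordinates)."""
    dist = {cid: math.hypot(vehicle_xy[0] - x, vehicle_xy[1] - y)
            for cid, (x, y) in control_points.items()}
    return sorted(dist, key=dist.get)[:2]

# segments whose interior is an intersection
INTERSECTION = {(2, 3), (4, 5), (7, 8), (9, 10)}
# segment -> control point whose distance is checked against the threshold
PRE_INTERSECTION = {(3, 4): 4, (6, 7): 6, (8, 9): 9, (1, 2): 2}

def judge_scene(span, dist_to, threshold=15.0):
    """span: (a, b), the segment of consecutive control points the vehicle
    lies on; dist_to: control-point id -> distance from the vehicle.
    The threshold value is illustrative."""
    if span in INTERSECTION:
        return "intersection"
    if span in PRE_INTERSECTION and dist_to[PRE_INTERSECTION[span]] < threshold:
        return "pre-intersection"
    if span == (10, 1) and dist_to[1] < threshold:
        return "departure point"
    if span == (5, 6) and dist_to[6] < threshold:
        return "parking point"
    return "in-road"
```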
e. The preprocessing module filters out obstacles that do not affect the behavior decision according to the following rules:
(1) the obstacle is a static obstacle and CarCoorObLoDis ≥ MaxLoThreshold, where MaxLoThreshold is a set longitudinal threshold computed from the current vehicle speed VehicleSpeed and the comfortable deceleration ComfortDeceleration (the formula appears only as an equation image in the original; it is consistent with the braking-distance form MaxLoThreshold = VehicleSpeed² / (2·ComfortDeceleration));
(2) the obstacle is a static obstacle and CarCoorObLaDis ≥ MaxLaThreshold, where MaxLaThreshold = 0.5·VehicleWidth + SafetyValue is a set lateral threshold, VehicleWidth is the vehicle width, and SafetyValue is a set safety margin adjusted according to the actual debugging conditions;
(3) the obstacle is a longitudinally moving obstacle and CarCoorObLoDis ≥ MaxLoThreshold, where MaxLoThreshold is a set longitudinal threshold computed from VehicleSpeed, the obstacle longitudinal speed ObLoVel, ComfortDeceleration, and a safety margin SafeValue adjusted according to the actual conditions (the formula appears only as an equation image in the original; it is consistent with MaxLoThreshold = (VehicleSpeed − ObLoVel)² / (2·ComfortDeceleration) + SafeValue);
(4) the obstacle is a longitudinally moving obstacle and CarCoorObLaDis ≥ MaxLaThreshold, where MaxLaThreshold = 0.5·VehicleWidth + SafetyValue is a set lateral threshold, VehicleWidth is the vehicle width, and SafetyValue is a set safety margin adjusted according to the actual situation;
An obstacle that meets any one of the four rules above is filtered out.
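The four filtering rules can be sketched as a single predicate. This is an illustrative Python sketch: the longitudinal thresholds follow the braking-distance reading of the patent's image-only equations, and every numeric parameter value is an assumption, not a value from the patent:

```python
def should_filter(ob, vehicle_speed, vehicle_width=1.8,
                  comfort_decel=1.5, safety_lat=0.5, safety_lon=2.0):
    """Return True when the obstacle has no influence on the behavior
    decision and may be dropped.  ob: dict with 'lat_dist' (CarCoorObLaDis),
    'lon_dist' (CarCoorObLoDis), 'lon_vel' (ObLoVel), 'moving' (ObAtt)."""
    max_lat = 0.5 * vehicle_width + safety_lat            # rules (2) and (4)
    if ob['moving']:                                      # rule (3)
        closing = max(vehicle_speed - ob['lon_vel'], 0.0)
        max_lon = closing ** 2 / (2 * comfort_decel) + safety_lon
    else:                                                 # rule (1)
        max_lon = vehicle_speed ** 2 / (2 * comfort_decel)
    return ob['lon_dist'] >= max_lon or abs(ob['lat_dist']) >= max_lat
```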
f. Make the vehicle behavior decision by synthesizing the obstacle information, the vehicle scene, and the current vehicle state.
Fig. 5 shows the flow of the behavior decision module's algorithm when no obstacle exists. The specific steps are:
a. when no obstacle exists in front of the current vehicle, acquire the vehicle scene;
b. when the vehicle scene is in-road or the departure point, output normal tracking as the behavior decision result;
c. when the vehicle scene is the parking point, output stopping at the parking point as the behavior decision result;
d. otherwise (intersection or pre-intersection), output speed-limited driving.
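The no-obstacle flow above is a simple mapping from scene to behavior, sketched here in illustrative Python (the scene and behavior labels are paraphrases of the patent's terms):

```python
def decide_no_obstacle(scene):
    """Behavior decision when no obstacle is ahead (Fig. 5 flow)."""
    if scene in ("in-road", "departure point"):
        return "normal tracking"
    if scene == "parking point":
        return "stop at parking point"
    # intersection and pre-intersection scenes
    return "speed-limited driving"
```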
Fig. 6 shows the flow of the behavior decision algorithm when an obstacle exists. The specific steps are:
a. When an obstacle exists in front of the current vehicle, judge whether it is within the collision range: when
CarCoorObLoDis ≤ VehicleSpeed² / (2·BrakingDeceleration) + α·VehicleSpeed + d_extra
(the original shows this condition only as an equation image), a collision risk is considered to exist and the behavior decision result is emergency braking. In this condition, BrakingDeceleration is the vehicle's emergency-braking deceleration, α is a braking-delay compensation factor, and d_extra is half the vehicle-body length;
b. Judge the obstacle attribute. When the obstacle is a static obstacle, check the detour conditions: when the following conditions are met, the behavior decision result is obstacle-avoidance detour; otherwise it is stopping and waiting in front of the obstacle:
condition 1: no vehicle is approaching in the adjacent lane;
condition 2: CarCoorObLoDis ≥ k1·VehicleSpeed + d1, where k1 is a speed factor and d1 is a set threshold;
c. When the obstacle is a moving obstacle, judge the vehicle scene: if the scene is the parking point, the behavior decision result is stopping and waiting in front of the obstacle; if the scene is neither in-road, nor the departure point, nor the parking point, the behavior decision result is car following;
d. When the vehicle scene is in-road or the departure point, judge the following range: if MinSpeedDifferThreshold ≤ ObLoVel − VehicleSpeed ≤ MaxSpeedDifferThreshold, the behavior decision result is car following, where MaxSpeedDifferThreshold and MinSpeedDifferThreshold are speed-difference thresholds modified according to the actual debugging results;
e. Otherwise, judge the detour range: when the following conditions are met, the behavior decision result is obstacle-avoidance detour; otherwise it is normal tracking:
condition 1: CarCoorObLoDis ≥ k2·VehicleSpeed + d2, where k2 is a speed factor and d2 is a set threshold;
condition 2: VehicleSpeed − ObLoVel ≥ SpeedThreshold, where SpeedThreshold is a speed threshold;
f. End.
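Steps a–f above can be sketched as one decision function. This is an illustrative Python sketch of the Fig. 6 flow: all numeric parameter values (braking deceleration, delay factor, k1/d1, k2/d2, speed-difference band) are assumed tuning values, not values from the patent:

```python
def decide_with_obstacle(ob, scene, vehicle_speed,
                         braking_decel=4.0, alpha=0.3, d_extra=1.5,
                         k1=1.0, d1=8.0, k2=1.0, d2=10.0,
                         speed_threshold=1.0, min_dv=-1.0, max_dv=1.0,
                         adjacent_lane_clear=True):
    """Behavior decision with an obstacle ahead.  ob: dict with 'lon_dist'
    (CarCoorObLoDis), 'lon_vel' (ObLoVel), 'moving' (ObAtt)."""
    # a. collision-range check -> emergency braking
    if ob['lon_dist'] <= (vehicle_speed ** 2 / (2 * braking_decel)
                          + alpha * vehicle_speed + d_extra):
        return "emergency braking"
    # b. static obstacle: detour when the adjacent lane is clear and the
    #    obstacle is far enough ahead, otherwise stop and wait
    if not ob['moving']:
        if adjacent_lane_clear and ob['lon_dist'] >= k1 * vehicle_speed + d1:
            return "obstacle-avoidance detour"
        return "stop and wait before obstacle"
    # c. moving obstacle in non-driving scenes
    if scene == "parking point":
        return "stop and wait before obstacle"
    if scene not in ("in-road", "departure point"):
        return "car following"
    # d. follow when the longitudinal speed difference is inside the band
    dv = ob['lon_vel'] - vehicle_speed
    if min_dv <= dv <= max_dv:
        return "car following"
    # e. detour-range check, else keep normal tracking
    if (ob['lon_dist'] >= k2 * vehicle_speed + d2
            and vehicle_speed - ob['lon_vel'] >= speed_threshold):
        return "obstacle-avoidance detour"
    return "normal tracking"
```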
Based on the park environment of the China Automotive Technology and Research Center, corresponding routes and control points were set, and the method of this patent was applied to make driving behavior decisions in the park environment. As shown in fig. 1, the dots are the set control points and the five-pointed star is the current vehicle position; the sensing system detects no obstacle ahead and the vehicle is in-road, so the behavior decision result is tracking. As shown in fig. 2, an obstacle is detected ahead and lies within the detour range, so the behavior decision result is detour.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (9)

1. A driving behavior decision-making system of a park unmanned sightseeing vehicle, characterized in that: the system comprises an environment sensing module, a preprocessing module, a scene judgment module and a behavior decision module;
the environment sensing module is used for acquiring the surrounding environment information of the vehicle through the vehicle-mounted sensor and outputting the relevant information required by the unmanned vehicle;
the preprocessing module is used for processing the data output by the environment sensing module and removing obstacles which do not influence the decision of the vehicle behavior;
the scene judgment module is used for making scene judgment according to vehicle positioning information acquired by a vehicle-mounted sensor in the environment perception module and sending a judgment result to the behavior decision module;
the behavior decision module is used for calculating the vehicle behavior state at the next moment by combining the state information of the current vehicle according to the information output by the preprocessing module and the scene judging module;
the preprocessing module comprises an obstacle coordinate conversion unit and an obstacle filtering unit;
the coordinate conversion unit converts the coordinates of all the obstacles output by the environment sensing module into a vehicle body coordinate system through a coordinate conversion formula;
the obstacle filtering unit is used for filtering obstacles which do not influence the decision of the vehicle behavior.
2. The driving behavior decision system of a park unmanned sightseeing vehicle of claim 1, characterized in that: the vehicle-mounted sensors comprise a 16-line lidar sensor, a GPS (global positioning system) positioning sensor, a millimeter-wave radar sensor and a high-precision map;
the 16-line laser radar sensor is used for detecting obstacles around the unmanned vehicle, outputting obstacle information and transmitting the obstacle information to the preprocessing module in a network transmission mode;
the GPS positioning sensor is used for positioning the vehicle and transmitting positioning information to the scene judgment module in a serial port mode;
the millimeter wave radar is used for detecting the running speed of an obstacle in front of the vehicle;
the high-precision map is used for storing intersection information, lane line information and road boundary information.
3. The driving behavior decision system of a park unmanned sightseeing vehicle of claim 1, characterized in that: the preprocessing module filters out obstacles that do not affect the behavior decision according to the following rules:
(1) the obstacle is a static obstacle and CarCoorObLoDis ≥ MaxLoThreshold, where MaxLoThreshold is a set longitudinal threshold, VehicleSpeed is the current vehicle speed, and ComfortDeceleration is the vehicle's comfortable deceleration (the formula appears only as an equation image in the original; it is consistent with the braking-distance form MaxLoThreshold = VehicleSpeed² / (2·ComfortDeceleration));
(2) the obstacle is a static obstacle and CarCoorObLaDis ≥ MaxLaThreshold, where MaxLaThreshold = 0.5·VehicleWidth + SafetyValue is a set lateral threshold, VehicleWidth is the vehicle width, and SafetyValue is a set safety margin adjusted according to the actual debugging conditions;
(3) the obstacle is a longitudinally moving obstacle and CarCoorObLoDis ≥ MaxLoThreshold, where MaxLoThreshold is a set longitudinal threshold computed from VehicleSpeed, the obstacle longitudinal speed ObLoVel, ComfortDeceleration, and a safety margin SafeValue adjusted according to the actual conditions (the formula appears only as an equation image in the original; it is consistent with MaxLoThreshold = (VehicleSpeed − ObLoVel)² / (2·ComfortDeceleration) + SafeValue);
(4) when the obstacle is a longitudinally moving obstacle, the CarCoorobLaDis is more than or equal to MaxLaThreshold, wherein MaxLaThreshold =0.5 VehicleWidth + SafetyValue is a set transverse threshold, VehicleWidth is the vehicle width, SafetyValue is a set safety value, and the value is adjusted according to the actual situation;
and when the obstacle meets any one of the four rules, filtering the obstacle.
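A minimal Python sketch of the four filtering rules above. The longitudinal thresholds appear in the source only as image placeholders, so the braking-distance forms below are assumptions reconstructed from the listed variables; all default numeric values are illustrative, not from the patent.

```python
def should_filter(lon_dist, lat_dist, is_static, obstacle_lon_vel,
                  vehicle_speed, comfort_decel=1.5, safety_lat=0.5,
                  safe_lon=2.0, vehicle_width=1.6):
    """Return True if the obstacle cannot affect the behavior decision.

    Distances are in meters, speeds in m/s. The longitudinal thresholds
    are assumed braking-distance forms (the patent shows them only as
    image placeholders); the lateral threshold follows the claim text.
    """
    # Rules (2) and (4): obstacle laterally outside the vehicle corridor
    max_lat = 0.5 * vehicle_width + safety_lat
    if abs(lat_dist) >= max_lat:
        return True
    if is_static:
        # Rule (1): static obstacle beyond the comfortable stopping distance
        max_lon = vehicle_speed ** 2 / (2 * comfort_decel)
        return lon_dist >= max_lon
    # Rule (3): moving obstacle -- threshold built from the closing speed
    closing = max(vehicle_speed - obstacle_lon_vel, 0.0)
    max_lon = closing ** 2 / (2 * comfort_decel) + safe_lon
    return lon_dist >= max_lon
```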
4. The driving behavior decision system of a park unmanned sightseeing vehicle of claim 1, characterized in that: the scenes in the scene judgment module comprise 5 scenes: in-road, intersection, pre-intersection, parking point and departure point.
5. The driving behavior decision system of the park unmanned sightseeing vehicle of claim 4, characterized in that: the scene judgment calculates the distances between the vehicle GPS position provided by the environment sensing module and the control points marked in the high-precision map to obtain the current scene of the vehicle.
6. The driving behavior decision system of the park unmanned sightseeing vehicle of claim 5, characterized in that: the scene judgment comprises the following specific steps:
(1) reading the GPS coordinates of the current vehicle and the control point information in the high-precision map, where the control points are set as follows: control point 1 and control point 6 are respectively the departure point and the stopping point of the vehicle; control point 2, control point 4, control point 7 and control point 9 are intersection entry points; control point 3, control point 5, control point 8 and control point 10 are intersection exit points;
(2) calculating in turn the distance from the current vehicle to each of the 10 control points, and obtaining the two control points closest to the vehicle;
(3) scene judgment:
① when the vehicle position meets any one of the following conditions, the scene judgment result is intersection:
condition 1: the vehicle position is between control point 2 and control point 3;
condition 2: the vehicle position is between control point 4 and control point 5;
condition 3: the vehicle position is between control point 7 and control point 8;
condition 4: the vehicle position is between control point 9 and control point 10;
② when the vehicle position meets any one of the following conditions, the scene judgment result is pre-intersection:
condition 1: the vehicle position is between control point 3 and control point 4 and its distance from control point 4 is less than a threshold;
condition 2: the vehicle position is between control point 6 and control point 7 and its distance from control point 7 is less than the threshold;
condition 3: the vehicle position is between control point 8 and control point 9 and its distance from control point 9 is less than the threshold;
condition 4: the vehicle position is between control point 1 and control point 2 and its distance from control point 2 is less than the threshold;
③ when the vehicle position meets the following condition, the scene judgment result is the departure point:
condition: the vehicle position is between control point 10 and control point 1 and its distance from control point 1 is less than the threshold;
④ when the vehicle position meets the following condition, the scene judgment result is the parking point:
condition: the vehicle position is between control point 5 and control point 6 and its distance from control point 6 is less than the threshold;
⑤ when the vehicle position meets none of the above conditions, the scene judgment result is in-road.
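The control-point logic above can be illustrated with a simplified model in which the 10 control points are reduced to cumulative arc-length positions along the loop route (the patent works with GPS distances; the positions, route length and threshold below are assumed, illustrative values):

```python
# Control points 1..10 as cumulative arc-length positions (m) along the
# loop route -- illustrative values, not taken from the patent.
CONTROL_POINTS = {i + 1: s for i, s in
                  enumerate([0, 40, 60, 100, 120, 160, 180, 220, 240, 280])}
ROUTE_LENGTH = 300.0
PRE_THRESHOLD = 10.0  # the claim's "distance less than a threshold" (assumed)

def between(pos, a, b):
    """True if pos lies on the arc from control point a to control point b."""
    sa, sb = CONTROL_POINTS[a], CONTROL_POINTS[b]
    if sa <= sb:
        return sa <= pos < sb
    return pos >= sa or pos < sb  # segment wraps past the route start

def dist_ahead(pos, cp):
    """Arc distance from pos forward to control point cp along the loop."""
    return (CONTROL_POINTS[cp] - pos) % ROUTE_LENGTH

def judge_scene(pos):
    pos %= ROUTE_LENGTH
    # (1) intersection: between an entry point and its matching exit point
    for a, b in [(2, 3), (4, 5), (7, 8), (9, 10)]:
        if between(pos, a, b):
            return "intersection"
    # (2) pre-intersection: close to the next entry point
    for a, b in [(3, 4), (6, 7), (8, 9), (1, 2)]:
        if between(pos, a, b) and dist_ahead(pos, b) < PRE_THRESHOLD:
            return "pre-intersection"
    # (3) departure point: between point 10 and point 1, near point 1
    if between(pos, 10, 1) and dist_ahead(pos, 1) < PRE_THRESHOLD:
        return "departure point"
    # (4) parking point: between point 5 and point 6, near point 6
    if between(pos, 5, 6) and dist_ahead(pos, 6) < PRE_THRESHOLD:
        return "parking point"
    return "in road"                       # (5) none of the above
```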
7. The driving behavior decision system of a park unmanned sightseeing vehicle of claim 6, characterized in that: the behavior states output by the behavior decision module comprise 7 types: normal tracking driving, speed-limited driving, following driving, emergency braking, obstacle-avoidance detouring, parking at the parking point, and stopping in front of an obstacle to wait.
8. The driving behavior decision system of a park unmanned sightseeing vehicle of claim 7, characterized in that: when no obstacle exists, the algorithm flow of the behavior decision module comprises the following specific steps:
a. when no obstacle exists in front of the current vehicle, acquiring the vehicle scene;
b. when the vehicle scene is in-road or the departure point, outputting the behavior decision result as normal tracking driving;
c. when the vehicle scene is the parking point, outputting the behavior decision result as parking at the parking point;
d. otherwise, outputting the behavior decision result as speed-limited driving.
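Steps a–d above map directly onto a small dispatch function (the scene strings follow the wording used in this document):

```python
def decide_no_obstacle(scene):
    """Behavior decision when no obstacle is ahead (sketch of claim 8)."""
    if scene in ("in road", "departure point"):
        return "normal tracking driving"
    if scene == "parking point":
        return "park at the parking point"
    # intersection / pre-intersection fall through to speed-limited driving
    return "speed-limited driving"
```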
9. The driving behavior decision system of a park unmanned sightseeing vehicle of claim 7, characterized in that: the behavior decision algorithm flow when an obstacle exists comprises the following specific steps:
a. when an obstacle exists in front of the current vehicle, judging whether the obstacle is within the collision range: when CarCoorObLoDis ≤ VehicleSpeed² / (2 × BrakingDeceleration) + α × VehicleSpeed + d_extra, a collision risk is considered to exist and the behavior decision result is emergency braking;
where BrakingDeceleration is the emergency braking deceleration of the vehicle, α is the braking delay compensation factor, and d_extra is half of the vehicle body length;
b. judging the attribute of the obstacle; when the obstacle is a static obstacle, judging the detour condition: when the following conditions are both met, the behavior decision result is obstacle-avoidance detouring, otherwise the behavior decision result is stopping in front of the obstacle to wait:
condition 1: no vehicle is coming in the adjacent lane;
condition 2: CarCoorObLoDis ≥ k1 × VehicleSpeed + d1, where k1 is a speed factor and d1 is a set threshold;
c. when the obstacle is a moving obstacle, judging the vehicle scene: when the vehicle scene is neither in-road nor the departure point but is the parking point, the behavior decision result is stopping in front of the obstacle to wait; when the vehicle scene is neither in-road, the departure point nor the parking point, the behavior decision result is following driving;
d. when the vehicle scene is in-road or the departure point, judging the following range: if MinSpeedDifferThreshold ≤ ObLoVel − VehicleSpeed ≤ MaxSpeedDifferThreshold, the behavior decision result is following driving;
where MaxSpeedDifferThreshold and MinSpeedDifferThreshold are speed difference thresholds, modified according to actual debugging results;
e. otherwise, judging the detour range: when the following conditions are both met, the behavior decision result is obstacle-avoidance detouring, otherwise the behavior decision result is normal tracking driving:
condition 1: CarCoorObLoDis ≥ k2 × VehicleSpeed + d2, where k2 is a speed factor and d2 is a set threshold;
condition 2: VehicleSpeed − ObLoVel ≥ SpeedThreshold, where SpeedThreshold is a speed threshold;
f. end.
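The obstacle-present flow of steps a–f can be sketched as follows. The collision-range formula is reconstructed from the listed variables (braking distance plus a delay-compensation term plus half the body length) because the source shows it only as an image placeholder, and all numeric defaults (k1, d1, k2, d2 and the speed thresholds) are illustrative:

```python
def decide_with_obstacle(scene, lon_dist, ob_lon_vel, is_static,
                         vehicle_speed, adjacent_lane_clear,
                         braking_decel=4.0, alpha=0.3, d_extra=1.5,
                         k1=1.0, d1=5.0, k2=1.5, d2=8.0,
                         min_dv=-1.0, max_dv=1.0, speed_threshold=2.0):
    """Behavior decision when an obstacle is ahead (sketch of claim 9).

    The collision range is an assumed reconstruction: braking distance
    + delay compensation + half body length. Defaults are illustrative.
    """
    # a. emergency braking if the obstacle is inside the collision range
    collision_range = (vehicle_speed ** 2 / (2 * braking_decel)
                       + alpha * vehicle_speed + d_extra)
    if lon_dist <= collision_range:
        return "emergency braking"
    # b. static obstacle: detour only if the adjacent lane is clear and
    #    the obstacle is far enough ahead; otherwise stop and wait
    if is_static:
        if adjacent_lane_clear and lon_dist >= k1 * vehicle_speed + d1:
            return "obstacle-avoidance detour"
        return "stop in front of the obstacle and wait"
    # c. moving obstacle outside the in-road / departure-point scenes
    if scene not in ("in road", "departure point"):
        if scene == "parking point":
            return "stop in front of the obstacle and wait"
        return "following driving"   # intersection / pre-intersection
    # d. follow when the speed difference lies inside the following band
    if min_dv <= ob_lon_vel - vehicle_speed <= max_dv:
        return "following driving"
    # e. detour when the obstacle is slow enough and far enough ahead
    if (lon_dist >= k2 * vehicle_speed + d2
            and vehicle_speed - ob_lon_vel >= speed_threshold):
        return "obstacle-avoidance detour"
    return "normal tracking driving"  # f. end
```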
CN202010977809.5A 2020-09-17 2020-09-17 Driving behavior decision-making system of park unmanned sightseeing vehicle Active CN111832667B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010977809.5A CN111832667B (en) 2020-09-17 2020-09-17 Driving behavior decision-making system of park unmanned sightseeing vehicle

Publications (2)

Publication Number Publication Date
CN111832667A true CN111832667A (en) 2020-10-27
CN111832667B CN111832667B (en) 2020-12-08

Family

ID=72918504

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112798009A (en) * 2020-12-24 2021-05-14 北京经纬恒润科技股份有限公司 Remote driving auxiliary display method and device
CN112937607A (en) * 2021-03-29 2021-06-11 紫清智行科技(北京)有限公司 Internet automatic driving system and method for scenic spot sightseeing vehicle
CN113177509A (en) * 2021-05-19 2021-07-27 浙江大华技术股份有限公司 Method and device for recognizing backing behavior

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101813492A (en) * 2010-04-19 2010-08-25 清华大学 Vehicle navigation system and method
US20160314363A1 (en) * 2015-04-24 2016-10-27 Electronics And Telecommunications Research Institute Obstacle detection apparatus and method
CN109059902A (en) * 2018-09-07 2018-12-21 百度在线网络技术(北京)有限公司 Relative pose determines method, apparatus, equipment and medium
CN109829351A (en) * 2017-11-23 2019-05-31 华为技术有限公司 Detection method, device and the computer readable storage medium of lane information
CN110135377A (en) * 2019-05-21 2019-08-16 北京百度网讯科技有限公司 Object moving state detection method, device, server and computer-readable medium
CN111240328A (en) * 2020-01-16 2020-06-05 中智行科技有限公司 Vehicle driving safety monitoring method and device and unmanned vehicle
CN111600925A (en) * 2020-04-01 2020-08-28 清华大学 Obstacle information analysis method and analysis device, internet of things equipment and chip

Similar Documents

Publication Publication Date Title
CN111832667B (en) Driving behavior decision-making system of park unmanned sightseeing vehicle
CN113386795B (en) Intelligent decision-making and local track planning method for automatic driving vehicle and decision-making system thereof
WO2021136130A1 (en) Trajectory planning method and apparatus
WO2019214163A1 (en) Valet parking method and device
CN111422196A (en) Intelligent networking automatic driving system and method suitable for mini bus
RU2719495C2 (en) Method and device for driving assistance
WO2022007655A1 (en) Automatic lane changing method and apparatus, and device and storage medium
WO2021103511A1 (en) Operational design domain (odd) determination method and apparatus and related device
CN113916246A (en) Unmanned obstacle avoidance path planning method and system
CN107618506B (en) Obstacle avoidance system for automatic driving device and obstacle avoidance method thereof
CN111994068B (en) Intelligent driving automobile control system based on intelligent tire touch perception
CN115291596A (en) Road travelable area reasoning method and device
JP7271259B2 (en) Vehicle management system, vehicle management device, and vehicle management method
WO2022016351A1 (en) Method and apparatus for selecting driving decision
US20190129432A1 (en) Semantic object clustering for autonomous vehicle decision making
US20190176830A1 (en) Vehicle lane change
WO2022051951A1 (en) Lane line detection method, related device, and computer readable storage medium
CN113428180A (en) Method, system and terminal for controlling single-lane running speed of unmanned vehicle
CN115416650A (en) Intelligent driving obstacle avoidance system of vehicle
CN115938154A (en) Method for setting autonomous parking system of large electric truck based on field-side cooperation
CN109656242A (en) A kind of automatic Pilot planning driving path planning system
CN113870246A (en) Obstacle detection and identification method based on deep learning
US20220176987A1 (en) Trajectory limiting for autonomous vehicles
CN116135654A (en) Vehicle running speed generation method and related equipment
CN115131963A (en) Mine car radar cooperation method, system, device and medium based on laser radar and millimeter wave radar

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant