CN112068574A - Control method and system for unmanned vehicle in dynamic complex environment

Info

Publication number: CN112068574A
Authority: CN (China)
Prior art keywords: vehicle, unmanned vehicle, traffic light, computer controller, automobile
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Application number: CN202011119650.XA
Other languages: Chinese (zh)
Inventors: 高洪波 (Gao Hongbo), 李陈畅 (Li Chenchang), 李智军 (Li Zhijun), 朱菊萍 (Zhu Juping)
Current Assignee: University of Science and Technology of China (USTC) (the listed assignees may be inaccurate; Google has not performed a legal analysis)
Original Assignee: University of Science and Technology of China (USTC)
Application filed by University of Science and Technology of China (USTC)
Priority application: CN202011119650.XA
Publication: CN112068574A

Classifications

    • All classifications are subgroups of G PHYSICS > G05 CONTROLLING; REGULATING > G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES > G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots > G05D1/02 Control of position or course in two dimensions > G05D1/021 specially adapted to land vehicles:
    • G05D1/0238: using optical position detecting means, using obstacle or wall sensors
    • G05D1/024: using obstacle or wall sensors in combination with a laser
    • G05D1/0214: with means for defining a desired trajectory in accordance with safety or protection criteria, e.g. avoiding hazardous areas
    • G05D1/0221: with means for defining a desired trajectory involving a learning process
    • G05D1/0223: with means for defining a desired trajectory involving speed control of the vehicle
    • G05D1/0253: using a video camera in combination with image processing means, extracting relative motion information from a plurality of images taken successively, e.g. visual odometry, optical flow
    • G05D1/0255: using acoustic signals, e.g. ultrasonic signals
    • G05D1/0257: using a radar
    • G05D1/0276: using signals provided by a source external to the vehicle
    • G05D1/0278: using signals provided by a source external to the vehicle, using satellite positioning signals, e.g. GPS

Abstract

The invention provides a control system and method for an unmanned vehicle in a dynamic complex environment, comprising the following steps: point cloud data from the environment perception system is transmitted to a computer controller, which completes obstacle detection, segmentation and identification through a deep learning method; images captured by the environment perception system are transmitted to the computer controller to obtain the precise position of the traffic light frame, and the color of the traffic light is identified from the frame position; the environment perception system transmits the obstacle detection and identification data and the traffic light frame position and color data to the computer controller, which calculates the minimum safe distance for avoiding collision; the target speed and angular velocity of the vehicle are input to the computer controller, which generates an obstacle avoidance path from the minimum safe distance through an unmanned vehicle dynamics model, so that the vehicle autonomously completes the driving task and reaches the target point. The invention helps the unmanned vehicle perceive and operate in a dynamic complex environment, enabling unmanned driving and intelligent transportation.

Description

Control method and system for unmanned vehicle in dynamic complex environment
Technical Field
The invention relates to vehicle engineering, and in particular to a control method and system for an unmanned vehicle in a dynamic complex environment; more specifically, it addresses how an unmanned vehicle perceives its surroundings, controls itself and plans a route in a dynamic complex environment.
Background
Advances in vehicle technology have made unmanned driving a research hotspot at home and abroad in recent years, and it has become one of the important current research directions. Unmanned driving technology grew out of robotics combined with vehicle engineering, draws on interdisciplinary fields such as artificial intelligence and computer vision, and can play a very important role in logistics and distribution, shared mobility, public transportation, sanitation and other fields. Unmanned driving has three advantages over a human-driven vehicle. First, it improves the accuracy of environmental perception: humans perceive changes in the external environment with limited positional accuracy and, while driving, can only roughly estimate the position and speed of an obstacle, whereas the onboard sensors of an unmanned intelligent vehicle greatly improve the accuracy of environmental information. Second, it improves reaction time: a driver responds slowly to external stimuli, human reaction time is further affected by weather, age, mood and similar factors, and it grows longer under fatigued driving; the braking reaction time of an unmanned intelligent vehicle is essentially a fixed value, generally no more than 0.3 second, and is far less affected by external factors than a human is. Third, unmanned driving provides passengers with a comfortable, natural environment in which they need not grip the steering wheel and worry about accidents, freeing people to spend their time and energy on other things.
Patent document CN108520559A (application number: 201810299122.3) discloses an unmanned aerial vehicle positioning and navigation method based on binocular vision. Left and right images are acquired by the binocular camera of the UAV onboard control system and rectified using the camera parameters, and the depth of corresponding pixels is then obtained; key points of the left view are extracted, filtered and screened; a matching key point set is then found in the current frame by optical flow tracking to obtain matched key point pairs; a cost function is computed over the matched pairs to obtain the pose result; finally, key image frames are selected from the continuous input frames, a joint cost function is built over the key point sets and poses of the key frames, and the updated pose is obtained by optimizing this cost function.
Disclosure of Invention
In view of the defects in the prior art, the object of the invention is to provide a control system and a control method for an unmanned vehicle in a dynamic complex environment.
According to the invention, the control system of the unmanned vehicle in the dynamic complex environment comprises the following components:
module M1: point cloud data from the environment perception system is transmitted to a computer controller, which completes obstacle detection, segmentation and identification through a deep learning method;
module M2: images captured by the environment perception system are transmitted to the computer controller to obtain the precise position of the traffic light frame, and the color of the traffic light is identified from the frame position;
module M3: the environment perception system transmits the obstacle detection and identification data and the traffic light frame position and color data to the computer controller, which calculates the minimum safe distance for avoiding collision;
module M4: from the calculated minimum safe distance for avoiding collision, the computer controller calculates the target angular velocity and speed of the vehicle;
module M5: the target speed and angular velocity of the vehicle are input to the computer controller, which generates an obstacle avoidance path from the minimum safe distance through an unmanned vehicle dynamics model, so that the vehicle autonomously completes the driving task and reaches the target point;
the unmanned vehicle dynamics model covers the dynamic performance, braking performance, ride smoothness and stability of the vehicle, and characterizes the relation between the vehicle's mass and force conditions and its motion.
Preferably, said module M1 comprises:
the environment perception system comprises an ultrasonic radar 1, a millimeter wave radar 2, a laser radar 6 and cameras;
the computer controller 9 comprises an inertial measurement unit 8 and a central processing unit and graphics processor 10;
the cameras comprise a forward camera 3, lateral cameras 4 and a rearward camera 5;
the laser radar 6 is mounted on the vehicle roof;
module M1.1: based on the laser radar 6 point cloud data, the central processing unit and graphics processor 10 learn point cloud features through a convolutional neural network model, predict the relevant attributes of obstacles, and segment the obstacles according to those attributes, thereby detecting and identifying the obstacles;
module M1.2: based on the millimeter wave radar 2 point cloud data, the central processing unit and graphics processor 10 process the data to detect and identify obstacles;
module M1.3: the central processing unit and graphics processor 10 fuse the obstacle recognition results of the laser radar 6 and the millimeter wave radar 2 through a laser radar fusion algorithm to obtain accurately detected obstacle positions for the unmanned vehicle;
the laser radar fusion algorithm mainly performs management and matching of single-sensor results against fusion results, and obstacle speed fusion based on Kalman filtering, as sketched below.
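To make the Kalman-filtering speed fusion concrete, here is a minimal Python sketch that fuses a lidar track and a millimeter wave radar track of the same obstacle under a constant-velocity model. The state layout, time step and sensor noise values are illustrative assumptions, not parameters from the patent.

```python
import numpy as np

class ObstacleVelocityFusion:
    """1-D constant-velocity Kalman filter fusing lidar and radar tracks."""

    def __init__(self, dt=0.1):
        self.x = np.zeros(2)                 # state: [position (m), velocity (m/s)]
        self.P = np.eye(2) * 10.0            # state covariance
        self.F = np.array([[1.0, dt],
                           [0.0, 1.0]])      # constant-velocity motion model
        self.Q = np.eye(2) * 0.05            # process noise (assumed)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q

    def update(self, z, r):
        """z: measured [position, velocity]; r: sensor noise variance (assumed)."""
        H = np.eye(2)                        # both sensors report position and speed
        R = np.eye(2) * r
        y = z - H @ self.x                   # innovation
        S = H @ self.P @ H.T + R
        K = self.P @ H.T @ np.linalg.inv(S)  # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(2) - K @ H) @ self.P

kf = ObstacleVelocityFusion(dt=0.1)
kf.predict()
kf.update(np.array([20.3, -1.1]), r=0.05)    # lidar: strong on position
kf.update(np.array([20.6, -1.0]), r=0.02)    # radar: strong on radial speed
print(kf.x)                                  # fused [position, velocity]
```

Applying both updates within one cycle is one common way to combine a precise lidar position with a precise radar speed; the patent's track management and matching logic is not reproduced here.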
Preferably, the inertial measurement unit 8 measures the angular velocity and acceleration of the unmanned vehicle, and is mounted near the center of gravity of the unmanned vehicle.
Preferably, said module M2 comprises:
the environment perception system comprises an ultrasonic radar 1, a millimeter wave radar 2, a laser radar 6 and cameras;
the cameras comprise a forward camera 3, a preset number of lateral cameras 4 and a rearward camera 5;
the laser radar 6 is mounted on the vehicle roof;
a region of interest enlarged beyond the projection area is selected from the camera image in the environment perception system, traffic light detection is run to obtain the precise traffic light frame position, and color identification is performed on the traffic light according to the frame position to obtain the current state of the traffic light; the final state of the traffic light is then confirmed from the single-frame states through a time-sequence filtering correction algorithm, as sketched below.
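A rough Python illustration of the color identification and the time-sequence filtering correction follows; the HSV thresholds and the voting window length are assumptions, not values from the patent.

```python
import cv2
import numpy as np
from collections import Counter, deque

# Assumed HSV ranges per light color; real thresholds would be calibrated.
HSV_RANGES = {
    "red":    [((0, 120, 80), (10, 255, 255)), ((170, 120, 80), (180, 255, 255))],
    "yellow": [((20, 120, 80), (35, 255, 255))],
    "green":  [((45, 120, 80), (90, 255, 255))],
}

def classify_light(bgr_roi):
    """Pick the color with the most matching pixels inside the light frame."""
    hsv = cv2.cvtColor(bgr_roi, cv2.COLOR_BGR2HSV)
    scores = {}
    for color, ranges in HSV_RANGES.items():
        count = 0
        for lo, hi in ranges:
            mask = cv2.inRange(hsv, np.array(lo, np.uint8), np.array(hi, np.uint8))
            count += int(np.count_nonzero(mask))
        scores[color] = count
    return max(scores, key=scores.get)

class TemporalFilter:
    """Majority vote over the last n frames, standing in for the patent's
    time-sequence filtering correction of the single-frame state."""
    def __init__(self, n=7):
        self.history = deque(maxlen=n)

    def update(self, single_frame_state):
        self.history.append(single_frame_state)
        return Counter(self.history).most_common(1)[0][0]
```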
Preferably, the module M3 comprises: the minimum safe distance required by the unmanned vehicle when facing an obstacle in the complex environment is calculated by comparing the vehicle's own information with external obstacle information;
the vehicle's own information comprises the size information and dynamics model information of the vehicle; the dynamics model information comprises the vehicle's acceleration, speed and angular velocity; the current image collected by the camera is compared with the next frame to complete closed-loop detection, realizing mapping and localization of the vehicle;
the external obstacle information comprises the obstacle detection and identification results of the environment perception system; a simple kinematic sketch of the distance check follows below.
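The patent does not spell out the distance formula, so the sketch below assumes simple straight-line braking kinematics: a reaction distance plus a braking distance plus a fixed margin. The deceleration and margin values are illustrative; only the 0.3 s machine reaction time echoes the background section.

```python
def minimum_safe_distance(v_ego, v_obs, a_brake=4.0, t_react=0.3, margin=2.0):
    """Distance (m) the vehicle should keep from an obstacle ahead.

    v_ego:   ego forward speed (m/s)
    v_obs:   obstacle speed along the lane (m/s), 0 for a static obstacle
    a_brake: assumed constant braking deceleration (m/s^2)
    t_react: machine braking reaction time (s)
    """
    v_rel = max(v_ego - v_obs, 0.0)            # closing speed
    d_react = v_ego * t_react                  # distance covered before braking
    d_brake = v_rel ** 2 / (2.0 * a_brake)     # distance to cancel the closing speed
    return d_react + d_brake + margin

print(minimum_safe_distance(v_ego=15.0, v_obs=5.0))   # 19.0 m for this example
```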
Preferably, the unmanned vehicle dynamics model in the module M5 includes:
$$(k_1 + k_2)\beta + \frac{1}{u}(a k_1 - b k_2)\, w_r - k_1 \delta = m(\dot{v} + u w_r)$$

$$(a k_1 - b k_2)\beta + \frac{1}{u}(a^2 k_1 + b^2 k_2)\, w_r - a k_1 \delta = I_z \dot{w}_r$$

wherein $k_1, k_2$ are the cornering stiffnesses of the front and rear wheels, $m$ is the total mass of the vehicle, $a$ is the distance from the vehicle's center of gravity to the front axle, $b$ is the distance from the center of gravity to the rear axle, $\delta$ is the front wheel turning angle, and $I_z$ is the yaw moment of inertia of the vehicle body; $\beta$ denotes the centroid sideslip angle, $w_r$ the yaw rate, $u$ the forward speed, $\dot{v} + u w_r$ the lateral acceleration, and $\dot{w}_r$ the yaw angular acceleration.
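For concreteness, the following Python sketch Euler-integrates the two equations above, solved for the derivatives of the sideslip angle and yaw rate at constant forward speed. All numeric parameters are illustrative, and the cornering stiffnesses are taken negative following the usual sign convention of this model.

```python
import numpy as np

m, Iz = 1500.0, 2500.0         # total mass (kg), yaw moment of inertia (kg*m^2)
a, b = 1.2, 1.4                # distances from center of gravity to axles (m)
k1, k2 = -80000.0, -80000.0    # front/rear cornering stiffness (N/rad)
u = 15.0                       # constant forward speed (m/s)

def derivatives(beta, wr, delta):
    """d(beta)/dt and d(wr)/dt from the 2-DOF model, using v = u * beta."""
    beta_dot = ((k1 + k2) * beta + (a * k1 - b * k2) * wr / u
                - k1 * delta) / (m * u) - wr
    wr_dot = ((a * k1 - b * k2) * beta
              + (a * a * k1 + b * b * k2) * wr / u - a * k1 * delta) / Iz
    return beta_dot, wr_dot

beta, wr, dt = 0.0, 0.0, 0.01
for _ in range(300):                       # 3 s with a steady 2-degree wheel angle
    d_beta, d_wr = derivatives(beta, wr, np.deg2rad(2.0))
    beta, wr = beta + d_beta * dt, wr + d_wr * dt
print(f"sideslip {beta:.4f} rad, yaw rate {wr:.4f} rad/s")
```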
Preferably, the module M5 further includes: a behavior decision output system and a vehicle control system;
the behavior decision output system comprises an electric power chassis and its electric control system; the chassis controls the wheel speeds and the steering of the unmanned vehicle according to the angular velocity and speed calculated by the computer controller; when the environment perception system detects an obstacle ahead, the computer controller commands the chassis to avoid it or to stop at a steady deceleration;
the vehicle control system tracks the target path using a PID control algorithm, ensuring that the vehicle follows the given path information points; a minimal sketch of such a PID loop follows below.
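A minimal sketch of the PID path tracking named above; the gains and the cross-track-error interface are illustrative assumptions, not the patent's tuned controller.

```python
class PID:
    """Textbook PID controller for tracking a path information point."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# error: signed lateral offset (m) from the planned path at the current point;
# output: a steering correction fed to the chassis.
steering_pid = PID(kp=0.8, ki=0.05, kd=0.2, dt=0.05)
steering_cmd = steering_pid.step(0.4)   # 0.4 m left of the target path (example)
```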
The invention provides a control method of an unmanned vehicle in a dynamic complex environment, which comprises the following steps:
step M1: point cloud data from the environment perception system is transmitted to a computer controller, which completes obstacle detection, segmentation and identification through a deep learning method;
step M2: images captured by the environment perception system are transmitted to the computer controller to obtain the precise position of the traffic light frame, and the color of the traffic light is identified from the frame position;
step M3: the environment perception system transmits the obstacle detection and identification data and the traffic light frame position and color data to the computer controller, which calculates the minimum safe distance for avoiding collision;
step M4: from the calculated minimum safe distance for avoiding collision, the computer controller calculates the target angular velocity and speed of the vehicle;
step M5: the target speed and angular velocity of the vehicle are input to the computer controller, which generates an obstacle avoidance path from the minimum safe distance through an unmanned vehicle dynamics model, so that the vehicle autonomously completes the driving task and reaches the target point;
the unmanned vehicle dynamics model covers the dynamic performance, braking performance, ride smoothness and stability of the vehicle, and characterizes the relation between the vehicle's mass and force conditions and its motion.
Preferably, the step M1 includes:
the environment perception system comprises an ultrasonic radar 1, a millimeter wave radar 2, a laser radar 6 and cameras;
the computer controller 9 comprises an inertial measurement unit 8 and a central processing unit and graphics processor 10;
the cameras comprise a forward camera 3, lateral cameras 4 and a rearward camera 5;
the laser radar 6 is mounted on the vehicle roof;
step M1.1: based on the laser radar 6 point cloud data, the central processing unit and graphics processor 10 learn point cloud features through a convolutional neural network model, predict the relevant attributes of obstacles, and segment the obstacles according to those attributes, thereby detecting and identifying the obstacles;
step M1.2: based on the millimeter wave radar 2 point cloud data, the central processing unit and graphics processor 10 process the data to detect and identify obstacles;
step M1.3: the central processing unit and graphics processor 10 fuse the obstacle recognition results of the laser radar 6 and the millimeter wave radar 2 through the laser radar fusion algorithm to obtain accurately detected obstacle positions for the unmanned vehicle;
the laser radar fusion algorithm mainly performs management and matching of single-sensor results against fusion results, and obstacle speed fusion based on Kalman filtering;
the inertial measurement unit 8 measures the angular velocity and acceleration of the unmanned vehicle, and is mounted near the center of gravity of the unmanned vehicle.
Preferably, the step M2 includes:
the environment perception system comprises an ultrasonic radar 1, a millimeter wave radar 2, a laser radar 6 and cameras;
the cameras comprise a forward camera 3, a preset number of lateral cameras 4 and a rearward camera 5;
the laser radar 6 is mounted on the vehicle roof;
a region of interest enlarged beyond the projection area is selected from the camera image in the environment perception system, traffic light detection is run to obtain the precise traffic light frame position, and color identification is performed on the traffic light according to the frame position to obtain the current state of the traffic light; the final state of the traffic light is then confirmed from the single-frame states through a time-sequence filtering correction algorithm;
the step M3 comprises: the minimum safe distance required by the unmanned vehicle when facing an obstacle in the complex environment is calculated by comparing the vehicle's own information with external obstacle information;
the vehicle's own information comprises the size information and dynamics model information of the vehicle; the dynamics model information comprises the vehicle's acceleration, speed and angular velocity; the current image collected by the camera is compared with the next frame to complete closed-loop detection, realizing mapping and localization of the vehicle;
the external obstacle information comprises the obstacle detection and identification results of the environment perception system;
the unmanned vehicle dynamics model in the step M5 includes:
$$(k_1 + k_2)\beta + \frac{1}{u}(a k_1 - b k_2)\, w_r - k_1 \delta = m(\dot{v} + u w_r)$$

$$(a k_1 - b k_2)\beta + \frac{1}{u}(a^2 k_1 + b^2 k_2)\, w_r - a k_1 \delta = I_z \dot{w}_r$$

wherein $k_1, k_2$ are the cornering stiffnesses of the front and rear wheels, $m$ is the total mass of the vehicle, $a$ is the distance from the vehicle's center of gravity to the front axle, $b$ is the distance from the center of gravity to the rear axle, $\delta$ is the front wheel turning angle, and $I_z$ is the yaw moment of inertia of the vehicle body; $\beta$ denotes the centroid sideslip angle, $w_r$ the yaw rate, $u$ the forward speed, $\dot{v} + u w_r$ the lateral acceleration, and $\dot{w}_r$ the yaw angular acceleration;
the step M5 further includes: a behavior decision output system and a vehicle control system;
the behavior decision output system comprises an electric power chassis and its electric control system; the chassis controls the wheel speeds and the steering of the unmanned vehicle according to the angular velocity and speed calculated by the computer controller; when the environment perception system detects an obstacle ahead, the computer controller commands the chassis to avoid it or to stop at a steady deceleration;
the vehicle control system tracks the target path using a PID control algorithm, ensuring that the vehicle follows the given path information points.
Compared with the prior art, the invention has the following beneficial effects:
1. compared with a traditional unmanned vehicle using binocular vision, an unmanned vehicle using the fusion scheme of laser radar, millimeter wave radar and cameras can perceive its surroundings accurately, so it is not restricted to clear weather and achieves high obstacle detection precision;
2. the unmanned vehicle is designed to run in a dynamic complex environment and can detect the positions of traffic lights on the road and their changing light states, which broadens its applicable scenarios beyond simple settings such as campus logistics;
3. the computer controller designed by the invention is equipped with a computer processor and a GTX1080 graphics processor; the additional computing resources let the unmanned vehicle evaluate surrounding obstacles and react quickly;
4. the decision output system designed by the invention executes the instructions of the computer controller and, through the dynamic characteristics of the unmanned vehicle, can stably complete tasks such as acceleration, deceleration, braking and turning, with high robustness.
Drawings
Other features, objects and advantages of the invention will become more apparent upon reading of the detailed description of non-limiting embodiments with reference to the following drawings:
FIG. 1 is a schematic diagram of an overall structure of an unmanned vehicle in a dynamic complex environment;
FIG. 2 is a schematic view of the two-degree-of-freedom unmanned vehicle model;
FIG. 3 is a schematic diagram of a lidar;
FIG. 4 is a frame diagram of an unmanned vehicle system;
FIG. 5 is an unmanned vehicle software overall framework;
FIG. 6 is a diagram of the architecture and relationship of the unmanned vehicle system;
wherein: 1 is the ultrasonic radar, 2 the millimeter wave radar, 3 the forward camera, 4 the lateral cameras, 5 the rearward camera, 6 the laser radar, 7 the vehicle body including the electric power chassis, 8 the inertial measurement unit, 9 the computer controller, and 10 the central processing unit and graphics processor.
Detailed Description
The present invention will be described in detail below with reference to specific embodiments. The following embodiments will help those skilled in the art further understand the invention, but do not limit it in any way. It should be noted that those skilled in the art can make various changes and modifications without departing from the spirit of the invention, all of which fall within the scope of the present invention.
The invention provides a solution for an unmanned vehicle in a dynamic complex environment, and the vehicle integrates environment perception, vehicle control and path planning technologies.
Example 1
According to the invention, a control system of an unmanned vehicle in a dynamic complex environment is provided. As shown in FIG. 1 and FIG. 4, it comprises an environment perception system, a computer controller and a behavior decision output system; as shown in FIGS. 5-6:
module M1: point cloud data from the environment perception system is transmitted to a computer controller, which completes obstacle detection, segmentation and identification through a deep learning method;
module M2: images captured by the environment perception system are transmitted to the computer controller to obtain the precise position of the traffic light frame, and the color of the traffic light is identified from the frame position;
module M3: the environment perception system transmits the obstacle detection and identification data and the traffic light frame position and color data to the computer controller, which calculates the minimum safe distance for avoiding collision;
module M4: from the calculated minimum safe distance for avoiding collision, the computer controller calculates the target angular velocity and speed of the vehicle;
module M5: the target speed and angular velocity of the vehicle are input to the computer controller, which generates an obstacle avoidance path from the minimum safe distance through an unmanned vehicle dynamics model, so that the vehicle autonomously completes the driving task and reaches the target point;
the unmanned vehicle dynamics model covers the dynamic performance, braking performance, ride smoothness and stability of the vehicle, and characterizes the relation between the vehicle's mass and force conditions and its motion.
Specifically, the module M1 includes:
the environment perception system comprises an ultrasonic radar 1, a millimeter wave radar 2, a laser radar 6, cameras and a GPS antenna;
the millimeter wave radar and the ultrasonic radar are placed at the front of the vehicle;
the computer controller 9 comprises an inertial measurement unit 8 and a central processing unit and graphics processor 10; the central processing unit and graphics processor 10 can quickly evaluate the current environment and obstacle situation of the unmanned vehicle and compute a decision result;
the cameras comprise a forward camera 3, a preset number of lateral cameras 4 and a rearward camera 5; the four lateral cameras are fixed at the four corners of the unmanned vehicle body; the forward camera is mounted above the front windshield, and the rearward camera is fixed above the rear windshield;
the laser radar 6 is mounted on the vehicle roof;
module M1.1: as shown in fig. 3, based on the laser radar 6 point cloud data, the central processing unit and graphics processor 10 learn point cloud features through a convolutional neural network model trained offline, predict the relevant attributes of obstacles, and segment the obstacles according to those attributes, thereby detecting and identifying the obstacles (see the toy sketch after this list);
module M1.2: obstacle detection and identification based on the point cloud data of the front-mounted millimeter wave radar 2 mainly processes the raw millimeter wave radar data through the central processing unit and graphics processor 10 to obtain obstacle results;
module M1.3: the central processing unit and graphics processor 10 fuse the obstacle recognition results of the laser radar 6 and the millimeter wave radar 2 through the laser radar fusion algorithm to obtain accurately detected obstacle positions for the unmanned vehicle;
the laser radar fusion algorithm mainly performs management and matching of single-sensor results against fusion results, and obstacle speed fusion based on Kalman filtering.
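As a toy illustration of module M1.1, the following sketch rasterizes a lidar sweep into a bird's-eye-view occupancy grid and runs a small (untrained) PyTorch network that scores each cell for obstacles; the grid resolution and network shape are assumptions, and the patent's offline-trained model is not reproduced.

```python
import numpy as np
import torch
import torch.nn as nn

def points_to_bev(points, grid=0.2, extent=40.0):
    """Rasterize lidar points (N, 3) into a single-channel occupancy grid."""
    size = int(2 * extent / grid)
    bev = np.zeros((size, size), dtype=np.float32)
    ix = ((points[:, 0] + extent) / grid).astype(int)
    iy = ((points[:, 1] + extent) / grid).astype(int)
    keep = (ix >= 0) & (ix < size) & (iy >= 0) & (iy < size)
    bev[iy[keep], ix[keep]] = 1.0
    return bev

class ObstacleSegNet(nn.Module):
    """Tiny fully convolutional net: per-cell obstacle probability."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)

points = np.random.rand(1000, 3) * 80.0 - 40.0             # stand-in lidar sweep
bev = torch.from_numpy(points_to_bev(points))[None, None]  # shape (1, 1, H, W)
prob = ObstacleSegNet()(bev)                               # per-cell obstacle score
```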
Specifically, the inertial measurement unit 8 measures the three-axis attitude angles (or angular rates) and the acceleration of the unmanned vehicle to improve reliability, and is mounted as close to the center of gravity of the unmanned vehicle as possible.
Specifically, the module M2 includes:
the environment perception system comprises an ultrasonic radar 1, a millimeter wave radar 2, a laser radar 6 and cameras;
the cameras comprise a forward camera 3, a preset number of lateral cameras 4 and a rearward camera 5;
the laser radar 6 is mounted on the vehicle roof;
an enlarged region of interest around the projection area is selected from the camera image in the environment perception system, traffic light detection is run to obtain the precise traffic light frame position, and color identification is performed on the traffic light according to the frame position to obtain the current state of the traffic light; the final state of the traffic light is then confirmed from the single-frame states through a time-sequence filtering correction algorithm.
Specifically, the module M3 comprises: the minimum safe distance required by the unmanned vehicle when facing an obstacle in the complex environment is calculated by comparing the vehicle's own information with external obstacle information;
the vehicle's own information comprises the size information and dynamics model information of the vehicle; the dynamics model information comprises the vehicle's acceleration, speed and angular velocity; the current image collected by the camera is compared with the next frame to complete closed-loop detection, realizing mapping and localization of the vehicle (see the feature-matching sketch below);
the external obstacle information comprises the obstacle detection and identification results of the environment perception system.
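The frame-to-frame comparison behind the closed-loop detection could look like this ORB-feature sketch; the feature budget and match thresholds are assumptions for illustration.

```python
import cv2

orb = cv2.ORB_create(nfeatures=500)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def frames_match(frame_a, frame_b, min_matches=60):
    """Return True if two grayscale camera frames appear to view the same scene."""
    _, des_a = orb.detectAndCompute(frame_a, None)
    _, des_b = orb.detectAndCompute(frame_b, None)
    if des_a is None or des_b is None:
        return False
    matches = matcher.match(des_a, des_b)
    good = [m for m in matches if m.distance < 40]   # Hamming distance threshold
    return len(good) >= min_matches
```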
Specifically, as shown in fig. 2, the unmanned vehicle dynamics model in module M5 is:
$$(k_1 + k_2)\beta + \frac{1}{u}(a k_1 - b k_2)\, w_r - k_1 \delta = m(\dot{v} + u w_r)$$

$$(a k_1 - b k_2)\beta + \frac{1}{u}(a^2 k_1 + b^2 k_2)\, w_r - a k_1 \delta = I_z \dot{w}_r$$

wherein $k_1, k_2$ are the cornering stiffnesses of the front and rear wheels, $m$ is the total mass of the vehicle, $a$ is the distance from the vehicle's center of gravity to the front axle, $b$ is the distance from the center of gravity to the rear axle, $\delta$ is the front wheel turning angle, and $I_z$ is the yaw moment of inertia of the vehicle body; $\beta$ denotes the centroid sideslip angle, $w_r$ the yaw rate, $u$ the forward speed, $\dot{v} + u w_r$ the lateral acceleration, and $\dot{w}_r$ the yaw angular acceleration.
Specifically, the module M5 further includes: a behavior decision output system and a vehicle control system;
the behavior decision output system executes the instructions output by the computer controller; the pure-electric drive-by-wire chassis of the unmanned vehicle controls the wheel speeds and the steering according to the angular velocity and speed calculated by the computer controller; when the environment perception system detects an obstacle ahead, the computer controller commands the chassis to avoid it or to stop at a steady deceleration; the chassis matches the dynamic characteristics of the unmanned vehicle, and stable PID control is easy to realize, giving passengers or transported goods a safe and comfortable environment and experience;
in hardware terms, the behavior layer is processed by the computer processor and the graphics processor; the behavior decision system must reasonably decide the current vehicle behavior according to the information output by the perception layer, and the guiding trajectory planning module plans an appropriate path, vehicle speed and other information and sends them to the control layer;
the chassis matches the dynamic characteristics of the unmanned vehicle, is easy to control stably, and has good robustness;
the vehicle control system tracks the target path using a PID control algorithm, ensuring that the vehicle follows the given path information points and finally drives the complete path as required.
The invention also provides a control method of an unmanned vehicle in a dynamic complex environment, involving an environment perception system, a computer controller and a behavior decision output system;
step M1: point cloud data from the environment perception system is transmitted to a computer controller, which completes obstacle detection, segmentation and identification through a deep learning method;
step M2: images captured by the environment perception system are transmitted to the computer controller to obtain the precise position of the traffic light frame, and the color of the traffic light is identified from the frame position;
step M3: the environment perception system transmits the obstacle detection and identification data and the traffic light frame position and color data to the computer controller, which calculates the minimum safe distance for avoiding collision;
step M4: from the calculated minimum safe distance for avoiding collision, the computer controller calculates the target angular velocity and speed of the vehicle;
step M5: the target speed and angular velocity of the vehicle are input to the computer controller, which generates an obstacle avoidance path from the minimum safe distance through an unmanned vehicle dynamics model, so that the vehicle autonomously completes the driving task and reaches the target point;
the unmanned vehicle dynamics model covers the dynamic performance, braking performance, ride smoothness and stability of the vehicle, and characterizes the relation between the vehicle's mass and force conditions and its motion.
Specifically, the step M1 includes:
the environment perception system comprises an ultrasonic radar 1, a millimeter wave radar 2, a laser radar 6, cameras and a GPS antenna;
the millimeter wave radar and the ultrasonic radar are placed at the front of the vehicle;
the computer controller 9 comprises an inertial measurement unit 8 and a central processing unit and graphics processor 10; the central processing unit and graphics processor 10 can quickly evaluate the current environment and obstacle situation of the unmanned vehicle and compute a decision result;
the cameras comprise a forward camera 3, a preset number of lateral cameras 4 and a rearward camera 5; the four lateral cameras are fixed at the four corners of the unmanned vehicle body; the forward camera is mounted above the front windshield, and the rearward camera is fixed above the rear windshield;
the laser radar 6 is mounted on the vehicle roof;
step M1.1: based on the laser radar 6 point cloud data, the central processing unit and graphics processor 10 learn point cloud features through a convolutional neural network model trained offline, predict the relevant attributes of obstacles, and segment the obstacles according to those attributes, thereby detecting and identifying the obstacles;
step M1.2: obstacle detection and identification based on the point cloud data of the front-mounted millimeter wave radar 2 mainly processes the raw millimeter wave radar data through the central processing unit and graphics processor 10 to obtain obstacle results;
step M1.3: the central processing unit and graphics processor 10 fuse the obstacle recognition results of the laser radar 6 and the millimeter wave radar 2 through the laser radar fusion algorithm to obtain accurately detected obstacle positions for the unmanned vehicle;
the laser radar fusion algorithm mainly performs management and matching of single-sensor results against fusion results, and obstacle speed fusion based on Kalman filtering.
Specifically, the inertial measurement unit 8 measures the three-axis attitude angles (or angular rates) and the acceleration of the unmanned vehicle to improve reliability, and is mounted as close to the center of gravity of the unmanned vehicle as possible.
Specifically, the step M2 includes:
the environment perception system comprises an ultrasonic radar 1, a millimeter wave radar 2, a laser radar 6 and cameras;
the cameras comprise a forward camera 3, a preset number of lateral cameras 4 and a rearward camera 5;
the laser radar 6 is mounted on the vehicle roof;
an enlarged region of interest around the projection area is selected from the camera image in the environment perception system, traffic light detection is run to obtain the precise traffic light frame position, and color identification is performed on the traffic light according to the frame position to obtain the current state of the traffic light; the final state of the traffic light is then confirmed from the single-frame states through a time-sequence filtering correction algorithm.
Specifically, the step M3 comprises: the minimum safe distance required by the unmanned vehicle when facing an obstacle in the complex environment is calculated by comparing the vehicle's own information with external obstacle information;
the vehicle's own information comprises the size information and dynamics model information of the vehicle; the dynamics model information comprises the vehicle's acceleration, speed and angular velocity; the current image collected by the camera is compared with the next frame to complete closed-loop detection, realizing mapping and localization of the vehicle;
the external obstacle information comprises the obstacle detection and identification results of the environment perception system.
Specifically, the unmanned vehicle dynamics model in step M5 includes:
$$(k_1 + k_2)\beta + \frac{1}{u}(a k_1 - b k_2)\, w_r - k_1 \delta = m(\dot{v} + u w_r)$$

$$(a k_1 - b k_2)\beta + \frac{1}{u}(a^2 k_1 + b^2 k_2)\, w_r - a k_1 \delta = I_z \dot{w}_r$$

wherein $k_1, k_2$ are the cornering stiffnesses of the front and rear wheels, $m$ is the total mass of the vehicle, $a$ is the distance from the vehicle's center of gravity to the front axle, $b$ is the distance from the center of gravity to the rear axle, $\delta$ is the front wheel turning angle, and $I_z$ is the yaw moment of inertia of the vehicle body; $\beta$ denotes the centroid sideslip angle, $w_r$ the yaw rate, $u$ the forward speed, $\dot{v} + u w_r$ the lateral acceleration, and $\dot{w}_r$ the yaw angular acceleration.
Specifically, the step M5 further includes: a behavior decision output system and a vehicle control system;
the behavior decision output system executes the instructions output by the computer controller; the pure-electric drive-by-wire chassis of the unmanned vehicle controls the wheel speeds and the steering according to the angular velocity and speed calculated by the computer controller; when the environment perception system detects an obstacle ahead, the computer controller commands the chassis to avoid it or to stop at a steady deceleration; the chassis matches the dynamic characteristics of the unmanned vehicle, and stable PID control is easy to realize, giving passengers or transported goods a safe and comfortable environment and experience;
in hardware terms, the behavior layer is processed by the computer processor and the graphics processor; the behavior decision system must reasonably decide the current vehicle behavior according to the information output by the perception layer, and the guiding trajectory planning module plans an appropriate path, vehicle speed and other information and sends them to the control layer;
the chassis matches the dynamic characteristics of the unmanned vehicle, is easy to control stably, and has good robustness;
the vehicle control system tracks the target path using a PID control algorithm, ensuring that the vehicle follows the given path information points and finally drives the complete path as required.
Example 2
Example 2 is a modification of Example 1.
An unmanned vehicle operating in a dynamic complex environment comprises an environment perception system, a computer controller and a behavior decision output system. The environment perception system comprises an ultrasonic radar 1 and a millimeter wave radar 2, arranged at the front and rear of the vehicle respectively; the ultrasonic and millimeter wave radars are used for obstacle detection and identification.
The laser radar 6 is fixed on top of the vehicle body 7 and is used to detect and identify obstacles, learn and predict their relevant attributes, and segment the obstacles according to those attributes. The lateral cameras 4 are arranged at the four corners of the vehicle body; coordinate-system calibration and rectification are completed by computer vision methods (see the calibration sketch below), and the cameras detect obstacle conditions on both sides of the vehicle body.
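The coordinate-system calibration and rectification mentioned for the lateral cameras is commonly done with a checkerboard target; the sketch below uses OpenCV's standard routine, with the board geometry, square size and file name pattern as hypothetical assumptions.

```python
import glob
import cv2
import numpy as np

pattern = (9, 6)                            # inner checkerboard corners (assumed)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * 0.025  # 25 mm squares

obj_points, img_points, image_size = [], [], None
for path in glob.glob("side_cam_*.png"):    # hypothetical calibration shots
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)
        image_size = gray.shape[::-1]

if image_size is not None:
    # Intrinsics K and distortion coefficients, used to rectify side views
    # before projecting detections into the vehicle coordinate system.
    ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_points, img_points, image_size, None, None)
```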
The forward camera 3 is fixed below the laser radar and above the front windshield and is used for traffic light detection and identification, pedestrian detection, lane line detection and the like. The rearward camera 5 is fixed above the rear windshield and observes whether obstacles or pedestrians behind the vehicle affect driving.
The computer controller 9 is located inside the unmanned vehicle so that it is not damaged. The computer controller includes an inertial measurement unit 8 for measuring the attitude angles and acceleration of the unmanned vehicle, thereby improving reliability.
The central processing unit and graphics processor 10 are located inside the vehicle, belong to the computer controller, and are the brain of the unmanned vehicle. Radar data, image data and the like are processed by the processor, which sends control instructions to the decision output system.
The ultrasonic radar 1 and the millimeter wave radar 2 transmit point cloud data and the like to the central processing unit and graphics processor 10, where obstacle detection, segmentation and identification are completed by a deep learning method.
The camera 3 transmits the captured images to the graphics processor 10, which obtains the precise traffic light frame position by selecting an enlarged region of interest and running traffic light detection within it, and performs color identification on the traffic light according to the frame position to obtain its current state. After the single-frame traffic light state is obtained, the final state of the traffic light is further confirmed through a time-sequence filtering correction algorithm.
The environment perception system transmits the data to the central processing unit 10, which calculates the minimum safe distance for avoiding collision.
The computer controller 9 obtains the vehicle's angular velocity, speed and the like through the unmanned vehicle dynamics model calculation, and tells the vehicle body 7 which instructions the unmanned vehicle should execute.
The two-degree-of-freedom unmanned vehicle dynamics model may be written as:
$$(k_1 + k_2)\beta + \frac{1}{u}(a k_1 - b k_2)\, w_r - k_1 \delta = m(\dot{v} + u w_r)$$

$$(a k_1 - b k_2)\beta + \frac{1}{u}(a^2 k_1 + b^2 k_2)\, w_r - a k_1 \delta = I_z \dot{w}_r$$

wherein $k_1, k_2$ are the cornering stiffnesses of the front and rear wheels, $m$ is the total mass of the vehicle, $a$ is the distance from the vehicle's center of gravity to the front axle, $b$ is the distance from the center of gravity to the rear axle, $\delta$ is the front wheel turning angle, and $I_z$ is the yaw moment of inertia of the vehicle body; $\beta$ denotes the centroid sideslip angle, $w_r$ the yaw rate, $u$ the forward speed, $\dot{v} + u w_r$ the lateral acceleration, and $\dot{w}_r$ the yaw angular acceleration.
In the description of the present application, it is to be understood that the terms "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", and the like indicate orientations or positional relationships based on those shown in the drawings, and are only for convenience in describing the present application and simplifying the description, but do not indicate or imply that the referred device or element must have a specific orientation, be constructed in a specific orientation, and be operated, and thus, should not be construed as limiting the present application.
Those skilled in the art will appreciate that, in addition to implementing the systems, apparatus, and various modules thereof provided by the present invention in purely computer readable program code, the same procedures can be implemented entirely by logically programming method steps such that the systems, apparatus, and various modules thereof are provided in the form of logic gates, switches, application specific integrated circuits, programmable logic controllers, embedded microcontrollers and the like. Therefore, the system, the device and the modules thereof provided by the present invention can be considered as a hardware component, and the modules included in the system, the device and the modules thereof for implementing various programs can also be considered as structures in the hardware component; modules for performing various functions may also be considered to be both software programs for performing the methods and structures within hardware components.
The foregoing description of specific embodiments of the present invention has been presented. It is to be understood that the present invention is not limited to the specific embodiments described above, and that various changes or modifications may be made by one skilled in the art within the scope of the appended claims without departing from the spirit of the invention. The embodiments and features of the embodiments of the present application may be combined with each other arbitrarily without conflict.

Claims (10)

1. A control system for an unmanned vehicle in a dynamic complex environment, comprising:
module M1: point cloud data from the environment perception system is transmitted to a computer controller, which completes obstacle detection, segmentation and identification through a deep learning method;
module M2: images captured by the environment perception system are transmitted to the computer controller to obtain the precise position of the traffic light frame, and the color of the traffic light is identified from the frame position;
module M3: the environment perception system transmits the obstacle detection and identification data and the traffic light frame position and color data to the computer controller, which calculates the minimum safe distance for avoiding collision;
module M4: from the calculated minimum safe distance for avoiding collision, the computer controller calculates the target angular velocity and speed of the vehicle;
module M5: the target speed and angular velocity of the vehicle are input to the computer controller, which generates an obstacle avoidance path from the minimum safe distance through an unmanned vehicle dynamics model, so that the vehicle autonomously completes the driving task and reaches the target point;
wherein the unmanned vehicle dynamics model covers the dynamic performance, braking performance, ride smoothness and stability of the vehicle, and characterizes the relation between the vehicle's mass and force conditions and its motion.
2. The control system of the unmanned vehicle in the dynamic complex environment of claim 1, wherein the module M1 comprises:
the environment perception system comprises an ultrasonic radar (1), a millimeter wave radar (2), a laser radar (6) and cameras;
the computer controller (9) comprises an inertial measurement unit (8) and a central processing unit and graphics processor (10);
the cameras comprise a forward camera (3), lateral cameras (4) and a rearward camera (5);
the laser radar (6) is mounted on the vehicle roof;
module M1.1: based on the point cloud data of the laser radar (6), the central processing unit and graphics processor (10) learn point cloud features through a convolutional neural network model, predict the relevant attributes of obstacles, and segment the obstacles according to those attributes, thereby detecting and identifying the obstacles;
module M1.2: based on the point cloud data of the millimeter wave radar (2), the central processing unit and graphics processor (10) process the data to detect and identify obstacles;
module M1.3: the central processing unit and graphics processor (10) fuse the obstacle recognition results of the laser radar (6) and the millimeter wave radar (2) through the laser radar fusion algorithm to obtain accurately detected obstacle positions for the unmanned vehicle;
the laser radar fusion algorithm mainly performs management and matching of single-sensor results against fusion results, and obstacle speed fusion based on Kalman filtering.
3. The control system of the unmanned vehicle in the dynamic complex environment of claim 2, wherein the inertial measurement unit (8) measures the angular velocity and acceleration of the unmanned vehicle and is mounted near the center of gravity of the unmanned vehicle.
4. The control system of the unmanned vehicle in the dynamic complex environment of claim 1, wherein the module M2 comprises:
the environment perception system comprises an ultrasonic radar (1), a millimeter wave radar (2), a laser radar (6) and cameras;
the cameras comprise a forward camera (3), a preset number of lateral cameras (4) and a rearward camera (5);
the laser radar (6) is mounted on the vehicle roof;
a region of interest enlarged beyond the projection area is selected from the camera image in the environment perception system, traffic light detection is run to obtain the precise traffic light frame position, and color identification is performed on the traffic light according to the frame position to obtain the current state of the traffic light; the final state of the traffic light is then confirmed from the single-frame states through a time-sequence filtering correction algorithm.
5. The control system of the unmanned vehicle in the dynamic complex environment of claim 1, wherein the module M3 comprises: calculating the minimum safe distance required for the unmanned vehicle to face obstacles in the complex environment by comparing the vehicle's own information with external obstacle information;
the vehicle's own information comprises the vehicle's size information and dynamics model information; the dynamics model information comprises the vehicle's acceleration, speed and angular velocity; loop-closure detection is completed by comparing the image information currently captured by the camera with that of the next frame, realizing mapping and localization of the vehicle;
the external obstacle information comprises the obstacle detection and identification results of the environment perception system.
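A minimal sketch of one plausible minimum-safe-distance calculation from the vehicle's own state and a detected obstacle; the reaction time, braking deceleration and margin are illustrative assumptions, not values from the patent.

    def min_safe_distance(v_ego, v_obstacle, a_brake=6.0, t_react=0.3,
                          margin=2.0):
        """Distance covered during the control delay plus the difference
        of the two braking distances, plus a fixed size/comfort margin.
        All quantities in SI units (m, m/s, m/s^2)."""
        d_react = v_ego * t_react
        d_ego = v_ego ** 2 / (2.0 * a_brake)
        d_obs = v_obstacle ** 2 / (2.0 * a_brake)
        return max(margin, d_react + d_ego - d_obs + margin)

    print(min_safe_distance(v_ego=15.0, v_obstacle=5.0))  # about 23.2 m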
6. The control system of the unmanned vehicle in the dynamic complex environment of claim 1, wherein the unmanned vehicle dynamics model in the module M5 comprises:
$m u (\dot{\beta} + w_r) = (k_1 + k_2)\beta + \frac{1}{u}(a k_1 - b k_2) w_r - k_1 \delta$

$I_z \dot{w}_r = (a k_1 - b k_2)\beta + \frac{1}{u}(a^2 k_1 + b^2 k_2) w_r - a k_1 \delta$

wherein $k_1, k_2$ are the cornering stiffnesses of the front and rear wheels, $m$ is the total mass of the vehicle, $a$ is the distance from the vehicle's center of gravity to the front axle, $b$ is the distance from the center of gravity to the rear axle, $\delta$ is the front wheel steering angle, $I_z$ is the yaw moment of inertia of the vehicle body, $\beta$ is the centroid sideslip angle, $w_r$ is the yaw rate, $u$ is the forward speed, $\dot{\beta}$ denotes the sideslip angular velocity (the lateral acceleration term), and $\dot{w}_r$ denotes the yaw angular acceleration.
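For reference, a minimal numerical sketch follows. It assumes the equations above are the standard linear two-degree-of-freedom vehicle model, consistent with the variable definitions in the claim and with the cited Automobile Theory text; all parameter values are illustrative, not taken from the patent.

    m, Iz = 1500.0, 2500.0  # total mass (kg), yaw moment of inertia (kg m^2)
    a, b = 1.2, 1.4         # center of gravity to front / rear axle (m)
    k1 = k2 = -6.0e4        # cornering stiffnesses (N/rad); negative by the
                            # sign convention of the cited vehicle theory text
    u = 15.0                # forward speed (m/s)

    def derivatives(beta, wr, delta):
        """The two equations solved for the sideslip angular velocity
        beta_dot and the yaw angular acceleration wr_dot."""
        beta_dot = ((k1 + k2) * beta + (a * k1 - b * k2) * wr / u
                    - k1 * delta) / (m * u) - wr
        wr_dot = ((a * k1 - b * k2) * beta
                  + (a ** 2 * k1 + b ** 2 * k2) * wr / u
                  - a * k1 * delta) / Iz
        return beta_dot, wr_dot

    beta, wr, dt = 0.0, 0.0, 0.001
    for _ in range(2000):               # 2 s of a 0.02 rad step steer input
        db, dw = derivatives(beta, wr, delta=0.02)
        beta, wr = beta + db * dt, wr + dw * dt
    print(f"sideslip angle {beta:.4f} rad, yaw rate {wr:.4f} rad/s")

Forward Euler integration converges here because the model's eigenvalues are well inside the stability region at this step size; a planner built on this model would query such a rollout when shaping the obstacle avoidance path.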
7. The control system of the unmanned vehicle in a dynamic complex environment of claim 1, wherein the module M5 further comprises: a behavior decision output system and a vehicle control system;
the behavior decision output system comprises an electrically powered chassis and an electronic control system; the chassis controls the wheel speeds and the steering of the unmanned vehicle according to the angular velocity and speed calculated by the computer controller; when the environment perception system detects an obstacle ahead, the computer controller issues a command to the chassis to avoid the obstacle or to stop at a steady deceleration;
the vehicle control system tracks the target path using a PID control algorithm, ensuring that the vehicle follows the given path waypoints.
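A minimal sketch of the PID tracking loop named in the claim, driving the cross-track error to the given path points toward zero; the gains, sample time and error interface are illustrative assumptions.

    class PID:
        """Textbook PID acting on the cross-track error to the path."""
        def __init__(self, kp, ki, kd, dt):
            self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
            self.integral, self.prev_err = 0.0, 0.0

        def step(self, err):
            self.integral += err * self.dt
            derivative = (err - self.prev_err) / self.dt
            self.prev_err = err
            return (self.kp * err + self.ki * self.integral
                    + self.kd * derivative)

    pid = PID(kp=0.8, ki=0.05, kd=0.2, dt=0.05)
    for cross_track_error in [1.0, 0.7, 0.4, 0.2]:  # meters off the path
        steer_cmd = pid.step(cross_track_error)     # steering correction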
8. A control method of an unmanned vehicle in a dynamic complex environment is characterized by comprising the following steps:
step M1: the point cloud data of the environment perception system is transmitted to the computer controller, and obstacle detection, segmentation and identification are completed through a deep learning method;
step M2: the images captured by the environment perception system are transmitted to the computer controller to obtain the accurate traffic light box position, and the traffic light color is identified according to the box position;
step M3: the environment perception system transmits the obstacle detection and identification data and the traffic light box position and color identification data to the computer controller, which calculates the minimum safe distance for avoiding collision;
step M4: according to the calculated minimum safe distance for avoiding collision, the computer controller calculates the target angular velocity and speed of the vehicle;
step M5: the target speed and angular velocity of the vehicle are input into the computer controller, which generates an obstacle avoidance path from the minimum safe distance through the unmanned vehicle dynamics model, so that the vehicle autonomously completes the driving task and reaches the target point;
the unmanned vehicle dynamics model covers the vehicle's driving dynamics, braking performance, ride comfort and handling stability, and characterizes the relation between the vehicle's mass, the forces acting on it, and its resulting motion.
9. The method for controlling an unmanned vehicle in a dynamic complex environment according to claim 8, wherein the step M1 comprises:
the environment perception system comprises an ultrasonic radar (1), a millimeter wave radar (2), a laser radar (6) and a camera;
the computer controller (9) comprises an inertial measurement unit (8), a central processing unit and a graphics processing unit (10);
the cameras comprise a forward facing camera (3), a side facing camera (4) and a rearward facing camera (5);
the laser radar (6) is arranged on the roof of the vehicle;
step M1.1: based on the point cloud data of the laser radar (6), the central processing unit and graphics processing unit (10) learn point cloud features through a convolutional neural network model, predict obstacle attributes, and segment obstacles according to those attributes, thereby detecting and identifying the obstacles;
step M1.2: based on the point cloud data of the millimeter wave radar (2), the central processing unit and graphics processing unit (10) process the point cloud data to detect and identify obstacles;
step M1.3: the central processing unit and graphics processing unit (10) fuse the obstacle recognition results of the laser radar (6) and the millimeter wave radar (2) through a lidar fusion algorithm, so that the unmanned vehicle obtains accurate obstacle positions;
the lidar fusion algorithm mainly manages and matches the single-sensor results against the fused result, and fuses obstacle velocities based on Kalman filtering;
the inertial measurement unit (8) measures the angular velocity and the acceleration of the unmanned vehicle and is mounted near the center of gravity of the unmanned vehicle.
10. The method for controlling an unmanned vehicle in a dynamic complex environment according to claim 8, wherein the step M2 comprises:
the environment perception system comprises an ultrasonic radar (1), a millimeter wave radar (2), a laser radar (6) and a camera;
the cameras comprise a forward camera (3), a preset number of lateral cameras (4) and a backward camera (5);
the laser radar (6) is arranged on the roof of the vehicle;
a region of interest extending beyond the projected area is selected in the images from the cameras of the environment perception system; traffic light detection is run to obtain an accurate traffic light box position, and color recognition is performed on the traffic light according to the box position to obtain the current traffic light state; the final traffic light state is then confirmed from the single-frame states through a time-series filtering correction algorithm;
the step M3 includes: calculating the minimum safe distance required for the unmanned vehicle to face obstacles in the complex environment by comparing the vehicle's own information with external obstacle information;
the vehicle's own information comprises the vehicle's size information and dynamics model information; the dynamics model information comprises the vehicle's acceleration, speed and angular velocity; loop-closure detection is completed by comparing the image information currently captured by the camera with that of the next frame, realizing mapping and localization of the vehicle;
the external obstacle information comprises the obstacle detection and identification results of the environment perception system;
the unmanned vehicle dynamics model in the step M5 includes:
$m u (\dot{\beta} + w_r) = (k_1 + k_2)\beta + \frac{1}{u}(a k_1 - b k_2) w_r - k_1 \delta$

$I_z \dot{w}_r = (a k_1 - b k_2)\beta + \frac{1}{u}(a^2 k_1 + b^2 k_2) w_r - a k_1 \delta$

wherein $k_1, k_2$ are the cornering stiffnesses of the front and rear wheels, $m$ is the total mass of the vehicle, $a$ is the distance from the vehicle's center of gravity to the front axle, $b$ is the distance from the center of gravity to the rear axle, $\delta$ is the front wheel steering angle, $I_z$ is the yaw moment of inertia of the vehicle body, $\beta$ is the centroid sideslip angle, $w_r$ is the yaw rate, $u$ is the forward speed, $\dot{\beta}$ denotes the sideslip angular velocity (the lateral acceleration term), and $\dot{w}_r$ denotes the yaw angular acceleration;
the step M5 further includes: a behavior decision output system and a vehicle control system;
the behavior decision output system comprises an electrically powered chassis and an electronic control system; the chassis controls the wheel speeds and the steering of the unmanned vehicle according to the angular velocity and speed calculated by the computer controller; when the environment perception system detects an obstacle ahead, the computer controller issues a command to the chassis to avoid the obstacle or to stop at a steady deceleration;
the vehicle control system tracks the target path using a PID control algorithm, ensuring that the vehicle follows the given path waypoints.
CN202011119650.XA 2020-10-19 2020-10-19 Control method and system for unmanned vehicle in dynamic complex environment Pending CN112068574A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011119650.XA CN112068574A (en) 2020-10-19 2020-10-19 Control method and system for unmanned vehicle in dynamic complex environment

Publications (1)

Publication Number Publication Date
CN112068574A true CN112068574A (en) 2020-12-11

Family

ID=73655332

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011119650.XA Pending CN112068574A (en) 2020-10-19 2020-10-19 Control method and system for unmanned vehicle in dynamic complex environment

Country Status (1)

Country Link
CN (1) CN112068574A (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104182991A (en) * 2014-08-15 2014-12-03 辽宁工业大学 Vehicle running state estimation method and vehicle running state estimation device
CN107272687A (en) * 2017-06-29 2017-10-20 深圳市海梁科技有限公司 A kind of driving behavior decision system of automatic Pilot public transit vehicle
CN107817798A (en) * 2017-10-30 2018-03-20 洛阳中科龙网创新科技有限公司 A kind of farm machinery barrier-avoiding method based on deep learning system
CN107867290A (en) * 2017-11-07 2018-04-03 长春工业大学 A kind of automobile emergency collision avoidance layer-stepping control method for considering moving obstacle
CN108196535A (en) * 2017-12-12 2018-06-22 清华大学苏州汽车研究院(吴江) Automated driving system based on enhancing study and Multi-sensor Fusion
CN108225364A (en) * 2018-01-04 2018-06-29 吉林大学 A kind of pilotless automobile driving task decision system and method
CN108519773A (en) * 2018-03-07 2018-09-11 西安交通大学 The paths planning method of automatic driving vehicle under a kind of structured environment
CN110488805A (en) * 2018-05-15 2019-11-22 武汉小狮科技有限公司 A kind of unmanned vehicle obstacle avoidance system and method based on 3D stereoscopic vision
CN109649390A (en) * 2018-12-19 2019-04-19 清华大学苏州汽车研究院(吴江) A kind of autonomous follow the bus system and method for autonomous driving vehicle
CN110614998A (en) * 2019-08-21 2019-12-27 南京航空航天大学 Aggressive driving-assisted curve obstacle avoidance and road changing path planning system and method

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Zeng Shifeng et al., "ROS-based unmanned intelligent vehicle", Internet of Things Technologies (《物联网技术》) *
Yang Wanfu et al., Automobile Theory (《汽车理论》), 31 August 2010, Guangzhou: South China University of Technology Press *
Chen Qingzhang et al., "Research on optimal control of automobile four-wheel steering", Journal of Changshu Institute of Technology (Natural Sciences) (《常熟理工学院学报(自然科学)》) *

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112817315A (en) * 2020-12-31 2021-05-18 江苏集萃智能制造技术研究所有限公司 Obstacle avoidance method and system for unmanned cleaning vehicle in dynamic environment
CN112823377A (en) * 2021-01-14 2021-05-18 深圳市锐明技术股份有限公司 Road edge segmentation method and device, terminal equipment and readable storage medium
CN112823377B (en) * 2021-01-14 2024-02-09 深圳市锐明技术股份有限公司 Road edge segmentation method and device, terminal equipment and readable storage medium
WO2022151147A1 (en) * 2021-01-14 2022-07-21 深圳市锐明技术股份有限公司 Curb segmentation method and apparatus, and terminal device and readable storage medium
CN113341697B (en) * 2021-06-11 2022-12-09 常州工程职业技术学院 Separated vehicle moving robot cooperative control method capable of accurately extracting vehicle center
CN113341697A (en) * 2021-06-11 2021-09-03 常州工程职业技术学院 Separated vehicle moving robot cooperative control method capable of accurately extracting vehicle center
CN113515813A (en) * 2021-07-16 2021-10-19 长安大学 On-site verification method for simulation reliability of automobile dynamics simulation software
CN113635893A (en) * 2021-07-16 2021-11-12 安徽工程大学 Urban intelligent traffic-based unmanned vehicle steering control method
CN113515813B (en) * 2021-07-16 2023-03-14 长安大学 On-site verification method for simulation reliability of automobile dynamics simulation software
CN113619605A (en) * 2021-09-02 2021-11-09 盟识(上海)科技有限公司 Automatic driving method and system for underground mining articulated vehicle
CN113619605B (en) * 2021-09-02 2022-10-11 盟识(上海)科技有限公司 Automatic driving method and system for underground mining articulated vehicle
CN113884090A (en) * 2021-09-28 2022-01-04 中国科学技术大学先进技术研究院 Intelligent platform vehicle environment sensing system and data fusion method thereof
CN113895543A (en) * 2021-10-09 2022-01-07 西安电子科技大学 Intelligent unmanned vehicle driving system based on park environment
CN115145272A (en) * 2022-06-21 2022-10-04 大连华锐智能化科技有限公司 Coke oven vehicle environment sensing system and method
CN115145272B (en) * 2022-06-21 2024-03-29 大连华锐智能化科技有限公司 Coke oven vehicle environment sensing system and method

Similar Documents

Publication Publication Date Title
CN112068574A (en) Control method and system for unmanned vehicle in dynamic complex environment
US11042157B2 (en) Lane/object detection and tracking perception system for autonomous vehicles
Levinson et al. Towards fully autonomous driving: Systems and algorithms
Liu et al. The role of the Hercules autonomous vehicle during the COVID-19 pandemic: An autonomous logistic vehicle for contactless goods transportation
CN111873995A (en) System and method for automatically driving on-off ramps on highway
De Lima et al. Navigation of an autonomous car using vector fields and the dynamic window approach
CN113071518B (en) Automatic unmanned driving method, minibus, electronic equipment and storage medium
CN111208814B (en) Memory-based optimal motion planning for an automatic vehicle using dynamic models
US10871777B2 (en) Autonomous vehicle sensor compensation by monitoring acceleration
US20190163201A1 (en) Autonomous Vehicle Sensor Compensation Using Displacement Sensor
US11718290B2 (en) Methods and systems for safe out-of-lane driving
WO2021153176A1 (en) Autonomous movement device, autonomous movement control method, and program
JP7376682B2 (en) Object localization for autonomous driving using visual tracking and image reprojection
US20230111354A1 (en) Method and system for determining a mover model for motion forecasting in autonomous vehicle control
US20210389770A1 (en) Methods and systems for performing inter-trajectory re-linearization about an evolving reference path for an autonomous vehicle
Liu et al. Hercules: An autonomous logistic vehicle for contact-less goods transportation during the COVID-19 outbreak
Nahavandi et al. Autonomous convoying: A survey on current research and development
US11794811B2 (en) Determining estimated steering data for a vehicle
EP4141482A1 (en) Systems and methods for validating camera calibration in real-time
Pagire et al. Autonomous Vehicle using Computer Vision and LiDAR
Tian et al. Autonomous formula racecar: Overall system design and experimental validation
WO2019176278A1 (en) Information processing device, information processing method, program, and mobile body
Buyval et al. The architecture of the self-driving car project at Innopolis University
US20230252638A1 (en) Systems and methods for panoptic segmentation of images for autonomous driving
US20230415736A1 (en) Systems and methods for controlling longitudinal acceleration based on lateral objects

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination