WO2018161278A1 - Driverless automobile system and control method thereof, and automobile - Google Patents

Driverless automobile system and control method thereof, and automobile

Publication number
WO2018161278A1
WO2018161278A1 (PCT/CN2017/075983)
Authority
WO
WIPO (PCT)
Prior art keywords
information
obstacle
driverless
visual
radar
Prior art date
Application number
PCT/CN2017/075983
Other languages
French (fr)
Chinese (zh)
Inventor
邱纯鑫 (Qiu Chunxin)
刘乐天 (Liu Letian)
Original Assignee
深圳市速腾聚创科技有限公司 (Shenzhen Suteng Juchuang Technology Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市速腾聚创科技有限公司
Priority to PCT/CN2017/075983
Publication of WO2018161278A1

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60W: CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W30/00: Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
    • B60W40/00: Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub-unit, e.g. by using mathematical models
    • B60W40/02: Estimation or calculation of non-directly measurable driving parameters related to ambient conditions
    • B60W40/10: Estimation or calculation of non-directly measurable driving parameters related to vehicle motion
    • B60W40/105: Speed

Definitions

  • the invention relates to the technical field of automobiles, and in particular to a driverless car system, a control method thereof, and an automobile.
  • current self-driving car technology basically provides automatic operation and driving capability.
  • advanced instruments such as cameras, radar sensors and laser detectors are installed on the car to sense road speed limits and roadside traffic signs, and the car navigates to its destination using a map.
  • the driverless system mainly uses on-board sensors to perceive the surrounding environment of the vehicle, and controls the steering and speed of the vehicle according to the road, vehicle position and obstacle information obtained through perception, so that the vehicle can travel safely and reliably on the road.
  • a driverless car is a kind of smart car that mainly relies on an in-vehicle, computer-based intelligent pilot to realize driverless operation.
  • the difficulty for a driverless system lies in distinguishing roadside traffic signs from the surrounding environment; poor discrimination may result in inaccurate data being collected by the driverless system.
  • a driverless car system comprising:
  • an environment sensing subsystem configured to collect vehicle information and surrounding environment information of the driverless car, the surrounding environment information including image information of the surrounding environment and three-dimensional coordinate information;
  • a data fusion subsystem for fusing the image information and three-dimensional coordinate information of the surrounding environment and extracting obstacle identification information;
  • a path planning decision subsystem for planning a travel route based on the vehicle information, the obstacle identification information, and travel destination information; and
  • a travel control subsystem configured to generate a control command according to the travel route and control the driverless car according to the control command.
  • the above-mentioned driverless car system fuses surrounding environment information, including image information and three-dimensional coordinate information, through the data fusion subsystem, and extracts obstacle information, lane line information, traffic sign information, and tracking information of dynamic obstacles, thereby improving the recognition ability and accuracy for the surrounding environment information.
  • the route planning decision subsystem plans the travel route according to the information extracted by the data fusion subsystem and the travel destination information, and the travel control subsystem generates a control command according to the travel route and controls the driverless car accordingly, thereby realizing driverless operation with extremely high safety.
  • a car including the above-described driverless car system.
  • a control method for a driverless car, including:
  • Collecting vehicle information and surrounding environment information of the driverless car, the surrounding environment information including image information of the surrounding environment and three-dimensional coordinate information;
  • Fusing the image information and three-dimensional coordinate information of the surrounding environment and extracting obstacle identification information;
  • Planning a travel route based on the vehicle information, the obstacle identification information, and travel destination information; and
  • Generating a control command according to the travel route, and controlling the driverless car according to the control command.
  • FIG. 1 is a structural block diagram of a driverless car system in an embodiment;
  • FIG. 2 is a structural diagram of an environment sensing subsystem in an embodiment;
  • FIG. 3 is a structural diagram of a data fusion subsystem in an embodiment;
  • FIG. 4 is a flow chart of a control method of a driverless car in an embodiment.
  • A driverless car system includes an environment sensing subsystem 10, a data fusion subsystem 20, a path planning decision subsystem 30, and a travel control subsystem 40.
  • the environment sensing subsystem 10 is configured to collect vehicle information and surrounding environment information of the driverless vehicle, wherein the surrounding environment information includes image information of the surrounding environment and three-dimensional coordinate information.
  • the data fusion subsystem 20 is configured to fuse the image information and the three-dimensional coordinate information of the surrounding environment and extract the obstacle identification information.
  • the path planning decision subsystem 30 is configured to plan a travel route based on the vehicle information, the obstacle identification information, and the travel destination information.
  • the travel control subsystem 40 is configured to generate a control command according to the travel route, and control the driverless car according to the control command.
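The dataflow among the four subsystems described above can be sketched in code. Every class name, function body, and field below is an illustrative assumption; the patent specifies no implementation.

```python
from dataclasses import dataclass

# Illustrative sketch of the four-subsystem dataflow; all names and the
# trivial bodies are assumptions, not taken from the patent.
@dataclass
class Environment:
    image_info: list   # camera frames from the vision sensor 110
    coord_info: list   # 3-D points from the radar 120

@dataclass
class ObstacleId:
    obstacles: list
    lane_lines: list
    traffic_signs: list
    dynamic_tracks: list

def sense():
    """Environment sensing subsystem 10: vehicle info + surroundings."""
    return {"speed": 10.0}, Environment(image_info=[], coord_info=[])

def fuse(env):
    """Data fusion subsystem 20: merge image and 3-D coordinate info."""
    return ObstacleId([], [], [], [])

def plan(vehicle, obstacles, destination):
    """Path planning decision subsystem 30: trivial one-waypoint route."""
    return [destination]

def control(route):
    """Travel control subsystem 40: turn the route into a command."""
    return {"steer": 0.0, "throttle": 0.1, "target": route[-1]}

vehicle, env = sense()
route = plan(vehicle, fuse(env), destination=(100.0, 0.0))
command = control(route)
```

The point of the sketch is only the direction of data flow: sensing feeds fusion, fusion feeds planning, and planning feeds control.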
  • the above-mentioned driverless car system integrates surrounding environment information including image information and three-dimensional coordinate information through the data fusion subsystem 20, and extracts obstacle identification information, thereby improving the recognition ability and accuracy of the surrounding environment information.
  • the route planning decision subsystem 30 plans a travel route based on the vehicle information, the obstacle identification information, and the travel destination information, and the travel control subsystem 40 generates a control command according to the travel route and controls the driverless car according to the control command, thereby achieving driverless operation with extremely high safety.
  • the environment sensing subsystem 10 includes a vision sensor 110 and a radar 120.
  • the vision sensor 110 is mainly composed of one or two image sensors, sometimes supplemented with a light projector and other auxiliary equipment.
  • the image sensor can be a laser scanner, a linear or area array CCD camera, a TV camera, or a digital camera.
  • the vision sensor 110 is installed on the driverless car to collect the surrounding environment information of the driverless car, including real-time road condition information near the driverless car: obstacle information, lane line information, traffic sign information, and dynamic tracking information of obstacles.
  • the collected surrounding environment information is image information of the surrounding environment, and may also be referred to as video information.
  • the radar 120 is used to collect three-dimensional coordinate information of the surrounding environment of the driverless car.
  • a plurality of radars 120 are included in the driverless car system, and the plurality of radars 120 include a lidar and a millimeter wave radar.
  • the lidar is a mechanical multi-beam laser radar, which detects the position and velocity of a target mainly by emitting laser beams; it can also use the echo intensity information of the laser radar to detect and track obstacles. Lidar has the advantages of a wide detection range and high detection accuracy.
  • the wavelength of the millimeter wave radar lies between the centimeter wave and the light wave, so it combines the advantages of microwave guidance and photoelectric guidance: its seeker has small volume, light weight and high spatial resolution, and a strong ability to penetrate fog, smoke and dust.
  • the simultaneous use of lidar and millimeter wave radar overcomes lidar's inability to perform in extreme weather, which can greatly improve the detection performance of the driverless car.
  • the environment sensing subsystem 10 is also used to collect vehicle information of the driverless car.
  • the vehicle information includes the current geographical location and time of the driverless car, the posture of the vehicle, and the current running speed.
  • the environment sensing subsystem 10 further includes a GPS positioning navigator 130, an inertial measurement unit (IMU) 140, and a vehicle speed acquisition module 150.
  • the GPS positioning navigator 130 collects the current geographic location and time of the driverless car. While the driverless car is driving, the global positioning unit installed in the car continuously obtains the exact position of the car, further improving safety.
  • the inertial measurement unit 140 is used to measure the vehicle attitude of the driverless car.
  • the vehicle speed acquisition module 150 is configured to acquire the current running speed of the driverless car.
  • the data fusion subsystem 20 is configured to fuse the image information and the three-dimensional coordinate information and extract the obstacle identification information, where the obstacle identification information includes lane line information, obstacle information, traffic sign information, and tracking information of dynamic obstacles.
  • the data fusion subsystem 20 includes a lane line fusion module 210, an obstacle recognition fusion module 220, a traffic sign fusion module 230, and an obstacle dynamic tracking fusion module 240.
  • the lane line fusion module 210 is configured to superimpose or exclude surrounding environment information collected by the vision sensor 110 and the radar 120, and extract lane line information.
  • the obstacle recognition fusion module 220 is configured to fuse the surrounding environment information collected by the vision sensor 110 and the radar 120, and extract obstacle information.
  • the traffic sign fusion module 230 is configured to detect the surrounding environment information collected by the vision sensor 110 and the radar 120, and extract traffic sign information.
  • the obstacle dynamic tracking fusion module 240 is configured to fuse the surrounding environment information collected by the vision sensor 110 and the radar 120, and extract tracking information of dynamic obstacles.
  • the lane line fusion module 210 includes a visual lane line detection unit 211 and a radar lane line detection unit 213.
  • the visual lane line detecting unit 211 is configured to process the image information and extract the visual lane line information.
  • the visual lane line detecting unit 211 performs preprocessing such as denoising, enhancement, and segmentation on the image information acquired by the visual sensor 110, and extracts visual lane line information.
  • the radar lane line detecting unit 213 is configured to extract the road surface information of the road on which the driverless car travels, and to acquire the lane outer contour information according to the road surface information.
  • the radar lane line detecting unit 213 calibrates the three-dimensional coordinate information of the ground on which the driverless car travels, acquired by the lidar, and computes the discrete points in the three-dimensional coordinate information, where a discrete point can be defined as a point whose distance to an adjacent point is greater than a preset range.
  • the discrete points are filtered out, and the position information of the ground is fitted by the random sample consensus (RANSAC) method to obtain the lane outer contour information, that is, the radar lane line information.
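The discrete-point rejection plus random sample consensus fit can be sketched as follows. This is a minimal 2-D RANSAC line fit on invented sample points; the actual system fits ground position information in 3-D lidar coordinates, and the iteration count and tolerance here are assumptions.

```python
import random

def ransac_line(points, n_iter=200, tol=0.1, seed=0):
    """Fit y = a*x + b by random sample consensus, so that discrete
    outlier points do not distort the ground fit."""
    rng = random.Random(seed)
    best_model, best_inliers = (0.0, 0.0), []
    for _ in range(n_iter):
        (x1, y1), (x2, y2) = rng.sample(points, 2)  # minimal sample: 2 points
        if x1 == x2:
            continue  # vertical pair, cannot form y = a*x + b
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        # Points close to the candidate line are inliers.
        inliers = [(x, y) for x, y in points if abs(y - (a * x + b)) < tol]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (a, b), inliers
    return best_model, best_inliers

# Ground points on y = 0.5x + 1, plus two discrete outliers.
pts = [(float(x), 0.5 * x + 1.0) for x in range(20)] + [(3.0, 9.0), (7.0, -4.0)]
(a, b), inliers = ransac_line(pts)
```

Because the consensus set is scored by inlier count, the two outliers are simply never part of the winning model.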
  • the lane line fusion module 210 fuses (superimposes) or mutually excludes the acquired visual lane line information and the lane outer contour information to obtain real-time lane line information. Through the lane line fusion module 210, the accuracy of lane line identification can be improved, and failure to acquire lane line information can be avoided.
  • the obstacle recognition fusion module 220 includes a visual obstacle recognition unit 221 and a radar obstacle recognition unit 223.
  • the visual obstacle recognition unit 221 is configured to segment the image information into background information and foreground information, and to identify the foreground information to obtain visual obstacle information having color information.
  • the visual obstacle recognition unit 221 processes the image information by pattern recognition or machine learning, uses a background update algorithm to build a background model and segment the foreground, and identifies the segmented foreground to obtain visual obstacle information having color information.
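One common background update algorithm is a running average; the patent does not name a specific one, so the sketch below, using a toy one-dimensional "image" and an invented threshold, is only an assumed example of the background-model-and-foreground-segmentation step.

```python
def update_background(bg, frame, alpha=0.1):
    """Running-average background update: blend each new frame into
    the background model with learning rate alpha."""
    return [(1 - alpha) * b + alpha * f for b, f in zip(bg, frame)]

def segment_foreground(bg, frame, thresh=20.0):
    """Mark pixels that differ strongly from the background model
    as foreground (candidate obstacles)."""
    return [abs(f - b) > thresh for b, f in zip(bg, frame)]

# Toy 1-D "image": static background of 50s, an obstacle of 200s.
bg = [50.0] * 8
frame = [50.0, 50.0, 200.0, 200.0, 200.0, 50.0, 50.0, 50.0]
mask = segment_foreground(bg, frame)   # True where the obstacle sits
bg = update_background(bg, frame)      # model slowly absorbs the scene
```

The segmented foreground mask is what would then be passed on for identification with its color information.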
  • the radar obstacle recognition unit 223 is configured to identify radar obstacle information having three-dimensional coordinate information within a first preset height range.
  • the radar obstacle recognition unit 223 preprocesses the surrounding environment information of the driverless car acquired by the lidar, removes the ground information, and filters and recognizes the three-dimensional coordinate information of the surrounding environment within the first preset height range. A region of interest (ROI) is detected based on the constraints of the lane line information, where the region of interest outlines the area to be processed in the form of a box, circle, ellipse, irregular polygon, or the like. The data of the detected region of interest is rasterized and clustered into obstacle blocks, and the original lidar point cloud data corresponding to each obstacle block is clustered a second time so that under-segmented blocks are handled.
  • the point cloud data after the secondary clustering is used as a training sample set, a classifier model is generated from the training sample set, and the trained model is then used to classify and identify the obstacle blocks after the secondary clustering and to acquire radar obstacle information having three-dimensional coordinate information.
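The rasterize-then-cluster step can be illustrated with an occupancy grid and a 4-connected flood fill. The cell size, sample points, and connectivity rule are assumptions, and the secondary clustering and classifier training are omitted for brevity.

```python
from collections import deque

def rasterize(points, cell=1.0):
    """Drop 3-D points into a 2-D grid of occupied cells
    (the 'rasterized' region-of-interest data)."""
    grid = set()
    for x, y, _z in points:
        grid.add((int(x // cell), int(y // cell)))
    return grid

def cluster_cells(grid):
    """Group occupied cells into obstacle blocks via 4-connected flood fill."""
    seen, clusters = set(), []
    for start in grid:
        if start in seen:
            continue
        block, queue = [], deque([start])
        seen.add(start)
        while queue:
            cx, cy = queue.popleft()
            block.append((cx, cy))
            for nb in ((cx + 1, cy), (cx - 1, cy), (cx, cy + 1), (cx, cy - 1)):
                if nb in grid and nb not in seen:
                    seen.add(nb)
                    queue.append(nb)
        clusters.append(block)
    return clusters

# Two spatially separated obstacle blocks inside the ROI.
pts = [(0.2, 0.3, 1.0), (1.1, 0.4, 1.2), (5.5, 5.5, 0.8), (5.7, 6.2, 0.9)]
blocks = cluster_cells(rasterize(pts))
```

Each resulting block would map back to its original point cloud for the secondary clustering and classification steps described above.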
  • the obstacle recognition fusion module 220 is configured to fuse the visual obstacle information and the radar obstacle information to acquire the obstacle information. Visual obstacle information fails in strong-light environments or scenes where the light changes rapidly, whereas the radar 120 detects obstacle information with an active light source and therefore has strong stability. When the driverless car is driving in such a scene, the obstacle recognition fusion module 220 can superimpose the visual obstacle information and the radar obstacle information, so that accurate obstacle information is obtained even under strong or rapidly changing light.
  • the obstacle information acquired by the visual obstacle recognition unit 221 contains rich red, green and blue (RGB) information at high pixel resolution, so obstacle information including both color information and three-dimensional coordinate information can be acquired at the same time. Through the obstacle recognition fusion module 220, the false recognition rate can be reduced, the recognition accuracy improved, and safe driving further ensured.
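A minimal sketch of pairing radar obstacles (3-D position) with visual obstacles (RGB color) so the fused result carries both kinds of information. Matching by nearest position along one axis, the field names, and the gap threshold are all illustrative assumptions; real systems project lidar points into the camera frame using extrinsic calibration.

```python
def fuse_obstacles(visual, radar, max_gap=0.5):
    """Attach the color of the nearest visual detection to each radar
    detection; leave color as None for radar-only detections."""
    fused = []
    for r in radar:
        best = min(visual, key=lambda v: abs(v["x"] - r["x"]))
        if abs(best["x"] - r["x"]) <= max_gap:
            fused.append({"xyz": r["xyz"], "rgb": best["rgb"]})
        else:
            fused.append({"xyz": r["xyz"], "rgb": None})  # radar-only
    return fused

visual = [{"x": 2.0, "rgb": (255, 0, 0)}, {"x": 8.0, "rgb": (0, 0, 255)}]
radar = [{"x": 2.1, "xyz": (2.1, 0.0, 1.0)}, {"x": 15.0, "xyz": (15.0, 3.0, 1.0)}]
result = fuse_obstacles(visual, radar)
```

Keeping the unmatched radar detection (rather than dropping it) reflects the point above: the radar channel stays usable when the visual channel fails.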
  • the traffic sign fusion module 230 includes a visual traffic sign detection unit 231 and a radar traffic sign detection unit 233.
  • the visual traffic sign detecting unit 231 detects the image information and extracts the visual traffic sign information.
  • the visual traffic sign detecting unit 231 detects the image information, processes the image information by means of pattern recognition or machine learning, and acquires visual traffic sign information, wherein the visual traffic sign information includes red, green and blue RGB color information.
  • the radar traffic sign detecting unit 233 is configured to extract the ground traffic sign information, and is further configured to detect suspended traffic sign information within a second preset height range.
  • the radar traffic sign detecting unit 233 extracts the traffic sign line points according to the reflection intensity gradient and then fits the ground traffic sign information (ground traffic sign lines) with a curve; it can also obtain, according to the obstacle clustering principle, targets within the second preset height range whose shape is a standard rectangle or circle, and define such targets as the suspended traffic sign information.
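Extracting sign line points by reflection intensity gradient can be sketched in one dimension: painted markings reflect far more strongly than asphalt, so large intensity jumps mark line edges. The scan values and threshold below are invented for illustration, and the subsequent curve fitting is omitted.

```python
def marking_points(samples, grad_thresh=50.0):
    """Pick positions where the lidar reflection intensity jumps
    sharply between neighboring samples (marking edges)."""
    pts = []
    for i in range(1, len(samples)):
        (_x0, i0), (x1, i1) = samples[i - 1], samples[i]
        if abs(i1 - i0) > grad_thresh:
            pts.append(x1)
    return pts

# (position, reflection intensity): asphalt ~10, painted marking ~100.
scan = [(0.0, 10.0), (0.5, 11.0), (1.0, 100.0), (1.5, 98.0), (2.0, 12.0)]
edges = marking_points(scan)
```

The two detected edges bracket the painted stripe; in the full method such points, gathered over many scan lines, are what the curve is fitted through.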
  • the traffic sign fusion module 230 is configured to determine the location of the traffic sign according to the ground traffic sign information and the suspended traffic sign information. Within the acquired location area, the category or kind of the traffic sign is identified based on the visual traffic sign information acquired by the visual traffic sign detecting unit 231.
  • the traffic sign fusion module 230 can accurately acquire various ground or suspended traffic sign information, ensuring that the driverless car drives safely under the premise of obeying traffic rules.
  • the obstacle dynamic tracking fusion module 240 includes a visual dynamic tracking unit 241 and a radar dynamic tracking unit 243.
  • the visual dynamic tracking unit 241 is configured to identify the image information, locate the dynamic obstacle across adjacent consecutive frames, and obtain the color information of the dynamic obstacle.
  • the visual dynamic tracking unit 241 processes the image information (video image) sequence by means of pattern recognition or machine learning, identifies and locates the dynamic obstacle in successive frames of the video image, and acquires the color information of the obstacle.
  • the radar dynamic tracking unit 243 is used to track three-dimensional coordinate information of the dynamic obstacle.
  • the radar dynamic tracking unit 243 combines the nearest neighbor matching algorithm and the multi-hypothesis tracking algorithm to determine, according to a target association algorithm, that obstacles in two or more adjacent frames are the same target.
  • the three-dimensional position information and speed information of the target are obtained from the measurement data of the lidar, and the associated target is then tracked.
  • a Kalman filter or particle filter algorithm can be used to filter the measured state and the predicted state of the target to obtain relatively accurate three-dimensional coordinate information of the dynamic obstacle.
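The filtering of measured versus predicted state can be shown with a scalar constant-velocity Kalman step. The noise parameters and measurements below are invented; a real tracker filters the full 3-D state of the obstacle.

```python
def kalman_step(x, v, p, z, dt=0.1, q=0.01, r=1.0):
    """One Kalman update for a constant-velocity target:
    blend the predicted position with the measurement z."""
    x_pred = x + v * dt               # predicted state
    p_pred = p + q                    # predicted covariance (+ process noise)
    k = p_pred / (p_pred + r)         # Kalman gain vs. measurement noise r
    x_new = x_pred + k * (z - x_pred) # fuse prediction and measurement
    p_new = (1 - k) * p_pred
    return x_new, p_new

# Obstacle moving ~1 m per step (v=10 m/s, dt=0.1 s), noisy position fixes.
x, v, p = 0.0, 10.0, 1.0
for z in [1.05, 2.1, 2.95]:
    x, p = kalman_step(x, v, p=p, z=z)
```

After three updates the estimate settles near the true position while the covariance shrinks, which is exactly the "measured state plus predicted state" smoothing described above.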
  • the obstacle dynamic tracking fusion module 240 is configured to fuse the color information of the dynamic obstacle and the three-dimensional coordinate information of the obstacle to obtain the tracking information of the dynamic obstacle. Visual dynamic obstacle information is easily disturbed by strong light or illumination changes and lacks precise three-dimensional coordinates of the dynamic obstacle, but it contains rich red, green and blue RGB color information. The dynamic obstacle information acquired by the lidar has no RGB color information, so when obstacles occlude each other during movement and then separate, it cannot identify which dynamic object is which; however, the dynamic obstacle information acquired by the lidar is stable.
  • the obstacle dynamic tracking fusion module 240 can fuse the color information of the dynamic obstacle acquired from the image information with the three-dimensional coordinate information of the dynamic obstacle acquired by the lidar, yielding dynamic obstacles carrying both color information and three-dimensional coordinate information and allowing accurate tracking of dynamic obstacles.
  • the path planning decision subsystem 30 is configured to plan the travel path based on the vehicle information, the obstacle identification information extracted by the data fusion subsystem 20, and the travel destination information.
  • the path planning decision subsystem 30 can plan the travel route based on the vehicle information acquired by the environment sensing subsystem 10 (the current geographic location and time of the driverless car, the vehicle posture, and the current running speed), the surrounding environment information extracted by the data fusion subsystem 20 (the obstacle information, lane line information, traffic sign information, and dynamic tracking information of obstacles), and the travel destination information of the driverless car.
  • the path planning decision subsystem 30 combines the planned travel path to determine the position of the driverless car at the next moment, and calculates the control data of the driverless car, including the angular velocity, linear speed, travel direction, and the like.
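Computing control data (linear speed, angular velocity, travel direction) from the next planned position can be sketched with a simple proportional heading controller. The gain and the fixed speed are assumptions for illustration, not the patent's control law.

```python
import math

def control_data(pose, target, speed=5.0):
    """Steer from the current pose (x, y, heading in radians) toward
    the next planned position with a proportional heading controller."""
    x, y, heading = pose
    desired = math.atan2(target[1] - y, target[0] - x)
    # Wrap the heading error into [-pi, pi) before applying the gain.
    err = (desired - heading + math.pi) % (2 * math.pi) - math.pi
    return {
        "linear": speed,            # commanded linear speed
        "angular": 1.5 * err,       # commanded angular velocity (gain 1.5)
        "direction": "left" if err > 0 else "right",
    }

# Car at the origin facing +x; next planned position up and to the right.
cmd = control_data(pose=(0.0, 0.0, 0.0), target=(10.0, 10.0))
```

The resulting dictionary is the kind of control data the travel control subsystem would translate into throttle and steering commands.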
  • the travel control subsystem 40 is configured to generate a control command based on the travel path and control the driverless vehicle in accordance with the control command.
  • the travel control subsystem 40 generates control commands based on the control data calculated by the path planning decision subsystem 30, covering the travel speed of the vehicle, the travel direction (forward, backward, left and right) and the throttle, thereby forming control of the vehicle, ensuring that the driverless car drives safely and smoothly, and realizing the driverless function.
  • the driverless vehicle system further includes a communication subsystem 50 for transmitting the travel path planned by the path planning decision subsystem 30 to the external monitoring center in real time.
  • the driving status of the driverless car is monitored by an external monitoring center.
  • the above-mentioned driverless car system fuses surrounding environment information including image information and three-dimensional coordinate information through the data fusion subsystem 20, and extracts obstacle information, lane line information, traffic sign information, and tracking information of dynamic obstacles, thereby improving the recognition ability and accuracy for the surrounding environment information. The route planning decision subsystem 30 plans the travel route according to the information extracted by the data fusion subsystem 20 and the travel destination information, and the travel control subsystem 40 generates a control command according to the travel route and controls the driverless car accordingly, thereby achieving driverless operation with extremely high safety.
  • an embodiment of the present invention also provides an automobile including the driverless car system of any of the above embodiments.
  • the data fusion subsystem 20 of the driverless car system in the automobile can fuse the surrounding environment information including the image information and the three-dimensional coordinate information, and can extract the obstacle information, lane line information, traffic sign information, and tracking information of dynamic obstacles, improving the recognition ability and accuracy for the surrounding environment information.
  • the route planning decision subsystem 30 plans the travel route according to the information extracted by the data fusion subsystem 20 and the travel destination information, and the travel control subsystem 40 generates a control command according to the travel route and controls the driverless car accordingly, thereby achieving driverless operation with extremely high safety.
  • a control method for a driverless car is also provided.
  • FIG. 4 is a flow chart of a control method of a driverless car. The control method is based on the above-described driverless car system and includes:
  • Step S410 Collect vehicle information and surrounding environment information of the driverless car.
  • the environment sensing subsystem 10 in the driverless car system includes a vision sensor 110 and a radar 120.
  • the vision sensor 110 is mainly composed of one or two image sensors, sometimes supplemented with a light projector and other auxiliary equipment.
  • the image sensor can be a laser scanner, a linear or area array CCD camera, a TV camera, or a digital camera.
  • the surrounding environment information of the driverless car is collected by the vision sensor 110 installed on the driverless car, including real-time road condition information near the driverless car: obstacle information, lane line information, traffic sign information, and dynamic tracking information of obstacles.
  • the collected surrounding environment information is image information of the surrounding environment, and may also be referred to as video information.
  • the three-dimensional coordinate information of the surrounding environment of the driverless car is collected by the radar 120.
  • a plurality of radars 120 are included in the driverless car system, and the plurality of radars 120 include a lidar and a millimeter wave radar.
  • the lidar is a mechanical multi-beam laser radar, which detects the position and velocity of a target mainly by emitting laser beams; it can also use the echo intensity information of the laser radar to detect and track obstacles. Lidar has the advantages of a wide detection range and high detection accuracy.
  • the wavelength of the millimeter wave radar lies between the centimeter wave and the light wave, so it combines the advantages of microwave guidance and photoelectric guidance: its seeker has small volume, light weight and high spatial resolution, and a strong ability to penetrate fog, smoke and dust.
  • the simultaneous use of lidar and millimeter wave radar overcomes lidar's inability to perform in extreme weather, which can greatly improve the detection performance of the driverless car.
  • Step S420 Fuse the image information and three-dimensional coordinate information of the surrounding environment and extract obstacle identification information.
  • the data fusion subsystem 20 in the driverless car system can fuse the image information and the three-dimensional coordinate information and extract the obstacle identification information, where the obstacle identification information includes lane line information, obstacle information, traffic sign information, and tracking information of dynamic obstacles.
  • the image information is subjected to pre-processing such as denoising, enhancement, and segmentation, and the visual lane line information is extracted.
  • the acquired three-dimensional coordinate information is processed to obtain the road surface information of the road on which the driverless car travels, and the lane outer contour information is obtained according to the road surface information, that is, the radar lane line information is acquired. The acquired visual lane line information and the lane outer contour information are then fused (superimposed) or mutually excluded to obtain real-time lane line information.
  • Background information and foreground information are segmented according to the image information, and the foreground information is identified to obtain visual obstacle information having color information; radar obstacle information having three-dimensional coordinate information is identified within the first preset height range; and the visual obstacle information and the radar obstacle information are fused to obtain the obstacle information, so that accurate obstacle information can be obtained even in scenes with strong light or rapidly changing light.
  • the image information is detected and the visual traffic identification information is extracted.
  • the traffic sign line points are extracted according to the reflection intensity gradient, and the ground traffic sign information (ground traffic sign lines) is fitted with a curve; targets within the second preset height range whose shape is a standard rectangle or circle can also be obtained according to the obstacle clustering principle and defined as suspended traffic sign information.
  • the location of the traffic identification information is determined according to the ground traffic identification information and the suspended traffic identification information.
  • the category or kind of the traffic identification information is identified based on the visual traffic identification information acquired by the visual traffic identification detecting unit 231.
  • the traffic sign fusion module 230 can accurately acquire various ground or suspended traffic sign information, ensuring that the driverless car drives safely under the premise of obeying traffic rules.
  • The image information is identified, dynamic obstacles are located in two adjacent consecutive frames, and the color information of each dynamic obstacle is obtained.
  • A nearest-neighbor matching algorithm is combined with a multi-hypothesis tracking algorithm to determine that the obstacles in two or more adjacent frames are the same target.
  • The three-dimensional position information and speed information of the target are obtained from the lidar measurement data, and the associated target is then tracked.
  • Kalman filtering and particle filtering algorithms can be used to filter the measured state and the predicted state of the target to obtain relatively accurate three-dimensional coordinate information of the dynamic obstacle.
  • The color information of the dynamic obstacle and the three-dimensional coordinate information of the obstacle are fused to obtain the tracking information of the dynamic obstacle.
  • Fusing the color information of a dynamic obstacle obtained from the image information with the three-dimensional coordinate information acquired by the lidar yields a dynamic obstacle description containing both color information and three-dimensional coordinate information, so the dynamic obstacle can be tracked precisely.
  • Step S430: Plan the travel route based on the vehicle information, the obstacle identification information, and the travel destination information.
  • The travel route is planned using the vehicle information acquired by the environment sensing subsystem 10 (the current geographic location and time of the driverless car, the vehicle attitude, and the current running speed), the surrounding environment information extracted by the data fusion subsystem 20 (obstacle information, lane line information, traffic sign information, and dynamic tracking information for obstacles), and the travel destination information of the driverless car.
  • Step S440: Generate a control command according to the travel route, and control the driverless car according to the control command.
  • The control command is generated from the control data calculated by the path planning decision subsystem 30; it includes control of the vehicle's travel speed, driving direction (forward, backward, left, and right), throttle, and gear position, ensuring that the driverless vehicle drives safely and smoothly and achieving unmanned operation.
  • The control method of the driverless vehicle further includes: collecting the current geographic location and time of the driverless car; measuring the vehicle attitude of the driverless car; and obtaining the current running speed of the driverless car.
  • the control method of the driverless vehicle further includes the step of collecting vehicle information of the driverless vehicle.
  • the vehicle information includes the current geographical location and time of the driverless car, the posture of the vehicle, and the current running speed.
  • the current location and time of the driverless car can be collected by the GPS positioning navigator 130.
  • The global positioning unit installed in the car obtains the car's exact position at any time, further improving safety.
  • the vehicle attitude of the driverless car is measured by the inertial measurement unit 140.
  • The current running speed of the driverless car is obtained by the vehicle speed collecting module 150.
  • The control method of the driverless vehicle further includes: transmitting the travel path planned by the path planning decision subsystem to an external monitoring center in real time.
  • the driving status of the driverless car is monitored by an external monitoring center.
  • The above control method of the driverless vehicle improves the recognition ability and accuracy for surrounding environment information by fusing the vehicle information and the surrounding environment information and extracting the obstacle identification information.
  • The travel route is planned according to the vehicle information, the obstacle identification information, and the travel destination information; a control command is generated according to the travel route, and the driverless car is controlled according to the control command, thereby achieving an unmanned driving function with extremely high safety performance.
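The Kalman filtering step mentioned above, which blends a tracked obstacle's predicted state with a new lidar measurement, can be sketched as follows. This is a minimal single-axis, constant-velocity sketch with assumed noise parameters (`dt`, `q`, `r` are invented for illustration), not the patented filter; in practice one such filter would run per coordinate axis of the obstacle.

```python
# Minimal sketch (assumed parameters, not the patented filter): a constant-
# velocity Kalman filter on one coordinate axis, blending the predicted state
# of a tracked obstacle with a new lidar position measurement.

def kalman_step(x, v, p, z, dt=0.1, q=0.01, r=0.25):
    """One predict/update cycle.
    x, v : position and velocity estimate; p : position variance
    z    : new lidar position measurement
    q, r : process and measurement noise variances (assumed values)."""
    # Predict: move the state forward with the motion model.
    x_pred = x + v * dt
    p_pred = p + q
    # Update: blend prediction and measurement by the Kalman gain.
    k = p_pred / (p_pred + r)
    x_new = x_pred + k * (z - x_pred)
    v_new = v + k * (z - x_pred) / dt      # crude velocity correction
    p_new = (1.0 - k) * p_pred
    return x_new, v_new, p_new

x, v, p = 0.0, 1.0, 1.0
for z in [0.12, 0.21, 0.33, 0.41]:         # noisy positions of a moving obstacle
    x, v, p = kalman_step(x, v, p, z)
print(round(x, 3), round(p, 3))
```

The variance `p` shrinks with each update, reflecting growing confidence in the fused position estimate.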

Landscapes

  • Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Traffic Control Systems (AREA)

Abstract

Provided is a driverless automobile system, comprising: an environment sensing subsystem (10), used for acquiring vehicle information and surrounding-environment information, said surrounding-environment information comprising image information and three-dimensional coordinate information of the surrounding environment; a data fusion subsystem (20), used for fusing the image information and three-dimensional coordinate information of the surrounding environment and extracting obstacle identification information; a path planning decision-making subsystem (30), used for planning a travel path according to the vehicle information, obstacle identification information, and travel destination information; a travel control subsystem (40), used for generating a control instruction according to the travel path and controlling the driverless automobile according to the control instruction. The system improves the safety performance of the driverless automobile. Also disclosed are a driverless-automobile control method and an automobile.

Description

Driverless car system and control method thereof, and automobile
[Technical Field]
The present invention relates to the technical field of automobiles, and in particular to a driverless car system, a control method thereof, and an automobile.
[Background Art]
Current self-driving car technology already provides basic automatic operation and driving capability. For example, advanced instruments such as cameras, radar sensors, and laser detectors installed on a car can sense road speed limits, roadside traffic signs, and the movement of surrounding vehicles; to set off, the car need only navigate with a map. A driverless system mainly uses on-board sensors to perceive the environment around the vehicle and, based on the road, vehicle position, and obstacle information thus obtained, controls the steering and speed of the vehicle so that it can travel safely and reliably on the road.
At present, a driverless car is a kind of smart car that relies mainly on a computer-based intelligent pilot inside the car to achieve unmanned driving. A key difficulty for a driverless system, however, is its ability to recognize roadside traffic and the surrounding environment, and weakness here may lead to inaccurate data being collected by the system.
[Summary of the Invention]
In view of this, it is necessary to provide a driverless car system and an automobile that have strong recognition ability and high accuracy for surrounding environment information and can travel safely.
A driverless car system comprises:
an environment sensing subsystem, configured to collect vehicle information and surrounding environment information of the driverless car, the surrounding environment information including image information and three-dimensional coordinate information of the surrounding environment;
a data fusion subsystem, configured to fuse the image information and the three-dimensional coordinate information of the surrounding environment and extract obstacle identification information;
a path planning decision subsystem, configured to plan a travel route based on the vehicle information, the obstacle identification information, and travel destination information; and
a travel control subsystem, configured to generate a control command according to the travel route and control the driverless car according to the control command.
The above driverless car system fuses, through the data fusion subsystem, surrounding environment information including image information and three-dimensional coordinate information, and extracts obstacle information, lane line information, traffic sign information, and tracking information for dynamic obstacles, improving the recognition ability and accuracy for the surrounding environment information. The path planning decision subsystem plans a travel route based on the information extracted by the data fusion subsystem and the travel destination information; the travel control subsystem generates a control command according to the travel route and controls the driverless car according to the control command, thereby achieving an unmanned driving function with extremely high safety performance.
In addition, an automobile is provided, including the driverless car system described above.
In addition, a control method for a driverless car is provided, comprising:
collecting vehicle information and surrounding environment information of the driverless car, the surrounding environment information including image information and three-dimensional coordinate information of the surrounding environment;
fusing the image information and the three-dimensional coordinate information of the surrounding environment and extracting obstacle identification information;
planning a travel route based on the vehicle information, the obstacle identification information, and travel destination information; and
generating a control command according to the travel route and controlling the driverless car according to the control command.
[Description of the Drawings]
In order to explain the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings required by the embodiments or the prior-art description are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; those of ordinary skill in the art can obtain drawings of other embodiments from them without creative effort.
FIG. 1 is a structural block diagram of a driverless car system in one embodiment;
FIG. 2 is a structural block diagram of an environment sensing subsystem in one embodiment;
FIG. 3 is a structural block diagram of a data fusion subsystem in one embodiment;
FIG. 4 is a flow chart of a control method of a driverless car in one embodiment.
[Detailed Description]
To facilitate an understanding of the present invention, the invention is described more fully below with reference to the accompanying drawings, which show preferred embodiments of the invention. The invention may, however, be embodied in many different forms and is not limited to the embodiments described herein; rather, these embodiments are provided so that the disclosure of the present invention will be understood more thoroughly and completely.
FIG. 1 is a structural block diagram of a driverless car system in one embodiment. A driverless car system includes an environment sensing subsystem 10, a data fusion subsystem 20, a path planning decision subsystem 30, and a travel control subsystem 40. The environment sensing subsystem 10 is configured to collect vehicle information and surrounding environment information of the driverless car, the surrounding environment information including image information and three-dimensional coordinate information of the surrounding environment. The data fusion subsystem 20 is configured to fuse the image information and the three-dimensional coordinate information of the surrounding environment and extract obstacle identification information. The path planning decision subsystem 30 is configured to plan a travel route based on the vehicle information, the obstacle identification information, and travel destination information. The travel control subsystem 40 is configured to generate a control command according to the travel route and control the driverless car according to the control command.
The above driverless car system fuses, through the data fusion subsystem 20, surrounding environment information including image information and three-dimensional coordinate information and extracts obstacle identification information, improving the recognition ability and accuracy for the surrounding environment information. The path planning decision subsystem 30 plans a travel route based on the vehicle information, the obstacle identification information, and the travel destination information; the travel control subsystem 40 generates a control command according to the travel route and controls the driverless car according to the control command, thereby achieving an unmanned driving function with extremely high safety performance.
In one embodiment, referring to FIG. 2, the environment sensing subsystem 10 includes a vision sensor 110 and a radar 120. The vision sensor 110 mainly consists of one or two image sensors, sometimes supplemented by a light projector and other auxiliary equipment. The image sensor may be a laser scanner, a line-array or area-array CCD camera, a TV camera, or a more recent digital camera. The vision sensor 110 is mounted on the driverless car to collect the surrounding environment information of the car, that is, real-time road condition information near the car, including obstacle information, lane line information, traffic sign information, and dynamic tracking information for obstacles. The collected surrounding environment information is image information of the surrounding environment, which may also be called video information.
The radar 120 is configured to collect three-dimensional coordinate information of the surrounding environment of the driverless car. The driverless car system includes a plurality of radars 120. In one embodiment, the plurality of radars 120 include a lidar and a millimeter-wave radar. The lidar is a mechanical multi-beam lidar that detects the position, speed, and other characteristic quantities of a target mainly by emitting laser beams; the echo intensity information of the lidar can also be used for obstacle detection and tracking. Lidar has the advantages of a wide detection range and high detection accuracy. The wavelength of millimeter-wave radar lies between centimeter waves and light waves, combining the advantages of microwave guidance and photoelectric guidance; its seeker is small, light, and of high spatial resolution, and a millimeter-wave seeker penetrates fog, smoke, and dust well. In one example, using lidar and millimeter-wave radar together overcomes the inability of lidar to perform in extreme weather and can greatly improve the detection performance of the driverless car.
In one embodiment, the environment sensing subsystem 10 is further configured to collect vehicle information of the driverless car. The vehicle information includes the current geographic location and time of the driverless car, the vehicle attitude, and the current running speed. The environment sensing subsystem 10 further includes a GPS positioning navigator 130, an inertial measurement unit (IMU) 140, and a vehicle speed collecting module 150. The GPS positioning navigator 130 collects the current geographic location and time of the driverless car. While the driverless car is traveling, the global positioning unit installed in the car obtains the car's exact position at any time, further improving safety. The inertial measurement unit 140 is configured to measure the vehicle attitude of the driverless car. The vehicle speed collecting module 150 is configured to obtain the current running speed of the driverless car.
In one embodiment, referring to FIG. 3, the data fusion subsystem 20 is configured to fuse the image information and the three-dimensional coordinate information and extract obstacle identification information, the obstacle identification information including lane line information, obstacle information, traffic sign information, and tracking information for dynamic obstacles. The data fusion subsystem 20 includes a lane line fusion module 210, an obstacle recognition fusion module 220, a traffic sign fusion module 230, and an obstacle dynamic tracking fusion module 240.
The lane line fusion module 210 is configured to overlay or mutually exclude the surrounding environment information collected by the vision sensor 110 and the radar 120 and extract lane line information. The obstacle recognition fusion module 220 is configured to fuse the surrounding environment information collected by the vision sensor 110 and the radar 120 and extract obstacle information. The traffic sign fusion module 230 is configured to detect the surrounding environment information collected by the vision sensor 110 and the radar 120 and extract traffic sign information. The obstacle dynamic tracking fusion module 240 is configured to fuse the surrounding environment information collected by the vision sensor 110 and the radar 120 and extract tracking information for dynamic obstacles.
In one embodiment, the lane line fusion module 210 includes a visual lane line detection unit 211 and a radar lane line detection unit 213.
The visual lane line detection unit 211 is configured to process the image information and extract visual lane line information. It preprocesses the image information acquired by the vision sensor 110 with denoising, enhancement, segmentation, and similar operations, and extracts the visual lane line information.
The radar lane line detection unit 213 is configured to extract the road surface information of the road on which the driverless car travels and obtain lane outer contour information from the road surface information. When acquiring the lane outer contour information, the radar lane line detection unit 213 calibrates the three-dimensional coordinate information of the driving ground acquired by the lidar and computes the discrete points in the three-dimensional coordinate information, where a discrete point may be defined as a point whose distance to its adjacent points exceeds a preset range. The discrete points are filtered out, the position of the ground is fitted using random sample consensus (RANSAC), and the lane outer contour information, that is, the radar lane line information, is obtained.
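The random sample consensus fit described above might be sketched as follows. This is a minimal sketch with an assumed horizontal-plane ground model (z = h) and invented thresholds; the actual unit fits the full ground position, but the hypothesize-and-count consensus loop is the same.

```python
import random

# Minimal sketch (assumed thresholds, not the patented method): RANSAC
# estimating the ground from a lidar point cloud. The ground is modelled as a
# horizontal plane z = h for simplicity.

def ransac_ground_height(points, iterations=100, threshold=0.05, seed=0):
    rng = random.Random(seed)
    best_h, best_inliers = None, -1
    for _ in range(iterations):
        h = rng.choice(points)[2]                        # hypothesis from one sample
        inliers = [p for p in points if abs(p[2] - h) <= threshold]
        if len(inliers) > best_inliers:
            best_inliers = len(inliers)
            # refine: average the height of all consenting points
            best_h = sum(p[2] for p in inliers) / len(inliers)
    return best_h

cloud = [(x * 0.1, 0.0, 0.0) for x in range(50)]         # ground points at z = 0
cloud += [(1.0, 2.0, 1.5), (1.2, 2.0, 1.6)]              # obstacle points (outliers)
print(abs(ransac_ground_height(cloud)) < 0.01)           # → True
```

Because most samples land on genuine ground points, the obstacle outliers never win the consensus vote, which is what makes RANSAC robust to the discrete points mentioned above.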
The lane line fusion module 210 fuses (overlays) or mutually excludes the acquired visual lane line information and lane outer contour information to obtain real-time lane line information. The lane line fusion module 210 improves the accuracy of lane line recognition and avoids missed detections of lane line information.
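The overlay-or-exclude fusion of the two lane line sources might be sketched as follows, under the simplifying assumption that both sources report the lane line's lateral offset (in metres) at the same longitudinal sample positions; the agreement tolerance is invented for illustration.

```python
# Minimal sketch (not the patented implementation): fusing visual lane line
# estimates with radar-derived lane contour estimates sampled at the same
# longitudinal positions. None marks a missed detection at that sample.

def fuse_lane_lines(visual, radar, tolerance=0.3):
    """Overlay the two estimates where they agree within `tolerance`;
    fall back to whichever source survives where one is missing."""
    fused = []
    for v, r in zip(visual, radar):
        if v is None and r is None:
            fused.append(None)               # no information at this sample
        elif v is None:
            fused.append(r)                  # radar only
        elif r is None:
            fused.append(v)                  # vision only
        elif abs(v - r) <= tolerance:
            fused.append((v + r) / 2.0)      # overlay: average consistent estimates
        else:
            fused.append(r)                  # disagreement: exclude the outlier
    return fused

visual = [1.70, 1.72, None, 1.69]
radar  = [1.68, 2.50, 1.71, None]
print(fuse_lane_lines(visual, radar))
```

Either source alone can miss a lane line (vision in glare, radar on worn markings); combining them this way is what avoids the missed detections noted above.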
In one embodiment, the obstacle recognition fusion module 220 includes a visual obstacle recognition unit 221 and a radar obstacle recognition unit 223. The visual obstacle recognition unit 221 is configured to segment background information and foreground information from the image information and identify the foreground information to obtain visual obstacle information carrying color information. The visual obstacle recognition unit 221 processes the image information by pattern recognition or machine learning, builds a background model with a background update algorithm, segments out the foreground, and identifies the segmented foreground to obtain visual obstacle information with color information.
The radar obstacle recognition unit 223 is configured to identify radar obstacle information with three-dimensional coordinate information within a first preset height range.
The radar obstacle recognition unit 223 preprocesses the surrounding environment information acquired by the lidar, removes ground information, and filters out the three-dimensional coordinate information of the surrounding environment within the first preset height range. A region of interest (ROI) is detected under the constraint of the lane line information, the ROI outlining the area to be processed as a box, circle, ellipse, irregular polygon, or the like. The data of the identified ROI is rasterized and segmented into obstacle blocks by clustering. The original lidar point cloud data corresponding to each obstacle block is clustered a second time to prevent under-segmentation. The point cloud data from the secondary clustering serves as a training sample set from which a classifier model is generated; the trained model is then used to classify the obstacle blocks after secondary clustering and obtain radar obstacle information with three-dimensional coordinate information.
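The rasterization and obstacle-block clustering steps might be sketched as follows. This is a minimal sketch with an invented grid cell size: above-ground lidar points are binned into a 2-D grid and occupied cells are grouped into obstacle blocks by connected-component labelling (4-connectivity); the patented pipeline's secondary clustering and classifier stages are omitted.

```python
# Minimal sketch (assumed cell size, not the patented pipeline): rasterising
# lidar points into a 2-D grid and clustering occupied cells into obstacle
# blocks via flood fill.

def cluster_obstacles(points, cell=0.5):
    occupied = {(int(x // cell), int(y // cell)) for x, y, z in points}
    clusters, seen = [], set()
    for cell_xy in occupied:
        if cell_xy in seen:
            continue
        stack, block = [cell_xy], []
        while stack:                          # flood fill over neighbouring cells
            c = stack.pop()
            if c in seen or c not in occupied:
                continue
            seen.add(c)
            block.append(c)
            cx, cy = c
            stack += [(cx + 1, cy), (cx - 1, cy), (cx, cy + 1), (cx, cy - 1)]
        clusters.append(block)
    return clusters

pts = [(0.1, 0.1, 0.5), (0.4, 0.2, 0.7),     # one obstacle near the origin
       (5.0, 5.0, 1.0), (5.3, 5.4, 0.9)]     # a second obstacle farther away
print(len(cluster_obstacles(pts)))           # → 2
```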
The obstacle recognition fusion module 220 is configured to fuse the visual obstacle information and the radar obstacle information to obtain obstacle information. Visual obstacle information fails in strong light or in scenes where the light changes rapidly, whereas the radar 120 detects obstacle information with an active light source and is highly stable. When the driverless car travels in strong light or rapidly changing light, the obstacle recognition fusion module 220 superimposes the visual obstacle information and the radar obstacle information, so accurate obstacle information can still be obtained in such scenes.
Because the radar 120 has low vertical resolution, it collects only the three-dimensional coordinate information of an obstacle without red-green-blue (RGB) color information, and misidentification can occur at long range or when obstacles are occluded. The obstacle information acquired by the visual obstacle recognition unit 221, in contrast, contains rich RGB information at high pixel resolution. By superimposing and fusing the color information of an obstacle with its three-dimensional coordinate information, obstacle information containing both color and three-dimensional information can be obtained. The obstacle recognition fusion module 220 thus reduces the misidentification rate, improves recognition accuracy, and further ensures safe driving.
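The superposition of camera color onto lidar points might be sketched as follows, assuming the extrinsic alignment between the two sensors is already done and using an invented set of pinhole intrinsics; the image is represented as a pixel-to-RGB mapping for brevity.

```python
# Minimal sketch (assumed intrinsics, not the patented fusion): attaching RGB
# colour from a camera image to lidar obstacle points by projecting each 3-D
# point through a pinhole camera model.

def colorize_points(points, image, fx=500.0, fy=500.0, cx=320.0, cy=240.0):
    """points: (x, y, z) in camera coordinates, z pointing forward.
    image: dict mapping (u, v) pixel -> (r, g, b)."""
    colored = []
    for x, y, z in points:
        if z <= 0:
            continue                           # behind the camera
        u = int(fx * x / z + cx)               # pinhole projection
        v = int(fy * y / z + cy)
        rgb = image.get((u, v))
        if rgb is not None:
            colored.append(((x, y, z), rgb))   # 3-D point + colour
    return colored

img = {(320, 240): (255, 0, 0)}                # a red pixel at the optical centre
pts = [(0.0, 0.0, 10.0), (1.0, 1.0, -5.0)]
print(colorize_points(pts, img))               # → [((0.0, 0.0, 10.0), (255, 0, 0))]
```

The result carries both the lidar's accurate geometry and the camera's RGB, which is exactly the combined obstacle description the fusion module aims for.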
In one embodiment, the traffic sign fusion module 230 includes a visual traffic sign detection unit 231 and a radar traffic sign detection unit 233.
The visual traffic sign detection unit 231 detects the image information and extracts visual traffic sign information. It processes the image information by pattern recognition or machine learning and obtains the visual traffic sign information, which contains RGB color information.
The radar traffic sign detection unit 233 is configured to extract ground traffic sign information and to detect suspended traffic sign information within a second preset height range. The radar traffic sign detection unit 233 extracts traffic sign line points according to the reflection intensity gradient and fits the ground traffic sign information (ground traffic marking lines) with a curve; it can also obtain, according to the obstacle clustering principle, targets within the second preset height range whose shape is a standard rectangle or circle, and define such targets as suspended traffic sign information.
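The curve-fitting step above might be sketched as follows. This is a minimal sketch, not the patented detector: the selection of high-reflectivity points by intensity gradient is assumed done, and a least-squares straight line y = a·x + b stands in for the general curve fit.

```python
# Minimal sketch (not the patented detector): fitting a ground marking line
# through pre-selected high-reflectivity lidar points with least squares.

def fit_marking_line(points):
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)    # slope
    b = (sy - a * sx) / n                            # intercept
    return a, b

# Marking points lying exactly on y = 0.5 * x + 1
pts = [(0.0, 1.0), (2.0, 2.0), (4.0, 3.0), (6.0, 4.0)]
a, b = fit_marking_line(pts)
print(a, b)   # → 0.5 1.0
```

A higher-order polynomial fitted the same way would capture curved markings; the closed-form normal equations above are the degree-one case.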
The traffic sign fusion module 230 is configured to determine the position of the traffic sign information from the ground traffic sign information and the suspended traffic sign information. Within the acquired position area, the category or kind of the traffic sign is identified from the visual traffic sign information acquired by the visual traffic sign detection unit 231. The traffic sign fusion module 230 can accurately acquire ground-level or suspended traffic signs of all kinds, ensuring that the driverless car drives safely while obeying the traffic rules.
In one embodiment, the obstacle dynamic tracking fusion module 240 includes a visual dynamic tracking unit 241 and a radar dynamic tracking unit 243.
The visual dynamic tracking unit 241 is configured to identify the image information, locate dynamic obstacles in adjacent consecutive frames, and obtain the color information of the dynamic obstacles. It processes the image (video) sequence by pattern recognition or machine learning, identifies and locates dynamic obstacles in consecutive frames of the video, and obtains the obstacles' color information.
The radar dynamic tracking unit 243 is configured to track the three-dimensional coordinate information of dynamic obstacles. Using a target association algorithm that combines nearest-neighbor matching with multi-hypothesis tracking, the radar dynamic tracking unit 243 determines that obstacles in two or more adjacent frames are the same target. The target's three-dimensional position information and speed information are obtained from the lidar measurement data, and the associated target is then tracked. In addition, Kalman filtering and particle filtering algorithms can be used to filter the measured state and the predicted state of the target to obtain relatively accurate three-dimensional coordinate information of the dynamic obstacle.
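The nearest-neighbor matching part of the association step might be sketched as follows. This is a minimal greedy sketch with an invented gating distance; the patent combines it with multi-hypothesis tracking, which is not shown.

```python
import math

# Minimal sketch (assumed gating distance, not the patented tracker): greedy
# nearest-neighbour association of obstacle detections across two frames.

def associate(prev, curr, gate=2.0):
    """prev, curr: lists of (x, y, z) obstacle centroids.
    Returns pairs (i, j): prev[i] matched to curr[j]."""
    matches, used = [], set()
    for i, p in enumerate(prev):
        best_j, best_d = None, gate
        for j, c in enumerate(curr):
            if j in used:
                continue
            d = math.dist(p, c)                # Euclidean distance
            if d < best_d:
                best_j, best_d = j, d
        if best_j is not None:
            matches.append((i, best_j))
            used.add(best_j)                   # each detection matched at most once
    return matches

prev = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0)]
curr = [(9.5, 0.2, 0.0), (0.4, 0.1, 0.0)]
print(associate(prev, curr))   # → [(0, 1), (1, 0)]
```

Detections farther than the gate from every previous target are left unmatched and would start new tracks; ambiguous cases are where the multi-hypothesis stage takes over.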
障碍物动态追踪融合模块240用于融合动态障碍物的色彩信息和障碍物的三维坐标信息,获取动态障碍物的追踪信息。由于视觉动态障碍物信息容易受到强光或者光照变化的干扰,没有精确的动态障碍物的三位坐标信息,但是视觉动态障碍物信息中包含了丰富的红绿蓝RGB的彩色信息。雷动获取的动态障碍物信息没有红绿蓝RGB的彩色信息,在运动过程中出现遮挡及遮挡后分开时无法识别出具体是哪个动态物体,但是,激光雷达获取的动态障碍物信息稳定性强,不会受到光强变化等外界干扰,而且激光雷达获取的动态障碍物信息具有精确的三维坐标信息,对运动物体的动态跟踪具有更精确的运动模型。 因此,可以通过障碍物动态追踪融合模块240对从影像信息中获取的动态障碍物的色彩信息和激光雷达获取的动态障碍物信息的三维坐标信息进行融合,既可以获取包含色彩信息和三维坐标信息的动态障碍物,可以对动态障碍物进行精确的追踪。The obstacle dynamic tracking fusion module 240 is configured to combine the color information of the dynamic obstacle and the three-dimensional coordinate information of the obstacle to obtain the tracking information of the dynamic obstacle. Since the visual dynamic obstacle information is easily interfered by strong light or illumination changes, there is no precise three-dimensional coordinate information of the dynamic obstacle, but the visual dynamic obstacle information contains rich red, green and blue RGB color information. The dynamic obstacle information acquired by the lightning has no red, green and blue RGB color information. When the occlusion and occlusion are separated during the movement, it is impossible to identify which dynamic object is specific. However, the dynamic obstacle information acquired by the laser radar is stable. It is not subject to external disturbances such as changes in light intensity, and the dynamic obstacle information acquired by Lidar has accurate three-dimensional coordinate information, and has a more accurate motion model for dynamic tracking of moving objects. Therefore, the obstacle dynamic tracking fusion module 240 can fuse the color information of the dynamic obstacle acquired from the image information and the three-dimensional coordinate information of the dynamic obstacle information acquired by the laser radar, and can acquire the color information and the three-dimensional coordinate information. Dynamic obstacles allow accurate tracking of dynamic obstacles.
In one embodiment, the path planning decision subsystem 30 is configured to plan a travel path based on the vehicle information, the obstacle identification information extracted by the data fusion subsystem 20, and the travel destination information. The path planning decision subsystem 30 may plan the travel path based on the vehicle information acquired by the environment sensing subsystem 10 (the current geographic location and time, vehicle attitude, and current speed of the driverless car), the surrounding environment information extracted by the data fusion subsystem 20 (obstacle information, lane line information, traffic sign information, and dynamic-obstacle tracking information), and the travel destination information of the driverless car. The path planning decision subsystem 30 plans the position of the driverless car at the next moment along the planned travel path and calculates the control data of the driverless car, including angular velocity, linear velocity, and travel direction.
In one embodiment, the travel control subsystem 40 is configured to generate control commands based on the travel path and to control the driverless car according to the control commands. The travel control subsystem 40 generates control commands based on the control data calculated by the path planning decision subsystem 30; the control commands cover the vehicle's travel speed, travel direction (forward, backward, left, right), throttle, and gear position, thereby ensuring that the driverless vehicle travels safely and smoothly and realizing the driverless function.
In one embodiment, the driverless car system further includes a communication subsystem 50 configured to transmit the travel path planned by the path planning decision subsystem 30 to an external monitoring center in real time, so that the external monitoring center can monitor the driving status of the driverless car.
In the driverless car system described above, the data fusion subsystem 20 fuses surrounding environment information that includes both image information and three-dimensional coordinate information, and extracts obstacle information, lane line information, traffic sign information, and dynamic-obstacle tracking information, improving the ability to recognize the surrounding environment and the accuracy of that recognition. The path planning decision subsystem 30 plans the travel path based on the information extracted by the data fusion subsystem 20 and the travel destination information; the travel control subsystem 40 generates control commands based on the travel path and controls the driverless car according to the control commands, thereby realizing a driverless function with very high safety performance.
In addition, an embodiment of the present invention further provides an automobile including the driverless car system of any of the above embodiments. In the automobile according to the embodiment of the present invention, the data fusion subsystem 20 of the driverless car system fuses surrounding environment information that includes image information and three-dimensional coordinate information, and extracts obstacle information, lane line information, traffic sign information, and dynamic-obstacle tracking information, improving the ability to recognize the surrounding environment and the accuracy of that recognition. The path planning decision subsystem 30 plans the travel path based on the information extracted by the data fusion subsystem 20 and the travel destination information; the travel control subsystem 40 generates control commands based on the travel path and controls the driverless car according to the control commands, thereby realizing a driverless function with very high safety performance.
In addition, a control method for a driverless car is also provided. FIG. 4 is a flowchart of the control method of the driverless car. The control method is based on the driverless car system described above and includes the following steps.
Step S410: Collect vehicle information and surrounding environment information of the driverless car.
The environment sensing subsystem 10 of the driverless car system includes a vision sensor 110 and a radar 120. The vision sensor 110 mainly consists of one or two image sensors, sometimes supplemented by a light projector and other auxiliary equipment. The image sensor may be a laser scanner, a line-array or area-array CCD camera, a TV camera, or a more recent digital camera. The vision sensor 110 mounted on the driverless car collects the surrounding environment information, that is, real-time road condition information near the driverless car, including obstacle information, lane line information, traffic sign information, and dynamic-obstacle tracking information. The collected surrounding environment information is image information of the surrounding environment, which may also be referred to as video information.
The radar 120 collects three-dimensional coordinate information of the surroundings of the driverless car. The driverless car system includes a plurality of radars 120. In one embodiment, the plurality of radars 120 include a lidar and a millimeter-wave radar. The lidar is a mechanical multi-beam lidar that detects characteristic quantities such as the position and velocity of a target by emitting laser beams; the echo intensity information of the lidar can also be used for obstacle detection and tracking. The lidar has the advantages of a wide detection range and high detection accuracy. The wavelength of the millimeter-wave radar lies between the centimeter wave and the light wave, combining the advantages of microwave guidance and photoelectric guidance; its seeker is small, lightweight, and offers high spatial resolution, and the millimeter-wave seeker penetrates fog, smoke, and dust well. In one example, using the lidar and the millimeter-wave radar together overcomes the lidar's inability to perform in extreme weather and can greatly improve the detection performance of the driverless car.
Step S420: Fuse the image information and three-dimensional coordinate information of the surrounding environment and extract obstacle identification information.
The data fusion subsystem 20 of the driverless car system can fuse the image information and the three-dimensional coordinate information and extract the obstacle identification information, where the obstacle identification information includes lane line information, obstacle information, traffic sign information, and dynamic-obstacle tracking information.
The image information is preprocessed by denoising, enhancement, and segmentation, and visual lane line information is extracted. The acquired three-dimensional coordinate information is processed to obtain information about the road surface on which the driverless car travels, and lane outer-contour information (i.e., radar lane line information) is obtained from the road surface information. The visual lane line information and the lane outer-contour information are then fused (superimposed) or excluded to obtain real-time lane line information.
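The overlay/exclusion step above can be sketched under assumed data shapes: visual lane lines are kept only when they fall inside the drivable corridor derived from the lidar road contour, and detections outside it are excluded. The corridor bounds and offsets are illustrative, not the patent's representation.

```python
# Minimal sketch of lane-line fusion: keep (overlay) visual lane detections
# consistent with the lidar-derived road contour, exclude the rest.

def fuse_lane_lines(visual_lines, road_left, road_right):
    """visual_lines: lateral offsets in metres; road_*: corridor bounds."""
    fused = []
    for offset in visual_lines:
        if road_left <= offset <= road_right:   # consistent with lidar contour
            fused.append(offset)                # overlay: keep
        # else: exclude spurious detection (e.g. glare, worn paint, shadows)
    return fused

# Lidar says the road spans -4 m .. +4 m; vision saw one false line at +7 m.
lanes = fuse_lane_lines([-3.5, 0.0, 3.5, 7.0], road_left=-4.0, road_right=4.0)
```

The lidar contour thus acts as a geometric sanity check on the illumination-sensitive visual detections, which is the motivation the description gives for combining the two sensors.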
Background information and foreground information are segmented from the image information, and the foreground information is recognized to obtain visual obstacle information with color information. Radar obstacle information with three-dimensional coordinate information within a first preset height range is identified. The visual obstacle information and the radar obstacle information are fused to obtain the obstacle information, so that accurate obstacle information can be acquired even in strong light or in scenes where the lighting changes rapidly.
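A hypothetical sketch of the two filters in this step: lidar returns are kept only inside an assumed "first preset height range," and the color from the camera's foreground segmentation is attached to the surviving returns. The height band and sample points are illustrative values, not taken from the patent.

```python
# Illustrative obstacle fusion: height-band filter on lidar points plus a
# color label from the camera foreground. H_MIN/H_MAX are assumed values.

H_MIN, H_MAX = 0.2, 2.5   # assumed "first preset height range" in metres

def radar_obstacles(points):
    """points: [(x, y, z)] -> those inside the preset height band."""
    return [p for p in points if H_MIN <= p[2] <= H_MAX]

def fuse(points, foreground_rgb):
    """Attach the foreground color to every height-filtered lidar point."""
    return [{"xyz": p, "rgb": foreground_rgb} for p in radar_obstacles(points)]

pts = [(5.0, 0.0, 0.05),   # road-surface return -> rejected (below band)
       (6.0, 1.0, 1.1),    # pedestrian-height return -> kept
       (7.0, -1.0, 3.4)]   # overhead structure -> rejected (above band)
obs = fuse(pts, foreground_rgb=(128, 64, 32))
```

The height band discards ground and overhead clutter before fusion, so only returns at plausible obstacle heights receive the visual color label.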
The image information is detected and visual traffic sign information is extracted. Traffic sign line points are extracted according to the reflection intensity gradient, and ground traffic sign information (ground traffic marking lines) is obtained by curve fitting. In addition, according to the obstacle clustering principle, targets within a second preset height range whose shapes are standard rectangles or circles are identified and defined as suspended traffic sign information.
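The intensity-gradient extraction and curve fitting can be sketched under stated assumptions: lane paint reflects laser light more strongly than asphalt, so points where the reflectance jumps past a threshold are taken as marking edges, and a line (a degree-1 curve fit) is fitted through them by least squares. The threshold and scan data are illustrative.

```python
# Sketch of marking extraction by reflection-intensity gradient plus a
# least-squares line fit. grad_thresh and the scan data are assumptions.

def marking_points(scan, grad_thresh=0.3):
    """scan: [(y_offset, reflectance)] along one scan line."""
    pts = []
    for (y0, r0), (y1, r1) in zip(scan, scan[1:]):
        if r1 - r0 > grad_thresh:     # rising intensity edge: asphalt -> paint
            pts.append(y1)
    return pts

def fit_line(xs, ys):
    """Least-squares fit ys ~ a*xs + b (the curve-fitting step, degree 1)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# One marking edge per scan line, drifting 0.1 m per metre of distance.
edges = [(x, marking_points([(1.9, 0.1), (2.0 + 0.1 * x, 0.9)])[0])
         for x in range(5)]
a, b = fit_line([e[0] for e in edges], [e[1] for e in edges])
```

A higher-degree polynomial would be fitted the same way for curved markings; the degree-1 case keeps the sketch short.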
The position of the traffic sign information is determined from the ground traffic sign information and the suspended traffic sign information. Within the acquired position area, the category or type of the traffic sign information is identified from the visual traffic sign information acquired by the visual traffic sign detection unit 231. The traffic sign fusion module 230 can thereby accurately acquire various ground or suspended traffic sign information, ensuring that the driverless car travels safely on the premise of obeying traffic rules.
The image information is recognized, dynamic obstacles are located in adjacent consecutive frames, and the color information of the dynamic obstacles is obtained. Based on a target association algorithm, a nearest-neighbor matching algorithm is combined with a multiple-hypothesis tracking algorithm to determine that the obstacles detected in two or more adjacent frames are the same target. The three-dimensional position information and velocity information of the target are obtained from the lidar measurement data, and the associated target is tracked. Filtering algorithms such as Kalman filtering and particle filtering may also be applied to the measured state and the predicted state of the target to obtain more accurate three-dimensional coordinate information of the dynamic obstacle. The color information of the dynamic obstacle and the three-dimensional coordinate information of the obstacle are then fused to obtain the tracking information of the dynamic obstacle. By fusing the color information of the dynamic obstacle obtained from the image information with the three-dimensional coordinate information acquired by the lidar, dynamic obstacles carrying both color information and three-dimensional coordinate information can be obtained and tracked precisely.
Step S430: Plan the travel path based on the vehicle information, the obstacle identification information, and the travel destination information.
The travel path may be planned based on the vehicle information acquired by the environment sensing subsystem 10 (the current geographic location and time, vehicle attitude, and current speed of the driverless car), the surrounding environment information extracted by the data fusion subsystem 20 (obstacle information, lane line information, traffic sign information, and dynamic-obstacle tracking information), and the travel destination information of the driverless car.
Step S440: Generate control commands based on the travel path, and control the driverless car according to the control commands.
Control commands are generated based on the control data calculated by the path planning decision subsystem 30; the control commands cover the vehicle's travel speed, travel direction (forward, backward, left, right), throttle, and gear position, thereby ensuring that the driverless vehicle travels safely and smoothly and realizing the driverless function.
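A hedged sketch of how the planner's control data (linear velocity, angular velocity) might be mapped onto the vehicle-level commands listed above (speed, direction, throttle, gear). The thresholds and the throttle model are assumptions for illustration, not the patent's control law.

```python
# Hypothetical mapping from planner control data to vehicle commands.
# The steering dead band (0.05 rad/s) and throttle scaling are assumed.

def make_command(linear_mps, angular_radps):
    gear = "D" if linear_mps >= 0 else "R"          # forward vs reverse
    direction = ("left" if angular_radps > 0.05 else
                 "right" if angular_radps < -0.05 else "straight")
    # naive throttle model: proportional to demanded speed, clipped to [0, 1]
    throttle = max(0.0, min(abs(linear_mps) / 20.0, 1.0))
    return {"speed": abs(linear_mps), "direction": direction,
            "throttle": throttle, "gear": gear}

cmd = make_command(linear_mps=10.0, angular_radps=0.2)
```

A production controller would close the loop with feedback (e.g. PID on measured speed) rather than this open-loop scaling; the sketch only shows the command structure the text enumerates.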
In one embodiment, the control method of the driverless car further includes: collecting the current geographic location and time of the driverless car; measuring the vehicle attitude of the driverless car; and obtaining the current speed of the driverless car.
In one embodiment, the control method of the driverless car further includes the step of collecting vehicle information of the driverless car, where the vehicle information includes the current geographic location and time, the vehicle attitude, and the current speed of the driverless car. The current geographic location and time may be collected by the GPS positioning navigator 130: while the driverless car is moving, the global positioning device installed in the car obtains the car's exact position at all times, further improving safety. The vehicle attitude of the driverless car is measured by the inertial measurement unit 140, and the current speed of the driverless car is obtained by the vehicle speed collection module 150.
In one embodiment, the control method of the driverless car further includes transmitting the travel path planned by the path planning decision subsystem to an external monitoring center in real time, so that the external monitoring center can monitor the driving status of the driverless car.
The control method of the driverless car described above improves the ability to recognize the surrounding environment, and the accuracy of that recognition, by fusing the vehicle information with the surrounding environment information and extracting the obstacle identification information. The travel path is planned based on the vehicle information, the obstacle identification information, and the travel destination information; control commands are generated based on the travel path, and the driverless car is controlled according to the control commands, thereby realizing a driverless function with very high safety performance.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as a combination of these technical features involves no contradiction, it should be considered within the scope of this specification.
The above embodiments express only several implementations of the present invention, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the invention patent. It should be noted that those of ordinary skill in the art may make several variations and improvements without departing from the concept of the present invention, all of which fall within the protection scope of the present invention. Therefore, the protection scope of the present invention patent shall be subject to the appended claims.

Claims (19)

  1. A driverless car system, comprising:
    an environment sensing subsystem configured to collect vehicle information and surrounding environment information of a driverless car, the surrounding environment information comprising image information and three-dimensional coordinate information of the surrounding environment;
    a data fusion subsystem configured to fuse the image information and three-dimensional coordinate information of the surrounding environment and extract obstacle identification information;
    a path planning decision subsystem configured to plan a travel path based on the vehicle information, the obstacle identification information, and travel destination information; and
    a travel control subsystem configured to generate control commands based on the travel path and to control the driverless car according to the control commands.
  2. The driverless car system according to claim 1, wherein the environment sensing subsystem comprises:
    a vision sensor configured to collect image information of the surroundings of the driverless car; and
    a radar configured to collect three-dimensional coordinate information of the surroundings of the driverless car.
  3. The driverless car system according to claim 2, wherein the obstacle identification information comprises lane line information, obstacle information, traffic sign information, and tracking information of dynamic obstacles.
  4. The driverless car system according to claim 3, wherein the data fusion subsystem comprises:
    a lane line fusion module configured to superimpose or exclude the surrounding environment information collected by the vision sensor and the radar and extract the lane line information;
    an obstacle recognition fusion module configured to fuse the surrounding environment information collected by the vision sensor and the radar and extract the obstacle information;
    a traffic sign fusion module configured to detect the surrounding environment information collected by the vision sensor and the radar and extract the traffic sign information; and
    an obstacle dynamic tracking fusion module configured to fuse the surrounding environment information collected by the vision sensor and the radar and extract the tracking information of the dynamic obstacles.
  5. The driverless car system according to claim 4, wherein the lane line fusion module comprises a visual lane line detection unit and a radar lane line detection unit;
    the visual lane line detection unit is configured to process the image information and extract visual lane line information; the radar lane line detection unit is configured to extract information about the road surface on which the driverless car travels and obtain lane outer-contour information from the road surface information; and
    the lane line fusion module is further configured to superimpose or exclude the visual lane line information and the lane outer-contour information to obtain the lane line information.
  6. The driverless car system according to claim 4, wherein the obstacle recognition fusion module comprises a visual obstacle recognition unit and a radar obstacle recognition unit;
    the visual obstacle recognition unit is configured to segment background information and foreground information from the image information and recognize the foreground information to obtain visual obstacle information with color information; the radar obstacle recognition unit is configured to identify radar obstacle information with three-dimensional coordinate information within a first preset height range; and
    the obstacle recognition fusion module is further configured to fuse the visual obstacle information and the radar obstacle information to obtain the obstacle information.
  7. The driverless car system according to claim 4, wherein the traffic sign fusion module comprises a visual traffic sign detection unit and a radar traffic sign detection unit;
    the visual traffic sign detection unit detects the image information and extracts visual traffic sign information; the radar traffic sign detection unit is configured to extract ground traffic sign information and to detect suspended traffic sign information within a second preset height range; and
    the traffic sign fusion module is further configured to determine the position of the traffic sign information from the ground traffic sign information and the suspended traffic sign information, and to obtain the category of the traffic sign information within the position area.
  8. The driverless car system according to claim 4, wherein the obstacle dynamic tracking fusion module comprises a visual dynamic tracking unit and a radar dynamic tracking unit;
    the visual dynamic tracking unit is configured to recognize the image information, locate dynamic obstacles in adjacent consecutive frames, and obtain color information of the dynamic obstacles; the radar dynamic tracking unit is configured to track three-dimensional coordinate information of the dynamic obstacles; and
    the obstacle dynamic tracking fusion module is further configured to fuse the color information of the dynamic obstacles and the three-dimensional coordinate information of the dynamic obstacles to obtain the tracking information of the dynamic obstacles.
  9. The driverless car system according to claim 1, wherein the environment sensing subsystem further comprises:
    a GPS positioning navigator configured to collect the current geographic location and time of the driverless car;
    an inertial measurement unit configured to measure the vehicle attitude of the driverless car; and
    a vehicle speed collection module configured to obtain the current speed of the driverless car.
  10. The driverless car system according to claim 1, further comprising:
    a communication subsystem configured to transmit the travel path planned by the path planning decision subsystem to an external monitoring center in real time.
  11. An automobile comprising the driverless car system according to any one of claims 1 to 10.
  12. A control method for a driverless car, comprising:
    collecting vehicle information and surrounding environment information of the driverless car, the surrounding environment information comprising image information and three-dimensional coordinate information of the surrounding environment;
    fusing the image information and three-dimensional coordinate information of the surrounding environment and extracting obstacle identification information;
    planning a travel path based on the vehicle information, the obstacle identification information, and travel destination information; and
    generating control commands based on the travel path and controlling the driverless car according to the control commands.
  13. The control method according to claim 12, wherein the obstacle identification information comprises lane line information, obstacle information, traffic sign information, and tracking information of dynamic obstacles.
  14. The control method according to claim 13, further comprising:
    processing the image information and extracting visual lane line information;
    extracting information about the road surface on which the driverless car travels and obtaining lane outer-contour information from the road surface information; and
    superimposing or excluding the visual lane line information and the lane outer-contour information to obtain the lane line information.
  15. The control method according to claim 13, further comprising:
    segmenting background information and foreground information from the image information, and recognizing the foreground information to obtain visual obstacle information with color information;
    identifying radar obstacle information with three-dimensional coordinate information within a first preset height range; and
    fusing the visual obstacle information and the radar obstacle information to obtain the obstacle information.
  16. The control method according to claim 13, further comprising:
    detecting the image information and extracting visual traffic sign information;
    extracting ground traffic sign information;
    detecting suspended traffic sign information within a second preset height range; and
    determining the position of the traffic sign information from the ground traffic sign information and the suspended traffic sign information, and obtaining the category of the traffic sign information within the position area.
  17. The control method according to claim 13, further comprising:
    recognizing the image information, locating dynamic obstacles in adjacent consecutive frames, and obtaining color information of the dynamic obstacles;
    tracking three-dimensional coordinate information of the dynamic obstacles; and
    fusing the color information of the dynamic obstacles and the three-dimensional coordinate information of the dynamic obstacles to obtain the tracking information of the dynamic obstacles.
  18. The control method according to claim 12, further comprising:
    collecting the current geographic location and time of the driverless car;
    measuring the vehicle attitude of the driverless car; and
    obtaining the current speed of the driverless car.
  19. The control method according to claim 12, further comprising:
    transmitting the travel path planned by the path planning decision subsystem to an external monitoring center in real time.
PCT/CN2017/075983 2017-03-08 2017-03-08 Driverless automobile system and control method thereof, and automobile WO2018161278A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/075983 WO2018161278A1 (en) 2017-03-08 2017-03-08 Driverless automobile system and control method thereof, and automobile

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/075983 WO2018161278A1 (en) 2017-03-08 2017-03-08 Driverless automobile system and control method thereof, and automobile

Publications (1)

Publication Number Publication Date
WO2018161278A1 true WO2018161278A1 (en) 2018-09-13

Family

ID=63447212

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/075983 WO2018161278A1 (en) 2017-03-08 2017-03-08 Driverless automobile system and control method thereof, and automobile

Country Status (1)

Country Link
WO (1) WO2018161278A1 (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104648382A (en) * 2013-11-25 2015-05-27 王健 Binocular vision based automobile automatic-running system
CN105151043A (en) * 2015-08-19 2015-12-16 内蒙古麦酷智能车技术有限公司 Emergency avoidance system and method for unmanned automobile
CN105946620A (en) * 2016-06-07 2016-09-21 北京新能源汽车股份有限公司 Electric automobile and active speed-limiting control method and system thereof
JP2016192150A (en) * 2015-03-31 2016-11-10 トヨタ自動車株式会社 Vehicle travel control device
US20160363647A1 (en) * 2015-06-15 2016-12-15 GM Global Technology Operations LLC Vehicle positioning in intersection using visual cues, stationary objects, and gps


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109934164A (en) * 2019-03-12 2019-06-25 杭州飞步科技有限公司 Data processing method and device based on Trajectory Safety degree
CN112543877A (en) * 2019-04-03 2021-03-23 华为技术有限公司 Positioning method and positioning device
CN112543877B (en) * 2019-04-03 2022-01-11 华为技术有限公司 Positioning method and positioning device
CN112947495A (en) * 2021-04-25 2021-06-11 北京三快在线科技有限公司 Model training method, unmanned equipment control method and device
WO2023116344A1 (en) * 2021-12-23 2023-06-29 清华大学 Driverless driving test method, driverless driving test system, and computer device

Similar Documents

Publication Publication Date Title
CN107161141B (en) Unmanned automobile system and automobile
CN206691107U (en) Pilotless automobile system and automobile
DK180774B1 (en) Automatic annotation of environmental features in a map during navigation of a vehicle
CN111695546B (en) Traffic signal lamp identification method and device for unmanned vehicle
US10528055B2 (en) Road sign recognition
AU2019419781B2 (en) Vehicle using spatial information acquired using sensor, sensing device using spatial information acquired using sensor, and server
CN111102986A (en) Automatic generation and spatiotemporal localization of reduced-size maps for vehicle navigation
US20200346654A1 (en) Vehicle Information Storage Method, Vehicle Travel Control Method, and Vehicle Information Storage Device
WO2021006441A1 (en) Road sign information collection method using mobile mapping system
WO2018161278A1 (en) Driverless automobile system and control method thereof, and automobile
CN115950440A (en) System and method for vehicle navigation
CN112986979A (en) Automatic object labeling using fused camera/LiDAR data points
EP4163595A1 (en) Automatic annotation of environmental features in a map during navigation of a vehicle
KR102604298B1 (en) Apparatus and method for estimating location of landmark and computer recordable medium storing computer program thereof
JP2019105789A (en) Road structure data generator, road structure database
CN109903574B (en) Method and device for acquiring intersection traffic information
WO2020141694A1 (en) Vehicle using spatial information acquired using sensor, sensing device using spatial information acquired using sensor, and server
CN111717244A (en) Train automatic driving sensing method and system
KR101510745B1 (en) Autonomous vehicle system
CN111429734A (en) Real-time monitoring system and method for inside and outside port container trucks
KR101327348B1 (en) A system for ego-lane detection using defined mask in driving road image
JP3227247B2 (en) Roadway detection device
CN114822083A (en) Intelligent vehicle formation auxiliary control system
CN114274978A (en) Obstacle avoidance method for unmanned logistics vehicle
Kloeker et al. Comparison of Camera-Equipped Drones and Infrastructure Sensors for Creating Trajectory Datasets of Road Users.

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17899898

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 23/01/2020)

122 Ep: pct application non-entry in european phase

Ref document number: 17899898

Country of ref document: EP

Kind code of ref document: A1