WO2021031469A1 - Vehicle obstacle detection method and system - Google Patents

Vehicle obstacle detection method and system

Info

Publication number
WO2021031469A1
WO2021031469A1 (PCT/CN2019/124810)
Authority
WO
WIPO (PCT)
Prior art keywords
vehicle
image feature
obstacle
image
camera
Prior art date
Application number
PCT/CN2019/124810
Other languages
English (en)
French (fr)
Inventor
蒋忠城
张俊
王先锋
李旺
刘晓波
李登科
江大发
郭冰彬
陈晶晶
段华东
何辉永
Original Assignee
中车株洲电力机车有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中车株洲电力机车有限公司
Publication of WO2021031469A1

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0214 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory in accordance with safety or protection criteria, e.g. avoiding hazardous areas
    • G05D1/0223 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving speed control of the vehicle
    • G05D1/0231 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0238 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors
    • G05D1/024 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors in combination with a laser
    • G05D1/0242 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using non-visible light signals, e.g. IR or UV signals
    • G05D1/0246 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G05D1/0251 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means extracting 3D information from a plurality of images taken from different locations, e.g. stereo vision
    • G05D1/0257 Control of position or course in two dimensions specially adapted to land vehicles using a radar
    • G05D1/0276 Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle
    • G05D1/0278 Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle using satellite positioning signals, e.g. GPS

Definitions

  • This application belongs to the technical field of vehicle control, and in particular relates to a vehicle obstacle detection method and system.
  • the existing method of detecting obstacles around the vehicle is to collect images around the vehicle based on a camera device installed on the vehicle, and determine whether there is an obstacle based on the collected images. Since the camera device can only collect images within a short distance range, the images collected based on the camera device installed on the vehicle are images within the short distance range of the vehicle.
  • When the vehicle is traveling at high speed, the image around the vehicle is collected by the camera installed on the vehicle, and by the time an obstacle has been determined from the collected image the vehicle is already very close to the obstacle, so the vehicle cannot avoid the obstacle at high speed, resulting in low driving safety.
  • the purpose of the present application is to provide a vehicle obstacle detection method and system, which are used to solve the problem of low safety of vehicles at high speeds in the prior art.
  • This application provides a vehicle obstacle detection method, including: controlling a lidar installed on a vehicle to acquire a laser image in the traveling direction of the vehicle, and performing image feature extraction on the laser image to obtain a first image feature; controlling a first camera device installed on the vehicle to acquire a first camera image in the traveling direction of the vehicle, and performing image feature extraction on the first camera image to obtain a second image feature; and determining obstacle information based on the first image feature and the second image feature.
  • the determining obstacle information based on the first image feature and the second image feature includes: performing data fusion on the first image feature and the second image feature according to environmental conditions to obtain a third image feature; and inputting the third image feature into a pre-established obstacle recognition model to obtain obstacle information output by the obstacle recognition model.
  • Optionally, the method also includes: detecting the driving state of the vehicle, the driving state including a straight driving state and a curve driving state; if the driving state is the curve driving state, before determining obstacle information based on the first image feature and the second image feature, the method further includes: acquiring a second camera image collected by a second camera device arranged at a curved road section, and performing image feature extraction on the second camera image to obtain a fourth image feature; the determining obstacle information then includes: determining obstacle information based on the first image feature, the second image feature, and the fourth image feature.
  • the acquiring the second camera image collected by the second camera device installed on the curved road section includes: determining, according to the current position of the vehicle, the curved road section on the road section the vehicle is traveling on; and acquiring, through the road communication system, the second camera image collected by the second camera device set on the curved road section of the vehicle's route.
  • Optionally, the method also includes: judging whether the second camera image collected by the second camera device arranged at the curved road section has been acquired; if not, prompting that the second camera device arranged at the curved road section is faulty.
  • the obstacle information includes whether there is an obstacle, an obstacle type, an obstacle size, and the distance between the obstacle and the vehicle; after the obstacle information is determined, the method further includes: if the obstacle information indicates that an obstacle exists, determining a vehicle braking strategy according to the distance between the obstacle and the vehicle, and controlling the vehicle to avoid the obstacle based on the determined vehicle braking strategy.
  • the application also provides a vehicle obstacle detection system, including:
  • Lidar and a first camera device are respectively installed on a vehicle;
  • a vehicle-mounted controller respectively connected to the lidar and the first camera device in communication
  • the vehicle controller is used to control the lidar installed on the vehicle to acquire a laser image in the traveling direction of the vehicle and perform image feature extraction on the laser image to obtain a first image feature; control the first camera device installed on the vehicle to acquire a first camera image in the traveling direction of the vehicle and perform image feature extraction on the first camera image to obtain a second image feature; and determine obstacle information based on the first image feature and the second image feature.
  • the vehicle controller determining the obstacle information based on the first image feature and the second image feature includes: performing data fusion on the first image feature and the second image feature according to environmental conditions to obtain a third image feature; and inputting the third image feature into a pre-established obstacle recognition model to obtain obstacle information output by the obstacle recognition model.
  • The system optionally also includes a second camera device;
  • the second camera device is set at a curved road section, and is connected to the vehicle controller through a road communication system;
  • the vehicle controller is also used to obtain a second camera image collected by the second camera device installed on the curved road section and perform image feature extraction on the second camera image to obtain a fourth image feature, and to determine obstacle information based on the first image feature, the second image feature, and the fourth image feature.
  • If the obstacle information indicates that an obstacle exists, the vehicle controller is further configured to determine a vehicle braking strategy according to the distance between the obstacle and the vehicle, and to control the vehicle to avoid the obstacle based on the determined vehicle braking strategy.
  • the laser radar and the camera device installed on the vehicle separately acquire images in the traveling direction of the vehicle, image features are extracted from each image, and obstacle information is then determined based on the extracted image features.
  • Since the lidar can acquire images over a long range, combining the image acquired by the lidar with the image acquired by the camera to detect obstacles makes it possible to detect long-distance obstacles, so that the vehicle can have a relatively long braking distance when driving at high speed. This avoids collisions with obstacles caused by the vehicle not having enough time to brake, and improves driving safety.
  • Figure 1 is a flowchart of a vehicle obstacle detection method disclosed in the present application;
  • Figure 2 is a flowchart of another vehicle obstacle detection method disclosed in the present application;
  • Figure 3 is a schematic diagram of an application scenario of the present application including a straight driving section and a curved road section;
  • Figure 4 is a schematic structural diagram of a vehicle obstacle detection system disclosed in the present application.
  • This application provides a vehicle obstacle detection method.
  • the image in the traveling direction of the vehicle is obtained through the lidar and the camera device installed on the vehicle, image features are extracted from each image, and obstacle information is then determined based on the extracted image features.
  • Since the lidar can acquire images over a long range, combining the image acquired by the lidar with the image acquired by the camera to detect obstacles makes it possible to detect long-distance obstacles, so that the vehicle can have a relatively long braking distance when running at high speed, collisions caused by the vehicle not having enough time to brake are avoided, and driving safety is improved.
  • a vehicle obstacle detection method provided by an embodiment of the present application may include the following steps:
  • S101 Control the laser radar installed on the vehicle to obtain a laser image of the traveling direction of the vehicle, and perform image feature extraction on the laser image to obtain a first image feature.
  • When the vehicle travels at 80 km/h with a maximum braking deceleration of 1.1 m/s², the braking distance is 225 m; when the vehicle travels at 60 km/h with a maximum braking deceleration of 1.1 m/s², the braking distance is 126 m. Therefore, to ensure that the vehicle can avoid an obstacle, for example by braking, when there is an obstacle in its traveling direction, a sensor capable of detecting obstacles over a long range in front of the vehicle is selected for obstacle detection.
  • a solid-state laser radar with a range of up to 200 m is selected in this embodiment.
  • the solid-state lidar is installed at both sides of the front end of the vehicle, and the lidar is controlled to emit lasers and scan the scene in a certain range in front of the vehicle during the driving process to obtain laser images.
  • the image feature extraction method is used to extract the obstacle feature of the laser image to obtain the first image feature.
  • Obstacles include people, rocks, roadblocks, vehicles, etc. Obstacle features include the outline, size, and internal geometric features of the obstacle.
  • S102 Control the first camera device installed on the vehicle to obtain a first camera image of the traveling direction of the vehicle, and perform image feature extraction on the first camera image to obtain a second image feature.
  • Besides the solid-state lidar, a first camera device, such as a normal camera and an infrared camera, is also arranged on both sides of the front end of the vehicle.
  • the first camera device can capture a scene within a range of tens of meters in front of the vehicle.
  • the first camera device is controlled to work, and the scene in a certain range in front of the vehicle is collected to obtain the first camera image.
  • the image feature extraction method is used to perform obstacle feature extraction on the first camera image to obtain the second image feature.
  • Obstacles include people, rocks, roadblocks, vehicles, etc. Obstacle features include the outline, size, and internal geometric features of the obstacle.
  • The order of performing step S101 and step S102 is not limited in this embodiment, as long as the first image feature and the second image feature can each be obtained.
  • S103 Determine obstacle information based on the first image feature and the second image feature.
  • the first image feature is the obstacle feature obtained after the obstacle feature extraction is performed on the laser image, including the outline, size, and internal geometric features of the obstacle;
  • the second image feature is the obstacle feature obtained after the obstacle feature extraction is performed on the camera image, including the outline, size, and internal geometric features of the obstacle.
  • By combining the obstacle features included in the first image feature and those included in the second image feature, the obstacle information in the traveling direction of the vehicle is determined.
  • the obstacle information includes whether there is an obstacle, the type of the obstacle, the size of the obstacle, and the distance between the obstacle and the vehicle.
  • Only when an obstacle exists does the obstacle information also include the type of the obstacle, the size of the obstacle, and the distance between the obstacle and the vehicle; when there is no obstacle, the obstacle information only includes an indication that no obstacle exists.
  • After the obstacle information is determined, if it indicates that an obstacle exists, an alarm is raised and the vehicle is controlled to avoid the obstacle, for example by braking.
  • the method of controlling the braking of the vehicle is to determine the corresponding vehicle braking strategy based on the distance between the obstacle and the vehicle included in the obstacle information, and control the braking of the vehicle to avoid the obstacle based on the determined vehicle braking strategy.
  • Vehicle braking strategies include conventional braking and emergency braking.
  • When the distance between the obstacle and the vehicle is less than a preset threshold, the vehicle braking strategy is determined to be emergency braking; when the distance between the obstacle and the vehicle is not less than the preset threshold, the vehicle braking strategy is determined to be conventional braking.
  • Take as an example the case where the vehicle obstacle detection method is applied to a vehicle controller.
  • When the vehicle controller determines that there is an obstacle in front of the vehicle, it determines the corresponding vehicle braking strategy based on the distance between the obstacle and the vehicle included in the obstacle information, and sends the determined braking strategy to the traction control unit of the automatic driving (ATO) system through the vehicle data network, so that the traction control unit performs the corresponding braking according to the received vehicle braking strategy.
  • the images in the traveling direction of the vehicle are obtained through the lidar and the camera installed on the vehicle, image features are extracted from each image, and the obstacle information is then determined based on the extracted image features.
  • Since the lidar can acquire images over a long range, combining the image acquired by the lidar with the image acquired by the camera to detect obstacles makes it possible to detect long-distance obstacles, so that the vehicle can have a relatively long braking distance when driving at high speed, collisions caused by the vehicle not having enough time to brake are avoided, and driving safety is improved.
  • Since the lidar and camera devices are provided on both sides of the front end of the vehicle in this embodiment, extracting obstacle features from the images acquired by the lidar and the camera device yields the three-dimensional geometric features of the obstacle, thereby improving the accuracy of the determined obstacle information.
  • the vehicle driving environment is changeable, such as night driving environments, fog driving environments, and rain and snow driving environments.
  • Different driving environments interfere with laser images and camera images to different degrees, so when combining the obstacle features included in the first image feature and those included in the second image feature to determine the obstacle information in the traveling direction of the vehicle, the influence of the current driving environment needs to be fully considered.
  • the following describes an implementation manner for determining obstacle information based on the first image feature and the second image feature.
  • Step 1 Perform data fusion on the first image feature and the second image feature according to environmental conditions to obtain a third image feature.
  • a common camera, an infrared camera, and a lidar are installed on both sides of the front end of the vehicle to capture the scene within a certain range in front of the vehicle, obtaining a first camera image and a laser image.
  • the first camera image and the laser image are preprocessed respectively to obtain a preprocessed camera image and a preprocessed laser image; the preprocessing includes filtering, edge detection, and so on.
  • Obstacle feature extraction is performed on the preprocessed laser image to obtain the first image feature, and on the preprocessed camera image to obtain the second image feature.
  • Data fusion is performed on the first image feature and the second image feature to eliminate the influence on the image collected by the lidar and the image collected by the camera under different driving environments, and obtain an accurate third image feature.
  • Different fusion strategies are used in different driving environments. For example, in a well-lit driving environment, the second image feature extracted from the camera image obtained by the camera device is used to supplement the first image feature extracted from the laser image obtained by the lidar to obtain the third image feature.
  • Here, the camera image obtained by the camera device includes the camera image obtained by the infrared camera and/or the camera image obtained by the ordinary camera.
  • In a night driving environment, the first image feature extracted from the laser image obtained by the lidar is supplemented with the second image feature extracted from the camera image obtained by the infrared camera to obtain the third image feature.
  • In a rain and snow driving environment or a fog driving environment, the second image feature extracted from the camera image obtained by the infrared camera is taken as the primary feature, and the first image feature extracted from the laser image obtained by the lidar and/or the image feature extracted from the camera image obtained by the ordinary camera are used as a supplement, to obtain the third image feature.
  • Data fusion is a complementary method used to compensate for the weakness of a single sensor in different environments.
  • the image features acquired by the individual sensors are compared with and complement one another to obtain an accurate, complete image feature, namely the third image feature.
  • Step 2 Input the third image feature into the pre-established obstacle recognition model to obtain obstacle information output by the obstacle recognition model.
  • An obstacle recognition model is established, and the obstacle recognition model is trained using obstacle feature sample data to obtain the trained obstacle recognition model.
  • an obstacle recognition model can be established based on a hidden Markov model or a neural network model.
  • the obstacles on a rail transit line are mainly divided into maintenance tools, fallen surrounding equipment, guardrails at turnouts, vehicles, personnel, and so on; a lidar and camera devices are used in advance to acquire images of the main obstacles present on rail transit lines, obstacle features are extracted from the acquired images, and an obstacle feature sample database is established.
  • the obstacle characteristic sample data is obtained from the obstacle characteristic sample database, and the obstacle recognition model is trained.
  • the obstacle information is output after being processed by the obstacle recognition model.
  • the obstacle feature sample database can be updated, and the obstacle recognition model can be retrained to ensure that the obstacle recognition model can accurately and quickly determine whether there is an obstacle And obstacle information such as the type and size of the obstacle.
  • the driving state of the vehicle includes a straight driving state and a curved driving state.
  • the aforementioned vehicle obstacle detection method can be directly used to achieve the purpose of detecting whether there is an obstacle in front of the vehicle.
  • In the curve driving state, however, sight distance is limited by small-radius curves, so obstacle detection based on the images acquired by the lidar and the camera installed on the vehicle cannot accurately determine the obstacle information in front of the vehicle.
  • To address this, another vehicle obstacle detection method is disclosed in this embodiment, which is applied to a vehicle control system, such as a vehicle controller, as shown in Figure 2.
  • the method may include the following steps:
  • S201 Detect the driving state of the vehicle; the driving state of the vehicle includes a straight driving state and a curved driving state.
  • If the driving state of the vehicle is the curve driving state, step S202 is executed.
  • the driving state of the vehicle may be detected according to a steering angle sensor installed on the vehicle, or according to the steering lever.
  • Normally the driver does not move the steering lever while driving straight, whereas the driver moves the steering lever in the curve driving state, so the driving state of the vehicle can be determined according to the position of the steering lever.
  • The attributes of a road segment include straight line and curve. If the positioning device of the vehicle determines that the vehicle is driving on a curved road section, the driving state of the vehicle is determined to be the curve driving state; if the positioning device determines that the vehicle is driving on a straight section, the driving state of the vehicle is determined to be the straight driving state.
  • S202 Control the laser radar installed on the vehicle to obtain a laser image of the traveling direction of the vehicle, and perform image feature extraction on the laser image to obtain a first image feature.
  • S203 Control the first camera device installed on the vehicle to obtain a first camera image of the traveling direction of the vehicle, and perform image feature extraction on the first camera image to obtain a second image feature.
  • The implementation of steps S202-S203 in this embodiment is similar to the implementation of steps S101-S102 in the previous embodiment, and will not be repeated here.
  • S204 Acquire a second camera image collected by a second camera device installed on a curved road section, and perform image feature extraction on the second camera image to obtain a fourth image feature.
  • Set up multiple sensors at the curved road section including a normal camera and an infrared camera.
  • the scene at the curved road section is acquired through the normal camera and the infrared camera to obtain the second camera image.
  • the second camera image is transmitted to the vehicle controller through the road communication system, so that the vehicle controller can obtain the second camera image collected by the second camera device installed on the curved road section.
  • the road communication system consists of a wired network and a wireless network.
  • The wired network is the data communication network between trackside equipment (such as the second camera device), i.e., the backbone network, through which data is transmitted between the control center, the vehicle, and the equipment concentration station; the wireless network handles data communication between the vehicle and the ground, realizing data transmission between the vehicle controller and the trackside equipment.
  • the second camera device is connected to the Ethernet of the road communication system, and multiple wireless transmission devices are arranged on the Ethernet.
  • The wireless transmission devices can transmit wireless signals and, together with the vehicle-mounted wireless transmission device, perform the data transmitting and receiving function, so that the second camera device can transmit the second camera image to the vehicle controller through the road communication system.
  • Since there may be several curves on a stretch of road, a second camera device is provided at each curve, so that the second camera image collected by the second camera device can be used to determine the obstacle information at that curve; when the vehicle is in the curve driving state it is necessary to identify the curve the vehicle is currently traveling on, so as to obtain the second camera image acquired by the corresponding second camera device.
  • the current position of the vehicle is determined by the on-board positioning device, such as GPS; the curved section on the road the vehicle is traveling on is then determined according to the current position of the vehicle, and the second camera image collected by the second camera device at the corresponding curved section is obtained through the road communication system.
  • In other embodiments, the second camera images collected by the second camera devices can be used to determine the obstacle information at all curves of the road section, even if the vehicle may not yet have traveled to a given curve. In this case, the operation of detecting the driving state of the vehicle can be omitted.
  • This embodiment also includes judging whether the second camera image collected by the second camera device installed at the curved section has been obtained; if it is determined that the second camera image collected by the second camera device set at the curved road section has not been obtained, a prompt is issued that the second camera device set at the curved road section is faulty, and road maintenance personnel are prompted to repair the second camera device, avoiding missed obstacle detection caused by the failure of the second camera device at the curve.
  • obstacle feature extraction is performed on the second camera image to obtain the fourth image feature.
  • Based on the fourth image feature, the obstacle information at the curve can be determined.
  • the method for processing the second camera image in this embodiment is the same as the method for processing the first camera image in step S102, and will not be repeated here.
  • Multiple ordinary cameras and infrared cameras can be set up at the curved road section; since the curved road section is not constrained by sensing range, a sensor capable of long-range detection is not needed, and a lidar is therefore not used.
  • When both an ordinary camera and an infrared camera are arranged at the curved road section, the ordinary camera and the infrared camera each obtain a camera image; after obstacle features are extracted from the different camera images, the obstacle features extracted from the camera image obtained by the infrared camera are taken as the primary features, the obstacle features extracted from the camera image obtained by the ordinary camera are used as a supplement for data fusion, and the obstacle information is determined based on the fused obstacle features.
  • the order of performing steps S202-S204 is not limited, as long as the first image feature, the second image feature, and the fourth image feature can be obtained respectively.
  • S205 Determine obstacle information based on the first image feature, the second image feature, and the fourth image feature.
  • Obstacle information within a certain range in front of the vehicle can be determined based on the first image feature and the second image feature, and obstacle information at a curve can be determined based on the fourth image feature.
  • the method of determining obstacle information within a certain range in front of the vehicle based on the first image feature and the second image feature is similar to the implementation of determining obstacle information based on the first image feature and the second image feature described above, and will not be repeated here.
  • If the driving state of the vehicle is the straight driving state, steps S202-S203 are executed, followed by the step of determining obstacle information based on the first image feature and the second image feature.
  • FIG. 3 is a schematic diagram of a specific application scenario of the vehicle obstacle detection method of this embodiment, including straight-line driving sections and curved road sections.
  • On the straight driving section, obstacles are detected only by the lidar and the first camera device installed on the vehicle.
  • In Figure 3 the lidar and the first camera device are represented by a single structure; in practice, however, the lidar and the first camera device are two different structures.
  • On the curved driving section, obstacles are detected not only by the laser radar and the first camera installed on the vehicle but also by the second camera device installed at the curved road section, with data transmission between the second camera device and the vehicle controller realized through the road communication system.
  • the wireless network in the road communication system in Figure 3 refers to the trackside wireless network transmitter, such as a Wifi device, which is connected to a wired network or Ethernet.
  • the second camera device is also connected to the wired network or Ethernet. After the second camera device acquires the camera image at the curve, the image is sent to the Wifi device via the wired network or Ethernet and then transmitted over the wireless network to the vehicle controller on the vehicle, or the vehicle controller obtains the camera image at the curve acquired by the second camera device from the Wifi device.
  • a second camera device is provided at the curve; the vehicle controller obtains the second camera image collected by the second camera device installed on the curved section and performs image feature extraction on the second camera image to obtain the fourth image feature.
  • From the fourth image feature, the obstacle information at the curve can be determined, which avoids the problem that detecting obstacles only with the lidar and the first camera installed on the vehicle leads to a sight-distance obstruction and the obstacle information in the traveling direction of the vehicle cannot be determined accurately.
  • an embodiment of the present application provides a vehicle obstacle detection system. As shown in FIG. 4, the system includes:
  • the lidar and the first camera; the lidar and the first camera are respectively installed on the vehicle.
  • An on-board controller respectively communicating with the lidar and the first camera device.
  • the vehicle controller is used to control the lidar installed on the vehicle to acquire a laser image in the traveling direction of the vehicle and perform image feature extraction on the laser image to obtain a first image feature; control the first camera device installed on the vehicle to acquire a first camera image in the traveling direction of the vehicle and perform image feature extraction on the first camera image to obtain a second image feature; and determine obstacle information based on the first image feature and the second image feature.
  • One way for the vehicle controller to determine the obstacle information based on the first image feature and the second image feature is: performing the first image feature and the second image feature according to environmental conditions Data fusion is used to obtain a third image feature; the third image feature is input into a pre-established obstacle recognition model to obtain obstacle information output by the obstacle recognition model.
  • the obstacle characteristic sample data is obtained from the obstacle characteristic sample database, and the established obstacle recognition model is trained according to the obstacle characteristic sample data to obtain the trained obstacle recognition model.
  • the system further includes: a second camera device, as shown in FIG. 4.
  • the second camera device is arranged at a curved road section and is connected to the vehicle controller through a road communication system;
  • the vehicle controller is also used to obtain a second camera image collected by the second camera device installed at the curved road section and perform image feature extraction on the second camera image to obtain a fourth image feature, and to determine obstacle information based on the first image feature, the second image feature, and the fourth image feature.
  • If the obstacle information indicates that an obstacle exists, the vehicle controller is also used to determine a vehicle braking strategy based on the distance between the obstacle and the vehicle, control the vehicle to avoid the obstacle based on the determined vehicle braking strategy, and raise an alarm.
  • Vehicle braking strategies include conventional braking and emergency braking.
  • When the distance between the obstacle and the vehicle is less than a preset threshold, the vehicle braking strategy is determined to be emergency braking; when the distance between the obstacle and the vehicle is not less than the preset threshold, the vehicle braking strategy is determined to be conventional braking.
  • Take as an example the case where the vehicle obstacle detection method is applied to a vehicle controller.
  • When the vehicle controller determines that there is an obstacle in front of the vehicle, it determines the corresponding vehicle braking strategy based on the distance between the obstacle and the vehicle included in the obstacle information, and sends the determined braking strategy to the traction control unit of the automatic driving (ATO) system through the vehicle data network, so that the traction control unit performs the corresponding braking according to the received vehicle braking strategy.
  • the images in the traveling direction of the vehicle are obtained through the lidar and the camera installed on the vehicle, image features are extracted from each image, and obstacle information is then determined based on the extracted image features.
  • Since the lidar can acquire images over a long range, combining the image acquired by the lidar with the image acquired by the camera to detect obstacles makes it possible to detect long-distance obstacles, so that the vehicle can have a relatively long braking distance when driving at high speed, collisions caused by the vehicle not having enough time to brake are avoided, and driving safety is improved.
  • a second camera device is set at the curve, and the vehicle controller obtains the second camera image collected by the second camera device on the curved section and performs image feature extraction on the second camera image to obtain the fourth image feature.
  • From the fourth image feature, the obstacle information at the curve can be determined, which avoids the problem that detecting obstacles only with the lidar and the first camera installed on the vehicle leads to a sight-distance obstruction and the obstacle information in the traveling direction of the vehicle cannot be determined accurately.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Electromagnetism (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Optics & Photonics (AREA)
  • Multimedia (AREA)
  • Traffic Control Systems (AREA)

Abstract

This application provides a vehicle obstacle detection method and system. Images in the traveling direction of a vehicle are acquired separately by a lidar and a camera device installed on the vehicle, image features are extracted from each image, and obstacle information is then determined based on the extracted image features. Since the lidar can acquire images over a long range, combining the image acquired by the lidar with the image acquired by the camera device to detect obstacles makes it possible to detect long-distance obstacles, so that the vehicle can have a relatively long braking distance when traveling at high speed. This avoids collisions with obstacles caused by the vehicle not having enough time to brake, and improves driving safety.

Description

Vehicle obstacle detection method and system
This application claims priority to Chinese patent application No. 201910763396.8, entitled "Vehicle obstacle detection method and system", filed with the China Patent Office on August 19, 2019, the entire contents of which are incorporated herein by reference.
TECHNICAL FIELD
This application belongs to the technical field of vehicle control, and in particular relates to a vehicle obstacle detection method and system.
BACKGROUND
While a vehicle is running, obstacles around the vehicle are detected and, once an obstacle is detected, the vehicle is controlled to avoid it, so as to ensure safe driving.
The existing way of detecting obstacles around a vehicle is to collect images of the vehicle's surroundings with a camera device installed on the vehicle and to determine from the collected images whether an obstacle exists. Since a camera device can only capture images within a short range, the images collected by the camera device installed on the vehicle only cover the vehicle's immediate surroundings.
When the vehicle travels at high speed, by the time an obstacle has been identified from the images collected by the camera device installed on the vehicle, the vehicle is already very close to the obstacle, so the vehicle cannot avoid it at high speed, resulting in low driving safety.
SUMMARY
In view of this, the purpose of the present application is to provide a vehicle obstacle detection method and system to solve the problem in the prior art of low safety when a vehicle travels at high speed.
The technical solution is as follows.
This application provides a vehicle obstacle detection method, including:
controlling a lidar installed on a vehicle to acquire a laser image in the traveling direction of the vehicle, and performing image feature extraction on the laser image to obtain a first image feature;
controlling a first camera device installed on the vehicle to acquire a first camera image in the traveling direction of the vehicle, and performing image feature extraction on the first camera image to obtain a second image feature; and
determining obstacle information based on the first image feature and the second image feature.
Optionally, the determining obstacle information based on the first image feature and the second image feature includes:
performing data fusion on the first image feature and the second image feature according to environmental conditions to obtain a third image feature; and
inputting the third image feature into a pre-established obstacle recognition model to obtain obstacle information output by the obstacle recognition model.
Optionally, the method further includes:
detecting the driving state of the vehicle, the driving state of the vehicle including a straight driving state and a curve driving state;
if the driving state of the vehicle is the curve driving state, before the determining obstacle information based on the first image feature and the second image feature, the method further includes:
acquiring a second camera image collected by a second camera device arranged at a curved road section, and performing image feature extraction on the second camera image to obtain a fourth image feature;
and the determining obstacle information then includes:
determining obstacle information based on the first image feature, the second image feature, and the fourth image feature.
Optionally, the acquiring the second camera image collected by the second camera device arranged at the curved road section includes:
determining, according to the current position of the vehicle, the curved road section on the road section the vehicle is traveling on; and
acquiring, through a road communication system, the second camera image collected by the second camera device arranged at the curved road section on the road section the vehicle is traveling on.
Optionally, the method further includes:
judging whether the second camera image collected by the second camera device arranged at the curved road section has been acquired; and
if it is judged that the second camera image collected by the second camera device arranged at the curved road section has not been acquired, prompting that the second camera device arranged at the curved road section is faulty.
Optionally, the obstacle information includes whether an obstacle exists, an obstacle type, an obstacle size, and the distance between the obstacle and the vehicle;
and after the obstacle information is determined, the method further includes:
if the obstacle information indicates that an obstacle exists, determining a vehicle braking strategy according to the distance between the obstacle and the vehicle, and controlling the vehicle to avoid the obstacle based on the determined vehicle braking strategy.
This application also provides a vehicle obstacle detection system, including:
a lidar and a first camera device, the lidar and the first camera device being respectively installed on a vehicle; and
an on-board controller communicatively connected to the lidar and the first camera device respectively;
the vehicle controller is configured to control the lidar installed on the vehicle to acquire a laser image in the traveling direction of the vehicle and perform image feature extraction on the laser image to obtain a first image feature; control the first camera device installed on the vehicle to acquire a first camera image in the traveling direction of the vehicle and perform image feature extraction on the first camera image to obtain a second image feature; and determine obstacle information based on the first image feature and the second image feature.
Optionally, the vehicle controller determining obstacle information based on the first image feature and the second image feature includes:
performing data fusion on the first image feature and the second image feature according to environmental conditions to obtain a third image feature; and
inputting the third image feature into a pre-established obstacle recognition model to obtain obstacle information output by the obstacle recognition model.
Optionally, the system further includes:
a second camera device, which is arranged at a curved road section and connected to the on-board controller through a road communication system;
the vehicle controller is further configured to acquire a second camera image collected by the second camera device arranged at the curved road section and perform image feature extraction on the second camera image to obtain a fourth image feature, and to determine obstacle information based on the first image feature, the second image feature, and the fourth image feature.
Optionally, if the obstacle information indicates that an obstacle exists, the vehicle controller is further configured to determine a vehicle braking strategy according to the distance between the obstacle and the vehicle, and to control the vehicle to avoid the obstacle based on the determined vehicle braking strategy.
Compared with the prior art, the above technical solution provided by this application has the following advantages.
It can be seen from the above technical solution that in this application the lidar and the camera device installed on the vehicle respectively acquire images in the traveling direction of the vehicle, image features are extracted from the images respectively, and obstacle information is then determined based on the extracted image features. Since the lidar can acquire images over a long range, combining the image acquired by the lidar with the image acquired by the camera device to detect obstacles makes it possible to detect long-distance obstacles, so that the vehicle can have a relatively long braking distance when traveling at high speed, collisions with obstacles caused by the vehicle not having enough time to brake are avoided, and driving safety is improved.
BRIEF DESCRIPTION OF THE DRAWINGS
In order to explain the technical solutions in the embodiments of the present application or in the prior art more clearly, the drawings needed in the description of the embodiments or of the prior art are briefly introduced below. Obviously, the drawings described below show some embodiments of the present invention, and for a person of ordinary skill in the art, other drawings can be obtained from these drawings without creative work.
Figure 1 is a flowchart of a vehicle obstacle detection method disclosed in the present application;
Figure 2 is a flowchart of another vehicle obstacle detection method disclosed in the present application;
Figure 3 is a schematic diagram of an application scenario of the present application including a straight driving section and a curved driving section;
Figure 4 is a schematic structural diagram of a vehicle obstacle detection system disclosed in the present application.
DETAILED DESCRIPTION
To make the purpose, technical solutions, and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application are described clearly and completely below with reference to the drawings in the embodiments of the present invention. Obviously, the described embodiments are only some of the embodiments of the present application, not all of them. Based on the embodiments of the present application, all other embodiments obtained by a person of ordinary skill in the art without creative work fall within the protection scope of the present application.
This application provides a vehicle obstacle detection method in which images in the traveling direction of the vehicle are acquired by the lidar and the camera device installed on the vehicle, image features are extracted from each image, and obstacle information is then determined based on the extracted image features. Since the lidar can acquire images over a long range, combining the image acquired by the lidar with the image acquired by the camera device to detect obstacles makes it possible to detect long-distance obstacles, so that the vehicle can have a relatively long braking distance when traveling at high speed, collisions with obstacles caused by the vehicle not having enough time to brake are avoided, and driving safety is improved.
Referring to Figure 1, a vehicle obstacle detection method provided by an embodiment of the present application may include the following steps.
S101: Control the lidar installed on the vehicle to acquire a laser image in the traveling direction of the vehicle, and perform image feature extraction on the laser image to obtain a first image feature.
When the vehicle travels at 80 km/h with a maximum braking deceleration of 1.1 m/s², the braking distance is 225 m; when the vehicle travels at 60 km/h with a maximum braking deceleration of 1.1 m/s², the braking distance is 126 m. Therefore, to ensure that the vehicle can avoid an obstacle, for example by braking, when there is an obstacle in its traveling direction, a sensor capable of detecting obstacles over a long range in front of the vehicle is selected for obstacle detection.
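The braking distances above are consistent with the constant-deceleration stopping relation d = v² / (2a). A minimal sketch (illustrative only, not part of the original disclosure) that reproduces the quoted figures:

```python
def braking_distance(speed_kmh: float, max_decel_ms2: float = 1.1) -> float:
    """Stopping distance in metres for a constant braking deceleration."""
    v = speed_kmh / 3.6              # convert km/h to m/s
    return v ** 2 / (2 * max_decel_ms2)

if __name__ == "__main__":
    for speed in (80, 60):
        print(f"{speed} km/h -> {braking_distance(speed):.1f} m")
    # Prints about 224.5 m and 126.3 m, consistent with the ~225 m and 126 m quoted above.
```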
For example, a lidar is used to scan the scene within a certain range in front of the vehicle; in this embodiment a solid-state lidar with a range of up to 200 m is selected.
The solid-state lidar is installed on both sides of the front end of the vehicle, and while the vehicle is running the lidar is controlled to emit laser beams and scan the scene within a certain range in front of the vehicle to obtain laser images.
An image feature extraction method is used to perform obstacle feature extraction on the laser image to obtain the first image feature.
Obstacles include people, rocks, roadblocks, vehicles, and so on; obstacle features include the outline, size, and internal geometric features of the obstacle.
S102: Control the first camera device installed on the vehicle to acquire a first camera image in the traveling direction of the vehicle, and perform image feature extraction on the first camera image to obtain a second image feature.
In addition to the solid-state lidar, which scans the scene within 200 m in front of the vehicle, a first camera device, such as an ordinary camera and an infrared camera, is also arranged on both sides of the front end of the vehicle.
The first camera device can capture the scene within a range of tens of meters in front of the vehicle.
While the vehicle is running, the first camera device is controlled to operate and collect the scene within a certain range in front of the vehicle to obtain the first camera image.
An image feature extraction method is used to perform obstacle feature extraction on the first camera image to obtain the second image feature.
Obstacles include people, rocks, roadblocks, vehicles, and so on; obstacle features include the outline, size, and internal geometric features of the obstacle.
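The patent does not name a specific feature-extraction algorithm. As one common way to obtain outline and size descriptors of the kind listed above from a single camera frame, an OpenCV-based sketch is shown below; the thresholds and the minimum-area filter are illustrative assumptions.

```python
import cv2
import numpy as np

def extract_obstacle_features(gray: np.ndarray, min_area: float = 200.0) -> list:
    """Return rough outline/size descriptors for candidate obstacles in one grayscale frame."""
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)       # filtering (preprocessing)
    edges = cv2.Canny(blurred, 50, 150)                # edge detection (preprocessing)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    features = []
    for contour in contours:
        area = cv2.contourArea(contour)
        if area < min_area:                            # skip tiny speckles
            continue
        x, y, w, h = cv2.boundingRect(contour)
        features.append({
            "outline": contour.reshape(-1, 2),         # contour points (outline feature)
            "bbox_px": (x, y, w, h),                   # bounding box, a rough size feature
            "area_px": area,
        })
    return features
```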
Note that this embodiment does not limit the order in which step S101 and step S102 are performed, as long as the first image feature and the second image feature can each be obtained.
S103: Determine obstacle information based on the first image feature and the second image feature.
The first image feature is the obstacle feature obtained by performing obstacle feature extraction on the laser image, including the outline, size, and internal geometric features of the obstacle;
the second image feature is the obstacle feature obtained by performing obstacle feature extraction on the camera image, including the outline, size, and internal geometric features of the obstacle.
The obstacle features included in the first image feature and the obstacle features included in the second image feature are combined to determine the obstacle information in the traveling direction of the vehicle.
The obstacle information includes whether an obstacle exists, the type of the obstacle, the size of the obstacle, the distance between the obstacle and the vehicle, and so on.
It should be understood that only when an obstacle exists does the obstacle information also include the type of the obstacle, the size of the obstacle, and the distance between the obstacle and the vehicle; when there is no obstacle, the obstacle information only includes an indication that no obstacle exists.
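For readability, the obstacle information described here can be pictured as a small record; the field names below are only illustrative and are not defined by the patent.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class ObstacleInfo:
    """Obstacle information as described in the text (hypothetical field names)."""
    exists: bool                                   # whether an obstacle is present
    obstacle_type: Optional[str] = None            # e.g. "person", "rock", "roadblock", "vehicle"
    size_m: Optional[Tuple[float, float]] = None   # rough (width, height) in metres
    distance_m: Optional[float] = None             # distance between obstacle and vehicle

# When no obstacle is detected, only the indication itself is carried:
no_obstacle = ObstacleInfo(exists=False)
```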
In this embodiment, after the obstacle information is determined, if the obstacle information indicates that an obstacle exists, an alarm is raised and the vehicle is controlled to avoid the obstacle, for example by braking.
The vehicle is braked by determining the corresponding vehicle braking strategy based on the distance between the obstacle and the vehicle included in the obstacle information, and controlling the vehicle to brake to avoid the obstacle based on the determined vehicle braking strategy.
Vehicle braking strategies include conventional braking, emergency braking, and so on.
When the distance between the obstacle and the vehicle is less than a preset threshold, the vehicle braking strategy is determined to be emergency braking; when the distance between the obstacle and the vehicle is not less than the preset threshold, the vehicle braking strategy is determined to be conventional braking.
Taking as an example the case where the vehicle obstacle detection method of this embodiment is applied to a vehicle controller, when the vehicle controller determines that there is an obstacle in front of the vehicle, it determines the corresponding vehicle braking strategy based on the distance between the obstacle and the vehicle included in the obstacle information, and sends the determined braking strategy to the traction control unit of the automatic driving (ATO) system through the vehicle data network, so that the traction control unit performs the corresponding braking according to the received vehicle braking strategy.
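The distance-threshold rule and the hand-off to the traction control unit could be sketched as below. The threshold value, the message format, and the socket-style transport are assumptions made for illustration; the patent only states that the chosen strategy is sent over the vehicle data network to the ATO traction control unit.

```python
import json

EMERGENCY_THRESHOLD_M = 150.0   # assumed preset threshold; the patent gives no concrete value

def choose_braking_strategy(distance_m: float) -> str:
    """Emergency braking below the preset threshold, conventional braking otherwise."""
    return "emergency" if distance_m < EMERGENCY_THRESHOLD_M else "conventional"

def send_to_traction_unit(strategy: str, distance_m: float, sock) -> None:
    """Hypothetical dispatch of the chosen strategy over the vehicle data network."""
    message = json.dumps({"braking_strategy": strategy, "obstacle_distance_m": distance_m})
    sock.sendall(message.encode("utf-8"))   # 'sock' stands in for the on-board network link
```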
Through the above technical solution, in this embodiment the lidar and the camera device installed on the vehicle respectively acquire images in the traveling direction of the vehicle, image features are extracted from the images respectively, and obstacle information is then determined based on the extracted image features. Since the lidar can acquire images over a long range, combining the image acquired by the lidar with the image acquired by the camera device to detect obstacles makes it possible to detect long-distance obstacles, so that the vehicle can have a relatively long braking distance when traveling at high speed, collisions with obstacles caused by the vehicle not having enough time to brake are avoided, and driving safety is improved.
At the same time, since in this embodiment a lidar and a camera device are arranged on both sides of the front end of the vehicle, extracting obstacle features from the images acquired by the lidar and the camera device yields the three-dimensional geometric features of the obstacle, which improves the accuracy of the determined obstacle information.
Considering that the driving environment of the vehicle is changeable, for example night driving, foggy driving, or rain and snow driving environments, and that different driving environments interfere with the laser image and the camera image to different degrees, the influence of the current driving environment must be fully taken into account in the process of combining the obstacle features included in the first image feature and those included in the second image feature to determine the obstacle information in the traveling direction of the vehicle.
An implementation of determining obstacle information based on the first image feature and the second image feature is described below.
Step 1: Perform data fusion on the first image feature and the second image feature according to environmental conditions to obtain a third image feature.
In this embodiment an ordinary camera, an infrared camera, and a lidar are installed on both sides of the front end of the vehicle and respectively capture the scene within a certain range in front of the vehicle to obtain the first camera image and the laser image.
The first camera image and the laser image are each preprocessed to obtain a preprocessed camera image and a preprocessed laser image; the preprocessing includes filtering, edge detection, and so on.
Obstacle feature extraction is performed on the preprocessed laser image to obtain the first image feature, and on the preprocessed camera image to obtain the second image feature.
Data fusion is performed on the first image feature and the second image feature to eliminate the influence that different driving environments have on the image collected by the lidar and the image collected by the camera device, and to obtain an accurate third image feature.
Different fusion strategies are used in different driving environments.
For example, in a well-lit, open-line driving environment, the second image feature extracted from the camera image acquired by the camera device is used to supplement the first image feature extracted from the laser image acquired by the lidar to obtain the third image feature. Here the camera image acquired by the camera device includes the image acquired by the infrared camera and/or the image acquired by the ordinary camera.
In a night driving environment, the first image feature extracted from the laser image acquired by the lidar is supplemented with the second image feature extracted from the camera image acquired by the infrared camera to obtain the third image feature.
In a rain and snow driving environment or a foggy driving environment, the second image feature extracted from the camera image acquired by the infrared camera is taken as the primary feature, supplemented by the first image feature extracted from the laser image acquired by the lidar and/or by the image feature extracted from the camera image acquired by the ordinary camera, to obtain the third image feature.
Data fusion is a complementary approach used to compensate for the weakness of any single sensor in different environments: the image features acquired by the individual sensors are compared with and complement one another to obtain an accurate, complete image feature, namely the third image feature.
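The fusion rules are stated qualitatively (which sensor leads and which supplements, depending on the environment). One simple way to realize such complementary fusion is to merge per-sensor feature dictionaries in an environment-dependent priority order, letting lower-priority sensors only fill in what the primary sensor lacks. This is a sketch under that assumption, not the algorithm prescribed by the patent:

```python
def fuse_features(environment: str, lidar_feat: dict, infrared_feat: dict, camera_feat: dict) -> dict:
    """Complementary fusion: lower-priority sources only fill keys the primary source lacks."""
    if environment == "daylight":
        order = [lidar_feat, infrared_feat, camera_feat]   # camera/infrared supplement the lidar
    elif environment == "night":
        order = [lidar_feat, infrared_feat]                # infrared supplements the lidar
    elif environment in ("rain_snow", "fog"):
        order = [infrared_feat, lidar_feat, camera_feat]   # infrared leads, others supplement
    else:
        order = [lidar_feat, infrared_feat, camera_feat]

    fused: dict = {}
    for source in order:
        for key, value in source.items():
            fused.setdefault(key, value)   # keep the highest-priority value for each feature
    return fused
```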
Step 2: Input the third image feature into a pre-established obstacle recognition model to obtain obstacle information output by the obstacle recognition model.
An obstacle recognition model is established and trained with obstacle feature sample data to obtain the trained obstacle recognition model.
The trained obstacle recognition model is used to recognize the third image feature and output the obstacle information.
In this embodiment the obstacle recognition model can be established based on a hidden Markov model or a neural network model.
Taking rail transit as an example, the obstacles on a rail transit line are mainly of the following types: maintenance tools, fallen surrounding equipment, guardrails at turnouts, vehicles, personnel, and so on. A lidar and camera devices are used in advance to acquire images of the main obstacles present on rail transit lines, obstacle features are extracted from the acquired images, and an obstacle feature sample database is established.
When the obstacle recognition model is trained, obstacle feature sample data is obtained from the obstacle feature sample database and used to train the obstacle recognition model.
If the output of the obstacle recognition model does not meet expectations, the parameters of the model are adjusted until the output meets expectations, which completes the training of the obstacle recognition model.
After the third image feature is input into the trained obstacle recognition model, the model processes it and outputs the obstacle information.
In practical applications, when new obstacles appear, the obstacle feature sample database can be updated and the obstacle recognition model retrained, so as to ensure that the obstacle recognition model can accurately and quickly determine whether an obstacle exists as well as obstacle information such as the type and size of the obstacle.
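The patent leaves the model family open (hidden Markov model or neural network) and does not publish the sample database. Purely as an illustration, a small neural-network classifier over fixed-length obstacle feature vectors could be trained and queried as follows; the feature dimension, class labels, and training data are placeholders:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

CLASSES = ["none", "person", "rock", "roadblock", "vehicle"]   # illustrative labels only

def train_obstacle_model(features: np.ndarray, labels: np.ndarray) -> MLPClassifier:
    """Train a small neural-network recognizer on obstacle feature samples."""
    model = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
    model.fit(features, labels)
    return model

def recognize(model: MLPClassifier, third_image_feature: np.ndarray) -> str:
    """Map a fused (third) image feature vector to an obstacle class."""
    return CLASSES[int(model.predict(third_image_feature.reshape(1, -1))[0])]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 16))                 # placeholder 16-dimensional feature vectors
    y = rng.integers(0, len(CLASSES), size=200)    # placeholder labels
    model = train_obstacle_model(X, y)
    print(recognize(model, rng.normal(size=16)))
```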
The driving state of the vehicle includes a straight driving state and a curve driving state. In the straight driving state, the vehicle obstacle detection method described above can be used directly to detect whether there is an obstacle in front of the vehicle. In the curve driving state, however, sight distance is limited by small-radius curves, so obstacle detection based on the images acquired by the lidar and the camera device installed on the vehicle cannot accurately determine the obstacle information in front of the vehicle.
For example, on a curved road section, assuming an installation width of 2 m between the first camera devices on the two sides of the front end of the vehicle and an installation width of 2 m between the lidars on the two sides of the front end of the vehicle, the maximum visible distance on a curve with a small radius of 500 m is 89 m; therefore, given the braking distance of the vehicle, there is a curve sight-distance blind zone.
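The 89 m figure is consistent with the usual middle-ordinate sight-distance approximation s ≈ sqrt(8·R·m), taking the curve radius R = 500 m and a lateral clearance m of about 2 m (matching the 2 m installation width). The check below is offered as an assumption about how the number arises, not as a formula stated in the patent:

```python
import math

def curve_sight_distance(radius_m: float, lateral_offset_m: float) -> float:
    """Approximate maximum sight distance along a circular curve: s ~ sqrt(8 * R * m)."""
    return math.sqrt(8.0 * radius_m * lateral_offset_m)

print(f"{curve_sight_distance(500.0, 2.0):.0f} m")   # ~89 m, matching the value quoted above
```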
To address this, this embodiment discloses another vehicle obstacle detection method, which is applied to a vehicle control system, such as a vehicle controller. Referring to Figure 2, the method may include the following steps.
S201: Detect the driving state of the vehicle; the driving state of the vehicle includes a straight driving state and a curve driving state.
If the driving state of the vehicle is the curve driving state, step S202 is performed.
In this embodiment the driving state of the vehicle can be detected with a steering angle sensor installed on the vehicle, or according to the steering lever.
Normally the driver does not move the steering lever while the vehicle is driving straight, whereas the driver moves the steering lever when the vehicle is driving through a curve, so the driving state of the vehicle can be determined from the position of the steering lever.
Of course, the road section on which the vehicle is currently traveling can also be determined from the positioning device on the vehicle, and the driving state of the vehicle can then be determined from the attributes of the current road section.
The attributes of a road section include straight line and curve. If the positioning device of the vehicle determines that the vehicle is traveling on a curved road section, the driving state of the vehicle is determined to be the curve driving state; if the positioning device determines that the vehicle is traveling on a straight section, the driving state of the vehicle is determined to be the straight driving state.
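Determining the driving state from the positioning device amounts to looking up the attribute of the segment that contains the current position. A sketch follows, assuming the line has already been divided into segments with known attributes and that map matching yields a one-dimensional chainage; both the segment table and the chainage are illustrative assumptions:

```python
from bisect import bisect_right

# (segment end chainage in metres, attribute) for an assumed example line layout
SEGMENTS = [(1200.0, "straight"), (1750.0, "curve"), (4000.0, "straight"), (4600.0, "curve")]

def driving_state(chainage_m: float) -> str:
    """Return 'straight' or 'curve' for the segment containing the current position."""
    ends = [end for end, _ in SEGMENTS]
    index = min(bisect_right(ends, chainage_m), len(SEGMENTS) - 1)
    return SEGMENTS[index][1]

print(driving_state(1500.0))   # -> 'curve' in this example layout
```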
S202: Control the lidar installed on the vehicle to acquire a laser image in the traveling direction of the vehicle, and perform image feature extraction on the laser image to obtain a first image feature.
S203: Control the first camera device installed on the vehicle to acquire a first camera image in the traveling direction of the vehicle, and perform image feature extraction on the first camera image to obtain a second image feature.
Steps S202-S203 in this embodiment are implemented in a similar way to steps S101-S102 in the previous embodiment, and are not repeated here.
S204: Acquire a second camera image collected by a second camera device arranged at a curved road section, and perform image feature extraction on the second camera image to obtain a fourth image feature.
Multiple sensors, including an ordinary camera and an infrared camera, are arranged at the curved road section, and the scene at the curved road section is captured by the ordinary camera and the infrared camera to obtain the second camera image.
The second camera image is transmitted to the vehicle controller through the road communication system, so that the vehicle controller can obtain the second camera image collected by the second camera device arranged at the curved road section.
The road communication system consists of a wired network and a wireless network. The wired network is the data communication network between trackside devices (such as the second camera device), i.e., the backbone network, through which data is transmitted between the control center, the vehicle, and the equipment concentration station; the wireless network handles data communication between the vehicle and the ground, realizing data transmission between the vehicle controller and the trackside devices.
The second camera device is connected to the Ethernet of the road communication system, on which multiple wireless transmission devices are deployed. The wireless transmission devices can transmit wireless signals and, together with the on-board wireless transmission device, perform the data transmitting and receiving function, so that the second camera device can transmit the second camera image to the vehicle controller through the road communication system.
Note that since there may be several curves on a stretch of road and a second camera device is arranged at each curve so that the second camera image it collects can be used to determine the obstacle information at that curve, when the vehicle is currently in the curve driving state it is necessary to identify the curve the vehicle is currently traveling on, in order to obtain the second camera image acquired by the second camera device corresponding to that curve.
Based on this, in this embodiment the current position of the vehicle is determined by the on-board positioning device, for example by GPS; the curved road section on the road section the vehicle is traveling on is then determined from the current position of the vehicle, and the second camera image collected by the second camera device arranged at the corresponding curved road section is obtained through the road communication system.
In other embodiments, while the vehicle is running, the current position of the vehicle can be determined from GPS, the road section the vehicle needs to travel can then be determined together with all the curves it includes, and the second camera image collected by the second camera device corresponding to each curve can be obtained separately, so that the obstacle information at all curves of the section is determined even if the vehicle may not yet have reached a given curve. In this case, the operation of detecting the driving state of the vehicle can be omitted.
Since the second camera device arranged at a curve may fail, and in order to avoid the situation where, when it fails, the vehicle controller cannot obtain the second camera image collected at the curved road section and therefore cannot determine the obstacle information at the curve, which would affect the safe running of the vehicle, this embodiment also includes judging whether the second camera image collected by the second camera device arranged at the curved road section has been obtained; if it is judged that the image has not been obtained, a prompt is issued that the second camera device at the curved road section is faulty, and road maintenance personnel are prompted to repair the second camera device, thereby avoiding missed obstacle detection caused by a failed second camera device at the curve.
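Fetching the curve camera image over the road communication system and flagging a failed camera can be sketched as a request with a timeout. The endpoint URL, the host name, and the timeout are invented for illustration; the patent does not specify the transport details.

```python
import urllib.request
import urllib.error
from typing import Optional

def fetch_curve_image(camera_host: str, timeout_s: float = 2.0) -> Optional[bytes]:
    """Try to pull the latest second-camera image; return None if the camera appears faulty."""
    url = f"http://{camera_host}/latest.jpg"        # hypothetical trackside camera endpoint
    try:
        with urllib.request.urlopen(url, timeout=timeout_s) as response:
            return response.read()
    except (urllib.error.URLError, TimeoutError):
        return None

image = fetch_curve_image("curve-cam-03.local")     # hypothetical camera host name
if image is None:
    # No image received: prompt that the second camera device at the curve is faulty
    print("WARNING: second camera device at the curved road section appears faulty; "
          "notify road maintenance personnel.")
```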
After the second camera image is obtained through the road communication system, obstacle feature extraction is performed on the second camera image to obtain the fourth image feature.
Based on the fourth image feature, the obstacle information at the curve can be determined.
For example, if there are rocks or roadblocks at the curve, the ordinary camera and the infrared camera arranged at the curve are sufficient to determine that an obstacle exists at the curve.
The method for processing the second camera image in this embodiment is the same as the method for processing the first camera image in step S102, and is not repeated here.
Several ordinary cameras and infrared cameras can be arranged at the curved road section; since the curved road section is not constrained by sensing range, a sensor capable of long-range detection is not needed, and a lidar is therefore not used.
Note that when both an ordinary camera and an infrared camera are arranged at the curved road section, the ordinary camera and the infrared camera each acquire a camera image; after obstacle features are extracted from the different camera images, the obstacle features extracted from the image acquired by the infrared camera are taken as the primary features and those extracted from the image acquired by the ordinary camera are used as a supplement for data fusion, and the obstacle information is determined based on the fused obstacle features.
This embodiment does not limit the order in which steps S202-S204 are performed, as long as the first image feature, the second image feature, and the fourth image feature can each be obtained.
S205: Determine obstacle information based on the first image feature, the second image feature, and the fourth image feature.
Obstacle information within a certain range in front of the vehicle can be determined based on the first image feature and the second image feature, and obstacle information at the curve can be determined based on the fourth image feature.
The way obstacle information within a certain range in front of the vehicle is determined based on the first image feature and the second image feature is similar to the implementation of determining obstacle information based on the first image feature and the second image feature described above, and is not repeated here.
If the driving state of the vehicle is the straight driving state, steps S202-S203 are performed, followed by the step of determining obstacle information based on the first image feature and the second image feature.
Figure 3 is a schematic diagram of a specific application scenario of the vehicle obstacle detection method of this embodiment, including a straight driving section and a curved driving section. On the straight driving section, obstacles are detected only by the lidar and the first camera device installed on the vehicle. In Figure 3 the lidar and the first camera device are represented by a single structure, but in practical applications the lidar and the first camera device are two different structures.
On the curved driving section, obstacles are detected not only by the lidar and the first camera device installed on the vehicle but also by the second camera device arranged at the curved road section, with data transmission between the second camera device and the vehicle controller realized through the road communication system.
The wireless network in the road communication system in Figure 3 refers to a trackside wireless network transmitter, such as a Wifi device, which is connected to the wired network or Ethernet. The second camera device is also connected to the wired network or Ethernet. After the second camera device acquires the camera image at the curve, the image is sent to the Wifi device via the wired network or Ethernet and then transmitted over the wireless network to the vehicle controller on the vehicle, or the vehicle controller obtains the camera image at the curve acquired by the second camera device from the Wifi device.
Through the above technical solution, in this embodiment a second camera device is arranged at the curve; the vehicle controller obtains the second camera image collected by the second camera device arranged at the curved road section and performs image feature extraction on the second camera image to obtain the fourth image feature, and the obstacle information at the curve can be determined from the fourth image feature, which avoids the problem that detecting obstacles only with the lidar and the first camera device installed on the vehicle leads to a sight-distance obstruction and the obstacle information in the traveling direction of the vehicle cannot be determined accurately.
Corresponding to the vehicle obstacle detection method disclosed in the above embodiments, an embodiment of the present application provides a vehicle obstacle detection system. Referring to Figure 4, the system includes:
a lidar and a first camera device, which are respectively installed on the vehicle, and an on-board controller communicatively connected to the lidar and the first camera device respectively.
The vehicle controller is configured to control the lidar installed on the vehicle to acquire a laser image in the traveling direction of the vehicle and perform image feature extraction on the laser image to obtain a first image feature; control the first camera device installed on the vehicle to acquire a first camera image in the traveling direction of the vehicle and perform image feature extraction on the first camera image to obtain a second image feature; and determine obstacle information based on the first image feature and the second image feature.
One implementation of the vehicle controller determining obstacle information based on the first image feature and the second image feature is: performing data fusion on the first image feature and the second image feature according to environmental conditions to obtain a third image feature; and inputting the third image feature into a pre-established obstacle recognition model to obtain obstacle information output by the obstacle recognition model.
After the obstacle recognition model is established, obstacle feature sample data is obtained from the obstacle feature sample database, and the established obstacle recognition model is trained with the obstacle feature sample data to obtain the trained obstacle recognition model.
Optionally, in other embodiments the system further includes a second camera device, as shown in Figure 4.
The second camera device is arranged at a curved road section and is connected to the on-board controller through a road communication system;
the vehicle controller is further configured to acquire a second camera image collected by the second camera device arranged at the curved road section and perform image feature extraction on the second camera image to obtain a fourth image feature, and to determine obstacle information based on the first image feature, the second image feature, and the fourth image feature.
If the obstacle information indicates that an obstacle exists, the vehicle controller is further configured to determine a vehicle braking strategy according to the distance between the obstacle and the vehicle, to control the vehicle to avoid the obstacle based on the determined braking strategy, and to raise an alarm.
Vehicle braking strategies include conventional braking, emergency braking, and so on.
When the distance between the obstacle and the vehicle is less than a preset threshold, the vehicle braking strategy is determined to be emergency braking; when the distance between the obstacle and the vehicle is not less than the preset threshold, the vehicle braking strategy is determined to be conventional braking.
Taking as an example the case where the vehicle obstacle detection method is applied to a vehicle controller, when the vehicle controller determines that there is an obstacle in front of the vehicle, it determines the corresponding vehicle braking strategy based on the distance between the obstacle and the vehicle included in the obstacle information, and sends the determined braking strategy to the traction control unit of the automatic driving (ATO) system through the vehicle data network, so that the traction control unit performs the corresponding braking according to the received vehicle braking strategy.
Through the above technical solution, in this embodiment the lidar and the camera device installed on the vehicle respectively acquire images in the traveling direction of the vehicle, image features are extracted from each image, and obstacle information is then determined based on the extracted image features. Since the lidar can acquire images over a long range, combining the image acquired by the lidar with the image acquired by the camera device to detect obstacles makes it possible to detect long-distance obstacles, so that the vehicle can have a relatively long braking distance when traveling at high speed, collisions with obstacles caused by the vehicle not having enough time to brake are avoided, and driving safety is improved. Moreover, a second camera device is arranged at the curve; the vehicle controller obtains the second camera image collected by the second camera device at the curved road section and performs image feature extraction on it to obtain the fourth image feature, and the obstacle information at the curve can be determined from the fourth image feature, which avoids the problem that detecting obstacles only with the lidar and the first camera device installed on the vehicle leads to a sight-distance obstruction and the obstacle information in the traveling direction of the vehicle cannot be determined accurately.
For the sake of brevity, each of the foregoing method embodiments is described as a series of action combinations, but a person skilled in the art should know that the present invention is not limited by the described order of actions, because according to the present invention some steps may be performed in other orders or simultaneously. Secondly, a person skilled in the art should also know that the embodiments described in the specification are all preferred embodiments, and the actions and modules involved are not necessarily required by the present invention.
It should be noted that the embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and for the parts that the embodiments have in common, reference may be made to one another. Since the apparatus embodiments are basically similar to the method embodiments, their description is relatively brief, and reference may be made to the relevant parts of the description of the method embodiments.
Finally, it should also be noted that in this document relational terms such as first and second are only used to distinguish one entity or operation from another and do not necessarily require or imply any such actual relationship or order between these entities or operations. Moreover, the terms "comprise", "include", or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the existence of other identical elements in the process, method, article, or device that includes the element.
The above description of the disclosed embodiments enables a person skilled in the art to implement or use the present invention. Various modifications to these embodiments will be obvious to a person skilled in the art, and the general principles defined herein can be implemented in other embodiments without departing from the spirit or scope of the present invention. Therefore, the present invention is not limited to the embodiments shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The above is only a preferred implementation of the present invention. It should be pointed out that a person of ordinary skill in the art can make several improvements and refinements without departing from the principles of the present invention, and these improvements and refinements should also be regarded as falling within the protection scope of the present invention.

Claims (10)

  1. A vehicle obstacle detection method, comprising:
    controlling a lidar installed on a vehicle to acquire a laser image in the traveling direction of the vehicle, and performing image feature extraction on the laser image to obtain a first image feature;
    controlling a first camera device installed on the vehicle to acquire a first camera image in the traveling direction of the vehicle, and performing image feature extraction on the first camera image to obtain a second image feature; and
    determining obstacle information based on the first image feature and the second image feature.
  2. The method according to claim 1, wherein the determining obstacle information based on the first image feature and the second image feature comprises:
    performing data fusion on the first image feature and the second image feature according to environmental conditions to obtain a third image feature; and
    inputting the third image feature into a pre-established obstacle recognition model to obtain obstacle information output by the obstacle recognition model.
  3. The method according to claim 1 or 2, further comprising:
    detecting the driving state of the vehicle, the driving state of the vehicle comprising a straight driving state and a curve driving state;
    if the driving state of the vehicle is the curve driving state, before the determining obstacle information based on the first image feature and the second image feature, the method further comprises:
    acquiring a second camera image collected by a second camera device arranged at a curved road section, and performing image feature extraction on the second camera image to obtain a fourth image feature;
    and the determining obstacle information comprises:
    determining obstacle information based on the first image feature, the second image feature, and the fourth image feature.
  4. The method according to claim 3, wherein the acquiring the second camera image collected by the second camera device arranged at the curved road section comprises:
    determining, according to the current position of the vehicle, the curved road section on the road section the vehicle is traveling on; and
    acquiring, through a road communication system, the second camera image collected by the second camera device arranged at the curved road section on the road section the vehicle is traveling on.
  5. The method according to claim 4, further comprising:
    judging whether the second camera image collected by the second camera device arranged at the curved road section has been acquired; and
    if it is judged that the second camera image collected by the second camera device arranged at the curved road section has not been acquired, prompting that the second camera device arranged at the curved road section is faulty.
  6. The method according to any one of claims 1-5, wherein the obstacle information comprises whether an obstacle exists, an obstacle type, an obstacle size, and a distance between the obstacle and the vehicle;
    and after the determining obstacle information, the method further comprises:
    if the obstacle information indicates that an obstacle exists, determining a vehicle braking strategy according to the distance between the obstacle and the vehicle, and controlling the vehicle to avoid the obstacle based on the determined vehicle braking strategy.
  7. A vehicle obstacle detection system, comprising:
    a lidar and a first camera device, the lidar and the first camera device being respectively installed on a vehicle; and
    an on-board controller communicatively connected to the lidar and the first camera device respectively;
    wherein the vehicle controller is configured to control the lidar installed on the vehicle to acquire a laser image in the traveling direction of the vehicle and perform image feature extraction on the laser image to obtain a first image feature; control the first camera device installed on the vehicle to acquire a first camera image in the traveling direction of the vehicle and perform image feature extraction on the first camera image to obtain a second image feature; and determine obstacle information based on the first image feature and the second image feature.
  8. The system according to claim 7, wherein the vehicle controller determining obstacle information based on the first image feature and the second image feature comprises:
    performing data fusion on the first image feature and the second image feature according to environmental conditions to obtain a third image feature; and
    inputting the third image feature into a pre-established obstacle recognition model to obtain obstacle information output by the obstacle recognition model.
  9. The system according to claim 7 or 8, further comprising:
    a second camera device, the second camera device being arranged at a curved road section and connected to the on-board controller through a road communication system;
    wherein the vehicle controller is further configured to acquire a second camera image collected by the second camera device arranged at the curved road section and perform image feature extraction on the second camera image to obtain a fourth image feature; and determine obstacle information based on the first image feature, the second image feature, and the fourth image feature.
  10. The system according to claim 9, wherein if the obstacle information indicates that an obstacle exists, the vehicle controller is further configured to determine a vehicle braking strategy according to the distance between the obstacle and the vehicle, and to control the vehicle to avoid the obstacle based on the determined vehicle braking strategy.
PCT/CN2019/124810 2019-08-19 2019-12-12 Vehicle obstacle detection method and system WO2021031469A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910763396.8 2019-08-19
CN201910763396.8A CN110412986A (zh) 2019-08-19 2019-08-19 Vehicle obstacle detection method and system

Publications (1)

Publication Number Publication Date
WO2021031469A1 (zh)

Family

ID=68367955

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/124810 WO2021031469A1 (zh) 2019-08-19 2019-12-12 一种车辆障碍物检测方法及系统

Country Status (2)

Country Link
CN (1) CN110412986A (zh)
WO (1) WO2021031469A1 (zh)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110412986A (zh) * 2019-08-19 2019-11-05 中车株洲电力机车有限公司 一种车辆障碍物检测方法及系统
JP7276093B2 (ja) * 2019-11-19 2023-05-18 トヨタ自動車株式会社 情報収集システム、中継装置、及びプログラム
CN111582173A (zh) * 2020-05-08 2020-08-25 东软睿驰汽车技术(沈阳)有限公司 一种自动驾驶的方法及系统
CN112550371A (zh) * 2020-11-25 2021-03-26 矿冶科技集团有限公司 井下障碍物检测方法、系统和控制装置
CN112606804B (zh) * 2020-12-08 2022-03-29 东风汽车集团有限公司 一种车辆主动制动的控制方法及控制系统
CN112650225B (zh) * 2020-12-10 2023-07-18 广东嘉腾机器人自动化有限公司 Agv避障方法
CN114915338B (zh) * 2021-02-07 2023-12-08 中国联合网络通信集团有限公司 一种车辆通信方法及装置
CN113050654A (zh) * 2021-03-29 2021-06-29 中车青岛四方车辆研究所有限公司 障碍物检测方法、巡检机器人车载避障系统及方法
CN113325826B (zh) * 2021-06-08 2022-08-30 矿冶科技集团有限公司 一种井下车辆控制方法、装置、电子设备及存储介质
CN113486783A (zh) * 2021-07-02 2021-10-08 浙江省交通投资集团有限公司智慧交通研究分公司 轨道交通车辆的障碍物检测方法及系统
CN115416724B (zh) * 2022-11-03 2023-03-24 中车工业研究院(青岛)有限公司 一种动车组障碍物检测及控制电路、系统
CN117141521B (zh) * 2023-11-01 2024-02-23 广汽埃安新能源汽车股份有限公司 一种基于数据融合的车辆控制方法及装置

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109145680A (zh) * 2017-06-16 2019-01-04 百度在线网络技术(北京)有限公司 一种获取障碍物信息的方法、装置、设备和计算机存储介质
CN109765571A (zh) * 2018-12-27 2019-05-17 合肥工业大学 一种车辆障碍物检测系统及方法
CN110045729A (zh) * 2019-03-12 2019-07-23 广州小马智行科技有限公司 一种车辆自动驾驶方法及装置
CN110045736A (zh) * 2019-04-12 2019-07-23 淮安信息职业技术学院 一种基于无人机的弯道障碍物避让方法及其系统
CN110412986A (zh) * 2019-08-19 2019-11-05 中车株洲电力机车有限公司 一种车辆障碍物检测方法及系统

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6873269B2 (en) * 2003-05-27 2005-03-29 Honeywell International Inc. Embedded free flight obstacle avoidance system
CN103389103B (zh) * 2013-07-03 2015-11-18 北京理工大学 Geographic environment feature map construction and navigation method based on data mining
CN103559791B (zh) * 2013-10-31 2015-11-18 北京联合大学 Vehicle detection method fusing radar and CCD camera signals
CN105892489B (zh) * 2016-05-24 2019-09-10 国网山东省电力公司电力科学研究院 Autonomous obstacle-avoidance UAV system based on multi-sensor fusion and control method
CN106646474A (zh) * 2016-12-22 2017-05-10 中国兵器装备集团自动化研究所 Device for detecting raised and sunken obstacles on unstructured roads
CN107650908B (zh) * 2017-10-18 2023-07-14 长沙冰眼电子科技有限公司 Environment perception system for unmanned vehicles
CN108010360A (zh) * 2017-12-27 2018-05-08 中电海康集团有限公司 Automatic driving environment perception system based on vehicle-road cooperation
CN108229366B (zh) * 2017-12-28 2021-12-14 北京航空航天大学 Deep-learning on-board obstacle detection method based on fusion of radar and image data
CN108416257A (zh) * 2018-01-19 2018-08-17 北京交通大学 Metro track obstacle detection method fusing visual and lidar data features
CN108663677A (zh) * 2018-03-29 2018-10-16 上海智瞳通科技有限公司 Method for improving target detection capability through deep fusion of multiple sensors
CN108983219B (zh) * 2018-08-17 2020-04-07 北京航空航天大学 Method and system for fusing image information and radar information of a traffic scene
CN109270524B (zh) * 2018-10-19 2020-04-07 禾多科技(北京)有限公司 Multi-data-fusion obstacle detection device for unmanned driving and detection method thereof
CN109359409A (zh) * 2018-10-31 2019-02-19 张维玲 Vehicle passability detection system based on vision and lidar sensors
CN109298415B (zh) * 2018-11-20 2020-09-22 中车株洲电力机车有限公司 Track and road obstacle detection method

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109145680A (zh) * 2017-06-16 2019-01-04 百度在线网络技术(北京)有限公司 Method, apparatus, device, and computer storage medium for obtaining obstacle information
CN109765571A (zh) * 2018-12-27 2019-05-17 合肥工业大学 Vehicle obstacle detection system and method
CN110045729A (zh) * 2019-03-12 2019-07-23 广州小马智行科技有限公司 Vehicle automatic driving method and device
CN110045736A (zh) * 2019-04-12 2019-07-23 淮安信息职业技术学院 UAV-based curve obstacle avoidance method and system
CN110412986A (zh) * 2019-08-19 2019-11-05 中车株洲电力机车有限公司 Vehicle obstacle detection method and system

Also Published As

Publication number Publication date
CN110412986A (zh) 2019-11-05

Similar Documents

Publication Publication Date Title
WO2021031469A1 (zh) 一种车辆障碍物检测方法及系统
US11854212B2 (en) Traffic light detection system for vehicle
CN107346612B (zh) 一种基于车联网的车辆防碰撞方法和系统
US10239539B2 (en) Vehicle travel control method and vehicle travel control device
US20180259640A1 (en) Method and System for Environment Detection
CN112009524B (zh) 一种用于有轨电车障碍物检测的系统及方法
CN110816540B (zh) 交通拥堵的确定方法、装置、系统及车辆
CN109765571B (zh) 一种车辆障碍物检测系统及方法
US10369995B2 (en) Information processing device, information processing method, control device for vehicle, and control method for vehicle
CN104290753A (zh) 一种前方车辆运动状态追踪预测装置及其预测方法
CN111391856A (zh) 汽车自适应巡航的前方弯道检测系统及方法
CN110412980B (zh) 汽车自动驾驶并线控制方法
CN113799852B (zh) 一种支持动态模式切换的智能主动障碍物识别防护方法
CN110007669A (zh) 一种用于汽车的智能驾驶避障方法
CN108639108B (zh) 一种机车作业安全防护系统
CN115257784A (zh) 基于4d毫米波雷达的车路协同系统
KR20150096924A (ko) 전방 충돌 차량 선정 방법 및 시스템
CN114613129A (zh) 用于判断交通信号灯状态的方法、程序产品和系统
CN111123902A (zh) 一种车辆进站方法及车站
DE102012211495B4 (de) Fahrzeugumgebungs-Überwachungsvorrichtung
US20220348199A1 (en) Apparatus and method for assisting driving of vehicle
CN114822083B (zh) 智慧车辆编队辅助控制系统
CN113386791B (zh) 一种基于无人运输车列在大雾天气下的避险系统
US11541887B2 (en) Enabling reverse motion of a preceding vehicle at bunched traffic sites
CN114299715A (zh) 一种基于视频、激光雷达与dsrc的高速公路信息检测系统

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19942023

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19942023

Country of ref document: EP

Kind code of ref document: A1


32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205 DATED 28.09.2022)
