WO2021031469A1 - Vehicle obstacle detection method and system - Google Patents
Vehicle obstacle detection method and system
- Publication number
- WO2021031469A1 (PCT/CN2019/124810)
- Authority
- WO
- WIPO (PCT)
Classifications
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0212—Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
- G05D1/0214—Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory in accordance with safety or protection criteria, e.g. avoiding hazardous areas
- G05D1/0223—Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving speed control of the vehicle
- G05D1/0231—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
- G05D1/0238—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors
- G05D1/024—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors in combination with a laser
- G05D1/0242—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using non-visible light signals, e.g. IR or UV signals
- G05D1/0246—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
- G05D1/0251—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means extracting 3D information from a plurality of images taken from different locations, e.g. stereo vision
- G05D1/0257—Control of position or course in two dimensions specially adapted to land vehicles using a radar
- G05D1/0276—Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle
- G05D1/0278—Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle using satellite positioning signals, e.g. GPS
Definitions
- This application belongs to the technical field of vehicle control, and in particular relates to a vehicle obstacle detection method and system.
- In the prior art, obstacles around a vehicle are detected by collecting images of the vehicle's surroundings with a camera device installed on the vehicle and determining from the collected images whether an obstacle is present. Since a camera device can only collect images within a short range, the images collected by the on-vehicle camera device cover only the vehicle's immediate vicinity.
- As a result, by the time an obstacle has been identified from the collected images, the vehicle is already very close to it; a vehicle traveling at high speed cannot avoid the obstacle, resulting in low driving safety.
- the purpose of the present application is to provide a vehicle obstacle detection method and system, which are used to solve the problem of low safety of vehicles at high speeds in the prior art.
- This application provides a vehicle obstacle detection method, including: controlling a lidar installed on the vehicle to obtain a laser image of the vehicle's traveling direction and performing image feature extraction on the laser image to obtain a first image feature; controlling a first camera device installed on the vehicle to obtain a first camera image of the traveling direction and performing image feature extraction on it to obtain a second image feature; and determining obstacle information based on the first image feature and the second image feature.
- Optionally, the determining of obstacle information based on the first image feature and the second image feature includes: performing data fusion on the first image feature and the second image feature according to environmental conditions to obtain a third image feature, and inputting the third image feature into a pre-established obstacle recognition model to obtain the obstacle information output by the obstacle recognition model.
- Optionally, the method further includes: detecting the driving state of the vehicle, where the driving state of the vehicle includes a straight driving state and a curved driving state;
- when the vehicle is in a curved driving state, the method further includes: acquiring a second camera image collected by a second camera device installed on the curved road section, performing image feature extraction on the second camera image to obtain a fourth image feature, and determining the obstacle information based on the first image feature, the second image feature, and the fourth image feature;
- the acquiring of the second camera image collected by the second camera device installed on the curved road section includes: acquiring, through a road communication system, the second camera image collected by the second camera device set on the curved section of the road on which the vehicle is traveling.
- Optionally, the obstacle information includes whether there is an obstacle, the obstacle type, the obstacle size, and the distance between the obstacle and the vehicle;
- after the obstacle information is determined, the method further includes: determining a vehicle braking strategy according to the distance between the obstacle and the vehicle, and controlling the vehicle to avoid the obstacle based on the determined vehicle braking strategy.
- the application also provides a vehicle obstacle detection system, including:
- Lidar and a first camera device are respectively installed on a vehicle;
- a vehicle-mounted controller communicatively connected to the lidar and the first camera device, respectively;
- the vehicle-mounted controller is configured to control the lidar installed on the vehicle to obtain a laser image of the vehicle's traveling direction and perform image feature extraction on the laser image to obtain a first image feature; to control the first camera device installed on the vehicle to obtain a first camera image of the traveling direction and perform image feature extraction on it to obtain a second image feature; and to determine obstacle information based on the first image feature and the second image feature.
- Optionally, the vehicle-mounted controller determining the obstacle information based on the first image feature and the second image feature includes: performing data fusion on the first image feature and the second image feature according to environmental conditions to obtain a third image feature, and inputting the third image feature into a pre-established obstacle recognition model to obtain the obstacle information output by the obstacle recognition model.
- Optionally, the system further includes: a second camera device set at a curved road section and connected to the vehicle-mounted controller through a road communication system;
- the vehicle-mounted controller is further configured to acquire the second camera image collected by the second camera device installed on the curved road section, perform image feature extraction on the second camera image to obtain a fourth image feature, and determine the obstacle information based on the first image feature, the second image feature, and the fourth image feature.
- the vehicle-mounted controller is further configured to determine a vehicle braking strategy according to the distance between the obstacle and the vehicle, and to control the vehicle to avoid the obstacle based on the determined vehicle braking strategy.
- In this application, the lidar and the camera device installed on the vehicle separately acquire images of the vehicle's traveling direction, image features are extracted from each image, and obstacle information is then determined from the extracted features.
- Because lidar can acquire images at long range, combining the image acquired by the lidar with the image acquired by the camera for obstacle detection makes it possible to detect distant obstacles, so that a vehicle traveling at high speed has a sufficiently long braking distance; collisions caused by the vehicle having too little time to brake are avoided, and driving safety is improved.
- Figure 1 is a flowchart of a vehicle obstacle detection method disclosed in the present application
- FIG. 3 is a schematic diagram of an application scenario of the present application including a straight driving section and a curved road section;
- Fig. 4 is a schematic structural diagram of a vehicle obstacle detection system disclosed in the present application.
- This application provides a vehicle obstacle detection method.
- In this application, images of the vehicle's traveling direction are obtained through the lidar and the camera device installed on the vehicle, image features are extracted from each, and obstacle information is then determined from the extracted features.
- Because lidar can acquire images at long range, combining the image acquired by the lidar with the image acquired by the camera for obstacle detection makes it possible to detect distant obstacles, so that the vehicle has a sufficiently long braking distance when running at high speed; collisions caused by the vehicle having too little time to brake are avoided, and vehicle safety is improved.
- a vehicle obstacle detection method provided by an embodiment of the present application may include the following steps:
- S101 Control the laser radar installed on the vehicle to obtain a laser image of the traveling direction of the vehicle, and perform image feature extraction on the laser image to obtain a first image feature.
- the braking distance is 225 m; when the vehicle travels at 60 km/h and the maximum braking deceleration is 1.1 m/s², the braking distance is 126 m. Therefore, to ensure that the vehicle can brake and avoid obstacles when there are obstacles in its traveling direction, a sensor capable of detecting obstacles at long range in front of the vehicle is selected for obstacle detection.
- a solid-state laser radar with a range of up to 200 m is selected in this embodiment.
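- The distances quoted above follow from the kinematic stopping-distance relation d = v²/(2a). The short check below is illustrative and not part of the application; the function name and the range comparison are ours:

```python
def braking_distance_m(speed_kmh: float, max_decel_mps2: float) -> float:
    """Stopping distance d = v^2 / (2a), with speed in km/h and deceleration in m/s^2."""
    v = speed_kmh / 3.6  # convert km/h to m/s
    return v * v / (2.0 * max_decel_mps2)

# The 60 km/h figure from the text: roughly 126 m at 1.1 m/s^2 deceleration.
print(round(braking_distance_m(60, 1.1)))  # 126
# A camera that only sees tens of metres ahead cannot cover such a distance,
# which motivates choosing a lidar with a range of about 200 m.
```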
- The solid-state lidars are installed on both sides of the front end of the vehicle, and during driving the lidars are controlled to emit laser pulses and scan the scene within a certain range in front of the vehicle to obtain laser images.
- An image feature extraction method is then used to extract obstacle features from the laser image to obtain the first image feature.
- Obstacles include people, rocks, roadblocks, vehicles, etc.; obstacle features include the outline, size, and internal geometric features of the obstacle.
- S102 Control the first camera device installed on the vehicle to obtain a first camera image of the traveling direction of the vehicle, and perform image feature extraction on the first camera image to obtain a second image feature.
- In addition to the lidar, a first camera device, such as an ordinary camera and an infrared camera, is also installed on the vehicle.
- the first camera device can capture a scene within a range of tens of meters in front of the vehicle.
- the first camera device is controlled to work, and the scene in a certain range in front of the vehicle is collected to obtain the first camera image.
- the image feature extraction method is used to perform obstacle feature extraction on the first camera image to obtain the second image feature.
- Obstacles include people, rocks, roadblocks, vehicles, etc. Obstacle features include the outline, size, and internal geometric features of the obstacle.
- The order of performing step S101 and step S102 is not limited in this embodiment, as long as the first image feature and the second image feature can each be obtained.
- S103 Determine obstacle information based on the first image feature and the second image feature.
- the first image feature is the obstacle feature obtained after the obstacle feature extraction is performed on the laser image, including the outline, size, and internal geometric features of the obstacle;
- the second image feature is the obstacle feature obtained after the obstacle feature extraction is performed on the camera image, including the outline, size, and internal geometric features of the obstacle.
- the obstacle information in the direction of the vehicle is determined.
- the obstacle information includes whether there is an obstacle, the type of the obstacle, the size of the obstacle, and the distance between the obstacle and the vehicle.
- When there is an obstacle, the obstacle information also includes the obstacle type, the obstacle size, and the distance between the obstacle and the vehicle; when there is no obstacle, the obstacle information only includes a prompt that there is no obstacle.
- When an obstacle is present, the system can raise an alarm and control the vehicle to avoid the obstacle, for example by controlling the vehicle to brake.
- the method of controlling the braking of the vehicle is to determine the corresponding vehicle braking strategy based on the distance between the obstacle and the vehicle included in the obstacle information, and control the braking of the vehicle to avoid the obstacle based on the determined vehicle braking strategy.
- Vehicle braking strategies include conventional braking and emergency braking.
- When the distance between the obstacle and the vehicle is less than a preset threshold, the vehicle braking strategy is determined to be emergency braking; when the distance is not less than the preset threshold, the vehicle braking strategy is determined to be conventional braking.
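- The threshold rule above can be sketched as follows; the 150 m threshold is an assumed value for illustration, since the application does not specify the preset threshold:

```python
def braking_strategy(distance_m: float, threshold_m: float = 150.0) -> str:
    """Select the braking strategy from the obstacle distance.

    threshold_m is illustrative; the application only says 'a preset threshold'.
    """
    return "emergency" if distance_m < threshold_m else "conventional"

print(braking_strategy(80))   # emergency
print(braking_strategy(300))  # conventional
```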
- The following takes the case where the vehicle obstacle detection method is applied to a vehicle-mounted controller as an example.
- When the vehicle-mounted controller determines that there is an obstacle in front of the vehicle, it determines the corresponding vehicle braking strategy based on the distance between the obstacle and the vehicle included in the obstacle information, and sends the determined braking strategy through the vehicle data network to the traction control unit of the automatic train operation (ATO) system, so that the traction control unit performs the corresponding braking according to the received vehicle braking strategy.
- In this embodiment, images of the vehicle's traveling direction are obtained through the lidar and the camera installed on the vehicle, image features are extracted from each, and obstacle information is then determined from the extracted features.
- Because lidar can acquire images at long range, combining the image acquired by the lidar with the image acquired by the camera for obstacle detection makes it possible to detect distant obstacles, so that a vehicle traveling at high speed has a sufficiently long braking distance; collisions caused by the vehicle having too little time to brake are avoided, and driving safety is improved.
- Furthermore, since the lidar and camera devices are provided on both sides of the front end of the vehicle in this embodiment, obstacle feature extraction on the images they acquire can recover the three-dimensional geometric features of the obstacle, thereby improving the accuracy of the determined obstacle information.
- In practice, the vehicle driving environment is changeable: night driving, fog, rain, snow, and so on.
- Different driving environments interfere with laser images and camera images to different degrees, so when the obstacle features included in the first image feature are integrated with those included in the second image feature to determine the obstacle information in the vehicle's driving direction, the influence of the current driving environment must be fully considered.
- the following describes an implementation manner for determining obstacle information based on the first image feature and the second image feature.
- Step 1 Perform data fusion on the first image feature and the second image feature according to environmental conditions to obtain a third image feature.
- a common camera, an infrared camera, and a lidar are installed on both sides of the front end of the vehicle to obtain a scene within a certain range in front of the vehicle to obtain a first camera image and a laser image.
- the first camera image and the laser image are preprocessed respectively to obtain the preprocessed camera image and the preprocessed laser image, and the preprocessing includes filtering processing, edge detection, etc.
- Obstacle feature extraction is performed on the preprocessed laser image to obtain the first image feature, and on the preprocessed camera image to obtain the second image feature.
- Data fusion is performed on the first image feature and the second image feature to eliminate the influence on the image collected by the lidar and the image collected by the camera under different driving environments, and obtain an accurate third image feature.
- Depending on the environmental conditions, the fusion can take different forms. In one case, the second image feature extracted from the camera image obtained by the camera device is used to supplement the first image feature extracted from the laser image obtained by the lidar, yielding the third image feature.
- The camera image obtained by the camera device includes the image obtained by the infrared camera and/or the image obtained by the ordinary camera.
- In another case, the first image feature extracted from the lidar's laser image is supplemented by the second image feature extracted from the infrared camera's image to yield the third image feature.
- In yet another case, the second image feature extracted from the infrared camera's image is used as the primary feature, supplemented by the first image feature from the lidar's laser image and/or the image feature extracted from the ordinary camera's image, to yield the third image feature.
- Data fusion is thus a complementary method that compensates for the weakness of any single sensor in different environments.
- During fusion, the image features acquired by the sensors are compared against and complement one another, yielding an accurate and complete third image feature.
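- As an illustration of the fusion step, the sketch below weights each sensor's feature vector by an assumed per-environment reliability. The environments, weights, and vector layout are all invented for the example; the application does not specify a fusion formula:

```python
# Hypothetical environment-dependent feature fusion: each sensor's feature
# vector is scaled by an assumed reliability weight for the current
# environment, and the weighted vectors are summed into the third image
# feature. All weights below are illustrative.
ENV_WEIGHTS = {
    "day":   {"lidar": 0.6, "camera": 0.3, "infrared": 0.1},
    "night": {"lidar": 0.3, "camera": 0.1, "infrared": 0.6},
    "fog":   {"lidar": 0.5, "camera": 0.1, "infrared": 0.4},
}

def fuse_features(features: dict, environment: str) -> list:
    """Weighted sum of equal-length per-sensor feature vectors."""
    weights = ENV_WEIGHTS[environment]
    length = len(next(iter(features.values())))
    fused = [0.0] * length
    for sensor, vec in features.items():
        w = weights[sensor]
        for i, value in enumerate(vec):
            fused[i] += w * value
    return fused

third = fuse_features(
    {"lidar": [1.0, 2.0], "camera": [1.0, 0.0], "infrared": [0.0, 2.0]},
    "night",
)
# third is approximately [0.4, 1.8]: at night the infrared feature dominates.
```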
- Step 2 Input the third image feature into the pre-established obstacle recognition model to obtain obstacle information output by the obstacle recognition model.
- An obstacle recognition model is established, and the obstacle recognition model is trained using obstacle feature sample data to obtain the trained obstacle recognition model.
- an obstacle recognition model can be established based on a hidden Markov model or a neural network model.
- For example, the obstacles on a rail transit line are mainly maintenance tools, fallen trackside equipment, guardrails at turnouts, vehicles, personnel, etc. Lidar and camera devices are used in advance to image the main obstacles existing on the rail transit line, obstacle features are extracted from the acquired images, and an obstacle feature sample database is established.
- the obstacle characteristic sample data is obtained from the obstacle characteristic sample database, and the obstacle recognition model is trained.
- the obstacle information is output after being processed by the obstacle recognition model.
- During use, the obstacle feature sample database can be updated and the obstacle recognition model retrained, to ensure that the model can accurately and quickly determine whether an obstacle exists and obstacle information such as the obstacle's type and size.
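- The train-then-query workflow around the recognition model can be illustrated with a toy stand-in. The application mentions hidden Markov and neural network models; the sketch below is only a nearest-neighbour lookup over an invented feature sample database, not the claimed model:

```python
import math

# Invented sample database: (feature vector [outline_len_m, height_m], label).
SAMPLE_DB = [
    ([0.9, 1.7], "person"),
    ([2.5, 0.8], "rock"),
    ([4.5, 1.5], "vehicle"),
    ([1.2, 1.0], "roadblock"),
]

def recognize(feature: list) -> str:
    """Return the label of the closest sample in the feature database."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(SAMPLE_DB, key=lambda s: dist(s[0], feature))[1]

print(recognize([4.2, 1.4]))  # vehicle
```

Updating the sample database (as the text describes) amounts to appending new labelled feature vectors and, for a real model, retraining on the enlarged set.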
- the driving state of the vehicle includes a straight driving state and a curved driving state.
- When the vehicle is in a straight driving state, the aforementioned vehicle obstacle detection method can be used directly to detect whether there is an obstacle in front of the vehicle.
- On a curved section, however, the sight distance is limited by the small curve radius, so obstacle information in front of the vehicle cannot be accurately determined from the images acquired by the lidar and the camera installed on the vehicle alone.
- Based on this, another vehicle obstacle detection method is disclosed in this embodiment, which is applied to a vehicle control system, such as a vehicle-mounted controller, as shown in Figure 2.
- the method may include the following steps:
- S201 Detect the driving state of the vehicle; the driving state of the vehicle includes a straight driving state and a curved driving state.
- step S202 is executed.
- the driving state of the vehicle may be detected according to the steering angle sensor installed on the vehicle, or the driving state of the vehicle may be detected according to the steering lever.
- In a straight driving state the driver does not toggle the steering lever, while in a curve driving state the driver does, so the driving state of the vehicle can be determined from the position of the steering lever.
- Alternatively, the driving state can be determined from the vehicle's positioning device together with stored road-section attributes, which include straight sections and curved sections: if the positioning device indicates the vehicle is entering a curved section, the driving state is determined to be the curve driving state; if it indicates a straight section, the driving state is determined to be the straight driving state.
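- A minimal sketch of driving-state detection from a steering angle sensor; the 2° threshold is an assumption for illustration, not a value given by the application:

```python
# Assumed threshold: steering angles within +/-2 degrees count as straight.
STRAIGHT_THRESHOLD_DEG = 2.0

def driving_state(steering_angle_deg: float) -> str:
    """Classify the driving state as 'straight' or 'curve' from the steering angle."""
    if abs(steering_angle_deg) < STRAIGHT_THRESHOLD_DEG:
        return "straight"
    return "curve"

print(driving_state(0.5))   # straight
print(driving_state(-8.0))  # curve
```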
- S202 Control the laser radar installed on the vehicle to obtain a laser image of the traveling direction of the vehicle, and perform image feature extraction on the laser image to obtain a first image feature.
- The implementation of steps S202-S203 in this embodiment is similar to that of steps S101-S102 in the previous embodiment and will not be repeated here.
- S204 Acquire a second camera image collected by a second camera device installed on a curved road section, and perform image feature extraction on the second camera image to obtain a fourth image feature.
- Multiple sensors, including an ordinary camera and an infrared camera, are set up at the curved road section.
- The scene at the curved section is captured by the ordinary camera and the infrared camera to obtain the second camera image.
- The second camera image is transmitted to the vehicle-mounted controller through the road communication system, so that the controller obtains the second camera image collected by the second camera device installed on the curved road section.
- The road communication system consists of a wired network and a wireless network.
- The wired network is the data communication network between trackside equipment (such as the second camera device), i.e. the backbone network, through which data is transmitted between the control center, the vehicle, and the equipment concentration station; the wireless network carries vehicle-to-ground data communication between the vehicle-mounted controller and trackside equipment.
- The second camera device is connected to the Ethernet of the road communication system, on which multiple wireless transmission devices are arranged.
- Each wireless transmission device transmits wireless signals and, together with the vehicle-mounted wireless transmission device, completes data transmission and reception, so that the second camera device can transmit the second camera image to the vehicle-mounted controller through the road communication system.
- In practice, a second camera device is provided at each curve, so that the obstacle information at each curve can be determined from the second camera image it collects.
- The current position of the vehicle is determined by the on-board positioning device, such as GPS; the curved sections on the vehicle's route are then determined from the current position, and the second camera images collected by the second camera devices at the corresponding curved sections are obtained through the road communication system.
- In addition, the second camera images collected by the second camera devices can determine the obstacle information at all curves of the road section even before the vehicle has traveled to a curve; in this case, the operation of detecting the driving state of the vehicle can be omitted.
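- Selecting the relevant curved section from the vehicle's current position can be sketched as a lookup over known section boundaries. The 1-D position model and the section coordinates below are invented for illustration; a real system would use the map data behind the positioning device:

```python
# Invented curve-section boundaries along the line, as (start_m, end_m)
# chainage in metres.
CURVE_SECTIONS = [(1200.0, 1500.0), (4800.0, 5100.0)]

def next_curve(position_m: float):
    """Return the next curve section at or ahead of the given position, else None."""
    for start, end in CURVE_SECTIONS:
        if position_m <= end:
            return (start, end)
    return None

print(next_curve(900.0))   # (1200.0, 1500.0)
print(next_curve(2000.0))  # (4800.0, 5100.0)
```

Once the section is known, the controller would request the second camera image from the second camera device serving that section over the road communication system.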
- This embodiment also includes determining whether the second camera image collected by the second camera device installed at the curved section has been obtained; if it has not, a fault in that second camera device is indicated and road maintenance personnel are prompted to service it, so that obstacles at the curve are not missed because of a failed second camera device.
- obstacle feature extraction is performed on the second camera image to obtain the fourth image feature.
- the obstacle information at the curve can be determined.
- the method for processing the second camera image in this embodiment is the same as the method for processing the first camera image in step S102, and will not be repeated here.
- Multiple ordinary cameras and infrared cameras can be set up at the curved road section; because detection at a curve is not limited by sensing range, a sensor with a large detection range is unnecessary, so no lidar is used there.
- When the ordinary camera and the infrared camera each obtain a camera image, the obstacle features extracted from the infrared camera's image are used as the primary features and those extracted from the ordinary camera's image as a supplement for data fusion, and the obstacle information is determined from the fused obstacle features.
- the order of performing steps S202-S204 is not limited, as long as the first image feature, the second image feature, and the fourth image feature can be obtained respectively.
- Obstacle information within a certain range in front of the vehicle can be determined based on the first image feature and the second image feature, and obstacle information at a curve can be determined based on the fourth image feature.
- The method of determining obstacle information within a certain range in front of the vehicle based on the first image feature and the second image feature is similar to the implementation described above and will not be repeated here.
- steps S202-S203 are executed, and then the step of determining obstacle information based on the first image feature and the second image feature is executed.
- FIG. 3 is a schematic diagram of a specific application scenario of the vehicle obstacle detection method of this embodiment, including straight-line driving sections and curved road sections.
- On the straight-line driving section, the lidar and the first camera device installed on the vehicle detect obstacles.
- In FIG. 3 the lidar and the first camera device are represented by a single structure for illustration.
- In practice, the lidar and the first camera device are two different structures.
- Obstacle detection on the straight-line section is thus realized by the lidar and the first camera device installed on the vehicle.
- At the curved road section, obstacle detection is realized by the second camera device installed there; data is transmitted between the second camera device and the vehicle controller through the road communication system.
- The wireless network in the road communication system in FIG. 3 refers to a trackside wireless transmitter, such as a Wi-Fi device, connected to a wired network or Ethernet.
- The second camera device is also connected to the wired network or Ethernet. After the second camera device captures the camera image at the curve, the image is sent over the wired network or Ethernet to the Wi-Fi device and then transmitted over the wireless network to the vehicle controller on the vehicle; alternatively, the vehicle controller fetches the camera image captured by the second camera device from the Wi-Fi device.
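The relay path just described (second camera → wired network → trackside Wi-Fi device → vehicle controller) can be sketched minimally; the class and method names are illustrative, not from the patent.

```python
class RoadCommunicationSystem:
    """Minimal sketch of the trackside relay: the second camera device
    pushes images over the wired network/Ethernet to the Wi-Fi device,
    and the vehicle controller pulls the latest image over the wireless
    link (the 'fetch' variant described in the text)."""

    def __init__(self):
        self._latest = None  # image buffered at the trackside Wi-Fi device

    def push_from_camera(self, image):
        # second camera device -> wired network/Ethernet -> Wi-Fi device
        self._latest = image

    def pull_to_vehicle(self):
        # Wi-Fi device -> wireless network -> on-board vehicle controller;
        # returns None if no image has arrived (used for fault detection).
        return self._latest
```

Returning `None` when nothing has arrived is what lets the controller detect a failed second camera device, as described earlier in this embodiment.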
- A second camera device is provided at the curve. The vehicle controller obtains the second camera image collected by the second camera device installed on the curve section and performs image feature extraction on the second camera image.
- The obstacle information at the curve can thus be determined, avoiding the problem that, when obstacles are detected only by the lidar and the first camera device installed on the vehicle, sight-distance obstructions prevent obstacle information in the vehicle's traveling direction from being determined.
- an embodiment of the present application provides a vehicle obstacle detection system. As shown in FIG. 4, the system includes:
- a lidar and a first camera device; the lidar and the first camera device are respectively installed on the vehicle.
- An on-board controller respectively communicating with the lidar and the first camera device.
- The vehicle controller is configured to: control the lidar installed on the vehicle to acquire a laser image in the vehicle's traveling direction and perform image feature extraction on the laser image to obtain a first image feature; control the first camera device installed on the vehicle to acquire a first camera image in the vehicle's traveling direction and perform image feature extraction on the first camera image to obtain a second image feature; and determine obstacle information based on the first image feature and the second image feature.
- One way for the vehicle controller to determine obstacle information based on the first image feature and the second image feature is: performing data fusion on the first image feature and the second image feature according to environmental conditions to obtain a third image feature; and inputting the third image feature into a pre-established obstacle recognition model to obtain the obstacle information output by the obstacle recognition model.
- Obstacle feature sample data is obtained from an obstacle feature sample database, and the established obstacle recognition model is trained on this sample data to obtain the trained obstacle recognition model.
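The two-stage pipeline above (environment-dependent fusion, then a recognition model) can be sketched as follows; the environment weight and all names are assumptions, and the model is stood in by any callable, since the patent does not fix its architecture.

```python
def detect_obstacle(first_feature, second_feature, recognition_model,
                    env_weight=0.5):
    """Sketch of the controller's detection pipeline.

    Stage 1: fuse the lidar (first) and camera (second) features with an
    environment-dependent weight (env_weight is an assumed knob; e.g. a
    higher lidar weight could be chosen in poor lighting).
    Stage 2: feed the fused third feature to the pre-trained obstacle
    recognition model and return its output.
    """
    third_feature = [env_weight * a + (1.0 - env_weight) * b
                     for a, b in zip(first_feature, second_feature)]
    return recognition_model(third_feature)
```

Any pre-trained classifier exposing a single-call interface could be plugged in as `recognition_model`.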
- the system further includes: a second camera device, as shown in FIG. 4.
- the second camera device is arranged at a curved road section and is connected to the vehicle controller through a road communication system;
- The vehicle controller is also used to obtain a second camera image collected by the second camera device installed on the curved road section, and to perform image feature extraction on the second camera image to obtain a fourth image feature; obstacle information is then determined based on the first image feature, the second image feature, and the fourth image feature.
- The vehicle controller is also used to determine a vehicle braking strategy based on the distance between the obstacle and the vehicle, to control the vehicle to avoid the obstacle based on the determined vehicle braking strategy, and to raise an alarm.
- Vehicle braking strategies include conventional braking and emergency braking.
- When the distance between the obstacle and the vehicle is less than a preset threshold, the vehicle braking strategy is determined to be emergency braking; when the distance between the obstacle and the vehicle is not less than the preset threshold, the vehicle braking strategy is determined to be conventional braking.
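The threshold rule above can be sketched directly; the default threshold value is illustrative, as the patent does not specify one.

```python
def choose_braking_strategy(distance_m, threshold_m=200.0):
    """Select the braking strategy from the obstacle distance.

    Emergency braking when the obstacle is closer than the preset
    threshold, conventional braking otherwise. The 200 m default is an
    assumed value, not taken from the patent.
    """
    return "emergency" if distance_m < threshold_m else "conventional"
```

Note the boundary case: a distance exactly equal to the threshold is "not less than" it, so conventional braking is chosen, matching the text.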
- The above description takes applying the vehicle obstacle detection method to the vehicle controller as an example.
- When the vehicle controller determines that there is an obstacle in front of the vehicle, it determines the corresponding vehicle braking strategy based on the distance between the obstacle and the vehicle included in the obstacle information, and sends the determined vehicle braking strategy through the vehicle data network to the traction control unit of the automatic train operation (ATO) system, so that the traction control unit performs the corresponding braking according to the received vehicle braking strategy.
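The dispatch step just described can be sketched end to end; `traction_unit_send` stands in for the vehicle data network link to the ATO traction control unit, and all names and the threshold default are illustrative assumptions.

```python
def dispatch_braking(obstacle_info, traction_unit_send, threshold_m=200.0):
    """If obstacle_info reports an obstacle, choose a braking strategy
    from the obstacle distance and forward it to the ATO traction
    control unit via the supplied send callable.

    obstacle_info is assumed to be a dict like
    {"exists": bool, "distance_m": float}; the patent does not fix the
    message format.
    """
    if not obstacle_info.get("exists"):
        return None  # no obstacle: nothing to send
    strategy = ("emergency" if obstacle_info["distance_m"] < threshold_m
                else "conventional")
    traction_unit_send(strategy)  # vehicle data network -> traction unit
    return strategy
```

The traction control unit would then apply the corresponding braking on receipt of the strategy message.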
- In this embodiment, images of the vehicle's traveling direction are obtained by the lidar and the camera device installed on the vehicle, image features are extracted from each, and obstacle information is then determined from the extracted image features.
- Lidar can acquire images at long range. By combining the image acquired by the lidar with the image acquired by the camera device for obstacle detection, long-distance obstacles can be detected, so that a vehicle traveling at high speed has a sufficiently long braking distance. This avoids collisions with obstacles caused by insufficient braking time and improves driving safety.
- A second camera device is installed at the curve. The vehicle controller obtains the second camera image collected by the second camera device on the curve section and performs image feature extraction on it to obtain the fourth image feature.
- The obstacle information at the curve can thus be determined, avoiding the problem that, when obstacles are detected only by the lidar and the first camera device installed on the vehicle, sight-distance obstructions prevent obstacle information in the vehicle's traveling direction from being accurately determined.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Aviation & Aerospace Engineering (AREA)
- General Physics & Mathematics (AREA)
- Automation & Control Theory (AREA)
- Electromagnetism (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Optics & Photonics (AREA)
- Multimedia (AREA)
- Traffic Control Systems (AREA)
Abstract
Description
Claims (10)
- A vehicle obstacle detection method, characterized by comprising: controlling a lidar installed on a vehicle to acquire a laser image in the vehicle's traveling direction, and performing image feature extraction on the laser image to obtain a first image feature; controlling a first camera device installed on the vehicle to acquire a first camera image in the vehicle's traveling direction, and performing image feature extraction on the first camera image to obtain a second image feature; and determining obstacle information based on the first image feature and the second image feature.
- The method according to claim 1, characterized in that determining obstacle information based on the first image feature and the second image feature comprises: performing data fusion on the first image feature and the second image feature according to environmental conditions to obtain a third image feature; and inputting the third image feature into a pre-established obstacle recognition model to obtain obstacle information output by the obstacle recognition model.
- The method according to claim 1 or 2, characterized by further comprising: detecting a vehicle driving state, the vehicle driving state comprising a straight-line driving state and a curve driving state; if the vehicle driving state is the curve driving state, before determining obstacle information based on the first image feature and the second image feature, further comprising: obtaining a second camera image collected by a second camera device installed at a curved road section, and performing image feature extraction on the second camera image to obtain a fourth image feature; determining obstacle information then comprises: determining obstacle information based on the first image feature, the second image feature, and the fourth image feature.
- The method according to claim 3, characterized in that obtaining the second camera image collected by the second camera device installed at the curved road section comprises: determining, according to the vehicle's current position, the curved road section on the vehicle's driving route; and obtaining, through a road communication system, the second camera image collected by the second camera device installed at the curved road section on the vehicle's driving route.
- The method according to claim 4, characterized by further comprising: determining whether the second camera image collected by the second camera device installed at the curved road section has been obtained; if it is determined that the second camera image has not been obtained, indicating that the second camera device installed at the curved road section is faulty.
- The method according to any one of claims 1-5, characterized in that the obstacle information comprises whether an obstacle exists, the obstacle type, the obstacle size, and the distance between the obstacle and the vehicle; after determining the obstacle information, the method further comprises: if the obstacle information indicates that an obstacle exists, determining a vehicle braking strategy according to the distance between the obstacle and the vehicle, and controlling the vehicle to avoid the obstacle based on the determined vehicle braking strategy.
- A vehicle obstacle detection system, characterized by comprising: a lidar and a first camera device, the lidar and the first camera device being respectively installed on a vehicle; and an on-board controller communicatively connected to the lidar and the first camera device respectively; the vehicle controller is configured to control the lidar installed on the vehicle to acquire a laser image in the vehicle's traveling direction and perform image feature extraction on the laser image to obtain a first image feature; control the first camera device installed on the vehicle to acquire a first camera image in the vehicle's traveling direction and perform image feature extraction on the first camera image to obtain a second image feature; and determine obstacle information based on the first image feature and the second image feature.
- The system according to claim 7, characterized in that determining obstacle information by the vehicle controller based on the first image feature and the second image feature comprises: performing data fusion on the first image feature and the second image feature according to environmental conditions to obtain a third image feature; and inputting the third image feature into a pre-established obstacle recognition model to obtain obstacle information output by the obstacle recognition model.
- The system according to claim 7 or 8, characterized by further comprising: a second camera device installed at a curved road section and connected to the on-board controller through a road communication system; the vehicle controller is further configured to obtain a second camera image collected by the second camera device installed at the curved road section, perform image feature extraction on the second camera image to obtain a fourth image feature, and determine obstacle information based on the first image feature, the second image feature, and the fourth image feature.
- The system according to claim 9, characterized in that, if the obstacle information indicates that an obstacle exists, the vehicle controller is further configured to determine a vehicle braking strategy according to the distance between the obstacle and the vehicle, and to control the vehicle to avoid the obstacle based on the determined vehicle braking strategy.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910763396.8 | 2019-08-19 | ||
CN201910763396.8A CN110412986A (zh) | 2019-08-19 | 2019-08-19 | 一种车辆障碍物检测方法及系统 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021031469A1 true WO2021031469A1 (zh) | 2021-02-25 |
Family
ID=68367955
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2019/124810 WO2021031469A1 (zh) | 2019-08-19 | 2019-12-12 | 一种车辆障碍物检测方法及系统 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN110412986A (zh) |
WO (1) | WO2021031469A1 (zh) |
Families Citing this family (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110412986A (zh) * | 2019-08-19 | 2019-11-05 | 中车株洲电力机车有限公司 | 一种车辆障碍物检测方法及系统 |
JP7276093B2 (ja) * | 2019-11-19 | 2023-05-18 | トヨタ自動車株式会社 | 情報収集システム、中継装置、及びプログラム |
CN111582173A (zh) * | 2020-05-08 | 2020-08-25 | 东软睿驰汽车技术(沈阳)有限公司 | 一种自动驾驶的方法及系统 |
CN112550371A (zh) * | 2020-11-25 | 2021-03-26 | 矿冶科技集团有限公司 | 井下障碍物检测方法、系统和控制装置 |
CN112606804B (zh) * | 2020-12-08 | 2022-03-29 | 东风汽车集团有限公司 | 一种车辆主动制动的控制方法及控制系统 |
CN112650225B (zh) * | 2020-12-10 | 2023-07-18 | 广东嘉腾机器人自动化有限公司 | Agv避障方法 |
CN114915338B (zh) * | 2021-02-07 | 2023-12-08 | 中国联合网络通信集团有限公司 | 一种车辆通信方法及装置 |
CN113050654A (zh) * | 2021-03-29 | 2021-06-29 | 中车青岛四方车辆研究所有限公司 | 障碍物检测方法、巡检机器人车载避障系统及方法 |
CN113325826B (zh) * | 2021-06-08 | 2022-08-30 | 矿冶科技集团有限公司 | 一种井下车辆控制方法、装置、电子设备及存储介质 |
CN113486783A (zh) * | 2021-07-02 | 2021-10-08 | 浙江省交通投资集团有限公司智慧交通研究分公司 | 轨道交通车辆的障碍物检测方法及系统 |
CN115416724B (zh) * | 2022-11-03 | 2023-03-24 | 中车工业研究院(青岛)有限公司 | 一种动车组障碍物检测及控制电路、系统 |
CN117141521B (zh) * | 2023-11-01 | 2024-02-23 | 广汽埃安新能源汽车股份有限公司 | 一种基于数据融合的车辆控制方法及装置 |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109145680A (zh) * | 2017-06-16 | 2019-01-04 | 百度在线网络技术(北京)有限公司 | 一种获取障碍物信息的方法、装置、设备和计算机存储介质 |
CN109765571A (zh) * | 2018-12-27 | 2019-05-17 | 合肥工业大学 | 一种车辆障碍物检测系统及方法 |
CN110045729A (zh) * | 2019-03-12 | 2019-07-23 | 广州小马智行科技有限公司 | 一种车辆自动驾驶方法及装置 |
CN110045736A (zh) * | 2019-04-12 | 2019-07-23 | 淮安信息职业技术学院 | 一种基于无人机的弯道障碍物避让方法及其系统 |
CN110412986A (zh) * | 2019-08-19 | 2019-11-05 | 中车株洲电力机车有限公司 | 一种车辆障碍物检测方法及系统 |
Family Cites Families (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6873269B2 (en) * | 2003-05-27 | 2005-03-29 | Honeywell International Inc. | Embedded free flight obstacle avoidance system |
CN103389103B (zh) * | 2013-07-03 | 2015-11-18 | 北京理工大学 | 一种基于数据挖掘的地理环境特征地图构建与导航方法 |
CN103559791B (zh) * | 2013-10-31 | 2015-11-18 | 北京联合大学 | 一种融合雷达和ccd摄像机信号的车辆检测方法 |
CN105892489B (zh) * | 2016-05-24 | 2019-09-10 | 国网山东省电力公司电力科学研究院 | 一种基于多传感器融合的自主避障无人机系统及控制方法 |
CN106646474A (zh) * | 2016-12-22 | 2017-05-10 | 中国兵器装备集团自动化研究所 | 一种非结构化道路凹凸障碍物检测装置 |
CN107650908B (zh) * | 2017-10-18 | 2023-07-14 | 长沙冰眼电子科技有限公司 | 无人车环境感知系统 |
CN108010360A (zh) * | 2017-12-27 | 2018-05-08 | 中电海康集团有限公司 | 一种基于车路协同的自动驾驶环境感知系统 |
CN108229366B (zh) * | 2017-12-28 | 2021-12-14 | 北京航空航天大学 | 基于雷达和图像数据融合的深度学习车载障碍物检测方法 |
CN108416257A (zh) * | 2018-01-19 | 2018-08-17 | 北京交通大学 | 融合视觉与激光雷达数据特征的地铁轨道障碍物检测方法 |
CN108663677A (zh) * | 2018-03-29 | 2018-10-16 | 上海智瞳通科技有限公司 | 一种多传感器深度融合提高目标检测能力的方法 |
CN108983219B (zh) * | 2018-08-17 | 2020-04-07 | 北京航空航天大学 | 一种交通场景的图像信息和雷达信息的融合方法及系统 |
CN109270524B (zh) * | 2018-10-19 | 2020-04-07 | 禾多科技(北京)有限公司 | 基于无人驾驶的多数据融合障碍物检测装置及其检测方法 |
CN109359409A (zh) * | 2018-10-31 | 2019-02-19 | 张维玲 | 一种基于视觉与激光雷达传感器的车辆可通过性检测系统 |
CN109298415B (zh) * | 2018-11-20 | 2020-09-22 | 中车株洲电力机车有限公司 | 一种轨道和道路障碍物检测方法 |
-
2019
- 2019-08-19 CN CN201910763396.8A patent/CN110412986A/zh active Pending
- 2019-12-12 WO PCT/CN2019/124810 patent/WO2021031469A1/zh active Application Filing
Also Published As
Publication number | Publication date |
---|---|
CN110412986A (zh) | 2019-11-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2021031469A1 (zh) | 一种车辆障碍物检测方法及系统 | |
US11854212B2 (en) | Traffic light detection system for vehicle | |
CN107346612B (zh) | 一种基于车联网的车辆防碰撞方法和系统 | |
US10239539B2 (en) | Vehicle travel control method and vehicle travel control device | |
US20180259640A1 (en) | Method and System for Environment Detection | |
CN112009524B (zh) | 一种用于有轨电车障碍物检测的系统及方法 | |
CN110816540B (zh) | 交通拥堵的确定方法、装置、系统及车辆 | |
CN109765571B (zh) | 一种车辆障碍物检测系统及方法 | |
US10369995B2 (en) | Information processing device, information processing method, control device for vehicle, and control method for vehicle | |
CN104290753A (zh) | 一种前方车辆运动状态追踪预测装置及其预测方法 | |
CN111391856A (zh) | 汽车自适应巡航的前方弯道检测系统及方法 | |
CN110412980B (zh) | 汽车自动驾驶并线控制方法 | |
CN113799852B (zh) | 一种支持动态模式切换的智能主动障碍物识别防护方法 | |
CN110007669A (zh) | 一种用于汽车的智能驾驶避障方法 | |
CN108639108B (zh) | 一种机车作业安全防护系统 | |
CN115257784A (zh) | 基于4d毫米波雷达的车路协同系统 | |
KR20150096924A (ko) | 전방 충돌 차량 선정 방법 및 시스템 | |
CN114613129A (zh) | 用于判断交通信号灯状态的方法、程序产品和系统 | |
CN111123902A (zh) | 一种车辆进站方法及车站 | |
DE102012211495B4 (de) | Fahrzeugumgebungs-Überwachungsvorrichtung | |
US20220348199A1 (en) | Apparatus and method for assisting driving of vehicle | |
CN114822083B (zh) | 智慧车辆编队辅助控制系统 | |
CN113386791B (zh) | 一种基于无人运输车列在大雾天气下的避险系统 | |
US11541887B2 (en) | Enabling reverse motion of a preceding vehicle at bunched traffic sites | |
CN114299715A (zh) | 一种基于视频、激光雷达与dsrc的高速公路信息检测系统 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 19942023 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 19942023 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205 DATED 28.09.2022) |
|