WO2021159397A1 - Detection method and detection device for a region drivable by a vehicle - Google Patents
Detection method and detection device for a region drivable by a vehicle
- Publication number
- WO2021159397A1 (PCT/CN2020/075104)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- boundary
- image
- drivable area
- vehicle
- matching
Classifications
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W30/00—Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
- B60W30/08—Active safety systems predicting or avoiding probable or impending collision or attempting to minimise its consequences
- B60W30/095—Predicting travel path or likelihood of collision
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W40/00—Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
- B60W40/02—Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to ambient conditions
- B60W40/06—Road conditions
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/60—Analysis of geometric attributes
Definitions
- This application relates to the automotive field, and more specifically, to a detection method and a detection device for a vehicle's drivable area.
- Artificial intelligence is a theory, method, technology, and application system that uses digital computers, or machines controlled by digital computers, to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use knowledge to obtain the best results.
- In other words, artificial intelligence is a branch of computer science that attempts to understand the essence of intelligence and to produce a new kind of intelligent machine that can react in a way similar to human intelligence.
- Artificial intelligence studies the design principles and implementation methods of various intelligent machines, so that machines have the functions of perception, reasoning, and decision-making.
- Research in the field of artificial intelligence includes robotics, natural language processing, computer vision, decision-making and reasoning, human-computer interaction, recommendation and search, and basic AI theories.
- Autonomous driving is a mainstream application in the field of artificial intelligence.
- Autonomous driving technology relies on the collaboration of computer vision, radar, monitoring devices, and global positioning systems to enable motor vehicles to achieve autonomous driving without the need for human active operations.
- Self-driving vehicles use various computing systems to help transport passengers from one location to another; some self-driving vehicles may require some initial or continuous input from an operator (such as a pilot, driver, or passenger), and an autonomous vehicle allows the operator to switch from a manual mode to an automatic driving mode or to a mode in between. Because automatic driving technology does not require a human to drive the motor vehicle, it can theoretically avoid human driving errors, reduce the occurrence of traffic accidents, and improve the efficiency of highway transportation; therefore, autonomous driving technology has received more and more attention.
- The binocular vision method refers to extracting and locating the drivable area through a global disparity map computed from the images output by a binocular camera.
- However, the amount of calculation required for the binocular camera to obtain the global disparity map is large, which prevents the autonomous vehicle from processing it in real time and creates safety risks while the autonomous vehicle is driving. Therefore, how to improve the detection efficiency of the vehicle drivable area detection method while ensuring detection accuracy has become an urgent problem to be solved.
- The present application provides a detection method and a detection device for a vehicle drivable area, which can improve the real-time performance of the detection system of an autonomous vehicle at a given detection accuracy and improve the detection efficiency of the vehicle drivable area detection method.
- In a first aspect, a method for detecting a vehicle's drivable area is provided, including: acquiring a binocular image of the vehicle's traveling direction, where the binocular image includes a left-eye image and a right-eye image; obtaining disparity information of the drivable area boundary in the binocular image according to the drivable area boundary in the left-eye image and the drivable area boundary in the right-eye image; and obtaining the vehicle drivable area in the binocular image based on the disparity information.
- the above-mentioned binocular image may include a left-eye image and a right-eye image; for example, it may refer to the left and right two-dimensional images respectively collected by two parallel and equal-height cameras in an autonomous vehicle.
- The binocular image may be an image of the road surface or the surrounding environment obtained by the autonomous vehicle in its driving direction; for example, it includes images of the road surface and images of obstacles and pedestrians near the vehicle.
- Parallax may refer to the difference in direction that results from observing the same target from two points separated by a certain distance.
- For example, the difference in the horizontal direction between the positions of the drivable area of the same road as captured by the left-eye camera and the right-eye camera of an autonomous vehicle may be the parallax information.
- For example, the obtained left-eye image and right-eye image can be separately input to a deep learning network pre-trained to identify the drivable area in an image; the drivable areas in the left-eye image and the right-eye image can then be identified through the pre-trained deep learning network.
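- As an illustration of this step, the following is a minimal sketch in Python of running a pre-trained segmentation network separately on both images and reading off the drivable-area boundary per image column; `seg_model` and the mask format are assumptions made for the sketch, not details given in this application.

```python
import numpy as np

def extract_boundary(mask: np.ndarray) -> np.ndarray:
    """For each image column, return the row index of the far edge of the
    drivable-area mask (the drivable-area boundary in pixel coordinates).
    `mask` is an H x W boolean array, True where the network predicts
    "drivable"; columns with no drivable pixels get -1."""
    h, w = mask.shape
    boundary = np.full(w, -1, dtype=np.int32)
    for u in range(w):
        rows = np.flatnonzero(mask[:, u])
        if rows.size:
            boundary[u] = rows.min()   # topmost drivable pixel = far boundary
    return boundary

def detect_boundaries(seg_model, left_img, right_img):
    """Run a pre-trained segmentation network (hypothetical `seg_model`)
    separately on the left-eye and right-eye images and return the
    drivable-area boundary of each image."""
    left_mask = seg_model(left_img)    # H x W boolean drivable-area mask
    right_mask = seg_model(right_img)
    return extract_boundary(left_mask), extract_boundary(right_mask)
```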
- In the embodiments of the present application, a binocular image of the driving direction of the vehicle can be obtained, and based on the binocular image, the boundary of the vehicle's drivable area in the left-eye image and in the right-eye image can be acquired. According to the drivable area boundary in the left-eye image and the drivable area boundary in the right-eye image, the disparity information of the drivable area boundary in the binocular image can be obtained, and based on the disparity information, the position of the vehicle drivable area in the binocular image can be obtained.
- The detection method for the vehicle drivable area in the embodiments of the present application can avoid performing pixel-by-pixel disparity calculation on the binocular image to obtain a global disparity image: only the disparity information of the drivable area boundary in the binocular image needs to be calculated to locate the boundary points of the drivable area in the coordinate system of the autonomous vehicle. This greatly reduces the amount of calculation while ensuring detection accuracy, and improves the efficiency with which the autonomous vehicle detects road conditions.
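- To make the boundary-only positioning concrete, the following sketch converts boundary-point disparities into 3-D coordinates with the standard stereo relation Z = f·B/d and then into a vehicle frame; the intrinsics, baseline, and extrinsics are placeholder calibration values, not values from this application.

```python
import numpy as np

def boundary_points_to_vehicle(us, vs, disparities, fx, fy, cx, cy, baseline,
                               R_cam_to_veh=np.eye(3), t_cam_to_veh=np.zeros(3)):
    """Locate drivable-area boundary points in 3-D from their disparities only
    (no global disparity map). Intrinsics/extrinsics here are placeholders;
    real values come from calibration of the binocular camera."""
    d = np.asarray(disparities, dtype=float)
    valid = d > 0
    Z = np.where(valid, fx * baseline / np.maximum(d, 1e-6), np.nan)  # depth
    X = (np.asarray(us) - cx) * Z / fx      # lateral offset in the camera frame
    Y = (np.asarray(vs) - cy) * Z / fy      # vertical offset in the camera frame
    pts_cam = np.stack([X, Y, Z], axis=1)
    return pts_cam @ R_cam_to_veh.T + t_cam_to_veh   # vehicle coordinate system
```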
- In a possible implementation, the method further includes: performing segmentation processing on the drivable area boundary in the first image; and the obtaining of the disparity information of the drivable area boundary in the binocular image includes: performing drivable-area boundary matching on the second image based on the N boundary point line segments obtained by the segmentation processing to obtain the disparity information, where the first image is either one of the binocular images, the second image is the other image of the binocular images that differs from the first image, and N is an integer greater than or equal to 2.
- For example, the first image may refer to the left-eye image in the binocular image and the second image may refer to the right-eye image in the binocular image; alternatively, the first image may refer to the right-eye image in the binocular image and the second image may refer to the left-eye image in the binocular image.
- For example, the boundary of the drivable area in the left-eye image can be segmented based on the inflection points of the drivable area boundary in the left-eye image of the binocular image to obtain N segments of the drivable area boundary in the left-eye image; the boundary of the drivable area in the right-eye image is then matched using the N segments of the drivable area boundary in the left-eye image, so as to obtain the parallax information.
- Similarly, the boundary of the drivable area in the right-eye image can be segmented based on the inflection points of the drivable area boundary in the right-eye image of the binocular image to obtain N segments of the drivable area boundary in the right-eye image; the boundary of the drivable area in the left-eye image is then matched using the N segments of the drivable area boundary in the right-eye image to obtain the disparity information.
- In the embodiments of the present application, when the boundary of the drivable area in the left-eye image is matched with the boundary of the drivable area in the right-eye image, that is, when the disparity information of the drivable area boundary in the binocular image is calculated, the drivable area boundary in either one of the images can first be segmented and the segmented boundary can then be matched segment by segment. This improves the accuracy of the drivable area boundary matching, which is conducive to acquiring more accurate information about the drivable area of the vehicle on the road.
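- The following is a minimal sketch of one way to split an ordered boundary into N segments at inflection points, using an abrupt change of local slope as the cut criterion; the threshold value is illustrative and not specified in this application.

```python
import numpy as np

def segment_boundary(points, slope_jump=0.5):
    """Split an ordered list of boundary points (u, v) into segments at
    inflection points, i.e. where the local slope changes abruptly.
    `slope_jump` is an illustrative threshold, not a value from the patent."""
    pts = np.asarray(points, dtype=float)
    du = np.diff(pts[:, 0])
    dv = np.diff(pts[:, 1])
    slopes = dv / np.where(np.abs(du) < 1e-6, 1e-6, du)     # slope of each edge
    cuts = np.flatnonzero(np.abs(np.diff(slopes)) > slope_jump) + 1
    segments = np.split(pts, cuts)                           # the N boundary segments
    return segments, slopes
```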
- In a possible implementation, the performing of drivable-area boundary matching on the second image based on the N boundary point line segments obtained by the segmentation processing to obtain the disparity information includes: performing drivable-area boundary matching on the second image according to the N boundary point line segments and a matching strategy to obtain the disparity information, where the matching strategy is determined according to the slopes of the N boundary point line segments.
- In the embodiments of the present application, a matching strategy can also be used when the drivable areas in the left-eye image and the right-eye image are matched segment by segment; that is, boundary point line segments with different slopes can be matched based on different matching strategies, thereby improving the matching accuracy of the boundary points of the drivable area.
- In a possible implementation, the performing of drivable-area boundary matching on the second image according to the N boundary point line segments and the matching strategy to obtain the disparity information includes: for a first-type boundary point line segment among the N boundary point line segments, performing drivable-area boundary matching on the second image by using the first matching strategy in the matching strategy, where the first-type boundary point line segment includes the drivable-area boundary at the road edge and the drivable-area boundary at the side of another vehicle; and for a second-type boundary point line segment among the N boundary point line segments, performing drivable-area boundary matching on the second image by using the second matching strategy in the matching strategy, where the boundary points in the second-type boundary point line segment are at the same distance from the vehicle.
- The first type of boundary point line segment may refer to a boundary point line segment with a relatively large slope, for example, at a road edge area or at the side area of another vehicle; the second type of boundary point line segment may refer to a boundary point line segment with a relatively small slope, for example, at the rear area of another vehicle.
- a segmented matching strategy based on the magnitude of the slope of the drivable area boundary is proposed, and the detected drivable area is segmented according to the slope distribution.
- Different matching strategies are used to match the boundaries of the drivable area, which can improve the matching accuracy of the boundary points of the drivable area.
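- A minimal sketch of this slope-based dispatch is shown below; the slope threshold and the strategy names are illustrative assumptions rather than values defined in this application.

```python
def choose_strategy(segment_slope: float, slope_threshold: float = 1.0) -> str:
    """Pick a matching strategy per boundary segment from its slope magnitude.
    Steep segments (road edges, sides of other vehicles) use the search-area
    strategy; flat segments (rear of other vehicles, roughly constant distance
    from the ego vehicle) use the fixed-search-step strategy. The threshold is
    illustrative only."""
    if abs(segment_slope) >= slope_threshold:
        return "search_area"      # first matching strategy
    return "search_step"          # second matching strategy
```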
- In a possible implementation, the matching strategy includes a first matching strategy and a second matching strategy, where the first matching strategy refers to matching through a search area, the search area refers to a region generated with any point of the first boundary point line segment as its center, and the first boundary point line segment is any one of the N boundary point line segments; the second matching strategy refers to matching through a preset search step, which is determined based on the boundary point disparity of one of the N boundary point line segments.
- For example, a boundary point of a drivable-area boundary point line segment of the first image (for example, the left-eye image) in the binocular image can be used as a template point, and a search area is generated around the corresponding boundary point of the second image (for example, the right-eye image) to match the template point of the first image.
- For example, for the second matching strategy, the boundary point extracted from the first image (for example, the left-eye image) corresponds to the boundary point extracted from the second image (for example, the right-eye image) at approximately the same disparity, so matching can be performed with a preset search step determined from the boundary point disparity of one of the line segments.
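- For illustration, here is a minimal sketch of the search-area idea of the first matching strategy, implemented as sum-of-absolute-differences template matching in a small window of the second image; the window sizes, the SAD cost, and the rectified-stereo assumption are choices made for the sketch, not details stated in this application.

```python
import numpy as np

def match_in_search_area(left_img, right_img, u, v,
                         patch=5, search_w=64, search_h=3):
    """First matching strategy (sketch): take the boundary point (u, v) of the
    first (left) image as a template point and search a small region of the
    second (right) image, centred on the same coordinates, for the best patch
    using the sum of absolute differences (SAD). Returns a disparity estimate."""
    r = patch // 2
    tpl = left_img[v - r:v - r + patch, u - r:u - r + patch].astype(float)
    best, best_du = np.inf, 0
    for dv in range(-search_h, search_h + 1):
        for du in range(-search_w, 1):          # right-image match lies to the left
            top, left = v + dv - r, u + du - r
            if top < 0 or left < 0:
                continue
            cand = right_img[top:top + patch, left:left + patch].astype(float)
            if cand.shape != tpl.shape:         # skip out-of-bounds candidates
                continue
            sad = np.abs(tpl - cand).sum()
            if sad < best:
                best, best_du = sad, du
    return -best_du   # disparity estimate for this boundary point (>= 0)
```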
- the N boundary point line segments are determined according to inflection points of the travelable area boundary in the first image.
- In a second aspect, a device for detecting a vehicle's drivable area is provided, including: an acquisition unit configured to acquire a binocular image of the vehicle's traveling direction, where the binocular image includes a left-eye image and a right-eye image; and a processing unit configured to obtain disparity information of the drivable area boundary in the binocular image according to the drivable area boundary in the left-eye image and the drivable area boundary in the right-eye image, and to obtain the vehicle drivable area in the binocular image based on the disparity information.
- the above-mentioned binocular image may include a left-eye image and a right-eye image; for example, it may refer to the left and right two-dimensional images respectively collected by two parallel and equal-height cameras in an autonomous vehicle.
- The binocular image may be an image of the road surface or the surrounding environment obtained by the autonomous vehicle in its driving direction; for example, it includes images of the road surface and images of obstacles and pedestrians near the vehicle.
- Parallax may refer to the difference in direction that results from observing the same target from two points separated by a certain distance.
- For example, the difference in the horizontal direction between the positions of the drivable area of the same road as captured by the left-eye camera and the right-eye camera of an autonomous vehicle may be the parallax information.
- For example, the obtained left-eye image and right-eye image can be separately input to a deep learning network pre-trained to identify the drivable area in an image; the drivable areas in the left-eye image and the right-eye image can then be identified through the pre-trained deep learning network.
- In the embodiments of the present application, a binocular image of the driving direction of the vehicle can be obtained, and based on the binocular image, the boundary of the vehicle's drivable area in the left-eye image and in the right-eye image can be acquired. According to the drivable area boundary in the left-eye image and the drivable area boundary in the right-eye image, the disparity information of the drivable area boundary in the binocular image can be obtained, and based on the disparity information, the position of the vehicle drivable area in the binocular image can be obtained.
- The detection of the vehicle drivable area in the embodiments of the present application can avoid performing pixel-by-pixel disparity calculation on the binocular image to obtain a global disparity image: only the disparity information of the drivable area boundary in the binocular image needs to be calculated to locate the boundary points of the drivable area in the coordinate system of the autonomous vehicle. This greatly reduces the amount of calculation while ensuring detection accuracy, and improves the efficiency with which the autonomous vehicle detects road conditions.
- In a possible implementation, the processing unit is further configured to: perform segmentation processing on the drivable area boundary in the first image; and perform drivable-area boundary matching on the second image based on the N boundary point line segments obtained by the segmentation processing to obtain the disparity information, where N is an integer greater than or equal to 2.
- For example, the first image may refer to the left-eye image in the binocular image and the second image may refer to the right-eye image in the binocular image; alternatively, the first image may refer to the right-eye image in the binocular image and the second image may refer to the left-eye image in the binocular image.
- For example, the boundary of the drivable area in the left-eye image can be segmented based on the inflection points of the drivable area boundary in the left-eye image of the binocular image to obtain N segments of the drivable area boundary in the left-eye image; the boundary of the drivable area in the right-eye image is then matched using the N segments of the drivable area boundary in the left-eye image, so as to obtain the parallax information.
- Similarly, the boundary of the drivable area in the right-eye image can be segmented based on the inflection points of the drivable area boundary in the right-eye image of the binocular image to obtain N segments of the drivable area boundary in the right-eye image; the boundary of the drivable area in the left-eye image is then matched using the N segments of the drivable area boundary in the right-eye image to obtain the disparity information.
- In the embodiments of the present application, when the boundary of the drivable area in the left-eye image is matched with the boundary of the drivable area in the right-eye image, that is, when the disparity information of the drivable area boundary in the binocular image is calculated, the drivable area boundary in either one of the images can first be segmented and the segmented boundary can then be matched segment by segment. This improves the accuracy of the drivable area boundary matching, which is conducive to acquiring more accurate information about the drivable area of the vehicle on the road.
- In a possible implementation, the processing unit is specifically configured to: perform drivable-area boundary matching on the second image according to the N boundary point line segments and a matching strategy to obtain the disparity information, where the matching strategy is determined according to the slopes of the N boundary point line segments.
- In the embodiments of the present application, a matching strategy can also be used when the drivable areas in the left-eye image and the right-eye image are matched segment by segment; that is, boundary point line segments with different slopes can be matched based on different matching strategies, thereby improving the matching accuracy of the boundary points of the drivable area.
- In a possible implementation, the processing unit is specifically configured to: for a first-type boundary point line segment among the N boundary point line segments, perform drivable-area boundary matching on the second image by using the first matching strategy in the matching strategy, where the first-type boundary point line segment includes the drivable-area boundary at the road edge and the drivable-area boundary at the side of another vehicle; and for a second-type boundary point line segment among the N boundary point line segments, perform drivable-area boundary matching on the second image by using the second matching strategy in the matching strategy, where the boundary points in the second-type boundary point line segment are at the same distance from the vehicle.
- The first type of boundary point line segment may refer to a boundary point line segment with a relatively large slope, for example, at a road edge area or at the side area of another vehicle; the second type of boundary point line segment may refer to a boundary point line segment with a relatively small slope, for example, at the rear area of another vehicle.
- a segmented matching strategy based on the magnitude of the slope of the drivable area boundary is proposed, and the detected drivable area is segmented according to the slope distribution.
- Different matching strategies are used to match the boundaries of the drivable area, which can improve the matching accuracy of the boundary points of the drivable area.
- In a possible implementation, the matching strategy includes a first matching strategy and a second matching strategy, where the first matching strategy refers to matching through a search area, the search area refers to a region generated with any point of the first boundary point line segment as its center, and the first boundary point line segment is any one of the N boundary point line segments; the second matching strategy refers to matching through a preset search step, which is determined based on the boundary point disparity of one of the N boundary point line segments.
- For example, a boundary point of a drivable-area boundary point line segment of the first image (for example, the left-eye image) in the binocular image can be used as a template point, and a search area is generated around the corresponding boundary point of the second image (for example, the right-eye image) to match the template point of the first image.
- For example, for the second matching strategy, the boundary point extracted from the first image (for example, the left-eye image) corresponds to the boundary point extracted from the second image (for example, the right-eye image) at approximately the same disparity, so matching can be performed with a preset search step determined from the boundary point disparity of one of the line segments.
- the N boundary point line segments are determined according to inflection points of the travelable area boundary in the first image.
- In a third aspect, a device for detecting a vehicle's drivable area is provided, including: a memory for storing a program; and a processor for executing the program stored in the memory. When the program stored in the memory is executed, the processor is configured to perform the following process: acquiring a binocular image of the driving direction of the vehicle, where the binocular image includes a left-eye image and a right-eye image; obtaining disparity information of the drivable area boundary in the binocular image according to the drivable area boundary in the left-eye image and the drivable area boundary in the right-eye image; and obtaining the vehicle drivable area in the binocular image based on the disparity information.
- In a possible implementation, the processor included in the foregoing detection device is further configured to execute the method for detecting the vehicle drivable area in any one of the implementations of the first aspect.
- In a fourth aspect, a computer-readable storage medium is provided, where the computer-readable storage medium is used to store program code; when the program code is executed by a computer, the computer is configured to execute the detection method in any one of the implementations of the first aspect.
- In a fifth aspect, a chip is provided, where the chip includes a processor, and the processor is configured to execute the detection method in any one of the implementations of the first aspect.
- the chip of the fifth aspect described above may be located in an in-vehicle terminal of an autonomous vehicle.
- In a sixth aspect, a computer program product is provided, including computer program code; when the computer program code runs on a computer, the computer is caused to execute the detection method in any one of the implementations of the first aspect.
- The above-mentioned computer program code may be stored in whole or in part on a first storage medium, where the first storage medium may be packaged together with the processor or may be packaged separately from the processor; this is not specifically limited in the embodiments of the present application.
- Fig. 1 is a schematic structural diagram of a vehicle provided by an embodiment of the present application.
- Figure 2 is a schematic structural diagram of a computer system provided by an embodiment of the present application.
- FIG. 3 is a schematic diagram of the application of a cloud-side command automatic driving vehicle provided by an embodiment of the present application
- Fig. 4 is a schematic diagram of a detection system for an autonomous vehicle provided by an embodiment of the present application.
- FIG. 5 is a schematic diagram of coordinate conversion provided by an embodiment of the present application.
- FIG. 6 is a schematic flowchart of a method for detecting a vehicle drivable area provided by an embodiment of the present application
- FIG. 7 is a schematic flowchart of a method for detecting a vehicle drivable area provided by an embodiment of the present application.
- Fig. 8 is a schematic diagram of a boundary segment of a drivable area provided by an embodiment of the present application.
- FIG. 9 is a schematic diagram of acquiring a road surface image provided by an embodiment of the present application.
- FIG. 10 is a schematic diagram of matching a boundary point line segment with a larger slope provided by an embodiment of the present application.
- FIG. 11 is a schematic diagram of a boundary point line segment with a small slope provided by an embodiment of the present application.
- FIG. 12 is a schematic diagram of sub-pixelation processing provided by an embodiment of the present application.
- FIG. 13 is a schematic diagram of obtaining positioning of boundary points of a drivable area provided by an embodiment of the present application.
- FIG. 14 is a schematic diagram of obtaining positioning of boundary points of a drivable area provided by an embodiment of the present application.
- FIG. 15 is a schematic diagram of the method for detecting a vehicle travelable area according to an embodiment of the present application applied to a specific product form;
- FIG. 16 is a schematic structural diagram of a detection device for a vehicle travelable area provided by an embodiment of the present application.
- FIG. 17 is a schematic structural diagram of a detection device for a vehicle travelable area provided by another embodiment of the present application.
- Fig. 1 is a functional block diagram of a vehicle 100 provided by an embodiment of the present application.
- the vehicle 100 may be a manually driven vehicle, or the vehicle 100 may be configured in a fully or partially automatic driving mode.
- The vehicle 100 in the automatic driving mode can control itself, and can determine the current state of the vehicle and its surrounding environment through human operation, determine the possible behavior of at least one other vehicle in the surrounding environment, determine a confidence level corresponding to the possibility of the other vehicle performing the possible behavior, and control the vehicle 100 based on the determined information.
- The vehicle 100 can also be set to operate without human interaction.
- the vehicle 100 may include various subsystems, such as a traveling system 110, a sensing system 120, a control system 130, one or more peripheral devices 140 and a power supply 160, a computer system 150, and a user interface 170.
- the vehicle 100 may include more or fewer subsystems, and each subsystem may include multiple elements.
- each of the subsystems and elements of the vehicle 100 may be wired or wirelessly interconnected.
- the travel system 110 may include components for providing power movement to the vehicle 100.
- the travel system 110 may include an engine 111, a transmission 112, an energy source 113, and wheels 114/tires.
- the engine 111 may be an internal combustion engine, an electric motor, an air compression engine, or other types of engine combinations; for example, a hybrid engine composed of a gasoline engine and an electric motor, or a hybrid engine composed of an internal combustion engine and an air compression engine.
- the engine 111 can convert the energy source 113 into mechanical energy.
- the energy source 113 may include gasoline, diesel, other petroleum-based fuels, propane, other compressed gas-based fuels, ethanol, solar panels, batteries, and other power sources.
- the energy source 113 may also provide energy for other systems of the vehicle 100.
- the transmission device 112 may include a gearbox, a differential, and a drive shaft; wherein, the transmission device 112 may transmit mechanical power from the engine 111 to the wheels 114.
- the transmission device 112 may also include other devices, such as a clutch.
- the drive shaft may include one or more shafts that can be coupled to one or more wheels 114.
- the sensing system 120 may include several sensors that sense information about the environment around the vehicle 100.
- the sensing system 120 may include a positioning system 121 (for example, a GPS system, a Beidou system or other positioning systems), an inertial measurement unit 122 (IMU), a radar 123, a laser rangefinder 124, and a camera 125.
- the sensing system 120 may also include sensors of the internal system of the monitored vehicle 100 (for example, an in-vehicle air quality monitor, a fuel gauge, an oil temperature gauge, etc.). Sensor data from one or more of these sensors can be used to detect objects and their corresponding characteristics (position, shape, direction, speed, etc.). Such detection and identification are key functions for the safe operation of the autonomous vehicle 100.
- the positioning system 121 can be used to estimate the geographic location of the vehicle 100.
- the IMU 122 may be used to sense changes in the position and orientation of the vehicle 100 based on inertial acceleration.
- the IMU 122 may be a combination of an accelerometer and a gyroscope.
- the radar 123 may use radio signals to sense objects in the surrounding environment of the vehicle 100. In some embodiments, in addition to sensing the object, the radar 123 may also be used to sense the speed and/or direction of the object.
- the laser rangefinder 124 may use laser light to sense objects in the environment where the vehicle 100 is located.
- the laser rangefinder 124 may include one or more laser sources, laser scanners, and one or more detectors, as well as other system components.
- the camera 125 may be used to capture multiple images of the surrounding environment of the vehicle 100.
- the camera 125 may be a still camera or a video camera.
- control system 130 controls the operation of the vehicle 100 and its components.
- the control system 130 may include various elements, such as a steering system 131, a throttle 132, a braking unit 133, a computer vision system 134, a route control system 135, and an obstacle avoidance system 136.
- the steering system 131 may be operated to adjust the forward direction of the vehicle 100.
- it may be a steering wheel system in one embodiment.
- the throttle 132 may be used to control the operating speed of the engine 111 and thereby control the speed of the vehicle 100.
- the braking unit 133 may be used to control the deceleration of the vehicle 100; the braking unit 133 may use friction to slow down the wheels 114. In other embodiments, the braking unit 133 may convert the kinetic energy of the wheels 114 into electric current. The braking unit 133 may also take other forms to slow down the rotation speed of the wheels 114 to control the speed of the vehicle 100.
- the computer vision system 134 may be operable to process and analyze the images captured by the camera 125 in order to identify objects and/or features in the surrounding environment of the vehicle 100.
- the aforementioned objects and/or features may include traffic signals, road boundaries and obstacles.
- the computer vision system 134 may use object recognition algorithms, Structure from Motion (SFM) algorithms, video tracking, and other computer vision technologies.
- the computer vision system 134 may be used to map the environment, track objects, estimate the speed of objects, and so on.
- the route control system 135 may be used to determine the travel route of the vehicle 100.
- the route control system 135 may combine data from sensors, GPS, and one or more predetermined maps to determine a travel route for the vehicle 100.
- the obstacle avoidance system 136 may be used to identify, evaluate, and avoid or otherwise surpass potential obstacles in the environment of the vehicle 100.
- Optionally, the control system 130 may additionally or alternatively include components other than those shown and described, or some of the components shown above may be omitted.
- the vehicle 100 can interact with external sensors, other vehicles, other computer systems, or users through a peripheral device 140; wherein, the peripheral device 140 can include a wireless communication system 141, an onboard computer 142, a microphone 143 and/ Or speaker 144.
- the peripheral device 140 may provide a means for the vehicle 100 to interact with the user interface 170.
- the onboard computer 142 may provide information to the user of the vehicle 100.
- The user interface 170 can also operate the onboard computer 142 to receive user input; the onboard computer 142 can be operated through a touch screen.
- the peripheral device 140 may provide a means for the vehicle 100 to communicate with other devices located in the vehicle.
- the microphone 143 may receive audio (eg, voice commands or other audio input) from the user of the vehicle 100.
- the speaker 144 may output audio to the user of the vehicle 100.
- the wireless communication system 141 may wirelessly communicate with one or more devices directly or via a communication network.
- The wireless communication system 141 can use 3G cellular communication, such as code division multiple access (CDMA), EVDO, or global system for mobile communications (GSM)/general packet radio service (GPRS); or 4G cellular communication, such as long term evolution (LTE); or 5G cellular communication.
- the wireless communication system 141 can communicate with a wireless local area network (WLAN) using wireless Internet access (WiFi).
- In some embodiments, the wireless communication system 141 may communicate directly with a device using an infrared link, Bluetooth, or ZigBee, or use other wireless protocols, such as various vehicle communication systems; for example, the wireless communication system 141 may include one or more dedicated short range communications (DSRC) devices, which may include public and/or private data communication between vehicles and/or roadside stations.
- the power source 160 may provide power to various components of the vehicle 100.
- the power source 160 may be a rechargeable lithium ion or lead-acid battery.
- One or more battery packs of such batteries may be configured as a power source to provide power to various components of the vehicle 100.
- the power source 160 and the energy source 113 may be implemented together, such as in some all-electric vehicles.
- Optionally, part or all of the functions of the vehicle 100 may be controlled by the computer system 150, where the computer system 150 may include at least one processor 151, and the processor 151 executes instructions stored in a non-transitory computer-readable medium such as the memory 152.
- the computer system 150 may also be multiple computing devices that control individual components or subsystems of the vehicle 100 in a distributed manner.
- the processor 151 may be any conventional processor, such as a commercially available CPU.
- the processor may be a dedicated device such as an ASIC or other hardware-based processor.
- Although FIG. 1 functionally illustrates the processor, the memory, and other elements of the computer in the same block, those of ordinary skill in the art should understand that the processor, the computer, or the memory may actually include multiple processors, computers, or memories that may or may not be stored within the same physical housing.
- The memory may be a hard disk drive or another storage medium located in a housing different from that of the computer; therefore, a reference to a processor or a computer is understood to include a reference to a collection of processors, computers, or memories that may or may not operate in parallel. Rather than using a single processor to perform the steps described here, some components, such as the steering component and the deceleration component, may each have their own processor that performs only calculations related to the component-specific function.
- the processor may be located away from the vehicle and wirelessly communicate with the vehicle.
- some of the processes described herein are executed on a processor disposed in the vehicle and others are executed by a remote processor, including taking the necessary steps to perform a single manipulation.
- the memory 152 may contain instructions 153 (eg, program logic), which may be executed by the processor 151 to perform various functions of the vehicle 100, including those functions described above.
- The memory 152 may also contain additional instructions, for example, instructions to send data to, receive data from, interact with, and/or control one or more of the traveling system 110, the sensing system 120, the control system 130, and the peripheral device 140.
- the memory 152 may also store data, such as road maps, route information, the position, direction, and speed of the vehicle, and other such vehicle data, as well as other information. Such information may be used by the vehicle 100 and the computer system 150 during the operation of the vehicle 100 in autonomous, semi-autonomous, and/or manual modes.
- the user interface 170 may be used to provide information to or receive information from a user of the vehicle 100.
- Optionally, the user interface 170 may include one or more input/output devices in the set of peripheral devices 140, for example, the wireless communication system 141, the onboard computer 142, the microphone 143, and the speaker 144.
- the computer system 150 may control the functions of the vehicle 100 based on inputs received from various subsystems (for example, the traveling system 110, the sensing system 120, and the control system 130) and from the user interface 170.
- the computer system 150 may use input from the control system 130 in order to control the braking unit 133 to avoid obstacles detected by the sensing system 120 and the obstacle avoidance system 136.
- the computer system 150 is operable to provide control of many aspects of the vehicle 100 and its subsystems.
- one or more of these components described above may be installed or associated with the vehicle 100 separately.
- the memory 152 may be partially or completely separated from the vehicle 100.
- the above-mentioned components may be communicatively coupled together in a wired and/or wireless manner.
- FIG. 1 should not be construed as a limitation to the embodiments of the present application.
- the vehicle 100 may be an autonomous vehicle traveling on a road, and may recognize objects in its surrounding environment to determine the adjustment to the current speed.
- the object may be other vehicles, traffic control equipment, or other types of objects.
- Optionally, each recognized object can be considered independently, and the respective characteristics of each object, such as its current speed, acceleration, and distance from the vehicle, can be used to determine the speed to which the self-driving car should adjust.
- Optionally, the vehicle 100, or a computing device associated with the vehicle 100, may predict the behavior of the identified object based on the characteristics of the identified object and the state of the surrounding environment (for example, traffic, rain, ice on the road, etc.).
- Optionally, the behavior of each recognized object may depend on the behavior of the others; therefore, all the recognized objects can also be considered together to predict the behavior of a single recognized object.
- the vehicle 100 can adjust its speed based on the predicted behavior of the identified object.
- the self-driving car can determine based on the predicted behavior of the object that the vehicle will need to be adjusted (e.g., accelerate, decelerate, or stop) to a stable state.
- other factors may also be considered to determine the speed of the vehicle 100, such as the lateral position of the vehicle 100 on the road on which it is traveling, the curvature of the road, the proximity of static and dynamic objects, and so on.
- In addition to providing instructions to adjust the speed of the self-driving car, the computing device can also provide instructions to modify the steering angle of the vehicle 100, so that the self-driving car follows a given trajectory and/or maintains safe lateral and longitudinal distances from objects near the self-driving car (for example, cars in adjacent lanes on the road).
- The above-mentioned vehicle 100 may be a car, truck, motorcycle, bus, boat, airplane, helicopter, lawn mower, recreational vehicle, playground vehicle, construction equipment, tram, golf cart, train, trolley, or the like; this is not particularly limited in the embodiments of the present application.
- the vehicle 100 shown in FIG. 1 may be an automatic driving vehicle, and the automatic driving system will be described in detail below.
- Fig. 2 is a schematic diagram of an automatic driving system provided by an embodiment of the present application.
- the automatic driving system shown in FIG. 2 includes a computer system 201, where the computer system 201 includes a processor 203, which is coupled to a system bus 205.
- the processor 203 may be one or more processors, where each processor may include one or more processor cores.
- the display adapter 207 (video adapter) can drive the display 209, and the display 209 is coupled to the system bus 205.
- the system bus 205 may be coupled to an input/output (I/O) bus 213 through a bus bridge 211, and an I/O interface 215 is coupled to an I/O bus.
- The I/O interface 215 communicates with a variety of I/O devices, such as an input device 217 (for example, a keyboard, mouse, or touch screen) and a media tray 221 (for example, a CD-ROM or multimedia interface).
- the transceiver 223 can send and/or receive radio communication signals, and the camera 255 can capture landscape and dynamic digital video images.
- the interface connected to the I/O interface 215 may be the USB port 225.
- the processor 203 may be any traditional processor, such as a reduced instruction set computer (RISC) processor, a complex instruction set computer (CISC) processor, or a combination of the foregoing.
- the processor 203 may be a dedicated device such as an application specific integrated circuit (ASIC); the processor 203 may be a neural network processor or a combination of a neural network processor and the foregoing traditional processors.
- the computer system 201 may be located far away from the autonomous driving vehicle, and may wirelessly communicate with the autonomous driving vehicle.
- some of the processes described herein are executed on a processor provided in an autonomous vehicle, and others are executed by a remote processor, including taking actions required to perform a single manipulation.
- the computer system 201 can communicate with the software deployment server 249 through the network interface 229.
- the network interface 229 may be a hardware network interface, such as a network card.
- the network 227 may be an external network, such as the Internet, or an internal network, such as an Ethernet or a virtual private network (VPN).
- the network 227 may also be a wireless network, such as a wifi network, a cellular network, and so on.
- The hard disk drive interface is coupled to the system bus 205; the hard disk drive interface 231 can be connected to the hard disk drive 233.
- the system memory 235 is coupled with the system bus 205.
- the data running in the system memory 235 may include an operating system 237 and application programs 243.
- the operating system 237 may include a parser 239 (shell) and a kernel 241 (kernel).
- the shell 239 is an interface between the user and the kernel of the operating system.
- the shell can be the outermost layer of the operating system; the shell can manage the interaction between the user and the operating system, for example, waiting for the user's input, interpreting the user's input to the operating system, and processing various operating systems The output result.
- The kernel 241 may be composed of those parts of the operating system that manage memory, files, peripherals, and system resources, and it interacts directly with the hardware.
- the operating system kernel usually runs processes and provides inter-process communication, providing CPU time slice management, interrupts, memory management, IO management, and so on.
- The application programs 243 include programs that control the self-driving car, for example, programs that manage the interaction between the autonomous vehicle and obstacles on the road, programs that control the route or speed of the autonomous vehicle, and programs that control the interaction between the autonomous vehicle and other autonomous vehicles on the road.
- the application program 243 also exists on the system of the software deployment server 249. In one embodiment, the computer system 201 may download the application program from the software deployment server 249 when the automatic driving-related program 247 needs to be executed.
- the application program 243 may also be a program for controlling an automatic driving vehicle to perform automatic parking.
- A sensor 253 may be associated with the computer system 201, and the sensor 253 may be used to detect the environment around the computer system 201.
- For example, the sensor 253 can detect animals, cars, obstacles, and crosswalks; further, the sensor can also detect the environment around such animals, cars, obstacles, and crosswalks, for example, other animals around an animal, the weather conditions, and the brightness of the surrounding environment.
- Optionally, the sensor may be a camera, an infrared sensor, a chemical detector, a microphone, etc.
- Optionally, the sensor 253 can be used to detect the size or position of a parking space and of the obstacles around the vehicle, so that the vehicle can perceive the distance between the parking space and the surrounding obstacles, and can perform collision detection when parking to prevent collisions between the vehicle and obstacles.
- the computer system 150 shown in FIG. 1 may also receive information from other computer systems or transfer information to other computer systems.
- the sensor data collected from the sensor system 120 of the vehicle 100 may be transferred to another computer to process the data.
- data from the computer system 312 may be transmitted to the server 320 on the cloud side via the network for further processing.
- the network and intermediate nodes can include various configurations and protocols, including the Internet, World Wide Web, Intranet, virtual private network, wide area network, local area network, private network using one or more company’s proprietary communication protocols, Ethernet, WiFi and HTTP, And various combinations of the foregoing; this communication can be by any device capable of transferring data to and from other computers, such as modems and wireless interfaces.
- the server 320 may include a server with multiple computers, such as a load balancing server group, which exchanges information with different nodes of the network for the purpose of receiving, processing, and transmitting data from the computer system 312.
- the server may be configured similarly to the computer system 312, with a processor 330, a memory 340, instructions 350, and data 360.
- the data 360 of the server 320 may include information related to road conditions around the vehicle.
- the server 320 may receive, detect, store, update, and transmit information related to the road conditions of the vehicle.
- the information related to the road conditions around the vehicle includes information about other vehicles around the vehicle and obstacle information.
- Currently, the drivable area detection of autonomous vehicles can usually adopt a monocular vision method or a binocular vision method. The monocular vision method refers to obtaining images of the road environment through a monocular camera and detecting the drivable area in the image with a pre-trained deep neural network; then, based on the detected drivable area, the plane assumption is applied, for example, it is assumed that the autonomous vehicle is in a flat area with no slope, and the drivable area in the image is converted from the two-dimensional pixel coordinate system to the three-dimensional coordinate system in which the autonomous vehicle is located, so as to complete the spatial positioning of the drivable area.
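- For illustration, a minimal sketch of the plane-assumption back-projection that monocular methods rely on is given below; the camera intrinsics and mounting height are hypothetical calibration values, and the camera is assumed to look forward with a horizontal optical axis.

```python
def ground_point_from_pixel(u, v, fx, fy, cx, cy, cam_height):
    """Monocular plane-assumption positioning (sketch): intersect the viewing
    ray of pixel (u, v) with a perfectly flat ground plane located
    `cam_height` metres below a forward-looking camera (axes: x right, y down,
    z forward). Returns (X, Z) on the ground, or None for pixels at or above
    the horizon. Parameters are illustrative calibration values."""
    if v <= cy:                      # at/above the horizon: ray never hits the ground
        return None
    Z = cam_height * fy / (v - cy)   # distance along the optical axis
    X = (u - cx) * Z / fx            # lateral offset
    return X, Z
```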
- The binocular vision method refers to obtaining a global disparity map of the road environment from the images output separately by the binocular camera, detecting the drivable area according to the disparity map, and then, using the distance-measurement capability of the binocular camera, converting the drivable area from the two-dimensional pixel coordinate system to the three-dimensional coordinate system in which the autonomous vehicle is located, so as to complete the spatial positioning of the drivable area.
- Because the monocular camera cannot perceive distance, in the process of transforming the drivable area from the pixel coordinate system to the coordinate system of the autonomous vehicle, the plane assumption must be followed, that is, the road surface on which the vehicle is located is assumed to be completely flat with no ramps, which results in low positioning accuracy of the drivable area.
- For the binocular vision method, the positioning of the drivable area depends on the disparity map, and the amount of calculation required to obtain the global disparity map through the binocular camera is relatively large, so the autonomous vehicle cannot achieve real-time processing, leading to safety risks while the autonomous vehicle is driving; therefore, how to improve the real-time performance of the vehicle drivable area detection method has become an urgent problem to be solved.
- the embodiment of the present application provides a method and device for detecting a vehicle drivable area.
- in the embodiment of the present application, a binocular image of the vehicle traveling direction can be obtained; based on the binocular image, the boundary of the vehicle's drivable area in the left-eye image and in the right-eye image can be obtained; according to the boundary of the drivable area in the left-eye image and the boundary of the drivable area in the right-eye image, the disparity information of the drivable area boundary in the binocular image can be obtained; and based on the disparity information, the position of the vehicle's drivable area in the binocular image can be obtained.
- the detection method of the embodiment of the present application can therefore avoid the pixel-by-pixel disparity calculation of the binocular image that is needed to obtain a global disparity image: there is no need to calculate the global disparity image, and only the disparity of the drivable area boundary needs to be calculated.
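- to give a rough sense of why boundary-only matching reduces the computation, the following back-of-the-envelope sketch in Python compares the number of correspondence searches needed for a dense global disparity map with those needed for boundary-only matching; the image resolution and search range are assumed figures for illustration, not values from the application.

```python
# Back-of-the-envelope comparison of matching workload (assumed figures):
# a dense global disparity map needs one correspondence search per pixel,
# while boundary-only matching needs one search per drivable-area boundary point.
width, height = 1280, 720        # assumed image resolution
search_range = 64                # assumed disparity search range in pixels

global_matches = width * height * search_range      # dense, pixel-by-pixel matching
boundary_points = width                              # roughly one boundary point per column
boundary_matches = boundary_points * search_range    # boundary-only matching

print(f"dense global disparity cost : {global_matches:,} searches")
print(f"boundary-only cost          : {boundary_matches:,} searches")
print(f"reduction factor            : {global_matches // boundary_matches}x")
```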
- Fig. 4 is a schematic diagram of a detection system for an autonomous vehicle provided by an embodiment of the present application.
- the detection system 400 may be used to implement a method for detecting a vehicle's drivable area.
- the detection system 400 may include a perception module 410, a drivable area detection module 420, a drivable area registration module 430, and a coordinate system conversion module 440.
- the perception module 410 can be used to perceive information about the road surface and the surrounding environment while the autonomous vehicle is driving; the perception module can include a binocular camera, where the binocular camera can include a left-eye camera and a right-eye camera, and the binocular camera can be used to perceive the environmental information around the vehicle as the input for the subsequent deep learning network that detects the drivable area; the left-eye camera and the right-eye camera can meet image frame synchronization, which can be achieved through hardware or software.
- the baseline distance of the binocular camera can be greater than 30 cm to ensure that it can support the detection of objects at a distance of about 100 meters.
- the drivable area detection module 420 can be used to detect the drivable area in the pixel coordinate system; the module can be composed of a deep learning network that detects the drivable area in the left-eye image and the right-eye image.
- the input data of the detection module 420 may be the image data collected by the left-eye camera and the right-eye camera included in the above-mentioned perception module 410, and the output data may be the boundary points of the drivable area in the left-eye image and the right-eye image in the pixel coordinate system.
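- as an illustration of what the output of such a module might look like, the sketch below assumes a segmentation network that produces a binary drivable-area mask (the network itself is not shown) and extracts one boundary point per image column; the function name and the toy mask are purely illustrative and are not defined by this application.

```python
import numpy as np

def boundary_points_from_mask(mask):
    """Extract one drivable-area boundary point per image column.

    `mask` is an H x W boolean array (True = drivable), standing in for the output of a
    pre-trained segmentation network. The boundary point of a column is taken here as the
    top-most drivable pixel, i.e. the farthest visible extent of the free space in that column.
    """
    h, w = mask.shape
    points = []
    for u in range(w):
        rows = np.flatnonzero(mask[:, u])
        if rows.size:                        # column contains drivable pixels
            points.append((u, int(rows.min())))
    return points

# Toy example: a trapezoidal "road" mask standing in for real network output.
mask = np.zeros((8, 10), dtype=bool)
for v in range(3, 8):
    mask[v, 4 - (v - 3):6 + (v - 3)] = True
print(boundary_points_from_mask(mask)[:5])   # first few (column, row) boundary points
```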
- the drivable area registration module 430 can be used to perform at least the following five steps (a minimal orchestration sketch of this pipeline is given after this list):
- the first step: segmentation processing of the left-eye drivable area boundary; that is, in the pixel coordinate system, the boundary points of the left-eye drivable area extracted above are processed according to the slope variation, and the continuous drivable area boundary points are divided into N segments.
- the second step: segment matching processing between the boundary points of the left-eye drivable area and the boundary points of the right-eye drivable area; that is, in the pixel coordinate system, different matching strategies are used according to the segmented drivable area boundary points to match the boundary points of the left-eye drivable area with the boundary points of the right-eye drivable area.
- the third step: registration filtering processing of the drivable area; that is, in the pixel coordinate system, the matched drivable area boundary points are filtered by a filtering algorithm to ensure the accuracy of the boundary point registration.
- the fourth step: disparity calculation of the drivable area boundary points; that is, in the pixel coordinate system, the disparity corresponding to the drivable area boundary points is calculated according to the registered boundary points.
- the fifth step: disparity sub-pixelation processing; that is, in the pixel coordinate system, the disparity obtained above is sub-pixelated to ensure that the coordinate positioning of the drivable area boundary points at a longer distance still has high positioning accuracy.
- sub-pixel may refer to subdividing two adjacent pixels, that is, each pixel is divided into smaller units.
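- the following sketch strings the five steps together in Python; every step function here is a trivial stand-in (detailed sketches of the individual steps appear later, alongside the corresponding figures), so the structure of the flow, not the stub bodies, is the point.

```python
# Placeholder implementations of the five registration steps performed by module 430;
# each body is a trivial stub so the overall flow runs end to end.
def segment_boundary(left_boundary):                 # step 1: split at slope jumps
    return [left_boundary]                           # stub: one segment

def match_segments(segments, right_boundary):        # step 2: per-segment matching
    flat = [p for seg in segments for p in seg]
    return list(zip(flat, right_boundary))           # stub: match by index order

def filter_matches(matches):                         # step 3: registration filtering
    return matches                                   # stub: keep everything

def compute_disparity(matches):                      # step 4: per-point disparity
    return [left[0] - right[0] for left, right in matches]

def refine_subpixel(disparities):                    # step 5: sub-pixel refinement
    return [float(d) for d in disparities]           # stub: no refinement

def register_drivable_area(left_boundary, right_boundary):
    segments = segment_boundary(left_boundary)
    matches = filter_matches(match_segments(segments, right_boundary))
    return refine_subpixel(compute_disparity(matches))

# Toy boundary points (u, v) in pixel coordinates, purely for illustration.
left_pts = [(100, 400), (101, 398), (102, 396)]
right_pts = [(96, 400), (97, 398), (98, 396)]
print(register_drivable_area(left_pts, right_pts))   # -> [4.0, 4.0, 4.0]
```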
- the coordinate system conversion module 440 can be used to locate the boundary points of the drivable area in the X and Y coordinates of the coordinate system where the autonomous vehicle is located.
- the X distance between a boundary point of the drivable area and the ego vehicle can be calculated from the disparity obtained above through triangulation.
- Figure 5 is a schematic diagram of coordinate conversion; a point P(x, y) on the boundary of the drivable area can be converted from the two-dimensional pixel coordinates of the image to the three-dimensional coordinate system where the autonomous vehicle is located.
- FIG. 5 is an example of the coordinate system, and does not limit the direction in the coordinate system in any way.
- Fig. 6 is a schematic flowchart of a method for detecting a vehicle drivable area provided by an embodiment of the present application.
- the detection method shown in FIG. 6 may be executed by the vehicle shown in FIG. 1, or the automatic driving system shown in FIG. 2, or the detection system shown in FIG. 4; the detection method shown in FIG. 6 includes steps 510 to 530; these steps are described in detail below.
- Step 510 Obtain a binocular image of the driving direction of the vehicle.
- the above-mentioned binocular image may include a left-eye image and a right-eye image; for example, it may refer to the left and right two-dimensional images collected by two parallel cameras mounted at equal height in an autonomous vehicle, such as the images acquired by the binocular camera described above.
- the binocular image may be an image of the road surface or the surrounding environment obtained by the autonomous vehicle in the driving direction; for example, it may include the road surface as well as obstacles and pedestrians near the vehicle.
- Step 520 Obtain the disparity information of the drivable area boundary in the binocular image according to the drivable area boundary in the left-eye image and the drivable area boundary in the right-eye image.
- the disparity information refers to the difference in apparent direction that arises when the same target is observed from two points separated by a certain distance.
- for example, the difference in the horizontal direction between the positions of the drivable area of the same road as collected by the left-eye camera and by the right-eye camera of an autonomous vehicle may be used as disparity information.
- the obtained left-eye image and right-eye image can be respectively input to a deep learning network pre-trained to recognize the drivable area in an image; the pre-trained deep learning network can identify the area where the vehicle can travel in the left-eye image and in the right-eye image.
- Step 530 Based on the disparity information, obtain the travelable area of the vehicle in the binocular image.
- the position of the vehicle drivable area in the binocular image can be obtained by triangulation.
- when the boundary of the drivable area in the left-eye image and the boundary of the drivable area in the right-eye image are matched to calculate the disparity information of the drivable area boundary in the binocular image, the drivable area boundary in either one of the two images can be segmented, and the segmented boundary can be used for segment-wise matching; this can improve the accuracy of the drivable area boundary matching, which is conducive to obtaining more accurate information about the area of the road where the vehicle can travel.
- optionally, the method for detecting the drivable area of the vehicle further includes segmenting the drivable area boundary in the first image of the binocular image; in this case, the foregoing step 520 of obtaining the disparity information of the drivable area boundary in the binocular image according to the drivable area boundary in the left-eye image and the drivable area boundary in the right-eye image may include performing drivable area boundary matching on the second image of the binocular image based on the N boundary point line segments obtained by the segmentation processing to obtain the disparity information, where N is an integer greater than or equal to 2.
- the first image may refer to the left-eye image in the binocular image and the second image to the right-eye image; alternatively, the first image may refer to the right-eye image in the binocular image and the second image to the left-eye image.
- the foregoing segmentation of the drivable area boundary in the first image of the binocular image and boundary matching in the second image may mean that the drivable area boundary in the left-eye image is segmented and, based on the N boundary point line segments in the left-eye image obtained by the segmentation processing, the drivable area boundary is matched in the right-eye image to obtain the disparity information; or it may mean that the drivable area boundary in the right-eye image is segmented and, based on the N boundary point line segments in the right-eye image obtained by the segmentation processing, the drivable area boundary is matched in the left-eye image to obtain the disparity information.
- the N boundary point line segments obtained by the segmentation processing can be divided, according to their slopes, into boundary point line segments with a smaller slope and boundary point line segments with a larger slope, so that the drivable area boundary in the binocular image can be matched according to the corresponding matching strategy and the disparity information can be obtained from the matching result.
- optionally, performing drivable area boundary matching on the second image based on the N boundary point line segments obtained by the segmentation processing to obtain the disparity information includes: performing drivable area boundary matching on the second image according to the N boundary point line segments and a matching strategy to obtain the disparity information, where the matching strategy is determined according to the slopes of the N boundary point line segments.
- for the first type of boundary point line segment, the first matching strategy in the matching strategy can be used to perform drivable area boundary matching on the second image, where the first type of boundary point line segment may include the drivable area boundary at the road edge and the drivable area boundary at the side of other vehicles;
- for the second type of boundary point line segment, the second matching strategy in the matching strategy can be used to perform drivable area boundary matching on the second image, where the boundary points of a second-type boundary point line segment are at the same distance from the vehicle.
- the first type of boundary point line segment may refer to a boundary point line segment with a relatively large slope, for example, at a road edge area or at the side area of another vehicle;
- the second type of boundary point line segment may refer to a boundary point line segment with a relatively small slope, for example, at the rear area of another vehicle.
- the above-mentioned matching strategy may include a first matching strategy and a second matching strategy.
- the first matching strategy may refer to matching through a search area, where the search area is an area generated with any point of a first boundary point line segment as the center, and the first boundary point line segment is any one of the N boundary point line segments;
- the second matching strategy may refer to matching through a preset search step, where the preset search step is determined based on the boundary point disparity of one of the N boundary point line segments.
- for example, a boundary point of a drivable area boundary point line segment in the first image (for example, the left-eye image) of the binocular image can be used as a template point, and a search area can be generated centered on the boundary point of the second image (for example, the right-eye image) corresponding to the same row, so as to match the template point in the first image.
- the specific process is shown in Figure 7 below.
- for the second type of boundary point line segment, the inflection point parallax correction method can be used to match the boundary points of this part.
- the specific process is shown in Figure 7 below.
- a segmented matching strategy based on the magnitude of the slope of the drivable area boundary is proposed, and the detected drivable area is segmented according to the slope distribution.
- Different matching strategies are used to match the boundaries of the drivable area, which can improve the matching accuracy of the boundary points of the drivable area.
- Fig. 7 is a schematic flowchart of a method for detecting a vehicle drivable area provided by an embodiment of the present application.
- the detection method shown in FIG. 7 can be executed by the vehicle shown in FIG. 1, or the automatic driving system shown in FIG. 2, or the detection system shown in FIG. 4; the detection method shown in FIG. 7 includes steps 601 to 614; these steps are described in detail below.
- Step 601 Start; that is, start to execute the detection method of the vehicle's drivable area.
- Step 602 Acquire a left eye image.
- the left-eye image may refer to an image of the road surface or the surrounding environment acquired by one of the binocular cameras (for example, the left-eye camera).
- Step 603 Acquire a right eye image.
- the right-eye image may refer to the image of the road surface or the surrounding environment acquired by the other camera (for example, the right-eye camera) of the binocular camera.
- step 602 and step 603 may be performed at the same time; alternatively, step 603 may be performed first and then step 602 may be performed, which is not limited in this application.
- Step 604 Perform a vehicle-driving area detection on the acquired left-eye image.
- a deep learning network for identifying a drivable area in an image can be pre-trained through training data; the pre-trained deep learning network can identify a drivable area of a vehicle in the left-eye image.
- a pre-trained deep learning network can be used to detect the drivable area in the pixel coordinate system; the input data can be the collected left-eye image, and the output data can be the drivable area detected in the left-eye image.
- Step 605 Perform a vehicle-driving area detection on the acquired right-eye image.
- the pre-trained deep learning network in step 604 can be used to detect the drivable area in the right-eye image.
- the obtained left-eye image and right-eye image can be simultaneously input to a pre-trained deep learning network for detection; or, the obtained left-eye image and right-eye image can also be input one after the other to the pre-trained deep learning network.
- the trained deep learning network can detect the coordinates of the drivable area in the left-eye image and the right-eye image.
- Step 606 Perform segmentation processing on the drivable area in the left-eye image.
- the boundary of the drivable area may be segmented according to different slopes of the boundary points of the drivable area detected in the left-eye image.
- for example, the slope vector of the points on the boundary of the vehicle's drivable area can be obtained as K = {k1, k2, ..., kn}; the inflection points of the drivable area boundary are detected according to jumps in the slope.
- as shown in Figure 8, the inflection points on the boundary of the vehicle's drivable area can be obtained, including point B, point C, point D, point E, point F, and point G; according to these inflection points, the boundary of the vehicle's drivable area can be divided into the AB segment, BC segment, CD segment, DE segment, EF segment, FG segment, and GH segment.
- the boundary of the above-mentioned drivable area can be segmented to obtain N segments; the boundary point line segments of the N segments can be classified into two categories according to their slope distribution, that is, boundary point line segments with a smaller slope and boundary point line segments with a larger slope.
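- the following sketch shows one way such slope-based segmentation and classification could be carried out; the jump and steepness thresholds and the toy boundary are illustrative assumptions of this sketch, not values from the application.

```python
import numpy as np

def segment_by_slope(boundary, jump_thresh=1.0, steep_thresh=1.0):
    """Split a drivable-area boundary (list of (u, v) pixel points) at inflection points,
    i.e. where the slope between consecutive points jumps, and label each segment as
    'steep' (road edge / vehicle side) or 'flat' (e.g. vehicle rear)."""
    pts = np.asarray(boundary, dtype=float)
    du = np.diff(pts[:, 0])
    dv = np.diff(pts[:, 1])
    slopes = dv / np.where(du == 0, 1e-6, du)              # slope vector K = {k1, ..., kn}
    cuts = [0]
    for i in range(1, len(slopes)):
        if abs(slopes[i] - slopes[i - 1]) > jump_thresh:   # slope jump -> inflection point
            cuts.append(i)
    cuts.append(len(pts) - 1)
    segments = []
    for a, b in zip(cuts[:-1], cuts[1:]):
        k = np.median(slopes[a:b]) if b > a else 0.0
        label = "steep" if abs(k) > steep_thresh else "flat"
        segments.append((label, pts[a:b + 1]))
    return segments

# Toy boundary: a steep left road edge, a flat vehicle rear, a steep right road edge.
boundary = [(0, 40), (1, 30), (2, 20), (3, 20), (4, 20), (5, 20), (6, 30), (7, 40)]
for label, seg in segment_by_slope(boundary):
    print(label, seg[:, 1].astype(int).tolist())
```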
- for example, Figure 9 shows a road surface image (for example, the left-eye image) obtained by one of the binocular cameras; from the image shown in Figure 9(a), it can be seen that the boundary of the area where the vehicle can travel includes point A, point B, point C, and point D.
- if the detected drivable area boundary is located at the road edge or at the side of another vehicle, the slope of the corresponding drivable area boundary in the pixel coordinate system is larger, such as the line segments AB, CD, EF, and GH included on the drivable area boundary shown in Figure 8; if the detected drivable area boundary is located at the rear of another vehicle, the slope of the corresponding drivable area boundary is smaller, such as the line segments BC, DE, and FG included on the drivable area boundary shown in Figure 8.
- Step 607 Segment matching of the boundary of the drivable area.
- step 606 describes the segmentation of the vehicle drivable area boundary detected in the left-eye image; in addition, according to the slopes of the boundary point line segments after the segmentation processing, the boundary point line segments can be divided into boundary point line segments with a smaller slope and boundary point line segments with a larger slope; in step 607, the boundary point line segments in the left-eye image are matched in the right-eye image.
- the boundary points of the drivable area in the acquired left-eye image may be matched according to different matching strategies.
- the boundary point line segment with a large slope may correspond to the road edge in the real scene, or to the side of a vehicle, such as the AB line segment and the CD line segment shown in Figure 9(a); Figure 9(b) shows a boundary point line segment with a larger slope, for example, at the edge of the road.
- the first matching strategy: for the above-mentioned boundary point line segment with a larger slope, one boundary point of the drivable area boundary point line segment in the left-eye image can be used as a template point, and a search area can be generated centered on the right-eye image boundary point corresponding to the same row in the left-eye image, so as to match the template point in the left-eye image.
- FIG. 10 shows a left-eye image and a schematic diagram of generating a search area in the right-eye image to match a template point in the left-eye image.
- the above-mentioned search area may be obtained through an eight-dimensional descriptor; first, 360 degrees can be divided into eight equal parts, that is, 0 to 45 degrees, 45 to 90 degrees, 90 to 135 degrees, and so on up to 360 degrees, so that the eight regions represent eight angular ranges.
- to generate the eight-dimensional descriptor, the gradient angle of each pixel in the above 5*5 neighborhood is counted; when the angle of a pixel falls in a region, the gradient magnitude (range) of that pixel is accumulated into the value of that region, and the accumulated values of the eight regions form the eight-dimensional descriptor; in the corresponding formula, S represents the descriptor, Angle represents the corresponding eight angle regions, and range represents the gradient magnitude corresponding to each point.
- through the above calculation, the eight-dimensional descriptor corresponding to each drivable area boundary point can be obtained, and the subsequent matching process is implemented based on this descriptor.
- for example, taking each boundary point of the drivable area extracted from the left-eye image as the center, a descriptor is generated in the corresponding area of the right-eye image, for example, in the 5*5 neighborhood around the position in the right-eye image corresponding to the boundary point in the left-eye image; the gradient and angle value of each point in this neighborhood of the right-eye image are calculated using the following formulas.
- dx represents the x-direction gradient of the pixel
- dy represents the y-direction gradient of the pixel
- Angle represents the angle value of the pixel
- range represents the gradient magnitude of the pixel.
- the above description is based on the eight-dimensional descriptor generating a search area with a size of 5*5 as an example.
- the size of the search area can also be other scale sizes, which is not limited in this application.
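- a possible implementation of this descriptor-based matching is sketched below; the 5*5 neighborhood and the 45-degree binning follow the description above, while the search radius, the toy images, and the assumption that the matching point in the right image lies at a column no greater than that of the left image are illustrative choices of this sketch rather than requirements of the application.

```python
import numpy as np

def descriptor_8d(image, u, v, half=2):
    """Eight-dimensional gradient-orientation descriptor of the pixel (u, v): 360 degrees
    are split into eight 45-degree regions, and for every pixel of the (2*half+1)^2
    neighbourhood the gradient magnitude ('range') is accumulated into the region that
    its gradient angle falls in. Border handling is a simplification of this sketch."""
    img = np.asarray(image, dtype=float)
    desc = np.zeros(8)
    for dv in range(-half, half + 1):
        for du in range(-half, half + 1):
            y, x = v + dv, u + du
            if 1 <= y < img.shape[0] - 1 and 1 <= x < img.shape[1] - 1:
                dx = img[y, x + 1] - img[y, x - 1]              # x-direction gradient
                dy = img[y + 1, x] - img[y - 1, x]              # y-direction gradient
                angle = (np.degrees(np.arctan2(dy, dx)) + 360.0) % 360.0
                mag = np.hypot(dx, dy)                          # gradient magnitude ('range')
                desc[int(angle // 45) % 8] += mag               # accumulate into the angle region
    return desc

def match_in_search_area(left_img, right_img, template_pt, search_radius=10):
    """Match a template boundary point of the left image against right-image candidates
    in the same row by minimal descriptor distance."""
    u, v = template_pt
    template = descriptor_8d(left_img, u, v)
    best_u, best_cost = None, np.inf
    for cu in range(max(0, u - search_radius), u + 1):
        cost = np.linalg.norm(descriptor_8d(right_img, cu, v) - template)
        if cost < best_cost:
            best_u, best_cost = cu, cost
    return best_u

# Toy textured views: the right view is the left view shifted by 4 pixels.
gen = np.random.default_rng(0)
left_view = gen.integers(0, 256, size=(20, 40)).astype(float)
right_view = np.roll(left_view, -4, axis=1)
print("matched column:", match_in_search_area(left_view, right_view, (20, 10)))  # -> 16
```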
- the second matching strategy: for a boundary point line segment with a small slope, because the boundary points extracted from the right-eye image will overlap with the boundary points extracted from the left-eye image, the inflection point parallax correction method can be used to match this part of the boundary points.
- for example, for the BC line segment shown in FIG. 11, because the slope of this drivable area boundary point line segment is relatively small, the drivable area boundaries detected in the left-eye image and the right-eye image may overlap within this boundary point line segment; the adjacent segments of the BC line segment, for example, the segment to the left of point B and the segment to the right of point C, are boundary point line segments with a larger slope and can be matched as boundary points with a larger slope.
- in the matching process of the segments adjacent to the BC line segment, the disparity between the left-eye image and the right-eye image at point B and at point C can be obtained; the average value of the disparities of the two points B and C is then used as the search length for the BC line segment, and the boundary point line segments with the smaller slope in the left-eye image and the right-eye image are matched using this search length.
- boundary point matching through the descriptor is also applicable to boundary point line segments with a small slope, for example, for matching the BC segment of the drivable area boundary between the left-eye image and the right-eye image.
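- the sketch below illustrates this inflection-point parallax correction under simplifying assumptions: the disparities at B and C are taken as given, the right-image boundary is indexed by row, and the function name and toy numbers are hypothetical.

```python
def match_flat_segment(left_pts, right_rows, disp_b, disp_c):
    """Sketch of the inflection-point parallax correction for a small-slope segment
    (e.g. the BC segment): the disparities already obtained at the adjacent inflection
    points B and C are averaged and used as the search length, and each left-image
    boundary point is paired with the right-image boundary point of the same row that
    lies closest to the expected column. right_rows maps row v -> candidate columns."""
    search_len = 0.5 * (disp_b + disp_c)          # average disparity of the two inflection points
    matches = []
    for (u_l, v_l) in left_pts:
        if v_l not in right_rows:
            continue
        expected = u_l - search_len               # expected column in the right image
        u_r = min(right_rows[v_l], key=lambda u: abs(u - expected))
        matches.append(((u_l, v_l), (u_r, v_l)))
    return matches

# Toy data: a flat segment at image row 300 with a true disparity of about 6 pixels;
# the disparities found at the adjacent inflection points B and C are 5 and 7.
left_segment = [(u, 300) for u in range(100, 106)]
right_candidates = {300: [u - 6 for u in range(95, 115)]}
for left, right in match_flat_segment(left_segment, right_candidates, disp_b=5, disp_c=7):
    print(left, "->", right, "disparity:", left[0] - right[0])
```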
- in the embodiment of the present application, the extracted drivable area boundary of the left-eye image is segmented, and different matching strategies are adopted for the parts of the drivable area boundary with a larger slope and with a smaller slope; matching the drivable area boundary in this way can improve the matching accuracy of the drivable area boundary points.
- Step 608 Filter the matching result.
- the matching result filtering may refer to a process of filtering the matching result of step 607, so as to eliminate erroneous matching points.
- for example, the matched boundary point pairs may be filtered so as to ensure the matching accuracy of the boundary points of the drivable area.
- for example, the matching degrees of the boundary points can be sorted, abnormal matching points can be eliminated according to the box plot method, and the matching points with a higher matching degree can be retained as the boundary points where the drivable area is finally successfully matched between the left-eye image and the right-eye image.
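- the sketch below shows one way such box-plot filtering could be implemented; the usual 1.5*IQR whiskers are an assumption of this sketch, since the application does not specify the exact box-plot parameters, and the toy matching scores are illustrative.

```python
import numpy as np

def filter_matches_boxplot(matches, scores):
    """Eliminate abnormal matches with a box-plot rule on the matching score."""
    scores = np.asarray(scores, dtype=float)
    q1, q3 = np.percentile(scores, [25, 75])
    iqr = q3 - q1
    low, high = q1 - 1.5 * iqr, q3 + 1.5 * iqr     # standard box-plot whiskers (assumption)
    keep = (scores >= low) & (scores <= high)
    return [m for m, k in zip(matches, keep) if k]

# Toy example: descriptor distances of matched boundary-point pairs (lower = better);
# the last pair is a gross mismatch and is removed by the box-plot rule.
pairs = [((100 + i, 300), (94 + i, 300)) for i in range(9)] + [((120, 300), (60, 300))]
dists = [1.2, 1.3, 1.35, 1.4, 1.45, 1.5, 1.55, 1.6, 1.7, 40.0]
kept = filter_matches_boxplot(pairs, dists)
print(f"{len(kept)} of {len(pairs)} matches kept")   # -> 9 of 10 matches kept
```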
- Step 609 Calculate the disparity of the boundary points of the drivable area.
- the disparity calculation is performed based on the above-mentioned matched boundary points.
- for example, for each pair of matched boundary points, the difference between the coordinates of the boundary point in the left-eye image and in the right-eye image in the Y direction of the pixel coordinate system can be calculated as its disparity.
- Step 610 Parallax filtering of boundary points of the drivable area.
- the disparities obtained by the calculation in step 609 may be discrete points; further, the disparity filtering of the drivable area boundary points in step 610 can be used to make these discrete points continuous.
- the process of disparity filtering can be mainly divided into filtering and interpolation (for example, continuity processing).
- first, considering that the bottom of the image corresponds to the drivable area closest to the autonomous vehicle, the disparity corresponding to the matched drivable area boundary points should gradually decrease when moving from the bottom of the image upward; that is, the disparity of the boundary points should not increase.
- erroneous disparities are therefore filtered out: moving gradually upward from the bottom of the image, the deeper the depth of field, the smaller the disparity of the drivable area between the left-eye image and the right-eye image should be; if the disparity increases at a boundary point corresponding to a deeper area, that boundary point may be a matching deviation point and can be eliminated.
- in addition, the boundary points corresponding to the same row in the image are at the same distance from the autonomous vehicle (that is, the depth of field is the same, or the coordinates in the Y direction are the same), so the disparities of the boundary points corresponding to the same row in the left-eye image and the right-eye image should be the same; a second filtering can be performed based on this, and the boundary points remaining after the filtering process can be corrected continuously by interpolation using the disparities of adjacent boundary points.
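- a minimal sketch of the first filtering pass and the interpolation step is given below (the same-row consistency check mentioned above is omitted for brevity); the toy rows and disparities are illustrative.

```python
import numpy as np

def filter_boundary_disparity(rows, disps):
    """Two-pass sketch of the disparity filtering described above: (1) moving from the
    bottom of the image upwards, the boundary disparity must not increase, so violating
    points are rejected as matching deviations; (2) rejected points are then filled in by
    interpolating the disparities of neighbouring valid boundary points."""
    order = np.argsort(rows)[::-1]                 # start from the bottom row of the image
    rows = np.asarray(rows, dtype=float)[order]
    disps = np.asarray(disps, dtype=float)[order]
    valid = np.ones(len(disps), dtype=bool)
    running_min = np.inf
    for i, d in enumerate(disps):                  # pass 1: enforce non-increasing disparity
        if d > running_min:
            valid[i] = False                       # disparity grew with depth -> deviation point
        else:
            running_min = d
    # pass 2: replace rejected disparities by interpolating over the valid neighbours
    filled = np.interp(rows, rows[valid][::-1], disps[valid][::-1])
    disps[~valid] = filled[~valid]
    return rows, disps

# Toy boundary: disparity should shrink towards the top of the image; the 9.0 at row 350
# is a matching deviation and is replaced by an interpolated value (6.5).
rows = [400, 380, 360, 350, 340, 320]
disps = [10.0, 8.5, 7.0, 9.0, 6.0, 5.0]
print(filter_boundary_disparity(rows, disps))
```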
- Step 611 Parallax sub-pixelation processing.
- sub-pixel may refer to subdividing two adjacent pixels, which means that each pixel will be divided into smaller units.
- the extracted boundary point disparity may be subjected to sub-pixelation processing.
- in the corresponding sub-pixelation formula, M and N represent the descriptors used in the sub-pixel processing, and y represents the offset in the y direction of the image coordinate system.
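- since the application's own sub-pixelation formula is not reproduced here, the sketch below shows a common stereo sub-pixel refinement (a parabola fit through the matching costs around the best integer disparity) purely as an illustration of what sub-pixelation achieves; it is not the formula of this application.

```python
def subpixel_disparity(cost_prev, cost_best, cost_next, d_best):
    """Illustrative sub-pixel refinement: fit a parabola through the matching costs at
    the best integer disparity and its two neighbours and take the vertex. This is a
    common stereo refinement technique shown only as an example; the application's own
    sub-pixelation formula (involving the descriptors M and N and the y offset) is not
    reproduced here."""
    denom = cost_prev - 2.0 * cost_best + cost_next
    if denom == 0:
        return float(d_best)                       # flat cost curve: keep the integer value
    offset = 0.5 * (cost_prev - cost_next) / denom
    return d_best + max(-0.5, min(0.5, offset))    # clamp the correction to half a pixel

# Example: descriptor-matching costs around the best integer disparity d = 12.
print(subpixel_disparity(cost_prev=4.0, cost_best=1.0, cost_next=2.5, d_best=12))  # -> ~12.17
```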
- Step 612 The boundary point of the drivable area is positioned on the X coordinate in the coordinate system of the autonomous vehicle.
- the triangulation method can be used to obtain the distance of the boundary point in the X direction in the vehicle coordinate system.
- Step 613 The boundary point of the drivable area is positioned on the Y coordinate in the coordinate system of the autonomous vehicle.
- the triangulation method can be used to obtain the distance of the boundary point in the Y direction in the vehicle coordinate system.
- for example, triangulation is used to measure the distances of the boundary points in the X direction and the Y direction in the coordinate system where the autonomous vehicle is located, where f represents the focal length, B represents the baseline of the binocular camera, y represents the pixel coordinate (offset) of the point in the image coordinate system, and Y represents the horizontal distance from the object to the camera; the distances in the X direction and the Y direction are obtained from these quantities and the disparity.
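- as a sketch of this triangulation step, the example below uses the standard rectified-stereo relations X = f * B / disparity and Y = X * (y - y_center) / f with assumed camera parameters; y_center (the image-centre offset) is an assumption of this example and not a symbol taken from the application.

```python
def boundary_point_to_vehicle_coords(disparity, y_pixel, f, B, y_center=0.0):
    """Locate a matched boundary point in the ego-vehicle frame by triangulation, using
    the quantities named above: f is the focal length (here in pixels), B the baseline of
    the binocular camera and y the offset of the point in the image coordinate system."""
    X = f * B / disparity                  # distance along the driving direction
    Y = X * (y_pixel - y_center) / f       # horizontal distance from the object to the camera
    return X, Y

# Example with assumed camera parameters: f = 1000 px, baseline B = 0.3 m (cf. the 30 cm
# baseline mentioned earlier), a disparity of 6 px and an image offset of 120 px.
X, Y = boundary_point_to_vehicle_coords(disparity=6.0, y_pixel=120.0, f=1000.0, B=0.3)
print(f"X = {X:.1f} m, Y = {Y:.1f} m")     # -> X = 50.0 m, Y = 6.0 m
```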
- in the embodiment of the present application, the detection of the drivable area and the mapping of the drivable area from the pixel coordinate system to the ego-vehicle coordinate system can be realized in time with a small amount of calculation, thereby improving the real-time performance of the autonomous vehicle's detection of the vehicle's drivable area.
- FIG. 15 is a schematic diagram of the method for detecting a vehicle travelable area according to an embodiment of the present application applied to a specific product form.
- the product form shown in Figure 15 can refer to a vehicle-mounted visual perception device, and a method of detecting the drivable area and positioning the space coordinate can be realized through the software algorithm deployed on the computing node of the related device.
- it can mainly include three parts: the first part is obtaining the images, that is, the left-eye image and the right-eye image can be obtained through the left-eye camera and the right-eye camera, and the left-eye camera and the right-eye camera can meet frame synchronization; the second part is obtaining the drivable area in each image, for example, a deep learning algorithm can output the drivable area (Freespace) in the left-eye image and the right-eye image, or the boundary of the drivable area, and the above-mentioned deep learning algorithm can be deployed on AI chips and output Freespace based on the parallel accelerated processing of multiple AI chips; the third part is obtaining the disparity of the drivable area boundary points, for example, the disparity can be output based on serial processing.
- the detection method of the vehicle drivable area of the embodiment of the present application is described in detail above with reference to FIGS. 1 to 15, and the device embodiments of the present application will be described in detail below with reference to FIGS. 16 and 17. It should be understood that the detection device for the vehicle drivable area in the embodiment of the present application can execute the various vehicle drivable area detection methods of the foregoing embodiments of the present application; that is, for the specific working process of the following products, reference may be made to the corresponding processes in the foregoing method embodiments.
- Fig. 16 is a schematic block diagram of a device for detecting a vehicle travelable area provided by an embodiment of the present application. It should be understood that the detection device 700 may execute the method for detecting the drivable area shown in FIGS. 6 to 15.
- the detection device 700 includes: an acquisition unit 710 and a processing unit 720.
- the acquiring unit 710 is configured to acquire a binocular image of the driving direction of the vehicle, where the binocular image includes a left-eye image and a right-eye image; the processing unit 720 is configured to obtain the disparity information of the drivable area boundary in the binocular image according to the drivable area boundary in the left-eye image and the drivable area boundary in the right-eye image, and to obtain the drivable area of the vehicle in the binocular image based on the disparity information.
- optionally, the processing unit 720 is further configured to: perform segmentation processing on the drivable area boundary in the first image of the binocular image.
- the processing unit 720 is specifically configured to:
- the disparity information is obtained by performing boundary matching of the drivable area on the second image according to the N boundary point line segments and the matching strategy, wherein the matching strategy is determined according to the slopes of the N boundary point line segments.
- the processing unit 720 is specifically configured to:
- for the first type of boundary point line segment, the first matching strategy in the matching strategy is used to perform drivable area boundary matching on the second image, where the first type of boundary point line segment includes the drivable area boundary at the road edge and the drivable area boundary at the side of other vehicles;
- for the second type of boundary point line segment, the second matching strategy in the matching strategy is used to perform drivable area boundary matching on the second image, where the boundary points in a second-type boundary point line segment are at the same distance from the vehicle.
- optionally, the matching strategy includes a first matching strategy and a second matching strategy, where the first matching strategy refers to matching through a search area, the search area being an area generated with any point of a first boundary point line segment as the center, and the first boundary point line segment being any one of the N boundary point line segments; the second matching strategy refers to matching through a preset search step, where the preset search step is determined based on the boundary point disparity of one boundary point line segment among the N boundary point line segments.
- the N boundary point line segments are determined according to inflection points of the boundary of the drivable area in the first image.
- the detection device 700 described above is embodied in the form of a functional unit.
- the term "unit” herein can be implemented in the form of software and/or hardware, which is not specifically limited.
- a "unit” may be a software program, a hardware circuit, or a combination of the two that realizes the above-mentioned functions.
- for example, the hardware circuit may include an application specific integrated circuit (ASIC), an electronic circuit, a processor for executing one or more software or firmware programs (such as a shared processor, a dedicated processor, or a group processor) and memory, merged logic circuits, and/or other suitable components that support the described functions.
- the units of the examples described in the embodiments of the present application can be implemented by electronic hardware or a combination of computer software and electronic hardware. Whether these functions are executed by hardware or software depends on the specific application and design constraint conditions of the technical solution. Professionals and technicians can use different methods for each specific application to implement the described functions, but such implementation should not be considered beyond the scope of this application.
- FIG. 17 is a schematic diagram of the hardware structure of a detection device for a vehicle travelable area provided by an embodiment of the present application.
- the detection apparatus 800 (the detection apparatus 800 may specifically be a computer device) includes a memory 801, a processor 802, a communication interface 803, and a bus 804. Among them, the memory 801, the processor 802, and the communication interface 803 realize the communication connection between each other through the bus 804.
- the memory 801 may be a read only memory (ROM), a static storage device, a dynamic storage device, or a random access memory (RAM).
- the memory 801 may store a program.
- the processor 802 is configured to execute each step of the method for detecting a vehicle drivable area in the embodiment of the present application, for example, the steps shown in FIGS. 6 to 15.
- vehicle travelable area detection device shown in the embodiment of the present application may be a server, for example, it may be a cloud server, or may also be a chip configured in a cloud server.
- the processor 802 may adopt a general-purpose central processing unit (CPU), a microprocessor, an application specific integrated circuit (ASIC), or one or more integrated circuits for executing related programs, so as to realize the method for detecting a vehicle's drivable area in the method embodiment of the present application.
- the processor 802 may also be an integrated circuit chip with signal processing capability.
- each step of the method for detecting a vehicle travelable area of the present application can be completed by an integrated logic circuit of hardware in the processor 802 or instructions in the form of software.
- the above-mentioned processor 802 may also be a general-purpose processor, a digital signal processing (digital signal processing, DSP), an application-specific integrated circuit (ASIC), an off-the-shelf programmable gate array (field programmable gate array, FPGA) or other programmable logic devices, Discrete gates or transistor logic devices, discrete hardware components.
- the general-purpose processor may be a microprocessor or the processor may also be any conventional processor or the like.
- the steps of the method disclosed in the embodiments of the present application may be directly embodied as being executed and completed by a hardware decoding processor, or executed and completed by a combination of hardware and software modules in the decoding processor.
- the software module can be located in a mature storage medium in the field, such as random access memory, flash memory, read-only memory, programmable read-only memory, or electrically erasable programmable memory, registers.
- the storage medium is located in the memory 801, and the processor 802 reads the information in the memory 801 and, in combination with its hardware, completes the functions required by the units included in the detection device for the vehicle travelable area shown in FIG. 16 in the implementation of this application, or,
- the method for detecting the vehicle's drivable area shown in FIGS. 6 to 15 of the method embodiment of the present application is executed.
- the communication interface 803 uses a transceiver device such as but not limited to a transceiver to implement communication between the detection device 800 and other devices or communication networks.
- the bus 804 may include a path for transmitting information between various components of the detection device 800 (for example, the memory 801, the processor 802, and the communication interface 803).
- it should be noted that although the detection device 800 shown in FIG. 17 only shows a memory, a processor, and a communication interface, in a specific implementation process those skilled in the art should understand that the detection device 800 may also include other devices necessary for normal operation; at the same time, according to specific needs, those skilled in the art should understand that the detection device 800 may also include hardware devices that implement other additional functions.
- the detection device 800 described above may also include only the components necessary to implement the embodiments of the present application, and does not necessarily include all the components shown in FIG. 17.
- it should be understood that the size of the sequence numbers of the above-mentioned processes does not imply an order of execution; the execution order of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.
- the disclosed system, device, and method can be implemented in other ways.
- the device embodiments described above are merely illustrative; for example, the division of the units is only a logical function division, and there may be other divisions in actual implementation, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
- the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
- the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
- the functional modules in the various embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
- if the function is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer readable storage medium.
- the technical solution of the present application in essence, or the part that contributes to the existing technology, or a part of the technical solution, can be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions used to make a computer device (which may be a personal computer, a server, or a network device, etc.) execute all or part of the steps of the methods described in the various embodiments of the present application.
- the aforementioned storage media include: a USB flash drive, a mobile hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or other media that can store program codes.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Automation & Control Theory (AREA)
- Transportation (AREA)
- Mechanical Engineering (AREA)
- Mathematical Physics (AREA)
- Geometry (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Traffic Control Systems (AREA)
Abstract
Detection method and detection device for a region in which a vehicle can travel. The detection method comprises: obtaining a binocular image in a vehicle traveling direction, the binocular image comprising a left image and a right image; obtaining disparity information of a travelable region boundary in the binocular image according to a travelable region boundary in the left image and a travelable region boundary in the right image; and obtaining the region in which the vehicle can travel in the binocular image according to the disparity information. The above technical solution can considerably reduce the amount of calculation while ensuring detection accuracy, and improve the efficiency with which an autonomous vehicle detects the road state.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202080093411.3A CN114981138A (zh) | 2020-02-13 | 2020-02-13 | 车辆可行驶区域的检测方法以及检测装置 |
PCT/CN2020/075104 WO2021159397A1 (fr) | 2020-02-13 | 2020-02-13 | Procédé de détection et dispositif de détection de région pouvant être parcourue par un véhicule |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2020/075104 WO2021159397A1 (fr) | 2020-02-13 | 2020-02-13 | Procédé de détection et dispositif de détection de région pouvant être parcourue par un véhicule |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021159397A1 true WO2021159397A1 (fr) | 2021-08-19 |
Family
ID=77292613
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/075104 WO2021159397A1 (fr) | 2020-02-13 | 2020-02-13 | Procédé de détection et dispositif de détection de région pouvant être parcourue par un véhicule |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN114981138A (fr) |
WO (1) | WO2021159397A1 (fr) |
2020
- 2020-02-13 WO PCT/CN2020/075104 patent/WO2021159397A1/fr active Application Filing
- 2020-02-13 CN CN202080093411.3A patent/CN114981138A/zh active Pending
Patent Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101410872A (zh) * | 2006-03-28 | 2009-04-15 | 株式会社博思科 | 道路图像解析装置及道路图像解析方法 |
US20140071240A1 (en) * | 2012-09-11 | 2014-03-13 | Automotive Research & Testing Center | Free space detection system and method for a vehicle using stereo vision |
WO2015053100A1 (fr) * | 2013-10-07 | 2015-04-16 | 日立オートモティブシステムズ株式会社 | Dispositif de détection d'objet et véhicule qui l'utilise |
CN105313892A (zh) * | 2014-06-16 | 2016-02-10 | 现代摩比斯株式会社 | 安全驾驶引导系统及其方法 |
CN105550665A (zh) * | 2016-01-15 | 2016-05-04 | 北京理工大学 | 一种基于双目视觉的无人驾驶汽车可通区域检测方法 |
CN106303501A (zh) * | 2016-08-23 | 2017-01-04 | 深圳市捷视飞通科技股份有限公司 | 基于图像稀疏特征匹配的立体图像重构方法及装置 |
FR3056531A1 (fr) * | 2016-09-29 | 2018-03-30 | Valeo Schalter Und Sensoren Gmbh | Detection d'obstacles pour vehicule automobile |
CN107358168A (zh) * | 2017-06-21 | 2017-11-17 | 海信集团有限公司 | 一种车辆可行驶区域的检测方法及装置、车载电子设备 |
CN107909036A (zh) * | 2017-11-16 | 2018-04-13 | 海信集团有限公司 | 一种基于视差图的道路检测方法及装置 |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113715907A (zh) * | 2021-09-27 | 2021-11-30 | 郑州新大方重工科技有限公司 | 一种适用于轮式设备的姿态调整方法、自动驾驶方法 |
CN113715907B (zh) * | 2021-09-27 | 2023-02-28 | 郑州新大方重工科技有限公司 | 一种适用于轮式设备的姿态调整方法、自动驾驶方法 |
CN115114494A (zh) * | 2022-06-20 | 2022-09-27 | 中国第一汽车股份有限公司 | 一种Freespace边缘点的处理方法以及装置 |
Also Published As
Publication number | Publication date |
---|---|
CN114981138A (zh) | 2022-08-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2021000800A1 (fr) | Procédé de raisonnement pour la région roulable d'une route, et dispositif | |
WO2022001773A1 (fr) | Procédé et appareil de prédiction de trajectoire | |
WO2021102955A1 (fr) | Procédé et appareil de planification de trajet pour véhicule | |
CN110930323B (zh) | 图像去反光的方法、装置 | |
US12001517B2 (en) | Positioning method and apparatus | |
US20230048680A1 (en) | Method and apparatus for passing through barrier gate crossbar by vehicle | |
WO2022051951A1 (fr) | Procédé de détection de ligne de voie de circulation, dispositif associé et support de stockage lisible par ordinateur | |
WO2021110166A1 (fr) | Procédé et dispositif de détection de structure de route | |
WO2022001366A1 (fr) | Procédé et appareil de détection de ligne de voie | |
CN114255275A (zh) | 一种构建地图的方法及计算设备 | |
WO2022089577A1 (fr) | Procédé de détermination de pose et dispositif associé | |
WO2022062825A1 (fr) | Procédé, dispositif de commande de véhicule et véhicule | |
EP4307251A1 (fr) | Procédé de mappage, véhicule, support d'informations lisible par ordinateur, et puce | |
CN115205311B (zh) | 图像处理方法、装置、车辆、介质及芯片 | |
CN112810603B (zh) | 定位方法和相关产品 | |
WO2021159397A1 (fr) | Procédé de détection et dispositif de détection de région pouvant être parcourue par un véhicule | |
CN115398272A (zh) | 检测车辆可通行区域的方法及装置 | |
WO2022022284A1 (fr) | Procédé et appareil de détection d'objet cible | |
WO2022033089A1 (fr) | Procédé et dispositif permettant de déterminer des informations tridimensionnelles d'un objet qui doit subir une détection | |
CN115100630B (zh) | 障碍物检测方法、装置、车辆、介质及芯片 | |
CN115082886B (zh) | 目标检测的方法、装置、存储介质、芯片及车辆 | |
WO2022061725A1 (fr) | Procédé et appareil d'observation d'élément de circulation | |
CN111775962B (zh) | 自动行驶策略的确定方法及装置 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 20918543 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 20918543 Country of ref document: EP Kind code of ref document: A1 |