WO2021159397A1 - Vehicle travelable region detection method and detection device - Google Patents

Vehicle travelable region detection method and detection device

Info

Publication number
WO2021159397A1
Authority
WO
WIPO (PCT)
Prior art keywords
boundary
image
drivable area
vehicle
matching
Prior art date
Application number
PCT/CN2020/075104
Other languages
French (fr)
Chinese (zh)
Inventor
朱麒文
崔学理
郑佳
吴祖光
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority to CN202080093411.3A (CN114981138A)
Priority to PCT/CN2020/075104
Publication of WO2021159397A1

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60W: CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W30/00: Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
    • B60W30/08: Active safety systems predicting or avoiding probable or impending collision or attempting to minimise its consequences
    • B60W30/095: Predicting travel path or likelihood of collision
    • B60W40/00: Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W40/02: Estimation or calculation of such parameters related to ambient conditions
    • B60W40/06: Road conditions
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/60: Analysis of geometric attributes

Definitions

  • This application relates to the automotive field, and more specifically, to a detection method and a detection device for a vehicle's drivable area.
  • Artificial intelligence is a theory, method, technology, and application system that uses digital computers, or machines controlled by digital computers, to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use knowledge to obtain the best results.
  • In other words, artificial intelligence is a branch of computer science that attempts to understand the essence of intelligence and to produce a new kind of intelligent machine that can react in a way similar to human intelligence.
  • Artificial intelligence studies the design principles and implementation methods of various intelligent machines, so that the machines have the functions of perception, reasoning, and decision-making.
  • Research in the field of artificial intelligence includes robotics, natural language processing, computer vision, decision-making and reasoning, human-computer interaction, recommendation and search, and basic AI theories.
  • Autonomous driving is a mainstream application in the field of artificial intelligence.
  • Autonomous driving technology relies on the collaboration of computer vision, radar, monitoring devices, and global positioning systems to enable motor vehicles to achieve autonomous driving without the need for human active operations.
  • Self-driving vehicles use various computing systems to help transport passengers from one location to another. Some self-driving vehicles may require some initial or continuous input from an operator (such as a pilot, driver, or passenger), and allow the operator to switch from manual mode to automatic driving mode or to a mode in between. Since automatic driving technology does not require a human to drive the motor vehicle, it can theoretically avoid human driving errors, reduce the occurrence of traffic accidents, and improve the efficiency of highway transportation. Therefore, autonomous driving technology has received more and more attention.
  • Currently, the binocular vision method extracts and localizes the drivable area from the global disparity map computed from the images output by a binocular camera.
  • However, the amount of calculation required for the binocular camera to obtain the global disparity map is large, which prevents the autonomous vehicle from processing in real time and creates safety risks while it is driving. Therefore, how to improve the detection efficiency of the vehicle drivable-area detection method while ensuring detection accuracy has become an urgent problem to be solved.
  • In view of this, the present application provides a detection method and a detection device for a vehicle's drivable area, which can improve the real-time performance of the detection system of an autonomous vehicle at a given detection accuracy and improve the detection efficiency of the drivable-area detection method.
  • In a first aspect, a method for detecting a vehicle's drivable area is provided, including: acquiring a binocular image of the vehicle's traveling direction, where the binocular image includes a left-eye image and a right-eye image; obtaining disparity information of the drivable area boundary in the binocular image according to the drivable area boundary in the left-eye image and the drivable area boundary in the right-eye image; and obtaining, based on the disparity information, the vehicle's drivable area in the binocular image.
  • the above-mentioned binocular image may include a left-eye image and a right-eye image; for example, it may refer to the left and right two-dimensional images respectively collected by two parallel and equal-height cameras in an autonomous vehicle.
  • Among them, the binocular image may be an image of the road surface or the surrounding environment obtained by the autonomous vehicle in the driving direction; for example, it includes images of the road surface and of obstacles and pedestrians near the vehicle.
  • the parallax may refer to the difference in direction caused by observing the same target from two points with a certain distance.
  • For example, the difference in the horizontal direction between the positions of the drivable area of the same road as captured by the left-eye camera and the right-eye camera of an autonomous vehicle is parallax (disparity) information.
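  • As a concrete illustration (an expository sketch using standard stereo geometry, not a limitation of the claims): for a rectified binocular pair with focal length f (in pixels) and baseline B, the disparity of a matched boundary point and the depth recovered from it are related by the triangulation formulas below.

```latex
% Disparity: horizontal offset between the left-eye and right-eye
% pixel coordinates (x_l, x_r) of the same boundary point.
d = x_l - x_r
% Depth (distance along the optical axis) recovered from the disparity:
Z = \frac{f \cdot B}{d}
```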
  • In the embodiment of the present application, the obtained left-eye image and right-eye image can be separately input into a deep learning network pre-trained to identify the drivable area in an image; the drivable area in the left-eye image and the right-eye image can then be identified through the pre-trained deep learning network.
  • In the embodiment of the present application, a binocular image of the driving direction of the vehicle can be obtained, and based on the binocular image, the boundary of the vehicle's drivable area in the left-eye image and in the right-eye image can be acquired. According to the drivable area boundary in the left-eye image and that in the right-eye image, the disparity information of the drivable area boundary in the binocular image can be obtained, and based on the disparity information, the position of the vehicle's drivable area in the binocular image. The detection method in the embodiment of the present application thus avoids pixel-by-pixel disparity calculation over the whole binocular image to obtain a global disparity image: only the disparity information of the drivable area boundary needs to be calculated to locate the boundary points of the drivable area in the coordinate system of the autonomous vehicle. This greatly reduces the amount of calculation while ensuring detection accuracy, and improves the efficiency with which the autonomous vehicle detects road conditions.
  • With reference to the first aspect, in certain implementations, the method further includes: performing segmentation processing on the drivable area boundary in a first image; and matching the drivable area boundary on a second image based on the N boundary point line segments obtained by the segmentation processing, so as to obtain the disparity information, where the first image is either one of the binocular images, the second image is the other image of the binocular images, and N is an integer greater than or equal to 2.
  • For example, the first image may be the left-eye image of the binocular image and the second image the right-eye image; alternatively, the first image may be the right-eye image and the second image the left-eye image.
  • In one example, the drivable area boundary in the left-eye image can be segmented at the inflection points of that boundary to obtain N segments of the drivable area boundary in the left-eye image, and the drivable area boundary in the right-eye image can then be matched against these N segments to obtain the parallax information.
  • In another example, the drivable area boundary in the right-eye image can be segmented at the inflection points of that boundary to obtain N segments of the drivable area boundary in the right-eye image, and the drivable area boundary in the left-eye image can then be matched against these N segments to obtain the disparity information.
  • In the embodiment of the present application, when the drivable area boundary in the left-eye image is matched with the drivable area boundary in the right-eye image, that is, when the disparity information of the drivable area boundary in the binocular image is calculated, the boundary in either one of the images can first be segmented and the resulting segments matched piecewise. This improves the accuracy of drivable-area boundary matching, which is conducive to acquiring more accurate information about the area on the road in which the vehicle can travel.
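  • A minimal sketch of such inflection-point segmentation is given below. It is illustrative only: the function name, the slope-jump threshold, and the use of NumPy are assumptions for exposition, not details taken from this application.

```python
import numpy as np

def segment_boundary(points, slope_jump=0.5):
    """Split an ordered polyline of drivable-area boundary points
    (pixel coordinates, shape (M, 2)) into segments at inflection
    points, i.e. where the local slope changes abruptly."""
    points = np.asarray(points, dtype=float)
    dx = np.diff(points[:, 0])
    dy = np.diff(points[:, 1])
    # Slope of each consecutive edge; near-vertical edges get a large value.
    slopes = dy / np.where(np.abs(dx) < 1e-6, 1e-6, dx)
    # Cut wherever the slope jumps by more than the threshold.
    cuts = np.where(np.abs(np.diff(slopes)) > slope_jump)[0] + 1
    return np.split(points, cuts)  # list of N boundary point line segments
```

  • Each returned segment can then be classified by its mean slope so that a suitable matching strategy can be chosen for it, as described below.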
  • With reference to the first aspect, in certain implementations, matching the drivable area boundary on the second image based on the N boundary point line segments obtained by the segmentation processing to obtain the disparity information includes: performing drivable-area boundary matching on the second image according to the N boundary point line segments and a matching strategy, where the matching strategy is determined according to the slopes of the N boundary point line segments.
  • In the embodiment of the present application, a matching strategy can also be used when segment-wise matching the drivable areas in the left-eye image and the right-eye image; that is, boundary point line segments with different slopes can be matched with different matching strategies, thereby improving the matching accuracy of the boundary points of the drivable area.
  • With reference to the first aspect, in certain implementations, performing drivable-area boundary matching on the second image according to the N boundary point line segments and a matching strategy to obtain the disparity information includes: for a first-type boundary point line segment, using a first matching strategy of the matching strategy to perform drivable-area boundary matching on the second image, where the first-type boundary point line segment includes the drivable area boundary at the edge of the road and the drivable area boundary along the sides of other vehicles; and for a second-type boundary point line segment, using a second matching strategy of the matching strategy to perform drivable-area boundary matching on the second image, where the boundary points in the second-type boundary point line segment are at the same distance from the vehicle.
  • In other words, the first type of boundary point line segment may refer to a segment with a relatively large slope, for example, at a road edge or along the side of another vehicle; the second type of boundary point line segment may refer to a segment with a relatively small slope, for example, across the rear of another vehicle.
  • In the embodiment of the present application, a segmented matching strategy based on the slope of the drivable area boundary is proposed: the detected drivable area boundary is segmented according to its slope distribution, and different matching strategies are used to match different segments, which can improve the matching accuracy of the boundary points of the drivable area.
  • With reference to the first aspect, in certain implementations, the matching strategy includes a first matching strategy and a second matching strategy, where the first matching strategy refers to matching through a search area, the search area being a region generated with any point of a first boundary point line segment as its center, and the first boundary point line segment being any one of the N boundary point line segments; the second matching strategy refers to matching through a preset search step, which is determined based on the boundary point disparity of one of the N boundary point line segments.
  • For example, one boundary point of a drivable-area boundary point segment of the first image (for example, the left-eye image) of the binocular image can be used as a template point, and a search area can be generated around the corresponding boundary point of the second image (for example, the right-eye image) to match the template point of the first image. Ideally, a boundary point extracted from the first image corresponds to the same physical location as the boundary point extracted from the second image.
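  • The following sketch contrasts the two strategies. The window sizes, the sum-of-absolute-differences cost, and the helper names are assumptions for illustration; the application does not prescribe a particular cost function.

```python
import numpy as np

def sad(a, b):
    """Sum of absolute differences between two equally sized patches."""
    return float(np.abs(a.astype(float) - b.astype(float)).sum())

def match_steep_point(left, right, r, c, max_disp=64, half=4):
    """First strategy (large-slope segments such as road edges or vehicle
    sides): use the left-eye boundary point as a template and search a
    small 2-D area in the right-eye image for the best match."""
    tmpl = left[r - half:r + half + 1, c - half:c + half + 1]
    best_cost, best_d = np.inf, 0
    for dr in (-1, 0, 1):                      # small vertical tolerance
        for d in range(max_disp):              # horizontal disparity candidates
            lo = c - d - half
            if lo < 0:
                break                          # ran past the left image edge
            cand = right[r + dr - half:r + dr + half + 1, lo:lo + 2 * half + 1]
            if cand.shape != tmpl.shape:
                continue
            cost = sad(tmpl, cand)
            if cost < best_cost:
                best_cost, best_d = cost, d
    return best_d

def match_flat_segment(left, right, pts, seed_disp, step=1, span=3):
    """Second strategy (small-slope segments, e.g. the rear of another
    vehicle, whose points are roughly equidistant from the ego vehicle):
    reuse the disparity of one already matched point as a seed and search
    only a narrow band of +/- span*step around it."""
    disps = []
    for r, c in pts:
        best_cost, best_d = np.inf, seed_disp
        for d in range(max(0, seed_disp - span * step),
                       seed_disp + span * step + 1, step):
            if c - d < 0:
                continue
            cost = abs(float(left[r, c]) - float(right[r, c - d]))
            if cost < best_cost:
                best_cost, best_d = cost, d
        disps.append(best_d)
    return disps
```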
  • the N boundary point line segments are determined according to inflection points of the travelable area boundary in the first image.
  • In a second aspect, a device for detecting a vehicle's drivable area is provided, including: an acquisition unit, configured to acquire binocular images of the vehicle's traveling direction, where the binocular images include a left-eye image and a right-eye image; and a processing unit, configured to obtain disparity information of the drivable area boundary in the binocular image according to the drivable area boundary in the left-eye image and the drivable area boundary in the right-eye image, and to obtain, based on the disparity information, the area in the binocular image in which the vehicle can travel.
  • the above-mentioned binocular image may include a left-eye image and a right-eye image; for example, it may refer to the left and right two-dimensional images respectively collected by two parallel and equal-height cameras in an autonomous vehicle.
  • Among them, the binocular image may be an image of the road surface or the surrounding environment obtained by the autonomous vehicle in the driving direction; for example, it includes images of the road surface and of obstacles and pedestrians near the vehicle.
  • the parallax may refer to the difference in direction caused by observing the same target from two points with a certain distance.
  • For example, the difference in the horizontal direction between the positions of the drivable area of the same road as captured by the left-eye camera and the right-eye camera of an autonomous vehicle is parallax (disparity) information.
  • In the embodiment of the present application, the obtained left-eye image and right-eye image can be separately input into a deep learning network pre-trained to identify the drivable area in an image; the drivable area in the left-eye image and the right-eye image can then be identified through the pre-trained deep learning network.
  • In the embodiment of the present application, a binocular image of the driving direction of the vehicle can be obtained, and based on the binocular image, the boundary of the vehicle's drivable area in the left-eye image and in the right-eye image can be acquired. According to the drivable area boundary in the left-eye image and that in the right-eye image, the disparity information of the drivable area boundary in the binocular image can be obtained, and based on the disparity information, the position of the vehicle's drivable area in the binocular image. The detection method in the embodiment of the present application thus avoids pixel-by-pixel disparity calculation over the whole binocular image to obtain a global disparity image: only the disparity information of the drivable area boundary needs to be calculated to locate the boundary points of the drivable area in the coordinate system of the autonomous vehicle. This greatly reduces the amount of calculation while ensuring detection accuracy, and improves the efficiency with which the autonomous vehicle detects road conditions.
  • With reference to the second aspect, in certain implementations, the processing unit is further configured to: perform segmentation processing on the drivable area boundary in a first image; and match the drivable area boundary on a second image based on the N boundary point line segments obtained by the segmentation processing to obtain the disparity information, where N is an integer greater than or equal to 2.
  • For example, the first image may be the left-eye image of the binocular image and the second image the right-eye image; alternatively, the first image may be the right-eye image and the second image the left-eye image.
  • In one example, the drivable area boundary in the left-eye image can be segmented at the inflection points of that boundary to obtain N segments of the drivable area boundary in the left-eye image, and the drivable area boundary in the right-eye image can then be matched against these N segments to obtain the parallax information.
  • In another example, the drivable area boundary in the right-eye image can be segmented at the inflection points of that boundary to obtain N segments of the drivable area boundary in the right-eye image, and the drivable area boundary in the left-eye image can then be matched against these N segments to obtain the disparity information.
  • In the embodiment of the present application, when the drivable area boundary in the left-eye image is matched with the drivable area boundary in the right-eye image, that is, when the disparity information of the drivable area boundary in the binocular image is calculated, the boundary in either one of the images can first be segmented and the resulting segments matched piecewise. This improves the accuracy of drivable-area boundary matching, which is conducive to acquiring more accurate information about the area on the road in which the vehicle can travel.
  • With reference to the second aspect, in certain implementations, the processing unit is specifically configured to: perform drivable-area boundary matching on the second image according to the N boundary point line segments and a matching strategy to obtain the disparity information, where the matching strategy is determined according to the slopes of the N boundary point line segments.
  • In the embodiment of the present application, a matching strategy can also be used when segment-wise matching the drivable areas in the left-eye image and the right-eye image; that is, boundary point line segments with different slopes can be matched with different matching strategies, thereby improving the matching accuracy of the boundary points of the drivable area.
  • With reference to the second aspect, in certain implementations, the processing unit is specifically configured to: for a first-type boundary point line segment, use a first matching strategy of the matching strategy to perform drivable-area boundary matching on the second image, where the first-type boundary point line segment includes the drivable area boundary at the edge of the road and the drivable area boundary along the sides of other vehicles; and for a second-type boundary point line segment, use a second matching strategy of the matching strategy, where the boundary points in the second-type boundary point line segment are at the same distance from the vehicle.
  • In other words, the first type of boundary point line segment may refer to a segment with a relatively large slope, for example, at a road edge or along the side of another vehicle; the second type of boundary point line segment may refer to a segment with a relatively small slope, for example, across the rear of another vehicle.
  • In the embodiment of the present application, a segmented matching strategy based on the slope of the drivable area boundary is proposed: the detected drivable area boundary is segmented according to its slope distribution, and different matching strategies are used to match different segments, which can improve the matching accuracy of the boundary points of the drivable area.
  • With reference to the second aspect, in certain implementations, the matching strategy includes a first matching strategy and a second matching strategy, where the first matching strategy refers to matching through a search area, the search area being a region generated with any point of a first boundary point line segment as its center, and the first boundary point line segment being any one of the N boundary point line segments; the second matching strategy refers to matching through a preset search step, which is determined based on the boundary point disparity of one of the N boundary point line segments.
  • For example, one boundary point of a drivable-area boundary point segment of the first image (for example, the left-eye image) of the binocular image can be used as a template point, and a search area can be generated around the corresponding boundary point of the second image (for example, the right-eye image) to match the template point of the first image. Ideally, a boundary point extracted from the first image corresponds to the same physical location as the boundary point extracted from the second image.
  • the N boundary point line segments are determined according to inflection points of the travelable area boundary in the first image.
  • In a third aspect, a device for detecting a vehicle's drivable area is provided, including: a memory for storing a program; and a processor for executing the program stored in the memory. When the program stored in the memory is executed, the processor is configured to perform the following process: acquiring a binocular image of the driving direction of the vehicle, where the binocular image includes a left-eye image and a right-eye image; obtaining disparity information of the drivable area boundary in the binocular image according to the drivable area boundary in the left-eye image and the drivable area boundary in the right-eye image; and obtaining, based on the disparity information, the vehicle's drivable area in the binocular image.
  • Optionally, the processor included in the foregoing detection device is further configured to execute the detection method in the first aspect and any one of the implementations of the first aspect.
  • In a fourth aspect, a computer-readable storage medium is provided. The computer-readable storage medium is used to store program code; when the program code is executed by a computer, the computer executes the detection method in the first aspect and any one of the implementations of the first aspect.
  • In a fifth aspect, a chip is provided. The chip includes a processor, and the processor is configured to execute the detection method in any one of the foregoing first aspect and its implementations.
  • the chip of the fifth aspect described above may be located in an in-vehicle terminal of an autonomous vehicle.
  • In a sixth aspect, a computer program product is provided, including computer program code; when the computer program code runs on a computer, the computer executes the detection method in any one of the first aspect and its implementations.
  • It should be noted that the above-mentioned computer program code may be stored in whole or in part on a first storage medium, where the first storage medium may be packaged together with the processor or packaged separately from the processor; this is not specifically limited in the embodiments of the present application.
  • Fig. 1 is a schematic structural diagram of a vehicle provided by an embodiment of the present application.
  • Figure 2 is a schematic structural diagram of a computer system provided by an embodiment of the present application.
  • FIG. 3 is a schematic diagram of the application of a cloud-side command automatic driving vehicle provided by an embodiment of the present application.
  • Fig. 4 is a schematic diagram of a detection system for an autonomous vehicle provided by an embodiment of the present application.
  • FIG. 5 is a schematic diagram of coordinate conversion provided by an embodiment of the present application.
  • FIG. 6 is a schematic flowchart of a method for detecting a vehicle drivable area provided by an embodiment of the present application.
  • FIG. 7 is a schematic flowchart of a method for detecting a vehicle drivable area provided by an embodiment of the present application.
  • Fig. 8 is a schematic diagram of a boundary segment of a drivable area provided by an embodiment of the present application.
  • FIG. 9 is a schematic diagram of acquiring a road surface image provided by an embodiment of the present application.
  • FIG. 10 is a schematic diagram of matching a boundary point line segment with a relatively large slope provided by an embodiment of the present application.
  • FIG. 11 is a schematic diagram of a boundary point line segment with a small slope provided by an embodiment of the present application.
  • FIG. 12 is a schematic diagram of sub-pixelation processing provided by an embodiment of the present application.
  • FIG. 13 is a schematic diagram of obtaining positioning of boundary points of a drivable area provided by an embodiment of the present application.
  • FIG. 14 is a schematic diagram of obtaining positioning of boundary points of a drivable area provided by an embodiment of the present application.
  • FIG. 15 is a schematic diagram of the method for detecting a vehicle travelable area according to an embodiment of the present application applied to a specific product form.
  • FIG. 16 is a schematic structural diagram of a detection device for a vehicle travelable area provided by an embodiment of the present application.
  • FIG. 17 is a schematic structural diagram of a detection device for a vehicle travelable area provided by another embodiment of the present application.
  • Fig. 1 is a functional block diagram of a vehicle 100 provided by an embodiment of the present application.
  • the vehicle 100 may be a manually driven vehicle, or the vehicle 100 may be configured in a fully or partially automatic driving mode.
  • In one possible implementation, the vehicle 100 can control itself while in the automatic driving mode: it can determine the current state of the vehicle and of its surrounding environment, determine the possible behavior of at least one other vehicle in the surrounding environment, determine a confidence level corresponding to the possibility that the other vehicle performs the possible behavior, and control the vehicle 100 based on the determined information.
  • the vehicle 100 can be placed to operate without human interaction.
  • the vehicle 100 may include various subsystems, such as a traveling system 110, a sensing system 120, a control system 130, one or more peripheral devices 140 and a power supply 160, a computer system 150, and a user interface 170.
  • the vehicle 100 may include more or fewer subsystems, and each subsystem may include multiple elements.
  • each of the subsystems and elements of the vehicle 100 may be wired or wirelessly interconnected.
  • Among them, the traveling system 110 may include components that provide powered motion for the vehicle 100.
  • the travel system 110 may include an engine 111, a transmission 112, an energy source 113, and wheels 114/tires.
  • the engine 111 may be an internal combustion engine, an electric motor, an air compression engine, or other types of engine combinations; for example, a hybrid engine composed of a gasoline engine and an electric motor, or a hybrid engine composed of an internal combustion engine and an air compression engine.
  • the engine 111 can convert the energy source 113 into mechanical energy.
  • the energy source 113 may include gasoline, diesel, other petroleum-based fuels, propane, other compressed gas-based fuels, ethanol, solar panels, batteries, and other power sources.
  • the energy source 113 may also provide energy for other systems of the vehicle 100.
  • the transmission device 112 may include a gearbox, a differential, and a drive shaft; wherein, the transmission device 112 may transmit mechanical power from the engine 111 to the wheels 114.
  • the transmission device 112 may also include other devices, such as a clutch.
  • the drive shaft may include one or more shafts that can be coupled to one or more wheels 114.
  • the sensing system 120 may include several sensors that sense information about the environment around the vehicle 100.
  • the sensing system 120 may include a positioning system 121 (for example, a GPS system, a Beidou system or other positioning systems), an inertial measurement unit 122 (IMU), a radar 123, a laser rangefinder 124, and a camera 125.
  • Optionally, the sensing system 120 may also include sensors that monitor the internal systems of the vehicle 100 (for example, an in-vehicle air quality monitor, a fuel gauge, an oil temperature gauge, etc.). Sensor data from one or more of these sensors can be used to detect objects and their corresponding characteristics (position, shape, direction, speed, etc.). Such detection and identification are key functions for the safe operation of the autonomous vehicle 100.
  • the positioning system 121 can be used to estimate the geographic location of the vehicle 100.
  • the IMU 122 may be used to sense changes in the position and orientation of the vehicle 100 based on inertial acceleration.
  • the IMU 122 may be a combination of an accelerometer and a gyroscope.
  • the radar 123 may use radio signals to sense objects in the surrounding environment of the vehicle 100. In some embodiments, in addition to sensing the object, the radar 123 may also be used to sense the speed and/or direction of the object.
  • the laser rangefinder 124 may use laser light to sense objects in the environment where the vehicle 100 is located.
  • the laser rangefinder 124 may include one or more laser sources, laser scanners, and one or more detectors, as well as other system components.
  • the camera 125 may be used to capture multiple images of the surrounding environment of the vehicle 100.
  • the camera 125 may be a still camera or a video camera.
  • The control system 130 controls the operation of the vehicle 100 and its components.
  • the control system 130 may include various elements, such as a steering system 131, a throttle 132, a braking unit 133, a computer vision system 134, a route control system 135, and an obstacle avoidance system 136.
  • the steering system 131 may be operated to adjust the forward direction of the vehicle 100.
  • it may be a steering wheel system in one embodiment.
  • the throttle 132 may be used to control the operating speed of the engine 111 and thereby control the speed of the vehicle 100.
  • the braking unit 133 may be used to control the deceleration of the vehicle 100; the braking unit 133 may use friction to slow down the wheels 114. In other embodiments, the braking unit 133 may convert the kinetic energy of the wheels 114 into electric current. The braking unit 133 may also take other forms to slow down the rotation speed of the wheels 114 to control the speed of the vehicle 100.
  • the computer vision system 134 may be operable to process and analyze the images captured by the camera 125 in order to identify objects and/or features in the surrounding environment of the vehicle 100.
  • the aforementioned objects and/or features may include traffic signals, road boundaries and obstacles.
  • the computer vision system 134 may use object recognition algorithms, Structure from Motion (SFM) algorithms, video tracking, and other computer vision technologies.
  • SFM Structure from Motion
  • the computer vision system 134 may be used to map the environment, track objects, estimate the speed of objects, and so on.
  • the route control system 135 may be used to determine the travel route of the vehicle 100.
  • the route control system 135 may combine data from sensors, GPS, and one or more predetermined maps to determine a travel route for the vehicle 100.
  • the obstacle avoidance system 136 may be used to identify, evaluate, and avoid or otherwise surpass potential obstacles in the environment of the vehicle 100.
  • Of course, the control system 130 may additionally or alternatively include components other than those shown and described, or some of the components shown above may be omitted.
  • In one possible implementation, the vehicle 100 can interact with external sensors, other vehicles, other computer systems, or users through a peripheral device 140, where the peripheral device 140 may include a wireless communication system 141, an onboard computer 142, a microphone 143, and/or a speaker 144.
  • the peripheral device 140 may provide a means for the vehicle 100 to interact with the user interface 170.
  • the onboard computer 142 may provide information to the user of the vehicle 100.
  • Optionally, the user interface 170 can also be used to operate the onboard computer 142 to receive user input; for example, the onboard computer 142 can be operated through a touch screen.
  • the peripheral device 140 may provide a means for the vehicle 100 to communicate with other devices located in the vehicle.
  • the microphone 143 may receive audio (eg, voice commands or other audio input) from the user of the vehicle 100.
  • the speaker 144 may output audio to the user of the vehicle 100.
  • the wireless communication system 141 may wirelessly communicate with one or more devices directly or via a communication network.
  • Optionally, the wireless communication system 141 can use 3G cellular communication, for example, code division multiple access (CDMA), EVDO, or global system for mobile communications (GSM)/general packet radio service (GPRS); or 4G cellular communication, such as long term evolution (LTE); or 5G cellular communication.
  • the wireless communication system 141 can communicate with a wireless local area network (WLAN) using wireless Internet access (WiFi).
  • Optionally, the wireless communication system 141 may communicate directly with devices using an infrared link, Bluetooth, or ZigBee, or via other wireless protocols such as various vehicle communication systems; for example, the wireless communication system 141 may include one or more dedicated short range communications (DSRC) devices, which may carry public and/or private data communications between vehicles and/or roadside stations.
  • the power source 160 may provide power to various components of the vehicle 100.
  • the power source 160 may be a rechargeable lithium ion or lead-acid battery.
  • One or more battery packs of such batteries may be configured as a power source to provide power to various components of the vehicle 100.
  • the power source 160 and the energy source 113 may be implemented together, such as in some all-electric vehicles.
  • In one possible implementation, part or all of the functions of the vehicle 100 may be controlled by the computer system 150, where the computer system 150 may include at least one processor 151, and the processor 151 executes instructions stored in a non-transitory computer-readable medium such as the memory 152.
  • the computer system 150 may also be multiple computing devices that control individual components or subsystems of the vehicle 100 in a distributed manner.
  • the processor 151 may be any conventional processor, such as a commercially available CPU.
  • the processor may be a dedicated device such as an ASIC or other hardware-based processor.
  • Although FIG. 1 functionally illustrates the processor, memory, and other elements of the computer in the same block, those of ordinary skill in the art should understand that the processor, computer, or memory may actually comprise multiple processors, computers, or memories that may or may not be stored within the same physical housing.
  • For example, the memory may be a hard disk drive or other storage medium located in a housing different from that of the computer. Therefore, a reference to a processor or computer will be understood to include a reference to a collection of processors, computers, or memories that may or may not operate in parallel. Rather than using a single processor to perform the steps described here, some components, such as steering and deceleration components, may each have their own processor that performs only calculations related to the component-specific function.
  • the processor may be located away from the vehicle and wirelessly communicate with the vehicle.
  • some of the processes described herein are executed on a processor disposed in the vehicle and others are executed by a remote processor, including taking the necessary steps to perform a single manipulation.
  • the memory 152 may contain instructions 153 (eg, program logic), which may be executed by the processor 151 to perform various functions of the vehicle 100, including those functions described above.
  • the memory 152 may also contain additional instructions, for example, including sending data to, receiving data from, interacting with, and/or performing data to one or more of the traveling system 110, the sensing system 120, the control system 130, and the peripheral device 140. Control instructions.
  • the memory 152 may also store data, such as road maps, route information, the position, direction, and speed of the vehicle, and other such vehicle data, as well as other information. Such information may be used by the vehicle 100 and the computer system 150 during the operation of the vehicle 100 in autonomous, semi-autonomous, and/or manual modes.
  • the user interface 170 may be used to provide information to or receive information from a user of the vehicle 100.
  • the user interface 170 may include one or more input/output devices in the set of peripheral devices 140, for example, a wireless communication system 141, a car computer 142, a microphone 143, and a speaker 144.
  • the computer system 150 may control the functions of the vehicle 100 based on inputs received from various subsystems (for example, the traveling system 110, the sensing system 120, and the control system 130) and from the user interface 170.
  • the computer system 150 may use input from the control system 130 in order to control the braking unit 133 to avoid obstacles detected by the sensing system 120 and the obstacle avoidance system 136.
  • the computer system 150 is operable to provide control of many aspects of the vehicle 100 and its subsystems.
  • one or more of these components described above may be installed or associated with the vehicle 100 separately.
  • the memory 152 may be partially or completely separated from the vehicle 100.
  • the above-mentioned components may be communicatively coupled together in a wired and/or wireless manner.
  • FIG. 1 should not be construed as a limitation to the embodiments of the present application.
  • the vehicle 100 may be an autonomous vehicle traveling on a road, and may recognize objects in its surrounding environment to determine the adjustment to the current speed.
  • the object may be other vehicles, traffic control equipment, or other types of objects.
  • In one possible implementation, each recognized object can be considered independently, and the respective characteristics of the object, such as its current speed, acceleration, and distance from the vehicle, can be used to determine the speed to which the self-driving car is to be adjusted.
  • Optionally, the vehicle 100 or a computing device associated with the vehicle 100 may predict the behavior of the identified object based on the characteristics of the identified object and the state of the surrounding environment (for example, traffic, rain, ice on the road, etc.).
  • Optionally, the behaviors of the recognized objects may depend on one another; therefore, all the recognized objects can also be considered together to predict the behavior of a single recognized object.
  • the vehicle 100 can adjust its speed based on the predicted behavior of the identified object.
  • In other words, the self-driving car can determine, based on the predicted behavior of the object, the stable state to which the vehicle needs to be adjusted (e.g., accelerate, decelerate, or stop).
  • other factors may also be considered to determine the speed of the vehicle 100, such as the lateral position of the vehicle 100 on the road on which it is traveling, the curvature of the road, the proximity of static and dynamic objects, and so on.
  • Optionally, the computing device can also provide instructions to modify the steering angle of the vehicle 100 so that the self-driving car follows a given trajectory and/or maintains safe lateral and longitudinal distances from objects near the self-driving car (for example, cars in adjacent lanes on the road).
  • The above-mentioned vehicle 100 may be a car, truck, motorcycle, bus, boat, airplane, helicopter, lawn mower, recreational vehicle, playground vehicle, construction equipment, tram, golf cart, train, trolley, or the like; the embodiment of the present application is not particularly limited in this respect.
  • the vehicle 100 shown in FIG. 1 may be an automatic driving vehicle, and the automatic driving system will be described in detail below.
  • Fig. 2 is a schematic diagram of an automatic driving system provided by an embodiment of the present application.
  • the automatic driving system shown in FIG. 2 includes a computer system 201, where the computer system 201 includes a processor 203, which is coupled to a system bus 205.
  • the processor 203 may be one or more processors, where each processor may include one or more processor cores.
  • the display adapter 207 (video adapter) can drive the display 209, and the display 209 is coupled to the system bus 205.
  • the system bus 205 may be coupled to an input/output (I/O) bus 213 through a bus bridge 211, and an I/O interface 215 is coupled to an I/O bus.
  • The I/O interface 215 communicates with a variety of I/O devices, such as an input device 217 (for example, a keyboard, mouse, or touch screen) and a media tray 221 (for example, a CD-ROM or multimedia interface).
  • The transceiver 223 can send and/or receive radio communication signals, and the camera 255 can capture scenes and dynamic digital video images.
  • the interface connected to the I/O interface 215 may be the USB port 225.
  • the processor 203 may be any traditional processor, such as a reduced instruction set computer (RISC) processor, a complex instruction set computer (CISC) processor, or a combination of the foregoing.
  • the processor 203 may be a dedicated device such as an application specific integrated circuit (ASIC); the processor 203 may be a neural network processor or a combination of a neural network processor and the foregoing traditional processors.
  • the computer system 201 may be located far away from the autonomous driving vehicle, and may wirelessly communicate with the autonomous driving vehicle.
  • some of the processes described herein are executed on a processor provided in an autonomous vehicle, and others are executed by a remote processor, including taking actions required to perform a single manipulation.
  • the computer system 201 can communicate with the software deployment server 249 through the network interface 229.
  • the network interface 229 may be a hardware network interface, such as a network card.
  • the network 227 may be an external network, such as the Internet, or an internal network, such as an Ethernet or a virtual private network (VPN).
  • Optionally, the network 227 may also be a wireless network, such as a WiFi network or a cellular network.
  • The hard disk drive interface 231 is coupled to the system bus 205 and can be connected to the hard disk drive 233; the system memory 235 is also coupled to the system bus 205.
  • the data running in the system memory 235 may include an operating system 237 and application programs 243.
  • the operating system 237 may include a parser 239 (shell) and a kernel 241 (kernel).
  • the shell 239 is an interface between the user and the kernel of the operating system.
  • The shell can be the outermost layer of the operating system; it manages the interaction between the user and the operating system, for example, waiting for the user's input, interpreting the user's input to the operating system, and processing the output results of various operating system services.
  • The kernel 241 may be composed of those parts of the operating system that are used to manage memory, files, peripherals, and system resources, and that interact directly with the hardware.
  • the operating system kernel usually runs processes and provides inter-process communication, providing CPU time slice management, interrupts, memory management, IO management, and so on.
  • Application programs 243 include programs that control the self-driving car, such as programs that manage the interaction between the autonomous vehicle and obstacles on the road, programs that control the route or speed of the autonomous vehicle, and programs that control the interaction between the autonomous vehicle and other autonomous vehicles on the road.
  • the application program 243 also exists on the system of the software deployment server 249. In one embodiment, the computer system 201 may download the application program from the software deployment server 249 when the automatic driving-related program 247 needs to be executed.
  • the application program 243 may also be a program for controlling an automatic driving vehicle to perform automatic parking.
  • Optionally, the sensor 253 may be associated with the computer system 201, and the sensor 253 may be used to detect the environment around the computer system 201.
  • For example, the sensor 253 can detect animals, cars, obstacles, and crosswalks; further, the sensor can also detect the environment surrounding such animals, cars, obstacles, and crosswalks, for example, other animals around an animal, the weather conditions, and the brightness of the surrounding environment.
  • Optionally, the sensor may be a camera, an infrared sensor, a chemical detector, a microphone, or the like.
  • In one example, the sensor 253 can be used to detect the size or position of a parking space and of surrounding obstacles, so that the vehicle can perceive the distance between the parking space and the surrounding obstacles, and perform collision detection when parking to prevent collisions between the vehicle and obstacles.
  • the computer system 150 shown in FIG. 1 may also receive information from other computer systems or transfer information to other computer systems.
  • the sensor data collected from the sensor system 120 of the vehicle 100 may be transferred to another computer to process the data.
  • data from the computer system 312 may be transmitted to the server 320 on the cloud side via the network for further processing.
  • Optionally, the network and intermediate nodes can include various configurations and protocols, including the Internet, the World Wide Web, intranets, virtual private networks, wide area networks, local area networks, private networks using one or more companies' proprietary communication protocols, Ethernet, WiFi, HTTP, and various combinations of the foregoing; such communication can be performed by any device capable of transferring data to and from other computers, such as modems and wireless interfaces.
  • the server 320 may include a server with multiple computers, such as a load balancing server group, which exchanges information with different nodes of the network for the purpose of receiving, processing, and transmitting data from the computer system 312.
  • the server may be configured similarly to the computer system 312, with a processor 330, a memory 340, instructions 350, and data 360.
  • the data 360 of the server 320 may include information related to road conditions around the vehicle.
  • the server 320 may receive, detect, store, update, and transmit information related to the road conditions of the vehicle.
  • the information related to the road conditions around the vehicle includes information about other vehicles around the vehicle and obstacle information.
  • Currently, the drivable-area detection of autonomous vehicles usually adopts a monocular vision method or a binocular vision method. The monocular vision method obtains images of the road environment through a monocular camera and detects the drivable area in the image with a pre-trained deep neural network; then, based on the detected drivable area and a plane assumption (for example, that the autonomous vehicle is in a flat area with no slope), the drivable area is converted from the two-dimensional pixel coordinate system to the three-dimensional coordinate system in which the autonomous vehicle is located, completing the spatial positioning of the drivable area.
  • The binocular vision method obtains a global disparity map of the road environment from the images output separately by the binocular camera, detects the drivable area from the disparity map, and then, using the distance-measurement capability of the binocular camera, converts the drivable area from the two-dimensional pixel coordinate system to the three-dimensional coordinate system in which the autonomous vehicle is located, completing the spatial positioning of the drivable area.
  • Because a monocular camera cannot perceive distance, transforming the drivable area from the pixel coordinate system to the coordinate system of the autonomous vehicle must rely on the plane assumption, that is, that the road surface on which the vehicle is located is completely flat with no ramps; this results in low positioning accuracy of the drivable area.
  • In the binocular vision method, the positioning of the drivable area depends on the disparity map, and computing the global disparity map from the binocular camera is computationally expensive, so the autonomous vehicle cannot achieve real-time processing, leading to safety risks while driving. Therefore, how to improve the real-time performance of the method for detecting the vehicle's drivable area has become an urgent problem to be solved.
  • In view of this, the embodiment of the present application provides a method and device for detecting a vehicle's drivable area. Binocular images of the vehicle's traveling direction can be obtained, and based on them, the boundary of the vehicle's drivable area in the left-eye image and in the right-eye image. According to the drivable area boundary in the left-eye image and that in the right-eye image, the parallax information of the drivable area boundary in the binocular image can be obtained, and based on this disparity information, the position of the vehicle's drivable area in the binocular image. The detection method in the embodiment of the present application can thus avoid pixel-by-pixel disparity calculation over the whole binocular image to obtain a global disparity image; there is no need to calculate the global disparity image, and only the disparity information of the drivable area boundary needs to be calculated.
  • Fig. 4 is a schematic diagram of a detection system for an autonomous vehicle provided by an embodiment of the present application.
  • the detection system 400 may be used to implement a method for detecting a vehicle's drivable area.
  • the detection system 400 may include a perception module 410, a drivable area detection module 420, a drivable area registration module 430, and a coordinate system conversion module 440.
  • The perception module 410 can be used to perceive information about the road surface and the surrounding environment while the autonomous vehicle is driving. The perception module may include a binocular camera, which in turn includes a left-eye camera and a right-eye camera, and can be used to perceive the environmental information around the vehicle for the subsequent deep learning network to detect the drivable area. The left-eye camera and the right-eye camera need to satisfy image frame synchronization, which can be achieved through hardware or software.
  • In one example, the baseline distance of the binocular camera can be greater than 30 cm to ensure that it can support the detection of objects at about 100 meters.
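  • A rough back-of-the-envelope check shows why the baseline matters at that range (the focal length here is an assumed value for illustration, not one from this application):

```python
f_px = 1400.0                 # assumed focal length, in pixels
B = 0.30                      # baseline, in meters
Z = 100.0                     # target distance, in meters
d = f_px * B / Z              # disparity at 100 m: ~4.2 pixels
err = Z ** 2 / (f_px * B)     # depth error per pixel of disparity error: ~24 m
print(d, err)
```

  • With only a few pixels of disparity at 100 m, a one-pixel matching error translates into tens of meters of range error, which is why the sub-pixelation step described below is needed.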
  • The drivable area detection module 420 can be used to detect the drivable area in the pixel coordinate system; this module may consist of a deep learning network that detects the drivable area in the left-eye image and the right-eye image. The input data of the detection module 420 may be the image data collected by the left-eye camera and the right-eye camera included in the above-mentioned perception module 410, and the output data may be the boundary points of the drivable area in the left-eye image and the right-eye image in the pixel coordinate system.
  • the drivable area registration module 430 can be used to perform the following five steps at least:
  • the first step the left-eye drivable area boundary segmentation processing; that is, in the pixel coordinate system, the boundary points of the left-eye image drivable area extracted above are processed according to the slope transformation, and the continuous drivable area
  • the boundary point is divided into N segments.
• The second step: segment matching between the boundary points of the left-eye drivable area and the boundary points of the right-eye drivable area; that is, in the pixel coordinate system, different matching strategies are applied to the segmented boundary points of the drivable area in order to match the boundary points of the left-eye drivable area with those of the right-eye drivable area.
• The third step: registration filtering of the drivable area; that is, in the pixel coordinate system, the matched boundary points of the drivable area are filtered by a filtering algorithm to ensure the accuracy of the boundary point registration.
• The fourth step: disparity calculation of the boundary points of the drivable area; that is, in the pixel coordinate system, the disparity corresponding to the boundary points of the drivable area is calculated according to the registered boundary points.
• The fifth step: parallax sub-pixelation; that is, in the pixel coordinate system, the parallax obtained above is sub-pixelated to ensure that the boundary points of the travelable area at longer distances can still be located with high accuracy.
• Sub-pixelation may refer to subdividing the interval between two adjacent pixels, that is, dividing each pixel into smaller units.
• The coordinate system conversion module 440 can be used to locate the boundary points of the drivable area in the X and Y coordinates.
• For example, the X distance between a boundary point of the drivable area and the self-vehicle can be calculated from the parallax obtained above through triangulation. Figure 5 is a schematic diagram of the coordinate conversion: the point P(x, y) on the boundary of the drivable area can be converted from the pixel coordinates of the two-dimensional image to the three-dimensional coordinate system in which the autonomous vehicle is located.
• It should be understood that FIG. 5 is only an example of the coordinate system and does not limit the directions of the coordinate axes in any way.
  • Fig. 6 is a schematic flowchart of a method for detecting a vehicle drivable area provided by an embodiment of the present application.
• The detection method shown in FIG. 6 may be executed by the vehicle shown in FIG. 1, or the automatic driving system shown in FIG. 2, or the detection system shown in FIG. 4; the detection method shown in FIG. 6 includes steps 510 to 530, and these steps are described in detail below.
  • Step 510 Obtain a binocular image of the driving direction of the vehicle.
• The above-mentioned binocular image may include a left-eye image and a right-eye image; for example, it may refer to the left and right two-dimensional images respectively collected by two parallel cameras at equal height in an autonomous vehicle, such as the images acquired by the binocular camera shown in FIG. 4.
• The binocular image may be an image of the road surface or the surrounding environment obtained by the autonomous vehicle in the driving direction; for example, it includes images of the road surface and of obstacles and pedestrians near the vehicle.
  • Step 520 Obtain the disparity information of the drivable area boundary in the binocular image according to the drivable area boundary in the left-eye image and the drivable area boundary in the right-eye image.
• The disparity information refers to the difference in direction caused by observing the same target from two points separated by a certain distance.
• For example, the difference in the horizontal direction between the positions of the same drivable road area as collected by the left-eye camera and the right-eye camera of an autonomous vehicle may serve as disparity information.
• For example, the obtained left-eye image and right-eye image can be respectively input into a deep learning network pre-trained to recognize the drivable area in an image; the pre-trained deep learning network can identify the vehicle's drivable area in the left-eye image and the right-eye image.
  • Step 530 Based on the disparity information, obtain the travelable area of the vehicle in the binocular image.
  • the position of the vehicle drivable area in the binocular image can be obtained by triangulation.
• In the embodiments of the present application, when the boundary of the drivable area in the left-eye image is matched with the drivable boundary in the right-eye image, that is, when the disparity information of the drivable area boundary in the binocular image is calculated, the drivable area boundary in either one of the two images of the binocular image can be segmented, and the segmented drivable area boundary can be matched segment by segment; this can improve the accuracy of the drivable area boundary matching, which is conducive to obtaining more accurate information about the area of the road where the vehicle can travel.
• Optionally, the method for detecting the drivable area of the vehicle further includes: segmenting the drivable area boundary in a first image of the binocular image. In this case, the foregoing step 520 of obtaining the disparity information of the drivable area boundary in the binocular image according to the drivable area boundary in the left-eye image and the drivable area boundary in the right-eye image may include: performing drivable area boundary matching on a second image of the binocular image based on the N boundary point line segments obtained by the segmentation processing, to obtain the disparity information, where N is an integer greater than or equal to 2.
• The first image may refer to the left-eye image of the binocular image, in which case the second image refers to the right-eye image; alternatively, the first image may refer to the right-eye image of the binocular image, in which case the second image refers to the left-eye image.
• That is, the foregoing segmentation of the drivable area boundary in the first image of the binocular image and the boundary matching of the drivable area in the second image may mean that the boundary of the drivable area in the left-eye image is segmented and the drivable area boundary in the right-eye image is matched based on the N boundary point line segments of the left-eye image obtained by the segmentation, to obtain the disparity information; or it may mean that the boundary of the drivable area in the right-eye image is segmented and the drivable area boundary in the left-eye image is matched based on the N boundary point line segments of the right-eye image obtained by the segmentation, to obtain the disparity information.
• Furthermore, according to the slopes of the N boundary point line segments obtained by the segmentation processing, the segments can be divided into boundary point line segments with a smaller slope and boundary point line segments with a larger slope, so that the drivable area boundaries in the binocular image can be matched based on the corresponding matching strategy and the disparity information can be obtained from the matching result.
• Optionally, matching the drivable area boundary of the second image based on the N boundary point line segments obtained by the segmentation processing to obtain the disparity information includes: performing drivable area boundary matching on the second image according to the N boundary point line segments and a matching strategy to obtain the disparity information, where the matching strategy is determined according to the slopes of the N boundary point line segments.
• For the first type of boundary point line segments among the N boundary point line segments, the first matching strategy in the matching strategy can be used to match the drivable area boundary of the second image, where the first type of boundary point line segment includes the drivable area boundary at the road edge and the drivable area boundary at the side of other vehicles. For the second type of boundary point line segments, the second matching strategy in the matching strategy can be used, where the boundary points of a second-type boundary point line segment are at the same distance from the vehicle. The first type of boundary point line segment may refer to a boundary point line segment with a relatively large slope, for example at a road edge area or along the side area of another vehicle; the second type may refer to a boundary point line segment with a relatively small slope, for example at the rear area of another vehicle.
  • the above-mentioned matching strategy may include a first matching strategy and a second matching strategy.
• The first matching strategy may refer to matching through a search area, where the search area refers to a region generated with any point of a first boundary point line segment as the center, and the first boundary point line segment is any one of the N boundary point line segments; the second matching strategy may refer to matching through a preset search step, which is determined based on the boundary point disparity of one boundary point line segment among the N boundary point line segments.
• For example, a boundary point of a drivable area boundary point segment of the first image (for example, the left-eye image) of the binocular image can be used as a template point, and a search area can be generated, centered on the boundary point of the second image (for example, the right-eye image) corresponding to the same row of the first image, to match the template point of the first image. The specific process is shown in Figure 7 below.
• For boundary point line segments with a small slope, the inflection point parallax correction method can be used to match these boundary points. The specific process is also shown in Figure 7 below.
• In the embodiments of the present application, a segmented matching strategy based on the magnitude of the slope of the drivable area boundary is proposed: the detected drivable area is segmented according to the slope distribution, and different matching strategies are used to match boundary segments with larger and smaller slopes, which can improve the matching accuracy of the boundary points of the drivable area.
  • Fig. 7 is a schematic flowchart of a method for detecting a vehicle drivable area provided by an embodiment of the present application.
• The detection method shown in FIG. 7 can be executed by the vehicle shown in FIG. 1, or the automatic driving system shown in FIG. 2, or the detection system shown in FIG. 4; the detection method shown in FIG. 7 includes steps 601 to 614, and these steps are described in detail below.
  • Step 601 Start; that is, start to execute the detection method of the vehicle's drivable area.
  • Step 602 Acquire a left eye image.
  • the left-eye image may refer to an image of the road surface or the surrounding environment acquired by one of the binocular cameras (for example, the left-eye camera).
  • Step 603 Acquire a right eye image.
• The right-eye image may refer to the image of the road surface or the surrounding environment acquired by the other camera (for example, the right-eye camera) of the binocular cameras.
  • step 602 and step 603 may be performed at the same time; alternatively, step 603 may be performed first and then step 602 may be performed, which is not limited in this application.
• Step 604 Perform drivable area detection on the acquired left-eye image.
  • a deep learning network for identifying a drivable area in an image can be pre-trained through training data; the pre-trained deep learning network can identify a drivable area of a vehicle in the left-eye image.
• For example, a pre-trained deep learning network can be used to detect the drivable area in the pixel coordinate system; the input data can be the collected left-eye image, and the output data can be the detected drivable area in the left-eye image.
• Step 605 Perform drivable area detection on the acquired right-eye image.
  • the pre-trained deep learning network in step 604 can be used to detect the drivable area in the right-eye image.
  • the obtained left-eye image and right-eye image can be simultaneously input to a pre-trained deep learning network for detection; or, the obtained left-eye image and right-eye image can also be input one after the other to the pre-trained deep learning network.
  • the trained deep learning network can detect the coordinates of the drivable area in the left-eye image and the right-eye image.
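• The application does not fix a particular network architecture or output format. Purely as a hedged illustration, if the network outputs a per-pixel freespace mask, boundary points can be extracted by scanning each image column from the bottom; the function below is a generic post-processing sketch under that assumption, not the specific output format of this application.

```python
import numpy as np

def boundary_from_freespace(mask: np.ndarray) -> list[tuple[int, int]]:
    """Given a binary freespace mask (H x W, 1 = drivable, row 0 at the top),
    return one boundary point (row, col) per column: the topmost drivable
    pixel of the contiguous free region touching the image bottom."""
    h, w = mask.shape
    points = []
    for col in range(w):
        row = h - 1
        if not mask[row, col]:
            continue  # no drivable area at the bottom of this column
        while row > 0 and mask[row - 1, col]:
            row -= 1  # climb up through the contiguous free region
        points.append((row, col))
    return points

if __name__ == "__main__":
    demo = np.zeros((6, 4), dtype=np.uint8)
    demo[3:, :] = 1  # bottom three rows drivable
    print(boundary_from_freespace(demo))  # [(3, 0), (3, 1), (3, 2), (3, 3)]
```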
  • Step 606 Perform segmentation processing on the drivable area in the left-eye image.
  • the boundary of the drivable area may be segmented according to different slopes of the boundary points of the drivable area detected in the left-eye image.
• For example, the slope vector of the points on the boundary of the vehicle's drivable area can be obtained as K = {k1, k2, ..., kn}, and the inflection points of the drivable area boundary are detected from jumps of the slope. For example, the inflection points on the boundary of the vehicle's drivable area can be obtained, including point B, point C, point D, point E, point F, and point G; according to these inflection points, the boundary of the vehicle's drivable area can be segmented. As shown in Figure 8, it can be divided into segment AB, segment BC, segment CD, segment DE, segment EF, segment FG, and segment GH.
• The boundary of the above-mentioned drivable area can thus be segmented into N segments; according to the slope distribution of the segments, the boundary point line segments can be classified into two categories, namely boundary point line segments with a smaller slope and boundary point line segments with a larger slope.
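• A minimal sketch of this inflection-point segmentation, assuming the boundary is an ordered polyline of pixel coordinates and that an inflection point is declared wherever the local slope k_i jumps by more than a threshold; the threshold value below is an illustrative assumption.

```python
import numpy as np

def segment_boundary(points: np.ndarray, jump: float = 1.0) -> list[np.ndarray]:
    """Split an ordered boundary polyline (N x 2 array of (x, y) pixel
    coordinates) at inflection points, detected as jumps of the local
    slope k_i between consecutive points. The jump threshold is an
    illustrative assumption."""
    dx = np.diff(points[:, 0]).astype(float)
    dy = np.diff(points[:, 1]).astype(float)
    # Slope vector K = {k_1, ..., k_n}; near-vertical steps get a large
    # sentinel slope instead of dividing by zero.
    k = np.where(np.abs(dx) > 1e-6, dy / np.where(dx == 0, 1e-6, dx), 1e6)
    cut = np.where(np.abs(np.diff(k)) > jump)[0] + 1  # inflection indices
    return np.split(points, cut)

if __name__ == "__main__":
    # An L-shaped boundary: a flat run followed by a steep run.
    pts = np.array([(x, 10) for x in range(5)] + [(5, y) for y in range(10, 4, -1)])
    segs = segment_boundary(pts)
    print(len(segs), "segments of lengths", [len(s) for s in segs])
```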
• For example, Figure 9 shows a road surface image (for example, the left-eye image) obtained by one of the binocular cameras. As shown in Figure 9(a), the boundary of the vehicle's drivable area includes point A, point B, point C, and point D. If the detected drivable area boundary lies at a road edge or along the side of another vehicle, the slope of the corresponding boundary in the pixel coordinate system is larger, such as segment AB, segment CD, segment EF, and segment GH on the drivable area boundary shown in Figure 8; if the detected drivable area boundary lies at the rear of another vehicle, the slope of the corresponding boundary is smaller, such as segment BC, segment DE, and segment FG shown in Figure 8.
  • Step 607 Segment matching of the boundary of the drivable area.
• Step 606 describes the segmentation of the boundary of the vehicle's drivable area detected in the left-eye image; in addition, according to the slopes of the segmented boundary point line segments, the segments can be divided into boundary point line segments with a smaller slope and boundary point line segments with a larger slope. In step 607, the boundary point line segments of the left-eye image are matched in the right-eye image.
  • the boundary points of the drivable area in the acquired left-eye image may be matched according to different matching strategies.
• A boundary point line segment with a large slope may correspond to a road edge or a vehicle side in the real scene, such as segments AB and CD shown in Figure 9(a); Figure 9(b) shows a boundary point line segment with a larger slope, for example at the road edge. For such segments, the first matching strategy can be used: a boundary point of a drivable area boundary point segment in the left-eye image is taken as a template point, and a search area is generated, centered on the boundary point of the right-eye image corresponding to the same row of the left-eye image, to match the template point of the left-eye image.
• For example, Figure 10 shows a schematic diagram of generating a search area in the right-eye image to match template points in the left-eye image.
• The above-mentioned search area can be matched through an eight-dimensional descriptor. First, 360 degrees is divided into eight equal parts, that is, 0 to 45 degrees, 45 to 90 degrees, 90 to 135 degrees, and so on up to 360 degrees; the eight regions represent eight angular bins. To generate the eight-dimensional descriptor, the gradient angle of each pixel in the 5*5 neighborhood is counted: if the angle falls in a given region, the gradient magnitude of the pixel is accumulated into that region (that is, accumulated in a 1*range manner), and the accumulated value of each region forms one dimension of the descriptor. This can be expressed as S[Angle] = Σ range, where S denotes the descriptor, Angle denotes the corresponding one of the eight angle regions, and range denotes the gradient magnitude of each point. Through this formula, the eight-dimensional descriptor corresponding to each drivable area boundary point can be obtained, and the subsequent matching process is implemented based on these descriptors.
• For example, with each boundary point of the drivable area extracted from the left-eye image as the center, a descriptor is generated in the corresponding region of the right-eye image, for example in the 5*5 neighborhood around the position of the right-eye image corresponding to a boundary point of the left-eye image; the gradient and angle value of each point in that neighborhood of the right-eye image are calculated as follows: dx represents the x-direction gradient of a pixel and dy represents the y-direction gradient; the angle value of the pixel can be computed as Angle = arctan2(dy, dx), and its gradient magnitude as range = sqrt(dx^2 + dy^2).
• The above description takes the eight-dimensional descriptor with a search area of size 5*5 as an example; the search area can also have other sizes, which is not limited in this application.
• In the second matching strategy, for a boundary point line segment with a small slope, the boundary points extracted from the right-eye image partially overlap with the boundary points extracted from the left-eye image, so the inflection point parallax correction method can be used to match these boundary points.
• Take the segment BC shown in Figure 11 as an example: because the slope of this boundary point line segment of the drivable area is relatively small, the drivable area boundaries detected in the left-eye image and the right-eye image may overlap within this segment. The segments adjacent to BC, for example the line segment on the left side of point B and the line segment on the right side of point C, are boundary point line segments with a larger slope and can be matched with the first matching strategy. In the matching of these adjacent segments, the parallax between the left-eye image and the right-eye image at point B and at point C can be obtained; the average of the parallax at the two points B and C is then used as the search length for the segment BC, and the boundary point line segments with the smaller slope in the left-eye image and the right-eye image are matched through this search length.
• In addition, boundary point matching through the descriptor is also applicable to boundary point line segments with a small slope, for example when matching the segment BC of the drivable area boundary between the left-eye image and the right-eye image.
• In the embodiments of the present application, the drivable area boundary extracted from the left-eye image is segmented, and different matching strategies are adopted for the drivable area boundary segments with a larger slope and with a smaller slope; matching the drivable area boundary in this way can improve the matching accuracy of the boundary points.
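• A hedged sketch of the inflection point parallax correction described above: the disparities already obtained at the enclosing inflection points B and C are averaged into a search length, and each point of the low-slope segment is compared, via the eight-dimensional descriptor, against right-eye candidates around that offset. `descriptor8` and `match_score` are the illustrative helpers from the previous sketch; the extra candidate radius, and the assumption that segment points lie away from the image border, are additions for illustration.

```python
import numpy as np
# assumes descriptor8() and match_score() from the previous sketch

def match_low_slope_segment(left_img, right_img, segment_pts,
                            disp_b: float, disp_c: float, radius: int = 2):
    """Match the points of a low-slope boundary segment (e.g. segment BC).
    The mean of the disparities at the enclosing inflection points B and C
    gives the search length; each left-image point (col, row) is compared
    with right-image candidates around that offset in the same row."""
    search_len = 0.5 * (disp_b + disp_c)  # average of the two disparities
    matches = []
    for (x, y) in segment_pts:
        best, best_cost = None, float("inf")
        d_left = descriptor8(left_img, x, y)
        for off in range(int(search_len) - radius, int(search_len) + radius + 1):
            xr = x - off  # candidate column in the right image
            if xr < 3 or xr >= right_img.shape[1] - 3:
                continue  # keep the 5x5 + gradient window inside the image
            cost = match_score(d_left, descriptor8(right_img, xr, y))
            if cost < best_cost:
                best, best_cost = off, cost
        matches.append(((x, y), best))  # best = matched disparity, or None
    return matches
```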
  • Step 608 Filter the matching result.
  • the matching result filtering may refer to a process of filtering the matching result of step 607, so as to eliminate erroneous matching points.
  • the matched boundary point pairs may be filtered in a filtering manner, so as to ensure the matching accuracy of the boundary points of the drivable area.
• For example, the boundary points for which the drivable area was successfully matched between the left-eye image and the right-eye image can be sorted by matching degree; abnormal matching points are eliminated according to the box plot method, and the matching points with a higher matching degree are retained as the final matched boundary points of the left-eye image and the right-eye image.
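• A minimal sketch of the box plot rejection mentioned above, applied to descriptor matching distances where a larger distance means a worse match; the conventional 1.5 * IQR upper fence is assumed here.

```python
import numpy as np

def boxplot_filter(matches: list, distances: list) -> list:
    """Keep only matches whose matching distance is not an outlier under
    the conventional box plot rule (above Q3 + 1.5 * IQR); a smaller
    distance means a better match. Returns the retained matches."""
    d = np.asarray(distances, dtype=float)
    q1, q3 = np.percentile(d, [25, 75])
    fence = q3 + 1.5 * (q3 - q1)  # upper whisker of the box plot
    return [m for m, dist in zip(matches, d) if dist <= fence]

if __name__ == "__main__":
    matches = list("abcdefgh")
    dists = [1.0, 1.2, 0.9, 1.1, 1.0, 6.0, 1.3, 0.8]  # one obvious outlier
    print(boxplot_filter(matches, dists))              # 'f' is rejected
```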
  • Step 609 Calculate the disparity of the boundary points of the drivable area.
  • the disparity calculation is performed based on the above-mentioned matched boundary points.
• For example, for each matched pair of boundary points, the coordinate difference between the corresponding boundary points of the left-eye image and the right-eye image in the pixel coordinate system can be calculated in the Y direction (that is, along the direction of the camera baseline, since matched boundary points lie in the same image row).
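• A one-line sketch of this disparity computation: matched boundary points share the same image row, so the disparity is simply the coordinate difference along the baseline direction (the horizontal pixel axis in the common convention assumed here).

```python
def disparity(left_pt: tuple, right_pt: tuple) -> float:
    """Disparity of a matched boundary point pair, given as (col, row)
    pixel coordinates; matched points share the same row, so only the
    coordinate along the camera baseline differs."""
    (xl, yl), (xr, yr) = left_pt, right_pt
    assert yl == yr, "matched boundary points must lie in the same row"
    return float(xl - xr)  # positive for a standard left/right rig

print(disparity((420, 310), (396, 310)))  # 24.0 pixels
```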
  • Step 610 Parallax filtering of boundary points of the drivable area.
• The pixel points obtained by the disparity calculation in step 609 may be discrete points; the disparity filtering of the boundary points of the drivable area in step 610 can further be used to make these discrete points continuous.
• For example, the process of parallax filtering can be divided mainly into filtering and interpolation (that is, continuity processing). First, consider that the bottom of the image corresponds to the drivable area closest to the self-driving vehicle; starting from the bottom of the image and moving gradually upward, the parallax corresponding to the matched boundary points of the drivable area should gradually decrease, that is, the parallax of the boundary points should not increase. Erroneous parallax values are filtered out accordingly: going from the bottom of the image upward, the deeper the depth of field, the smaller the parallax of the drivable area between the left-eye image and the right-eye image; if the parallax increases at a boundary point corresponding to a deeper area, that boundary point may be a matching deviation point and can be eliminated.
• Second, the boundary points corresponding to the same row in the image are at the same distance from the autonomous vehicle (that is, the depth of field is the same, or equivalently the coordinates in the Y direction are the same), so the disparities of the boundary points corresponding to the same row of the left-eye image and the right-eye image should be the same. A second round of filtering can therefore be performed based on this property, and the boundary points remaining after filtering can be continuously corrected by interpolation using the disparities of adjacent boundary points.
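• A compact sketch of the monotonicity filtering pass and the interpolation step described above, assuming the boundary disparities are ordered from the bottom row of the image upward (nearest first); the use of NaN to mark eliminated points and linear interpolation to re-fill them are implementation assumptions, and the same-row consistency check is omitted for brevity.

```python
import numpy as np

def filter_and_interpolate(disp: np.ndarray) -> np.ndarray:
    """disp[i] is the disparity of the boundary point in row i, ordered
    from the image bottom (closest to the vehicle) upward. Pass 1 removes
    points whose disparity increases with depth; removed points are then
    re-filled by linear interpolation from their neighbors."""
    d = disp.astype(float).copy()
    running_min = d[0]
    for i in range(1, len(d)):  # pass 1: disparity must not grow with depth
        if d[i] > running_min:
            d[i] = np.nan       # matching deviation point, eliminate it
        else:
            running_min = d[i]
    idx = np.arange(len(d))
    ok = ~np.isnan(d)
    d[~ok] = np.interp(idx[~ok], idx[ok], d[ok])  # continuity by interpolation
    return d

if __name__ == "__main__":
    raw = np.array([30.0, 28.0, 29.5, 24.0, 22.0])  # 29.5 violates monotonicity
    print(filter_and_interpolate(raw))              # 29.5 -> interpolated 26.0
```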
  • Step 611 Parallax sub-pixelation processing.
• Sub-pixelation may refer to subdividing the interval between two adjacent pixels, that is, dividing each pixel into smaller units.
  • the extracted boundary point disparity may be subjected to sub-pixelation processing.
• In the sub-pixelation formula used here (whose full expression is not reproduced in this text), M and N may denote the descriptor matching scores of the candidate positions adjacent to the best match used in the sub-pixel processing, and y may denote the resulting sub-pixel offset in the image coordinate system.
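• The application's exact sub-pixelation formula is not reproduced in this text. A common refinement consistent with the symbols above treats M and N as the matching costs of the two candidates adjacent to the best integer disparity and fits a parabola; the sketch below is therefore an assumption of one conventional approach, not a quotation of the application's formula.

```python
def subpixel_offset(cost_left: float, cost_best: float, cost_right: float) -> float:
    """Parabolic sub-pixel refinement around the best integer disparity.
    cost_left / cost_right play the role of the neighboring descriptor
    costs (M and N in the text); the returned offset lies in (-0.5, 0.5)
    and is added to the integer disparity. This formula is a common
    convention, assumed here, not quoted from the application."""
    denom = cost_left - 2.0 * cost_best + cost_right
    if denom <= 0:  # degenerate or flat cost curve: keep the integer value
        return 0.0
    return 0.5 * (cost_left - cost_right) / denom

# e.g. best integer disparity 24 with neighbor costs (5.0, 2.0, 4.0):
print(24 + subpixel_offset(5.0, 2.0, 4.0))  # ~24.1
```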
  • Step 612 The boundary point of the drivable area is positioned on the X coordinate in the coordinate system of the autonomous vehicle.
  • the triangulation method can be used to obtain the distance of the boundary point in the X direction in the vehicle coordinate system.
  • Step 613 The boundary point of the drivable area is positioned on the Y coordinate in the coordinate system of the autonomous vehicle.
  • the triangulation method can be used to obtain the distance of the boundary point in the Y direction in the vehicle coordinate system.
• For example, triangulation is used to measure the distances of the boundary points in the X direction and the Y direction of the coordinate system in which the autonomous vehicle is located, where f represents the focal length, B represents the baseline, d represents the disparity of the boundary point, and y represents the offset of the boundary point in the image coordinate system. The distance in the X direction (from the object to the camera) can be obtained as X = f*B/d, and the distance in the Y direction (the horizontal distance from the object to the camera, with y the pixel coordinate on the picture) as Y = y*B/d.
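• Based on the relations reconstructed above (X = f*B/d for the distance in the X direction and Y = y*B/d for the distance in the Y direction), a minimal sketch follows; all numeric values are illustrative only.

```python
def locate_boundary_point(disparity_px: float, y_offset_px: float,
                          focal_px: float, baseline_m: float) -> tuple:
    """Triangulate a boundary point into the vehicle coordinate system.
    X = f * B / d (distance from the object to the camera) and
    Y = y * B / d (horizontal distance). Symbols follow the reconstruction
    in the text; the values used below are illustrative only."""
    X = focal_px * baseline_m / disparity_px
    Y = y_offset_px * baseline_m / disparity_px
    return X, Y

# Illustration: f = 2000 px, B = 0.30 m, d = 24.1 px, y = -150 px
print(locate_boundary_point(24.1, -150.0, 2000.0, 0.30))  # ~(24.9 m, -1.87 m)
```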
• According to the technical solutions of the embodiments of the present application, the detection of the drivable area and the mapping of the drivable area from the pixel coordinate system to the vehicle coordinate system can be realized in time with a small amount of calculation, thereby improving the real-time performance of the autonomous vehicle's detection of the vehicle's drivable area.
  • FIG. 15 is a schematic diagram of the method for detecting a vehicle travelable area according to an embodiment of the present application applied to a specific product form.
• The product form shown in Figure 15 can refer to a vehicle-mounted visual perception device; the method of detecting the drivable area and locating its spatial coordinates can be realized through software algorithms deployed on the computing nodes of the related device.
• The method can mainly include three parts. The first part is to obtain the images, that is, the left-eye image and the right-eye image can be obtained through the left-eye camera and the right-eye camera, and the two cameras can be frame-synchronized. The second part is to obtain the drivable area in each image; for example, a deep learning algorithm can output the drivable area (Freespace) in the left-eye image and the right-eye image, or the boundary of the drivable area; the deep learning algorithm can be deployed on AI chips, and the Freespace can be output based on the parallel acceleration of multiple AI chips. The third part is to obtain the parallax of the boundary points of the drivable area; for example, the parallax of the boundary points can be output based on serial processing.
• The detection method of the vehicle's drivable area of the embodiments of the present application is described in detail above with reference to Figs. 1 to 15; the device embodiments of the present application will be described in detail below with reference to Figs. 16 and 17. It should be understood that the detection device for the vehicle's drivable area in the embodiments of the present application can execute the various detection methods of the foregoing embodiments; that is, for the specific working processes of the following products, reference may be made to the corresponding processes in the foregoing method embodiments.
  • Fig. 16 is a schematic block diagram of a device for detecting a vehicle travelable area provided by an embodiment of the present application. It should be understood that the detection device 700 may execute the method for detecting the drivable area shown in FIGS. 6 to 15.
  • the detection device 700 includes: an acquisition unit 710 and a processing unit 720.
• The acquiring unit 710 is configured to acquire a binocular image of the driving direction of the vehicle, where the binocular image includes a left-eye image and a right-eye image. The processing unit 720 is configured to: obtain the disparity information of the boundary of the drivable area in the binocular image according to the boundary of the drivable area in the left-eye image and the boundary of the drivable area in the right-eye image; and obtain, based on the disparity information, the drivable area of the vehicle in the binocular image. Optionally, the processing unit 720 is further configured to: perform segmentation processing on the drivable area boundary in the first image; and perform drivable area boundary matching on the second image based on the N boundary point line segments obtained by the segmentation processing to obtain the disparity information, where N is an integer greater than or equal to 2.
• Optionally, the processing unit 720 is specifically configured to: perform drivable area boundary matching on the second image according to the N boundary point line segments and a matching strategy to obtain the disparity information, where the matching strategy is determined according to the slopes of the N boundary point line segments.
• Optionally, the processing unit 720 is specifically configured to: for the first type of boundary point line segments among the N boundary point line segments, use the first matching strategy in the matching strategy to perform drivable area boundary matching on the second image, where the first type of boundary point line segment includes the drivable area boundary at the road edge and the drivable area boundary at the side of other vehicles; and for the second type of boundary point line segments, use the second matching strategy in the matching strategy to perform drivable area boundary matching on the second image, where the boundary points in a second-type boundary point line segment are at the same distance from the vehicle.
• Optionally, the matching strategy includes a first matching strategy and a second matching strategy. The first matching strategy refers to matching through a search area, where the search area is a region generated with any point of a first boundary point line segment as the center, and the first boundary point line segment is any one of the N boundary point line segments; the second matching strategy refers to matching through a preset search step, where the preset search step is determined based on the boundary point disparity of one boundary point line segment among the N boundary point line segments.
  • the N boundary point line segments are determined according to inflection points of the boundary of the drivable area in the first image.
  • the detection device 700 described above is embodied in the form of a functional unit.
  • the term "unit” herein can be implemented in the form of software and/or hardware, which is not specifically limited.
  • a "unit” may be a software program, a hardware circuit, or a combination of the two that realizes the above-mentioned functions.
• For example, the hardware circuit may include an application-specific integrated circuit (ASIC), an electronic circuit, a processor for executing one or more software or firmware programs (such as a shared processor, a dedicated processor, or a group processor) and memory, merged logic circuits, and/or other suitable components that support the described functions.
• The units of the examples described in the embodiments of the present application can be implemented by electronic hardware or a combination of computer software and electronic hardware. Whether these functions are executed by hardware or software depends on the specific application and the design constraints of the technical solution. Those skilled in the art may use different methods for each specific application to implement the described functions, but such implementation should not be considered beyond the scope of this application.
  • FIG. 17 is a schematic diagram of the hardware structure of a detection device for a vehicle travelable area provided by an embodiment of the present application.
• The detection apparatus 800 (which may specifically be a computer device) includes a memory 801, a processor 802, a communication interface 803, and a bus 804; the memory 801, the processor 802, and the communication interface 803 are communicatively connected to each other through the bus 804.
  • the memory 801 may be a read only memory (ROM), a static storage device, a dynamic storage device, or a random access memory (RAM).
• The memory 801 may store a program; when the program stored in the memory 801 is executed by the processor 802, the processor 802 is configured to execute each step of the method for detecting a vehicle's drivable area in the embodiments of the present application, for example, the steps shown in FIGS. 6 to 15.
• The detection device for the vehicle's travelable area shown in the embodiments of the present application may be a server, for example, a cloud server, or may be a chip configured in a cloud server.
• The processor 802 may adopt a general-purpose central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits for executing related programs, so as to realize the method for detecting a vehicle's drivable area in the method embodiments of the present application.
  • the processor 802 may also be an integrated circuit chip with signal processing capability.
  • each step of the method for detecting a vehicle travelable area of the present application can be completed by an integrated logic circuit of hardware in the processor 802 or instructions in the form of software.
  • the above-mentioned processor 802 may also be a general-purpose processor, a digital signal processing (digital signal processing, DSP), an application-specific integrated circuit (ASIC), an off-the-shelf programmable gate array (field programmable gate array, FPGA) or other programmable logic devices, Discrete gates or transistor logic devices, discrete hardware components.
  • the general-purpose processor may be a microprocessor or the processor may also be any conventional processor or the like.
  • the steps of the method disclosed in the embodiments of the present application may be directly embodied as being executed and completed by a hardware decoding processor, or executed and completed by a combination of hardware and software modules in the decoding processor.
• The software module can be located in a storage medium mature in the field, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register.
• The storage medium is located in the memory 801; the processor 802 reads the information in the memory 801 and, in combination with its hardware, completes the functions required by the units included in the detection device for the vehicle's travelable area shown in FIG. 16, or executes the method for detecting the vehicle's drivable area shown in FIGS. 6 to 15 of the method embodiments of the present application.
• The communication interface 803 uses a transceiver device, such as but not limited to a transceiver, to implement communication between the detection device 800 and other devices or communication networks.
  • the bus 804 may include a path for transmitting information between various components of the detection device 800 (for example, the memory 801, the processor 802, and the communication interface 803).
• Although the detection device 800 shown only includes a memory, a processor, and a communication interface, in a specific implementation process, those skilled in the art should understand that the detection device 800 may also include other devices necessary for normal operation; at the same time, according to specific needs, the detection device 800 may also include hardware devices that implement other additional functions.
• Alternatively, the detection device 800 may include only the components necessary to implement the embodiments of the present application, and need not include all the components shown in FIG. 17.
• It should be understood that the sequence numbers of the above-mentioned processes do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present application.
  • the disclosed system, device, and method can be implemented in other ways.
• The device embodiments described above are merely illustrative; for example, the division of the units is only a logical function division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • the functional modules in the various embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
• If the functions are implemented in the form of a software functional unit and sold or used as an independent product, they can be stored in a computer-readable storage medium.
• Based on this understanding, the technical solution of the present application in essence, or the part that contributes to the prior art, or a part of the technical solution, can be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device, etc.) to execute all or part of the steps of the methods described in the various embodiments of the present application.
• The aforementioned storage media include: USB flash drives, mobile hard disks, read-only memory (ROM), random access memory (RAM), magnetic disks, optical disks, and other media that can store program code.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Mathematical Physics (AREA)
  • Geometry (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Traffic Control Systems (AREA)

Abstract

A vehicle travelable region detection method and detection device. The detection method comprises: obtaining a binocular image in a vehicle traveling direction, the binocular image comprising a left-eye image and a right-eye image; obtaining parallax information of a travelable region boundary in the binocular image according to a travelable region boundary in the left-eye image and a travelable region boundary in the right-eye image; and obtaining a vehicle travelable region in the binocular image according to the parallax information. The above technical solution can greatly reduce the amount of computation while ensuring the detection accuracy, and improve the road condition detection efficiency of an autonomous vehicle.

Description

Detection method and detection device of vehicle travelable area

Technical field

This application relates to the automotive field, and more specifically, to a detection method and a detection device for a vehicle's drivable area.

Background technique

Artificial intelligence (AI) is a theory, method, technology and application system that uses digital computers or machines controlled by digital computers to simulate, extend and expand human intelligence, perceive the environment, acquire knowledge, and use knowledge to obtain the best results. In other words, artificial intelligence is a branch of computer science that attempts to understand the essence of intelligence and produce a new kind of intelligent machine that can react in a way similar to human intelligence. Artificial intelligence studies the design principles and implementation methods of various intelligent machines, so that the machines have the functions of perception, reasoning and decision-making. Research in the field of artificial intelligence includes robotics, natural language processing, computer vision, decision-making and reasoning, human-computer interaction, recommendation and search, and basic AI theories.

Autonomous driving is a mainstream application in the field of artificial intelligence. Autonomous driving technology relies on the collaboration of computer vision, radar, monitoring devices, global positioning systems, and so on, to enable motor vehicles to drive automatically without active human operation. Self-driving vehicles use various computing systems to help transport passengers from one location to another; some self-driving vehicles may require some initial or continuous input from an operator (such as a pilot, driver, or passenger); autonomous vehicles allow the operator to switch from manual operation to an automatic driving mode or a mode in between. Since automatic driving technology does not require a human to drive the vehicle, it can in theory effectively avoid human driving errors, reduce the occurrence of traffic accidents, and improve the efficiency of highway transportation. Therefore, more and more attention has been paid to autonomous driving technology.

At present, the drivable area of an autonomous vehicle can be detected with a binocular vision method, which extracts and locates the drivable area from the global disparity map of the images output by a binocular camera. However, obtaining the global disparity map from the binocular camera requires a large amount of calculation, so the autonomous vehicle cannot achieve real-time processing, which creates safety risks during driving. Therefore, how to improve the detection efficiency of the method for detecting the vehicle's drivable area while ensuring the detection accuracy has become an urgent problem to be solved.
Summary of the invention

The present application provides a detection method and a detection device for a vehicle's drivable area, which can improve the real-time performance of the detection system of an autonomous vehicle and the detection efficiency of the drivable area detection method while maintaining a given detection accuracy.

In a first aspect, a method for detecting a vehicle's drivable area is provided, including: acquiring a binocular image of the vehicle traveling direction, where the binocular image includes a left-eye image and a right-eye image; obtaining disparity information of the drivable area boundary in the binocular image according to the drivable area boundary in the left-eye image and the drivable area boundary in the right-eye image; and obtaining, based on the disparity information, the vehicle's drivable area in the binocular image.

The above-mentioned binocular image may include a left-eye image and a right-eye image; for example, it may refer to the left and right two-dimensional images respectively collected by two parallel cameras at equal height in an autonomous vehicle.

In a possible implementation, the binocular image may be an image of the road surface or the surrounding environment obtained by the autonomous vehicle in the driving direction; for example, it includes images of the road surface and of obstacles and pedestrians near the vehicle.

It should be understood that parallax may refer to the difference in direction caused by observing the same target from two points separated by a certain distance. For example, the difference in the horizontal direction between the positions of the same drivable road area as collected by the left-eye camera and the right-eye camera of an autonomous vehicle may serve as disparity information.

In a possible implementation, the obtained left-eye image and right-eye image can be respectively input into a deep learning network pre-trained to identify the drivable area in an image; the pre-trained deep learning network can identify the vehicle's drivable area in the left-eye image and the right-eye image.

In the embodiments of the present application, a binocular image of the driving direction of the vehicle can be obtained, and based on the binocular image, the boundaries of the vehicle's drivable area in the left-eye image and the right-eye image can be acquired; from the drivable area boundary in the left-eye image and the drivable area boundary in the right-eye image, the disparity information of the drivable area boundary in the binocular image can be obtained; and based on the disparity information, the position of the vehicle's drivable area in the binocular image can be obtained. The detection method of the embodiments of the present application avoids the pixel-by-pixel disparity calculation of the binocular image otherwise required to obtain a global disparity image: the positioning of the drivable area boundary points in the coordinate system of the autonomous vehicle can be realized by calculating only the disparity information of the drivable area boundary in the binocular image, which greatly reduces the amount of calculation while ensuring the detection accuracy and improves the efficiency with which the autonomous vehicle detects road conditions.
With reference to the first aspect, in some implementations of the first aspect, the method further includes: performing segmentation processing on the drivable area boundary in a first image; and the obtaining of the disparity information of the drivable area boundary in the binocular image according to the drivable area boundary in the left-eye image and the drivable area boundary in the right-eye image includes: performing drivable area boundary matching on a second image based on the N boundary point line segments obtained by the segmentation processing to obtain the disparity information, where the first image is either one of the two images of the binocular image, the second image is the other image of the binocular image different from the first image, and N is an integer greater than or equal to 2.

The first image may refer to the left-eye image of the binocular image, in which case the second image refers to the right-eye image; alternatively, the first image may refer to the right-eye image of the binocular image, in which case the second image refers to the left-eye image.

In a possible implementation, the drivable area boundary in the left-eye image can be segmented based on the inflection points of that boundary, obtaining N segments of the drivable area boundary in the left-eye image; the drivable area boundary in the right-eye image is then matched using these N segments, thereby obtaining the disparity information.

In another possible implementation, the drivable area boundary in the right-eye image can be segmented based on the inflection points of that boundary, obtaining N segments of the drivable area boundary in the right-eye image; the drivable area boundary in the left-eye image is then matched using these N segments, thereby obtaining the disparity information.

In the embodiments of the present application, when the drivable area boundary in the left-eye image is matched with the drivable boundary in the right-eye image, that is, when the disparity information of the drivable area boundary in the binocular image is calculated, the drivable area boundary in either of the two images can be segmented, and the segmented boundary can be matched segment by segment; this improves the accuracy of the drivable area boundary matching and helps to obtain more accurate information about the vehicle's drivable area on the road.
With reference to the first aspect, in some implementations of the first aspect, performing drivable area boundary matching on the second image based on the N boundary point line segments obtained by the segmentation processing to obtain the disparity information includes: performing drivable area boundary matching on the second image according to the N boundary point line segments and a matching strategy to obtain the disparity information, where the matching strategy is determined according to the slopes of the N boundary point line segments.

In the embodiments of the present application, a matching strategy can also be adopted when segment matching the drivable areas in the left-eye image and the right-eye image; that is, boundary point line segments with different slopes can be matched based on different matching strategies, which can improve the matching accuracy of the drivable area boundary points.

With reference to the first aspect, in some implementations of the first aspect, performing drivable area boundary matching on the second image according to the N boundary point line segments and the matching strategy to obtain the disparity information includes: for the first type of boundary point line segments among the N boundary point line segments, using the first matching strategy in the matching strategy to perform drivable area boundary matching on the second image, where the first type of boundary point line segment includes the drivable area boundary at the road edge and the drivable area boundary at the side of other vehicles; and for the second type of boundary point line segments among the N boundary point line segments, using the second matching strategy in the matching strategy to perform drivable area boundary matching on the second image, where the boundary points in a second-type boundary point line segment are at the same distance from the vehicle.

The first type of boundary point line segment may refer to a boundary point line segment with a relatively large slope, for example at a road edge area or along the side area of another vehicle; the second type may refer to a boundary point line segment with a relatively small slope, for example at the rear area of another vehicle. For boundary point line segments with a large slope, the drivable area boundaries extracted from the left-eye image and the right-eye image essentially do not overlap; for boundary point line segments with a small slope, the boundaries extracted from the two images partially overlap.

In the embodiments of the present application, a segmented matching strategy based on the magnitude of the slope of the drivable area boundary is proposed: the detected drivable area is segmented according to the slope distribution, and different matching strategies are used for the drivable area boundaries with larger and smaller slopes, which can improve the matching accuracy of the drivable area boundary points.
结合第一方面,在第一方面的某些实现方式中,所述匹配策略包括第一匹配策略与第二匹配策略,其中,所述第一匹配策略是指通过搜索区域进行匹配,所述搜索区域是指以第一边界点线段中的任意一个点为中心生成的区域,所述第一边界点线段为所述N个边界点线段中的任意一个边界点线段;所述第二匹配策略是指通过预设搜索步长进行匹配,所述预设搜索步长是基于所述N个边界点线段中一个边界点线段的边界点视差确定的。With reference to the first aspect, in some implementations of the first aspect, the matching strategy includes a first matching strategy and a second matching strategy, wherein the first matching strategy refers to matching through a search area, and the search A region refers to a region generated with any point in the first boundary point line segment as the center, and the first boundary point line segment is any one of the N boundary point line segments; the second matching strategy is Refers to matching through a preset search step, which is determined based on the boundary point disparity of one of the N boundary point line segments.
在一种可能的实现方式中,对于上述斜率较大的边界点线段可以采用通过以双目图像中的第一图像(例如,左目图像)可行驶区域边界点段的一个边界点为模板点,以第一图像中的同一行对应的第二图像(例如,右目图像)边界点为中心,生成搜索区域与第一图像中的模板点进行匹配。In a possible implementation manner, for the above-mentioned boundary point line segment with a larger slope, one boundary point of the travelable area boundary point segment of the first image (for example, the left eye image) in the binocular image can be used as a template point, With the boundary point of the second image (for example, the right eye image) corresponding to the same row in the first image as the center, the search area is generated to match the template point in the first image.
在另一种可能的实现方式中,对于斜率较小的边界点线段,由于通过第一图像(例如,左目图像)提取到的边界点会与通过第二图像(例如,右目图像)提取的边界点存在部分边界点重叠现象,可以采用拐点视差修正的方法进行该部分边界点的匹配。In another possible implementation, for the boundary point line segment with a small slope, the boundary point extracted through the first image (for example, the left eye image) will be the same as the boundary point extracted through the second image (for example, the right eye image). There is a phenomenon of overlap of some boundary points of the points, and the method of inflection point parallax correction can be used to match this part of the boundary points.
With reference to the first aspect, in some implementations of the first aspect, the N boundary-point line segments are determined according to the inflection points of the drivable-area boundary in the first image.
In a second aspect, an apparatus for detecting a vehicle's drivable area is provided, including: an acquisition unit, configured to acquire a binocular image in the vehicle's direction of travel, where the binocular image includes a left-eye image and a right-eye image; and a processing unit, configured to obtain disparity information of the drivable-area boundary in the binocular image according to the drivable-area boundary in the left-eye image and the drivable-area boundary in the right-eye image, and to obtain the vehicle's drivable area in the binocular image based on the disparity information.
The binocular image may include a left-eye image and a right-eye image; for example, it may refer to the left and right two-dimensional images respectively captured by two parallel cameras mounted at equal height on an autonomous vehicle.
In a possible implementation, the binocular image may be an image of the road surface or the surrounding environment in the vehicle's direction of travel; for example, it may include images of the road surface as well as of obstacles and pedestrians near the vehicle.
It should be understood that parallax (disparity) refers to the difference in apparent direction when the same target is observed from two points separated by a certain distance. For example, the horizontal offset between the positions of the same drivable road area as captured by the left-eye camera and the right-eye camera of an autonomous vehicle constitutes disparity information.
In a possible implementation, the acquired left-eye image and right-eye image may each be input into a deep learning network pre-trained to recognize the drivable area in an image; the pre-trained network then identifies the vehicle's drivable area in the left-eye image and in the right-eye image.
In an embodiment of the present application, a binocular image in the vehicle's direction of travel can be acquired; from it, the drivable-area boundaries in the left-eye and right-eye images can be obtained, and from these two boundaries the disparity information of the drivable-area boundary in the binocular image can be computed; based on the disparity information, the position of the vehicle's drivable area in the binocular image can be obtained. The detection method of the embodiments of the present application avoids the pixel-by-pixel disparity computation needed to obtain a global disparity image: by computing disparity information only for the drivable-area boundary, the boundary points can be located in the coordinate system of the autonomous vehicle, which greatly reduces the amount of computation while maintaining detection accuracy and improves the efficiency with which the autonomous vehicle detects road conditions.
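The reason boundary-only disparity suffices for positioning is the standard stereo relation Z = f·B/d for a rectified pair with focal length f (in pixels) and baseline B (in meters). A minimal sketch, with all camera parameters assumed for illustration:
```python
def boundary_depths(boundary_disparities, f_px=1200.0, baseline_m=0.35):
    """Forward distance (m) of each drivable-area boundary point.

    Only the boundary points are processed, never the full image, which is
    where the computational saving over a global disparity map comes from.
    f_px and baseline_m are assumed values, not parameters from the patent.
    """
    return [f_px * baseline_m / d for d in boundary_disparities if d > 0]
```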
With reference to the second aspect, in some implementations of the second aspect, the processing unit is further configured to: perform segmentation processing on the drivable-area boundary in a first image; and perform drivable-area boundary matching on a second image based on the N boundary-point line segments obtained by the segmentation processing, to obtain the disparity information, where N is an integer greater than or equal to 2.
The first image may be the left-eye image of the binocular image, in which case the second image is the right-eye image; alternatively, the first image may be the right-eye image, in which case the second image is the left-eye image.
In a possible implementation, the drivable-area boundary in the left-eye image can be segmented based on the inflection points of that boundary, yielding N boundary segments in the left-eye image; the drivable-area boundary in the right-eye image is then matched against these N segments to obtain the disparity information.
In another possible implementation, the drivable-area boundary in the right-eye image can be segmented based on the inflection points of that boundary, yielding N boundary segments in the right-eye image; the drivable-area boundary in the left-eye image is then matched against these N segments to obtain the disparity information.
In the embodiments of the present application, when matching the drivable-area boundary of the left-eye image against that of the right-eye image, that is, when computing the disparity information of the drivable-area boundary in the binocular image, the boundary in either image can first be segmented, and matching can then be performed segment by segment (see the sketch below). This improves the accuracy of boundary matching and helps obtain more accurate information about the vehicle's drivable area on the road.
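As an illustration of the segmentation step (not the patent's own code), one way to split an ordered boundary polyline at its inflection points is to cut wherever the local direction changes sharply; the angle threshold below is a hypothetical value:
```python
import numpy as np

def split_at_inflections(boundary, angle_jump_deg=20.0):
    """Split an ordered polyline of boundary pixels (u, v) into N segments
    at points where the direction of the boundary changes sharply.

    `angle_jump_deg` is an assumed threshold; the patent only states that
    segmentation follows the slope distribution / inflection points.
    """
    pts = np.asarray(boundary, dtype=np.float32)
    d = np.diff(pts, axis=0)
    angles = np.degrees(np.arctan2(d[:, 1], d[:, 0]))  # direction of each edge
    jumps = np.abs(np.diff(angles))
    jumps = np.minimum(jumps, 360.0 - jumps)           # handle wrap-around
    cut = np.where(jumps > angle_jump_deg)[0] + 1      # inflection-point indices
    return np.split(pts, cut)                          # N boundary segments
```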
With reference to the second aspect, in some implementations of the second aspect, the processing unit is specifically configured to: perform drivable-area boundary matching on the second image according to the N boundary-point line segments and a matching strategy, to obtain the disparity information, where the matching strategy is determined according to the slopes of the N boundary-point line segments.
In the embodiments of the present application, a matching strategy can also be applied when segment-wise matching the drivable areas of the left-eye and right-eye images: boundary-point line segments with different slopes are matched with different strategies, which improves the matching accuracy of the drivable-area boundary points.
With reference to the second aspect, in some implementations of the second aspect, the processing unit is specifically configured to:
for a first type of boundary-point line segment among the N boundary-point line segments, perform drivable-area boundary matching on the second image using a first matching strategy, where the first type of boundary-point line segment includes drivable-area boundaries along road edges and along the sides of other vehicles;
for a second type of boundary-point line segment among the N boundary-point line segments, perform drivable-area boundary matching on the second image using a second matching strategy, where the boundary points of the second type of line segment are at the same distance from the vehicle.
The first type of boundary-point line segment refers to segments with a larger slope, such as road-edge regions or the side regions of other vehicles; the second type refers to segments with a smaller slope, such as the rear regions of other vehicles. For segments with a larger slope, the drivable-area boundaries extracted from the left-eye and right-eye images essentially do not overlap; for segments with a smaller slope, the boundaries extracted from the two images partially overlap.
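A sketch of how a segment might be assigned to one of the two types (the numeric slope threshold is assumed; the patent only distinguishes larger from smaller slopes):
```python
def classify_segment(segment, slope_threshold=1.0):
    """Label a boundary segment: type 1 (steep, e.g. road edges or the sides
    of other vehicles) or type 2 (flat, e.g. the rear of another vehicle,
    whose points lie at roughly the same distance from the ego vehicle).

    `segment` is an ordered sequence of (u, v) pixel points; the threshold
    is an assumed value for illustration.
    """
    (u0, v0), (u1, v1) = segment[0], segment[-1]
    if u1 == u0:                       # vertical in the image: steepest case
        return 1
    slope = abs((v1 - v0) / (u1 - u0))
    return 1 if slope > slope_threshold else 2
```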
In the embodiments of the present application, a segmented matching strategy based on the slope of the drivable-area boundary is proposed: the detected drivable-area boundary is divided into segments according to its slope distribution, and different matching strategies are applied to boundary segments with larger and smaller slopes, which improves the matching accuracy of the drivable-area boundary points.
With reference to the second aspect, in some implementations of the second aspect, the matching strategy includes a first matching strategy and a second matching strategy. The first matching strategy refers to matching through a search region, where the search region is a region generated with any point of a first boundary-point line segment as its center, and the first boundary-point line segment is any one of the N boundary-point line segments. The second matching strategy refers to matching through a preset search step, where the preset search step is determined based on the boundary-point disparity of one of the N boundary-point line segments.
In a possible implementation, for a boundary-point line segment with a larger slope, a boundary point of the drivable-area boundary segment in the first image of the binocular image (for example, the left-eye image) can be used as a template point; a search region is then generated centered on the boundary point of the corresponding row in the second image (for example, the right-eye image) and matched against the template point of the first image.
In another possible implementation, for a boundary-point line segment with a smaller slope, some of the boundary points extracted from the first image (for example, the left-eye image) overlap with those extracted from the second image (for example, the right-eye image); an inflection-point disparity correction method can therefore be used to match these boundary points.
With reference to the second aspect, in some implementations of the second aspect, the N boundary-point line segments are determined according to the inflection points of the drivable-area boundary in the first image.
In a third aspect, an apparatus for detecting a vehicle's drivable area is provided, including: a memory for storing a program; and a processor for executing the program stored in the memory. When the program stored in the memory is executed, the processor is configured to perform the following process: acquiring a binocular image in the vehicle's direction of travel, where the binocular image includes a left-eye image and a right-eye image; obtaining disparity information of the drivable-area boundary in the binocular image according to the drivable-area boundary in the left-eye image and the drivable-area boundary in the right-eye image; and obtaining the vehicle's drivable area in the binocular image based on the disparity information.
In a possible implementation, the processor included in the above detection apparatus is further configured to execute the method for detecting a vehicle's drivable area of the first aspect or of any one of its implementations.
It should be understood that the extensions, limitations, explanations, and descriptions of the relevant content in the first aspect above also apply to the same content in the third aspect.
In a fourth aspect, a computer-readable storage medium is provided. The computer-readable storage medium is used to store program code; when the program code is executed by a computer, the computer performs the detection method of the first aspect or of any one of its implementations.
In a fifth aspect, a chip is provided. The chip includes a processor configured to execute the detection method of the first aspect or of any one of its implementations.
In a possible implementation, the chip of the fifth aspect may be located in an in-vehicle terminal of an autonomous vehicle.
In a sixth aspect, a computer program product is provided. The computer program product includes computer program code; when the computer program code runs on a computer, the computer performs the detection method of the first aspect or of any one of its implementations.
It should be noted that the above computer program code may be stored in whole or in part on a first storage medium, where the first storage medium may be packaged together with the processor or packaged separately from the processor; this is not specifically limited in the embodiments of the present application.
Description of the drawings
Fig. 1 is a schematic structural diagram of a vehicle provided by an embodiment of the present application;
Fig. 2 is a schematic structural diagram of a computer system provided by an embodiment of the present application;
Fig. 3 is a schematic diagram of an application in which a cloud-side server instructs an autonomous vehicle, provided by an embodiment of the present application;
Fig. 4 is a schematic diagram of a detection system of an autonomous vehicle provided by an embodiment of the present application;
Fig. 5 is a schematic diagram of coordinate conversion provided by an embodiment of the present application;
Fig. 6 is a schematic flowchart of a method for detecting a vehicle's drivable area provided by an embodiment of the present application;
Fig. 7 is a schematic flowchart of a method for detecting a vehicle's drivable area provided by an embodiment of the present application;
Fig. 8 is a schematic diagram of drivable-area boundary segmentation provided by an embodiment of the present application;
Fig. 9 is a schematic diagram of acquiring a road-surface image provided by an embodiment of the present application;
Fig. 10 is a schematic diagram of matching boundary-point line segments with a larger slope provided by an embodiment of the present application;
Fig. 11 is a schematic diagram of boundary-point line segments with a smaller slope provided by an embodiment of the present application;
Fig. 12 is a schematic diagram of sub-pixelation processing provided by an embodiment of the present application;
Fig. 13 is a schematic diagram of obtaining the positioning of drivable-area boundary points provided by an embodiment of the present application;
Fig. 14 is a schematic diagram of obtaining the positioning of drivable-area boundary points provided by an embodiment of the present application;
Fig. 15 is a schematic diagram of the method for detecting a vehicle's drivable area according to an embodiment of the present application applied to a specific product form;
Fig. 16 is a schematic structural diagram of an apparatus for detecting a vehicle's drivable area provided by an embodiment of the present application;
Fig. 17 is a schematic structural diagram of an apparatus for detecting a vehicle's drivable area provided by another embodiment of the present application.
Detailed description of the embodiments
The technical solutions in the embodiments of the present application will be described below with reference to the drawings of the embodiments. Obviously, the described embodiments are only a part of the embodiments of the present application, rather than all of them. Based on the embodiments in this application, all other embodiments obtained by a person of ordinary skill in the art without creative effort shall fall within the protection scope of this application.
Fig. 1 is a functional block diagram of a vehicle 100 provided by an embodiment of the present application.
The vehicle 100 may be a manually driven vehicle, or the vehicle 100 may be configured in a fully or partially autonomous driving mode.
In one example, the vehicle 100 can control itself while in the autonomous driving mode: it can determine the current state of the vehicle and its surrounding environment through human operation, determine the possible behavior of at least one other vehicle in the surrounding environment, determine a confidence level corresponding to the likelihood that the other vehicle performs that behavior, and control the vehicle 100 based on the determined information. When the vehicle 100 is in the autonomous driving mode, it can be set to operate without human interaction.
The vehicle 100 may include various subsystems, for example, a traveling system 110, a sensing system 120, a control system 130, one or more peripheral devices 140, a power supply 160, a computer system 150, and a user interface 170.
Optionally, the vehicle 100 may include more or fewer subsystems, and each subsystem may include multiple elements. In addition, the subsystems and elements of the vehicle 100 may be interconnected by wire or wirelessly.
By way of example, the traveling system 110 may include components that provide powered motion to the vehicle 100. In one embodiment, the traveling system 110 may include an engine 111, a transmission 112, an energy source 113, and wheels/tires 114. The engine 111 may be an internal combustion engine, an electric motor, an air-compression engine, or a combination of engine types, for example, a hybrid engine composed of a gasoline engine and an electric motor, or a hybrid engine composed of an internal combustion engine and an air-compression engine. The engine 111 converts the energy source 113 into mechanical energy.
By way of example, the energy source 113 may include gasoline, diesel, other petroleum-based fuels, propane, other compressed-gas-based fuels, ethanol, solar panels, batteries, and other sources of electric power. The energy source 113 may also provide energy for other systems of the vehicle 100.
By way of example, the transmission 112 may include a gearbox, a differential, and a drive shaft; the transmission 112 transmits mechanical power from the engine 111 to the wheels 114.
In one embodiment, the transmission 112 may also include other components, such as a clutch. The drive shaft may include one or more axles that can be coupled to one or more wheels 114.
By way of example, the sensing system 120 may include several sensors that sense information about the environment around the vehicle 100.
For example, the sensing system 120 may include a positioning system 121 (for example, a GPS system, a BeiDou system, or another positioning system), an inertial measurement unit (IMU) 122, a radar 123, a laser rangefinder 124, and a camera 125. The sensing system 120 may also include sensors that monitor the internal systems of the vehicle 100 (for example, an in-vehicle air-quality monitor, a fuel gauge, an oil-temperature gauge, and the like). Sensor data from one or more of these sensors can be used to detect objects and their corresponding characteristics (position, shape, direction, speed, and so on). Such detection and recognition are key functions for the safe operation of the autonomous vehicle 100.
The positioning system 121 can be used to estimate the geographic location of the vehicle 100. The IMU 122 can be used to sense changes in the position and orientation of the vehicle 100 based on inertial acceleration. In one embodiment, the IMU 122 may be a combination of an accelerometer and a gyroscope.
By way of example, the radar 123 may use radio signals to sense objects in the surrounding environment of the vehicle 100. In some embodiments, in addition to sensing objects, the radar 123 may also be used to sense the speed and/or heading of the objects.
By way of example, the laser rangefinder 124 may use laser light to sense objects in the environment in which the vehicle 100 is located. In some embodiments, the laser rangefinder 124 may include one or more laser sources, a laser scanner, one or more detectors, and other system components.
By way of example, the camera 125 may be used to capture multiple images of the surrounding environment of the vehicle 100. For example, the camera 125 may be a still camera or a video camera.
As shown in Fig. 1, the control system 130 controls the operation of the vehicle 100 and its components. The control system 130 may include various elements, for example, a steering system 131, a throttle 132, a braking unit 133, a computer vision system 134, a route control system 135, and an obstacle avoidance system 136.
By way of example, the steering system 131 can be operated to adjust the heading of the vehicle 100. For example, in one embodiment it may be a steering-wheel system. The throttle 132 can be used to control the operating speed of the engine 111 and thereby the speed of the vehicle 100.
By way of example, the braking unit 133 can be used to decelerate the vehicle 100; the braking unit 133 may use friction to slow the wheels 114. In other embodiments, the braking unit 133 may convert the kinetic energy of the wheels 114 into electric current. The braking unit 133 may also take other forms to slow the rotation of the wheels 114 and thereby control the speed of the vehicle 100.
As shown in Fig. 1, the computer vision system 134 can be operated to process and analyze the images captured by the camera 125 in order to recognize objects and/or features in the surrounding environment of the vehicle 100. The objects and/or features may include traffic signals, road boundaries, and obstacles. The computer vision system 134 may use object-recognition algorithms, structure-from-motion (SFM) algorithms, video tracking, and other computer vision techniques. In some embodiments, the computer vision system 134 can be used to map the environment, track objects, estimate the speed of objects, and so on.
By way of example, the route control system 135 can be used to determine the travel route of the vehicle 100. In some embodiments, the route control system 135 may combine data from sensors, GPS, and one or more predetermined maps to determine the travel route of the vehicle 100.
As shown in Fig. 1, the obstacle avoidance system 136 can be used to identify, evaluate, and avoid or otherwise negotiate potential obstacles in the environment of the vehicle 100.
In one example, the control system 130 may additionally or alternatively include components other than those shown and described, or some of the components shown above may be omitted.
As shown in Fig. 1, the vehicle 100 can interact with external sensors, other vehicles, other computer systems, or users through the peripheral devices 140; the peripheral devices 140 may include a wireless communication system 141, an onboard computer 142, a microphone 143, and/or a speaker 144.
In some embodiments, the peripheral devices 140 may provide a means for the vehicle 100 to interact with the user interface 170. For example, the onboard computer 142 may provide information to a user of the vehicle 100. The user interface 170 can also operate the onboard computer 142 to receive user input; the onboard computer 142 can be operated through a touchscreen. In other cases, the peripheral devices 140 may provide a means for the vehicle 100 to communicate with other devices located in the vehicle. For example, the microphone 143 may receive audio (for example, voice commands or other audio input) from a user of the vehicle 100. Similarly, the speaker 144 may output audio to a user of the vehicle 100.
As shown in Fig. 1, the wireless communication system 141 may communicate wirelessly with one or more devices, either directly or via a communication network. For example, the wireless communication system 141 may use 3G cellular communication such as code division multiple access (CDMA), EVDO, or global system for mobile communications (GSM)/general packet radio service (GPRS); 4G cellular communication such as long term evolution (LTE); or 5G cellular communication. The wireless communication system 141 may communicate with a wireless local area network (WLAN) using WiFi.
In some embodiments, the wireless communication system 141 may communicate directly with devices using an infrared link, Bluetooth, or ZigBee; other wireless protocols are also possible, for example various vehicle communication systems. For instance, the wireless communication system 141 may include one or more dedicated short range communications (DSRC) devices, which may carry public and/or private data communications between vehicles and/or roadside stations.
As shown in Fig. 1, the power supply 160 may provide power to various components of the vehicle 100. In one embodiment, the power supply 160 may be a rechargeable lithium-ion or lead-acid battery. One or more battery packs of such batteries may be configured as a power supply to provide power to various components of the vehicle 100. In some embodiments, the power supply 160 and the energy source 113 may be implemented together, as in some all-electric vehicles.
By way of example, some or all of the functions of the vehicle 100 may be controlled by the computer system 150, which may include at least one processor 151 that executes instructions 153 stored in a non-transitory computer-readable medium such as the memory 152. The computer system 150 may also be multiple computing devices that control individual components or subsystems of the vehicle 100 in a distributed manner.
For example, the processor 151 may be any conventional processor, such as a commercially available CPU.
Optionally, the processor may be a dedicated device such as an ASIC or another hardware-based processor. Although Fig. 1 functionally illustrates the processor, the memory, and other elements of the computer in the same block, a person of ordinary skill in the art will understand that the processor, computer, or memory may in fact comprise multiple processors, computers, or memories that may or may not be housed in the same physical enclosure. For example, the memory may be a hard drive or another storage medium located in an enclosure different from that of the computer. Therefore, a reference to a processor or computer is to be understood as including a reference to a collection of processors, computers, or memories that may or may not operate in parallel. Rather than using a single processor to perform the steps described here, some components, such as the steering component and the deceleration component, may each have their own processor that performs only the computations related to that component's specific function.
In the various aspects described here, the processor may be located remotely from the vehicle and communicate with it wirelessly. In other aspects, some of the processes described here are executed on a processor arranged in the vehicle while others are executed by a remote processor, including taking the steps necessary to perform a single maneuver.
In some embodiments, the memory 152 may contain instructions 153 (for example, program logic) that can be executed by the processor 151 to perform various functions of the vehicle 100, including those described above. The memory 152 may also contain additional instructions, for example, instructions to send data to, receive data from, interact with, and/or control one or more of the traveling system 110, the sensing system 120, the control system 130, and the peripheral devices 140.
By way of example, in addition to the instructions 153, the memory 152 may also store data, such as road maps, route information, the position, heading, and speed of the vehicle, other such vehicle data, and other information. Such information may be used by the vehicle 100 and the computer system 150 while the vehicle 100 operates in autonomous, semi-autonomous, and/or manual modes.
As shown in Fig. 1, the user interface 170 can be used to provide information to or receive information from a user of the vehicle 100. Optionally, the user interface 170 may include one or more input/output devices in the set of peripheral devices 140, for example, the wireless communication system 141, the onboard computer 142, the microphone 143, and the speaker 144.
In the embodiments of the present application, the computer system 150 may control the functions of the vehicle 100 based on inputs received from the various subsystems (for example, the traveling system 110, the sensing system 120, and the control system 130) and from the user interface 170. For example, the computer system 150 may use input from the control system 130 to control the braking unit 133 so as to avoid obstacles detected by the sensing system 120 and the obstacle avoidance system 136. In some embodiments, the computer system 150 is operable to provide control over many aspects of the vehicle 100 and its subsystems.
Optionally, one or more of the above components may be installed separately from, or merely associated with, the vehicle 100. For example, the memory 152 may exist partially or completely separate from the vehicle 100. The above components may be communicatively coupled together by wire and/or wirelessly.
Optionally, the above components are only an example; in practice, components in the above modules may be added or removed according to actual needs, and Fig. 1 should not be construed as limiting the embodiments of the present application.
Optionally, the vehicle 100 may be an autonomous vehicle traveling on a road, which can recognize objects in its surrounding environment to determine an adjustment to its current speed. The objects may be other vehicles, traffic control devices, or other types of objects. In some examples, each recognized object can be considered independently, and its respective characteristics, such as its current speed, acceleration, and distance from the vehicle, can be used to determine the speed to which the autonomous vehicle should adjust.
Optionally, the vehicle 100 or a computing device associated with it (such as the computer system 150, the computer vision system 134, or the memory 152 of Fig. 1) may predict the behavior of a recognized object based on the object's characteristics and the state of the surrounding environment (for example, traffic, rain, ice on the road, and so on).
Optionally, the recognized objects depend on one another's behavior, so all of the recognized objects can also be considered together to predict the behavior of a single recognized object. The vehicle 100 can adjust its speed based on the predicted behavior of the recognized object. In other words, the autonomous vehicle can determine, based on the predicted behavior of the object, what steady state the vehicle will need to adjust to (for example, accelerate, decelerate, or stop). In this process, other factors may also be considered to determine the speed of the vehicle 100, such as the lateral position of the vehicle 100 on the road on which it is traveling, the curvature of the road, the proximity of static and dynamic objects, and so on.
In addition to providing instructions to adjust the speed of the autonomous vehicle, the computing device may also provide instructions to modify the steering angle of the vehicle 100 so that the autonomous vehicle follows a given trajectory and/or maintains safe lateral and longitudinal distances from objects near it (for example, cars in adjacent lanes on the road).
The vehicle 100 may be a car, a truck, a motorcycle, a bus, a boat, an airplane, a helicopter, a lawn mower, a recreational vehicle, an amusement-park vehicle, construction equipment, a tram, a golf cart, a train, a trolley, or the like; the embodiments of the present application impose no particular limitation.
In a possible implementation, the vehicle 100 shown in Fig. 1 may be an autonomous vehicle; the autonomous driving system is described in detail below.
Fig. 2 is a schematic diagram of an autonomous driving system provided by an embodiment of the present application.
The autonomous driving system shown in Fig. 2 includes a computer system 201, where the computer system 201 includes a processor 203 coupled to a system bus 205. The processor 203 may be one or more processors, each of which may include one or more processor cores. A display adapter 207 (video adapter) can drive a display 209, which is coupled to the system bus 205. The system bus 205 is coupled to an input/output (I/O) bus 213 through a bus bridge 211, and an I/O interface 215 is coupled to the I/O bus. The I/O interface 215 communicates with a variety of I/O devices, for example, an input device 217 (such as a keyboard, mouse, or touchscreen) and a media tray 221 (for example, a CD-ROM or multimedia interface). A transceiver 223 can send and/or receive radio communication signals, and a camera 255 can capture static and dynamic digital video images. The interface connected to the I/O interface 215 may be a USB port 225.
The processor 203 may be any conventional processor, for example, a reduced instruction set computing (RISC) processor, a complex instruction set computing (CISC) processor, or a combination of the two.
Optionally, the processor 203 may be a dedicated device such as an application-specific integrated circuit (ASIC); the processor 203 may also be a neural-network processor or a combination of a neural-network processor and a conventional processor as described above.
Optionally, in the various embodiments described herein, the computer system 201 may be located far from the autonomous vehicle and communicate with it wirelessly. In other respects, some of the processes described herein are executed on a processor arranged in the autonomous vehicle while others are executed by a remote processor, including taking the actions required to perform a single maneuver.
The computer system 201 can communicate with a software deployment server 249 through a network interface 229. The network interface 229 may be a hardware network interface, for example, a network card. The network 227 may be an external network, such as the Internet, or an internal network, such as an Ethernet or a virtual private network (VPN). Optionally, the network 227 may also be a wireless network, such as a WiFi network or a cellular network.
As shown in Fig. 2, a hard-drive interface is coupled to the system bus 205; the hard-drive interface 231 can be connected to a hard drive 233, and a system memory 235 is coupled to the system bus 205. The data running in the system memory 235 may include an operating system 237 and application programs 243. The operating system 237 may include a shell 239 and a kernel 241. The shell 239 is an interface between the user and the kernel of the operating system. The shell is the outermost layer of the operating system; it manages the interaction between the user and the operating system, for example, waiting for user input, interpreting the user's input to the operating system, and handling the various outputs of the operating system. The kernel 241 consists of the parts of the operating system that manage memory, files, peripherals, and system resources. Interacting directly with the hardware, the operating-system kernel usually runs processes, provides inter-process communication, and provides CPU time-slice management, interrupts, memory management, I/O management, and so on. The application programs 243 include programs related to controlling the automatic driving of a car, for example, programs that manage the interaction between the autonomous vehicle and obstacles on the road, programs that control the route or speed of the autonomous vehicle, and programs that control the interaction between the autonomous vehicle and other autonomous vehicles on the road. The application programs 243 also exist on the system of the software deployment server 249. In one embodiment, when an autonomous-driving-related program 247 needs to be executed, the computer system 201 may download the application program from the software deployment server 249.
For example, the application program 243 may also be a program that controls the autonomous vehicle to perform automatic parking.
By way of example, a sensor 253 may be associated with the computer system 201, and the sensor 253 can be used to detect the environment around the computer system 201.
For example, the sensor 253 can detect animals, cars, obstacles, crosswalks, and so on; furthermore, the sensor can also detect the environment around such objects, for example, the environment around an animal, such as other animals appearing nearby, weather conditions, and the brightness of the surroundings.
Optionally, if the computer system 201 is located on an autonomous vehicle, the sensor may be a camera, an infrared sensor, a chemical detector, a microphone, or the like.
By way of example, in an automatic-parking scenario, the sensor 253 can be used to detect the size and position of the parking space and of the surrounding obstacles, so that the vehicle can perceive its distance from the parking space and from the obstacles and perform collision detection while parking, preventing the vehicle from colliding with obstacles.
In one example, the computer system 150 shown in Fig. 1 may also receive information from, or transfer information to, other computer systems. Alternatively, the sensor data collected by the sensing system 120 of the vehicle 100 may be transferred to another computer to be processed.
For example, as shown in Fig. 3, data from a computer system 312 may be transmitted via a network to a cloud-side server 320 for further processing. The network and intermediate nodes may include various configurations and protocols, including the Internet, the World Wide Web, intranets, virtual private networks, wide area networks, local area networks, private networks using the proprietary communication protocols of one or more companies, Ethernet, WiFi, HTTP, and various combinations of the foregoing; such communication may be carried out by any device capable of transferring data to and from other computers, such as modems and wireless interfaces.
In one example, the server 320 may comprise a server farm with multiple computers, such as a load-balancing server cluster, which exchanges information with different nodes of the network for the purpose of receiving, processing, and transmitting data from the computer system 312. The server may be configured similarly to the computer system 312, with a processor 330, a memory 340, instructions 350, and data 360.
By way of example, the data 360 of the server 320 may include information about road conditions around the vehicle. For example, the server 320 may receive, detect, store, update, and transmit information related to the road conditions of the vehicle.
For example, the information about road conditions around the vehicle includes information about other vehicles and obstacles around the vehicle.
At present, drivable-area detection for autonomous vehicles usually adopts a monocular vision method or a binocular vision method. In the monocular vision method, an image of the road environment is acquired by a monocular camera, and the drivable area in the image is detected by a pre-trained deep neural network; based on the detected drivable area, a planar assumption is applied, that is, the autonomous vehicle is assumed to be in a flat area without slopes; the drivable area in the image is then converted from the two-dimensional pixel coordinate system to the three-dimensional coordinate system of the autonomous vehicle, completing the spatial positioning of the drivable area. In the binocular vision method, the images output by a binocular camera are acquired to obtain a global disparity map of the road environment; the drivable area is detected from the disparity map, and the distance-measuring capability of the binocular camera is then used to convert the drivable area from the two-dimensional pixel coordinate system to the three-dimensional coordinate system of the autonomous vehicle, completing the spatial positioning of the drivable area.
However, in the monocular vision method the monocular camera cannot perceive distance, so the transformation of the drivable area from the pixel coordinate system to the coordinate system of the autonomous vehicle has to rely on the planar assumption, that is, that the road on which the vehicle is located is completely flat and has no slopes, which results in low positioning accuracy of the drivable area. In the binocular vision method, the positioning of the drivable area depends on the disparity map, and computing a global disparity map from the binocular camera is expensive, so the autonomous vehicle cannot achieve real-time processing, which poses safety risks during driving. Therefore, how to improve the real-time performance of methods for detecting a vehicle's drivable area has become an urgent problem to be solved.
In view of this, the embodiments of the present application provide a method and an apparatus for detecting a vehicle's drivable area. In the detection method of the embodiments of the present application, a binocular image in the vehicle's direction of travel can be acquired; from the binocular image, the drivable-area boundaries in the left-eye and right-eye images can be obtained; from the boundary in the left-eye image and the boundary in the right-eye image, the disparity information of the drivable-area boundary in the binocular image can be computed; and based on the disparity information, the position of the vehicle's drivable area in the binocular image can be obtained. The detection method of the embodiments of the present application avoids the pixel-by-pixel disparity computation of the binocular image needed to obtain a global disparity image: no global disparity image has to be computed, and only the disparity information of the drivable-area boundary in the binocular image needs to be computed to locate the boundary points in the ego-vehicle coordinate system, which greatly reduces the amount of computation for a given accuracy and improves the real-time performance of the detection system of the autonomous vehicle.
The method for detecting a vehicle's drivable area in the embodiments of the present application is described in detail below with reference to Figs. 4 to 14.
Fig. 4 is a schematic diagram of a detection system of an autonomous vehicle provided by an embodiment of the present application. The detection system 400 can be used to execute the method for detecting a vehicle's drivable area; the detection system 400 may include a perception module 410, a drivable-area detection module 420, a drivable-area registration module 430, and a coordinate-system conversion module 440.
The perception module 410 can be used to perceive information about the road surface and the surrounding environment while the autonomous vehicle is driving. The perception module may include a binocular camera, which in turn may include a left-eye camera and a right-eye camera; the binocular camera can be used to perceive the environmental information around the vehicle for the subsequent drivable-area detection by the deep learning network. The left-eye and right-eye cameras satisfy image-frame synchronization, which may be implemented in hardware or in software.
For example, to ensure that the vehicle-mounted binocular camera can be used normally in intelligent-driving scenarios, the baseline distance of the binocular camera may be greater than 30 cm to support the detection of objects at about 100 meters.
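As a rough plausibility check (with an assumed focal length, since the patent gives only the baseline and the target range): with f ≈ 1200 pixels and B = 0.35 m, a point at Z = 100 m produces a disparity of d = f·B/Z ≈ 1200 × 0.35 / 100 ≈ 4.2 pixels, which is still measurable, particularly after the sub-pixel refinement described below.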
可行驶区域检测模块420可以用于实现在像素坐标系下的可行驶区域的检测;该模块可以由深度学习网络构成,用于检测左眼图像与右眼图像中的可行驶区域;可行驶区域检测模块420的输入数据可以为上述感知模块410中包括的左眼相机与右眼相机采集到的图像数据,输出数据可以为左眼图像与右眼图像中的可行驶区域边界点在像素坐标系下的坐标。The drivable area detection module 420 can be used to detect the drivable area in the pixel coordinate system; the module can be composed of a deep learning network to detect the drivable area in the left-eye image and the right-eye image; the drivable area The input data of the detection module 420 may be the image data collected by the left-eye camera and the right-eye camera included in the above-mentioned perception module 410, and the output data may be the boundary point of the drivable area in the left-eye image and the right-eye image in the pixel coordinate system. The coordinates below.
The drivable-area registration module 430 may be used to execute at least the following five steps:

First step: segmentation of the left-eye drivable-area boundary; that is, in the pixel coordinate system, the extracted boundary points of the drivable area in the left-eye image are segmented according to slope changes, dividing the continuous drivable-area boundary points into N segments.

Second step: segment-wise matching of the left-eye and right-eye drivable-area boundary points; that is, in the pixel coordinate system, based on the segmented boundary points, different matching strategies are applied to different segments to match the boundary points of the left-eye drivable area against those of the right-eye drivable area.

Third step: registration filtering of the drivable area; that is, in the pixel coordinate system, the matched boundary points are filtered by a filtering algorithm to ensure the accuracy of the boundary-point registration.

Fourth step: disparity computation of the drivable-area boundary points; that is, in the pixel coordinate system, the disparity of the corresponding boundary points is computed from the registered boundary points.

Fifth step: disparity sub-pixelation; that is, in the pixel coordinate system, the disparity obtained above is refined to sub-pixel precision, so that the coordinate positioning of drivable-area boundary points remains highly accurate even at long range.
Here, sub-pixel refers to subdividing the interval between two adjacent pixels, meaning that each pixel is divided into smaller units.
The coordinate-system conversion module 440 may be used to locate the boundary points of the drivable area along the X and Y directions. The X distance from a boundary point to the ego vehicle may be computed from the disparity obtained above, converting the boundary point from the pixel coordinate system to the X coordinate of the ego-vehicle coordinate system by triangulation or by the least-squares method; the Y distance from a boundary point to the ego vehicle may then be computed from the obtained X distance, converting the boundary point from the pixel coordinate system to the Y coordinate of the ego-vehicle coordinate system by triangulation.

Illustratively, FIG. 5 is a schematic diagram of the coordinate conversion: a point P(x, y) on the drivable-area boundary can be converted from the pixel coordinates of the two-dimensional image into the three-dimensional coordinate system of the autonomous vehicle, where the X direction may denote the forward direction from the position of the autonomous vehicle, and the Y direction may denote the leftward direction from the position of the autonomous vehicle.

It should be understood that FIG. 5 is an example of the coordinate system and does not limit the directions in the coordinate system in any way.
FIG. 6 is a schematic flowchart of a method for detecting a vehicle's drivable area according to an embodiment of the present application. The detection method shown in FIG. 6 may be executed by the vehicle shown in FIG. 1, the automatic driving system shown in FIG. 2, or the detection system shown in FIG. 4; it includes steps 510 to 530, which are described in detail below.

Step 510: Acquire binocular images of the driving direction of the vehicle.

The binocular images may include a left-eye image and a right-eye image; for example, they may be the left and right two-dimensional images captured by two parallel, equal-height cameras of the autonomous vehicle, or the images acquired by the binocular camera shown in FIG. 4.

For example, the binocular images may be images of the road surface or the surrounding environment in the driving direction of the autonomous vehicle, such as images of the road surface and of obstacles and pedestrians near the vehicle.
Step 520: Obtain the disparity information of the drivable-area boundary in the binocular images from the drivable-area boundary in the left-eye image and the drivable-area boundary in the right-eye image.

It should be noted that disparity information refers to the difference in direction produced when the same target is observed from two points a certain distance apart. For example, the horizontal difference between the positions of the same drivable road area as captured by the left-eye camera and the right-eye camera of the autonomous vehicle constitutes disparity information.

Illustratively, the acquired left-eye image and right-eye image may each be input to a deep learning network pre-trained to recognize the drivable area in an image; the pre-trained deep learning network can then identify the vehicle's drivable area in the left-eye image and the right-eye image.
Step 530: Obtain the vehicle's drivable area in the binocular images based on the disparity information.

For example, the position of the vehicle's drivable area in the binocular images may be obtained by triangulation from the disparity information of the drivable-area boundary.

Further, in the embodiments of the present application, when the drivable-area boundary in the left-eye image is matched against the drivable-area boundary in the right-eye image to compute the disparity information, the drivable-area boundary in either one of the binocular images may be segmented, and the segments may then be matched segment by segment. This improves the accuracy of boundary matching and helps obtain more accurate information about the drivable area of the road.
In a possible implementation, the detection method further includes segmenting the drivable-area boundary in a first image of the binocular images. In step 520, obtaining the disparity information from the boundaries in the left-eye and right-eye images may include performing drivable-area boundary matching on a second image of the binocular images based on the N boundary-point line segments obtained by the segmentation, where N is an integer greater than or equal to 2.

The first image may be the left-eye image of the binocular images, in which case the second image is the right-eye image; alternatively, the first image may be the right-eye image, in which case the second image is the left-eye image.

Illustratively, segmenting the drivable-area boundary in the first image and performing boundary matching in the second image may mean segmenting the drivable-area boundary in the left-eye image and matching the resulting N boundary-point line segments in the right-eye image to obtain the disparity information; or it may mean segmenting the drivable-area boundary in the right-eye image and matching the resulting N boundary-point line segments in the left-eye image to obtain the disparity information.

In the embodiments of the present application, the N boundary-point line segments obtained by the segmentation may be divided, according to their slopes, into small-slope segments and large-slope segments, so that the drivable-area boundaries in the binocular images are matched according to a matching strategy, and the disparity information is obtained from the matching result.
In a possible implementation, performing drivable-area boundary matching on the second image based on the N boundary-point line segments to obtain the disparity information includes: matching the second image according to the N boundary-point line segments and a matching strategy, where the matching strategy is determined from the slopes of the N boundary-point line segments.

For example, for a first type of boundary-point line segment among the N segments, a first matching strategy of the matching strategy may be used for the boundary matching on the second image, where the first type of segment includes drivable-area boundaries along road edges and along the sides of other vehicles.

For a second type of boundary-point line segment among the N segments, a second matching strategy of the matching strategy may be used for the boundary matching on the second image, where the boundary points within a segment of the second type are at the same distance from the vehicle.

The first type of boundary-point line segment may be a segment of relatively large slope, for example along a road edge or the side of another vehicle; the second type may be a segment of relatively small slope, for example along the rear of another vehicle. For large-slope segments, there is essentially no overlap between the drivable-area boundaries extracted from the left-eye image and the right-eye image; for small-slope segments, some boundary points extracted from the left-eye image overlap with those extracted from the right-eye image.

Optionally, the matching strategy may include a first matching strategy and a second matching strategy. The first matching strategy matches via a search area, where the search area is a region generated around any point of a first boundary-point line segment, the first boundary-point line segment being any one of the N segments. The second matching strategy matches via a preset search step, where the preset search step is determined from the boundary-point disparity of one of the N boundary-point line segments.
Illustratively, for the large-slope segments, a boundary point of the drivable-area boundary segment in the first image (for example, the left-eye image) of the binocular images may be taken as a template point, and a search area may be generated centered on the boundary point of the second image (for example, the right-eye image) corresponding to the same row, and matched against the template point of the first image. The specific procedure is shown in FIG. 7 below.

Illustratively, for the small-slope segments, since some boundary points extracted from the first image (for example, the left-eye image) overlap with boundary points extracted from the second image (for example, the right-eye image), inflection-point disparity correction may be used to match this part of the boundary points. The specific procedure is shown in FIG. 7 below.

In the embodiments of the present application, a segment-wise matching strategy based on the slope of the drivable-area boundary is proposed: the detected drivable area is segmented according to its slope distribution, and different matching strategies are applied to the large-slope and small-slope boundaries. This improves the matching accuracy of the drivable-area boundary points.
FIG. 7 is a schematic flowchart of a method for detecting a vehicle's drivable area according to an embodiment of the present application. The detection method shown in FIG. 7 may be executed by the vehicle shown in FIG. 1, the automatic driving system shown in FIG. 2, or the detection system shown in FIG. 4; it includes steps 601 to 614, which are described in detail below.

Step 601: Start; that is, begin executing the method for detecting the vehicle's drivable area.
Step 602: Acquire a left-eye image.

The left-eye image may be an image of the road surface or the surrounding environment acquired by one camera of the binocular camera (for example, the left-eye camera).

Step 603: Acquire a right-eye image.

The right-eye image may be an image of the road surface or the surrounding environment acquired by the other camera of the binocular camera (for example, the right-eye camera).

It should be noted that steps 602 and 603 may be executed at the same time; alternatively, step 603 may be executed before step 602. This application does not limit the order.
Step 604: Detect the vehicle's drivable area in the acquired left-eye image.

For example, a deep learning network for recognizing the drivable area in an image may be pre-trained on training data; the pre-trained deep learning network can then identify the vehicle's drivable area in the left-eye image.

Illustratively, the pre-trained deep learning network may be used to detect the drivable area in the pixel coordinate system: the input data may be the captured left-eye image, and the output data may be the coordinate vector, in the pixel coordinate system of the left-eye image, of the detected drivable area. Based on the output coordinate vector, the coordinates of the drivable area in the pixel coordinate system can be located in the left-eye image.

Step 605: Detect the vehicle's drivable area in the acquired right-eye image.

Similarly, the deep learning network pre-trained in step 604 may be used to detect the drivable area in the right-eye image.

Illustratively, the acquired left-eye image and right-eye image may be input to the pre-trained deep learning network at the same time; alternatively, they may be input one after the other. Either way, the coordinates of the drivable areas in the left-eye image and the right-eye image can be detected.
Step 606: Segment the drivable area in the left-eye image.

Illustratively, the boundary of the drivable area may be segmented according to the different slopes of the boundary points of the drivable area detected in the left-eye image.

Illustratively, given the coordinate vector of the drivable-area boundary P{p_1(x_1, y_1), p_2(x_2, y_2), ..., p_n(x_n, y_n)}, the slope at each point can be computed with the following formula:
$$K_i = \frac{y_i - y_{i-1}}{x_i - x_{i-1}}$$
Here K_i denotes the slope at the i-th point; x_i and y_i denote the abscissa and ordinate of the i-th point in the pixel coordinate system; and x_{i-1} and y_{i-1} denote the abscissa and ordinate of the (i-1)-th point in the pixel coordinate system.
Further, the above formula yields the slope vector K{k_1, k_2, ..., k_n} over the points of the drivable-area boundary, and the inflection points of the boundary are detected from jumps in the slope. For example, as shown in FIG. 8, the inflection points on the boundary of the vehicle's drivable area include points B, C, D, E, F, and G; the boundary can be divided at these inflection points into segments AB, BC, CD, DE, EF, FG, and GH.

Illustratively, the drivable-area boundary may be divided into N segments in this way, and the N segments may be classified into two categories according to the slope distribution of each segment: small-slope boundary-point line segments and large-slope boundary-point line segments.

For example, (a) in FIG. 9 shows a road-surface image (for example, the left-eye image) acquired by one camera of the binocular camera; from this image it can be seen that the boundary of the vehicle's drivable area includes points A, B, C, and D. If a detected boundary segment lies along a road edge or along the side of another vehicle, the corresponding boundary segment in the pixel coordinate system has a large slope, such as segments AB, CD, EF, and GH on the drivable-area boundary shown in FIG. 8; if a detected boundary segment lies along the rear of another vehicle, the corresponding boundary segment has a small slope, such as segments BC, DE, and FG shown in FIG. 8. A sketch of this segmentation is given below.
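A minimal Python sketch of the slope-based segmentation, assuming the boundary is an ordered list of pixel coordinates; the jump and slope thresholds are illustrative values, not ones specified by this application:

```python
import numpy as np

def segment_boundary(points, jump_thresh=1.0, flat_thresh=0.3):
    """Split an ordered drivable-area boundary into segments at slope jumps.

    points: (n, 2) array-like of (x, y) pixel coordinates along the boundary.
    Returns a list of (segment, label), where label is "steep" (road edge or
    vehicle side) or "flat" (e.g. the rear of another vehicle).
    """
    pts = np.asarray(points, dtype=float)
    dx = np.diff(pts[:, 0])
    dy = np.diff(pts[:, 1])
    k = dy / np.where(dx == 0, 1e-6, dx)        # slope K_i between neighbors

    cuts = {0, len(pts) - 1}
    for i in range(1, len(k)):
        if abs(k[i] - k[i - 1]) > jump_thresh:  # slope jump -> inflection point
            cuts.add(i)
    cuts = sorted(cuts)

    segments = []
    for a, b in zip(cuts[:-1], cuts[1:]):
        mean_slope = float(np.mean(np.abs(k[a:b])))
        label = "steep" if mean_slope > flat_thresh else "flat"
        segments.append((pts[a:b + 1], label))
    return segments
```

The two labels correspond to the two segment categories above, so that each segment can later be routed to the matching strategy appropriate for its slope.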
Step 607: Segment-wise matching of the drivable-area boundary.

Step 606 above illustrated the segmentation of the drivable-area boundary detected in the left-eye image; in addition, the boundary-point line segments can be divided by slope into small-slope segments and large-slope segments. Step 607 matches the boundary-point line segments of the left-eye image in the right-eye image.

Illustratively, in the right-eye image, the drivable-area boundary points obtained from the left-eye image above may be matched according to different matching strategies, based on the segmentation result of the drivable-area boundary.

For example, a large-slope boundary segment may correspond to a road edge or a vehicle side in the real scene, such as segments AB and CD shown in (a) of FIG. 9. As shown in (b) of FIG. 9, for a large-slope boundary segment (for example, a road edge), there is essentially no overlap between the drivable-area boundaries extracted from the left image and the right image.
First matching strategy: for the large-slope boundary segments, a boundary point of the drivable-area boundary segment in the left-eye image may be taken as a template point, and a search area may be generated centered on the right-eye boundary point corresponding to the same row in the left-eye image, and matched against the template point of the left-eye image.

For example, (a) in FIG. 10 shows a left-eye image; (b) in FIG. 10 is a schematic diagram of generating the search area in the right-eye image and matching it against the template point of the left-eye image.

Illustratively, the search area may be obtained using an eight-dimensional descriptor. First, 360 degrees is divided into eight equal parts, namely 0°-45°, 45°-90°, 90°-135°, 135°-180°, 180°-225°, 225°-270°, 270°-315°, and 315°-360°; the eight regions represent eight angle bins. To generate the eight-dimensional descriptor, the angle of each pixel in the 5×5 neighborhood is computed; when an angle falls into a bin, the bin's value is accumulated by 1×range. The value of each bin is computed in this way, yielding the eight-dimensional descriptor, which can be calculated with the following formula:
$$S_j = \sum_{\text{Angle}(p)\, \in\, [45^{\circ}(j-1),\ 45^{\circ}j)} \text{range}(p), \qquad j = 1, \dots, 8$$
Here S denotes the descriptor, Angle denotes the corresponding one of the eight angle bins, and range denotes the gradient magnitude of each point. With the above formula, the eight-dimensional descriptor of each drivable-area boundary point can be obtained, and the subsequent matching is based on these descriptors.

For example, a search area of size 5×5 may be generated, the descriptor of the template point may be compared for similarity against the descriptor of every point in the search area, and the point matching the template point may be determined, thereby matching the large-slope boundary-point segments.

Specifically, centered on each extracted boundary point of the drivable area in the left-eye image, descriptors are generated in the corresponding region of the right-eye image; for example, descriptors are generated in the 5×5 neighborhood around the right-eye position corresponding to a left-eye boundary point, and the gradient and angle of each point in that right-eye neighborhood are computed with the following formulas:
$$\text{Angle} = \arctan\!\left(\frac{dy}{dx}\right), \qquad \text{range} = \sqrt{dx^{2} + dy^{2}}$$
Here dx denotes the gradient of the pixel in the x direction, dy denotes its gradient in the y direction, Angle denotes the pixel's angle, and range denotes the pixel's gradient magnitude.

It should be noted that the above uses an eight-dimensional descriptor with a 5×5 search area as an example; the search area may also be of other sizes, which this application does not limit. A sketch of the descriptor construction and matching follows.
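As a non-authoritative illustration of the descriptor construction and search-area matching described above, a minimal Python sketch follows; the cosine-similarity measure and the omission of image-border handling are simplifying assumptions made here, not details disclosed by this application:

```python
import numpy as np

def descriptor8(gray, u, v, half=2):
    """Eight-bin gradient descriptor of the (2*half+1)^2 neighborhood of
    pixel (u, v): each pixel's gradient magnitude (range) is accumulated
    into the 45-degree bin containing its gradient angle. Image-border
    handling is omitted for brevity."""
    patch = gray[v - half - 1:v + half + 2, u - half - 1:u + half + 2].astype(float)
    gy, gx = np.gradient(patch)                   # dy, dx over the patch
    ang = (np.degrees(np.arctan2(gy, gx)) + 360.0) % 360.0
    mag = np.hypot(gx, gy)
    desc = np.zeros(8)
    for a, m in zip(ang[1:-1, 1:-1].ravel(), mag[1:-1, 1:-1].ravel()):
        desc[int(a // 45.0) % 8] += m             # accumulate 1 * range per bin
    return desc

def match_template_point(gray_left, gray_right, left_pt, right_center, half=2):
    """Match a left-image boundary point (template) against a search area
    generated around the right-image boundary point of the same row."""
    tmpl = descriptor8(gray_left, *left_pt)
    u0, v0 = right_center
    best, best_score = None, -np.inf
    for dv in range(-half, half + 1):
        for du in range(-half, half + 1):
            cand = descriptor8(gray_right, u0 + du, v0 + dv)
            denom = np.linalg.norm(tmpl) * np.linalg.norm(cand) + 1e-9
            score = float(tmpl @ cand) / denom    # cosine similarity
            if score > best_score:
                best, best_score = (u0 + du, v0 + dv), score
    return best, best_score
```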
Second matching strategy: for the small-slope boundary segments, since the boundary points extracted from the right-eye image partially overlap with those extracted from the left-eye image, inflection-point disparity correction may be used to match this part of the boundary points.

Specifically, consider the segment BC shown in FIG. 11. Because the slope of this boundary segment is small, the drivable-area boundaries detected in the left-eye and right-eye images may overlap within it. The segments adjacent to BC, namely the segment to the left of point B and the segment to the right of point C, are large-slope segments, which can be matched with the first matching strategy above; during the matching of these adjacent segments, the disparities of the left-eye and right-eye images at points B and C are obtained. Taking the disparities at B and C as the basis, the mean of these two disparities is used as the search length on segment BC, and the small-slope boundary segments of the left-eye and right-eye images are matched using this search length, as sketched below.
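A minimal sketch of this inflection-point disparity correction; the nearest-candidate pairing within each row is an illustrative assumption:

```python
def match_flat_segment(left_seg, right_pts, d_b, d_c):
    """Match a low-slope boundary segment using the mean of the disparities
    d_b, d_c already recovered at its end inflection points (B and C) as
    the search step.

    left_seg:  list of (u, v) left-image boundary points on the segment.
    right_pts: dict mapping row v to a sorted list of right-image boundary
               u-coordinates in that row.
    """
    step = 0.5 * (d_b + d_c)          # search length from endpoint disparities
    pairs = []
    for u, v in left_seg:
        cands = right_pts.get(v, [])
        if not cands:
            continue
        target = u - step             # expected right-image column
        u_r = min(cands, key=lambda c: abs(c - target))
        pairs.append(((u, v), (u_r, v)))
    return pairs
```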
It should be understood that the descriptor-based boundary-point matching described above is also applicable to small-slope boundary segments, for example for matching the boundary segment BC of the drivable area in the left-eye and right-eye images.

It should also be understood that the above describes, as an example, segmenting the drivable-area boundary of the left-eye image and matching it against the drivable-area boundary of the right-eye image; similarly, the drivable-area boundary of the acquired right-eye image may be segmented and matched against the drivable-area boundary of the left-eye image, which this application does not limit.

In the embodiments of the present application, the extracted drivable-area boundary of the left-eye image is segmented, and different matching strategies are applied to the large-slope and small-slope boundary segments when matching against the drivable-area boundary of the right-eye image, which improves the matching accuracy of the boundary points of the drivable area.
Step 608: Filter the matching result.

Filtering the matching result refers to filtering the result of step 607 above so as to eliminate wrongly matched points.

Illustratively, some of the matched boundary points may be wrong matches; therefore, the matched boundary-point pairs may be filtered to ensure the matching accuracy of the drivable-area boundary points.

Further, considering the influence of external factors such as lighting in real scenes, filtering with a fixed threshold is not robust; therefore, a dynamic threshold may be used for the filtering.

For example, the boundary points may be sorted by their matching scores, abnormal matches may be removed following the box-plot rule, and the matches with higher scores may be kept as the boundary points finally matched successfully between the drivable areas of the left-eye and right-eye images; a sketch follows.
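A minimal sketch of the box-plot filtering, with the conventional 1.5×IQR whisker assumed as the outlier rule:

```python
import numpy as np

def filter_matches_boxplot(pairs, scores):
    """Keep matched boundary-point pairs whose matching score is not a
    box-plot outlier (i.e. not below Q1 - 1.5 * IQR)."""
    s = np.asarray(scores, dtype=float)
    q1, q3 = np.percentile(s, [25, 75])
    lo = q1 - 1.5 * (q3 - q1)
    return [p for p, sc in zip(pairs, s) if sc >= lo]
```

Because the cutoff is derived from the score distribution of the current frame rather than from a fixed constant, the filter adapts to lighting and scene changes.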
Step 609: Compute the disparity of the drivable-area boundary points.

For example, the disparity is computed from the matched boundary points, for instance by computing, in the pixel coordinate system, the horizontal coordinate difference of each pair of successfully matched boundary points in the left-eye and right-eye images.
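A minimal sketch of this computation, assuming matched pairs are given as pixel coordinates:

```python
def boundary_disparity(matched_pairs):
    """Disparity of each matched boundary-point pair: the horizontal
    pixel-coordinate difference d = u_left - u_right."""
    return [u_l - u_r for (u_l, _v_l), (u_r, _v_r) in matched_pairs]
```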
Step 610: Disparity filtering of the drivable-area boundary points.

In the embodiments of the present application, the points obtained by the disparity computation in step 609 may be discrete; the disparity filtering of step 610 can make these discrete points continuous.

Illustratively, the boundary-point disparities are filtered using interpolation and compensation; the disparity filtering can be divided into two parts, filtering and interpolation (that is, continuation). First, the region at the bottom of the image is the drivable area closest to the autonomous vehicle, so going upward from the bottom of the image, the disparity of the matched boundary points should gradually decrease; an increase in boundary-point disparity should not occur. Erroneous disparities can be filtered out on this basis: going upward from the bottom of the image, the deeper the depth of field, the smaller the disparity between the drivable-area boundaries of the left-eye and right-eye images should be, so if the disparity of a boundary point increases in a deeper region, that point is likely a mismatched point and can be removed. Second, in the pixel coordinate system, the boundary points of the same image row are at the same distance from the autonomous vehicle (that is, the same depth of field, or the same Y-direction coordinate), so the disparities of the boundary points of the same row in the left-eye and right-eye images should be consistent; a second filtering pass can be performed on this basis. After filtering, the removed boundary points can be corrected continuously by interpolating the disparities of neighboring boundary points, as sketched below.
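A minimal sketch of the monotonicity filtering and interpolation pass, assuming boundary points are indexed by image row and that disparity should not increase from the image bottom upward; the linear interpolation is an illustrative choice:

```python
import numpy as np

def filter_and_interpolate(rows, disp):
    """rows: image row of each boundary point (bottom of image = largest row).
    disp: matched disparity per point. Removes disparities that increase
    with distance (smaller rows) and refills them by linear interpolation."""
    order = np.argsort(rows)[::-1]            # process bottom of image first
    r = np.asarray(rows, float)[order]
    d = np.asarray(disp, float)[order]
    keep = np.ones(len(d), dtype=bool)
    best = np.inf
    for i in range(len(d)):
        if d[i] > best:                       # disparity grew with distance:
            keep[i] = False                   # likely a mismatched point
        else:
            best = d[i]
    d_fixed = d.copy()
    d_fixed[~keep] = np.interp(r[~keep], r[keep][::-1], d[keep][::-1])
    return r, d_fixed
```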
Step 611: Disparity sub-pixelation.

Here, sub-pixel refers to subdividing the interval between two adjacent pixels, meaning that each pixel is divided into smaller units.

Further, in the embodiments of the present application, to ensure the accuracy of the distance computation between the binocular images, the extracted boundary-point disparities may be refined to sub-pixel precision.

Illustratively, as shown in FIG. 12, suppose point b is the coordinate of a matched boundary point; the gradient values of the three points a, b, and c are denoted g_a, g_b, and g_c, and their coordinates are denoted p_a(x_a, y_a), p_b(x_b, y_b), and p_c(x_c, y_c). The coordinates of the boundary point after sub-pixel processing can be computed with the following formulas:
$$M = g_a - g_c, \qquad N = g_a - 2g_b + g_c, \qquad y = y_b + \frac{M}{2N}$$
Here M and N denote intermediate quantities (descriptors) of the sub-pixel processing, and y denotes the refined y offset in the image coordinate system.
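A minimal sketch of the sub-pixel refinement, assuming the three points a, b, and c are unit-spaced and that the refinement follows the parabolic fit reconstructed above:

```python
def subpixel_refine(g_a, g_b, g_c, y_b):
    """Parabolic sub-pixel refinement: b is the matched boundary point,
    a and c its two neighbors, g_* their gradient values. Returns the
    refined coordinate y_b + M / (2 * N)."""
    M = g_a - g_c
    N = g_a - 2.0 * g_b + g_c
    if N == 0.0:
        return y_b                    # degenerate fit: keep integer location
    return y_b + M / (2.0 * N)
```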
Step 612: Locate the X coordinate of the drivable-area boundary points in the coordinate system of the autonomous vehicle.

For example, from the obtained disparity of the drivable-area boundary points, the X-direction distance of each boundary point in the ego-vehicle coordinate system can be obtained by triangulation.

Step 613: Locate the Y coordinate of the drivable-area boundary points in the coordinate system of the autonomous vehicle.

For example, from the obtained disparity of the drivable-area boundary points, the Y-direction distance of each boundary point in the ego-vehicle coordinate system can be obtained by triangulation.

Illustratively, as shown in FIG. 13, triangulation is used to compute the X-direction and Y-direction distances of the boundary points in the coordinate system of the autonomous vehicle, where f denotes the focal length, B denotes the baseline, and y denotes the offset in the image coordinate system.
From the similarity of the triangles, that is, the similarity of △PLR and △PL1R1, the following relation is obtained:

$$\frac{B - (u_L - u_R)}{B} = \frac{X - f}{X}$$

Rearranging yields the X-direction distance:

$$X = \frac{f \cdot B}{u_L - u_R} = \frac{f \cdot B}{d}$$

where d = u_L − u_R is the disparity.
As shown in FIG. 14, Y is the horizontal distance from the object to the camera and y is the pixel coordinate in the image. The Y-direction distance is:

$$Y = \frac{y \cdot X}{f}$$
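A minimal sketch of the two conversions, assuming the focal length (in pixels) and the baseline (in meters) are known from calibration:

```python
def boundary_to_ego(d_px, y_px, f_px, baseline_m):
    """Convert a boundary point's disparity and image coordinate to
    ego-vehicle coordinates: X = f * B / d (forward), Y = y * X / f
    (lateral)."""
    if d_px <= 0:
        raise ValueError("disparity must be positive")
    X = f_px * baseline_m / d_px
    Y = y_px * X / f_px
    return X, Y
```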
In the embodiments of the present application, the above steps allow the drivable area to be detected in a timely manner and mapped from the pixel coordinate system to the ego-vehicle coordinate system with a small amount of computation and without relying on a ground-plane assumption, thereby improving the real-time performance of the autonomous vehicle's detection of the drivable area.

FIG. 15 is a schematic diagram of the method for detecting a vehicle's drivable area according to an embodiment of the present application as applied to a specific product form.

The product form shown in FIG. 15 may be a vehicle-mounted visual perception device, in which software algorithms deployed on the computing nodes of the device implement the detection of the drivable area and its spatial coordinate positioning. The pipeline comprises three main parts. The first part acquires the images: the left-eye image and the right-eye image are captured by the left-eye camera and the right-eye camera, which satisfy frame synchronization. The second part obtains the drivable area in each image: a deep learning algorithm outputs the drivable area (freespace), or the boundary of the drivable area, in the left-eye and right-eye images; the deep learning algorithm may be deployed on AI chips and accelerated in parallel across multiple AI chips to output the freespace. The third part obtains the disparity of the drivable-area boundary points, for example outputting the disparity through serial processing. Finally, the positioning of the drivable area in the coordinate system of the autonomous vehicle is obtained from the boundary disparity, that is, the X-direction and Y-direction distances of the drivable area are obtained from the boundary disparity.

It should be understood that the above examples are intended to help those skilled in the art understand the embodiments of the present application, and are not intended to limit the embodiments to the specific values or scenarios illustrated. Those skilled in the art can obviously make various equivalent modifications or changes based on the examples given, and such modifications or changes also fall within the scope of the embodiments of the present application.
The method for detecting a vehicle's drivable area according to the embodiments of the present application has been described in detail above with reference to FIG. 1 to FIG. 15; the apparatus embodiments of the present application are described in detail below with reference to FIG. 16 and FIG. 17. It should be understood that the detection apparatus for a vehicle's drivable area in the embodiments of the present application can execute the various detection methods of the foregoing embodiments; for the specific working processes of the following products, reference may be made to the corresponding processes in the foregoing method embodiments.

FIG. 16 is a schematic block diagram of an apparatus for detecting a vehicle's drivable area according to an embodiment of the present application. It should be understood that the detection apparatus 700 can execute the detection methods shown in FIG. 6 to FIG. 15. The detection apparatus 700 includes an acquisition unit 710 and a processing unit 720.

The acquisition unit 710 is configured to acquire binocular images of the driving direction of the vehicle, the binocular images including a left-eye image and a right-eye image. The processing unit 720 is configured to obtain the disparity information of the drivable-area boundary in the binocular images from the drivable-area boundary in the left-eye image and the drivable-area boundary in the right-eye image, and to obtain the vehicle's drivable area in the binocular images based on the disparity information.
Optionally, as an embodiment, the processing unit 720 is further configured to:

segment the drivable-area boundary in a first image, and perform drivable-area boundary matching on a second image based on the N boundary-point line segments obtained by the segmentation to obtain the disparity information, where N is an integer greater than or equal to 2.

Optionally, as an embodiment, the processing unit 720 is specifically configured to:

perform drivable-area boundary matching on the second image according to the N boundary-point line segments and a matching strategy to obtain the disparity information, where the matching strategy is determined from the slopes of the N boundary-point line segments.

Optionally, as an embodiment, the processing unit 720 is specifically configured to:

for a first type of boundary-point line segment among the N segments, use a first matching strategy of the matching strategy to perform boundary matching on the second image, where the first type of segment includes drivable-area boundaries along road edges and along the sides of other vehicles; and

for a second type of boundary-point line segment among the N segments, use a second matching strategy of the matching strategy to perform boundary matching on the second image, where the boundary points within a segment of the second type are at the same distance from the vehicle.

Optionally, as an embodiment, the matching strategy includes a first matching strategy and a second matching strategy, where the first matching strategy matches via a search area, the search area being a region generated around any point of a first boundary-point line segment, and the first boundary-point line segment being any one of the N segments; and the second matching strategy matches via a preset search step, the preset search step being determined from the boundary-point disparity of one of the N boundary-point line segments.

Optionally, as an embodiment, the N boundary-point line segments are determined from the inflection points of the drivable-area boundary in the first image.
It should be noted that the detection apparatus 700 described above is embodied in the form of functional units. The term "unit" here may be implemented in software and/or hardware, which is not specifically limited.

For example, a "unit" may be a software program, a hardware circuit, or a combination of the two that implements the functions described above. The hardware circuit may include an application-specific integrated circuit (ASIC), an electronic circuit, a processor for executing one or more software or firmware programs (such as a shared processor, a dedicated processor, or a group processor) and memory, merged logic circuits, and/or other suitable components supporting the described functions.

Therefore, the units of the examples described in the embodiments of the present application can be implemented in electronic hardware or in a combination of computer software and electronic hardware. Whether these functions are executed in hardware or in software depends on the specific application and the design constraints of the technical solution. A person skilled in the art may use different methods to implement the described functions for each particular application, but such implementations should not be considered beyond the scope of this application.
FIG. 17 is a schematic diagram of the hardware structure of an apparatus for detecting a vehicle's drivable area according to an embodiment of the present application.

As shown in FIG. 17, the detection apparatus 800 (which may specifically be a computer device) includes a memory 801, a processor 802, a communication interface 803, and a bus 804. The memory 801, the processor 802, and the communication interface 803 communicate with one another through the bus 804.

The memory 801 may be a read-only memory (ROM), a static storage device, a dynamic storage device, or a random access memory (RAM). The memory 801 may store a program; when the program stored in the memory 801 is executed by the processor 802, the processor 802 executes the steps of the method for detecting a vehicle's drivable area of the embodiments of the present application, for example the steps shown in FIG. 6 to FIG. 15.

It should be understood that the detection apparatus for a vehicle's drivable area shown in this embodiment of the present application may be a server, for example a cloud server, or a chip configured in a cloud server.

The processor 802 may be a general-purpose central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits for executing related programs so as to implement the method for detecting a vehicle's drivable area of the method embodiments of the present application.

The processor 802 may also be an integrated circuit chip with signal-processing capability. During implementation, the steps of the method for detecting a vehicle's drivable area of the present application may be completed by integrated logic circuits of hardware in the processor 802 or by instructions in the form of software.

The processor 802 may also be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and may implement or execute the methods, steps, and logical block diagrams disclosed in the embodiments of the present application. The general-purpose processor may be a microprocessor or any conventional processor. The steps of the methods disclosed in the embodiments of the present application may be directly embodied as being executed by a hardware decoding processor, or by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory 801; the processor 802 reads the information in the memory 801 and, in combination with its hardware, completes the functions of the units included in the detection apparatus for a vehicle's drivable area shown in FIG. 16, or executes the detection methods shown in FIG. 6 to FIG. 15 of the method embodiments of the present application.

The communication interface 803 uses a transceiver apparatus, such as but not limited to a transceiver, to implement communication between the detection apparatus 800 and other devices or communication networks.

The bus 804 may include a path for transmitting information between the components of the detection apparatus 800 (for example, the memory 801, the processor 802, and the communication interface 803).

It should be noted that although the detection apparatus 800 described above shows only a memory, a processor, and a communication interface, in a specific implementation a person skilled in the art should understand that the detection apparatus 800 may also include other devices necessary for normal operation. At the same time, according to specific needs, a person skilled in the art should understand that the detection apparatus 800 may also include hardware devices implementing other additional functions.

In addition, a person skilled in the art should understand that the detection apparatus 800 may also include only the devices necessary to implement the embodiments of the present application, and need not include all the devices shown in FIG. 17.
应理解,上述举例说明是为了帮助本领域技术人员理解本申请实施例,而非要将本申请实施例限于所例示的具体数值或具体场景。本领域技术人员根据所给出的上述举例说明,显然可以进行各种等价的修改或变化,这样的修改或变化也落入本申请实施例的范围内。It should be understood that the above examples are intended to help those skilled in the art understand the embodiments of the present application, and are not intended to limit the embodiments of the present application to the specific numerical values or specific scenarios illustrated. Those skilled in the art can obviously make various equivalent modifications or changes based on the above examples given, and such modifications or changes also fall within the scope of the embodiments of the present application.
应理解,本文中术语“和/或”,仅仅是一种描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。另外,本文中字符“/”,一般表示前后关联对象是一种“或”的关系。It should be understood that the term "and/or" in this text is only an association relationship describing the associated objects, indicating that there can be three types of relationships, for example, A and/or B, which can mean: A alone exists, and both A and B exist. , There are three cases of B alone. In addition, the character "/" in this text generally indicates that the associated objects before and after are in an "or" relationship.
应理解,在本申请的各种实施例中,上述各过程的序号的大小并不意味着执行顺序的先后,各过程的执行顺序应以其功能和内在逻辑确定,而不应对本申请实施例的实施过程构成任何限定。It should be understood that in the various embodiments of the present application, the size of the sequence number of the above-mentioned processes does not mean the order of execution, and the execution order of each process should be determined by its function and internal logic, and should not correspond to the embodiments of the present application. The implementation process constitutes any limitation.
A person of ordinary skill in the art may realize that the units and algorithm steps of the examples described in combination with the embodiments disclosed herein can be implemented by electronic hardware or by a combination of computer software and electronic hardware. Whether these functions are executed by hardware or by software depends on the specific application and the design constraints of the technical solution. A skilled person may use different methods to implement the described functions for each specific application, but such implementation should not be considered beyond the scope of this application.
Those skilled in the art can clearly understand that, for convenience and brevity of description, for the specific working processes of the system, device, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments; details are not repeated here.
In the several embodiments provided in this application, it should be understood that the disclosed system, device, and method may be implemented in other ways. For example, the device embodiments described above are merely illustrative: the division of the units is only a logical functional division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be implemented through some interfaces; indirect couplings or communication connections between devices or units may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional modules in the embodiments of this application may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of this application essentially, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for instructing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or some of the steps of the methods described in the embodiments of this application. The foregoing storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The foregoing descriptions are merely specific implementations of this application, but the protection scope of this application is not limited thereto. Any variation or replacement readily conceivable by a person skilled in the art within the technical scope disclosed in this application shall fall within the protection scope of this application. Therefore, the protection scope of this application shall be subject to the protection scope of the claims.

Claims (14)

  1. A detection method for a drivable area of a vehicle, characterized in that the method comprises:
    acquiring a binocular image in the driving direction of the vehicle, wherein the binocular image comprises a left-eye image and a right-eye image;
    obtaining disparity information of a drivable area boundary in the binocular image according to the drivable area boundary in the left-eye image and the drivable area boundary in the right-eye image; and
    obtaining, based on the disparity information, a drivable area of the vehicle in the binocular image.
  2. The detection method according to claim 1, further comprising:
    performing segmentation processing on the drivable area boundary in a first image;
    wherein the obtaining disparity information of the drivable area boundary in the binocular image according to the drivable area boundary in the left-eye image and the drivable area boundary in the right-eye image comprises:
    performing drivable area boundary matching on a second image based on N boundary point line segments obtained by the segmentation processing, to obtain the disparity information,
    wherein the first image is either one of the binocular images, the second image is the other one of the binocular images, different from the first image, and N is an integer greater than or equal to 2.
  3. The detection method according to claim 2, wherein the performing drivable area boundary matching on the second image based on the N boundary point line segments obtained by the segmentation processing to obtain the disparity information comprises:
    performing drivable area boundary matching on the second image according to the N boundary point line segments and a matching strategy, to obtain the disparity information, wherein the matching strategy is determined according to the slopes of the N boundary point line segments.
  4. The detection method according to claim 3, wherein the performing drivable area boundary matching on the second image according to the N boundary point line segments and the matching strategy to obtain the disparity information comprises:
    for a first-type boundary point line segment among the N boundary point line segments, performing drivable area boundary matching on the second image by using a first matching strategy of the matching strategy, wherein the first-type boundary point line segment comprises a drivable area boundary at a road edge and a drivable area boundary at the side of another vehicle; and
    for a second-type boundary point line segment among the N boundary point line segments, performing drivable area boundary matching on the second image by using a second matching strategy of the matching strategy, wherein the boundary points in the second-type boundary point line segment are at the same distance from the vehicle.
  5. The detection method according to claim 3 or 4, wherein the matching strategy comprises a first matching strategy and a second matching strategy; the first matching strategy refers to matching through a search area, the search area being an area generated with any point in a first boundary point line segment as its center, and the first boundary point line segment being any one of the N boundary point line segments; and the second matching strategy refers to matching through a preset search step, the preset search step being determined based on the boundary point disparity of one boundary point line segment among the N boundary point line segments.
  6. The detection method according to any one of claims 2 to 5, wherein the N boundary point line segments are determined according to inflection points of the drivable area boundary in the first image.
  7. A detection device for a drivable area of a vehicle, characterized in that the device comprises:
    an acquiring unit, configured to acquire a binocular image in the driving direction of the vehicle, wherein the binocular image comprises a left-eye image and a right-eye image; and
    a processing unit, configured to: obtain disparity information of a drivable area boundary in the binocular image according to the drivable area boundary in the left-eye image and the drivable area boundary in the right-eye image; and obtain, based on the disparity information, a drivable area of the vehicle in the binocular image.
  8. The detection device according to claim 7, wherein the processing unit is further configured to:
    perform segmentation processing on the drivable area boundary in a first image; and
    perform drivable area boundary matching on a second image based on N boundary point line segments obtained by the segmentation processing, to obtain the disparity information,
    wherein the first image is either one of the binocular images, the second image is the other one of the binocular images, different from the first image, and N is an integer greater than or equal to 2.
  9. The detection device according to claim 8, wherein the processing unit is specifically configured to:
    perform drivable area boundary matching on the second image according to the N boundary point line segments and a matching strategy, to obtain the disparity information, wherein the matching strategy is determined according to the slopes of the N boundary point line segments.
  10. The detection device according to claim 9, wherein the processing unit is specifically configured to:
    for a first-type boundary point line segment among the N boundary point line segments, perform drivable area boundary matching on the second image by using a first matching strategy of the matching strategy, wherein the first-type boundary point line segment comprises a drivable area boundary at a road edge and a drivable area boundary at the side of another vehicle; and
    for a second-type boundary point line segment among the N boundary point line segments, perform drivable area boundary matching on the second image by using a second matching strategy of the matching strategy, wherein the boundary points in the second-type boundary point line segment are at the same distance from the vehicle.
  11. The detection device according to claim 9 or 10, wherein the matching strategy comprises a first matching strategy and a second matching strategy; the first matching strategy refers to matching through a search area, the search area being an area generated with any point in a first boundary point line segment as its center, and the first boundary point line segment being any one of the N boundary point line segments; and the second matching strategy refers to matching through a preset search step, the preset search step being determined based on the boundary point disparity of one boundary point line segment among the N boundary point line segments.
  12. The detection device according to any one of claims 8 to 11, wherein the N boundary point line segments are determined according to inflection points of the drivable area boundary in the first image.
  13. A detection device for a drivable area of a vehicle, characterized in that the device comprises at least one processor and a memory, wherein the at least one processor is coupled to the memory and is configured to read and execute instructions in the memory, to perform the detection method according to any one of claims 1 to 6.
  14. A computer-readable medium, characterized in that the computer-readable medium stores program code, and when the program code runs on a computer, the computer is caused to perform the detection method according to any one of claims 1 to 6.
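
Claims 1 to 6 describe the method at the level of steps and strategies rather than as a concrete implementation. Purely as an illustrative sketch of the idea, assuming a rectified grayscale stereo pair given as NumPy arrays, the following Python code segments a drivable-area boundary at inflection points, matches sloped segments (road edges, sides of other vehicles) with a full horizontal search in the spirit of the first matching strategy, matches near-horizontal segments whose points are equidistant from the vehicle with a narrow search step seeded by a previously matched point in the spirit of the second matching strategy, and converts disparities to distances. The function names, the thresholds (0.5 for inflections, 0.1 for slope), and the SAD block cost are all assumptions invented for the example, not the patented algorithm.

import numpy as np

def segment_boundary(boundary_points):
    """Split a drivable-area boundary, given as an (M, 2) array of (u, v)
    pixel coordinates ordered by column, into line segments at points where
    the local slope changes noticeably (a stand-in for the inflection points
    of claim 6). Returns a list of (start, end) index ranges."""
    slopes = np.diff(boundary_points[:, 1]) / (np.diff(boundary_points[:, 0]) + 1e-6)
    segments, start = [], 0
    for i in range(1, len(slopes)):
        if abs(slopes[i] - slopes[i - 1]) > 0.5:   # hypothetical inflection threshold
            segments.append((start, i + 1))
            start = i + 1
    segments.append((start, len(boundary_points)))
    return segments

def match_segment(left_img, right_img, points, slope, win=5, max_disp=64):
    """Match one boundary segment from the left image into the right image
    and return one disparity per boundary point (SAD block matching along
    the epipolar row of a rectified pair)."""
    h, w = left_img.shape
    disparities, seed = [], None
    for (u, v) in points:
        if abs(slope) < 0.1 and seed is not None:
            lo, hi = max(seed - 2, 0), min(seed + 3, max_disp)  # second strategy: short step around the seed
        else:
            lo, hi = 0, max_disp                                # first strategy: search the full region
        best, best_cost = 0, np.inf
        for d in range(lo, hi):
            if u - d - win < 0 or u + win + 1 > w or v - win < 0 or v + win + 1 > h:
                continue
            cost = np.abs(left_img[v-win:v+win+1, u-win:u+win+1].astype(np.int32)
                          - right_img[v-win:v+win+1, u-d-win:u-d+win+1].astype(np.int32)).sum()
            if cost < best_cost:
                best, best_cost = d, cost
        disparities.append(best)
        seed = best
    return disparities

def boundary_distances(disparities, focal_px, baseline_m):
    """Convert boundary-point disparities to metric distances: Z = f * B / d."""
    d = np.maximum(np.asarray(disparities, dtype=np.float64), 1e-6)
    return focal_px * baseline_m / d

# usage sketch:
#   for (a, b) in segment_boundary(pts):
#       seg = pts[a:b]
#       slope = (seg[-1, 1] - seg[0, 1]) / (seg[-1, 0] - seg[0, 0] + 1e-6)
#       disp = match_segment(left, right, seg, slope)
#       dist = boundary_distances(disp, focal_px=1000.0, baseline_m=0.12)

Because the pair is rectified, corresponding boundary points lie on the same image row, so the sketch searches only horizontally; restricting the search step for equidistant segments is what makes the second strategy cheaper than a full search.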
PCT/CN2020/075104 2020-02-13 2020-02-13 Vehicle travelable region detection method and detection device WO2021159397A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202080093411.3A CN114981138A (en) 2020-02-13 2020-02-13 Method and device for detecting vehicle travelable region
PCT/CN2020/075104 WO2021159397A1 (en) 2020-02-13 2020-02-13 Vehicle travelable region detection method and detection device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/075104 WO2021159397A1 (en) 2020-02-13 2020-02-13 Vehicle travelable region detection method and detection device

Publications (1)

Publication Number Publication Date
WO2021159397A1 (en)

Family

ID=77292613

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/075104 WO2021159397A1 (en) 2020-02-13 2020-02-13 Vehicle travelable region detection method and detection device

Country Status (2)

Country Link
CN (1) CN114981138A (en)
WO (1) WO2021159397A1 (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101410872A (en) * 2006-03-28 2009-04-15 株式会社博思科 Road video image analyzing device and road video image analyzing method
US20140071240A1 (en) * 2012-09-11 2014-03-13 Automotive Research & Testing Center Free space detection system and method for a vehicle using stereo vision
WO2015053100A1 (en) * 2013-10-07 2015-04-16 日立オートモティブシステムズ株式会社 Object detection device and vehicle using same
CN105313892A (en) * 2014-06-16 2016-02-10 现代摩比斯株式会社 Safe driving guiding system and method thereof
CN105550665A (en) * 2016-01-15 2016-05-04 北京理工大学 Method for detecting pilotless automobile through area based on binocular vision
CN106303501A (en) * 2016-08-23 2017-01-04 深圳市捷视飞通科技股份有限公司 Stereo-picture reconstructing method based on image sparse characteristic matching and device
CN107358168A (en) * 2017-06-21 2017-11-17 海信集团有限公司 A kind of detection method and device in vehicle wheeled region, vehicle electronic device
FR3056531A1 (en) * 2016-09-29 2018-03-30 Valeo Schalter Und Sensoren Gmbh OBSTACLE DETECTION FOR MOTOR VEHICLE
CN107909036A (en) * 2017-11-16 2018-04-13 海信集团有限公司 A kind of Approach for road detection and device based on disparity map

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113715907A (en) * 2021-09-27 2021-11-30 郑州新大方重工科技有限公司 Attitude adjusting method and automatic driving method suitable for wheeled equipment
CN113715907B (en) * 2021-09-27 2023-02-28 郑州新大方重工科技有限公司 Attitude adjusting method and automatic driving method suitable for wheeled equipment

Also Published As

Publication number Publication date
CN114981138A (en) 2022-08-30


Legal Events

Code   Description
121    EP: the EPO has been informed by WIPO that EP was designated in this application (ref document number: 20918543; country of ref document: EP; kind code of ref document: A1)
NENP   Non-entry into the national phase (ref country code: DE)
122    EP: PCT application non-entry into the European phase (ref document number: 20918543; country of ref document: EP; kind code of ref document: A1)