CN114981138A - Method and device for detecting vehicle travelable region

Info

Publication number: CN114981138A
Authority: CN (China)
Prior art keywords: boundary, image, matching, vehicle, boundary point
Legal status: Pending
Application number: CN202080093411.3A
Other languages: Chinese (zh)
Inventors: 朱麒文, 崔学理, 郑佳, 吴祖光
Current Assignee: Huawei Technologies Co Ltd
Original Assignee: Huawei Technologies Co Ltd
Application filed by Huawei Technologies Co Ltd
Publication of CN114981138A

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60W: CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W30/00: Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units, or advanced driver assistance systems for ensuring comfort, stability and safety or drive control systems for propelling or retarding the vehicle
    • B60W30/08: Active safety systems predicting or avoiding probable or impending collision or attempting to minimise its consequences
    • B60W30/095: Predicting travel path or likelihood of collision
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60W: CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00: Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W40/02: Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to ambient conditions
    • B60W40/06: Road conditions
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/60: Analysis of geometric attributes

Abstract

A detection method and a detection device for a vehicle travelable region are provided. The detection method comprises the following steps: acquiring a binocular image of the vehicle driving direction, wherein the binocular image comprises a left eye image and a right eye image; obtaining parallax information of the travelable region boundary in the binocular image according to the travelable region boundary in the left eye image and the travelable region boundary in the right eye image; and obtaining the vehicle travelable region in the binocular image based on the parallax information. With this technical scheme, the amount of computation can be greatly reduced while detection accuracy is maintained, and the efficiency with which an autonomous vehicle detects road conditions is improved.

Description

Method and device for detecting vehicle travelable region

Technical Field
The present application relates to the field of automobiles, and more particularly, to a method and apparatus for detecting a travelable area of a vehicle.
Background
Artificial Intelligence (AI) is a theory, method, technique, and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use that knowledge to obtain the best results. In other words, artificial intelligence is a branch of computer science that attempts to understand the essence of intelligence and to produce new intelligent machines that can react in a manner similar to human intelligence. Artificial intelligence studies the design principles and implementation methods of various intelligent machines, so that the machines can perceive, reason, and make decisions. Research in the field of artificial intelligence includes robotics, natural language processing, computer vision, decision-making and reasoning, human-computer interaction, recommendation and search, basic AI theory, and the like.
Automatic driving is a mainstream application in the field of artificial intelligence. Automatic driving technology relies on the cooperation of computer vision, radar, monitoring devices, global positioning systems, and the like, so that a motor vehicle can drive automatically without active human operation. Autonomous vehicles use various computing systems to assist in transporting passengers from one location to another; some autonomous vehicles may require some initial or continuous input from an operator (such as a pilot, driver, or passenger), and an autonomous vehicle may permit an operator to switch from a manual operation mode to an automatic driving mode or to a mode in between. Because automatic driving technology does not require a human to drive the motor vehicle, it can in theory effectively avoid human driving errors, reduce traffic accidents, and improve the transportation efficiency of roads. The automatic driving technique is therefore receiving increasing attention.
At present, the drivable area of an autonomous vehicle can be detected by a binocular vision method, in which the drivable area is extracted and located from a global disparity map computed from the images output by a binocular camera. However, computing the global disparity map from the binocular camera involves a large amount of calculation, so the autonomous vehicle cannot process it in real time, which poses a safety risk while the autonomous vehicle is driving. Therefore, improving the detection efficiency of a method for detecting the vehicle travelable region while ensuring detection accuracy is an urgent problem.
Disclosure of Invention
The application provides a method and a device for detecting a vehicle travelable area, which can improve the real-time performance of the detection system of an autonomous vehicle and improve the detection efficiency of the method while maintaining a given detection accuracy.
In a first aspect, a method for detecting a vehicle travelable area is provided, including: acquiring a binocular image of a vehicle driving direction, wherein the binocular image comprises a left eye image and a right eye image; obtaining parallax information of the travelable area boundary in the binocular image according to the travelable area boundary in the left eye image and the travelable area boundary in the right eye image; and obtaining a vehicle travelable area in the binocular image based on the parallax information.
The binocular image may include a left eye image and a right eye image; for example, left and right two-dimensional images respectively acquired by two cameras mounted in parallel at the same height in the autonomous vehicle.
In one possible implementation, the binocular image may be an image of the road surface or the surrounding environment in the driving direction, acquired by the autonomous vehicle; examples include road surface images and images of obstacles and pedestrians near the vehicle.
It should be understood that parallax may refer to the difference in direction that results from viewing the same object from two points separated by a distance. For example, the difference in the horizontal direction between the positions of the same road travelable area as acquired by the left-eye camera and the right-eye camera of the autonomous vehicle may serve as parallax information.
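To make the relationship between parallax and distance concrete, the sketch below applies the standard triangulation relation for a rectified stereo pair, Z = f * B / d. It is only an illustration: the focal length, baseline, and principal point used here are assumed placeholder values, not parameters taken from this application.

```python
# Minimal sketch: recover the 3D position of a single boundary point from its
# disparity, assuming a rectified, horizontally aligned stereo pair.
# All numeric camera parameters below are illustrative placeholders.

def boundary_point_to_camera_coords(u_left, v, disparity,
                                    focal_px=1000.0,   # focal length in pixels (assumed)
                                    baseline_m=0.12,   # camera baseline in metres (assumed)
                                    cx=640.0, cy=360.0):
    """Return (X, Y, Z) of a boundary point in the left-camera frame."""
    if disparity <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    Z = focal_px * baseline_m / disparity   # depth along the optical axis
    X = (u_left - cx) * Z / focal_px        # lateral offset
    Y = (v - cy) * Z / focal_px             # vertical offset
    return X, Y, Z

# Example: a boundary point at column 700 of the left image with a disparity
# of 8 pixels lies roughly 15 m ahead of the camera under these parameters.
print(boundary_point_to_camera_coords(700.0, 400.0, 8.0))
```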
In one possible implementation manner, the acquired left eye image and the acquired right eye image can be respectively input to a deep learning network trained in advance for identifying the travelable region in the image; the vehicle travelable areas in the left eye image and the right eye image can be identified through the pre-trained deep learning network.
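The application does not specify the network architecture or how boundary points are taken from its output. As one hedged illustration, the sketch below assumes the network returns a binary drivable-area mask (the `segmentation_model` call is hypothetical) and takes, for each image column, the topmost drivable row as the boundary point.

```python
import numpy as np

def drivable_boundary_points(mask: np.ndarray) -> np.ndarray:
    """For each image column, return the row index of the upper edge of the
    drivable area (the boundary closest to the top of the image), or -1 if
    the column contains no drivable pixels. `mask` is an HxW boolean array."""
    h, w = mask.shape
    boundary = np.full(w, -1, dtype=int)
    for u in range(w):
        rows = np.flatnonzero(mask[:, u])
        if rows.size:
            boundary[u] = rows.min()   # topmost drivable row in this column
    return boundary

# Hypothetical usage with a pre-trained segmentation network:
# left_mask = segmentation_model(left_image)    # HxW boolean drivable-area mask
# right_mask = segmentation_model(right_image)
# left_boundary = drivable_boundary_points(left_mask)
# right_boundary = drivable_boundary_points(right_mask)
```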
In the embodiment of the application, a binocular image of the vehicle driving direction can be acquired, and the boundary of the vehicle travelable area in the left eye image and in the right eye image can be obtained from the binocular image; parallax information of the travelable-area boundary in the binocular image can then be obtained from the travelable-area boundary in the left eye image and the travelable-area boundary in the right eye image, and the position of the vehicle travelable area in the binocular image can be obtained based on the parallax information. With this detection method, pixel-by-pixel disparity computation over the binocular image to obtain a global disparity map can be avoided: the travelable-area boundary points can be located in the coordinate system of the autonomous vehicle merely by computing the parallax information of the travelable-area boundary in the binocular image. This greatly reduces the amount of computation while ensuring detection accuracy, and improves the efficiency with which the autonomous vehicle detects road conditions.
With reference to the first aspect, in certain implementations of the first aspect, the method further includes: performing segmentation processing on the boundary of the travelable region in the first image; the obtaining of the parallax information of the drivable area boundary in the binocular image according to the drivable area boundary in the left eye image and the drivable area boundary in the right eye image includes:
performing travelable region boundary matching on a second image based on the N boundary point line segments obtained by the segmentation processing to obtain the parallax information, wherein the first image is any one of the binocular images; the second image is another image different from the first image in the binocular image; n is an integer greater than or equal to 2.
The first image may refer to a left eye image in a binocular image, and the second image refers to a right eye image in the binocular image; alternatively, the first image may refer to a right eye image among binocular images, and the second image may refer to a left eye image among the binocular images.
In a possible implementation manner, segmentation processing can be performed on the travelable-area boundary in the left eye image based on the inflection points of that boundary, so as to obtain N segments of the travelable-area boundary in the left eye image; the travelable-area boundary in the right eye image is then matched against these N boundary segments of the left eye image to obtain the parallax information.
In another possible implementation manner, segmentation processing may be performed on the travelable-area boundary in the right eye image based on the inflection points of that boundary, so as to obtain N segments of the travelable-area boundary in the right eye image; the travelable-area boundary in the left eye image is then matched against these N boundary segments of the right eye image to obtain the parallax information.
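Neither implementation above fixes a concrete inflection-point criterion. The sketch below is one assumed reading: the boundary polyline of either image is split into N segments wherever the local slope changes by more than a chosen threshold, which itself is an assumed value.

```python
def split_boundary_at_inflection_points(points, slope_jump=0.5):
    """Split a boundary polyline into segments at inflection points.

    `points` is a list of (u, v) pixel coordinates ordered along the boundary.
    A new segment is started wherever the slope between consecutive points
    changes by more than `slope_jump` (an assumed threshold)."""
    segments, current = [], [points[0]]
    prev_slope = None
    for (u0, v0), (u1, v1) in zip(points, points[1:]):
        slope = (v1 - v0) / (u1 - u0) if u1 != u0 else float("inf")
        if prev_slope is not None and abs(slope - prev_slope) > slope_jump:
            segments.append(current)        # close the segment at the inflection point
            current = [(u0, v0)]
        current.append((u1, v1))
        prev_slope = slope
    segments.append(current)
    return segments                          # the N boundary point line segments
```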
In the embodiment of the application, when the travelable-area boundary in the left eye image is matched with the travelable-area boundary in the right eye image, that is, when the parallax information of the travelable-area boundary in the binocular image is calculated, the travelable-area boundary in either one of the binocular images can be divided into segments and matched segment by segment. This can improve the accuracy of boundary matching, so that information about the vehicle travelable area on the road is acquired more accurately.
With reference to the first aspect, in certain implementations of the first aspect, the performing travelable region boundary matching on the second image based on the N boundary point line segments obtained through the segmentation processing to obtain the disparity information includes:
and matching the travelable region boundary of the second image according to the N boundary point line segments and a matching strategy to obtain the parallax information, wherein the matching strategy is determined according to the slopes of the N boundary point line segments.
In the embodiment of the application, when the travelable areas in the left eye image and the right eye image are matched segment by segment, a matching strategy may be applied; that is, boundary point line segments with different slopes may be matched using different matching strategies, which can improve the matching accuracy of the travelable-area boundary points.
With reference to the first aspect, in some implementation manners of the first aspect, the performing travelable region boundary matching on the second image according to the N boundary point line segments and a matching policy to obtain the disparity information includes:
for a first type of boundary point line segments in the N boundary point line segments, performing travelable region boundary matching on the second image by adopting a first matching strategy in the matching strategies, wherein the first type of boundary point line segments comprise travelable region boundaries of road edges and travelable region boundaries of other vehicle sides;
and for a second type of boundary point line segments in the N boundary point line segments, performing travelable area boundary matching on the second image by adopting a second matching strategy in the matching strategies, wherein the distance between the boundary points in the second type of boundary point line segments and the vehicle is the same.
The first type of boundary point line segment may be a boundary point line segment with a larger slope, such as a road-edge area or the side area of another vehicle; the second type of boundary point line segment may be a boundary point line segment with a smaller slope, such as the rear area of another vehicle. For boundary point line segments with a larger slope, the travelable-area boundaries extracted from the left eye image and the right eye image basically do not overlap; for boundary point line segments with a smaller slope, the boundaries extracted from the left eye image and the right eye image partially overlap.
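As a hedged illustration of this slope-based distinction, the sketch below labels a boundary point line segment as first type or second type from its average absolute slope; the numeric threshold is an assumption made here, not a value given in the application.

```python
def classify_segment(segment, slope_threshold=1.0):
    """Label a boundary point line segment as 'first_type' (steep: road edge or
    the side of another vehicle) or 'second_type' (flat: e.g. the rear of a
    leading vehicle) using its average absolute slope (assumed threshold)."""
    (u0, v0), (u1, v1) = segment[0], segment[-1]
    if u1 == u0:
        return "first_type"                  # vertical segment: treat as steep
    avg_slope = abs((v1 - v0) / (u1 - u0))
    return "first_type" if avg_slope > slope_threshold else "second_type"
```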
The embodiment of the application provides a segment-wise matching strategy based on the slope of the travelable-area boundary: the detected boundary is divided into segments according to its slope distribution, and different matching strategies are used for boundary segments with a larger slope and boundary segments with a smaller slope, which can improve the matching accuracy of the travelable-area boundary points.
With reference to the first aspect, in some implementations of the first aspect, the matching policy includes a first matching policy and a second matching policy, where the first matching policy is to perform matching through a search area, the search area is an area generated by taking any one point in a first boundary point line segment as a center, and the first boundary point line segment is any one boundary point line segment in the N boundary point line segments; the second matching strategy is to perform matching through a preset search step, wherein the preset search step is determined based on the boundary point parallax of one of the N boundary point line segments.
In one possible implementation manner, for a boundary point line segment with a larger slope, a boundary point on the travelable-area boundary segment of the first image (for example, the left eye image) of the binocular image may be taken as a template point, a search area may be generated centered on the boundary point of the second image (for example, the right eye image) that lies in the same row, and the template point may then be matched against candidates within this search area.
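The sketch below shows one way such a search-area match could look, using a sum-of-absolute-differences cost over small grayscale patches on the same image row. The cost function, the window sizes, and the assumption that points lie away from the image border are all choices made here for illustration, not details from the application.

```python
import numpy as np

def match_boundary_point(left_img, right_img, u_left, v, u_right_center,
                         search_radius=20, patch=5):
    """Match the left-image template point (u_left, v) against candidates on the
    same row of the right image, inside a horizontal window centred on
    u_right_center (the right-image boundary point detected in that row).
    Assumes grayscale images and points at least `patch` pixels from the border.
    Returns the disparity u_left - u_best of the best match."""
    h = patch // 2
    template = left_img[v - h:v + h + 1, u_left - h:u_left + h + 1].astype(np.float32)
    best_u, best_cost = u_right_center, np.inf
    for u in range(max(h, u_right_center - search_radius),
                   min(right_img.shape[1] - h, u_right_center + search_radius + 1)):
        candidate = right_img[v - h:v + h + 1, u - h:u + h + 1].astype(np.float32)
        cost = np.abs(template - candidate).sum()   # SAD cost
        if cost < best_cost:
            best_u, best_cost = u, cost
    return u_left - best_u
```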
In another possible implementation manner, for a boundary point line segment with a smaller slope, since the boundary points extracted from the first image (e.g., the left eye image) and the boundary points extracted from the second image (e.g., the right eye image) may partially overlap, a method of inflection-point parallax correction may be used to match these boundary points.
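The application does not spell out the inflection-point parallax correction. The sketch below is only one assumed reading: the disparity already found at the segment's inflection point (shared with an adjacent steep segment matched by the first strategy) is taken as the preset search step, and each low-slope boundary point is refined within a small window around it. It reuses the `match_boundary_point` sketch above.

```python
def match_flat_segment(left_img, right_img, segment_points, step_disparity,
                       refine_radius=2, patch=5):
    """Assumed reading of the second matching strategy: points of a low-slope
    segment are roughly equidistant from the vehicle, so the inflection-point
    disparity `step_disparity` is used as a preset search step and each point
    is only refined within `refine_radius` pixels of it (assumed values).
    Returns a list of (u, v, disparity) triples."""
    results = []
    for (u, v) in segment_points:
        d = match_boundary_point(left_img, right_img, u, v,
                                 u_right_center=u - step_disparity,
                                 search_radius=refine_radius, patch=patch)
        results.append((u, v, d))
    return results
```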
With reference to the first aspect, in certain implementations of the first aspect, the N boundary point line segments are determined according to inflection points of a travelable region boundary in the first image.
In a second aspect, there is provided a detection apparatus of a vehicle travelable region, including: the device comprises an acquisition unit, a display unit and a control unit, wherein the acquisition unit is used for acquiring binocular images of the driving direction of a vehicle, and the binocular images comprise a left eye image and a right eye image; the processing unit is used for obtaining parallax information of the boundary of the drivable area in the binocular image according to the boundary of the drivable area in the left eye image and the boundary of the drivable area in the right eye image; and obtaining a vehicle travelable area in the binocular image based on the parallax information.
The binocular image may include a left eye image and a right eye image; for example, left and right two-dimensional images respectively acquired by two cameras mounted in parallel at the same height in the autonomous vehicle.
In one possible implementation, the binocular image may be an image of the road surface or the surrounding environment in the driving direction, acquired by the autonomous vehicle; examples include road surface images and images of obstacles and pedestrians near the vehicle.
It should be understood that parallax may refer to the difference in direction that results from viewing the same object from two points separated by a distance. For example, the difference in the horizontal direction between the positions of the same road travelable area as acquired by the left-eye camera and the right-eye camera of the autonomous vehicle may serve as parallax information.
In one possible implementation manner, the acquired left eye image and the acquired right eye image can be respectively input to a deep learning network trained in advance for identifying a travelable area in the image; the vehicle drivable regions in the left eye image and the right eye image can be identified through the pre-trained deep learning network.
In the embodiment of the application, a binocular image of the vehicle driving direction can be acquired, and the boundary of the vehicle travelable area in the left eye image and in the right eye image can be obtained from the binocular image; parallax information of the travelable-area boundary in the binocular image can then be obtained from the travelable-area boundary in the left eye image and the travelable-area boundary in the right eye image, and the position of the vehicle travelable area in the binocular image can be obtained based on the parallax information. With this detection method, pixel-by-pixel disparity computation over the binocular image to obtain a global disparity map can be avoided: the travelable-area boundary points can be located in the coordinate system of the autonomous vehicle merely by computing the parallax information of the travelable-area boundary in the binocular image. This greatly reduces the amount of computation while ensuring detection accuracy, and improves the efficiency with which the autonomous vehicle detects road conditions.
With reference to the second aspect, in certain implementations of the second aspect, the processing unit is further configured to: performing segmentation processing on the boundary of the travelable region in the first image; and matching the boundary of the travelable region of the second image based on the N boundary point line segments obtained by the segmentation processing to obtain the parallax information, wherein N is an integer greater than or equal to 2.
The first image may refer to a left eye image in a binocular image, and the second image refers to a right eye image in the binocular image; alternatively, the first image may refer to a right eye image among binocular images, and the second image may refer to a left eye image among the binocular images.
In a possible implementation manner, segmentation processing can be performed on the travelable-area boundary in the left eye image based on the inflection points of that boundary, so as to obtain N segments of the travelable-area boundary in the left eye image; the travelable-area boundary in the right eye image is then matched against these N boundary segments of the left eye image to obtain the parallax information.
In another possible implementation manner, segmentation processing may be performed on the travelable region boundary in the right eye image based on an inflection point of the travelable region boundary in the right eye image in the binocular image, so as to obtain N sections of travelable region boundaries in the right eye image; and matching the drivable area boundary in the left eye image through the N sections of drivable area boundaries in the right eye image so as to obtain parallax information.
In the embodiment of the application, when the travelable-area boundary in the left eye image is matched with the travelable-area boundary in the right eye image, that is, when the parallax information of the travelable-area boundary in the binocular image is calculated, the travelable-area boundary in either one of the binocular images can be divided into segments and matched segment by segment. This can improve the accuracy of boundary matching, so that information about the vehicle travelable area on the road is acquired more accurately.
With reference to the second aspect, in some implementations of the second aspect, the processing unit is specifically configured to: and matching the boundary of the travelable region of the second image according to the N boundary point line segments and a matching strategy to obtain the parallax information, wherein the matching strategy is determined according to the slopes of the N boundary point line segments.
In the embodiment of the application, when the travelable areas in the left eye image and the right eye image are matched segment by segment, a matching strategy may be applied; that is, boundary point line segments with different slopes may be matched using different matching strategies, which can improve the matching accuracy of the travelable-area boundary points.
With reference to the second aspect, in some implementations of the second aspect, the processing unit is specifically configured to:
for a first type of boundary point line segments in the N boundary point line segments, performing travelable region boundary matching on the second image by adopting a first matching strategy in the matching strategies, wherein the first type of boundary point line segments comprise travelable region boundaries of road edges and travelable region boundaries of other vehicle sides;
and for a second type of boundary point line segments in the N boundary point line segments, performing travelable area boundary matching on the second image by adopting a second matching strategy in the matching strategies, wherein the distance between the boundary points in the second type of boundary point line segments and the vehicle is the same.
The first type of boundary point line segment may be a boundary point line segment with a larger slope, such as a road-edge area or the side area of another vehicle; the second type of boundary point line segment may be a boundary point line segment with a smaller slope, such as the rear area of another vehicle. For boundary point line segments with a larger slope, the travelable-area boundaries extracted from the left eye image and the right eye image basically do not overlap; for boundary point line segments with a smaller slope, the boundaries extracted from the left eye image and the right eye image partially overlap.
The embodiment of the application provides a segment-wise matching strategy based on the slope of the travelable-area boundary: the detected boundary is divided into segments according to its slope distribution, and different matching strategies are used for boundary segments with a larger slope and boundary segments with a smaller slope, which can improve the matching accuracy of the travelable-area boundary points.
With reference to the second aspect, in some implementations of the second aspect, the matching strategy includes a first matching strategy and a second matching strategy, where the first matching strategy is to perform matching through a search area, the search area is an area generated by taking any one point in a first boundary point line segment as a center, and the first boundary point line segment is any one boundary point line segment of the N boundary point line segments; the second matching strategy is to perform matching through a preset search step, where the preset search step is determined based on the boundary point parallax of one of the N boundary point line segments.
In one possible implementation manner, for a boundary point line segment with a larger slope, a boundary point on the travelable-area boundary segment of the first image (for example, the left eye image) of the binocular image may be taken as a template point, a search area may be generated centered on the boundary point of the second image (for example, the right eye image) that lies in the same row, and the template point may then be matched against candidates within this search area.
In another possible implementation manner, for a boundary point line segment with a smaller slope, since the boundary points extracted from the first image (e.g., the left eye image) and the boundary points extracted from the second image (e.g., the right eye image) may partially overlap, a method of inflection-point parallax correction may be used to match these boundary points.
With reference to the second aspect, in certain implementations of the second aspect, the N boundary point line segments are determined according to inflection points of travelable region boundaries in the first image.
In a third aspect, there is provided a detection device of a vehicle travelable region, including: a memory for storing a program; a processor for executing the memory-stored program, the processor for performing the following processes when the memory-stored program is executed: acquiring a binocular image of a vehicle driving direction, wherein the binocular image comprises a left eye image and a right eye image; obtaining parallax information of the travelable area boundary in the binocular image according to the travelable area boundary in the left eye image and the travelable area boundary in the right eye image; and obtaining a vehicle travelable area in the binocular image based on the parallax information.
In a possible implementation manner, the processor included in the detection device is further configured to execute the method for detecting the vehicle travelable region in any one of the first aspect and the first aspect.
It will be appreciated that the extensions, definitions, explanations, and descriptions of the relevant content in the first aspect described above also apply to the same content in the third aspect.
In a fourth aspect, a computer-readable storage medium is provided, where the computer-readable storage medium is used to store program codes, and when the program codes are executed by a computer, the computer is used to execute the detection method in any one of the implementation manners of the first aspect and the first aspect.
In a fifth aspect, a chip is provided, where the chip includes a processor, and the processor is configured to execute the detection method in any implementation manner of the first aspect and the first aspect.
In one possible implementation, the chip of the fifth aspect described above may be located in an in-vehicle terminal of an autonomous vehicle.
In a sixth aspect, there is provided a computer program product comprising: computer program code for causing a computer to perform the detection method in any one of the implementations of the first aspect and the first aspect when the computer program code runs on a computer.
It should be noted that, all or part of the computer program code may be stored in the first storage medium, where the first storage medium may be packaged together with the processor or may be packaged separately from the processor, and this is not particularly limited in this embodiment of the present application.
Drawings
FIG. 1 is a schematic structural diagram of a vehicle according to an embodiment of the present disclosure;
FIG. 2 is a schematic structural diagram of a computer system according to an embodiment of the present application;
fig. 3 is a schematic application diagram of a cloud-side instruction autonomous vehicle according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of a detection system for an autonomous vehicle provided by an embodiment of the present application;
FIG. 5 is a schematic diagram of coordinate transformation provided by an embodiment of the present application;
FIG. 6 is a schematic flow chart of a method for detecting a vehicle travelable region according to an embodiment of the present application;
FIG. 7 is a schematic flow chart of a method for detecting a vehicle travelable region according to an embodiment of the present application;
FIG. 8 is a schematic diagram of a travelable region boundary segment provided by an embodiment of the present application;
FIG. 9 is a schematic diagram of acquiring an image of a road surface according to an embodiment of the present application;
FIG. 10 is a schematic diagram of boundary point line segment matching for a larger slope according to an embodiment of the present application;
FIG. 11 is a schematic diagram of a boundary point line segment with a smaller slope according to an embodiment of the present application;
FIG. 12 is a schematic diagram of a sub-pixelation process provided by an embodiment of the present application;
FIG. 13 is a schematic diagram of obtaining locations of travelable region boundary points according to an embodiment of the present disclosure;
FIG. 14 is a schematic diagram of obtaining locations of travelable region boundary points according to an embodiment of the present disclosure;
FIG. 15 is a schematic diagram of a vehicle travelable region detection method according to an embodiment of the present application applied to a specific product form;
FIG. 16 is a schematic structural diagram of a detection device for a vehicle travelable region according to an embodiment of the present application;
FIG. 17 is a schematic structural diagram of a detection device for a vehicle travelable region according to another embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application without making any creative effort belong to the protection scope of the present application.
Fig. 1 is a functional block diagram of a vehicle 100 provided in an embodiment of the present application.
Where the vehicle 100 may be a human-driven vehicle, or the vehicle 100 may be configured to be in a fully or partially autonomous driving mode.
In one example, the vehicle 100 may control the own vehicle while in the autonomous driving mode, and may determine a current state of the vehicle and its surroundings by human operation, determine a possible behavior of at least one other vehicle in the surroundings, and determine a confidence level corresponding to a likelihood that the other vehicle performs the possible behavior, controlling the vehicle 100 based on the determined information. While the vehicle 100 is in the autonomous driving mode, the vehicle 100 may be placed into operation without human interaction.
Various subsystems may be included in vehicle 100, such as, for example, a travel system 110, a sensing system 120, a control system 130, one or more peripheral devices 140, as well as a power supply 160, a computer system 150, and a user interface 170.
Alternatively, vehicle 100 may include more or fewer subsystems, and each subsystem may include multiple elements. In addition, each of the sub-systems and elements of the vehicle 100 may be interconnected by wire or wirelessly.
For example, the travel system 110 may include components for providing powered motion to the vehicle 100. In one embodiment, the travel system 110 may include an engine 111, a transmission 112, an energy source 113, and wheels 114/tires. Wherein the engine 111 may be an internal combustion engine, an electric motor, an air compression engine, or other type of engine combination; for example, a hybrid engine composed of a gasoline engine and an electric motor, and a hybrid engine composed of an internal combustion engine and an air compression engine. The engine 111 may convert the energy source 113 into mechanical energy.
Illustratively, the energy source 113 may include gasoline, diesel, other petroleum-based fuels, propane, other compressed gas-based fuels, ethanol, solar panels, batteries, and other sources of electrical power. The energy source 113 may also provide energy to other systems of the vehicle 100.
For example, the transmission 112 may include a gearbox, a differential, and a drive shaft; wherein the transmission 112 may transmit mechanical power from the engine 111 to the wheels 114.
In one embodiment, the transmission 112 may also include other devices, such as a clutch. Wherein the drive shaft may comprise one or more shafts that may be coupled to one or more wheels 114.
For example, the sensing system 120 may include several sensors that sense information about the environment surrounding the vehicle 100.
For example, the sensing system 120 may include a positioning system 121 (e.g., a GPS system, a beidou system, or other positioning system), an inertial measurement unit 122 (IMU), a radar 123, a laser range finder 124, and a camera 125. The sensing system 120 may also include sensors of internal systems of the monitored vehicle 100 (e.g., an in-vehicle air quality monitor, a fuel gauge, an oil temperature gauge, etc.). Sensor data from one or more of these sensors may be used to detect the object and its corresponding characteristics (position, shape, orientation, velocity, etc.). Such detection and identification is a critical function of the safe operation of the autonomous vehicle 100.
The positioning system 121 may be used, among other things, to estimate the geographic location of the vehicle 100. The IMU 122 may be used to sense position and orientation changes of the vehicle 100 based on inertial acceleration. In one embodiment, the IMU 122 may be a combination of an accelerometer and a gyroscope.
For example, the radar 123 may utilize radio signals to sense objects within the surrounding environment of the vehicle 100. In some embodiments, in addition to sensing objects, radar 123 may also be used to sense the speed and/or heading of an object.
For example, the laser rangefinder 124 may utilize a laser to sense objects in the environment in which the vehicle 100 is located. In some embodiments, laser rangefinder 124 may include one or more laser sources, laser scanners, and one or more detectors, among other system components.
Illustratively, the camera 125 may be used to capture multiple images of the surrounding environment of the vehicle 100. For example, the camera 125 may be a still camera or a video camera.
As shown in FIG. 1, a control system 130 is provided for controlling the operation of the vehicle 100 and its components. The control system 130 may include various elements, such as a steering system 131, a throttle 132, a braking unit 133, a computer vision system 134, a route control system 135, and an obstacle avoidance system 136.
For example, the steering system 131 may be operable to adjust the heading of the vehicle 100; in one embodiment, it may be a steering wheel system. The throttle 132 may be used to control the operating speed of the engine 111 and thus the speed of the vehicle 100.
For example, the brake unit 133 may be used to control the vehicle 100 to decelerate; the brake unit 133 may use friction to slow the wheel 114. In other embodiments, the brake unit 133 may convert the kinetic energy of the wheel 114 into an electrical current. The brake unit 133 may take other forms to slow the rotational speed of the wheels 114 to control the speed of the vehicle 100.
As shown in FIG. 1, the computer vision system 134 may be operable to process and analyze images captured by the camera 125 in order to identify objects and/or features in the environment surrounding the vehicle 100. Such objects and/or features may include traffic signals, road boundaries, and obstacles. The computer vision system 134 may use object recognition algorithms, structure from motion (SFM) algorithms, video tracking, and other computer vision techniques. In some embodiments, the computer vision system 134 may be used to map an environment, track objects, estimate the speed of objects, and so forth.
For example, route control system 135 may be used to determine a route of travel for vehicle 100. In some embodiments, route control system 135 may combine data from sensors, GPS, and one or more predetermined maps to determine a travel route for vehicle 100.
As shown in fig. 1, obstacle avoidance system 136 may be used to identify, evaluate, and avoid or otherwise negotiate potential obstacles in the environment of vehicle 100.
In one example, the control system 130 may additionally or alternatively include components other than those shown and described, or some of the components shown above may be omitted.
As shown in fig. 1, vehicle 100 may interact with external sensors, other vehicles, other computer systems, or users through peripherals 140; the peripheral devices 140 may include, among other things, a wireless communication system 141, an in-vehicle computer 142, a microphone 143, and/or a speaker 144.
In some embodiments, the peripheral device 140 may provide a means for the vehicle 100 to interact with the user interface 170. For example, the in-vehicle computer 142 may provide information to a user of the vehicle 100. The user interface 116 may also operate the in-vehicle computer 142 to receive user inputs; the in-vehicle computer 142 may be operated through a touch screen. In other cases, the peripheral device 140 may provide a means for the vehicle 100 to communicate with other devices located within the vehicle. For example, the microphone 143 may receive audio (e.g., voice commands or other audio input) from a user of the vehicle 100. Similarly, the speaker 144 may output audio to a user of the vehicle 100.
As depicted in fig. 1, the wireless communication system 141 may communicate wirelessly with one or more devices, either directly or via a communication network. For example, the wireless communication system 141 may use 3G cellular communication, such as code division multiple access (CDMA), EVDO, or global system for mobile communications (GSM)/general packet radio service (GPRS); 4G cellular communication, such as long term evolution (LTE); or 5G cellular communication. The wireless communication system 141 may also communicate with a wireless local area network (WLAN) using wireless fidelity (WiFi).
In some embodiments, the wireless communication system 141 may communicate directly with devices using an infrared link, Bluetooth, or ZigBee, or using other wireless protocols such as various vehicle communication systems. For example, the wireless communication system 141 may include one or more dedicated short range communications (DSRC) devices, which may include public and/or private data communication between vehicles and/or roadside stations.
As shown in fig. 1, a power supply 160 may provide power to various components of the vehicle 100. In one embodiment, power source 160 may be a rechargeable lithium ion or lead acid battery. One or more battery packs of such batteries may be configured as a power source to provide power to various components of the vehicle 100. In some embodiments, the power source 160 and the energy source 113 may be implemented together, such as in some all-electric vehicles.
Illustratively, some or all of the functionality of the vehicle 100 may be controlled by a computer system 150, wherein the computer system 150 may include at least one processor 151, the processor 151 executing instructions 153 stored in a non-transitory computer readable medium, such as a memory 152. The computer system 150 may also be a plurality of computing devices that control individual components or subsystems of the vehicle 100 in a distributed manner.
For example, the processor 151 may be any conventional processor, such as a commercially available CPU.
Alternatively, the processor may be a dedicated device such as an ASIC or other hardware-based processor. Although fig. 1 functionally illustrates processors, memories, and other elements of a computer in the same blocks, one of ordinary skill in the art will appreciate that the processors, computers, or memories may actually comprise multiple processors, computers, or memories that may or may not be stored within the same physical housing. For example, the memory may be a hard drive or other storage medium located in a different housing than the computer. Thus, references to a processor or computer are to be understood as including references to a collection of processors or computers or memories which may or may not operate in parallel. Rather than using a single processor to perform the steps described herein, some components, such as the steering component and the retarding component, may each have their own processor that performs only computations related to the component-specific functions.
In various aspects described herein, the processor may be located remotely from the vehicle and in wireless communication with the vehicle. In other aspects, some of the processes described herein are executed on a processor disposed within the vehicle and others are executed by a remote processor, including taking the steps necessary to perform a single maneuver.
In some embodiments, the memory 152 may contain instructions 153 (e.g., program logic), which instructions 153 may be executed by the processor 151 to perform various functions of the vehicle 100, including those described above. The memory 152 may also contain additional instructions, including instructions to send data to, receive data from, interact with, and/or control one or more of the travel system 110, the sensing system 120, the control system 130, and the peripheral devices 140, for example.
Illustratively, in addition to instructions 153, memory 152 may also store data such as road maps, route information, location, direction, speed of the vehicle, and other such vehicle data, among other information. Such information may be used by the vehicle 100 and the computer system 150 during operation of the vehicle 100 in autonomous, semi-autonomous, and/or manual modes.
As shown in fig. 1, user interface 170 may be used to provide information to or receive information from a user of vehicle 100. Optionally, the user interface 170 may include one or more input/output devices within the collection of peripheral devices 140, such as a wireless communication system 141, an in-vehicle computer 142, a microphone 143, and a speaker 144.
In embodiments of the present application, the computer system 150 may control the functions of the vehicle 100 based on inputs received from various subsystems (e.g., the travel system 110, the sensing system 120, and the control system 130) and from the user interface 170. For example, the computer system 150 may utilize inputs from the control system 130 in order to control the brake unit 133 to avoid obstacles detected by the sensing system 120 and the obstacle avoidance system 136. In some embodiments, the computer system 150 is operable to provide control over many aspects of the vehicle 100 and its subsystems.
Alternatively, one or more of these components described above may be mounted or associated separately from the vehicle 100. For example, the memory 152 may exist partially or completely separate from the vehicle 100. The above components may be communicatively coupled together in a wired and/or wireless manner.
Optionally, the above components are only an example, in an actual application, components in the above modules may be added or deleted according to an actual need, and fig. 1 should not be construed as a limitation to the embodiment of the present application.
Alternatively, the vehicle 100 may be an autonomous automobile traveling on a road, and objects within its surrounding environment may be identified to determine an adjustment to the current speed. The object may be another vehicle, a traffic control device, or another type of object. In some examples, each identified object may be considered independently, and based on the respective characteristics of the object, such as its current speed, acceleration, separation from the vehicle, etc., may be used to determine the speed at which the autonomous vehicle is to be adjusted.
Optionally, the vehicle 100 or a computing device associated with the vehicle 100 (e.g., the computer system 150, the computer vision system 134, the memory 152 of fig. 1) may predict behavior of the identified objects based on characteristics of the identified objects and the state of the surrounding environment (e.g., traffic, rain, ice on the road, etc.).
Optionally, each identified object depends on the behavior of each other, and therefore, it is also possible to predict the behavior of a single identified object taking all identified objects together into account. The vehicle 100 is able to adjust its speed based on the predicted behaviour of said identified object. In other words, the autonomous vehicle is able to determine that the vehicle will need to adjust (e.g., accelerate, decelerate, or stop) to a steady state based on the predicted behavior of the object. In this process, other factors may also be considered to determine the speed of the vehicle 100, such as the lateral position of the vehicle 100 in the road on which it is traveling, the curvature of the road, the proximity of static and dynamic objects, and so forth.
In addition to providing instructions to adjust the speed of the autonomous vehicle, the computing device may also provide instructions to modify the steering angle of the vehicle 100 to cause the autonomous vehicle to follow a given trajectory and/or to maintain a safe lateral and longitudinal distance from objects in the vicinity of the autonomous vehicle (e.g., cars in adjacent lanes on the road).
The vehicle 100 may be a car, a truck, a motorcycle, a bus, a boat, an airplane, a helicopter, a lawn mower, a recreational vehicle, a playground vehicle, construction equipment, a trolley, a golf cart, a train, a cart, or the like, and the embodiment of the present invention is not particularly limited.
In one possible implementation, the vehicle 100 shown in fig. 1 may be an autonomous vehicle, and the autonomous system is described in detail below.
Fig. 2 is a schematic diagram of an automatic driving system provided in an embodiment of the present application.
The autopilot system shown in fig. 2 includes a computer system 201, wherein the computer system 201 includes a processor 203, and the processor 203 is coupled to a system bus 205. The processor 203 may be one or more processors, each of which may include one or more processor cores. A display adapter 207 (video adapter) may drive a display 209, and the display 209 is coupled to the system bus 205. The system bus 205 may be coupled via a bus bridge 211 to an input/output (I/O) bus 213, and an I/O interface 215 is coupled to the I/O bus. The I/O interface 215 communicates with various I/O devices, such as an input device 217 (e.g., keyboard, mouse, touch screen, etc.) and a media tray 221 (e.g., CD-ROM, multimedia interface, etc.). A transceiver 223 may send and/or receive radio communication signals, and a camera 255 may capture digital video images of scenes and motion. The interfaces connected to the I/O interface 215 may include USB ports 225.
The processor 203 may be any conventional processor, such as a Reduced Instruction Set Computing (RISC) processor, a Complex Instruction Set Computing (CISC) processor, or a combination thereof.
Alternatively, the processor 203 may be a dedicated device such as an Application Specific Integrated Circuit (ASIC); the processor 203 may be a neural network processor or a combination of a neural network processor and the above-described conventional processor.
Optionally, in various embodiments described herein, the computer system 201 may be located remotely from the autonomous vehicle and may communicate wirelessly with the autonomous vehicle. In other aspects, some processes described herein are performed on a processor disposed within an autonomous vehicle, others being performed by a remote processor, including taking the actions required to perform a single maneuver.
Computer system 201 may communicate with software deploying server 249 via network interface 229. The network interface 229 may be a hardware network interface, such as a network card. The network 227 may be an external network, such as the internet, or an internal network, such as an ethernet or a Virtual Private Network (VPN). Optionally, the network 227 may also be a wireless network, such as a wifi network, a cellular network, or the like.
As shown in FIG. 2, a hard drive interface 231 is coupled to the system bus 205 and may be coupled to a hard drive 233, and a system memory 235 is coupled to the system bus 205. The software running in the system memory 235 may include an operating system 237 and application programs 243. The operating system 237 may include a shell 239 and a kernel 241. The shell 239 is an interface between the user and the kernel of the operating system; it can be the outermost layer of the operating system and may manage the interaction between the user and the operating system, such as waiting for user input, interpreting the user input for the operating system, and processing the output of the operating system. The kernel 241 may consist of those portions of the operating system that manage memory, files, peripherals, and system resources; interacting directly with the hardware, the operating system kernel typically runs processes and provides inter-process communication, CPU time-slice management, interrupt handling, memory management, I/O management, and the like. The application programs 243 include programs related to controlling the automatic driving of a vehicle, such as programs that manage the interaction of an automatically driven vehicle with obstacles on the road, programs that control the route or speed of an automatically driven vehicle, and programs that control the interaction of an automatically driven vehicle with other automatically driven vehicles on the road. The application programs 243 also exist on the system of the software deploying server 249. In one embodiment, the computer system 201 may download an application program from the software deploying server 249 when the autopilot-related program 247 needs to be executed.
For example, the application 243 may be a program for controlling an autonomous vehicle to perform automatic parking.
Illustratively, a sensor 253 can be associated with the computer system 201, and the sensor 253 can be used to detect the environment surrounding the computer 201.
For example, the sensor 253 can detect animals, cars, obstacles, crosswalks, etc., and further the sensor can detect the environment around the objects such as the animals, cars, obstacles, crosswalks, etc., such as: the environment surrounding the animal, e.g., other animals present around the animal, weather conditions, brightness of the surrounding environment, etc.
Alternatively, if the computer 201 is located on an autonomous automobile, the sensor may be a camera, infrared sensor, chemical detector, microphone, or the like.
For example, in the scenario of automatic parking, the sensor 253 may be used to detect the size or position of a garage and surrounding obstacles around the vehicle, so that the vehicle can sense the distance between the garage and the surrounding obstacles, and perform collision detection when parking to prevent the vehicle from colliding with the obstacles.
In one example, the computer system 150 shown in FIG. 1 may also receive information from, or transfer information to, other computer systems. Alternatively, sensor data collected from the sensing system 120 of the vehicle 100 may be transferred to another computer for processing of this data.
For example, as shown in fig. 3, data from computer system 312 may be transmitted via a network to cloud-side server 320 for further processing. The network and intermediate nodes may include various configurations and protocols, including the internet, world wide web, intranets, virtual private networks, wide area networks, local area networks, private networks using proprietary communication protocols of one or more companies, ethernet, WiFi and HTTP, and various combinations of the foregoing; such communications may be by any device capable of communicating data to and from other computers, such as modems and wireless interfaces.
In one example, the server 320 may include a plurality of computers, such as a load-balancing server farm, that exchange information with different nodes of a network for the purpose of receiving, processing, and transmitting data from the computer system 312. The server may be configured similarly to the computer system 312, with a processor 330, memory 340, instructions 350, and data 360.
Illustratively, the data 360 of the server 320 may include information regarding the road conditions surrounding the vehicle. For example, the server 320 may receive, detect, store, update, and transmit information related to vehicle road conditions.
For example, the information on the condition of the road around the vehicle includes information on other vehicles around the vehicle and obstacle information.
Currently, the drivable area of an autonomous vehicle is generally detected by either a monocular vision method or a binocular vision method. In the monocular vision method, an image of the road environment is acquired by a monocular camera, and the drivable area in the image is detected by a pre-trained deep neural network; a plane assumption is then applied to the detected drivable area, i.e. the autonomous vehicle is assumed to be on a flat area with no ramp, and the drivable area in the image is converted from the two-dimensional pixel coordinate system into the three-dimensional coordinate system of the autonomous vehicle, thereby completing the spatial positioning of the drivable area. In the binocular vision method, the images output by the two cameras of a binocular camera are acquired separately so as to obtain a global disparity map of the road environment, the drivable area is detected from the disparity map, and the drivable area is then converted from the two-dimensional pixel coordinate system into the three-dimensional coordinate system of the autonomous vehicle according to the ranging characteristics of the binocular camera, thereby completing the spatial positioning of the drivable area.
However, in the monocular vision method, since a monocular camera cannot perceive distance, a plane assumption must be used when transforming the drivable area from the pixel coordinate system to the coordinate system of the autonomous vehicle, that is, the road surface on which the vehicle is located is assumed to be completely flat with no slope, so the positioning accuracy of the drivable area is low. In the binocular vision method, the positioning of the drivable area depends on the disparity map, and computing the global disparity map from the binocular camera is so computationally expensive that the autonomous vehicle cannot process it in real time, which introduces a safety risk while the vehicle is driving. Therefore, how to improve the real-time performance of the method for detecting the vehicle drivable area has become a problem that needs to be solved urgently.
In view of this, embodiments of the present application provide a method and an apparatus for detecting a vehicle travelable area. A binocular image of the vehicle traveling direction is acquired, and the boundaries of the vehicle travelable area in the left eye image and the right eye image are obtained from the binocular image; parallax information of the travelable area boundary in the binocular image is obtained from the travelable area boundary in the left eye image and that in the right eye image, and the position of the vehicle travelable area in the binocular image is obtained based on the parallax information. This detection method avoids computing pixel-by-pixel parallax over the whole binocular image to obtain a global disparity map: only the parallax information of the travelable area boundary needs to be calculated to locate the travelable area boundary points in the host vehicle coordinate system. Under a given accuracy requirement, the amount of computation can therefore be greatly reduced, and the real-time performance of the detection system of the autonomous vehicle is improved.
The following describes a method for detecting a vehicle travelable region in the embodiment of the present application in detail with reference to fig. 4 to 14.
FIG. 4 is a schematic diagram of a detection system of an autonomous vehicle provided by an embodiment of the application. The detection system 400 may be used to perform a method of detecting a travelable region of a vehicle, and the detection system 400 may include a sensing module 410, a travelable region detection module 420, a travelable region registration module 430, and a coordinate system conversion module 440.
The sensing module 410 may be configured to sense information about the road surface and the surrounding environment while the autonomous vehicle is running. The sensing module may include a binocular camera, which in turn includes a left eye camera and a right eye camera; the binocular camera may be used to sense environmental information around the vehicle so that a subsequent deep learning network can detect the drivable area. The left eye camera and the right eye camera are required to satisfy image frame synchronization, which may be realized by hardware or by software.
For example, to ensure that an on-board binocular camera can be used normally in an intelligent driving scenario, the baseline distance of the binocular camera may be greater than 30 cm, so that detection of objects at a distance of around 100 meters can be supported.
The travelable region detection module 420 may be used to enable detection of travelable regions in the pixel coordinate system; the module can be composed of a deep learning network and is used for detecting travelable areas in a left eye image and a right eye image; the input data of the travelable region detection module 420 may be image data collected by the left-eye camera and the right-eye camera included in the sensing module 410, and the output data may be coordinates of travelable region boundary points in the left-eye image and the right-eye image in a pixel coordinate system.
The travelable region registration module 430 can be used to perform at least the following five steps:
the first step is as follows: segmenting the boundary of the left-eye travelable area; in other words, in the pixel coordinate system, the extracted boundary points of the travelable region for the left-eye image are segmented according to the slope transformation condition, and the continuous boundary points of the travelable region for the left-eye image are segmented into N segments.
The second step is that: carrying out segmentation matching processing on the boundary point of the left-eye travelable area and the boundary point of the right-eye travelable area; in other words, in a pixel coordinate system, different matching strategies are adopted for different segments according to segmented travelable region boundary points to match the left-eye travelable region boundary points with the right-eye travelable region boundary points.
The third step: registration filtering processing of the travelable region; filtering the matched boundary points of the travelable region by a filtering algorithm under a pixel coordinate system so as to ensure the accuracy of the registration of the boundary points;
the fourth step: calculating the parallax of the boundary points of the travelable areas; in a pixel coordinate system, performing parallax calculation of corresponding travelable region boundary points according to the registered travelable region boundary points;
the fifth step: performing parallax sub-pixelation processing; in other words, the obtained parallax is sub-pixelized in a pixel coordinate system, so as to ensure that the coordinate positioning of the boundary point of the travelable region at a longer distance still ensures higher positioning accuracy.
Here, sub-pixel may refer to a process of subdividing between two adjacent pixels, which means that each pixel is divided into smaller units.
The coordinate system conversion module 440 may be configured to locate the travelable area boundary points in the X and Y directions. The X distance from a boundary point to the host vehicle may be calculated from the obtained parallax, converting the boundary point from the pixel coordinate system to the X coordinate of the host vehicle coordinate system by triangulation or by the least squares method; the Y distance from the boundary point to the host vehicle may then be calculated from the obtained X distance, converting the boundary point from the pixel coordinate system to the Y coordinate of the host vehicle coordinate system by triangulation.
Exemplarily, a schematic diagram of the coordinate transformation is shown in fig. 5: a point P(x, y) on the travelable area boundary may be converted from the pixel coordinates of the two-dimensional image into the three-dimensional coordinate system in which the autonomous vehicle is located, where the X direction may represent the forward direction of the autonomous vehicle and the Y direction may represent the direction to the left of the autonomous vehicle.
It should be understood that fig. 5 is an illustration of a coordinate system, and does not limit the directions in the coordinate system in any way.
Fig. 6 is a schematic flowchart of a method for detecting a vehicle travelable area according to an embodiment of the present application. The detection method shown in FIG. 6 may be performed by the vehicle shown in FIG. 1, or the autonomous driving system shown in FIG. 2, or the detection system shown in FIG. 4; the detection method shown in fig. 6 includes steps 510 to 530, which are described in detail below.
And step 510, acquiring a binocular image of the vehicle driving direction.
The binocular image may include a left eye image and a right eye image; for example, these may be the left and right two-dimensional images respectively acquired by two parallel cameras mounted at equal height on an autonomous vehicle, or the images acquired by the binocular camera shown in fig. 4.
For example, the binocular image may be an image of the road surface or the surroundings in the driving direction acquired by the autonomous vehicle, such as a road surface image or an image of obstacles and pedestrians near the vehicle.
And step 520, obtaining parallax information of the boundary of the drivable region in the binocular image according to the boundary of the drivable region in the left eye image and the boundary of the drivable region in the right eye image.
Parallax information describes the difference in the apparent position of the same object when it is observed from two viewpoints separated by a certain distance. For example, the difference in the horizontal direction between the positions of the same road travelable area acquired by the left eye camera and the right eye camera of the autonomous vehicle may be the parallax information.
Illustratively, the acquired left eye image and the acquired right eye image can be respectively input into a deep learning network trained in advance for identifying the travelable region in the image; the vehicle travelable areas in the left eye image and the right eye image can be identified through the pre-trained deep learning network.
And step 530, obtaining a vehicle travelable area in the binocular image based on the parallax information.
For example, the position of the vehicle travelable region in the binocular image may be obtained by triangulation based on parallax information of the travelable region boundary in the binocular image.
Further, in the embodiment of the application, when the drivable area boundary in the left eye image is matched with the drivable area boundary in the right eye image and the parallax information of the drivable area boundary in the binocular image is calculated, the drivable area boundary in either one of the binocular images may be processed in segments, and the segmented drivable area boundary may be matched segment by segment; this improves the accuracy of matching the drivable area boundary, so that the information about the vehicle drivable area in the road can be acquired more accurately.
In one possible implementation manner, the method for detecting the vehicle travelable region further comprises the steps of carrying out segmentation processing on a travelable region boundary in a first image in the binocular image; the step 520 of obtaining the parallax information of the boundary of the drivable region in the binocular image according to the boundary of the drivable region in the left eye image and the boundary of the drivable region in the right eye image may include performing drivable region boundary matching on a second image of the binocular image based on N boundary point line segments obtained through segmentation processing to obtain the parallax information, where N is an integer greater than or equal to 2.
The first image may refer to a left eye image in a binocular image, and the second image refers to a right eye image in the binocular image; alternatively, the first image may refer to a right eye image among binocular images, and the second image may refer to a left eye image among the binocular images.
For example, the foregoing performing the segmentation processing on the travelable region boundary in the first image of the binocular images, and performing the travelable region boundary matching in the second image of the binocular images may be performing the segmentation processing on the travelable region boundary in the left eye image of the binocular images, and performing the travelable region boundary matching in the right eye image based on N boundary point line segments in the left eye image obtained by the segmentation processing to obtain the parallax information; alternatively, segmentation processing may be performed on the travelable region boundary in the right eye image in the binocular image, and travelable region boundary matching may be performed in the left eye image based on N boundary point line segments in the right eye image obtained by the segmentation processing, so as to obtain the parallax information.
In the embodiment of the application, the N boundary point line segments obtained by segmentation can be divided, according to their slopes, into boundary point line segments with smaller slopes and boundary point line segments with larger slopes, so that the boundaries of the vehicle travelable area in the binocular image can be matched according to a corresponding matching strategy and the parallax information can be obtained from the matching result.
In a possible implementation manner, performing travelable region boundary matching on the second image based on N boundary point line segments obtained through segmentation processing to obtain disparity information includes: and matching the boundary of the drivable area of the second image according to the N boundary point line segments and a matching strategy to obtain parallax information, wherein the matching strategy is determined according to the slopes of the N boundary point line segments.
For example, for a first type of boundary point line segment of the N boundary point line segments, a first matching strategy of matching strategies may be adopted to perform travelable region boundary matching on the second image, where the first type of boundary point line segment may include travelable region boundaries of road edges and travelable region boundaries of other vehicle sides;
for a second type of boundary point line segments of the N boundary point line segments, a second matching strategy of the matching strategies may be adopted to perform travelable region boundary matching on the second image, wherein the distance between the boundary points of the second type of boundary point line segments and the vehicle is the same.
The first type of boundary point line segment may be a boundary point line segment with a larger slope, for example a road edge area or the side area of another vehicle; the second type of boundary point line segment may be a boundary point line segment with a smaller slope, for example the rear area of another vehicle. For boundary points with a larger slope, the travelable area boundaries extracted from the left eye image and the right eye image basically do not overlap; for boundary point line segments with a smaller slope, the travelable area boundaries extracted from the left eye image and the right eye image partially overlap.
Optionally, the matching policy may include a first matching policy and a second matching policy, where the first matching policy may refer to matching through a search region, the search region may refer to a region generated by using any one point in a first boundary point line segment as a center, and the first boundary point line segment is any one boundary point line segment in N boundary point line segments; the second matching strategy may refer to matching by a preset search step, where the preset search step is determined based on a boundary point disparity of one of the N boundary point line segments.
For example, for a boundary point line segment with a larger slope, one boundary point of the travelable area boundary segment in the first image (for example, the left eye image) of the binocular image may be taken as a template point, a search area may be generated in the second image (for example, the right eye image) centered on the boundary point in the same row as the template point, and the search area may be matched against the template point in the first image. The specific flow is shown in fig. 7 below.
For example, for a boundary point line segment with a small slope, since the boundary points extracted from the first image (e.g., the left eye image) may overlap with the boundary points extracted from the second image (e.g., the right eye image), the boundary points may be matched using an inflection-point parallax correction method. The specific flow is shown in fig. 7 below.
In the embodiment of the application, a segment-wise matching strategy based on the slope of the travelable area boundary is provided: the detected travelable area boundary is segmented according to its slope distribution, and different matching strategies are adopted for the boundary segments with larger slopes and those with smaller slopes, so that the matching accuracy of the travelable area boundary points can be improved.
Fig. 7 is a schematic flowchart of a method for detecting a vehicle travelable area according to an embodiment of the present application. The detection method shown in FIG. 7 may be performed by the vehicle shown in FIG. 1, or the autonomous driving system shown in FIG. 2, or the detection system shown in FIG. 4; the detection method shown in fig. 7 includes steps 601 to 614, which are described in detail below.
Step 601, starting; namely, the execution of the detection method of the vehicle travelable region is started.
Step 602, a left eye image is acquired.
Here, the left eye image may refer to an image of a road surface or surroundings acquired by one of the binocular cameras (e.g., a left eye camera).
Step 603, acquiring a right eye image.
Here, the right eye image may refer to an image of the road surface or surroundings acquired by the other camera of the binocular camera (e.g., the right eye camera).
It should be noted that, the above step 602 and step 603 may be executed simultaneously; alternatively, step 603 may be executed first, and then step 602 may be executed, which is not limited in this application.
And step 604, detecting the vehicle travelable area of the acquired left-eye image.
For example, a deep learning network for identifying travelable regions in an image may be trained in advance by training data; the vehicle travelable region in the left-eye image can be identified by the pre-trained deep learning network.
For example, a pre-trained deep learning network can be utilized to detect the travelable area in a pixel coordinate system; the input data may be the acquired left-eye image, and the output data may be a coordinate vector of the detected travelable region in a pixel coordinate system of the left-eye image; the coordinates of the travelable region under the pixel coordinate system can be located in the left-eye image based on the output coordinate vector.
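To make the output format concrete, the following Python sketch is an illustration only: it assumes the network's output has already been thresholded into a binary drivable-area mask, and the column-wise boundary extraction below is a simplification rather than the exact procedure of this application.

    import numpy as np

    def extract_boundary_points(drivable_mask):
        """drivable_mask: 2-D array, 1 where the pixel belongs to the drivable area.
        For each image column, take the topmost drivable pixel (smallest row index)
        as the travelable-area boundary point; returns an (m, 2) array of (u, v)."""
        points = []
        for u in range(drivable_mask.shape[1]):
            rows = np.flatnonzero(drivable_mask[:, u])
            if rows.size:
                points.append((u, int(rows.min())))  # boundary = upper edge of the drivable area
        return np.asarray(points, dtype=np.int32)

    # Usage with a toy 6 x 8 mask (1 = drivable):
    mask = np.zeros((6, 8), dtype=np.uint8)
    mask[3:, :] = 1        # flat road in the lower half of the image
    mask[2, 2:5] = 1       # a patch extending a little further ahead
    print(extract_boundary_points(mask))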
Step 605, vehicle travelable region detection is performed on the acquired right eye image.
Similarly, the travelable region in the right-eye image may be detected by using the deep learning network trained in advance in step 604.
Illustratively, the acquired left eye image and the acquired right eye image can be simultaneously input to a deep learning network trained in advance for detection; alternatively, the acquired left-eye image and right-eye image may be input to a deep learning network trained in advance in tandem, so that the coordinates of the travelable region in the left-eye image and the right-eye image may be detected.
And step 606, performing segmentation processing on the travelable area in the left-eye image.
For example, the boundary of the travelable region may be segmented according to different slope sizes of the boundary points of the travelable region detected in the left-eye image.
Illustratively, assume the coordinate vector of the vehicle travelable area boundary is P = {p_1(x_1, y_1), p_2(x_2, y_2), ..., p_n(x_n, y_n)}; then the slope at each point can be calculated by the following formula:

$$K_i = \frac{y_i - y_{i-1}}{x_i - x_{i-1}}$$

where K_i represents the slope at the i-th point; x_i and x_{i-1} represent the abscissas of the i-th and (i-1)-th points in the pixel coordinate system; and y_i and y_{i-1} represent the ordinates of the i-th and (i-1)-th points in the pixel coordinate system.
From the above formula, the slope vector K = {k_1, k_2, ..., k_n} of the points on the vehicle travelable area boundary can be obtained, and the inflection points of the travelable area boundary can be detected from the jumps in slope. For example, as shown in fig. 8, the inflection points on the vehicle travelable area boundary include point B, point C, point D, point E, point F and point G; the vehicle travelable area boundary can be segmented at these inflection points, and, as shown in fig. 8, is divided into segment AB, segment BC, segment CD, segment DE, segment EF, segment FG and segment GH.
For example, the boundary of the travelable region may be segmented to obtain N segments; the N segment boundary points can be classified into two categories according to the slope distribution of the segment, i.e., a boundary point line segment with a smaller slope and a boundary point line segment with a larger slope.
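Purely as an illustration of this segmentation step (the slope-jump and steepness thresholds below are assumed values for the example, not parameters specified by this application), a Python sketch might look like this:

    import numpy as np

    def segment_boundary(points, slope_jump=1.0, steep_thresh=1.0):
        """points: (n, 2) array of travelable-area boundary points (x_i, y_i) in the
        pixel coordinate system, ordered along the boundary.  Computes the slope K_i
        at each point, splits the boundary at inflection points (large slope jumps),
        and labels each segment 'steep' (road edge / vehicle side) or 'flat' (vehicle rear)."""
        points = np.asarray(points, dtype=float)
        if len(points) < 3:
            return [('flat', points)]
        dx = np.diff(points[:, 0])
        dy = np.diff(points[:, 1])
        k = np.where(dx != 0, dy / np.where(dx == 0, 1.0, dx), np.inf)  # slopes K_i
        inflections = np.flatnonzero(np.abs(np.diff(k)) > slope_jump) + 1
        segments, start = [], 0
        for end in list(inflections) + [len(points) - 1]:
            seg = points[start:end + 1]
            seg_slope = np.median(np.abs(k[start:end])) if end > start else 0.0
            segments.append(('steep' if seg_slope > steep_thresh else 'flat', seg))
            start = end
        return segments

    # Usage with a toy boundary: a flat stretch (vehicle rear) followed by a steep edge.
    pts = [[0, 50], [10, 50], [20, 50], [21, 40], [22, 30], [23, 20]]
    for label, seg in segment_boundary(pts):
        print(label, seg.tolist())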
For example, fig. 9 (a) shows a road surface image (e.g., a left eye image) acquired by one of the binocular cameras; as can be seen from the image shown in fig. 9 (a), the boundary of the vehicle travelable area includes point A, point B, point C and point D. If the detected travelable area boundary lies on a road edge or on the side of another vehicle, the slope of the corresponding travelable area boundary in the pixel coordinate system is relatively large, such as segment AB, segment CD, segment EF and segment GH on the travelable area boundary shown in fig. 8; if the detected travelable area boundary lies at the rear of another vehicle, the slope of the corresponding travelable area boundary is relatively small, such as segment BC, segment DE and segment FG shown in fig. 8.
And step 607, matching the boundary segments of the travelable areas.
Step 606 above describes segmenting the vehicle travelable area boundary detected in the left eye image, and dividing the resulting boundary point line segments, according to their slopes, into line segments with smaller slopes and line segments with larger slopes; step 607 matches these boundary point line segments of the left eye image in the right eye image.
For example, based on the segmentation result of the travelable area boundary of the left eye image, the travelable area boundary points obtained in the left eye image may be matched in the right eye image according to different matching strategies.
For example, a boundary point line segment with a large slope may correspond to a road edge or a vehicle side in the real scene, such as segment AB and segment CD shown in (a) in fig. 9; as shown in (b) in fig. 9, for boundary point line segments with a large slope (e.g. a road edge), the travelable area boundaries extracted from the left image and the right image basically do not overlap.
A first matching strategy: for a boundary point line segment with a larger slope, one boundary point of the travelable area boundary segment of the left eye image is taken as a template point, a search area is generated in the right eye image centered on the boundary point in the same row as the template point, and the search area is matched against the template point in the left eye image.
For example, shown in fig. 10 (a) is a left-eye image; fig. 10 (b) is a schematic diagram illustrating the generation of a search region in the right-eye image and the matching of the template points in the left-eye image.
Exemplarily, the search area may be matched by means of an eight-dimensional descriptor. First, 360° is divided equally into eight parts, i.e. 0°–45°, 45°–90°, 90°–135°, 135°–180°, 180°–225°, 225°–270°, 270°–315° and 315°–360°, and the eight regions represent eight angle ranges corresponding to the eight dimensions of the descriptor to be generated. The angles of all pixel points in a 5 × 5 neighborhood are counted, and when the angle of a pixel falls into a given region, that region's value is accumulated by 1 × range, yielding the value of each region and thus the 8-dimensional descriptor. The calculation can be expressed by the following formula:

$$S(\text{Angle}_j) = \sum_{\text{pixels whose Angle falls in region } j} \text{range}, \qquad j = 1, \dots, 8$$

where S denotes the descriptor, Angle denotes the corresponding one of the 8 angle regions, and range denotes the gradient magnitude (amplitude) corresponding to each point. With this formula, an eight-dimensional descriptor corresponding to each travelable area boundary point can be obtained, and the subsequent matching process is carried out on the basis of these descriptors.
For example, a search area of size 5 × 5 may be generated, similarity matching may be performed between the descriptor of the template point and the descriptor of each point in the search area, and the point that best matches the template point may be determined, thereby completing the matching of boundary point line segments with a larger slope.
The descriptor in the right eye image is generated in the region corresponding to the left eye image; for example, in the 5 × 5 neighborhood of the right eye image around the position corresponding to a boundary point in the left eye image, the gradient and angle of each point in the neighborhood are calculated using the following formulas:

$$\text{Angle} = \arctan\!\left(\frac{dy}{dx}\right), \qquad \text{range} = \sqrt{dx^2 + dy^2}$$

where dx represents the gradient of a pixel in the x direction, dy represents the gradient of the pixel in the y direction, Angle represents the angle value of the pixel, and range represents the gradient magnitude of the pixel.
It should be noted that the search area here is generated with a size of 5 × 5 for the eight-dimensional descriptor; the search area may also have other sizes, which is not limited in this application.
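As an illustrative sketch of this first matching strategy (the gradient operator, the cosine similarity measure and the border handling are assumptions made for the example, not requirements of this application):

    import numpy as np

    def descriptor_8d(img, u, v, half=2):
        """Build the 8-bin angle histogram, weighted by gradient magnitude ('range'),
        over the (2*half+1) x (2*half+1) neighborhood of pixel (u, v).
        img is a 2-D float array; (u, v) must lie at least half+1 pixels from the border."""
        patch = img[v - half - 1: v + half + 2, u - half - 1: u + half + 2].astype(float)
        dy, dx = np.gradient(patch)                        # gradients along y (rows) and x (cols)
        angle = (np.degrees(np.arctan2(dy, dx)) + 360.0) % 360.0
        mag = np.hypot(dx, dy)                             # 'range' in the text
        bins = (angle[1:-1, 1:-1] // 45).astype(int) % 8   # eight 45-degree angle regions
        desc = np.zeros(8)
        np.add.at(desc, bins.ravel(), mag[1:-1, 1:-1].ravel())
        return desc

    def match_in_search_area(left_img, right_img, template_pt, center_pt, half=2):
        """Compare the descriptor of the left-image template point with the descriptor
        of every point in the search area generated around center_pt (the right-image
        boundary point in the same row); return the best match and its similarity."""
        tu, tv = template_pt
        cu, cv = center_pt
        t = descriptor_8d(left_img, tu, tv, half)
        best, best_score = None, -np.inf
        for dv in range(-half, half + 1):
            for du in range(-half, half + 1):
                c = descriptor_8d(right_img, cu + du, cv + dv, half)
                score = float(np.dot(t, c) / (np.linalg.norm(t) * np.linalg.norm(c) + 1e-9))
                if score > best_score:
                    best, best_score = (cu + du, cv + dv), score
        return best, best_score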
A second matching strategy: for a boundary point line segment with a small slope, since the boundary points extracted from the right eye image partially overlap with those extracted from the left eye image, these boundary points may be matched using an inflection-point parallax correction method.
Specifically, take the BC segment in fig. 11 as an example: because the slope of this travelable area boundary segment is small, the travelable area boundaries detected in the left eye image and the right eye image may overlap within the segment. The segments adjacent to the BC segment, i.e. the segment to the left of point B and the segment to the right of point C, are boundary point line segments with a larger slope, and their boundary points can be matched by the first matching strategy; in the course of matching these adjacent segments, the parallax of the left eye image and the right eye image at point B and at point C is obtained. Within the BC segment, taking the parallaxes at point B and point C as the basis and the average of the two parallaxes as the search length, the boundary point segments with smaller slope in the left eye image and the right eye image can be matched.
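A minimal sketch of this second matching strategy, under the simplifying assumptions that each left-image boundary point is shifted by the average endpoint parallax and that a small acceptance tolerance (here 2 pixels, an arbitrary choice) decides the match:

    def match_low_slope_segment(left_seg, right_boundary_by_row, d_left, d_right, tol=2.0):
        """left_seg: (x, y) boundary points of a low-slope segment in the left image.
        right_boundary_by_row: dict mapping image row y to the list of x coordinates of
        right-image travelable-area boundary points in that row.
        d_left, d_right: parallaxes obtained at the segment's two end inflection points
        (e.g. points B and C).  Returns matched pairs ((xL, y), (xR, y))."""
        search_len = 0.5 * (d_left + d_right)     # average endpoint parallax as search length
        matches = []
        for xL, y in left_seg:
            candidates = right_boundary_by_row.get(y, [])
            if not candidates:
                continue
            expected = xL - search_len            # expected column of the matching right point
            xR = min(candidates, key=lambda x: abs(x - expected))
            if abs(xR - expected) <= tol:
                matches.append(((xL, y), (xR, y)))
        return matches

    # Usage: a three-point BC-like segment with endpoint parallaxes of 10 and 12 pixels.
    left_seg = [(100, 200), (101, 200), (102, 200)]
    right_rows = {200: [88, 89, 90, 91, 150]}
    print(match_low_slope_segment(left_seg, right_rows, 10, 12))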
It should be understood that the above boundary point matching by descriptor is also applicable to the boundary point line segment with small slope, for example, the boundary point BC line segment of the travelable region in the left-eye image and the right-eye image is matched.
It should be further understood that the above is exemplified by the process of matching the travelable region boundary in the left-eye image with the travelable region boundary in the right-eye image after the segmentation processing is performed on the travelable region boundary in the left-eye image; similarly, the travelable region boundary in the acquired right-eye image may also be processed in a segmented manner so as to be matched with the travelable region boundary in the left-eye image, which is not limited in this application.
In the embodiment of the application, the extracted travelable region boundary of the left-eye image is segmented, and different matching strategies are adopted for the travelable region boundary with the larger slope and the smaller slope to match with the travelable region boundary in the right-eye image, so that the matching accuracy of the travelable region boundary point can be improved.
And step 608, filtering the matching result.
The matching result filtering may refer to a process of filtering the matching result of the step 607, so as to eliminate the wrong matching point.
Illustratively, some of the matched boundary points may be false matches; therefore, the matched boundary point pairs can be filtered so as to ensure the matching accuracy of the travelable area boundary points.
Furthermore, because external factors such as lighting in a real scene affect the matching, filtering with a fixed threshold has low robustness; therefore, a dynamic threshold can be used for filtering.
For example, the matched boundary points may be sorted by their matching scores, abnormal matching points may be eliminated using a box-plot method, and the matches with higher scores may be retained as the boundary points for which the travelable area in the left eye image and the right eye image is finally matched successfully.
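As an illustration of box-plot based outlier rejection (the 1.5 × IQR rule is a common convention assumed here, not a value given by this application):

    import numpy as np

    def filter_matches_boxplot(matches, scores):
        """Keep only matched boundary-point pairs whose matching score is not a
        low-side outlier under the box-plot rule (below Q1 - 1.5 * IQR)."""
        scores = np.asarray(scores, dtype=float)
        q1, q3 = np.percentile(scores, [25, 75])
        lower = q1 - 1.5 * (q3 - q1)
        return [m for m, s in zip(matches, scores) if s >= lower]

    # Usage: the pair with score 0.05 is rejected as an abnormal match.
    matches = [('a', 'a1'), ('b', 'b1'), ('c', 'c1'), ('d', 'd1')]
    print(filter_matches_boxplot(matches, [0.92, 0.88, 0.90, 0.05]))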
And step 609, calculating the parallax of the boundary point of the travelable area.
For example, the parallax may be calculated from the matched boundary points, e.g. by calculating the coordinate difference, in the pixel coordinate system, between each pair of successfully matched boundary points in the left eye image and the right eye image.
And step 610, parallax filtering of boundary points of the travelable areas.
In the embodiment of the present application, the points obtained by the parallax calculation in step 609 may be discrete; these discrete points can be made continuous by the travelable area boundary point parallax filtering of step 610.
Illustratively, the boundary points are filtered by interpolation and compensation; the parallax filtering process can be divided into two parts, filtering and interpolation (i.e. continuation). First, the bottom of the image corresponds to the drivable area closest to the autonomous vehicle, and the parallax of the travelable area boundary points should decrease gradually from the bottom of the image towards the top; that is, the deeper the depth of field, the smaller the parallax of the travelable area boundary of the left eye image and the right eye image, and the parallax of a boundary point should never increase with depth. Erroneous parallaxes can be filtered according to this rule: if the parallax of a boundary point at a deeper depth of field increases, the point is probably a mismatched point and can be eliminated. Second, boundary points in the same row of the image are at the same distance from the autonomous vehicle (i.e. the depth of field is consistent, or the image row coordinate is consistent), so the parallaxes of boundary points in the same row of the left eye image and the right eye image should be consistent; a second round of filtering can be performed on this basis, and the filtered-out boundary points can then be made continuous by interpolation using the parallaxes of the adjacent boundary points.
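A minimal sketch of such parallax filtering, assuming the boundary-point disparities are supplied ordered from the image bottom upwards and that rejected points are filled by linear interpolation:

    import numpy as np

    def filter_and_interpolate_disparity(disparities):
        """disparities: matched boundary-point disparities ordered from the image
        bottom (nearest to the vehicle) towards the top.  Rejects points whose
        disparity increases with depth, then fills the rejected positions by linear
        interpolation over the neighbouring accepted points."""
        d = np.asarray(disparities, dtype=float)
        valid = np.ones(d.size, dtype=bool)
        running_min = d[0]
        for i in range(1, d.size):
            if d[i] > running_min:       # disparity must not grow as depth of field grows
                valid[i] = False
            else:
                running_min = d[i]
        idx = np.arange(d.size)
        out = d.copy()
        out[~valid] = np.interp(idx[~valid], idx[valid], d[valid])
        return out

    # Usage: the second 14.0 violates the bottom-to-top decreasing rule and is re-interpolated.
    print(filter_and_interpolate_disparity([20.0, 18.0, 14.0, 12.0, 14.0, 9.0]))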
Step 611, parallax sub-pixelation processing.
Here, sub-pixel may refer to a process of subdividing between two adjacent pixels, which means that each pixel is divided into smaller units.
Further, in the embodiment of the present application, in order to ensure accuracy in the calculation of the distance between the binocular images, the extracted boundary point parallax may be sub-pixelized.
Exemplarily, as shown in fig. 12, assume that point b is the coordinate of a matched boundary point, the gradient values of points a, b and c are denoted g_a, g_b and g_c respectively, and the coordinates of the three points a, b and c are denoted p_a(x_a, y_a), p_b(x_b, y_b) and p_c(x_c, y_c) respectively. The coordinates of the boundary point after sub-pixel processing can be calculated using the following formula:

$$M = g_a - g_c, \qquad N = g_a - 2g_b + g_c, \qquad \Delta y = \frac{M}{2N}, \qquad y_{sub} = y_b + \Delta y$$

where M and N are intermediate quantities of the sub-pixel processing, and Δy denotes the offset along the y axis of the image coordinate system.
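Assuming the three-point parabolic refinement reconstructed above — an assumption, since the application gives the formula only as a figure — a sketch of the sub-pixel step would be:

    def subpixel_offset(g_a, g_b, g_c):
        """Parabolic three-point refinement: sub-pixel offset of boundary point b
        along y, given the gradient magnitudes of the adjacent points a, b and c."""
        m = g_a - g_c
        n = g_a - 2.0 * g_b + g_c
        return 0.0 if n == 0 else m / (2.0 * n)

    # Usage: point b at row 120 with neighbouring gradient magnitudes 3.0, 5.0, 4.0;
    # the refined row is about 120.17, shifted slightly towards point c.
    print(120 + subpixel_offset(3.0, 5.0, 4.0))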
And step 612, positioning the boundary point of the travelable area in an X coordinate system of the automatic driving vehicle.
For example, the distance in the X direction of the boundary point in the vehicle coordinate system may be determined by triangulation based on the parallax of the determined boundary point of the travelable region.
Step 613, positioning the boundary point of the travelable area in the Y coordinate of the coordinate system of the autonomous vehicle.
For example, the distance of the boundary point in the Y direction in the vehicle coordinate system may be determined by triangulation based on the parallax of the determined boundary point of the travelable region.
Exemplarily, as shown in fig. 13, triangulation is used to obtain the distances of the boundary points in the X direction and the Y direction in the coordinate system where the autonomous vehicle is located, where f denotes the focal length, b denotes the baseline, and y denotes the offset in the image coordinate system.
From the similarity of triangles, i.e. the similarity of ΔPLR and ΔPL1R1, it follows that:

$$\frac{b - (u_L - u_R)}{b} = \frac{X - f}{X}$$

Rearranging gives the distance in the X direction:

$$X = \frac{f \cdot b}{u_L - u_R} = \frac{f \cdot b}{d}$$

where d = u_L − u_R denotes the parallax.
As shown in fig. 14, Y is the horizontal distance of the object from the camera and y is the pixel coordinate in the image; the distance in the Y direction is:

$$Y = \frac{y \cdot X}{f}$$
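Combining the two triangulation formulas, a sketch of the conversion (the numeric values in the usage line are made-up examples):

    def boundary_point_to_vehicle_coords(d, y_pixel, f, b):
        """Convert a travelable-area boundary point to the host-vehicle coordinate
        system: d is the (sub-pixel) disparity uL - uR, y_pixel the offset in the
        image coordinate system, f the focal length in pixels, b the baseline."""
        X = f * b / d          # forward (X-direction) distance
        Y = y_pixel * X / f    # lateral (Y-direction) distance
        return X, Y

    # Usage: disparity 8.5 px, image offset 40 px, f = 1000 px, baseline 0.35 m
    print(boundary_point_to_vehicle_coords(8.5, 40, 1000.0, 0.35))   # about (41.2, 1.6)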
in the embodiment of the application, the detection of the travelable area and the mapping of the travelable area from the pixel coordinate system to the own vehicle coordinate system can be realized in time through the steps without depending on the plane hypothesis and by a small calculation amount, so that the real-time performance of the autonomous vehicle on the detection of the vehicle travelable area is improved.
Fig. 15 is a schematic view of a specific product form to which the vehicle travelable region detection method according to the embodiment of the present application is applied.
The product form shown in fig. 15 may be a vehicle-mounted visual perception device, and the method for detecting the travelable area and positioning its spatial coordinates may be implemented by a software algorithm deployed on a computing node of the related device. The method mainly comprises three parts. The first part acquires the images: a left eye image and a right eye image are acquired by a left eye camera and a right eye camera that satisfy frame synchronization. The second part acquires the travelable area in each image: for example, the travelable area (Freespace) in the left eye image and the right eye image, or its boundary, can be output by a deep learning algorithm; the deep learning algorithm can be deployed on an AI chip, and Freespace can be output based on the parallel accelerated processing of multiple AI chips. The third part acquires the parallax of the travelable area boundary points, for example outputting the parallax based on serial processing. Finally, the location of the travelable area in the coordinate system of the autonomous vehicle can be obtained based on the parallax of the travelable area boundary, that is, the distances of the travelable area in the X direction and the Y direction are obtained from the boundary parallax.
It should be understood that the above illustrations are for the purpose of assisting persons skilled in the art in understanding the embodiments of the application, and are not intended to limit the embodiments of the application to the specific values or specific scenarios illustrated. It will be apparent to those skilled in the art from the foregoing description that various equivalent modifications or changes may be made, and such modifications or changes are intended to fall within the scope of the embodiments of the present application.
The method for detecting the vehicle travelable area according to the embodiments of the present application has been described in detail above with reference to fig. 1 to 15; the device embodiments of the present application are described in detail below with reference to fig. 16 and 17. It should be understood that the device for detecting the vehicle travelable area in the embodiments of the present application can execute the various detection methods of the foregoing embodiments; that is, for the specific working processes of the products described below, reference may be made to the corresponding processes in the foregoing method embodiments.
Fig. 16 is a schematic block diagram of a detection device of a vehicle travelable region according to an embodiment of the present application. It should be understood that the detection apparatus 700 may perform the travelable region detection method illustrated in fig. 6 to 15. The detection apparatus 700 includes: an acquisition unit 710 and a processing unit 720.
The acquiring unit 710 is configured to acquire binocular images of a vehicle driving direction, where the binocular images include a left eye image and a right eye image; the processing unit 720 is configured to obtain disparity information of a drivable area boundary in the binocular image according to the drivable area boundary in the left eye image and the drivable area boundary in the right eye image; and obtaining a vehicle travelable area in the binocular image based on the parallax information.
Optionally, as an embodiment, the processing unit 720 is further configured to:
performing segmentation processing on the boundary of the travelable region in the first image; and matching the boundary of the travelable region of the second image based on the N boundary point line segments obtained by the segmentation processing to obtain the parallax information, wherein N is an integer greater than or equal to 2.
Optionally, as an embodiment, the processing unit 720 is specifically configured to:
and matching the boundary of the travelable region of the second image according to the N boundary point line segments and a matching strategy to obtain the parallax information, wherein the matching strategy is determined according to the slopes of the N boundary point line segments.
Optionally, as an embodiment, the processing unit 720 is specifically configured to:
for a first type of boundary point line segments in the N boundary point line segments, performing travelable region boundary matching on the second image by adopting a first matching strategy in the matching strategies, wherein the first type of boundary point line segments comprise travelable region boundaries of road edges and travelable region boundaries of other vehicle sides;
and for a second type of boundary point line segments in the N boundary point line segments, performing travelable area boundary matching on the second image by adopting a second matching strategy in the matching strategies, wherein the distance between the boundary points in the second type of boundary point line segments and the vehicle is the same.
Optionally, as an embodiment, the matching policy includes a first matching policy and a second matching policy, where the first matching policy refers to matching through a search area, the search area refers to an area generated by taking any one of first boundary point line segments as a center, and the first boundary point line segment is any one of the N boundary point line segments; the second matching strategy is to match through a preset search step, and the preset search step is determined based on the boundary point parallax of one boundary point line segment in the N boundary point line segments.
Optionally, as an embodiment, the N boundary point line segments are determined according to an inflection point of a travelable region boundary in the first image.
The detection device 700 is embodied as a functional unit. The term "unit" herein may be implemented in software and/or hardware, and is not particularly limited thereto.
For example, a "unit" may be a software program, a hardware circuit, or a combination of both that implement the above-described functions. The hardware circuitry may include an Application Specific Integrated Circuit (ASIC), an electronic circuit, a processor (e.g., a shared processor, a dedicated processor, or a group of processors) and memory that execute one or more software or firmware programs, a combinational logic circuit, and/or other suitable components that support the described functionality.
Accordingly, the units of the respective examples described in the embodiments of the present application can be realized in electronic hardware, or a combination of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
Fig. 17 is a schematic hardware configuration diagram of a detection device of a vehicle travelable region according to an embodiment of the present application.
As shown in fig. 17, the detection apparatus 800 (the detection apparatus 800 may be a computer device) includes a memory 801, a processor 802, a communication interface 803, and a bus 804. The memory 801, the processor 802, and the communication interface 803 are communicatively connected to each other via a bus 804.
The memory 801 may be a Read Only Memory (ROM), a static memory device, a dynamic memory device, or a Random Access Memory (RAM). The memory 801 may store a program, and when the program stored in the memory 801 is executed by the processor 802, the processor 802 is configured to perform the steps of the method for detecting a vehicle travelable region according to the embodiment of the present application, for example, the steps shown in fig. 6 to 15.
It should be understood that the vehicle travelable area detection device shown in the embodiment of the present application may be a server, for example, a server in the cloud, or may also be a chip configured in the server in the cloud.
The processor 802 may be a general Central Processing Unit (CPU), a microprocessor, an Application Specific Integrated Circuit (ASIC), or one or more integrated circuits, and is configured to execute related programs to implement the method for detecting a vehicle driving area according to the embodiment of the present application.
The processor 802 may also be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the method for detecting the vehicle driving range of the present application may be implemented by an integrated logic circuit of hardware in the processor 802 or instructions in the form of software.
The processor 802 may also be a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and may implement or perform the methods, steps and logic block diagrams disclosed in the embodiments of the present application. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the methods disclosed in connection with the embodiments of the present application may be embodied directly in a hardware decoding processor, or in a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium well known in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory 801, and the processor 802 reads the information in the memory 801 and, in combination with its hardware, performs the functions that need to be performed by the units included in the device for detecting the vehicle travelable area shown in fig. 16 of the embodiments of the present application, or performs the method for detecting the vehicle travelable area shown in fig. 6 to 15 of the method embodiments of the present application.
The communication interface 803 enables communication between the detection apparatus 800 and other devices or communication networks using transceiver means such as, but not limited to, transceivers.
Bus 804 may include a pathway to transfer information between various components of detection apparatus 800 (e.g., memory 801, processor 802, communication interface 803).
It should be noted that although the detection apparatus 800 described above shows only a memory, a processor, and a communication interface, in a specific implementation, those skilled in the art will appreciate that the detection apparatus 800 may also include other components necessary to achieve normal operation. Meanwhile, it should be understood by those skilled in the art that the above-mentioned detection apparatus 800 may also include hardware components for implementing other additional functions according to specific needs.
Furthermore, those skilled in the art will appreciate that the above-described detection apparatus 800 may also include only those components necessary to implement the embodiments of the present application, and need not include all of the components shown in FIG. 17.
It is to be understood that the above description is intended to assist those skilled in the art in understanding the embodiments of the present application and is not intended to limit the embodiments of the present application to the particular values or particular scenarios illustrated. It will be apparent to those skilled in the art from the foregoing description that various equivalent modifications or changes may be made, and such modifications or changes are intended to fall within the scope of the embodiments of the present application.
It should be understood that the term "and/or" herein is merely one type of association relationship that describes an associated object, meaning that three relationships may exist, e.g., a and/or B may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the character "/" herein generally indicates that the former and latter related objects are in an "or" relationship.
It should be understood that, in the various embodiments of the present application, the sequence numbers of the above-mentioned processes do not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional modules in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions may be stored in a computer-readable storage medium if they are implemented in the form of software functional units and sold or used as separate products. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a read-only memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (14)

  1. A method for detecting a vehicle travelable area, characterized by comprising:
    acquiring binocular images of the driving direction of a vehicle, wherein the binocular images comprise a left eye image and a right eye image;
    obtaining parallax information of the drivable area boundary in the binocular image according to the drivable area boundary in the left eye image and the drivable area boundary in the right eye image;
    and obtaining a vehicle travelable area in the binocular image based on the parallax information.
  2. The detection method of claim 1, further comprising:
    performing segmentation processing on the boundary of the travelable region in the first image;
    the obtaining of the parallax information of the drivable area boundary in the binocular image according to the drivable area boundary in the left eye image and the drivable area boundary in the right eye image includes:
    performing travelable region boundary matching on the second image based on the N boundary point line segments obtained by the segmentation processing to obtain the parallax information,
    the first image is any one of the binocular images, the second image is another image different from the first image, and N is an integer greater than or equal to 2.
  3. The detection method according to claim 2, wherein the performing travelable region boundary matching on the second image based on the N boundary point line segments obtained by the segmentation processing to obtain the parallax information includes:
    and matching the boundary of the travelable region of the second image according to the N boundary point line segments and a matching strategy to obtain the parallax information, wherein the matching strategy is determined according to the slopes of the N boundary point line segments.
  4. The detection method according to claim 3, wherein the performing travelable region boundary matching on the second image according to the N boundary point line segments and a matching strategy to obtain the parallax information comprises:
    for a first type of boundary point line segments among the N boundary point line segments, performing travelable region boundary matching on the second image by using a first matching strategy in the matching strategy, wherein the first type of boundary point line segments comprises travelable region boundaries along road edges and travelable region boundaries along the sides of other vehicles; and
    for a second type of boundary point line segments among the N boundary point line segments, performing travelable region boundary matching on the second image by using a second matching strategy in the matching strategy, wherein the boundary points in the second type of boundary point line segments are all at the same distance from the vehicle.
  5. The detection method according to claim 3 or 4, wherein the matching strategy comprises a first matching strategy and a second matching strategy, wherein the first matching strategy refers to matching through a search region, the search region is a region generated by taking any point in a first boundary point line segment as a center, and the first boundary point line segment is any one of the N boundary point line segments; and the second matching strategy refers to matching through a preset search step, wherein the preset search step is determined based on a boundary point parallax of one of the N boundary point line segments.
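
The two strategies of claims 4 and 5 can be pictured as follows: the first strategy scores every candidate inside a small search region centred on the point to be matched, while the second strategy only probes a few positions along the same image row, spaced by a preset step derived from the boundary point parallax of an already matched segment. The sketch below is an interpretation under those assumptions; the patch-difference cost, the window sizes, the step rule and the absence of image-border handling are all simplifications for illustration.

    import numpy as np

    def patch_cost(left_img, right_img, pt_left, pt_right, half=3):
        """Sum of absolute differences between two small grayscale patches
        (points are assumed to lie far enough from the image borders)."""
        (ul, vl), (ur, vr) = pt_left, pt_right
        a = left_img[vl - half:vl + half + 1, ul - half:ul + half + 1].astype(float)
        b = right_img[vr - half:vr + half + 1, ur - half:ur + half + 1].astype(float)
        return float(np.abs(a - b).sum())

    def match_with_search_region(left_img, right_img, pt, max_disp=64, v_slack=1):
        """First matching strategy: test every candidate in a region centred on pt."""
        u, v = pt
        best, best_cost = None, float('inf')
        for dv in range(-v_slack, v_slack + 1):              # small vertical slack
            for d in range(0, max_disp + 1):                 # candidate disparities
                cand = (u - d, v + dv)
                cost = patch_cost(left_img, right_img, pt, cand)
                if cost < best_cost:
                    best, best_cost = cand, cost
        return best

    def match_with_preset_step(left_img, right_img, pt, ref_disparity, n_steps=4):
        """Second matching strategy: probe a few positions on the same row, spaced
        by a preset step derived from the boundary point parallax (ref_disparity)
        of a previously matched boundary point line segment."""
        u, v = pt
        step = max(1, int(round(ref_disparity / n_steps)))
        candidates = [(u - d, v) for d in range(0, int(ref_disparity) + step + 1, step)]
        costs = [patch_cost(left_img, right_img, pt, c) for c in candidates]
        return candidates[int(np.argmin(costs))]

Because the stepped search evaluates far fewer candidates per boundary point than the exhaustive region search, it is presumably reserved for segments whose points lie at the same distance from the vehicle, since those points share nearly the same disparity.
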
  6. The detection method according to any one of claims 2 to 5, wherein the N boundary point line segments are determined from inflection points of a travelable region boundary in the first image.
  7. A vehicle travelable region detection apparatus, characterized by comprising:
    an acquisition unit, configured to acquire binocular images of the driving direction of a vehicle, wherein the binocular images comprise a left eye image and a right eye image; and
    a processing unit, configured to obtain parallax information of a travelable region boundary in the binocular images according to the travelable region boundary in the left eye image and the travelable region boundary in the right eye image, and to obtain a vehicle travelable region in the binocular images based on the parallax information.
  8. The detection apparatus according to claim 7, wherein the processing unit is further configured to:
    perform segmentation processing on a travelable region boundary in a first image; and
    perform travelable region boundary matching on a second image based on N boundary point line segments obtained by the segmentation processing, to obtain the parallax information,
    wherein the first image is either one of the binocular images, the second image is the other one of the binocular images, and N is an integer greater than or equal to 2.
  9. The detection apparatus according to claim 8, wherein the processing unit is specifically configured to:
    perform travelable region boundary matching on the second image according to the N boundary point line segments and a matching strategy, to obtain the parallax information, wherein the matching strategy is determined according to slopes of the N boundary point line segments.
  10. The detection apparatus according to claim 9, wherein the processing unit is specifically configured to:
    for a first type of boundary point line segments among the N boundary point line segments, perform travelable region boundary matching on the second image by using a first matching strategy in the matching strategy, wherein the first type of boundary point line segments comprises travelable region boundaries along road edges and travelable region boundaries along the sides of other vehicles; and
    for a second type of boundary point line segments among the N boundary point line segments, perform travelable region boundary matching on the second image by using a second matching strategy in the matching strategy, wherein the boundary points in the second type of boundary point line segments are all at the same distance from the vehicle.
  11. The detection apparatus according to claim 9 or 10, wherein the matching strategy comprises a first matching strategy and a second matching strategy, wherein the first matching strategy refers to matching through a search region, the search region is a region generated by taking any point in a first boundary point line segment as a center, and the first boundary point line segment is any one of the N boundary point line segments; and the second matching strategy refers to matching through a preset search step, wherein the preset search step is determined based on a boundary point parallax of one of the N boundary point line segments.
  12. The detection apparatus according to any one of claims 8 to 11, wherein the N boundary point line segments are determined from inflection points of a travelable region boundary in the first image.
  13. A vehicle travelable region detection apparatus, comprising at least one processor and a memory, wherein the at least one processor is coupled to the memory and is configured to read and execute instructions from the memory to carry out the detection method according to any one of claims 1 to 6.
  14. A computer-readable medium, characterized in that the computer-readable medium stores program code which, when run on a computer, causes the computer to carry out the detection method according to any one of claims 1 to 6.
CN202080093411.3A 2020-02-13 2020-02-13 Method and device for detecting vehicle travelable region Pending CN114981138A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/075104 WO2021159397A1 (en) 2020-02-13 2020-02-13 Vehicle travelable region detection method and detection device

Publications (1)

Publication Number Publication Date
CN114981138A (en) 2022-08-30

Family

ID=77292613

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202080093411.3A Pending CN114981138A (en) 2020-02-13 2020-02-13 Method and device for detecting vehicle travelable region

Country Status (2)

Country Link
CN (1) CN114981138A (en)
WO (1) WO2021159397A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113715907B (en) * 2021-09-27 2023-02-28 郑州新大方重工科技有限公司 Attitude adjusting method and automatic driving method suitable for wheeled equipment

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101410872A (en) * 2006-03-28 2009-04-15 株式会社博思科 Road video image analyzing device and road video image analyzing method
US20140071240A1 (en) * 2012-09-11 2014-03-13 Automotive Research & Testing Center Free space detection system and method for a vehicle using stereo vision
CN105313892A (en) * 2014-06-16 2016-02-10 现代摩比斯株式会社 Safe driving guiding system and method thereof
CN105550665A (en) * 2016-01-15 2016-05-04 北京理工大学 Method for detecting pilotless automobile through area based on binocular vision
US20160253575A1 (en) * 2013-10-07 2016-09-01 Hitachi Automotive Systems, Ltd. Object Detection Device and Vehicle Using Same
CN106303501A (en) * 2016-08-23 2017-01-04 深圳市捷视飞通科技股份有限公司 Stereo-picture reconstructing method based on image sparse characteristic matching and device
CN107358168A (en) * 2017-06-21 2017-11-17 海信集团有限公司 A kind of detection method and device in vehicle wheeled region, vehicle electronic device

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR3056531B1 (en) * 2016-09-29 2019-07-12 Valeo Schalter Und Sensoren Gmbh OBSTACLE DETECTION FOR MOTOR VEHICLE
CN107909036B (en) * 2017-11-16 2020-06-23 海信集团有限公司 Road detection method and device based on disparity map

Also Published As

Publication number Publication date
WO2021159397A1 (en) 2021-08-19

Similar Documents

Publication Publication Date Title
CN112230642B (en) Road travelable area reasoning method and device
JP2023508114A (en) AUTOMATED DRIVING METHOD, RELATED DEVICE, AND COMPUTER-READABLE STORAGE MEDIUM
CN110930323B (en) Method and device for removing reflection of image
CN112543877B (en) Positioning method and positioning device
US20230048680A1 (en) Method and apparatus for passing through barrier gate crossbar by vehicle
WO2022051951A1 (en) Lane line detection method, related device, and computer readable storage medium
CN115042821B (en) Vehicle control method, vehicle control device, vehicle and storage medium
EP4307251A1 (en) Mapping method, vehicle, computer readable storage medium, and chip
CN112810603B (en) Positioning method and related product
CN114255275A (en) Map construction method and computing device
CN114257712A (en) Method and device for controlling light supplementing time of camera module
CN115100630B (en) Obstacle detection method, obstacle detection device, vehicle, medium and chip
CN115205311B (en) Image processing method, device, vehicle, medium and chip
WO2021159397A1 (en) Vehicle travelable region detection method and detection device
CN115205848A (en) Target detection method, target detection device, vehicle, storage medium and chip
WO2021110166A1 (en) Road structure detection method and device
CN115508841A (en) Road edge detection method and device
CN114092898A (en) Target object sensing method and device
CN111775962B (en) Method and device for determining automatic driving strategy
CN115082886B (en) Target detection method, device, storage medium, chip and vehicle
CN111845786B (en) Method and device for determining automatic driving strategy
CN115063639B (en) Model generation method, image semantic segmentation device, vehicle and medium
WO2022061725A1 (en) Traffic element observation method and apparatus
CN114616153B (en) Method and control device for preventing collision
CN115139946B (en) Vehicle falling water detection method, vehicle, computer readable storage medium and chip

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination