WO2021233154A1 - Drivable area detection method, apparatus, device, and storage medium - Google Patents

Drivable area detection method, apparatus, device, and storage medium

Info

Publication number
WO2021233154A1
WO2021233154A1 · PCT/CN2021/092822 · CN2021092822W
Authority
WO
WIPO (PCT)
Prior art keywords
vehicle
data
roi
parameter
driving
Prior art date
Application number
PCT/CN2021/092822
Other languages
English (en)
French (fr)
Inventor
Zeng Zhou (曾洲)
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd. (华为技术有限公司)
Publication of WO2021233154A1

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 — Arrangements for image or video recognition or understanding
    • G06V 10/20 — Image preprocessing
    • G06V 10/24 — Aligning, centring, orientation detection or correction of the image

Definitions

  • This application relates to the field of smart vehicles, and in particular to a method, device, equipment and storage medium for detecting a drivable area.
  • the drivable area detection plays a very important role.
  • detection based on the drivable area can reduce the false detection rate of target object detection, and can also be used to assist in ranging of target objects.
  • the drivable area detection can effectively detect such objects, which can improve the perception capability of the automatic driving system.
  • the main drivable area detection methods include: a drivable area detection method based on monocular vision and a drivable area detection method based on binocular vision.
  • the drivable area detection method based on binocular vision can be applied to more complicated driving scenes, while the effective detection distance of the drivable area detection method based on monocular vision is relatively short. Therefore, in the related art, the binocular vision-based drivable area detection method usually adopts a higher resolution in order to achieve long-distance and large-range detection.
  • the region of interest (ROI) parameters in the binocular vision-based drivable area detection method of the related art are fixed; that is to say, regardless of the road driving scene, the above-mentioned binocular vision-based drivable area detection method requires large computing resources. Therefore, the above-mentioned method incurs high computational energy consumption in any road driving scene.
  • the embodiments of the present application provide a method, device, equipment, and storage medium for detecting a drivable area, which can not only save computing resources, but also improve local detection accuracy.
  • an embodiment of the present application provides a method for detecting a drivable area, including:
  • the data of the vehicle's drivable area at the next vehicle moment is obtained according to the target ROI parameters.
  • if the driving scene of the vehicle meets a target ROI parameter switching condition, the current ROI parameters of the vehicle are adjusted to the target ROI parameters corresponding to that switching condition, and drivable area detection is then performed according to the target ROI parameters. It can be seen that in the embodiments of the present application, different ROI parameters can be used for different road driving scenarios, which helps reduce computational energy consumption, thereby not only saving computing resources but also improving local detection accuracy.
  • the target ROI parameters include at least one of the following: image preprocessing parameters, point cloud generation parameters, or drivable area generation parameters;
  • the image preprocessing parameters include at least one of the following: the size parameter of the image ROI, the position parameter of the image ROI, or the number of image zoom layers;
  • the point cloud generation parameters include at least one of the following: a support point grid step parameter, a support point sparsity parameter, or a support point distribution method parameter;
  • the drivable area generation parameters include: occupancy grid resolution parameters.
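The three parameter groups above can be pictured as a single configuration object. The following Python sketch is purely illustrative: the class name, field names, types, and default values are assumptions for this sketch, not part of the disclosed method.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class RoiParameters:
    """Hypothetical container for the three ROI parameter groups."""
    # image preprocessing parameters
    roi_size: Tuple[int, int] = (1280, 720)      # image ROI size (width, height) in pixels
    roi_position: Tuple[int, int] = (0, 0)       # image ROI top-left position
    zoom_layers: int = 3                         # number of image zoom (pyramid) layers
    # point cloud generation parameters
    support_grid_step: Tuple[int, int] = (8, 8)  # support point grid step (width, height)
    support_sparsity: float = 1.0                # fraction of grid points kept (1.0 = dense)
    support_distribution: str = "uniform"        # "uniform" or "object_centered"
    # drivable area generation parameters
    occupancy_grid_resolution: float = 0.2       # occupancy grid cell size in metres

# e.g. a long-range profile might shrink the image ROI but densify the support grid
long_range_params = RoiParameters(roi_size=(640, 360), support_grid_step=(4, 4))
```

Switching scenes then amounts to swapping one such object for another, rather than reconfiguring each pipeline stage separately.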
  • the ROI parameter switching condition includes any one of the following:
  • the preset ROI parameter switching condition of the congested road driving scene includes: the vehicle is driving in the current driving lane at a driving speed less than a first preset speed along the driving direction of the previous vehicle moment, and there is an obstacle within a first preset distance from the vehicle on the current driving lane;
  • the preset ROI parameter switching condition of the narrow-space uphill/downhill road driving scene includes: the vehicle is driving in the current driving lane at a driving speed less than the first preset speed along the driving direction of the previous vehicle moment, the accelerator pedal state and brake pedal state of the vehicle change intermittently, and the vehicle has moved a distance greater than a second preset distance in the direction perpendicular to the horizontal ground.
  • the preset ROI parameter switching conditions of the highway driving scene include any of the following: the preset ROI parameter switching conditions of the first sub-scene, the preset ROI parameter switching conditions of the second sub-scene, The preset ROI parameter switching condition of the third sub-scene, or the preset ROI parameter switching condition of the fourth sub-scene;
  • the preset ROI parameter switching condition of the first sub-scene includes: the vehicle is driving in the current driving lane at a driving speed greater than a second preset speed along the driving direction of the previous vehicle moment, and there is no obstacle within a third preset distance from the vehicle on the adjacent lanes of the current driving lane;
  • the preset ROI parameter switching condition of the second sub-scene includes: the vehicle is driving in the current driving lane at a driving speed greater than the second preset speed along the driving direction of the previous vehicle moment, and there is an obstacle within the third preset distance from the vehicle on an adjacent lane of the current driving lane;
  • the preset ROI parameter switching condition of the third sub-scene includes: the vehicle is driving in the current driving lane at a driving speed greater than the second preset speed along the driving direction of the previous vehicle moment, and there is an obstacle within the third preset distance from the vehicle on the current driving lane;
  • the preset ROI parameter switching condition of the fourth sub-scene includes: the vehicle is driving in the current driving lane at a driving speed greater than the second preset speed along a driving direction different from that of the previous vehicle moment.
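As an illustration only, the scene conditions above can be folded into a single decision routine. The sketch below is a hypothetical reading of the claims: the threshold values follow the examples given later in the description (20 km/h, 40 km/h, 15 m, 0.5 m), while `d_far` and all function and argument names are invented for this sketch.

```python
def classify_driving_scene(speed_kmh, same_direction, obstacle_in_lane_m,
                           obstacle_in_adjacent_lane_m, pedals_intermittent,
                           vertical_motion_m,
                           v_low=20.0, v_high=40.0,
                           d_near=15.0, d_vertical=0.5, d_far=150.0):
    """Return the scene whose switching condition is met, or 'no_switch'.
    Obstacle distances are None when no obstacle is detected."""
    if (speed_kmh < v_low and same_direction and pedals_intermittent
            and vertical_motion_m > d_vertical):
        return "narrow_space_ramp"            # uphill/downhill in a narrow space
    if (speed_kmh < v_low and same_direction
            and obstacle_in_lane_m is not None and obstacle_in_lane_m < d_near):
        return "congested_road"
    if speed_kmh > v_high:
        if not same_direction:
            return "highway_sub4"             # driving direction has changed
        if obstacle_in_lane_m is not None and obstacle_in_lane_m < d_far:
            return "highway_sub3"             # obstacle ahead in own lane
        if (obstacle_in_adjacent_lane_m is not None
                and obstacle_in_adjacent_lane_m < d_far):
            return "highway_sub2"             # obstacle in an adjacent lane
        return "highway_sub1"                 # clear road ahead
    return "no_switch"                        # keep the current ROI parameters
```

Each returned label would map to one preset target ROI parameter set; returning `"no_switch"` keeps the current parameters unchanged.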
  • obtaining the data of the drivable area of the vehicle at the next vehicle moment according to the target ROI parameters includes:
  • obtaining the driving state estimation data and point cloud data of the vehicle at the next vehicle moment according to the target ROI parameters, the driving state estimation data and drivable area data of the vehicle at the current vehicle moment, and the binocular image data of the vehicle at the next vehicle moment;
  • obtaining the drivable area data of the vehicle at the next vehicle moment according to the driving state estimation data and point cloud data of the vehicle at the next vehicle moment.
  • when the target ROI parameters include image preprocessing parameters and point cloud generation parameters, obtaining the driving state estimation data and point cloud data of the vehicle at the next vehicle moment according to the target ROI parameters, the driving state estimation data and drivable area data of the vehicle at the current vehicle moment, and the binocular image data of the vehicle at the next vehicle moment includes:
  • performing point cloud generation processing on the image processing data and the driving state estimation data of the vehicle at the next vehicle moment to obtain the point cloud data of the vehicle at the next vehicle moment.
  • the target ROI parameter further includes: a drivable area generation parameter
  • obtaining the drivable area data of the vehicle at the next vehicle moment according to the driving state estimation data and point cloud data of the vehicle at the next vehicle moment includes:
  • performing drivable area generation processing on the driving state estimation data and point cloud data of the vehicle at the next vehicle moment to obtain the drivable area data of the vehicle at the next vehicle moment.
  • before obtaining the data of the drivable area of the vehicle at the next vehicle moment according to the target ROI parameters, the method further includes: projecting the drivable area data of the vehicle at the current vehicle moment into the ROI corresponding to the target ROI parameters, so that at the next vehicle moment not only can computational energy consumption be reduced, but the drivable area can also be detected accurately.
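The projection step described above — carrying the current drivable area data into the ROI selected by the target ROI parameters — can be sketched in image coordinates as follows. The representation of the area as pixel points and the helper name are assumptions for this sketch.

```python
def project_to_roi(points, roi_origin, roi_size):
    """Shift drivable-area points (pixel coordinates in the full image) into a
    new ROI's local frame, dropping points that fall outside it."""
    ox, oy = roi_origin
    w, h = roi_size
    projected = []
    for x, y in points:
        lx, ly = x - ox, y - oy            # translate into ROI-local coordinates
        if 0 <= lx < w and 0 <= ly < h:    # keep only points covered by the new ROI
            projected.append((lx, ly))
    return projected

# a point at (100, 100) lands at (50, 50) inside an ROI anchored at (50, 50);
# a point at (10, 10) falls outside that ROI and is dropped
kept = project_to_roi([(100, 100), (10, 10)], roi_origin=(50, 50), roi_size=(200, 200))
```

Carrying the previous result forward this way means detection at the next moment starts from a prior rather than from scratch, which is what allows the smaller ROI to stay accurate.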
  • the driving state data includes at least one of the following: driving speed, driving direction, accelerator pedal state, and brake pedal state.
  • an embodiment of the present application provides a device for detecting a drivable area, including:
  • the first acquisition module is used to acquire the driving state data and the drivable area data of the vehicle at the current time of the vehicle;
  • the judgment module is used for judging whether the driving scene of the vehicle meets the ROI parameter switching condition of the region of interest based on the driving state data and the driving area data;
  • the adjustment module is used to adjust the current ROI parameter of the vehicle to the target ROI parameter if the driving scene of the vehicle meets the ROI parameter switching condition;
  • the second acquisition module is used to acquire the data of the drivable area of the vehicle at the next vehicle moment according to the target ROI parameters.
  • the target ROI parameters include at least one of the following: image preprocessing parameters, point cloud generation parameters, or drivable area generation parameters;
  • the image preprocessing parameters include at least one of the following: the size parameter of the image ROI, the position parameter of the image ROI, or the number of image zoom layers;
  • the point cloud generation parameters include at least one of the following: a support point grid step parameter, a support point sparsity parameter, or a support point distribution method parameter;
  • the drivable area generation parameters include: occupancy grid resolution parameters.
  • the ROI parameter switching condition includes any one of the following:
  • the preset ROI parameter switching condition of the congested road driving scene includes: the vehicle is driving in the current driving lane at a driving speed less than a first preset speed along the driving direction of the previous vehicle moment, and there is an obstacle within a first preset distance from the vehicle on the current driving lane;
  • the preset ROI parameter switching condition of the narrow-space uphill/downhill road driving scene includes: the vehicle is driving in the current driving lane at a driving speed less than the first preset speed along the driving direction of the previous vehicle moment, the accelerator pedal state and brake pedal state of the vehicle change intermittently, and the vehicle has moved a distance greater than a second preset distance in the direction perpendicular to the horizontal ground.
  • the preset ROI parameter switching conditions of the highway driving scene include any of the following: the preset ROI parameter switching conditions of the first sub-scene, the preset ROI parameter switching conditions of the second sub-scene, The preset ROI parameter switching condition of the third sub-scene, or the preset ROI parameter switching condition of the fourth sub-scene;
  • the preset ROI parameter switching condition of the first sub-scene includes: the vehicle is driving in the current driving lane at a driving speed greater than a second preset speed along the driving direction of the previous vehicle moment, and there is no obstacle within a third preset distance from the vehicle on the adjacent lanes of the current driving lane;
  • the preset ROI parameter switching condition of the second sub-scene includes: the vehicle is driving in the current driving lane at a driving speed greater than the second preset speed along the driving direction of the previous vehicle moment, and there is an obstacle within the third preset distance from the vehicle on an adjacent lane of the current driving lane;
  • the preset ROI parameter switching condition of the third sub-scene includes: the vehicle is driving in the current driving lane at a driving speed greater than the second preset speed along the driving direction of the previous vehicle moment, and there is an obstacle within the third preset distance from the vehicle on the current driving lane;
  • the preset ROI parameter switching condition of the fourth sub-scene includes: the vehicle is driving in the current driving lane at a driving speed greater than the second preset speed along a driving direction different from that of the previous vehicle moment.
  • the second acquisition module includes:
  • the first acquisition unit is used to obtain the driving state estimation data and point cloud data of the vehicle at the next vehicle moment according to the target ROI parameters, the driving state estimation data and drivable area data of the vehicle at the current vehicle moment, and the binocular image data of the vehicle at the next vehicle moment;
  • the second acquiring unit is used to acquire the data of the drivable area of the vehicle at the next vehicle time according to the estimated data and point cloud data of the driving state of the vehicle at the next vehicle time.
  • the first acquiring unit is specifically configured to:
  • point cloud generation processing is performed on the image processing data and the driving state estimation data of the vehicle at the next vehicle time to obtain the point cloud data of the vehicle at the next vehicle time.
  • the target ROI parameter further includes: a drivable area generation parameter
  • the second acquiring unit is specifically configured to:
  • performing drivable area generation processing on the driving state estimation data and point cloud data of the vehicle at the next vehicle moment to obtain the drivable area data of the vehicle at the next vehicle moment.
  • the device further includes:
  • the projection module is used to project the travelable area data of the vehicle at the current vehicle moment to the ROI corresponding to the target ROI parameter.
  • the driving state data includes at least one of the following: driving speed, driving direction, accelerator pedal state, and brake pedal state.
  • an embodiment of the present application provides a device for detecting a drivable area, including: a processor, a memory, and a communication interface;
  • the communication interface is used to obtain data to be processed
  • the memory is used to store program instructions
  • the processor is configured to call and execute the program instructions stored in the memory.
  • the drivable area detection device is configured to execute, on the data to be processed, the method described in any implementation manner of the first aspect, to obtain the processed data;
  • the communication interface is also used to output processed data.
  • an embodiment of the present application provides a chip including the drivable area detection device described in any implementation manner of the third aspect.
  • an embodiment of the present application provides an in-vehicle device, including the drivable area detection device described in any implementation manner of the third aspect.
  • an embodiment of the present application provides a computer-readable storage medium, where the computer-readable storage medium is used to store a computer program, and the computer program is used to implement the method described in any implementation manner of the above-mentioned first aspect.
  • an embodiment of the present application provides a chip system that includes a processor, and may also include a memory and a communication interface, for implementing the method described in any implementation manner of the first aspect.
  • the chip system may be composed of chips, or may include chips and other discrete devices.
  • an embodiment of the present application provides a program that is used to execute the method described in any implementation manner of the first aspect when the program is executed by a processor.
  • embodiments of the present application provide a computer program product containing instructions, which when run on a computer, cause the computer to execute the method described in any implementation manner of the first aspect.
  • FIG. 1 is a schematic diagram of the architecture of a hardware system in a vehicle provided by an embodiment of the application;
  • FIG. 2 is a schematic diagram of the architecture of a software system in a vehicle provided by an embodiment of the application;
  • FIG. 3 is a schematic diagram of the execution sequence of the software system in the vehicle provided by the embodiment of the application;
  • FIG. 4 is a schematic diagram 1 of the main components of the method for detecting a drivable area provided by an embodiment of the application;
  • FIG. 5 is a second schematic diagram of the main components of the method for detecting a drivable area provided by an embodiment of the application;
  • FIG. 6 is a schematic diagram 1 of the impact of adjustment of point cloud generation parameters provided by an embodiment of the application on system performance;
  • FIG. 7 is a second schematic diagram of the impact of adjustment of point cloud generation parameters provided by an embodiment of the application on system performance
  • FIG. 8 is a schematic flowchart of a method for detecting a drivable area provided by an embodiment of the application.
  • FIG. 9 is a schematic diagram 1 of an image including an ROI provided by an embodiment of the application.
  • FIG. 10 is a second schematic diagram of an image including an ROI provided by an embodiment of the application.
  • FIG. 11 is a third schematic diagram of an image including an ROI provided by an embodiment of the application.
  • FIG. 12 is a fourth schematic diagram of an image including an ROI provided by an embodiment of the application.
  • FIG. 13 is a schematic diagram 5 of an image including an ROI provided by an embodiment of the application.
  • FIG. 14 is a schematic diagram of the execution sequence of the method for detecting a drivable area provided by an embodiment of the application.
  • FIG. 15 is a schematic structural diagram of a drivable area detection device provided by an embodiment of the application.
  • FIG. 16 is a schematic structural diagram of a drivable area detection device provided by another embodiment of the application.
  • the drivable area detection method, device, device, and storage medium provided by the embodiments of the present application can be applied to binocular vision-based drivable area detection scenarios in which vehicles are driving on different roads.
  • the drivable area detection method, apparatus, device, and storage medium provided by the embodiments of the present application can be applied to the drivable area detection scene in a highway driving scene, the drivable area detection scene in a congested road driving scene, and the drivable area detection scene in a narrow-space uphill/downhill road driving scene (for example, a multi-storey parking garage with up and down ramps).
  • the traveling speed of the vehicle is greater than the second preset speed (for example, 40 km/h).
  • the traveling speed of the vehicle is less than the first preset speed (for example, 20 km/h), and there is an obstacle in the first preset distance (for example, 15 m) from the vehicle on the current traveling lane.
  • the traveling speed of the vehicle is less than the first preset speed (for example, 20 km/h), the accelerator pedal state and the brake pedal state of the vehicle change intermittently (for example, the driver steps on the accelerator and brake pedals intermittently), and the vehicle has moved a distance greater than the second preset distance (for example, 0.5 m) in the direction perpendicular to the horizontal ground.
  • for example, the vehicle moves 6 m along the Z axis (the axis perpendicular to the ground) within 1 s.
  • the method, device, device, and storage medium for detecting the drivable area provided in the embodiments of the present application can also be applied to other scenarios, which are not limited in the embodiments of the present application.
  • the execution subject of the method for detecting a drivable area provided in the embodiments of the present application may be a device for detecting a drivable area.
  • the driving area detection device may be a chip, a chip system, a circuit, or a module, etc., which is not limited in the present application.
  • the drivable area detection device involved in the embodiments of the present application may be a chip system; of course, it may also be other computing devices with data and/or image processing functions.
  • FIG. 1 is a schematic diagram of the architecture of a hardware system in a vehicle provided by an embodiment of the application.
  • the schematic diagram of the architecture of the hardware system in the vehicle may include but is not limited to: a binocular camera 10, a chip system 11, an electronic control unit (ECU) 12, a controller 13, and a controller area network ( Controller Area Network, CAN) bus 14.
  • the binocular camera 10 is used to collect image data
  • the CAN bus 14 is used to provide driving state data of the vehicle
  • the chip system 11 is used to detect the travelable area based on the image data collected by the binocular camera 10 and the data provided by the CAN bus 14.
  • the ECU 12 is used to determine control decisions based on the detection results of the chip system 11 and the data provided by the CAN bus 14; the controller 13 is used to control the movement of the vehicle according to the control decisions of the ECU 12. It should be understood that the chip system 11 may adopt the method for detecting the drivable area provided in the embodiment of the present application.
  • FIG. 2 is a schematic diagram of the architecture of a software system in a vehicle provided by an embodiment of the application.
  • the schematic diagram of the architecture of the software system in the vehicle may include, but is not limited to: a driving layer 20, a business software layer 21, a planning and control layer 22, and an execution layer 23.
  • the driver layer 20 is used to read the data of all on-board sensors in the vehicle, for example, including but not limited to the image data of the binocular camera and/or the data provided by the CAN bus; the business software layer 21 is used to perform tasks such as vehicle detection, pedestrian detection, and/or drivable area detection; the planning and control layer 22 is used to perform route planning based on all the detection results of the business software layer 21 (for example, including but not limited to the drivable area detection results) and generate control commands to transmit to the execution layer 23; the execution layer 23 is used to call the on-board equipment in the vehicle according to the control commands generated by the planning and control layer 22 to control the movement of the vehicle.
  • FIG. 3 is a schematic diagram of the execution sequence of the software system in the vehicle provided by an embodiment of the application.
  • first, the driver layer obtains the data of all on-board sensors in the vehicle, such as the image data of the binocular camera;
  • next, the business software layer obtains the image data of the binocular camera from the driver layer, performs image calibration processing on it, then performs drivable area detection, and finally outputs the detection result to the planning and control layer;
  • the planning and control layer performs path planning and generates control commands according to the detection results, and transmits the control commands to the execution layer
  • the execution layer calls the on-board equipment in the vehicle according to the control command to control the movement of the vehicle.
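One cycle of the layer sequence above can be sketched as a simple chain of callables. The layer interfaces shown here are hypothetical stand-ins for illustration, not the actual interfaces of the system.

```python
def run_cycle(read_sensors, detect_drivable_area, plan_route, execute):
    """One execution cycle of the layered software system: driver layer ->
    business software layer -> planning and control layer -> execution layer."""
    sensor_data = read_sensors()                    # driver layer
    detection = detect_drivable_area(sensor_data)   # business software layer
    commands = plan_route(detection)                # planning and control layer
    return execute(commands)                        # execution layer

# stub layers that only show the direction of data flow:
result = run_cycle(
    read_sensors=lambda: {"left": "img_L", "right": "img_R"},
    detect_drivable_area=lambda s: {"free_space": [(0, 0), (1, 0)]},
    plan_route=lambda d: ["keep_lane"],
    execute=lambda c: f"executed {c}",
)
```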
  • the ROI parameters in the related-art binocular vision-based drivable area detection method are fixed, which means that regardless of the road driving scene, the method requires large computing resources; this leads to the technical problem that the related-art binocular vision-based drivable area detection method incurs high computational energy consumption in every road driving scene.
  • when it is detected that the driving scene of the vehicle meets an ROI parameter switching condition, the current ROI parameters of the vehicle are adjusted to the target ROI parameters corresponding to the currently met ROI parameter switching condition, and the drivable area is then detected according to the target ROI parameters. It can be seen that in the embodiments of the present application, different ROI parameters can be used for different road driving scenarios, which helps reduce computational energy consumption, thereby not only saving computing resources but also improving local detection accuracy.
  • FIG. 4 is a schematic diagram 1 of the main components of the method for detecting a drivable area provided by an embodiment of the application.
  • the main components of the method for detecting a drivable area provided by the embodiments of the present application may include, but are not limited to: an image preprocessing part, an ego-motion estimation part, a point cloud generation part, a drivable area generation part, and a scene-adaptive ROI decision-making part.
  • the scene-adaptive ROI decision part is used to automatically detect the current driving scene of the vehicle, and when it is detected that the driving scene meets an ROI parameter switching condition, adjust the current ROI parameters of the vehicle to the target ROI parameters corresponding to that switching condition, so as to achieve the goal of reducing computational energy consumption and improving local detection accuracy.
  • the aforementioned target ROI parameters may include, but are not limited to, at least one of: the image preprocessing parameters related to ROI adjustment corresponding to the image preprocessing part, the point cloud generation parameters related to ROI adjustment corresponding to the point cloud generation part, and/or the drivable area generation parameters related to ROI adjustment corresponding to the drivable area generation part.
  • Fig. 5 is a second schematic diagram of the main components of the method for detecting a drivable area provided by an embodiment of the application.
  • the embodiment shown in FIG. 5 introduces each main component in detail.
  • the main components of the drivable area detection method provided by the embodiments of the present application may include, but are not limited to: image preprocessing, self-motion estimation, point cloud generation, drivable area generation, and scene auto Adapt to the ROI decision-making part.
  • the image preprocessing part is used to perform image preprocessing on the image data of the binocular camera to achieve image ROI setting, image scaling, image color conversion (such as color to grayscale, etc.), image enhancement and other preprocessing functions.
  • the self-motion estimation part is used to realize the self-vehicle positioning function.
  • the point cloud generation part is used to realize the depth map estimation and 3D point cloud generation functions such as the feature descriptor generation, support point generation, support point triangulation, disparity map generation, point cloud generation, etc. of the image data of the binocular camera.
  • the drivable area generation part is used to implement functions such as grid setting, updating the elevation map according to the estimation results of the ego-motion estimation part, adding the newly generated point cloud to the elevation map, updating the drivable area, and generating stixels (i.e., obstacle areas represented by vertical stripes).
  • the scene-adaptive ROI decision-making part is used to automatically detect the current driving scene of the vehicle based on the longitudinal data provided by the CAN bus at the current vehicle moment (for example, accelerator pedal state, brake pedal state, driving speed, and/or driving direction) and the drivable area data; when an ROI parameter switching condition is met, the current ROI parameters of the vehicle are adjusted to the target ROI parameters corresponding to that condition, so as to achieve the goal of reducing computational energy consumption and improving local detection accuracy.
  • the following embodiments of this application introduce, in turn, the image preprocessing parameters related to ROI adjustment corresponding to the image preprocessing part, the point cloud generation parameters related to ROI adjustment corresponding to the point cloud generation part, and the drivable area generation parameters related to ROI adjustment corresponding to the drivable area generation part.
  • the aforementioned image preprocessing parameters involved in the embodiments of the present application may include but are not limited to: image ROI setting parameters, and/or the number of image zoom layers; wherein, the image ROI setting parameters may include but are not limited to: image ROI The size parameter, and/or the position parameter of the image ROI.
  • Table 1 is a schematic table of the impact of adjustment of image preprocessing parameters on system performance
  • the aforementioned point cloud generation parameters involved in the embodiments of the present application may include, but are not limited to, at least one of the following: a support point grid step parameter, a support point sparsity parameter, or a support point distribution mode parameter.
  • Table 2 is a schematic table of the impact of point cloud generation parameter adjustment on system performance
  • FIG. 6 is a schematic diagram 1 of the influence of adjustment of point cloud generation parameters provided by an embodiment of the application on system performance.
• the farther the ROI is set, the smaller the support point grid step parameter should be; for example, the width and height step changes from 8*8 to 6*6 and then to 4*4, in order to improve the detection accuracy and thereby reduce the probability of missed detection.
• the sparser the support points indicated by the support point sparsity parameter, the lower the calculation energy consumption, but the corresponding detection accuracy is also reduced. Therefore, a sparser distribution can be used for short-range ROIs, and a denser distribution can be used for long-range ROIs.
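The distance-dependent choice of grid step described above can be sketched as a simple lookup; the distance bands and the `support_point_step` name are hypothetical, with the step values taken from the 8*8/6*6/4*4 example in the text:

```python
def support_point_step(roi_far_distance_m: float) -> tuple:
    """Pick a support-point grid step (width, height) for a given ROI range.

    The distance bands below are illustrative; the actual mapping used by
    the system is not specified in the text.
    """
    if roi_far_distance_m <= 50:
        return (8, 8)   # near ROI: sparser support points suffice
    elif roi_far_distance_m <= 65:
        return (6, 6)
    else:
        return (4, 4)   # far ROI: denser support points for accuracy

print(support_point_step(30))  # (8, 8)
print(support_point_step(80))  # (4, 4)
```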
• the distribution method indicated by the support point distribution mode parameter may include, but is not limited to: uniform distribution, or concentrated distribution on detected known objects.
  • a uniform distribution method is usually used to reduce the missed detection rate.
• a method of concentrating support points on known objects is usually used to improve detection accuracy and reduce the missed detection rate.
  • FIG. 7 is a second schematic diagram of the influence of adjustment of point cloud generation parameters provided by an embodiment of the application on system performance. As shown in FIG. 7, it shows the influence of the sparse distribution of support points on the detection accuracy of the drivable area.
  • sparse support points are first distributed, and then the depth of the support points is calculated, and the depth of other pixels is approximated by triangulating the support point area.
• as shown in Fig. 7(a), the denser the support points, the more accurate the depth estimation of the other pixels; as shown in Fig. 7(b), the sparser the support points, the rougher the depth estimation of the other pixels.
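The triangulation-based depth approximation described above can be illustrated with barycentric interpolation inside a single support-point triangle; this is a minimal sketch, not the actual point cloud generation code:

```python
def barycentric_depth(p, tri, depths):
    """Approximate the depth at pixel p inside a support-point triangle.

    tri: three (x, y) support points; depths: their measured depths.
    Barycentric interpolation is a minimal stand-in for the
    triangulation-based approximation described in the text.
    """
    (x1, y1), (x2, y2), (x3, y3) = tri
    px, py = p
    det = (y2 - y3) * (x1 - x3) + (x3 - x2) * (y1 - y3)
    w1 = ((y2 - y3) * (px - x3) + (x3 - x2) * (py - y3)) / det
    w2 = ((y3 - y1) * (px - x3) + (x1 - x3) * (py - y3)) / det
    w3 = 1.0 - w1 - w2
    return w1 * depths[0] + w2 * depths[1] + w3 * depths[2]

# Depth at the triangle centroid is the mean of the corner depths.
d = barycentric_depth((1.0, 1.0), [(0, 0), (3, 0), (0, 3)], [10.0, 20.0, 30.0])
print(d)  # 20.0
```

Denser support points mean smaller triangles, so each interpolated pixel lies closer to a measured depth, which is the accuracy effect Fig. 7 illustrates.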
  • the above-mentioned drivable area generation parameters involved in the embodiments of the present application may include, but are not limited to, a space-occupying grid resolution parameter.
• Table 3 is a schematic table of the influence of the adjustment of the drivable area generation parameters on system performance
• "At least one" refers to one or more, and "multiple" refers to two or more.
  • “And/or” describes the association relationship of the associated objects, indicating that there can be three relationships, for example, A and/or B, which can mean: A alone exists, A and B exist at the same time, and B exists alone, where A, B can be singular or plural.
  • the character “/” generally indicates that the associated objects before and after are in an “or” relationship.
• "at least one of the following items (a)" or similar expressions refers to any combination of these items, including a single item (a) or any combination of multiple items (a).
• for example, at least one item (a) of a, b, or c can mean: a, b, c, a and b, a and c, b and c, or a, b and c, where a, b, and c can be singular or plural.
  • FIG. 8 is a schematic flowchart of a method for detecting a drivable area provided by an embodiment of the application. As shown in FIG. 8, the method of the embodiment of the present application may include:
  • Step S801 Acquire the driving state data and the drivable area data of the vehicle at the current vehicle time.
• the drivable area detection device can obtain the driving state data of the vehicle at the current vehicle time through the CAN bus.
  • the driving state data may include, but is not limited to, at least one of the following: driving speed, driving direction, accelerator pedal state, and brake pedal state.
• the driving speed may correspond to the wheel speed sensor of the vehicle.
• the driving direction may correspond to the steering wheel angle of the vehicle.
• the driving direction may also correspond to the state of the turn signal of the vehicle (for example, if the turn signal is turned on, it can be determined that the driving direction of the vehicle has changed; if the turn signal is turned off, it can be determined that the driving direction of the vehicle has not changed).
• the drivable area data of the vehicle at the current vehicle time may be detected by the drivable area generating part of the drivable area detection device.
• the drivable area detection device can perform drivable area generation based on the self-motion estimation data of the vehicle at the current vehicle time and the point cloud data of the vehicle at the current vehicle time, to obtain the drivable area data of the vehicle at the current vehicle time. The specific method for obtaining the drivable area data of the vehicle at the current vehicle time will be introduced in a subsequent part of the embodiments of the present application (see the embodiment shown in FIG. 14).
• the above-mentioned drivable area data may include, but is not limited to, at least one of the following: information on whether there is an obstacle in the ROI, position information of the obstacle, or size information of the obstacle.
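As an illustration, the driving state data and drivable area data described in step S801 might be modeled as simple records; all field names here are hypothetical, not taken from the patent:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DrivingState:
    """Driving state fields read from the CAN bus (names hypothetical)."""
    speed_kmh: float
    turn_signal_on: bool          # proxy for a change of driving direction
    accelerator_pedal_on: bool
    brake_pedal_on: bool

@dataclass
class DrivableArea:
    """Drivable-area outputs for the current ROI (names hypothetical)."""
    obstacle_in_roi: bool
    obstacle_distance_m: Optional[float] = None  # None if no obstacle
    obstacle_in_ego_lane: bool = False

state = DrivingState(speed_kmh=15.0, turn_signal_on=False,
                     accelerator_pedal_on=False, brake_pedal_on=True)
area = DrivableArea(obstacle_in_roi=True, obstacle_distance_m=12.0,
                    obstacle_in_ego_lane=True)
print(state.speed_kmh, area.obstacle_distance_m)  # 15.0 12.0
```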
• Step S802: Determine, based on the driving state data and the drivable area data, whether the driving scene of the vehicle meets a region of interest (ROI) parameter switching condition.
• at least one region of interest (ROI) parameter switching condition may be preset in the drivable area detection device, so that whether the driving scene of the vehicle meets a certain ROI parameter switching condition can be judged in real time according to the driving state data and the drivable area data of the vehicle at the current vehicle time.
• the at least one preset ROI parameter switching condition in the drivable area detection device may include, but is not limited to, at least one of the following: a preset ROI parameter switching condition for a highway driving scene, a preset ROI parameter switching condition for a congested road driving scene, or a preset ROI parameter switching condition for a driving scene on an up/downhill road in a narrow space.
• the preset ROI parameter switching condition for the congested road driving scene may include, but is not limited to: the vehicle travels on the current traveling lane at a driving speed less than a first preset speed (for example, 20 km/h) along the driving direction of the previous vehicle time (for example, the turn signal is in the off state), and there is an obstacle within a first preset distance (for example, 15 m) from the vehicle on the current traveling lane.
• the preset ROI parameter switching condition for the driving scene on an up/downhill road in a narrow space may include, but is not limited to: the vehicle travels on the current traveling lane at a driving speed less than the first preset speed (for example, 20 km/h) along the driving direction of the previous vehicle time (for example, the turn signal is in the off state), the states of the accelerator pedal and the brake pedal of the vehicle change intermittently (for example, the driver intermittently steps on the accelerator pedal and the brake pedal), and the vehicle moves in the vertical direction, for example, by 6 m along the Z axis (i.e., the axial direction perpendicular to the ground) within 1 s.
• the preset ROI parameter switching condition for the highway driving scene may include, but is not limited to, at least one of the following: a preset ROI parameter switching condition of a first sub-scene, a preset ROI parameter switching condition of a second sub-scene, a preset ROI parameter switching condition of a third sub-scene, or a preset ROI parameter switching condition of a fourth sub-scene.
• the preset ROI parameter switching condition of the first sub-scene may include, but is not limited to: the vehicle travels on the current traveling lane at a driving speed greater than a second preset speed (for example, 40 km/h) along the driving direction of the previous vehicle time (for example, the turn signal is in the off state), and there is no obstacle within a third preset distance (for example, 50 m) from the vehicle on the adjacent traveling lane of the current traveling lane.
• the preset ROI parameter switching condition of the second sub-scene may include, but is not limited to: the vehicle travels on the current traveling lane at a driving speed greater than the second preset speed (for example, 40 km/h) along the driving direction of the previous vehicle time (for example, the turn signal is in the off state), and there is an obstacle within the third preset distance from the vehicle on the adjacent traveling lane of the current traveling lane (for example, an obstacle is detected on the adjacent traveling lane of the current traveling lane in the ROI, and as the vehicle moves forward, the obstacle gets closer and closer and has reached the ROI boundary).
• the preset ROI parameter switching condition of the third sub-scene may include, but is not limited to: the vehicle travels on the current traveling lane at a driving speed greater than the second preset speed (for example, 40 km/h) along the driving direction of the previous vehicle time (for example, the turn signal is in the off state), and there is an obstacle within the third preset distance from the vehicle on the current traveling lane (for example, an obstacle is detected on the current traveling lane in the ROI, and as the vehicle moves forward, the obstacle gets closer and closer and has reached the ROI boundary).
• the preset ROI parameter switching condition of the fourth sub-scene may include, but is not limited to: the vehicle travels on the current traveling lane at a driving speed greater than the second preset speed (for example, 40 km/h) along a driving direction different from that of the previous vehicle time (for example, the turn signal is in the on state).
• the drivable area detection device can determine, based on the driving state data and drivable area data of the vehicle acquired in step S801, whether the current driving scene of the vehicle meets a certain ROI parameter switching condition among the aforementioned at least one region of interest ROI parameter switching condition (for ease of description, it may be referred to as the target ROI parameter switching condition).
  • the target ROI parameter switching condition may include, but is not limited to, any of the following: a preset ROI parameter switching condition for a highway driving scene, a preset ROI parameter switching condition for a congested road driving scene, or a narrow space up and downhill road driving scene The preset ROI parameter switching conditions.
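A sketch of the scene judgment in step S802, using the example thresholds from the text (20 km/h, 40 km/h, 15 m, 50 m, 6 m); the decision order and the returned labels are illustrative simplifications, not the patent's exact logic:

```python
def classify_scene(speed_kmh, turn_signal_on, obstacle_dist_ego_m,
                   obstacle_dist_adjacent_m, vertical_move_m=0.0,
                   pedals_intermittent=False):
    """Decide which preset ROI parameter switching condition (if any) is met.

    Returns a scene label, or None when no switching condition holds
    (the current ROI parameters are then kept).
    """
    if (speed_kmh < 20 and not turn_signal_on and pedals_intermittent
            and vertical_move_m >= 6):
        return "narrow_updownhill"
    if (speed_kmh < 20 and not turn_signal_on
            and obstacle_dist_ego_m is not None and obstacle_dist_ego_m < 15):
        return "congested"
    if speed_kmh > 40:
        if turn_signal_on:
            return "highway_sub4"      # turning into an unknown scene
        if obstacle_dist_ego_m is not None and obstacle_dist_ego_m < 50:
            return "highway_sub3"      # obstacle in ego lane
        if obstacle_dist_adjacent_m is not None and obstacle_dist_adjacent_m < 50:
            return "highway_sub2"      # obstacle in adjacent lane
        return "highway_sub1"          # clear highway, look far ahead
    return None

print(classify_scene(60, False, None, None))   # highway_sub1
print(classify_scene(10, False, 12.0, None))   # congested
```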
  • Step S803 If the driving scene of the vehicle meets the ROI parameter switching condition, the current ROI parameter of the vehicle is adjusted to the target ROI parameter.
• the drivable area detection device may be preset with an ROI parameter corresponding to each of the at least one region of interest ROI parameter switching condition, so that when it is detected that the current driving scene of the vehicle meets a certain ROI parameter switching condition, the current ROI parameter of the vehicle can be adjusted to the ROI parameter corresponding to the currently met ROI parameter switching condition (for ease of description, it may be referred to as the target ROI parameter). That is, the drivable area detection device can adjust the current ROI parameter of the vehicle to the target ROI parameter corresponding to the above-mentioned target ROI parameter switching condition.
• for example, the drivable area detection device can be preset with ROI parameter 1 corresponding to ROI parameter switching condition 1, ROI parameter 2 corresponding to ROI parameter switching condition 2, and ROI parameter 3 corresponding to ROI parameter switching condition 3. If it is detected that the current driving scene of the vehicle meets the aforementioned ROI parameter switching condition 2, the drivable area detection device can adjust the current ROI parameter of the vehicle to the aforementioned ROI parameter 2.
  • the aforementioned target ROI parameters may include but are not limited to at least one of the following: image preprocessing parameters, point cloud generation parameters, or drivable area generation parameters.
  • the image preprocessing parameter may include but is not limited to at least one of the following: the size parameter of the image ROI, the position parameter of the image ROI, or the number of image zoom layers.
  • the point cloud generation parameters may include, but are not limited to, at least one of the following: a support point grid step parameter, a support point sparsity parameter, or a support point distribution mode parameter.
  • the drivable area generation parameters may include, but are not limited to, the space-occupying grid resolution parameters.
• among the current ROI parameters of the vehicle, the parameters that differ from the corresponding parameters contained in the target ROI parameters are modified to be the same as the target ROI parameters; the parameters in the current ROI parameters of the vehicle that are already the same as the target ROI parameters are retained, and other parameters in the current ROI parameters of the vehicle that are not included in the target ROI parameters can also be retained.
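The retain-and-overwrite rule above amounts to a dictionary merge; the parameter key names below are hypothetical:

```python
def merge_roi_params(current: dict, target: dict) -> dict:
    """Apply target ROI parameters on top of the current ones.

    Parameters named in `target` are overwritten; parameters present
    only in `current` are retained, mirroring the rule in the text.
    """
    merged = dict(current)
    merged.update(target)
    return merged

current = {"roi_size": (0, 50), "grid_step": (8, 8), "zoom_layers": 4}
target = {"roi_size": (50, 80), "grid_step": (4, 4)}
print(merge_roi_params(current, target))
# {'roi_size': (50, 80), 'grid_step': (4, 4), 'zoom_layers': 4}
```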
• in the following, the ROI parameter adjustment method used when the current driving scene of the vehicle meets different ROI parameter switching conditions is introduced.
  • the device for detecting the drivable area can adjust the current ROI parameter of the vehicle to the target ROI parameter corresponding to the preset ROI parameter switching condition of the first sub-scene.
  • Table 4 is a schematic table about the preset ROI parameter switching conditions of the first sub-scene
• the target ROI parameter corresponding to the preset ROI parameter switching condition of the first sub-scene may include at least one of the following main adjustment parameters: the size parameter of the image ROI (for example, the ROI may cover the 50 m-80 m range), the position parameter of the image ROI (for example, the ROI may be located 50 m away from the vehicle), the support point grid step parameter (for example, the grid step parameter is 4*4), and the occupancy grid resolution parameter (for example, the grid size of the occupancy grid resolution parameter is set to 0.8 m).
• the target ROI parameter corresponding to the preset ROI parameter switching condition of the first sub-scene may also include at least one of the following auxiliary adjustment parameters: the number of image zoom layers (for example, the number of image zoom layers is 2), the support point sparsity parameter (for example, the support point density indicated by the support point sparsity parameter is greater than a preset density), and the support point distribution mode parameter (for example, the support point distribution mode parameter is used to indicate a distribution method based on the distribution of obstacles).
  • the target ROI parameter corresponding to the preset ROI parameter switching condition of the first sub-scene may also include other parameters, which are not limited in the embodiment of the present application.
• if the target ROI parameter includes any image preprocessing parameter corresponding to the image preprocessing part, and the image preprocessing parameter is adjusted in the process of adjusting the current ROI parameter of the vehicle to the target ROI parameter, the drivable area detection device also needs to reinitialize the image preprocessing part; if the target ROI parameter includes any point cloud generation parameter corresponding to the point cloud generation part, and the point cloud generation parameter is adjusted in this process, the drivable area detection device also needs to reinitialize the point cloud generation part; if the target ROI parameter includes any drivable area generation parameter corresponding to the drivable area generation part, and the drivable area generation parameter is adjusted in this process, the drivable area detection device also needs to reinitialize the drivable area generating part.
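The reinitialization rule can be sketched as follows: group each parameter by the pipeline part it belongs to, and reinitialize only the parts whose parameters actually change. The grouping and key names are assumptions for illustration:

```python
# Hypothetical grouping of parameter names by pipeline part.
PART_OF = {
    "roi_size": "image_preprocessing", "roi_position": "image_preprocessing",
    "zoom_layers": "image_preprocessing",
    "grid_step": "point_cloud", "sparsity": "point_cloud",
    "distribution": "point_cloud",
    "occupancy_resolution": "drivable_area",
}

def parts_to_reinitialize(current: dict, target: dict) -> set:
    """Return the pipeline parts whose parameter values actually changed.

    Only a part with a changed parameter needs reinitialization,
    mirroring the rule described in the text.
    """
    changed = {k for k, v in target.items() if current.get(k) != v}
    return {PART_OF[k] for k in changed if k in PART_OF}

cur = {"roi_size": (0, 50), "grid_step": (8, 8), "occupancy_resolution": 0.2}
tgt = {"roi_size": (50, 80), "grid_step": (8, 8)}
print(parts_to_reinitialize(cur, tgt))  # {'image_preprocessing'}
```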
  • the device for detecting the drivable area can adjust the current ROI parameter of the vehicle to the target ROI parameter corresponding to the preset ROI parameter switching condition of the second sub-scene.
  • Table 5 is a schematic table of the preset ROI parameter switching conditions for the second sub-scene
• for the relevant content of the target ROI parameter corresponding to the preset ROI parameter switching condition of the second sub-scene, reference may be made to the relevant content of the target ROI parameter corresponding to the preset ROI parameter switching condition of the first sub-scene, which will not be repeated here.
• as in the first sub-scene, if any image preprocessing parameter, point cloud generation parameter, or drivable area generation parameter included in the target ROI parameter is adjusted in the process of adjusting the current ROI parameter of the vehicle to the target ROI parameter, the drivable area detection device also needs to reinitialize the corresponding image preprocessing part, point cloud generation part, or drivable area generation part.
• if none of the parameters included in the target ROI parameter is adjusted, the drivable area detection device does not need to perform reinitialization.
  • FIG. 9 is a schematic diagram 1 of an image including an ROI provided by an embodiment of the application.
• as shown in FIG. 9, an obstacle is detected on the adjacent traveling lane of the current traveling lane in the ROI, and as the vehicle moves forward, the obstacle gets closer and closer and has reached the ROI boundary.
• the drivable area detection device can adopt a motion trajectory prediction method according to the driving state estimation data of the vehicle, and continue to track the obstacle until the obstacle disappears from the image data collected by the binocular camera of the vehicle.
  • the device for detecting the drivable area can adjust the current ROI parameter of the vehicle to the target ROI parameter corresponding to the preset ROI parameter switching condition of the third sub-scene.
  • Table 6 is a schematic table regarding the preset ROI parameter switching conditions of the third sub-scene
• the target ROI parameter corresponding to the preset ROI parameter switching condition of the third sub-scene may include at least one of the following main adjustment parameters: the size parameter of the image ROI (for example, the ROI may cover the 0 m-50 m range), the position parameter of the image ROI (for example, the ROI may be located within 50 m from the vehicle), the support point grid step parameter (for example, the grid step parameter is 8*8), and the occupancy grid resolution parameter (for example, the grid size of the occupancy grid resolution parameter is set to 0.2 m).
• the target ROI parameter corresponding to the preset ROI parameter switching condition of the third sub-scene may also include at least one of the following auxiliary adjustment parameters: the number of image zoom layers (for example, the number of image zoom layers is 4), the support point sparsity parameter (for example, the support point density indicated by the support point sparsity parameter is not greater than the preset density), and the support point distribution mode parameter (for example, the support point distribution mode parameter is used to indicate a distribution method based on the distribution of obstacles).
  • the target ROI parameter corresponding to the preset ROI parameter switching condition of the third sub-scene may also include other parameters, which are not limited in the embodiment of the present application.
• as in the first sub-scene, if any image preprocessing parameter, point cloud generation parameter, or drivable area generation parameter included in the target ROI parameter is adjusted in the process of adjusting the current ROI parameter of the vehicle to the target ROI parameter, the drivable area detection device also needs to reinitialize the corresponding image preprocessing part, point cloud generation part, or drivable area generation part.
  • Figure 10 is a second schematic diagram of an image including an ROI provided by an embodiment of the application.
• as shown in FIG. 10, an obstacle is detected on the current traveling lane in the ROI, and as the vehicle moves forward, the obstacle gets closer and closer and has reached the ROI boundary; when the obstacle continues to move relative to the vehicle beyond the ROI boundary, because the obstacle is in the current traveling lane, it will affect the safe driving of the vehicle.
• therefore, the drivable area detection device can move the ROI to the vicinity of the vehicle, reproject the drivable area data of the vehicle at the current vehicle time into the new ROI corresponding to the target ROI parameter, and then continuously detect and track the obstacle.
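The reprojection of existing drivable area data into the new, nearer ROI can be illustrated as a simple longitudinal crop; a real reprojection would also transform coordinates, so this is only a sketch with hypothetical point layout:

```python
def reproject_to_roi(points, roi_near_m, roi_far_m):
    """Keep only drivable-area points that fall inside the new ROI.

    `points` are (x_lateral_m, z_forward_m) samples; the new ROI is the
    longitudinal band [roi_near_m, roi_far_m). A minimal stand-in for
    the reprojection step described in the text.
    """
    return [(x, z) for (x, z) in points if roi_near_m <= z < roi_far_m]

old_points = [(0.5, 10.0), (-1.0, 45.0), (2.0, 70.0)]
# ROI moved to the vicinity of the vehicle: 0 m - 50 m.
print(reproject_to_roi(old_points, 0.0, 50.0))  # [(0.5, 10.0), (-1.0, 45.0)]
```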
  • the device for detecting the drivable area can adjust the current ROI parameter of the vehicle to the target ROI parameter corresponding to the preset ROI parameter switching condition of the fourth sub-scene.
  • Table 7 is a schematic table of the preset ROI parameter switching conditions of the fourth sub-scene
• the target ROI parameter corresponding to the preset ROI parameter switching condition of the fourth sub-scene may include at least one of the following main adjustment parameters: the size parameter of the image ROI (for example, the ROI may cover the 0 m-50 m range), the position parameter of the image ROI (for example, the ROI may be located within 50 m from the vehicle), the support point grid step parameter (for example, the grid step parameter is 8*8), and the occupancy grid resolution parameter (for example, the grid size of the occupancy grid resolution parameter is set to 0.2 m).
• the target ROI parameter corresponding to the preset ROI parameter switching condition of the fourth sub-scene may also include at least one of the following auxiliary adjustment parameters: the number of image zoom layers (for example, the number of image zoom layers is 4), the support point sparsity parameter (for example, the support point density indicated by the support point sparsity parameter is not greater than the preset density), and the support point distribution mode parameter (for example, the support point distribution mode parameter is used to indicate a distribution method based on the distribution of obstacles).
  • the target ROI parameter corresponding to the preset ROI parameter switching condition of the fourth sub-scene may also include other parameters, which are not limited in the embodiment of the present application.
• as in the first sub-scene, if any image preprocessing parameter, point cloud generation parameter, or drivable area generation parameter included in the target ROI parameter is adjusted in the process of adjusting the current ROI parameter of the vehicle to the target ROI parameter, the drivable area detection device also needs to reinitialize the corresponding image preprocessing part, point cloud generation part, or drivable area generation part.
  • Figure 11 is the third schematic diagram of the image including the ROI provided by the embodiment of the application.
• as shown in FIG. 11, the drivable area detection device can determine that the vehicle will travel along a driving direction different from that of the previous vehicle time (that is, the vehicle turns), i.e., the scene in front of the vehicle will quickly switch to an unknown scene. Therefore, the ROI is moved to the vicinity of the vehicle so that middle- and near-distance objects can be detected in the unknown scene, thereby improving the driving safety of the vehicle.
• to sum up, when the drivable area detection device detects that the driving scene of the vehicle meets the above-mentioned highway driving scene, it can adaptively adjust the detection area of the ROI from far to near, or jump from near to far, according to the different ROI parameter switching conditions that the driving scene of the vehicle meets, which can effectively reduce calculation energy consumption and improve the detection distance and accuracy of the far drivable area.
• the drivable area detection device can adjust the current ROI parameter of the vehicle to the target ROI parameter corresponding to the preset ROI parameter switching condition of the above-mentioned congested road driving scene.
  • Table 8 is a schematic table of the preset ROI parameter switching conditions for the congested road driving scene
• the target ROI parameter corresponding to the preset ROI parameter switching condition of the above-mentioned congested road driving scene may include at least one of the following main adjustment parameters: the size parameter of the image ROI (for example, the ROI may cover the 0 m-50 m range), the position parameter of the image ROI (for example, the ROI may be located within 50 m from the vehicle), the support point grid step parameter (for example, the grid step parameter is 16*16), and the occupancy grid resolution parameter (for example, the grid size of the occupancy grid resolution parameter is set to 0.1 m).
• the target ROI parameter corresponding to the preset ROI parameter switching condition of the above-mentioned congested road driving scene may also include at least one of the following auxiliary adjustment parameters: the number of image zoom layers (for example, the number of image zoom layers is 6), the support point sparsity parameter (for example, the support point density indicated by the support point sparsity parameter is not greater than the preset density), and the support point distribution mode parameter (for example, the support point distribution mode parameter is used to indicate a uniform distribution method).
  • the target ROI parameter corresponding to the preset ROI parameter switching condition of the above-mentioned congested road driving scene may also include other parameters, which are not limited in the embodiment of the present application.
• as in the first sub-scene, if any image preprocessing parameter, point cloud generation parameter, or drivable area generation parameter included in the target ROI parameter is adjusted in the process of adjusting the current ROI parameter of the vehicle to the target ROI parameter, the drivable area detection device also needs to reinitialize the corresponding image preprocessing part, point cloud generation part, or drivable area generation part.
• FIG. 12 is a fourth schematic diagram of an image including an ROI provided by an embodiment of the application. As shown in FIG. 12, the congested road driving scene mainly concerns the detection of nearby obstacles. Since a single short-distance obstacle occupies most of the image, a sparser, uniformly distributed set of support points can be used, which reduces computational energy consumption while improving the accuracy of nearby obstacle detection.
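The "sparser, uniformly distributed support points" choice above can be sketched as follows. This is an illustrative example only; the function and parameter names (`make_support_points`, `grid_step`) are ours, not from the patent, and real support-point grids would be fed into a stereo matcher.

```python
def make_support_points(roi_w, roi_h, grid_step):
    """Return uniformly spaced (x, y) support points inside an ROI.

    A larger grid_step yields a sparser grid, meaning fewer disparity
    estimates and hence lower computational energy consumption.
    """
    points = []
    for y in range(0, roi_h, grid_step):
        for x in range(0, roi_w, grid_step):
            points.append((x, y))
    return points

# Congested scene: one large nearby obstacle fills most of the image,
# so a coarse 16-pixel step may suffice; an open-road scene might use 8.
sparse = make_support_points(640, 480, 16)
dense = make_support_points(640, 480, 8)
print(len(sparse), len(dense))  # 1200 4800
```

Doubling the grid step quarters the number of support points, which is the source of the computational savings claimed for this scene.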
  • the driveable area detection device can adjust the current ROI parameter of the vehicle to the target ROI parameter corresponding to the preset ROI parameter switching condition of the above-mentioned narrow space uphill road driving scene.
  • the following embodiments of the present application provide a schematic table of preset ROI parameter switching conditions for a driving scene on a narrow space up and downhill road, and the implementation manner is introduced in combination with the schematic table.
  • Table 9 is a schematic table of the preset ROI parameter switching conditions for driving scenes on up and downhill roads in a narrow space
• the target ROI parameter corresponding to the preset ROI parameter switching condition of the above-mentioned narrow space uphill and downhill road driving scene may include at least one of the following main adjustment parameters: the size parameter of the image ROI (for example, the ROI can cover the range of 0 m-50 m), the position parameter of the image ROI (for example, the ROI can be located within 50 m of the vehicle), the support point grid step parameter (for example, the grid step parameter is 8*8), the occupancy grid resolution parameter (for example, the occupancy grid cell size is set to 0.1 m), and the support point sparseness parameter (for example, the support point sparseness parameter is used to indicate that the support point distribution density in the areas on both sides of the image is adjusted to twice that of the middle area of the image).
• the target ROI parameter corresponding to the preset ROI parameter switching condition of the above-mentioned narrow space uphill and downhill road driving scene may also include at least one of the following auxiliary adjustment parameters: the number of image zoom layers (for example, the number of image zoom layers is 6), and the support point distribution parameter (for example, the support point distribution parameter is used to indicate a uniform distribution method).
  • the target ROI parameter corresponding to the preset ROI parameter switching condition of the above-mentioned narrow space uphill road driving scene may also include other parameters, which are not limited in the embodiment of the present application.
• it should be noted that if the target ROI parameter includes any image preprocessing parameter corresponding to the image preprocessing part, and that parameter is adjusted in the process of adjusting the current ROI parameter of the vehicle to the target ROI parameter, the drivable area detection device also needs to reinitialize the image preprocessing part; if the target ROI parameter includes any point cloud generation parameter corresponding to the point cloud generation part, and that parameter is adjusted in the same process, the drivable area detection device also needs to reinitialize the point cloud generation part; if the target ROI parameter includes any drivable area generation parameter corresponding to the drivable area generation part, and that parameter is adjusted in the same process, the drivable area detection device also needs to reinitialize the drivable area generation part.
• FIG. 13 is a schematic diagram 5 of an image including an ROI provided by an embodiment of the application. As shown in FIG. 13, for a driving scene on an uphill and downhill road in a narrow space (for example, an uphill and downhill scene in a multi-storey parking garage), the detection accuracy of obstacles on the left and right sides of the vehicle is improved, which not only reduces the calculation energy consumption, but also improves the degree of protection of the left and right sides of the vehicle when the vehicle is driving in a narrow space.
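The "twice the density on both sides of the image" idea from the narrow-space parameters above can be sketched by walking each row with a halved step inside the side bands. Names and the 25% side-band width are illustrative assumptions, not values from the patent.

```python
def side_weighted_support_points(roi_w, roi_h, base_step, side_frac=0.25):
    """Support points whose density in the left/right image bands is
    twice that of the middle band (side step = base_step // 2)."""
    side_w = int(roi_w * side_frac)
    points = []
    for y in range(0, roi_h, base_step):
        x = 0
        while x < roi_w:
            points.append((x, y))
            in_side = x < side_w or x >= roi_w - side_w
            x += base_step // 2 if in_side else base_step
    return points

# Small worked example: 32-px-wide ROI, base step 8, side bands of 8 px.
row = [x for (x, y) in side_weighted_support_points(32, 8, 8) if y == 0]
print(row)  # [0, 4, 8, 16, 24, 28]
```

In the printed row, the side bands [0, 8) and [24, 32) each carry two points over 8 px while the 16 px middle band carries two, i.e. twice the linear density on the sides, matching the parameter's intent of protecting the vehicle's flanks.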
• Step S804: according to the target ROI parameter, obtain the drivable area data of the vehicle at the next vehicle moment.
• in this step, the drivable area detection device may obtain the drivable area data of the vehicle at the next vehicle time according to the target ROI parameter corresponding to the ROI parameter switching condition met by the current driving scene of the vehicle in step S803. It can be seen that adopting different ROI parameters for different road driving scenarios helps reduce the computational energy consumption of drivable area detection, which not only saves computing resources but also improves local detection accuracy.
• after the drivable area detection device obtains the drivable area data of the vehicle at the next vehicle time according to the above-mentioned target ROI parameter, the drivable area detection device can likewise, according to the driving state data and the drivable area data of the vehicle at the next vehicle time, adjust the ROI parameter of the vehicle at the next vehicle time to the ROI parameter corresponding to whichever ROI parameter switching condition is met at the next vehicle moment.
• in a feasible implementation, the drivable area detection device may obtain the travel state estimation data and point cloud data of the vehicle at the next vehicle time based on the target ROI parameter, the travel state estimation data of the vehicle at the current vehicle time, the drivable area data, and the binocular image data of the vehicle at the next vehicle time.
  • the estimation data of the driving state of the vehicle at the current time of the vehicle may be obtained by the ego motion estimation part of the driving area detection device.
• the drivable area detection device can perform state estimation processing based on the travel state estimation data of the vehicle at the previous vehicle time and the binocular image data of the vehicle at the current vehicle time, to obtain the travel state estimation data of the vehicle at the current vehicle time. For the specific state estimation processing, reference may be made to the state estimation processing method in the related art, which is not limited in the embodiment of the present application.
  • the travelable area data of the vehicle at the current vehicle time may be detected by the drivable area generating part of the drivable area detection device.
• the drivable area detection device can perform drivable area generation processing based on the travel state estimation data of the vehicle at the current vehicle time and the point cloud data of the vehicle at the current vehicle time, to obtain the drivable area data of the vehicle at the current vehicle time. For the specific drivable area generation processing, reference may be made to the drivable area generation processing method in the related art, which is not limited in the embodiment of the present application.
• in a possible implementation, the drivable area detection device may first perform image preprocessing on the binocular image data of the vehicle at the next vehicle time based on the image preprocessing parameters, the travel state estimation data of the vehicle at the current vehicle time, and the drivable area data of the vehicle at the current vehicle time, to obtain the image processing data of the vehicle at the next vehicle time.
  • image preprocessing reference may be made to image preprocessing methods in related technologies, which are not limited in the embodiments of the present application.
  • the travelable area detection device can perform state estimation processing based on the estimated data of the driving state of the vehicle at the current vehicle time and the binocular image data of the vehicle at the next vehicle time, so as to obtain the estimated travel state of the vehicle at the next vehicle time. data.
• the drivable area detection device can perform point cloud generation processing on the image processing data of the vehicle at the next vehicle time and the travel state estimation data of the vehicle at the next vehicle time according to the point cloud generation parameters, to obtain the point cloud data of the vehicle at the next vehicle time. For the specific point cloud generation processing, reference may be made to the point cloud generation processing method in the related technology, which is not limited in the embodiment of the present application.
• it should be noted that if the target ROI parameter does not include the image preprocessing parameter, the drivable area detection device may perform image preprocessing according to the image preprocessing parameter included in the current ROI parameter of the vehicle. If the target ROI parameter does not include the point cloud generation parameter, the drivable area detection device may perform the point cloud generation processing according to the point cloud generation parameter included in the current ROI parameter of the vehicle.
• of course, the drivable area detection device can also obtain the travel state estimation data and point cloud data of the vehicle at the next vehicle moment in other ways based on the aforementioned target ROI parameters, the travel state estimation data of the vehicle at the current vehicle time, the drivable area data, and the binocular image data of the vehicle at the next vehicle time, which is not limited in the embodiment of the present application.
  • the device for detecting the drivable area may obtain the data of the drivable area of the vehicle at the next vehicle time based on the estimated data and point cloud data of the traveling state of the vehicle at the next vehicle time.
• the drivable area detection device can perform drivable area generation processing on the travel state estimation data and the point cloud data of the vehicle at the next vehicle time according to the drivable area generation parameter, to obtain the drivable area data of the vehicle at the next vehicle time.
  • the drivable area detection device may perform drivable area generation processing according to the drivable area generation parameter included in the current ROI parameters of the vehicle.
• of course, the drivable area detection device can also obtain the drivable area data of the vehicle at the next vehicle time in other ways based on the travel state estimation data and point cloud data of the vehicle at the next vehicle time, which is not limited in the embodiment of the present application.
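The data flow of step S804 described above (preprocessing, state estimation, point cloud generation, drivable area generation, each consuming the relevant part of the target ROI parameter) can be sketched as below. All four stage functions are placeholders standing in for the corresponding processing parts; their bodies and the dictionary layout are our assumptions for illustration, not the patent's implementation.

```python
def preprocess(stereo_t1, ego_t, drivable_t, img_params):
    # crop/scale the next-frame stereo pair to the image ROI (placeholder)
    return {"frames": stereo_t1, "roi": img_params["roi"]}

def estimate_state(stereo_t1, ego_t):
    # propagate the previous travel-state estimate using the new frame (placeholder)
    return {"pose": ego_t["pose"]}

def generate_point_cloud(img_data, ego_t1, pc_params):
    # support-point-based stereo matching would happen here (placeholder)
    return {"points": [], "grid_step": pc_params["grid_step"]}

def generate_drivable_area(ego_t1, cloud_t1, area_params):
    # occupancy-grid-based drivable area generation (placeholder)
    return {"cells": [], "resolution": area_params["resolution"]}

def detect_next_frame(target_roi, ego_t, drivable_t, stereo_t1):
    """One iteration of step S804: each stage uses the part of the target
    ROI parameter relevant to it (image / point cloud / area)."""
    img = preprocess(stereo_t1, ego_t, drivable_t, target_roi["image"])
    ego_t1 = estimate_state(stereo_t1, ego_t)
    cloud = generate_point_cloud(img, ego_t1, target_roi["point_cloud"])
    return generate_drivable_area(ego_t1, cloud, target_roi["area"])
```

As the text notes, if some sub-parameter is absent from the target ROI parameter, the corresponding stage would simply keep using the vehicle's current ROI parameter instead.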
• in the embodiment of the present application, the current ROI parameter of the vehicle is adjusted to the target ROI parameter corresponding to the met ROI parameter switching condition, and the drivable area is then detected according to the target ROI parameter. It can be seen that in the embodiments of the present application, different ROI parameters can be used for different road driving scenarios, which is beneficial to reducing calculation energy consumption, thereby not only saving calculation resources, but also improving local detection accuracy.
• in addition, the binocular vision-based drivable area detection method has low computational energy consumption and can be applied to a general-purpose chip system without using dedicated neural network hardware, which can also save system costs.
• optionally, the drivable area detection device may also reproject the drivable area data of the vehicle at the current vehicle time to the new ROI corresponding to the target ROI parameter, so that at the next vehicle moment, not only can the calculation energy consumption be reduced, but the drivable area can also be detected accurately.
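The reprojection step above can be illustrated minimally as clipping the current drivable-area cells to the new ROI bounds; cells that fall outside are simply re-detected at the next frame. The cell/ROI representation here is a simplifying assumption (2D grid indices and an axis-aligned box), not the patent's actual data format.

```python
def reproject_to_roi(drivable_cells, new_roi):
    """Keep only the drivable-area cells inside the new ROI
    (x0, y0, x1, y1); anything outside will be re-detected later."""
    x0, y0, x1, y1 = new_roi
    return [(x, y) for (x, y) in drivable_cells
            if x0 <= x < x1 and y0 <= y < y1]

cells = [(1, 1), (10, 10), (60, 5)]
print(reproject_to_roi(cells, (0, 0, 50, 50)))  # [(1, 1), (10, 10)]
```

Carrying the clipped area forward gives the next frame a warm start inside the new ROI instead of detecting from scratch, which is where the accuracy-plus-energy benefit claimed above comes from.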
  • FIG. 14 is a schematic diagram of the execution sequence of the method for detecting a drivable area provided by an embodiment of the application.
  • the method of the embodiment of the present application may include two parts: the detection of the drivable area and the adaptive adjustment of the ROI.
• first, the drivable area detection device can obtain, from the drivable area detection memory, the binocular image data of the vehicle at the current vehicle time, the drivable area data of the vehicle at the previous vehicle time, and the travel state estimation data of the vehicle at the previous vehicle time. Secondly, the drivable area detection device can perform image preprocessing based on the binocular image data of the vehicle at the current vehicle time, the drivable area data of the vehicle at the previous vehicle time, and the travel state estimation data of the vehicle at the previous vehicle time, to obtain the image processing data of the vehicle at the current vehicle time. Then, the drivable area detection device can store the image processing data of the vehicle at the current vehicle time in the drivable area detection memory.
  • the driving area detection device can obtain the binocular image data of the vehicle at the current vehicle time and the estimated data of the vehicle driving state at the previous vehicle time from the driving area detection memory. Secondly, the driving area detection device can perform state estimation processing based on the binocular image data of the vehicle at the current vehicle time and the vehicle driving state estimation data at the previous vehicle time to obtain the vehicle driving state estimation data at the current vehicle time . Then, the drivable area detection device can store the estimated data of the driving state of the vehicle at the current vehicle time in the drivable area detection memory.
  • the drivable area detection device can obtain the image processing data of the vehicle at the current vehicle time and the estimated data of the driving state of the vehicle at the current vehicle time from the drivable area detection memory.
  • the driveable area detection device can perform point cloud generation processing based on the image processing data of the vehicle at the current vehicle time and the estimated data of the vehicle's traveling state at the current vehicle time to obtain the point cloud data of the vehicle at the current vehicle time.
  • the drivable area detection device can store the point cloud data of the vehicle at the current vehicle time in the drivable area detection memory.
  • the driveable area detection device can obtain the point cloud data of the vehicle at the current vehicle time and the estimated data of the vehicle's running state at the current vehicle time from the driveable area detection memory. Secondly, the driveable area detection device can generate the driveable area based on the point cloud data of the vehicle at the current vehicle time and the estimated data of the vehicle at the current vehicle time to obtain the driveable area of the vehicle at the current vehicle time. data. Then, the drivable area detection device can store the drivable area data of the vehicle at the current vehicle time in the drivable area detection memory.
  • the drivable area detection device can obtain the driving state data (ie CAN data) of the vehicle at the current vehicle time and the drivable area data of the vehicle at the current vehicle time from the drivable area detection memory. Secondly, the driving area detection device can determine whether the current driving scene of the vehicle meets a certain ROI parameter switching condition according to the driving state data of the vehicle at the current vehicle time (ie CAN data) and the driving area data of the vehicle at the current vehicle time ( Or call it driving scene judgment).
• if the current driving scene of the vehicle meets a certain ROI parameter switching condition, the drivable area detection device can adjust the current ROI parameter of the vehicle to the target ROI parameter corresponding to that ROI parameter switching condition.
  • the target ROI parameters can include: image preprocessing parameters, point cloud generation parameters, and drivable area generation parameters
• correspondingly, the drivable area detection device will adjust the ROI-adjustment-related image preprocessing parameters corresponding to the above-mentioned image preprocessing part, the ROI-adjustment-related point cloud generation parameters corresponding to the above-mentioned point cloud generation part, and the ROI-adjustment-related drivable area generation parameters corresponding to the above-mentioned drivable area generation part.
• it should be noted that if the target ROI parameter includes any image preprocessing parameter corresponding to the image preprocessing part, and that parameter is adjusted in the process of adjusting the current ROI parameter of the vehicle to the target ROI parameter, the drivable area detection device also needs to reinitialize the image preprocessing part; if the target ROI parameter includes any point cloud generation parameter corresponding to the point cloud generation part, and that parameter is adjusted in the same process, the drivable area detection device also needs to reinitialize the point cloud generation part; if the target ROI parameter includes any drivable area generation parameter corresponding to the drivable area generation part, and that parameter is adjusted in the same process, the drivable area detection device also needs to reinitialize the drivable area generation part.
• further, the drivable area detection device may also reproject the drivable area data of the vehicle at the current vehicle time to the new ROI corresponding to the target ROI parameter, so that at the next vehicle moment, not only can the calculation energy consumption be reduced, but the drivable area can also be detected accurately.
• the above description takes as an example that the drivable area detection device first executes the drivable area detection part at the current vehicle time and then executes the ROI adaptive adjustment part. It should be understood that the drivable area detection device may also first execute the ROI adaptive adjustment part at the current vehicle moment and then execute the drivable area detection part; for the drivable area detection part, reference may be made to the relevant drivable area detection part in FIG. 14, and for the ROI adaptive adjustment part, reference may be made to the relevant ROI adaptive adjustment part in FIG. 14 (but because the drivable area data of the vehicle at the current vehicle time has not yet been detected at that point, the drivable area data of the vehicle at the previous vehicle time is used in place of the drivable area data of the vehicle at the current vehicle time).
  • the specific process refer to the embodiment shown in FIG. 14, which is not limited in the embodiment of the present application.
• the execution sequence of the drivable area detection method provided in the embodiment of the present application at the next vehicle time may refer to the execution sequence of the drivable area detection method provided in the embodiment of the application at the current vehicle time, which is not limited in the embodiment of the present application.
• the drivable area detection device 150 of this embodiment of the present application may include: a first acquisition module 1501, a judgment module 1502, an adjustment module 1503, and a second acquisition module 1504.
  • the first acquisition module 1501 is used to acquire the driving state data and the drivable area data of the vehicle at the current vehicle time;
  • the judgment module 1502 is configured to judge whether the driving scene of the vehicle meets the ROI parameter switching condition of the region of interest based on the driving state data and the driving area data;
  • the adjustment module 1503 is configured to adjust the current ROI parameter of the vehicle to the target ROI parameter if the driving scene of the vehicle meets the ROI parameter switching condition;
  • the second acquisition module 1504 is configured to acquire data of the vehicle's drivable area at the next vehicle moment according to the target ROI parameters.
  • the target ROI parameters include at least one of the following: image preprocessing parameters, point cloud generation parameters, or drivable area generation parameters;
  • the image preprocessing parameters include at least one of the following: the size parameter of the image ROI, the position parameter of the image ROI, or the number of image zoom layers;
  • the point cloud generation parameters include at least one of the following: a support point grid step parameter, a support point sparsity parameter, or a support point distribution method parameter;
  • the drivable area generation parameters include: occupancy grid resolution parameters.
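The three-level parameter structure listed above (image preprocessing, point cloud generation, drivable area generation, each optional within a target ROI parameter) can be sketched as plain dataclasses. The class and field names are illustrative assumptions; the patent does not prescribe a data layout.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class ImagePreprocParams:
    roi_size: Tuple[int, int]      # size parameter of the image ROI
    roi_position: Tuple[int, int]  # position parameter of the image ROI
    zoom_layers: int               # number of image zoom layers

@dataclass
class PointCloudParams:
    grid_step: Tuple[int, int]     # support point grid step, e.g. (8, 8)
    sparseness: float              # support point sparseness parameter
    distribution: str              # support point distribution method

@dataclass
class DrivableAreaParams:
    occupancy_resolution: float    # occupancy grid cell size in metres

@dataclass
class TargetRoiParams:
    # "at least one of the following": every group is optional
    image: Optional[ImagePreprocParams] = None
    point_cloud: Optional[PointCloudParams] = None
    area: Optional[DrivableAreaParams] = None
```

Leaving a group as `None` models the case where the target ROI parameter omits it, in which case the device keeps using the corresponding parameter from the vehicle's current ROI parameter.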
  • the ROI parameter switching condition includes any one of the following:
• the preset ROI parameter switching conditions of the congested road driving scene include: the vehicle is driving in the current driving lane at a driving speed less than the first preset speed along the driving direction of the previous vehicle moment, and there is an obstacle in the current driving lane within the first preset distance from the vehicle;
• the preset ROI parameter switching conditions for driving scenes on uphill and downhill roads in a narrow space include: the vehicle is traveling in the current lane at a speed less than the first preset speed along the driving direction of the previous vehicle moment, the state of the accelerator pedal and the state of the brake pedal of the vehicle change intermittently, and the vehicle has a moving distance greater than the second preset distance in the vertical and horizontal directions.
  • the preset ROI parameter switching conditions of the highway driving scene include any of the following: the preset ROI parameter switching conditions of the first sub-scene, the preset ROI parameter switching conditions of the second sub-scene, The preset ROI parameter switching condition of the third sub-scene, or the preset ROI parameter switching condition of the fourth sub-scene;
• the preset ROI parameter switching conditions of the first sub-scene include: the vehicle is traveling in the current travel lane at a travel speed greater than the second preset speed along the travel direction of the previous vehicle moment, and there are no obstacles within the third preset distance from the vehicle in the adjacent lanes of the current travel lane;
• the preset ROI parameter switching conditions of the second sub-scene include: the vehicle is traveling in the current travel lane at a travel speed greater than the second preset speed along the travel direction of the previous vehicle moment, and there is an obstacle within the third preset distance from the vehicle in an adjacent lane of the current travel lane;
• the preset ROI parameter switching conditions of the third sub-scene include: the vehicle is traveling in the current travel lane at a travel speed greater than the second preset speed along the travel direction of the previous vehicle moment, and there is an obstacle within the third preset distance from the vehicle in the current travel lane;
• the preset ROI parameter switching conditions of the fourth sub-scene include: the vehicle is traveling in the current travel lane at a travel speed greater than the second preset speed along a travel direction different from that of the previous vehicle moment.
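One of the switching conditions above, the congested-road condition, can be sketched as a predicate over the driving state data and drivable area data. The thresholds `v1` (the first preset speed) and `d1` (the first preset distance) and the dictionary keys are illustrative placeholders; the patent leaves the preset values unspecified.

```python
def meets_congested_road_condition(state, area, v1=4.0, d1=20.0):
    """True if: driving slower than the first preset speed, in the same
    direction as at the previous vehicle moment, with an obstacle in the
    current lane within the first preset distance. v1 in m/s, d1 in m."""
    return (state["speed"] < v1
            and state["direction"] == state["prev_direction"]
            and area["nearest_obstacle_in_lane"] is not None
            and area["nearest_obstacle_in_lane"] < d1)

slow = {"speed": 2.0, "direction": "N", "prev_direction": "N"}
print(meets_congested_road_condition(slow, {"nearest_obstacle_in_lane": 8.0}))  # True
```

The other scene conditions (narrow-space up/downhill, the four highway sub-scenes) would be analogous predicates, and the ROI adaptive adjustment part would evaluate them in turn to pick the target ROI parameter.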
  • the second acquisition module includes:
• the first acquisition unit is used to obtain the travel state estimation data and point cloud data of the vehicle at the next vehicle time according to the target ROI parameters, the travel state estimation data of the vehicle at the current vehicle time, the drivable area data, and the binocular image data of the vehicle at the next vehicle time;
  • the second acquiring unit is used to acquire the data of the drivable area of the vehicle at the next vehicle time according to the estimated data and point cloud data of the driving state of the vehicle at the next vehicle time.
  • the first acquiring unit is specifically configured to:
  • point cloud generation processing is performed on the image processing data and the driving state estimation data of the vehicle at the next vehicle time to obtain the point cloud data of the vehicle at the next vehicle time.
  • the target ROI parameter further includes: a drivable area generation parameter
  • the second acquiring unit is specifically configured to:
• perform drivable area generation processing on the travel state estimation data and point cloud data of the vehicle at the next vehicle time, to obtain the drivable area data of the vehicle at the next vehicle time.
  • the foregoing device further includes:
  • the projection module is used to project the travelable area data of the vehicle at the current vehicle moment to the ROI corresponding to the target ROI parameter.
  • the driving state data includes at least one of the following: driving speed, driving direction, accelerator pedal state, and brake pedal state.
  • the device for detecting a drivable area provided in the embodiment of the present application can be used to implement the technical solutions in the above-mentioned method for detecting a drivable area of the present application.
  • the implementation principles and technical effects are similar, and will not be repeated here.
  • FIG. 16 is a schematic structural diagram of a drivable area detection device provided by another embodiment of this application.
  • the drivable area detection device 160 of this embodiment of the present application may include: a processor 1601, a memory 1602, and a communication interface 1603 .
  • the communication interface 1603 is used to obtain data to be processed (for example, driving state data, and/or binocular image data, etc.);
  • the memory 1602 is used to store program instructions;
• the processor 1601 is configured to call and execute the program instructions stored in the memory 1602; when the processor 1601 executes the program instructions stored in the memory 1602, the drivable area detection device is used to execute, on the data to be processed, the technical solutions in the drivable area detection method embodiments described in this application to obtain processed data (for example, drivable area data, etc.), so that the communication interface 1603 is also used to output the processed data.
• the implementation principles and technical effects are similar and will not be repeated here.
  • the memory 1602 in the embodiment of the present application may also be used to store intermediate result data of the drivable area detection device in the process of executing the technical solution in the above-mentioned drivable area detection method embodiment of the present application.
  • the communication interface involved in the embodiment of the present application may include, but is not limited to: an image data interface, and/or a CAN data interface.
  • the embodiment of the present application also provides a chip, which may include the above-mentioned drivable area detection device of the present application, or is used to support the drivable area detection device to realize the functions shown in the embodiment of the present application.
  • the chip when the chip in the electronic device implements the above method, the chip may include a processing unit. Further, the chip may also include a communication unit, and the processing unit may be, for example, a processor; when the chip includes a communication unit, The communication unit may be, for example, an input/output interface, a pin, a circuit, or the like. Wherein, the processing unit executes all or part of the actions executed by each processing module in the embodiments of the present application, and the communication unit can execute corresponding receiving or acquiring actions.
  • An embodiment of the present application also provides an in-vehicle device, which may include the above-mentioned drivable area detection device of the present application.
  • the vehicle-mounted terminal in the embodiment of the present application may further include: the ECU 12 and the controller 13 as shown in FIG. 1; of course, the vehicle-mounted terminal in the embodiment of the present application may also include other devices. This is not limited.
  • the embodiment of the present application further provides a computer-readable storage medium, the computer-readable storage medium is used to store a computer program, and the computer program is used to implement the technical solution in the above-mentioned driving area detection method embodiment of the present application.
  • the implementation principle and technical effect are similar, and will not be repeated here.
• the embodiment of the present application also provides a chip system, which includes a processor and may also include a memory and a communication interface, and which is used to implement the technical solutions in the above-mentioned drivable area detection method embodiments of the present application; the implementation principles and technical effects are similar and will not be repeated here.
  • the chip system may be composed of chips, or may include chips and other discrete devices.
  • the embodiment of the present application also provides a program, which is used to execute the technical solution in the above-mentioned driving area detection method embodiment of the present application when the program is executed by the processor. Its implementation principles and technical effects are similar and will not be repeated here.
  • the embodiment of the present application also provides a computer program product containing instructions, which when running on a computer, causes the computer to execute the technical solution in the above-mentioned driving area detection method embodiment of the present application.
• the implementation principles and technical effects are similar and will not be repeated here.
• the processors involved in the embodiments of the present application may be general-purpose processors, digital signal processors, application-specific integrated circuits, field programmable gate arrays or other programmable logic devices, discrete gate or transistor logic devices, or discrete hardware components, and may implement or perform the methods, steps, and logic block diagrams disclosed in the embodiments of the present application.
  • the general-purpose processor may be a microprocessor or any conventional processor or the like.
  • the steps of the method disclosed in combination with the embodiments of the present application may be directly embodied as being executed and completed by a hardware processor, or executed and completed by a combination of hardware and software modules in the processor.
  • the memory involved in the embodiments of the present application may be a non-volatile memory, such as a hard disk drive (HDD) or a solid-state drive (SSD), etc., or a volatile memory (volatile memory), for example Random-access memory (random-access memory, RAM).
• the memory may also be any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto.
  • the disclosed device and method may be implemented in other ways.
• the device embodiments described above are merely illustrative; for example, the division of the units is only a logical function division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • the functional units in the various embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated unit may be implemented in the form of hardware, or may be implemented in the form of hardware plus software functional units.
  • the size of the sequence number of each process does not mean the order of execution.
  • the order of execution of each process should be determined by its function and internal logic. There should be any limitation on the implementation process of the embodiments of the present application.
  • all or part of the foregoing may be implemented by software, hardware, firmware, or any combination thereof.
  • when software is used, it can be implemented in the form of a computer program product in whole or in part.
  • the computer program product includes one or more computer instructions.
  • the computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device.
  • the computer instructions may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another computer-readable storage medium.
  • for example, the computer instructions may be transmitted from a website, computer, server, or data center to another website, computer, server, or data center by wired (e.g., coaxial cable, optical fiber, or digital subscriber line (DSL)) or wireless (e.g., infrared, radio, or microwave) means.
  • the computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device, such as a server or a data center, integrating one or more available media.
  • the usable medium may be a magnetic medium (e.g., a floppy disk, a hard disk, or a magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a solid-state drive (SSD)).

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Traffic Control Systems (AREA)

Abstract

Embodiments of this application provide a drivable area detection method, apparatus, device, and storage medium. The method includes: when it is detected that a vehicle's driving scene meets a target ROI parameter switching condition, adjusting the vehicle's current ROI parameters to the target ROI parameters corresponding to that switching condition, and then performing drivable area detection according to the target ROI parameters. Thus, in the embodiments of this application, different ROI parameters can be used for different road driving scenes, which helps reduce computational energy consumption, thereby not only saving computing resources but also improving local detection accuracy.

Description

可行驶区域检测方法、装置、设备及存储介质
本申请要求于2020年05月20日提交中国专利局、申请号为202010429511.0、申请名称为“可行驶区域检测方法、装置、设备及存储介质”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及智能车辆领域,尤其涉及一种可行驶区域检测方法、装置、设备及存储介质。
背景技术
在先进驾驶员辅助系统(Advanced Driver Assistance Systems,ADAS)系统中,可行驶区域检测具有非常重要的作用。基于可行驶区域检测可以降低目标对象检测的误检率,也可以用于目标对象的辅助测距。尤其是当道路场景中存在一些异形障碍物对象时,可行驶区域检测可以有效地检测出这些对象,从而可以提高自动驾驶系统的感知能力。
目前主要的可行驶区域检测包括:基于单目视觉的可行驶区域检测方法和基于双目视觉的可行驶区域检测方法。其中,相对于基于单目视觉的可行驶区域检测方法,基于双目视觉的可行驶区域检测方法可以适用于更加复杂的行驶场景,但基于双目视觉的可行驶区域检测方法的有效检测距离相对于基于单目视觉的可行驶区域检测方法的有效检测距离较短。因此,相关技术中的基于双目视觉的可行驶区域检测方法通常会采用较高的分辨率,以便于为了实现远距离大范围的检测。
但相关技术中的基于双目视觉的可行驶区域检测方法中的感兴趣区域(Region of interest,ROI)参数是固定的,也就是说不管在任何道路行驶场景下,上述基于双目视觉的可行驶区域检测方法都需要较大的计算资源,因此,上述基于双目视觉的可行驶区域检测方法在任何道路行驶场景下的计算能耗都很大。
发明内容
本申请实施例提供一种可行驶区域检测方法、装置、设备及存储介质,不仅可以节省计算资源,还可以提高局部检测精度。
第一方面,本申请实施例提供一种可行驶区域检测方法,包括:
获取车辆在当前车辆时刻的行驶状态数据及可行驶区域数据;
基于上述行驶状态数据及可行驶区域数据,判断上述车辆的行驶场景是否符合感兴趣区域ROI参数切换条件;
若车辆的行驶场景符合ROI参数切换条件,则将车辆当前的ROI参数调整为目标ROI参数;
根据目标ROI参数,获取车辆在下一车辆时刻的可行驶区域数据。
本申请实施例中,通过在检测到车辆的行驶场景符合目标ROI参数切换条件时, 将车辆当前的ROI参数调整为与目标ROI参数切换条件对应的目标ROI参数,然后根据目标ROI参数进行可行驶区域检测。可见,本申请实施例中,针对不同道路行驶场景可以采用不同的ROI参数,有利于降低计算能耗,从而不仅可以节省计算资源,还可以提高局部检测精度。
在一种可能的实现方式中,目标ROI参数包括以下至少一项:图像预处理参数、点云生成参数,或者可行驶区域生成参数;
其中,图像预处理参数包括以下至少一项:图像ROI的尺寸参数、图像ROI的位置参数,或者图像缩放层数;
点云生成参数包括以下至少一项:支撑点网格步长参数、支撑点稀疏程度参数,或者支撑点分布方式参数;
可行驶区域生成参数包括:占位网格分辨率参数。
在一种可能的实现方式中,ROI参数切换条件包括以下任一项:
高速道路行驶场景的预设ROI参数切换条件、拥堵道路行驶场景的预设ROI参数切换条件,或者狭窄空间上下坡道路行驶场景的预设ROI参数切换条件;
其中,拥堵道路行驶场景的预设ROI参数切换条件包括:车辆在当前行进车道上按照小于第一预设速度的行驶速度沿着上一车辆时刻的行驶方向行驶,且当前行进车道上距离车辆的第一预设距离内存在障碍物;
狭窄空间上下坡道路行驶场景的预设ROI参数切换条件包括:车辆在当前行进车道上按照小于第一预设速度的行驶速度沿着上一车辆时刻的行驶方向行驶、车辆的油门踏板状态和制动踏板状态断续变化,且车辆在垂直于水平地面的方向上存在大于第二预设距离的移动距离。
在一种可能的实现方式中,高速道路行驶场景的预设ROI参数切换条件包括以下任一项:第一子场景的预设ROI参数切换条件、第二子场景的预设ROI参数切换条件、第三子场景的预设ROI参数切换条件,或者第四子场景的预设ROI参数切换条件;
其中,第一子场景的预设ROI参数切换条件包括:车辆在当前行进车道上按照大于第二预设速度的行驶速度沿着上一车辆时刻的行驶方向行驶,且当前行进车道的相邻行进车道上距离车辆的第三预设距离内无障碍物;
第二子场景的预设ROI参数切换条件包括:车辆在当前行进车道上按照大于第二预设速度的行驶速度沿着上一车辆时刻的行驶方向行驶,且当前行进车道的相邻行进车道上距离车辆的第三预设距离内存在障碍物;
第三子场景的预设ROI参数切换条件包括:车辆在当前行进车道上按照大于第二预设速度的行驶速度沿着上一车辆时刻的行驶方向行驶,且当前行进车道上距离车辆的第三预设距离内存在障碍物;
第四子场景的预设ROI参数切换条件包括:车辆在当前行进车道上按照大于第二预设速度的行驶速度沿着不同于上一车辆时刻的行驶方向行驶。
在一种可能的实现方式中,根据目标ROI参数,获取车辆在下一车辆时刻的可行驶区域数据,包括:
根据目标ROI参数、车辆在当前车辆时刻的行驶状态估计数据、可行驶区域数据,以及车辆在下一车辆时刻的双目图像数据,获取车辆在下一车辆时刻的行驶状态估计数据和点云数据;
根据车辆在下一车辆时刻的行驶状态估计数据和点云数据,获取车辆在下一车辆时刻的可行驶区域数据。
在一种可能的实现方式中,若目标ROI参数包括:图像预处理参数和点云生成参数,根据目标ROI参数、车辆在当前车辆时刻的行驶状态估计数据、可行驶区域数据,以及车辆在下一车辆时刻的双目图像数据,获取车辆在下一车辆时刻的行驶状态估计数据和点云数据,包括:
根据图像预处理参数、车辆在当前车辆时刻的行驶状态估计数据和可行驶区域数据,对车辆在下一车辆时刻的双目图像数据进行图像预处理,得到车辆在下一车辆时刻的图像处理数据;
根据车辆在当前车辆时刻的行驶状态估计数据和车辆在下一车辆时刻的双目图像数据进行状态估计处理,得到车辆在下一车辆时刻的行驶状态估计数据;
根据点云生成参数对车辆在下一车辆时刻的图像处理数据和行驶状态估计数据进行点云生成处理,得到车辆在下一车辆时刻的点云数据。
在一种可能的实现方式中,若目标ROI参数还包括:可行驶区域生成参数,根据车辆在下一车辆时刻的行驶状态估计数据和点云数据,获取车辆在下一车辆时刻的可行驶区域数据,包括:
根据可行驶区域生成参数对车辆在下一车辆时刻的行驶状态估计数据和点云数据进行可行驶区域生成处理,得到车辆在下一车辆时刻的可行驶区域数据。
在一种可能的实现方式中,根据目标ROI参数,获取车辆在下一车辆时刻的可行驶区域数据之前,方法还包括:将车辆在当前车辆时刻的可行驶区域数据投影到目标ROI参数对应的ROI,以便于在下一车辆时刻不仅可以降低计算能耗,还可以准确地进行可行驶区域检测。
在一种可能的实现方式中,行驶状态数据包括以下至少一项:行驶速度、行驶方向、油门踏板状态、制动踏板状态。
第二方面,本申请实施例提供一种可行驶区域检测装置,包括:
第一获取模块,用于获取车辆在当前车辆时刻的行驶状态数据及可行驶区域数据;
判断模块,用于基于上述行驶状态数据及可行驶区域数据,判断上述车辆的行驶场景是否符合感兴趣区域ROI参数切换条件;
调整模块,用于若车辆的行驶场景符合ROI参数切换条件,则将车辆当前的ROI参数调整为目标ROI参数;
第二获取模块,用于根据目标ROI参数,获取车辆在下一车辆时刻的可行驶区域数据。
在一种可能的实现方式中,目标ROI参数包括以下至少一项:图像预处理参数、点云生成参数,或者可行驶区域生成参数;
其中,图像预处理参数包括以下至少一项:图像ROI的尺寸参数、图像ROI的位置参数,或者图像缩放层数;
点云生成参数包括以下至少一项:支撑点网格步长参数、支撑点稀疏程度参数,或者支撑点分布方式参数;
可行驶区域生成参数包括:占位网格分辨率参数。
在一种可能的实现方式中,ROI参数切换条件包括以下任一项:
高速道路行驶场景的预设ROI参数切换条件、拥堵道路行驶场景的预设ROI参数切换条件,或者狭窄空间上下坡道路行驶场景的预设ROI参数切换条件;
其中,拥堵道路行驶场景的预设ROI参数切换条件包括:车辆在当前行进车道上按照小于第一预设速度的行驶速度沿着上一车辆时刻的行驶方向行驶,且当前行进车道上距离车辆的第一预设距离内存在障碍物;
狭窄空间上下坡道路行驶场景的预设ROI参数切换条件包括:车辆在当前行进车道上按照小于第一预设速度的行驶速度沿着上一车辆时刻的行驶方向行驶、车辆的油门踏板状态和制动踏板状态断续变化,且车辆在垂直于水平地面的方向上存在大于第二预设距离的移动距离。
在一种可能的实现方式中,高速道路行驶场景的预设ROI参数切换条件包括以下任一项:第一子场景的预设ROI参数切换条件、第二子场景的预设ROI参数切换条件、第三子场景的预设ROI参数切换条件,或者第四子场景的预设ROI参数切换条件;
其中,第一子场景的预设ROI参数切换条件包括:车辆在当前行进车道上按照大于第二预设速度的行驶速度沿着上一车辆时刻的行驶方向行驶,且当前行进车道的相邻行进车道上距离车辆的第三预设距离内无障碍物;
第二子场景的预设ROI参数切换条件包括:车辆在当前行进车道上按照大于第二预设速度的行驶速度沿着上一车辆时刻的行驶方向行驶,且当前行进车道的相邻行进车道上距离车辆的第三预设距离内存在障碍物;
第三子场景的预设ROI参数切换条件包括:车辆在当前行进车道上按照大于第二预设速度的行驶速度沿着上一车辆时刻的行驶方向行驶,且当前行进车道上距离车辆的第三预设距离内存在障碍物;
第四子场景的预设ROI参数切换条件包括:车辆在当前行进车道上按照大于第二预设速度的行驶速度沿着不同于上一车辆时刻的行驶方向行驶。
在一种可能的实现方式中,第二获取模块,包括:
第一获取单元,用于根据目标ROI参数、车辆在当前车辆时刻的行驶状态估计数据、可行驶区域数据,以及车辆在下一车辆时刻的双目图像数据,获取车辆在下一车辆时刻的行驶状态估计数据和点云数据;
第二获取单元,用于根据车辆在下一车辆时刻的行驶状态估计数据和点云数据,获取车辆在下一车辆时刻的可行驶区域数据。
在一种可能的实现方式中,若目标ROI参数包括:图像预处理参数和点云生成参数,第一获取单元具体用于:
根据图像预处理参数、车辆在当前车辆时刻的行驶状态估计数据和可行驶区域数据,对车辆在下一车辆时刻的双目图像数据进行图像预处理,得到车辆在下一车辆时刻的图像处理数据;
根据车辆在当前车辆时刻的行驶状态估计数据和车辆在下一车辆时刻的双目图像数据进行状态估计处理,得到车辆在下一车辆时刻的行驶状态估计数据;
根据点云生成参数对车辆在下一车辆时刻的图像处理数据和行驶状态估计数据进行点云生成处理,得到车辆在下一车辆时刻的点云数据。
在一种可能的实现方式中,若目标ROI参数还包括:可行驶区域生成参数,第二获取单元具体用于:
根据可行驶区域生成参数对车辆在下一车辆时刻的行驶状态估计数据和点云数据进行可行驶区域生成处理,得到车辆在下一车辆时刻的可行驶区域数据。
在一种可能的实现方式中,装置还包括:
投影模块,用于将车辆在当前车辆时刻的可行驶区域数据投影到目标ROI参数对应的ROI。
在一种可能的实现方式中,行驶状态数据包括以下至少一项:行驶速度、行驶方向、油门踏板状态、制动踏板状态。
第三方面,本申请实施例提供一种可行驶区域检测装置,包括:处理器、存储器和通信接口;
其中,所述通信接口用于获取待处理的数据;
所述存储器,用于存储程序指令;
所述处理器,用于调用并执行所述存储器中存储的程序指令,当所述处理器执行所述存储器存储的程序指令时,所述可行驶区域检测装置用于对待处理的数据执行上述第一方面的任意实现方式所述的方法,得到处理后的数据;
所述通信接口还用于输出处理后的数据。
第四方面,本申请实施例提供一种芯片,包括上述第三方面的任意实现方式所述的可行驶区域检测装置。
第五方面,本申请实施例提供一种车载设备,包括上述第三方面的任意实现方式所述的可行驶区域检测装置。
第六方面,本申请实施例提供一种计算机可读存储介质,所述计算机可读存储介质用于存储计算机程序,所述计算机程序用于实现上述第一方面的任意实现方式所述的方法。
第七方面,本申请实施例提供一种芯片系统,该芯片系统包括处理器,还可以包括存储器和通信接口,用于实现上述第一方面的任意实现方式所述的方法。示例性地,该芯片系统可以由芯片构成,也可以包含芯片和其他分立器件。
第八方面,本申请实施例提供一种程序,该程序在被处理器执行时用于执行上述第一方面的任意实现方式所述的方法。
第九方面,本申请实施例提供一种包含指令的计算机程序产品,当其在计算机上运行时,使得计算机执行上述第一方面的任意实现方式所述的方法。
附图说明
图1为本申请实施例提供的车辆中硬件系统的架构示意图;
图2为本申请实施例提供的车辆中软件系统的架构示意图;
图3为本申请实施例提供的车辆中软件系统的执行时序示意图;
图4为本申请实施例提供的可行驶区域检测方法的主要组成部分的示意图一;
图5为本申请实施例提供的可行驶区域检测方法的主要组成部分的示意图二;
图6为本申请实施例提供的点云生成参数的调整对系统性能的影响示意图一;
图7为本申请实施例提供的点云生成参数的调整对系统性能的影响示意图二;
图8为本申请一实施例提供的可行驶区域检测方法的流程示意图;
图9为本申请实施例提供的包含ROI的图像示意图一;
图10为本申请实施例提供的包含ROI的图像示意图二;
图11为本申请实施例提供的包含ROI的图像示意图三;
图12为本申请实施例提供的包含ROI的图像示意图四;
图13为本申请实施例提供的包含ROI的图像示意图五;
图14为本申请实施例提供的可行驶区域检测方法的执行时序示意图;
图15为本申请一实施例提供的可行驶区域检测装置的结构示意图;
图16为本申请另一实施例提供的可行驶区域检测装置的结构示意图。
具体实施方式
首先,对本申请实施例所涉及的应用场景和部分词汇进行解释说明。
本申请实施例提供的可行驶区域检测方法、装置、设备及存储介质可以应用在车辆在不同道路行驶场景下的基于双目视觉的可行驶区域检测场景。
示例性地,本申请实施例提供的可行驶区域检测方法、装置、设备及存储介质可以应用在高速道路行驶场景下的可行驶区域检测场景、拥堵道路行驶场景下的可行驶区域检测场景以及狭窄空间上下坡道路行驶场景(例如上下多层停车库场景等)下的可行驶区域检测场景。下面分别对上述高速道路行驶场景、拥堵道路行驶场景和狭窄空间上下坡道路行驶场景进行介绍。
1)高速道路行驶场景:
示例性地,车辆的行驶速度大于第二预设速度(例如40km/h)。
2)拥堵道路行驶场景:
示例性地,车辆的行驶速度小于第一预设速度(例如20km/h),且当前行进车道上距离车辆的第一预设距离(例如15m)内存在障碍物。
3)狭窄空间上下坡道路行驶场景:
示例性地,车辆的行驶速度小于第一预设速度(例如20km/h)、车辆的油门踏板状态和制动踏板状态断续变化(例如,驾驶员断续踩踏油门踏板和制动踏板),且车辆在垂直于水平地面的方向上存在大于第二预设距离(例如0.5m)的移动距离,例如,1s内自车运动方向在Z轴(或者说与地面垂直向上的轴向)上移动6m。
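上述三种行驶场景的判定条件可以用如下Python草图表示(仅为示意性的实现假设,函数名为本文虚构,默认阈值取自上文示例数值,并非本申请的实际实现):

```python
def classify_scene(speed_kmh, obstacle_dist_m, pedal_intermittent, vertical_move_m,
                   v_high=40.0, v_low=20.0, near_dist=15.0, z_thresh=0.5):
    """按上文示例阈值对行驶场景进行粗略判定:
    速度大于40km/h判为高速道路行驶场景;速度小于20km/h时,
    若油门/制动踏板断续变化且垂直方向移动距离大于0.5m,
    判为狭窄空间上下坡道路行驶场景;若近处存在障碍物,判为拥堵道路行驶场景。"""
    if speed_kmh > v_high:
        return "highway"
    if speed_kmh < v_low:
        if pedal_intermittent and vertical_move_m > z_thresh:
            return "narrow_slope"
        if obstacle_dist_m is not None and obstacle_dist_m < near_dist:
            return "congested"
    return "unknown"
```

实际系统中,这些输入分别来自CAN总线数据(行驶速度、踏板状态)与上一时刻的可行驶区域数据(障碍物距离)。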
当然,本申请实施例提供的可行驶区域检测方法、装置、设备及存储介质还可以应用在其它场景,本申请实施例中对此并不作限制。
本申请实施例中,本申请实施例提供的可行驶区域检测方法的执行主体可以是可行驶区域检测装置。示例性地,可行驶区域检测装置可以是芯片、芯片系统、电路或者模块等,本申请不作限制。
示例性地,本申请实施例中涉及的可行驶区域检测装置可以为芯片系统;当然,还可以为其它具有数据和/或图像等处理功能的计算装置。
图1为本申请实施例提供的车辆中硬件系统的架构示意图。如图1所示,车辆中硬件系统的架构示意图中可以包括但不限于:双目相机10、芯片系统11、电子控制单元(Electronic Control Unit,ECU)12、控制器13和控制器局域网络(Controller Area Network,CAN)总线14。其中,双目相机10用于采集图像数据;CAN总线14用于提供车辆的行驶状态数据;芯片系统11用于根据双目相机10采集的图像数据和CAN总线14提供的数据进行可行驶区域检测;ECU 12用于根据芯片系统11的检测结果和CAN总线14提供的数据确定控制决策;控制器13用于根据ECU 12的控制决策控制车辆的运动。应理解,芯片系统11可以采用本申请实施例提供的可行驶区域检测方法。
图2为本申请实施例提供的车辆中软件系统的架构示意图。如图2所示,车辆中软件系统的架构示意图中可以包括但不限于:驱动层20、业务软件层21、规划与控制层22和执行层23。其中,驱动层20用于读取车辆内所有车载传感器的数据,例如可以包括但不限于:双目相机的图像数据和/或CAN总线提供的数据;业务软件层21用于执行车辆检测、行人检测和/或可行驶区域检测等任务;规划与控制层22用于根据业务软件层21的所有检测结果(例如可以包括但不限于:可行驶区域检测结果)进行路径规划并生成控制命令传达至执行层23;执行层23用于根据规划与控制层22生成的控制命令调用车辆内的车载设备从而控制车辆的运动。
图3为本申请实施例提供的车辆中软件系统的执行时序示意图。如图3所示,1)驱动层获取车辆内所有车载传感器的数据,例如双目相机的图像数据;2)业务软件层从驱动层获取双目相机的图像数据,其次对双目相机的图像数据进行图像校准处理,然后进行可行驶区域检测,最后向规划与控制层输出检测结果;3)规划与控制层根据检测结果进行路径规划和生成控制命令,并向执行层传达控制命令;4)执行层根据控制命令调用车辆内的车载设备从而控制车辆的运动。
针对相关技术中的基于双目视觉的可行驶区域检测方法中的ROI参数是固定的,也就是说不管在任何道路行驶场景下,上述基于双目视觉的可行驶区域检测方法都需要较大的计算资源,导致相关技术中的基于双目视觉的可行驶区域检测方法在任何道路行驶场景下的计算能耗都很大的技术问题,本申请实施例中,通过在检测到车辆的行驶场景符合ROI参数切换条件时,将车辆当前的ROI参数调整为与当前所符合的ROI参数切换条件对应的目标ROI参数,然后根据目标ROI参数进行可行驶区域检测。可见,本申请实施例中,针对不同道路行驶场景可以采用不同的ROI参数,有利于降低计算能耗,从而不仅可以节省计算资源,还可以提高局部检测精度。
图4为本申请实施例提供的可行驶区域检测方法的主要组成部分的示意图一。如图4所示,本申请实施例提供的可行驶区域检测方法的主要组成部分可以包括但不限于:图像预处理部分、自我运动(ego motion)估计部分、点云生成部分和可行驶区域生成部分和场景自适应ROI决策部分。
其中,场景自适应ROI决策部分用于自动地检测车辆的当前行驶场景,并在检测到车辆的行驶场景符合ROI参数切换条件时,通过将车辆当前的ROI参数调整为与当前所符合的ROI参数切换条件对应的目标ROI参数,从而实现降低计算能耗并提高局部检测精度的目标。需要说明的是,上述目标ROI参数可以包括但不限于:上述图像预处理部分对应的与ROI调整相关的图像预处理参数、点云生成部分对应的与ROI调整相关的点云生成参数,和/或,可行驶区域生成部分对应的与ROI调整相关的可行驶区域生成参数中的至少一项参数。
图5为本申请实施例提供的可行驶区域检测方法的主要组成部分的示意图二。在上述图4所示实施例的基础上,图5所示实施例对各主要组成部分进行详细的介绍。如图5所示,本申请实施例提供的可行驶区域检测方法的主要组成部分可以包括但不限于:图像预处理部分、自我运动估计部分、点云生成部分和可行驶区域生成部分和场景自适应ROI决策部分。
其中,图像预处理部分用于对双目相机的图像数据进行图像预处理,实现图像ROI设置、图像缩放、图像颜色转换(例如彩色转灰度等)、图像增强等预处理功能。
自我运动估计部分用于实现自车定位功能。
点云生成部分用于实现双目相机的图像数据的特征描述器生成、支撑点生成、支撑点三角化、视差图生成、点云生成等深度图估计和3D点云生成功能。
可行驶区域生成部分用于实现网格设置、根据自我运动估计部分的估计结果更新高程地图、将新生成的点云加入高程地图、更新可行驶区域以生成Stixel(即用竖条纹表示的障碍物区域)等功能。
场景自适应ROI决策部分用于根据车辆在当前车辆时刻的CAN总线提供的纵向数据(例如,油门踏板状态、制动踏板状态、行驶速度,和/或行驶方向等)和可行驶区域数据,自动地检测车辆的当前行驶场景,并在检测到车辆的行驶场景符合ROI参数切换条件时,通过将车辆当前的ROI参数调整为与当前所符合的ROI参数切换条件对应的目标ROI参数,从而实现降低计算能耗并提高局部检测精度的目标。
本申请下述实施例依次对上述图像预处理部分对应的与ROI调整相关的图像预处理参数、点云生成部分对应的与ROI调整相关的点云生成参数,以及可行驶区域生成部分对应的与ROI调整相关的可行驶区域生成参数进行介绍。
1)图像预处理部分对应的与ROI调整相关的图像预处理参数
示例性地,本申请实施例中涉及的上述图像预处理参数可以包括但不限于:图像ROI设置参数,和/或,图像缩放层数;其中,图像ROI设置参数可以包括但不限于:图像ROI的尺寸参数,和/或,图像ROI的位置参数。
表1为关于图像预处理参数的调整对系统性能的影响示意表
2)点云生成部分对应的与ROI调整相关的点云生成参数
示例性地,本申请实施例中涉及的上述点云生成参数可以包括但不限于以下至少一项:支撑点网格步长参数、支撑点稀疏程度参数,或者支撑点分布方式参数。
表2为关于点云生成参数的调整对系统性能的影响示意表
图6为本申请实施例提供的点云生成参数的调整对系统性能的影响示意图一。1)如图6所示,沿着箭头方向,设置ROI距离越远,则支撑点网格步长参数越小,例如宽高步长由8*8依次变为6*6和4*4,以便提高检测精度,从而降低漏检概率。2)如图6所示,支撑点稀疏程度参数越稀疏,则计算能耗越低但相应的检测精度也降低了,因此,对于近距离ROI可以采用更稀疏的分布,对于远距离ROI可以采用更密集的分布。3)支撑点分布方式参数所指示的分布方式可以包括但不限于:均匀分布方式或者集中分布到检测出的已知对象上。对于近距离ROI,通常采用均匀分布方式,可以降低漏检率,对于远距离ROI,通常采用将支撑点分布到已知对象上的方式,可以提高检测精度和降低漏检率。
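支撑点网格步长与支撑点数量(进而与计算能耗、检测精度)的关系可以用如下草图说明(仅为示意,假设在图像ROI内按固定步长均匀布点,函数名为本文虚构):

```python
def support_points(width, height, step):
    """在width*height的ROI内按step*step网格生成支撑点坐标:
    步长越小(如4*4),支撑点越密集,检测精度越高,但计算能耗也越大。"""
    return [(x, y) for y in range(0, height, step) for x in range(0, width, step)]
```

例如,同一块16*16的区域,步长8*8只产生4个支撑点,而步长4*4产生16个支撑点,计算量随之增加。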
图7为本申请实施例提供的点云生成参数的调整对系统性能的影响示意图二,如图7所示,显示了支撑点的分布稀疏程度对可行驶区域的检测精度的影响。为了提高像素深度估计的计算效率,首先会分布稀疏的支撑点,然后计算支撑点的深度,而对于其他像素的深度通过三角化支撑点区域进行近似深度估计。如图7(a)所示,支撑点越密集,则其它像素的深度估计会越准确,如图7(b)所示,支撑点越稀疏,则其它像素的深度估计会越粗糙。
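上文所述"通过三角化支撑点区域进行近似深度估计"的做法,可以用三角形内的重心坐标线性插值来示意(仅为草图,并非本申请的实际算法,函数名为本文虚构):

```python
def interp_depth(p, tri, depths):
    """在由三个支撑点构成的三角形内,用重心坐标对像素p的深度做线性插值。
    tri为三个支撑点的(x, y)坐标,depths为这三个支撑点对应的深度值。"""
    (x1, y1), (x2, y2), (x3, y3) = tri
    x, y = p
    # 重心坐标的标准求解公式
    det = (y2 - y3) * (x1 - x3) + (x3 - x2) * (y1 - y3)
    a = ((y2 - y3) * (x - x3) + (x3 - x2) * (y - y3)) / det
    b = ((y3 - y1) * (x - x3) + (x1 - x3) * (y - y3)) / det
    return a * depths[0] + b * depths[1] + (1 - a - b) * depths[2]
```

支撑点越密集,三角形越小,这种线性近似与真实深度的偏差越小,这与图7(a)、图7(b)的对比相对应。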
3)可行驶区域生成部分对应的与ROI调整相关的可行驶区域生成参数
示例性地,本申请实施例中涉及的上述可行驶区域生成参数可以包括但不限于:占位网格分辨率参数。
表3为关于可行驶区域生成参数的调整对系统性能的影响示意表
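占位网格分辨率参数的作用可以用如下草图示意(仅为假设性的实现,函数名、网格尺寸与高度阈值均为示例值,并非本申请的实际实现):

```python
def build_occupancy_grid(points, cell_m=0.2, extent_m=50.0, h_min=0.3):
    """把3D点云投影到水平面上的占位网格:任一高于地面h_min的点落入某网格,
    即将该网格标记为占用。cell_m即占位网格分辨率,取值越小,
    网格越细,检测精度越高,但网格数量与计算能耗也随之增大。"""
    n = int(extent_m / cell_m)
    grid = [[0] * n for _ in range(n)]
    for x, y, z in points:
        # 过滤地面附近的点以及落在网格范围之外的点
        if z < h_min or not (0 <= x < extent_m and 0 <= y < extent_m):
            continue
        grid[int(y / cell_m)][int(x / cell_m)] = 1
    return grid
```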
本申请实施例中,“至少一个”是指一个或者多个,“多个”是指两个或两个以上。“和/或”,描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B的情况,其中A,B可以是单数 或者复数。字符“/”一般表示前后关联对象是一种“或”的关系。“以下至少一项(个)”或其类似表达,是指的这些项中的任意组合,包括单项(个)或复数项(个)的任意组合。例如,a,b,或c中的至少一项(个),可以表示:a,b,c,a-b,a-c,b-c,或a-b-c,其中a,b,c可以是单个,也可以是多个。
下面以具体地实施例对本申请的技术方案进行详细说明。下面这几个具体的实施例可以相互结合,对于相同或相似的概念或过程可能在某些实施例中不再赘述。
图8为本申请一实施例提供的可行驶区域检测方法的流程示意图。如图8所示,本申请实施例的方法可以包括:
步骤S801、获取车辆在当前车辆时刻的行驶状态数据及可行驶区域数据。
本步骤中,可行驶区域检测装置可以通过CAN总线获取车辆在当前车辆时刻的行驶状态数据。示例性地,行驶状态数据可以包括但不限于以下至少一项:行驶速度、行驶方向、油门踏板状态、制动踏板状态。应理解,行驶速度可以与车辆内的轮速里程计相对应,行驶方向可以与车辆内的方向盘转角相对应,行驶方向还可以与车辆的转向灯状态相对应(例如,若转向灯处于开启状态,则可以确定车辆的行驶方向发生了转向;若转向灯处于关闭状态,则可以确定车辆的行驶方向未发生转向)。
应理解,上述车辆在当前车辆时刻的可行驶区域数据可以是可行驶区域检测装置的可行驶区域生成部分所检测到的。示例性地,可行驶区域检测装置可以根据上述车辆在当前车辆时刻的行驶状态估计数据和上述车辆在当前车辆时刻的点云数据进行可行驶区域生成处理,便可得到上述车辆在当前车辆时刻的可行驶区域数据。其中,上述车辆在当前车辆时刻的可行驶区域数据的具体的获取方式,本申请实施例的后续部分(参见图14所示实施例)会进行介绍。
示例性地,上述可行驶区域数据可以包括但不限于以下至少一项:ROI中是否存在障碍物信息、障碍物的位置信息、障碍物的尺寸信息。
步骤S802、基于行驶状态数据及可行驶区域数据,判断车辆的行驶场景是否符合感兴趣区域ROI参数切换条件。
本申请实施例中,可行驶区域检测装置中可以预置有至少一种感兴趣区域ROI参数切换条件,以便于根据车辆在当前车辆时刻的行驶状态数据及可行驶区域数据,可以实时地判断车辆的行驶场景是否符合某种ROI参数切换条件。
可选地,可行驶区域检测装置中预置的至少一种感兴趣区域ROI参数切换条件可以包括但不限于以下至少一项:高速道路行驶场景的预设ROI参数切换条件、拥堵道路行驶场景的预设ROI参数切换条件,或者狭窄空间上下坡道路行驶场景的预设ROI参数切换条件。
示例性地,拥堵道路行驶场景的预设ROI参数切换条件可以包括但不限于:车辆在当前行进车道上按照小于第一预设速度(例如20km/h)的行驶速度沿着上一车辆时刻的行驶方向行驶(例如,转向灯处于关闭状态),且当前行进车道上距离车辆的第一预设距离(例如15m)内存在障碍物。
示例性地,狭窄空间上下坡道路行驶场景的预设ROI参数切换条件可以包括但不限于:车辆在当前行进车道上按照小于第一预设速度(例如20km/h)的行驶速度沿着上一车辆时刻的行驶方向行驶(例如,转向灯处于关闭状态)、车辆的油门踏板状态和制动踏板状态断续变化(例如,驾驶员断续踩踏油门踏板和制动踏板),且车辆在垂直于水平地面的方向上存在大于第二预设距离(例如0.5m)的移动距离,例如,1s内自车运动方向在Z轴(或者说与地面垂直向上的轴向)上移动6m。
示例性地,高速道路行驶场景的预设ROI参数切换条件可以包括但不限于以下至少一项:第一子场景的预设ROI参数切换条件、第二子场景的预设ROI参数切换条件、第三子场景的预设ROI参数切换条件,或者第四子场景的预设ROI参数切换条件。
其中,第一子场景的预设ROI参数切换条件可以包括但不限于:车辆在当前行进车道上按照大于第二预设速度(例如40km/h)的行驶速度沿着上一车辆时刻的行驶方向行驶(例如,转向灯处于关闭状态),且当前行进车道的相邻行进车道上距离车辆的第三预设距离(例如50m)内无障碍物。
第二子场景的预设ROI参数切换条件可以包括但不限于:车辆在当前行进车道上按照大于第二预设速度(例如40km/h)的行驶速度沿着上一车辆时刻的行驶方向行驶(例如,转向灯处于关闭状态),且当前行进车道的相邻行进车道上距离车辆的第三预设距离内存在障碍物(例如,在ROI中的当前行进车道的相邻行进车道上检测到障碍物,且随着车辆的前行障碍物越来越近,已经到达ROI边界)。
第三子场景的预设ROI参数切换条件可以包括但不限于:车辆在当前行进车道上按照大于第二预设速度(例如40km/h)的行驶速度沿着上一车辆时刻的行驶方向行驶(例如,转向灯处于关闭状态),且当前行进车道上距离车辆的第三预设距离内存在障碍物(例如,在ROI中的当前行进车道上检测到障碍物,且随着车辆的前行障碍物越来越近,已经到达ROI边界)。
所述第四子场景的预设ROI参数切换条件可以包括但不限于:车辆在当前行进车道上按照大于第二预设速度(例如40km/h)的行驶速度沿着不同于上一车辆时刻的行驶方向行驶(例如,转向灯处于开启状态)。
本步骤中,可行驶区域检测装置可以根据步骤S801中获取到的车辆在当前车辆时刻的行驶状态数据及可行驶区域数据,判断车辆的当前行驶场景是否符合上述至少一种感兴趣区域ROI参数切换条件中的某种ROI参数切换条件(为了便于描述,可以称之为目标ROI参数切换条件)。
应理解,目标ROI参数切换条件可以包括但不限于以下任一项:高速道路行驶场景的预设ROI参数切换条件、拥堵道路行驶场景的预设ROI参数切换条件,或者狭窄空间上下坡道路行驶场景的预设ROI参数切换条件。
步骤S803、若车辆的行驶场景符合ROI参数切换条件,则将车辆当前的ROI参数调整为目标ROI参数。
本申请实施例中,可行驶区域检测装置中可以预置有上述至少一种感兴趣区域ROI参数切换条件中的每种ROI参数切换条件所对应的ROI参数,以便于在检测到车辆的当前行驶场景符合某种ROI参数切换条件时,可以将车辆当前的ROI参数调整为当前所符合的ROI参数切换条件对应的ROI参数(为了便于描述,可以称之为目标ROI参数)。
本步骤中,若车辆的当前行驶场景符合上述至少一种感兴趣区域ROI参数切换条件中的某种ROI参数切换条件(为了便于描述,可以称之为目标ROI参数切换条件), 则可行驶区域检测装置可以将车辆当前的ROI参数调整为与上述目标ROI参数切换条件对应的目标ROI参数。
例如,若可行驶区域检测装置中可以预置有ROI参数切换条件1所对应的ROI参数1、ROI参数切换条件2所对应的ROI参数2、ROI参数切换条件3所对应的ROI参数3,且检测到车辆的当前行驶场景符合上述ROI参数切换条件2,则可行驶区域检测装置可以将车辆当前的ROI参数调整为上述ROI参数2。
示例性地,上述目标ROI参数可以包括但不限于以下至少一项:图像预处理参数、点云生成参数,或者可行驶区域生成参数。
其中,图像预处理参数可以包括但不限于以下至少一项:图像ROI的尺寸参数、图像ROI的位置参数,或者图像缩放层数。
点云生成参数可以包括但不限于以下至少一项:支撑点网格步长参数、支撑点稀疏程度参数,或者支撑点分布方式参数。
可行驶区域生成参数可以包括但不限于:占位网格分辨率参数。
应理解,本申请实施例中涉及的将车辆当前的ROI参数调整为与上述目标ROI参数切换条件对应的目标ROI参数过程中,会将车辆当前的ROI参数中与目标ROI参数所包含的不相同的相应参数修改为与目标ROI参数相同,但会保留车辆当前的ROI参数中与目标ROI参数相同的相应参数,另外还可以保留车辆当前的ROI参数中目标ROI参数所不包含的其它参数。
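上述"仅修改与目标ROI参数不同的相应参数、保留相同及未包含的参数"的调整逻辑,以及据此判断哪些组成部分需要重新初始化,可以用如下草图示意(参数名与分组均为本文示例性假设,并非本申请的实际定义):

```python
# 各组成部分对应的ROI参数分组(参数名为示例)
PARAM_GROUPS = {
    "image_preprocess": {"roi_size", "roi_pos", "pyramid_levels"},
    "pointcloud": {"grid_step", "sparsity", "distribution"},
    "freespace": {"occ_grid_res"},
}

def switch_roi_params(current, target):
    """将当前ROI参数调整为目标ROI参数:目标中给出的参数覆盖当前值,
    目标未包含的参数保持不变;同时返回参数发生调整、因而需要
    重新初始化的组成部分集合。"""
    changed = {k for k, v in target.items() if current.get(k) != v}
    merged = {**current, **target}
    reinit = {g for g, keys in PARAM_GROUPS.items() if changed & keys}
    return merged, reinit
```

例如,目标参数只改变支撑点网格步长时,仅点云生成部分需要重新初始化,图像预处理与可行驶区域生成部分保持不变。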
本申请下述实施例中,对车辆的当前行驶场景符合不同ROI参数切换条件时的ROI参数调整方式进行介绍。
1)车辆的行驶场景符合上述高速道路行驶场景的预设ROI参数切换条件:
一种可能的实现方式中,若检测到车辆的当前行驶场景符合上述第一子场景的预设ROI参数切换条件,即上述目标ROI参数切换条件为第一子场景的预设ROI参数切换条件,则可行驶区域检测装置可以将车辆当前的ROI参数调整为与上述第一子场景的预设ROI参数切换条件所对应的目标ROI参数。
为了便于理解,本申请下述实施例中提供了关于第一子场景的预设ROI参数切换条件的示意表,并结合示意表对本实现方式进行介绍。
表4为关于第一子场景的预设ROI参数切换条件的示意表
需要说明的是,表4中各参数的取值仅为示例性的,具体数值可以根据实际情况而定,本申请实施例中对此并不作限定。
示例性地,如表4所示,上述第一子场景的预设ROI参数切换条件所对应的目标ROI参数可以包括以下至少一项主调节参数:图像ROI的尺寸参数(例如,ROI可以覆盖50m-80m范围)、图像ROI的位置参数(例如,ROI可以位于距离车辆50m之外的位置)、支撑点网格步长参数(例如,网格步长参数为4*4)、占位网格分辨率参数(例如,占位网格分辨率参数为网格尺寸设置为0.8m)。
又一示例性地,如表4所示,上述第一子场景的预设ROI参数切换条件所对应的目标ROI参数还可以包括以下至少一项辅助调节参数:图像缩放层数(例如,图像缩放层数为2层)、支撑点稀疏程度参数(例如,支撑点稀疏程度参数所指示的支撑点密集程度大于预设密集程度)、支撑点分布方式参数(例如,支撑点分布方式参数用于指示以障碍物分布为主的分布方式)。
当然,上述第一子场景的预设ROI参数切换条件所对应的目标ROI参数还可以包括其它参数,本申请实施例中对此并不作限定。
应理解,若目标ROI参数中包括图像预处理部分对应的任意图像预处理参数,且该图像预处理参数在将车辆当前的ROI参数调整为目标ROI参数的过程中发生了调整,则可行驶区域检测装置还需要重新初始化图像预处理部分;若目标ROI参数中包括点云生成部分对应的任意点云生成参数,且该点云生成参数在将车辆当前的ROI参数调整为目标ROI参数的过程中发生了调整,则可行驶区域检测装置还需要重新初始化点云生成部分;若目标ROI参数中包括可行驶区域生成部分对应的任意可行驶区域生成参数,且该可行驶区域生成参数在将车辆当前的ROI参数调整为目标ROI参数的过程中发生了调整,则可行驶区域检测装置还需要重新初始化可行驶区域生成部分。
另一种可能的实现方式中,若检测到车辆的行驶场景符合上述第二子场景的预设ROI参数切换条件,即上述目标ROI参数切换条件为第二子场景的预设ROI参数切换条件,则可行驶区域检测装置可以将车辆当前的ROI参数调整为与上述第二子场景的预设ROI参数切换条件所对应的目标ROI参数。
为了便于理解,本申请下述实施例中提供了关于第二子场景的预设ROI参数切换条件的示意表,并结合示意表对本实现方式进行介绍。
表5为关于第二子场景的预设ROI参数切换条件的示意表
需要说明的是,表5中各参数的取值仅为示例性的,具体数值可以根据实际情况而定,本申请实施例中对此并不作限定。
示例性地,上述第二子场景的预设ROI参数切换条件所对应的目标ROI参数的相关内容,可以参考上述第一子场景的预设ROI参数切换条件所对应的目标ROI参数的相关内容,此处不再赘述。
应理解,若目标ROI参数中包括图像预处理部分对应的任意图像预处理参数,且该图像预处理参数在将车辆当前的ROI参数调整为目标ROI参数的过程中发生了调整,则可行驶区域检测装置还需要重新初始化图像预处理部分;若目标ROI参数中包括点云生成部分对应的任意点云生成参数,且该点云生成参数在将车辆当前的ROI参数调整为目标ROI参数的过程中发生了调整,则可行驶区域检测装置还需要重新初始化点云生成部分;若目标ROI参数中包括可行驶区域生成部分对应的任意可行驶区域生成参数,且该可行驶区域生成参数在将车辆当前的ROI参数调整为目标ROI参数的过程中发生了调整,则可行驶区域检测装置还需要重新初始化可行驶区域生成部分。
应理解,若车辆当前的ROI参数包括目标ROI参数,或者说车辆当前的ROI参数中与目标ROI参数相对应的参数,均与目标ROI参数相同,即参数未发生调整,则可行驶区域检测装置不需要重新初始化。
图9为本申请实施例提供的包含ROI的图像示意图一,如图9所示,在ROI中的当前行进车道的相邻行进车道上检测到障碍物,且随着车辆的前行障碍物越来越近,已经到达ROI边界,当障碍物相对车辆继续运动超出ROI边界时,由于障碍物不在当前行进车道,并不会影响车辆的安全行驶,可行驶区域检测装置可以根据车辆的行驶状态估计数据采用运动轨迹预测方式,继续跟踪障碍物,直至障碍物从车辆的双目相机所采集的图像数据中消失。
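其中"根据行驶状态估计数据采用运动轨迹预测方式继续跟踪障碍物"的思路,可以简化为如下草图(假设采用恒速模型,并只补偿自车在一个周期内的平移与偏航,函数名与坐标约定均为本文虚构):

```python
import math

def predict_obstacle(pos, vel, ego_dx, ego_dy, ego_dyaw, dt):
    """在自车坐标系中预测出界障碍物的下一位置:先按障碍物相对速度外推,
    再补偿自车在dt内的平移(ego_dx, ego_dy)与偏航角变化ego_dyaw。"""
    x = pos[0] + vel[0] * dt - ego_dx
    y = pos[1] + vel[1] * dt - ego_dy
    # 自车偏航后,障碍物在自车坐标系中反向旋转
    c, s = math.cos(-ego_dyaw), math.sin(-ego_dyaw)
    return (c * x - s * y, s * x + c * y)
```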
另一种可能的实现方式中,若检测到车辆的行驶场景符合上述第三子场景的预设ROI参数切换条件,即上述目标ROI参数切换条件为第三子场景的预设ROI参数切换条件,则可行驶区域检测装置可以将车辆当前的ROI参数调整为与上述第三子场景的预设ROI参数切换条件所对应的目标ROI参数。
为了便于理解,本申请下述实施例中提供了关于第三子场景的预设ROI参数切换条件的示意表,并结合示意表对本实现方式进行介绍。
表6为关于第三子场景的预设ROI参数切换条件的示意表
需要说明的是,表6中各参数的取值仅为示例性的,具体数值可以根据实际情况而定,本申请实施例中对此并不作限定。
示例性地,如表6所示,上述第三子场景的预设ROI参数切换条件所对应的目标ROI参数可以包括以下至少一项主调节参数:图像ROI的尺寸参数(例如,ROI可以覆盖0m-50m范围)、图像ROI的位置参数(例如,ROI可以位于距离车辆50m之内的位置)、支撑点网格步长参数(例如,网格步长参数为8*8)、占位网格分辨率参数(例如,占位网格分辨率参数为网格尺寸设置为0.2m)。
又一示例性地,如表6所示,上述第三子场景的预设ROI参数切换条件所对应的目标ROI参数还可以包括以下至少一项辅助调节参数:图像缩放层数(例如,图像缩放层数为4层)、支撑点稀疏程度参数(例如,支撑点稀疏程度参数所指示的支撑点密集程度不大于预设密集程度)、支撑点分布方式参数(例如,支撑点分布方式参数用于指示以障碍物分布为主的分布方式)。
当然,上述第三子场景的预设ROI参数切换条件所对应的目标ROI参数还可以包括其它参数,本申请实施例中对此并不作限定。
应理解,若目标ROI参数中包括图像预处理部分对应的任意图像预处理参数,且该图像预处理参数在将车辆当前的ROI参数调整为目标ROI参数的过程中发生了调整,则可行驶区域检测装置还需要重新初始化图像预处理部分;若目标ROI参数中包括点云生成部分对应的任意点云生成参数,且该点云生成参数在将车辆当前的ROI参数调整为目标ROI参数的过程中发生了调整,则可行驶区域检测装置还需要重新初始化点云生成部分;若目标ROI参数中包括可行驶区域生成部分对应的任意可行驶区域生成参数,且该可行驶区域生成参数在将车辆当前的ROI参数调整为目标ROI参数的过程中发生了调整,则可行驶区域检测装置还需要重新初始化可行驶区域生成部分。
图10为本申请实施例提供的包含ROI的图像示意图二,如图10所示,在ROI中的当前行进车道上检测到障碍物,且随着车辆的前行障碍物越来越近,已经到达ROI边界,当障碍物相对车辆继续运动超出ROI边界时,由于障碍物在当前行进车道,会影响车辆的安全行驶,可行驶区域检测装置可以将ROI向车辆的近处移动,并将车辆在当前车辆时刻的可行驶区域数据重投影到目标ROI参数对应的新ROI,然后持续地检测和跟踪障碍物。
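将当前车辆时刻的可行驶区域数据重投影到新ROI的步骤可以示意如下(以二维度量坐标(x, y)表示可行驶区域单元,坐标系定义与函数名均为本文示例性假设):

```python
def reproject_to_roi(cells, old_origin, new_origin, new_size):
    """把旧ROI坐标系下的可行驶区域单元(x, y)换算到新ROI坐标系,
    并丢弃落在新ROI范围之外的单元;保留下来的单元可在下一时刻继续使用。"""
    out = []
    for x, y in cells:
        nx = x + old_origin[0] - new_origin[0]
        ny = y + old_origin[1] - new_origin[1]
        if 0 <= nx < new_size[0] and 0 <= ny < new_size[1]:
            out.append((nx, ny))
    return out
```

例如,当ROI从远处(以50m为原点)移向近处时,原ROI内靠近边界的单元换算后仍落在新ROI内,可被保留并继续跟踪。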
另一种可能的实现方式中,若检测到车辆的行驶场景符合上述第四子场景的预设ROI参数切换条件,即上述目标ROI参数切换条件为第四子场景的预设ROI参数切换条件,则可行驶区域检测装置可以将车辆当前的ROI参数调整为与上述第四子场景的预设ROI参数切换条件所对应的目标ROI参数。
为了便于理解,本申请下述实施例中提供了关于第四子场景的预设ROI参数切换条件的示意表,并结合示意表对本实现方式进行介绍。
表7为关于第四子场景的预设ROI参数切换条件的示意表
需要说明的是,表7中各参数的取值仅为示例性的,具体数值可以根据实际情况而定,本申请实施例中对此并不作限定。
示例性地,如表7所示,上述第四子场景的预设ROI参数切换条件所对应的目标ROI参数可以包括以下至少一项主调节参数:图像ROI的尺寸参数(例如,ROI可以覆盖0m-50m范围)、图像ROI的位置参数(例如,ROI可以位于距离车辆50m之内的位置)、支撑点网格步长参数(例如,网格步长参数为8*8)、占位网格分辨率参数(例如,占位网格分辨率参数为网格尺寸设置为0.2m)。
又一示例性地,如表7所示,上述第四子场景的预设ROI参数切换条件所对应的目标ROI参数还可以包括以下至少一项辅助调节参数:图像缩放层数(例如,图像缩放层数为4层)、支撑点稀疏程度参数(例如,支撑点稀疏程度参数所指示的支撑点密集程度不大于预设密集程度)、支撑点分布方式参数(例如,支撑点分布方式参数用于指示以障碍物分布为主的分布方式)。
当然,上述第四子场景的预设ROI参数切换条件所对应的目标ROI参数还可以包括其它参数,本申请实施例中对此并不作限定。
应理解,若目标ROI参数中包括图像预处理部分对应的任意图像预处理参数,且该图像预处理参数在将车辆当前的ROI参数调整为目标ROI参数的过程中发生了调整,则可行驶区域检测装置还需要重新初始化图像预处理部分;若目标ROI参数中包括点云生成部分对应的任意点云生成参数,且该点云生成参数在将车辆当前的ROI参数调整为目标ROI参数的过程中发生了调整,则可行驶区域检测装置还需要重新初始化点云生成部分;若目标ROI参数中包括可行驶区域生成部分对应的任意可行驶区域生成参数,且该可行驶区域生成参数在将车辆当前的ROI参数调整为目标ROI参数的过程中发生了调整,则可行驶区域检测装置还需要重新初始化可行驶区域生成部分。
图11为本申请实施例提供的包含ROI的图像示意图三,如图11所示,由于检测到转向灯处于开启状态,可行驶区域检测装置可以确定车辆会沿着不同于上一车辆时刻的行驶方向行驶(即车辆转弯),也就是说车辆前方的场景会迅速切换为未知场景,因此,将ROI向车辆的近处移动,以便于在未知场景下可以检测到中近处对象,从而提高车辆行驶的安全性。
综上所述,可行驶区域检测装置在检测到车辆的行驶场景符合上述高速道路行驶场景时,可以根据车辆的行驶场景所符合的不同ROI参数切换条件,将ROI的检测区域从远处调整至近处,或者从近处跳转至远处,实现自适应调整,从而可以有效地降低计算耗能并提高远处可行驶区域的检测距离和精度。
2)车辆的行驶场景符合上述拥堵道路行驶场景的预设ROI参数切换条件:
示例性地,若检测到车辆的当前行驶场景符合上述拥堵道路行驶场景的预设ROI参数切换条件,即上述目标ROI参数切换条件为拥堵道路行驶场景的预设ROI参数切换条件,则可行驶区域检测装置可以将车辆当前的ROI参数调整为与上述拥堵道路行驶场景的预设ROI参数切换条件所对应的目标ROI参数。
为了便于理解,本申请下述实施例中提供了关于拥堵道路行驶场景的预设ROI参数切换条件的示意表,并结合示意表对本实现方式进行介绍。
表8为关于拥堵道路行驶场景的预设ROI参数切换条件的示意表
需要说明的是,表8中各参数的取值仅为示例性的,具体数值可以根据实际情况而定,本申请实施例中对此并不作限定。
示例性地,如表8所示,上述拥堵道路行驶场景的预设ROI参数切换条件所对应的目标ROI参数可以包括以下至少一项主调节参数:图像ROI的尺寸参数(例如,ROI可以覆盖0m-50m范围)、图像ROI的位置参数(例如,ROI可以位于距离车辆50m之内的位置)、支撑点网格步长参数(例如,网格步长参数为16*16)、占位网格分辨率参数(例如,占位网格分辨率参数为网格尺寸设置为0.1m)。
又一示例性地,如表8所示,上述拥堵道路行驶场景的预设ROI参数切换条件所对应的目标ROI参数还可以包括以下至少一项辅助调节参数:图像缩放层数(例如,图像缩放层数为6层)、支撑点稀疏程度参数(例如,支撑点稀疏程度参数所指示的支撑点密集程度不大于预设密集程度)、支撑点分布方式参数(例如,支撑点分布方式参数用于指示以均匀分布为主的分布方式)。
当然,上述拥堵道路行驶场景的预设ROI参数切换条件所对应的目标ROI参数还可以包括其它参数,本申请实施例中对此并不作限定。
应理解,若目标ROI参数中包括图像预处理部分对应的任意图像预处理参数,且该图像预处理参数在将车辆当前的ROI参数调整为目标ROI参数的过程中发生了调整,则可行驶区域检测装置还需要重新初始化图像预处理部分;若目标ROI参数中包括点云生成部分对应的任意点云生成参数,且该点云生成参数在将车辆当前的ROI参数调整为目标ROI参数的过程中发生了调整,则可行驶区域检测装置还需要重新初始化点云生成部分;若目标ROI参数中包括可行驶区域生成部分对应的任意可行驶区域生成参数,且该可行驶区域生成参数在将车辆当前的ROI参数调整为目标ROI参数的过程中发生了调整,则可行驶区域检测装置还需要重新初始化可行驶区域生成部分。
图12为本申请实施例提供的包含ROI的图像示意图四,如图12所示,对于拥堵道路行驶场景,主要指近距离障碍物的检测。由于同一个近距离障碍物占据了图像的大部分区域,因此,可以采用更稀疏的支撑点分布及均匀分布方式等,从而可以降低计算能耗,并可以提高近处障碍物检测的精度。
3)车辆的行驶场景符合上述狭窄空间上下坡道路行驶场景的预设ROI参数切换条件:
示例性地,若检测到车辆的当前行驶场景符合上述狭窄空间上下坡道路行驶场景的预设ROI参数切换条件,即上述目标ROI参数切换条件为狭窄空间上下坡道路行驶场景的预设ROI参数切换条件,则可行驶区域检测装置可以将车辆当前的ROI参数调整为与上述狭窄空间上下坡道路行驶场景的预设ROI参数切换条件所对应的目标ROI参数。
为了便于理解,本申请下述实施例中提供了关于狭窄空间上下坡道路行驶场景的预设ROI参数切换条件的示意表,并结合示意表对本实现方式进行介绍。
表9为关于狭窄空间上下坡道路行驶场景的预设ROI参数切换条件的示意表
需要说明的是,表9中各参数的取值仅为示例性的,具体数值可以根据实际情况而定,本申请实施例中对此并不作限定。
示例性地,如表9所示,上述狭窄空间上下坡道路行驶场景的预设ROI参数切换条件所对应的目标ROI参数可以包括以下至少一项主调节参数:图像ROI的尺寸参数(例如,ROI可以覆盖0m-50m范围)、图像ROI的位置参数(例如,ROI可以位于距离车辆50m之内的位置)、支撑点网格步长参数(例如,网格步长参数为8*8)、占位网格分辨率参数(例如,占位网格分辨率参数为网格尺寸设置为0.1m)、支撑点稀疏程度参数(例如,支撑点稀疏程度参数用于指示将图像的两侧区域的支撑点分布密度调整为图像的中间区域的2倍)。
又一示例性地,如表9所示,上述狭窄空间上下坡道路行驶场景的预设ROI参数切换条件所对应的目标ROI参数还可以包括以下至少一项辅助调节参数:图像缩放层数(例如,图像缩放层数为6层)、支撑点分布方式参数(例如,支撑点分布方式参数用于指示以均匀分布为主的分布方式)。
当然,上述狭窄空间上下坡道路行驶场景的预设ROI参数切换条件所对应的目标ROI参数还可以包括其它参数,本申请实施例中对此并不作限定。
应理解,若目标ROI参数中包括图像预处理部分对应的任意图像预处理参数,且该图像预处理参数在将车辆当前的ROI参数调整为目标ROI参数的过程中发生了调整,则可行驶区域检测装置还需要重新初始化图像预处理部分;若目标ROI参数中包括点云生成部分对应的任意点云生成参数,且该点云生成参数在将车辆当前的ROI参数调整为目标ROI参数的过程中发生了调整,则可行驶区域检测装置还需要重新初始化点云生成部分;若目标ROI参数中包括可行驶区域生成部分对应的任意可行驶区域生成参数,且该可行驶区域生成参数在将车辆当前的ROI参数调整为目标ROI参数的过程中发生了调整,则可行驶区域检测装置还需要重新初始化可行驶区域生成部分。
图13为本申请实施例提供的包含ROI的图像示意图五,如图13所示,对于狭窄空间上下坡道路行驶场景(例如,多层停车库中的上下坡场景),通过调整相关ROI参数,提高了车辆左右两侧关于障碍物的检测精度,从而不仅降低了计算能耗,还提高了狭窄空间内车辆行驶时对车辆左右两侧的保护程度。
步骤S804、根据目标ROI参数,获取车辆在下一车辆时刻的可行驶区域数据。
本步骤中,可行驶区域检测装置可以根据上述步骤S803中车辆的当前行驶场景所符合的ROI参数切换条件对应的目标ROI参数,进一步地获取车辆在下一车辆时刻的可行驶区域数据。可见,通过针对不同道路行驶场景可以采用不同的ROI参数,有利于降低可行驶区域检测过程中的计算能耗,从而不仅可以节省可行驶区域检测过程中的计算资源,还可以提高局部检测精度。
应理解,可行驶区域检测装置根据上述目标ROI参数,获取车辆在下一车辆时刻的可行驶区域数据,还以便于可行驶区域检测装置可以根据车辆在下一车辆时刻的行驶状态数据及可行驶区域数据,判断车辆在下一车辆时刻的行驶场景是否符合某种 ROI参数切换条件,并在检测到车辆在下一车辆时刻的行驶场景符合某种ROI参数切换条件时,可以将车辆在下一车辆时刻的ROI参数调整为下一车辆时刻所符合的ROI参数切换条件对应的ROI参数。
本申请实施例的下述部分,对上述步骤S804中“根据目标ROI参数,获取车辆在下一车辆时刻的可行驶区域数据”的可实现方式进行介绍。
可选地,可行驶区域检测装置可以根据上述目标ROI参数、上述车辆在当前车辆时刻的行驶状态估计数据、可行驶区域数据,以及上述车辆在下一车辆时刻的双目图像数据,获取上述车辆在下一车辆时刻的行驶状态估计数据和点云数据。
需要说明的是,上述车辆在当前车辆时刻的行驶状态估计数据可以是可行驶区域检测装置的自我运动(ego motion)估计部分所得到的。示例性地,可行驶区域检测装置可以根据车辆在上一车辆时刻的行驶状态估计数据以及车辆在当前车辆时刻的双目图像数据进行状态估计处理,便可得到上述车辆在当前车辆时刻的行驶状态估计数据。其中,具体的状态估计处理的方式,可以参考相关技术中的状态估计处理方式,本申请实施例中对此并不作限定。
需要说明的是,上述车辆在当前车辆时刻的可行驶区域数据可以是可行驶区域检测装置的可行驶区域生成部分所检测到的。示例性地,可行驶区域检测装置可以根据上述车辆在当前车辆时刻的行驶状态估计数据和上述车辆在当前车辆时刻的点云数据进行可行驶区域生成处理,便可得到上述车辆在当前车辆时刻的可行驶区域数据。其中,具体的可行驶区域生成处理,可以参考相关技术中的可行驶区域生成处理方式,本申请实施例中对此并不作限定。
示例性地,若上述目标ROI参数包括:图像预处理参数和点云生成参数,则可行驶区域检测装置可以首先根据上述图像预处理参数、上述车辆在当前车辆时刻的行驶状态估计数据和上述车辆在当前车辆时刻的可行驶区域数据,对上述车辆在下一车辆时刻的双目图像数据进行图像预处理,便可得到上述车辆在下一车辆时刻的图像处理数据。其中,具体的图像预处理,可以参考相关技术中的图像预处理方式,本申请实施例中对此并不作限定。
其次,可行驶区域检测装置可以根据上述车辆在当前车辆时刻的行驶状态估计数据和上述车辆在下一车辆时刻的双目图像数据进行状态估计处理,便可得到上述车辆在下一车辆时刻的行驶状态估计数据。
然后,可行驶区域检测装置可以根据上述点云生成参数对上述车辆在下一车辆时刻的图像处理数据和上述车辆在下一车辆时刻的行驶状态估计数据进行点云生成处理,便可得到上述车辆在下一车辆时刻的点云数据。其中,具体的点云生成处理,可以参考相关技术中的点云生成处理方式,本申请实施例中对此并不作限定。
应理解,若上述目标ROI参数不包括图像预处理参数,则可行驶区域检测装置可以根据上述车辆当前的ROI参数中所包括的图像预处理参数进行图像预处理。若上述目标ROI参数不包括点云生成参数,则可行驶区域检测装置可以根据上述车辆当前的ROI参数中所包括的点云生成参数进行点云生成处理。
当然,可行驶区域检测装置根据上述目标ROI参数、上述车辆在当前车辆时刻的行驶状态估计数据、可行驶区域数据,以及上述车辆在下一车辆时刻的双目图像数据, 还可通过其它方式获取上述车辆在下一车辆时刻的行驶状态估计数据和点云数据,本申请实施例中对此并不作限定。
进一步地,可行驶区域检测装置可以根据上述车辆在下一车辆时刻的行驶状态估计数据和点云数据,获取上述车辆在下一车辆时刻的可行驶区域数据。
示例性地,若上述目标ROI参数还包括:可行驶区域生成参数,则可行驶区域检测装置可以根据上述可行驶区域生成参数对上述车辆在下一车辆时刻的行驶状态估计数据和上述车辆在下一车辆时刻的点云数据进行可行驶区域生成处理,便可得到上述车辆在下一车辆时刻的可行驶区域数据。
应理解,若上述目标ROI参数不包括可行驶区域生成参数,则可行驶区域检测装置可以根据上述车辆当前的ROI参数中所包括的可行驶区域生成参数进行可行驶区域生成处理。
当然,可行驶区域检测装置根据上述车辆在下一车辆时刻的行驶状态估计数据和点云数据,还可以通过其它方式获取上述车辆在下一车辆时刻的可行驶区域数据,本申请实施例中对此并不作限定。
综上所述,本申请实施例中,通过在检测到车辆的行驶场景符合目标ROI参数切换条件时,将车辆当前的ROI参数调整为与目标ROI参数切换条件对应的目标ROI参数,然后根据目标ROI参数进行可行驶区域检测。可见,本申请实施例中,针对不同道路行驶场景可以采用不同的ROI参数,有利于降低计算能耗,从而不仅可以节省计算资源,还可以提高局部检测精度。
需要说明的是,本申请实施例提供的基于双目视觉的可行驶区域检测方法的计算能耗较低,从而可以应用于通用芯片系统,而无需使用神经网络专用硬件,从而还可以节省系统成本。
进一步地,可行驶区域检测装置在根据上述目标ROI参数,获取上述车辆在下一车辆时刻的可行驶区域数据之前,还可以将上述车辆在当前车辆时刻的可行驶区域数据重投影到上述目标ROI参数对应的新ROI,以便于在下一车辆时刻不仅可以降低计算能耗,还可以准确地进行可行驶区域检测。
为了便于理解,本申请下述实施例中结合图4和图5所示,对本申请实施例提供的可行驶区域检测方法在当前车辆时刻的执行时序进行介绍。
图14为本申请实施例提供的可行驶区域检测方法的执行时序示意图。在上述实施例的基础上,如图14所示,本申请实施例的方法可以包括:可行驶区域检测和ROI自适应调整两大部分。
一、可行驶区域检测
1)可行驶区域检测装置可以从可行驶区域检测内存中获取车辆在当前车辆时刻的双目图像数据、车辆在上一车辆时刻的可行驶区域数据,以及车辆在上一车辆时刻的行驶状态估计数据。其次,可行驶区域检测装置可以根据车辆在当前车辆时刻的双目图像数据、车辆在上一车辆时刻的可行驶区域数据,以及车辆在上一车辆时刻的行驶状态估计数据进行图像预处理,便可得到车辆在当前车辆时刻的图像处理数据。然后,可行驶区域检测装置可以将车辆在当前车辆时刻的图像处理数据存入可行驶区域检测内存。
2)可行驶区域检测装置可以从可行驶区域检测内存中获取车辆在当前车辆时刻的双目图像数据,以及车辆在上一车辆时刻的行驶状态估计数据。其次,可行驶区域检测装置可以根据车辆在当前车辆时刻的双目图像数据,以及车辆在上一车辆时刻的行驶状态估计数据进行状态估计处理,便可得到车辆在当前车辆时刻的行驶状态估计数据。然后,可行驶区域检测装置可以将车辆在当前车辆时刻的行驶状态估计数据存入可行驶区域检测内存。
3)可行驶区域检测装置可以从可行驶区域检测内存中获取车辆在当前车辆时刻的图像处理数据,以及车辆在当前车辆时刻的行驶状态估计数据。其次,可行驶区域检测装置可以根据车辆在当前车辆时刻的图像处理数据,以及车辆在当前车辆时刻的行驶状态估计数据进行点云生成处理,便可得到上述车辆在当前车辆时刻的点云数据。然后,可行驶区域检测装置可以将车辆在当前车辆时刻的点云数据存入可行驶区域检测内存。
4)可行驶区域检测装置可以从可行驶区域检测内存中获取车辆在当前车辆时刻的点云数据,以及车辆在当前车辆时刻的行驶状态估计数据。其次,可行驶区域检测装置可以根据车辆在当前车辆时刻的点云数据,以及车辆在当前车辆时刻的行驶状态估计数据进行可行驶区域生成处理,便可得到上述车辆在当前车辆时刻的可行驶区域数据。然后,可行驶区域检测装置可以将车辆在当前车辆时刻的可行驶区域数据存入可行驶区域检测内存。
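上述1)至4)步的执行时序可以抽象为如下草图(各处理阶段以可替换的函数形式传入,可行驶区域检测内存用字典表示;函数名与键名均为本文示意性假设):

```python
def detection_step(mem, preprocess, estimate_ego, gen_pointcloud, gen_freespace):
    """一次可行驶区域检测周期:每个阶段从共享内存读取输入,
    并把输出写回共享内存,供后续阶段及下一车辆时刻使用。
    mem需预先包含当前时刻双目图像"stereo"、上一时刻可行驶区域
    "freespace_prev"与上一时刻行驶状态估计"ego_prev"。"""
    mem["img_proc"] = preprocess(mem["stereo"], mem["freespace_prev"], mem["ego_prev"])
    mem["ego"] = estimate_ego(mem["stereo"], mem["ego_prev"])
    mem["cloud"] = gen_pointcloud(mem["img_proc"], mem["ego"])
    mem["freespace"] = gen_freespace(mem["cloud"], mem["ego"])
    return mem["freespace"]
```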
二、ROI自适应调整
1)可行驶区域检测装置可以从可行驶区域检测内存中获取车辆在当前车辆时刻的行驶状态数据(即CAN数据),以及车辆在当前车辆时刻的可行驶区域数据。其次,可行驶区域检测装置可以根据车辆在当前车辆时刻的行驶状态数据(即CAN数据),以及车辆在当前车辆时刻的可行驶区域数据判断车辆的当前行驶场景是否符合某个ROI参数切换条件(或者称之为行驶场景判决)。然后,若车辆的当前行驶场景符合某种ROI参数切换条件(为了便于描述,可以称之为目标ROI参数切换条件),则可行驶区域检测装置可以将车辆当前的ROI参数调整为与上述目标ROI参数切换条件对应的目标ROI参数。
示例性地,如图14所示,若目标ROI参数可以包括:图像预处理参数、点云生成参数,以及可行驶区域生成参数,则可行驶区域检测装置会调整上述图像预处理部分对应的与ROI调整相关的图像预处理参数、调整上述点云生成部分对应的与ROI调整相关的点云生成参数,以及调整上述可行驶区域生成部分对应的与ROI调整相关的可行驶区域生成参数。
需要说明的是,如图14所示,若目标ROI参数中包括图像预处理部分对应的任意图像预处理参数,且该图像预处理参数在将车辆当前的ROI参数调整为目标ROI参数的过程中发生了调整,则可行驶区域检测装置还需要重新初始化图像预处理部分;若目标ROI参数中包括点云生成部分对应的任意点云生成参数,且该点云生成参数在将车辆当前的ROI参数调整为目标ROI参数的过程中发生了调整,则可行驶区域检测装置还需要重新初始化点云生成部分;若目标ROI参数中包括可行驶区域生成部分对应的任意可行驶区域生成参数,且该可行驶区域生成参数在将车辆当前的ROI参数调整为目标ROI参数的过程中发生了调整,则可行驶区域检测装置还需要重新初始化可行驶区域生成部分。
应理解,若将车辆当前的ROI参数调整为目标ROI参数的过程导致ROI发生变化,则可行驶区域检测装置还可以将上述车辆在当前车辆时刻的可行驶区域数据重投影到上述目标ROI参数对应的新ROI,以便于在下一车辆时刻不仅可以降低计算能耗,还可以准确地进行可行驶区域检测。
需要说明的是,图14中以可行驶区域检测装置在当前车辆时刻先执行可行驶区域检测部分,然后执行ROI自适应调整部分为例进行介绍的。应理解,可行驶区域检测装置在当前车辆时刻也可以先执行ROI自适应调整部分,然后执行可行驶区域检测部分;其中,可行驶区域检测部分可以参考图14中的相关可行驶区域检测部分,ROI自适应调整部分可以参考图14中的相关ROI自适应调整部分(但由于此时还未检测到车辆在当前车辆时刻的可行驶区域数据,因此,需要用车辆在上一车辆时刻的可行驶区域数据代替车辆在当前车辆时刻的可行驶区域数据),具体过程可以参考图14所示实施例,本申请实施例中对此并不作限定。
应理解,本申请实施例提供的可行驶区域检测方法在下一车辆时刻的执行时序,可以参考本申请实施例提供的可行驶区域检测方法在当前车辆时刻的执行时序,本申请实施例中对此并不作限定。
图15为本申请一实施例提供的可行驶区域检测装置的结构示意图,如图15所示,本申请实施例的可行驶区域检测装置150可以包括:第一获取模块1501、判断模块1502、调整模块1503以及第二获取模块1504。
其中,第一获取模块1501,用于获取车辆在当前车辆时刻的行驶状态数据及可行驶区域数据;
判断模块1502,用于基于上述行驶状态数据及上述可行驶区域数据,判断上述车辆的行驶场景是否符合感兴趣区域ROI参数切换条件;
调整模块1503,用于若车辆的行驶场景符合ROI参数切换条件,则将车辆当前的ROI参数调整为目标ROI参数;
第二获取模块1504,用于根据目标ROI参数,获取车辆在下一车辆时刻的可行驶区域数据。
在一种可能的实现方式中,目标ROI参数包括以下至少一项:图像预处理参数、点云生成参数,或者可行驶区域生成参数;
其中,图像预处理参数包括以下至少一项:图像ROI的尺寸参数、图像ROI的位置参数,或者图像缩放层数;
点云生成参数包括以下至少一项:支撑点网格步长参数、支撑点稀疏程度参数,或者支撑点分布方式参数;
可行驶区域生成参数包括:占位网格分辨率参数。
在一种可能的实现方式中,ROI参数切换条件包括以下任一项:
高速道路行驶场景的预设ROI参数切换条件、拥堵道路行驶场景的预设ROI参数切换条件,或者狭窄空间上下坡道路行驶场景的预设ROI参数切换条件;
其中,拥堵道路行驶场景的预设ROI参数切换条件包括:车辆在当前行进车道上按照小于第一预设速度的行驶速度沿着上一车辆时刻的行驶方向行驶,且当前行进车道上距离车辆的第一预设距离内存在障碍物;
狭窄空间上下坡道路行驶场景的预设ROI参数切换条件包括:车辆在当前行进车道上按照小于第一预设速度的行驶速度沿着上一车辆时刻的行驶方向行驶、车辆的油门踏板状态和制动踏板状态断续变化,且车辆在垂直于水平地面的方向上存在大于第二预设距离的移动距离。
在一种可能的实现方式中,高速道路行驶场景的预设ROI参数切换条件包括以下任一项:第一子场景的预设ROI参数切换条件、第二子场景的预设ROI参数切换条件、第三子场景的预设ROI参数切换条件,或者第四子场景的预设ROI参数切换条件;
其中,第一子场景的预设ROI参数切换条件包括:车辆在当前行进车道上按照大于第二预设速度的行驶速度沿着上一车辆时刻的行驶方向行驶,且当前行进车道的相邻行进车道上距离车辆的第三预设距离内无障碍物;
第二子场景的预设ROI参数切换条件包括:车辆在当前行进车道上按照大于第二预设速度的行驶速度沿着上一车辆时刻的行驶方向行驶,且当前行进车道的相邻行进车道上距离车辆的第三预设距离内存在障碍物;
第三子场景的预设ROI参数切换条件包括:车辆在当前行进车道上按照大于第二预设速度的行驶速度沿着上一车辆时刻的行驶方向行驶,且当前行进车道上距离车辆的第三预设距离内存在障碍物;
第四子场景的预设ROI参数切换条件包括:车辆在当前行进车道上按照大于第二预设速度的行驶速度沿着不同于上一车辆时刻的行驶方向行驶。
在一种可能的实现方式中,第二获取模块,包括:
第一获取单元,用于根据目标ROI参数、车辆在当前车辆时刻的行驶状态估计数据、可行驶区域数据,以及车辆在下一车辆时刻的双目图像数据,获取车辆在下一车辆时刻的行驶状态估计数据和点云数据;
第二获取单元,用于根据车辆在下一车辆时刻的行驶状态估计数据和点云数据,获取车辆在下一车辆时刻的可行驶区域数据。
在一种可能的实现方式中,若目标ROI参数包括:图像预处理参数和点云生成参数,第一获取单元具体用于:
根据图像预处理参数、车辆在当前车辆时刻的行驶状态估计数据和可行驶区域数据,对车辆在下一车辆时刻的双目图像数据进行图像预处理,得到车辆在下一车辆时刻的图像处理数据;
根据车辆在当前车辆时刻的行驶状态估计数据和车辆在下一车辆时刻的双目图像数据进行状态估计处理,得到车辆在下一车辆时刻的行驶状态估计数据;
根据点云生成参数对车辆在下一车辆时刻的图像处理数据和行驶状态估计数据进行点云生成处理,得到车辆在下一车辆时刻的点云数据。
在一种可能的实现方式中,若目标ROI参数还包括:可行驶区域生成参数,第二获取单元具体用于:
根据可行驶区域生成参数对车辆在下一车辆时刻的行驶状态估计数据和点云数据进行可行驶区域生成处理,得到车辆在下一车辆时刻的可行驶区域数据。
在一种可能的实现方式中,上述装置还包括:
投影模块,用于将车辆在当前车辆时刻的可行驶区域数据投影到目标ROI参数对应的ROI。
在一种可能的实现方式中,行驶状态数据包括以下至少一项:行驶速度、行驶方向、油门踏板状态、制动踏板状态。
本申请实施例提供的可行驶区域检测装置,可以用于执行本申请上述可行驶区域检测方法实施例中的技术方案,其实现原理和技术效果类似,此处不再赘述。
图16为本申请另一实施例提供的可行驶区域检测装置的结构示意图,如图16所示,本申请实施例的可行驶区域检测装置160可以包括:处理器1601、存储器1602和通信接口1603。其中,所述通信接口1603用于获取待处理的数据(例如,行驶状态数据,和/或,双目图像数据等);所述存储器1602,用于存储程序指令;所述处理器1601,用于调用并执行所述存储器1602中存储的程序指令,当所述处理器1601执行所述存储器1602存储的程序指令时,所述可行驶区域检测装置用于对待处理的数据执行本申请上述可行驶区域检测方法实施例中的技术方案,得到处理后的数据(例如,可行驶区域数据等),使得通信接口1603还用于输出处理后的数据,其实现原理和技术效果类似,此处不再赘述。
应理解,本申请实施例中的存储器1602还可以用于存储所述可行驶区域检测装置在执行本申请上述可行驶区域检测方法实施例中的技术方案过程中的中间结果数据。
示例性地,本申请实施例中涉及的通信接口可以包括但不限于:图像数据接口,和/或,CAN数据接口。
本申请实施例还提供一种芯片,可以包括本申请上述可行驶区域检测装置,或用于支持可行驶区域检测装置实现本申请实施例所示的功能。
示例性地,当实现上述方法的为电子设备内的芯片时,芯片可以包括处理单元,进一步的,芯片还可以包括通信单元,所述处理单元例如可以是处理器;当芯片包括通信单元时,所述通信单元例如可以是输入/输出接口、管脚或电路等。其中,处理单元执行本申请实施例中各个处理模块所执行的全部或部分动作,通信单元可执行相应的接收或获取动作。
本申请实施例还提供一种车载设备,可以包括本申请上述可行驶区域检测装置。
可选地,本申请实施例中的车载终端还可以包括:如图1所示的ECU 12以及控制器13;当然,本申请实施例中的车载终端还可以包括其它装置,本申请实施例中对此并不作限定。
本申请实施例还提供一种计算机可读存储介质,所述计算机可读存储介质用于存储计算机程序,所述计算机程序用于实现本申请上述可行驶区域检测方法实施例中的技术方案,其实现原理和技术效果类似,此处不再赘述。
本申请实施例还提供一种芯片系统,该芯片系统包括处理器,还可以包括存储器和通信接口,用于实现本申请上述可行驶区域检测方法实施例中的技术方案,其实现原理和技术效果类似,此处不再赘述。示例性地,该芯片系统可以由芯片构成,也可以包含芯片和其它分立器件。
本申请实施例还提供一种程序,该程序在被处理器执行时用于执行本申请上述可行驶区域检测方法实施例中的技术方案,其实现原理和技术效果类似,此处不再赘述。
本申请实施例还提供一种包含指令的计算机程序产品,当其在计算机上运行时,使得计算机执行本申请上述可行驶区域检测方法实施例中的技术方案,其实现原理和技术效果类似,此处不再赘述。
本申请实施例中涉及的处理器可以是通用处理器、数字信号处理器、专用集成电路、现场可编程门阵列或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件,可以实现或者执行本申请实施例中的公开的各方法、步骤及逻辑框图。通用处理器可以是微处理器或者任何常规的处理器等。结合本申请实施例所公开的方法的步骤可以直接体现为硬件处理器执行完成,或者用处理器中的硬件及软件模块组合执行完成。
本申请实施例中涉及的存储器可以是非易失性存储器,比如硬盘(hard disk drive,HDD)或固态硬盘(solid-state drive,SSD)等,还可以是易失性存储器(volatile memory),例如随机存取存储器(random-access memory,RAM)。存储器是能够用于携带或存储具有指令或数据结构形式的期望的程序代码并能够由计算机存取的任何其他介质,但不限于此。
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the described apparatus embodiments are merely illustrative. For example, the division of the units is merely a logical function division; in actual implementation there may be other divisions, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be implemented through some interfaces, and the indirect couplings or communication connections between apparatuses or units may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of this application may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware, or in the form of hardware plus a software functional unit.
Those of ordinary skill in the art may understand that, in the various embodiments of this application, the sequence numbers of the foregoing processes do not imply an order of execution; the order of execution of the processes should be determined by their functions and internal logic, and shall not constitute any limitation on the implementation of the embodiments of this application.
All or some of the foregoing embodiments may be implemented by software, hardware, firmware, or any combination thereof. When implemented by software, they may be implemented fully or partially in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the procedures or functions according to the embodiments of this application are fully or partially generated. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another computer-readable storage medium; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired (for example, coaxial cable, optical fiber, or digital subscriber line (DSL)) or wireless (for example, infrared, radio, or microwave) manner. The computer-readable storage medium may be any usable medium accessible by a computer, or a data storage device such as a server or a data center integrating one or more usable media. The usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a DVD), or a semiconductor medium (for example, a solid-state drive (SSD)).

Claims (22)

  1. A drivable area detection method, comprising:
    obtaining driving state data and drivable area data of a vehicle at a current vehicle time;
    determining, based on the driving state data and the drivable area data, whether a driving scene of the vehicle meets a region of interest (ROI) parameter switching condition;
    if the driving scene of the vehicle meets the ROI parameter switching condition, adjusting a current ROI parameter of the vehicle to a target ROI parameter; and
    obtaining drivable area data of the vehicle at a next vehicle time according to the target ROI parameter.
  2. The method according to claim 1, wherein the target ROI parameter comprises at least one of the following: an image preprocessing parameter, a point cloud generation parameter, or a drivable area generation parameter;
    wherein the image preprocessing parameter comprises at least one of the following: a size parameter of an image ROI, a position parameter of the image ROI, or a number of image scaling layers;
    the point cloud generation parameter comprises at least one of the following: a support point grid step parameter, a support point sparsity parameter, or a support point distribution parameter; and
    the drivable area generation parameter comprises an occupancy grid resolution parameter.
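The parameter families enumerated in claim 2 (and the mirroring apparatus claim 11) can be collected into a small configuration structure. The following Python sketch is purely illustrative and is not part of the claims; all field names and default values are assumptions introduced for exposition:

```python
from dataclasses import dataclass, field

@dataclass
class ImagePreprocessParams:
    roi_size: tuple = (1280, 400)       # image ROI size (w, h) -- illustrative values
    roi_position: tuple = (0, 300)      # image ROI top-left position -- illustrative
    num_scale_layers: int = 3           # number of image scaling (pyramid) layers

@dataclass
class PointCloudParams:
    grid_step: int = 8                  # support point grid step parameter
    sparsity: float = 0.5               # support point sparsity parameter
    distribution: str = "uniform"       # support point distribution mode parameter

@dataclass
class DrivableAreaParams:
    occupancy_grid_resolution: float = 0.2  # occupancy grid cell size (metres), illustrative

@dataclass
class TargetROIParams:
    """Bundle of the three claimed parameter families."""
    image: ImagePreprocessParams = field(default_factory=ImagePreprocessParams)
    point_cloud: PointCloudParams = field(default_factory=PointCloudParams)
    drivable_area: DrivableAreaParams = field(default_factory=DrivableAreaParams)
```

Switching the ROI parameters for a new driving scene then amounts to replacing one `TargetROIParams` instance with another preset instance.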
  3. The method according to claim 1 or 2, wherein the ROI parameter switching condition comprises any one of the following:
    a preset ROI parameter switching condition for a highway driving scene, a preset ROI parameter switching condition for a congested road driving scene, or a preset ROI parameter switching condition for a narrow-space uphill/downhill driving scene;
    wherein the preset ROI parameter switching condition for the congested road driving scene comprises: the vehicle travels in a current lane at a driving speed lower than a first preset speed along the driving direction at the previous vehicle time, and an obstacle exists in the current lane within a first preset distance from the vehicle; and
    the preset ROI parameter switching condition for the narrow-space uphill/downhill driving scene comprises: the vehicle travels in the current lane at a driving speed lower than the first preset speed along the driving direction at the previous vehicle time, the accelerator pedal state and the brake pedal state of the vehicle change intermittently, and the vehicle has a movement distance greater than a second preset distance in the direction perpendicular to the horizontal ground.
  4. The method according to claim 3, wherein the preset ROI parameter switching condition for the highway driving scene comprises any one of the following: a preset ROI parameter switching condition for a first sub-scene, a preset ROI parameter switching condition for a second sub-scene, a preset ROI parameter switching condition for a third sub-scene, or a preset ROI parameter switching condition for a fourth sub-scene;
    wherein the preset ROI parameter switching condition for the first sub-scene comprises: the vehicle travels in the current lane at a driving speed higher than a second preset speed along the driving direction at the previous vehicle time, and no obstacle exists in a lane adjacent to the current lane within a third preset distance from the vehicle;
    the preset ROI parameter switching condition for the second sub-scene comprises: the vehicle travels in the current lane at a driving speed higher than the second preset speed along the driving direction at the previous vehicle time, and an obstacle exists in the lane adjacent to the current lane within the third preset distance from the vehicle;
    the preset ROI parameter switching condition for the third sub-scene comprises: the vehicle travels in the current lane at a driving speed higher than the second preset speed along the driving direction at the previous vehicle time, and an obstacle exists in the current lane within the third preset distance from the vehicle; and
    the preset ROI parameter switching condition for the fourth sub-scene comprises: the vehicle travels in the current lane at a driving speed higher than the second preset speed along a driving direction different from that at the previous vehicle time.
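The switching conditions of claims 3 and 4 can be paraphrased as a decision function over the vehicle's driving state and obstacle observations. The Python sketch below is illustrative only; the dictionary keys, scene labels, and the threshold values (the first/second preset speeds and the first/second/third preset distances) are assumptions introduced for exposition, not values from the claims:

```python
def match_switching_condition(state, env,
                              v_low=10.0, v_high=25.0,   # first / second preset speeds (m/s)
                              d_near=15.0, d_climb=0.5,  # first / second preset distances (m)
                              d_far=50.0):               # third preset distance (m)
    """Return the label of the scene whose preset ROI parameter
    switching condition is met, or None if no condition matches."""
    same_dir = state["direction"] == state["prev_direction"]
    ahead = env["obstacle_ahead_within"]        # distance to obstacle in current lane, or None
    adjacent = env["obstacle_adjacent_within"]  # distance to obstacle in adjacent lane, or None

    if state["speed"] < v_low and same_dir:
        # congested road: slow, same heading, obstacle close ahead in current lane
        if ahead is not None and ahead < d_near:
            return "congested"
        # narrow-space uphill/downhill: slow, intermittent pedal use, vertical movement
        if state["pedals_intermittent"] and abs(state["vertical_displacement"]) > d_climb:
            return "narrow_slope"

    if state["speed"] > v_high:
        if not same_dir:
            return "highway_sub4"   # fourth sub-scene: heading change at high speed
        if ahead is not None and ahead < d_far:
            return "highway_sub3"   # third sub-scene: obstacle in current lane
        if adjacent is not None and adjacent < d_far:
            return "highway_sub2"   # second sub-scene: obstacle in adjacent lane
        return "highway_sub1"       # first sub-scene: clear adjacent lane
    return None
```

A matched label would then select the corresponding preset target ROI parameters before the next detection cycle.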
  5. The method according to any one of claims 2 to 4, wherein the obtaining drivable area data of the vehicle at the next vehicle time according to the target ROI parameter comprises:
    obtaining driving state estimation data and point cloud data of the vehicle at the next vehicle time according to the target ROI parameter, driving state estimation data and drivable area data of the vehicle at the current vehicle time, and binocular image data of the vehicle at the next vehicle time; and
    obtaining the drivable area data of the vehicle at the next vehicle time according to the driving state estimation data and the point cloud data of the vehicle at the next vehicle time.
  6. The method according to claim 5, wherein if the target ROI parameter comprises the image preprocessing parameter and the point cloud generation parameter, the obtaining driving state estimation data and point cloud data of the vehicle at the next vehicle time according to the target ROI parameter, the driving state estimation data and the drivable area data of the vehicle at the current vehicle time, and the binocular image data of the vehicle at the next vehicle time comprises:
    performing image preprocessing on the binocular image data of the vehicle at the next vehicle time according to the image preprocessing parameter and the driving state estimation data and drivable area data of the vehicle at the current vehicle time, to obtain image processing data of the vehicle at the next vehicle time;
    performing state estimation processing according to the driving state estimation data of the vehicle at the current vehicle time and the binocular image data of the vehicle at the next vehicle time, to obtain the driving state estimation data of the vehicle at the next vehicle time; and
    performing point cloud generation processing on the image processing data and the driving state estimation data of the vehicle at the next vehicle time according to the point cloud generation parameter, to obtain the point cloud data of the vehicle at the next vehicle time.
  7. The method according to claim 6, wherein if the target ROI parameter further comprises the drivable area generation parameter, the obtaining the drivable area data of the vehicle at the next vehicle time according to the driving state estimation data and the point cloud data of the vehicle at the next vehicle time comprises:
    performing drivable area generation processing on the driving state estimation data and the point cloud data of the vehicle at the next vehicle time according to the drivable area generation parameter, to obtain the drivable area data of the vehicle at the next vehicle time.
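Claims 5 to 7 together describe one update step from the current vehicle time to the next. The Python sketch below illustrates the data flow only; the function names and signatures are assumptions, and the actual stereo, state estimation, and grid-generation algorithms are injected as opaque callables rather than implemented:

```python
def detect_next_frame(params, est_t, area_t, stereo_t1,
                      preprocess, estimate_state, generate_points, generate_area):
    """One detection cycle. All processing steps are injected callables (stubs).

    params    : dict of target ROI parameters with illustrative keys
                "image", "point_cloud", "drivable_area"
    est_t     : driving state estimation data at the current vehicle time
    area_t    : drivable area data at the current vehicle time
    stereo_t1 : binocular image data at the next vehicle time
    """
    # 1. Image preprocessing guided by the current estimate and drivable area (claim 6)
    img_t1 = preprocess(params["image"], est_t, area_t, stereo_t1)
    # 2. Driving state estimation from the new stereo pair (claim 6)
    est_t1 = estimate_state(est_t, stereo_t1)
    # 3. Point cloud generation from preprocessed images and the new estimate (claim 6)
    pts_t1 = generate_points(params["point_cloud"], img_t1, est_t1)
    # 4. Drivable area generation from the new estimate and point cloud (claim 7)
    area_t1 = generate_area(params["drivable_area"], est_t1, pts_t1)
    return est_t1, area_t1
```

The outputs `est_t1` and `area_t1` feed back in as the "current" data of the following cycle, so the same function can be iterated frame by frame.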
  8. The method according to any one of claims 1 to 7, wherein before the obtaining drivable area data of the vehicle at the next vehicle time according to the target ROI parameter, the method further comprises:
    projecting the drivable area data of the vehicle at the current vehicle time onto an ROI corresponding to the target ROI parameter.
  9. The method according to any one of claims 1 to 8, wherein the driving state data comprises at least one of the following: a driving speed, a driving direction, an accelerator pedal state, or a brake pedal state.
  10. A drivable area detection apparatus, comprising:
    a first obtaining module, configured to obtain driving state data and drivable area data of a vehicle at a current vehicle time;
    a determining module, configured to determine, based on the driving state data and the drivable area data, whether a driving scene of the vehicle meets a region of interest (ROI) parameter switching condition;
    an adjustment module, configured to adjust a current ROI parameter of the vehicle to a target ROI parameter if the driving scene of the vehicle meets the ROI parameter switching condition; and
    a second obtaining module, configured to obtain drivable area data of the vehicle at a next vehicle time according to the target ROI parameter.
  11. The apparatus according to claim 10, wherein the target ROI parameter comprises at least one of the following: an image preprocessing parameter, a point cloud generation parameter, or a drivable area generation parameter;
    wherein the image preprocessing parameter comprises at least one of the following: a size parameter of an image ROI, a position parameter of the image ROI, or a number of image scaling layers;
    the point cloud generation parameter comprises at least one of the following: a support point grid step parameter, a support point sparsity parameter, or a support point distribution parameter; and
    the drivable area generation parameter comprises an occupancy grid resolution parameter.
  12. The apparatus according to claim 10 or 11, wherein the ROI parameter switching condition comprises any one of the following:
    a preset ROI parameter switching condition for a highway driving scene, a preset ROI parameter switching condition for a congested road driving scene, or a preset ROI parameter switching condition for a narrow-space uphill/downhill driving scene;
    wherein the preset ROI parameter switching condition for the congested road driving scene comprises: the vehicle travels in a current lane at a driving speed lower than a first preset speed along the driving direction at the previous vehicle time, and an obstacle exists in the current lane within a first preset distance from the vehicle; and
    the preset ROI parameter switching condition for the narrow-space uphill/downhill driving scene comprises: the vehicle travels in the current lane at a driving speed lower than the first preset speed along the driving direction at the previous vehicle time, the accelerator pedal state and the brake pedal state of the vehicle change intermittently, and the vehicle has a movement distance greater than a second preset distance in the direction perpendicular to the horizontal ground.
  13. The apparatus according to claim 12, wherein the preset ROI parameter switching condition for the highway driving scene comprises any one of the following: a preset ROI parameter switching condition for a first sub-scene, a preset ROI parameter switching condition for a second sub-scene, a preset ROI parameter switching condition for a third sub-scene, or a preset ROI parameter switching condition for a fourth sub-scene;
    wherein the preset ROI parameter switching condition for the first sub-scene comprises: the vehicle travels in the current lane at a driving speed higher than a second preset speed along the driving direction at the previous vehicle time, and no obstacle exists in a lane adjacent to the current lane within a third preset distance from the vehicle;
    the preset ROI parameter switching condition for the second sub-scene comprises: the vehicle travels in the current lane at a driving speed higher than the second preset speed along the driving direction at the previous vehicle time, and an obstacle exists in the lane adjacent to the current lane within the third preset distance from the vehicle;
    the preset ROI parameter switching condition for the third sub-scene comprises: the vehicle travels in the current lane at a driving speed higher than the second preset speed along the driving direction at the previous vehicle time, and an obstacle exists in the current lane within the third preset distance from the vehicle; and
    the preset ROI parameter switching condition for the fourth sub-scene comprises: the vehicle travels in the current lane at a driving speed higher than the second preset speed along a driving direction different from that at the previous vehicle time.
  14. The apparatus according to any one of claims 11 to 13, wherein the second obtaining module comprises:
    a first obtaining unit, configured to obtain driving state estimation data and point cloud data of the vehicle at the next vehicle time according to the target ROI parameter, driving state estimation data and drivable area data of the vehicle at the current vehicle time, and binocular image data of the vehicle at the next vehicle time; and
    a second obtaining unit, configured to obtain the drivable area data of the vehicle at the next vehicle time according to the driving state estimation data and the point cloud data of the vehicle at the next vehicle time.
  15. The apparatus according to claim 14, wherein if the target ROI parameter comprises the image preprocessing parameter and the point cloud generation parameter, the first obtaining unit is specifically configured to:
    perform image preprocessing on the binocular image data of the vehicle at the next vehicle time according to the image preprocessing parameter and the driving state estimation data and drivable area data of the vehicle at the current vehicle time, to obtain image processing data of the vehicle at the next vehicle time;
    perform state estimation processing according to the driving state estimation data of the vehicle at the current vehicle time and the binocular image data of the vehicle at the next vehicle time, to obtain the driving state estimation data of the vehicle at the next vehicle time; and
    perform point cloud generation processing on the image processing data and the driving state estimation data of the vehicle at the next vehicle time according to the point cloud generation parameter, to obtain the point cloud data of the vehicle at the next vehicle time.
  16. The apparatus according to claim 15, wherein if the target ROI parameter further comprises the drivable area generation parameter, the second obtaining unit is specifically configured to:
    perform drivable area generation processing on the driving state estimation data and the point cloud data of the vehicle at the next vehicle time according to the drivable area generation parameter, to obtain the drivable area data of the vehicle at the next vehicle time.
  17. The apparatus according to any one of claims 10 to 16, further comprising:
    a projection module, configured to project the drivable area data of the vehicle at the current vehicle time onto an ROI corresponding to the target ROI parameter.
  18. The apparatus according to any one of claims 10 to 17, wherein the driving state data comprises at least one of the following: a driving speed, a driving direction, an accelerator pedal state, or a brake pedal state.
  19. A drivable area detection apparatus, comprising: a processor, a memory, and a communication interface;
    wherein the communication interface is configured to obtain data to be processed;
    the memory is configured to store program instructions;
    the processor is configured to invoke and execute the program instructions stored in the memory, and when the processor executes the program instructions stored in the memory, the drivable area detection apparatus is configured to perform the method according to any one of claims 1 to 9 on the data to be processed, to obtain processed data; and
    the communication interface is further configured to output the processed data.
  20. A chip, comprising the drivable area detection apparatus according to claim 19.
  21. An in-vehicle device, comprising the drivable area detection apparatus according to claim 19.
  22. A computer-readable storage medium, wherein the computer-readable storage medium is configured to store a computer program, and the computer program is used to implement the method according to any one of claims 1 to 9.
PCT/CN2021/092822 2020-05-20 2021-05-10 可行驶区域检测方法、装置、设备及存储介质 WO2021233154A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010429511.0A CN113705272A (zh) 2020-05-20 2020-05-20 可行驶区域检测方法、装置、设备及存储介质
CN202010429511.0 2020-05-20

Publications (1)

Publication Number Publication Date
WO2021233154A1 true WO2021233154A1 (zh) 2021-11-25

Family

ID=78645514

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/092822 WO2021233154A1 (zh) 2020-05-20 2021-05-10 可行驶区域检测方法、装置、设备及存储介质

Country Status (2)

Country Link
CN (1) CN113705272A (zh)
WO (1) WO2021233154A1 (zh)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114445786A (zh) * 2021-12-30 2022-05-06 深圳云天励飞技术股份有限公司 道路拥堵检测方法、装置、电子设备及存储介质
CN115223119A (zh) * 2022-06-15 2022-10-21 广州汽车集团股份有限公司 一种可行驶区域检测方法与系统
CN115396141A (zh) * 2022-07-19 2022-11-25 岚图汽车科技有限公司 一种车辆安全控制方法、装置、设备和介质
CN117422808A (zh) * 2023-12-19 2024-01-19 中北数科(河北)科技有限公司 一种三维场景数据的加载方法及电子设备

Families Citing this family (1)

Publication number Priority date Publication date Assignee Title
CN114387579B (zh) * 2021-12-21 2025-07-11 南京佑驾科技有限公司 一种快速道路的目标检测方法、装置及存储介质

Citations (5)

Publication number Priority date Publication date Assignee Title
CN105480227A (zh) * 2015-12-29 2016-04-13 大连楼兰科技股份有限公司 主动驾驶技术中基于红外雷达与视频图像信息融合的方法
US20180154825A1 (en) * 2015-07-31 2018-06-07 Hitachi Automotive Systems, Ltd. Vehicle Periphery Information Management Device
CN109919144A (zh) * 2019-05-15 2019-06-21 长沙智能驾驶研究院有限公司 可行驶区域检测方法、装置、计算机存储介质及路测视觉设备
CN109977845A (zh) * 2019-03-21 2019-07-05 百度在线网络技术(北京)有限公司 一种可行驶区域检测方法及车载终端
CN110376594A (zh) * 2018-08-17 2019-10-25 北京京东尚科信息技术有限公司 一种基于拓扑图的智能导航的方法和系统

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
CN107215339B (zh) * 2017-06-26 2019-08-23 地壳机器人科技有限公司 自动驾驶车辆的换道控制方法和装置
CN109017785B (zh) * 2018-08-09 2020-06-26 北京智行者科技有限公司 车辆换道行驶方法
CN115578711A (zh) * 2019-05-21 2023-01-06 华为技术有限公司 自动换道方法、装置及存储介质
CN110008941B (zh) * 2019-06-05 2020-01-17 长沙智能驾驶研究院有限公司 可行驶区域检测方法、装置、计算机设备和存储介质
CN110737266B (zh) * 2019-09-17 2022-11-18 中国第一汽车股份有限公司 一种自动驾驶控制方法、装置、车辆和存储介质

Patent Citations (5)

Publication number Priority date Publication date Assignee Title
US20180154825A1 (en) * 2015-07-31 2018-06-07 Hitachi Automotive Systems, Ltd. Vehicle Periphery Information Management Device
CN105480227A (zh) * 2015-12-29 2016-04-13 大连楼兰科技股份有限公司 主动驾驶技术中基于红外雷达与视频图像信息融合的方法
CN110376594A (zh) * 2018-08-17 2019-10-25 北京京东尚科信息技术有限公司 一种基于拓扑图的智能导航的方法和系统
CN109977845A (zh) * 2019-03-21 2019-07-05 百度在线网络技术(北京)有限公司 一种可行驶区域检测方法及车载终端
CN109919144A (zh) * 2019-05-15 2019-06-21 长沙智能驾驶研究院有限公司 可行驶区域检测方法、装置、计算机存储介质及路测视觉设备

Cited By (6)

Publication number Priority date Publication date Assignee Title
CN114445786A (zh) * 2021-12-30 2022-05-06 深圳云天励飞技术股份有限公司 道路拥堵检测方法、装置、电子设备及存储介质
CN115223119A (zh) * 2022-06-15 2022-10-21 广州汽车集团股份有限公司 一种可行驶区域检测方法与系统
CN115223119B (zh) * 2022-06-15 2024-06-11 广州汽车集团股份有限公司 一种可行驶区域检测方法与系统
CN115396141A (zh) * 2022-07-19 2022-11-25 岚图汽车科技有限公司 一种车辆安全控制方法、装置、设备和介质
CN117422808A (zh) * 2023-12-19 2024-01-19 中北数科(河北)科技有限公司 一种三维场景数据的加载方法及电子设备
CN117422808B (zh) * 2023-12-19 2024-03-19 中北数科(河北)科技有限公司 一种三维场景数据的加载方法及电子设备

Also Published As

Publication number Publication date
CN113705272A (zh) 2021-11-26

Similar Documents

Publication Publication Date Title
WO2021233154A1 (zh) 可行驶区域检测方法、装置、设备及存储介质
US20200365030A1 (en) Vehicular control system using influence mapping for conflict avoidance path determination
US12043283B2 (en) Detection of near-range and far-range small objects for autonomous vehicles
JP2024020237A (ja) 自動運転のための3次元特徴の予測
WO2021008605A1 (zh) 一种确定车速的方法和装置
CN113297881B (zh) 一种目标检测方法及相关装置
KR20210058696A (ko) 3d 대상체 검출을 위한 순차 융합
CN113496201B (zh) 物体状态识别装置、方法、计算机可读取的记录介质及控制装置
US11004233B1 (en) Intelligent vision-based detection and ranging system and method
WO2020154990A1 (zh) 目标物体运动状态检测方法、设备及存储介质
US12033397B2 (en) Controller, method, and computer program for controlling vehicle
CN110371016A (zh) 车辆前灯的距离估计
CN110944895B (zh) 用于根据由车辆的摄像机所拍摄的图像序列来求取光流的方法和设备
CN117111055A (zh) 一种基于雷视融合的车辆状态感知方法
Tsai et al. Accurate and fast obstacle detection method for automotive applications based on stereo vision
CN108475471B (zh) 车辆判定装置、车辆判定方法和计算机可读的记录介质
Saleh et al. Towards robust perception depth information for collision avoidance
CN117622205A (zh) 车辆控制装置、车辆控制方法以及车辆控制用计算机程序
CN116246235A (zh) 基于行泊一体的目标检测方法、装置、电子设备和介质
US11544899B2 (en) System and method for generating terrain maps
JP2022147745A (ja) 移動体の制御装置及び制御方法並びに車両
US12125215B1 (en) Stereo vision system and method for small-object detection and tracking in real time
US12094144B1 (en) Real-time confidence-based image hole-filling for depth maps
US20250209828A1 (en) Method for providing a free-space estimation with motion data
WO2024180708A1 (ja) 物標認識装置及び物標認識方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21809724

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21809724

Country of ref document: EP

Kind code of ref document: A1