CN112419154A - Method, device, equipment and computer readable storage medium for detecting travelable area


Info

Publication number
CN112419154A
Authority
CN
China
Prior art keywords: image, region, detection method, travelable, images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011345606.0A
Other languages
Chinese (zh)
Inventor
王苗苗 (Wang Miaomiao)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sany Special Vehicle Co Ltd
Original Assignee
Sany Special Vehicle Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sany Special Vehicle Co Ltd filed Critical Sany Special Vehicle Co Ltd
Priority to CN202011345606.0A
Publication of CN112419154A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038 Image mosaicing, e.g. composing plane images from plane sub-images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00 Indexing scheme for image data processing or generation, in general
    • G06T2200/32 Indexing scheme for image data processing or generation, in general involving image mosaicing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30248 Vehicle exterior or interior
    • G06T2207/30252 Vehicle exterior; Vicinity of vehicle
    • G06T2207/30256 Lane; Road marking

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a travelable region detection method, a travelable region detection device, travelable region detection equipment, and a computer-readable storage medium. The travelable region detection method includes: acquiring captured images from at least two vision sensors; performing surround-view stitching on the captured images to obtain a surround-view stitched image; performing network segmentation on the surround-view stitched image to obtain region boundary pixel points of the travelable region; obtaining obstacle distances from the region boundary pixel points; and obtaining the region range of the travelable region from the region boundary pixel points and the obstacle distances.

Description

Method, device, equipment and computer readable storage medium for detecting travelable area
Technical Field
The present application belongs to the technical field of vehicles, and in particular relates to a travelable region detection method, a travelable region detection device, travelable region detection equipment, and a computer-readable storage medium.
Background
In the related art, travelable region detection is one of the key technologies of intelligent driving. Engineering vehicles have very large blind zones while driving, and adding a travelable region detection function is an urgent active-safety measure for protecting the lives and property of the vehicle operator and of others. The mainstream approaches in industry are to extract a travelable region by segmenting texture features, or to segment the road surface, vehicles, sidewalks, and so on with a deep-learning segmentation model, whose output can likewise be used to detect the travelable region. However, texture-feature detection based on traditional machine vision generalizes poorly and cannot adapt to complex scenes; the deep-learning approach can accurately segment the various targets in an image, but its computational cost is high, real-time operation is difficult, and its speed cannot meet the requirement. In addition, unmanned driving in the related art relies on high-precision maps, which are costly.
Disclosure of Invention
Embodiments according to the present invention aim to solve or improve at least one of the above technical problems.
A first object according to an embodiment of the present invention is to provide a travelable region detection method.
A second object according to an embodiment of the present invention is to provide a travelable region detection apparatus.
It is a third object according to an embodiment of the present invention to provide a computer apparatus.
It is a fourth object according to an embodiment of the present invention to provide a computer-readable storage medium.
To achieve the first object, the technical solution of the present invention provides a travelable region detection method for detecting the region range of a travelable region of a vehicle in a driving environment. The travelable region detection method includes: acquiring captured images from at least two vision sensors; performing surround-view stitching on the captured images to obtain a surround-view stitched image; performing network segmentation on the surround-view stitched image to obtain region boundary pixel points of the travelable region; obtaining obstacle distances from the region boundary pixel points; and obtaining the region range from the region boundary pixel points and the obstacle distances.
In this technical solution, the vision sensors capture the driving environment. At least two vision sensors are used so that the driving environment can be captured from all around, and the captured images from the two or more vision sensors are stitched into a surround-view stitched image. The surround-view stitched image is fed into a segmentation network, which performs network segmentation and outputs a segmentation result from which the region boundary pixel points of the travelable region are obtained. Coordinate conversion is then performed on the region boundary pixel points to obtain the distances to obstacles ahead, behind, and on the left and right sides. Finally, the region range of the travelable region is computed from the region boundary pixel points and the obstacle distances. The travelable region detection method of this embodiment can obtain images of the surrounding driving environment in real time through the vision sensors, perform a series of computations on the captured images, and output the region range in real time.
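As a high-level sketch of the four stages just described, the following Python snippet wires them together with crude stand-in implementations; every function body below is a placeholder, not the patent's actual algorithm.

```python
import numpy as np

def stitch_surround_view(frames):
    # placeholder: in practice the frames are undistorted, warped and blended
    return np.hstack(frames)

def segment_drivable(stitched):
    # placeholder: a segmentation network would output a per-pixel drivable mask
    return stitched.mean(axis=2) > 100

def boundary_pixels(mask):
    # a boundary pixel is a drivable pixel whose vertical neighbours are not both drivable
    up, down = np.roll(mask, 1, axis=0), np.roll(mask, -1, axis=0)
    return mask & ~(up & down)

def obstacle_distances(boundary, metres_per_pixel, ego_rc):
    rows, cols = np.nonzero(boundary)
    return np.hypot(rows - ego_rc[0], cols - ego_rc[1]) * metres_per_pixel

frames = [np.random.randint(0, 255, (480, 640, 3), np.uint8) for _ in range(4)]
stitched = stitch_surround_view(frames)
mask = segment_drivable(stitched)
dists = obstacle_distances(boundary_pixels(mask), 0.02, (mask.shape[0] // 2, mask.shape[1] // 2))
print("nearest obstacle: %.2f m" % dists.min() if dists.size else "no boundary found")
```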
In addition, the technical solution provided by the embodiment of the present invention may further have the following additional technical features:
In the above technical solution, performing surround-view stitching on the captured images to obtain the surround-view stitched image specifically includes: judging whether the at least two vision sensors need to be calibrated; acquiring a stitching lookup table; reducing the parameters in the stitching lookup table; reducing the parameters of the captured images so that they are consistent with the parameters in the reduced stitching lookup table; generating a surround-view bird's-eye-view image according to the reduced stitching lookup table; and generating the surround-view stitched image from the surround-view bird's-eye-view image.
In this technical solution, in the specific process of stitching the captured images of the vision sensors into the surround-view stitched image, it must first be judged whether each vision sensor has been calibrated; the purpose is to decide between generating a stitching lookup table from the captured images and calling a pre-stored stitching lookup table. Judging first saves time and computation, which improves efficiency and helps guarantee driving safety. Because the position of each vision sensor changes with the driving environment and the sizes of the captured images are not uniform, the parameters of the captured images must be reduced for ease of computation; the parameters in the stitching lookup table are reduced at the same time so that the two remain consistent. A complete stitched surround-view bird's-eye-view image can then be mapped by table lookup. After the stitching lookup table is obtained, the complete surround-view bird's-eye-view image is mapped by lookup, the correspondence between the pixel points of the stitched bird's-eye-view image and the world coordinate system is established, and the surround-view stitched image is generated from the surround-view bird's-eye-view image. The resulting surround-view stitched image is more complete and clear, which lays a solid foundation for finally obtaining an accurate region range of the travelable region.
In any of the above technical solutions, obtaining the stitching lookup table specifically includes: calibrating the at least two vision sensors to generate calibration parameters; performing distortion correction on the captured images according to the calibration parameters to obtain distortion-corrected images; calculating and storing, according to the calibration parameters, the homography matrix parameters of the distortion-corrected images of any two adjacent vision sensors among the at least two vision sensors; selecting a top-view plane, and calculating and storing the top-view transformation matrix parameters of the distortion-corrected images; and generating the stitching lookup table from the calibration parameters, the homography matrix parameters, and the top-view transformation matrix parameters.
In this technical solution, the calibration parameters are generated by calibrating the vision sensors. Because images captured while driving may be blurred, deformed, and so on, distortion correction must be performed according to the calibration parameters to obtain distortion-corrected images, which further ensures that the resulting surround-view stitched image is true to the scene and accurate. After the distortion-corrected images are obtained, the homography matrix parameters of the distortion-corrected images of any two adjacent vision sensors are calculated. A homography matrix describes the positional mapping of an object between the world coordinate system and the pixel coordinate system; the corresponding transformation matrix is called the homography matrix. A top-view transformation is then applied to the distortion-corrected images: a top-view plane is selected first, and the projective transformation matrix, i.e. the top-view transformation matrix, is calculated and stored from the correspondence between the original coordinates of the four vertex coordinates of the surround-view image and the top-view point coordinates, giving the top-view transformation matrix parameters. Finally, the stitching lookup table is generated from the calibration parameters, the homography matrix parameters, and the top-view transformation matrix parameters. By storing the stitching lookup table, it can be called directly the next time the captured images of the vision sensors are stitched, saving the time required to obtain the surround-view stitched image.
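The following Python/OpenCV snippet is only an illustrative sketch of this idea, not the patent's implementation: the homography between two adjacent, already distortion-corrected views and an assumed top-view transformation matrix are combined into a per-pixel stitching lookup table. The point correspondences and the matrix values are placeholders.

```python
import cv2
import numpy as np

# Placeholder ground-plane correspondences between two adjacent, distortion-corrected views.
pts_cam_a = np.float32([[120, 400], [500, 410], [130, 300], [480, 310]])
pts_cam_b = np.float32([[ 40, 395], [420, 405], [ 55, 295], [400, 305]])
H_ab, _ = cv2.findHomography(pts_cam_b, pts_cam_a)   # maps camera-B pixels into camera-A's frame

# Assumed top-view (projective) transformation matrix; in practice it is computed from
# the four vertex correspondences between the image plane and the chosen top-view plane.
M_top = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0],
                  [0.0, -0.001, 1.0]])

# Build the lookup table: for every bird's-eye-view output pixel, store the source
# coordinate obtained by inverting the combined transform.
h_out, w_out = 500, 640
ys, xs = np.mgrid[0:h_out, 0:w_out]
out_pts = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).T.astype(np.float64)
src = np.linalg.inv(M_top @ H_ab) @ out_pts
lut = (src[:2] / src[2]).T.reshape(h_out, w_out, 2).astype(np.float32)
# lut[..., 0] / lut[..., 1] can later be fed to cv2.remap as map_x / map_y
```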
In any of the above technical solutions, generating the surround-view bird's-eye-view image according to the reduced stitching lookup table specifically includes: performing distortion correction on the reduced captured images; performing image transformation on the distortion-corrected captured images; performing online pose optimization on the transformed captured images; stitching the pose-optimized captured images together; and performing top-view transformation on the stitched image to generate the surround-view bird's-eye-view image.
In this technical solution, whether the stitching lookup table is generated automatically by the system or a previously generated table is called directly, once the stitching lookup table is available the surround-view bird's-eye-view image can be generated step by step from it. The specific steps are as follows. Distortion correction is first applied to the reduced captured images to remove blurred and deformed parts and make the images clearer and more complete. The distortion-corrected captured images are then transformed so that they join more smoothly. Because of the shooting angle and other factors, the pose of each captured image needs to be adjusted, so online pose optimization is performed on the transformed images, which facilitates the subsequent stitching. Finally, a top-view transformation is applied to the stitched image to generate the surround-view bird's-eye-view image. The surround-view bird's-eye-view image produced by reducing, distortion-correcting, transforming, pose-optimizing, stitching, and top-view transforming the captured images is more complete and clear.
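A minimal sketch, assuming OpenCV and a lookup table stored as per-pixel (x, y) source coordinates, of keeping the reduced captured image and the reduced lookup table consistent and warping one frame into a bird's-eye-view tile; the identity map below is only a stand-in for a real table.

```python
import cv2
import numpy as np

# Toy inputs standing in for one captured frame and a precomputed lookup table.
raw_frame = np.random.randint(0, 255, (480, 640, 3), np.uint8)
lut = np.dstack(np.meshgrid(np.arange(640, dtype=np.float32),
                            np.arange(480, dtype=np.float32)))  # identity map

# Shrink both the frame and the lookup table by the same factor so they stay
# consistent, then warp with cv2.remap to obtain one tile of the bird's-eye view.
scale = 0.5
frame = cv2.resize(raw_frame, None, fx=scale, fy=scale)
map_x = cv2.resize(lut[..., 0], None, fx=scale, fy=scale) * scale
map_y = cv2.resize(lut[..., 1], None, fx=scale, fy=scale) * scale
birdseye_tile = cv2.remap(frame, map_x, map_y, interpolation=cv2.INTER_LINEAR)
```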
In any of the above technical solutions, generating the surround-view stitched image from the surround-view bird's-eye-view image specifically includes: sampling the surround-view bird's-eye-view image to obtain a sampled image with a target resolution; and generating the surround-view stitched image from the sampled image.
In this technical solution, the surround-view bird's-eye-view image is large and changes in real time, so it needs to be preprocessed into a surround-view image that satisfies a given resolution. The surround-view bird's-eye-view image is first sampled to obtain a sampled image at the target resolution, which makes the image clearer. The surround-view stitched image generated from the sampled image is then more complete and clear, which makes the subsequent travelable-region detection more accurate.
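A short sketch, with an assumed 512x512 target resolution, of sampling the bird's-eye-view mosaic and converting it into the multi-channel layout a segmentation network typically expects:

```python
import cv2
import numpy as np

birdseye = np.random.randint(0, 255, (1000, 1000, 3), np.uint8)  # stand-in mosaic
target = (512, 512)                                              # assumed target resolution
sample = cv2.resize(birdseye, target, interpolation=cv2.INTER_AREA)
# convert to a normalised, channel-first tensor layout (1 x C x H x W)
net_input = sample.astype(np.float32).transpose(2, 0, 1)[None] / 255.0
```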
In any of the above technical solutions, performing network segmentation on the surround-view stitched image to obtain the region boundary pixel points of the travelable region specifically includes: extracting a plurality of candidate regions from the surround-view stitched image; extracting features of each candidate region; classifying each candidate region according to the extracted features; and mapping the features of the classified candidate regions onto the captured image through a deconvolution process to realize semantic segmentation, thereby obtaining the region boundary pixel points.
In this technical solution, a search method is used instead of traditional sliding windows, and a plurality of candidate regions can be extracted from each surround-view image. The purpose of feature extraction is to classify the candidate regions: they are divided into several categories according to the extracted features. Finally, for each category, a deep-learning network combining convolution layers and deconvolution layers is used to characterize, extract, and classify the features, and the features are mapped back onto the originally input captured image through a deconvolution process to realize semantic segmentation, thereby obtaining the region boundary pixel points.
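The PyTorch sketch below only illustrates the convolution plus deconvolution (transposed-convolution) idea described above; the layer sizes, the two-class output (drivable / not drivable), and the crude boundary extraction are assumptions, not the patent's network.

```python
import torch
import torch.nn as nn

class DrivableSeg(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(  # deconvolution maps features back to image size
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, num_classes, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

net = DrivableSeg()
logits = net(torch.randn(1, 3, 512, 512))                        # 1 x 2 x 512 x 512
drivable_mask = logits.argmax(dim=1)                              # per-pixel class labels
boundary = drivable_mask ^ torch.roll(drivable_mask, 1, dims=1)   # crude boundary pixels
```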
In any of the above technical solutions, estimating the obstacle distances from the region boundary pixel points specifically includes: acquiring the calibration parameters of the vision sensors; converting the coordinates of the surround-view stitched image into world coordinates according to the calibration parameters and the homography relation; and estimating the distances to the surrounding obstacles from the world coordinates.
In this technical solution, after the region boundary pixel points are obtained from the surround-view stitched image, a neural network can be trained by deep learning to obtain the network weights and parameters for the detection target, so that pixel distances in the surround-view stitched image are obtained. From the pixel distances of the surround-view stitched image, for which the correspondence has been established, the corresponding distances in the world coordinate system can be calculated and further converted into the vehicle coordinate system, so as to estimate the distances to the obstacles around the vehicle.
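A minimal sketch, assuming a known image-to-world homography from calibration (the matrix values and pixel coordinates below are illustrative), of converting region boundary pixels into vehicle-frame distances with cv2.perspectiveTransform:

```python
import cv2
import numpy as np

# Illustrative homography from stitched-image pixels to metres in the vehicle frame.
H_img_to_world = np.array([[0.02, 0.0, -6.4],
                           [0.0, 0.02, -5.0],
                           [0.0,  0.0,  1.0]])
boundary_px = np.float32([[[320, 120]], [[500, 260]]])           # region boundary pixels
world = cv2.perspectiveTransform(boundary_px, H_img_to_world).reshape(-1, 2)
distances = np.linalg.norm(world, axis=1)                        # distance from vehicle origin
print(distances)
```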
To achieve the second object, the technical solution of the present invention provides a travelable region detection apparatus including: a vision sensor arranged on the vehicle and used for capturing and sending images of the driving environment of the vehicle; and a processing device communicatively connected to the vision sensor and used for acquiring the captured images from the vision sensor, implementing the travelable region detection method of any of the above technical solutions on the basis of the captured images, and outputting the region range of the travelable region.
In this technical solution, since the travelable region detection apparatus is used to implement the travelable region detection method of any of the embodiments, it has all the advantageous effects of the travelable region detection method of any of the embodiments of the present invention.
To achieve the third object, the technical solution of the present invention provides a computer apparatus including: a memory storing a computer program; and an executor for executing the computer program, wherein the executor, when executing the computer program, implements the steps of the travelable region detection method of any of the above technical solutions.
In this technical solution, the computer device is used to implement the steps of the travelable region detection method of any embodiment, and therefore has all the beneficial effects of the travelable region detection method of any embodiment of the present invention.
To achieve the fourth object according to the embodiments of the present invention, the technical solution of the present invention provides a computer-readable storage medium storing a computer program, which when executed, implements the steps of the travelable region detection method according to any one of the technical solutions.
In this technical solution, the computer-readable storage medium is used to implement the steps of the travelable region detection method of any embodiment, and therefore has all the beneficial effects of the travelable region detection method of any embodiment of the present invention.
Additional aspects and advantages of embodiments in accordance with the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of embodiments in accordance with the invention.
Drawings
The above and/or additional aspects and advantages of embodiments according to the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a first flowchart of a travelable region detection method according to some embodiments of the invention;
FIG. 2 is a second flowchart of a travelable region detection method according to some embodiments of the invention;
FIG. 3 is a third flowchart of a travelable region detection method according to some embodiments of the invention;
FIG. 4 is a fourth flowchart of a travelable region detection method according to some embodiments of the invention;
FIG. 5 is a fifth flowchart of a travelable region detection method according to some embodiments of the invention;
FIG. 6 is a sixth flowchart of a travelable region detection method according to some embodiments of the invention;
FIG. 7 is a seventh flowchart of a travelable region detection method according to some embodiments of the invention;
FIG. 8 is a schematic diagram of the composition of a travelable region detection apparatus according to some embodiments of the invention;
FIG. 9 is a schematic diagram of the components of a computer device according to some embodiments of the invention;
FIG. 10 is an eighth flowchart of a travelable region detection method according to some embodiments of the invention;
FIG. 11 is a ninth flowchart of a travelable region detection method according to some embodiments of the invention;
FIG. 12 is a tenth flowchart of a travelable region detection method according to some embodiments of the invention;
FIG. 13 is an eleventh flowchart of a travelable region detection method according to some embodiments of the invention;
FIG. 14 is a twelfth flowchart of a travelable region detection method according to some embodiments of the invention.
Wherein, the correspondence between the reference numbers and the part names in fig. 1 to 14 is:
100: a travelable region detection device; 110: a vision sensor; 112: a front surround-view camera; 1142: a first front-right surround-view camera; 1144: a second front-right surround-view camera; 1162: a first front-left surround-view camera; 1164: a second front-left surround-view camera; 118: a rear surround-view camera; 120: a processing device; 200: a computer device; 210: a memory; 220: an executor.
Detailed Description
In order that the above objects, features and advantages of embodiments in accordance with the present invention can be more clearly understood, embodiments in accordance with the present invention are described in further detail below with reference to the accompanying drawings and detailed description. It should be noted that the embodiments and features of the embodiments of the present application may be combined with each other without conflict.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of embodiments according to the invention, however, embodiments according to the invention may be practiced in other ways than those described herein, and therefore the scope of embodiments according to the invention is not limited by the specific embodiments disclosed below.
In the related art, a travelable region detection method, a travelable region detection device, a travelable region detection apparatus, and a storage medium are also disclosed, and a specific implementation scheme of the travelable region detection method is as follows:
1. and acquiring three-dimensional point cloud data measured by the target vehicle at the current frame.
2. And determining an obstacle area in a surrounding perception range according to the three-dimensional point cloud data.
3. And acquiring a GPS track, and performing displacement operation on the GPS track to obtain a plurality of reference tracks.
4. And determining a road surface boundary candidate line from the plurality of reference tracks according to each reference track and the obstacle area.
5. And correcting the road boundary candidate line according to the three-dimensional point cloud data of the target laser points close to the road boundary candidate line to obtain the road boundary line of the current frame.
6. The method comprises the steps of obtaining a plurality of reference tracks by performing displacement operation on GPS tracks, determining a road surface boundary candidate line from the plurality of reference tracks, correcting the road surface boundary candidate line according to three-dimensional point cloud data close to the road surface boundary candidate line to obtain a road surface boundary line, and further obtaining a drivable area.
In the related art, the travelable region detection method relies on the GPS, which is costly.
Travelable region detection methods, apparatuses, devices, and computer-readable storage media according to some embodiments of the present invention are described below with reference to fig. 1 through 14.
Example 1
As shown in fig. 1, the present embodiment provides a travelable region detection method, including:
step S102: performing surround-view stitching on the captured images of at least two vision sensors to obtain a surround-view stitched image;
step S104: performing network segmentation on the surround-view stitched image to obtain region boundary pixel points of the travelable region;
step S106: obtaining obstacle distances from the region boundary pixel points;
step S108: obtaining the region range of the travelable region from the region boundary pixel points and the obstacle distances.
In this embodiment, the vision sensors capture the driving environment; a vision sensor may be a camera, a laser radar sensor, or the like. At least two vision sensors are used so that the driving environment can be captured from all around, and the captured images from the two or more vision sensors are stitched into a surround-view stitched image. The surround-view stitched image is fed into a segmentation network, which performs network segmentation and outputs a segmentation result from which the region boundary pixel points of the travelable region are obtained. Coordinate conversion is performed on the region boundary pixel points to obtain the distances to obstacles ahead, behind, and on the left and right sides. Finally, the region range of the travelable region is computed from the region boundary pixel points and the obstacle distances. The travelable region detection method of this embodiment can obtain images of the surrounding driving environment in real time through the vision sensors, perform a series of computations on the captured images, and output the region range in real time.
Example 2
As shown in fig. 2, the present embodiment provides a travelable region detection method. In addition to the technical features of the above embodiment, the present embodiment further includes the following technical features:
Performing surround-view stitching on the captured images of the vision sensors to obtain the surround-view stitched image specifically includes:
step S202: judging whether the at least two vision sensors need to be calibrated;
step S204: acquiring a stitching lookup table;
step S206: reducing the parameters in the stitching lookup table;
step S208: reducing the parameters of the captured images so that they are consistent with the parameters in the reduced stitching lookup table;
step S210: generating a surround-view bird's-eye-view image according to the reduced stitching lookup table;
step S212: generating the surround-view stitched image from the surround-view bird's-eye-view image.
In this embodiment, the captured images of the vision sensors are stitched into the surround-view stitched image. It must first be judged whether each vision sensor has been calibrated, so as to decide between generating a stitching lookup table from the captured images and calling a pre-stored stitching lookup table; judging first saves time and computation, which improves efficiency and helps guarantee driving safety. After the stitching lookup table is obtained, a complete surround-view bird's-eye-view image can be mapped by table lookup, the correspondence between the pixel points of the stitched bird's-eye-view image and the world coordinate system is established, and the surround-view stitched image is generated from the bird's-eye-view image. The resulting surround-view stitched image is more complete and clear, laying a solid foundation for finally obtaining an accurate region range of the travelable region. Because the position of each vision sensor changes with the driving environment and the sizes of the captured images are not uniform, the parameters of the captured images are reduced for ease of computation, and the parameters in the stitching lookup table are reduced at the same time so that the two remain consistent; the complete stitched surround-view bird's-eye-view image can then be mapped by table lookup.
Example 3
As shown in fig. 3, the present embodiment provides a travelable region detection method. In addition to the technical features of the above embodiment, the present embodiment further includes the following technical features:
Acquiring the stitching lookup table specifically includes:
step S302: calibrating the at least two vision sensors to generate calibration parameters;
step S304: performing distortion correction on the captured images according to the calibration parameters to obtain distortion-corrected images;
step S306: calculating and storing, according to the calibration parameters, the homography matrix parameters of the distortion-corrected images of any two adjacent vision sensors among the at least two vision sensors;
step S308: selecting a top-view plane, and calculating and storing the top-view transformation matrix parameters of the distortion-corrected images;
step S310: generating the stitching lookup table from the calibration parameters, the homography matrix parameters, and the top-view transformation matrix parameters.
In this embodiment, when the judgment result is that calibration is needed, each vision sensor is calibrated and the calibration parameters are generated from the calibrated vision sensors. The calibration parameters include intrinsic parameters, extrinsic parameters, distortion coefficients, and so on. Because images captured while driving may be blurred, deformed, and so on, distortion correction must be performed according to the calibration parameters to obtain distortion-corrected images, which further ensures that the resulting surround-view stitched image is true to the scene and accurate. After the distortion-corrected images are obtained, the homography matrix parameters of the distortion-corrected images of any two adjacent vision sensors are calculated; a homography matrix describes the positional mapping of an object between the world coordinate system and the pixel coordinate system. A top-view transformation is then applied to the distortion-corrected images: a top-view plane is selected, and the projective transformation matrix, i.e. the top-view transformation matrix, is calculated and stored from the correspondence between the original coordinates of the four vertex coordinates of the surround-view image and the top-view point coordinates, giving the top-view transformation matrix parameters. Finally, the stitching lookup table is generated from the calibration parameters, the homography matrix parameters, and the top-view transformation matrix parameters. By storing the stitching lookup table, it can be called directly the next time the captured images of the vision sensors are stitched, saving the time required to obtain the surround-view stitched image.
Example 4
As shown in fig. 4, the present embodiment provides a travelable region detection method. In addition to the technical features of the above embodiment, the present embodiment further includes the following technical features:
Generating the surround-view bird's-eye-view image according to the reduced stitching lookup table specifically includes:
step S502: performing distortion correction on the reduced captured images;
step S504: performing image transformation on the distortion-corrected captured images;
step S506: performing online pose optimization on the transformed captured images;
step S508: stitching the pose-optimized captured images together;
step S510: performing top-view transformation on the stitched image to generate the surround-view bird's-eye-view image.
In this embodiment, whether the stitching lookup table is generated automatically by the system or a previously generated table is called directly, once the stitching lookup table is available the surround-view bird's-eye-view image can be generated step by step from it. The specific steps are as follows. Distortion correction is first applied to the reduced captured images to remove blurred and deformed parts and make the images clearer and more complete. The distortion-corrected captured images are then transformed so that they join more smoothly. Because of the shooting angle and other factors, the pose of each captured image needs to be adjusted, so online pose optimization is performed on the transformed images, which facilitates the subsequent stitching. Finally, a top-view transformation is applied to the stitched image to generate the surround-view bird's-eye-view image. The surround-view bird's-eye-view image produced by reducing, distortion-correcting, transforming, pose-optimizing, stitching, and top-view transforming the captured images is more complete and clear.
Example 5
As shown in fig. 5, the present embodiment provides a travelable region detection method. In addition to the technical features of the above embodiment, the present embodiment further includes the following technical features:
Generating the surround-view stitched image from the surround-view bird's-eye-view image specifically includes:
step S602: sampling the surround-view bird's-eye-view image to obtain a sampled image with a target resolution;
step S604: generating the surround-view stitched image from the sampled image.
In this embodiment, the surround-view bird's-eye-view image is large and changes in real time, so it needs to be preprocessed into a surround-view image that satisfies a given resolution. The surround-view bird's-eye-view image is first sampled to obtain a sampled image at the target resolution, which makes the image clearer. The surround-view stitched image generated from the sampled image is then more complete and clear, which makes the subsequent travelable-region detection more accurate.
Example 6
As shown in fig. 6, the present embodiment provides a travelable region detection method. In addition to the technical features of the above embodiment, the present embodiment further includes the following technical features:
Performing network segmentation on the surround-view stitched image to obtain the region boundary pixel points of the travelable region specifically includes:
step S702: extracting a plurality of candidate regions from the surround-view stitched image;
step S704: extracting features of each candidate region;
step S706: classifying each candidate region according to the extracted features;
step S708: mapping the features of the classified candidate regions onto the captured image through a deconvolution process to realize semantic segmentation, thereby obtaining the region boundary pixel points.
In this embodiment, the stitched bird's-eye-view image is preprocessed and then fed into the segmentation network, and the segmentation result is output to obtain the region boundary pixel points of the travelable region. Specifically, a search method is used instead of traditional sliding windows, and a plurality of candidate regions, for example 2000, can be extracted from each surround-view image. Features of up to 4096 dimensions are then extracted from each candidate region; the purpose of feature extraction is to classify the candidate regions into several categories according to the extracted features. Finally, for each category, a deep-learning network combining convolution layers and deconvolution layers is used to characterize, extract, and classify the features, and the features are mapped back onto the originally input captured image through a deconvolution process to realize semantic segmentation, thereby obtaining the region boundary pixel points.
Example 7
As shown in fig. 7, the present embodiment provides a travelable region detection method. In addition to the technical features of the above embodiment, the present embodiment further includes the following technical features:
Estimating the obstacle distances from the region boundary pixel points specifically includes:
step S802: acquiring the calibration parameters of the vision sensors;
step S804: converting the coordinates of the surround-view stitched image into world coordinates according to the calibration parameters and the homography relation;
step S806: estimating the distances to the surrounding obstacles from the world coordinates.
In this embodiment, after the region boundary pixel points are obtained from the surround-view stitched image, a neural network can be trained by deep learning to obtain the network weights and parameters for the detection target, so that pixel distances in the surround-view stitched image are obtained. From the pixel distances of the surround-view stitched image, for which the correspondence has been established, the corresponding distances in the world coordinate system can be calculated and further converted into the vehicle coordinate system, so as to estimate the distances to the obstacles around the vehicle. The region range of the travelable region of the vehicle is then computed from the output region boundary pixel points, the correspondence between those pixel points and the vehicle coordinate system, and the obstacle distances.
Example 8
As shown in fig. 8, the present embodiment provides a travelable region detection apparatus 100 including: a vision sensor 110 arranged on the vehicle and used for capturing and sending images of the driving environment of the vehicle; and a processing device 120 communicatively connected to the vision sensor 110 and used for acquiring the captured images from the vision sensor 110, wherein the processing device 120 implements the travelable region detection method of any of the embodiments on the basis of the captured images and outputs the region range of the travelable region.
In the present embodiment, since the travelable region detection apparatus 100 is used to implement the travelable region detection method of any of the embodiments, it has all the advantageous effects of the travelable region detection method of any of the embodiments of the present invention.
Example 9
As shown in fig. 9, the present embodiment provides a computer apparatus 200 including a memory 210 and an executor 220, the memory 210 storing a computer program. The executor 220 is used for acquiring the captured images and executing the computer program; when executing the computer program, the executor 220 implements the steps of the travelable region detection method of any of the embodiments.
In the present embodiment, the computer apparatus 200 is used to implement the steps of the travelable region detection method of any embodiment, and therefore has all the advantageous effects of the travelable region detection method of any embodiment of the present invention.
Example 10
The present embodiment provides a computer-readable storage medium storing a computer program which, when executed, implements the steps of the travelable region detection method of any of the embodiments.
In the present embodiment, a computer-readable storage medium is used to implement the steps of the travelable region detection method of any embodiment, and thus has all the advantageous effects of the travelable region detection method of any embodiment of the present invention.
Example 11
As shown in fig. 10, the present embodiment provides a vehicle-based travelable region detection method for detecting and outputting the region range of the travelable region of a vehicle in the driving environment so as to guide the driving of the vehicle. The travelable region detection method is based on a surround-view stitched image and comprises four modules: surround-view stitching of the captured images, segmentation of the surround-view stitched image, calculation of the obstacle distances, and calculation of the region range of the travelable region. The travelable region is also called the passable region. As shown in fig. 10, surround-view cameras are used as the vision sensors 110, and a surround-view camera may be a fisheye camera. The specific process is as follows:
step S110: stitching calibration of the surround-view cameras;
the vision sensors 110 are calibrated and the images they acquire are stitched. The vision sensors 110 are surround-view cameras, so the calibration parameters of the surround-view cameras can be obtained and the images they capture can be stitched.
Step S120: obtaining a stitched map;
the stitched map is the surround-view stitched image.
Step S130: training a model;
a segmentation model is trained on the surround-view stitched images.
Step S140: results (travelable region detection results);
the travelable region detection result is obtained through post-processing.
The fisheye images and the preset stitching lookup table are obtained, the fisheye images captured by the fisheye cameras are reduced, and the parameters in the stitching lookup table are reduced to be consistent with the reduced fisheye images. The fisheye images are then stitched. A specific stitching flow is shown in fig. 11, with the following steps:
step S902: judging whether calibration is needed;
that is, whether the vision sensors 110 need to be calibrated.
Step S904: if the judgment result is yes, performing camera calibration and storing the calibration parameters.
The cameras may be surround-view cameras.
Step S906: correcting image distortion;
that is, correcting the image distortion of the fisheye images.
Step S908: solving the homography matrices;
this step operates on the fisheye images whose image distortion has been corrected.
Step S910: solving the top-view transformation matrix;
this step operates on the fisheye images for which the homography matrices have been solved.
Step S912: generating the lookup table;
this step generates the stitching lookup table from the result of the top-view transformation matrix solution.
Step S914: if the judgment result is no, loading the stitching lookup table.
Step S916: correcting image distortion;
this step corrects the image distortion of the fisheye images after the stitching lookup table has been loaded.
Step S918: image transformation;
this step operates on the distortion-corrected fisheye images.
Step S920: online pose optimization;
this step operates on the fisheye images after image transformation.
Step S922: image stitching;
this step produces the stitched fisheye image.
Step S924: top-view transformation.
This step produces the top-view-transformed image.
The top-view bird's-eye-view image is obtained after the top-view transformation. That is, through the above steps, each reduced fisheye image can be mapped, by table lookup through the reduced stitching lookup table, into a complete stitched bird's-eye-view image. The surround-view bird's-eye-view image is then sampled to obtain a sampled image at the target resolution, and the sampled image is converted into a multi-channel stitched surround-view image, giving the surround-view stitched image.
In the module for segmenting the surround-view stitched image, the surround-view stitched image obtained by preprocessing the stitched bird's-eye-view image is fed into the segmentation network, and the segmentation result is output, from which the region boundary pixel points of the travelable region can be obtained. As shown in fig. 12, the specific process is as follows:
step S710: inputting the stitched image.
The stitched image is the surround-view stitched image; when the segmentation network segments it, the stitched surround-view image is input first.
Step S720: finding a plurality of regions in the image.
The image is the surround-view stitched image; a plurality of regions are found on it as candidate regions by a search method instead of traditional sliding windows, for example 2000 candidate regions per surround-view stitched image.
Step S730: computing the features of each region;
for example, feature extraction of up to 4096 dimensions may be performed for each region.
Step S740: classifying each region according to its features;
each region is classified according to its features to form categories. For each category, a deep-learning network combining convolution layers and deconvolution layers is used for feature characterization, extraction, and classification, and the features are mapped onto the originally input image through a deconvolution process to realize semantic segmentation.
Step S750: filtering;
filtering is performed after classification.
Step S760: travelable region profile.
After filtering, the profile of the travelable region can be obtained by calculation. In the module for calculating the obstacle distances, the image coordinates are converted into world coordinates according to the pre-calibrated parameters of the surround-view cameras and the homography relation; that is, with the vehicle body coordinate system as the unified coordinate system, the distances from the vehicle to the obstacles ahead and around can be estimated or calculated.
In the module for calculating the region range of the travelable region, the region range of the travelable region ahead of the vehicle and around the vehicle body is calculated from the obstacle distances obtained above.
In this embodiment, the correspondence between the pixel points of the stitched bird's-eye-view image and the world coordinate system is established on the basis of the 360° surround-view stitched image. The vision sensors 110 may be surround-view cameras arranged as shown in fig. 13: a front surround-view camera 112 is arranged at the front of the vehicle body, a rear surround-view camera 118 at the rear, a first front-left surround-view camera 1162 and a second front-left surround-view camera 1164 on the left side, and a first front-right surround-view camera 1142 and a second front-right surround-view camera 1144 on the right side. More specifically, the generation flow of the stitching lookup table may be as shown in fig. 14.
Step S320: collecting images through a collection module;
image acquisition is carried out at the board end through an acquisition module, and the acquired image is a shot image;
then the following steps can be carried out in sequence at the computer end:
step S330: calibrating a program;
the internal reference, external reference and distortion coefficients of the camera can be generated by calibrating the look-around camera.
Step S340: correcting distortion;
the fisheye projection model can be selected according to the internal reference and the distortion coefficient of the camera, and the fisheye image is subjected to image distortion correction.
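As an illustrative sketch of this step, the snippet below uses OpenCV's fisheye model; the intrinsic matrix K and the distortion coefficients D are placeholders standing in for the calibrated values.

```python
import cv2
import numpy as np

K = np.array([[320.0, 0.0, 320.0],
              [0.0, 320.0, 240.0],
              [0.0,   0.0,   1.0]])             # placeholder intrinsics
D = np.array([[-0.05], [0.01], [0.0], [0.0]])   # placeholder k1..k4 fisheye coefficients
fisheye_img = np.random.randint(0, 255, (480, 640, 3), np.uint8)

map1, map2 = cv2.fisheye.initUndistortRectifyMap(
    K, D, np.eye(3), K, (640, 480), cv2.CV_16SC2)
undistorted = cv2.remap(fisheye_img, map1, map2, interpolation=cv2.INTER_LINEAR)
```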
Step S350: solving a homography matrix;
the homography matrix parameters of every two adjacent paths of all-round looking cameras can be calculated and stored according to the internal parameters, the distortion coefficients and the external parameters calibrated by the cameras.
Step S360: solving a overlook transformation matrix;
and selecting a top view plane, and calculating and storing projective transformation matrix parameters according to the corresponding relation between the original coordinates of the four vertex coordinates of the panoramic image of the panoramic stitched image and the coordinates of the top view points.
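A minimal sketch of solving the top-view (projective) transformation from four vertex correspondences; the coordinates below are illustrative, not calibrated values.

```python
import cv2
import numpy as np

# Illustrative vertex correspondences between the stitched image plane and the top-view plane.
orig_vertices = np.float32([[180, 700], [1100, 700], [900, 300], [380, 300]])
top_vertices  = np.float32([[180, 700], [1100, 700], [1100, 0], [180, 0]])
M_top = cv2.getPerspectiveTransform(orig_vertices, top_vertices)

stitched = np.random.randint(0, 255, (720, 1280, 3), np.uint8)   # stand-in stitched image
top_view = cv2.warpPerspective(stitched, M_top, (1280, 720))
```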
Step S370: generating the stitching lookup table.
The stitching lookup table is generated jointly from the intrinsic parameters, extrinsic parameters, distortion coefficients, homography matrix parameters, and top-view transformation matrix parameters of the cameras. The generated stitching lookup table contains, for each pixel in each fisheye image, the horizontal and vertical coordinates and the pixel value it maps to in the stitched surround-view bird's-eye-view image.
The surround-view stitched image obtained by preprocessing the stitched bird's-eye-view image is fed into the segmentation network, and the segmentation result is output to obtain the region boundary pixel points of the travelable region. A neural network is trained on the stitched bird's-eye-view images by deep learning to obtain the network weights and parameters for the detection target. The region range of the travelable region of the vehicle is then calculated from the output region boundary pixel points and the correspondence between those pixel points and the vehicle coordinate system. The region range of the travelable region output on the basis of the surround-view stitched image can be used for obstacle avoidance and collision warning, improving driving safety.
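A minimal, assumed training-loop sketch showing what training the neural network by deep learning could look like; the tiny stand-in network, the random batch, and the file name are all placeholders, not the patent's model or data.

```python
import torch
import torch.nn as nn

# Tiny stand-in segmentation network (two output classes: drivable / not drivable).
net = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(inplace=True),
    nn.Conv2d(16, 2, 3, padding=1),
)
optimiser = torch.optim.Adam(net.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One stand-in batch of stitched bird's-eye-view images and labelled masks.
images = torch.randn(2, 3, 256, 256)
masks = torch.randint(0, 2, (2, 256, 256))
for _ in range(3):                       # a few illustrative iterations
    optimiser.zero_grad()
    loss = criterion(net(images), masks)
    loss.backward()
    optimiser.step()
torch.save(net.state_dict(), "drivable_seg.pt")   # the learned weights and parameters
```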
The travelable region detection method based on the surround-view stitched image according to the embodiment of the invention is not limited to mixer trucks and can be extended to other large engineering vehicles and other intelligent-driving vehicles, such as buses, mixer trucks, special vehicles, and heavy trucks. It shares resources with 360° surround-view stitching, which reduces cost. During driving, video images of the area around the vehicle are acquired in real time and the forward travelable region is detected by the algorithm of the embodiment of the invention, which enlarges the driver's field of view, reduces the driver's workload, and improves the safety factor. Replacing the surround-view cameras with laser radar sensors can improve accuracy. In addition, the sensors in the embodiment of the invention can be shared with the 360° surround-view fisheye cameras, reducing sensor cost, and the surround-view cameras can be replaced with ordinary cameras for forward collision warning. According to the embodiment of the invention, the region range of the forward travelable region can be identified intelligently by training the neural network used for deep learning (such as the convolutional neural networks CNN, FASTCNN, and the like).
In summary, the beneficial effects according to the embodiments of the present invention are:
1. The travelable region detection method based on the surround-view stitched image obtains the region range of the travelable region only through the vision sensors 110 and a certain amount of computation, saving cost while ensuring the detection effect.
2. The travelable region detection method based on the surround-view stitched image can be extended to other large engineering vehicles and other intelligent-driving vehicles; by acquiring captured images of the area around the vehicle in real time and applying the travelable region detection method, driving safety can be improved.
In embodiments according to the present invention, the terms "first", "second", "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance; the term "plurality" means two or more unless expressly limited otherwise. The terms "mounted," "connected," "fixed," and the like are to be construed broadly, and for example, "connected" may be a fixed connection, a removable connection, or an integral connection; "coupled" may be direct or indirect through an intermediary. Specific meanings of the above terms in the embodiments according to the present invention can be understood by those of ordinary skill in the art according to specific situations.
In the description of the embodiments according to the present invention, it should be understood that the terms "upper", "lower", "left", "right", "front", "rear", and the like indicate orientations or positional relationships based on those shown in the drawings, only for convenience of description and simplification of description of the embodiments according to the present invention, and do not indicate or imply that the referred devices or units must have a specific direction, be configured and operated in a specific orientation, and thus, should not be construed as limiting the embodiments according to the present invention.
In the description herein, the description of the terms "one embodiment," "some embodiments," "specific embodiments," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of an embodiment according to the invention. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
The above is only a preferred embodiment according to the present invention, and is not intended to limit the embodiment according to the present invention, and various modifications and variations may be made to the embodiment according to the present invention by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the embodiment according to the present invention should be included in the protection scope of the embodiment according to the present invention.

Claims (10)

1. A travelable region detection method for detecting a region range of a travelable region of a vehicle in a travel environment, characterized by comprising:
acquiring captured images from at least two vision sensors;
performing surround-view stitching on the captured images to obtain a surround-view stitched image;
performing network segmentation on the surround-view stitched image to obtain region boundary pixel points of the travelable region;
obtaining obstacle distances according to the region boundary pixel points;
and obtaining the region range according to the region boundary pixel points and the obstacle distances.
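The following non-limiting sketch only illustrates how the claimed steps could be chained together; detect_travelable_region and the three callables it receives are hypothetical names standing in for the steps detailed in claims 2 to 7, not an implementation disclosed in the patent.

    def detect_travelable_region(frames, stitch_fn, segment_fn, distance_fn):
        # Sketch of claim 1: each callable stands in for a later claim.
        stitched = stitch_fn(frames)             # surround-view stitching (claims 2-5)
        boundary_px = segment_fn(stitched)       # network segmentation (claim 6)
        distances = distance_fn(boundary_px)     # obstacle distances (claim 7)
        # The region range combines the boundary pixel points with the distances.
        return {"boundary_pixels": boundary_px, "obstacle_distances": distances}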
2. The travelable region detection method according to claim 1, wherein the performing surround-view stitching on the captured images to obtain a surround-view stitched image specifically comprises:
determining whether the at least two vision sensors have been calibrated;
acquiring a stitching lookup table;
scaling down the parameters in the stitching lookup table;
scaling down the captured images so that their parameters are consistent with the parameters in the scaled-down stitching lookup table;
generating a surround-view bird's-eye-view image according to the scaled-down stitching lookup table;
and generating the surround-view stitched image according to the surround-view bird's-eye-view image.
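A minimal sketch of one way claim 2 could be realized with OpenCV, assuming the stitching lookup table is stored as per-pixel source-coordinate maps (map_x, map_y) plus a uint8 camera-index mask (cam_id); this table layout, the names stitch_with_lut, map_x, map_y and cam_id, and the 0.5 scale factor are illustrative assumptions, not taken from the patent.

    import cv2
    import numpy as np

    def stitch_with_lut(frames, lut, scale=0.5):
        # Scale down the lookup-table parameters and the captured images
        # consistently, so that remapping stays valid at the reduced resolution.
        map_x = cv2.resize(lut["map_x"], None, fx=scale, fy=scale) * scale
        map_y = cv2.resize(lut["map_y"], None, fx=scale, fy=scale) * scale
        cam_id = cv2.resize(lut["cam_id"], None, fx=scale, fy=scale,
                            interpolation=cv2.INTER_NEAREST)
        small = [cv2.resize(f, None, fx=scale, fy=scale) for f in frames]

        # Generate the surround-view bird's-eye-view image: each output pixel is
        # sampled from the camera selected by the index mask at the LUT position.
        h, w = map_x.shape
        bev = np.zeros((h, w, 3), dtype=np.uint8)
        for idx, img in enumerate(small):
            warped = cv2.remap(img, map_x.astype(np.float32),
                               map_y.astype(np.float32), cv2.INTER_LINEAR)
            bev[cam_id == idx] = warped[cam_id == idx]
        return bev

Downsampling the table and the images together keeps the per-pixel lookup consistent while reducing computation, which matches the stated goal of sharing resources with the 360-degree surround-view function.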
3. The travelable region detection method according to claim 2, wherein the acquiring a stitching lookup table specifically comprises:
calibrating the at least two vision sensors to generate calibration parameters;
performing distortion correction on the captured images according to the calibration parameters to obtain distortion-corrected images;
calculating and storing, according to the calibration parameters, homography matrix parameters between the distortion-corrected images of any two adjacent vision sensors among the at least two vision sensors;
selecting a top-view plane, and calculating and storing top-view transformation matrix parameters of the distortion-corrected images;
and generating the stitching lookup table according to the calibration parameters, the homography matrix parameters and the top-view transformation matrix parameters.
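A hedged sketch of how the stitching lookup table of claim 3 might be assembled with OpenCV. The calibration parameters (K, dist) are assumed to come from a standard cv2.calibrateCamera procedure, and ORB feature matching is used here only as one possible way to obtain correspondences between adjacent distortion-corrected views; the patent does not specify either choice, and all function and key names are illustrative.

    import cv2
    import numpy as np

    def build_stitching_lut(adjacent_pairs, calibs, bev_size):
        homographies = {}
        for (i, j), (img_i, img_j) in adjacent_pairs.items():
            # Distortion-correct both captured images with their calibration parameters.
            und_i = cv2.undistort(img_i, calibs[i]["K"], calibs[i]["dist"])
            und_j = cv2.undistort(img_j, calibs[j]["K"], calibs[j]["dist"])
            # Homography between the two adjacent distortion-corrected images,
            # estimated from matched ORB features with RANSAC.
            orb = cv2.ORB_create(2000)
            g_i = cv2.cvtColor(und_i, cv2.COLOR_BGR2GRAY)
            g_j = cv2.cvtColor(und_j, cv2.COLOR_BGR2GRAY)
            k1, d1 = orb.detectAndCompute(g_i, None)
            k2, d2 = orb.detectAndCompute(g_j, None)
            matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
            src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
            dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
            H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
            homographies[(i, j)] = H

        # Top-view transform: four illustrative ground points on the selected
        # top-view plane and their target positions in the bird's-eye-view image.
        src_pts = np.float32([[200, 400], [440, 400], [620, 470], [20, 470]])
        dst_pts = np.float32([[0, 0], [bev_size[0], 0],
                              [bev_size[0], bev_size[1]], [0, bev_size[1]]])
        top_view = cv2.getPerspectiveTransform(src_pts, dst_pts)

        # The lookup table gathers everything needed to warp a pixel in one step.
        return {"homographies": homographies, "top_view": top_view}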
4. The travelable region detection method according to claim 2, wherein the generating a surround-view bird's-eye-view image according to the scaled-down stitching lookup table specifically comprises:
performing distortion correction on the scaled-down captured images;
performing image transformation on the distortion-corrected captured images;
performing online pose optimization on the image-transformed captured images;
stitching and seaming the pose-optimized captured images;
and performing top-view transformation on the stitched and seamed images to generate the surround-view bird's-eye-view image.
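The online pose optimization step of claim 4 is not spelled out in detail; one plausible reading, shown below only as an assumption, is to refine the relative alignment of two adjacent, already-warped views by maximizing photometric correlation in their overlap region (OpenCV's ECC algorithm). The function name and the Euclidean motion model are illustrative choices.

    import cv2
    import numpy as np

    def refine_pose_in_overlap(warped_a, warped_b, overlap_mask):
        # overlap_mask: uint8 mask of the region where the two warped views overlap.
        gray_a = cv2.cvtColor(warped_a, cv2.COLOR_BGR2GRAY)
        gray_b = cv2.cvtColor(warped_b, cv2.COLOR_BGR2GRAY)
        warp = np.eye(2, 3, dtype=np.float32)  # start from the calibrated pose
        criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 50, 1e-6)
        try:
            _, warp = cv2.findTransformECC(gray_a, gray_b, warp,
                                           cv2.MOTION_EUCLIDEAN, criteria,
                                           inputMask=overlap_mask, gaussFiltSize=5)
        except cv2.error:
            pass  # keep the calibrated pose if ECC does not converge
        # Apply the small corrective transform before the stitching-and-seaming step.
        return cv2.warpAffine(warped_b, warp, warped_b.shape[1::-1],
                              flags=cv2.INTER_LINEAR + cv2.WARP_INVERSE_MAP)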
5. The travelable region detection method according to claim 2, wherein the generating the surround-view stitched image according to the surround-view bird's-eye-view image specifically comprises:
sampling the surround-view bird's-eye-view image to obtain a sampled image with a target resolution;
and generating the surround-view stitched image according to the sampled image.
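Claim 5 amounts to resampling the bird's-eye-view image to a target resolution before it is output as the surround-view stitched image. A minimal sketch, assuming area-based interpolation for downsampling (the patent names no particular method) and an illustrative 640 x 640 target:

    import cv2

    def sample_to_target(bev_image, target_size=(640, 640)):
        # INTER_AREA is a reasonable default when reducing resolution.
        return cv2.resize(bev_image, target_size, interpolation=cv2.INTER_AREA)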
6. The travelable region detection method according to any one of claims 1 to 5, wherein the performing network segmentation on the surround-view stitched image to obtain region boundary pixel points of the travelable region specifically comprises:
extracting a plurality of candidate regions from the surround-view stitched image;
extracting features of each candidate region;
classifying each of the candidate regions according to the extracted features;
and mapping, for the classified candidate regions, the features back to the captured image through a deconvolution process to realize semantic segmentation, so as to obtain the region boundary pixel points.
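A hedged PyTorch sketch of the segmentation step of claim 6: a small encoder produces features and a deconvolution (ConvTranspose2d) head maps them back to image resolution for semantic segmentation, after which the boundary pixel points of the travelable class are read off the predicted mask. The candidate-region proposal and per-region classification stages of the claim are folded into the backbone here, and the network, class layout and function names are illustrative, not the patent's model.

    import cv2
    import numpy as np
    import torch
    import torch.nn as nn

    class TinySegNet(nn.Module):
        def __init__(self, num_classes=2):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            )
            # Deconvolution layers map the features back to the input resolution
            # (input height and width are assumed divisible by 4).
            self.decoder = nn.Sequential(
                nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(16, num_classes, 4, stride=2, padding=1),
            )

        def forward(self, x):
            return self.decoder(self.encoder(x))

    def boundary_pixels(stitched_bgr, model, travelable_class=1):
        x = torch.from_numpy(stitched_bgr).permute(2, 0, 1).float().unsqueeze(0) / 255.0
        with torch.no_grad():
            mask = model(x).argmax(dim=1)[0].byte().numpy()
        # Region boundary pixel points = contour of the travelable-class mask.
        contours, _ = cv2.findContours((mask == travelable_class).astype(np.uint8),
                                       cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
        if not contours:
            return np.empty((0, 2), dtype=np.int32)
        return np.vstack([c.reshape(-1, 2) for c in contours])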
7. The travelable region detection method according to any one of claims 1 to 5, wherein the obtaining obstacle distances according to the region boundary pixel points specifically comprises:
acquiring calibration parameters of the vision sensors;
converting coordinates of the surround-view stitched image into world coordinates according to the calibration parameters and the homography relation;
and obtaining the distances of surrounding obstacles according to the world coordinates.
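A short sketch of claim 7: a homography between the surround-view stitched (bird's-eye) image plane and the ground plane converts boundary pixels into world coordinates, from which obstacle distances to the vehicle origin follow directly. H_img_to_world is assumed to be derived from the calibration parameters; its derivation is outside this snippet and its name is illustrative.

    import numpy as np

    def obstacle_distances(boundary_px, H_img_to_world, vehicle_origin=(0.0, 0.0)):
        # Homogeneous pixel coordinates (u, v, 1) of the region boundary points.
        pts = np.hstack([boundary_px.astype(np.float64),
                         np.ones((len(boundary_px), 1))])
        world = (H_img_to_world @ pts.T).T          # apply the 3x3 homography
        world = world[:, :2] / world[:, 2:3]        # back to Cartesian ground coordinates
        # Euclidean distance of each boundary point from the vehicle origin.
        return np.linalg.norm(world - np.asarray(vehicle_origin), axis=1)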
8. A travelable region detection apparatus, characterized by comprising:
a vision sensor, arranged on a vehicle, configured to capture and send images of a running environment of the vehicle;
and a processing device, communicatively connected to the vision sensor, configured to acquire the captured images from the vision sensor, implement the travelable region detection method according to any one of claims 1 to 7 on the basis of the captured images, and output a region range of a travelable region.
9. A computer device, comprising:
a memory storing a computer program;
a processor for executing the computer program;
wherein the processor, when executing the computer program, implements the steps of the travelable region detection method according to any one of claims 1 to 7.
10. A computer-readable storage medium, characterized in that,
the computer-readable storage medium stores a computer program which, when executed, implements the steps of the travelable region detection method according to any one of claims 1 to 7.
CN202011345606.0A 2020-11-26 2020-11-26 Method, device, equipment and computer readable storage medium for detecting travelable area Pending CN112419154A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011345606.0A CN112419154A (en) 2020-11-26 2020-11-26 Method, device, equipment and computer readable storage medium for detecting travelable area


Publications (1)

Publication Number Publication Date
CN112419154A true CN112419154A (en) 2021-02-26

Family

ID=74843057

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011345606.0A Pending CN112419154A (en) 2020-11-26 2020-11-26 Method, device, equipment and computer readable storage medium for detecting travelable area

Country Status (1)

Country Link
CN (1) CN112419154A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106485233A (en) * 2016-10-21 2017-03-08 深圳地平线机器人科技有限公司 Drivable region detection method, device and electronic equipment
CN110084086A (en) * 2018-12-11 2019-08-02 安徽江淮汽车集团股份有限公司 A kind of automatic driving vehicle drivable region detection method of view-based access control model sensor
CN110827197A (en) * 2019-10-08 2020-02-21 武汉极目智能技术有限公司 Method and device for detecting and identifying vehicle all-round looking target based on deep learning
CN111369439A (en) * 2020-02-29 2020-07-03 华南理工大学 Panoramic view image real-time splicing method for automatic parking stall identification based on panoramic view

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113108794A (en) * 2021-03-30 2021-07-13 北京深睿博联科技有限责任公司 Position identification method, device, equipment and computer readable storage medium
CN113286196A (en) * 2021-05-14 2021-08-20 湖北亿咖通科技有限公司 Vehicle-mounted video playing system and video split-screen display method and device
CN113286196B (en) * 2021-05-14 2023-02-17 亿咖通(湖北)技术有限公司 Vehicle-mounted video playing system and video split-screen display method and device
CN113689552A (en) * 2021-08-27 2021-11-23 北京百度网讯科技有限公司 Vehicle-mounted all-round-view model adjusting method and device, electronic equipment and storage medium
CN114445415A (en) * 2021-12-14 2022-05-06 中国科学院深圳先进技术研究院 Method for dividing a drivable region and associated device
CN115042821A (en) * 2022-08-12 2022-09-13 小米汽车科技有限公司 Vehicle control method, vehicle control device, vehicle and storage medium
CN115042821B (en) * 2022-08-12 2022-11-04 小米汽车科技有限公司 Vehicle control method, vehicle control device, vehicle and storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination