WO2018058355A1 - 汽车可行驶区域实时检测方法和系统 (Method and system for real-time detection of a vehicle's travelable area) - Google Patents

汽车可行驶区域实时检测方法和系统 (Method and system for real-time detection of a vehicle's travelable area)

Info

Publication number
WO2018058355A1
WO2018058355A1 · PCT/CN2016/100520 · CN2016100520W
Authority
WO
WIPO (PCT)
Prior art keywords
model
line
disparity map
straight line
travelable area
Prior art date
Application number
PCT/CN2016/100520
Other languages
English (en)
French (fr)
Inventor
李斌
赵勇
Original Assignee
驭势科技(北京)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 驭势科技(北京)有限公司
Priority to PCT/CN2016/100520 priority Critical patent/WO2018058355A1/zh
Priority to CN201680001427.0A priority patent/CN107517592B/zh
Publication of WO2018058355A1 publication Critical patent/WO2018058355A1/zh

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 — Image analysis
    • G06T7/10 — Segmentation; Edge detection
    • G06T7/11 — Region-based segmentation
    • G06T5/00 — Image enhancement or restoration
    • G06T5/70 — Denoising; Smoothing
    • G06T7/136 — Segmentation; Edge detection involving thresholding
    • G06T2207/00 — Indexing scheme for image analysis or image enhancement
    • G06T2207/10 — Image acquisition modality
    • G06T2207/10004 — Still image; Photographic image
    • G06T2207/20 — Special algorithmic details
    • G06T2207/20228 — Disparity calculation for image-based rendering
    • G06T2207/30 — Subject of image; Context of image processing
    • G06T2207/30248 — Vehicle exterior or interior
    • G06T2207/30252 — Vehicle exterior; Vicinity of vehicle

Definitions

  • the present invention generally relates to automotive autonomous driving techniques, and more particularly to a method and system for real-time detection of a vehicle's travelable area.
  • Accurate, real-time detection of the travelable area has important applications, especially in safety warnings for assisted driving and in the automatic control of autonomous driving.
  • In assisted driving, early warning of non-travelable road surfaces can reduce accidents as much as possible and avoid personal injury and property damage; in autonomous driving, the more accurate the detection of the travelable area, the more accurate the automatic control.
  • One existing approach is based on a lidar sensor: calibration is performed first, and regions below a certain threshold are judged to be ground.
  • The lidar required by this method is very expensive and hard to popularize.
  • Moreover, the ground is rarely horizontal, so the lidar results are not accurate.
  • The second approach uses a monocular color camera to detect the travelable area through machine learning and computer vision, which relies heavily on training samples.
  • Because the features are hand-designed and travelable areas vary widely, situations absent from the training samples cannot be detected, so scalability and generality are weak; moreover, a monocular camera cannot accurately obtain depth information, so the results often do not match the real scene, and finally the method's real-time performance cannot be guaranteed.
  • In patent document CN 103489175 B, the following stereo-camera-based road surface detection technique is proposed: obtain a V-disparity map including the road surface; and extract a straight line from the V-disparity map as the road surface, where the extraction includes: using the Hough transform to extract a first line as a first road surface approximation; and, based on the first line, fitting a second line by least squares as the road surface. The same document further proposes filtering the line with a Kalman filter.
  • In the non-patent documents, V-disparity maps are used to detect obstacles above the ground. However, in a driving environment, cars are generally more than 1 meter tall, so obstacles are easily detected even when the V-disparity map is not very accurate; if the road surface itself is to be detected, much higher algorithmic accuracy is required.
  • Patent document CN 103489175 B has certain application limitations: first, the road surface is not a strict plane but a curved surface, and a curved surface cannot simply be represented by a plane; secondly, to apply that technique, the depth image obtained from the binocular camera must contain little noise and must be a dense disparity map, which prevents the technique from running in real time.
  • Furthermore, the least squares method is only suitable when errors are small. Consider extracting a model from a noisy data set in which, say, only 20% of the data fits the model: least squares is then inadequate.
  • In practical applications, therefore, that technique is likely to have poor anti-interference ability and cannot adapt to various complicated scenes; when there are many obstacles on the road surface, their influence cannot be effectively removed.
  • the present invention has been made in view of the above circumstances.
  • A real-time detection method for a vehicle's travelable area is provided, which may include: capturing, with a binocular camera mounted on the vehicle body, left and right grayscale images of the area in front of the vehicle along its direction of travel, and computing a disparity map; converting the disparity map into a V-disparity map; binarizing the V-disparity map; using the RANSAC method to fit piecewise straight lines to the points of the binarized V-disparity map; smoothing the lines across multiple frames; and obtaining the travelable area in the original grayscale image from the extracted lines.
  • Binarizing the V-disparity map may include: finding the maximum pixel value in each row, setting the gray value of only the pixel holding that maximum in each row to 255, and setting the gray values of the remaining pixels to 0.
  • Using the RANSAC method to fit one line segment may include repeatedly performing the following sequence until a predetermined termination criterion is reached: select a random subset of the maximum-value points in the V-disparity map and fit a line to obtain a line model; test all other data against the obtained model.
  • A point consistent with the estimated line model is considered an inlier; if more than a predetermined number of points are classified as inliers, the estimated model is considered reasonable, the model is then re-estimated from all inliers, and the error rate of the inliers against the model is estimated; if this error rate is lower than that of the current best model, the model replaces the current best model; the best model obtained at the end is taken as the line segment.
  • Using the RANSAC method to fit multiple line segments may include: first extracting the first line as described above; after extraction, removing the points belonging to the first line from the V-disparity map; then extracting the second line from the remaining points in the same way, and so on, until the number of remaining points falls below a predetermined threshold.
  • Smoothing the lines across multiple frames may include: setting a time window; expressing the line model as ax + by + c = 0; obtaining the line model parameters for each frame and accumulating each parameter over the frames; when a new frame arrives, subtracting the line model parameters of the oldest frame from the accumulated result, adding those of the current frame, and taking the average as the line model parameters of this frame.
  • A vehicle-mounted real-time detection system for a vehicle's travelable area is provided, which may include: a binocular camera that continuously captures left and right grayscale images of the area in front of the vehicle along its direction of travel; and a computing device including a memory, a processor, a communication interface, and a bus, with the memory, communication interface, and processor all connected to the bus, the memory storing computer-executable instructions, and the computing device able to obtain, via the communication interface, the left and right grayscale images captured by the binocular camera.
  • When the processor executes the computer-executable instructions, it performs the following method: computing a disparity map from the left and right grayscale images; converting the disparity map into a V-disparity map; binarizing the V-disparity map; using the RANSAC method to fit piecewise lines to the points of the binarized V-disparity map; smoothing the lines across multiple frames; and obtaining the travelable area in the original grayscale image from the extracted lines.
  • Binarizing the V-disparity map may include: finding the maximum pixel value in each row, setting the gray value of only the pixel holding that maximum in each row to 255, and setting the remaining pixels to 0.
  • Using the RANSAC method to fit one line segment may include repeatedly performing the following sequence until a predetermined exit criterion is reached: select a random subset of the maximum-value points in the V-disparity map and fit a line to obtain a line model; test all other data against the obtained model; a point consistent with the estimated model is considered an inlier; if more than a predetermined number of points are classified as inliers, the estimated model is considered reasonable, the model is re-estimated from all inliers, and the error rate of the inliers against the model is estimated; if this error rate is lower than that of the current best model, the model replaces the current best model; the best model obtained at the end is taken as the line segment.
  • Using the RANSAC method to fit multiple line segments may include: extracting the first line; after extraction, removing the points belonging to the first line from the V-disparity map; then extracting the second line from the remaining points, and so on, until the number of remaining points falls below a predetermined threshold.
  • Obtaining the travelable area in the original grayscale image from the extracted lines may include: for each row of the V-disparity map, taking the disparity value d on the extracted line; in the corresponding row of the disparity map, comparing each pixel's disparity value with d; when the difference is below a certain threshold, marking the corresponding position of the original image as a safe travelable area.
  • A system for real-time detection of a car's travelable area may include: a binocular camera configured to capture left and right grayscale images in front of the car along its direction of travel; a disparity map calculation component that computes a disparity map from the two grayscale images; a V-disparity map conversion module that converts the disparity map into a V-disparity map; a binarization module that binarizes the V-disparity map; a RANSAC line fitting module that fits piecewise lines to the points of the binarized V-disparity map; a multi-frame image filtering module that smooths the lines across multiple frames; and an original-image travelable-area determination module that obtains the travelable area in the original grayscale image from the extracted lines.
  • The real-time detection method and system for a vehicle's travelable area can adapt to a wide variety of road surfaces and road conditions, impose low accuracy requirements on the disparity map, reduce front-end computation, resist interference well, and improve real-time performance, all of which are critical for safe automatic driving.
  • FIG. 1 shows a schematic diagram of a system 100 for detecting a travelable area of a vehicle, in accordance with an embodiment of the present invention
  • FIG. 2 illustrates in detail a method of detecting a travelable area of a vehicle in real time in accordance with an embodiment of the present invention
  • FIG. 3 is a schematic diagram of a case in which the least squares method incorrectly extracts a straight line in the presence of large noise
  • FIG. 4 shows a flow diagram of a method 240 of fitting a straight line from points of a V disparity map, in accordance with an embodiment of the present invention
  • FIG. 5 is a structural block diagram of a vehicle travelable area real-time detection system 300 according to another embodiment of the present invention.
  • Disparity map: a disparity map takes either image of an image pair as the reference; its size is that of the reference image, and each element value is a disparity value.
  • The disparity map contains the distance information of the scene.
  • The disparity map can be computed from the left and right images captured by a binocular camera.
  • A point in an ordinary two-dimensional disparity map has coordinates (u, v), where u is the abscissa and v the ordinate; the pixel value at (u, v), denoted d(u, v), represents the disparity at that point.
  • Because the disparity map contains the scene's distance information, image matching to extract disparity maps from stereo image pairs has long been the most active area of binocular vision research.
  • V-disparity map: the V-disparity map is converted from the disparity map; the gray value at any point (d, v) of the V-disparity map is the number of points whose disparity equals d in the row of the disparity map with ordinate v.
  • The V-disparity map can be viewed as a side view of the disparity map: by accumulating, per row, the counts of equal disparity values, a plane in the original image is projected to a straight line.
  • RANSAC: abbreviation of RANdom SAmple Consensus, an algorithm that estimates valid mathematical model parameters from a sample data set containing abnormal data.
  • FIG. 1 shows a schematic diagram of a system 100 for detecting a travelable area of a vehicle, including a binocular camera 110 and a computing device 120, in accordance with an embodiment of the present invention.
  • the binocular camera 110 continuously captures two left and right grayscale images in front of the car in the direction of travel of the car.
  • the binocular camera 110 is mounted, for example, in front of the top of the vehicle such that its imaging range is concentrated on the road surface at the front of the vehicle.
  • Computing device 120 includes a memory 121, a processor 122, a communication interface 123, and a bus 124.
  • the memory 121, the communication interface 123 and the processor 122 are both connected to the bus 124.
  • The memory stores computer-executable instructions, and the computing device can obtain, via the communication interface, the left and right grayscale images captured by the binocular camera; when the processor executes the computer-executable instructions, it performs a method of detecting the car's travelable area in real time.
  • An alarm 125 may also be included in the computing device 120 for providing an alert signal or sending a notification when a danger or emergency is detected.
  • The structure shown in Fig. 1 is merely an example; components can be added, removed, or replaced as needed.
  • In overview, the technique detects the car's travelable area in real time as follows: left and right images are obtained from the binocular camera sensor, a disparity map is computed from them, a V-disparity map is constructed from the disparity map, RANSAC is used to obtain piecewise lines on the V-disparity map, the lines are smoothed across multiple frames, and finally the safe drivable area in the original image is obtained from the disparities corresponding to these lines.
  • FIG. 2 shows a general flow diagram of a method 200 of detecting a travelable area of a vehicle in real time in accordance with an embodiment of the present invention.
  • In step S210, left and right grayscale images of the area in front of the car along its direction of travel are captured by a binocular camera mounted on the vehicle body, and a disparity map is computed.
  • Specifically, the correspondences between each pair of images are found first, and the disparity map of the current scene is then obtained by the triangulation principle.
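As an illustration of the correspondence step, a naive block-matching sketch along one scanline is given below. This is not the patent's algorithm (which is left unspecified); the window size, search range, and SAD cost are assumptions, and production systems use optimized dense matchers.

```python
def disparity_row(left_row, right_row, window=2, max_disp=16):
    """Naive SAD block matching along one rectified scanline (illustrative only).

    For each pixel x of the left row, search up to max_disp pixels to the
    left in the right row and keep the shift d with the smallest sum of
    absolute differences over a (2*window+1)-pixel patch.
    """
    width = len(left_row)
    disp = [0] * width
    for x in range(window, width - window):
        patch = left_row[x - window:x + window + 1]
        best_d, best_sad = 0, float("inf")
        for d in range(0, min(max_disp, x - window) + 1):
            cand = right_row[x - d - window:x - d + window + 1]
            sad = sum(abs(a - b) for a, b in zip(patch, cand))
            if sad < best_sad:
                best_d, best_sad = d, sad
        disp[x] = best_d
    return disp
```

Applied row by row to a rectified image pair, this yields the dense disparity map from which the V-disparity map is built.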
  • In step S220, a V-disparity map is obtained by converting the disparity map.
  • In the disparity map, the relative distance of an object from the lens is represented by changes in gray level; according to the depth information contained in the disparity map, the disparity of the ground changes continuously and approximates piecewise straight lines.
  • Let M_d denote the pixel value of a point on the disparity map and M_vd the pixel value of the corresponding point on the V-disparity map: for each row v and each disparity value d, the number of pixels in that row with M_d equal to d is counted as P_num, and P_num becomes the gray value M_vd of the pixel at (d, v), so that a grayscale V-disparity map is obtained.
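The disparity-to-V-disparity conversion of step S220 can be sketched in pure Python (plain lists for clarity; the max_disp bound on the disparity range is an assumption):

```python
def v_disparity(disp_map, max_disp):
    """Build a V-disparity map from a disparity map.

    Row v of the result is a histogram: entry d counts how many pixels in
    row v of the disparity map have disparity value d, so a ground plane
    in the scene projects to a (piecewise) line in (d, v) space.
    """
    vmap = [[0] * max_disp for _ in range(len(disp_map))]
    for v, row in enumerate(disp_map):
        for d in row:
            if 0 <= d < max_disp:  # ignore invalid disparities
                vmap[v][d] += 1
    return vmap
```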
  • In step S230, the V-disparity map is binarized.
  • The binarization principle is: first find the maximum value of each row, set the gray value of only the pixel holding that maximum in each row to 255, and set the gray values of the remaining pixels to 0.
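A minimal sketch of this row-wise binarization follows; tie handling (the first maximum wins) and skipping all-zero rows are assumptions, since the text does not specify them:

```python
def binarize_rows(vmap):
    """Keep only the per-row maximum of a V-disparity map: set that pixel
    to 255 and all others to 0. Rows that are entirely zero stay zero."""
    out = []
    for row in vmap:
        m = max(row)
        new = [0] * len(row)
        if m > 0:                 # skip empty rows
            new[row.index(m)] = 255
        out.append(new)
    return out
```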
  • In step S240, the RANSAC method is used to fit piecewise lines to the points of the binarized V-disparity map.
  • The Hough transform was considered but has drawbacks: its detection speed is too slow for real-time control, and its accuracy is not high enough, so expected lines may be missed, wrong lines detected, and a large amount of redundant data generated.
  • The least squares method computes the line parameters a, b by setting the partial derivatives of the mean squared error with respect to a and b to zero; in this sense, least squares is essentially linear regression.
  • However, least squares is only suitable when errors are small. Consider extracting a model from a noisy data set in which, say, only 20% of the data fits the model: least squares is then inadequate. In Figure 3, for example, a straight line pattern can easily be seen by eye, but the least squares fit is wrong.
  • The present invention detects the road surface by extracting straight lines from the V-disparity map, and the disparity map contains substantial noise.
  • If the least squares method were used to extract the lines, a wrong fit would very likely be obtained.
  • The RANSAC algorithm can iteratively estimate the parameters of a mathematical model from a set of observations containing outliers, which makes it well suited to estimating model parameters from noisy observation data.
  • the data obtained in practical applications often contains noisy data, which will cause interference to the construction of the model.
  • RANSAC proceeds by randomly selecting some points and using them to fit a model (for line fitting, the model is essentially the line's parameters), then testing the remaining points against this model: a tested point within the error tolerance is judged an inlier, otherwise an outlier.
  • If the number of inliers reaches a certain threshold, the selected point set is acceptable; otherwise, all steps following the random selection are repeated, and the process continues until an acceptable point set is found, at which point the resulting model can be considered optimal for the data.
  • FIG. 4 shows a flow diagram of a method 240 of fitting a straight line from a point of a V disparity map, in accordance with an embodiment of the present invention. This method can be used for step S240 in Fig. 2.
  • step S241 a set of random subsets of the points in the binarized V-disparity map is selected for straight line fitting to obtain a straight line model.
  • In step S242, the obtained line model is used to test all other data: if a point fits the estimated line model, it is considered an inlier, and the number of inliers is counted.
  • In step S243, it is judged whether the number of inliers exceeds the threshold; if so, the process proceeds to step S245, otherwise to step S244.
  • In step S244, the estimated model is judged unreasonable and discarded, and the process proceeds to step S249.
  • In step S245, the estimated model is judged reasonable; the model is then re-estimated from all inliers, the error rate of the inliers against the model is estimated, and the process proceeds to step S246.
  • In step S246, it is judged whether the error rate of the currently estimated model is smaller than that of the best model; if so, the process proceeds to step S247, otherwise to step S248.
  • In step S247, the current model replaces the best model: since, per the judgment of step S246, its error rate is lower than the best model's, it performs better and becomes the new best model; the process then proceeds to step S249.
  • In step S248, the estimated model is discarded, and the process proceeds to step S249.
  • step S249 it is determined whether or not the termination condition is reached, and if the termination condition is reached, the process is terminated, otherwise the process returns to step S241 to repeat the execution.
  • the termination condition here may be, for example, the number of iterations reaches a threshold number of times, the error rate is lower than a predetermined threshold, and the like.
  • The method of extracting a straight line from the V-disparity map with RANSAC is described above with reference to FIG. 4. Since the ground is not a plane, it appears in the V-disparity map as several consecutive line segments, which may be extracted, for example, as follows: extract the first line by the method described in connection with FIG. 4; after extraction, remove the points belonging to the first line from the V-disparity map; then extract the second line from the remaining points in the same way; repeat until the number of remaining points falls below a predetermined threshold.
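The single-line RANSAC loop of FIG. 4 and the repeated extraction described above can be sketched as follows. The thresholds, iteration count, the parameterization d = a·v + b, and the use of mean absolute deviation as the "error rate" are all illustrative assumptions:

```python
import random

def fit_line(pts):
    # Least-squares fit of d = a*v + b through points (v, d).
    n = len(pts)
    sv = sum(v for v, _ in pts)
    sd = sum(d for _, d in pts)
    svv = sum(v * v for v, _ in pts)
    svd = sum(v * d for v, d in pts)
    a = (n * svd - sv * sd) / (n * svv - sv * sv)
    return a, (sd - a * sv) / n

def ransac_line(pts, iters=200, tol=1.5, min_inliers=5, seed=0):
    # One RANSAC line: sample, count inliers, re-estimate, keep the best.
    rng = random.Random(seed)
    best_model, best_inliers, best_err = None, [], float("inf")
    for _ in range(iters):
        p, q = rng.sample(pts, 2)
        if p[0] == q[0]:
            continue                      # degenerate (vertical) sample
        a, b = fit_line([p, q])
        inliers = [(v, d) for v, d in pts if abs(a * v + b - d) < tol]
        if len(inliers) < min_inliers:
            continue                      # model judged unreasonable
        a, b = fit_line(inliers)          # re-estimate on all inliers
        err = sum(abs(a * v + b - d) for v, d in inliers) / len(inliers)
        if err < best_err:
            best_model, best_inliers, best_err = (a, b), inliers, err
    return best_model, best_inliers

def piecewise_lines(pts, min_points=10):
    # Extract lines one by one, removing each line's inliers.
    lines, remaining = [], list(pts)
    while len(remaining) >= min_points:
        model, inliers = ransac_line(remaining)
        if model is None:
            break
        lines.append(model)
        used = set(inliers)
        remaining = [p for p in remaining if p not in used]
    return lines
```

For the V-disparity map, the input points would be the (v, d) positions of the binarized maxima; each returned (a, b) pair represents one ground segment.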
  • After step S240, the process proceeds to step S250.
  • In step S250, the lines are smoothed across multiple frames.
  • In one example, Kalman filtering may be performed on the fitted lines.
  • Alternatively, to meet real-time requirements, the road surface detection technique of this embodiment smooths the lines using multiple frames. The slope of the road a car travels on does not change greatly, and changes slowly and uniformly, so the line fitted according to the embodiment of the present invention also varies slowly and uniformly. On the other hand, the disparity map obtained from the binocular camera contains much noise, which gives the fitted line unnecessary jitter. To reduce this jitter, and given that the fitted line varies slowly and uniformly, the embodiment applies multi-frame smoothing to obtain a smooth, uniform line model.
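The multi-frame smoothing can be realized as a sliding-window running average of the line parameters, matching the running-sum scheme described in the summary for a model ax + by + c = 0 (the window length is an assumption; a closure over a deque is one simple realization):

```python
from collections import deque

def make_smoother(window):
    """Sliding-window average of line model parameters (a, b, c) for
    ax + by + c = 0: the oldest frame's parameters are subtracted from
    the running sums before the newest frame's are added."""
    buf = deque()
    sums = [0.0, 0.0, 0.0]
    def update(params):
        buf.append(params)
        for i in range(3):
            sums[i] += params[i]
        if len(buf) > window:
            old = buf.popleft()        # drop the oldest frame
            for i in range(3):
                sums[i] -= old[i]
        return tuple(s / len(buf) for s in sums)
    return update
```

Each call feeds one frame's fitted (a, b, c) and returns the smoothed parameters; once the window is full, each update costs O(1).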
  • In step S260, the travelable area in the original grayscale image is obtained from the extracted lines.
  • The travelable area may be obtained from the lines extracted in the V-disparity map as follows: for each row of the V-disparity map, take the disparity value d on the extracted line; in the corresponding row of the disparity map, compare each pixel's disparity value with d; when the difference is below a certain threshold, mark the corresponding position of the original image as a safe travelable area.
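The per-row comparison can be sketched as below; representing the extracted ground line as d = a·v + b and the tolerance value are assumptions made for illustration:

```python
def travelable_mask(disp_map, line, tol=1):
    """Mark pixels whose disparity is close to the fitted ground line.

    line = (a, b) parameterizes the V-disparity ground line d = a*v + b
    (an assumed parameterization). A pixel in row v counts as travelable
    when its disparity differs from the line's disparity by at most tol.
    """
    a, b = line
    mask = []
    for v, row in enumerate(disp_map):
        d_ground = a * v + b
        mask.append([abs(d - d_ground) <= tol for d in row])
    return mask
```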
  • Information about safe travelable areas in the grayscale image provides critical decision-making input for assisted driving, automatic driving, and driverless driving, helping prevent collisions and ensure safety.
  • FIG. 5 is a structural block diagram of a vehicle travelable area real-time detection system 300 according to another embodiment of the present invention.
  • the system 300 is placed in the vehicle for detecting the travelable area of the car in real time, providing critical support for the driver's assisted driving, automatic driving and driverless driving.
  • The vehicle travelable area real-time detection system 300 may include: a binocular camera 310, a disparity map calculation unit 320, a V-disparity map conversion unit 330, a binarization unit 340, a RANSAC line fitting unit 350, a multi-frame image filtering unit 360, and an original-image travelable area determining unit 370.
  • the binocular camera 310 is configured to capture two left and right grayscale images in front of the car in the direction of travel of the car.
  • The disparity map calculation unit 320 computes a disparity map from the left and right grayscale images.
  • The V-disparity map conversion unit 330 converts the disparity map into a V-disparity map.
  • the binarization unit 340 binarizes the V-disparity map.
  • the RANSAC line fitting component 350 uses the RANSAC method to fit a segmentation line from the points of the binarized V-disparity map.
  • the multi-frame image filtering section 360 smooth-filters the straight line according to the multi-frame image.
  • the original image travelable area determining section 370 obtains the travelable area in the original grayscale image by the extracted straight line.
  • For the operations of the disparity map calculation unit 320, the V-disparity map conversion unit 330, the binarization unit 340, the RANSAC line fitting unit 350, the multi-frame image filtering unit 360, and the original-image travelable area determining unit 370, refer to the corresponding step descriptions given with FIG. 2; they are not repeated here.
  • The term "binocular camera" herein should be understood broadly: any camera or camera-equipped device that can obtain left and right images may be regarded as a binocular camera herein.
  • The disparity map calculation unit 320, the V-disparity map conversion unit 330, the binarization unit 340, the RANSAC line fitting unit 350, the multi-frame image filtering unit 360, and the original-image travelable area determining unit 370 should likewise be understood broadly. These components may be implemented in software, firmware, hardware, or a combination thereof, and may be combined, sub-combined, or further split; all such variants fall within the scope of the present disclosure.
  • The real-time detection method and system for a vehicle's travelable area can adapt to a wide variety of road surfaces and road conditions, impose low accuracy requirements on the disparity map, reduce front-end computation, resist interference well, and improve real-time performance, all of which are critical for safe automatic driving.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)
  • Traffic Control Systems (AREA)

Abstract

A method (200) and system (100, 300) for real-time detection of a vehicle's travelable area. The method (200) includes: capturing, with a binocular camera (110, 310) mounted on the vehicle body, left and right grayscale images of the area in front of the vehicle along its direction of travel, and computing a disparity map (S210); converting the disparity map into a V-disparity map (S220); binarizing the V-disparity map (S230); using the RANSAC method to fit piecewise lines to the points of the binarized V-disparity map (S240); smoothing the lines across multiple frames (S250); and obtaining the travelable area in the original grayscale image from the extracted lines (S260). The technical solution adapts to a wide variety of road surfaces and road conditions, imposes low accuracy requirements on the disparity map, reduces front-end computation, resists interference well, and improves real-time performance, providing critical support for safe automatic driving.

Description

Method and System for Real-Time Detection of a Vehicle's Travelable Area
Technical Field
The present invention relates generally to automotive autonomous driving technology, and more particularly to a method and system for real-time detection of a vehicle's travelable area.
Background Art
Accurate, real-time detection of the travelable area has important applications, and plays a decisive role especially in safety warnings for assisted driving and in the automatic control of autonomous driving. In assisted driving, for example, advance warning of non-travelable road surfaces can reduce accidents as much as possible and avoid personal injury and property loss; in autonomous driving, the more accurate the travelable area detection, the more accurate the automatic control.
At present there are two main approaches to detecting the safe travelable area. The first is based on a lidar sensor: calibration is performed first, and regions below a certain threshold are judged to be ground. The lidar required by this method is very expensive and hard to popularize; moreover, the ground is rarely horizontal, so the lidar results are not accurate. The second uses a monocular color camera and detects the travelable area by machine learning and computer vision. This method depends heavily on training samples and hand-designed features; travelable areas vary enormously, and situations absent from the training samples cannot be detected, so scalability and generality are weak. Furthermore, a monocular camera cannot accurately obtain depth information, so the results often do not match the real scene; finally, the method's real-time performance is also hard to guarantee.
In recent years, techniques using stereo cameras to detect the road surface for assisted driving have been proposed.
Patent document CN 103489175 B proposes the following stereo-camera-based road surface detection technique: obtain a V-disparity map that includes the road surface; and extract a straight line from the V-disparity map as the road surface, where the extraction includes: using the Hough transform to extract a first line as a first road surface approximation; and, based on the first line, fitting a second line by least squares as the road surface. CN 103489175 B further proposes filtering the line with a Kalman filter.
The non-patent documents "Real time obstacle detection on non flat road geometry through v-disparity representation" and "U-V-Disparity based Obstacle Detection with 3D Camera and Steerable Filter" both use V-disparity maps to detect obstacles above the ground. In a driving environment, however, cars are generally more than 1 meter tall, so obstacles are easily detected even when the V-disparity map is not very accurate; detecting the road surface itself places much higher accuracy demands on the algorithm.
发明内容
发明人经实验及分析发现:专利文献1存在一定的应用局限性:首先,路面并不是一个严格的平面,而是一个曲面,不能简单地用平面来表示曲面;其次,要想应用专利文献1,双目得到的深度图像必须包含较少的噪声,并且必须是稠密的视差图;这使得该技术不能实时运行;最小二乘法只适合与误差较小的情况。试想一下这种情况,假使需要从一个噪音较大的数据集中提取模型(比方说只有20%的数据时符合模型的)时,最小二乘法就显得力不从心了;综上,专利文献1的技术在实际应用中抗干扰能力很可能较差,不能适应各种复杂场景,当路面障碍物较多时无法有效去除障碍物所带来的影响。
鉴于以上情况,提出了本发明。
According to one aspect of the present invention, there is provided a method for real-time detection of the drivable area of an automobile, which may comprise: capturing, with a binocular camera mounted on the vehicle body, left and right grayscale images of the scene ahead of the vehicle along its direction of travel, and computing a disparity map; converting the disparity map into a V-disparity map; binarizing the V-disparity map; fitting piecewise straight lines to the points of the binarized V-disparity map using the RANSAC method; smoothing the lines over multiple frames of images; and obtaining the drivable area in the original grayscale image from the extracted lines.
According to the above method, binarizing the V-disparity map may comprise: finding the maximum pixel value of each row, setting the gray value of only the pixel holding that maximum in each row to 255, and setting the gray values of the remaining pixels to 0.
According to the above method, fitting one segment of a piecewise straight line using the RANSAC method may comprise repeatedly performing the following sequence of operations until a predetermined termination criterion is met: selecting a random subset of the maximum-value points of the V-disparity map and fitting a straight line to obtain a line model; testing all other data against the obtained line model, a point that fits the estimated line model being regarded as an inlier, the estimated model being regarded as reasonable if more than a predetermined number of points are classified as inliers, in which case the model is re-estimated from all inliers and the error rate of the inliers with respect to the model is estimated; replacing the current best model with this model if its error rate is lower; and taking the best model finally obtained as that segment of the piecewise straight line.
According to the above method, fitting multiple segments of piecewise straight lines using the RANSAC method may comprise: first extracting a first straight line as described above; after the extraction, removing the points belonging to the first line from the V-disparity map; then extracting a second straight line from the remaining points in the same way; and repeating this process until the number of remaining points falls below a predetermined threshold.
According to the above method, smoothing the lines over multiple frames of images may comprise: setting a time window; expressing the line model as ax+by+c=0; obtaining the line-model parameters for each frame of image and accumulating each parameter over the frames; and, whenever a new frame arrives, subtracting the line-model parameters of the earliest frame from the accumulated result, adding those of the current frame, and taking the average as the line-model parameters of this frame.
According to the above method, obtaining the drivable area in the original grayscale image from the extracted lines may comprise: for each row of the V-disparity map, taking the point on the extracted line whose disparity value is d; in the corresponding row of the disparity map, comparing the disparity value of each pixel with d; and, when the difference is below a certain threshold, judging the corresponding position of the original image to be a safe drivable area.
According to another aspect of the present invention, there is provided an onboard system for real-time detection of the drivable area of an automobile, which may comprise: a binocular camera that continuously captures left and right grayscale images of the scene ahead of the vehicle along its direction of travel; and a computing device comprising a memory, a processor, a communication interface, and a bus, the memory, the communication interface, and the processor all being connected to the bus, the memory storing computer-executable instructions, the computing device being able to obtain, via the communication interface, the left and right grayscale images captured by the binocular camera, and the processor, when executing the computer-executable instructions, performing the following method: computing a disparity map from the left and right grayscale images; converting the disparity map into a V-disparity map; binarizing the V-disparity map; fitting piecewise straight lines to the points of the binarized V-disparity map using the RANSAC method; smoothing the lines over multiple frames of images; and obtaining the drivable area in the original grayscale image from the extracted lines.
According to the above system, binarizing the V-disparity map may comprise: finding the maximum pixel value of each row, setting the gray value of only the pixel holding that maximum in each row to 255, and setting the gray values of the remaining pixels to 0.
According to the above system, fitting one segment of a piecewise straight line using the RANSAC method may comprise repeatedly performing the following sequence of operations until a predetermined exit criterion is met: selecting a random subset of the maximum-value points of the V-disparity map and fitting a straight line to obtain a line model; testing all other data against the obtained line model, a point that fits the estimated line model being regarded as an inlier, the estimated model being regarded as reasonable if more than a predetermined number of points are classified as inliers, in which case the model is re-estimated from all inliers and the error rate of the inliers with respect to the model is estimated; replacing the current best model with this model if its error rate is lower; and taking the best model finally obtained as that segment of the piecewise straight line.
According to the above system, fitting multiple segments of piecewise straight lines using the RANSAC method may comprise: extracting a first straight line; after the extraction, removing the points belonging to the first line from the V-disparity map; then extracting a second straight line from the remaining points; and repeating this process until the number of remaining points falls below a predetermined threshold.
According to the above system, smoothing the lines over multiple frames of images may comprise: setting a time window; expressing the line model as ax+by+c=0; obtaining the line-model parameters for each frame of image and accumulating each parameter over the frames; and, whenever a new frame arrives, subtracting the line-model parameters of the earliest frame from the accumulated result, adding those of the current frame, and taking the average as the line-model parameters of this frame.
According to the above system, obtaining the drivable area in the original grayscale image from the extracted lines may comprise: for each row of the V-disparity map, taking the point on the extracted line whose disparity value is d; in the corresponding row of the disparity map, comparing the disparity value of each pixel with d; and, when the difference is below a certain threshold, judging the corresponding position of the original image to be a safe drivable area.
According to another aspect of the present invention, there is provided a system for real-time detection of the drivable area of an automobile, which may comprise: a binocular camera configured to capture left and right grayscale images of the scene ahead of the vehicle along its direction of travel; a disparity-map computation component that computes a disparity map from the left and right grayscale images; a V-disparity conversion module that converts the disparity map into a V-disparity map; a binarization module that binarizes the V-disparity map; a RANSAC line-fitting module that fits piecewise straight lines to the points of the binarized V-disparity map using the RANSAC method; a multi-frame filtering module that smooths the lines over multiple frames of images; and an original-image drivable-area determination module that obtains the drivable area in the original grayscale image from the extracted lines.
The method and system for real-time drivable-area detection according to embodiments of the present invention can adapt to a wide variety of road surfaces and road conditions, place low demands on disparity-map precision, reduce front-end computation, resist interference, and improve real-time performance, all of which are critical for the safe automatic driving of automobiles.
Brief Description of the Drawings
These and/or other aspects and advantages of the present invention will become clearer and easier to understand from the following detailed description of embodiments of the invention taken in conjunction with the accompanying drawings, in which:
Fig. 1 is a schematic diagram of an onboard system 100 for detecting the drivable area of an automobile according to an embodiment of the present invention;
Fig. 2 describes in detail a method of real-time drivable-area detection according to an embodiment of the present invention;
Fig. 3 is a schematic diagram showing how the least-squares method can extract a wrong line in the presence of heavy noise;
Fig. 4 is a flowchart of a method 240 of fitting one straight line to the points of a V-disparity map according to an embodiment of the present invention;
Fig. 5 is a structural block diagram of a real-time drivable-area detection system 300 according to another embodiment of the present invention.
Detailed Description
To enable those skilled in the art to better understand the present invention, the invention is described in further detail below with reference to the drawings and specific embodiments.
Explanations of the terms used herein are given first.
Disparity map: a disparity map takes either image of an image pair as reference; it has the size of that reference image, and the value of each element is a disparity value. The disparity map carries the distance information of the scene and can be computed from the left and right images captured by a binocular camera. A point in an ordinary two-dimensional disparity map is written (u, v), where u is the horizontal coordinate and v the vertical coordinate; the pixel value at (u, v) is written d(u, v) and represents the disparity at that point. Because the disparity map carries the distance information of the scene, image matching for extracting a disparity map from a stereo pair has long been one of the most active areas of binocular vision research.
V-disparity map: the V-disparity map is derived from the disparity map; the gray value of any point (d, v) of the V-disparity map is the number of points whose disparity value equals d in the row of the disparity map with vertical coordinate v. Intuitively, the V-disparity map can be viewed as a side view of the disparity map: by accumulating the points of equal disparity in each row, a plane in the original image is projected into a straight line.
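For illustration only (this sketch is not part of the original disclosure), the V-disparity construction described above amounts to a per-row histogram of disparity values; the function name `v_disparity` and its parameters are hypothetical:

```python
import numpy as np

def v_disparity(disp, max_disp):
    """Build a V-disparity map: entry (v, d) counts how many pixels
    in row v of the disparity map have integer disparity value d."""
    h, _ = disp.shape
    vmap = np.zeros((h, max_disp), dtype=np.int32)
    for v in range(h):
        row = disp[v]
        valid = row[(row >= 0) & (row < max_disp)]
        # accumulate a histogram of disparity values for this row
        vmap[v] += np.bincount(valid.astype(np.int64), minlength=max_disp)
    return vmap
```

A flat ground plane, whose disparity decreases linearly with image row, then appears in `vmap` as a high-count slanted line, which is what the later line-fitting steps exploit.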
RANSAC: short for RANdom SAmple Consensus, an algorithm that estimates the parameters of a mathematical model from a set of sample data containing anomalous data and obtains the valid samples.
Fig. 1 is a schematic diagram of an onboard system 100 for detecting the drivable area of an automobile according to an embodiment of the present invention, comprising a binocular camera 110 and a computing device 120.
The binocular camera 110 continuously captures left and right grayscale images of the scene ahead of the vehicle along its direction of travel.
The binocular camera 110 is mounted, for example, at the top front of the vehicle, so that its imaging range is concentrated on the road surface ahead of the vehicle.
The computing device 120 comprises a memory 121, a processor 122, a communication interface 123, and a bus 124. The memory 121, the communication interface 123, and the processor 122 are all connected to the bus 124; the memory stores computer-executable instructions; the computing device obtains, via the communication interface, the left and right grayscale images captured by the binocular camera; and when the processor executes the computer-executable instructions, the method of real-time drivable-area detection is performed.
The computing device 120 may further include an alarm 125 for issuing an alarm signal or sending a notification when a dangerous or urgent situation is detected.
The structure shown in Fig. 1 is only an example and may be augmented, reduced, or substituted as needed.
It should also be noted that some functions, or parts of them, may be implemented by different components as needed. For example, computing the disparity map from the left and right images is described in the embodiment as performed by the computing device, but software, hardware, or firmware for disparity computation may instead be added to the binocular camera, or a dedicated component for computing the disparity map from the left and right images may be deployed in the vehicle; all of these fall within the concept of the present invention.
A method of real-time drivable-area detection according to an embodiment of the present invention is described in detail below with reference to Fig. 2.
The technique of the embodiment obtains left and right images with a binocular camera sensor, computes a disparity map from the two images, constructs a V-disparity map from the disparity map, extracts piecewise straight lines from the V-disparity map with RANSAC, smooths the lines over multiple frames of images, and finally obtains the safe drivable area in the original image from the disparities corresponding to these lines.
Fig. 2 shows the overall flowchart of a method 200 of real-time drivable-area detection according to an embodiment of the present invention.
In step S210, left and right grayscale images of the scene ahead of the vehicle along its direction of travel are captured by a binocular camera mounted on the vehicle body, and a disparity map is computed.
Specifically, for example, a binocular stereo matching algorithm first finds the correspondences between each pair of images, and the disparity map of the current scene is then obtained according to the triangulation principle.
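The original does not specify a particular matching algorithm; as a hedged illustration only, the sketch below uses naive sum-of-absolute-differences (SAD) block matching on rectified images (real systems typically use semi-global matching or similar). Triangulation then gives depth Z = f·B/d from focal length f, baseline B, and disparity d. The function name `sad_disparity` and its parameters are hypothetical; image borders wrap via `np.roll`, which is acceptable for this sketch:

```python
import numpy as np

def sad_disparity(left, right, max_disp=16, win=2):
    """Naive SAD block matching: for each pixel, pick the horizontal
    shift d in [0, max_disp) minimizing the sum of absolute differences
    over a (2*win+1) x (2*win+1) window."""
    h, w = left.shape
    disp = np.zeros((h, w), dtype=np.int32)
    best = np.full((h, w), np.inf, dtype=np.float32)
    L = left.astype(np.float32)
    R = right.astype(np.float32)
    for d in range(max_disp):
        shifted = np.roll(R, d, axis=1)            # candidate match at disparity d
        ad = np.pad(np.abs(L - shifted), win, mode='edge')
        cost = np.zeros((h, w), dtype=np.float32)
        for dy in range(2 * win + 1):              # box-sum the window of AD values
            for dx in range(2 * win + 1):
                cost += ad[dy:dy + h, dx:dx + w]
        better = cost < best                       # keep the lowest-cost disparity
        disp[better] = d
        best[better] = cost[better]
    return disp
```

Per-pixel winner-take-all like this is noisy in practice, which is one reason the method accumulates evidence in a V-disparity map rather than trusting individual disparities.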
Denoising and other processing may also be applied to the disparity map here.
In step S220, the disparity map is converted into a V-disparity map.
Specifically, for example, in the disparity map, shades of gray represent the relative distance of objects from the lens. From the depth information contained in the disparity map, the disparity of the ground varies continuously, approximating piecewise straight lines. Let Md denote the pixel value of a point in the disparity map and Mvd the pixel value of the corresponding point in the V-disparity map, and let the function f(Md)=Mvd denote the conversion between the disparity map and the V-disparity map: f accumulates the number Pnum of pixels with equal disparity in each row of the disparity map. With disparity as the horizontal axis, the vertical axis identical to that of the disparity map, and Pnum as the gray value of the corresponding pixel, a grayscale V-disparity map is obtained.
In step S230, the V-disparity map is binarized.
In one example, binarization proceeds as follows: the maximum of each row is found first; only the pixel holding that maximum is set to gray value 255, and all other pixels in the row are set to 0.
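As an illustrative sketch of this row-wise binarization (names are hypothetical, not from the original disclosure):

```python
import numpy as np

def binarize_vmap(vmap):
    """Per-row binarization: only the pixel holding the row maximum
    becomes 255, everything else 0. Rows that are entirely zero are
    left empty; ties keep the first maximum (np.argmax behavior)."""
    out = np.zeros_like(vmap, dtype=np.uint8)
    rows = np.arange(vmap.shape[0])
    cols = np.argmax(vmap, axis=1)
    nonzero = vmap.max(axis=1) > 0   # skip rows with no disparity votes
    out[rows[nonzero], cols[nonzero]] = 255
    return out
```

Keeping one point per row both thins the data for line fitting and suppresses weaker disparity votes from obstacles in that row.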
In step S240, piecewise straight lines are fitted with the RANSAC method to the points of the binarized V-disparity map.
The following explains why, among the many line-fitting algorithms, the embodiment of the present invention chooses the RANSAC method for fitting lines to the points of the binarized V-disparity map.
Real-world data always carry some deviation, or noise, which complicates mathematical fitting. Suppose, for example, that two variables X and Y are known to be linearly related, Y = aX + b, and the specific values of a and b are to be determined. A set of measured values of X and Y can be obtained experimentally. Although an equation in two unknowns is in theory fixed by two measurements, systematic error means that the values of a and b computed from any two points all differ. What is wanted is the theoretical model with the smallest error against the measured values.
The prior art usually fits lines with the least-squares method or the Hough transform.
The shortcomings of the Hough transform are: detection is too slow for real-time control, and precision is insufficient, so the expected information goes undetected while erroneous judgments are made, producing large amounts of redundant data. This stems mainly from the following:
1. It occupies large amounts of memory and is time-consuming, with poor real-time behavior.
2. Real images are generally disturbed by external noise and have a low signal-to-noise ratio, under which the performance of the conventional Hough transform degrades sharply; when searching for maxima in parameter space, a suitable threshold is hard to determine, and "false peaks" and missed detections often result.
The least-squares method computes the parameter values of a and b at which the partial derivatives of the mean squared error vanish. Indeed, in many settings least squares is synonymous with linear regression. Unfortunately, least squares is suitable only when the error is small. Consider extracting a model from a very noisy data set, say one in which only 20% of the data fit the model: least squares is then inadequate. In Fig. 3, for example, the eye easily picks out a line (the pattern), but least squares finds the wrong one.
The present invention detects the road surface by extracting lines from the V-disparity map; since the disparity map carries considerable noise, extracting lines by least squares in this situation is likely to yield a wrong fit.
The RANSAC algorithm estimates the parameters of a mathematical model iteratively from a set of observations containing "outliers", and is very well suited to estimating model parameters from heavily noisy observations. Data acquired in practice often contain noisy points that disturb model construction; such noisy points are called outliers, while the points that contribute positively to the model are called inliers. RANSAC first selects some points at random and builds a model from them (for line fitting, this so-called model is essentially the slope); the model is then used to test the remaining points, a point within the allowed error being judged an inlier and otherwise an outlier. If the number of inliers reaches a set threshold, the selected point set is deemed acceptable; otherwise all the steps from the random selection onward are repeated, until an acceptable point set is found, at which time the model obtained can be regarded as the optimal model of the data points.
Fig. 4 shows the flowchart of a method 240 of fitting one straight line to the points of the V-disparity map according to an embodiment of the present invention. The method can be used for step S240 of Fig. 2.
In step S241, a random subset of the points of the binarized V-disparity map is selected and a line is fitted to it to obtain a line model.
In step S242, the obtained line model is used to test all the other data; a point that fits the estimated line model is regarded as an inlier, and the inliers are counted.
In step S243, it is judged whether the number of inliers exceeds a threshold; if so, the flow proceeds to step S245, otherwise to step S244.
In step S244, the estimated model is judged unreasonable and discarded, and the flow proceeds to step S249.
In step S245, the estimated model is judged reasonable; the model is then re-estimated from all inliers, the error rate of the inliers with respect to the model is estimated, and the flow proceeds to step S246.
In step S246, it is judged whether the error rate of the currently estimated model is lower than that of the best model; if so, the flow proceeds to step S247, otherwise to step S248.
In step S247, the currently estimated model replaces the best model: by the judgment of step S246, its error rate is lower and its performance better than those of the best model, so it becomes the new best model; the flow then proceeds to step S249.
In step S248, the estimated model is discarded, and the flow proceeds to step S249.
In step S249, it is judged whether a termination condition is met; if so, the process ends, otherwise it returns to step S241 and repeats. The termination condition may be, for example, that the number of iterations reaches a threshold number, that the error rate falls below a predetermined threshold, and so on.
The above describes, with reference to Fig. 4, extracting one straight line from the V-disparity map with the RANSAC method. The ground is not a plane, and so appears in the V-disparity map as several consecutive line segments. The multiple segments may be extracted, for example, as follows: first a first line is extracted by the method described with reference to Fig. 4; after the extraction, the points belonging to the first line are removed from the V-disparity map; a second line is then extracted from the remaining points in the same way; and so on, until the number of remaining points falls below a predetermined threshold.
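The single-line loop of Fig. 4 and the segment-by-segment extraction just described can be sketched as follows. This is an illustrative implementation under stated assumptions, not the claimed embodiment: all names (`ransac_line`, `ransac_piecewise`, `tol`, `min_inliers`, `stop_count`) are hypothetical, and the inlier re-fit uses total least squares via SVD as one reasonable choice:

```python
import random
import numpy as np

def ransac_line(pts, iters=200, tol=2.0, min_inliers=20, seed=0):
    """One RANSAC line fit (steps S241-S249): sample two points,
    hypothesize a line a*x + b*y + c = 0, count inliers, re-estimate
    on the inliers, and keep the model with the lowest inlier error."""
    rng = random.Random(seed)
    pts = np.asarray(pts, dtype=np.float64)
    best_model, best_err, best_in = None, np.inf, None
    for _ in range(iters):
        i, j = rng.sample(range(len(pts)), 2)
        (x1, y1), (x2, y2) = pts[i], pts[j]
        a, b, c = y2 - y1, x1 - x2, x2 * y1 - x1 * y2   # line through the pair
        norm = np.hypot(a, b)
        if norm == 0:
            continue
        dist = np.abs(pts @ (a, b) + c) / norm
        inliers = dist < tol
        if inliers.sum() < min_inliers:
            continue                                    # S243/S244: model discarded
        # S245: re-estimate from all inliers (total least squares via SVD)
        mean = pts[inliers].mean(axis=0)
        normal = np.linalg.svd(pts[inliers] - mean)[2][-1]  # unit normal of fit
        a, b = normal
        c = -normal @ mean
        err = np.abs(pts[inliers] @ (a, b) + c).mean()
        if err < best_err:                              # S246/S247: keep if better
            best_model, best_err, best_in = (a, b, c), err, inliers
    return best_model, best_in

def ransac_piecewise(pts, stop_count=30, **kw):
    """Extract lines one by one, removing each line's inliers,
    until too few points remain (the multi-segment procedure)."""
    pts = np.asarray(pts, dtype=np.float64)
    segments = []
    while len(pts) >= stop_count:
        model, inliers = ransac_line(pts, **kw)
        if model is None:
            break
        segments.append(model)
        pts = pts[~inliers]                             # remove the fitted segment
    return segments
```

Because each extracted segment's inliers are removed before the next fit, a curved road that projects to two or three slanted segments in the V-disparity map is recovered one segment at a time.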
Returning to Fig. 2, after step S240 is completed, the flow proceeds to step S250.
In step S250, the lines are smoothed over multiple frames of images.
As noted above, patent document CN 103489175 B applies a Kalman filter to the fitted line.
Through experiment and analysis, the inventors concluded that the Kalman filter assumes the variation of the filtered quantity to be Gaussian, whereas the variation of the road surface is not Gaussian; moreover, the Kalman filter runs slowly and cannot meet the real-time requirements of drivable-area detection in the autonomous driving field.
The road detection technique of the embodiment of the present invention therefore designs a multi-frame smoothing method that meets real-time requirements. The slope of the road the vehicle travels on does not change abruptly; its variation is gradual and smooth, so the lines fitted according to the embodiment also vary smoothly. On the other hand, the disparity map produced by the binocular camera carries much noise, so the obtained lines jitter unnecessarily. To reduce this jitter, and in view of the smooth, gradual variation of the fitted lines, the embodiment proposes smoothing over multiple frames of images to obtain a smooth, uniform line model.
Specifically, the smoothing over multiple frames may proceed as follows: a time window is set; the line model is expressed as ax+by+c=0; the line-model parameters are obtained for each frame of image and each parameter is accumulated over the frames; whenever a new frame arrives, the line-model parameters of the earliest frame are subtracted from the accumulated result, those of the current frame are added, and the average is taken as the line-model parameters of this frame. For example, as the vehicle travels on the road, a new image is captured at the current moment tc; for the fixed window, the first frame is removed from the window, the newly arrived frame is added, and the line-model parameters of the frames in the window are averaged to give the parameters of the new frame, i.e. the estimated mathematical model parameters of the road surface in the V-disparity map; as time advances, this operation continues, which amounts to sliding the window forward in time.
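A minimal sketch of this sliding-window running average (illustrative only; the class name `LineSmoother` and the parameter `window` are hypothetical):

```python
from collections import deque
import numpy as np

class LineSmoother:
    """Sliding-window average of line parameters (a, b, c) over the
    last `window` frames: add the new frame to a running sum, drop
    the oldest frame once the window is full, return the mean."""
    def __init__(self, window=5):
        self.window = window
        self.frames = deque()
        self.total = np.zeros(3)

    def update(self, params):
        params = np.asarray(params, dtype=np.float64)
        self.frames.append(params)
        self.total += params
        if len(self.frames) > self.window:
            self.total -= self.frames.popleft()   # subtract the earliest frame
        return self.total / len(self.frames)
```

Note one assumption this sketch makes explicit: averaging (a, b, c) across frames is only meaningful if every frame's parameters share a consistent normalization (for example a unit normal with a fixed sign convention); otherwise equal lines with scaled parameters would average incorrectly.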
In step S260, the drivable area in the original grayscale image is obtained from the extracted lines.
In one example, the drivable area may be obtained from the lines extracted in the V-disparity map as follows: for each row of the V-disparity map, the point on the extracted line whose disparity value is d is taken; in the corresponding row of the disparity map, the disparity value of each pixel is compared with d; when the difference is below a certain threshold, the corresponding position of the original image is judged to be a safe drivable area.
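This row-wise back-projection can be sketched as follows (illustrative only; `drivable_mask`, `road_disp_per_row`, and `thresh` are hypothetical names), where `road_disp_per_row[v]` is the road disparity d predicted by the fitted line for image row v:

```python
import numpy as np

def drivable_mask(disp, road_disp_per_row, thresh=2.0):
    """Mark pixel (v, u) drivable when |disp[v, u] - d_road(v)| < thresh,
    i.e. when its disparity is close to the road line's disparity
    for that row of the V-disparity map."""
    h, w = disp.shape
    d_road = np.asarray(road_disp_per_row, dtype=np.float64).reshape(h, 1)
    # broadcasting compares every pixel in row v against d_road[v]
    return np.abs(disp.astype(np.float64) - d_road) < thresh
```

Pixels whose disparity deviates from the road line by more than the threshold (obstacles, which sit above the ground and thus carry larger disparity than the road at the same row) are excluded from the mask.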
The information on the safe drivable area obtained in the grayscale image provides key decision input for assisted driving, automatic driving, and unmanned driving, preventing collisions and ensuring safety.
Fig. 5 shows a structural block diagram of a real-time drivable-area detection system 300 according to another embodiment of the present invention. The system 300 is mounted on a vehicle to detect the vehicle's drivable area in real time, providing key support for assisted, automatic, and unmanned driving.
As shown in Fig. 5, the system 300 may comprise: a binocular camera 310, a disparity-map computation component 320, a V-disparity conversion component 330, a binarization component 340, a RANSAC line-fitting component 350, a multi-frame filtering component 360, and an original-image drivable-area determination component 370.
The binocular camera 310 is configured to capture left and right grayscale images of the scene ahead of the vehicle along its direction of travel. The disparity-map computation component 320 computes a disparity map from the left and right grayscale images. The V-disparity conversion component 330 converts the disparity map into a V-disparity map. The binarization component 340 binarizes the V-disparity map. The RANSAC line-fitting component 350 fits piecewise straight lines to the points of the binarized V-disparity map using the RANSAC method. The multi-frame filtering component 360 smooths the lines over multiple frames of images. The original-image drivable-area determination component 370 obtains the drivable area in the original grayscale image from the extracted lines.
For the functions and specific implementations of the disparity-map computation component 320, the V-disparity conversion component 330, the binarization component 340, the RANSAC line-fitting component 350, the multi-frame filtering component 360, and the original-image drivable-area determination component 370, reference may be made to the description of the corresponding steps of Fig. 2, which is not repeated here.
It should be noted that "binocular camera" herein is to be understood broadly: any camera, or any device with an imaging function, capable of obtaining a left image and a right image may be regarded as a binocular camera herein.
The components 320 through 370 are likewise to be understood broadly: they may be implemented in software, firmware, or hardware, or a combination thereof, and the components may be combined with one another, sub-combined, or further divided, all of which falls within the scope of the present disclosure.
The method and system for real-time drivable-area detection according to embodiments of the present invention can adapt to a wide variety of road surfaces and road conditions, place low demands on disparity-map precision, reduce front-end computation, resist interference, and improve real-time performance, all of which are critical for the safe automatic driving of automobiles.
The embodiments of the present invention have been described above; the description is exemplary, not exhaustive, and is not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The scope of protection of the present invention should therefore be determined by the scope of the claims.

Claims (13)

  1. A method for real-time detection of the drivable area of an automobile, comprising:
    capturing, with a binocular camera mounted on the vehicle body, left and right grayscale images of the scene ahead of the vehicle along its direction of travel, and computing a disparity map;
    converting the disparity map into a V-disparity map;
    binarizing the V-disparity map;
    fitting piecewise straight lines to the points of the binarized V-disparity map using the RANSAC method;
    smoothing the lines over multiple frames of images; and
    obtaining the drivable area in the original grayscale image from the extracted lines.
  2. The method for real-time detection of the drivable area of an automobile according to claim 1, wherein binarizing the V-disparity map comprises:
    finding the maximum pixel value of each row, setting the gray value of only the pixel holding that maximum in each row to 255, and setting the gray values of the remaining pixels to 0.
  3. The method for real-time detection of the drivable area of an automobile according to claim 1, wherein fitting one segment of a piecewise straight line using the RANSAC method comprises:
    repeatedly performing the following sequence of operations until a predetermined termination criterion is met:
    selecting a random subset of the maximum-value points of the V-disparity map and fitting a straight line to obtain a line model;
    testing all other data against the obtained line model, a point that fits the estimated line model being regarded as an inlier, the estimated model being regarded as reasonable if more than a predetermined number of points are classified as inliers, in which case the model is re-estimated from all inliers and the error rate of the inliers with respect to the model is estimated;
    replacing the current best model with the model if the error rate of the model is lower than that of the current best model; and
    taking the best model finally obtained as the segment of the piecewise straight line.
  4. The method for real-time detection of the drivable area of an automobile according to claim 3, wherein fitting multiple segments of piecewise straight lines using the RANSAC method comprises:
    first extracting a first straight line according to the method of claim 3; after the extraction, removing the points belonging to the first line from the V-disparity map; then extracting a second straight line from the remaining points according to the method of claim 3; and repeating this process until the number of remaining points falls below a predetermined threshold.
  5. The method for real-time detection of the drivable area of an automobile according to claim 1, wherein smoothing the lines over multiple frames of images comprises:
    setting a time window; expressing the line model as ax+by+c=0; obtaining the line-model parameters for each frame of image and accumulating each parameter over the frames; and, whenever a new frame of image arrives, subtracting the line-model parameters of the earliest frame from the accumulated result, adding the line-model parameters of the current frame, and taking the average as the line-model parameters of this frame.
  6. The method for real-time detection of the drivable area of an automobile according to any one of claims 1 to 5, wherein obtaining the drivable area in the original grayscale image from the extracted lines comprises:
    for each row of the V-disparity map, taking the point on the extracted line whose disparity value is d; in the corresponding row of the disparity map, comparing the disparity value of each pixel with d; and, when the difference is below a certain threshold, judging the corresponding position of the original image to be a safe drivable area.
  7. An onboard system for real-time detection of the drivable area of an automobile, comprising:
    a binocular camera that continuously captures left and right grayscale images of the scene ahead of the vehicle along its direction of travel; and
    a computing device comprising a memory, a processor, a communication interface, and a bus, the memory, the communication interface, and the processor all being connected to the bus, the memory storing computer-executable instructions, the computing device being able to obtain, via the communication interface, the left and right grayscale images captured by the binocular camera, and the processor, when executing the computer-executable instructions, performing the following method:
    computing a disparity map from the left and right grayscale images;
    converting the disparity map into a V-disparity map;
    binarizing the V-disparity map;
    fitting piecewise straight lines to the points of the binarized V-disparity map using the RANSAC method;
    smoothing the lines over multiple frames of images; and
    obtaining the drivable area in the original grayscale image from the extracted lines.
  8. The system for real-time detection of the drivable area of an automobile according to claim 7, wherein binarizing the V-disparity map comprises:
    finding the maximum pixel value of each row, setting the gray value of only the pixel holding that maximum in each row to 255, and setting the gray values of the remaining pixels to 0.
  9. The system for real-time detection of the drivable area of an automobile according to claim 7, wherein fitting one segment of a piecewise straight line using the RANSAC method comprises:
    repeatedly performing the following sequence of operations until a predetermined exit criterion is met:
    selecting a random subset of the maximum-value points of the V-disparity map and fitting a straight line to obtain a line model;
    testing all other data against the obtained line model, a point that fits the estimated line model being regarded as an inlier, the estimated model being regarded as reasonable if more than a predetermined number of points are classified as inliers, in which case the model is re-estimated from all inliers and the error rate of the inliers with respect to the model is estimated;
    replacing the current best model with the model if the error rate of the model is lower than that of the current best model; and
    taking the best model finally obtained as the segment of the piecewise straight line.
  10. The system according to claim 9, wherein fitting multiple segments of piecewise straight lines using the RANSAC method comprises:
    extracting a first straight line; after the extraction, removing the points belonging to the first line from the V-disparity map; then extracting a second straight line from the remaining points; and repeating this process until the number of remaining points falls below a predetermined threshold.
  11. The system according to claim 7, wherein smoothing the lines over multiple frames of images comprises:
    setting a time window; expressing the line model as ax+by+c=0; obtaining the line-model parameters for each frame of image and accumulating each parameter over the frames; and, whenever a new frame of image arrives, subtracting the line-model parameters of the earliest frame from the accumulated result, adding the line-model parameters of the current frame, and taking the average as the line-model parameters of this frame.
  12. The system according to claim 7, wherein obtaining the drivable area in the original grayscale image from the extracted lines comprises:
    for each row of the V-disparity map, taking the point on the extracted line whose disparity value is d; in the corresponding row of the disparity map, comparing the disparity value of each pixel with d; and, when the difference is below a certain threshold, judging the corresponding position of the original image to be a safe drivable area.
  13. A system for real-time detection of the drivable area of an automobile, comprising:
    a binocular camera configured to capture left and right grayscale images of the scene ahead of the vehicle along its direction of travel;
    a disparity-map computation component that computes a disparity map from the left and right grayscale images;
    a V-disparity conversion module that converts the disparity map into a V-disparity map;
    a binarization module that binarizes the V-disparity map;
    a RANSAC line-fitting module that fits piecewise straight lines to the points of the binarized V-disparity map using the RANSAC method;
    a multi-frame filtering module that smooths the lines over multiple frames of images; and
    an original-image drivable-area determination module that obtains the drivable area in the original grayscale image from the extracted lines.
PCT/CN2016/100520 2016-09-28 2016-09-28 Method and system for real-time detection of the drivable area of an automobile WO2018058355A1 (zh)

Publications (1)

Publication Number Publication Date
WO2018058355A1 true WO2018058355A1 (zh) 2018-04-05



Also Published As

Publication number Publication date
CN107517592B (zh) 2021-07-02
CN107517592A (zh) 2017-12-26


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 16917110; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 16917110; Country of ref document: EP; Kind code of ref document: A1)