CN102682292B - Method based on monocular vision for detecting and roughly positioning edge of road - Google Patents

Method based on monocular vision for detecting and roughly positioning edge of road

Info

Publication number
CN102682292B (application CN201210144082.8A)
Authority
CN
China
Prior art keywords: step, road, edge, image, pixel
Application number: CN201210144082.8A
Other languages: Chinese (zh)
Other versions: CN102682292A
Inventor: 王宏 (Wang Hong), 严润晨 (Yan Runchen), 赵云鹏 (Zhao Yunpeng)
Original Assignee: Tsinghua University (清华大学)
Priority date (the priority date is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed)
Application filed by Tsinghua University (清华大学)
Priority to CN201210144082.8A
Publication of CN102682292A
Application granted
Publication of CN102682292B


Abstract

The invention discloses a method based on monocular vision for detecting and roughly positioning the edge of a road, and relates to the fields of machine vision and intelligent control. For continuous roads with different edge characteristics, two road-edge detection methods are provided; they are suitable for semi-structured and unstructured roads and can be applied to visual navigation and intelligent control of a robot. One method detects the road edge based on color, the other based on threshold segmentation. Using the detected road edge, the image is subjected to an inverse perspective projection transformation, so that the front view is transformed into a top view; from the linear correspondence between pixels and actual distances in the top view, the perpendicular distance from the robot's current position to the road edge and the robot's heading angle are calculated. The method is easy to implement, strongly interference-resistant, highly real-time, and suitable for semi-structured and unstructured roads.

Description

Road-edge detection and rough localization method based on monocular vision

Technical field

The invention relates to computer vision, pattern recognition, and mobile-robot navigation technologies and methods.

Technical background

An intelligent mobile robot is a complex system comprising multiple modules such as environment sensing, control and decision-making, and path planning, applying computer technology, information technology, robotics, and microelectronics. It generally fuses multiple sensor technologies, including GPS, laser radar, cameras, ultrasound, and gyroscopes; the sensed data are processed and fused to obtain the robot's localization information and information about obstacles ahead, for use in motion planning and decision-making. Mobile robots are now widely used in fields such as the military, medicine, agriculture, and industry. Especially when facing extreme environments and unsafe conditions, such as nuclear contamination, high pressure, the deep sea, or even outer space, mobile robots can effectively reduce the risks of manual work and improve efficiency, promoting social and technological progress and development.

Mobile robots can be classified in several ways; by control type, they can be divided into remote-controlled robots and autonomous robots.

On November 26, 2011, the United States successfully launched the new-generation Mars rover "Curiosity", a nuclear-powered, remote-controlled mobile robot incorporating the latest science and technology. It is the fourth Mars explorer after "Spirit", "Opportunity", and "Phoenix"; it has strong mobility, carries a large number of environmental analysis and monitoring instruments, and can transmit various analysis data back to Earth via satellite.

The autonomous mobile robot is the trend of future development. It integrates computer vision, pattern recognition, fuzzy control, expert systems, and other artificial-intelligence technologies, developing from autonomous movement in known structured environments, through autonomous movement in semi-structured environments, toward autonomous movement and exploration in unknown unstructured environments.

Among these, the intelligent car is currently one of the most closely watched applications of mobile-robot technology. The intelligent car THMR-V developed by Tsinghua University can travel on ordinary roads and highways, with a design speed of 80 km/h on highways and 20 km/h on ordinary roads. It is equipped with a color camera and a laser range finder, plus navigation and environment-monitoring systems such as differential GPS, a magnetic compass, and optical encoders, and can travel and avoid obstacles in unstructured environments such as a campus. In 2010 Google announced the intelligent car it had developed, based on the Toyota Prius. Fitted with video cameras, laser range finders, radar sensors, and GPS satellite positioning, and guided by Google's own digital maps of traffic conditions, it has driven more than 160,000 miles on sensor information (including sections occasionally driven by a human). The intelligent car's perception of the road environment relies on the fusion of information from multiple sensors, placing high demands on sensor accuracy.

For mobile robots, the most frequently used sensor is the vision sensor, i.e., an installed color camera. Computer vision and pattern-recognition technology play a vital role in mobile-robot navigation. Visual recognition algorithms for structured roads generally detect lane lines, traffic signs, signal lights, and the road area; visual recognition algorithms for unstructured roads detect the drivable region and obstacles. Detection algorithms for structured roads with lane lines are currently fairly mature, but for country roads and urban roads without lane lines there is still a lack of algorithms with strong generality.

Most current visual recognition algorithms place high demands on the sensors and have high time complexity; recognition of unstructured environments still requires many conditions to be met and cannot provide accurate navigation information. In view of this situation, the present invention proposes a road-edge detection and rough localization method based on monocular vision.

Summary of the invention

Aiming at the problems that mobile-robot vision algorithms place high demands on the sensors, have weak real-time performance, and generalize poorly to unstructured roads, the present invention proposes a road-edge detection and rough localization method based on monocular vision.

The invention is characterized in that a semi-automatic road-edge detection and rough localization method based on a mobile robot's monocular vision is realized in a road-edge detection and rough localization system formed by serially connecting a computer controlled by an operator with a single-lens CCD camera mounted on the mobile robot; the method is realized successively according to the following steps:

Step (1), a road-condition image photographed by the single-lens CCD camera while the robot advances, reflecting the color difference between the road and the two sides of the non-road boundary, is input into the computer and displayed;

Step (2), according to the color difference between the road and the two sides of the non-road boundary, the operator selectively starts one of two different software modules located in the computer:

For color road-condition images in which the RGB colors of the road and the non-road side differ significantly, the color-based road-edge detection module is started;

For color road-condition images in which the colors of the road and the non-road side are blurred and there is no obvious RGB boundary, the threshold-segmentation-based road-edge detection module is started;

Step (3), the color-based road-edge detection module carries out the following steps in turn to detect the road edge:

Step (3.1), parameter initialization: the color road-condition image in the RGB color space is read in, and the region of interest (ROI) is set to between 1/4 and 1/2 of the size of the road-condition image, divided starting from the lower-left corner of the image;

Step (3.2), the ROI image is taken out of the color road-condition image and transformed into the HSV color space, obtaining a new road-condition image; for each pixel, with R, G, B the brightness values of the red, green, and blue channels normalized to [0, 1]:

Brightness V' = max(R, G, B),

Saturation S' = (V' − min(R, G, B)) / V' if V' ≠ 0, otherwise S' = 0,

Hue
H' = 60 × (G − B)/(V' − min(R, G, B)) if V' = R,
H' = 120 + 60 × (B − R)/(V' − min(R, G, B)) if V' = G,
H' = 240 + 60 × (R − G)/(V' − min(R, G, B)) if V' = B;

If H' < 0, it is replaced by H' + 360; for 8-bit storage, V = 255 × V', S = 255 × S', H = H'/2.
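The per-pixel RGB-to-HSV conversion of step (3.2) can be sketched as follows (an illustrative Python rendering of the formulas above, with the 8-bit rescaling V = 255V', S = 255S', H = H'/2; the function name is ours, not the patent's):

```python
def rgb_to_hsv8(r, g, b):
    """Convert one RGB pixel (values 0-255) to the 8-bit HSV encoding of
    step (3.2): V = 255*V', S = 255*S', H = H'/2 with H' in degrees."""
    r_, g_, b_ = r / 255.0, g / 255.0, b / 255.0
    v = max(r_, g_, b_)
    mn = min(r_, g_, b_)
    s = 0.0 if v == 0 else (v - mn) / v
    if v == mn:            # achromatic: hue undefined, set to 0
        h = 0.0
    elif v == r_:
        h = 60.0 * (g_ - b_) / (v - mn)
    elif v == g_:
        h = 120.0 + 60.0 * (b_ - r_) / (v - mn)
    else:
        h = 240.0 + 60.0 * (r_ - g_) / (v - mn)
    if h < 0:
        h += 360.0
    return int(round(h / 2)), int(round(255 * s)), int(round(255 * v))
```

For example, a pure green pixel (0, 255, 0) has hue 120°, stored as H = 60 in the 8-bit encoding.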

Step (3.3), Canny edge extraction is carried out with the cvCanny function module of the openCV library, obtaining the Canny edge map: the gradient threshold for connecting weak edges is set to 50, and the gradient threshold at which strong edges are initially seeded is set to 150; the input is the HSV color-space image of the ROI together with the two gradient thresholds, and the output is the Canny edge map;

Step (3.4), the color ranges in HSV are set, and the green or khaki regions are extracted; the green range is (20, 0, 0) to (70, 40, 0), and the khaki range is (10, 0, 0) to (30, 0, 150);

Step (3.5), the green or khaki region obtained in step (3.4) is eroded using the cvErode function module of the openCV library; the output is the result image;

Step (3.6), the result image obtained in step (3.5) is dilated using the cvDilate function module of the openCV library; the output is the green or khaki region after the erosion-dilation operation;

Step (3.7), the edge points used for fitting are extracted from the result image obtained in step (3.6). For the left road edge, the result image is scanned from right to left: if an edge point of the Canny edge map has a green point within the 5 pixels on its left and no green point within the 5 pixels on its right, it is considered a true edge point; an edge point with fewer than 5 pixels on its left or right is directly cast out. A true-edge-point sample set is thus obtained; the same steps apply to a khaki region;
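The scan of step (3.7) can be sketched as follows (an illustrative Python rendering; the 0/1 arrays stand in for the Canny map and the extracted green region, and all names are ours):

```python
def true_edge_points(canny, green, win=5):
    """For a road on the LEFT: scan each row right to left; a Canny edge
    pixel is kept as a true edge point if the `win` pixels to its left
    contain a green pixel and the `win` pixels to its right contain none.
    `canny` and `green` are equal-sized 2D 0/1 lists; at most one point is
    kept per row, and points without a full window on both sides are cast
    out, as in step (3.7)."""
    pts = []
    h, w = len(canny), len(canny[0])
    for y in range(h):
        for x in range(w - 1, -1, -1):
            if not canny[y][x]:
                continue
            if x - win < 0 or x + win >= w:
                continue  # incomplete window: cast out
            left = any(green[y][x - k] for k in range(1, win + 1))
            right = any(green[y][x + k] for k in range(1, win + 1))
            if left and not right:
                pts.append((x, y))
                break  # at most one true edge point per row
    return pts
```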

Step (3.8), fitting a line to the true edge points:

If the number m of true edge points in the sample set satisfies m ≥ 30, the following steps are carried out;

Step (3.8.1), two true edge points are selected at random from the true-edge-point sample set as the two sample data;

Step (3.8.2), a straight line is fitted by least squares through the cvFitLine function module of the openCV library; the input is the two sample data obtained in step (3.8.1), the error type is CV_DIST_L2, and the polar radius and angle accuracies are 0.01; the output is the slope and intercept of the fitted line. The number k of edge points whose distance to this line is less than 4 pixels is then counted; these are the points used for fitting in the subsequent step;

Step (3.8.3), if k/m is greater than 0.3, the fitted model is accepted, and a straight line is refitted to the k points by least squares, giving the final result; otherwise, return to step (3.8.1). If the number of repetitions exceeds the set maximum of 200 cycles, the fitting fails; it also fails if the number of true edge points in the sample set is less than 30;
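Steps (3.8.1)-(3.8.3) amount to a RANSAC line fit. A minimal Python sketch, replacing cvFitLine with a plain least-squares fit of the same squared-distance (CV_DIST_L2) criterion; names and the non-vertical-line assumption are ours:

```python
import random

def least_squares_line(pts):
    """Least-squares fit y = k*x + b (assumes the line is not vertical)."""
    n = len(pts)
    sx = sum(x for x, _ in pts); sy = sum(y for _, y in pts)
    sxx = sum(x * x for x, _ in pts); sxy = sum(x * y for x, y in pts)
    k = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return k, (sy - k * sx) / n

def ransac_line(pts, max_iter=200, dist_thresh=4.0, accept_ratio=0.3):
    """RANSAC as in step (3.8): sample two points, count points within
    4 px of the line through them, accept when inliers/m > 0.3 and refit
    by least squares; fail after 200 iterations or if m < 30."""
    m = len(pts)
    if m < 30:
        return None
    for _ in range(max_iter):
        (x1, y1), (x2, y2) = random.sample(pts, 2)
        a, b, c = y2 - y1, x1 - x2, 0.0   # line a*x + b*y + c = 0
        c = -(a * x1 + b * y1)
        norm = (a * a + b * b) ** 0.5
        if norm == 0:
            continue
        inliers = [(x, y) for x, y in pts
                   if abs(a * x + b * y + c) / norm < dist_thresh]
        if len(inliers) / m > accept_ratio:
            return least_squares_line(inliers)
    return None
```

Fitting twice (to the random pair, then to its inliers) is what makes the result robust to noisy edge points.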

Step (4), the threshold-segmentation-based road-edge detection module carries out the following steps in turn, at the right road edge, to detect the road edge:

Step (4.1), the region of interest (ROI) is set to the lower-right quarter of the image;

Step (4.2), the ROI image is taken out and converted to a grayscale image by the following formula:

V0(x, y) = 0.212671 × R(x, y) + 0.715160 × G(x, y) + 0.072169 × B(x, y),

where x, y are the pixel coordinates taking the upper-left corner of the image as origin; V0 is the gray value of pixel (x, y); R, G, and B are the red, green, and blue channel values of the pixel in the color image; and V0, R, G, B ∈ [0, 255];

Step (4.3), mean blurring is applied to each pixel (x, y) by the following formula: the 5 × 5 pixel-value matrix V centered at (x, y) is convolved with the averaging kernel matrix K, giving the new pixel value V1;

V1(x, y) = V * K, where * denotes convolution,
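The mean blur of step (4.3) is a convolution with the 5 × 5 averaging kernel K (each entry 1/25). A plain Python sketch; border pixels are handled here by edge replication, an implementation choice the patent does not specify:

```python
def mean_blur5(img):
    """5x5 mean filter: each output pixel is the average of its 5x5
    neighbourhood, i.e. convolution with K = ones(5,5)/25. `img` is a
    2D list of integer grey values; borders replicate the nearest pixel."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            s = 0
            for dy in range(-2, 3):
                for dx in range(-2, 3):
                    yy = min(max(y + dy, 0), h - 1)
                    xx = min(max(x + dx, 0), w - 1)
                    s += img[yy][xx]
            out[y][x] = s // 25  # integer grey value
    return out
```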

Step (4.4), the number of pixels corresponding to each gray value is counted, and the gray histogram is made;

Step (4.5), for the ROI image obtained by the CCD camera, the value range in gray space is set to [0, l − 1], l ∈ (0, 255]:

The criterion value η(t) is calculated as follows; the criterion values of all gray values are traversed, and the gray value with the maximum criterion value is selected as the optimal gray threshold for classification:

η(t) = σ_B²(t) / σ_W²(t),

where σ_B²(t) = P0(t) × (μ0(t) − μ_T)² + P1(t) × (μ1(t) − μ_T)² is the between-class variance of the two classes of pixels above and at-or-below the gray classification threshold t, obtained by calculating their occurrence probabilities P0(t) and P1(t);

p_i is the probability of occurrence of a pixel of gray value i: the image has n pixels in total, n_i denotes the number of pixels of gray value i, and from the histogram p_i = n_i / n;

P0(t) = Σ_{i=0..t} p_i and P1(t) = 1 − P0(t) are the occurrence probabilities of the two classes of gray values divided by the gray threshold t; μ_T = Σ_{i=0..l−1} i × p_i is the probability-weighted sum of the pixels over the whole gray-value range;

σ_W²(t) = P0(t) × σ0²(t) + P1(t) × σ1²(t) is the within-class variance of the two classes of pixels above and at-or-below the gray threshold t, obtained by calculating their occurrence probabilities;

where μ0(t) = Σ_{i=0..t} i × p_i / P0(t) and μ1(t) = Σ_{i=t+1..l−1} i × p_i / P1(t) are the weighted means of the pixel-occurrence probabilities in the gray-value ranges at or below and above the threshold t, respectively;
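The threshold search of steps (4.4)-(4.5) is Otsu's method. A Python sketch that maximizes the between-class variance directly (equivalent to maximizing η(t), since the total variance σ_B² + σ_W² is fixed); names are ours:

```python
def otsu_threshold(hist):
    """Otsu's method: from a grey-level histogram, pick the threshold t
    maximising the between-class variance
    P0(t) * P1(t) * (mu0(t) - mu1(t))^2."""
    n = sum(hist)
    total_mean = sum(i * h for i, h in enumerate(hist)) / n
    best_t, best_var = 0, -1.0
    p0 = cum = 0.0
    for t in range(len(hist) - 1):
        p0 += hist[t] / n          # P0(t): fraction of pixels <= t
        cum += t * hist[t] / n     # probability-weighted sum below t
        p1 = 1.0 - p0
        if p0 == 0 or p1 == 0:
            continue
        mu0, mu1 = cum / p0, (total_mean - cum) / p1
        var = p0 * p1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_t, best_var = t, var
    return best_t
```

On a clearly bimodal histogram the returned threshold falls between the two modes.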

Step (4.6), based on the gray classification threshold t obtained in step (4.5), the two classes of pixels, with gray values above t and at or below t, are binarized by the following formula:

V(x, y) = 255 if V1(x, y) > t, otherwise V(x, y) = 0,

where V(x, y) is the pixel gray value after binarization;

Step (4.7), Canny edge extraction is applied with the cvCanny function module of the openCV library to the binary image obtained in step (4.6); the output is the Canny edge image. The pixel-gradient threshold for connecting weak edges is set to 50, and the pixel-gradient threshold at which strong edges are initially seeded is set to 100;

Step (4.8), in the binarized Canny edge map obtained in step (4.7), the first white edge point found scanning from left to right is taken as a boundary point, giving a boundary-point set;

Step (4.9), a Hough line fit is applied to all points of the boundary-point set obtained in step (4.8) using the cvHoughLines function module of the openCV library, giving the road edge. The radius resolution in polar coordinates is set to 3 pixels, the polar-angle resolution to π/180 (1°), and the minimum number of polar-space curves that must intersect for a line to be accepted to 50;
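The Hough fit of step (4.9) with the stated parameters (radius resolution 3 px, angle resolution 1°, vote threshold 50) can be sketched as a plain voting accumulator (a minimal stand-in for cvHoughLines, not the library call itself; bins neighbouring a strong line may also pass the threshold):

```python
import math

def hough_lines(points, rho_res=3.0, theta_res=math.pi / 180, votes=50):
    """Minimal Hough line voting: every edge point votes for all
    (rho, theta) bins of lines through it; bins reaching the vote
    threshold are returned as (rho, theta) pairs."""
    acc = {}
    n_theta = int(round(math.pi / theta_res))
    for x, y in points:
        for it in range(n_theta):
            theta = it * theta_res
            rho = x * math.cos(theta) + y * math.sin(theta)
            key = (int(round(rho / rho_res)), it)
            acc[key] = acc.get(key, 0) + 1
    return [(k[0] * rho_res, k[1] * theta_res)
            for k, v in acc.items() if v >= votes]
```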

Step (5), the relative rough localization of the robot with respect to the road edge is carried out according to the following steps:

Step (5.1), the forward view with the road edge detected in step (4) is converted into a top view of the road edge by an inverse perspective projection transformation, as follows:

Step (5.1.1), with the camera on the robot, an image is taken of a region that is trapezoidal in the forward view but rectangular in the top view;

Step (5.1.2), the four vertices of the trapezoid are selected in the forward view obtained in step (5.1.1), and their coordinates are recorded;

Step (5.1.3), the correspondence between rectangle size and pixels in the top view is set arbitrarily, giving the four vertices in the top view to which the four selected vertices map after the inverse perspective projection transformation; their coordinates are recorded;

Step (5.1.4), the inverse perspective projection transformation matrix H is obtained using the cvWrapPerspectiveQMatrix function module of openCV; the input is the two 4 × 2 matrices formed by the two sets of coordinates from steps (5.1.2) and (5.1.3), and the output is the inverse perspective projection transformation matrix H;

Step (5.1.5), the forward view and the inverse perspective projection transformation matrix are input to the cvWrapPerspective function module of openCV, giving the top view after the inverse perspective projection transformation;
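Steps (5.1.2)-(5.1.4) solve for a 3 × 3 homography from four point pairs. A self-contained Python sketch of the computation behind cvWrapPerspectiveQMatrix (the solver, point values, and the h33 = 1 normalization are our illustrative choices):

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for a small system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def perspective_matrix(src, dst):
    """3x3 homography H (h33 = 1) mapping 4 forward-view points to 4
    top-view points; each point pair gives two linear equations in the
    8 unknown entries."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = solve(A, b) + [1.0]
    return [h[0:3], h[3:6], h[6:9]]

def apply_h(H, x, y):
    """Map one forward-view point into the top view (divide by w)."""
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)
```

For example, a trapezoid whose vertices map onto the corners of a 640 × 480 rectangle reproduces those corners exactly when re-projected through H.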

Step (5.2), the heading-angle deviation and the lateral offset are calculated according to the following steps:

The heading-angle deviation is the angle θ between the robot's driving direction AC and the extension AE of the road edge line;

The lateral offset is the vertical distance BD from the robot's center B, on the driving direction AC, to the extension of the road edge line;

Step (5.2.1), the heading-angle deviation θ is found:

Any two points on the road edge line of the forward view are taken; multiplying each point's coordinates by the inverse perspective projection transformation matrix H gives the coordinates (x1, y1), (x2, y2) of the two corresponding points on the extension of the road edge line. The slope of the corresponding line AE is k = (y2 − y1)/(x2 − x1), and θ = 90° − arctan(k); if x1 = x2, θ = 0°;

Step (5.2.2), the lateral offset BD is found:

Step (5.2.2.1), it is set that the upper-left corner of the top view is the coordinate origin, coordinates are expressed in pixels, and each pixel corresponds to an actual length of 7.5 mm, as fixed when the top-view coordinates were set in step (5.1.3). Solving simultaneously the equation of the road edge line and that of the driving direction AC, the vertical line x = width/2 (width denotes the image width), gives the ordinate h of the point A on the driving direction AC;

Step (5.2.2.2), let the ordinate of the intersection point C of the driving direction AC with the lower edge of the top view be the image height height; the vertical distance from point C to the extension of the road edge line is CE = sin(θ) × 7.5 × (h − height);

Step (5.2.2.3), the lateral offset BD is calculated as:

BD = sin(θ) × 7.5 × (h − height − d/7.5)

where d is the blind-zone distance, in mm, between the robot's center B and the lower edge of the top view, i.e. the vertical distance BC ahead of the robot.
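The calculation of steps (5.2.1)-(5.2.2) can be sketched as follows (Python; we take the driving direction AC to be the vertical centre line x = width/2, an assumption consistent with step (5.2.2.1) but not stated explicitly, and all names are ours):

```python
import math

def heading_and_offset(p1, p2, width, height, mm_per_px=7.5, d_mm=0.0):
    """Heading-angle deviation theta (degrees) and lateral offset BD (mm)
    from two points p1, p2 of the edge line in the top view:
    theta = 90 - arctan(k);  BD = sin(theta) * 7.5 * (h - height - d/7.5),
    where h is the ordinate of the intersection A of the edge line with
    the driving direction x = width/2, and d is the blind-zone distance."""
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2:  # edge parallel to the driving direction
        return 0.0, 0.0
    k = (y2 - y1) / (x2 - x1)
    theta = 90.0 - math.degrees(math.atan(k))
    h = k * (width / 2.0 - x1) + y1        # ordinate of A on x = width/2
    bd = math.sin(math.radians(theta)) * mm_per_px * (h - height - d_mm / mm_per_px)
    return theta, bd
```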

The major advantages of the present invention are:

1. For continuous roads with different edge features, the present invention provides two road-edge detection methods, with strong adaptability to the road environment and good generality. One is suitable for cases where the color difference between road and non-road sides is large (e.g. a road flanked by green belts) and is weakly sensitive to illumination and shadow; the other is suitable for road conditions where the boundary is relatively unclear, and is very sensitive to shadow. The two methods are mainly suitable for semi-structured and unstructured roads.

2. Sometimes, because of the camera's field of view or mounting angle, or because the road is wide, the image cannot show both sides of the road completely. The road-edge detection method of the present invention detects a single side, reducing the requirements on the camera's field of view and mounting angle; the method also does not need a high-resolution camera, reducing the requirements on camera sharpness.

3. After the road edge is detected, the robot is roughly localized with respect to the road edge, and the localization result is given; this result can better assist control of the robot's advance.

The present invention realizes two road-edge detection methods based on monocular vision and, on their basis, the rough localization of the robot with respect to the road edge.

The first method is the color-based road-edge detection method. First the image is transformed into the HSV color space; using the color difference between road and off-road, with the H range as primary criterion and the S and V ranges as auxiliary criteria, the image is roughly divided into a binary gray map of road and non-road areas. Combined with the Canny edge map of the original image, road edge points are extracted according to the large gradient at the road edge and the different colors on its two sides. When enough road edge points are found, the next processing step is carried out; otherwise this frame is abandoned and the next frame is processed. A straight line is then fitted to the road edge points by the RANSAC method: two of the existing road edge points are drawn at random for line fitting; when the fitting result is good enough, the points with large interference are removed and the remaining edge points are randomly sampled and refitted. Fitting twice makes the result more accurate and removes the interference of some image noise well.

The other method is the threshold-segmentation-based road-edge detection method. First the original image is converted into a grayscale image; a gray threshold t dividing the image into road and non-road classes is selected by calculating the between-class variance of the two classes; traversing the values of t, the value maximizing the between-class variance is chosen as the critical value between road and non-road areas. The image is binarized with the threshold t, the edge points of the binary image are extracted as road edge points, and Hough line detection is applied to them, giving the road edge line.

After the road edge is detected, the robot is roughly localized relative to the road edge: the angle between the robot and the road direction and the vertical distance from the robot to the road edge are calculated. The obtained road-edge-line image undergoes an inverse perspective projection transformation, converting the image from the camera's normal view to a top view. After the transformation, image and actual distance have a linear correspondence, one pixel representing a length of α millimeters in practice. The slope of the line in the image gives the current angle θ between robot and road direction, and the vertical distance δ between robot and road edge is obtained from the image by triangle relations.

Advantages of the present invention: the proposed road-edge detection methods are suitable for semi-structured and unstructured roads, have good robustness to image instability and sensor color error, and can be used on miniature mobile robots. At the same time they meet the demands of real-time navigation well, with low processing delay and wide applicability to unstructured roads. The final result of the algorithm can assist robot localization and play a decisive role in the robot's control decisions. Because the algorithm is based on monocular vision, it shares the common weakness of vision algorithms: it is rather sensitive to illumination conditions and shadow, and the detection effect under such conditions is affected.

Description of the accompanying drawings

Figs. 1 and 2 are step explanation diagrams of the color-based road-edge detection algorithm, wherein:

Fig. 1(a) is the original image after ROI division, Fig. 1(b) the Canny edge result, Fig. 1(c) the green-area extraction result, and Fig. 1(d) the green-area map after erosion and dilation.

Fig. 2 is the RANSAC line-fitting result.

Figs. 3, 4, and 5 are step explanation diagrams of the threshold-segmentation-based road-edge detection algorithm, wherein:

Fig. 3(a) is the original image after ROI division, and Fig. 3(b) the image converted to gray space.

Fig. 4(a) is the binarization result using the gray threshold, and Fig. 4(b) the simplified, de-redundant result after Canny edge extraction.

Fig. 5 is the final line-fitting result.

Figs. 6, 7, 8, and 9 are step explanation diagrams of the robot rough-localization method, wherein:

Fig. 6 is a schematic diagram of the world and image coordinate systems.

Fig. 7 compares a real road before and after the inverse perspective projection transformation: (a) is the original image, (b) the image after the transformation.

Fig. 8 shows the images used to find the inverse perspective projection transformation matrix: (a) is the original image, with the four chosen points marked, and (b) the effect after the transformation.

Fig. 9 is a schematic diagram of the heading-angle-deviation and lateral-offset calculation.

Fig. 10 is the flow chart of the road-edge detection and rough localization method based on monocular vision.

Embodiment

The present invention designs a road-edge detection and rough localization method based on monocular vision; Fig. 10 shows the flow of road-edge detection and rough localization. Two road-edge detection methods are designed: one based on color, the other on threshold segmentation. On the basis of road-edge detection, the robot is roughly localized with respect to the road edge. The image processing in the implementation uses the open-source openCV library.

One: selecting the road-detection algorithm according to the actual environment

First, an algorithm is selected according to the real road environment: if the color difference between road and non-road sides is large (e.g. a road flanked by green belts), the color-based road-edge detection method is selected; otherwise the threshold-segmentation-based road-edge detection method is chosen.

Two: road-edge detection algorithms

Road-edge detection method based on color

The key of this method is the extraction of road edge points and the fitting of the road edge line. Because the colors on the two sides of the road edge generally differ greatly, and the image gradient at the road edge is large, this method extracts edge points by combining HSV color ranges with the Canny edge image; since the extracted edge points may contain some interference, the road edge line is fitted by the interference-resistant RANSAC method. Taking a road flanked by green belts as an example, the concrete steps are as follows:

1. Parameter initialization: the H, S, V color ranges are set, for example green: (20, 0, 0) to (70, 40, 0), khaki: (10, 0, 0) to (30, 0, 150), together with the ROI (region of interest) size (between one quarter and one half of the image, divided starting from the lower-left or lower-right corner);

2. The image is read in, the ROI is taken out, and the original image is transformed into HSV space as follows:

a) The color image is converted from the RGB color space to the HSV color space.

i. The H, S, and V components of each pixel are calculated as:

Brightness V = max(R, G, B)

Saturation S = (V − min(R, G, B)) / V if V ≠ 0, otherwise S = 0

Hue
H = 60 × (G − B)/(V − min(R, G, B)) if V = R,
H = 120 + 60 × (B − R)/(V − min(R, G, B)) if V = G,
H = 240 + 60 × (R − G)/(V − min(R, G, B)) if V = B

If H < 0, it is replaced by H + 360, where R, G, B are the brightness values of the pixel's red, green, and blue channels normalized to [0, 1].

ii. For convenient storage and calculation, the H, S, V components of each point are rescaled as:

V' = 255V, S' = 255S, H' = H/2;

Carrying out the above calculation for each pixel of an RGB color image converts it into an HSV color-space image.

3. Canny edge extraction gives the Canny map: the cvCanny function of the openCV library is used; the input is the original image, the gradient threshold for connecting weak edges is 50, and the pixel threshold at which strong edges are seeded is 150; the output is the Canny edge map. Fig. 1(a) shows the image after ROI division, and Fig. 1(b) the Canny edge map of the ROI;

4. The green or khaki region is extracted with the HSV ranges, as in Fig. 1(c);

5. The green or khaki region map is eroded: with the cvErode function of the openCV library, the input is the region map and the remaining parameters take the default rectangular kernel, needing no input; the output is the result image;

6. The green or khaki region map is dilated: with the cvDilate function of the openCV library, the input is the eroded image and the remaining parameters take their defaults, needing no input; the output is the result image, as in Fig. 1(d);
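The erosion and dilation of steps 5 and 6 (a morphological opening) can be sketched on a binary mask as follows (plain Python with a square structuring element; cvErode/cvDilate's default 3 × 3 rectangle corresponds to k = 1, and borders are handled here by edge replication, our choice):

```python
def erode(mask, k=1):
    """Binary erosion with a (2k+1) x (2k+1) square structuring element:
    a pixel stays 1 only if its whole neighbourhood is 1 (removes small
    speckles in the extracted colour region)."""
    h, w = len(mask), len(mask[0])
    return [[int(all(mask[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
                     for dy in range(-k, k + 1) for dx in range(-k, k + 1)))
             for x in range(w)] for y in range(h)]

def dilate(mask, k=1):
    """Binary dilation: a pixel becomes 1 if any neighbour is 1
    (restores the region's size after erosion)."""
    h, w = len(mask), len(mask[0])
    return [[int(any(mask[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
                     for dy in range(-k, k + 1) for dx in range(-k, k + 1)))
             for x in range(w)] for y in range(h)]
```

Eroding first and then dilating removes isolated noise pixels while leaving solid regions roughly the same size.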

7. The edge points for fitting are extracted: the Canny map is scanned from right to left and from bottom to top (for the left road edge); if the 5 pixels to the left of an edge point of the Canny map contain a green or khaki point and the 5 pixels to its right contain none, it is regarded as a real edge point. Each row has at most one true edge point; points with fewer than 5 pixels to their left or right are directly cast out;

8. If the number of true edge points is less than 30, fitting is impossible; if it is greater than 30, the RANSAC algorithm fits a line to the true edge points obtained above:

a) Let the true-edge-point sample set contain m points in total; two sample data are selected at random;

b) A line is fitted to these two points by least squares: with the cvFitLine function of the openCV library, the input is the point set of the two points, the error type is CV_DIST_L2, and the polar radius and angle accuracies are 0.01; the output is the line's parameters. The number K of points whose distance to the line is less than 4 pixels, i.e. the points in the fit, is then counted;

c) If K/m is greater than 0.3, the fitted model is accepted, and a line is fitted by least squares to all the points in the fit; otherwise a)-c) are re-executed;

d) If the number of repetitions exceeds the maximum allowed, 200, the fitting fails.

The final result is shown in Fig. 2.

Road-edge detection algorithm based on threshold segmentation

On some unstructured roads the road area has blurred edges while the color difference between road and non-road areas is large. The system therefore uses a maximum between-class variance method: the image captured by the camera is adaptively clustered in gray space and dynamically threshold-segmented, thereby identifying the drivable road area.

In the algorithm, the road and non-road areas are divided into two classes C0 and C1. By computing the between-class and within-class variances of the two classes, an optimal gray segmentation threshold t is obtained: t maximizes the between-class variance while minimizing the within-class variance. The criterion builds on the observation that, when the image is split into two classes, pixels within a class have similar colors while pixels across classes differ strongly.

The concrete implementation steps are as follows:

1. Set the region of interest ROI to the lower-right quarter of the image. Fig. 3(a) shows the image after the region of interest is divided;

2. Convert the image to a gray image by the following formula, where x, y are pixel coordinates with the origin at the upper-left corner, v0 is the gray value of pixel (x, y), and R, G, B are the red, green, and blue channel values of the pixel in the color image, with v0, R, G, B ∈ [0, 255]:

v0(x,y) = 0.212671×R(x,y) + 0.715160×G(x,y) + 0.072169×B(x,y)

3. Apply mean-filter blurring to the image to suppress stray color-change points; Fig. 3(b) shows the gray-space image. The formula is as follows, where x, y are pixel coordinates with the origin at the upper-left corner, V1 is the blurred value at pixel (x, y), K is the mean-blur kernel matrix, V is the 5 × 5 pixel-value matrix centered at (x, y), and * denotes convolution:

V1(x,y) = V*K
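Steps 2–3 can be illustrated with a small pure-Python sketch: the weights are those of the grayscale formula above, the 5 × 5 mean kernel K has all entries 1/25, and plain nested lists stand in for openCV images.

```python
# Sketch of steps 2-3: grayscale conversion with the weights from the formula
# above, and a 5x5 mean blur (every entry of the kernel K is 1/25).

def to_gray(rgb):
    """rgb: 2-D grid of (R, G, B) tuples -> 2-D grid of gray values."""
    return [[0.212671 * r + 0.715160 * g + 0.072169 * b for (r, g, b) in row]
            for row in rgb]

def mean_blur5(gray):
    """Convolve with the 5x5 mean kernel; border pixels keep their value."""
    h, w = len(gray), len(gray[0])
    out = [row[:] for row in gray]
    for y in range(2, h - 2):
        for x in range(2, w - 2):
            out[y][x] = sum(gray[y + dy][x + dx]
                            for dy in range(-2, 3)
                            for dx in range(-2, 3)) / 25.0
    return out
```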

4. Compute the gray histogram of the image, i.e. the number of pixels at each gray value;

5. The image obtained by the camera has gray values in [0, l-1] (l ∈ [0, 255]). Let ni be the number of pixels with gray value i; the total number of pixels is N = Σi ni. From the histogram, the probability that a pixel has gray value i is pi = ni/N.

From the above formula we obtain:

0 ≤ pi ≤ 1 and Σi pi = 1

Taking some gray value t as a threshold divides the gray values into two classes. Their occurrence probabilities are:

P0(t) = Σ_{i=0}^{t} pi,  P1(t) = Σ_{i=t+1}^{l-1} pi = 1 - P0(t)

From these, the means of the two classes and the overall gray-level expectation of the image are, respectively:

μ0(t) = Σ_{i=0}^{t} i·pi / P0(t),  μ1(t) = Σ_{i=t+1}^{l-1} i·pi / P1(t),  μT = Σ_{i=0}^{l-1} i·pi

The between-class variance of the two classes is then:

σB²(t) = P0(t)·(μ0(t) - μT)² + P1(t)·(μ1(t) - μT)²

The within-class variance is:

σW²(t) = P0(t)·σ0²(t) + P1(t)·σ1²(t)

The final judge value is computed as:

η(t) = σB²(t) / σW²(t)

Traverse the judge values of all gray values and select the gray value with the maximum judge value as the optimal classification threshold.
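The threshold search of steps 4–6 can be sketched as follows. This is a pure-Python illustration of the maximum between-class variance criterion (judge value = between-class variance over within-class variance), not the system's code; the data in the check is made up.

```python
# Sketch of steps 4-6: histogram, judge value eta(t) = sigma_B^2 / sigma_W^2,
# and binarization at the best threshold.

def best_threshold(pixels, levels=256):
    hist = [0] * levels
    for v in pixels:
        hist[v] += 1
    n = len(pixels)
    p = [h / n for h in hist]                      # p_i, summing to 1
    mu_T = sum(i * p[i] for i in range(levels))    # overall expectation
    best_t, best_eta = 0, -1.0
    for t in range(levels - 1):
        P0 = sum(p[: t + 1]); P1 = 1.0 - P0
        if P0 == 0 or P1 == 0:
            continue                               # one class is empty
        mu0 = sum(i * p[i] for i in range(t + 1)) / P0
        mu1 = sum(i * p[i] for i in range(t + 1, levels)) / P1
        var0 = sum((i - mu0) ** 2 * p[i] for i in range(t + 1)) / P0
        var1 = sum((i - mu1) ** 2 * p[i] for i in range(t + 1, levels)) / P1
        sigma_B = P0 * (mu0 - mu_T) ** 2 + P1 * (mu1 - mu_T) ** 2
        sigma_W = P0 * var0 + P1 * var1
        if sigma_W > 0 and sigma_B / sigma_W > best_eta:
            best_eta = sigma_B / sigma_W
            best_t = t
    return best_t

def binarize(pixels, t):
    """Black-and-white binarization at threshold t."""
    return [255 if v > t else 0 for v in pixels]
```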

6. Using the gray classification threshold obtained in the previous step, apply simple black-and-white binarization to the two pixel classes above and below the threshold. The result is shown in Fig. 4(a). The formula is as follows, where x, y are pixel coordinates with the origin at the upper-left corner, v1 is the gray value of the pixel before binarization, v is its gray value after binarization, and t is the gray classification threshold from the previous step:

v(x,y) = 255 if v1(x,y) > t, otherwise v(x,y) = 0

7. Apply canny edge extraction to this binary image: use the cvCanny function from the openCV library, with the original image as input, a gradient threshold of 50 for edge linking, and a pixel threshold of 150 for strong edges at the initial inner segmentation; the output is the canny edge map;

8. In the canny edge map from the previous step, take the first white point in each row from left to right (for the road's right edge), obtaining a boundary point set; this removes redundant information and simplifies the result. Fig. 4(b) shows the result after edge extraction and simplification;

9. Fit a line to the boundary point set obtained in the previous step by the Hough transform: use the cvHoughLines function from the openCV library, with the original image as input, a polar radius resolution of 3 pixels, the corresponding polar angle resolution, and a minimum of 50 curve intersections required for a line; the output is the Hough fit, which gives the road edge, as in Fig. 5.
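The Hough fit of step 9 can be illustrated with a toy accumulator: each boundary point votes for (ρ, θ) cells satisfying ρ = x·cosθ + y·sinθ, and the cell with the most votes is the fitted line. The 1-pixel and 1-degree resolutions below are illustrative, not the system's settings.

```python
import math

# A toy stand-in for cvHoughLines: accumulate votes over a (rho, theta) grid.

def hough_line(points, rho_max=1000):
    acc = {}
    for (x, y) in points:
        for deg in range(180):
            th = math.radians(deg)
            rho = round(x * math.cos(th) + y * math.sin(th))
            if -rho_max <= rho <= rho_max:
                acc[(rho, deg)] = acc.get((rho, deg), 0) + 1
    # cell with the most votes: (rho in pixels, theta in degrees)
    return max(acc, key=acc.get)
```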

Relative coarse positioning of the robot and the road edge

As the robot advances, it must continuously use the road edge detected by the camera to compute the lateral displacement and heading-angle deviation between the robot and the road edge. The lateral displacement is the perpendicular distance from the robot center to the road edge. The heading-angle deviation is the angle between the robot's current driving direction and the planned route it intends to travel. The concrete calculation steps are as follows:

1. Inverse perspective projection transformation

To simplify computing the lateral-displacement and heading-angle deviations of the robot, the detected road-edge image is subjected to an inverse perspective projection transformation, which converts the forward view into a top view, as in Fig. 7.

In Euclidean space, define two coordinate systems W and I, the world coordinate system and the image coordinate system:

W = {(x, y, z)} ∈ E³

I = {(u, v)} ∈ E²

where E is Euclidean space; x, y, z are coordinates in the world coordinate system W, with z = 0 the ground plane; u, v are coordinates in the image coordinate system, whose origin is the upper-left corner of the image.

The essence of the inverse perspective projection transformation is to map the road image under the image coordinate system I, i.e. the image obtained by the camera, onto the z = 0 plane under the world coordinate system W; the relation between the two is shown in Fig. 6.

Since z = 0 and the transformed image involves no rotation, the image after the inverse perspective projection transformation is obtained simply by multiplying the original image by a 3 × 3 transformation matrix:

f(x,y)=(x,y,1)×H

where (x, y) is any coordinate in the original image, H is the 3 × 3 inverse perspective projection transformation matrix, and f(x, y) is the transformed coordinate.

Concrete steps are as follows:

(1) Use the camera on the robot to capture a forward view containing a pattern that is trapezoidal in the image and rectangular in the top view;

(2) Select the four vertices of the trapezoid in the picture and record their coordinates;

(3) According to the rectangle's size in the top view and its correspondence with pixels, set the x, y coordinates of the rectangle's four vertices in the new coordinate system;

(4) Use the cvWarpPerspectiveQMatrix function in openCV to obtain the inverse perspective projection matrix H. Its input is two 4 × 2 matrices formed from the two sets of coordinates above; its output is the inverse perspective projection transformation matrix H.

(5) Use the cvWarpPerspective function in openCV; its input is the original image and the inverse perspective projection transformation matrix H, and its output is the transformed image.

Fig. 8 shows the image used to compute the inverse perspective projection matrix and the image after the inverse perspective transformation.

Concrete example: as in Fig. 8, the image resolution is 768×576. Four points are taken, with coordinates (92,574), (230,437), (722,566), (572,432); their coordinates in the new coordinate system are set to (304,575), (304,375), (464,575), (464,375). The inverse perspective projection transformation matrix is obtained with the cvWarpPerspectiveQMatrix function in openCV, whose input is two 4 × 2 matrices, each formed from the x and y coordinates of the 4 points before and after the transformation. The resulting inverse perspective projection transformation matrix is:

-0.291892      -1.36039       487.334
-0.0454482     -2.89973       1062.64
-0.000060317   -0.00356855    1

After the inverse perspective projection transformation, the four original points become, approximately, the four vertices of a square.
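This correspondence can be checked numerically. Note that applying a perspective homography in the standard homogeneous form divides by the third coordinate, i.e. (u, v) = (p1/p3, p2/p3) for p = H·(x, y, 1)ᵀ, which the simplified expression f(x, y) = (x, y, 1)×H leaves implicit. With the matrix and point coordinates from the example above:

```python
# The inverse perspective projection matrix from the example in the text.
H = [[-0.291892, -1.36039, 487.334],
     [-0.0454482, -2.89973, 1062.64],
     [-0.000060317, -0.00356855, 1.0]]

def apply_h(H, x, y):
    """Map an image point through the homography, with homogeneous division."""
    u = H[0][0] * x + H[0][1] * y + H[0][2]
    v = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return u / w, v / w
```

Applied to the four source points of the example, this reproduces the four target coordinates to within a pixel.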

Since the actual width of one square floor tile in the picture is 600 mm and the coordinates of each vertex in the image are known, it can be computed that, after the inverse perspective projection transformation, one pixel corresponds to 7.5 mm in reality; this is the key reference for subsequently calculating the robot's driving state. Because the coordinates of the four points in the transformed image are specified manually, the positions of these four points can be controlled to adjust the actual distance corresponding to one pixel in the image.

2. Computing the heading-angle deviation and the lateral displacement

In Fig. 9, suppose the robot is located at B, the midpoint of AC. The center solid line (line ABC) is the robot's driving direction; the lane-edge direction is the lane edge line after the inverse perspective projection transformation; the dashed line is the extension of the lane edge line; the angle between it and the center solid line (line ABC) is the robot's heading-angle deviation θ; and the length of segment BD is the lateral displacement.

(1) Find the heading-angle deviation θ. The detection algorithm above gives the coordinates of every point on the road-edge line in the original image; multiplying by the inverse perspective projection transformation matrix H gives the coordinates of any point on line DE. Then, using the coordinates of two points (x1, y1), (x2, y2) with x1 ≠ x2, the slope of the line is k = (y2 - y1)/(x2 - x1), giving θ = 90° - arctan(k); if x1 = x2, θ = 0°.
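The computation of θ can be written directly; the sample points in the check below are made up, and the function name is an illustrative assumption.

```python
import math

# Step (1) as code: heading-angle deviation from two points on the
# inverse-perspective road-edge line. theta = 90 deg - arctan(k) for a
# non-vertical segment, 0 deg for a vertical one.

def heading_deviation(x1, y1, x2, y2):
    if x1 == x2:
        return 0.0                         # edge parallel to travel direction
    k = (y2 - y1) / (x2 - x1)              # slope in image coordinates
    return 90.0 - math.degrees(math.atan(k))
```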

(2) Find the lateral displacement (BD).

I. The coordinates of point A are obtained by solving the equations of line ABC and line DE simultaneously. Taking the upper-left corner of the image as the origin, with coordinates in pixels, the ordinate of A is h (width denotes the image width).

II. The inverse perspective projection transformation carries a selectable scale ∝ (mm), the actual length corresponding to one pixel (for example the 7.5 mm above). The physical length of segment CE is then sin(θ) * ∝ * (h - height), where height is the pixel height of the image, i.e. the ordinate of C.

III. Let d (mm) be the actual distance (BC) from the robot center to the bottom of the image, i.e. the blind-zone distance left after the camera is mounted on the robot. Converted to pixels this distance is d/∝, so the computation of the lateral displacement (BD) becomes sin(θ) * ∝ * (h - height - d/∝).
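Steps II–III amount to simple arithmetic. In the sketch below, the 7.5 mm/pixel scale and the 576-pixel image height come from the example above, while h, θ, and d are made-up inputs; the magnitude is taken because the patent's expression can come out signed depending on the coordinate convention.

```python
import math

# Steps II-III as code: CE and BD from the scale factor (mm per pixel).
# h, theta_deg, and d_mm are illustrative inputs, not values from the text.

def lateral_displacement(theta_deg, h, height, scale_mm, d_mm):
    s = math.sin(math.radians(theta_deg))
    ce = s * scale_mm * (h - height)                  # physical length of CE
    bd = s * scale_mm * (h - height - d_mm / scale_mm)
    return abs(ce), abs(bd)                           # magnitudes, in mm
```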

Claims (1)

1. A road-edge detection and rough localization method based on monocular vision, characterized in that it is a semiautomatic road-edge detection and rough localization method based on a mobile robot's monocular vision, realized in a monocular-vision road-edge detection and coarse positioning system formed by serially connecting a computer controlled by operating personnel with a single-lens CCD camera mounted on the mobile robot, the method being realized by the following steps in sequence;
Step (1): while the robot advances, the road image photographed by the single-lens CCD camera, which reflects the color difference between the two sides of the road/non-road boundary, is input to the computer and displayed;
Step (2): according to the color difference between the two sides of the road/non-road boundary, the operating personnel selectively start one of two software modules installed in the computer;
for color road images whose colors in RGB color space differ significantly on the two sides of the road/non-road boundary, the color-based road-edge detection module is started;
for color road images in RGB color space whose colors are blurred across the road/non-road boundary, with no distinct border, the threshold-segmentation-based road-edge detection module is started;
Step (3): the color-based road-edge detection module performs road-edge detection by the following steps in sequence;
Step (3.1): parameter initialization; read in the color road image in RGB color space and set the region of interest ROI to 1/4~1/2 of the size of the road image, divided starting from the lower-left corner of the color road image;
Step (3.2): take the ROI image out of the color road image and convert it to HSV color space, obtaining a new road image;
brightness V' = max(R, G, B),
saturation S' = (V' - min(R, G, B))/V' if V' ≠ 0, otherwise S' = 0,
hue H' = 60(G - B)/(V' - min(R, G, B)) if V' = R; H' = 120 + 60(B - R)/(V' - min(R, G, B)) if V' = G; H' = 240 + 60(R - G)/(V' - min(R, G, B)) if V' = B;
if H' < 0, use H' + 360 instead; here R, G, B are the red, green, and blue values of the original pixel, and the stored channels are V = 255V', S = 255S', H = H'/2;
Step (3.3): perform canny edge extraction with the cvCanny function module in the openCV library in the following manner, obtaining the canny map:
on the result image of step (3.2), set a gradient threshold of 50 for edge linking and a gradient threshold of 150 for strong edges at the initial inner segmentation; the input is the HSV color-space image of the region of interest ROI together with the two gradient thresholds above; the output is the canny edge map;
Step (3.4): set the HSV color-range intervals and extract the green or khaki regions, the green interval being (20, 0, 0)~(70, 40, 0) and the khaki interval being (10, 0, 0)~(30, 0, 150);
Step (3.5): erode the green or khaki regions obtained in step (3.4) with the cvErode function module in the openCV library, outputting a result image;
Step (3.6): dilate the result image obtained in step (3.5) with the cvDilate function module in the openCV library, outputting the green or khaki region result image after the erosion and dilation operations;
Step (3.7): extract the edge points used for fitting from the result image obtained in step (3.6): for the road's left edge, scan the result image of step (3.6) from right to left; if an edge point in the canny edge map has a green point within the 5 pixels on its left and no green point within the corresponding 5 pixels on its right, it is taken as a true edge point; an edge point with fewer than 5 pixels on either side is directly discarded; this yields a true-edge-point sample set; the same steps apply to the khaki region;
Step (3.8): judge the true edge points:
if the number m of true edge points in the sample set satisfies m ≥ 30, carry out the following steps;
Step (3.8.1): randomly select two true edge points from the sample set as two sample data;
Step (3.8.2): fit a line by least squares through the cvFitLine function module of the openCV library; the input is the two sample data obtained in step (3.8.1), the error type is CV_DIST_L2, and the polar radius and angle tolerances are 0.01; the output is the slope and intercept of the fitted line; afterwards count the number k of edge points whose distance to the line is less than 4 pixels, taking these as the fitting points of the subsequent step;
Step (3.8.3): if k/m is greater than 0.3, accept the fitted model, then refit a line through the k points by least squares to obtain the final result; otherwise return to step (3.8.1); if the number of repetitions exceeds the set maximum of 200 cycles, the fit fails; if the number of true edge points in the sample set is less than 30, the fit also fails;
Step (4): the threshold-segmentation-based road-edge detection module performs road-edge detection at the road's right edge by the following steps in turn;
Step (4.1): set the region of interest ROI to the lower-right 1/4 of the image;
Step (4.2): take out the ROI image and convert it to a gray image by the following formula;
v0(x, y) = 0.212671 × R(x, y) + 0.715160 × G(x, y) + 0.072169 × B(x, y),
where x, y are pixel coordinates with the origin at the upper-left corner of the image; v0 is the gray value of pixel (x, y); R, G, and B are the red, green, and blue channel values of the pixel in the color image; and v0, R, G, B lie in [0, 255];
Step (4.3): apply mean blurring to each pixel (x, y) by the following formula, convolving the 5 × 5 pixel-value matrix V centered at (x, y) with the kernel matrix K to obtain the new pixel value V1;
V1(x, y) = V * K, where * denotes convolution;
Step (4.4): count the number of pixels at each gray value and build the histogram;
Step (4.5): the region-of-interest image obtained by the CCD camera has gray values in [0, l-1], l ∈ (0, 255];
compute the judge value η(t) = σB²(t)/σW²(t) as follows, traverse the judge values of all gray values, and select the gray value with the maximum judge value as the optimal classification threshold;
here σB²(t) = P0(t)(μ0(t) - μT)² + P1(t)(μ1(t) - μT)² is the between-class variance of the two pixel classes above and at or below the gray classification threshold t, obtained by computing their occurrence probabilities P0(t), P1(t);
pi is the occurrence probability of pixels with gray value i; the total number of image pixels is N = Σi ni, where ni is the number of pixels with gray value i, and from the histogram the probability of a pixel having gray value i is pi = ni/N;
P0(t) = Σ_{i≤t} pi and P1(t) = 1 - P0(t) are the occurrence probabilities of the two classes divided by the gray threshold t; μT = Σi i·pi is the probability-weighted sum over the whole gray-value range;
σW²(t) = P0(t)σ0²(t) + P1(t)σ1²(t) is the within-class variance of the two pixel classes above and at or below the threshold t, obtained by computing their occurrence probabilities;
where μ0(t) and μ1(t) are the weighted means of the pixel-occurrence probabilities in the gray-value ranges at or below and above the threshold t, respectively;
Step (4.6): based on the gray classification threshold t obtained in step (4.5), binarize the gray values of the two pixel classes above and below the threshold by the following formula;
v(x, y) = 255 if v1(x, y) > t, otherwise v(x, y) = 0, where v(x, y) is the pixel gray value after binarization and v1(x, y) the gray value before;
Step (4.7): apply canny edge extraction to the binary image obtained in step (4.6) with the cvCanny function module in the openCV library, outputting the canny edge image; the pixel gradient threshold for edge linking is set to 50, and the pixel gradient threshold for strong edges at the initial cut is set to 100;
Step (4.8): for the binarized canny edge map obtained in step (4.7), take the first white edge point from left to right in each row as a boundary point, obtaining a boundary point set;
Step (4.9): fit a line by the Hough transform to all boundary points of the set obtained in step (4.8) with the cvHoughLines function module in the openCV library, obtaining the road edge; the polar radius resolution is set to 3 pixels, with the corresponding polar angle resolution, and the minimum number of curves on the polar plane that must intersect for a line is 50;
Step (5): carry out the relative coarse positioning of the robot and the road edge according to the following steps;
Step (5.1): obtain the top view of the road edge by applying an inverse perspective projection transformation to the forward view in which step (4) detected the road edge, as follows:
Step (5.1.1): use the camera on the robot to capture a forward view containing a trapezoidal pattern that is rectangular in the top view;
Step (5.1.2): select the four vertices of the trapezoid in the forward view obtained in step (5.1.1) and record their coordinates;
Step (5.1.3): set arbitrarily the correspondence between rectangle size and pixels in the top view, thereby obtaining the four top-view vertices onto which the four vertices map after the inverse perspective projection transformation, and record their coordinates;
Step (5.1.4): use the cvWarpPerspectiveQMatrix function module in openCV to obtain the inverse perspective projection transformation matrix H; the input is two 4 × 2 matrices formed from the two sets of coordinates of steps (5.1.2) and (5.1.3), and the output is the inverse perspective projection transformation matrix H;
Step (5.1.5): input the forward view and the inverse perspective projection transformation matrix into the cvWarpPerspective function module in openCV to obtain the top view after the inverse perspective projection transformation;
Step (5.2): compute the heading-angle deviation and the lateral displacement according to the following steps:
the heading-angle deviation is the angle θ between the robot's driving direction AC and the extension AE of the road-edge line,
and the lateral displacement is the perpendicular distance BD from the robot center B on the driving direction AC to the extension of the road edge;
Step (5.2.1): find the heading-angle deviation θ;
take any two points on the road-edge line of the forward view and multiply the coordinates of each by the inverse perspective projection transformation matrix H to obtain the coordinates (x1, y1), (x2, y2) of the two corresponding points on the extension of the road-edge line; the slope of the corresponding line AE is k = (y2 - y1)/(x2 - x1), and θ = 90° - arctan(k); if x1 = x2, θ = 0°;
Step (5.2.2): find the lateral displacement BD;
Step (5.2.2.1): set the upper-left corner of the top view as the coordinate origin, with coordinates in pixels and an actual length of 7.5 mm per pixel, as fixed when the top-view coordinates were set in step (5.1.3); solving the following simultaneous equations gives h, the ordinate of point A on the robot's driving direction AC (width denotes the image width);
Step (5.2.2.2): let the ordinate of the intersection C of the robot's driving direction AC with the bottom of the top view be the image height, height; the perpendicular distance from C to the extension of the road-edge line is CE = sin(θ) * 7.5 * (h - height);
Step (5.2.2.3): compute the lateral displacement BD as follows:
BD = sin(θ) * 7.5 * (h - height - d/7.5)
where d is the perpendicular distance BC from the robot center B to the lower edge of the top view, i.e. the blind-zone distance ahead of the robot, in mm.
CN201210144082.8A 2012-05-10 2012-05-10 Method based on monocular vision for detecting and roughly positioning edge of road CN102682292B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210144082.8A CN102682292B (en) 2012-05-10 2012-05-10 Method based on monocular vision for detecting and roughly positioning edge of road


Publications (2)

Publication Number Publication Date
CN102682292A CN102682292A (en) 2012-09-19
CN102682292B true CN102682292B (en) 2014-01-29


Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107622502A (en) * 2017-07-28 2018-01-23 南京航空航天大学 The path extraction of robot vision leading system and recognition methods under the conditions of complex illumination






Legal Events

Date Code Title Description
PB01 Publication
C06 Publication
SE01 Entry into force of request for substantive examination
C10 Entry into substantive examination
GR01 Patent grant
C14 Grant of patent or utility model
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20140129

Termination date: 20180510
