CN102682292B — Method based on monocular vision for detecting and roughly positioning the edge of a road (Google Patents)
Publication number: CN102682292B (application CN201210144082.8A)
Authority: CN (China)
Abstract
The invention discloses a method based on monocular vision for detecting and roughly positioning the edge of a road, and relates to the fields of machine vision and intelligent control. For continuous roads with differing edge characteristics, two road-edge detection methods are provided; they are suitable for semi-structured and unstructured roads and can be applied to vision navigation and intelligent control of a robot. The invention provides a color-based road-edge detection method and a threshold-segmentation-based road-edge detection method. On the basis of the detected road edge, the image undergoes an inverse perspective projection transformation, so that the forward view is transformed into a top view; from the linear correspondence between image pixels and actual distance, the perpendicular distance from the robot's current position to the road edge and the robot's heading angle can be calculated. The method is easy to implement, strongly resistant to interference, highly real-time, and suitable for semi-structured and unstructured roads.
Description
Technical field
The invention relates to computer vision, pattern recognition, and mobile-robot navigation technologies and methods.
Technical background
An intelligent mobile robot is a complex system comprising multiple modules such as environment sensing, control and decision-making, and path planning, and applies computer technology, information technology, robotics, and microelectronics. It generally fuses multiple sensor technologies, including GPS, laser radar, cameras, ultrasound, and gyroscopes; the perceived data are processed and fused to obtain the robot's localization information and information about obstacles ahead, for use in path planning and decision-making. Mobile robots are now widely used in fields such as the military, medicine, agriculture, and industry. Particularly in extreme or unsafe environments such as nuclear contamination, high pressure, the deep sea, or even outer space, mobile robots can effectively reduce the risk of manual work and increase efficiency, promoting social and technological progress.
Mobile robots can be classified in several ways; by control type they divide into remote-controlled robots and autonomous robots.
On November 26, 2011, the United States successfully launched the new-generation "Curiosity" Mars rover, a nuclear-powered remote-controlled mobile robot embodying the latest science and technology. It is the fourth Mars rover after "Spirit", "Opportunity", and "Phoenix"; it has strong mobility, carries a large number of environmental-analysis and monitoring instruments, and can transmit many kinds of analysis data back to Earth via satellite.
Autonomous mobile robots represent the development trend. They integrate computer vision, pattern recognition, fuzzy control, expert systems, and other artificial-intelligence technologies, developing from autonomous navigation in known structured environments, to autonomous navigation in semi-structured environments, and on to autonomous navigation and exploration in unknown unstructured environments.
Among these, the intelligent vehicle is currently one of the most closely watched applications of mobile-robot technology. The intelligent vehicle THMR-V developed by Tsinghua University can travel on ordinary roads and highways, with a design speed of 80 km/h on highways and 20 km/h on ordinary roads. The vehicle is equipped with a color camera and laser range finder, plus navigation and environment-monitoring systems such as differential GPS, a magnetic compass, and an optical encoder, and can drive and avoid obstacles in unstructured environments such as a campus. In 2010 Google announced the intelligent vehicle it had developed, based on a Toyota Prius. Fitted with video cameras, laser range finders, radar sensors, and a GPS satellite positioning system, and navigating with Google's own digital maps of traffic conditions, it has driven more than 160,000 miles from sensor information (including sections occasionally driven under human control). Intelligent-vehicle perception of the road environment relies on fusing information from multiple sensors and places high demands on sensor accuracy.
For mobile robots, the most frequently used sensor is the vision sensor, i.e., an installed color camera. Computer vision and pattern-recognition technologies play a vital role in mobile-robot navigation. Visual recognition algorithms for structured roads generally detect lane lines, traffic signs, signal lights, and the road area; visual recognition algorithms for unstructured roads detect the drivable region and obstacles. At present, detection algorithms for structured roads with lane lines are comparatively mature, but for country roads and for urban roads without lane lines there is still a lack of algorithms with strong generality.
Most current visual recognition algorithms demand much of the sensors, have high time complexity, require many preconditions for recognizing unstructured environments, and cannot provide accurate navigation information. In view of this situation, the present invention proposes a road-edge detection and rough localization method based on monocular vision.
Summary of the invention
Addressing the problems that mobile-robot vision algorithms demand much of the sensors, have weak real-time performance, and generalize poorly to unstructured roads, the present invention proposes a road-edge detection and rough localization method based on monocular vision.
The invention is characterized in that a semi-automatic road-edge detection and rough localization method based on a mobile robot's monocular vision is realized in a road-edge detection and coarse-positioning system formed by serially connecting a computer controlled by an operator with a single-lens CCD video camera mounted on the mobile robot; the method is realized successively according to the following steps:
Step (1), the road-condition image photographed by the single-lens CCD video camera while the robot advances, which reflects the color difference between the two sides of the road/non-road boundary, is input into the computer and displayed;
Step (2), according to the color difference between the two sides of the road/non-road boundary, the operator selectively starts one of two different software modules located in the computer:
for color road-condition images in which the RGB colors on the two sides of the road/non-road boundary differ significantly, the color-based road-edge detection module is started;
for color road-condition images in which the colors on the two sides of the boundary are fuzzy and the RGB color space shows no obvious border, the threshold-segmentation-based road-edge detection module is started;
Step (3), the color-based road-edge detection module carries out road-edge detection in the following steps:
Step (3.1), parameter initialization: read in the RGB color road-condition image and set the region of interest (ROI) to 1/4 to 1/2 of the road-condition image size, dividing from the lower-left corner of the image;
Step (3.2), take the ROI image out of the color road-condition image and transform it from RGB into the HSV color space, obtaining a new road-condition image:
brightness V' = max(R, G, B);
saturation S' = (V' − min(R, G, B)) / V' if V' ≠ 0, otherwise S' = 0;
hue H' = 60(G − B)/(V' − min(R, G, B)) if V' = R; H' = 120 + 60(B − R)/(V' − min(R, G, B)) if V' = G; H' = 240 + 60(R − G)/(V' − min(R, G, B)) if V' = B;
if H' < 0, replace it with H' + 360; here R, G, B are the brightness values of the red, green, and blue components of the original pixel (normalized to [0, 1]). For 8-bit storage let V = 255·V', S = 255·S', H = H'/2.
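As a check on the conversion in step (3.2), a minimal per-pixel sketch in pure Python (no OpenCV; the function name is illustrative):

```python
def rgb_to_hsv_u8(r, g, b):
    """Convert one RGB pixel (components 0-255) to 8-bit HSV as in
    step (3.2): V' = max, S' = (V' - min)/V', H' per dominant channel,
    then store V = 255*V', S = 255*S', H = H'/2 so H fits in 0..180."""
    rf, gf, bf = r / 255.0, g / 255.0, b / 255.0
    v = max(rf, gf, bf)
    mn = min(rf, gf, bf)
    s = 0.0 if v == 0 else (v - mn) / v
    if v == mn:                       # gray pixel: hue undefined, use 0
        h = 0.0
    elif v == rf:
        h = 60.0 * (gf - bf) / (v - mn)
    elif v == gf:
        h = 120.0 + 60.0 * (bf - rf) / (v - mn)
    else:
        h = 240.0 + 60.0 * (rf - gf) / (v - mn)
    if h < 0.0:                       # the H' < 0 wrap-around rule
        h += 360.0
    return int(round(h / 2.0)), int(round(255.0 * s)), int(round(255.0 * v))
```

With this convention pure red maps to H = 0, pure green to H = 60, pure blue to H = 120, matching the 0..180 hue range used by the color intervals in step (3.4).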
Step (3.3), perform Canny edge extraction with the cvCanny function module of the openCV library to obtain a Canny edge map: the gradient threshold for edges that merely connect edge segments is set to 50, and the gradient threshold for strong edges that start a segment on the inner side is set to 150; the inputs are the HSV color-space image of the ROI from step (3.2) and the two gradient thresholds; the output is the Canny edge map;
Step (3.4), set HSV color-range intervals and extract the green or khaki regions; the green interval is (20, 0, 0) to (70, 40, 0), and the khaki interval is (10, 0, 0) to (30, 0, 150);
Step (3.5), apply an erosion operation to the green or khaki region obtained in step (3.4) using the cvErode function module of the openCV library, outputting a result image;
Step (3.6), apply a dilation operation to the result image of step (3.5) using the cvDilate function module of the openCV library, outputting the green or khaki region after the erosion-dilation processing;
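Steps (3.5)–(3.6) use morphological opening to remove speckle noise from the color mask. A minimal sketch of binary erosion with a 3 × 3 all-ones structuring element (dilation is the dual, replacing "all" with "any"; the function name and the zeroed border are illustrative — cvErode/cvDilate do the real work):

```python
def erode3(img):
    """Binary erosion with a 3x3 all-ones structuring element: a pixel
    survives only if its whole 3x3 neighbourhood is set (borders left 0)."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = int(all(img[y + dy][x + dx]
                                for dy in (-1, 0, 1)
                                for dx in (-1, 0, 1)))
    return out
```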
Step (3.7), extract the edge points used for fitting from the result image of step (3.6). For the left road edge, scan the result image of step (3.6) from right to left: if an edge point in the Canny edge map has a green point within the 5 pixels to its left and correspondingly no green point within the 5 pixels to its right, it is taken as a true edge point; edge points with fewer than 5 pixels to the left or right are discarded directly. This yields a sample set of true edge points. The khaki region is treated by the same steps;
Step (3.8), judge the true edge points:
if the number m of true edge points in the sample set satisfies m ≥ 30, carry out the following steps;
Step (3.8.1), randomly select two true edge points from the sample set as two sample data;
Step (3.8.2), fit a straight line through them by least squares with the cvFitLine function module of the openCV library; the inputs are the two sample points from step (3.8.1), the error type is CV_DIST_L2, and the polar radius and angle tolerances are 0.01; the output is the slope and intercept of the fitted line. Then count the number k of edge points whose distance to this line is less than 4 pixels; these are the points used for fitting in the subsequent step;
Step (3.8.3), if k/m is greater than 0.3, accept the fitted model, then refit a straight line through the k points by least squares to obtain the final result; otherwise return to step (3.8.1). When the number of repetitions exceeds the set maximum of 200 cycles, the fitting fails; it also fails when the number of true edge points in the sample set is less than 30;
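Step (3.8) can be sketched in pure Python as follows (cvFitLine's least-squares role is played by a closed-form fit; thresholds follow the text, the helper name is illustrative):

```python
import random

def ransac_line(points, max_iter=200, dist_thresh=4.0, accept_ratio=0.3):
    """RANSAC line fit of step (3.8): sample 2 edge points, count points
    within dist_thresh pixels of the line, accept when inliers/m > 0.3,
    then refit by least squares on the inliers.  Returns (slope,
    intercept) or None on failure; vertical candidates are skipped."""
    m = len(points)
    if m < 30:                        # too few true edge points
        return None
    for _ in range(max_iter):
        (x1, y1), (x2, y2) = random.sample(points, 2)
        if x1 == x2:
            continue
        a = (y2 - y1) / (x2 - x1)     # candidate slope
        b = y1 - a * x1               # candidate intercept
        denom = (a * a + 1.0) ** 0.5
        inliers = [(x, y) for x, y in points
                   if abs(a * x - y + b) / denom < dist_thresh]
        if len(inliers) / m > accept_ratio:
            # final least-squares refit on the inliers
            n = len(inliers)
            sx = sum(x for x, _ in inliers)
            sy = sum(y for _, y in inliers)
            sxx = sum(x * x for x, _ in inliers)
            sxy = sum(x * y for x, y in inliers)
            a2 = (n * sxy - sx * sy) / (n * sxx - sx * sx)
            return a2, (sy - a2 * sx) / n
    return None
```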
Step (4), the threshold-segmentation-based road-edge detection module performs road-edge detection at the right road edge by the following steps in turn:
Step (4.1), set the region of interest (ROI) to the lower-right quarter of the image;
Step (4.2), take out the ROI image and convert it to a grayscale image by the following formula:
V0(x, y) = 0.212671 × R(x, y) + 0.715160 × G(x, y) + 0.072169 × B(x, y),
where x, y are the pixel coordinates with the upper-left corner of the image as the origin; V0(x, y) is the grayscale value of pixel (x, y); R, G, and B are the red, green, and blue channel values of that pixel in the color image; and V0, R, G, B all lie in [0, 255];
Step (4.3), apply mean blurring to each pixel (x, y) by the following formula: convolve the 5 × 5 pixel-value matrix V centered on (x, y) with the kernel matrix K to obtain the new pixel value V1:
V1(x, y) = V * K, where * denotes convolution;
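Steps (4.2)–(4.3) in a minimal sketch (the luma weights are those given above; a uniform 5 × 5 kernel is assumed for the mean blur, border pixels are skipped for brevity, and the function names are illustrative):

```python
def to_gray(r, g, b):
    """Step (4.2): luma-weighted grayscale; the weights sum to 1, so
    the result stays in [0, 255]."""
    return 0.212671 * r + 0.715160 * g + 0.072169 * b

def mean_blur5(img, x, y):
    """Step (4.3): convolve the 5x5 neighbourhood of (x, y) with a
    uniform kernel (every entry 1/25), i.e. a simple mean filter."""
    total = 0.0
    for dy in range(-2, 3):
        for dx in range(-2, 3):
            total += img[y + dy][x + dx]
    return total / 25.0
```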
Step (4.4), count the number of pixels for each grayscale value and construct the gray histogram;
Step (4.5), for the ROI image obtained by the CCD camera, whose value range in gray space is [0, l − 1], l ∈ (0, 255]:
compute the criterion value η(t) by the following formulas, traverse the criterion values of all grayscale values, and select the grayscale value with the maximum criterion value as the optimal gray classification threshold:
η(t) = σ_B²(t) / σ_T²,
where σ_B²(t) is the between-class variance obtained from the occurrence probabilities P0(t), P1(t) of the two classes of pixels, greater than and less than or equal to the value t, divided according to the gray classification threshold t:
σ_B²(t) = P0(t)·(μ0(t) − μ_T)² + P1(t)·(μ1(t) − μ_T)²;
p_i is the probability that a pixel of grayscale value i occurs; if the image has N pixels in total and n_i denotes the number of pixels of grayscale value i, the histogram gives p_i = n_i / N;
P0(t) and P1(t) are the grayscale-value occurrence probabilities of the two classes divided by the gray threshold t: P0(t) = Σ_{i=0..t} p_i, P1(t) = Σ_{i=t+1..l−1} p_i;
μ_T is the probability-weighted sum of the pixels over the whole grayscale value range: μ_T = Σ_{i=0..l−1} i·p_i;
μ0(t) and μ1(t) are respectively the weighted means of the pixel-occurrence probabilities in the grayscale ranges below and above the threshold t: μ0(t) = Σ_{i=0..t} i·p_i / P0(t), μ1(t) = Σ_{i=t+1..l−1} i·p_i / P1(t);
σ_T² is the total variance, i.e., the within-class variance of the two classes divided by the gray threshold t plus the between-class variance, obtained from the occurrence probabilities;
Step (4.6), based on the gray classification threshold t obtained in step (4.5), binarize the two classes of pixels whose grayscale values lie above and below the threshold by the following rule: V(x, y) = 255 if V1(x, y) > t, otherwise V(x, y) = 0;
V(x, y) is the grayscale value of the pixel after binarization;
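Steps (4.4)–(4.6) amount to Otsu's method followed by binarization. A histogram-based sketch (since the total variance σ_T² does not depend on t, maximizing the between-class variance is equivalent to maximizing η(t); the binarization values 255/0 and the function names are assumptions):

```python
def otsu_threshold(hist):
    """Pick the t maximising the between-class variance
    P0(t)*P1(t)*(mu0(t)-mu1(t))^2 over all gray levels."""
    total = sum(hist)
    sum_all = sum(i * h for i, h in enumerate(hist))
    w0 = 0
    sum0 = 0
    best_t, best_var = 0, -1.0
    for t in range(len(hist)):
        w0 += hist[t]                 # pixels with gray value <= t
        if w0 == 0:
            continue
        w1 = total - w0               # pixels with gray value > t
        if w1 == 0:
            break
        sum0 += t * hist[t]
        mu0 = sum0 / w0
        mu1 = (sum_all - sum0) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

def binarize(v, t):
    """Step (4.6): map gray values above the threshold to 255, rest to 0."""
    return 255 if v > t else 0
```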
Step (4.7), perform Canny edge extraction on the binary image obtained in step (4.6) with the cvCanny function module of the openCV library, outputting a Canny edge map; the pixel-gradient threshold for connecting edges at edge junctions is set to 50, and the pixel-gradient threshold for strong edges that start an edge segment is set to 100;
Step (4.8), for the Canny edge map of the binarized image obtained in step (4.7), take the first white edge point found scanning from left to right as a boundary point, obtaining a boundary point set;
Step (4.9), fit a straight line through all points of the boundary point set obtained in step (4.8) by the Hough transform, using the cvHoughLines function module of the openCV library, to obtain the road edge; the polar radius resolution is set to 3 pixels and the polar angle resolution to 1 degree, and the minimum number of polar-coordinate curves into which a straight line must transform, i.e., the minimum number of votes needed for the line to be accepted, is 50;
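Step (4.9)'s Hough fit, sketched as a bare accumulator (3-pixel ρ bins, a 1° θ step — assumed where the source value is garbled — and 50-vote acceptance; cvHoughLines performs this over the whole edge image):

```python
import math

def hough_lines(points, rho_res=3.0, n_theta=180, min_votes=50):
    """Vote each edge point into (rho, theta) bins with
    rho = x*cos(theta) + y*sin(theta); keep bins with >= min_votes."""
    acc = {}
    for x, y in points:
        for k in range(n_theta):
            th = k * math.pi / n_theta          # 1-degree steps
            rho = x * math.cos(th) + y * math.sin(th)
            key = (int(round(rho / rho_res)), k)
            acc[key] = acc.get(key, 0) + 1
    return [(rk * rho_res, tk * math.pi / n_theta)
            for (rk, tk), v in acc.items() if v >= min_votes]
```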
Step (5), carry out the relative coarse positioning of the robot with respect to the road edge by the following steps:
Step (5.1), obtain the top view of the road edge by applying an inverse perspective projection transformation to the forward-view image in which the road edge was detected in step (4); the steps are as follows:
Step (5.1.1), with the camera on the robot, photograph a forward-view image containing a trapezoid that is a rectangle in the top view;
Step (5.1.2), select the four vertices of the trapezoid in the forward-view image obtained in step (5.1.1) and record their coordinates;
Step (5.1.3), set the correspondence between rectangle size and pixels in the top view arbitrarily, thereby obtaining the four top-view vertices produced from the four selected vertices after the inverse perspective projection transformation, and record their coordinates;
Step (5.1.4), obtain the inverse perspective projection transformation matrix H with the cvWarpPerspectiveQMatrix function module of openCV; the inputs are the two 4 × 2 matrices formed by the two sets of coordinates from steps (5.1.2) and (5.1.3), and the output is the inverse perspective projection transformation matrix H;
Step (5.1.5), input the forward-view image and the inverse perspective projection transformation matrix into the cvWarpPerspective function module of openCV to obtain the top view after the inverse perspective projection transformation;
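Step (5.1.4)'s matrix can be sketched as a direct linear transform from the four vertex pairs (a pure-Python stand-in for cvWarpPerspectiveQMatrix; the solver, names, and example coordinates are illustrative):

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for an n x n system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k]
                              for k in range(r + 1, n))) / M[r][r]
    return x

def find_homography(src, dst):
    """DLT: from four (x, y) -> (u, v) correspondences, solve the 8
    unknowns of the 3x3 homography H (with h33 fixed to 1)."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = solve(A, b) + [1.0]
    return [h[0:3], h[3:6], h[6:9]]

def apply_h(H, x, y):
    """Map one pixel through H, as step (5.2.1) does for edge points."""
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)
```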
Step (5.2), calculate the heading-angle deviation and the lateral displacement by the following steps:
the heading-angle deviation is the angle θ between the robot's driving direction AC and the extension line AE of the road-edge line;
the lateral displacement is the perpendicular distance BD from the robot center B on the driving direction AC to the extension line of the road edge;
Step (5.2.1), find the heading-angle deviation θ:
take any two points on the road-edge line of the forward-view image and multiply each point's coordinates by the inverse perspective projection transformation matrix H to obtain the two corresponding points (x1, y1), (x2, y2) on the extension line of the road-edge line; the slope of the corresponding straight line AE is k = (y2 − y1)/(x2 − x1), and θ = 90° − arctan(k); if x1 = x2, θ = 0°;
Step (5.2.2), find the lateral displacement BD:
Step (5.2.2.1), set the upper-left corner of the top view as the coordinate origin, with coordinates expressed in pixels; each pixel corresponds to an actual length of 7.5 mm, fixed when the top-view coordinates were set in step (5.1.3). Solving the equation of the road-edge line simultaneously with that of the robot driving direction AC gives the ordinate h of point A; width denotes the image width;
Step (5.2.2.2), let the ordinate of the intersection point C of the driving direction AC with the lower edge of the top view be the image height, height; the perpendicular distance from point C to the extension of the road-edge line is CE = sin(θ) × 7.5 × (h − height);
Step (5.2.2.3), calculate the lateral displacement BD as
BD = sin(θ) × 7.5 × (h − height − d/7.5),
where d is the perpendicular distance BC from the robot center B to the lower edge of the top view, i.e., the blind-zone distance ahead of the robot, in mm.
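Steps (5.2.1)–(5.2.2) combine into a few lines (7.5 mm per pixel and the BD formula exactly as given above; the argument names are illustrative):

```python
import math

def heading_and_offset(p1, p2, h, height, d, mm_per_px=7.5):
    """Heading deviation theta = 90 deg - arctan(k), with k the slope
    through two top-view points on the edge line, and lateral offset
    BD = sin(theta) * mm_per_px * (h - height - d / mm_per_px)."""
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2:
        theta = 0.0                    # edge line vertical in the top view
    else:
        k = (y2 - y1) / (x2 - x1)
        theta = math.pi / 2.0 - math.atan(k)
    bd = math.sin(theta) * mm_per_px * (h - height - d / mm_per_px)
    return math.degrees(theta), bd
```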
The major advantages of the present invention are:
1. For continuous roads with different edge features the invention provides two road-edge detection methods, giving strong adaptability to the road environment and good generality. One is suitable when the colors on the two sides of the road/non-road boundary differ substantially (e.g., a road bordered by green belts) and is weakly sensitive to illumination conditions and shadows; the other is suitable for road conditions with relatively unclear boundaries but is very sensitive to shadows. The two methods are mainly applicable to semi-structured and unstructured roads.
2. Sometimes, owing to the camera's field of view or mounting angle, or because the road is wide, the image cannot show both sides of the road completely. The road-edge detection method of the invention performs single-sided edge detection, reducing the requirements on the camera's field of view and mounting angle; the method also does not need a very high-resolution camera, reducing the requirement on camera sharpness.
3. After detecting the road edge, the invention coarsely positions the robot with respect to the road edge and outputs the positioning result, which can better assist control of the robot's advance.
The present invention realizes two road-edge detection methods based on monocular vision and, on that basis, realizes coarse positioning of the robot with respect to the road edge.
The first method is the color-based road-edge detection method. The image is first transformed into the HSV color space; exploiting the color difference between road and off-road, with the H value as the main range interval and S and V as auxiliary ranges, the image is roughly divided into a binary grayscale map of road and non-road areas. Combining this with the Canny edge map of the original image, road-edge points are extracted using the facts that the gradient at the road edge is large and the colors on the two sides of the edge differ. When the number of road-edge points is sufficient, the next processing step is carried out; otherwise the frame is abandoned and the next frame is processed. The road-edge points are then fitted to a straight line by the RANSAC method: two of the existing edge points are randomly drawn and a line is fitted; when the fitting result is good enough, the points with large interference are removed and the remaining edge points are randomly sampled and fitted again. Fitting twice makes the result more accurate and removes the interference of some image noise well.
The other method is the threshold-segmentation-based road-edge detection method. The original image is first converted into a grayscale image; a gray threshold t divides the image into the two classes road and non-road, and the between-class variance of the two classes is calculated; traversing the values of t, the t maximizing the between-class variance is chosen as the critical value between road and non-road areas; the image is binarized with this threshold, the edge points of the binary image are extracted as road-edge points, and Hough line detection on them yields the road-edge line.
After the road edge is detected, the robot is coarsely positioned relative to it: the angle between the robot and the road direction and the perpendicular distance from the robot to the road edge are calculated. The obtained road-edge-line image undergoes an inverse perspective projection transformation converting it from the camera's normal view to a top view; after the transformation, image and actual distance have a linear correspondence, one pixel representing a length of α millimeters in practice. From the slope of the line in the image, the current angle θ between robot and road direction can be calculated, and the perpendicular distance δ between robot and road edge is obtained from the image by triangle relations.
Merits of the invention: the proposed road-edge detection methods are applicable to semi-structured and unstructured roads, have good robustness to image instability and sensor color error, and can be used on miniature mobile robots. At the same time they satisfy the demands of real-time navigation well, with low processing delay and wide applicability to unstructured roads. The final result of the algorithm can assist robot localization and play a decisive role in the robot's control decisions. Because the algorithm is based on monocular vision, it shares the common weakness of vision algorithms: it is rather sensitive to illumination conditions and shadows, and detection performance under such conditions can be affected.
Brief description of the drawings
Figs. 1 and 2 illustrate the steps of the color-based road-edge detection algorithm, wherein:
Fig. 1(a) is the original image after dividing the ROI, Fig. 1(b) is the Canny edge-extraction result, Fig. 1(c) is the green-area extraction result, and Fig. 1(d) is the green-area map after erosion and dilation processing.
Fig. 2 is the RANSAC line-fitting result.
Figs. 3, 4, and 5 illustrate the steps of the threshold-segmentation-based road-edge detection algorithm, wherein:
Fig. 3(a) is the original image after dividing the ROI, and Fig. 3(b) is the image converted to gray space.
Fig. 4(a) is the binarization result using the gray threshold, and Fig. 4(b) is the result after Canny edge extraction and redundancy-removing simplification.
Fig. 5 is the final line-fitting result.
Figs. 6, 7, 8, and 9 illustrate the steps of the robot coarse-positioning implementation, wherein:
Fig. 6 is a schematic diagram of the world coordinate system and the image coordinate system.
Fig. 7 contrasts a real road before and after the inverse perspective projection transformation: (a) is the original image, (b) the image after the transformation.
Fig. 8 shows the image used for finding the inverse perspective projection transformation matrix: (a) is the original image, with the four chosen points marked, and (b) is the rendering after the inverse perspective projection transformation.
Fig. 9 is the schematic diagram for calculating the heading-angle deviation and the lateral displacement.
Fig. 10 is the flow chart of the monocular-vision-based road-edge detection and rough localization method.
Embodiment
The present invention designs a road-edge detection and rough localization method based on monocular vision; Fig. 10 shows the flow of road-edge detection and coarse positioning. Two road-edge detection methods are designed: one based on color and the other based on threshold segmentation. On the basis of road-edge detection, the robot is coarsely positioned with respect to the road edge. The image processing in the implementation uses the open-source openCV library.
One, selecting the road-detection algorithm according to the actual environment
First select the algorithm according to the real road environment: if the colors on the two sides of the road/non-road boundary differ substantially (e.g., a road bordered by green belts), choose the color-based road-edge detection method; otherwise choose the threshold-segmentation-based road-edge detection method.
Two, the road-edge detection algorithms
The road-edge detection method based on color
The keys of this method are the extraction of road-edge points and the fitting of the road-edge line. Because the colors on the two sides of the road edge generally differ greatly, and the image gradient at the road edge is large, this method extracts edge points by combining color-range intervals with the Canny image; since the extracted edge points may contain some interference, the road-edge line is fitted by the interference-resistant RANSAC method. Taking a road bordered by green belts as an example, the concrete steps are as follows:
1. Parameter initialization: set the color range intervals of H, S, and V, for example green: (20, 0, 0) to (70, 40, 0), khaki: (10, 0, 0) to (30, 0, 150), and the ROI (area of interest) size (from one quarter to one half of the image, divided from the lower-left or lower-right corner);
2. Read in the image, take out its ROI, and transform the original image into HSV space as follows:
a) Convert the color image from the RGB color space to the HSV color space.
i. The H, S, and V components of each pixel are calculated respectively as:
brightness V = max(R, G, B);
saturation S = (V − min(R, G, B)) / V if V ≠ 0, otherwise S = 0;
hue H = 60(G − B)/(V − min(R, G, B)) if V = R; H = 120 + 60(B − R)/(V − min(R, G, B)) if V = G; H = 240 + 60(R − G)/(V − min(R, G, B)) if V = B;
if H < 0, replace it with H + 360;
where R, G, B are respectively the brightness values of the pixel's red, green, and blue components.
ii. For convenient storage and calculation, the H, S, and V components of each point are converted as:
V' = 255·V, S' = 255·S, H' = H/2;
performing this calculation for every pixel converts an RGB color image into an HSV color-space image.
3. Perform Canny edge extraction to obtain the Canny map: use the cvCanny function of the openCV library, with the original image as input, a gradient threshold of 50 for connecting edges, and a pixel threshold of 150 for strong edges that start a segment on the inner side; the output is the Canny edge map. Fig. 1(a) is the image after dividing the ROI, and Fig. 1(b) is the Canny edge map extracted from the ROI;
4. with the scope interval of HSV by green or khaki extracted region out; As Fig. 1 (c);
5. pair green or khaki areal map corrode operation: utilize the function cvErode function in openCV storehouse, be input as original image, all the other parameters, for the rectangular matrix of acquiescence, without input, are output as result images;
6. Apply a dilation operation to the green or khaki region map: use the cvDilate function of the openCV library, with the original image as input and default values for the remaining parameters; the output is the result image, as in Fig. 1(d);
7. Extract the edge points used for fitting: scan the Canny map from bottom to top and from right to left (for the left road edge). If an edge point in the Canny map has a green or khaki point within the 5 pixels to its left and no green or khaki point within the 5 pixels to its right, it is regarded as a true edge point. Each row contributes at most one true edge point. Points with fewer than 5 pixels on either side are discarded;
8. If the number of true edge points is less than 30, fitting is impossible; if it is greater than 30, fit a straight line to the true edge points obtained above with the RANSAC algorithm:
a) Let the true edge point sample set contain m points in total; randomly select two sample points;
b) Fit a straight line through these two points by least squares: use the cvFitLine function of the openCV library, with the point set of the two points as input, error type CV_DIST_L2, and polar radius and angle tolerances of 0.01; the output is the line parameters. Then count the number K of points whose distance to the line is less than 4 pixels; these are the inliers of the fit;
c) If K/m is greater than 0.3, accept the fitted model and refit a straight line through all inliers by least squares; otherwise re-execute steps a) to c);
d) If the number of repetitions exceeds the maximum allowed, 200, the fitting fails.
The final result is shown in Fig. 2.
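Steps a) to d) can be sketched in pure Python as follows (a minimal RANSAC illustration, not the openCV cvFitLine call; vertical candidate pairs are skipped for brevity, and the refit is an ordinary least-squares line over the inliers):

```python
import random

def ransac_line(points, n_iter=200, inlier_tol=4.0, accept_ratio=0.3):
    """RANSAC line fit: sample 2 points, count points within inlier_tol
    pixels of the candidate line, accept once inliers/m > accept_ratio,
    then refit the inliers by least squares.
    Returns (slope, intercept) or None on failure."""
    m = len(points)
    if m < 30:
        return None                         # too few true edge points
    for _ in range(n_iter):
        (x1, y1), (x2, y2) = random.sample(points, 2)
        if x1 == x2:
            continue                        # skip vertical candidates
        a = (y2 - y1) / (x2 - x1)           # candidate line y = a*x + b
        b = y1 - a * x1
        norm = (a * a + 1.0) ** 0.5
        inliers = [(x, y) for (x, y) in points
                   if abs(a * x - y + b) / norm <= inlier_tol]
        if len(inliers) / m > accept_ratio:
            n = len(inliers)                # least-squares refit of inliers
            sx = sum(x for x, _ in inliers)
            sy = sum(y for _, y in inliers)
            sxx = sum(x * x for x, _ in inliers)
            sxy = sum(x * y for x, y in inliers)
            a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
            b = (sy - a * sx) / n
            return a, b
    return None                             # exceeded the 200 attempts
```

Because the acceptance test requires 30% of all points to lie within 4 pixels of the candidate line, isolated interference points rarely produce an accepted model, which is the robustness property the text relies on.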
2. Road edge detection algorithm based on threshold segmentation
On some unstructured roads the road area has blurred edges while the color difference between the road area and non-road area is large. The system therefore uses a maximum between-class variance method: the image captured by the camera is adaptively clustered in gray space, so that the threshold segmentation is performed dynamically and the drivable road area is identified.
The algorithm divides the road area and non-road area into two classes C_0 and C_1. By computing the between-class variance and within-class variance of the two classes, it obtains the optimal gray segmentation threshold t: the threshold t maximizes the between-class variance and minimizes the within-class variance. The criterion exploits the fact that, when the image is split in two, pixels of the same class have similar colors while pixels of different classes differ strongly.
The concrete implementation steps are as follows:
1. Set the region of interest (ROI) to the lower-right quarter of the image. Fig. 3(a) shows the image after ROI division;
2. Convert the image to a gray-level image by the following formula (where x, y are pixel coordinates with the origin at the upper-left corner of the image, v_0 is the gray value of pixel (x, y), R, G, B are the red, green, and blue channel values of the pixel in the color image, and v_0, R, G, B ∈ [0, 255]):
v_0(x, y) = 0.212671 × R(x, y) + 0.715160 × G(x, y) + 0.072169 × B(x, y)
3. Blur the image with a mean filter to suppress isolated color-change points. Fig. 3(b) shows the gray-space image. The concrete formula is as follows (where x, y are pixel coordinates with the origin at the upper-left corner, v_1 is the blurred gray value of pixel (x, y), K is the mean-filter kernel matrix, V is the 5 × 5 pixel-value matrix centered at (x, y), and * denotes convolution):
v_1(x, y) = V * K
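Steps 2 and 3 can be sketched in pure Python (illustration only; the weights are the BT.709 luma coefficients of the formula above, and the 5 × 5 mean kernel has all entries 1/25; border pixels are simply left unchanged in this sketch):

```python
def to_gray(r, g, b):
    """Gray value of one RGB pixel, per the formula in step 2."""
    return 0.212671 * r + 0.715160 * g + 0.072169 * b

def mean_blur5(img):
    """5x5 mean filter over a 2-D list of gray values (step 3).
    Border pixels (within 2 of an edge) are copied unchanged."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(2, h - 2):
        for x in range(2, w - 2):
            out[y][x] = sum(img[y + dy][x + dx]
                            for dy in range(-2, 3)
                            for dx in range(-2, 3)) / 25.0
    return out
```

The three luma weights sum to 1, so a uniform gray image is left unchanged by both operations.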
4. Compute the gray histogram of the image, i.e. the number of pixels for each gray value;
5. The value range of the camera image in gray space is [0, l−1] (l ∈ (0, 255]). Let n_i be the number of pixels with gray value i; the total number of pixels in the image is N = Σ_i n_i. From the histogram, the probability that a pixel with gray value i appears is p_i = n_i / N, so that 0 ≤ p_i ≤ 1 and Σ_i p_i = 1.
For some gray value t taken as the threshold, the gray values can be divided into two classes. The occurrence probabilities of the two classes are:
P_0(t) = Σ_{i=0}^{t} p_i, P_1(t) = 1 − P_0(t)
The means of the two classes and the overall gray-level expectation of the image are:
μ_0(t) = Σ_{i=0}^{t} i·p_i / P_0(t), μ_1(t) = Σ_{i=t+1}^{l−1} i·p_i / P_1(t), μ_T = Σ_{i=0}^{l−1} i·p_i
The between-class variance of the two classes is therefore:
σ_B²(t) = P_0(t)·(μ_0(t) − μ_T)² + P_1(t)·(μ_1(t) − μ_T)²
The within-class variance is:
σ_W²(t) = P_0(t)·σ_0²(t) + P_1(t)·σ_1²(t)
The final criterion value is:
η(t) = σ_B²(t) / σ_W²(t)
Traversing the criterion values of all gray values and selecting the gray value with the maximum criterion value gives the optimal classification gray threshold.
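Step 5 can be sketched compactly in pure Python. The sketch maximises the between-class variance σ_B²(t) directly; since the total variance is fixed and equals σ_B²(t) + σ_W²(t), this selects the same threshold as maximising the ratio η(t) in the text:

```python
def otsu_threshold(hist):
    """Optimal gray threshold by maximum between-class variance,
    given a 256-bin gray histogram (list of pixel counts)."""
    total = sum(hist)
    mu_total = sum(i * hist[i] for i in range(256)) / total
    best_t, best_var = 0, -1.0
    w0 = 0.0   # P0(t): probability of class C0 (gray <= t)
    s0 = 0.0   # running sum of i * p_i over class C0
    for t in range(256):
        w0 += hist[t] / total
        s0 += t * hist[t] / total
        w1 = 1.0 - w0
        if w0 == 0.0 or w1 == 0.0:
            continue                       # one class is empty
        mu0 = s0 / w0
        mu1 = (mu_total - s0) / w1
        var_between = w0 * (mu0 - mu_total) ** 2 + w1 * (mu1 - mu_total) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

On a clearly bimodal histogram (e.g. dark road pixels vs. bright non-road pixels) the returned threshold falls between the two modes.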
6. Using the gray classification threshold obtained in the previous step, binarize the two pixel classes above and below this value to white and black. The result is shown in Fig. 4(a). The formula is as follows (where x, y are pixel coordinates with the origin at the upper-left corner, v_1 is the gray value of pixel (x, y) before binarization, v is the gray value after binarization, and t is the gray classification threshold obtained in the previous step):
v(x, y) = 255 if v_1(x, y) > t, otherwise v(x, y) = 0
7. Perform Canny edge extraction on this binary image: use the cvCanny function of the openCV library, with the binary image as input, 50 as the gradient threshold controlling edge linking, and 150 as the threshold controlling initial segmentation of strong edges; the output is the Canny edge map.
8. From the Canny edge map of the previous step, take the first white point in each row scanning from left to right (for the right road edge), obtaining a boundary point set; this removes redundant information and simplifies the result. Fig. 4(b) shows the result after edge extraction and simplification;
9. Fit a straight line to the boundary point set obtained in the previous step with the Hough transform: use the cvHoughLines function of the openCV library, with the original image as input, a polar radius resolution of 3 pixels, a polar angle resolution of π/180, and a minimum of 50 intersecting curves required for a line; the output is the Hough-fitted line. This gives the road edge, as in Fig. 5.
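The voting scheme behind step 9 can be illustrated with a toy Hough transform over a point set (a minimal sketch only, not the cvHoughLines implementation; it uses a fixed 1° angle step and the same rho/vote parameters as the text):

```python
import math

def hough_line(points, rho_res=3.0, theta_steps=180, min_votes=50):
    """Each point votes for the (rho, theta) cells of all lines through it,
    with rho = x*cos(theta) + y*sin(theta). The best cell with at least
    min_votes votes is returned as (rho, theta); otherwise None."""
    votes = {}
    for x, y in points:
        for k in range(theta_steps):            # 1-degree angle resolution
            theta = k * math.pi / theta_steps
            rho = x * math.cos(theta) + y * math.sin(theta)
            cell = (int(round(rho / rho_res)), k)
            votes[cell] = votes.get(cell, 0) + 1
    cell, n = max(votes.items(), key=lambda kv: kv[1])
    if n < min_votes:
        return None
    return cell[0] * rho_res, cell[1] * math.pi / theta_steps
```

For a near-vertical road edge the winning cell has theta close to 0 and rho close to the edge's horizontal position, which is why the boundary point set of step 8 (one point per row) feeds the transform well.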
3. Relative coarse positioning of the robot and the road edge
While the robot advances, it must continually use the road edge detected by the camera to compute the lateral displacement and course angle deviation between the current robot pose and the road edge. The lateral displacement is the vertical distance from the robot center to the road edge. The course angle deviation is the angle between the robot's current driving direction and the planned route it should follow. The concrete calculation steps are as follows:
1. Inverse perspective projection transformation
To compute the lateral displacement and course angle deviation conveniently, the detected lane edge image is subjected to an inverse perspective projection transformation, converting the forward view into a top view, as in Fig. 7.
In Euclidean space, define two coordinate systems W and I, the world coordinate system and the image coordinate system respectively:
W = {(x, y, z)} ∈ E³
I = {(u, v)} ∈ E²
where E is Euclidean space; x, y, z are coordinates in the world coordinate system W, with the z = 0 plane being the ground plane; u, v are coordinates in the image coordinate system, whose origin is the upper-left corner of the image.
The essence of the inverse perspective projection transformation is to map the road image under the image coordinate system I, i.e. the image obtained by the camera, onto the z = 0 plane of the world coordinate system W; the relation between the two is shown in Fig. 6.
Since z = 0 and the transformed image involves no additional rotation, the image after inverse perspective projection can be obtained simply by multiplying the original image coordinates by a 3 × 3 transformation matrix:
f(x, y) = (x, y, 1) × H
where (x, y) is any coordinate in the original image, H is the 3 × 3 inverse perspective projection transformation matrix, and f(x, y) is the transformed coordinate.
The concrete steps are as follows:
(1) With the camera on the robot, take a forward-view image of a pattern that is trapezoidal in the forward view but rectangular in the top view;
(2) Pick the four vertices of the trapezoid in the picture and record their coordinates;
(3) According to the rectangle size in the top view and the chosen pixel correspondence, set the x, y coordinates of the four corresponding rectangle vertices in the new coordinate system;
(4) Use the cvWarpPerspectiveQMatrix function of openCV to obtain the inverse perspective projection matrix H. Its inputs are two 4 × 2 matrices formed from the two sets of coordinates above, and its output is the inverse perspective projection transformation matrix H.
(5) Use the cvWarpPerspective function of openCV; its inputs are the original image and the inverse perspective projection transformation matrix H, and its output is the transformed image.
Fig. 8 shows the image used to compute the inverse perspective projection matrix and the image after the inverse perspective transformation.
Concrete example: as in Fig. 8, the image resolution is 768 × 576. Four points are taken, with coordinates (92, 574), (230, 437), (722, 566), (572, 432); their coordinates in the new coordinate system are set to (304, 575), (304, 375), (464, 575), (464, 375). The inverse perspective projection transformation matrix is obtained with the cvWarpPerspectiveQMatrix function of openCV, whose inputs are two 4 × 2 matrices, each composed of the horizontal and vertical coordinates of the 4 points, i.e. the coordinates before and after the transformation. The resulting inverse perspective projection transformation matrix is:

0.291892      1.36039      487.334
0.0454482     2.89973      1062.64
0.000060317   0.00356855   1
After the inverse perspective projection transformation, the four original points are essentially the four vertices of a square.
Because the actual width of one square brick in the picture is 600 mm and the coordinate of each vertex in the image is known, it can be computed that after the inverse perspective projection transformation one pixel corresponds to 7.5 mm in reality; this is an important basis for the subsequent computation of the robot's driving state. Since the coordinates of the four points in the transformed image are chosen manually, the positions of these four points can be adjusted to control the actual distance corresponding to one pixel in the image.
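The per-pixel effect of the 3 × 3 matrix can be sketched in pure Python (a minimal illustration, not the cvWarpPerspective call, which also resamples the whole image; the example matrix below is a made-up scale-and-shift homography, not the matrix from Fig. 8):

```python
def warp_point(H, x, y):
    """Map one pixel (x, y) through a 3x3 homography H (nested lists),
    using the column-vector convention H @ (x, y, 1)^T followed by the
    homogeneous normalisation; the text's row-vector form (x, y, 1) x H
    is the same mapping with H transposed."""
    xs = H[0][0] * x + H[0][1] * y + H[0][2]
    ys = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return xs / w, ys / w
```

Applying the actual matrix above to the four recorded source points should land each near its chosen destination vertex, which is exactly how the pixel-to-millimetre scale of 7.5 mm was fixed.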
2. Computing the course angle deviation and lateral displacement
In Fig. 9, suppose the robot is at B, the midpoint of AC; the central solid line (line ABC) is the robot's driving direction; the lane edge direction is the lane edge line after inverse perspective projection transformation; the dashed line is the extension of the lane edge line; the angle between it and the central solid line (line ABC) is the robot's course angle deviation θ; and the length of segment BD is the lateral displacement.
(1) Find the course angle deviation θ. The detection algorithm above gives the coordinates of every point of the road edge line in the original image; multiplying by the inverse perspective projection transformation matrix H yields the coordinates of points on line DE. Take two points (x_1, y_1), (x_2, y_2) with x_1 ≠ x_2 and compute the slope of the line k = (y_2 − y_1)/(x_2 − x_1); then θ = 90° − arctan(k). If x_1 = x_2, θ = 0°.
(2) Find the lateral displacement deviation (BD).
I. The coordinates of point A are obtained by solving the equations of line ABC and line DE simultaneously. Take the upper-left corner of the image as the origin, with coordinates in pixels, and let the ordinate of A be h. Line ABC is the vertical center line of the image, x = width/2 (width denotes the image width).
II. The inverse perspective projection transformation uses a chosen scale ∝ (mm), i.e. the actual length corresponding to one pixel (e.g. the 7.5 mm above). The physical length of segment CE is then sin(θ) × ∝ × (h − height), where height is the pixel height of the image, i.e. the ordinate of C.
Iii. the actual range (BC) of supposing robot center and image below is d(mm), blind area distance after this distance is video camera and is arranged in robot, the distance that converts pixel to is exactly d/ ∝, and the calculating of transversal displacement (BD) just becomes sin (θ) * ∝ * (hheightd/ ∝) so.
Claims (1)
1. A road edge detection and rough localization method based on monocular vision, characterized in that it is a semi-automatic road edge detection and rough localization method based on a mobile robot's monocular vision, realized in a road edge detection and coarse positioning system composed of a computer controlled by an operator serially connected with a single-lens CCD camera mounted on the mobile robot, the method being realized by the following steps in sequence:
Step (1): the road-condition image photographed by the single-lens CCD camera while the robot advances, reflecting the color difference between the two sides of the road/non-road boundary, is input into the computer and displayed;
Step (2): according to the color difference between the two sides of the road/non-road boundary, the operator selectively starts one of two different software modules installed in the computer:
for a color road-condition image in the RGB color space in which the colors on the two sides of the road/non-road boundary differ significantly, the color-based road edge detection module is started;
for a color road-condition image in the RGB color space in which the boundary colors are blurred and there is no obvious border, the road edge detection module based on threshold segmentation is started;
Step (3): the color-based road edge detection module performs road edge detection by the following steps in sequence:
Step (3.1): parameter initialization; read in the color road-condition image in the RGB color space and set the region of interest (ROI) to 1/4 to 1/2 of the road-condition image, divided starting from the lower-left corner of the color road-condition image;
Step (3.2): take the region-of-interest (ROI) image out of the color road-condition image and transform it into the HSV color space, obtaining a new road-condition image:
Brightness: V' = max(R, G, B)
Saturation: S' = (V' − min(R, G, B)) / V' if V' ≠ 0, otherwise S' = 0
Hue:
H' = 60(G − B) / (V' − min(R, G, B)) if V' = R
H' = 120 + 60(B − R) / (V' − min(R, G, B)) if V' = G
H' = 240 + 60(R − G) / (V' − min(R, G, B)) if V' = B
If H' < 0, replace it with (H' + 360); here R, G, B are the brightness values of the red, green, and blue components of the original pixel (normalized to [0, 1]); finally let V = 255V', S = 255S', H = H'/2;
Step (3.3): perform Canny edge extraction with the cvCanny function module of the openCV library in the following manner, obtaining the Canny map:
on the result image obtained in step (3.2), set 50 as the gradient threshold controlling edge linking and 150 as the gradient threshold controlling initial segmentation of strong edges; the inputs are the HSV color-space image of the region of interest (ROI) and the two gradient thresholds above; the output is the Canny edge map;
Step (3.4): set the HSV color ranges and extract the green or khaki regions, the green range being (20,0,0)~(70,40,0) and the khaki range (10,0,0)~(30,0,150);
Step (3.5): apply an erosion operation to the green or khaki regions obtained in step (3.4) with the cvErode function module of the openCV library; the output is the result image;
Step (3.6): apply a dilation operation to the result image of step (3.5) with the cvDilate function module of the openCV library; the output is the green or khaki region result image after the erosion and dilation operations;
Step (3.7): extract the edge points used for fitting from the result image obtained in step (3.6): for the left road edge, scan the result image of step (3.6) from right to left; if an edge point in the Canny edge map has a green point within the 5 pixels on its left and correspondingly no green point within the 5 pixels on its right, this edge point is considered a true edge point; an edge point with fewer than 5 pixels on either side is directly discarded; this yields a true edge point sample set; the same steps apply to a khaki region;
Step (3.8): judge the true edge points:
if the number m of true edge points in the true edge point sample set satisfies m ≥ 30, perform the following steps;
Step (3.8.1): randomly select two true edge points from the true edge point sample set as two sample data;
Step (3.8.2): fit a straight line by least squares through the cvFitLine function module of the openCV library; the inputs are the two sample data obtained in step (3.8.1), the error type is CV_DIST_L2, and the polar radius and angle tolerances are 0.01; the output is the slope and intercept of the fitted line; then count the number k of edge points whose distance to this line is less than 4 pixels, to be used as the fitting points of the subsequent step;
Step (3.8.3): if k/m is greater than 0.3, accept the fitted model, then refit a straight line through the k points by least squares to obtain the final result; otherwise return to step (3.8.1); when the number of repetitions exceeds the set maximum of 200 cycles, the fitting fails; the fitting also fails if the number of true edge points in the true edge point sample set is less than 30;
Step (4): the road edge detection module based on threshold segmentation performs road edge detection at the right road edge by the following steps in turn:
Step (4.1): set the region of interest (ROI) to the lower-right quarter of the image;
Step (4.2): take out the region-of-interest image and convert it to a gray-level image by the following formula:
v_0(x, y) = 0.212671 × R(x, y) + 0.715160 × G(x, y) + 0.072169 × B(x, y),
where x, y are pixel coordinates with the origin at the upper-left corner of the image, v_0 is the gray value of pixel (x, y), R, G, B are the red, green, and blue channel values of the pixel in the color image, and v_0, R, G, B ∈ [0, 255];
Step (4.3): apply mean blurring to each pixel (x, y) by the following formula, convolving the 5 × 5 pixel-value matrix V centered at (x, y) with the kernel matrix K to obtain the new pixel value v_1:
v_1(x, y) = V * K, where * denotes convolution;
Step (4.4): count the number of pixels for each gray value, forming the gray histogram;
Step (4.5): the region-of-interest image obtained by the CCD camera has the value range [0, l−1] in gray space, l ∈ (0, 255];
compute the criterion value η(t) as follows, traverse the criterion values of all gray values, and select the gray value with the maximum criterion value as the optimal classification gray threshold:
η(t) = σ_B²(t) / σ_W²(t)
where
σ_B²(t) = P_0(t)·(μ_0(t) − μ_T)² + P_1(t)·(μ_1(t) − μ_T)²
is the between-class variance obtained from the occurrence probabilities P_0(t), P_1(t) of the two pixel classes above and below the gray classification threshold t:
P_0(t) = Σ_{i=0}^{t} p_i, P_1(t) = 1 − P_0(t);
p_i is the occurrence probability of pixels with gray value i; the total number of pixels in the image is N = Σ_i n_i, where n_i is the number of pixels with gray value i, so that from the histogram the occurrence probability of gray value i is p_i = n_i / N;
μ_T = Σ_i i·p_i is the probability-weighted sum of the pixels over the whole gray value range;
σ_W²(t) = P_0(t)·σ_0²(t) + P_1(t)·σ_1²(t)
is the within-class variance of the two pixel classes above and below the threshold, obtained from their occurrence probabilities;
here μ_0(t) = Σ_{i=0}^{t} i·p_i / P_0(t) and μ_1(t) = Σ_{i=t+1}^{l−1} i·p_i / P_1(t) are respectively the weighted means of the pixel occurrence probabilities below and above the threshold t;
Step (4.6): based on the gray classification threshold t obtained in step (4.5), binarize the gray values of the two pixel classes above and below t by the following formula:
v(x, y) = 255 if v_1(x, y) > t, otherwise v(x, y) = 0,
where v(x, y) is the gray value of the pixel after binarization;
Step (4.7): perform Canny edge extraction on the binary image obtained in step (4.6) with the cvCanny function module of the openCV library; the output is the Canny edge image; the pixel gradient threshold controlling edge linking is set to 50, and the pixel gradient threshold controlling initial segmentation of strong edges is set to 100;
Step (4.8): from the binarized Canny edge map obtained in step (4.7), take the first white edge point in each row from left to right as a boundary point, obtaining a boundary point set;
Step (4.9): fit a straight line with the Hough transform to all boundary points of the boundary point set obtained in step (4.8), using the cvHoughLines function module of the openCV library, obtaining the road edge; the polar radius resolution is set to 3 pixels, the polar angle resolution to π/180, and the minimum number of curves in polar coordinates that must intersect for a line is 50;
Step (5): carry out the relative coarse positioning of the robot and the road edge by the following steps:
Step (5.1): obtain the top view of the road edge by applying an inverse perspective projection transformation to the forward view of the road edge detected in step (4), as follows:
Step (5.1.1): with the camera on the robot, take a forward-view image containing a trapezoidal pattern that is rectangular in the top view;
Step (5.1.2): select the four vertices of the trapezoid in the forward view obtained in step (5.1.1) and record their coordinates;
Step (5.1.3): set arbitrarily the correspondence between the rectangle size and pixels in the top view, thereby obtaining the four vertices in the top view to which the four vertices map after the inverse perspective projection transformation, and record their coordinates;
Step (5.1.4): obtain the inverse perspective projection transformation matrix H with the cvWarpPerspectiveQMatrix function module of openCV; the inputs are the two 4 × 2 matrices formed from the two sets of coordinates in steps (5.1.2) and (5.1.3); the output is the inverse perspective projection transformation matrix H;
Step (5.1.5): input the forward view and the inverse perspective projection transformation matrix into the cvWarpPerspective function module of openCV, obtaining the top view after the inverse perspective projection transformation;
Step (5.2): compute the course angle deviation and lateral displacement by the following steps:
the course angle deviation is the angle θ between the robot driving direction AC and the extension AE of the road edge line;
the lateral displacement is the vertical distance BD from the robot center B on the driving direction AC to the extension of the road edge;
Step (5.2.1): find the course angle deviation θ:
take any two points on the road edge line of the forward view and multiply the coordinates of each by the inverse perspective projection transformation matrix H to obtain the corresponding two points (x_1, y_1), (x_2, y_2) on the extension of the road edge line; the slope of the corresponding line AE is k = (y_2 − y_1)/(x_2 − x_1), and θ = 90° − arctan(k); if x_1 = x_2, θ = 0°;
Step (5.2.2): find the lateral displacement BD:
Step (5.2.2.1): set the upper-left corner of the top view as the coordinate origin, with coordinates expressed in pixels and each pixel corresponding to an actual length of 7.5 mm, as fixed when the top-view coordinates were set in step (5.1.3); the robot driving direction AC is the vertical center line of the image, x = width/2 (width denotes the image width); solving this equation simultaneously with the road edge line gives the ordinate h of point A;
Step (5.2.2.2): let the ordinate of the intersection C of the robot driving direction AC with the bottom of the top view be the image height, height; the vertical distance from C to the extension of the road edge line is CE = sin(θ) × 7.5 × (h − height);
Step (5.2.2.3): compute the lateral displacement BD as:
BD = sin(θ) × 7.5 × (h − height − d/7.5),
where d is the vertical distance BC from the robot center B to the lower edge of the top view, i.e. the blind-zone distance ahead of the robot, in mm.
Priority Applications (1)
Application Number  Priority Date  Filing Date  Title 

CN201210144082.8A CN102682292B (en)  20120510  20120510  Method based on monocular vision for detecting and roughly positioning edge of road 
Publications (2)
Publication Number  Publication Date 

CN102682292A CN102682292A (en)  20120919 
CN102682292B true CN102682292B (en)  20140129 
Cited By (1)
Publication number  Priority date  Publication date  Assignee  Title 

CN107622502A (en) *  20170728  20180123  南京航空航天大学  The path extraction of robot vision leading system and recognition methods under the conditions of complex illumination 
Families Citing this family (50)
Publication number  Priority date  Publication date  Assignee  Title 

CN102997853A (en) *  20121122  20130327  华中科技大学  Device and method for detecting ice and snow thickness 
CN103971361B (en) *  20130206  20170510  富士通株式会社  Image processing device and method 
CN103308056B (en) *  20130523  20150916  中国科学院自动化研究所  A kind of roadmarking detection method 
CN103400150B (en) *  20130814  20170707  浙江大学  A kind of method and device that road edge identification is carried out based on mobile platform 
CN103630122B (en) *  20131015  20150715  北京航天科工世纪卫星科技有限公司  Monocular vision lane line detection method and distance measurement method thereof 
CN103809186B (en) *  20131220  20161005  长沙中联重科环卫机械有限公司  Curb detecting system, method, device and engineering machinery 
CN103714538B (en) *  20131220  20161228  中联重科股份有限公司  road edge detection method, device and vehicle 
CN103696381B (en) *  20131220  20160203  长沙中联重科环卫机械有限公司  A kind of curb cleaning control method, control device, control system and roadsweeper thereof 
CN103699899B (en) *  20131223  20160817  北京理工大学  Method for detecting lane lines based on equidistant curve model 
CN104021388B (en) *  20140514  20170822  西安理工大学  Barrier during backing automatic detection and method for early warning based on binocular vision 
CN104036246B (en) *  20140610  20170215  电子科技大学  Lane line positioning method based on multifeature fusion and polymorphism mean value 
CN105224908A (en) *  20140701  20160106  北京四维图新科技股份有限公司  A kind of roadmarking acquisition method based on orthogonal projection and device 
KR101668802B1 (en) *  20140903  20161109  신동윤  Apparatus for generating image identifiable in long distance and operation method thereof 
CN104760812B (en) *  20150226  20170630  三峡大学  Product realtime positioning system and method on conveyer belt based on monocular vision 
CN105005761B (en) *  20150616  20180817  北京师范大学  A kind of fullcolor high resolution remote sensing images Approach for road detection of combination significance analysis 
CN105549603B (en) *  20151207  20180828  北京航空航天大学  A kind of Intelligent road inspection control method of multirotor unmanned aerial vehicle 
CN105701496B (en) *  20160112  20190705  北京万同科技有限公司  A kind of go disk recognition methods based on artificial intelligence technology 
CN105740826A (en) *  20160202  20160706  大连楼兰科技股份有限公司  Lane mark binaryzation detection method based on dual scales 
CN105844614B (en) *  20160315  20190823  广东工业大学  It is a kind of that northern method is referred to based on the vision for proofreading robot angle 
CN106127177A (en) *  20160701  20161116  蔡雄  A kind of unmanned road roller 
CN106295556A (en) *  20160809  20170104  中国科学院遥感与数字地球研究所  A kind of Approach for road detection based on SUAV Aerial Images 
CN107831675A (en) *  20160916  20180323  天津思博科科技发展有限公司  Online robot control device based on intelligence learning technology 
CN106500714B (en) *  20160922  20191129  福建网龙计算机网络信息技术有限公司  A kind of robot navigation method and system based on video 
CN106444758B (en) *  20160927  20190723  华南农业大学  A kind of road Identification based on machine vision and the preferred AGV transport vehicle in path 
CN106503636B (en) *  20161012  20190820  同济大学  A kind of road sighting distance detection method and device of viewbased access control model image 
CN108122252A (en) *  20161126  20180605  沈阳新松机器人自动化股份有限公司  A kind of image processing method and relevant device based on panoramic vision robot localization 
CN110402310A (en) *  20170315  20191101  3M创新有限公司  Pavement marker system for lane identification 
CN106872995B (en) *  20170414  20190920  北京佳讯飞鸿电气股份有限公司  A kind of laser radar detection method and device 
CN107220976B (en) *  20170517  20201120  南京航空航天大学  Highway positioning method for aerial highway image 
CN107221070A (en) *  20170524  20170929  广州市银科电子有限公司  A kind of bill anticounterfeit discrimination method recognized based on master pattern fluorescent characteristics 
CN107665489A (en) *  20170918  20180206  华中科技大学  A kind of glass dihedral angle detection method based on computer vision 
CN107729824A (en) *  20170928  20180223  湖北工业大学  A kind of monocular visual positioning method for intelligent scoring of being set a table for Chinese meal dinner party table top 
CN107705301B (en) *  20170929  20210413  南京中设航空科技发展有限公司  Highway marking damage detection method based on unmanned aerial vehicle aerial highway image 
CN107742304B (en) *  20171010  20200421  广州视源电子科技股份有限公司  Method and device for determining movement track, mobile robot and storage medium 
CN107560634A (en) *  20171018  20180109  江苏卡威汽车工业集团股份有限公司  A path adjustment system and method for new-energy vehicles 
CN107525525A (en) *  20171018  20171229  江苏卡威汽车工业集团股份有限公司  A path alignment system and method for new-energy vehicles 
CN107831762A (en) *  20171018  20180323  江苏卡威汽车工业集团股份有限公司  A path planning system and method for new-energy vehicles 
CN107807638A (en) *  20171018  20180316  江苏卡威汽车工业集团股份有限公司  An image processing system and method for new-energy vehicles 
CN107728620A (en) *  20171018  20180223  江苏卡威汽车工业集团股份有限公司  An unmanned driving system and method for new-energy vehicles 
CN107808140B (en) *  20171107  20200731  浙江大学  Monocular vision road recognition algorithm based on image fusion 
CN109859264A (en) *  20171130  20190607  北京机电工程研究所  A vision-guided aircraft capture-control tracking system 
CN108038826A (en) *  20171228  20180515  上海小零网络科技有限公司  Method and device for correcting perspective-distorted shelf images 
CN108090479B (en) *  20180126  20200512  湖北工业大学  Lane detection method based on improved Gabor transform and vanishing point updating 
CN108647664A (en) *  20180518  20181012  河海大学常州校区  A lane line detection method based on surround-view images 
CN109032125A (en) *  20180531  20181218  上海工程技术大学  A navigation method for vision-based AGVs 
CN108777071A (en) *  20180704  20181109  深圳智达机械技术有限公司  A highway patrol robot 
CN109165579A (en) *  20180808  20190108  奇瑞汽车股份有限公司  Method and apparatus for detecting stop lines 
CN109753081B (en) *  20181214  20200821  煤炭科学研究总院  Machine-vision-based roadway inspection UAV system and navigation method 
CN109990706A (en) *  20190328  20190709  中国核电工程有限公司  Online measuring device and method for fuel pellet apparent dimensions 
CN110825080B (en) *  20191031  20201113  华南农业大学  Orchard path visual navigation method, system and medium based on fuzzy control algorithm 
Family Cites Families (3)
Publication number  Priority date  Publication date  Assignee  Title 

JP4248558B2 (en) *  20060324  20090402  トヨタ自動車株式会社  Road marking line detection device 
CN102156977A (en) *  20101222  20110817  浙江大学  Vision-based road detection method 
CN102436644B (en) *  20111102  20131016  南京物联网研究院发展有限公司  Unstructured road detection method based on adaptive edge registration 

2012
 20120510 CN CN201210144082.8A patent/CN102682292B/en not_active IP Right Cessation
Cited By (2)
Publication number  Priority date  Publication date  Assignee  Title 

CN107622502A (en) *  20170728  20180123  南京航空航天大学  Path extraction and recognition method of a robot visual guidance system under complex illumination conditions 
CN107622502B (en) *  20170728  20201020  南京航空航天大学  Path extraction and identification method of visual guidance system under complex illumination condition 
Also Published As
Publication number  Publication date 

CN102682292A (en)  20120919 
Similar Documents
Publication  Publication Date  Title 

Wang et al.  Torontocity: Seeing the world with a million eyes  
CN105667518B (en)  Method and device for lane detection  
Ozgunalp et al.  Multiple lane detection algorithm based on novel dense vanishing point estimation  
CN104298976B (en)  License plate detection method based on convolutional neural networks  
KR101856401B1 (en)  Method, apparatus, storage medium, and device for processing lane line data  
WO2018068653A1 (en)  Point cloud data processing method and apparatus, and storage medium  
US10817731B2 (en)  Imagebased pedestrian detection  
CN104848851B (en)  Intelligent mobile robot and its fusion-based mapping method  
Son et al.  Realtime illumination invariant lane detection for lane departure warning system  
CN107235044B (en)  A method for reconstructing road traffic scenes and driver driving behavior from multi-sensor data  
CN103902976B (en)  A pedestrian detection method based on infrared images  
Borkar et al.  A novel lane detection system with efficient ground truth generation  
US10262216B2 (en)  Hazard detection from a camera in a scene with moving shadows  
Li et al.  A sensorfusion drivableregion and lanedetection system for autonomous vehicle navigation in challenging road scenarios  
CN105160309B (en)  Three-lane detection method based on morphological image segmentation and region growing  
CN106845547B (en)  A camera-based intelligent vehicle positioning and road-marking recognition system and method  
Alvarez et al.  Combining priors, appearance, and context for road detection  
US10696227B2 (en)  Determining a road surface characteristic  
WO2018145602A1 (en)  Lane determination method, device and storage medium  
US6363161B2 (en)  System for automatically generating database of objects of interest by analysis of images recorded by moving vehicle  
Song et al.  Lane detection and classification for forward collision warning system based on stereo vision  
Rasmussen  Combining laser range, color, and texture cues for autonomous road following  
US9141870B2 (en)  Threedimensional object detection device and threedimensional object detection method  
Nedevschi et al.  A sensor for urban driving assistance systems based on dense stereovision  
Pfeiffer et al.  Efficient representation of traffic scenes by means of dynamic stixels 
Legal Events
Date  Code  Title  Description 

PB01  Publication  
C06  Publication  
SE01  Entry into force of request for substantive examination  
C10  Entry into substantive examination  
GR01  Patent grant  
C14  Grant of patent or utility model  
CF01  Termination of patent right due to nonpayment of annual fee 
Granted publication date: 2014-01-29  Termination date: 2018-05-10 