CN103487034B - Method for measuring distance and height by vehicle-mounted monocular camera based on vertical type target - Google Patents


Info

Publication number
CN103487034B
CN103487034B (application CN201310445576.4A)
Authority
CN
China
Prior art keywords
point
template
pixel
angle
angle point
Prior art date
Application number
CN201310445576.4A
Other languages
Chinese (zh)
Other versions
CN103487034A (en)
Inventor
高峰
徐国艳
丁能根
黄小云
邢龙龙
朱金龙
Original Assignee
北京航空航天大学
Priority date
Filing date
Publication date
Application filed by 北京航空航天大学 filed Critical 北京航空航天大学
Priority to CN201310445576.4A priority Critical patent/CN103487034B/en
Publication of CN103487034A publication Critical patent/CN103487034A/en
Application granted granted Critical
Publication of CN103487034B publication Critical patent/CN103487034B/en

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C3/00 Measuring distances in line of sight; Optical rangefinders
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 Measuring arrangements characterised by the use of optical means
    • G01B11/02 Measuring arrangements characterised by the use of optical means for measuring length, width or thickness

Abstract

The invention discloses a method for measuring distance and height with a vehicle-mounted monocular camera based on a vertical target, belonging to the technical field of intelligent-vehicle environmental perception. By performing template matching, candidate-point clustering and screening, and precise localization on the region of interest of a vertical-target image, the method detects and locates corner points at sub-pixel level and, in combination with a projective geometry model, establishes a mapping between the image ordinate and the actual imaging angle, thereby realizing distance and height measurement. The camera's intrinsic and extrinsic parameters need not be calibrated, and no calibration board or reference object needs to be placed repeatedly, which reduces the chance of error, removes operating steps, and improves measurement accuracy. Compared with conventional corner detection, the target points on the target are detected more accurately, reducing the computation of the subsequent clustering and screening; height measurement with a monocular camera is realized on the basis of the computed actual imaging angle and distance, greatly reducing cost.

Description

A distance and height measurement method for a vehicle-mounted monocular camera based on a vertical target
Technical field
The invention belongs to the technical field of intelligent-vehicle environmental perception and relates to a distance- and height-measurement technique based on machine vision, specifically to a method by which a vehicle-mounted monocular camera, using a vertical target, measures the distance and height of obstacles, bridge openings, culverts and the like.
Background technology
Machine vision is the most important component of an intelligent vehicle's environment perception system: it supplies the decision layer with a large amount of necessary environmental information and is therefore of great significance. In particular, the measured distance and height of objects provide key parameters for collision warning, path planning and vehicle classification in unmanned driving or driver assistance systems, and for passability detection at culverts and bridges. At present, machine-vision measurement on intelligent vehicles generally falls into two classes, binocular and monocular. Binocular ranging is easily affected by feature-point mismatching and is computationally heavy, making real-time operation difficult. Monocular ranging, by contrast, is structurally simple and fast, and so has broad application prospects.
Current monocular vision systems usually obtain the depth of the object under test by the corresponding-point calibration method (calibration method for short). The traditional corresponding-point calibration method typically uses a checkerboard calibration board to calibrate the camera's intrinsic and extrinsic parameters, then solves, in combination with a projection model, the correspondence between the image coordinate system and the actual imaging angle to obtain distance information. This method requires collecting calibration-board images from many orientations and accurately recording the coordinates of each point in both the world and image coordinate systems, and calibration errors can be amplified tens or even hundreds of times in the measurement; overall the process is complex and the error is large. Alternatively, reference objects can be placed on the road surface at measured distances, and a mathematical model between distance and image coordinates fitted directly from the reference-object distances and pixel data, realizing ranging. This method is also widely used in engineering, but it needs a large site, and its accuracy suffers from survey and data-fitting errors. As for height measurement, sensors such as laser radar remain the mainstream, and published research on real-time monocular height measurement is rare.
Summary of the invention
The object of the present invention is to provide a distance- and height-measurement method for a vehicle-mounted monocular camera based on a vertical target. By performing template matching, candidate-point clustering and screening, precise localization and similar operations on the region of interest of a vertical-target image, the method detects and locates corner points at sub-pixel level and, in combination with a projective geometry model, establishes the mapping between the image ordinate and the actual imaging angle, thereby realizing distance and height measurement. The method not only improves measurement accuracy but also needs no calibration of the camera's intrinsic and extrinsic parameters; it is simple to operate, highly practicable, and of strong engineering value and research significance.
The distance- and height-measurement method for a vehicle-mounted monocular camera based on a vertical target provided by the invention comprises the following steps:
Step 101: after the camera is mounted at a suitable position on the vehicle body, place the vertical target directly in front of the camera and as parallel to it as possible; the collected target image must contain the lowest corner point, and the total number of corner points must be greater than 8. Then measure the camera mounting height h and the horizontal distance D from the camera to the target surface;
Step 102: collect a target image of resolution mm*nn and set up the image coordinate system: the top-left pixel is the origin, the x-axis points right and the y-axis points down. Set the region of interest for corner detection: [mm/3-1, 2*mm/3-1] in the x direction and [0, nn-1] in the y direction. Partition the region of interest into blocks; the block size s*v is adjustable but generally greater than 50*50. Apply adaptive binarization to each block separately using the maximum between-class variance (Otsu) method;
Step 103: search the whole region of interest for matches against the designed templates (a) and (b) respectively, and retain the top-left corner of every matching sub-image, forming the candidate point set CC = {(x1, y1), (x2, y2), ..., (xk, yk)} of target corners, where k is the total number of matching sub-images;
Step 104: classify the points of CC. Two points whose abscissa difference and ordinate difference are both less than a threshold T1 are assigned to the same group; supposing g groups in all, the set of groups is W = {w1, w2, ..., wg}. Then compute the centre coordinates (xwi, ywi) (i = 1, 2, ..., g) of each group; groups whose centre abscissae differ by less than T1 are assigned to the same large class. Finally retain the large class containing the most groups, reject the other groups, add half the template width and height e to the x and y coordinates of each remaining group centre, and save the result as the initial corner set A = {(xa1, ya1), (xa2, ya2), ..., (xaj, yaj)}, where j is the number of retained groups and the points of A are arranged by descending values of ya1, ya2, ..., yaj;
Step 105: find the maximum abscissa xmax in the initial corner set A. Using template (c), search for a matching sub-image, from top to bottom and right to left, in the part of the region of interest whose abscissae are less than xmax - e, stopping at the first match. Supposing the top-left corner of the matched sub-image is (xf, yf), the reference interval of the corner points is ss = xmax - (xf + e). Then, using template (d), search for a matching sub-image, again from top to bottom and right to left, in the part of the region of interest below and to the left of (xf, yf), stopping at the first match, and record its top-left corner as (xj, yj);
Step 106: after the search, judge whether the two positioning reference points (xf, yf) and (xj, yj) of step 105 both exist; if so, proceed to step 107, otherwise return to step 101;
Step 107: compare the maximum ordinate ya1 of the initial corner set A with yj. If ya1 - yj is approximately 3 times ss, take (xa1, ya1) as the lowest corner of the target; otherwise take (xmax, yj + ss*3) as the lowest corner. Then, using the initial corner set A and the reference interval ss, complete the full set of corners in the image, obtaining the corner set C = {(xc1, yc1), (xc2, yc2), ..., (xcn, ycn)}, where n is the total number of target corners in the image and the points of C are likewise arranged by descending y. Finally, using the cvFindCornerSubPix() function of OpenCV with corner set C as the initial estimate, refine C into the sub-pixel corner set B = {(xb1, yb1), (xb2, yb2), ..., (xbn, ybn)};
Step 108: the heights of the n corners in the image form the set HH = {1.00, 1.05, ..., 1.00 + (n-1)*0.05}. Using the parameters h and D, compute the actual imaging angle set Q = {q1, q2, ..., qn} of the corners, where each angle corresponds in order to the ordinates {yb1, yb2, ..., ybn} of the sub-pixel corner set B, yielding the mapping point set P = {(yb1, q1), (yb2, q2), ..., (ybn, qn)}. Fit a straight line through each pair of adjacent points to obtain the set of adjacent-point mappings F = {f1, f2, ..., fn-1};
Step 109: during real-time ranging, take as input the y coordinate yz of the obstacle's bottom edge obtained by the obstacle-detection algorithm. First determine the mapping fi (0 < i < n) to which yz belongs, use the straight-line equation of fi to compute the actual imaging angle qz corresponding to yz, and then, with qz as input, compute the distance Lz of the obstacle from the ranging equation;
Step 110: judge, according to the needs of the system, whether the height of the obstacle must be measured; if so, continue with step 111, otherwise end the ranging of this obstacle;
Step 111: take as input the y coordinate yd of the obstacle's top edge obtained by the obstacle-detection algorithm. First determine the mapping fi (0 < i < n) to which yd belongs, use the straight-line equation of fi to compute the actual imaging angle qd corresponding to yd, and then, with qd and the obstacle distance Lz as inputs, compute the obstacle's height from the height-measurement equation.
The distance- and height-measurement method for a vehicle-mounted monocular camera based on a vertical target of the present invention has the following advantages:
(1) the invention needs neither calibration of the camera's intrinsic and extrinsic parameters nor repeated placement of a calibration board or reference object, reducing the chance of error; this both removes operating steps and improves measurement accuracy;
(2) a region of interest and four templates are designed, and the corner points and positioning reference points of the vertical target are detected by template matching; compared with traditional corner detection, the target points on the target can be detected more accurately, reducing the computation of the subsequent clustering and screening;
(3) by detecting the positioning reference points, the ordinates of the sub-pixel corner set are put into one-to-one correspondence with the actual imaging angles, and the mapping between the image ordinate and the actual imaging angle is fitted by piecewise straight lines, reducing the error caused by a single straight-line fit and thus improving measurement accuracy;
(4) the invention requires no other sensor such as radar; height measurement with a monocular camera is achieved on the basis of the computed actual imaging angle and distance, greatly reducing cost.
Accompanying drawing explanation
Fig. 1 is the overall flow chart of the distance- and height-measurement method for a vehicle-mounted monocular camera based on a vertical target of the present invention;
Fig. 2 is the flow chart of corner detection by template matching in the present invention;
Fig. 3 is a schematic diagram of the vertical target used by the present invention;
Fig. 4 is a schematic diagram of the four templates used for corner and positioning-reference-point detection in the present invention, with e = 11.
Embodiment
Below in conjunction with accompanying drawing, technical scheme of the present invention is described in further detail.
The invention provides a distance- and height-measurement method for a vehicle-mounted monocular camera based on a vertical target, intended mainly to compute, once the vehicle-mounted monocular camera has detected an object on the road surface, the object's height and its distance from the vehicle. The distance and height of obstacles ahead are important parameters for collision warning, path planning and vehicle classification in unmanned driving or driver assistance systems, and for passability detection at culverts and bridges, so the method has strong engineering application value. The method needs only a single camera to realize monocular distance and height measurement, achieves high measurement accuracy, and is simple and feasible to operate.
The method provided by the invention does not require camera calibration, avoiding the influence of intrinsic- and extrinsic-parameter calibration errors on the measurement; nor does it require repeated placement of reference objects or long-distance surveying, reducing the sources of error, so that the measurement accuracy satisfies the ranging and height-measurement precision and real-time requirements of an intelligent vehicle's environment perception system. After the vertical target is placed at a suitable position directly in front of the camera, one image is collected and block-wise adaptive binarization is applied to the region of interest; the vertical target is shown schematically in Fig. 3. Within the region of interest, all sub-images matching templates (a) and (b) are found, giving the candidate point set CC of corner points; the detailed flow is shown in Fig. 2 and the templates in Fig. 4. After clustering, screening and similar operations on CC, the initial corner set A is obtained, and the positional relationship between the reference points and the point of maximum ordinate in A is used to complete and locate all corners. Since the height of every corner and its horizontal distance from the camera are known, the actual imaging angle of each corner can be obtained, and the mapping between the image ordinate and the actual imaging angle is fitted by piecewise straight lines; finally, the bottom and top pixel values of an obstacle in the image realize ranging and height measurement respectively.
The actual imaging angle refers to: the intersection line of the side plane of the object under test nearest the vehicle body with the ground is joined by a straight line to the camera's optical centre; the actual imaging angle is the angle between this line and the vertical line through the camera's optical centre.
Fig. 1 shows the overall flow of the distance- and height-measurement method for a vehicle-mounted monocular camera based on a vertical target of the present invention, which is divided into the following steps:
Step 101: mount the camera at a suitable position on the vehicle body, then place the vertical target directly in front of the camera and as parallel to it as possible; the image collected by the camera must contain the lowest corner point of the vertical target, and the total number of corner points must be greater than 8. Measure the camera mounting height h and the horizontal distance D from the camera to the target surface;
Step 102: collect a target image of resolution mm*nn and set up the image coordinate system: the top-left pixel is the origin, the x-axis points right and the y-axis points down. Set the region of interest for corner detection: [mm/3-1, 2*mm/3-1] in the x direction and [0, nn-1] in the y direction. Partition the region of interest into blocks and apply adaptive binarization to each block separately using the maximum between-class variance (Otsu) method, converting the region of interest into a binary image;
The block size s*v can be adjusted according to the width and height of the region of interest, but s*v is generally greater than 50*50 and less than 150*150, in pixels.
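The block-wise binarization of step 102 can be sketched as follows. This is a minimal NumPy illustration of per-block thresholding by the maximum between-class variance (Otsu) method; the function names and the default block size are illustrative, not part of the patent, and an OpenCV implementation could equally call cv2.threshold with the THRESH_OTSU flag on each block.

```python
import numpy as np

def otsu_threshold(block: np.ndarray) -> int:
    # Maximize the between-class variance over all candidate thresholds t.
    hist = np.bincount(block.ravel(), minlength=256).astype(float)
    total = hist.sum()
    sum_all = np.dot(np.arange(256), hist)
    best_t, best_var, w0, sum0 = 0, -1.0, 0.0, 0.0
    for t in range(256):
        w0 += hist[t]
        if w0 == 0 or w0 == total:
            continue
        sum0 += t * hist[t]
        m0 = sum0 / w0                      # mean of the "dark" class
        m1 = (sum_all - sum0) / (total - w0)  # mean of the "bright" class
        var = w0 * (total - w0) * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def binarize_roi(img: np.ndarray, s: int = 64, v: int = 64) -> np.ndarray:
    # Partition the region of interest into s*v blocks and binarize each block
    # with its own Otsu threshold, yielding a 0/255 binary image.
    out = np.empty_like(img)
    h, w = img.shape
    for y in range(0, h, v):
        for x in range(0, w, s):
            blk = img[y:y + v, x:x + s]
            t = otsu_threshold(blk)
            out[y:y + v, x:x + s] = np.where(blk > t, 255, 0)
    return out
```

Per-block thresholds make the binarization robust to uneven illumination across the target surface, which a single global threshold would not be.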
Step 103: search the whole region of interest against templates (a) and (b) respectively, detecting corners by template matching, to obtain the candidate point set CC = {(x1, y1), (x2, y2), ..., (xk, yk)} of target corners, where k is the total number of matching sub-images. The execution flow of this step is shown in Fig. 2;
As shown in Fig. 2, corner detection by template matching in the method of the invention is divided into the following steps:
Step 201: initialize the loop parameters ii and jj to zero;
Step 202: with (mm/3-1+jj, ii) as the top-left corner, extract from the region of interest a sub-image S to be tested, of the same size as the template. The templates, shown in Fig. 4, are four templates of identical size 2e*2e pixels but different pixel values, where e is half the side length of the square template. In template (a), the e*e pixels of the top-left and bottom-right corners have value 0 (black) and the rest 255 (white); in template (b), the e*e pixels of the top-right and bottom-left corners are 0 (black) and the rest 255 (white); in template (c), the e*e pixels of the top-right corner are 0 (black) and the rest 255 (white); in template (d), the e*e pixels of the bottom-right corner are 0 (black) and the rest 255 (white). Their uses also differ: templates (a) and (b) are used when searching for the corner points in the target image, and templates (c) and (d) when searching for the positioning reference points.
Step 203: compute the difference image G of sub-image S and the template;
The difference image is obtained by taking, pixel by pixel at the same positions, the absolute difference of sub-image S and the template image: where the pixel values of the two binary images are equal, the difference image is 0 (black), and where they differ it is 255 (white).
Step 204: for each white pixel in difference image G, proceed as follows: with this white pixel as the top-left corner, extend rightward and downward to form a block of 7 pixels × 7 pixels, and with this block as the statistical unit compute the density M of white pixels in the block;
The density M is, within a 7 pixel × 7 pixel block of the difference image, the number gg of pixels of value 255 (white) divided by the total number of pixels, 49, i.e.
M = gg / 49    (1)
Step 205: judge the density M as follows:
(A) judge whether any density M exceeds the density threshold: if the density M of some region is greater than the set threshold, template (a) is deemed not to match and step (B) is entered; otherwise, go to step 206;
(B) compute the difference image of sub-image S and template (b): if the density M of some region is greater than the set threshold, template (b) is deemed not to match and step 207 is entered; otherwise, go to step 206;
The density threshold is set to 0.32. If the threshold is too large, two images with considerable differences will be deemed to match, increasing false matches; if it is too small, the small scattered differences caused by lighting, or by a slight rotation of the target due to road unevenness, will be mistaken for non-matches. Testing shows that the threshold set by the invention achieves good detection results.
Step 206: store the top-left coordinates (mm/3-1+jj, ii) of sub-image S in the candidate point set CC;
Step 207: increment ii by 1 and judge whether ii is greater than nn-1-2e; if so, enter step 208, otherwise return to step 202;
Step 208: increment jj by 1 and reset ii to 0; judge whether jj is greater than mm/3-2e; if so, end the matching of this template, otherwise return to step 202;
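Steps 201 to 208 can be sketched as a scan loop over the binarized region of interest, testing each sub-image against both templates with the difference-image and 7×7 density criterion of equation (1). This is an illustrative NumPy sketch, not the patent's implementation; the function names and the handling of the ROI x-offset are assumptions.

```python
import numpy as np

def matches(sub: np.ndarray, tpl: np.ndarray, thresh: float = 0.32) -> bool:
    # Steps 203-205: difference image, then the white-pixel density test.
    diff = np.where(sub != tpl, 255, 0).astype(np.uint8)
    ys, xs = np.nonzero(diff)                   # coordinates of white pixels
    for y, x in zip(ys, xs):
        gg = np.count_nonzero(diff[y:y + 7, x:x + 7])  # white pixels in the 7x7 block
        if gg / 49.0 > thresh:                  # Eq. (1): M = gg / 49
            return False                        # a dense mismatch region: reject
    return True

def scan_roi(roi: np.ndarray, tpl_a: np.ndarray, tpl_b: np.ndarray, x0: int = 0):
    # Steps 201-208: scan the ROI and keep the top-left corner of every
    # sub-image that matches template (a) or, failing that, template (b).
    cc = []
    size = tpl_a.shape[0]                       # square templates of side 2e
    h, w = roi.shape
    for jj in range(w - size + 1):              # outer loop over columns
        for ii in range(h - size + 1):          # inner loop over rows
            sub = roi[ii:ii + size, jj:jj + size]
            if matches(sub, tpl_a) or matches(sub, tpl_b):
                cc.append((x0 + jj, ii))        # x0 = abscissa of the ROI origin
    return cc
```

The density criterion tolerates thin, scattered mismatch pixels (caused by lighting or a slight target rotation) while rejecting any sub-image whose disagreement with the template is locally dense.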
Step 104: cluster and screen the points of candidate point set CC;
Because the background within the region of interest may also contain sub-images matching templates (a) and (b), clustering and screening are needed to reject candidate points that are not target corner points.
Clustering: first classify all points of CC. Compute the abscissa difference and ordinate difference of the first and second points; if both differences are less than the threshold T1, assign the two points to the same group w1, otherwise to two groups w1 and w2. Then, point by point, compute the abscissa and ordinate differences between each remaining point of CC and all points of the existing groups; if both differences to some point in a group are less than T1, assign the point to that group, otherwise treat it as belonging to no existing class and create a new group for it. Supposing g groups in all, the set of groups is W = {w1, w2, ..., wg}. Compute the centre coordinates (xwi, ywi) (i = 1, 2, ..., g) of each group; groups whose centre abscissae differ by less than the threshold T1 are assigned to the same large class;
The centre coordinates of a group are obtained by summing the abscissae and ordinates of all its points separately and dividing by the number of points in the group, giving the x, y values of the group centre.
Screening: because the target is perpendicular to the ground and the camera is mounted parallel, the line of the target's corner points in the region of interest should be essentially vertical and their abscissa differences less than T1, so they should belong to one large class. Since the groups of mismatched points falling into one large class are few relative to the groups of true corner points in one large class, the large class containing the most groups is retained and the other large classes are rejected; half the template width and height e is added to the x and y coordinates of each remaining group centre, and the result is saved as the initial corner set A = {(xa1, ya1), (xa2, ya2), ..., (xaj, yaj)}, where j is the number of retained groups and the points of A are arranged by descending values of ya1, ya2, ..., yaj.
Step 105: find the maximum abscissa xmax in the initial corner set A. Using template (c), search for a matching sub-image, from top to bottom and right to left, in the part of the region of interest whose abscissae are less than xmax - e, stopping at the first match. Supposing the top-left corner of the matched sub-image is (xf, yf), the reference interval of the corner points is ss = xmax - (xf + e). Then, using template (d), search for a matching sub-image, again from top to bottom and right to left, in the part of the region of interest below and to the left of (xf, yf), stopping at the first match, and record its top-left corner as (xj, yj);
This detection of the positioning reference points by template matching comprises two parts, matching template (c) and matching template (d). When matching template (c), search point by point, from top to bottom and right to left, in the region [mm/3-1, xmax-2*e] in x and [0, nn-1-2*e] in y, testing each point as follows: with the search point as top-left corner, extract a sub-image S of the same size as the template; compute the difference image G of S and template (c); then extend each white pixel of G into a block of 7 pixels × 7 pixels and compute the density M of white pixels in every block. If the density M of some block exceeds the set threshold, S is deemed not to match template (c) and the search continues with the next point as top-left corner; otherwise a match is declared and the search ends. When matching template (d), the search region becomes [mm/3-1, xf-2*e] in x and [yf+2*e, nn-1-2*e] in y, with the same top-to-bottom, right-to-left point-by-point search; the test procedure is identical to that of template (c).
Step 106: after the search, judge whether the two positioning reference points (xf, yf) and (xj, yj) of step 105 both exist; if so, proceed to step 107, otherwise return to step 101;
The positioning reference point (xf, yf) may be reference point ① or reference point ② shown in Fig. 3: if the target is perfectly vertical or rotated clockwise, the first point found is reference point ①, otherwise it is reference point ②; and (xj, yj) is reference point ③. The purposes of searching for the positioning reference points are: (1) analysis of the target surface of the vertical target shows that, to the left of the column of corner points, only two places satisfy the feature of template (c), and below and to their left only one place satisfies the feature of template (d), so finding the positioning reference points further confirms the correctness of the detected initial corner set A; (2) owing to the uniqueness of the position of reference point ③ relative to the corner points, it can be used to locate each corner. Reference point ① is a point with a distinctive grey-level distribution to the left of the 6th corner counted from the bottom: the e*e pixels of its top-right corner are 0 (black) and the rest 255 (white); reference point ② is a point with the same grey-level distribution to the left of the 4th corner; reference point ③ lies roughly two intervals to the left of the 2nd corner, its grey-level distribution being 0 (black) for the e*e pixels of the bottom-right corner and 255 (white) for the rest.
Thus, if because of improper camera mounting or target placement the two positioning reference points (xf, yf) and (xj, yj) are not detected in step 105, the accuracy of the initial corner set A cannot be judged, still less can the corners be located, and the procedure must return to step 101 and restart.
Step 107: compare the maximum ordinate ya1 of the initial corner set A with yj. If ya1 - yj is approximately 3 times ss, take (xa1, ya1) as the lowest corner of the target; otherwise take (xmax, yj + ss*3) as the lowest corner. Then complete the corners to obtain the corner set C = {(xc1, yc1), (xc2, yc2), ..., (xcn, ycn)}, where n is the total number of target corners in the set and the points of C are likewise arranged by descending y. Finally, using the cvFindCornerSubPix() function of OpenCV with corner set C as the initial estimate, refine C into the sub-pixel corner set B = {(xb1, yb1), (xb2, yb2), ..., (xbn, ybn)};
The process of completing the corners is: if (xa1, ya1) is the lowest corner of the target, compute in turn the differences yaii - ya(ii+1) of adjacent ordinates in the initial corner set A; when a difference is about t times ss, t-1 points must be inserted between the ii-th and (ii+1)-th points of A (if t is 1, no insertion is needed between the two points), with coordinates (xaii, yaii - jj*ss) (jj = 1, ..., t-1), and when the last point (xaj, yaj) of A is reached, yaj is used in the difference computation. If (xmax, yj + ss*3) is the lowest corner, the difference yj + ss*3 - ya1 must also be computed and the corners between (xmax, yj + ss*3) and (xa1, ya1) completed in the same way.
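The corner-completion rule of step 107 can be sketched as follows; a minimal Python illustration assuming A is already sorted by descending ordinate and that the corners share one abscissa, as on the vertical target. The function name is illustrative.

```python
def complete_corners(A, ss):
    # Step 107 gap filling: when adjacent ordinates in A differ by about t*ss,
    # insert t-1 equally spaced corner points between them.
    out = []
    for (x0, y0), (x1, y1) in zip(A, A[1:]):
        out.append((x0, y0))
        t = round((y0 - y1) / ss)       # how many intervals the gap spans
        for k in range(1, t):           # t = 1 inserts nothing
            out.append((x0, y0 - k * ss))
    out.append(A[-1])
    return out
```

The result corresponds to the corner set C, which is then refined to sub-pixel accuracy with OpenCV's cvFindCornerSubPix().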
Step 108: the heights of the n corners in the image form the set HH = {h1, h2, ..., hn} = {1.00, 1.05, ..., 1.00 + (n-1)*0.05}. Using the parameters h and D, compute by formula (2) the actual imaging angle set Q = {q1, q2, ..., qn} of the corners, where each angle corresponds in order to the ordinates {yb1, yb2, ..., ybn} of the sub-pixel corner set B, yielding the mapping point set P = {(yb1, q1), (yb2, q2), ..., (ybn, qn)}. Fit a straight line through each pair of adjacent points of the mapping point set to obtain the set of adjacent-point mappings F = {f1, f2, ..., fn-1}, as in formula (3);
q_ii = tan⁻¹[D / (h - h_ii)]    (2)
where ii = 1, ..., n.
f_ii = [(q_(ii+1) − q_ii)/(y_b(ii+1) − y_bii)]·(y − y_bii) + q_ii    (3)
Where ii = 1, …, n−1.
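Equations (2) and (3) together build the ordinate-to-angle lookup of step 108. A minimal Python sketch, with illustrative values for h, D and the sub-pixel ordinates (all names and numbers below are assumptions, not the patent's data):

```python
import math

# Sketch of Step 108; variable names and sample values are illustrative.
h, D = 1.32, 1.8            # camera mounting height (m), horizontal distance to target (m)
n = 5
HH = [1.00 + i * 0.05 for i in range(n)]          # corner heights on the target (m)
yb = [420.0, 400.5, 381.0, 361.5, 342.0]          # assumed sub-pixel ordinates of the corners

# Equation (2): actual imaging angle of each corner
Q = [math.atan(D / (h - hi)) for hi in HH]

# Equation (3): straight line fitted through each adjacent pair of (y, q) points
def make_f(ii):
    slope = (Q[ii + 1] - Q[ii]) / (yb[ii + 1] - yb[ii])
    return lambda y: slope * (y - yb[ii]) + Q[ii]

F = [make_f(ii) for ii in range(n - 1)]
# each f in F maps an image ordinate inside its segment to an imaging angle
```

Each f_ii reproduces the exact angles at its two endpoints and interpolates linearly in between, which is all the real-time lookup of steps 109 and 111 requires.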
Step 109: During real-time ranging, the y coordinate y_z of the bottom of the obstacle, obtained by the obstacle detection algorithm, is used as the input parameter. First the mapping relation f_ii (0 < ii < n) to which y_z belongs is determined, the straight-line equation of f_ii is used to compute the actual imaging angle q_z corresponding to y_z, and q_z is then used as input to compute the obstacle distance L_z by the ranging equation (4);
L_z = h·tan(q_z)    (4)
When y_z is less than or equal to y_b2, mapping relation f_1 is chosen to compute the actual imaging angle q_z; when y_z is greater than or equal to y_b(n−1), mapping relation f_(n−1) is chosen to compute the actual imaging angle q_z; otherwise, the interval containing y_z is found first, i.e. y_bii < y_z < y_b(ii+1), and mapping relation f_ii is then chosen to compute the actual imaging angle q_z.
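The selection rule of step 109 can be sketched as below. All names are illustrative; yb is assumed sorted in increasing order so that the bracketing test y_bii < y_z < y_b(ii+1) of the text applies directly, and the two boundary cases fall back to the first and last segments:

```python
import math

# Sketch of the real-time ranging lookup of Step 109 (illustrative names).
def distance(y_z, yb, Q, h):
    """Map the obstacle-bottom ordinate y_z to a distance via equation (4)."""
    n = len(yb)
    if y_z <= yb[1]:                       # boundary case: use f_1
        ii = 0
    elif y_z >= yb[n - 2]:                 # boundary case: use f_(n-1)
        ii = n - 2
    else:                                  # find the bracketing segment
        ii = next(i for i in range(n - 1) if yb[i] <= y_z <= yb[i + 1])
    # straight line f_ii through (yb[ii], Q[ii]) and (yb[ii+1], Q[ii+1]), as in (3)
    q_z = (Q[ii + 1] - Q[ii]) / (yb[ii + 1] - yb[ii]) * (y_z - yb[ii]) + Q[ii]
    return h * math.tan(q_z)               # equation (4): L_z = h * tan(q_z)
```

Inclusive comparisons in the bracketing test avoid a gap when y_z coincides exactly with a calibrated corner ordinate.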
Step 110: Decide according to the system whether the height of the obstacle needs to be measured; if so, continue with step 111; otherwise, end the ranging of this obstacle;
Different systems need different information: collision warning, for example, may not need height information, whereas passability detection at bridges, culverts and the like must obtain it, so the system must be queried as to whether height information is required.
Step 111: Using the same decision method as in the ranging step, the mapping relation f_ii (0 < ii < n) to which the obstacle-top y coordinate y_d, obtained by the obstacle detection algorithm, belongs is determined first; the straight-line equation of f_ii is used to compute the actual imaging angle q_d corresponding to y_d, and q_d together with the obstacle distance L_z is then used as input to compute the obstacle height H_z by the height equation (5).
H_z = h − a·L_z·tan(|90° − q_d|)    (5)
Where a takes the value −1 when q_d ≥ 90°, and 1 when q_d < 90°.
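Equation (5) with its sign rule for a can be sketched as follows (illustrative names; angles are taken in degrees here for readability):

```python
import math

# Sketch of the height measurement of Step 111 / equation (5); names are illustrative.
def obstacle_height(q_d, L_z, h):
    """H_z = h - a * L_z * tan(|90deg - q_d|), a = -1 if q_d >= 90deg, else a = 1."""
    a = -1 if q_d >= 90.0 else 1
    return h - a * L_z * math.tan(math.radians(abs(90.0 - q_d)))

# A top point imaged above the horizontal (q_d > 90 deg) lies above the camera,
# so its height h + L_z*tan(q_d - 90 deg) exceeds the mounting height h:
print(obstacle_height(95.0, 10.0, 1.32))
```

The sign rule simply distinguishes obstacle tops above the optical center (q_d ≥ 90°, height added to h) from those below it (q_d < 90°, height subtracted from h).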
Table 5 gives the ranging results and errors obtained with the method. The selected image resolution is 752*480, the camera mounting height is 1.32 m, and the horizontal distance between target and camera is 1.8 m. As can be seen from Table 5, the overall error of the method is very small, generally below 1%; at 80 m the error rises to 2.3029%, possibly because of road-surface unevenness or limited obstacle-detection precision, but this still meets the long-range accuracy requirements of an intelligent vehicle.
Table 6 gives the height-measurement results and errors obtained with the method. Image resolution, camera height and target placement are the same as in the ranging experiment, and the test subject is a person 1.77 m tall. As can be seen from Table 6, the error stays within 4%. Because the height measurement is affected by ranging error, obstacle detection, image distortion and the like, the height error is generally larger than the ranging error, but it still essentially meets the requirements of vehicle passability detection at bridges, culverts and the like.
Table 5: Ranging results and errors obtained by applying the method provided by the invention
Table 6: Height-measurement results and errors obtained by applying the method provided by the invention

Claims (7)

1. A distance and height measurement method for a vehicle-mounted monocular camera based on a vertical target, characterized in that it comprises the following steps:
Step 101: After the camera is mounted at a suitable position on the vehicle body, the vertical target is first placed directly in front of the camera, and the camera mounting height h and the horizontal distance D to the target surface of the vertical target are measured; the distance between the vertical target and the camera must be such that the collected target image contains the lowest corner point and the total number of corner points is greater than 8;
Step 102: Collect the target image, with resolution mm*nn, and set the image coordinate system: the upper-left corner point is the origin, horizontally to the right is the positive x direction, and vertically downward is the positive y direction; set the region of interest for corner detection to [mm/3−1, 2*mm/3−1] in the x direction and [0, nn−1] in the y direction; divide the region of interest into blocks and apply adaptive binarization to each block;
Step 103: Perform a full-search match in the region of interest with the designed template a and template b respectively, detecting corner points by the matching method; retain the upper-left corner points of all matched subimages, which form the candidate point set CC = {(x_1, y_1), (x_2, y_2), …, (x_k, y_k)} of target corner points, where k is the total number of matched subimages;
Step 104: Cluster and screen the points in the candidate point set CC to obtain the initial corner point set A;
Step 105: Find the maximum abscissa value x_max in the initial corner set A; with template c, search for a matching subimage from top to bottom and from right to left in the part of the region of interest whose abscissa is less than x_max − e, stopping as soon as a match is found; supposing the upper-left corner of the matched subimage is (x_f, y_f), the reference interval of the corner points is ss = x_max − (x_f + e); then, with template d, search for a matching subimage from top to bottom and from right to left in the region of interest to the lower left of the point (x_f, y_f), again stopping at the first match, and record the upper-left corner of the matched subimage as (x_j, y_j); the corner points (x_f, y_f) and (x_j, y_j) serve as location reference points;
Step 106: After the search, judge whether the two location reference points (x_f, y_f) and (x_j, y_j) of step 105 exist; if they exist, proceed to step 107; otherwise, return to step 101;
Step 107: Compare the maximum ordinate y_c1 in the initial corner set A with y_j; if y_c1 − y_j is 3 times ss, the point (x_c1, y_c1) is taken as the lowest corner point of the target; otherwise, the point (x_max, y_j + ss*3) is taken as the lowest corner point. Then, using the initial corner set A and the reference interval ss, supplement all corner points in the image to completeness, obtaining the corner point set C = {(x_c1, y_c1), (x_c2, y_c2), …, (x_cn, y_cn)}, where n is the total number of target corner points in the image and the points in C are ordered by decreasing y value; finally, using the cvFindCornerSubPix() function in OpenCV with corner set C as the reference, refine C into the sub-pixel corner set B = {(x_b1, y_b1), (x_b2, y_b2), …, (x_bn, y_bn)};
Step 108: The height set of the n corner points in the image is HH = {1.00, 1.05, …, 1.00 + (n−1)*0.05}; using the parameters h and D, compute the actual imaging angle set Q = {q_1, q_2, …, q_n} of the corner points, where each angle corresponds in order to the ordinates {y_b1, y_b2, …, y_bn} of the sub-pixel corner set B, giving the mapping point set P = {(y_b1, q_1), (y_b2, q_2), …, (y_bn, q_n)}; fit a straight line through each pair of adjacent points to obtain the adjacent-point mapping relation set F = {f_1, f_2, …, f_(n−1)};
Step 109: During real-time ranging, the y coordinate y_z of the bottom of the obstacle, obtained by the obstacle detection algorithm, is used as the input parameter; first determine the mapping relation f_i, 0 < i < n, to which y_z belongs; use the straight-line equation of f_i to compute the actual imaging angle q_z corresponding to y_z, and then use q_z as input to compute the obstacle distance L_z by the ranging equation:
L_z = h·tan(q_z)    (4)
When y_z is less than or equal to y_b2, mapping relation f_1 is chosen to compute the actual imaging angle q_z; when y_z is greater than or equal to y_b(n−1), mapping relation f_(n−1) is chosen to compute the actual imaging angle q_z; otherwise, the interval containing y_z is found first, i.e. y_bii < y_z < y_b(ii+1), and mapping relation f_ii is then chosen to compute the actual imaging angle q_z;
Step 110: Decide according to the system whether the height of the obstacle needs to be measured; if so, continue with step 111; otherwise, end the ranging of this obstacle;
Step 111: The y-direction pixel value y_d of the top of the obstacle, obtained by the obstacle detection algorithm, is used as the input parameter; first determine the mapping relation f_i, 0 < i < n, to which y_d belongs; use the straight-line equation of f_i to compute the actual imaging angle q_d corresponding to y_d, and then use q_d together with the obstacle distance L_z as input to compute the obstacle height H_z by the height equation:
H_z = h − a·L_z·tan(|90° − q_d|)    (5)
Where a takes the value −1 when q_d ≥ 90°, and 1 when q_d < 90°;
The templates comprise four templates of identical size, each 2e*2e, in pixels, where e is half the side length of the square template; in template a, the pixel values of the e*e pixels in the upper-left and lower-right corners are 0 and the rest are 255; in template b, the pixel values of the e*e pixels in the upper-right and lower-left corners are 0 and the rest are 255; in template c, the pixel values of the e*e pixels in the upper-right corner are 0 and the rest are 255; in template d, the pixel values of the e*e pixels in the lower-right corner are 0 and the rest are 255; template a and template b are used when searching for the corner points in the target image, and template c and template d are used when searching for the location reference points.
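The four templates described above can be constructed, for illustration, as pure-Python pixel arrays (the function name and the choice e = 4 are ours, not the patent's):

```python
# Illustrative construction of the four matching templates of claim 1.
# Each template is 2e*2e pixels; pixel values are 0 (black) or 255 (white).
def make_templates(e):
    size = 2 * e
    white = [[255] * size for _ in range(size)]

    def filled(corners):
        """Copy the white template and zero each e*e block anchored at (row, col)."""
        t = [row[:] for row in white]
        for (r0, c0) in corners:
            for r in range(r0, r0 + e):
                for c in range(c0, c0 + e):
                    t[r][c] = 0
        return t

    a = filled([(0, 0), (e, e)])   # upper-left and lower-right e*e blocks are 0
    b = filled([(0, e), (e, 0)])   # upper-right and lower-left e*e blocks are 0
    c = filled([(0, e)])           # only the upper-right e*e block is 0
    d = filled([(e, e)])           # only the lower-right e*e block is 0
    return a, b, c, d

a, b, c, d = make_templates(4)
```

Templates a and b are checkerboard corner patterns (matching interior corners of the target), while c and d each darken a single quadrant, which is what lets them lock onto the edge corners used as location reference points.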
2. The distance and height measurement method for a vehicle-mounted monocular camera based on a vertical target according to claim 1, characterized in that: in the region of interest of step 102, the block size is greater than 50*50, in pixels, and the maximum between-class variance method (Otsu's method) is applied to each block for adaptive binarization.
3. The distance and height measurement method for a vehicle-mounted monocular camera based on a vertical target according to claim 1, characterized in that the corner detection by the matching method of step 103 comprises the following steps:
Step 201: Initialize the loop parameters ii and jj to zero;
Step 202: With (mm/3−1+jj, ii) as the upper-left corner, extract from the region of interest a subimage S to be detected, of the same size as template a or template b;
Step 203: Compute the difference image G of subimage S and template a;
The difference image is obtained by taking the difference of the pixel values of subimage S and the template a image at the same pixel position and taking the absolute value; that is, when the pixel values of corresponding pixels of the two binary images are equal, the pixel value of the difference image at that pixel is 0, and when they differ, it is 255;
Step 204: For each white pixel in the difference image G, proceed as follows: with the white pixel as the upper-left corner, extend a block of 7 pixels × 7 pixels to the right and downward, use this block as the statistical unit, and compute the density M of white pixels in the block;
The density M is the number gg of pixels with value 255 in a block of 7 pixels × 7 pixels of the difference image, divided by the total number of pixels, 49; the formula is as follows:
M = gg/49    (1)
Step 205: Judge the density M as follows:
(A) Judge whether a density M greater than the density threshold exists; if the density M of some region is greater than the set density threshold, template a is considered not to match, and step (B) is entered; otherwise, step 206 is entered;
(B) Compute the difference image of subimage S and template b; if the density M of some region is greater than the set density threshold, template b is considered not to match and step 207 is entered; otherwise, step 206 is entered;
Step 206: Store the upper-left corner coordinates (mm/3−1+jj, ii) of subimage S in the candidate point set CC;
Step 207: Add 1 to ii and judge whether ii is greater than nn−1−2e; if so, enter step 208; otherwise, return to step 202;
Step 208: Add 1 to jj and reset ii to its initial value 0; judge whether jj is greater than mm/3−2e; if so, end the matching of this template; otherwise, return to step 202.
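The difference-image density test of steps 203–205 might be sketched as follows (illustrative names; the 0.32 threshold is taken from claim 4, and blocks are only evaluated where a full 7×7 window fits, an edge-handling choice the claims leave open):

```python
# Sketch of the density test of claim 3, steps 203-205 (names are illustrative).
def mismatch(sub, tpl, thresh=0.32):
    """Return True if any 7x7 block of the difference image has white density > thresh."""
    size = len(tpl)
    # step 203: pixel-wise absolute difference of two binary images (values 0 or 255)
    G = [[255 if sub[r][c] != tpl[r][c] else 0 for c in range(size)]
         for r in range(size)]
    # step 204: blocks are anchored at white pixels and extend right and downward
    for r0 in range(size - 6):
        for c0 in range(size - 6):
            if G[r0][c0] != 255:
                continue
            gg = sum(G[r0 + r][c0 + c] == 255 for r in range(7) for c in range(7))
            if gg / 49 > thresh:           # equation (1): M = gg / 49
                return True                # step 205: template does not match
    return False
```

Because only dense 7×7 clusters of disagreement reject a match, isolated difference pixels caused by binarization noise along the black/white boundaries do not prevent a corner from being accepted.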
4. The distance and height measurement method for a vehicle-mounted monocular camera based on a vertical target according to claim 3, characterized in that the density threshold is set to 0.32.
5. The distance and height measurement method for a vehicle-mounted monocular camera based on a vertical target according to claim 1, characterized in that the clustering of step 104 is specifically: first all points in the candidate point set CC are classified, as follows: the difference of the abscissas and the difference of the ordinates of the first point and the second point are computed; if both differences are less than the threshold T1, the two points are assigned to the same group w_1; otherwise, the two points are assigned to two groups w_1 and w_2; then, for each further point in the candidate point set CC, the differences of abscissa and ordinate with respect to all points of the existing groups are computed point by point; if both differences with respect to some point in a group are less than the threshold T1, the point being classified is assigned to that group; otherwise, it is considered not to belong to any existing class, and a new group is added; supposing g groups are obtained in total, the set of groups is W = {w_1, w_2, …, w_g}; the center point coordinates (x_wi, y_wi), i = 1, 2, …, g, of each group are computed, and groups whose center point abscissas differ by less than the threshold T1 are marked as the same large class; computing the center point coordinates of a group means adding all abscissas and all ordinates of the points in the group respectively and dividing by the total number of points in the group, the resulting x and y values being the abscissa and ordinate of the group center point;
The screening is specifically: in the region of interest, the differences of the abscissas of the target corner points are less than the threshold T1, so they fall into one large class; the large class containing the most groups is retained and the other large classes are rejected; the half template width and height e is added to the center point coordinates of the retained groups in the x and y directions respectively, and the result is saved as the initial corner point set A = {(x_a1, y_a1), (x_a2, y_a2), …, (x_aj, y_aj)}, where j is the number of retained groups and the points in A are ordered by decreasing values of y_a1, y_a2, …, y_aj.
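The grouping rule of the clustering step can be sketched as below (a rough illustration with assumed names; the claim's comparison against "some point in a group" is implemented as an any() test over the group's members):

```python
# Sketch of the grouping rule of claim 5 (names and threshold are illustrative).
def group_points(CC, T1):
    """Assign each candidate point to the first group containing a point within T1
    in both coordinates; otherwise start a new group."""
    groups = []
    for p in CC:
        for g in groups:
            if any(abs(p[0] - q[0]) < T1 and abs(p[1] - q[1]) < T1 for q in g):
                g.append(p)
                break
        else:
            groups.append([p])
    return groups

def centers(groups):
    """Per-group mean of abscissas and ordinates, as the claim defines the center."""
    return [(sum(x for x, _ in g) / len(g), sum(y for _, y in g) / len(g))
            for g in groups]

g = group_points([(10, 10), (11, 11), (50, 50)], T1=5)
```

Each group then contributes one center point; shifting the retained centers by (e, e) turns a matched subimage's upper-left corner back into the corner location itself, which is how the initial corner set A is produced.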
6. The distance and height measurement method for a vehicle-mounted monocular camera based on a vertical target according to claim 1, characterized in that the location reference points of step 105 are detected by template matching, comprising the matching of template c and the matching of template d; when matching template c, a point-by-point search is carried out from top to bottom and from right to left in the region [mm/3−1, x_max−2*e] in the x direction and [0, nn−1−2*e] in the y direction, with matching detection at each point; the method is to take the search point as the upper-left corner and extract a subimage S to be detected of the same size as the template; then the difference image G of subimage S and template c is computed; finally, each white pixel in the difference image G is extended into a block of 7 pixels × 7 pixels and the density M of white pixels in all blocks is computed; if the density M of some block is greater than the set density threshold, subimage S is considered not to match template c and the search continues with the next point as upper-left corner; otherwise, a match is declared and the search ends; when matching template d, the search region changes to [mm/3−1, x_f−2*e] in the x direction and [y_f+2*e, nn−1−2*e] in the y direction, with the same top-to-bottom, right-to-left point-by-point search and matching detection, the detection process being identical to that of template c.
7. The distance and height measurement method for a vehicle-mounted monocular camera based on a vertical target according to claim 1, characterized in that the corner supplementation of step 107 is: if the point (x_a1, y_a1) is the lowest corner point of the target, the differences y_aii − y_a(ii+1) of the ordinates of consecutive points in the initial corner set A are computed in turn; when a difference is t times ss, t−1 points must be inserted between the ii-th and (ii+1)-th points of point set A, with coordinates (x_aii, y_aii − jj*ss), jj = 1, …, t−1; when the last point (x_aj, y_aj) in A is reached, y_aj is used to compute the difference; if the point (x_max, y_j + ss*2) is the lowest corner point of the target, the difference y_j + ss*2 − y_a1 must also be computed, and the corner points between the point (x_max, y_j + ss*2) and the point (x_a1, y_a1) are supplemented to completeness in the same way.
CN201310445576.4A 2013-09-26 2013-09-26 Method for measuring distance and height by vehicle-mounted monocular camera based on vertical type target CN103487034B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310445576.4A CN103487034B (en) 2013-09-26 2013-09-26 Method for measuring distance and height by vehicle-mounted monocular camera based on vertical type target

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310445576.4A CN103487034B (en) 2013-09-26 2013-09-26 Method for measuring distance and height by vehicle-mounted monocular camera based on vertical type target

Publications (2)

Publication Number Publication Date
CN103487034A CN103487034A (en) 2014-01-01
CN103487034B true CN103487034B (en) 2015-07-15

Family

ID=49827449

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310445576.4A CN103487034B (en) 2013-09-26 2013-09-26 Method for measuring distance and height by vehicle-mounted monocular camera based on vertical type target

Country Status (1)

Country Link
CN (1) CN103487034B (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102015201317A1 (en) * 2015-01-27 2016-07-28 Bayerische Motoren Werke Aktiengesellschaft Measuring a dimension on a surface
CN105203034B (en) * 2015-07-29 2018-07-17 四川大学 A kind of survey height survey area method based on monocular cam three-dimensional ranging model
CN105241424B (en) * 2015-09-25 2017-11-21 小米科技有限责任公司 Indoor orientation method and intelligent management apapratus
CN105405117B (en) * 2015-10-16 2018-07-03 凌云光技术集团有限责任公司 Angular Point Extracting Method and device based on image outline
CN105539311B (en) * 2016-01-29 2017-12-05 深圳市美好幸福生活安全系统有限公司 The installation method and erecting device of a kind of camera
CN106023271B (en) * 2016-07-22 2018-12-11 武汉海达数云技术有限公司 A kind of target center coordinate extraction method and device
CN106504287B (en) * 2016-10-19 2019-02-15 大连民族大学 Monocular vision object space positioning system based on template
CN107305632B (en) * 2017-02-16 2020-06-12 武汉极目智能技术有限公司 Monocular computer vision technology-based target object distance measuring method and system
CN106981082B (en) * 2017-03-08 2020-04-17 驭势科技(北京)有限公司 Vehicle-mounted camera calibration method and device and vehicle-mounted equipment
CN109215083A (en) * 2017-07-06 2019-01-15 华为技术有限公司 The method and apparatus of the calibrating external parameters of onboard sensor
CN109959919B (en) * 2017-12-22 2021-03-26 比亚迪股份有限公司 Automobile and monocular camera ranging method and device
CN108445496B (en) * 2018-01-02 2020-12-08 北京汽车集团有限公司 Ranging calibration device and method, ranging equipment and ranging method
CN111241224A (en) * 2020-01-10 2020-06-05 福瑞泰克智能系统有限公司 Method, system, computer device and storage medium for target distance estimation

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE10338884A1 (en) * 2003-08-23 2005-03-17 Valeo Schalter Und Sensoren Gmbh Vehicle, especially motor vehicle, object separation measurement method is based on imaging of the object using a 2D monocular camera and then determining its separation distance by analysis of its light intensity distribution
CN101038165A (en) * 2007-02-16 2007-09-19 北京航空航天大学 Vehicle environment based on two eyes visual and distance measuring system
CN101055177A (en) * 2007-05-30 2007-10-17 北京航空航天大学 Double surface drone based flow type tri-dimensional visual measurement splicing method

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE10338884A1 (en) * 2003-08-23 2005-03-17 Valeo Schalter Und Sensoren Gmbh Vehicle, especially motor vehicle, object separation measurement method is based on imaging of the object using a 2D monocular camera and then determining its separation distance by analysis of its light intensity distribution
CN101038165A (en) * 2007-02-16 2007-09-19 北京航空航天大学 Vehicle environment based on two eyes visual and distance measuring system
CN101055177A (en) * 2007-05-30 2007-10-17 北京航空航天大学 Double surface drone based flow type tri-dimensional visual measurement splicing method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Research on a Stereo-Vision Vehicle Environment Perception System Based on Corner Features; Jiang Yan et al.; Journal of Mechanical Engineering; July 2011; Vol. 47, No. 14; pp. 99-107 *
Algorithm for Determining the Region of Interest in Vehicle Video Detection; Xu Guoyan et al.; Journal of Beijing University of Aeronautics and Astronautics; July 2010; Vol. 36, No. 7; pp. 781-784 *

Also Published As

Publication number Publication date
CN103487034A (en) 2014-01-01

Similar Documents

Publication Publication Date Title
Suhr et al. Sensor fusion-based low-cost vehicle localization system for complex urban environments
CN104374376B (en) A kind of vehicle-mounted three-dimension measuring system device and application thereof
Hata et al. Feature detection for vehicle localization in urban environments using a multilayer LIDAR
CN105404844B (en) A kind of Method for Road Boundary Detection based on multi-line laser radar
CN103176185B (en) Method and system for detecting road barrier
US9454816B2 (en) Enhanced stereo imaging-based metrology
CN104931977B (en) A kind of obstacle recognition method for intelligent vehicle
CN101604448B (en) Method and system for measuring speed of moving targets
US10386476B2 (en) Obstacle detection method and apparatus for vehicle-mounted radar system
Alonso et al. Accurate global localization using visual odometry and digital maps on urban environments
CN106053475B (en) Tunnel defect tunneling boring dynamic device for fast detecting based on active panoramic vision
CN103559791B (en) A kind of vehicle checking method merging radar and ccd video camera signal
CN105160702B (en) The stereopsis dense Stereo Matching method and system aided in based on LiDAR point cloud
CN102208013B (en) Landscape coupling reference data generation system and position measuring system
Choi et al. Environment-detection-and-mapping algorithm for autonomous driving in rural or off-road environment
CN104021676B (en) Vehicle location based on vehicle dynamic video features and vehicle speed measurement method
CN106767853A (en) A kind of automatic driving vehicle high-precision locating method based on Multi-information acquisition
CN103456172B (en) A kind of traffic parameter measuring method based on video
CN105674880B (en) Contact net geometric parameter measurement method and system based on binocular principle
CN104766058A (en) Method and device for obtaining lane line
Broggi et al. Obstacle detection with stereo vision for off-road vehicle navigation
Brenner Extraction of features from mobile laser scanning data for future driver assistance systems
CN104950313A (en) Road-surface abstraction and road gradient recognition method
CN104005325B (en) Based on pavement crack checkout gear and the method for the degree of depth and gray level image
Häne et al. Obstacle detection for self-driving cars using only monocular cameras and wheel odometry

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20150715

Termination date: 20180926