CN103487034B - Method for measuring distance and height with a vehicle-mounted monocular camera based on a vertical target (Google Patents)
 Publication number: CN103487034B (application CN201310445576.4A)
 Authority: CN (China)
Classifications

 G—PHYSICS
 G01—MEASURING; TESTING
 G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
 G01C3/00—Measuring distances in line of sight; Optical rangefinders

 G—PHYSICS
 G01—MEASURING; TESTING
 G01B—MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
 G01B11/00—Measuring arrangements characterised by the use of optical means
 G01B11/02—Measuring arrangements characterised by the use of optical means for measuring length, width or thickness
Abstract
The invention discloses a method for measuring distance and height with a vehicle-mounted monocular camera based on a vertical target, belonging to the technical field of intelligent-vehicle environment perception. By applying template matching, candidate-point clustering and screening, and precise localization to the region of interest of a vertical-target image, the method detects and locates corner points at sub-pixel accuracy and, combined with a projective geometry model, establishes a mapping between the image ordinate and the actual imaging angle, thereby measuring both distance and height. The camera's intrinsic and extrinsic parameters need not be calibrated, and no calibration board or reference object has to be placed repeatedly, which reduces the chance of error, simplifies operation, and improves measurement accuracy. Compared with conventional corner detection, the target's corner points are detected more accurately, reducing the computation of the subsequent clustering and screening. Height measurement with a monocular camera is achieved on the basis of the computed imaging angles and distances, greatly reducing cost.
Description
Technical field
The invention belongs to the technical field of intelligent-vehicle environment perception and relates to a machine-vision ranging and height-measurement technique, specifically a method for measuring the distance and height of obstacles, bridge openings, culverts and the like with a vehicle-mounted monocular camera based on a vertical target.
Background technology
Machine vision, as the most important component of an intelligent vehicle's environment perception system, provides the decision layer with a large amount of necessary environmental information and is therefore of great significance. In particular, the measured distance and height of objects provide important parameters for collision warning, path planning, vehicle classification and culvert/bridge passability detection in unmanned driving and driver-assistance systems. At present, machine-vision measurement for intelligent vehicles is generally divided into two classes, binocular and monocular. Binocular ranging is easily affected by feature-point mismatching and is computationally heavy, making real-time operation difficult, whereas monocular ranging is structurally simple and fast, and so has broad application prospects.
Current monocular vision systems usually obtain the depth of the object under test by the corresponding-point calibration method. The traditional approach first calibrates the camera's intrinsic and extrinsic parameters with a checkerboard calibration board, then solves, via the projection model, the correspondence between the image coordinate system and the actual imaging angle to obtain distance. This requires collecting calibration-board images from many orientations and accurately recording the coordinates of each point in both the world and image coordinate systems, and calibration errors can be amplified tens or even hundreds of times in the measurement; overall, the process is complex and the error is large. Alternatively, reference objects can be placed on the road surface at measured distances, and a mathematical model between distance and image coordinates fitted directly from the reference-object data, thereby realizing ranging. This method is also widely used in engineering, but it needs a large site and its accuracy suffers from surveying and curve-fitting errors. As for height measurement, sensors such as laser radar remain the mainstream, and published research on real-time monocular height measurement is still rare.
Summary of the invention
The object of the present invention is to provide a ranging and height-measurement method for a vehicle-mounted monocular camera based on a vertical target. By applying template matching, candidate-point clustering and screening, and precise localization to the region of interest of the vertical-target image, corner points are detected and located at sub-pixel accuracy; combined with a projective geometry model, a mapping between the image ordinate and the actual imaging angle is established, realizing both ranging and height measurement. The method not only improves measurement accuracy but also requires no calibration of camera intrinsic or extrinsic parameters; it is simple to operate and highly practicable, and has strong engineering value and research significance.
The ranging and height-measurement method for a vehicle-mounted monocular camera based on a vertical target provided by the invention comprises the following steps:
Step 101: Mount the camera at a suitable position on the vehicle body, then place the vertical target directly in front of the camera and as level with it as possible. The collected target image must contain the lowest corner point, and the total number of corner points must be greater than 8. Then measure the camera mounting height h and the horizontal distance D from the camera to the target surface;
Step 102: Collect the target image at resolution mm*nn and set up the image coordinate system: the upper-left corner is the origin, x increases to the right and y increases downward. Set the region of interest for corner detection: [mm/3-1, 2*mm/3-1] in the x direction and [0, nn-1] in the y direction. Divide the region of interest into blocks (the block size s*v is adjustable but normally larger than 50*50) and binarize each block adaptively with the maximum between-class variance (Otsu) method;
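The block-wise adaptive binarization with the maximum between-class variance (Otsu) method can be sketched as follows. This is a minimal illustration in Python/NumPy, assuming a greyscale input image; the function names are ours, not the patent's:

```python
import numpy as np

def otsu_threshold(block):
    """Maximum between-class variance (Otsu) threshold for one grey block."""
    hist = np.bincount(block.ravel(), minlength=256).astype(float)
    total = hist.sum()
    cum_n = np.cumsum(hist)                   # pixel count at or below t
    cum_s = np.cumsum(hist * np.arange(256))  # grey-level mass at or below t
    best_t, best_var = 0, -1.0
    for t in range(256):
        n0 = cum_n[t]
        n1 = total - n0
        if n0 == 0 or n1 == 0:
            continue
        m0 = cum_s[t] / n0                    # mean of the dark class
        m1 = (cum_s[-1] - cum_s[t]) / n1      # mean of the bright class
        var = n0 * n1 * (m0 - m1) ** 2        # between-class variance (scaled)
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def binarize_roi(img, s=64, v=64):
    """Binarize a region of interest block by block (block size s*v, step 102)."""
    out = np.empty_like(img)
    for y in range(0, img.shape[0], v):
        for x in range(0, img.shape[1], s):
            blk = img[y:y + v, x:x + s]
            t = otsu_threshold(blk)
            out[y:y + v, x:x + s] = np.where(blk > t, 255, 0)
    return out
```

In practice OpenCV's `cv2.threshold` with the `THRESH_OTSU` flag performs the same per-block thresholding.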
Step 103: Using the designed templates (a) and (b), search the whole region of interest for matches and retain the upper-left corner point of every matched sub-image, forming the candidate point set CC = {(x_1, y_1), (x_2, y_2), ..., (x_k, y_k)} of target corner points, where k is the total number of matched sub-images;
Step 104: Classify the points in the candidate point set CC: two points whose abscissa difference and ordinate difference are both less than a threshold T_1 are assigned to the same group. Supposing g groups are formed, the set of groups is W = {w_1, w_2, ..., w_g}. Compute the centre coordinates (x_wi, y_wi) (i = 1, 2, ..., g) of each group; groups whose centre abscissae differ by less than T_1 are assigned to the same large class. Finally, retain the large class containing the most groups and reject the other groups; add to each retained centre the half template width and height e in the x and y directions respectively, and save the result as the initial corner point set A = {(x_a1, y_a1), (x_a2, y_a2), ..., (x_aj, y_aj)}, where j is the number of retained groups and the points in A are ordered by decreasing values of y_a1, y_a2, ..., y_aj;
Step 105: Find the maximum abscissa x_max in the initial corner point set A. Using template (c), search for a matching sub-image, from top to bottom and right to left, in the part of the region of interest whose abscissa is less than x_max - e; stop as soon as one match is found. Supposing the upper-left corner of the matched sub-image is (x_f, y_f), the reference spacing of the corner points is ss = x_max - (x_f + e). Then, using template (d), search from top to bottom and right to left in the region of interest to the lower left of (x_f, y_f); again stop at the first match and record its upper-left corner as (x_j, y_j);
Step 106: After the search, judge whether the two location reference points (x_f, y_f) and (x_j, y_j) of step 105 both exist; if so, go to step 107, otherwise return to step 101;
Step 107: Compare the maximum ordinate y_a1 in the initial corner point set A with y_j. If y_a1 - y_j is about 3 times ss, take the point (x_a1, y_a1) as the lowest corner point of the target; otherwise, take the point (x_max, y_j + ss*3) as the lowest corner point. Then, using the initial corner point set A and the reference spacing ss, complete the missing corner points of the image, obtaining the corner point set C = {(x_c1, y_c1), (x_c2, y_c2), ..., (x_cn, y_cn)}, where n is the total number of target corner points in the image and the points in C are likewise ordered by decreasing y. Finally, with C as the starting estimate, refine with the cvFindCornerSubPix() function of OpenCV to obtain the sub-pixel corner point set B = {(x_b1, y_b1), (x_b2, y_b2), ..., (x_bn, y_bn)};
Step 108: The height set of the n corner points in the image is HH = {1.00, 1.05, ..., 1.00 + (n-1)*0.05}. Using the parameters h and D, compute the actual imaging angle set Q = {q_1, q_2, ..., q_n} of the corner points, where each angle corresponds in order to the ordinates {y_b1, y_b2, ..., y_bn} of the sub-pixel corner point set B, giving the mapping point set P = {(y_b1, q_1), (y_b2, q_2), ..., (y_bn, q_n)}. Fit a straight line through each pair of adjacent points to obtain the set of adjacent-point mapping relations F = {f_1, f_2, ..., f_{n-1}};
Step 109: In real-time ranging, take as input the y coordinate y_z of the obstacle bottom returned by the obstacle-detection algorithm. First determine the mapping relation f_i (0 < i < n) to which y_z belongs, then compute from the line equation of f_i the actual imaging angle q_z corresponding to y_z, and finally, with q_z as input, compute the obstacle distance L_z from the ranging equation;
Step 110: Judge, according to the system's needs, whether the obstacle's height must also be measured; if so, continue with step 111, otherwise end the ranging of this obstacle;
Step 111: Take as input the y coordinate y_d of the obstacle top returned by the obstacle-detection algorithm. First determine the mapping relation f_i (0 < i < n) to which y_d belongs, then compute from the line equation of f_i the actual imaging angle q_d corresponding to y_d, and finally, with q_d and the obstacle distance L_z as inputs, compute the obstacle's height from the height-measurement equation.
The advantages of the ranging and height-measurement method for a vehicle-mounted monocular camera based on a vertical target of the present invention are:
(1) The present invention needs no calibration of the camera's intrinsic and extrinsic parameters and no repeated placement of a calibration board or reference object, reducing the possibility of error; this both simplifies the operation and improves measurement accuracy;
(2) A region of interest and four templates are designed, and the corner points and location reference points of the vertical target are detected by template matching. Compared with traditional corner detection, the target points are detected more accurately, which reduces the computation of the subsequent clustering and screening;
(3) By detecting the location reference points, the ordinates of the sub-pixel corner point set are put in one-to-one correspondence with the actual imaging angles, and the mapping between image ordinate and imaging angle is fitted by piecewise straight lines, reducing the error of a single straight-line fit and thereby improving measurement accuracy;
(4) The present invention needs no other sensors such as radar; monocular height measurement is achieved on the basis of the computed imaging angles and distances, greatly reducing cost.
Description of the drawings
Fig. 1 is the overall flow chart of the ranging and height-measurement method for a vehicle-mounted monocular camera based on a vertical target of the present invention;
Fig. 2 is the flow chart of corner detection by template matching in the present invention;
Fig. 3 is the schematic diagram of the vertical target used by the present invention;
Fig. 4 shows the four templates used for corner-point and location-reference-point detection in the present invention, with e = 11.
Embodiment
The technical scheme of the present invention is described in further detail below in conjunction with the accompanying drawings.
The invention provides a ranging and height-measurement method for a vehicle-mounted monocular camera based on a vertical target, applied mainly when the camera has detected an object on the road surface, to calculate the object's height and its distance from the vehicle. The distance and height of obstacles ahead of the vehicle are important parameters for collision warning, path planning, vehicle classification and culvert/bridge passability detection in unmanned driving and driver-assistance systems, so the method has strong engineering application value. Only one camera is needed to achieve monocular ranging and height measurement with high accuracy, and the operation is simple and feasible.
The method provided by the invention does not require camera calibration, avoiding the influence of intrinsic/extrinsic calibration error on the measurement, nor does it require repeated placement of reference objects or long-distance surveying, reducing the sources of error, so that the accuracy satisfies the ranging and height-measurement precision and real-time requirements of an intelligent-vehicle environment perception system. After the vertical target is placed at a suitable position directly in front of the camera, one image is collected and the region of interest is binarized adaptively block by block; the schematic of the vertical target is shown in Fig. 3. Within the region of interest, all sub-images matching templates (a) and (b) are found, yielding the candidate point set CC of corner points; the flow is shown in Fig. 2 and the templates in Fig. 4. After clustering and screening CC, the initial corner point set A is obtained; using the positional relationship between the reference points and the point of maximum ordinate in A, all corner points are finally completed and located. Since the heights of all corner points and their horizontal distance from the camera are known, the actual imaging angle of each corner point can be obtained, and the mapping between the image ordinate and the actual imaging angle is fitted by piecewise straight lines; finally, the bottom and top pixel values of an obstacle in the image yield its distance and height respectively.
The actual imaging angle is defined as follows: the side plane of the measured object meets the ground in a bottom edge, and the intersection point of that edge nearest the vehicle is joined by a straight line to the camera's optical centre; the actual imaging angle is the angle between this line and the vertical line through the optical centre perpendicular to the ground.
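Restating this definition in symbols: with the optical centre at height h above the ground, a ground point at horizontal distance L subtends the imaging angle q with the vertical, which is consistent with the ranging equation and the corner-angle formula used later:

```latex
\tan q = \frac{L}{h} \;\Rightarrow\; L = h \tan q,
\qquad
q_{ii} = \arctan\frac{D}{\,h - h_{ii}\,}
```

where D is the horizontal distance to the target surface and h_ii the height of corner point ii.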
Fig. 1 shows the overall flow of the ranging and height-measurement method for a vehicle-mounted monocular camera based on a vertical target of the present invention, which is divided into the following steps:
Step 101: Mount the camera at a suitable position on the vehicle body, then place the vertical target directly in front of the camera and as level with it as possible; the image collected by the camera must contain the lowest corner point of the vertical target, and the total number of corner points must be greater than 8. Measure the camera mounting height h and the horizontal distance D from the camera to the target surface;
Step 102: Collect the target image at resolution mm*nn and set up the image coordinate system: the upper-left corner is the origin, x increases to the right and y increases downward. Set the region of interest for corner detection: [mm/3-1, 2*mm/3-1] in the x direction and [0, nn-1] in the y direction. Divide the region of interest into blocks and binarize each block adaptively with the maximum between-class variance (Otsu) method, converting the region of interest into a binary image;
The block size s*v can be adjusted according to the width and height of the region of interest, but generally s*v is larger than 50*50 and smaller than 150*150 pixels.
Step 103: Using templates (a) and (b), search the whole region of interest for matches, detecting corner points by template matching and obtaining the candidate point set CC = {(x_1, y_1), (x_2, y_2), ..., (x_k, y_k)} of target corner points, where k is the total number of matched sub-images. The execution flow of this step is shown in Fig. 2;
As shown in Fig. 2, corner detection by template matching in the method of the invention is divided into the following steps:
Step 201: Initialize the loop parameters ii and jj to zero;
Step 202: With (mm/3-1+jj, ii) as the upper-left corner point, extract from the region of interest the sub-image S to be detected, of the same size as the template. The templates, shown in Fig. 4, are four templates of identical size, 2e*2e pixels, but with different pixel values, where e is half the side length of the square template. In template (a), the e*e pixels of the upper-left and lower-right corners have value 0 (black) and the rest are 255 (white); in template (b), the e*e pixels of the upper-right and lower-left corners are black and the rest white; in template (c), only the e*e pixels of the upper-right corner are black; in template (d), only the e*e pixels of the lower-right corner are black. Their uses also differ: templates (a) and (b) are used when searching for the corner points of the target image, and templates (c) and (d) when searching for the location reference points.
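The four templates of Fig. 4 can be generated directly from this description (a NumPy sketch; the function name is ours, and values follow the convention 0 = black, 255 = white):

```python
import numpy as np

def make_templates(e=11):
    """Build the four 2e*2e matching templates of Fig. 4."""
    a = np.full((2 * e, 2 * e), 255, np.uint8)
    a[:e, :e] = 0   # (a): upper-left e*e block black
    a[e:, e:] = 0   #      lower-right e*e block black
    b = np.full((2 * e, 2 * e), 255, np.uint8)
    b[:e, e:] = 0   # (b): upper-right e*e block black
    b[e:, :e] = 0   #      lower-left e*e block black
    c = np.full((2 * e, 2 * e), 255, np.uint8)
    c[:e, e:] = 0   # (c): only upper-right e*e block black
    d = np.full((2 * e, 2 * e), 255, np.uint8)
    d[e:, e:] = 0   # (d): only lower-right e*e block black
    return a, b, c, d
```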
Step 203: Compute the difference image G of the sub-image S and the template;
The difference image is obtained by subtracting, pixel by pixel, the template image from the sub-image S and taking the absolute value: where the pixel values of the two binary images are equal, the difference image is 0 (black); where they differ, it is 255 (white).
Step 204: For each white pixel in the difference image G, proceed as follows: with the white pixel as the upper-left corner, extend rightward and downward into a block of 7 pixels x 7 pixels, take this block as the statistical unit, and compute the density M of white pixels within it;
The density M is the number gg of white (value 255) pixels in the 7x7 block divided by the total number of pixels, 49:
M = gg / 49 (1)
Step 205: Judge the density M as follows:
(A) Judge whether any block has a density M greater than the density threshold. If so, template (a) is considered not to match; go to (B). Otherwise, go to step 206;
(B) Compute the difference image of the sub-image S and template (b). If any block's density M exceeds the threshold, template (b) is considered not to match; go to step 207. Otherwise, go to step 206;
The density threshold is set to 0.32. If the threshold is too large, two images with substantial differences are accepted as a match, increasing mismatches; if it is too small, the small scattered differences caused by lighting or by a slight rotation of the target due to an uneven road surface are mistaken for non-matches. Testing shows that the threshold chosen here achieves good detection results.
Step 206: Store the upper-left corner coordinates (mm/3-1+jj, ii) of the sub-image S in the candidate point set CC;
Step 207: Increment ii; if ii is greater than nn-1-2e, go to step 208, otherwise return to step 202;
Step 208: Increment jj and reset ii to 0; if jj is greater than mm/3-2e, end the matching of this template, otherwise return to step 202;
Step 104: Cluster and screen the points in the candidate point set CC;
Because the background within the region of interest may also contain sub-images that match templates (a) and (b), clustering and screening are needed to reject candidate points that are not corner points of the target.
Clustering: First classify all points in the candidate point set CC. Compute the abscissa difference and ordinate difference of the first and second points; if both differences are less than the threshold T_1, assign the two points to the same group w_1, otherwise assign them to two groups w_1 and w_2. Then, for each remaining point in CC, compute its abscissa and ordinate differences to all points of the groups already formed; if both differences to some point of a group are less than T_1, assign the point to that group, otherwise it belongs to no existing class and a new group is created. Supposing g groups result, the set of groups is W = {w_1, w_2, ..., w_g}. Compute the centre coordinates (x_wi, y_wi) (i = 1, 2, ..., g) of each group; groups whose centre abscissae differ by less than the threshold T_1 are assigned to the same large class;
The centre coordinates of a group are obtained by summing the abscissae and ordinates of all points in the group and dividing by the number of points, giving the group's centre abscissa and ordinate.
Screening: Because the target is perpendicular to the ground and the camera is mounted level, the corner points of the target within the region of interest should lie on a roughly vertical line, their abscissa differences should be less than T_1, and they should therefore belong to one large class. Since the groups of mismatched points falling into one large class are fewer than the groups of true corner points, the large class containing the most groups is retained and the other large classes are rejected; the centre of each retained group is then shifted by the half template width and height e in the x and y directions respectively and saved as the initial corner point set A = {(x_a1, y_a1), (x_a2, y_a2), ..., (x_aj, y_aj)}, where j is the number of retained groups and the points in A are ordered by decreasing values of y_a1, y_a2, ..., y_aj.
Step 105: search out maximum abscissa value x in initial angle point set A
_{max}, utilize template (c) to be less than x at horizontal ordinate
_{max}search for the subgraph of coupling in the region of interest ofe from top to bottom, from right to left, once search, then stop search.Suppose that the subgraph upper left angle point searching coupling is (x
_{f}, y
_{f}), then the reference interval ss=x of angle point
_{max}(x
_{f}+ e), recycling template (d) is at point (x
_{f}, y
_{f}) search for the subgraph of coupling in the region of interest of lower left from top to bottom, from right to left, once search, then stop search, the subgraph upper left angle point of record matching is (x
_{j}, y
_{j});
This time the process of template matching method detection and location reference point comprises matching template (c) and matching template (d) two parts, when matching template (c), at x direction [mm/31, x
_{max}2*e], in y direction [0, nn12*e] region, according to from top to bottom, dextrosinistral order point by point search, and carry out matching detection.Method be using Searching point as upper left angle point, expand the to be detected subgraph S identical with template size; Then the error image G of subgraph S and template (c) is calculated; Finally each white pixel point in error image G is extended for the block of 7 pixel × 7 pixel sizes, the density M of white pixel point in calculating all pieces, if the density M that there is certain block is greater than the density threshold of setting, think that this subgraph S does not mate with template (c), a bit detect as upper left angle point under continuing search, otherwise think and coupling terminate search.During matching template (d), region of search changes x direction [mm/31, x into
_{f}2*e], y direction [y
_{f}+ 2*e, nn12*e], equally according to from top to bottom, dextrosinistral order point by point search, and carry out matching detection.Testing process is identical with template (c).
Step 106: after search, judges whether two the location reference point (x that there is step 105
_{f}, y
_{f}) and (x
_{j}, y
_{j}), enter step 107 if existed; Otherwise return step 101;
Wherein location reference point (x
_{f}, y
_{f}) be likely the reference point shown in Fig. 3 1. or reference point 2., because if target is completely vertically or exist and turn clockwise, reference point that what first that searched is 1., otherwise be reference point 2., and (x
_{j}, y
_{j}) for reference point 3..The object of search location reference point is: (1) only has two places to meet the feature of template (c) because the target surface of vertical target carries out analyzing rear discovery in the left side of vertical angle point set, and its lower left only has a place to meet the feature of template (d), so search location reference point can confirm the correctness of the initial angle point set A detected further; (2) due to reference point 3. and with the uniqueness of angle point relative position, it can be utilized to position each angle point.1. described reference point has the point of obvious intensity profile feature on the left of the 6th angle point from bottom to up, and the pixel value of e*e the pixel in its upper right corner is 0 (black), and all the other are 255 (whites); 2. reference point is 1. have the point that same grayscale distributes on the left of the 4th angle point with reference point; 3. reference point is general twice interval on the left of second angle point, and intensity profile is the pixel value of e*e the pixel in the lower right corner is 0 (black), and all the other are the point of 255 (whites).
So, if due to video camera install or target place improper, two location reference point (x do not detected in step 105
_{f}, y
_{f}) and (x
_{j}, y
_{j}), then cannot judge the accuracy of initial angle point set A, more cannot locate, then need to return 101 and restart.
Step 107: by ordinate maximal value y in initial angle point set A
_{a1}with y
_{j}compare, if y
_{a1}y
_{j}for about 3 times of ss, think point (x
_{a1}, y
_{a1}) be the minimum angle point of target, otherwise, by point (x
_{max}, y
_{j}+ ss*3) as the minimum angle point of target; Then angle point is supplemented complete, and obtain angle point collection C={ (x
_{c1}, y
_{c1}), (x
_{c2}, y
_{c2}) ..., (x
_{cn}, y
_{cn}), wherein n represents that angle point concentrates target angle point sum, and the order of each point also arranges from big to small with y value in C, finally use the cvFindCornerSubPix () function in openCV, integrate C with angle point and be updated to subpixel angle point collection B={ (x as benchmark
_{b1}, y
_{b1}), (x
_{b2}, y
_{b2}) ..., (x
_{bn}, y
_{bn});
The process of supplementing the corner points until complete is: if the point (x_a1, y_a1) is the lowest corner point of the target, calculate in turn the differences y_aii - y_a(ii+1) of the ordinates of adjacent points in the initial corner point set A. When a difference is approximately t times ss, t-1 points must be inserted between the ii-th and (ii+1)-th points of A (if t is 1, no points need to be inserted between the two), with coordinates (x_aii, y_aii - jj*ss), jj = 1, ..., t-1. When the last point (x_aj, y_aj) in A is reached, y_aj is used to compute the difference. If the point (x_max, y_j + ss*2) is the lowest corner point of the target, the difference y_j + ss*2 - y_a1 must also be calculated, and the corner points between the point (x_max, y_j + ss*2) and the point (x_a1, y_a1) are supplemented in the same way.
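As an illustrative sketch (not part of the patent text), the corner-supplementation rule above can be expressed as follows; the function name, the rounding used to estimate t, and the list representation are assumptions made for illustration only:

```python
def supplement_corners(corners, ss):
    """Fill in missing corners between detected ones.

    corners: list of (x, y) sorted by decreasing y (lowest corner first);
    ss: reference vertical interval between adjacent corners, in pixels.
    When adjacent y values differ by about t*ss, t-1 points are inserted.
    """
    full = []
    for (x0, y0), (x1, y1) in zip(corners, corners[1:]):
        full.append((x0, y0))
        t = round((y0 - y1) / ss)      # how many intervals the gap spans
        for jj in range(1, t):         # insert the t-1 missing corners
            full.append((x0, y0 - jj * ss))
    full.append(corners[-1])
    return full
```

For example, with ss = 60, a gap of 120 pixels between two detected corners yields one interpolated corner midway between them.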
Step 108: The height set of the n corner points in the image is HH = {h_1, h_2, ..., h_n} = {1.00, 1.05, ..., 1.00 + (n-1)*0.05}. Then, using the parameters h and D, the actual imaging angle set Q = {q_1, q_2, ..., q_n} of each corner point is calculated by formula (2), where each angle corresponds in order to the ordinates {y_b1, y_b2, ..., y_bn} of the sub-pixel corner point set B, yielding the mapping point set P = {(y_b1, q_1), (y_b2, q_2), ..., (y_bn, q_n)}. Adjacent pairs of points in the mapping point set are fitted with straight lines, giving the adjacent-point mapping relation set F = {f_1, f_2, ..., f_(n-1)}, as in formula (3);
q_ii = tan^(-1)[D/(h - h_ii)]    (2)
where ii = 1, ..., n.
For formula (3), ii = 1, ..., n-1.
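As an illustrative sketch of formula (2) and the adjacent-point line fitting of step 108 (not part of the patent text; the function names and the degree units are assumptions), the imaging angles and the mapping relations f_ii can be computed as:

```python
import math

def imaging_angles(h, D, heights):
    """Formula (2): q_ii = arctan(D / (h - h_ii)), returned in degrees.
    atan2 is used so that corners above the camera (h_ii > h) give
    angles greater than 90 degrees."""
    return [math.degrees(math.atan2(D, h - hi)) for hi in heights]

def adjacent_line_fits(ys, qs):
    """Fit a line q = k*y + b through each adjacent pair of mapping
    points (y_bii, q_ii); returns (k, b) pairs for f_1 .. f_(n-1)."""
    fits = []
    for (y0, q0), (y1, q1) in zip(zip(ys, qs), zip(ys[1:], qs[1:])):
        k = (q1 - q0) / (y1 - y0)
        fits.append((k, q0 - k * y0))
    return fits
```

With h = 1.32 m and D = 1.8 m as in the experiments, a corner at height h + D gives an imaging angle of 135 degrees.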
Step 109: During real-time ranging, the y coordinate y_z of the bottom of the obstacle, obtained by the obstacle detection algorithm, is taken as the input parameter. First, the mapping relation f_ii (0 < ii < n) to which y_z belongs is determined; the straight-line equation of f_ii is then used to calculate the actual imaging angle q_z corresponding to y_z; finally, with q_z as input, the distance L_z of the obstacle is calculated by ranging equation (4);
L_z = h · tan q_z    (4)
Here, when y_z is less than or equal to y_b2, the mapping relation f_1 is selected to calculate the actual imaging angle q_z; when y_z is greater than or equal to y_b(n-1), the mapping relation f_(n-1) is selected to calculate the actual imaging angle q_z; in all other cases, the interval containing y_z is first found, i.e. y_bii < y_z < y_b(ii+1), and then the mapping relation f_ii is selected to calculate the actual imaging angle q_z.
Step 110: Judge, according to the system, whether the height of the obstacle needs to be measured. If so, continue with step 111; otherwise, end the ranging of this obstacle;
Different systems need different information: for example, collision warning may not need height information, whereas passability detection at bridges, culverts and the like must obtain height information, so the system must decide whether height information is required.
Step 111: First, using the same method as in the ranging algorithm, determine the mapping relation f_ii (0 < ii < n) to which the y coordinate y_d of the top of the obstacle, obtained by the obstacle detection algorithm, belongs; then use the straight-line equation of f_ii to calculate the actual imaging angle q_d corresponding to y_d; finally, with q_d and the obstacle distance L_z as inputs, calculate the obstacle height H_z by height-measuring equation (5).
H_z = h - a · L_z · tan(90° - q_d)    (5)
where a is -1 when q_d ≥ 90°, and 1 when q_d < 90°.
Table 5 gives the experimental ranging results and errors obtained with this method. The selected image resolution is 752*480, the camera mounting height is 1.32 m, and the horizontal distance between the target and the camera is 1.8 m. As Table 5 shows, the overall error of the method is very small, generally below 1%; although at 80 m, possibly because of influences such as road-surface flatness or obstacle-detection precision, the error reaches 2.3029%, this still meets the long-range distance accuracy requirement of an intelligent vehicle.
Table 6 gives the height-measuring experimental results and errors obtained with this method. The image resolution, camera height and target placement are the same as in the ranging experiment, and the subject of the height measurement is a person 1.77 m tall. As Table 6 shows, the error stays within 4%. Because height measurement is affected by ranging error, obstacle detection, image distortion and the like, the height error is generally larger than the ranging error, but it essentially meets the requirement for passability detection of vehicles at bridges, culverts and the like.
Table 5: Experimental ranging results and errors obtained by applying the method provided by the invention
Table 6: Experimental height-measuring results and errors obtained by applying the method provided by the invention
Claims (7)
1. A vehicle-mounted monocular camera distance- and height-measuring method based on a vertical target, characterized by comprising the following steps:
Step 101: after the camera is installed at a suitable position on the vehicle body, first place the vertical target directly in front of the camera, then measure the camera mounting height h and the horizontal distance D from the target surface of the vertical target; the distance between the vertical target and the camera must be such that the collected target image contains the lowest corner point of the target and the total number of corner points is greater than 8;
Step 102: collect the target image with resolution mm*nn and set the image coordinate system: the upper-left corner is the origin, horizontally rightward is the positive x axis, and vertically downward is the positive y axis; set the region of interest for corner detection: [mm/3-1, 2*mm/3-1] in the x direction and [0, nn-1] in the y direction; divide the region of interest into blocks and apply adaptive binarization to each block;
Step 103: use the designed template a and template b to perform a full search and match in the region of interest, detect corner points with a matching method, and keep the upper-left corner point of every matched sub-image, forming the candidate point set CC = {(x_1, y_1), (x_2, y_2), ..., (x_k, y_k)} of target corner points, where k is the total number of matched sub-images;
Step 104: cluster and screen the points in the candidate point set CC to obtain the initial corner point set A;
Step 105: find the maximum abscissa x_max in the initial corner point set A; use template c to search, from top to bottom and from right to left, for a matched sub-image in the part of the region of interest whose abscissa is less than x_max - e, and stop as soon as one is found; suppose the upper-left corner point of the matched sub-image is (x_f, y_f); then the reference interval of the corner points is ss = x_max - (x_f + e); next use template d to search, from top to bottom and from right to left, for a matched sub-image in the part of the region of interest to the lower left of point (x_f, y_f), stopping as soon as one is found, and record the upper-left corner point of the matched sub-image as (x_j, y_j); take the corner points (x_f, y_f) and (x_j, y_j) as the locating reference points;
Step 106: after the search, judge whether the two locating reference points (x_f, y_f) and (x_j, y_j) of step 105 exist; if they exist, enter step 107; otherwise return to step 101;
Step 107: compare the maximum ordinate y_a1 in the initial corner point set A with y_j; if y_a1 - y_j is 3 times ss, take the point (x_a1, y_a1) as the lowest corner point of the target; otherwise take the point (x_max, y_j + ss*3) as the lowest corner point of the target; then use the initial corner point set A and the reference interval ss to supplement all the corner points in the image until complete, obtaining the corner point set C = {(x_c1, y_c1), (x_c2, y_c2), ..., (x_cn, y_cn)}, where n is the total number of target corner points in the image, and the points in C are ordered by decreasing y value; finally, using the cvFindCornerSubPix() function in OpenCV with the corner point set C as the starting estimate, refine C into the sub-pixel corner point set B = {(x_b1, y_b1), (x_b2, y_b2), ..., (x_bn, y_bn)};
Step 108: the height set of the n corner points in the image is HH = {1.00, 1.05, ..., 1.00 + (n-1)*0.05}; then, using the parameters h and D, calculate the actual imaging angle set Q = {q_1, q_2, ..., q_n} of each corner point, where each angle corresponds in order to the ordinates {y_b1, y_b2, ..., y_bn} of the sub-pixel corner point set B, yielding the mapping point set P = {(y_b1, q_1), (y_b2, q_2), ..., (y_bn, q_n)}; fit adjacent pairs of points with straight lines to obtain the adjacent-point mapping relation set F = {f_1, f_2, ..., f_(n-1)};
Step 109: during real-time ranging, take the y coordinate y_z of the bottom of the obstacle, obtained by the obstacle detection algorithm, as the input parameter; first determine the mapping relation f_i (0 < i < n) to which y_z belongs; use the straight-line equation of f_i to calculate the actual imaging angle q_z corresponding to y_z; then, with q_z as input, calculate the obstacle distance L_z by the ranging equation:
L_z = h · tan q_z    (4)
where, when y_z is less than or equal to y_b2, the mapping relation f_1 is selected to calculate the actual imaging angle q_z; when y_z is greater than or equal to y_b(n-1), the mapping relation f_(n-1) is selected to calculate the actual imaging angle q_z; in all other cases, the interval containing y_z is first found, i.e. y_bii < y_z < y_b(ii+1), and then the mapping relation f_ii is selected to calculate the actual imaging angle q_z;
Step 110: judge, according to the system, whether the height of the obstacle needs to be measured; if so, continue with step 111; otherwise end the ranging of this obstacle;
Step 111: take the y coordinate y_d of the top of the obstacle, obtained by the obstacle detection algorithm, as the input parameter; first determine the mapping relation f_i (0 < i < n) to which y_d belongs; use the straight-line equation of f_i to calculate the actual imaging angle q_d corresponding to y_d; then, with q_d and the obstacle distance L_z as inputs, calculate the obstacle height H_z by the height-measuring equation:
H_z = h - a · L_z · tan(90° - q_d)    (5)
where a is -1 when q_d ≥ 90° and 1 when q_d < 90°;
The templates comprise four templates of identical size; the template size is 2e*2e in pixels, where e is half the side length of the square template. In template a, the pixel values of the e*e pixels in the upper-left and lower-right corners are 0 and the rest are 255; in template b, the pixel values of the e*e pixels in the upper-right and lower-left corners are 0 and the rest are 255; in template c, the pixel values of the e*e pixels in the upper-right corner are 0 and the rest are 255; in template d, the pixel values of the e*e pixels in the lower-right corner are 0 and the rest are 255. Template a and template b are used when searching for the corner points in the target image; template c and template d are used when searching for the locating reference points.
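As an illustrative sketch (not part of the claims), the four templates described above can be constructed as binary images; the function names and list-of-rows representation are assumptions for illustration only:

```python
def make_templates(e):
    """Build templates a, b, c, d as 2e x 2e binary images
    (255 = white background) with e x e black (0) squares:
    a: upper-left + lower-right corners black;
    b: upper-right + lower-left corners black;
    c: upper-right corner black only;
    d: lower-right corner black only."""
    def blank():
        return [[255] * (2 * e) for _ in range(2 * e)]
    def fill(img, top, left):
        for y in range(top, top + e):
            for x in range(left, left + e):
                img[y][x] = 0
        return img
    a = fill(fill(blank(), 0, 0), e, e)
    b = fill(fill(blank(), 0, e), e, 0)
    c = fill(blank(), 0, e)
    d = fill(blank(), e, e)
    return a, b, c, d
```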
2. The vehicle-mounted monocular camera distance- and height-measuring method based on a vertical target according to claim 1, characterized in that: in the region of interest described in step 102, the block size is greater than 50*50 pixels, and each block is adaptively binarized using the maximum between-class variance (Otsu) method.
3. The vehicle-mounted monocular camera distance- and height-measuring method based on a vertical target according to claim 1, characterized in that: the matching method for detecting corner points described in step 103 comprises the following steps:
Step 201: initialize the loop parameters ii and jj to zero;
Step 202: with (mm/3-1+jj, ii) as the upper-left corner point, extract from the region of interest the sub-image S to be detected, of the same size as template a or template b;
Step 203: calculate the difference image G of sub-image S and template a;
the difference image is obtained by subtracting the pixel values of sub-image S and the template a image at the same pixel positions and taking the absolute value; i.e., when the pixel values of corresponding pixels of the two binary images are equal, the pixel value of the difference image at that pixel is 0, and otherwise it is 255;
Step 204: for each white pixel in the difference image G, proceed as follows: with the white pixel as the upper-left corner, extend rightward and downward to form a block of 7 pixels × 7 pixels, and with this block as the statistical unit calculate the density M of white pixels in the block;
the density M is the number gg of pixels with value 255 in a 7 pixel × 7 pixel block of the difference image divided by the total number of pixels, 49, as follows:
M = gg/49    (1)
Step 205: judge the density M as follows:
(A) judge whether any density M exceeds the density threshold; if the density M of some region exceeds the set density threshold, template a is considered not to match and step (B) is entered; otherwise step 206 is entered;
(B) calculate the difference image of sub-image S and template b; if the density M of some region exceeds the set density threshold, template b is considered not to match and step 207 is entered; otherwise step 206 is entered;
Step 206: store the upper-left corner coordinates (mm/3-1+jj, ii) of sub-image S in the candidate point set CC;
Step 207: add 1 to ii; if ii is greater than nn-1-2e, enter step 208; otherwise return to step 202;
Step 208: add 1 to jj and reset ii to 0; if jj is greater than mm/3-2e, end the matching of this template; otherwise return to step 202.
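As an illustrative sketch of the difference-image density test of steps 203–205 (not part of the claims), the match decision for one sub-image against one template can be written as follows; the function name, the pure-Python list-of-rows image representation, and the edge handling (blocks clipped at image borders) are assumptions:

```python
def template_matches(sub, template, threshold=0.32):
    """Difference image of two binary images (values 0/255), then for
    every white difference pixel take the 7x7 block extending right and
    down as the statistical unit and compute the white-pixel density
    M = gg/49 (formula (1)).  The template matches only if no block's
    density exceeds the threshold."""
    rows, cols = len(sub), len(sub[0])
    g = [[0 if sub[y][x] == template[y][x] else 255 for x in range(cols)]
         for y in range(rows)]
    for y in range(rows):
        for x in range(cols):
            if g[y][x] != 255:
                continue
            gg = sum(1 for yy in range(y, min(y + 7, rows))
                       for xx in range(x, min(x + 7, cols))
                       if g[yy][xx] == 255)
            if gg / 49.0 > threshold:
                return False            # dense mismatch region found
    return True
```

Isolated differing pixels (density 1/49 ≈ 0.02) are tolerated, while a solid 5×5 patch of differences (density 25/49 ≈ 0.51) rejects the match, consistent with the 0.32 threshold of claim 4.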
4. The vehicle-mounted monocular camera distance- and height-measuring method based on a vertical target according to claim 3, characterized in that: the density threshold is set to 0.32.
5. The vehicle-mounted monocular camera distance- and height-measuring method based on a vertical target according to claim 1, characterized in that: the clustering described in step 104 is specifically: first classify all the points in the candidate point set CC; the process is: calculate the difference of the abscissas and the difference of the ordinates of the first and second points; if both differences are less than the threshold T_1, the two points are assigned to the same group w_1, otherwise they are assigned to two groups w_1 and w_2; then, for each remaining point in the candidate point set CC, compute the differences of abscissa and ordinate with all points of the groups already formed; if both differences with some point of a group are less than the threshold T_1, the point to be classified is assigned to that group, otherwise it is regarded as not belonging to any existing class and a new group is added; suppose g groups are formed in total, so that the set of groups is W = {w_1, w_2, ..., w_g}; calculate the centre coordinates (x_wi, y_wi), i = 1, 2, ..., g, of each group, and assign groups whose centre abscissas differ by less than the threshold T_1 to the same large class; calculating the centre coordinates of each group means adding up the abscissas and ordinates of all points in the group and dividing each sum by the total number of points in the group to obtain the x and y values of the group's centre;
the screening is specifically: in the region of interest, groups whose corner-point centre abscissas differ by less than the threshold T_1 form one large class; keep the large class containing the most groups and reject the other large classes; add half the template width and height, e, to the x and y coordinates of the centres of the retained groups, and save them as the initial corner point set A = {(x_a1, y_a1), (x_a2, y_a2), ..., (x_aj, y_aj)}, where j is the number of retained groups, and the points in A are ordered by decreasing values of y_a1, y_a2, ..., y_aj.
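As an illustrative sketch of the greedy grouping and centre computation described above (not part of the claims; the function names and list representation are assumptions):

```python
def cluster_points(points, t1):
    """A point joins an existing group if both its |dx| and |dy| to
    some member of that group are below the threshold t1; otherwise
    it starts a new group."""
    groups = []
    for p in points:
        for group in groups:
            if any(abs(p[0] - q[0]) < t1 and abs(p[1] - q[1]) < t1
                   for q in group):
                group.append(p)
                break
        else:                       # no group accepted p: start a new one
            groups.append([p])
    return groups

def group_centres(groups):
    """Centre of each group: mean of x and mean of y over its members."""
    return [(sum(x for x, _ in g) / len(g), sum(y for _, y in g) / len(g))
            for g in groups]
```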
6. The vehicle-mounted monocular camera distance- and height-measuring method based on a vertical target according to claim 1, characterized in that: the locating reference points described in step 105 are detected by template matching, comprising the two parts of matching template c and matching template d; when matching template c, search point by point in the region [mm/3-1, x_max - 2*e] in the x direction and [0, nn-1-2*e] in the y direction, in order from top to bottom and from right to left, and perform matching detection; the method is: with the search point as the upper-left corner, extract the sub-image S to be detected, of the same size as the template; then calculate the difference image G of sub-image S and template c; finally, extend each white pixel in the difference image G into a block of 7 pixels × 7 pixels and calculate the density M of white pixels in all the blocks; if the density M of some block exceeds the set density threshold, sub-image S is considered not to match template c and the search continues with the next point as upper-left corner; otherwise they are considered to match and the search ends; when matching template d, the search region is changed to [mm/3-1, x_f - 2*e] in the x direction and [y_f + 2*e, nn-1-2*e] in the y direction; the search is likewise point by point from top to bottom and from right to left, and the detection process is identical to that of template c.
7. The vehicle-mounted monocular camera distance- and height-measuring method based on a vertical target according to claim 1, characterized in that: the process described in step 107 of supplementing the corner points until complete is: if the point (x_a1, y_a1) is the lowest corner point of the target, calculate in turn the differences y_aii - y_a(ii+1) of the ordinates of adjacent points in the initial corner point set A; when a difference is t times ss, insert t-1 points between the ii-th and (ii+1)-th points of the point set A, with coordinates (x_aii, y_aii - jj*ss), jj = 1, ..., t-1; when the last point (x_aj, y_aj) in A is reached, use y_aj to calculate the difference; if the point (x_max, y_j + ss*2) is the lowest corner point of the target, additionally calculate the difference y_j + ss*2 - y_a1 and supplement the corner points between the point (x_max, y_j + ss*2) and the point (x_a1, y_a1) in the same way.
Priority Applications (1)
Application Number  Priority Date  Filing Date  Title 

CN201310445576.4A CN103487034B (en)  20130926  20130926  Method for measuring distance and height by vehiclemounted monocular camera based on vertical type target 
Applications Claiming Priority (1)
Application Number  Priority Date  Filing Date  Title 

CN201310445576.4A CN103487034B (en)  20130926  20130926  Method for measuring distance and height by vehiclemounted monocular camera based on vertical type target 
Publications (2)
Publication Number  Publication Date 

CN103487034A CN103487034A (en)  20140101 
CN103487034B true CN103487034B (en)  20150715 
Family
ID=49827449
Family Applications (1)
Application Number  Title  Priority Date  Filing Date 

CN201310445576.4A CN103487034B (en)  20130926  20130926  Method for measuring distance and height by vehiclemounted monocular camera based on vertical type target 
Country Status (1)
Country  Link 

CN (1)  CN103487034B (en) 
Families Citing this family (13)
Publication number  Priority date  Publication date  Assignee  Title 

DE102015201317A1 (en) *  20150127  20160728  Bayerische Motoren Werke Aktiengesellschaft  Measuring a dimension on a surface 
CN105203034B (en) *  20150729  20180717  四川大学  A kind of survey height survey area method based on monocular cam threedimensional ranging model 
CN105241424B (en) *  20150925  20171121  小米科技有限责任公司  Indoor orientation method and intelligent management apapratus 
CN105405117B (en) *  20151016  20180703  凌云光技术集团有限责任公司  Angular Point Extracting Method and device based on image outline 
CN105539311B (en) *  20160129  20171205  深圳市美好幸福生活安全系统有限公司  The installation method and erecting device of a kind of camera 
CN106023271B (en) *  20160722  20181211  武汉海达数云技术有限公司  A kind of target center coordinate extraction method and device 
CN106504287B (en) *  20161019  20190215  大连民族大学  Monocular vision object space positioning system based on template 
CN107305632B (en) *  20170216  20200612  武汉极目智能技术有限公司  Monocular computer vision technologybased target object distance measuring method and system 
CN106981082B (en) *  20170308  20200417  驭势科技（北京）有限公司  Vehiclemounted camera calibration method and device and vehiclemounted equipment 
CN109215083A (en) *  20170706  20190115  华为技术有限公司  The method and apparatus of the calibrating external parameters of onboard sensor 
CN109959919B (en) *  20171222  20210326  比亚迪股份有限公司  Automobile and monocular camera ranging method and device 
CN108445496B (en) *  20180102  20201208  北京汽车集团有限公司  Ranging calibration device and method, ranging equipment and ranging method 
CN111241224A (en) *  20200110  20200605  福瑞泰克智能系统有限公司  Method, system, computer device and storage medium for target distance estimation 
Citations (3)
Publication number  Priority date  Publication date  Assignee  Title 

DE10338884A1 (en) *  20030823  20050317  Valeo Schalter Und Sensoren Gmbh  Vehicle, especially motor vehicle, object separation measurement method is based on imaging of the object using a 2D monocular camera and then determining its separation distance by analysis of its light intensity distribution 
CN101038165A (en) *  20070216  20070919  北京航空航天大学  Vehicle environment based on two eyes visual and distance measuring system 
CN101055177A (en) *  20070530  20071017  北京航空航天大学  Double surface drone based flow type tridimensional visual measurement splicing method 

2013
 20130926 CN CN201310445576.4A patent/CN103487034B/en not_active IP Right Cessation
Patent Citations (3)
Publication number  Priority date  Publication date  Assignee  Title 

DE10338884A1 (en) *  20030823  20050317  Valeo Schalter Und Sensoren Gmbh  Vehicle, especially motor vehicle, object separation measurement method is based on imaging of the object using a 2D monocular camera and then determining its separation distance by analysis of its light intensity distribution 
CN101038165A (en) *  20070216  20070919  北京航空航天大学  Vehicle environment based on two eyes visual and distance measuring system 
CN101055177A (en) *  20070530  20071017  北京航空航天大学  Double surface drone based flow type tridimensional visual measurement splicing method 
NonPatent Citations (2)
Title 

Research on a stereo-vision vehicle environment perception system based on corner features; Jiang Yan et al.; Journal of Mechanical Engineering; 2011-07-31; vol. 47, no. 14, pp. 99-107 *
Algorithm for determining the region of interest in vehicle video detection; Xu Guoyan et al.; Journal of Beijing University of Aeronautics and Astronautics; 2010-07-31; vol. 36, no. 7, pp. 781-784 *
Also Published As
Publication number  Publication date 

CN103487034A (en)  20140101 
Similar Documents
Publication  Publication Date  Title 

Suhr et al.  Sensor fusionbased lowcost vehicle localization system for complex urban environments  
CN104374376B (en)  A kind of vehiclemounted threedimension measuring system device and application thereof  
Hata et al.  Feature detection for vehicle localization in urban environments using a multilayer LIDAR  
CN105404844B (en)  A kind of Method for Road Boundary Detection based on multiline laser radar  
CN103176185B (en)  Method and system for detecting road barrier  
US9454816B2 (en)  Enhanced stereo imagingbased metrology  
CN104931977B (en)  A kind of obstacle recognition method for intelligent vehicle  
CN101604448B (en)  Method and system for measuring speed of moving targets  
US10386476B2 (en)  Obstacle detection method and apparatus for vehiclemounted radar system  
Alonso et al.  Accurate global localization using visual odometry and digital maps on urban environments  
CN106053475B (en)  Tunnel defect tunneling boring dynamic device for fast detecting based on active panoramic vision  
CN103559791B (en)  A kind of vehicle checking method merging radar and ccd video camera signal  
CN105160702B (en)  The stereopsis dense Stereo Matching method and system aided in based on LiDAR point cloud  
CN102208013B (en)  Landscape coupling reference data generation system and position measuring system  
Choi et al.  Environmentdetectionandmapping algorithm for autonomous driving in rural or offroad environment  
CN104021676B (en)  Vehicle location based on vehicle dynamic video features and vehicle speed measurement method  
CN106767853A (en)  A kind of automatic driving vehicle highprecision locating method based on Multiinformation acquisition  
CN103456172B (en)  A kind of traffic parameter measuring method based on video  
CN105674880B (en)  Contact net geometric parameter measurement method and system based on binocular principle  
CN104766058A (en)  Method and device for obtaining lane line  
Broggi et al.  Obstacle detection with stereo vision for offroad vehicle navigation  
Brenner  Extraction of features from mobile laser scanning data for future driver assistance systems  
CN104950313A (en)  Roadsurface abstraction and road gradient recognition method  
CN104005325B (en)  Based on pavement crack checkout gear and the method for the degree of depth and gray level image  
Häne et al.  Obstacle detection for selfdriving cars using only monocular cameras and wheel odometry 
Legal Events
Date  Code  Title  Description 

C06  Publication  
PB01  Publication  
C10  Entry into substantive examination  
SE01  Entry into force of request for substantive examination  
C14  Grant of patent or utility model  
GR01  Patent grant  
CF01  Termination of patent right due to nonpayment of annual fee  
CF01  Termination of patent right due to nonpayment of annual fee 
Granted publication date: 20150715 Termination date: 20180926 