CN105825203A - Ground arrow marking detection and recognition method based on point-pair matching and geometric structure matching - Google Patents


Info

Publication number
CN105825203A
CN105825203A (application CN201610200615.8A; granted publication CN105825203B)
Authority
CN
China
Prior art keywords
point
image
candidate region
template
mark
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610200615.8A
Other languages
Chinese (zh)
Other versions
CN105825203B (en)
Inventor
李建华 (Li Jianhua)
魏瑾瑜 (Wei Jinyu)
卢湖川 (Lu Huchuan)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dalian University of Technology
Original Assignee
Dalian University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dalian University of Technology
Priority to CN201610200615.8A
Publication of CN105825203A
Application granted
Publication of CN105825203B
Legal status: Expired - Fee Related (anticipated expiration)


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/23 Clustering techniques
    • G06F18/232 Non-hierarchical techniques
    • G06F18/2321 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/23213 Non-hierarchical techniques with a fixed number of clusters, e.g. K-means clustering
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/50 Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/757 Matching configurations of points or features
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V20/582 Recognition of traffic signs
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 Target detection
    • G06V2201/09 Recognition of logos


Abstract

The invention belongs to the field of computer vision, relates to image processing, and in particular to a shape matching method in which ground arrow markings are extracted from a video and identified. First, a top view of each frame is obtained by inverse perspective mapping. Second, the image is segmented in HSV space by K-means clustering, connected regions whose brightness and color satisfy the conditions are separated, and these regions are screened by their geometric dimensions. Then, edges are extracted from the candidate regions and a local multi-scale HOG feature is computed at each edge point. Finally, these features are used for point-pair matching between the templates and the candidate regions, geometric structure matching is performed on the matching results, and the category of each region is identified. The method copes with the occlusion, wear, deformation, rotation and interference from other markings that occur when detecting ground arrow markings, and retains a good recognition rate even when these conditions are unfavorable.

Description

Ground arrow marking detection and recognition method based on point-pair matching and geometric structure matching
Technical field
The invention belongs to the field of computer vision and relates to image processing, in particular to a ground arrow marking detection and recognition method based on point-pair matching and geometric structure matching.
Background technology
Over the past two decades, road marking recognition, as an important component of autonomous driving and intelligent transportation, has attracted many researchers in computer vision, and many efficient and practical techniques and methods have appeared. Among these markings, the ground arrow markings carry important traffic guidance information, so their detection and recognition are particularly important. Representative articles published since 2004 are summarized below.
J. Rebut et al., in "Image segmentation and pattern recognition for road marking analysis", International Symposium on Industrial Electronics, 2004, describe the extracted candidate regions with Fourier descriptors and identify them with a KNN classifier. However, Fourier descriptors demand a highly complete region contour, so this approach is unsuitable when candidate regions are occluded or badly worn. S. Suchitra et al., in "A practical system for road marking detection and recognition", TIP, 2009, first decompose each candidate region into several blocks and segment the edges using the magnitude and sign of the gradients in the x and y directions. Each edge segment then undergoes a Hough transform, and peak analysis of the resulting Hough space yields the edge angle, deciding whether each segment leans left or right. The template is likewise divided into blocks whose left-edge and right-edge inclinations are summarized; candidate regions are checked against this summary, the image blocks satisfying the conditions are found and combined, and the region is judged to be an arrow marking and classified. This method, however, depends on the integrity of the arrow edges and therefore recognizes worn arrow markings poorly.
Yuhang He et al., in "Using edit distance and junction feature to detect and recognize arrow road marking", Intelligent Transportation Systems (ITSC), 2014, propose a junction feature: each candidate region is expressed as a string of nodes encoded by their positions and angles, and the encoding is used to compute the similarity between the candidate region and the template images. The method combines the local structure of the arrow marking with its overall structure and is robust to wear and occlusion, but its recognition rate is low for arrows with overall deformation.
Summary of the invention
To address the deficiencies of the prior art, the present invention provides a ground arrow marking detection and recognition method based on point-pair matching and geometric structure matching. In video shot by a vehicle-mounted camera, where ground arrow markings may be partially occluded by other vehicles, deformed because they are far from the car, tilted because the driving direction changes, and accompanied by other road markings (lane lines, zebra crossings, etc.), the method can still correctly detect the ground arrow markings and classify them accurately and quickly.
To achieve the above object, the technical scheme of the invention is as follows:
A ground arrow marking detection and recognition method based on point-pair matching and geometric structure matching. The method applies inverse perspective mapping to each frame of the vehicle video to obtain a top view of the road scene. Exploiting the brightness difference between road markings and the road surface, K-means clustering in the HSV space of the top view extracts the connected regions whose brightness and saturation satisfy the conditions. These regions are then screened by the standard dimensions of arrow markings to obtain the arrow candidate regions, completing arrow detection. At the recognition stage, the invention proposes a method combining point-pair matching with geometric structure matching, which makes full use of both the local and the overall shape information of the arrow. On real roads, combined straight-left/right markings rarely appear, and the left-turn and right-turn markings (similarly the straight-left and straight-right markings) are mirror images of each other, so only one of each pair needs handling; the invention therefore detects and identifies only the straight marking (S), the left-turn marking (L) and the straight-right marking (SR). Fig. 1 is the system block diagram of the invention. The implementation steps are:
Step 1: inverse perspective mapping
Because of the viewing angle of the vehicle-mounted camera, the road markings in the captured video suffer severe perspective distortion, which harms arrow recognition. To eliminate this effect, each road frame is first transformed by inverse perspective mapping into a top view of the road scene, avoiding serious deformation of the markings. We use a three-line method to realize the inverse perspective mapping, first establishing the body coordinate system and the camera coordinate system. In the body coordinate system, X_v points forward along the longitudinal axis of the car, Y_v points to the right, perpendicular to the longitudinal axis, and Z_v points upward, perpendicular to the longitudinal axis. Assuming level ground, the origin of the camera coordinate system is at the optical center of the camera, and the successive rotation angles about the X_v, Y_v and Z_v axes are ψ, φ and θ. The coordinates of the optical center in the body coordinate system are t = (l, d, h). For a point p_v(x_v, y_v, z_v) in the body coordinate system whose coordinates in the camera coordinate system are p_c(x_c, y_c, z_c), the relation between the two is:
p_v^T = R · p_c^T + t^T    (1)
where R = (r_mn), m, n ∈ {1, 2, 3}, is the rotation matrix determined by ψ, φ and θ (formula (2)).
It can be seen that realizing the inverse perspective mapping requires computing the six extrinsic parameters: ψ, φ, θ and t = (l, d, h).
Take an arbitrary line L on the level ground that is parallel to the X_v axis at distance a from it; its parametric equation in the body coordinate system is x_v = s, y_v = a, z_v = 0, where s is any real number. According to the pinhole imaging model, combined with formula (1), the parametric equation of line L in the image plane coordinate system is:
u = f_i d_x · y_c / x_c = f_i d_x (s·r_12 + a·r_22 − l·r_12 − d·r_22 − h·r_32) / (s·r_11 + a·r_21 − l·r_11 − d·r_21 − h·r_31)
v = −f_j d_y · z_c / x_c = −f_j d_y (s·r_13 + a·r_23 − l·r_13 − d·r_23 − h·r_33) / (s·r_11 + a·r_21 − l·r_11 − d·r_21 − h·r_31)    (3)
where d_x and d_y are the horizontal and vertical scale factors (camera intrinsics); u and v are the coordinates in the image plane coordinate system; i and j are the coordinates in the pixel coordinate system; and f_i and f_j are the focal lengths in the i and j directions (camera intrinsics). The line has a vanishing point (u_h, v_h) in the image coordinate system.
If there are three lines on the ground parallel to L, the three lines share the same vanishing point. Using this equality, and with the camera intrinsics known, the extrinsic parameters ψ, φ, θ and t = (l, d, h) can be obtained. Substituting ψ, φ, θ and t = (l, d, h) into formula (3) gives, for each point on the body coordinate plane, its corresponding point in image coordinates; this realizes the transformation from the image coordinate plane to the body coordinate plane, completes the inverse perspective mapping, and yields the top view of the road scene.
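As a concrete illustration of equations (1) and (3), the following sketch projects a body-frame ground point into the image plane. The rotation composition order, the folding of f_i·d_x and f_j·d_y into single constants, and all numeric values are assumptions for illustration, not the patent's calibrated parameters.

```python
import numpy as np

def rotation(psi, phi, theta):
    """Rotation matrix for successive rotations about the Xv, Yv, Zv axes.
    The composition order Rz @ Ry @ Rx is an assumption; the patent's
    formula (2) is not reproduced in the text."""
    cx, sx = np.cos(psi), np.sin(psi)
    cy, sy = np.cos(phi), np.sin(phi)
    cz, sz = np.cos(theta), np.sin(theta)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def project(p_v, psi, phi, theta, t, fidx, fjdy):
    """Body-frame point -> image-plane (u, v) per eqs. (1) and (3):
    p_c = R^T (p_v - t), then u = fi*dx*yc/xc, v = -fj*dy*zc/xc."""
    R = rotation(psi, phi, theta)
    p_c = R.T @ (np.asarray(p_v, float) - np.asarray(t, float))  # invert eq. (1)
    xc, yc, zc = p_c
    return fidx * yc / xc, -fjdy * zc / xc
```

With a level camera 1.5 m above the ground and zero rotation, a ground point 10 m ahead on the car's axis projects to u = 0 and v below the horizon, as expected.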
Step 2: image segmentation
Ideally, a ground arrow marking is a white connected region with an obvious brightness difference from the road surface. However, owing to occlusion and interference from other regions, a binarization with a fixed threshold can hardly extract these regions. To avoid losing the brightness information of the markings, the invention segments the image by K-means clustering in a particular color space. Clustering is a method of grouping targets; K-means regards each target as having its own spatial position and partitions the targets so that positions within a cluster are as close as possible while positions in different clusters are as far apart as possible. K-means requires the number of clusters and the distance measure between two target positions to be specified in advance.
The HSV color space is approximately perceptually uniform: compared with RGB it better matches the human visual system, and the Euclidean distance between two points in HSV space is roughly proportional to the difference perceived by a person. In HSV space, the saturation component S and the brightness component V describe the color and shape characteristics of the image respectively, the two components are independent, and V is unrelated to the color information of the image. Therefore, the invention converts the RGB color image into HSV space, recombines the saturation component S and the brightness component V into a two-channel image, and applies K-means clustering with the Euclidean distance measure to split this recombined image into three layers. The layer whose pixels best satisfy the required brightness and saturation conditions is the final segmentation result, and the connected regions it contains become the candidate regions.
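The S/V recombination and three-layer K-means split described above can be sketched as follows. A plain Lloyd iteration in pure NumPy is used; the deterministic brightness-sorted seeding is an implementation convenience of this sketch, not part of the patent.

```python
import numpy as np

def kmeans(X, k=3, iters=20):
    """Plain Lloyd's algorithm on feature rows X. Centers are seeded along
    the last feature (brightness) so the sketch is deterministic."""
    order = np.argsort(X[:, -1])
    centers = X[order[np.linspace(0, len(X) - 1, k).astype(int)]].astype(float).copy()
    for _ in range(iters):
        # squared Euclidean distance of every pixel to every center
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(0)
    return labels, centers

def segment_sv(S, V, k=3):
    """Cluster pixels on their (S, V) values into k layers and keep the
    layer with the highest mean brightness as the marking mask."""
    X = np.stack([S.ravel(), V.ravel()], 1).astype(float)
    labels, centers = kmeans(X, k)
    bright = centers[:, 1].argmax()
    return (labels == bright).reshape(V.shape)
```

On a synthetic frame with a dark road, a mid-gray band and a bright arrow patch, the returned mask selects exactly the bright patch.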
Step 3: candidate region screening
In China, the dimensions of road markings must meet the unified national standard. In the invention, candidate regions are screened with geometric dimension parameters. Since the arrow length is distorted by distance, only the arrow width, the aspect ratio and the area are used to exclude non-arrow regions. In real road conditions an arrow marking may be incomplete because of occlusion and similar factors, so we do not screen strictly by the national-standard sizes; instead, intervals around the standard sizes are chosen as the screening criteria.
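A minimal sketch of the screening rule follows. The interval bounds below are illustrative placeholders, since the patent derives its intervals from the national-standard arrow dimensions without quoting numbers here.

```python
# Keep connected regions whose width, aspect ratio and area fall inside
# loose intervals around the standard arrow size. All bounds are assumed
# values for illustration (pixels in the top view).
def screen_candidates(regions,
                      width_range=(20, 80),
                      aspect_range=(0.1, 0.6),   # width / height: arrows are tall
                      area_range=(300, 6000)):
    kept = []
    for r in regions:  # each r: dict with 'width', 'height', 'area'
        aspect = r['width'] / r['height']
        if (width_range[0] <= r['width'] <= width_range[1]
                and aspect_range[0] <= aspect <= aspect_range[1]
                and area_range[0] <= r['area'] <= area_range[1]):
            kept.append(r)
    return kept
```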
Step 4: edge detection
The edge of a connected region carries most of its geometric information, so the candidate regions filtered in the previous step are represented by their edges. Because burrs on the region edges and unavoidable image noise would disturb the subsequent matching, a dilation operation is first applied to each candidate region to reduce edge burrs; the Canny edge detector, which suppresses noise effectively and localizes edges accurately, is then applied to the candidate region to obtain smoother edges. The Canny algorithm optimizes the product of the signal-to-noise ratio and the localization measure, giving a near-optimal result, and thus handles well the burrs present on the extracted region edges.
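The dilate-then-trace step can be sketched with plain NumPy morphology; a true Canny detector, as the patent specifies, would replace the simple boundary extraction used here. Note that `np.roll` wraps at the borders, so the sketch assumes regions that do not touch the image edge.

```python
import numpy as np

def binary_dilate(mask):
    """3x3 dilation of a boolean mask via shifted ORs (closes small gaps)."""
    out = mask.copy()
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= np.roll(np.roll(mask, dy, 0), dx, 1)
    return out

def binary_erode(mask):
    """3x3 erosion: a pixel survives only if its whole neighborhood is set."""
    out = mask.copy()
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out &= np.roll(np.roll(mask, dy, 0), dx, 1)
    return out

def boundary_points(mask):
    """Edge pixels of the dilated region (a stand-in for the Canny step)."""
    filled = binary_dilate(mask)
    edge = filled & ~binary_erode(filled)
    return np.argwhere(edge)
```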
Step 5: feature extraction and feature set construction
The local shape information around an edge point can be described by a feature; a local multi-scale HOG feature is adopted to describe the region edges. Division of a rectangular HOG block: a block consists of several cells, and a cell consists of several pixels. Gradient orientation statistics are computed independently per cell, producing a histogram with gradient orientation on the horizontal axis. The orientation range may be taken as 0-180 or 0-360 degrees; for ground arrow detection in this invention, 0-180 degrees gives better results. The range is divided into several orientation bins, each corresponding to one histogram column; we use 9 bins.
The local multi-scale HOG feature is computed as follows: centered on each edge point extracted in the previous step, rectangular blocks of several scales are taken and their gradient orientation statistics are collected, yielding local HOG feature vectors at different scales; concatenating these vectors produces a combined vector rich in local information at the edge point. The concrete steps of feature extraction and feature set construction are:
5.1) Compute the gradient orientation at every edge point. For an arbitrary edge point A, take the a × a image block centered on A and divide it into 4 cells; with the gradient orientation range divided into k bins, this gives a 4 × k = 4k-dimensional local HOG feature vector;
5.2) Take the 2a × 2a image block centered on A and divide it equally into 4 sub-blocks of a × a; compute a 4k-dimensional vector for each sub-block by the method of 5.1, and concatenate the four vectors into a 4k × 4 = 16k-dimensional local HOG feature vector;
5.3) Take the 4a × 4a image block centered on A and divide it equally into 4 sub-blocks of 2a × 2a; compute a 16k-dimensional vector for each sub-block by the method of 5.2, and concatenate the four vectors into a 16k × 4 = 64k-dimensional local HOG feature vector;
5.4) Concatenate the local HOG vectors of the three scales above into the 4k + 16k + 64k = 84k-dimensional feature vector of edge point A; the connected region is represented by these 84k-dimensional vectors;
5.5) Build the template library containing the straight marking S, the left-turn marking L and the straight-right marking SR. For the arrow images in the template library and the candidate regions in the test image, obtain the edge points and the 84k-dimensional vector of each edge point by the above steps; the vectors of all edge points of a candidate region form the feature vector set of that candidate region, and the vectors of all edge points of a template image form the feature vector set of that template.
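Steps 5.1-5.4 can be sketched as follows for a = 16 and k = 9 bins, giving the 756-dimensional descriptor of the detailed description. The unweighted hard-binning histogram is a simplification of standard HOG (no block normalization or bilinear vote interpolation).

```python
import numpy as np

def _quarters(block):
    """Split a square block into its four equal quadrants."""
    h = block.shape[0] // 2
    return [block[:h, :h], block[:h, h:], block[h:, :h], block[h:, h:]]

def cell_hog(cell, bins=9):
    """Unsigned (0-180 deg) gradient-orientation histogram of one cell,
    with gradient magnitude as the vote weight."""
    gy, gx = np.gradient(cell.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.degrees(np.arctan2(gy, gx)) % 180.0
    idx = np.minimum((ang / 180.0 * bins).astype(int), bins - 1)
    return np.bincount(idx.ravel(), weights=mag.ravel(), minlength=bins)

def block_hog(block, bins=9):
    """Step 5.1: four cells per block, one histogram each, concatenated."""
    return np.concatenate([cell_hog(c, bins) for c in _quarters(block)])

def multiscale_hog(img, y, x, a=16, bins=9):
    """84k-dim descriptor at edge point (y, x); a=16, k=9 gives 756 dims."""
    feats = []
    for scale in (a, 2 * a, 4 * a):
        half = scale // 2
        patch = img[y - half:y + half, x - half:x + half]
        if scale == a:
            feats.append(block_hog(patch, bins))            # 4k dims  (5.1)
        elif scale == 2 * a:
            for s in _quarters(patch):                      # 16k dims (5.2)
                feats.append(block_hog(s, bins))
        else:
            for s in _quarters(patch):                      # 64k dims (5.3)
                for q in _quarters(s):
                    feats.append(block_hog(q, bins))
    return np.concatenate(feats)                            # 84k dims (5.4)
```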
To realize arrow matching we construct a template library. It contains six kinds of ground arrow markings: left-turn, right-turn, straight, straight-left, straight-right, and straight-left/right. All arrow markings in the library undergo the two steps of edge detection and feature extraction above, and the extracted feature vectors form the feature sets used in the subsequent matching process.
Step 6: point-pair matching
For each test image, the above steps give the candidate regions, their edges, and the feature vector of every edge point. Each candidate region in a test image must be matched against every template in the library. In the invention, point-pair matching is first performed with the edge-point feature vectors, selecting the edge points that share the same local structure and excluding outliers, which further improves the matching efficiency.
Suppose a candidate region and a certain template have M and N edge points respectively, each with a corresponding feature vector. We construct an M × N matrix D storing the Euclidean distances between the two sets of feature vectors; the Euclidean distance measures the difference between two feature vectors, and element d_{i,j} of D holds the distance between the vector of the i-th edge point of the candidate region and the vector of the j-th edge point of the template image. Point-pair matching then uses the distance matrix D as follows. Let D_i be the i-th row of D, and let D'_i be D_i sorted in ascending order; the ratios of adjacent elements of D'_i form the vector R = [r_1, ..., r_j, ..., r_{N-1}]. If r_k is the first element of R exceeding a preset threshold α, then the template edge points corresponding to the first k values of D'_i are the points matched to the i-th edge point of the candidate region. Applying this operation to every row of D finds, for each candidate-region edge point, its matching template points, forming the matched pairs in the candidate-to-template direction. The matching process is bidirectional: the same processing on every column of D yields the matched pairs in the template-to-candidate direction, and the intersection of the two groups of pairs forms the matched-pair set, the final result of point-pair matching. This matched-pair set is then used for geometric structure matching.
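The row-wise ratio test and the bidirectional intersection can be sketched as below. The direction of the adjacent-element ratio (larger over smaller) and the value of α are assumptions, since the patent's formula for r_j is not reproduced in this text; distances are assumed strictly positive.

```python
import numpy as np

def one_way_matches(D, alpha=1.5):
    """Row-wise ratio test on a distance matrix D (candidate x template).
    Sort each row ascending; if the ratio d'_{k+1}/d'_k is the first one
    exceeding alpha, accept the template points giving the k smallest
    distances as matches of that candidate point."""
    pairs = set()
    for i, row in enumerate(D):
        order = np.argsort(row)
        d = row[order]
        ratios = d[1:] / d[:-1]
        above = np.nonzero(ratios > alpha)[0]
        k = above[0] + 1 if len(above) else len(d)
        pairs.update((i, int(j)) for j in order[:k])
    return pairs

def point_pair_matches(D, alpha=1.5):
    """Bidirectional matching: intersect the row-wise (candidate->template)
    and column-wise (template->candidate) results."""
    fwd = one_way_matches(D, alpha)
    bwd = {(i, j) for (j, i) in one_way_matches(D.T, alpha)}
    return fwd & bwd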
Step 7: geometric structure matching
The point-pair matching of the previous step improves the matching effectiveness, but a candidate region may share identical local structures with several templates in the library, so point-pair matching alone cannot give an accurate recognition result. The invention therefore performs geometric structure matching on the basis of the point-pair matches, analyzing the overall geometric structure formed by the matched pairs to identify the category of the candidate region accurately.
The matched-pair set obtained in the previous step contains the matched pairs between the candidate region and every template image; half of the points in the pairs lie in the candidate region and half in the template image. These two groups of points form two point sets, in the test image and the template image respectively. The geometric center c of a point set is computed as the mean of the point coordinates. Let p_i and p_j be any two points of a set, let d(p_i, c) and d(p_j, c) denote the lengths of the vectors from c to p_i and from c to p_j, and let θ_ij be the angle between the two vectors (θ_ij ∈ [0, π]). A point set with K_0 points is represented by two K_0 × K_0 lower triangular matrices, defined as:
G = {g_ij | i ∈ [1, K_0 − 1]; j ∈ [0, i − 1]}    (6)
Θ = {θ_ij | i ∈ [1, K_0 − 1]; j ∈ [0, i − 1]}    (7)
where g_ij = min(d(p_i, c)/d(p_j, c), d(p_j, c)/d(p_i, c)). Clearly, both matrices are invariant to rotation and scale and are affected only by the geometric structure of the point set.
Let G_c and G_t be the G matrices of the candidate region and the template image, and Θ_c and Θ_t their Θ matrices. The differences of the elements of Θ_c and Θ_t are used to filter G_c and G_t so that abnormal matched pairs are excluded. The filtering rule is:
g̃_c^ij = g_c^ij if |θ_c^ij − θ_t^ij| ≤ γ, and 0 if |θ_c^ij − θ_t^ij| > γ    (8)
g̃_t^ij = g_t^ij if |θ_c^ij − θ_t^ij| ≤ γ, and 0 if |θ_c^ij − θ_t^ij| > γ    (9)
G̃_c = {g̃_c^ij | i ∈ [1, K_0 − 1]; j ∈ [0, i − 1]}    (10)
G̃_t = {g̃_t^ij | i ∈ [1, K_0 − 1]; j ∈ [0, i − 1]}    (11)
where γ is a threshold set by experiment. The Euclidean distance is adopted in the invention to measure the difference between G̃_c and G̃_t:
e = (1/s) Σ_{i,j} (g̃_c^ij − g̃_t^ij)²    (12)
where s is the number of non-zero matrix elements. Suppose there are K template images in the library; an e value is computed between any candidate region and each of the K templates. If the minimum of these values is below a preset threshold, the template image corresponding to that minimum is considered to be of the same category as the candidate region. The threshold differs from template to template and is in every case obtained by experiment.
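Equations (5)-(12) can be sketched directly: build the lower-triangular G and Θ matrices of each point set, zero out the angularly inconsistent entries with threshold γ, and average the squared differences over the non-zero entries. The γ value below is an illustrative placeholder.

```python
import numpy as np

def structure_matrices(points):
    """Lower-triangular G (distance ratios) and Theta (angles) of a point
    set, per eqs. (6)-(7); both are scale- and rotation-invariant."""
    P = np.asarray(points, float)
    c = P.mean(axis=0)                  # geometric center: mean of coordinates
    vec = P - c
    d = np.linalg.norm(vec, axis=1)
    n = len(P)
    G = np.zeros((n, n))
    T = np.zeros((n, n))
    for i in range(1, n):
        for j in range(i):
            G[i, j] = min(d[i] / d[j], d[j] / d[i])
            cosang = np.dot(vec[i], vec[j]) / (d[i] * d[j])
            T[i, j] = np.arccos(np.clip(cosang, -1.0, 1.0))
    return G, T

def structure_distance(pc, pt, gamma=0.3):
    """Score e of eqs. (8)-(12) between two equally sized matched point
    sets: zero out entries whose angles disagree by more than gamma, then
    average the squared G differences over the non-zero entries."""
    Gc, Tc = structure_matrices(pc)
    Gt, Tt = structure_matrices(pt)
    keep = np.abs(Tc - Tt) <= gamma
    Gc, Gt = Gc * keep, Gt * keep
    s = max(np.count_nonzero(np.abs(Gc) + np.abs(Gt)), 1)
    return ((Gc - Gt) ** 2).sum() / s
```

Rotating and uniformly scaling one of the point sets leaves e at zero, reflecting the invariance claimed for G and Θ.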
The benefit of the invention is that it overcomes the occlusion, deformation and rotation that often occur during ground arrow marking detection and recognition, and achieves a high recognition rate.
Brief description of the drawings
Fig. 1 is system block diagram;
Fig. 2(a) is the initial image; Fig. 2(b) is the top view after inverse perspective mapping; Fig. 2(c) is the binary image after K-means clustering;
Fig. 3(a): detection and recognition results when multiple arrows appear simultaneously; Fig. 3(b): results for a worn arrow; Fig. 3(c): results for a severely deformed arrow; Fig. 3(d): results for a tilted arrow; Fig. 3(e): results under interference from other road markings.
Detailed description of the invention
Step 1: On real roads, combined straight-left/right markings rarely appear, and the left-turn and right-turn markings (similarly the straight-left and straight-right markings) are mirror images of each other, so only one of each pair needs to be detected and identified; the invention therefore detects and identifies only the straight marking (S), the left-turn marking (L) and the straight-right marking (SR).
Step 2: Assume the road ahead of the camera is level. Let I = {(u, v)} ∈ E² denote the initial image and V = {(x_v, y_v, z_v)} ∈ E³ the image after inverse perspective mapping; the scene top view we want is W = {(x, y, 0)} ∈ V. The inverse perspective mapping can be regarded as a transformation from the image coordinate plane to the body coordinate plane, in which different coordinate positions represent the same scene. In the invention a region of interest is set (the half of the body coordinate plane nearest the car), and formulas (1) and (2) give the transformed image W. The value of a pixel in W equals the value of its corresponding pixel in I; if the corresponding position of a pixel of W falls outside the captured image, that pixel of W is set to black. As shown in Fig. 2(b), the top view of the original image is obtained after inverse perspective mapping.
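The per-pixel lookup of Step 2 (copy the corresponding source pixel, black when out of range) can be sketched as follows. The level-camera pinhole model and all numeric parameters here are illustrative assumptions standing in for the calibrated extrinsics of formulas (1)-(3).

```python
import numpy as np

def ground_to_pixel(x, y, h=1.5, f=800.0, cx=320.0, cy=240.0):
    """Body-frame ground point (x ahead, y right) -> pixel (u, v) for a
    level camera at height h; assumed parameters, for illustration only."""
    u = f * y / x + cx          # lateral offset over forward distance
    v = f * h / x + cy          # ground plane appears below the horizon
    return u, v

def build_top_view(img, x_range=(5.0, 25.0), y_range=(-5.0, 5.0), size=(200, 100)):
    """Fill the top view W: project each ground point into the source image
    and copy the pixel; positions outside the frame stay black."""
    H, Wd = img.shape[:2]
    rows, cols = size
    out = np.zeros((rows, cols) + img.shape[2:], img.dtype)   # black by default
    xs = np.linspace(x_range[1], x_range[0], rows)  # far -> near, top -> bottom
    ys = np.linspace(y_range[0], y_range[1], cols)
    for r, x in enumerate(xs):
        for cidx, y in enumerate(ys):
            u, v = ground_to_pixel(x, y)
            ui, vi = int(round(u)), int(round(v))
            if 0 <= ui < Wd and 0 <= vi < H:
                out[r, cidx] = img[vi, ui]
    return out
```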
Step 3: The Euclidean distance between two points in HSV space is approximately proportional to the difference perceived by a person, and HSV has an important property: the brightness component V is unrelated to the color information of the image, i.e. the saturation and brightness of an image in HSV space are relatively independent. The RGB color image is first converted into HSV space, and the V layer and S layer are taken out and recombined into a two-channel image, on which K-means clustering is run. In the invention the number of segmentation layers is set to 3, and 3 clustering passes with the Euclidean distance measure gave the best clustering result. Finally, the layer in which most pixel values exceed 200 is chosen as the final segmentation result, and the connected regions it contains become the candidate regions. The method fully exploits the brightness difference and color information between the ground arrow markings and the road surface to detect the candidate regions in every frame of the video.
Step 3: The segmented image contains many connected regions. To screen them, we compute the width, aspect ratio and area of each connected region and run the preliminary screening of Table 1 on these three quantities. As shown in Fig. 2(c), a binary image containing all candidate regions is obtained after K-means clustering and geometric parameter screening.
Table 1. Candidate region screening
Step 4: in order to make the edge extracted more smooth, we have carried out expansive working to candidate region, decreases the interference such as burr.Then candidate region is carried out Canny rim detection, obtain more smooth edge.
Step 6: The gradient direction is computed at every edge point detected in the previous step. For an arbitrary edge point A, first crop a 16 × 16 image block centered on A, divide it into 4 cells, and quantize the gradient direction evenly into 9 sub-intervals, giving a 4 × 9 = 36-dimensional HOG feature vector. Next crop a 32 × 32 image block centered on A, divide it equally into four 16 × 16 sub-blocks, divide each sub-block into 4 cells, compute the 4 × 9 = 36-dimensional local feature vector of each sub-block, and concatenate the four vectors into a 36 × 4 = 144-dimensional feature vector. Then crop a 64 × 64 image block centered on A, divide it equally into four 32 × 32 sub-blocks, compute a 144-dimensional feature vector for each sub-block by the method above, and concatenate the four vectors into a 144 × 4 = 576-dimensional vector. Finally, concatenating the local HOG features of the three scales forms the 36 + 144 + 576 = 756-dimensional feature vector of this edge point.
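The three-scale descriptor can be sketched as follows. This is a simplified HOG, assuming magnitude-weighted unsigned-orientation histograms without block normalisation; the 64 × 64 patch is tiled directly into sixteen 16 × 16 blocks, which yields the same 576 values as the two-level split in the text, only in a different order. All function names are ours.

```python
import numpy as np

def cell_hist(gx, gy, bins=9):
    """Magnitude-weighted histogram of unsigned gradient orientation."""
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)              # unsigned: [0, pi)
    idx = np.minimum((ang / np.pi * bins).astype(int), bins - 1)
    return np.bincount(idx.ravel(), weights=mag.ravel(), minlength=bins)

def block_descriptor(gx, gy, bins=9):
    """One square block -> 2x2 cells -> 4 * bins values (36-D for 9 bins)."""
    h, w = gx.shape
    cells = []
    for ys in (slice(0, h // 2), slice(h // 2, h)):
        for xs in (slice(0, w // 2), slice(w // 2, w)):
            cells.append(cell_hist(gx[ys, xs], gy[ys, xs], bins))
    return np.concatenate(cells)

def multiscale_hog(img, y, x, scales=(16, 32, 64), bins=9):
    """756-D descriptor of edge point (y, x): 36-D from the 16x16 patch,
    144-D from the 32x32 patch, 576-D from the 64x64 patch, concatenated."""
    gy, gx = np.gradient(img.astype(float))
    feats = []
    for s in scales:
        half = s // 2
        wy = gy[y - half:y + half, x - half:x + half]
        wx = gx[y - half:y + half, x - half:x + half]
        n = s // 16                                      # 16x16 sub-blocks per side
        for by in range(n):
            for bx in range(n):
                feats.append(block_descriptor(
                    wx[by * 16:(by + 1) * 16, bx * 16:(bx + 1) * 16],
                    wy[by * 16:(by + 1) * 16, bx * 16:(bx + 1) * 16], bins))
    return np.concatenate(feats)
```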
Step 7: For the arrow images in the template base and the candidate regions in the test image alike, edge points and the 756-dimensional feature vector of each edge point are obtained by the above steps. The feature vectors of all edge points of a candidate region constitute the feature-vector set of that candidate region; similarly, the feature vectors of all edge points of a template image constitute the feature-vector set of that template. Next, each candidate region is point-pair matched against every template in the template base, selecting the edge points in the template set that share the same local structure as the candidate region. The point-pair matching follows the procedure of Table 2.
Table 2: Point-pair matching
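Since Table 2 itself is not reproduced in this text, the sketch below implements the bidirectional ratio test as spelled out in step 6 of the claims. Two points are assumptions: the threshold value alpha is a placeholder, and the adjacent-element ratio is read as d'(j+1)/d'(j); the function names are ours.

```python
import numpy as np

def row_matches(D, alpha=1.5):
    """Per row of the distance matrix: sort distances ascending; the first
    adjacent-element ratio greater than alpha cuts off the accepted matches."""
    matches = set()
    for i, row in enumerate(D):
        order = np.argsort(row)
        d = row[order]
        ratios = d[1:] / np.maximum(d[:-1], 1e-12)
        big = np.nonzero(ratios > alpha)[0]
        k = big[0] + 1 if len(big) else len(d)
        for j in order[:k]:
            matches.add((i, int(j)))
    return matches

def point_pair_match(Fc, Ft, alpha=1.5):
    """Match candidate descriptors Fc (M x d) against template descriptors
    Ft (N x d): run the ratio test along the rows and along the columns of
    the Euclidean distance matrix D, then keep the intersection."""
    D = np.linalg.norm(Fc[:, None, :] - Ft[None, :, :], axis=2)
    rows = row_matches(D, alpha)
    cols = {(i, j) for (j, i) in row_matches(D.T, alpha)}
    return sorted(rows & cols)
```

Intersecting the two directions discards one-sided matches, which is what makes the subsequent geometric matching stable.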
Step 8: Half of the matched points obtained in the previous step belong to the candidate region and form a scatter plot there; the other half of the edge points form a scatter plot on the template image. Next, the scatter plot of the candidate region is geometrically matched against the scatter plot of each template. For any two points of a scatter plot, the angle θij and the ratio gij of their distance to the center distance are computed, and the matrices G and Θ are constructed according to formulas (6) and (7). This yields the matrices Gc and Θc of the candidate region and Gt and Θt of the template image. The differences between corresponding elements of Θc and Θt are then used to screen the elements of Gc and Gt according to formulas (8) and (9): if the angular difference exceeds 90 degrees, that pair of matched points is treated as an outlier. Finally the Euclidean distance between the screened Gc and Gt is computed; this operation is carried out between the candidate region and every template image. If the minimum Euclidean distance is below a threshold, the candidate region is assigned the class of the corresponding template; otherwise the candidate region is a non-arrow sign.
Fig. 3 shows detection and recognition results of the present invention on ground arrow signs under varied conditions. Our method can handle multiple arrows appearing simultaneously, worn arrow signs, globally deformed arrow signs, rotated arrow signs, and interference from other road markings. Even under such non-ideal conditions, the invention maintains a good recognition rate.

Claims (1)

1. A ground arrow sign detection and recognition method based on point-pair matching and geometric structure matching, characterized in that the method detects and recognizes only the straight-ahead sign S, the straight-ahead/left-turn sign L and the straight-ahead/right-turn sign SR, and comprises the following steps:
The first step, inverse projection maps
1.1) In the world coordinate system, Xv points along the vehicle's longitudinal axis toward the front, Yv points to the right, perpendicular to the longitudinal axis, and Zv points upward, perpendicular to the longitudinal axis; assuming the ground is flat, the origin of the camera coordinate system is at the optical center of the camera, and the rotation angles about the Xv, Yv and Zv axes are, in turn, ψ, φ and θ; the coordinates of the optical center in the body coordinate system are t = (l, d, h); let pv(xv, yv, zv) be a point in the body coordinate system and pc(xc, yc, zc) its coordinates in the camera coordinate system; the relation between them is:
pv^T = R · pc^T + t^T    (1)
wherein R is the rotation matrix determined by the angles ψ, φ and θ;
1.2) Take an arbitrary straight line L on the flat road surface that is parallel to the Xv axis at distance a from it; its parametric equation in the body coordinate system is xv = s, yv = a, zv = 0, where s is any real number; with known camera intrinsic parameters, the parametric equation of line L in the image plane coordinate system is:
wherein dx, dy, fi and fj are camera intrinsic parameters: dx is the transverse scale coefficient, dy the longitudinal scale coefficient, fi the focal length in the i direction and fj the focal length in the j direction; u and v are coordinates in the image plane coordinate system; i and j are coordinates in the pixel coordinate system;
1.3) Choose two more straight lines on the road surface parallel to line L, and compute, according to formulas (4) and (5), the vanishing point (uh, vh) of line L and of these two lines in the image coordinate system:
the three straight lines share the same vanishing point; using these equalities, the extrinsic parameters ψ, φ, θ and t = (l, d, h) are solved;
1.4) Substituting the extrinsic parameters ψ, φ, θ and t = (l, d, h) into formula (3) gives, for every point on the body coordinate plane, the corresponding point in image coordinates, realizing the transformation from the image coordinate plane to the body coordinate plane; this completes the inverse projection mapping and yields the top view of the road scene;
Second step, image segmentation
The RGB color image is converted to the HSV color space; the saturation component S and luminance component V are recombined and K-means clustered; using a Euclidean distance metric the image is divided into three layers; the layer with the most pixels meeting the requirement is the final segmentation result, and the connected regions it contains are taken as candidate regions;
Third step, candidate region screening
The standard dimensions of arrow signs are used to screen the candidate regions: arrow width, aspect ratio and area are used to exclude non-arrow regions;
Fourth step, edge detection
A dilation operation is applied to the candidate region to reduce edge burrs; Canny edge detection is then performed on the candidate region, yielding smoother edges;
Fifth step, feature extraction and feature set construction
5.1) Local HOG features are extracted at the edge points; the gradient direction is computed at every edge point; for an arbitrary edge point A, an a × a image block centered on A is cropped and divided into 4 cells; the gradient direction span is divided into k sub-intervals, giving a 4 × k = 4k-dimensional local HOG feature vector;
5.2) A 2a × 2a image block centered on edge point A is taken and divided equally into four a × a sub-blocks; a 4k-dimensional feature vector is computed for each sub-block by the method of 5.1, and the four 4k-dimensional vectors are concatenated into a 4k × 4 = 16k-dimensional local HOG feature vector;
5.3) A 4a × 4a image block centered on edge point A is cropped and divided equally into four 2a × 2a sub-blocks; a 16k-dimensional feature vector is computed for each sub-block by the method of 5.2, and the four 16k-dimensional vectors are concatenated into a 16k × 4 = 64k-dimensional local HOG feature vector;
5.4) The local HOG feature vectors of the above three scales are concatenated to form the 4k + 16k + 64k = 84k-dimensional feature vector of edge point A, which represents that edge point;
5.5) A template base is constructed containing the straight-ahead sign S, the straight-ahead/left-turn sign L and the straight-ahead/right-turn sign SR; for the arrow images in the template base and the candidate regions in the test image alike, edge points and the 84k-dimensional feature vector of each edge point are obtained by the above steps; the feature vectors of all edge points of a candidate region constitute the feature-vector set of that candidate region; the feature vectors of all edge points of a template image constitute the feature-vector set of that template;
Sixth step, point-pair matching
6.1) Suppose M and N edge points have been extracted from the candidate region and from a given template respectively; an M × N matrix D is constructed that stores the Euclidean distances between the feature vectors of the candidate region and of the template image; Di is the i-th row of D; sorting the elements of Di in ascending order gives D'i, and the ratios of adjacent elements of D'i form R = [r1, …, rj, …, rN-1], i.e. rj = d'j+1/d'j; if rk is the first value in R greater than a preset threshold α, then the template image edge points corresponding to the first k values of D'i are the points matched to the i-th edge point of the candidate region;
6.2) The above process is applied to every row of matrix D, yielding for each candidate-region edge point the matched points on the template image and forming the matched point pairs in the candidate-region-to-template direction;
6.3) The same process is applied to the columns of matrix D, giving the matched point pairs in the template-to-candidate-region direction; the intersection of the row and column matching results completes the point-pair matching between the template image and the candidate region;
Seventh step, geometric structure matching
The point-pair matching results form scatter plots on the template image and the test image respectively; for any two points of a scatter plot, the angle θij and the ratio gij of their distance to the center distance are computed, and two K0 × K0 lower-triangular matrices are constructed to represent a scatter plot containing K0 points:
G = {gij | i ∈ [1, K0−1]; j ∈ [0, i−1]}    (6)
Θ = {θij | i ∈ [1, K0−1]; j ∈ [0, i−1]}    (7)
Gc and Gt are the G matrices of the candidate region and of the template image respectively, and Θc and Θt are their Θ matrices; the differences between corresponding elements of Θc and Θt are used to filter Gc and Gt, excluding outlier points and yielding G̃c and G̃t:
g̃cij = gcij, if |θcij − θtij| ≤ γ;  g̃cij = 0, if |θcij − θtij| > γ    (8)

g̃tij = gtij, if |θcij − θtij| ≤ γ;  g̃tij = 0, if |θcij − θtij| > γ    (9)

G̃c = {g̃cij | i ∈ [1, K0−1]; j ∈ [0, i−1]}    (10)

G̃t = {g̃tij | i ∈ [1, K0−1]; j ∈ [0, i−1]}    (11)
The Euclidean distance is used to measure the difference between the matrix G̃c of the candidate region and the matrix G̃t of each template image; if the minimum Euclidean distance is less than a preset threshold, the candidate region is assigned the category of the template image corresponding to that distance.
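The geometric structure matching of the seventh step can be sketched as follows. Two assumptions are made explicit: normalising gij by the mean distance of the points to their centroid is one reading of the "ratio of the distance to the center distance", and both scatter plots are assumed to contain the same number K0 of matched points (which holds after point-pair matching); the function names are ours.

```python
import numpy as np

def shape_matrices(pts):
    """Lower-triangular matrices Theta (pairwise angles) and G (pairwise
    distances divided by the mean distance to the centroid), after
    eqs. (6)-(7); the centroid normalisation is our assumption."""
    pts = np.asarray(pts, dtype=float)
    K = len(pts)
    centre = pts.mean(axis=0)
    norm = max(np.mean(np.linalg.norm(pts - centre, axis=1)), 1e-12)
    G = np.zeros((K, K))
    Th = np.zeros((K, K))
    for i in range(1, K):
        for j in range(i):
            d = pts[i] - pts[j]
            G[i, j] = np.linalg.norm(d) / norm
            Th[i, j] = np.arctan2(d[1], d[0])
    return G, Th

def geometric_distance(pc, pt, gamma=np.pi / 2):
    """Zero out pairs whose angles differ by more than gamma, following
    eqs. (8)-(9), then return the Euclidean distance between the screened
    G matrices; both scatter plots must have the same point count K0."""
    Gc, Tc = shape_matrices(pc)
    Gt, Tt = shape_matrices(pt)
    bad = np.abs(Tc - Tt) > gamma
    Gc[bad] = 0.0
    Gt[bad] = 0.0
    return np.linalg.norm(Gc - Gt)
```

Because the distances are normalised, a uniformly scaled copy of the same shape yields a distance near zero, which is what makes the measure robust to the size of the painted arrow.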
CN201610200615.8A 2016-03-30 2016-03-30 Based on point to matching and the matched ground arrow mark detection of geometry and recognition methods Expired - Fee Related CN105825203B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610200615.8A CN105825203B (en) 2016-03-30 2016-03-30 Based on point to matching and the matched ground arrow mark detection of geometry and recognition methods


Publications (2)

Publication Number Publication Date
CN105825203A true CN105825203A (en) 2016-08-03
CN105825203B CN105825203B (en) 2018-12-18

Family

ID=56526615

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610200615.8A Expired - Fee Related CN105825203B (en) 2016-03-30 2016-03-30 Based on point to matching and the matched ground arrow mark detection of geometry and recognition methods

Country Status (1)

Country Link
CN (1) CN105825203B (en)

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106651963A (en) * 2016-12-29 2017-05-10 清华大学苏州汽车研究院(吴江) Mounting parameter calibration method for vehicular camera of driving assistant system
GB2559250A (en) * 2016-12-09 2018-08-01 Ford Global Tech Llc Parking-lot-navigation system and method
CN108437986A (en) * 2017-02-16 2018-08-24 上海汽车集团股份有限公司 Vehicle drive assist system and householder method
CN108898078A (en) * 2018-06-15 2018-11-27 上海理工大学 A kind of traffic sign real-time detection recognition methods of multiple dimensioned deconvolution neural network
CN109902718A (en) * 2019-01-24 2019-06-18 西北大学 A kind of two-dimensional shapes matching process
CN109934169A (en) * 2019-03-13 2019-06-25 东软睿驰汽车技术(沈阳)有限公司 A kind of Lane detection method and device
CN111161140A (en) * 2018-11-08 2020-05-15 银河水滴科技(北京)有限公司 Method and device for correcting distorted image
CN111210456A (en) * 2019-12-31 2020-05-29 武汉中海庭数据技术有限公司 High-precision direction arrow extraction method and system based on point cloud
CN111476157A (en) * 2020-04-07 2020-07-31 南京慧视领航信息技术有限公司 Lane guide arrow recognition method under intersection monitoring environment
CN111656358A (en) * 2017-12-22 2020-09-11 诺瓦拉姆德克斯有限公司 Analyzing captured images to determine test outcomes
CN111783807A (en) * 2019-04-28 2020-10-16 北京京东尚科信息技术有限公司 Picture extraction method and device and computer-readable storage medium
CN111932621A (en) * 2020-08-07 2020-11-13 武汉中海庭数据技术有限公司 Method and device for evaluating arrow extraction confidence
CN112464737A (en) * 2020-11-04 2021-03-09 浙江预策科技有限公司 Road marking detection and identification method, electronic device and storage medium
CN113158976A (en) * 2021-05-13 2021-07-23 北京纵目安驰智能科技有限公司 Ground arrow recognition method, system, terminal and computer readable storage medium
CN113826107A (en) * 2019-05-23 2021-12-21 日立安斯泰莫株式会社 Object recognition device
CN114440834A (en) * 2022-01-27 2022-05-06 中国人民解放军战略支援部队信息工程大学 Object space and image space matching method of non-coding mark
CN114549649A (en) * 2022-04-27 2022-05-27 江苏智绘空天技术研究院有限公司 Feature matching-based rapid identification method for scanned map point symbols

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060098877A1 (en) * 2004-11-09 2006-05-11 Nick Barnes Detecting shapes in image data
US20110052046A1 (en) * 2006-11-07 2011-03-03 Recognition Robotics, Inc. System and method for visual searching of objects using lines
CN104361350A (en) * 2014-10-28 2015-02-18 奇瑞汽车股份有限公司 Traffic sign identification system
CN104463105A (en) * 2014-11-19 2015-03-25 深圳市腾讯计算机系统有限公司 Guide board recognizing method and device
CN105069419A (en) * 2015-07-27 2015-11-18 上海应用技术学院 Traffic sign detection method based on edge color pair and characteristic filters


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LI Chengming et al.: "A dynamic distance-transform strategy for generating Voronoi diagrams of ribbon (noodle-shaped) targets", Remote Sensing Information *
NIU Jie et al.: "Pedestrian detection method based on multi-scale and multi-shape HOG features", Computer Technology and Development *

Cited By (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2559250A (en) * 2016-12-09 2018-08-01 Ford Global Tech Llc Parking-lot-navigation system and method
US10481609B2 (en) 2016-12-09 2019-11-19 Ford Global Technologies, Llc Parking-lot-navigation system and method
CN106651963A (en) * 2016-12-29 2017-05-10 清华大学苏州汽车研究院(吴江) Mounting parameter calibration method for vehicular camera of driving assistant system
CN108437986A (en) * 2017-02-16 2018-08-24 上海汽车集团股份有限公司 Vehicle drive assist system and householder method
CN108437986B (en) * 2017-02-16 2020-07-03 上海汽车集团股份有限公司 Vehicle driving assistance system and assistance method
CN111656358A (en) * 2017-12-22 2020-09-11 诺瓦拉姆德克斯有限公司 Analyzing captured images to determine test outcomes
CN108898078A (en) * 2018-06-15 2018-11-27 上海理工大学 A kind of traffic sign real-time detection recognition methods of multiple dimensioned deconvolution neural network
CN111161140A (en) * 2018-11-08 2020-05-15 银河水滴科技(北京)有限公司 Method and device for correcting distorted image
CN111161140B (en) * 2018-11-08 2023-09-19 银河水滴科技(北京)有限公司 Distortion image correction method and device
CN109902718A (en) * 2019-01-24 2019-06-18 西北大学 A kind of two-dimensional shapes matching process
CN109934169A (en) * 2019-03-13 2019-06-25 东软睿驰汽车技术(沈阳)有限公司 A kind of Lane detection method and device
CN111783807A (en) * 2019-04-28 2020-10-16 北京京东尚科信息技术有限公司 Picture extraction method and device and computer-readable storage medium
CN113826107A (en) * 2019-05-23 2021-12-21 日立安斯泰莫株式会社 Object recognition device
CN111210456A (en) * 2019-12-31 2020-05-29 武汉中海庭数据技术有限公司 High-precision direction arrow extraction method and system based on point cloud
CN111210456B (en) * 2019-12-31 2023-03-10 武汉中海庭数据技术有限公司 High-precision direction arrow extraction method and system based on point cloud
CN111476157B (en) * 2020-04-07 2020-11-03 南京慧视领航信息技术有限公司 Lane guide arrow recognition method under intersection monitoring environment
CN111476157A (en) * 2020-04-07 2020-07-31 南京慧视领航信息技术有限公司 Lane guide arrow recognition method under intersection monitoring environment
CN111932621B (en) * 2020-08-07 2022-06-17 武汉中海庭数据技术有限公司 Method and device for evaluating arrow extraction confidence
CN111932621A (en) * 2020-08-07 2020-11-13 武汉中海庭数据技术有限公司 Method and device for evaluating arrow extraction confidence
CN112464737A (en) * 2020-11-04 2021-03-09 浙江预策科技有限公司 Road marking detection and identification method, electronic device and storage medium
CN112464737B (en) * 2020-11-04 2022-02-22 浙江预策科技有限公司 Road marking detection and identification method, electronic device and storage medium
CN113158976A (en) * 2021-05-13 2021-07-23 北京纵目安驰智能科技有限公司 Ground arrow recognition method, system, terminal and computer readable storage medium
CN113158976B (en) * 2021-05-13 2024-04-02 北京纵目安驰智能科技有限公司 Ground arrow identification method, system, terminal and computer readable storage medium
CN114440834A (en) * 2022-01-27 2022-05-06 中国人民解放军战略支援部队信息工程大学 Object space and image space matching method of non-coding mark
CN114440834B (en) * 2022-01-27 2023-05-02 中国人民解放军战略支援部队信息工程大学 Object space and image space matching method of non-coding mark
CN114549649A (en) * 2022-04-27 2022-05-27 江苏智绘空天技术研究院有限公司 Feature matching-based rapid identification method for scanned map point symbols

Also Published As

Publication number Publication date
CN105825203B (en) 2018-12-18

Similar Documents

Publication Publication Date Title
CN105825203A (en) Ground arrowhead sign detection and identification method based on dotted pair matching and geometric structure matching
Kong et al. General road detection from a single image
CN103605977B (en) Extracting method of lane line and device thereof
CN103258432B (en) Traffic accident automatic identification processing method and system based on videos
CN105335716B (en) A kind of pedestrian detection method extracting union feature based on improvement UDN
Fritsch et al. Monocular road terrain detection by combining visual and spatial information
CN102509098B (en) Fisheye image vehicle identification method
CN105005989B (en) A kind of vehicle target dividing method under weak contrast
CN102799859B (en) Method for identifying traffic sign
CN105488454A (en) Monocular vision based front vehicle detection and ranging method
CN102708356A (en) Automatic license plate positioning and recognition method based on complex background
CN106682586A (en) Method for real-time lane line detection based on vision under complex lighting conditions
CN103324920A (en) Method for automatically identifying vehicle type based on vehicle frontal image and template matching
DE102009050505A1 (en) Clear path detecting method for vehicle i.e. motor vehicle such as car, involves modifying clear path based upon analysis of road geometry data, and utilizing clear path in navigation of vehicle
CN104134200A (en) Mobile scene image splicing method based on improved weighted fusion
DE102009048892A1 (en) Clear traveling path detecting method for vehicle e.g. car, involves generating three-dimensional map of features in view based upon preferential set of matched pairs, and determining clear traveling path based upon features
CN104050447A (en) Traffic light identification method and device
CN111539303B (en) Monocular vision-based vehicle driving deviation early warning method
CN105139011A (en) Method and apparatus for identifying vehicle based on identification marker image
CN103324958B (en) Based on the license plate locating method of sciagraphy and SVM under a kind of complex background
CN103500327A (en) Vehicle type identification method of vehicles of same brand based on space position information
CN112381101B (en) Infrared road scene segmentation method based on category prototype regression
CN110733416B (en) Lane departure early warning method based on inverse perspective transformation
CN106919939A (en) A kind of traffic signboard Tracking Recognition method and system
Liu et al. Application of color filter adjustment and k-means clustering method in lane detection for self-driving cars

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20181218

CF01 Termination of patent right due to non-payment of annual fee