CN105005759A - Multi-characteristic fused monitoring image front vehicle window positioning and extracting method - Google Patents
Multi-characteristic fused monitoring image front vehicle window positioning and extracting method
- Publication number
- CN105005759A (application CN201510222612.X)
- Authority
- CN
- China
- Prior art keywords
- image
- region
- vehicle window
- value
- candidate region
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/59—Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/62—Text, e.g. of license plates, overlay texts or captions on TV images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/62—Text, e.g. of license plates, overlay texts or captions on TV images
- G06V20/625—License plates
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/08—Detecting or categorising vehicles
Abstract
The present invention provides a multi-feature-fusion method for locating and extracting the front vehicle window in checkpoint surveillance images. The method comprises: locating the vehicle boundary and cropping the image along it; locating the vehicle's license plate; establishing the relation between the license plate and the window boundary, coarsely locating the window and cropping along its boundary; partitioning the coarsely located window into regions to obtain candidate window regions; extracting three features from each candidate region, namely the morphological building index (MBI), shape, and sealing-strip spectrum; and fusing the three features through a fusion function to locate the window precisely and crop it out. By extracting multiple features of the vehicle window, the method avoids the limitation of a single feature being suited only to a particular environment, improves the accuracy of window detection, and exhibits good robustness.
Description
Technical field
The present invention relates to a positioning technique, in particular to a multi-feature-fusion method for locating and extracting the front vehicle window in checkpoint images.
Background art
With the development of science and technology and the strengthening of laws and regulations, automatically detecting whether drivers drive safely according to regulations is becoming an important component of intelligent transportation systems. Window localization is the preliminary step of such automatic detection: whether a vehicle carries the required environmental sticker, and whether the driver wears a seat belt or uses a phone while driving, are all checked within the window region. Locating the front window accurately and quickly is therefore a key point of an intelligent transportation system.
Vehicle windows are transparent and light-absorbing. In a checkpoint image, the scene in front of the window is reflected on it; changes of illumination, tree shadows and the like all appear on the glass, so the window may be partially occluded or change color. Windows also differ in shape from vehicle to vehicle, body colors vary, and the distance between the vehicle and the camera makes windows of the same vehicle type differ in size. All of this makes direct window localization in checkpoint images difficult. Moreover, from image analysis alone it is hard to exclude interference regions or to confirm that a region is indeed the window.
Existing front-window localization methods mostly extract window features directly, such as color and line features, to achieve localization. Their precision is low and their accuracy poor, so they cannot meet practical requirements.
Summary of the invention
The object of the present invention is to provide a more accurate front-window localization method. To solve the above technical problem, a multi-feature-fusion method for locating and extracting the front vehicle window in checkpoint images is disclosed, comprising vehicle localization, license-plate localization, coarse window localization, candidate-region partitioning, multi-feature extraction, feature fusion, fine window localization, and window extraction.
A multi-feature-fusion method for locating and extracting the front vehicle window in checkpoint images, comprising:
Step 1, locating the boundary of the vehicle and cropping the image along it;
Step 2, locating the vehicle's license plate;
Step 3, establishing the relation between the license plate and the window boundary, coarsely locating the window and cropping along its boundary;
Step 4, partitioning the coarsely located window into regions to obtain candidate window regions;
Step 5, extracting three features from each candidate region: the morphological building index (MBI), shape, and sealing-strip spectrum;
Step 6, fusing the three features and establishing a fusion function to locate the window precisely, and cropping out the window.
Compared with the prior art, the present invention has the following advantage: by extracting multiple features of the vehicle window, it avoids the limitation of a single feature being suited only to a particular environment; this not only improves the accuracy of window detection but also gives the invention good robustness.
The present invention is further described below with reference to the accompanying drawings.
Brief description of the drawings
Fig. 1 is a flowchart of the method of the invention.
Fig. 2 is a flowchart of MBI feature extraction.
Figs. 3(a) to 3(q) show the result of each step on an example picture; Figs. 3(r) and 3(s) show the results of the candidate-region shape feature; Fig. 3(t) is the final window localization result.
Detailed description of the embodiments
With reference to Fig. 1, the present invention mainly comprises the following steps:
Step 1, input the checkpoint image and locate the vehicle;
Step 2, locate the license plate of the detected vehicle;
Step 3, coarsely locate the window from the relation between the plate's center point and the four window boundaries;
Step 4, partition the coarsely located window into regions according to the window's width characteristics, obtaining candidate window regions;
Step 5, extract three features from each candidate region and fuse them effectively;
Step 6, find the best-qualified window region according to the fusion function, locate the window precisely, and crop it out.
For ease of understanding, the following notation is used: the original checkpoint image is I; the image after vehicle localization is I'; the coarsely located window image is I''; and the coarse window image after adjusting the left and right boundaries is CAR1.
Step 1 comprises locating the upper and lower body boundaries, coarsely locating the left and right body boundaries, and finely locating the left and right body boundaries. Specifically:
(1) Locating the upper and lower body boundaries comprises:
Step 111: input the checkpoint image I, as shown in Fig. 3(a), and apply graying and Gaussian filtering;
Step 112: convolve the image obtained in step 111 with a horizontal edge template (the template matrix is given in the original specification), take the absolute value of the response, and threshold it with Otsu's method (maximum between-class variance), obtaining image IG, as shown in Fig. 3(b);
Step 113: dilate IG three successive times (i = 3) with the square structuring element se1 = [3, 3] (side length 3), obtaining the horizontal binary image IGH, as shown in Fig. 3(c); then project IGH horizontally by the integral projection method of formula (1), obtaining the set YIG:
YIG(x) = Σ_{y=1}^{n} I(x, y)   (1)
where I(x, y) is the value of the pixel at (x, y) in the binary image being processed, n is the number of pixels in row x, and y is the column index.
Step 114: set rows of YIG whose value is less than d1 = 30 (in pixels) to 0; the row index of the first non-zero element of YIG is the upper boundary, and that of the last non-zero element is the lower boundary.
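Steps 113 and 114 above, the integral projection of formula (1) followed by the boundary test, can be sketched in plain Python. The 6x40 synthetic binary "image", the 0-based indices, and the helper names are illustrative assumptions, not part of the patent.

```python
def row_projection(img):
    """Integral projection of a binary image onto the vertical axis:
    one sum of white pixels per row (formula (1))."""
    return [sum(row) for row in img]

def body_top_bottom(img, d1=30):
    """Step 114 sketch: zero out rows whose projection is below d1, then
    take the first and last remaining non-zero rows as the upper and
    lower body boundaries (0-based row indices here)."""
    proj = [p if p >= d1 else 0 for p in row_projection(img)]
    nz = [i for i, p in enumerate(proj) if p > 0]
    if not nz:
        return None
    return nz[0], nz[-1]

# Tiny synthetic binary image: 6 rows x 40 columns.
img = [[0] * 40,
       [1] * 40,           # strong edge row, projection 40 >= d1
       [1] * 35 + [0] * 5,
       [0] * 40,
       [1] * 40,
       [0] * 40]
print(body_top_bottom(img))  # -> (1, 4)
```

The same two helpers, applied to a transposed image, give the column projection and left/right boundaries of part (2) below.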
(2) Coarse localization of the left and right body boundaries is similar to the upper/lower boundary localization, except that a vertical projection is performed on the vertical binary image, as shown in Fig. 3(d). It comprises:
Step 121: apply graying and Gaussian filtering to the input checkpoint image to obtain image I;
Step 122: convolve image I with the edge template, take the absolute value of the response, and threshold it with Otsu's method to obtain image IG;
Step 123: dilate image IG three successive times with the square structuring element se1 = [3, 3], obtaining the vertical binary image IGH';
Step 124: project image IGH' vertically by the integral projection method, obtaining the set YIG', YIG'(y) = Σ_{x=1}^{n} I'(x, y), where I'(x, y) is the value of pixel (x, y) of image IGH', n is the number of pixels in column y, and x is the row index;
Step 125: set columns of YIG' whose value is less than 40 to 0; the column index of the first non-zero element of YIG' is the left body boundary, and that of the last non-zero element is the right body boundary.
(3) Fine localization of the left and right body boundaries:
Step 131: add the horizontal binary image to the vertical binary image and project the resulting image vertically, obtaining the set XIG;
Step 132: mark the connected regions of XIG; if the gap between adjacent connected regions is less than d2 = 10 pixels, merge them. After all connected regions have been updated, find the longest one; its left and right boundaries are the left and right boundaries of the car body. Record their positions, crop the original image I along the obtained boundaries, and obtain image I', as shown in Fig. 3(e).
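The run merging of step 132 can be sketched as follows. Representing each connected region of the projection as a [start, end] pixel run and the function name are assumptions made for illustration.

```python
def merge_runs(runs, gap=10):
    """Step 132 sketch: merge adjacent [start, end] runs whose gap is
    below `gap` pixels, then return the longest merged run, whose ends
    serve as the left and right body boundaries."""
    merged = []
    for s, e in sorted(runs):
        if merged and s - merged[-1][1] < gap:
            # Gap below threshold: extend the previous run.
            merged[-1][1] = max(merged[-1][1], e)
        else:
            merged.append([s, e])
    return max(merged, key=lambda r: r[1] - r[0])

print(merge_runs([(5, 20), (25, 90), (300, 310)]))  # -> [5, 90]
```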
Step 2 comprises the following concrete steps:
Step 21: apply the vertical Sobel operator to the image I' obtained in step 1 for edge detection; perform morphological closing and opening on the edge image with the rectangular structuring element se2 = [15, 13] (height 15, width 13); erode with the rectangular structuring element se3 = [5, 10] (height 5, width 10); delete regions whose area is less than d3 = 20; dilate again with se3 = [5, 10]; and delete regions whose area is less than d4 = 500, obtaining the connected candidate license-plate regions, as shown in Fig. 3(f);
Step 22: for the connected regions obtained in step 21, mark the minimum bounding rectangle of each; if the heights of two regions differ by no more than 6 pixels and, at the same time, the horizontal gap between the right boundary of the first region's bounding rectangle and the left boundary of the second region's is not more than d5 = 70 pixels, merge the two regions;
Step 23: update the minimum bounding rectangle of each connected region and, using the structural characteristics of a license plate, screen out the candidate regions whose bounding rectangles satisfy aspect ratio 2.5 < r1 < 7, width 15 < width1 < 55, and length 50 < len1 < 180, as shown in Fig. 3(g);
Step 24: since the license plate usually lies symmetrically near the middle of the car body, mark as the plate region the candidate connected region that is closest to the body center and located at the bottom of the vehicle;
Step 25: calculate the center point (C_x, C_y) of the plate region and the plate length C_len, where
C_x = (plate-region upper boundary + plate-region lower boundary) / 2,
C_y = (plate-region left boundary + plate-region right boundary) / 2.
Step 26: compute the length (hang) and width (lie) of image I', where the length is the number of rows of the image matrix and the width is the number of columns, as shown in Fig. 3(h);
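The size screening of step 23 can be sketched as below. The (w, h) bounding-box representation, the example boxes, and the helper name are illustrative; the thresholds are the ones stated in the patent.

```python
def plate_candidates(boxes):
    """Step 23 sketch: keep bounding boxes (w, h) whose aspect ratio and
    size match the plate constraints 2.5 < r1 < 7, 15 < width1 < 55,
    50 < len1 < 180 (pixels; w is the length, h the width in the
    patent's wording)."""
    return [(w, h) for w, h in boxes
            if 2.5 < w / h < 7 and 15 < h < 55 and 50 < w < 180]

print(plate_candidates([(120, 30), (200, 40), (60, 50), (100, 20)]))
# -> [(120, 30), (100, 20)]
```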
Step 3 comprises the following concrete steps:
Step 31: the coarse window position is determined from the plate center by formula (2):
W_top = k·C_x − C_1
W_bottom = k_1·C_x − C_2   (2)
W_left = min(1, C_y − k_2·C_len)
W_right = max(lie, C_y + k_2·C_len)
where W_top, W_bottom, W_left and W_right are the top, bottom, left and right boundaries of the window's minimum bounding rectangle; k, k_1 and k_2 are coefficients with values 0.5247, 0.9238 and 2.75 respectively; C_1 and C_2 are constants, 260 and 155 respectively;
Step 32: crop image I' along W_top, W_bottom, W_left and W_right to obtain the coarse window image I'', as shown in Fig. 3(i).
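Formula (2) can be sketched directly. The function name and the example plate geometry are illustrative assumptions; the constants are those given in the patent.

```python
def coarse_window(cx, cy, clen, lie,
                  k=0.5247, k1=0.9238, k2=2.75, c1=260, c2=155):
    """Formula (2) sketch: coarse window boundaries from the plate
    centre (cx = row, cy = column), the plate length clen, and the
    image width lie."""
    top = k * cx - c1
    bottom = k1 * cx - c2
    left = min(1, cy - k2 * clen)
    right = max(lie, cy + k2 * clen)
    return top, bottom, left, right

t, b, l, r = coarse_window(cx=800, cy=500, clen=120, lie=1000)
print(round(t, 1), round(b, 1), l, r)  # -> 159.8 584.0 1 1000
```

Note that, as formula (2) is written, W_left and W_right almost always clip to the full image width; the left and right window boundaries are then refined in step 4.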
Step 4 comprises the following concrete steps:
Step 41: from image I'', obtain an image IB containing only horizontal lines;
Step 42: gray image I''; convolve it with an edge template tmp1 inclined at 60 degrees and a template tmp2 inclined at 120 degrees (the template matrices are given in the original specification); add the two convolved images; threshold the absolute value of the result with Otsu's method; and obtain the binary image IC, rich in inclined lines, as shown in Fig. 3(l);
Step 43: combining the images obtained in steps 41 and 42, coarsely locate the left and right window boundaries and simultaneously screen out the set A of candidate window regions that satisfy the conditions; the number of elements of A is Alen.
In the above method, step 41 comprises the following concrete steps:
Step 411: apply the horizontal Canny operator to image I'' for edge detection, filter the result with a horizontal template, and perform a morphological opening with the linear structuring element se4 = ['line', 15, 0] (inclination 0, length 15), obtaining the binary image IB1, rich in horizontal line segments, as shown in Fig. 3(j);
Step 412: project image IB1 horizontally; if the horizontal projection value of a row is lower than d6, i.e. 0.05 times the number of columns of IB1, set that row's value to 0;
Step 413: gather all rows whose horizontal projection value is non-zero into the set luz; if the gap between adjacent elements of luz is at most d7 = 2, mark the two rows as one region;
Step 414: for the line segments in one region, add all the lines onto the row of the region's first element; dilate and then erode with the linear structuring element se5 = ['line', 15, 0]; remove lines shorter than d8 = 110 pixels, obtaining image IB, as shown in Fig. 3(k); take the line segments longer than d9, i.e. 0.45 times the number of columns of IB1, as candidate line segments, obtaining the line-segment set LC.
In the above method, step 43 comprises the following concrete steps:
Step 431: sort the segments of the line-segment set LC in descending order and find the longest segment ml as the coarse left/right body boundaries, denoted L and R respectively;
Step 432: if the longest segment ml is composed of two or more segments, superpose image IB and image IC; in the superposed image, project the rows at ml vertically and find the first peak L1 and the last peak R1 of the vertical projection; if the two peaks are less than d10 = 250 pixels apart, the left body boundary is the maximum of L and L1 and the right boundary is the minimum of R and R1. If ml is a single segment, its start and end points serve as the left and right body boundaries;
Step 433: crop image I'' along the left and right boundaries to obtain image CAR1, as shown in Fig. 3(m);
Step 434: obtain the numbers of rows and columns of CAR1, denoted a and b; traverse LC; if adjacent line segments satisfy both of the following conditions, take LC(i) and LC(i+1) as the upper and lower boundaries of a region and place it in set A; if only condition (i) holds but condition (ii) does not, place it in set B; if neither holds, compare the next pair of adjacent segments:
(i) LC(i+1) − LC(i) ≥ width2, with width2 = 80;
(ii) 1.5 ≤ b / (LC(i+1) − LC(i)) ≤ 4.5, with d11 = b / (LC(i+1) − LC(i));
where i is the index of an element of the set LC;
Step 435: if A is empty and B is not, the candidate-region set is B; otherwise it is A; if the candidate set is B, assign B to A. The candidate regions A are obtained as shown in Figs. 3(n) and 3(o).
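The pairing of steps 434 and 435 can be sketched as follows. Representing LC by its sorted row coordinates and collapsing the A/B bookkeeping into one function are simplifying assumptions for illustration.

```python
def candidate_regions(rows, b, width2=80):
    """Steps 434-435 sketch: scan consecutive line rows; a pair at least
    width2 apart goes to set A if the window-like aspect constraint
    1.5 <= b/height <= 4.5 also holds, and to the fallback set B if only
    the distance condition holds. A is returned unless it is empty."""
    A, B = [], []
    for i in range(len(rows) - 1):
        h = rows[i + 1] - rows[i]
        if h >= width2:
            if 1.5 <= b / h <= 4.5:
                A.append((rows[i], rows[i + 1]))
            else:
                B.append((rows[i], rows[i + 1]))
    return A if A else B  # step 435: fall back to B when A is empty

print(candidate_regions([100, 220, 400], b=400))
# -> [(100, 220), (220, 400)]
```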
Step 5 comprises:
extracting the MBI from each region of candidate set A and, according to the rules, selecting the region that best satisfies them as the window region derived from the MBI feature;
extracting the shape feature from each region of candidate set A and, according to the rules, selecting the region that best satisfies them as the window region derived from the shape feature;
extracting the sealing-strip spectral feature from each region of candidate set A and, according to the rules, selecting the region that best satisfies them as the window region derived from the sealing-strip spectral feature.
With reference to Fig. 2, extracting the morphological building index (MBI) feature comprises the following concrete steps:
Step 511: for candidate set A, if there is only one candidate region, that region is the window region determined by this feature; if there are two or more, then for each candidate region A(k), crop from image CAR1 an image ID11 with 0.5·lia_k rows and 0.33·lib_k columns, where k is the candidate-region index, lia_k is the height (upper-lower boundary extent) of A(k), and lib_k is its width (left-right boundary extent). When extracting the MBI map in the present invention, the linear structuring element has initial length 10, length step 5 and final length 30; the initial inclination is 0, the angle step is 22.5 degrees and the final angle is 180 degrees.
Step 512: because the MBI map contains holes, perform morphological hole filling and an opening on the connected regions; the opening uses the structuring element se6 = ['square', 5], i.e. a square of side 5, yielding smooth connected regions.
Step 513: compute the minimum bounding rectangle of each connected region obtained in step 512. According to the length and width of the bounding rectangle, screen out the connected regions satisfying the length condition len3 (relative to the number of columns, len3 ≤ 0.23·lib) and the width condition width3 (relative to the number of rows); connected regions that do not satisfy the conditions are set entirely to 0.
Step 514: apply Canny edge detection to image ID11 and add the result to the connected regions of step 513; the overlapping part is the edge map, as shown in Figs. 3(p) and 3(q);
Step 515: obtain the mean of the pixels in each candidate region;
Step 516: perform steps 511 to 515 for every candidate region; the region whose mean pixel value (from step 515) is largest is the window region obtained from the MBI feature; record its index t1. The means of the MBI maps of the example picture are shown in Table 1.
Table 1: MBI feature values of the candidate regions
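The selection rule of step 516 reduces to an argmax over the candidate means. The mean values below are illustrative (Table 1's actual values are not reproduced here), and 0-based indexing is assumed.

```python
def mbi_region(means):
    """Step 516 sketch: the candidate whose MBI map has the largest mean
    response is taken as the window region; returns its index t1."""
    return max(range(len(means)), key=lambda k: means[k])

print(mbi_region([3.2, 7.9, 1.4]))  # -> 1
```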
(2) Concrete steps of extracting the shape feature:
Step 521: for candidate set A, if there is only one candidate region, that region is the window region determined by this feature; if there are two or more, then for each candidate region A(k) crop from image IC, according to the region's upper and lower boundaries, an image ID12, as shown in Figs. 3(r) and 3(s); compute the mean of the pixels of ID12; traverse all candidate regions to obtain the set C of the regions' means;
Step 522: traverse the candidate regions; if a region's value is less than d12 = 10 and differs from the maximum by more than d13 = 15, delete it; the updated candidate set is A'; if A' contains only one region, that region is the window; otherwise decide by step 523;
Step 523: sort A' in descending order; if the maximum is greater than d14 = 48 and the absolute difference between it and the second-largest value is greater than d15 = 18, the window is the region corresponding to the maximum; if the maximum lies within 18 ≤ d16 ≤ 48, the second-largest value differs from it by no less than d17 = 12, the position of the second-largest value is not the last candidate region, and the number of columns of image IC is less than d18 = 450 pixels, then the region of the second-largest value is the window region; otherwise the region of the maximum is the window region;
Step 524: record the window-region index as t2, shown in Table 2.
Table 2: shape feature values of the candidate regions
(3) Extracting the sealing-strip spectral feature:
Step 531: if there is only one candidate region, that region is the window region; otherwise go to step 532;
Step 532: centered on the upper boundary of candidate region A(k), form a band of d19 = 20 pixels above and below it; cut this band out of the corresponding part of image CAR1 to obtain image ID13;
Step 533: gray ID13 and project it horizontally, obtaining the projection set YID13; find the minimum of YID13 and record the row index of the minimum; traverse the upper and lower boundaries of all candidate regions, obtaining the matrices sum_blk(m, n) and y_hou(m, n), where sum_blk(m, n) stores the minimum of the projection YID13 of the band formed around the n-th boundary of the m-th candidate region (n = 1 denotes the upper boundary and n = 2 the lower boundary), and y_hou(m, n) stores the position in image CAR1 of that minimum;
Step 534: traverse the gap between y_hou(m, 2) and y_hou(m+1, 1); if the gap is less than d20 = 45 pixels, compare sum_blk(m, 2) with sum_blk(m+1, 1): if sum_blk(m, 2) is smaller, set sum_blk(m+1, 1) equal to sum_blk(m, 2); otherwise set sum_blk(m, 2) equal to sum_blk(m+1, 1); then set y_hou(m+1, 1) equal to y_hou(m, 2). If the gap is not less than d20 = 45 pixels, go to step 535;
Step 535: find the minimum of sum_blk; if it is unique, the region corresponding to it is the window region derived from this feature; if not, compare the values on the other boundary of the regions holding the minimum and take the region corresponding to the minimum of that set as the window region derived from this feature;
Step 536: record the window-region index as t3, shown in Table 3.
Table 3: sum_blk values of the candidate regions
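The minimum search of step 535, including the tie-break on the other boundary, can be sketched as follows. Representing sum_blk as a list of (upper, lower) pairs and using a 0-based region index are assumptions made for illustration.

```python
def seal_region(sum_blk):
    """Step 535 sketch: sum_blk[m] holds the (upper, lower) sealing-strip
    projection minima of candidate m; pick the region holding the global
    minimum, breaking ties by the region's other boundary value."""
    best = min(min(p) for p in sum_blk)
    tied = [m for m, p in enumerate(sum_blk) if min(p) == best]
    if len(tied) == 1:
        return tied[0]
    # Tie: compare the value on the other boundary of each tied region.
    return min(tied, key=lambda m: max(sum_blk[m]))

print(seal_region([(40, 90), (40, 55), (70, 80)]))  # -> 1
```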
The detailed process of step 6 is:
Step 61: establish the fusion function f(j) = q1·t1' + q2·t2' + q3·t3', where t1' = 1 when j equals t1 and 0 otherwise; t2' = 1 when j equals t2 and 0 otherwise; t3' = 1 when j equals t3 and 0 otherwise; q1, q2 and q3 are the weights of the three feature values, and j ranges from 1 to Alen;
Step 62: find the maximum of f; the candidate region corresponding to the maximum is the window region;
Step 63: because the checkpoint image may be tilted by ±5° during shooting, expand the left and right window boundaries outward by 10 pixels each on the basis of CAR1; this yields the fine window localization. Crop out the window; the result is shown in Fig. 3(t).
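The fusion of steps 61 and 62 reduces to weighted voting among the three per-feature winners. The numeric weights q below are illustrative placeholders (the patent does not state values for q1, q2, q3); candidate indices run from 1 to Alen as in the text.

```python
def fuse(alen, t1, t2, t3, q=(0.4, 0.3, 0.3)):
    """Steps 61-62 sketch: each feature votes for one candidate index;
    f(j) sums the weights of the features that chose j, and the
    candidate with the largest f(j) is the window region."""
    f = [q[0] * (j == t1) + q[1] * (j == t2) + q[2] * (j == t3)
         for j in range(1, alen + 1)]
    return f.index(max(f)) + 1  # back to 1-based candidate numbering

print(fuse(alen=3, t1=2, t2=2, t3=3))  # -> 2
```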
Claims (7)
1. A multi-feature-fusion method for locating and extracting the front vehicle window in checkpoint images, characterized by comprising:
Step 1, locating the boundary of the vehicle and cropping the image along it;
Step 2, locating the vehicle's license plate;
Step 3, establishing the relation between the license plate and the window boundary, coarsely locating the window and cropping along its boundary;
Step 4, partitioning the coarsely located window into regions to obtain candidate window regions;
Step 5, extracting three features from each candidate region: the morphological building index (MBI), shape, and sealing-strip spectrum;
Step 6, fusing the three features and establishing a fusion function to locate the window precisely, and cropping out the window.
2. The method for locating and extracting the front vehicle window in checkpoint images by multi-feature fusion according to claim 1, characterized in that the vehicle boundary localization of step 1 comprises locating the upper and lower body boundaries, coarsely locating the left and right body boundaries, and finely locating the left and right body boundaries;
(1) locating the upper and lower body boundaries comprises:
Step 111, graying and Gaussian-filtering the input checkpoint image to obtain image I;
Step 112, convolving image I with a horizontal edge template, taking the absolute value of the response, and thresholding it with Otsu's method to obtain image IG;
Step 113, dilating image IG three successive times with the square structuring element se1 = [3, 3] to obtain the horizontal binary image IGH;
Step 114, projecting image IGH horizontally by the integral projection method, obtaining the set YIG, YIG(x) = Σ_{y=1}^{n} I(x, y), where I(x, y) is the value of pixel (x, y) of IGH, n is the number of pixels in row x, and y is the column index;
Step 115, setting rows of YIG whose value is less than 30 to 0; the row index of the first non-zero element of YIG is the upper body boundary and that of the last non-zero element is the lower body boundary;
(2) coarsely locating the left and right body boundaries comprises:
Step 121, graying and Gaussian-filtering the input checkpoint image to obtain image I;
Step 122, convolving image I with the edge template, taking the absolute value of the response, and thresholding it with Otsu's method to obtain image IG;
Step 123, dilating image IG three successive times with the square structuring element se1 = [3, 3] to obtain the vertical binary image IGH';
Step 124, projecting image IGH' vertically by the integral projection method, obtaining the set YIG', YIG'(y) = Σ_{x=1}^{n} I'(x, y), where I'(x, y) is the value of pixel (x, y) of IGH', n is the number of pixels in column y, and x is the row index;
Step 125, setting columns of YIG' whose value is less than 40 to 0; the column index of the first non-zero element of YIG' is the left body boundary and that of the last non-zero element is the right body boundary;
(3) finely locating the left and right body boundaries:
Step 131, adding the horizontal binary image IGH to the vertical binary image IGH' and projecting the resulting image vertically, obtaining the set XIG;
Step 132, marking the connected regions of XIG and merging adjacent connected regions whose gap is less than 10 pixels;
Step 133, after updating all connected regions, finding the longest one, whose left and right boundaries are the left and right boundaries of the car body, and recording their positions.
3. the location of the bayonet socket image front window of multiple features fusion according to claim 2 and extracting method, it is characterized in that, the detailed process of step 2 is:
Step 21, carries out rim detection to the Sobel operator of the image I ' vertical direction that step 1 is cut out; Adopt rectangle structure unit se2=[15 successively, 13] and se3=[5,10] edge image carries out make and break operation and etching operation, and delete after area is less than the region of 20, adopt rectangle structure unit se3=[5,10] carry out expansive working, delete the region that area is less than 500, obtain the license plate area comprising the connection of candidate;
Step 22, the connected region that mark step 21 obtains, if the level height in two regions differs the distance within 6 pixels, vertical spacing is not more than the distance of 70 pixels simultaneously, then merge this two regions;
Step 23, upgrades the minimum enclosed rectangle of each connected region, filters out the candidate region met the following conditions:
(1) Aspect Ratio of minimum enclosed rectangle meets 2.5 < r1 < 7,
(2) width meets 15 < width1 < 55, unit picture element,
(3) length meets 50 < len1 < 180, unit picture element;
Step 24, occupy this symmetry in the middle of car body usually according to car plate, chooses minimum with car body Center Gap and the connected component labeling being positioned at vehicle bottom is license plate area from the license plate area of candidate;
Step 25, calculate the centre point (Cx, Cy) of the license plate region and the plate length Clen, where
Cx = (license plate region upper boundary + license plate region lower boundary)/2,
Cy = (license plate region left boundary + license plate region right boundary)/2;
Step 26, compute the length hang and width lie of image I', where the length is the number of rows of the image I' matrix and the width is the number of columns of the image I' matrix.
4. The method for locating and extracting the front vehicle window in a bayonet image with multi-feature fusion according to claim 3, characterized in that the detailed process of step 3 is:
Step 31, establish the relation between the vehicle window and the license plate:
Wtop = k·Cx − C1
Wbottom = k1·Cx − C2
Wleft = min(1, Cy − k2·Clen)
Wright = max(lie, Cy + k2·Clen)
where Wtop, Wbottom, Wleft and Wright are the upper, lower, left and right boundaries of the minimum enclosing rectangle of the vehicle window, k = 0.5247, k1 = 0.9238, k2 = 2.75, C1 = 260, C2 = 155;
Step 32, crop image I' along Wtop, Wbottom, Wleft and Wright to obtain the coarsely located vehicle window image I".
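Step 31's linear plate-to-window relation, with the constants from the claim, can be sketched as follows. Note that the claim prints min(1, …) for the left boundary and max(lie, …) for the right; clamping the box to the image, as done here, requires max and min respectively, and we assume that is the intent (the function and parameter names are ours):

```python
def window_bounds(cx, cy, c_len, lie,
                  k=0.5247, k1=0.9238, k2=2.75, c1=260, c2=155):
    """Coarse vehicle-window rectangle from the plate centre (cx, cy),
    plate length c_len and image width lie (constants from the claim)."""
    top = k * cx - c1
    bottom = k1 * cx - c2
    left = max(1, cy - k2 * c_len)     # clamp to the image's left edge
    right = min(lie, cy + k2 * c_len)  # clamp to the image's right edge
    return top, bottom, left, right
```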
5. The method for locating and extracting the front vehicle window in a bayonet image with multi-feature fusion according to claim 4, characterized in that the detailed process of step 4 is:
Step 41, for image I", obtain the image IB containing only horizontal straight lines; the concrete steps are as follows:
Step 411, apply the Canny horizontal operator and the horizontal template to image I" in turn for edge detection and filtering, then perform a morphological opening operation with the linear structuring element se4=['line',15,0], obtaining the binary image IB1 containing horizontal lines;
Step 412, perform an integral horizontal projection on image IB1; if the horizontal projection value of a row is lower than 0.05 times the number of columns of image IB1, reset that row's value to 0;
Step 413, gather all rows whose integral horizontal projection value is not 0 into the set luz; if the interval between adjacent elements of luz is less than or equal to 2, mark the two rows as one region;
Step 414, for the straight-line segments in the same region, add all the lines onto the row of the first element of that region, dilate and then erode it with the linear structuring element se5=['line',15,0], and remove lines whose length is less than 110 pixels to obtain image IB; take the line segments whose length is greater than 0.45 times the number of columns of image IB1 as candidate segments, obtaining the line-segment set LC;
Step 42, convert image I" to grayscale; perform edge convolution with the 60-degree template tmp1 and the 120-degree template tmp2; add the convolved images together; then threshold the absolute value of the result with the maximum between-class variance method (Otsu), obtaining the binary image IC rich in inclined straight lines;
Step 43, for image IB and image IC, coarsely locate the left and right boundaries of the vehicle window and simultaneously filter out the qualifying candidate vehicle window region set A, whose number of elements is Alen; the detailed process is:
Step 431, sort the segments in the line-segment set LC in descending order and find the longest segment ml as the coarse location of the vehicle-body left and right boundaries, denoted L and R respectively;
Step 432, if the longest segment ml consists of more than two segments, superimpose image IB and image IC, project the rows where ml lies in the vertical direction in the superimposed image, and find the first peak L1 and the last peak R1 of the vertical projection; if the two peaks are less than 250 pixels apart, the left boundary of the vehicle body is the maximum of L and L1 and the right boundary is the minimum of R and R1; if the longest segment ml consists of a single segment, its start and end points serve as the left and right boundaries of the vehicle body;
Step 433, crop image CAR1 from image I" by the left and right boundaries;
Step 434, obtain the numbers of rows and columns of CAR1, denoted a and b respectively, and traverse LC; if adjacent line segments meet the following conditions, take LC(i) and LC(i+1) as the upper and lower boundaries of a region and place it in set A; if only condition (first) is met and condition (second) is not, place it in set B; if neither is met, compare the next pair of adjacent line segments:
(first) LC(i+1) − LC(i) ≥ 80
(second) 1.5 ≤ b/(LC(i+1) − LC(i)) ≤ 4.5
where i is the index of an element in the set LC;
Step 435, if A is empty and B is not empty, the candidate region set is B; otherwise it is A; if the candidate region set is B, assign B to A.
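Steps 434 and 435 together classify each pair of adjacent lines and then prefer set A over set B. A sketch (Python; the function name and list representation are ours — LC is modelled as a sorted list of row positions):

```python
def collect_candidates(lc, b):
    """Scan adjacent lines LC(i), LC(i+1) (step 434).
    Condition (first): region height >= 80 px.
    Condition (second): 1.5 <= b/height <= 4.5, b = columns of CAR1.
    Both hold -> set A; only (first) -> set B; A is preferred (step 435)."""
    set_a, set_b = [], []
    for lo, hi in zip(lc, lc[1:]):
        height = hi - lo
        if height >= 80:                      # condition (first)
            if 1.5 <= b / height <= 4.5:      # condition (second)
                set_a.append((lo, hi))
            else:
                set_b.append((lo, hi))
    return set_a if set_a else set_b          # step 435 preference
```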
6. The method for locating and extracting the front vehicle window in a bayonet image with multi-feature fusion according to claim 5, characterized in that the three feature extraction methods described in step 5 are:
(1) extracting the morphological building index (MBI) feature:
Step 511, for the candidate regions: if there is only 1 candidate region, that region is the vehicle window region determined by this feature; if there are 2 or more, then for candidate region A(k), crop image ID11 from image CAR1 with 0.5·liak rows and 0.33·libk columns, where k is the index of the candidate region, liak is the length of the upper and lower boundaries of A(k), and libk is the length of the left and right boundaries of A(k);
Step 512, extract the MBI feature map of image ID11, then fill the holes inside the MBI feature map and perform an opening operation to obtain smooth connected regions; the opening operation uses the structuring element se6=['square',5];
Step 513, obtain the minimum enclosing rectangle of each connected region and, according to the length and width of the enclosing rectangle, retain the connected regions whose length len3 and width width3 satisfy len3 ≤ 0.231·b and width3 ≤ 0.225·a, setting the remaining connected regions to 0;
Step 514, perform Canny edge detection on image ID11 and add the result to the connected regions obtained in step 513; the overlapping part is the resulting image edge map;
Step 515, obtain the mean of the pixels in each candidate region;
Step 516, perform steps 511 to 515 for each candidate region; the region with the maximum mean is the vehicle window region obtained from the MBI feature;
Step 517, set the vehicle window region number to t1.
(2) extracting the shape feature:
Step 521, for the candidate regions: if there is only 1 candidate region, that region is the vehicle window region determined by this feature; if there are 2 or more, crop image ID12 from image IC according to the upper and lower boundaries of candidate region A(k), compute the mean of the pixels in image ID12, traverse all candidate regions, and obtain the set C of the means of each region;
Step 522, traverse each candidate region; if the value of a candidate region is less than 10 pixels and differs from the maximum by more than 15 pixels, delete that candidate region and update the candidate set to A'; if A' contains only one region, it is the vehicle window; otherwise decide by step 523;
Step 523, sort A' in descending order; if the maximum is greater than 48 pixels and the absolute difference from the second maximum is greater than 18 pixels, the vehicle window is the region of the maximum; if the maximum lies in [18, 48], the second maximum differs from the maximum by not less than 12 pixels, the position of the second maximum is not the last candidate region, and the number of columns of image IC is less than 450 pixels, then the region of the second maximum is the vehicle window region; otherwise the region of the maximum is the vehicle window region;
Step 524, set the vehicle window region number to t2.
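The decision rule of step 523 reduces to a short if/else cascade over the per-region means; a sketch assuming the means and the column count of image IC are already available (function and variable names are ours):

```python
def pick_by_shape(means, ic_cols):
    """Step 523: choose a window region from the per-region means
    (thresholds from the claim). Returns the index of the chosen
    region in the original list."""
    order = sorted(range(len(means)), key=lambda i: -means[i])
    first, second = order[0], order[1]
    m1, m2 = means[first], means[second]
    if m1 > 48 and abs(m1 - m2) > 18:
        return first                          # clear winner
    if (18 <= m1 <= 48 and m1 - m2 >= 12
            and second != len(means) - 1      # not the last candidate
            and ic_cols < 450):
        return second                         # runner-up preferred
    return first                              # default: the maximum
```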
(3) extracting the sealing strip spectral feature:
Step 531, for the candidate regions: if there is only 1 candidate region, that region is the vehicle window region; otherwise perform step 532;
Step 532, form a region centred on the upper boundary of candidate region A(k) and extending 20 pixels above and below it; crop this region from the corresponding location of image CAR1 to obtain image ID13;
Step 533, convert ID13 to grayscale and project it in the horizontal direction to obtain the projection set YID13; find the minimum of YID13 and record the row number corresponding to the minimum; traverse the upper and lower boundaries of all candidate regions to obtain the matrices sum_blk(m, n) and y_hou(m, n), where sum_blk(m, n) stores the minimum of the projection YID13 of the image formed around the n-th boundary of the m-th candidate region (n = 1 denotes the upper boundary and n = 2 the lower boundary), and y_hou(m, n) stores the position in image CAR1 of that minimum;
Step 534, traverse the interval between y_hou(m, 2) and y_hou(m+1, 1); if the interval is less than 45 pixels, compare the values of sum_blk(m, 2) and sum_blk(m+1, 1): if sum_blk(m, 2) is smaller, set sum_blk(m+1, 1) equal to sum_blk(m, 2); otherwise set sum_blk(m, 2) equal to sum_blk(m+1, 1); and set y_hou(m+1, 1) equal to y_hou(m, 2); if the interval is not less than 45 pixels, go to step 535;
Step 535, find the minimum of sum_blk; if the minimum is unique, the region corresponding to this value is the vehicle window region derived from this feature; if the minimum is not unique, compare the values on the other boundary of the regions where the minimum occurs and take the minimum of the set formed by the other boundary; the region corresponding to this value is the vehicle window region derived from this feature;
Step 536, set the vehicle window region number to t3.
7. The method for locating and extracting the front vehicle window in a bayonet image with multi-feature fusion according to claim 6, characterized in that the detailed process of step 6 is:
Step 61, establish the fusion function f(j) = q1·t1' + q2·t2' + q3·t3', where t1' = 1 when j equals t1, t2' = 1 when j equals t2, and t3' = 1 when j equals t3 (each is 0 otherwise); q1, q2 and q3 are the weights of the three feature values, and j ranges from 1 to Alen;
Step 62, obtain the maximum of f; the candidate region corresponding to the maximum is the vehicle window region.
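The fusion of step 6 is a weighted vote: each feature contributes its weight to the candidate index it selected, and the argmax of f wins. A sketch (Python; the weights q1–q3 are left as parameters since the claim does not fix them, and the names are ours):

```python
def fuse_votes(votes, weights, candidates):
    """Step 6: votes = (t1, t2, t3), the window index chosen by each
    feature; weights = (q1, q2, q3). f(j) sums the weights of the
    features that picked candidate j; return the argmax candidate."""
    def f(j):
        return sum(w for t, w in zip(votes, weights) if j == t)
    return max(candidates, key=f)
```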
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510222612.XA CN105005759B (en) | 2015-05-04 | 2015-05-04 | The positioning of the bayonet image front window of multiple features fusion and extracting method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN105005759A true CN105005759A (en) | 2015-10-28 |
CN105005759B CN105005759B (en) | 2018-10-02 |
Family
ID=54378424
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510222612.XA Active CN105005759B (en) | 2015-05-04 | 2015-05-04 | The positioning of the bayonet image front window of multiple features fusion and extracting method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105005759B (en) |
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110274353A1 (en) * | 2010-05-07 | 2011-11-10 | Hailong Yu | Screen area detection method and screen area detection system |
CN104036262A (en) * | 2014-06-30 | 2014-09-10 | 南京富士通南大软件技术有限公司 | Method and system for screening and recognizing LPR license plate |
Non-Patent Citations (2)
Title |
---|
HOU, DIANFU: "Research on Vehicle Window Detection Technology", China Master's Theses Full-text Database * |
WU, FA: "Application of Image Processing and Machine Learning in Detecting Driving Without a Seat Belt", China Doctoral Dissertations Full-text Database * |
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105809699A (en) * | 2016-03-18 | 2016-07-27 | 中山大学 | Image segmentation based car window extraction method and system |
CN105809699B (en) * | 2016-03-18 | 2018-06-19 | 中山大学 | A kind of vehicle window extracting method and system based on figure segmentation |
CN106056071A (en) * | 2016-05-30 | 2016-10-26 | 北京智芯原动科技有限公司 | Method and device for detection of driver' behavior of making call |
CN106056071B (en) * | 2016-05-30 | 2019-05-10 | 北京智芯原动科技有限公司 | A kind of driver makes a phone call the detection method and device of behavior |
CN106250824A (en) * | 2016-07-21 | 2016-12-21 | 乐视控股(北京)有限公司 | Vehicle window localization method and system |
CN106611165A (en) * | 2016-12-26 | 2017-05-03 | 广东工业大学 | Automobile window detection method and device based on correlation filtering and color matching |
CN106611165B (en) * | 2016-12-26 | 2019-07-19 | 广东工业大学 | A kind of automotive window detection method and device based on correlation filtering and color-match |
WO2018182538A1 (en) * | 2017-03-31 | 2018-10-04 | Agency For Science, Technology And Research | Systems and methods that improve alignment of a robotic arm to an object |
CN107392207A (en) * | 2017-06-12 | 2017-11-24 | 北京大学深圳研究生院 | A kind of image normalization method, device and readable storage medium storing program for executing |
CN108182385A (en) * | 2017-12-08 | 2018-06-19 | 华南理工大学 | A kind of pilot harness for intelligent transportation system wears recognition methods |
CN108182385B (en) * | 2017-12-08 | 2020-05-22 | 华南理工大学 | Driver safety belt wearing identification method for intelligent traffic system |
CN110491133A (en) * | 2019-08-08 | 2019-11-22 | 横琴善泊投资管理有限公司 | A kind of information of vehicles correction system and method based on confidence level |
Also Published As
Publication number | Publication date |
---|---|
CN105005759B (en) | 2018-10-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105005759A (en) | Multi-characteristic fused monitoring image front vehicle window positioning and extracting method | |
CN105678285B (en) | An adaptive road bird's-eye-view transform method and road lane detection method | |
CN104217427B (en) | Lane line localization method in a kind of Traffic Surveillance Video | |
CN109726717B (en) | Vehicle comprehensive information detection system | |
Shneier | Road sign detection and recognition | |
CN107506760A (en) | Traffic signals detection method and system based on GPS location and visual pattern processing | |
CN102999749B (en) | Intelligent detection method for seat-belt violation events based on face detection | |
CN110148196A (en) | A kind of image processing method, device and relevant device | |
CN103324935B (en) | Method and system for vehicle localization and region segmentation from an image | |
US20150248771A1 (en) | Apparatus and Method for Recognizing Lane | |
CN105893949A (en) | Lane line detection method under complex road condition scene | |
CN102663354A (en) | Face calibration method and system thereof | |
CN102236784A (en) | Screen area detection method and system | |
US20090028389A1 (en) | Image recognition method | |
CN105551082A (en) | Method and device of pavement identification on the basis of laser-point cloud | |
CN111881790A (en) | Automatic extraction method and device for road crosswalk in high-precision map making | |
CN105046198A (en) | Lane detection method | |
JP2013101428A (en) | Building contour extraction device, building contour extraction method, and building contour extraction program | |
CN102902957A (en) | Video-stream-based automatic license plate recognition method | |
CN104463138A (en) | Text positioning method and system based on visual structure attribute | |
CN101470806A (en) | Vehicle lamp detection method and apparatus, and region-of-interest segmentation method and apparatus | |
JP2003123197A (en) | Recognition device for road mark or the like | |
CN106767854A (en) | mobile device, garage map forming method and system | |
CN103198470A (en) | Image cutting method and image cutting system | |
CN109341692A (en) | An edge-following navigation method and robot |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |