CN102509091A - Airplane tail number recognition method - Google Patents

Airplane tail number recognition method

Info

Publication number
CN102509091A
Authority
CN
China
Prior art keywords
image
aircraft tail
sigma
tail
pixel
Prior art date
Application number
CN2011103882397A
Other languages
Chinese (zh)
Other versions
CN102509091B (en)
Inventor
罗喜伶
马秀红
周萍
Original Assignee
北京航空航天大学
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京航空航天大学 filed Critical 北京航空航天大学
Priority to CN 201110388239 priority Critical patent/CN102509091B/en
Publication of CN102509091A publication Critical patent/CN102509091A/en
Application granted granted Critical
Publication of CN102509091B publication Critical patent/CN102509091B/en


Abstract

The invention discloses an airplane tail number recognition method comprising the following steps: preprocessing the airplane tail number image with Otsu dynamic threshold binarization to separate the tail number from the background; segmenting the tail number image into single characters with a connected-domain method, acquiring projective transformation marking points, and applying an inverse projective transformation to the tail number image with those marking points as the reference; and recognizing the tail number characters with an optimal-parameter support vector machine classifier. Character features are extracted with a centroid method, an RBF kernel function is used for the support vector machine, the optimal parameters are obtained with a secondary grid search, and a 'one-to-one' multi-class classification method based on the optimal-parameter support vector machine yields the airplane tail number. The method has high recognition accuracy and is applicable to airport scenes with diverse lighting environments.

Description

Aircraft tail number recognition method
Technical field
The invention belongs to the technical field of tail number identification, and particularly relates to an aircraft tail number recognition method.
Background art
Aircraft tail number character recognition, the core of a tail number recognition system, is essentially text recognition, an important branch of pattern recognition. The concept of text recognition (Optical Character Recognition, OCR) was first proposed in 1929 by the German scientist Tausheck. With the appearance and development of computers, OCR has been studied widely around the world, and after nearly a century of development it has become one of the most active research topics in pattern recognition. It combines knowledge from digital image processing, computer graphics and artificial intelligence and is widely applied in related fields. OCR methods can usually be divided into three categories: statistical-feature character recognition, structural character recognition and recognition based on artificial neural networks. Statistical-feature methods choose, as the feature vector, features that are shared within a character class, relatively stable and well separated between classes. Commonly used statistical features include the histogram of horizontal or vertical projections, moment features, position features of the character in the two-dimensional plane, and features obtained after frequency-domain or other transforms. Projection replaces the two-dimensional image with a one-dimensional profile, which reduces computation and also removes the influence of shifts along the projection direction, but it is powerless against rotational deformation of the characters.
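As an illustration of the projection features mentioned above (not part of the patent text), the following minimal C++ sketch computes the horizontal and vertical projection histograms of a binary character image stored row-major as 0/255 values; the image layout and names are assumptions made for the example.

```cpp
#include <vector>
#include <cstdint>

// Horizontal and vertical projection histograms of a binary image.
// img is row-major, width*height entries, foreground pixels == 255.
struct Projections {
    std::vector<int> horizontal; // one foreground count per row
    std::vector<int> vertical;   // one foreground count per column
};

Projections computeProjections(const std::vector<std::uint8_t>& img,
                               int width, int height) {
    Projections p;
    p.horizontal.assign(height, 0);
    p.vertical.assign(width, 0);
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            if (img[y * width + x] == 255) { // foreground pixel
                ++p.horizontal[y];
                ++p.vertical[x];
            }
        }
    }
    return p;
}
```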
Structure-based character recognition maps a character into a structural space composed of primitives. On the basis of primitive extraction, the recognition process applies formal language and automata theory and analyses the character structure with methods such as lexical analysis, tree matching, graph matching and knowledge reasoning. J.Park analysed the drawbacks of traditional structural recognition methods and proposed the idea of Active Character Recognition, which dynamically decides which structural features to use according to the input image, so as to save resources and speed up recognition. Compared with statistical methods, structural recognition is better at distinguishing characters with large font variations and characters whose shapes are close. However, describing and comparing structural features takes a large amount of storage and computation, so the algorithms are relatively complex to implement and the recognition speed is slow.
Character recognition techniques based on artificial neural networks try to achieve efficient character recognition by simulating the function and structure of the human brain. With their rapid development in recent years, artificial neural networks have been widely applied to character recognition. In OCR systems the neural network mainly serves as the classifier: its input is the character feature vector and its output is the classification result, i.e. the recognition result. Through repeated learning, the network optimizes the feature vectors, intelligently removing redundant and contradictory information and enlarging the differences between classes. Because a neural network adopts a distributed structure, it is inherently parallel and can speed up the solution of large-scale problems. Krezyak and Le Cun mainly studied the application of BP (Back-Propagation) neural networks to text recognition and, to address the slow learning speed and weak generalization ability of BP networks, proposed a competitive supervised learning strategy on top of the BP network.
In aircraft tail number recognition research, the tail number must be accurately located, segmented and recognized. The main difficulties are:
1. The captured airport surface images are disturbed by environmental factors, so image quality is hard to guarantee;
2. The tail number region may be partially occluded and the tail number may be geometrically distorted;
3. The airport surface surveillance background is complex, and a single image may contain the tail numbers of several aircraft;
4. Affected by illumination and other environmental factors, the tail number image may contain considerable noise and the strokes may be blurred, so that characters break apart, adjacent characters stick together, or characters are even incomplete.
Summary of the invention
The object of the invention is to overcome the deficiencies of the prior art by proposing an aircraft tail number recognition method with a high recognition rate under different viewing angles, improving the recognition performance of the system and meeting the requirements of a practical domestic system. The present invention studies aircraft tail number recognition methods, draws on existing pattern recognition and character recognition techniques, improves and extends them, and applies them to the field of aircraft tail number recognition. It supports China's A-SMGCS research in airport aircraft and vehicle detection and identification, and thus provides theoretical research and technical support for the transition and development of China's airport surface surveillance towards the A-SMGCS system.
To achieve the above object, the technical scheme of the present invention is an aircraft tail number recognition method comprising the following concrete steps:
Step 1, acquisition of original images: multiple images are taken from different viewing angles, and the images containing the aircraft tail number are selected from them;
Step 2, aircraft tail number location preprocessing: candidate regions are first judged according to the regional characteristics of the aircraft tail number, and the position of the tail number region is found in the image covering the whole airport surface surveillance scene;
Step 3, aircraft tail number location based on the DCT domain and edge detection: since the tail number region contains abundant edge information, a tail number location technique based on DCT-domain and edge features is proposed;
Step 4, tail number segmentation image preprocessing with Otsu dynamic threshold binarization: the located tail number region is preprocessed with Otsu dynamic threshold binarization to obtain a clearer target region of the tail number;
Step 5, connected-region tail number segmentation: the connected-region segmentation method is then used to divide the tail number region into single character regions;
Step 6, aircraft tail number recognition: a tail number recognition algorithm based on an optimal-parameter support vector machine is proposed to realize tail number recognition;
Step 7, recognition result post-processing.
Wherein, the concrete method of the tail number segmentation image preprocessing with Otsu dynamic threshold binarization described in step 4 is:

The concrete steps of the Otsu dynamic binarization algorithm are as follows. Let the gray-level range of the image be 0 to L-1 and let n_i be the number of pixels with gray level i; then the total number of pixels of the image is

N = n_0 + n_1 + ... + n_{L-1}    (2)

Normalizing the histogram gives

p_i = n_i / N, \quad \sum_{i=0}^{L-1} p_i = 1    (3)

A threshold t divides the gray levels into two classes:

C_1 = \{0, 1, ..., t\}, \quad C_2 = \{t+1, t+2, ..., L-1\}    (4)

From the gray-level histogram of the image, the occurrence probabilities of classes C_1 and C_2 are

w_1 = \sum_{i=0}^{t} p_i, \quad w_2 = \sum_{i=t+1}^{L-1} p_i    (5)

The means of classes C_1 and C_2 are then

M_1 = \sum_{i=0}^{t} i \cdot p_i / w_1, \quad M_2 = \sum_{i=t+1}^{L-1} i \cdot p_i / w_2    (6)

and the variances of classes C_1 and C_2 are

\sigma_1^2 = \sum_{i=0}^{t} (i - M_1)^2 \cdot p_i / w_1, \quad \sigma_2^2 = \sum_{i=t+1}^{L-1} (i - M_2)^2 \cdot p_i / w_2    (7)

The within-class variance of C_1 and C_2, \sigma_A^2, is defined as

\sigma_A^2 = w_1 \sigma_1^2 + w_2 \sigma_2^2    (8)

and the between-class variance of C_1 and C_2, \sigma_B^2, as

\sigma_B^2 = w_1 w_2 (M_1 - M_2)^2    (9)

The criterion function of the Otsu dynamic binarization algorithm is then

\eta = \sigma_B^2 / \sigma_A^2    (10)

The threshold t is traversed from the minimum to the maximum gray value, and the t that maximizes \eta is the Otsu optimal segmentation threshold. The Otsu dynamic threshold binarization algorithm is simple to implement: it only requires traversing the range of all candidate values of t and selecting the appropriate threshold T. This method separates the tail number target from the background well and gives a good binarization result.
Wherein, the concrete character segmentation steps of the connected-region segmentation method described in step 5, which divides the tail number region into single character regions, are as follows:
Step A1, scan the image and find a pixel that does not yet belong to any region, i.e. find a new starting point for region growing;
Step A2, compare the gray value of this pixel with the gray values of those pixels in its 4-neighborhood (up, down, left and right; an 8-neighborhood could also be used, but the 4-neighborhood is adopted in this tail number recognition system to reduce the influence of character adhesion) that do not yet belong to any region; if a certain judgment criterion is satisfied, merge them into the same region;
Step A3, repeat the operation of step A2 for the newly merged pixels;
Step A4, repeat steps A2 and A3 until the region can no longer expand;
Step A5, return to step A1 and look for a pixel that can become the starting point of a new region.
Compared with the prior art, the advantages of the present invention are:
1. The tail number segmentation image preprocessing algorithm based on Otsu dynamic threshold binarization makes the between-class variance of the two classes produced by threshold segmentation large and the within-class variance small, i.e. it maximizes the ratio of between-class to within-class variance to obtain a dynamic optimal threshold; this method is applicable to airport surface environments with diverse lighting conditions.
2. The inverse projective transformation of the tail number image based on the aircraft nationality mark can eliminate the projective distortion introduced by the camera when tail number images are acquired in airport surface surveillance; applying the inverse projective transformation to the tail number effectively improves the recognition accuracy of the tail number recognition system.
Description of drawings
Fig. 1 shows the implementation procedure of the aircraft tail number recognition method of the present invention;
Fig. 2 is a schematic diagram of the gray-level transformation used in tail number location preprocessing; this transformation stretches the gray range of the interval of interest (r0-r1), thereby enhancing the contrast;
Fig. 3 shows the tail number location results of the technique based on the DCT domain and edge features: Fig. 3(a) is the gray image converted from the RGB image; Fig. 3(b) divides the gray image into 8×8 blocks and applies a two-dimensional DCT to each block to obtain its two-dimensional DCT matrix; Fig. 3(c) applies the Canny operator to the tail number region to obtain its edge map; Fig. 3(d) is the tail number location result obtained with the DCT-domain and edge-feature based location technique;
Fig. 4 shows part of the test results of the tail number segmentation image preprocessing using Otsu dynamic threshold binarization;
Fig. 5 shows the experimental results of tail number character segmentation with the connected-region method adopted by the present invention;
Fig. 6 shows the process of selecting control points for the inverse projective transformation of the tail number image;
Fig. 7 shows the experimental results of the inverse projective transformation of the tail number image;
Fig. 8 shows the optimal separating hyperplane of the support vector machine for a two-dimensional, linearly separable two-class sample set;
Fig. 9 shows the effect of extracting tail number character features with the grid centroid method;
Fig. 10 illustrates the multi-class classifier methods.
Embodiment:
For a better understanding of the technical scheme of the present invention, specific embodiments of the invention are described further below in conjunction with the accompanying drawings.
Fig. 1 shows the implementation procedure of the aircraft tail number recognition method of the present invention. The implementation details of each step are as follows:
Step 1, acquisition of original images: multiple images are taken from different viewing angles, and the images containing the aircraft tail number are selected from them.
In traditional traffic-violation capture, a dedicated camera takes a picture under a specific trigger, so the captured vehicle license plate generally appears in a fixed region of the image. Because the camera focus and viewing angle are fixed, the size and angle of the plate are relatively definite, and the number of characters and the font of the plate are essentially fixed. Such recognition methods are not suitable for the tail numbers of aircraft on the airport surface, mainly because of two difficulties: 1. Tail number segmentation is hard. The relative angles between the aircraft stand, the fuselage and the camera vary, and the fuselage is a smoothly curved surface, so the position and size of the tail number in the image are not fixed and the characters suffer perspective distortion; moreover, the tail number has no frame separating it from the rest of the image, which makes segmentation harder. 2. Character diversity. The font on the fuselage (upright or italic), the color (black characters on a white background or white characters on a dark background) and the length (number of characters) all differ, and the inherent similarity between characters also affects recognition accuracy to a great extent.
To address these two difficulties, multiple images are taken from different viewing angles, and the images containing the aircraft tail number are selected from them.
Step 2, aircraft tail number location preprocessing.
To preprocess the airport surface surveillance image for tail number location, the image is first converted to gray scale. In the RGB model, if R = G = B the color is a shade of gray, and the common value of R, G and B is called the gray value, denoted g. The process of converting a color image to gray is called gray-scale conversion. Because a color image takes much more storage, color images are usually converted to gray images before recognition to speed up subsequent processing. R, G and B range from 0 to 255, so there are 256 gray levels.
The main purpose of the gray-level transformation is to improve the contrast of the image, i.e. to enhance the contrast between its parts. If an image was formed under too little light or was under-exposed, the whole image is dark (e.g. the gray range is 0 to 63); if the light was too strong or the image was over-exposed, the image is bright (e.g. the gray range is 200 to 255). Both cases give low contrast: the gray levels are crowded together and not spread out. The gray-level transformation can be used to enhance the contrast of the image.
The so-called gray-level transformation maps one gray interval to another. A piecewise linear gray-level transformation can be defined as:
s = \begin{cases} \dfrac{s_0}{r_0}\, r, & r \le r_0 \\ \dfrac{s_1 - s_0}{r_1 - r_0}(r - r_0) + s_0, & r_0 < r \le r_1 \\ \dfrac{255 - s_1}{255 - r_1}(r - r_1) + s_1, & r > r_1 \end{cases}    (1)
Its principle is shown in Fig. 2, where the abscissa r represents the gray level before the transformation and the ordinate s the corresponding gray level after the transformation; the thick broken line represents the relationship between output and input gray levels, and the thin dashed straight line represents the case where no transformation is applied. The comparison shows that such a transformation stretches the gray range of the interval of interest (r_0, r_1), thereby enhancing the contrast.
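A minimal C++ sketch of the piecewise linear transformation of formula (1), applied pixel by pixel to a gray image, is given below; the parameter values and the in-place image representation are assumptions made for illustration.

```cpp
#include <vector>
#include <cstdint>
#include <algorithm>

// Piecewise linear gray-level transformation, formula (1):
// stretches the interval (r0, r1) to (s0, s1).
std::uint8_t stretchGray(std::uint8_t r, double r0, double r1,
                         double s0, double s1) {
    double s;
    if (r <= r0)      s = (s0 / r0) * r;
    else if (r <= r1) s = (s1 - s0) / (r1 - r0) * (r - r0) + s0;
    else              s = (255.0 - s1) / (255.0 - r1) * (r - r1) + s1;
    return static_cast<std::uint8_t>(std::clamp(s, 0.0, 255.0));
}

// Apply the transformation to every pixel of a gray image
// (default break points are illustrative only).
void enhanceContrast(std::vector<std::uint8_t>& img,
                     double r0 = 63, double r1 = 200,
                     double s0 = 20, double s1 = 235) {
    for (auto& px : img) px = stretchGray(px, r0, r1, s0, s1);
}
```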
Step 3, aircraft tail number location based on the DCT domain and edge detection.
First, the airport surface surveillance RGB image to be processed is scaled to 240×320 pixels by pixel interpolation; the RGB image is then converted to YUV space and the luminance component is extracted as the gray image, as shown in Fig. 3(a). The gray image is divided into 8×8 blocks, a two-dimensional DCT is applied to each block, a suitable DCT-domain feature is chosen from the resulting two-dimensional DCT matrix, and a feature value is computed for each block, as shown in Fig. 3(b). A suitable classification method then divides the blocks into tail number blocks and background blocks, giving a coarse location in the image. Because the tail number region is rich in edge information, the Canny operator is applied to it to obtain its edge map, as shown in Fig. 3(c); the edge map is then scanned horizontally and vertically to compute the edge density of each block, and a threshold is used to exclude misjudged blocks, refining the tail number region into the text region of the image. The location result of the DCT-domain and edge-feature based tail number location technique is shown in Fig. 3(d).
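To make the block DCT step concrete, the following C++ sketch computes a two-dimensional DCT on each 8×8 block of a gray image and uses the sum of the absolute AC coefficients as the block feature value; the choice of feature (AC energy) and the threshold are assumptions, since the patent only states that "a suitable DCT-domain feature" is chosen.

```cpp
#include <vector>
#include <cmath>
#include <cstdint>

// 2-D DCT (type II) of one 8x8 block taken from a gray image.
void dct8x8(const std::vector<std::uint8_t>& img, int width,
            int bx, int by, double coeff[8][8]) {
    const double pi = 3.14159265358979323846;
    for (int u = 0; u < 8; ++u) {
        for (int v = 0; v < 8; ++v) {
            double sum = 0.0;
            for (int x = 0; x < 8; ++x)
                for (int y = 0; y < 8; ++y)
                    sum += img[(by + y) * width + (bx + x)]
                         * std::cos((2 * x + 1) * u * pi / 16.0)
                         * std::cos((2 * y + 1) * v * pi / 16.0);
            double au = (u == 0) ? std::sqrt(1.0 / 8.0) : std::sqrt(2.0 / 8.0);
            double av = (v == 0) ? std::sqrt(1.0 / 8.0) : std::sqrt(2.0 / 8.0);
            coeff[u][v] = au * av * sum;
        }
    }
}

// Assumed feature value of a block: sum of absolute AC coefficients
// (text-like blocks tend to have high AC energy).
double blockFeature(const std::vector<std::uint8_t>& img, int width,
                    int bx, int by) {
    double c[8][8];
    dct8x8(img, width, bx, by, c);
    double energy = 0.0;
    for (int u = 0; u < 8; ++u)
        for (int v = 0; v < 8; ++v)
            if (u != 0 || v != 0) energy += std::fabs(c[u][v]);
    return energy;
}

// Mark each 8x8 block of the image as candidate text / background.
std::vector<bool> candidateBlocks(const std::vector<std::uint8_t>& img,
                                  int width, int height, double threshold) {
    std::vector<bool> mask;
    for (int by = 0; by + 8 <= height; by += 8)
        for (int bx = 0; bx + 8 <= width; bx += 8)
            mask.push_back(blockFeature(img, width, bx, by) > threshold);
    return mask;
}
```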
Step 4, tail number segmentation image preprocessing with Otsu dynamic threshold binarization.
The global dynamic binarization algorithm searches for an optimal threshold based on the pixel distribution of the whole gray image; the classical algorithm is the Otsu algorithm. Its basic idea is: take a threshold t and divide the pixels by gray value into two classes, those greater than or equal to t and those below t; then compute the between-class variance and the within-class variance of the two classes and find the threshold t that maximizes their ratio η; this threshold is the optimal threshold for binarizing the image. This binarization method gives satisfactory results whether or not the image histogram has an obvious bimodal shape, so it is a good method for automatic threshold selection.
The concrete steps of the Otsu dynamic binarization algorithm are as follows. Let the gray-level range of the image be 0 to L-1 and let n_i be the number of pixels with gray level i; then the total number of pixels of the image is

N = n_0 + n_1 + ... + n_{L-1}    (2)

Normalizing the histogram gives

p_i = n_i / N, \quad \sum_{i=0}^{L-1} p_i = 1    (3)

A threshold t divides the gray levels into two classes:

C_1 = \{0, 1, ..., t\}, \quad C_2 = \{t+1, t+2, ..., L-1\}    (4)

From the gray-level histogram of the image, the occurrence probabilities of classes C_1 and C_2 are

w_1 = \sum_{i=0}^{t} p_i, \quad w_2 = \sum_{i=t+1}^{L-1} p_i    (5)

The means of classes C_1 and C_2 are then

M_1 = \sum_{i=0}^{t} i \cdot p_i / w_1, \quad M_2 = \sum_{i=t+1}^{L-1} i \cdot p_i / w_2    (6)

and the variances of classes C_1 and C_2 are

\sigma_1^2 = \sum_{i=0}^{t} (i - M_1)^2 \cdot p_i / w_1, \quad \sigma_2^2 = \sum_{i=t+1}^{L-1} (i - M_2)^2 \cdot p_i / w_2    (7)

The within-class variance of C_1 and C_2, \sigma_A^2, is defined as

\sigma_A^2 = w_1 \sigma_1^2 + w_2 \sigma_2^2    (8)

and the between-class variance of C_1 and C_2, \sigma_B^2, as

\sigma_B^2 = w_1 w_2 (M_1 - M_2)^2    (9)

The criterion function of the Otsu dynamic binarization algorithm is then

\eta = \sigma_B^2 / \sigma_A^2    (10)
The threshold t is traversed from the minimum to the maximum gray value, and the t that maximizes η is the Otsu optimal segmentation threshold. The Otsu dynamic binarization algorithm uses the ratio of the between-class variance to the within-class variance of the two classes produced by the threshold to reflect how the two pattern classes are distributed in the pattern space. The larger the between-class variance and the smaller the within-class variance, the larger the distance between the classes and the more similar the pixels within each class, i.e. the better the thresholding result.
The Otsu dynamic threshold binarization algorithm is simple to implement: it only requires traversing all candidate values of t and selecting the appropriate threshold T. The Otsu dynamic binarization algorithm is used in the actual aircraft tail number recognition system of this project; experiments show that it separates the tail number target from the background well and gives a good binarization result. Part of the test results of preprocessing the tail number character segmentation images with the Otsu dynamic threshold binarization algorithm are shown in Fig. 4.
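A minimal C++ sketch of the Otsu threshold search described above is given below; it follows formulas (2)-(10), sweeping t over all gray levels and maximizing the ratio of between-class to within-class variance. The histogram-based implementation details are assumptions; the patent only prescribes the criterion.

```cpp
#include <vector>
#include <cstdint>

// Otsu dynamic threshold, formulas (2)-(10): sweep t and maximize
// eta = (between-class variance) / (within-class variance).
int otsuThreshold(const std::vector<std::uint8_t>& img) {
    const int L = 256;
    std::vector<double> p(L, 0.0);
    for (std::uint8_t g : img) p[g] += 1.0;      // histogram n_i
    for (double& v : p) v /= img.size();         // normalized p_i, formula (3)

    int bestT = 0;
    double bestEta = -1.0;
    for (int t = 0; t < L - 1; ++t) {            // candidate thresholds
        double w1 = 0.0, w2 = 0.0, m1 = 0.0, m2 = 0.0;
        for (int i = 0; i <= t; ++i)    { w1 += p[i]; m1 += i * p[i]; }
        for (int i = t + 1; i < L; ++i) { w2 += p[i]; m2 += i * p[i]; }
        if (w1 == 0.0 || w2 == 0.0) continue;    // one class empty, skip
        m1 /= w1; m2 /= w2;                      // class means, formula (6)
        double s1 = 0.0, s2 = 0.0;
        for (int i = 0; i <= t; ++i)    s1 += (i - m1) * (i - m1) * p[i];
        for (int i = t + 1; i < L; ++i) s2 += (i - m2) * (i - m2) * p[i];
        s1 /= w1; s2 /= w2;                      // class variances, formula (7)
        double withinVar  = w1 * s1 + w2 * s2;               // formula (8)
        double betweenVar = w1 * w2 * (m1 - m2) * (m1 - m2); // formula (9)
        if (withinVar <= 0.0) continue;
        double eta = betweenVar / withinVar;     // formula (10)
        if (eta > bestEta) { bestEta = eta; bestT = t; }
    }
    return bestT;
}

// Binarize with the Otsu threshold: foreground 255, background 0.
void binarize(std::vector<std::uint8_t>& img) {
    int T = otsuThreshold(img);
    for (auto& g : img) g = (g > T) ? 255 : 0;
}
```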
Step 5, connected-region aircraft tail number segmentation.
Under ideal conditions, each character of the aircraft tail number forms an independent connected region. The connected-region segmentation method divides the image into small regions with identical characteristics (the smallest unit is a pixel), studies the characteristics of neighboring small regions, and successively merges small regions with similar characteristics according to a certain judgment criterion. Once the minimum bounding rectangle of each connected region is obtained, the position of each tail number character is known. The concrete character segmentation method is as follows:
Step A1, scan the image and find a pixel that does not yet belong to any region, i.e. find a new starting point for region growing;
Step A2, compare the gray value of this pixel with the gray values of those pixels in its 4-neighborhood (up, down, left and right; an 8-neighborhood could also be used, but the 4-neighborhood is adopted in this tail number recognition system to reduce the influence of character adhesion) that do not yet belong to any region; if a certain judgment criterion is satisfied, merge them into the same region;
Step A3, repeat the operation of step A2 for the newly merged pixels;
Step A4, repeat steps A2 and A3 until the region can no longer expand;
Step A5, return to step A1 and look for a pixel that can become the starting point of a new region.
In the present invention, the image used for region growing is a binary image that has already been threshold-segmented and contains only the two pixel values 0 and 255, so the criteria for finding a starting point and for region growing are simple. In the experiments, the image is scanned from the upper-left corner, left to right and top to bottom, until the first black pixel (gray value 0) is found; it is used as the first starting point, and region growing proceeds over the 4-neighborhood, i.e. the next qualifying pixel is searched for in the four directions up, down, left and right. The judgment criterion is that the difference between the neighboring pixel and the center pixel is 0, i.e. pixels that are both black are merged into the same region. In the program, a queue is defined to store the pixels of the region. If every black pixel encountered were simply enqueued, the same black pixel could be enqueued several times and the queue would contain much "redundant" information. Therefore the "mark filling" method is adopted: a pixel is marked the first time it is pushed into the queue, so that it differs from the original values 0 and 255 in the image, and marked pixels are not processed again during region growing. This reduces the queue space and improves the processing speed. The connected-domain segmentation algorithm of this system for the threshold-segmented binary image is thus:
Step B1, starting from the upper-left corner, scan the binary image and find the first black pixel; take this pixel as the starting point of the first connected region, mark it, push its coordinates into the queue std::queue regionStack, and increment the region-size counter nRegionSize++;
Step B2, take a pixel out of the queue, search its 4-neighborhood, find the unmarked black pixels, mark them and push them onto the tail of the queue, merge them into the current connected region, and increment the region-size counter nRegionSize++;
Step B3, repeat the operation of step B2 for the newly merged pixels in the queue;
Step B4, repeat steps B2 and B3 until the region can no longer expand, i.e. until the queue is empty.
The above operations yield one connected region of the tail number image; repeating these steps yields all connected regions. According to the ICAO standard definition of the aircraft tail number format, a normal tail number should be divided into 6 connected regions. The connected regions are sorted by area, and regions ranked after the 6th are discarded as noise. The first 6 regions, ordered from left to right, are the nationality mark, the '-' separator and the four characters of the registration mark of the aircraft tail number. Fig. 5 shows the experimental results of tail number character segmentation with the connected-region method.
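A C++ sketch of the queue-based region growing with the "mark filling" idea described above is shown below, matching the std::queue usage mentioned in step B1; the bounding-box bookkeeping and the 6-region check are omitted, and the names (other than std::queue) are assumptions.

```cpp
#include <vector>
#include <queue>
#include <utility>
#include <cstdint>

// One connected region: list of pixel coordinates (x, y).
using Region = std::vector<std::pair<int, int>>;

// Extract 4-connected regions of black pixels (value 0) from a binary image.
// Visited pixels are overwritten with the marker value 128 ("mark filling"),
// so no pixel is pushed into the queue twice.
std::vector<Region> connectedRegions(std::vector<std::uint8_t>& img,
                                     int width, int height) {
    const std::uint8_t kMark = 128;
    const int dx[4] = {0, 0, -1, 1};  // 4-neighborhood: up, down, left, right
    const int dy[4] = {-1, 1, 0, 0};
    std::vector<Region> regions;

    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            if (img[y * width + x] != 0) continue; // not an unvisited black pixel
            Region region;                          // step B1: new starting point
            std::queue<std::pair<int, int>> regionQueue;
            img[y * width + x] = kMark;
            regionQueue.push({x, y});
            while (!regionQueue.empty()) {          // steps B2-B4
                auto [cx, cy] = regionQueue.front();
                regionQueue.pop();
                region.push_back({cx, cy});
                for (int k = 0; k < 4; ++k) {
                    int nx = cx + dx[k], ny = cy + dy[k];
                    if (nx < 0 || ny < 0 || nx >= width || ny >= height) continue;
                    if (img[ny * width + nx] == 0) { // unmarked black neighbor
                        img[ny * width + nx] = kMark; // mark before enqueueing
                        regionQueue.push({nx, ny});
                    }
                }
            }
            regions.push_back(std::move(region));
        }
    }
    return regions;
}
```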
Step 6, aircraft tail number recognition.
Step C1, tail number recognition preprocessing by inverse projective transformation.
To perform the inverse projective transformation of the tail number image, the coordinates of 4 "beacon" (control) points in the tail number image and their corresponding coordinates in the original image are needed, from which the projective transformation matrix T can be computed. The tail number location and segmentation of the airport surface surveillance image in the preceding steps have already produced the single characters of the tail number, including the nationality mark and the registration mark. The present invention obtains the "beacon" control points of the projective transformation from the nationality mark letter in the tail number image and applies the inverse projective transformation to the tail number picture.
First, the nationality mark part obtained by tail number location and segmentation is taken (it is the letter "B" after projection; at present the inverse projective transformation method proposed by the invention applies only to Chinese-registered tail numbers whose nationality mark is "B"). The points where lines of slope k_1 = 1 and k_2 = -1 are tangent to the nationality mark are taken as two control points of the projective transformation. The slope k of the line through these two control points is analyzed to decide whether an inverse projective transformation is needed; in the present invention it is performed when the angle between this line and the y axis exceeds 10°. The tangent points of the line of slope k with the upper and lower parts of the right side of the nationality mark are taken as the other two control points. The control point selection process is shown in Fig. 6. With the 4 control points taken on the nationality mark, the projective transformation matrix T of the projective transformation equation can be obtained, and once T is computed the inverse projective transformation can be applied to the whole tail number picture. In practice, this preprocessing works well and effectively removes the projective distortion that the acquired image suffers in camera-based airport surface surveillance. The effect is shown in Fig. 7.
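The patent does not give a formula for computing T; the following C++ sketch shows one standard way (assumed here) to obtain a 3×3 projective transformation matrix from 4 control-point correspondences by solving the resulting 8×8 linear system with Gaussian elimination.

```cpp
#include <array>
#include <cmath>
#include <utility>

struct Point { double x, y; };

// Solve an 8x8 linear system A*h = b by Gaussian elimination with partial pivoting.
static bool solve8x8(double A[8][8], double b[8], double h[8]) {
    for (int col = 0; col < 8; ++col) {
        int pivot = col;
        for (int r = col + 1; r < 8; ++r)
            if (std::fabs(A[r][col]) > std::fabs(A[pivot][col])) pivot = r;
        if (std::fabs(A[pivot][col]) < 1e-12) return false;  // degenerate points
        for (int c = 0; c < 8; ++c) std::swap(A[col][c], A[pivot][c]);
        std::swap(b[col], b[pivot]);
        for (int r = col + 1; r < 8; ++r) {
            double f = A[r][col] / A[col][col];
            for (int c = col; c < 8; ++c) A[r][c] -= f * A[col][c];
            b[r] -= f * b[col];
        }
    }
    for (int r = 7; r >= 0; --r) {
        double s = b[r];
        for (int c = r + 1; c < 8; ++c) s -= A[r][c] * h[c];
        h[r] = s / A[r][r];
    }
    return true;
}

// Projective transformation matrix T (3x3, T[2][2] = 1) mapping src[i] -> dst[i]:
// X = (t11 x + t12 y + t13) / w, Y = (t21 x + t22 y + t23) / w, w = t31 x + t32 y + 1.
bool projectiveMatrix(const std::array<Point, 4>& src,
                      const std::array<Point, 4>& dst,
                      double T[3][3]) {
    double A[8][8] = {};
    double b[8];
    for (int i = 0; i < 4; ++i) {
        double x = src[i].x, y = src[i].y, X = dst[i].x, Y = dst[i].y;
        double* r1 = A[2 * i];
        double* r2 = A[2 * i + 1];
        r1[0] = x; r1[1] = y; r1[2] = 1; r1[6] = -x * X; r1[7] = -y * X;
        r2[3] = x; r2[4] = y; r2[5] = 1; r2[6] = -x * Y; r2[7] = -y * Y;
        b[2 * i] = X;
        b[2 * i + 1] = Y;
    }
    double h[8];
    if (!solve8x8(A, b, h)) return false;
    T[0][0] = h[0]; T[0][1] = h[1]; T[0][2] = h[2];
    T[1][0] = h[3]; T[1][1] = h[4]; T[1][2] = h[5];
    T[2][0] = h[6]; T[2][1] = h[7]; T[2][2] = 1.0;
    return true;
}
```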
Step C2, aircraft tail number character feature extraction for the support vector machine.
The support vector machine classification method is based on the statistical learning theory of structural risk minimization, a special small-sample statistical theory that studies pattern recognition under limited samples. It establishes a good theoretical framework for general machine learning problems and has also produced a new pattern recognition method, the support vector machine (Support Vector Machine, SVM). This is the youngest part of statistical learning theory; its main content was essentially completed between 1992 and 1995 and it is still developing. It can be said that statistical learning theory has received increasing attention since the 1990s largely because it gave rise to the general learning method of support vector machines.
The SVM method was proposed from the optimal separating hyperplane (Optimal Hyperplane) in the linearly separable case. Consider the two-dimensional, linearly separable two-class sample set shown in Fig. 8, where the points marked "+" and "*" represent the two classes of training samples, H is a separating line that classifies the two classes without error, and H_1 and H_2 are the lines parallel to H passing through the samples of each class closest to H. The distance between H_1 and H_2 is called the classification margin of the two classes. The so-called optimal separating line must not only separate the two classes without error but also maximize the margin between them. In terms of the earlier discussion, error-free separation makes the empirical risk minimal (zero), while maximizing the margin minimizes the confidence interval in the generalization bound and thus minimizes the true risk. Generalized to a higher-dimensional space, the optimal separating line becomes the optimal separating hyperplane.
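For reference (this formulation is not reproduced in the patent text but is the standard optimal-hyperplane problem that Fig. 8 illustrates), the hard-margin SVM for linearly separable samples (x_i, y_i), y_i \in \{+1, -1\}, can be written as

\min_{w,\,b}\ \tfrac{1}{2}\lVert w \rVert^{2} \quad \text{subject to} \quad y_i\,(w \cdot x_i + b) \ge 1, \qquad i = 1, \dots, n

where H_1 and H_2 correspond to the planes w \cdot x + b = \pm 1 and the margin between them is 2 / \lVert w \rVert, so minimizing \tfrac{1}{2}\lVert w \rVert^{2} is equivalent to maximizing the margin.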
The aircraft tail number character feature extraction is specifically as follows:
If the character image were used directly as the classifier input, the computation would be large. For example, if a single tail number character image has a size of 16×16 pixels and the gray value of each pixel is used as a feature, every input is a feature vector of 256 dimensions. Such a huge number of features is very unfavorable both for the computational complexity and for the performance of the classifier.
The starting point of character feature extraction is to find feature vectors that are effective from the viewpoint of recognition: feature values from different samples of the same class should be very close, while feature values from samples of different classes should differ greatly. For character recognition, extracting effective character features is the primary task. Character features can be divided into two broad classes: structural features and statistical features.
Structural feature extraction focuses on determining the structural information expressed by primitives; at present structural features are mainly obtained from the skeleton, the contour and the strokes. The skeleton is a person's abstraction of a character, and skeleton-based structural features include feature points such as end points, crossing points and turning points. Skeleton-based feature extraction depends heavily on the quality of image thinning, since existing thinning algorithms all more or less change the topology, producing Y-shaped bifurcations, burrs, broken strokes and the like, which requires the subsequent recognition to allow great flexibility in its rules. The contour can also reflect the structure of the character image; a series of structural features is obtained by searching, within a certain range, for the farthest and nearest contour points and the maximum and minimum points. Compared with the skeleton, the contour gives more accurate positions and saves the thinning computation, but it is vulnerable to stroke width variation and broken strokes, so it suits environments with good image quality and fixed writing.
Statistical features extract from the raw data the information most relevant to classification, minimizing the within-class gap and maximizing the between-class gap; the features should remain as invariant as possible to deformations of the same character class. Statistical features can be divided into global features and local features. Global features transform the whole character image and use the transform coefficients as the image features; they mainly include the KL (Karhunen-Loeve) transform, the Fourier transform, the Hadamard transform, the Hough transform and moment features. Local features transform the image within a window of a specific size at a certain position; they mainly include local gray features, projection features and directional line element features. The local gray feature, also called the coarse mesh feature, divides the normalized image into a fixed or flexible grid and takes the average gray value or the number of target pixels in each cell, giving a feature vector whose dimension equals the number of cells. The projection feature projects the normalized image onto the X and Y directions to obtain two N-dimensional feature vectors; it is simple to compute and discriminates well in coarse classification. The directional line element feature divides the character into a grid and classifies the adjacent black points of each point in each cell according to their directions.
Structural and statistical feature extraction methods each have their advantages and disadvantages. For statistical methods, once a feature has been chosen, the extraction algorithm is simple, training is easy, and a relatively high recognition rate can be obtained on a given training set; their biggest drawback lies in feature selection. One of the main advantages of structural methods is that they describe the structure of the character, decomposing the character pattern into sub-patterns such as strokes, stroke segments and radicals, and geometric and structural knowledge can be combined effectively during recognition, so more reliable recognition results can be obtained; but the computation is large, and line segments that belong to no definite category are hard to characterize or are easily mis-coded.
Experience shows that no single feature can represent arbitrary patterns ideally. In practical problems it is not easy to find the most important features, or measurement conditions do not allow them to be measured; this complicates feature selection and extraction and makes it one of the difficult tasks in building a pattern recognition system. How to find a feature extraction method that organically combines the advantages of the two approaches is therefore a problem worth further study.
The present invention finally adopts the grid centroid method to extract the features of the character image. The steps of centroid feature extraction are as follows:
Step (1), compute the centroid O_0 of the image;
Step (2), use O_0 to divide the image into four parts, compute the centroid of each part, and obtain O_1, O_2, O_3, O_4;
Step (3), use O_1, O_2, O_3, O_4 to divide each part into four sub-parts again, forming 16 sub-parts in total, and take the centroid O_5, O_6, ..., O_20 of each sub-part.
Two centroid features, of dimension 24 and 48 respectively, are adopted: the 24-dimensional feature takes the normalized X and Y coordinates of O_1, O_2, O_3, O_4 together with the pixel counts of the 16 sub-parts of the image as the tail number feature; the 48-dimensional feature takes the X and Y coordinates of O_5, O_6, O_7, ..., O_20 together with the pixel counts of the 16 sub-parts as the tail number feature.
The grid centroid feature reflects the distribution of the character within the image well, so its ability to distinguish characters of different classes is strong and it is little affected by noise; because the feature dimension is small, the time and space complexity are also low, making it a good feature extraction method for character images. Suppose a gray tail number image has size M×N and f(x, y) denotes the gray value of the pixel in row x and column y of the image (x = 0, ..., M-1, y = 0, ..., N-1). The centroid of the image is computed as:
\bar{x} = \sum_{x=0}^{M-1} \sum_{y=0}^{N-1} x \cdot f(x, y) \Big/ \sum_{x=0}^{M-1} \sum_{y=0}^{N-1} f(x, y)    (11)

\bar{y} = \sum_{x=0}^{M-1} \sum_{y=0}^{N-1} y \cdot f(x, y) \Big/ \sum_{x=0}^{M-1} \sum_{y=0}^{N-1} f(x, y)    (12)
The effect of centroid feature extraction is shown in Fig. 9.
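A C++ sketch of the 24-dimensional grid centroid feature described above (centroid O_0, quadrant centroids O_1..O_4, and the pixel counts of the 16 sub-parts) is given below; the normalization by the image size and the binary-image representation are assumptions.

```cpp
#include <vector>
#include <cstdint>

struct Rect { int x0, y0, x1, y1; };        // half-open [x0,x1) x [y0,y1)
struct Centroid { double x, y; };

// Centroid of the foreground (255) pixels inside rect, formulas (11)-(12).
// Falls back to the rectangle centre if the rectangle has no foreground.
static Centroid centroidOf(const std::vector<std::uint8_t>& img, int width,
                           const Rect& r) {
    double sx = 0, sy = 0, sw = 0;
    for (int y = r.y0; y < r.y1; ++y)
        for (int x = r.x0; x < r.x1; ++x)
            if (img[y * width + x] == 255) { sx += x; sy += y; sw += 1; }
    if (sw == 0) return {(r.x0 + r.x1) / 2.0, (r.y0 + r.y1) / 2.0};
    return {sx / sw, sy / sw};
}

static int foregroundCount(const std::vector<std::uint8_t>& img, int width,
                           const Rect& r) {
    int n = 0;
    for (int y = r.y0; y < r.y1; ++y)
        for (int x = r.x0; x < r.x1; ++x)
            if (img[y * width + x] == 255) ++n;
    return n;
}

// Split a rectangle into four quadrants at the point (cx, cy).
static std::vector<Rect> splitAt(const Rect& r, int cx, int cy) {
    return { {r.x0, r.y0, cx, cy}, {cx, r.y0, r.x1, cy},
             {r.x0, cy, cx, r.y1}, {cx, cy, r.x1, r.y1} };
}

// 24-dimensional grid centroid feature: normalized coordinates of O1..O4
// plus the foreground pixel counts of the 16 sub-parts.
std::vector<double> gridCentroidFeature(const std::vector<std::uint8_t>& img,
                                        int width, int height) {
    std::vector<double> feat;
    Rect whole{0, 0, width, height};
    Centroid o0 = centroidOf(img, width, whole);                 // step (1)
    std::vector<Rect> quarters = splitAt(whole, (int)o0.x, (int)o0.y);
    for (const Rect& q : quarters) {                             // step (2)
        Centroid oi = centroidOf(img, width, q);
        feat.push_back(oi.x / width);                            // normalized X
        feat.push_back(oi.y / height);                           // normalized Y
    }
    for (const Rect& q : quarters) {                             // step (3)
        Centroid oi = centroidOf(img, width, q);
        for (const Rect& s : splitAt(q, (int)oi.x, (int)oi.y))
            feat.push_back((double)foregroundCount(img, width, s));
    }
    return feat;                                                 // 8 + 16 = 24 values
}
```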
Step C3, multi-class classifier recognition.
The support vector machine is inherently a two-class discriminant method, but practical applications often require multi-class classification, which involves converting the multi-class problem into two-class problems. There are two main ideas for multi-class SVM algorithms:
1. Decompose the multi-class classification problem into several two-class problems in some way. This requires redistributing the training samples so that they meet the needs of the new two-class problems, and adopting different strategies, depending on how the algorithm is realized, to decide the class of a test sample.
2. Merge the solution of several separating surfaces into one optimization problem. This is a generalization of the two-class case: one large quadratic programming problem is solved at once and the multi-class problem is classified directly. Although this idea looks simpler than the first, its algorithmic complexity increases greatly, more variables are needed, training takes longer, the generalization ability is not better than that of the first approach, and it is not suitable for large-scale data samples.
Based on the first idea, several commonly used multi-class classification methods are introduced below.
1. One-versus-rest method. The one-versus-rest method (One Versus Rest, OVR) is the earliest and still one of the most widely used methods. It constructs n two-class classifiers (for n classes in total); the i-th classifier separates class i from all the remaining classes, taking the class-i samples in the training set as the positive class and all other samples as the negative class during training. At decision time the input is passed through all n classifiers, producing n outputs f_i(x) = sgn(g_i(x)). If exactly one output is +1, the corresponding class is the class of the input; if more than one output is +1 (several classes claim the input) or none is +1 (no class claims it), the values g(x) are compared and the class with the largest value is taken as the class of the input. The method is simple to realize: only n two-class support vector machines have to be trained, so the number of classification functions is small (n) and recognition is fast. Its disadvantage is also obvious: every classifier is trained on all the samples, so n quadratic programming problems must be solved, and since the training speed of each support vector machine slows down sharply as the number of training samples grows, the training time of this method is long.
2. One-versus-one classification. One-versus-one classification (One Versus One, OVO) is also called pairwise classification. In the training set T (with k different classes), all pairwise combinations of classes are formed, P = k(k-1)/2 in total; for each pair (i, j) a two-class training set T(i, j) is built from the samples of these two classes, and a two-class support vector machine is solved for it, giving P decision functions f_{(i,j)}(x) = sgn(g_{i,j}(x)). At decision time the input X is passed to the P decision functions; if f_{(i,j)}(X) = 1, X is judged to belong to class i and class i gets one vote, otherwise it is judged to belong to class j and class j gets one vote. The votes of the k classes over the P decision results are counted, and the class with the most votes is the final decision. For a k-class problem this method needs k(k-1)/2 two-class classifiers, far more than the one-versus-rest method above; however, each classification problem is much smaller and easier to learn. If k is very large, the number of classifiers required becomes very large and the method becomes much slower.
The two multi-class classification methods described above are illustrated in Fig. 10.
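A brief C++ sketch of the "one versus one" voting scheme is shown below; the pairwise decision function is passed in as a callable, since the trained support vector machines themselves are outside the scope of this illustration.

```cpp
#include <vector>
#include <functional>

// "One versus one" voting: decide(i, j, x) returns +1 if sample x is judged
// to belong to class i, otherwise -1 (class j). With k classes there are
// k*(k-1)/2 pairwise classifiers; the class with the most votes wins.
int classifyOneVsOne(int k, const std::vector<double>& x,
                     const std::function<int(int, int, const std::vector<double>&)>& decide) {
    std::vector<int> votes(k, 0);
    for (int i = 0; i < k; ++i) {
        for (int j = i + 1; j < k; ++j) {
            if (decide(i, j, x) > 0) ++votes[i];
            else                     ++votes[j];
        }
    }
    int best = 0;
    for (int c = 1; c < k; ++c)
        if (votes[c] > votes[best]) best = c;
    return best;   // index of the winning class
}
```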
Step 7, recognition result post-processing.
Finally, a demonstration program of the aircraft tail number recognition system was designed to demonstrate the proposed tail number recognition algorithm; the modules mentioned in the present invention, such as tail number segmentation and tail number recognition, are all embedded in this demonstration software. Since the video surveillance method can not only provide accurate detection and position measurement of aircraft and vehicles within the airport area, but also automatically recognize the tail numbers of aircraft on taxiways and aprons, the recognition results are merged into the final display interface and labels are added automatically to the surveillance results.

Claims (3)

1. An aircraft tail number recognition method, characterized in that the method comprises the following concrete steps:
Step 1, acquisition of original images: multiple images are taken from different viewing angles, and the images containing the aircraft tail number are selected from them;
Step 2, aircraft tail number location preprocessing: candidate regions are first judged according to the regional characteristics of the aircraft tail number, and the position of the tail number region is found in the image covering the whole airport surface surveillance scene;
Step 3, aircraft tail number location based on the DCT domain and edge detection: since the tail number region contains abundant edge information, a tail number location technique based on DCT-domain and edge features is proposed;
Step 4, tail number segmentation image preprocessing with Otsu dynamic threshold binarization: the located tail number region is preprocessed with Otsu dynamic threshold binarization to obtain a clearer target region of the tail number;
Step 5, connected-region tail number segmentation: the connected-region segmentation method is then used to divide the tail number region into single character regions;
Step 6, aircraft tail number recognition: a tail number recognition algorithm based on an optimal-parameter support vector machine is proposed to realize tail number recognition;
Step 7, recognition result post-processing.
2. The aircraft tail number recognition method according to claim 1, characterized in that the concrete method of the tail number segmentation image preprocessing with Otsu dynamic threshold binarization described in step 4 is:
The concrete steps of the Otsu dynamic binarization algorithm are as follows. Let the gray-level range of the image be 0 to L-1 and let n_i be the number of pixels with gray level i; then the total number of pixels of the image is

N = n_0 + n_1 + ... + n_{L-1}    (2)

Normalizing the histogram gives

p_i = n_i / N, \quad \sum_{i=0}^{L-1} p_i = 1    (3)

A threshold t divides the gray levels into two classes:

C_1 = \{0, 1, ..., t\}, \quad C_2 = \{t+1, t+2, ..., L-1\}    (4)

From the gray-level histogram of the image, the occurrence probabilities of classes C_1 and C_2 are

w_1 = \sum_{i=0}^{t} p_i, \quad w_2 = \sum_{i=t+1}^{L-1} p_i    (5)

The means of classes C_1 and C_2 are then

M_1 = \sum_{i=0}^{t} i \cdot p_i / w_1, \quad M_2 = \sum_{i=t+1}^{L-1} i \cdot p_i / w_2    (6)

and the variances of classes C_1 and C_2 are

\sigma_1^2 = \sum_{i=0}^{t} (i - M_1)^2 \cdot p_i / w_1, \quad \sigma_2^2 = \sum_{i=t+1}^{L-1} (i - M_2)^2 \cdot p_i / w_2    (7)

The within-class variance of C_1 and C_2, \sigma_A^2, is defined as

\sigma_A^2 = w_1 \sigma_1^2 + w_2 \sigma_2^2    (8)

and the between-class variance of C_1 and C_2, \sigma_B^2, as

\sigma_B^2 = w_1 w_2 (M_1 - M_2)^2    (9)

The criterion function of the Otsu dynamic binarization algorithm is then

\eta = \sigma_B^2 / \sigma_A^2    (10)

The threshold t is traversed from the minimum to the maximum gray value, and the t that maximizes \eta is the Otsu optimal segmentation threshold. The Otsu dynamic threshold binarization algorithm is simple to implement: it only requires traversing the range of all candidate values of t and selecting the appropriate threshold T. This method separates the tail number target from the background well and gives a good binarization result.
3. The aircraft tail number recognition method according to claim 1, characterized in that the concrete character segmentation steps of the connected-region segmentation method described in step 5, which divides the tail number region into single character regions, are as follows:
Step A1, scan the image and find a pixel that does not yet belong to any region, i.e. find a new starting point for region growing;
Step A2, compare the gray value of this pixel with the gray values of those pixels in its 4-neighborhood that do not yet belong to any region; if a certain judgment criterion is satisfied, merge them into the same region;
Step A3, repeat the operation of step A2 for the newly merged pixels;
Step A4, repeat steps A2 and A3 until the region can no longer expand;
Step A5, return to step A1 and look for a pixel that can become the starting point of a new region.
CN 201110388239 2011-11-29 2011-11-29 Airplane tail number recognition method CN102509091B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 201110388239 CN102509091B (en) 2011-11-29 2011-11-29 Airplane tail number recognition method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN 201110388239 CN102509091B (en) 2011-11-29 2011-11-29 Airplane tail number recognition method

Publications (2)

Publication Number Publication Date
CN102509091A true CN102509091A (en) 2012-06-20
CN102509091B CN102509091B (en) 2013-12-25

Family

ID=46221172

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 201110388239 CN102509091B (en) 2011-11-29 2011-11-29 Airplane tail number recognition method

Country Status (1)

Country Link
CN (1) CN102509091B (en)

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103500323A (en) * 2013-09-18 2014-01-08 西安理工大学 Template matching method based on self-adaptive gray-scale image filtering
CN103971091A (en) * 2014-04-03 2014-08-06 北京首都国际机场股份有限公司 Automatic plane number recognition method
CN104156706A (en) * 2014-08-12 2014-11-19 华北电力大学句容研究中心 Chinese character recognition method based on optical character recognition technology
CN104243935A (en) * 2014-10-10 2014-12-24 南京莱斯信息技术股份有限公司 Target monitoring method for airport field monitoring system on basis of video recognition
CN104408678A (en) * 2014-10-31 2015-03-11 中国科学院苏州生物医学工程技术研究所 Electronic medical record system for personal use
CN105335688A (en) * 2014-08-01 2016-02-17 深圳中集天达空港设备有限公司 Identification method of airplane model on the basis of visual image
CN105512682A (en) * 2015-12-07 2016-04-20 南京信息工程大学 Secret level marking identification method based on Krawtchouk moment and KNN-SMO classifier
CN105957238A (en) * 2016-05-20 2016-09-21 聚龙股份有限公司 Banknote management method and system
CN106056751A (en) * 2016-05-20 2016-10-26 聚龙股份有限公司 Prefix number identification method and system
CN106374394A (en) * 2016-09-28 2017-02-01 刘子轩 Pipeline robot based on image recognition technology and control method
CN108090442A (en) * 2017-12-15 2018-05-29 四川大学 A kind of airport scene monitoring method based on convolutional neural networks
CN108734158A (en) * 2017-04-14 2018-11-02 成都唐源电气股份有限公司 A kind of real-time train number identification method and device
CN109299743A (en) * 2018-10-18 2019-02-01 京东方科技集团股份有限公司 Gesture identification method and device, terminal
CN109409373A (en) * 2018-09-06 2019-03-01 昆明理工大学 A kind of character recognition method based on image procossing
CN109850518A (en) * 2018-11-12 2019-06-07 太原理工大学 A kind of real-time mining adhesive tape early warning tearing detection method based on infrared image
CN111110189A (en) * 2019-11-13 2020-05-08 吉林大学 Anti-snoring device and method based on DSP sound and image recognition technology

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101246648A (en) * 2007-12-28 2008-08-20 北京航空航天大学 Fixed intersection electric police grasp shoot device
EP2169961A1 (en) * 2007-06-28 2010-03-31 Mitsubishi Electric Corporation Image encoder and image decoder
CN101789080A (en) * 2010-01-21 2010-07-28 上海交通大学 Detection method for vehicle license plate real-time positioning character segmentation
CN101859382A (en) * 2010-06-03 2010-10-13 复旦大学 License plate detection and identification method based on maximum stable extremal region
CN101976340A (en) * 2010-10-13 2011-02-16 重庆大学 License plate positioning method based on compressed domain

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2169961A1 (en) * 2007-06-28 2010-03-31 Mitsubishi Electric Corporation Image encoder and image decoder
CN101246648A (en) * 2007-12-28 2008-08-20 北京航空航天大学 Fixed intersection electric police grasp shoot device
CN101789080A (en) * 2010-01-21 2010-07-28 上海交通大学 Detection method for vehicle license plate real-time positioning character segmentation
CN101859382A (en) * 2010-06-03 2010-10-13 复旦大学 License plate detection and identification method based on maximum stable extremal region
CN101976340A (en) * 2010-10-13 2011-02-16 重庆大学 License plate positioning method based on compressed domain

Cited By (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103500323A (en) * 2013-09-18 2014-01-08 西安理工大学 Template matching method based on self-adaptive gray-scale image filtering
CN103500323B (en) * 2013-09-18 2016-02-17 西安理工大学 Based on the template matching method of self-adaptation gray level image filtering
CN103971091B (en) * 2014-04-03 2017-04-26 北京首都国际机场股份有限公司 Automatic plane number recognition method
CN103971091A (en) * 2014-04-03 2014-08-06 北京首都国际机场股份有限公司 Automatic plane number recognition method
CN105335688B (en) * 2014-08-01 2018-07-13 深圳中集天达空港设备有限公司 A kind of aircraft model recognition methods of view-based access control model image
CN105335688A (en) * 2014-08-01 2016-02-17 深圳中集天达空港设备有限公司 Identification method of airplane model on the basis of visual image
CN104156706A (en) * 2014-08-12 2014-11-19 华北电力大学句容研究中心 Chinese character recognition method based on optical character recognition technology
CN104243935A (en) * 2014-10-10 2014-12-24 南京莱斯信息技术股份有限公司 Target monitoring method for airport field monitoring system on basis of video recognition
CN104243935B (en) * 2014-10-10 2018-02-16 南京莱斯信息技术股份有限公司 Airport field prison aims of systems monitoring method based on video identification
CN104408678A (en) * 2014-10-31 2015-03-11 中国科学院苏州生物医学工程技术研究所 Electronic medical record system for personal use
CN105512682B (en) * 2015-12-07 2018-11-23 南京信息工程大学 A kind of security level identification recognition methods based on Krawtchouk square and KNN-SMO classifier
CN105512682A (en) * 2015-12-07 2016-04-20 南京信息工程大学 Secret level marking identification method based on Krawtchouk moment and KNN-SMO classifier
CN106056751B (en) * 2016-05-20 2019-04-12 聚龙股份有限公司 The recognition methods and system of serial number
CN105957238A (en) * 2016-05-20 2016-09-21 聚龙股份有限公司 Banknote management method and system
CN105957238B (en) * 2016-05-20 2019-02-19 聚龙股份有限公司 A kind of paper currency management method and its system
CN106056751A (en) * 2016-05-20 2016-10-26 聚龙股份有限公司 Prefix number identification method and system
US10930105B2 (en) 2016-05-20 2021-02-23 Julong Co., Ltd. Banknote management method and system
CN106374394A (en) * 2016-09-28 2017-02-01 刘子轩 Pipeline robot based on image recognition technology and control method
CN108734158A (en) * 2017-04-14 2018-11-02 成都唐源电气股份有限公司 A kind of real-time train number identification method and device
CN108734158B (en) * 2017-04-14 2020-05-19 成都唐源电气股份有限公司 Real-time train number identification method and device
CN108090442A (en) * 2017-12-15 2018-05-29 四川大学 A kind of airport scene monitoring method based on convolutional neural networks
CN109409373A (en) * 2018-09-06 2019-03-01 昆明理工大学 A kind of character recognition method based on image procossing
CN109299743A (en) * 2018-10-18 2019-02-01 京东方科技集团股份有限公司 Gesture identification method and device, terminal
CN109850518A (en) * 2018-11-12 2019-06-07 太原理工大学 A kind of real-time mining adhesive tape early warning tearing detection method based on infrared image
CN111110189A (en) * 2019-11-13 2020-05-08 吉林大学 Anti-snoring device and method based on DSP sound and image recognition technology

Also Published As

Publication number Publication date
CN102509091B (en) 2013-12-25

Similar Documents

Publication Publication Date Title
Sun et al. A robust approach for text detection from natural scene images
Saha et al. Unsupervised deep change vector analysis for multiple-change detection in VHR images
Kumar et al. A detailed review of feature extraction in image processing systems
Ye et al. Text detection and recognition in imagery: A survey
Huang et al. An SVM ensemble approach combining spectral, structural, and semantic features for the classification of high-resolution remotely sensed imagery
Selmi et al. Deep learning system for automatic license plate detection and recognition
Zaklouta et al. Real-time traffic sign recognition in three stages
Shahab et al. ICDAR 2011 robust reading competition challenge 2: Reading text in scene images
Mahadevan et al. Saliency-based discriminant tracking
CN102622607B (en) Remote sensing image classification method based on multi-feature fusion
Zhang et al. Airport detection and aircraft recognition based on two-layer saliency model in high spatial resolution remote-sensing images
Chen et al. Vehicle detection in high-resolution aerial images via sparse representation and superpixels
Yu et al. Learning hierarchical features for automated extraction of road markings from 3-D mobile LiDAR point clouds
Eikvil et al. Classification-based vehicle detection in high-resolution satellite images
Chaudhuri et al. Automatic building detection from high-resolution satellite images based on morphology and internal gray variance
CN102708356B (en) Automatic license plate positioning and recognition method based on complex background
Li et al. A novel traffic sign detection method via color segmentation and robust shape matching
Alvarez et al. Road scene segmentation from a single image
KR100912746B1 (en) Method for traffic sign detection
Gerónimo et al. 2D–3D-based on-board pedestrian detection system
Sirmacek et al. Urban-area and building detection using SIFT keypoints and graph theory
Zhou et al. Moving vehicle detection for automatic traffic monitoring
CN101334836B (en) License plate positioning method incorporating color, size and texture characteristic
Gonzalez et al. Text detection and recognition on traffic panels from street-level imagery using visual appearance
CN101142584B (en) Method for facial features detection

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant