CN102509091B - Airplane tail number recognition method - Google Patents

Airplane tail number recognition method

Info

Publication number
CN102509091B
CN102509091B (application CN201110388239A)
Authority
CN
China
Prior art keywords
tail number
airplane tail
image
sigma
airplane
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN 201110388239
Other languages
Chinese (zh)
Other versions
CN102509091A (en)
Inventor
罗喜伶
马秀红
周萍
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beihang University
Original Assignee
Beihang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beihang University filed Critical Beihang University
Priority to CN 201110388239 priority Critical patent/CN102509091B/en
Publication of CN102509091A publication Critical patent/CN102509091A/en
Application granted granted Critical
Publication of CN102509091B publication Critical patent/CN102509091B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses an airplane tail number recognition method comprising the following steps: pre-processing the airplane tail number image with Otsu dynamic-threshold binarization to separate the tail number from the background; segmenting the tail number image into single characters with a connected-domain method and obtaining projective-transformation marker points, which are then used to apply an inverse projective transformation to the tail number image; and recognizing the tail number characters with an optimal-parameter support vector machine classifier. Character features are extracted from the tail number characters with a center-of-gravity method, the support vector machine uses an RBF kernel whose optimal parameters are found by a two-stage grid search, and a "one-versus-one" multi-class classification scheme based on the optimal-parameter support vector machine yields the airplane tail number. The method achieves high recognition accuracy and is applicable to airport scenes with diverse lighting conditions.

Description

An airplane tail number recognition method
Technical field
The invention belongs to the technical field of tail number identification, and in particular relates to an airplane tail number recognition method.
Background art
Airplane tail number character recognition, the core of an airplane tail number recognition system, is essentially an important branch of pattern recognition: character recognition. The concept of optical character recognition (OCR) was first proposed in 1929 by the German scientist Tausheck. With the appearance and development of computers, OCR has been studied widely around the world, and after nearly a century of development it has become one of the most active research topics in pattern recognition. It combines knowledge from digital image processing, computer graphics and artificial intelligence, and is widely applied in these related fields. OCR methods are usually divided into three classes: statistical-feature character recognition, structural character recognition, and recognition based on artificial neural networks. Statistical-feature methods generally choose, as the feature vector, features that are common to all characters of a class, relatively stable, and discriminative. Commonly used statistical features include the position of the character in the two-dimensional plane, histograms of its horizontal or vertical projections, moment features, and features obtained after frequency-domain or other transforms. Replacing the two-dimensional image by one-dimensional projections reduces the amount of computation and removes the effect of shifts along the projection direction, but it cannot cope with rotational deformation of the characters.
Structural character recognition maps a character into a structural space composed of primitives. On the basis of the extracted primitives, it uses formal language and automata theory and analyses the character structure with lexical analysis, tree matching, graph matching and knowledge reasoning. J. Park analysed the drawbacks of traditional structural recognition and proposed the idea of active character recognition, in which the structural features to be used are chosen dynamically according to the input image, saving resources and speeding up recognition. Compared with statistical methods, structural recognition is better at distinguishing characters whose fonts vary greatly or whose shapes are similar. However, describing and comparing structural features consumes a large amount of storage and computation, so the algorithms are relatively complex and recognition is slow.
Character recognition based on artificial neural networks tries to recognize characters efficiently by simulating the function and structure of the human brain. After rapid development in recent years, artificial neural networks are widely used in character recognition. In an OCR system the neural network mainly acts as the classifier: its input is the feature vector of a character and its output is the classification result, i.e. the recognition result. Through repeated learning, a neural network can intelligently remove redundant and contradictory information from the feature vector and strengthen the differences between classes. Because a neural network has a distributed, parallel structure, it can accelerate the solution of large-scale problems. Krezyak and Le Cun mainly studied the application of the BP (back-propagation) neural network to character recognition and, to address its slow learning speed and weak generalization ability, proposed a competitive supervised learning strategy built on the BP network.
In the research of airplane tail number recognition methods, the tail number must be located, segmented and recognized accurately. The main difficulties faced are:
1. The captured airport surface images are disturbed by environmental factors, so image quality is hard to guarantee;
2. The tail number may be partially occluded and the tail number image may be deformed;
3. The airport surface monitoring image has a complex background, and a single image may contain several tail numbers;
4. Affected by factors such as lighting, the tail number image may contain considerable noise and the strokes may be blurred, so that tail number characters break apart, adjacent characters stick together, or characters are incomplete.
Summary of the invention
The object of the invention is to overcome the deficiencies of the prior art by proposing a high-accuracy airplane tail number recognition method that works under different viewing angles, improving the recognition performance of the system and meeting the requirements of a practical domestic system. The invention studies airplane tail number recognition, draws on existing pattern recognition and character recognition techniques, improves on them, and applies them to the field of airplane tail number recognition. It thereby provides support for aircraft and vehicle detection and identification at airports for the study and implementation of A-SMGCS systems in China, and theoretical and technical support for the transition of Chinese airport surface monitoring towards A-SMGCS.
To achieve the above object, the technical scheme of the invention is an airplane tail number recognition method comprising the following concrete steps:
Step 1: image capture; take several images from different viewing angles and find the images that contain the airplane tail number;
Step 2: tail number location pre-processing; first identify the tail number region according to its regional characteristics, and find the position of the tail number region in the full airport surface monitoring image;
Step 3: tail number location based on the DCT domain and edge detection; because the tail number region contains a large amount of edge information, a tail number location technique based on DCT-domain features and edge features is proposed;
Step 4: tail number segmentation pre-processing with Otsu dynamic-threshold binarization; apply the segmentation pre-processing based on Otsu dynamic-threshold binarization to the located tail number region to obtain a clearer target region of the tail number;
Step 5: connected-region tail number segmentation; use the connected-region segmentation method to split the tail number region into single-character regions;
Step 6: tail number recognition; recognize the tail number with the proposed recognizer based on an optimal-parameter support vector machine;
Step 7: post-processing of the recognition result.
Wherein, the concrete method of the tail number segmentation pre-processing with Otsu dynamic-threshold binarization described in step 4 is:
The concrete steps of the Otsu dynamic binarization algorithm are as follows. Let the gray-level range of the image be 0 to L-1, let the number of pixels with gray level i be n_i, and let the total number of pixels be
N = n_0 + n_1 + ... + n_{L-1}    (2)
Normalizing the histogram gives
p_i = n_i / N, \quad \sum_{i=1}^{L-1} p_i = 1    (3)
A threshold t divides the gray levels into two classes:
C_1 = \{1, 2, ..., t\}, \quad C_2 = \{t+1, t+2, ..., L-1\}    (4)
From the gray-level histogram, the occurrence probabilities of classes C_1 and C_2 are
w_1 = \sum_{i=1}^{t} p_i, \quad w_2 = \sum_{i=t+1}^{L-1} p_i    (5)
The class means of C_1 and C_2 are
M_1 = \sum_{i=1}^{t} i \cdot p_i / w_1, \quad M_2 = \sum_{i=t+1}^{L-1} i \cdot p_i / w_2    (6)
The class variances of C_1 and C_2 are
\sigma_1^2 = \sum_{i=1}^{t} (i - M_1)^2 \cdot p_i / w_1, \quad \sigma_2^2 = \sum_{i=t+1}^{L-1} (i - M_2)^2 \cdot p_i / w_2    (7)
The within-class variance \sigma_A^2 of C_1 and C_2 is defined as
\sigma_A^2 = w_1 \sigma_1^2 + w_2 \sigma_2^2    (8)
and the between-class variance \sigma_B^2 of C_1 and C_2 as
\sigma_B^2 = w_1 w_2 (M_1 - M_2)^2    (9)
The criterion function of the Otsu dynamic binarization algorithm is
\eta = \sigma_B^2 / \sigma_A^2    (10)
Traverse t from the minimum gray value to the maximum gray value; the value T of t at which \eta reaches its maximum is the optimal Otsu segmentation threshold. The Otsu dynamic-threshold binarization method is simple to implement: it only needs to traverse the range of all candidate t and pick the suitable threshold T. The method separates the tail number target from the background well and gives a good binarization result.
Wherein, the concrete character segmentation steps of splitting the tail number region into single-character regions with the connected-region segmentation method described in step 5 are as follows:
Step A1: scan the image and find a pixel that does not yet belong to any region; this pixel is the starting point for a new region growing pass;
Step A2: compare the gray value of this pixel with the gray values of the pixels in its 4-neighborhood (up, down, left, right; an 8-neighborhood could also be used, but the 4-neighborhood is adopted in this tail number recognition system to reduce the effect of touching characters) that do not yet belong to any region, and merge every pixel that satisfies the judgment criterion into the same region;
Step A3: repeat the operation of step A2 for the newly merged pixels;
Step A4: repeat steps A2 and A3 until the region can grow no further;
Step A5: return to step A1 and look for a pixel that can become the starting point of a new region.
Compared with the prior art, the invention has the following advantages:
1. The tail number segmentation pre-processing based on Otsu dynamic-threshold binarization makes the between-class variance of the two classes produced by the threshold large and their within-class variance small, i.e. it maximizes the ratio of between-class to within-class variance and obtains a dynamic optimal threshold; the method can therefore be applied to airport surface environments with diverse lighting conditions.
2. The inverse projective transformation of the tail number image, based on the tail number nationality mark, eliminates the projective distortion introduced by the camera when tail number images are acquired during airport surface monitoring; applying the inverse projective transformation to the tail number effectively improves the recognition accuracy of the tail number recognition system.
Brief description of the drawings
Fig. 1 shows the implementation process of the airplane tail number recognition method of the invention;
Fig. 2 is a schematic diagram of the gray-level transformation used in the tail number location pre-processing; the transformation stretches the gray range of interest (r0-r1) so as to enhance the contrast;
Fig. 3 shows the tail number location result of the location technique based on the DCT domain and edge features; Fig. 3(a) is the gray image converted from the RGB image; Fig. 3(b) divides the gray image into 8 x 8 blocks and applies a two-dimensional DCT to each block to obtain the two-dimensional DCT matrix; Fig. 3(c) applies the Canny operator to the tail number region to obtain its edge map; Fig. 3(d) is the tail number location result obtained with the location technique based on the DCT domain and edge features;
Fig. 4 shows part of the test results of the tail number segmentation pre-processing with Otsu dynamic-threshold binarization;
Fig. 5 shows the experimental results of tail number character segmentation with the connected-region method adopted by the invention;
Fig. 6 shows the process of selecting control points for the inverse projective transformation of the tail number image;
Fig. 7 shows the experimental results of the inverse projective transformation of the tail number image;
Fig. 8 shows the optimal separating hyperplane of a support vector machine for a two-dimensional, two-class, linearly separable sample set;
Fig. 9 shows the effect of extracting tail number character features with the grid gravity-center method;
Fig. 10 illustrates the multi-class classifier methods.
Detailed description of the embodiments
For a better understanding of the technical scheme of the invention, the specific embodiments of the invention are further described below with reference to the accompanying drawings.
Fig. 1 shows the implementation process of the airplane tail number recognition method of the invention. The concrete implementation details of each step are as follows:
Step 1: image capture. Take several images from different viewing angles and find the images that contain the airplane tail number.
In traditional traffic-violation capture, a dedicated camera takes a picture under a specific trigger, so the captured license plate generally appears in a fixed area of the image. Because the camera focal length and viewing angle are fixed, the size and angle of the plate are relatively fixed as well, and so are the number of characters and the font. This recognition approach, however, is not suitable for recognizing the tail numbers of aircraft on the airport surface, for two main reasons. First, tail number segmentation is difficult: the aircraft stand and the relative angle between the fuselage and the camera vary, and the fuselage is a smoothly curved surface, so the position and size of the tail number in the image are not fixed and the characters suffer perspective distortion; moreover, the tail number has no frame separating it from the rest of the image, which makes segmentation harder. Second, the characters are diverse: the font (upright or italic), the color (dark characters on a light background or light characters on a dark background) and the length (number of characters) differ from aircraft to aircraft, and the intrinsic similarity between characters also affects the recognition accuracy to a large extent.
For these two difficulties, we need to take several images from different viewing angles and find the images that contain the airplane tail number.
Step 2: tail number location pre-processing.
To pre-process the airport surface monitoring image for tail number location, the image is first converted to gray scale. In the RGB model, if R = G = B the color is a gray; the common value of R, G and B is called the gray value and is denoted g. Converting a color image to gray scale is called gray-scale conversion. Because a color image takes much more storage, it is usually converted to a gray image before recognition to speed up the subsequent processing. R, G and B range from 0 to 255, so there are 256 gray levels.
The main purpose of the gray-level transformation is to improve the contrast of the image, i.e. to enhance the contrast between its parts. If the lighting was too dark or the image under-exposed, the whole image is dark (for example, the gray range is 0 to 63); if the lighting was too bright or the image over-exposed, the image is bright (for example, the gray range is 200 to 255). In both cases the contrast is low because the gray levels are crowded together instead of being spread out. The gray-level transformation can be used to enhance the contrast.
A gray-level transformation maps one gray interval onto another. A piecewise linear gray transformation can be defined as
s = \begin{cases} \dfrac{s_0}{r_0}\, r, & r \le r_0 \\ \dfrac{s_1 - s_0}{r_1 - r_0}(r - r_0) + s_0, & r_0 < r \le r_1 \\ \dfrac{255 - s_1}{255 - r_1}(r - r_1) + s_1, & r > r_1 \end{cases}    (1)
Its principle is shown in Fig. 2, where the abscissa r is the gray level before the transformation, the ordinate s is the corresponding gray level after the transformation, the thick broken line shows the relationship between output and input gray levels, and the thin dotted straight line shows the identity transformation. By comparison it can be seen that such a transformation stretches the gray range of interest (r_0-r_1), thereby enhancing the contrast.
Step 3: tail number location based on the DCT domain and edge detection.
First the airport surface monitoring RGB image is scaled by pixel interpolation to 240 x 320 pixels and converted to YUV space, and the luminance component is extracted as the gray image, as shown in Fig. 3(a). The gray image is divided into 8 x 8 blocks, a two-dimensional DCT is applied to each block, a suitable DCT-domain feature is chosen from the resulting two-dimensional DCT matrix, and the feature value is computed for every block in the image, as shown in Fig. 3(b). A suitable classification method then divides the blocks into tail number blocks and background blocks, giving a coarse location in the image. Because the tail number region is rich in edge information, the Canny operator is applied to the tail number region to obtain its edge map, as shown in Fig. 3(c). The edge map is then scanned horizontally and vertically, the edge density of each block in the tail number region is computed, and a threshold is used to exclude wrongly classified blocks; optimizing the tail number region in this way yields a more accurate text region in the image. The location result of the tail number location technique based on the DCT domain and edge features is shown in Fig. 3(d).
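The following Python/OpenCV sketch illustrates this coarse-to-fine localization under stated assumptions: a plain grayscale conversion stands in for extracting the YUV luminance, the per-block feature is taken to be the sum of absolute AC coefficients, and both thresholds are placeholder values, since the patent does not fix the DCT-domain feature or its threshold.

```python
import cv2
import numpy as np

def locate_tail_number(bgr, block=8, dct_thresh=60.0, edge_thresh=0.12):
    """Coarse-to-fine localization sketch: per-block DCT AC energy, then Canny
    edge density. The AC-energy feature and both thresholds are illustrative."""
    img = cv2.resize(bgr, (320, 240))                    # 240 x 320 pixels
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    h, w = gray.shape
    grayf = gray.astype(np.float32)
    mask = np.zeros((h // block, w // block), np.uint8)
    for by in range(h // block):
        for bx in range(w // block):
            patch = grayf[by*block:(by+1)*block, bx*block:(bx+1)*block]
            coeffs = cv2.dct(patch)                      # 2-D DCT of the 8x8 block
            ac_energy = np.abs(coeffs).sum() - abs(coeffs[0, 0])
            if ac_energy > dct_thresh:                   # candidate text block
                mask[by, bx] = 1
    edges = cv2.Canny(gray, 100, 200)                    # edge map of the region
    for by, bx in zip(*np.nonzero(mask)):                # refine by edge density
        e = edges[by*block:(by+1)*block, bx*block:(bx+1)*block]
        if np.count_nonzero(e) / float(block * block) < edge_thresh:
            mask[by, bx] = 0                             # exclude misjudged blocks
    return mask  # 1 = retained tail-number block, 0 = background
```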
Step 4: tail number segmentation pre-processing with Otsu dynamic-threshold binarization.
A global dynamic binarization algorithm seeks one optimal threshold from the pixel distribution of the whole gray image; the classical algorithm is the Otsu algorithm. Its basic idea is: take a threshold t and divide the image pixels by gray value into those greater than or equal to t and those less than t; then compute the within-class variance \sigma_A^2 and the between-class variance \sigma_B^2 of the two classes, and find the threshold t that maximizes the ratio \eta of the two variances. This threshold is the optimal threshold for binarizing the image. The method gives a satisfactory result whether or not the image histogram has a clear bimodal shape, and it is therefore one of the better methods for choosing a threshold automatically.
The concrete steps of the Otsu dynamic binarization algorithm are as follows. Let the gray-level range of the image be 0 to L-1, let the number of pixels with gray level i be n_i, and let the total number of pixels be
N = n_0 + n_1 + ... + n_{L-1}    (2)
Normalizing the histogram gives
p_i = n_i / N, \quad \sum_{i=1}^{L-1} p_i = 1    (3)
A threshold t divides the gray levels into two classes:
C_1 = \{1, 2, ..., t\}, \quad C_2 = \{t+1, t+2, ..., L-1\}    (4)
From the gray-level histogram, the occurrence probabilities of classes C_1 and C_2 are
w_1 = \sum_{i=1}^{t} p_i, \quad w_2 = \sum_{i=t+1}^{L-1} p_i    (5)
The class means of C_1 and C_2 are
M_1 = \sum_{i=1}^{t} i \cdot p_i / w_1, \quad M_2 = \sum_{i=t+1}^{L-1} i \cdot p_i / w_2    (6)
The class variances of C_1 and C_2 are
\sigma_1^2 = \sum_{i=1}^{t} (i - M_1)^2 \cdot p_i / w_1, \quad \sigma_2^2 = \sum_{i=t+1}^{L-1} (i - M_2)^2 \cdot p_i / w_2    (7)
The within-class variance \sigma_A^2 of C_1 and C_2 is defined as
\sigma_A^2 = w_1 \sigma_1^2 + w_2 \sigma_2^2    (8)
and the between-class variance \sigma_B^2 of C_1 and C_2 as
\sigma_B^2 = w_1 w_2 (M_1 - M_2)^2    (9)
The criterion function of the Otsu dynamic binarization algorithm is
\eta = \sigma_B^2 / \sigma_A^2    (10)
Traverse t from the minimum gray value to the maximum gray value; the value T of t at which \eta reaches its maximum is the optimal Otsu segmentation threshold. The ratio of the between-class variance to the within-class variance of the two classes produced by the threshold reflects how the two pattern classes are distributed in pattern space: the larger the between-class variance and the smaller the within-class variance, the larger the distance between the classes, the more similar the pixels within each class, and the better the threshold classification.
The Otsu dynamic-threshold binarization method is simple to implement: it only needs to traverse the range of all candidate t and pick the suitable threshold T. The Otsu dynamic binarization algorithm was applied in the actual tail number recognition system of this project, and the experiments show that it separates the tail number target from the background well and gives a good binarization result. Part of the test results of the tail number segmentation pre-processing with Otsu dynamic-threshold binarization is shown in Fig. 4.
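A compact Python sketch of equations (2)-(10) follows; unlike the patent's summation index that starts at gray level 1, the sketch also counts gray level 0 in C_1, which is the usual practical choice.

```python
import numpy as np

def otsu_threshold(gray, L=256):
    """Otsu dynamic threshold following equations (2)-(10): pick the t that
    maximizes eta = sigma_B^2 / sigma_A^2 (between-class over within-class)."""
    hist = np.bincount(gray.ravel(), minlength=L).astype(np.float64)
    p = hist / hist.sum()                               # equation (3)
    i = np.arange(L, dtype=np.float64)
    best_eta, best_t = -1.0, 0
    for t in range(1, L - 1):                           # traverse candidate thresholds
        w1, w2 = p[:t + 1].sum(), p[t + 1:].sum()       # equation (5)
        if w1 == 0 or w2 == 0:
            continue
        m1 = (i[:t + 1] * p[:t + 1]).sum() / w1         # equation (6)
        m2 = (i[t + 1:] * p[t + 1:]).sum() / w2
        v1 = (((i[:t + 1] - m1) ** 2) * p[:t + 1]).sum() / w1   # equation (7)
        v2 = (((i[t + 1:] - m2) ** 2) * p[t + 1:]).sum() / w2
        sigma_A = w1 * v1 + w2 * v2                     # within-class variance, eq. (8)
        sigma_B = w1 * w2 * (m1 - m2) ** 2              # between-class variance, eq. (9)
        if sigma_A > 0 and sigma_B / sigma_A > best_eta:
            best_eta, best_t = sigma_B / sigma_A, t     # criterion eta, eq. (10)
    return best_t

# binary = (gray_image > otsu_threshold(gray_image)).astype(np.uint8) * 255
```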
Step 5: connected-region tail number segmentation.
Ideally each character of the tail number forms an independent connected region. The connected-region segmentation method divides the image into small regions with identical features (the smallest unit being a pixel), studies the features of adjacent small regions, and merges regions with similar features according to a judgment criterion; once the minimum bounding rectangle of each connected region is obtained, the position of each tail number character is known. The concrete character segmentation steps are as follows:
Step A1: scan the image and find a pixel that does not yet belong to any region; this pixel is the starting point for a new region growing pass;
Step A2: compare the gray value of this pixel with the gray values of the pixels in its 4-neighborhood (up, down, left, right; an 8-neighborhood could also be used, but the 4-neighborhood is adopted in this tail number recognition system to reduce the effect of touching characters) that do not yet belong to any region, and merge every pixel that satisfies the judgment criterion into the same region;
Step A3: repeat the operation of step A2 for the newly merged pixels;
Step A4: repeat steps A2 and A3 until the region can grow no further;
Step A5: return to step A1 and look for a pixel that can become the starting point of a new region.
In the invention, the image on which region growing is performed is a binary image that has already been thresholded, so it contains only the two pixel values 0 and 255, and both the choice of the starting point and the growth criterion are simple. In the experiments the image is scanned from the upper-left corner, left to right and top to bottom, until the first black pixel (gray value 0) is found; this pixel is the first starting point for region growing. The growth direction is the 4-neighborhood: the next qualifying pixel is searched for in the up, down, left and right directions, and the criterion is that the difference between a neighbor and the center pixel is 0, i.e. black pixels are merged into the same region. A queue is defined in the program to store the pixels of a region. If a pixel were pushed into the queue every time it is encountered, a black pixel could enter the queue several times and the queue would contain a lot of redundant information. Therefore a "mark filling" method is adopted: a pixel pushed into the queue for the first time is marked with a value different from the original 0 and 255, and marked pixels are skipped in later region growing. This reduces the queue space and improves the processing speed. The connected-domain segmentation algorithm of this system for the thresholded binary image is thus:
Step B1: scan the binary image from the upper-left corner, find the first black pixel, take it as the starting point of the first connected region, mark it, push its coordinates into the queue std::queue regionStack, and increment the counter of the current region size, nRegionSize++;
Step B2: take a pixel out of the queue, search its 4-neighborhood, mark every unmarked black pixel found, push it onto the tail of the queue, merge it into the current connected region, and increment the counter of the current region size, nRegionSize++;
Step B3: repeat the operation of step B2 for the newly merged pixels in the queue;
Step B4: repeat steps B2 and B3 until the region can grow no further, i.e. until the queue is empty.
The above operations yield one connected domain of the tail number image. Repeating the above steps yields all connected regions of the tail number image. According to the ICAO standard definition of the tail number format, a normal tail number should be divided into 6 connected regions. The connected regions are sorted by area, regions ranked after the sixth are discarded as noise, and the first 6 regions are ordered from left to right: the nationality mark, the "-" separator and the four digits of the registration mark of the tail number. Fig. 5 shows the experimental results of tail number character segmentation with the connected-region method.
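The queue-based region growing of steps B1-B4 can be sketched in Python as follows; the patent's pseudocode uses std::queue, a deque plays the same role here, and the bounding-box bookkeeping and left-to-right ordering are added for convenience.

```python
from collections import deque
import numpy as np

def segment_characters(binary, max_regions=6):
    """Region growing on a thresholded image (steps B1-B4). Characters are
    assumed to be black pixels (value 0) on a white background."""
    h, w = binary.shape
    visited = np.zeros((h, w), bool)                  # the "mark filling" flags
    regions = []                                      # (size, x_min, y_min, x_max, y_max)
    for y0 in range(h):
        for x0 in range(w):
            if binary[y0, x0] != 0 or visited[y0, x0]:
                continue
            queue = deque([(y0, x0)])                 # B1: new seed pushed into the queue
            visited[y0, x0] = True
            size, xs, ys = 0, [], []
            while queue:                              # B2-B4: grow until the queue empties
                y, x = queue.popleft()
                size += 1
                xs.append(x); ys.append(y)
                for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):   # 4-neighborhood
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and not visited[ny, nx] \
                            and binary[ny, nx] == 0:
                        visited[ny, nx] = True
                        queue.append((ny, nx))
            regions.append((size, min(xs), min(ys), max(xs), max(ys)))
    # Keep the largest max_regions components (noise removal), order left to right
    regions = sorted(regions, reverse=True)[:max_regions]
    return sorted(regions, key=lambda r: r[1])
```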
Step 6: airplane tail number recognition.
Step C1: recognition pre-processing by inverse projective transformation.
To perform the inverse projective transformation of the tail number image, the coordinates of 4 "beacon" control points in the tail number image and their corresponding coordinates in the original image are needed; from them the projective transformation matrix T can be computed. The tail number location and segmentation performed on the airport surface monitoring image in the previous parts yield the single characters of the tail number, including the nationality mark and the registration mark. The invention obtains the "beacon" control points of the projective transformation from the nationality-mark letter in the tail number image and applies the inverse projective transformation to the tail number picture.
First take the nationality-mark part obtained by tail number location and segmentation (the letter "B" after the projective distortion; at present the proposed inverse projective transformation method only handles tail numbers of aircraft registered in China, whose nationality mark is "B"). Take the tangent points of lines with slopes k_1 = 1 and k_2 = -1 on the nationality-mark part as two control points of the projective transformation, and analyse the slope k of the line connecting these two control points to decide whether an inverse projective transformation is needed; in the invention the inverse transformation is applied when the angle between this line and the y axis is greater than 10 degrees. Then take the tangent points of lines with slope k on the upper and lower parts of the right side of the nationality mark as the other two control points. The control point selection process is shown in Fig. 6. Taking 4 control points on the nationality-mark part with the above algorithm, the projective transformation matrix T of the projective transformation equation can be obtained. Once T has been computed, the inverse projective transformation can be applied to the whole tail number picture. In practice, the pre-processing effect of the inverse projective transformation with the above algorithm is good: it effectively removes the projective distortion of the images acquired by the cameras in airport surface monitoring. The concrete effect is shown in Fig. 7.
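Once the four control points and their rectified target positions are known, computing T and applying the inverse projective transformation can be sketched with OpenCV as below; the coordinates in the usage comment are purely illustrative, and the tangent-point selection on the letter "B" is not reproduced here.

```python
import cv2
import numpy as np

def rectify_tail_number(tail_img, src_pts, dst_pts):
    """Inverse projective transformation of the tail number image.
    src_pts are the four 'beacon' control points found on the nationality-mark
    character; dst_pts are their desired positions after rectification.
    Both point sets are supplied by the caller."""
    T = cv2.getPerspectiveTransform(np.float32(src_pts), np.float32(dst_pts))
    h, w = tail_img.shape[:2]
    return cv2.warpPerspective(tail_img, T, (w, h))

# Purely illustrative coordinates:
# src = [(12, 8), (60, 15), (58, 70), (10, 78)]
# dst = [(10, 10), (60, 10), (60, 75), (10, 75)]
# rectified = rectify_tail_number(tail_image, src, dst)
```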
Step C2: tail number character feature extraction based on support vector machines.
The support vector machine classification method is based on the structural-risk-minimization principle of statistical learning theory, a statistical theory specially developed for small samples. It studies statistical pattern recognition in the finite-sample situation, establishes a good theoretical framework for machine learning problems in general, and has produced a new pattern recognition method, the support vector machine (SVM). This is the youngest part of statistical learning theory: its main content was essentially completed between 1992 and 1995 and it is still developing. It can be said that the growing attention paid to statistical learning theory since the 1990s is largely due to the support vector machine, the general learning method it produced.
The SVM method starts from the optimal separating hyperplane (optimal hyperplane) in the linearly separable case. Consider the two-dimensional, two-class, linearly separable sample set shown in Fig. 8, where the points marked "+" and "*" are the training samples of the two classes, H is a separating line that classifies the two classes without error, and H_1 and H_2 are the lines parallel to H that pass through the samples of each class closest to H; the distance between H_1 and H_2 is called the classification margin of the two classes. The optimal separating line must not only separate the two classes without error but also maximize the margin. Following the discussion of the previous chapter, error-free separation minimizes the empirical risk (to 0), while maximizing the margin minimizes the confidence interval in the generalization bound and therefore minimizes the true risk. Generalizing this model to higher-dimensional spaces, the optimal separating line becomes the optimal separating hyperplane.
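For reference, the textbook optimization problem behind the optimal separating hyperplane of Fig. 8 (standard SVM theory rather than anything specific to this patent) can be written as
\min_{\mathbf{w},\, b} \ \frac{1}{2}\lVert\mathbf{w}\rVert^{2} \quad \text{subject to} \quad y_{i}\,(\mathbf{w}\cdot\mathbf{x}_{i}+b) \ge 1, \quad i = 1, \dots, n,
where the margin between H_1 and H_2 equals 2/\lVert\mathbf{w}\rVert, so maximizing the margin is equivalent to minimizing \lVert\mathbf{w}\rVert^{2}.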
The tail number character feature extraction based on support vector machines is as follows.
If character images were used directly as the classifier input, the amount of computation would be large. For example, if a single tail number character image is 16 x 16 pixels and the gray value of every pixel is used as a feature, each input is a feature vector of 256 dimensions. Such a large feature vector is disadvantageous both for the computational complexity and for the performance of the classifier.
The starting point of character feature extraction is to find feature vectors that are effective for recognition: feature values of different samples of the same class should be very close, while feature values of samples from different classes should differ greatly. For character recognition, extracting effective character features is therefore the first task. Character features can be divided into two large classes, structural features and statistical features.
Structural feature extraction focuses on determining the structural information expressed with primitives; the structural features currently in use are mainly obtained from the skeleton, the contour and the strokes. The skeleton is an abstract perception of the character; skeleton-based structural features include feature points such as end points, crossing points and turning points. Skeleton-based feature extraction depends strongly on the quality of image thinning, and existing thinning algorithms more or less change the topology (Y-shaped forks, burrs, broken strokes, etc.), which demands greater flexibility in the rules used by the subsequent recognition. The contour also reflects the structure of a character image; a series of structural features can be obtained by finding the farthest, nearest, maximum and minimum contour points within a certain range. Compared with the skeleton, the contour gives more accurate positions and saves the thinning computation, but it is vulnerable to stroke width and broken strokes, so it suits environments with good image quality and fixed writing.
Statistical features extract the information most relevant to classification from the raw data, minimizing the within-class gap and maximizing the between-class gap; the features should stay as invariant as possible to deformations of the same character class. Statistical features can be divided into global and local features. Global features transform the whole character image and use the transform coefficients as features; they mainly include the KL (Karhunen-Loeve) transform, the Fourier transform, the Hadamard transform, the Hough transform and moment features. Local features transform windows of specific size at specific positions of the image and mainly include local gray features, projection features and directional line-element features. The local gray feature, also called the coarse grid feature, divides the normalized image into a fixed or flexible grid and takes the average gray value or the number of target pixels in each cell, giving a feature vector whose dimension is the number of cells. The projection feature takes the projections of the normalized image in the X and Y directions, giving two N-dimensional feature vectors; it is simple to compute and discriminates well in coarse classification. The directional line-element feature divides the character into a grid and classifies, for each point in each cell, the adjacent black points by direction.
Structural and statistical feature extraction each have their advantages and disadvantages. For the statistical methods, once a feature is chosen the extraction algorithm is simple and easy to train, and a relatively high recognition rate can be obtained on a given training set; the main drawback is that choosing the features is difficult. One main advantage of structural methods is that they describe the structure of the character, decomposing the pattern into sub-patterns such as strokes, stroke segments and radicals, and can effectively use geometric and structural knowledge during recognition, so the recognition result is more reliable; but the amount of computation is large, and line segments of uncertain attribution are difficult to characterize or easy to encode wrongly.
Experience shows that no single kind of feature can represent arbitrary patterns ideally: the most important features are often not easy to find in practical problems, or cannot be measured because of practical constraints. This makes feature selection and extraction complicated, and one of the most difficult tasks in building a pattern recognition system. How to find a feature extraction method that combines the advantages of the two approaches is therefore a question worth further study.
The invention finally chooses the grid gravity-center method to extract the features of the character image. The steps of gravity-center feature extraction are as follows:
Step (1): compute the center of gravity O_0 of the image;
Step (2): divide the image into four parts at O_0 and compute the center of gravity of each part, obtaining O_1, O_2, O_3, O_4;
Step (3): divide each part again into four sub-parts at O_1, O_2, O_3, O_4 respectively, giving 16 sub-parts in total, and take the center of gravity O_5, O_6, ..., O_20 of each sub-part.
Two gravity-center features of dimension 24 and 48 are adopted: the 24-dimensional feature takes the normalized X and Y coordinates of O_1, O_2, O_3, O_4 together with the pixel counts of the 16 parts of the image as the tail number feature; the 48-dimensional feature takes the X and Y coordinates of O_5, O_6, O_7, ..., O_20 together with the pixel counts of the 16 parts as the tail number feature.
The grid gravity-center feature reflects the distribution of the character in the image well, so it distinguishes characters of different classes well and is little affected by noise; because the feature dimension is small, the space and time complexity are both small, making it a good feature extraction method for character images. Suppose a gray tail number image is of size M x N and f(x, y) is the gray value of the pixel in row (x-1) and column (y-1). The center of gravity of the image is computed as
\bar{x} = \sum_{x=0}^{M-1} \sum_{y=0}^{N-1} x \cdot f(x, y) \Big/ \sum_{x=0}^{M-1} \sum_{y=0}^{N-1} f(x, y)    (11)
\bar{y} = \sum_{x=0}^{M-1} \sum_{y=0}^{N-1} y \cdot f(x, y) \Big/ \sum_{x=0}^{M-1} \sum_{y=0}^{N-1} f(x, y)    (12)
The effect of gravity-center feature extraction is shown in Fig. 9.
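A Python sketch of the 24-dimensional grid gravity-center feature follows; the handling of empty cells and the normalization of the quadrant centroids by the image size are assumptions made for the sketch, since the patent does not spell them out.

```python
import numpy as np

def centroid(img):
    """Center of gravity of a grayscale patch, equations (11)-(12)."""
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    total = img.sum()
    if total == 0:                                   # empty cell fallback (assumption)
        return img.shape[1] / 2.0, img.shape[0] / 2.0
    return (xs * img).sum() / total, (ys * img).sum() / total

def grid_gravity_features(char_img):
    """24-dimensional feature: normalized centroids O_1..O_4 of the quadrants
    split at the global centroid O_0, plus the pixel counts of the 16 sub-cells
    obtained by splitting each quadrant again at its own centroid."""
    img = char_img.astype(np.float64)
    h, w = img.shape
    x0, y0 = centroid(img)                           # O_0
    cx, cy = int(round(x0)), int(round(y0))
    quads = [img[:cy, :cx], img[:cy, cx:], img[cy:, :cx], img[cy:, cx:]]
    feats, counts = [], []
    for q in quads:
        qx, qy = centroid(q)                         # O_1 .. O_4 (quadrant-local coords)
        feats += [qx / max(w, 1), qy / max(h, 1)]    # normalized coordinates
        sx, sy = int(round(qx)), int(round(qy))
        cells = [q[:sy, :sx], q[:sy, sx:], q[sy:, :sx], q[sy:, sx:]]
        counts += [float(np.count_nonzero(c)) for c in cells]   # 16 pixel counts
    return np.array(feats + counts)                  # 8 + 16 = 24 dimensions
```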
Step C3: multi-classifier recognition.
The support vector machine is by nature a two-class discriminant method, but practical applications often require multi-class classification, which raises the question of converting multi-class problems into two-class problems. Multi-class SVM algorithms mainly follow two ideas:
1. Decompose the multi-class classification problem into several two-class classification problems in some way. The training samples must be re-distributed so that they meet the needs of the new two-class problems, and, depending on the algorithm, different strategies are used to decide the class of a test sample.
2. Merge the solution of the several separating hyperplanes into one optimization problem. This generalizes the two-class problem and solves the multi-class problem directly by solving one large quadratic program. Although this idea is conceptually simpler than the first, its algorithmic complexity grows greatly, more variables are needed, the training time is longer, and the generalization ability is not better than the first approach, so it is not suitable for large data samples.
Based on the first idea, several commonly used multi-class classification methods are introduced below.
1. One-versus-rest. The one-versus-rest method (OVR) is the earliest and still one of the most widely used. It constructs n binary classifiers (for n classes), where the i-th classifier separates class i from all the remaining classes: during training, the i-th classifier takes the samples of class i in the training set as the positive class and all other samples as the negative class. During classification, the input signal passes through the n classifiers and produces n outputs f_i(x) = sgn(g_i(x)). If exactly one output is +1, its corresponding class is the class of the input. If more than one output is +1 (several classes claim the sample), or no output is +1 (no class claims it), the values g_i(x) are compared and the class corresponding to the largest value is taken as the class of the input. The method is simple to implement and only needs n two-class support vector machines, so the number of classification functions is small (n) and recognition is fast. Its drawback is also obvious: every classifier is trained on all the samples, so n quadratic programs over the whole training set must be solved, and since the training speed of a support vector machine slows down sharply with the number of training samples, the training time of this method is long.
2. One-versus-one. The one-versus-one method (OVO) is also called pairwise classification. All pairwise combinations of the k classes in the training set T are formed, P = k(k-1)/2 in total; the samples of each pair of classes form a two-class training set T(i, j), and solving the two-class support vector machine for each pair gives P discriminant functions f_(i,j)(x) = sgn(g_(i,j)(x)). During classification the input X is fed to the P discriminant functions: if f_(i,j)(x) = 1, X is judged to belong to class i and class i gets one vote, otherwise class j gets one vote. The votes of the k classes over the P discriminant functions are counted, and the class with the most votes is the final decision. For a k-class problem this method needs k(k-1)/2 binary classifiers, many more than the one-versus-rest method; however, each classification problem is much smaller and simpler to learn. If k is very large, the number of classifiers becomes very large and the method becomes much slower.
The two multi-class classifier methods are illustrated in Fig. 10.
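The classifier training described in the abstract (RBF kernel, two-stage grid search for the optimal parameters, one-versus-one multi-class classification) can be sketched with scikit-learn as below; the grid ranges and the 5-fold cross-validation are illustrative choices, and scikit-learn's SVC already implements the one-versus-one scheme internally for multi-class problems.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

def train_tail_number_classifier(X, y):
    """RBF-kernel SVM with a two-stage (coarse, then refined) grid search over
    C and gamma. Grid ranges and cv=5 are illustrative, not from the patent."""
    coarse = {"C": 2.0 ** np.arange(-5, 16, 2),
              "gamma": 2.0 ** np.arange(-15, 4, 2)}
    search = GridSearchCV(SVC(kernel="rbf"), coarse, cv=5)
    search.fit(X, y)
    c0, g0 = search.best_params_["C"], search.best_params_["gamma"]
    # Second-stage grid, refined around the coarse optimum
    fine = {"C": c0 * 2.0 ** np.linspace(-1, 1, 5),
            "gamma": g0 * 2.0 ** np.linspace(-1, 1, 5)}
    search = GridSearchCV(SVC(kernel="rbf"), fine, cv=5)
    search.fit(X, y)
    return search.best_estimator_

# clf = train_tail_number_classifier(features, labels)  # e.g. grid gravity features
# predicted_chars = clf.predict(test_features)
```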
Step 7: post-processing of the recognition result.
Finally, a demonstration program of the tail number recognition system was designed to demonstrate the proposed tail number recognition algorithm; the modules described in the invention, such as tail number segmentation and tail number recognition, are all embedded in this program. Because the video monitoring method can not only detect and measure the positions of aircraft and vehicles within the airport area accurately but also automatically recognize the tail numbers of aircraft on taxiways and aprons, the recognition results are merged into the final display interface and labels are added to the monitoring results automatically.

Claims (1)

1. An airplane tail number recognition method, characterized in that the method comprises the following concrete steps:
Step 1: image capture; take several images from different viewing angles and find the images that contain the airplane tail number;
Step 2: tail number location pre-processing; first identify the tail number region according to its regional characteristics, and find the position of the tail number region in the full airport surface monitoring image;
Step 3: tail number location based on the DCT domain and edge detection; because the tail number region contains a large amount of edge information, a tail number location technique based on DCT-domain features and edge features is proposed;
Step 4: tail number segmentation pre-processing with Otsu dynamic-threshold binarization; apply the segmentation pre-processing based on Otsu dynamic-threshold binarization to the located tail number region to obtain a clearer target region of the tail number;
The concrete method of the tail number segmentation pre-processing with Otsu dynamic-threshold binarization is:
The concrete steps of the Otsu dynamic binarization algorithm are as follows. Let the gray-level range of the image be 0 to L-1, let the number of pixels with gray level i be n_i, and let the total number of pixels be
N = n_0 + n_1 + ... + n_{L-1}    (2)
Normalizing the histogram gives
p_i = n_i / N, \quad \sum_{i=1}^{L-1} p_i = 1    (3)
A threshold t divides the gray levels into two classes:
C_1 = \{1, 2, ..., t\}, \quad C_2 = \{t+1, t+2, ..., L-1\}    (4)
From the gray-level histogram, the occurrence probabilities of classes C_1 and C_2 are
w_1 = \sum_{i=1}^{t} p_i, \quad w_2 = \sum_{i=t+1}^{L-1} p_i    (5)
The class means of C_1 and C_2 are
M_1 = \sum_{i=1}^{t} i \cdot p_i / w_1, \quad M_2 = \sum_{i=t+1}^{L-1} i \cdot p_i / w_2    (6)
The class variances of C_1 and C_2 are
\sigma_1^2 = \sum_{i=1}^{t} (i - M_1)^2 \cdot p_i / w_1, \quad \sigma_2^2 = \sum_{i=t+1}^{L-1} (i - M_2)^2 \cdot p_i / w_2    (7)
The within-class variance \sigma_A^2 of C_1 and C_2 is defined as
\sigma_A^2 = w_1 \sigma_1^2 + w_2 \sigma_2^2    (8)
and the between-class variance \sigma_B^2 of C_1 and C_2 as
\sigma_B^2 = w_1 w_2 (M_1 - M_2)^2    (9)
The criterion function of the Otsu dynamic binarization algorithm is
\eta = \sigma_B^2 / \sigma_A^2    (10)
Traverse t from the minimum gray value to the maximum gray value; the value T of t at which \eta reaches its maximum is the optimal Otsu segmentation threshold; the Otsu dynamic-threshold binarization method is simple to implement, since it only needs to traverse the range of all candidate t and pick the suitable threshold T, and it separates the tail number target from the background well and gives a good binarization result;
Step 5: connected-region tail number segmentation; use the connected-region segmentation method to split the tail number region into single-character regions;
The concrete character segmentation steps of splitting the tail number region into single-character regions with the connected-region segmentation method are as follows:
Step A1: scan the image and find a pixel that does not yet belong to any region; this pixel is the starting point for a new region growing pass;
Step A2: compare the gray value of this pixel with the gray values of the pixels in its 4-neighborhood that do not yet belong to any region, and merge every pixel that satisfies the judgment criterion into the same region;
Step A3: repeat the operation of step A2 for the newly merged pixels;
Step A4: repeat steps A2 and A3 until the region can grow no further;
Step A5: return to step A1 and look for a pixel that can become the starting point of a new region;
Wherein the image on which region growing is performed is a binary image that has already been thresholded, and the connected-domain segmentation algorithm for the thresholded binary image is:
Step B1: scan the binary image from the upper-left corner, find the first black pixel, take it as the starting point of the first connected region, mark it, push its coordinates into the queue std::queue regionStack, and increment the counter of the current region size, nRegionSize++;
Step B2: take a pixel out of the queue, search its 4-neighborhood, mark every unmarked black pixel found, push it onto the tail of the queue, merge it into the current connected region, and increment the counter of the current region size, nRegionSize++;
Step B3: repeat the operation of step B2 for the newly merged pixels in the queue;
Step B4: repeat steps B2 and B3 until the region can grow no further, i.e. until the queue is empty;
The above operations yield one connected domain of the tail number image; repeating the above steps yields all connected regions of the tail number image;
Step 6: tail number recognition; recognize the tail number with the proposed recognizer based on an optimal-parameter support vector machine; the concrete steps are as follows:
Step C1: recognition pre-processing by inverse projective transformation;
Step C2: tail number character feature extraction based on support vector machines;
Step C3: multi-classifier recognition;
Step 7: post-processing of the recognition result.
CN 201110388239 2011-11-29 2011-11-29 Airplane tail number recognition method Expired - Fee Related CN102509091B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 201110388239 CN102509091B (en) 2011-11-29 2011-11-29 Airplane tail number recognition method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN 201110388239 CN102509091B (en) 2011-11-29 2011-11-29 Airplane tail number recognition method

Publications (2)

Publication Number Publication Date
CN102509091A CN102509091A (en) 2012-06-20
CN102509091B true CN102509091B (en) 2013-12-25

Family

ID=46221172

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 201110388239 Expired - Fee Related CN102509091B (en) 2011-11-29 2011-11-29 Airplane tail number recognition method

Country Status (1)

Country Link
CN (1) CN102509091B (en)

Families Citing this family (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103500323B (en) * 2013-09-18 2016-02-17 西安理工大学 Based on the template matching method of self-adaptation gray level image filtering
CN103971091B (en) * 2014-04-03 2017-04-26 北京首都国际机场股份有限公司 Automatic plane number recognition method
CN105335688B (en) * 2014-08-01 2018-07-13 深圳中集天达空港设备有限公司 A kind of aircraft model recognition methods of view-based access control model image
CN104156706A (en) * 2014-08-12 2014-11-19 华北电力大学句容研究中心 Chinese character recognition method based on optical character recognition technology
CN104243935B (en) * 2014-10-10 2018-02-16 南京莱斯信息技术股份有限公司 Airport field prison aims of systems monitoring method based on video identification
CN104408678A (en) * 2014-10-31 2015-03-11 中国科学院苏州生物医学工程技术研究所 Electronic medical record system for personal use
CN105512682B (en) * 2015-12-07 2018-11-23 南京信息工程大学 A kind of security level identification recognition methods based on Krawtchouk square and KNN-SMO classifier
CN105957238B (en) * 2016-05-20 2019-02-19 聚龙股份有限公司 A kind of paper currency management method and its system
CN106056751B (en) * 2016-05-20 2019-04-12 聚龙股份有限公司 The recognition methods and system of serial number
CN106374394A (en) * 2016-09-28 2017-02-01 刘子轩 Pipeline robot based on image recognition technology and control method
CN108734158B (en) * 2017-04-14 2020-05-19 成都唐源电气股份有限公司 Real-time train number identification method and device
CN108090442A (en) * 2017-12-15 2018-05-29 四川大学 A kind of airport scene monitoring method based on convolutional neural networks
CN108256493A (en) * 2018-01-26 2018-07-06 中国电子科技集团公司第三十八研究所 A kind of traffic scene character identification system and recognition methods based on Vehicular video
CN108564064A (en) * 2018-04-28 2018-09-21 北京宙心科技有限公司 A kind of efficient OCR recognizers of view-based access control model
CN109409373A (en) * 2018-09-06 2019-03-01 昆明理工大学 A kind of character recognition method based on image procossing
CN109299743B (en) * 2018-10-18 2021-08-10 京东方科技集团股份有限公司 Gesture recognition method and device and terminal
CN109850518B (en) * 2018-11-12 2022-01-28 太原理工大学 Real-time mining adhesive tape early warning tearing detection method based on infrared image
CN109858484B (en) * 2019-01-22 2022-10-14 电子科技大学 Multi-class transformation license plate correction method based on deflection evaluation
CN110097052A (en) * 2019-04-22 2019-08-06 苏州海赛人工智能有限公司 A kind of true and false license plate method of discrimination based on image
CN111110189B (en) * 2019-11-13 2021-11-09 吉林大学 Anti-snoring device and method based on DSP sound and image recognition technology
CN111429403B (en) * 2020-02-26 2022-11-08 北京航空航天大学杭州创新研究院 Automobile gear finished product defect detection method based on machine vision
CN113449574A (en) * 2020-03-26 2021-09-28 上海际链网络科技有限公司 Method and device for identifying content on target, storage medium and computer equipment
CN111582237B (en) * 2020-05-28 2022-08-12 国家海洋信息中心 ATSM model-based high-resolution image airplane type identification method
CN116912845B (en) * 2023-06-16 2024-03-19 广东电网有限责任公司佛山供电局 Intelligent content identification and analysis method and device based on NLP and AI

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8139875B2 (en) * 2007-06-28 2012-03-20 Mitsubishi Electric Corporation Image encoding device, image decoding device, image encoding method and image decoding method
CN100590675C (en) * 2007-12-28 2010-02-17 北京航空航天大学 Fixed intersection electric police grasp shoot device
CN101789080B (en) * 2010-01-21 2012-07-04 上海交通大学 Detection method for vehicle license plate real-time positioning character segmentation
CN101859382B (en) * 2010-06-03 2013-07-31 复旦大学 License plate detection and identification method based on maximum stable extremal region
CN101976340B (en) * 2010-10-13 2013-04-24 重庆大学 License plate positioning method based on compressed domain

Also Published As

Publication number Publication date
CN102509091A (en) 2012-06-20

Similar Documents

Publication Publication Date Title
CN102509091B (en) Airplane tail number recognition method
CN109840521B (en) Integrated license plate recognition method based on deep learning
Yuan et al. Large-scale solar panel mapping from aerial images using deep convolutional networks
Pan et al. A robust system to detect and localize texts in natural scene images
CN104050471B (en) Natural scene character detection method and system
CN102663348B (en) Marine ship detection method in optical remote sensing image
Zhang et al. CDNet: A real-time and robust crosswalk detection network on Jetson nano based on YOLOv5
CN102609686B (en) Pedestrian detection method
CN101859382B (en) License plate detection and identification method based on maximum stable extremal region
CN103761531B (en) The sparse coding license plate character recognition method of Shape-based interpolation contour feature
CN107103317A (en) Fuzzy license plate image recognition algorithm based on image co-registration and blind deconvolution
CN103530600B (en) Licence plate recognition method under complex illumination and system
Zhang et al. Study on traffic sign recognition by optimized Lenet-5 algorithm
CN108009518A (en) A kind of stratification traffic mark recognition methods based on quick two points of convolutional neural networks
CN102708356A (en) Automatic license plate positioning and recognition method based on complex background
CN104766046A (en) Detection and recognition algorithm conducted by means of traffic sign color and shape features
CN106529532A (en) License plate identification system based on integral feature channels and gray projection
CN108427919B (en) Unsupervised oil tank target detection method based on shape-guided saliency model
CN110008900B (en) Method for extracting candidate target from visible light remote sensing image from region to target
CN106529461A (en) Vehicle model identifying algorithm based on integral characteristic channel and SVM training device
Kilic et al. Turkish vehicle license plate recognition using deep learning
Sugiharto et al. Traffic sign detection based on HOG and PHOG using binary SVM and k-NN
Pham et al. CNN-based character recognition for license plate recognition system
CN105335688A (en) Identification method of airplane model on the basis of visual image
Ying et al. License plate detection and localization in complex scenes based on deep learning

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20131225

Termination date: 20211129