CN106886778A - License plate character segmentation and recognition method under a monitoring scene - Google Patents
License plate character segmentation and recognition method under a monitoring scene
- Publication number
- CN106886778A CN106886778A CN201710278593.1A CN201710278593A CN106886778A CN 106886778 A CN106886778 A CN 106886778A CN 201710278593 A CN201710278593 A CN 201710278593A CN 106886778 A CN106886778 A CN 106886778A
- Authority
- CN
- China
- Prior art keywords
- character
- characters
- license plate
- image
- training
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/10—Character recognition
- G06V30/14—Image acquisition
- G06V30/148—Segmentation of character regions
- G06V30/158—Segmentation of character regions using character size, text spacings or pitch estimation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/50—Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
- G06V10/507—Summing image-intensity values; Histogram projection analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/56—Extraction of image or video features relating to colour
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/62—Text, e.g. of license plates, overlay texts or captions on TV images
- G06V20/625—License plates
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/10—Character recognition
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Data Mining & Analysis (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Computation (AREA)
- General Engineering & Computer Science (AREA)
- Computing Systems (AREA)
- Bioinformatics & Cheminformatics (AREA)
- General Health & Medical Sciences (AREA)
- Computational Linguistics (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Biophysics (AREA)
- Molecular Biology (AREA)
- Bioinformatics & Computational Biology (AREA)
- Biomedical Technology (AREA)
- Evolutionary Biology (AREA)
- Health & Medical Sciences (AREA)
- Character Discrimination (AREA)
- Image Analysis (AREA)
Abstract
The present invention relates to a license plate character segmentation and recognition method under a monitoring scene, comprising the following steps. Step S1: classify license plate character images to prepare training data for an explicit-feature classifier and an implicit-feature classifier. Step S2: input a license plate image. Step S3: segment the characters of the license plate image with a method combining projection positioning and connected-domain analysis. Step S4: recognize the license plate characters with a method fusing the explicit-feature classifier and the implicit-feature classifier. By combining projection positioning with connected-domain analysis, the invention improves license plate character segmentation; and, considering that existing license plate character recognition relies on a single kind of feature, it proposes a fusion of explicit-feature and implicit-feature classifiers that combines the advantages of both kinds of feature extraction, thereby improving recognition accuracy.
Description
Technical field
The present invention relates to the field of pattern recognition and computer vision, and in particular to a license plate character segmentation and recognition method under a monitoring scene.
Background art
As the construction of "Safe China" accelerates, surveillance cameras are deployed ever more widely and their resolution keeps rising. Using the cameras that already blanket streets and alleys, rather than the standardized vehicle-image capture devices of dedicated checkpoints, brings new challenges. Traditional license plate recognition systems are widely used in highway tolling, parking lots, and similar settings, but most such deployments require special hardware. Ground induction coils, for example, can identify only one vehicle at a time and are slow; image transmission is mostly analog, so images lack gradation and contrast; and to preserve resolution the camera usually cannot capture a panoramic view of the vehicle, which prevents the system from meeting the business needs of criminal investigation and public security. In contrast, a license plate detection and recognition system based on dynamic video streams under a monitoring environment needs no additional hardware, can detect multiple plates in an image simultaneously, and is unconstrained by hardware and environment. It is efficient: it can recognize the plate in each frame of a multi-frame video independently and select the result with the highest confidence as the final output, reducing the influence of any single frame and raising recognition accuracy. Such a system still faces many challenges, however: plate angles vary widely, and dust accumulating on the camera surface over long-term use blurs the captured images and increases noise, all of which makes license plate character recognition difficult. In recent years, with the continuing development of computer image processing, artificial intelligence, pattern recognition, and video transmission, license plate algorithms based on dynamic video streams have been applied ever more widely in criminal investigation, rapid traffic-accident handling, and other areas of social life.
Although scholars at home and abroad have studied this problem and proposed capable, practical license plate character segmentation and recognition algorithms, two issues remain: segmentation quality is often unsatisfactory, and recognition typically relies on a single kind of feature. The vertical projection segmentation method is simple and fast, but sensitive to noise and poor at handling broken characters; the connected-domain method is insensitive to character deformation, illumination, and weather, and handles broken characters comparatively well. We therefore combine projection positioning with connected-domain analysis to improve segmentation. To address the single-feature, low-accuracy problem in recognition, we fuse an explicit-feature classifier with an implicit-feature classifier. Explicit features, hand-crafted from prior knowledge, are highly targeted, but they are generally shallow; their extraction depends heavily on human experience and subjectivity, different choices of features strongly affect classification performance, and even the order of the extracted features can change the final result. Features learned automatically by machines such as convolutional neural networks (CNN), by contrast, capture the deep structure of the image and avoid manual selection of explicit features, learning directly from the training data. Implicit feature extraction thus reduces the classifier's sensitivity to feature selection, but it is poorly interpretable and depends entirely on the choice of model. The method herein combines the advantages of both kinds of feature extraction, improving character recognition.
Summary of the invention
In view of this, an object of the present invention is to provide a license plate character segmentation and recognition method under a monitoring scene that overcomes the defects of the prior art.
The present invention is realized by the following scheme: a license plate character segmentation and recognition method under a monitoring scene, comprising the following steps:
Step S1: classify license plate character images to prepare training data for the explicit-feature classifier and the implicit-feature classifier;
Step S2: input a license plate image;
Step S3: segment the characters of the license plate image with a method combining projection positioning and connected-domain analysis;
Step S4: recognize the license plate characters with a method fusing the explicit-feature classifier and the implicit-feature classifier.
Further, in step S3, the license plate characters are segmented as follows:
Step S31: the combined projection-positioning and connected-domain method first converts the license plate image to grayscale with the formula f(i,j) = αR(i,j) + βG(i,j) + χB(i,j), where α = 0.30, β = 0.59, χ = 0.11;
Step S32: divide the license plate image into a grid and, for each cell, compute and normalize the image histogram; compute the cumulative mean mu and the global gray mean; compute the probability qA of assignment to class A and the probability qB of assignment to class B; compute the between-class variance with the formula sigma = qA*qB*(muA-muB)*(muA-muB); loop over candidate thresholds to find the one maximizing the between-class variance, record it as the optimal threshold, and binarize the cell with it; repeat until the whole license plate image has been binarized;
Step S33: extract character contours;
Step S34: compute the bounding rectangle of each contour;
Step S35: if fewer than 7 bounding rectangles satisfy the size constraints, characters may be stuck together or the Chinese character of the plate may be missing; for stuck characters, apply projection positioning to the stuck region and cut at the projection valley closest to the region's midpoint;
Step S36: when the Chinese character is missing, infer it from the special character: the character block whose center lies in the 1/7-2/7 interval of the plate is the special character, and the character to its left is the Chinese character;
Step S37: if more than 7 character blocks are projected, some characters may have been split into multiple blocks during segmentation, so the projected character blocks are further merged.
Further, in step S4, the license plate characters are recognized as follows:
Step S41: input the training images to the convolutional neural network for implicit feature extraction;
Step S42: input the training images to the explicit-feature classification network;
Step S43: train the convolutional neural network for implicit feature extraction; its training comprises two phases:
the first phase is forward propagation: take a sample X (xp, yp) from the training set, feed X into the network, and compute its actual output by the formula Ox = fn(...(f2(f1(Xp W(1)) W(2))...) W(n));
the second phase is back propagation: compute the difference between the actual output Ox of training sample X and the ideal output Yp, and adjust the model parameters by error minimization;
Step S44: train an explicit-feature classification network, in three stages:
the first stage extracts character features: first correct the license plate characters with Gamma correction to adjust image contrast; then compute the gradient at each pixel to obtain contour information; divide the character image into n × n cells with n = 6; compute the histogram of gradients of each cell to obtain its feature descriptor; group the cells into m × n blocks of 3 × 3 cells and concatenate each block's descriptors into a block descriptor; concatenate the mn block descriptors into the character feature Fexture1; mark the n × n center matrix of the character, with n = 8, as non-character pixels and downsample the image to low resolution to obtain the character feature Fexture2; concatenate Fexture1 and Fexture2 into the final character feature;
the second stage prepares the training data: extract character features from every training image, attach class labels, and organize all training images into a matrix;
the third stage is training: input the training matrix into a support vector machine and train the explicit-feature classifier with an RBF kernel;
Step S45: input each image to be classified into both the explicit-feature and the implicit-feature classification networks to obtain classification results;
Step S46: for each test image I, obtain the confidence vector E = {e1, e2, ..., eN} of the implicit-feature network and the confidence vector E' = {e'1, e'2, ..., e'N} of the explicit-feature network, where N is the number of character classes;
Step S47: from the two vectors of step S45 obtain ||E||∞ and ||E'||∞; the class of image I is i if ||E||∞ ≥ ||E'||∞ and j otherwise, where i is the class corresponding to ||E||∞ and j is the class corresponding to ||E'||∞.
Compared with the prior art, the present invention has the following beneficial effects. The character segmentation method built here, combining projection positioning with connected-domain analysis, improves segmentation accuracy. The proposed fusion of explicit-feature and implicit-feature classifiers exploits the strong targeting of hand-crafted explicit features based on prior knowledge, while also exploiting features learned automatically by machines such as convolutional neural networks (CNN) and sparse autoencoders (AutoEncoder), which capture the deep structure of the image and avoid manual feature selection by learning directly from the training data. The fusion thereby avoids both the shortcomings of explicit features (generally shallow; extraction overly dependent on human experience and subjectivity; classification performance highly sensitive to the chosen features and even to their order) and those of implicit features (poor interpretability; feature selection entirely dependent on the model), improving the accuracy of character classification.
Brief description of the drawings
Fig. 1 is a flow chart of the license plate character segmentation and recognition method under a monitoring scene of the present invention.
Specific embodiment
The present invention is further described below with reference to the accompanying drawings and embodiments.
This embodiment provides a license plate character segmentation and recognition method under a monitoring scene, as shown in Fig. 1. Considering that existing license plate character recognition relies on a single kind of feature, it proposes a fusion of an explicit-feature classifier and an implicit-feature classifier. Explicit features, hand-crafted from prior knowledge, are highly targeted, but they are generally shallow; their extraction depends heavily on human experience and subjectivity, different choices of features strongly affect classification performance, and even the order of the extracted features can change the final result. Features learned automatically by machines such as convolutional neural networks (CNN) and sparse autoencoders (AutoEncoder), by contrast, capture the deep structure of the image, avoid manual selection of explicit features, and learn directly from the training data. Implicit feature extraction reduces the classifier's sensitivity to feature selection, but its interpretability is poor and feature selection depends entirely on the choice of model. The method herein combines the advantages of both kinds of feature extraction, improving character recognition. It comprises the following steps:
Step S1: classify license plate character images to prepare training data for the explicit-feature classifier and the implicit-feature classifier;
Step S2: input a license plate image;
Step S3: segment the characters of the license plate image with a method combining projection positioning and connected-domain analysis;
Step S4: recognize the license plate characters with a method fusing the explicit-feature classifier and the implicit-feature classifier.
In this embodiment, in step S3, the license plate characters are segmented as follows:
Step S31: the combined projection-positioning and connected-domain method first converts the license plate image to grayscale with the formula f(i,j) = αR(i,j) + βG(i,j) + χB(i,j), where α = 0.30, β = 0.59, χ = 0.11;
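The weighted grayscale conversion of step S31 can be sketched as follows; the function name and the two-pixel toy image are illustrative, not from the patent.

```python
import numpy as np

def to_gray(rgb, alpha=0.30, beta=0.59, chi=0.11):
    """f(i,j) = alpha*R(i,j) + beta*G(i,j) + chi*B(i,j) over an HxWx3 array."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return alpha * r + beta * g + chi * b

# toy 1x2 "plate": one white pixel, one pure-blue pixel
img = np.array([[[255, 255, 255], [0, 0, 255]]], dtype=np.float64)
gray = to_gray(img)
```

Since the weights sum to 1.0, a white pixel maps to 255, while a pure-blue pixel keeps only the 0.11 weight.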
Step S32: divide the license plate image into a grid and, for each cell, compute and normalize the image histogram; compute the cumulative mean mu and the global gray mean; compute the probability qA of assignment to class A and the probability qB of assignment to class B; compute the between-class variance with the formula sigma = qA*qB*(muA-muB)*(muA-muB); loop over candidate thresholds to find the one maximizing the between-class variance, record it as the optimal threshold, and binarize the cell with it; repeat until the whole license plate image has been binarized;
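Step S32 is essentially Otsu's method applied per grid cell. A minimal sketch under stated assumptions: a 256-bin histogram, left bin edges as gray levels, and an illustrative grid size; the helper names are ours.

```python
import numpy as np

def otsu_threshold(gray):
    """Find t maximizing sigma = qA*qB*(muA-muB)^2 over the gray histogram."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()                          # normalized histogram
    best_t, best_sigma = 0, -1.0
    for t in range(1, 256):
        qA, qB = p[:t].sum(), p[t:].sum()          # class probabilities
        if qA == 0 or qB == 0:
            continue
        muA = (np.arange(t) * p[:t]).sum() / qA    # class means
        muB = (np.arange(t, 256) * p[t:]).sum() / qB
        sigma = qA * qB * (muA - muB) ** 2         # between-class variance
        if sigma > best_sigma:
            best_sigma, best_t = sigma, t
    return best_t

def binarize_by_grid(gray, rows=2, cols=4):
    """Split the plate into a grid and threshold each cell independently."""
    out = np.zeros_like(gray, dtype=np.uint8)
    h, w = gray.shape
    for i in range(rows):
        for j in range(cols):
            ys = slice(i * h // rows, (i + 1) * h // rows)
            xs = slice(j * w // cols, (j + 1) * w // cols)
            cell = gray[ys, xs]
            out[ys, xs] = (cell >= otsu_threshold(cell)) * 255
    return out

plate = np.zeros((8, 8)); plate[::2, :] = 200.0    # toy bimodal "plate"
binary = binarize_by_grid(plate, rows=2, cols=2)
```

Per-cell thresholding makes the binarization robust to uneven illumination across the plate, which a single global threshold cannot handle.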
Step S33: extract character contours;
Step S34: compute the bounding rectangle of each contour;
Step S35: if fewer than 7 bounding rectangles satisfy the size constraints, characters may be stuck together or the Chinese character of the plate may be missing; for stuck characters, apply projection positioning to the stuck region and cut at the projection valley closest to the region's midpoint;
Step S36: when the Chinese character is missing, infer it from the special character: the character block whose center lies in the 1/7-2/7 interval of the plate is the special character, and the character to its left is the Chinese character;
Step S37: if more than 7 character blocks are projected, some characters may have been split into multiple blocks during segmentation, so the projected character blocks are further merged.
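Step S35's cut rule, splitting a stuck block at the projection valley closest to its midpoint, might look like the sketch below. The local-minimum valley test and the toy two-character block are our assumptions; the patent does not specify how valleys are detected.

```python
import numpy as np

def split_stuck_block(block):
    """Split a two-character stuck block at the vertical-projection valley
    closest to the block's horizontal midpoint (step S35 sketch)."""
    proj = (block > 0).sum(axis=0)                 # vertical projection profile
    mid = block.shape[1] // 2
    # candidate valleys: local minima of the projection profile
    valleys = [x for x in range(1, len(proj) - 1)
               if proj[x] <= proj[x - 1] and proj[x] <= proj[x + 1]]
    if not valleys:
        return block[:, :mid], block[:, mid:]
    cut = min(valleys, key=lambda x: abs(x - mid))
    return block[:, :cut], block[:, cut:]

# toy stuck block: two character strokes with a thin gap at columns 4-5
block = np.zeros((10, 11)); block[:, 0:4] = 1; block[:, 6:11] = 1
left, right = split_stuck_block(block)
```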
In this embodiment, in step S4, the license plate characters are recognized as follows:
Step S41: input the training images to the convolutional neural network for implicit feature extraction;
Step S42: input the training images to the explicit-feature classification network;
Step S43: train the convolutional neural network for implicit feature extraction; its training is a process in which the automatic learning of parameters gradually transforms initial "low-level" features into "high-level" representations, and it comprises two phases:
the first phase is forward propagation: take a sample X (xp, yp) from the training set, feed X into the network, and compute its actual output by the formula Ox = fn(...(f2(f1(Xp W(1)) W(2))...) W(n));
the second phase is back propagation: compute the difference between the actual output Ox of training sample X and the ideal output Yp, and adjust the model parameters by error minimization;
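The two training phases of step S43 can be illustrated with a deliberately simplified fully-connected network: the convolution layers are omitted, and the layer sizes, tanh nonlinearity, and learning rate are all our assumptions. Only the shape of the computation follows the text: a nested forward pass Ox = fn(...(f2(f1(Xp W(1)) W(2))...) W(n)), then weight updates that minimize the error to the ideal output Yp.

```python
import numpy as np

rng = np.random.default_rng(0)
f = np.tanh                                     # per-layer nonlinearity f_k

# one training sample X with ideal output Y; W1, W2 play the roles of W(1), W(2)
X = rng.normal(size=(1, 4))
Y = np.array([[1.0, 0.0, 0.0]])
W1 = rng.normal(size=(4, 8)) * 0.1
W2 = rng.normal(size=(8, 3)) * 0.1

def forward(X):
    H = f(X @ W1)                               # f1(X W(1))
    return H, H @ W2                            # actual output O_x

def sq_error(O):
    return 0.5 * ((O - Y) ** 2).sum()           # difference from ideal output

loss_before = sq_error(forward(X)[1])
for _ in range(50):                             # back-propagation phase
    H, O = forward(X)
    dO = O - Y                                  # error gradient at the output
    dW2 = H.T @ dO
    dW1 = X.T @ ((dO @ W2.T) * (1 - H ** 2))    # tanh'(z) = 1 - tanh(z)^2
    W2 -= 0.1 * dW2                             # gradient-descent update
    W1 -= 0.1 * dW1
loss_after = sq_error(forward(X)[1])
```

Repeating forward propagation and the gradient update drives the squared error down, which is the "adjust the model parameters by error minimization" of the second phase.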
Step S44: train an explicit-feature classification network, in three stages:
the first stage extracts character features: first correct the license plate characters with Gamma correction to adjust image contrast; then compute the gradient at each pixel to obtain contour information; divide the character image into n × n cells with n = 6; compute the histogram of gradients of each cell to obtain its feature descriptor; group the cells into m × n blocks of 3 × 3 cells and concatenate each block's descriptors into a block descriptor; concatenate the mn block descriptors into the character feature Fexture1; mark the n × n center matrix of the character, with n = 8, as non-character pixels and downsample the image to low resolution to obtain the character feature Fexture2; concatenate Fexture1 and Fexture2 into the final character feature;
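The first stage of step S44 resembles a HOG descriptor concatenated with a downsampled image. A simplified sketch under stated assumptions: Gamma correction and the 3 × 3 block grouping are omitted, and the 9-bin orientation histogram and block-average downsampling are illustrative choices, not from the patent.

```python
import numpy as np

def hog_like_features(img, n=6, bins=9):
    """Cell-wise gradient histograms: divide into n x n cells, histogram the
    gradient orientations of each cell weighted by magnitude, concatenate."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)        # unsigned orientation
    h, w = img.shape
    feats = []
    for i in range(n):
        for j in range(n):
            ys = slice(i * h // n, (i + 1) * h // n)
            xs = slice(j * w // n, (j + 1) * w // n)
            hist, _ = np.histogram(ang[ys, xs], bins=bins,
                                   range=(0, np.pi), weights=mag[ys, xs])
            feats.append(hist)
    return np.concatenate(feats)

def lowres_features(img, size=8):
    """Block-average the character down to size x size (Fexture2-style)."""
    h, w = img.shape
    return img.reshape(size, h // size, size, w // size).mean(axis=(1, 3)).ravel()

img = np.zeros((24, 24)); img[:, 12:] = 1.0        # toy "character" edge
feat = np.concatenate([hog_like_features(img), lowres_features(img)])
```

With n = 6 and 9 bins this yields 6*6*9 = 324 gradient values plus 64 low-resolution values, concatenated in the spirit of Fexture1 and Fexture2.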
the second stage prepares the training data: extract character features from every training image, attach class labels, and organize all training images into a matrix;
the third stage is training: input the training matrix into a support vector machine and train the explicit-feature classifier with an RBF kernel;
Step S45: input each image to be classified into both the explicit-feature and the implicit-feature classification networks to obtain classification results;
Step S46: for each test image I, obtain the confidence vector E = {e1, e2, ..., eN} of the implicit-feature network and the confidence vector E' = {e'1, e'2, ..., e'N} of the explicit-feature network, where N is the number of character classes;
Step S47: from the two vectors of step S45 obtain ||E||∞ and ||E'||∞; the class of image I is i if ||E||∞ ≥ ||E'||∞ and j otherwise, where i is the class corresponding to ||E||∞ and j is the class corresponding to ||E'||∞.
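Steps S46-S47 reduce to picking the class from whichever classifier is more confident, since for a confidence vector the infinity norm is simply its maximum entry. A sketch, assuming ties go to the implicit-feature network (the source's decision formula is not reproduced in this text, so that tie-break is our guess):

```python
import numpy as np

def fuse(E, E_prime):
    """Return the class index from the classifier with the larger
    infinity norm: argmax(E) if ||E||_inf >= ||E'||_inf, else argmax(E')."""
    E, E_prime = np.asarray(E), np.asarray(E_prime)
    if np.max(E) >= np.max(E_prime):               # ||.||_inf of a confidence
        return int(np.argmax(E))                   # vector is its max entry
    return int(np.argmax(E_prime))
```

For example, with E = {0.1, 0.9, 0.2} from the implicit network and E' = {0.3, 0.2, 0.5} from the explicit one, the implicit network is more confident, so its top class is chosen.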
The foregoing are only preferred embodiments of the present invention; all equivalent changes and modifications made within the scope of the patent claims of the present invention shall fall within the scope of the present invention.
Claims (3)
1. A license plate character segmentation and recognition method under a monitoring scene, characterized by comprising the following steps:
Step S1: classify license plate character images to prepare training data for the explicit-feature classifier and the implicit-feature classifier;
Step S2: input a license plate image;
Step S3: segment the characters of the license plate image with a method combining projection positioning and connected-domain analysis;
Step S4: recognize the license plate characters with a method fusing the explicit-feature classifier and the implicit-feature classifier.
2. The license plate character segmentation and recognition method under a monitoring scene according to claim 1, characterized in that in step S3 the license plate characters are segmented as follows:
Step S31: the combined projection-positioning and connected-domain method first converts the license plate image to grayscale with the formula f(i,j) = αR(i,j) + βG(i,j) + χB(i,j), where α = 0.30, β = 0.59, χ = 0.11;
Step S32: divide the license plate image into a grid and, for each cell, compute and normalize the image histogram; compute the cumulative mean mu and the global gray mean; compute the probability qA of assignment to class A and the probability qB of assignment to class B; compute the between-class variance with the formula sigma = qA*qB*(muA-muB)*(muA-muB); loop over candidate thresholds to find the one maximizing the between-class variance, record it as the optimal threshold, and binarize the cell with it; repeat until the whole license plate image has been binarized;
Step S33: extract character contours;
Step S34: compute the bounding rectangle of each contour;
Step S35: if fewer than 7 bounding rectangles satisfy the size constraints, characters may be stuck together or the Chinese character of the plate may be missing; for stuck characters, apply projection positioning to the stuck region and cut at the projection valley closest to the region's midpoint;
Step S36: when the Chinese character is missing, infer it from the special character: the character block whose center lies in the 1/7-2/7 interval of the plate is the special character, and the character to its left is the Chinese character;
Step S37: if more than 7 character blocks are projected, some characters may have been split into multiple blocks during segmentation, so the projected character blocks are further merged.
3. The license plate character segmentation and recognition method under a monitoring scene according to claim 1, characterized in that in step S4 the license plate characters are recognized as follows:
Step S41: input the training images to the convolutional neural network for implicit feature extraction;
Step S42: input the training images to the explicit-feature classification network;
Step S43: train the convolutional neural network for implicit feature extraction; its training comprises two phases:
the first phase is forward propagation: take a sample X (xp, yp) from the training set, feed X into the network, and compute its actual output by the formula Ox = fn(...(f2(f1(Xp W(1)) W(2))...) W(n));
the second phase is back propagation: compute the difference between the actual output Ox of training sample X and the ideal output Yp, and adjust the model parameters by error minimization;
Step S44: train an explicit-feature classification network, in three stages:
the first stage extracts character features: first correct the license plate characters with Gamma correction to adjust image contrast; then compute the gradient at each pixel to obtain contour information; divide the character image into n × n cells with n = 6; compute the histogram of gradients of each cell to obtain its feature descriptor; group the cells into m × n blocks of 3 × 3 cells and concatenate each block's descriptors into a block descriptor; concatenate the mn block descriptors into the character feature Fexture1; mark the n × n center matrix of the character, with n = 8, as non-character pixels and downsample the image to low resolution to obtain the character feature Fexture2; concatenate Fexture1 and Fexture2 into the final character feature;
the second stage prepares the training data: extract character features from every training image, attach class labels, and organize all training images into a matrix;
the third stage is training: input the training matrix into a support vector machine and train the explicit-feature classifier with an RBF kernel;
Step S45: input each image to be classified into both the explicit-feature and the implicit-feature classification networks to obtain classification results;
Step S46: for each test image I, obtain the confidence vector E = {e1, e2, ..., eN} of the implicit-feature network and the confidence vector E' = {e'1, e'2, ..., e'N} of the explicit-feature network, where N is the number of character classes;
Step S47: from the two vectors of step S45 obtain ||E||∞ and ||E'||∞; the class of image I is i if ||E||∞ ≥ ||E'||∞ and j otherwise, where i is the class corresponding to ||E||∞ and j is the class corresponding to ||E'||∞.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710278593.1A CN106886778B (en) | 2017-04-25 | 2017-04-25 | License plate character segmentation and recognition method in monitoring scene |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710278593.1A CN106886778B (en) | 2017-04-25 | 2017-04-25 | License plate character segmentation and recognition method in monitoring scene |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106886778A true CN106886778A (en) | 2017-06-23 |
CN106886778B CN106886778B (en) | 2020-02-07 |
Family
ID=59183579
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710278593.1A Active CN106886778B (en) | 2017-04-25 | 2017-04-25 | License plate character segmentation and recognition method in monitoring scene |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106886778B (en) |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107688784A (en) * | 2017-08-23 | 2018-02-13 | 福建六壬网安股份有限公司 | Character recognition method and storage medium based on fusing deep and shallow features |
CN107958219A (en) * | 2017-12-06 | 2018-04-24 | 电子科技大学 | Image scene classification method based on multiple models and multi-scale features |
CN107992897A (en) * | 2017-12-14 | 2018-05-04 | 重庆邮电大学 | Product image classification method based on convolutional Laplacian sparse coding |
CN108509912A (en) * | 2018-04-03 | 2018-09-07 | 深圳市智绘科技有限公司 | License plate recognition method and system for multi-channel network video streams |
CN109086767A (en) * | 2018-07-11 | 2018-12-25 | 于洋 | Segmentation method for blurred license plates in a monitoring scene |
WO2020088338A1 (en) * | 2018-10-30 | 2020-05-07 | 杭州海康威视数字技术股份有限公司 | Method and apparatus for building recognition model |
CN113392838A (en) * | 2021-08-16 | 2021-09-14 | 智道网联科技(北京)有限公司 | Character segmentation method and device and character recognition method and device |
CN115171092A (en) * | 2022-09-08 | 2022-10-11 | 松立控股集团股份有限公司 | End-to-end license plate detection method based on semantic enhancement |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101398894A (en) * | 2008-06-17 | 2009-04-01 | 浙江师范大学 | Automatic license plate recognition method and implementation device |
CN104298976A (en) * | 2014-10-16 | 2015-01-21 | 电子科技大学 | License plate detection method based on convolutional neural network |
CN105354574A (en) * | 2015-12-04 | 2016-02-24 | 山东博昂信息科技有限公司 | Vehicle number recognition method and device |
CN105354572A (en) * | 2015-12-10 | 2016-02-24 | 苏州大学 | Automatic license plate recognition system based on a simplified convolutional neural network |
CN106096602A (en) * | 2016-06-21 | 2016-11-09 | 苏州大学 | Chinese license plate recognition method based on convolutional neural network |
Also Published As
Publication number | Publication date |
---|---|
CN106886778B (en) | 2020-02-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106886778A (en) | License plate character segmentation and recognition method in a monitoring scene | |
CN110956094B (en) | RGB-D multi-mode fusion personnel detection method based on asymmetric double-flow network | |
CN104778453B (en) | Night pedestrian detection method based on infrared pedestrian brightness statistical features | |
Yuan et al. | Robust traffic sign recognition based on color global and local oriented edge magnitude patterns | |
CN107491762B (en) | Pedestrian detection method | |
CN107316035A (en) | Object recognition method and device based on deep-learning neural networks | |
CN107463892A (en) | Pedestrian detection method in images combining contextual information and multi-level features | |
CN107122776A (en) | Road traffic sign detection and recognition method based on convolutional neural networks | |
CN111965636A (en) | Night target detection method based on millimeter wave radar and vision fusion | |
CN108564673A (en) | Class attendance checking method and system based on global face recognition | |
CN110555465A (en) | Weather image identification method based on CNN and multi-feature fusion | |
CN108764096B (en) | Pedestrian re-identification system and method | |
CN111062381B (en) | License plate position detection method based on deep learning | |
CN111242911A (en) | Method and system for determining image definition based on deep learning algorithm | |
CN115601717B (en) | Deep learning-based traffic offence behavior classification detection method and SoC chip | |
CN104217206A (en) | Real-time attendance counting method based on high-definition videos | |
CN115115908A (en) | Cross-domain target detection model training method, target detection method and storage medium | |
CN114187664B (en) | Rope skipping counting system based on artificial intelligence | |
CN115019340A (en) | Night pedestrian detection algorithm based on deep learning | |
CN112347967B (en) | Pedestrian detection method fusing motion information in complex scene | |
CN117994573A (en) | Infrared dim target detection method based on superpixel and deformable convolution | |
Jin et al. | Fusing Canny operator with vibe algorithm for target detection | |
CN108520208A (en) | Localized face recognition method | |
CN114694133B (en) | Text recognition method based on combination of image processing and deep learning | |
CN109145744A (en) | Pedestrian re-identification method based on adaptive prediction mode using LSTM networks | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||