CN110866430B - License plate recognition method and device - Google Patents


Info

Publication number
CN110866430B
CN110866430B (granted publication of application CN201810989671.3A)
Authority
CN
China
Prior art keywords
license plate
detection
frames
character
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810989671.3A
Other languages
Chinese (zh)
Other versions
CN110866430A (en)
Inventor
党韩兵
陈雷
胡鑫源
居彩霞
林洪周
杨超
Current Assignee
Shanghai Fullhan Microelectronics Co ltd
Original Assignee
Shanghai Fullhan Microelectronics Co ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Fullhan Microelectronics Co ltd
Priority to CN201810989671.3A
Publication of CN110866430A
Application granted
Publication of CN110866430B
Legal status: Active
Anticipated expiration

Classifications

    • G06V 20/54: Surveillance or monitoring of activities for traffic, e.g. cars on the road, trains or boats
    • G06V 20/625: License plates
    • G06V 30/10: Character recognition
    • G06V 30/153: Segmentation of character regions using recognition of characters or words
    • G06V 10/22: Image preprocessing by selection of a specific region containing or referencing a pattern
    • G06F 18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06N 3/045: Neural network architectures; combinations of networks

Abstract

The invention discloses a license plate recognition method and a license plate recognition device, wherein the method comprises the following steps: step S1, training an AdaBoost cascade license plate classifier by using a local license plate image based on image characteristics, and performing sliding window detection on a video sequence picture by using the license plate classifier to obtain a license plate rough selection area in the video sequence picture; step S2, merging the obtained detection frames by using a multi-scale frame fusion method, and eliminating the detection frames with low confidence degrees; step S3, after the fused detection frame is obtained, a license plate region image is cut on the original input image, and license plate accurate positioning is carried out; step S4, extracting character edge information from the image with the accurate positioning of the upper and lower boundaries based on an edge detection operator to obtain a complete license plate; step S5, performing character cutting on the accurately positioned complete license plate according to a binarization algorithm and connected domain analysis; and step S6, recognizing the cutting characters through the convolutional neural network to obtain a detection result.

Description

License plate recognition method and device
Technical Field
The invention relates to the technical field of license plate recognition, in particular to a license plate recognition method and device on an embedded platform.
Background
The automatic license plate recognition system is an important component of an intelligent traffic system, and the current main license plate recognition methods can be divided into two categories according to strategies, namely a traditional method based on texture and color characteristics and a machine learning method.
For license plate recognition methods based on texture and color features, the traditional approach locates the license plate in a surveillance video sequence through operations such as edge extraction, morphological operations (dilation and erosion), and color space conversion (RGB to HSV), then performs character segmentation through binarization and connected-domain analysis, and finally recognizes characters by matching against a preset character template (computing a correlation coefficient) or against character template features (such as 13 features). For example, the Chinese patent application "License plate positioning method", publication number CN102999753A, filed by Tencent Technology (Shenzhen) Company Limited, detects the color of a captured panoramic image and extracts regions of a specified color as license plate candidate regions; extracts edge features of the candidate regions and segments them using those features to generate a candidate region set; performs tilt-angle correction on the candidate regions in the set; and verifies each candidate region to remove non-license-plate regions, thereby screening out the license plate region.
Such traditional license plate recognition methods have the advantages of simple computation and a small parameter scale, but license plate positioning fails when the texture and color features are disturbed by the surrounding background (for example, indistinct texture under poor illumination, or a car body whose color is close to that of the plate). In the character recognition stage, the matching information carried by a binarized character template is greatly reduced compared with the grayscale space, which leads to recognition errors.
For license plate recognition methods based on machine learning, existing approaches generally obtain network parameters for plate positioning through training, automatically locate the plate region in a video sequence image, then perform character segmentation with a character segmentation network, and finally complete character recognition with a character recognition network. For example, the Chinese patent application with publication number CN104298976A, "License plate detection method based on convolutional neural network", filed by the University of Electronic Science and Technology of China, detects the image to be examined with a Haar-feature AdaBoost license plate detector (window size 45x15) to obtain a rough plate candidate region, refines that region with a complete-plate convolutional neural network recognition model to obtain the final candidate region, segments the final candidate region with a multi-threshold segmentation algorithm to obtain the plate's Chinese characters, letters, and digits, and recognizes them with separate convolutional neural network recognition models for Chinese characters, letters, and digits to obtain the license plate recognition result. This method has a large parameter scale, consumes considerable storage on an embedded platform, is computationally heavy and therefore hard to run at an acceptable recognition speed on such a platform, and is time-consuming to train for each detection and recognition network.
Similarly, the Chinese patent application with publication number CN104809443A, "License plate detection method and system based on convolutional neural network", filed by Shanghai Jiao Tong University, trains a convolutional neural network on a labeled picture library used as the sample set, processes the picture to be examined with the trained network, and determines from the network's output vector whether the picture contains a license plate and which plate matches best. However, it requires multi-level pyramid detection on the input image during recognition, which greatly increases the computation, and the parameter scale of its plate-positioning network is much larger than that of Haar or LBP features, which hinders porting to an embedded platform.
Disclosure of Invention
In order to overcome the defects in the prior art, the invention aims to provide a license plate recognition method and a license plate recognition device, so as to solve the problems of insufficient parameter storage space and high cost of the license plate recognition system on the existing embedded platform.
In order to achieve the above and other objects, the present invention provides a license plate recognition method, comprising the steps of:
step S1, training an AdaBoost cascade license plate classifier by using a local license plate image based on image characteristics, and performing sliding window detection on a video sequence picture by using the AdaBoost cascade license plate classifier obtained by training to obtain a license plate rough selection area in the video sequence picture, namely a detection frame;
step S2, merging the detection frames obtained in the step S1 by using a multi-scale frame fusion method, and removing the detection frames with low confidence degree;
step S3, after the fused detection frame is obtained, a license plate region image is cut on the original input image, and license plate accurate positioning is carried out to complete accurate positioning of the upper and lower boundaries;
step S4, extracting character edge information from the image with the accurate positioning of the upper and lower boundaries based on an edge detection operator, and obtaining a complete license plate by row accumulation and defining the left and right boundaries of the license plate;
step S5, performing character cutting on the accurately positioned complete license plate according to a binarization algorithm and connected domain analysis;
and step S6, recognizing the cutting characters through the convolutional neural network to obtain a detection result.
Preferably, in step S2, the detection frames are classified according to levels, frame fusion is performed on each level, all candidate frames on the current level are traversed, the degree of similarity between the detection frames is measured, when the degree of similarity exceeds a set threshold, the similar detection frames are merged, the total number of frames in the current set is counted as a confidence C, after the merging of the frames on each level is completed, the detection frames with the confidence C smaller than the set threshold are removed, the frames meeting the condition are scaled to the original breadth size, and the second frame merging is performed on the original breadth.
Preferably, in step S2, the similarity and confidence C between the detection frames are calculated by the following formula:
Similarity = f(rect_1, rect_2, ..., rect_n)
C = g(Similarity > Thresh)
wherein f is a function measuring the degree of similarity between the detection frames, rect_i is the i-th detection frame to be evaluated, Similarity is the calculated frame similarity, Thresh is the threshold for judging similarity, and the g function derives the confidence C of the fused frame from the similarity.
Preferably, the step S3 further includes:
step S300, calculating a binarization threshold value of a license plate candidate area, carrying out binarization for multiple times within a certain gray scale range near the calculated threshold value, carrying out connected domain analysis on each level of binarized image, and extracting frames which accord with the aspect ratio of characters;
s301, positioning an upper boundary and a lower boundary by using a RANSAC algorithm;
and S302, performing geometric correction on the positioning result obtained in the step S301, and calibrating the license plate to a standard position.
Preferably, in step S301, two data sets are used to record the top-left and bottom-right vertices of the frames meeting the above conditions; two vertices A and B are randomly selected from the top-left vertex set and a straight line AB is fitted; whether the remaining points in the set fall within a certain threshold range of the fitted line is determined and the number Count of points meeting the threshold is counted; this is iterated in a loop, and the vertices A_best and B_best corresponding to the maximum Count are recorded; the straight line determined by A_best and B_best is taken as the upper boundary line of the license plate, and the lower boundary line of the license plate is obtained in the same way.
Preferably, in step S3, in the precise positioning process, the candidate area is detected for positive color and negative color twice, the number of the character frames obtained by the two times is compared, the character color of the input image is determined, and if the character frames of the positive color image are more than the character frames of the negative color image, the original license plate is a white character; otherwise, the original license plate is black characters.
Preferably, in step S302, four intersections of the upper and lower boundary lines and the image edge are used as control points to geometrically correct the license plate to the standard position.
Preferably, the step S5 further includes:
s500, performing binarization and connected domain analysis on the precisely positioned complete license plate on a gray scale map to obtain an external frame of the character;
step S501, counting the ratio of white points in each circumscribing frame, removing frames whose white-point count is below a certain threshold, sorting the frames by ascending abscissa to obtain each frame's center position and the total number of frames N, detecting among the sorted frames the character frame P at the position given by a formula that appears only as an image in the source (Figure BDA0001780497840000041), inferring forward from P the position of one character frame as the Chinese character, and then searching backward for the following M-2 characters to obtain the M characters on the license plate and the cut characters to be examined, wherein M is the number of characters of the license plate to be detected.
Preferably, the method further comprises the steps of:
after the characters are recognized, the recognition probability of each character is obtained, the recognition probabilities of the M characters are comprehensively evaluated, and the frame combination corresponding to the maximum confidence coefficient is calculated to be the optimal solution.
In order to achieve the above object, the present invention also provides a license plate recognition device, comprising:
the license plate preliminary detection unit is used for training an AdaBoost cascade license plate classifier by using a local license plate image based on image characteristics, and performing sliding window detection on the video sequence picture by using the AdaBoost cascade license plate classifier obtained by training to obtain a license plate rough selection area, namely a detection frame, in the video sequence picture;
the multi-scale frame fusion unit is used for merging the detection frames obtained by the license plate preliminary detection unit by using a multi-scale frame fusion method and removing the detection frames with low confidence degrees;
the license plate accurate positioning unit is used for cutting a license plate region image on the original input image after the fused detection frame is obtained, and performing license plate accurate positioning to finish accurate positioning of the upper and lower boundaries;
the character edge information extraction unit is used for extracting character edge information from the image with the accurate positioning of the upper and lower boundaries based on an edge detection operator, and obtaining a complete license plate by row accumulation and definition of the left and right boundaries of the license plate;
the character segmentation unit is used for carrying out character segmentation on the accurately positioned complete license plate according to a binarization algorithm and connected domain analysis;
and the character recognition unit is used for recognizing the cutting characters through the convolutional neural network to obtain a detection result.
Compared with the prior art, the license plate recognition method and device provided by the invention train the AdaBoost cascade license plate classifier through a local license plate image based on image characteristics, eliminate invalid detection frames by adopting a multi-scale detection frame fusion method, accurately divide the upper and lower boundaries of the license plate through multi-threshold binarization and RANSAC, extract character edge information based on an edge detection operator, obtain a complete license plate through row accumulation and definition of the left and right boundaries of the license plate, perform character cutting according to a binarization algorithm and connected domain analysis, and finally recognize the optimal division character through a convolutional neural network to obtain a detection result, thereby solving the problems of insufficient parameter storage space and high cost of a license plate recognition system on the existing embedded platform.
Drawings
FIG. 1 is a flow chart illustrating steps of a license plate recognition method according to the present invention;
FIG. 2 is a schematic diagram illustrating the training of an Adaboost license plate classifier based on image feature training in an embodiment of the present invention;
FIG. 3 is a diagram of a license plate character recognition network framework according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a character cutting optimization strategy based on detection confidence in an embodiment of the present invention;
FIG. 5 is a system architecture diagram of a license plate recognition device according to the present invention;
FIG. 6 is a detailed structure diagram of a license plate accurate positioning unit according to an embodiment of the present invention;
FIG. 7 is a detailed diagram of a character segmentation unit according to an embodiment of the present invention;
fig. 8 is a schematic diagram of a license plate recognition process in an embodiment of the invention.
Detailed Description
Other advantages and capabilities of the present invention will be readily apparent to those skilled in the art from the present disclosure by describing the embodiments of the present invention with specific embodiments thereof in conjunction with the accompanying drawings. The invention is capable of other and different embodiments and its several details are capable of modification in various other respects, all without departing from the spirit and scope of the present invention.
Fig. 1 is a flowchart illustrating steps of a license plate recognition method according to the present invention. As shown in fig. 1, the license plate recognition method of the present invention includes the following steps:
step S1, an AdaBoost cascade license plate classifier is trained by using the local license plate image based on the image characteristics, and the video sequence picture is subjected to sliding window detection by using the AdaBoost cascade license plate classifier (namely, a license plate detector) obtained by training, so that a license plate rough selection area in the video sequence picture can be obtained. In the invention, the Adaboost classifier trains different weak classifiers aiming at the same training set, and then the weak classifiers are integrated to form a strong classifier.
Specifically, the license plate classifier detects the approximate position of a plate in a video sequence, and its detection accuracy directly affects the subsequent character segmentation and character recognition. Among existing plate-positioning methods, those based on texture and color space are easily disturbed by lighting and background, CNN-based methods have an oversized parameter scale, and feature-based methods work best. Considering the storage space and hardware constraints of an embedded platform, the algorithm complexity and storage cost should be kept low; the invention therefore trains the AdaBoost license plate detector on image features, achieving a plate-positioning accuracy above 99.7% while meeting the low-power requirement of the embedded platform. Meanwhile, given the repetitive character features within the plate region, local (partial) plates are used instead of whole plates to train the plate detector, reducing the data samples and parameter scale required for training. The positive plate samples include plates under different illumination conditions and degrees of fouling, and the negative samples include non-plate images of the scenes where cars are located, such as road surfaces, car bodies, and street backgrounds.
In a specific embodiment of the invention, 4000 complete license plates (46x12 in size) are prepared, and each plate is divided into left and right halves of 23x12, giving 8000 positive samples; 24000 negative samples are prepared at a ratio of 1:3. The positive samples include yellow and blue plates under different illumination conditions and degrees of fouling, and the negative samples include non-license-plate images of the scenes where cars are located, such as road surfaces, car bodies, and street backgrounds. The number of training stages is set to 10. The training diagram is shown in fig. 2.
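The half-plate sample preparation described above can be sketched as follows. This is a minimal illustration with a toy in-memory image representation (a list of pixel rows); the helper name `split_plate` is hypothetical and not from the patent.

```python
# Hypothetical sketch of the half-plate sample preparation: a 46x12
# full-plate crop is split into left and right 23x12 halves, doubling
# the positive-sample count for AdaBoost training.

def split_plate(plate):
    """plate: list of 12 rows, each a list of 46 pixel values.
    Returns (left_half, right_half), each 12 rows x 23 columns."""
    left = [row[:23] for row in plate]
    right = [row[23:] for row in plate]
    return left, right

# toy stand-in for the 4000 full plates (here: 3 blank plates -> 6 halves)
plates = [[[0] * 46 for _ in range(12)] for _ in range(3)]
positives = []
for p in plates:
    l, r = split_plate(p)
    positives.extend([l, r])

print(len(positives))
print(len(positives[0]), len(positives[0][0]))
```

With the embodiment's 4000 plates this yields the 8000 positive samples mentioned above.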
After the AdaBoost cascade license plate classifier is trained, the AdaBoost cascade license plate classifier obtained through training is used for carrying out sliding window detection on the video sequence picture, and a license plate rough selection area can be obtained. Compared with the prior complete license plate training, the method has the advantages that the required training data is smaller, the parameters required to be stored in the detection process are reduced, the breadth of the data required to be processed in each detection in the detection process is reduced, the area occupied by an Application-Specific Integrated Circuit (ASIC) is reduced in the implementation process of the embedded system, and the resources are saved.
And step S2, merging the detection frames obtained in the step S1 by using a multi-scale frame fusion method, and removing the detection frames with low confidence degrees.
Specifically, in the process of detecting the license plate by using the detector, the detector using local license plate training provided by the invention detects the license plate area on different levels for multiple times, and for the purpose, the invention adopts a multi-scale frame fusion method to combine the detection frames and simultaneously remove the low-confidence detection frame. Specifically, the detection frames are classified according to levels, frame fusion is performed on each level, all candidate frames on the current level are traversed, the similarity degree between the detection frames is measured by using a formula 1, when the similarity degree exceeds a set threshold value, the similar detection frames are combined, and the total number of frames in the current set is counted to serve as a confidence coefficient C.
Similarity = f(rect_1, rect_2, ..., rect_n) (formula 1)
C = g(Similarity > Thresh) (formula 2)
Wherein f is a function measuring the degree of similarity between the detection frames, rect_i is the i-th detection frame to be evaluated, Similarity is the calculated frame similarity, Thresh is the threshold for judging similarity, and the g function derives the confidence C of the fused frame from the similarity. After the merging of frames at every level is completed, the detection frames whose confidence C is below the set threshold are removed, the remaining frames are scaled back to the original image size, and a second frame merging is performed at the original size.
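A minimal sketch of this frame-fusion step, with intersection-over-union (IoU) standing in for the unspecified similarity function f (an assumption; the patent leaves f abstract). Boxes are (x, y, w, h) tuples, clusters are merged greedily against their seed box, and the cluster size serves as the confidence C.

```python
# Sketch of multi-scale frame fusion: IoU is an assumed stand-in for the
# abstract similarity function f; confidence C is the cluster size.

def iou(a, b):
    ax1, ay1, ax2, ay2 = a[0], a[1], a[0] + a[2], a[1] + a[3]
    bx1, by1, bx2, by2 = b[0], b[1], b[0] + b[2], b[1] + b[3]
    iw = max(0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union else 0.0

def fuse_boxes(boxes, sim_thresh=0.5, min_conf=2):
    """Greedily assign each box to the first cluster whose seed box it
    overlaps beyond sim_thresh; average each surviving cluster and drop
    clusters with confidence C = len(cluster) < min_conf."""
    clusters = []
    for b in boxes:
        for c in clusters:
            if iou(c[0], b) > sim_thresh:   # compare against cluster seed
                c.append(b)
                break
        else:
            clusters.append([b])
    fused = []
    for c in clusters:
        if len(c) >= min_conf:              # confidence filter
            n = len(c)
            fused.append(tuple(sum(v) // n for v in zip(*c)))
    return fused

dets = [(10, 10, 40, 14), (12, 11, 40, 14), (11, 9, 42, 13), (200, 50, 40, 14)]
print(fuse_boxes(dets))  # the lone box at (200, 50) is removed as low-confidence
```

The thresholds here are illustrative; in the patent the similarity threshold Thresh and the confidence cutoff are tuned parameters.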
And step S3, after the fused detection frame is obtained, cutting a license plate Region (ROI) image on the original input image, and accurately positioning the license plate, namely accurately positioning the upper and lower boundaries.
Specifically, in step S3, the approximate position of the license plate is cut out from the original image, and the license plate is precisely located. The invention adopts a multi-threshold binarization strategy to extract character vertexes, combines a RANSAC algorithm to position upper and lower boundaries, and completes geometric correction. Compared with the background technology that CNN is firstly used for recognizing the complete license plate, then angle correction is carried out, and then multi-level binarization is carried out to determine the character position, the method has the advantages that the used strategy calculation amount is greatly reduced. In the accurate positioning process, performing positive color and reverse color detection twice on the candidate area, comparing the number of character frames obtained twice, judging the character color of the input image, and if the positive image character frame is more than the reverse image character frame, determining that the original license plate is a white character; otherwise, the original license plate is black characters.
Specifically, step S3 further includes:
step S300, calculating a binarization threshold value of the license plate candidate area, carrying out binarization for multiple times within a certain gray scale range near the calculated threshold value, carrying out connected domain analysis on each level of binarized image, and extracting frames which accord with the aspect ratio of characters.
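The multi-threshold binarization of step S300 can be sketched as below. The patent only says "a certain gray scale range near the calculated threshold", so the base threshold (mean gray level) and the offset set are illustrative assumptions, as is the character aspect-ratio window.

```python
# Sketch of step S300 under stated assumptions: base threshold = mean gray
# value; binarization repeated at fixed offsets around it; candidate
# component boxes kept only if their aspect ratio looks character-like.

def multi_threshold_binarize(gray, offsets=(-20, -10, 0, 10, 20)):
    flat = [v for row in gray for v in row]
    base = sum(flat) // len(flat)          # assumed base threshold: mean gray
    levels = []
    for off in offsets:
        t = base + off
        levels.append([[1 if v > t else 0 for v in row] for row in gray])
    return base, levels

def char_aspect_ok(w, h, lo=0.3, hi=0.8):
    """Keep boxes whose width/height ratio looks like a plate character
    (lo/hi bounds are illustrative)."""
    return h > 0 and lo <= w / h <= hi

gray = [[30, 200, 40], [35, 210, 45]]      # toy 3x2 gray patch
base, levels = multi_threshold_binarize(gray)
print(base, levels[2])
```

Each binarized level would then go through connected-domain analysis, keeping only the boxes that pass `char_aspect_ok`.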
Step S301, the RANSAC algorithm is used for positioning the upper and lower boundaries.
In a specific embodiment of the present invention, two data sets are used to record, respectively, the top-left and bottom-right vertices of the boxes that meet the above conditions. Two vertices A and B are randomly selected from the top-left vertex set and a straight line AB is fitted; whether the remaining points in the set fall within a certain threshold range of the fitted line is determined, the number Count of points meeting the threshold is counted, and this is iterated in a loop, recording the vertices A_best and B_best corresponding to the maximum Count. The straight line determined by A_best and B_best is taken as the upper boundary line of the license plate; the lower boundary line of the license plate is obtained in the same way.
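This RANSAC loop can be sketched as follows for the top-left vertex set. The distance threshold, iteration count, and seed are illustrative assumptions; the patent fixes none of them.

```python
import random

# Minimal RANSAC sketch of step S301: repeatedly sample two vertices,
# fit the line through them, count inliers within dist_thresh, and keep
# the pair (A_best, B_best) with the maximum Count.

def ransac_line(points, dist_thresh=1.5, iters=200, seed=0):
    rng = random.Random(seed)
    best_pair, best_count = None, -1
    for _ in range(iters):
        a, b = rng.sample(points, 2)
        if a[0] == b[0]:
            continue                      # skip vertical degenerate pairs
        slope = (b[1] - a[1]) / (b[0] - a[0])
        icept = a[1] - slope * a[0]
        count = sum(1 for (x, y) in points
                    if abs(y - (slope * x + icept)) <= dist_thresh)
        if count > best_count:
            best_pair, best_count = (a, b), count
    a, b = best_pair                      # A_best, B_best
    slope = (b[1] - a[1]) / (b[0] - a[0])
    return slope, a[1] - slope * a[0], best_count

# top-left vertices roughly on a gentle line, plus two outlier vertices
pts = [(x, round(0.05 * x + 4)) for x in range(0, 70, 10)] + [(15, 30), (55, 1)]
slope, icept, inliers = ransac_line(pts)
print(inliers, round(slope, 2))
```

The returned line would serve as the upper boundary; the same routine on the bottom-right vertex set yields the lower boundary.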
And S302, performing geometric correction on the positioning result obtained in the step S301, and calibrating the license plate to a standard position. Since the license plate image has different degrees of deformation (distortion, rotation, etc.) during the shooting process, in step S302, the four intersections of the upper and lower boundary lines and the image edge are used as control points to geometrically correct the license plate to the standard position.
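The patent corrects the plate using the four boundary-line/image-edge intersections as control points, which amounts to a perspective (or at least affine) warp. As a deliberately simplified stand-in, the sketch below applies only a vertical shear that levels a tilted top boundary; this is an assumption (small rotation, no perspective distortion), not the patent's full correction.

```python
# Simplified stand-in for step S302's geometric correction: a vertical
# shear that makes the top boundary line y = slope*x + c horizontal.
# Assumption: small tilt only; a real implementation would use a
# four-point perspective transform.

def shear_correct(img, slope):
    """img: list of pixel rows; vacated pixels are filled with 0."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for x in range(w):
        dy = int(slope * x)              # vertical offset for this column
        for y in range(h):
            src = y + dy
            if 0 <= src < h:
                out[y][x] = img[src][x]
    return out

# toy image whose bright stripe drifts downward with slope 0.5
img = [[0] * 6 for _ in range(6)]
for x in range(6):
    img[min(5, int(0.5 * x))][x] = 1
corrected = shear_correct(img, 0.5)
print([row.count(1) for row in corrected])
```

After correction the stripe sits on a single row, analogous to the plate being calibrated to its standard position.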
And step S4, extracting character edge information from the image with the accurate positioning of the upper and lower boundaries based on an edge detection operator, and obtaining a complete license plate by row accumulation and defining the left and right boundaries of the license plate. Specifically, the edge of the image with the accurate upper and lower boundaries is extracted by using an edge detection operator, the image with the vertical edges is binarized, the cumulative sum in the row direction is calculated, a cumulative sum edge intensity change curve is drawn, the intensity of the left edge and the right edge of the license plate is small relative to a character area and approaches to 0 on the change curve, the left edge and the right edge are respectively shrunk to the middle from the left end and the right end by a certain threshold value according to the energy, and the left edge and the right edge are defined, so that the complete license plate with the accurate positioning is obtained.
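The left/right boundary search of step S4 can be sketched as follows: per-column sums of the binarized vertical-edge map form the accumulated intensity curve, and scanning inward from both ends past the near-zero border region locates the plate edges. The edge operator itself (e.g. a Sobel filter) and the energy threshold are assumed details.

```python
# Sketch of step S4's boundary search on a binary vertical-edge map.

def column_profile(edge_bin):
    """Accumulate each column of a binary vertical-edge image."""
    h, w = len(edge_bin), len(edge_bin[0])
    return [sum(edge_bin[y][x] for y in range(h)) for x in range(w)]

def lr_bounds(profile, energy_thresh=2):
    """Shrink inward from both ends until the edge energy exceeds the
    threshold (illustrative value), defining the left and right edges."""
    left = 0
    while left < len(profile) and profile[left] < energy_thresh:
        left += 1
    right = len(profile) - 1
    while right >= 0 and profile[right] < energy_thresh:
        right -= 1
    return left, right

# toy edge map: character edges occupy columns 3..8 of a 12-wide plate
edge = [[1 if 3 <= x <= 8 else 0 for x in range(12)] for _ in range(5)]
print(lr_bounds(column_profile(edge)))  # → (3, 8)
```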
And step S5, performing character cutting on the accurately positioned complete license plate according to a binarization algorithm and connected domain analysis.
Specifically, step S5 further includes:
and S500, performing binarization and connected domain analysis on the accurately positioned complete license plate on the gray scale map to obtain an external frame of the character.
Step S501: the proportion of white dots in each circumscribed frame is counted, detection frames whose number of white dots is smaller than a certain threshold are removed, and the frames are sorted by ascending abscissa to obtain each frame's center position and the total number of frames N. Among the sorted frames, the frame lying between 1/M and 2/M of the image width (where M is the number of characters on the license plate to be detected) is located; from it, the position of one character frame is extrapolated forward as the Chinese character, and the following M-2 characters are then searched backward, giving the M characters on the license plate and thus the cut characters to be detected.
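A sketch of the box-selection logic of step S501, under the assumption that the second character's frame center lies between 1/M and 2/M of the image width (matching the 1/7–2/7 positions of the seven-character embodiment); `pick_character_boxes` and all values are illustrative:

```python
def pick_character_boxes(boxes, img_w, m):
    """boxes are (x, y, w, h) tuples. Find the box assumed to be the
    second character, extrapolate one box width to the left for the
    Chinese character, then take the following m-2 boxes."""
    boxes = sorted(boxes, key=lambda b: b[0])      # ascending abscissa
    for i, (x, y, w, h) in enumerate(boxes):
        cx = x + w / 2.0
        if img_w / m <= cx <= 2.0 * img_w / m:
            chinese = (x - w, y, w, h)             # extrapolated forward
            rest = boxes[i + 1:i + m - 1]          # the following m-2 boxes
            if len(rest) == m - 2:
                return [chinese, boxes[i]] + rest
    return None
```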
Step S6: the optimally segmented characters are recognized by a convolutional neural network to obtain the detection result. In an embodiment of the present invention, a character recognition convolutional neural network is constructed as shown in FIG. 3, in which the convolutional layers perform image feature extraction and the fully connected layers perform classification; the last fully connected layer FC_n contains Q output nodes corresponding to Q characters and outputs the probabilities p_i. The network parameters are first trained on calibrated character data, and the M characters are then recognized in sequence with the trained parameters to obtain the license plate recognition result.
Preferably, for character segmentation the present invention provides an optimization strategy based on character detection probability, to prevent recognition errors caused by inaccurate detection of the second character. The first detection still performs character segmentation according to the above strategy and applies histogram equalization; the character recognition of step S6 then yields the recognition probability p_i of each character (p_i being the output probability of the i-th character in the recognition network). The recognition probabilities of the M characters are jointly evaluated by g(p_i), and the frame combination corresponding to the maximum confidence is taken as the optimal solution, as shown in Equation 3.
optimal = Max_{k=1,...,K} g(p_1^(k), ..., p_M^(k)),  K = N - M + 1    (Equation 3)
Here the evaluation function g(x) computes the confidence score of the current segmentation, and Max(x) compares the confidence scores of the K (K = N - M + 1) frame combinations to yield the optimal combination. When the number of cut frames is greater than M, there is frame redundancy in the license plate area and the segmentation may be affected by the license plate boundary; the segmentation is therefore translated left and right from the second-character frame position obtained above, a new frame combination is recalculated, and the newly combined frames are recognized to obtain a new confidence score. As shown in FIG. 4, the first calculation obtains the character frame P_1, the M license plate characters are segmented and recognized, and their confidence is computed. The window is then shifted right to position P_2, and so on, until P = N - M + 2 or P = 1, when the loop ends. The confidences of all frame combinations are compared to obtain the optimal frame combination.
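The combination search can be sketched as a sliding window over the N sorted boxes. For simplicity, this sketch assumes a fixed recognition probability per box, whereas the method above re-runs recognition for each shifted combination; the RMS scoring follows the Equation-6 form used in the embodiment, and both function names are illustrative:

```python
import math

def rms_confidence(probs):
    """Equation-6-style score: root of the mean squared probability."""
    return math.sqrt(sum(p * p for p in probs) / len(probs))

def best_combination(all_probs, m):
    """Slide an m-wide window over the n candidate probabilities and
    keep the window (frame combination) with the highest RMS score;
    K = n - m + 1 combinations are compared, as in Equation 3."""
    n = len(all_probs)
    best_k, best_score = 0, -1.0
    for k in range(n - m + 1):
        score = rms_confidence(all_probs[k:k + m])
        if score > best_score:
            best_k, best_score = k, score
    return best_k, best_score
```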
Fig. 5 is a system architecture diagram of a license plate recognition device according to the present invention. As shown in fig. 5, a vehicle identification device of the present invention includes:
the license plate preliminary detection unit 501 is configured to train an AdaBoost cascade license plate classifier using a local license plate image based on image features, and perform sliding window detection on a video sequence picture using the trained AdaBoost cascade license plate classifier (that is, a license plate detector), so as to obtain a license plate rough selection area in the video sequence picture. In the invention, the Adaboost classifier trains different weak classifiers aiming at the same training set, and then the weak classifiers are integrated to form a strong classifier.
Specifically, the license plate classifier detects the approximate position of the license plate in the video sequence, and its detection accuracy directly affects the subsequent character segmentation and character recognition. Among existing license plate positioning methods, those based on texture and color space are easily affected by lighting and background, CNN-based methods have an excessive parameter scale, and feature-based methods perform best. Considering the storage space and hardware requirements of the embedded platform, the aim is to reduce algorithm complexity and the required storage cost; the invention therefore trains an AdaBoost license plate detector on image features, achieving a license plate positioning accuracy above 99.7% while meeting the low-power requirements of the embedded platform. Meanwhile, given the repetitiveness of character features in the license plate region, a partial license plate is used instead of the whole license plate to train the detector, reducing the data samples and parameter scale required for training. The positive samples comprise license plates under different illumination conditions and with different degrees of soiling, and the negative samples comprise non-license-plate images from the vehicle's scene, such as road surface, vehicle body, and street background.
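The AdaBoost principle described above — training different weak classifiers on the same set and combining them into a strong classifier — can be sketched with decision stumps as the weak learners. This is a generic stand-in, not the patent's LBP-feature cascade; `train_adaboost_stumps` and all parameters are illustrative assumptions:

```python
import numpy as np

def train_adaboost_stumps(X, y, rounds=10):
    """Minimal AdaBoost: each round fits the decision stump minimizing
    weighted error, then reweights samples so later rounds focus on
    the mistakes. Labels y must be +1/-1."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)
    ensemble = []
    for _ in range(rounds):
        best = None
        for f in range(d):
            for t in np.unique(X[:, f]):
                for pol in (1, -1):
                    pred = np.where(X[:, f] >= t, pol, -pol)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, f, t, pol)
        err, f, t, pol = best
        err = min(max(err, 1e-10), 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)   # weak-classifier weight
        pred = np.where(X[:, f] >= t, pol, -pol)
        w *= np.exp(-alpha * y * pred)          # boost misclassified samples
        w /= w.sum()
        ensemble.append((alpha, f, t, pol))
    return ensemble

def predict_adaboost(ensemble, X):
    score = sum(a * np.where(X[:, f] >= t, pol, -pol)
                for a, f, t, pol in ensemble)
    return np.sign(score)
```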
And the multi-scale frame fusion unit 502 is configured to merge the detection frames obtained by the license plate preliminary detection unit 501 by using a multi-scale frame fusion method, and reject the detection frames with low confidence degrees.
Specifically, during detection the partial-license-plate detector provided by the invention detects the license plate region multiple times at different scales; the multi-scale frame fusion unit 502 therefore adopts a multi-scale frame fusion method to merge the detection frames and remove those with low confidence. First, the detection frames are classified by level and frame fusion is performed on each level: all candidate frames on the current level are traversed, the degree of similarity between detection frames is measured with formula 1, similar detection frames are merged when the similarity exceeds a set threshold, and the total number of frames in the current set is counted as the confidence C.
After the merging of frames on all levels is finished, detection frames whose confidence C is smaller than the set threshold are removed, the frames meeting the conditions are scaled to the original image size, and a second frame merging is performed at the original size.
The license plate accurate positioning unit 503 is configured to, after the fused detection frame is obtained, crop a license plate Region of Interest (ROI) image from the original input image and perform accurate license plate positioning, that is, complete the accurate positioning of the upper and lower boundaries.
That is, the license plate accurate positioning unit 503 crops the approximate license plate position from the original image and performs accurate license plate positioning. The unit 503 extracts character vertices with a multi-threshold binarization strategy, positions the upper and lower boundaries in combination with the RANSAC algorithm, and completes geometric correction. During accurate positioning, detection is performed twice on the candidate area, once on the positive image and once on the inverted image; the numbers of character frames obtained in the two passes are compared to judge the character color of the input image. If the positive image yields more character frames than the inverted image, the original license plate has white characters; otherwise, it has black characters.
Specifically, as shown in fig. 6, the license plate accurate positioning unit 503 further includes:
a multi-threshold binarization unit 5031, configured to calculate a binarization threshold for the license plate candidate region, binarize multiple times within a certain grayscale range around the calculated threshold, perform connected domain analysis on a graph for each level of binarization, and extract a frame that meets a character aspect ratio.
An upper and lower boundary positioning unit 5032 for positioning the upper and lower boundaries using the RANSAC algorithm.
In a specific embodiment of the present invention, two data sets are used to record the top-left and bottom-right vertices, respectively, of the boxes that meet the above conditions. Two vertices A and B are randomly selected from the top-left vertex set and a straight line AB is fitted; it is then judged whether the remaining points in the set fall within a certain threshold range of the fitted line, and the number Count of points meeting the threshold is counted. Iterating in a loop, the vertices A_best and B_best corresponding to the maximum Count are recorded. The straight line determined by A_best and B_best is used as the upper boundary line of the license plate; the lower boundary line of the license plate can be obtained by the same method.
A geometric correction unit 5033, configured to perform geometric correction on the positioning result of the upper and lower boundary positioning unit 5032, so as to calibrate the license plate to a standard position. Since the license plate image has different degrees of deformation (distortion, rotation, etc.) during the photographing process, the geometric correction unit 5033 geometrically corrects the license plate to a standard position by taking four intersection points of the upper and lower boundary lines and the image edge as control points.
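The four-control-point correction can be sketched by solving the 8-parameter projective (homography) mapping that sends the detected intersection points to the corners of the standard position; `projective_transform` and `warp_point` are hypothetical helpers, not the patent's implementation:

```python
import numpy as np

def projective_transform(src, dst):
    """Solve the 8-parameter projective mapping sending the four
    control points src to dst (h33 fixed to 1)."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def warp_point(H, x, y):
    """Apply the homography to one point (homogeneous coordinates)."""
    p = H @ np.array([x, y, 1.0])
    return p[0] / p[2], p[1] / p[2]
```

In practice the inverse mapping is applied per output pixel with interpolation; the sketch only shows how the transform itself is obtained from the four control points.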
The character edge information extraction unit 504 is configured to extract character edge information, based on an edge detection operator, from the image whose upper and lower boundaries have been accurately positioned, and to obtain the complete license plate by column-wise accumulation and delimiting the left and right boundaries. Specifically, edges are extracted with an edge detection operator, the vertical-edge image is binarized, and the accumulated sum along each column is computed to draw an edge-intensity curve. On this curve, the intensity at the left and right edges of the license plate is small relative to the character area and approaches 0, so the boundaries are shrunk inward from the two ends according to a certain energy threshold; delimiting the left and right boundaries in this way yields the accurately positioned complete license plate.
And the character segmentation unit 505 is configured to perform character segmentation on the precisely positioned complete license plate according to a binarization algorithm and connected domain analysis.
Specifically, as shown in fig. 7, the character segmentation unit 505 further includes:
the binarization unit 5051 is configured to perform binarization and connected domain analysis on the precisely located complete license plate on the grayscale map to obtain an external frame of the character.
The character cutting unit 5052 counts the proportion of white dots in each circumscribed frame, removes detection frames whose number of white dots is smaller than a certain threshold, and sorts the frames by ascending abscissa to obtain each frame's center position and the total number of frames N. Among the sorted frames, the character frame P lying between 1/M and 2/M of the image width (where M is the number of characters on the license plate to be detected) is located; from it, the position of one character frame is extrapolated forward as the Chinese character, and the following M-2 characters are then searched backward, giving the M characters on the license plate and thus the characters to be detected.
The character recognition unit 506 recognizes the optimally segmented characters through a convolutional neural network to obtain the detection result. In an embodiment of the present invention, a character recognition convolutional neural network is constructed as shown in FIG. 3, in which the convolutional layers perform image feature extraction and the fully connected layers perform classification; the last fully connected layer FC_n contains Q output nodes corresponding to Q characters and outputs the probabilities p_i. The network parameters are first trained on calibrated character data, and the M characters are then recognized in sequence with the trained parameters to obtain the license plate recognition result.
Preferably, for character segmentation the present invention provides an optimization strategy based on character detection probability to prevent recognition errors caused by inaccurate detection of the second character; that is, the invention further comprises a confidence evaluation unit 507. The first detection still performs character segmentation according to the above strategy and applies histogram equalization; the character recognition unit 506 then yields the recognition probability p_i of each character (p_i being the output probability of the i-th character in the recognition network), and the confidence evaluation unit 507 jointly evaluates the recognition probabilities of the M characters with g(p_i) and takes the frame combination corresponding to the maximum confidence as the optimal solution, as shown in Equation 3.
optimal = Max_{k=1,...,K} g(p_1^(k), ..., p_M^(k)),  K = N - M + 1    (Equation 3)
Here the evaluation function g(x) computes the confidence score of the current segmentation, and Max(x) compares the confidence scores of the K (K = N - M + 1) frame combinations to yield the optimal combination. When the number of cut frames is greater than M, there is frame redundancy in the license plate area and the segmentation may be affected by the license plate boundary; the segmentation is therefore translated left and right from the second-character frame position obtained above, a new frame combination is recalculated, and the newly combined frames are recognized, with the confidence evaluation unit 507 producing a new confidence score. The first calculation obtains the character frame P_1, the M license plate characters are segmented and recognized, and their confidence is computed by the confidence evaluation unit 507; the window is shifted right to position P_2, and so on, until P = N - M + 2 or P = 1, when the loop ends. The confidences of all frame combinations are compared to obtain the optimal frame combination.
Fig. 8 is a schematic diagram of the license plate recognition flow in an embodiment of the invention. The license plate recognition of the present invention is further illustrated by a specific embodiment as follows. In this embodiment, a license plate recognition method on an embedded platform is provided; it is not limited to license plates and has a generalized recognition capability for printed fonts under general conditions (such as house numbers). The method is thus universal for all license plates and other printed fonts.
Specifically, the license plate recognition process is as follows:
1. Considering the repetitiveness of character features in the license plate region, this embodiment replaces the original whole license plate with a half-size partial license plate and trains the license plate detector using LBP features. Compared with the 45x15 Haar features used in a prior patent, LBP features are simpler and require less computation, and by compressing the training sample to 23x12 the number of license plates to be prepared is only half of that for whole-plate training. To detect the license plate position from a video sequence, 4,000 complete license plates (46x12) were prepared and each was split into a left and a right 23x12 half, giving 8,000 positive samples; 24,000 negative samples were prepared at a ratio of 1:3. The positive samples comprise yellow and blue plates under different illumination conditions and with different degrees of soiling, and the negative samples comprise non-license-plate images from the vehicle's scene, such as road surface, vehicle body, and street background. The number of training stages is set to 10. The training diagram is shown in FIG. 2. Sliding-window detection is performed on the video sequence pictures with the trained parameters to obtain the license plate rough-selection area. On 1,000 all-day images from a highway intersection (including low-light night scenes), the parameters trained by the invention correctly detected 997 license plate positions, a positioning accuracy of 99.7%.
Compared with training on complete license plates, the proposed method halves the required training data, halves the parameters to be stored during detection, and halves the image area to be processed in each detection pass; in an embedded implementation this reduces the area occupied by the Application-Specific Integrated Circuit (ASIC), saving half the resources.
2. During detection, the detector finds the license plate region multiple times at different scales; the detection frames are merged and low-confidence frames are removed according to the multi-scale frame fusion method provided by the invention. The similarity formula in this embodiment is given as Equation 4 below: the similarity is judged by computing the overlap ratio between two frames.
Similarity = Intersect(rect_1, rect_2) / min(Area(rect_1), Area(rect_2))    (Equation 4)
Here Intersect() computes the intersection area of the two detection frames, and min() takes the area of the smaller frame. Similar frames are merged at each scale; this embodiment uses a union-set (merge-set) operation, after which all frames are restored to the original image size for the second merging.
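The overlap-ratio similarity of Equation 4 and a single-level greedy merge can be sketched as follows. The running-average merge is a simple stand-in for the union-set operation mentioned above, and the 0.6 threshold is an illustrative assumption:

```python
def overlap_similarity(r1, r2):
    """Equation 4: intersection area over the smaller frame's area.
    Frames are (x, y, w, h) tuples."""
    ix = max(0, min(r1[0] + r1[2], r2[0] + r2[2]) - max(r1[0], r2[0]))
    iy = max(0, min(r1[1] + r1[3], r2[1] + r2[3]) - max(r1[1], r2[1]))
    inter = ix * iy
    smaller = min(r1[2] * r1[3], r2[2] * r2[3])
    return inter / smaller if smaller > 0 else 0.0

def fuse_boxes(boxes, thresh=0.6):
    """Greedy single-level fusion: merge frames whose overlap ratio
    exceeds thresh; each group's merge count serves as its confidence C."""
    groups = []                              # each entry: [box, count]
    for b in boxes:
        for g in groups:
            if overlap_similarity(g[0], b) > thresh:
                n = g[1]                     # running average of the group
                g[0] = tuple((g[0][i] * n + b[i]) / (n + 1) for i in range(4))
                g[1] = n + 1
                break
        else:
            groups.append([b, 1])
    return groups
```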
3. After the fused detection frame is obtained from step 2, the license plate region (ROI image) is cropped from the original input image and accurate license plate positioning is performed. Specifically, the invention extracts character vertices with a multi-level binarization strategy, positions the upper and lower boundaries in combination with the RANSAC algorithm, and completes geometric correction. Positive and inverted detection are both performed on the ROI image, and the numbers of character frames obtained in the two passes are compared to judge the character color of the input image. This embodiment adopts an OTSU threshold: binarization is performed at several gray-level steps within a certain range around the computed OTSU threshold, connected-domain analysis is performed on each binarized image, candidate frames conforming to the character aspect ratio are stored, and the frame counts of the positive and inverted images are stored for character-color judgment.
1) Two data sets are used to respectively record the top-left and bottom-right vertices of the detection frames obtained in step 3. Two vertices A and B are randomly selected from the top-left vertex set and a straight line AB is fitted; whether the remaining points in the set lie within the threshold range of the fitted line is judged, the number of points Count meeting the threshold is counted, and this is iterated in a loop, recording the vertices A_best and B_best corresponding to the maximum Count. The straight line determined by A_best and B_best is used as the upper boundary line of the license plate; the lower boundary line of the license plate can be obtained by the same method.
2) The license plate region is cropped according to the upper and lower boundary lines, and the four intersection points of the boundary lines with the edges of the ROI image are taken as control points for geometric correction. This embodiment selects a projection mapping method.
3) In this embodiment, a Sobel operator (Equation 5) is selected to extract vertical edges from the corrected image; the vertical-edge image is binarized, the column sums are computed to obtain a column-direction edge-intensity distribution, the regions at the left and right where the boundary energy is smaller than the threshold are removed according to the edge-intensity information, and the image is cut to the complete license plate size and output.
G_x = [ -1  0  1 ;  -2  0  2 ;  -1  0  1 ]    (Equation 5)
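The OTSU threshold used for the multi-level binarization in step 3 can be computed as below; `multi_threshold_levels` with its step/span values is an illustrative assumption about how levels are placed around the threshold:

```python
import numpy as np

def otsu_threshold(gray):
    """Classic OTSU: pick the gray level maximizing the between-class
    variance sigma_b^2(k) = (mu_T*omega(k) - mu(k))^2 / (omega(k)*(1-omega(k)))."""
    hist = np.bincount(np.asarray(gray, dtype=np.uint8).ravel(),
                       minlength=256).astype(float)
    prob = hist / hist.sum()
    omega = np.cumsum(prob)                     # class-0 probability
    mu = np.cumsum(prob * np.arange(256))       # class-0 cumulative mean
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b = np.nan_to_num(sigma_b)
    return int(np.argmax(sigma_b))

def multi_threshold_levels(gray, step=8, span=16):
    """Binarization levels placed around the OTSU value, as in the
    multi-level strategy (step and span are assumptions)."""
    t = otsu_threshold(gray)
    return [max(0, min(255, t + d)) for d in range(-span, span + 1, step)]
```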
4. The complete license plate image is binarized (this embodiment selects the OTSU method) and connected-domain analysis is performed on the binary image to obtain the circumscribed rectangular frame of each character. The proportion of white dots in each frame is counted, detection frames whose number of white dots is smaller than a certain threshold are removed, and the rectangular frames are sorted in ascending order of their x coordinate. The rectangular frame located between the image abscissas 1/7 and 2/7 is found, and from the center position and width of this frame the size of one character is extrapolated forward to obtain the detection frame of the Chinese character. The next 5 characters are then searched among the subsequent candidate frames, giving the cut characters to be detected. The cut characters undergo the character detection of step 5 to obtain the detection confidence of the current frame combination, and the confidences of all detected characters are evaluated to obtain the optimal character-cutting combination. This embodiment evaluates using the principle that the root mean square of the detection confidences is largest, transforming Equation 3 into Equation 6.
optimal = Max_{k=1,...,K} sqrt( (1/M) * sum_{i=1}^{M} (p_i^(k))^2 )    (Equation 6)
5. The cut characters are resampled to the standard input size of the detection network, histogram-equalized, and recognized. In this embodiment there are 65 outputs (31 Chinese characters, 24 letters, and 10 digits); the first license plate character is a Chinese character, the second is a letter, and the following positions are alphanumeric, so the maximum is searched only within the valid result range for each position (for example, the Chinese character is searched among outputs 1-31) to obtain the detection result. Combining the detections of the 7 characters yields the license plate recognition result and the character detection probabilities. This embodiment can correctly recognize multiple license plates in a scene simultaneously and performs well in multi-target scenes.
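The position-restricted search can be sketched as a masked argmax per character position. The exact index layout below (0-30 Chinese, 31-54 letters, 55-64 digits) is an assumption for illustration; the embodiment specifies only that there are 65 outputs and that the Chinese character is searched among outputs 1-31:

```python
import numpy as np

# assumed class layout: indices 0-30 Chinese, 31-54 letters, 55-64 digits
SLICES = {"chinese": (0, 31), "letter": (31, 55), "alnum": (31, 65)}

def decode_plate(prob_rows):
    """Pick each character's class only inside the range valid for its
    position: Chinese first, a letter second, alphanumerics after."""
    kinds = ["chinese", "letter"] + ["alnum"] * (len(prob_rows) - 2)
    out = []
    for probs, kind in zip(prob_rows, kinds):
        lo, hi = SLICES[kind]
        idx = lo + int(np.argmax(probs[lo:hi]))
        out.append((idx, float(probs[idx])))
    return out
```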
In summary, the license plate recognition method and device provided by the invention train an AdaBoost cascade license plate classifier on partial license plate images based on image features, eliminate invalid detection frames with a multi-scale detection-frame fusion method, accurately segment the upper and lower boundaries of the license plate through multi-threshold binarization and RANSAC, extract character edge information with an edge detection operator and obtain the complete license plate by column accumulation and delimiting of the left and right boundaries, perform character segmentation according to a binarization algorithm and connected-domain analysis, and finally recognize the optimally segmented characters through a convolutional neural network to obtain the detection result, thereby solving the problems of insufficient parameter storage space and high cost of license plate recognition systems on existing embedded platforms.
The foregoing embodiments merely illustrate the principles and effects of the present invention and are not intended to limit it. Those skilled in the art may modify and vary the above embodiments without departing from the spirit and scope of the present invention. Therefore, the scope of the invention should be defined by the appended claims.

Claims (9)

1. A license plate recognition method comprises the following steps:
step S1, training an AdaBoost cascade license plate classifier by using a local license plate image based on image characteristics, and performing sliding window detection on a video sequence picture by using the AdaBoost cascade license plate classifier obtained by training to obtain a license plate rough selection area in the video sequence picture, namely a detection frame;
step S2, merging the detection frames obtained in the step S1 by using a multi-scale frame fusion method, and eliminating the detection frames with low confidence;
step S3, after the fused detection frame is obtained, a license plate region image is cut on the original input image, and license plate accurate positioning is carried out to complete accurate positioning of the upper and lower boundaries;
step S4, extracting character edge information from the image with the accurate positioning of the upper and lower boundaries based on an edge detection operator to obtain a complete license plate;
step S5, performing character cutting on the accurately positioned complete license plate according to a binarization algorithm and connected domain analysis;
step S6, recognizing the cutting characters through a convolutional neural network to obtain a detection result;
in step S2, the detection frames are classified according to levels, frame fusion is performed on each level, all candidate frames on the current level are traversed, the degree of similarity between the detection frames is measured, when the degree of similarity exceeds a set threshold, the similar detection frames are merged, the total number of frames in the current set is counted as a confidence C, after merging of the frames on each level, the detection frames with the confidence C smaller than the set threshold are removed, the frames meeting the conditions are scaled to the original breadth size, and second frame merging is performed on the original breadth.
2. The license plate recognition method of claim 1, wherein: in step S2, the similarity and confidence C between the detection frames are calculated using the following formulas:
Similarity = f(rect_1, rect_2, ..., rect_n)
C = g(Similarity > Thresh)
wherein the f function measures the degree of similarity between the detection frames, rect_i is a detection frame to be evaluated, the frame Similarity is obtained through the similarity calculation, Thresh is a threshold for measuring the similarity, and the g function obtains the confidence C of the fused frame from the Similarity.
3. The method for recognizing the license plate of claim 1, wherein the step S3 further comprises:
step S300, calculating a binarization threshold value of a license plate candidate area, carrying out binarization for multiple times within a certain gray scale range near the calculated threshold value, carrying out connected domain analysis on each level of binarized image, and extracting frames which accord with the aspect ratio of characters;
s301, positioning an upper boundary and a lower boundary by using a RANSAC algorithm;
and S302, performing geometric correction on the positioning result obtained in the step S301, and calibrating the license plate to a standard position.
4. The license plate recognition method of claim 3, wherein: in step S301, two data sets are used to respectively record the top-left vertices and the bottom-right vertices of the frames which accord with the character aspect ratio in step S300; two vertices A and B are randomly selected from the top-left vertex set, a straight line AB is fitted, whether the remaining points in the set fall within a certain threshold range of the fitted straight line is judged, the number Count of the points meeting the threshold is counted, iteration is carried out in a loop, and the vertices A_best and B_best corresponding to the maximum Count are recorded; the straight line determined by A_best and B_best is used as the upper boundary line of the license plate, and the lower boundary line of the license plate is obtained in the same way.
5. The license plate recognition method of claim 3, wherein: in step S3, during the accurate positioning process, positive and inverted detection are both performed on the candidate area, the numbers of character frames obtained in the two passes are compared, and the character color of the input image is judged; if the positive image yields more character frames than the inverted image, the original license plate has white characters; otherwise, it has black characters.
6. The license plate recognition method of claim 5, wherein: in step S302, the license plate is geometrically corrected to a standard position by using four intersections of the upper and lower boundary lines and the image edge as control points.
7. The method for recognizing the license plate of claim 1, wherein the step S5 further comprises:
s500, performing binarization and connected domain analysis on the precisely positioned complete license plate on a gray scale map to obtain an external frame of the character;
step S501, counting the proportion of white dots in each external frame, removing the detection frames with the number of the white dots smaller than a certain threshold, sorting the detection frames according to the ascending abscissa of the frames, obtaining the center positions of the frames and the total number N of the frames, and detecting the position of the detected image in the sorted frames
Figure FDA0003654967310000031
And the character frame P is used for forward presuming the position of one character frame as a Chinese character, then backwards searching the rear M-2 characters to obtain M characters on the license plate, and obtaining the cut characters to be detected, wherein M is the number of the characters of the license plate to be detected.
8. The method of claim 1, further comprising the steps of:
after the characters are recognized, the recognition probability of each character is obtained, the recognition probabilities of the M characters are comprehensively evaluated, and the frame combination corresponding to the maximum confidence coefficient is calculated to be the optimal solution.
9. A vehicle identification device comprising:
the license plate preliminary detection unit is used for training an AdaBoost cascade license plate classifier by using a local license plate image based on image characteristics, and performing sliding window detection on the video sequence picture by using the AdaBoost cascade license plate classifier obtained by training to obtain a license plate rough selection area, namely a detection frame, in the video sequence picture;
the multi-scale frame fusion unit, for classifying the detection frames by scale level and performing frame fusion at each level: traversing all candidate frames at the current level, measuring the similarity between detection frames, and merging similar detection frames when the similarity exceeds a set threshold, with the total number of frames in the current set counted as a confidence C; after the merging at each level is completed, eliminating the detection frames whose confidence C is below a set threshold, scaling the qualifying frames back to the original image size, and performing a second round of frame merging on the original image;
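The per-level merging can be sketched as greedy grouping by overlap, with the group size serving as the confidence C. Intersection-over-union as the similarity measure, and the threshold values, are illustrative assumptions; the patent only requires "similarity" against a set threshold.

```python
def iou(a, b):
    """Intersection-over-union of two (x0, y0, x1, y1) frames."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def fuse_frames(frames, sim_thresh=0.5, min_conf=2):
    """Greedily group frames whose IoU exceeds sim_thresh, average each group
    into one merged frame, use the group size as confidence C, and drop
    groups whose C is below min_conf."""
    groups = []
    for f in frames:
        for g in groups:
            if iou(f, g[0]) > sim_thresh:
                g.append(f)
                break
        else:
            groups.append([f])
    fused = []
    for g in groups:
        if len(g) >= min_conf:
            n = len(g)
            fused.append(tuple(sum(r[i] for r in g) / n for i in range(4)))
    return fused

# Three mutually overlapping detections plus one isolated false positive:
frames = [(10, 10, 60, 30), (12, 11, 62, 31), (11, 9, 61, 29), (200, 200, 250, 220)]
fused = fuse_frames(frames)   # the lone frame has C=1 and is eliminated
```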
the precise license plate positioning unit, for cropping the license plate region image from the original input image after the fused detection frames are obtained, and performing precise license plate positioning to complete the precise positioning of the upper and lower boundaries;
the character edge information extraction unit, for extracting character edge information, based on an edge detection operator, from the image with precisely positioned upper and lower boundaries to obtain the complete license plate;
the character segmentation unit, for performing character segmentation on the precisely positioned complete license plate according to a binarization algorithm and connected-domain analysis;
and the character recognition unit, for recognizing the segmented characters with a convolutional neural network to obtain the detection result.
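The recognition unit's forward pass can be sketched, at its smallest, as one convolution followed by a fully connected softmax layer. The layer sizes, the 34-class character set, and the random (untrained) weights below are all assumptions for illustration; the patent does not specify the network architecture.

```python
import numpy as np

def conv2d(img, kernels):
    """Valid 2-D convolution of an (H, W) image with (K, kh, kw) kernels."""
    K, kh, kw = kernels.shape
    H, W = img.shape
    out = np.zeros((K, H - kh + 1, W - kw + 1))
    for k in range(K):
        for i in range(H - kh + 1):
            for j in range(W - kw + 1):
                out[k, i, j] = np.sum(img[i:i+kh, j:j+kw] * kernels[k])
    return out

def recognize(glyph, kernels, W_fc, b_fc):
    """Tiny conv -> ReLU -> flatten -> dense -> softmax classifier sketch."""
    feat = np.maximum(conv2d(glyph, kernels), 0).ravel()
    logits = W_fc @ feat + b_fc
    e = np.exp(logits - logits.max())          # numerically stable softmax
    return e / e.sum()                         # probabilities over the character set

rng = np.random.default_rng(0)
glyph = rng.random((12, 8))                     # one segmented character, normalized
kernels = rng.standard_normal((4, 3, 3)) * 0.1  # 4 untrained 3x3 filters (assumption)
feat_dim = 4 * (12 - 2) * (8 - 2)               # 4 feature maps of 10x6
W_fc = rng.standard_normal((34, feat_dim)) * 0.01  # hypothetical 34-character set
b_fc = np.zeros(34)
probs = recognize(glyph, kernels, W_fc, b_fc)
```

The per-character probability vectors produced here are exactly what the joint confidence evaluation of claim 8 consumes.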
CN201810989671.3A 2018-08-28 2018-08-28 License plate recognition method and device Active CN110866430B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810989671.3A CN110866430B (en) 2018-08-28 2018-08-28 License plate recognition method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810989671.3A CN110866430B (en) 2018-08-28 2018-08-28 License plate recognition method and device

Publications (2)

Publication Number Publication Date
CN110866430A CN110866430A (en) 2020-03-06
CN110866430B true CN110866430B (en) 2022-07-01

Family

ID=69651508

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810989671.3A Active CN110866430B (en) 2018-08-28 2018-08-28 License plate recognition method and device

Country Status (1)

Country Link
CN (1) CN110866430B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111524045A (en) * 2020-04-13 2020-08-11 北京猿力教育科技有限公司 Dictation method and device
CN111881914B (en) * 2020-06-23 2024-02-13 安徽清新互联信息科技有限公司 License plate character segmentation method and system based on self-learning threshold
CN112464934A (en) * 2020-12-08 2021-03-09 广州小鹏自动驾驶科技有限公司 Parking space number detection method, device and equipment
CN112560856B (en) * 2020-12-18 2024-04-12 深圳赛安特技术服务有限公司 License plate detection and identification method, device, equipment and storage medium
CN112686252A (en) * 2020-12-28 2021-04-20 中国联合网络通信集团有限公司 License plate detection method and device
CN112733851B (en) * 2021-01-14 2023-08-18 福建江夏学院 License plate recognition method for optimizing grain warehouse truck based on convolutional neural network
CN113313143B (en) * 2021-04-29 2022-08-09 浙江大华技术股份有限公司 License plate detection method and device and computer storage medium
CN114155473B (en) * 2021-12-09 2022-11-08 成都智元汇信息技术股份有限公司 Picture cutting method based on frame compensation, electronic equipment and medium
CN115601744B (en) * 2022-12-14 2023-04-07 松立控股集团股份有限公司 License plate detection method for vehicle body and license plate with similar colors

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103729636A (en) * 2013-12-18 2014-04-16 小米科技有限责任公司 Method and device for cutting character and electronic device
CN104298976A (en) * 2014-10-16 2015-01-21 电子科技大学 License plate detection method based on convolutional neural network
CN105279475A (en) * 2014-07-15 2016-01-27 贺江涛 Fake-licensed vehicle identification method and apparatus based on vehicle identity recognition
CN107103317A (en) * 2017-04-12 2017-08-29 湖南源信光电科技股份有限公司 Fuzzy license plate image recognition algorithm based on image co-registration and blind deconvolution


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"Research on Image Preprocessing Methods for Handwritten Chinese Character Recognition in EMS Forms"; Xu Qinrong; Packaging Engineering; Nov. 30, 2014; vol. 35, no. 21; pp. 80-85 *

Also Published As

Publication number Publication date
CN110866430A (en) 2020-03-06

Similar Documents

Publication Publication Date Title
CN110866430B (en) License plate recognition method and device
CN109977812B (en) Vehicle-mounted video target detection method based on deep learning
CN108830188B (en) Vehicle detection method based on deep learning
CN110363182B (en) Deep learning-based lane line detection method
CN109101924B (en) Machine learning-based road traffic sign identification method
CN106650731B (en) Robust license plate and vehicle logo recognition method
CN103699905B (en) Method and device for positioning license plate
CN108921083B (en) Illegal mobile vendor identification method based on deep learning target detection
CN109255350B (en) New energy license plate detection method based on video monitoring
CN108960055B (en) Lane line detection method based on local line segment mode characteristics
CN116758059B (en) Visual nondestructive testing method for roadbed and pavement
CN114332650B (en) Remote sensing image road identification method and system
CN108509950B (en) Railway contact net support number plate detection and identification method based on probability feature weighted fusion
CN107944354B (en) Vehicle detection method based on deep learning
CN112825192B (en) Object identification system and method based on machine learning
CN111898627B (en) SVM cloud microparticle optimization classification recognition method based on PCA
CN110751619A (en) Insulator defect detection method
CN106845458A (en) A kind of rapid transit label detection method of the learning machine that transfinited based on core
CN114913498A (en) Parallel multi-scale feature aggregation lane line detection method based on key point estimation
CN109543498B (en) Lane line detection method based on multitask network
CN112818905A (en) Finite pixel vehicle target detection method based on attention and spatio-temporal information
CN110969164A (en) Low-illumination imaging license plate recognition method and device based on deep learning end-to-end
Asgarian Dehkordi et al. Vehicle type recognition based on dimension estimation and bag of word classification
CN112613392A (en) Lane line detection method, device and system based on semantic segmentation and storage medium
CN110458019B (en) Water surface target detection method for eliminating reflection interference under scarce cognitive sample condition

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant