CN111368830A - License plate detection and recognition method based on multi-video frame information and kernel correlation filtering algorithm - Google Patents

License plate detection and recognition method based on multi-video frame information and kernel correlation filtering algorithm

Info

Publication number
CN111368830A
Authority
CN
China
Prior art keywords
license plate
image
frame
data set
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010138492.6A
Other languages
Chinese (zh)
Other versions
CN111368830B (en)
Inventor
王琦
袁媛
芦肖城
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Northwestern Polytechnical University
Original Assignee
Northwestern Polytechnical University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Northwestern Polytechnical University filed Critical Northwestern Polytechnical University
Priority to CN202010138492.6A priority Critical patent/CN111368830B/en
Publication of CN111368830A publication Critical patent/CN111368830A/en
Application granted granted Critical
Publication of CN111368830B publication Critical patent/CN111368830B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/60Type of objects
    • G06V20/62Text, e.g. of license plates, overlay texts or captions on TV images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/044Recurrent networks, e.g. Hopfield networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/047Probabilistic or stochastic networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/30Noise filtering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/60Type of objects
    • G06V20/62Text, e.g. of license plates, overlay texts or captions on TV images
    • G06V20/625License plates

Abstract

The invention provides a license plate detection and recognition method based on multi-video frame information and a kernel correlation filtering algorithm. Separate deep learning models are constructed for license plate region detection and license plate information recognition in single-frame images, and multi-frame information is tracked and fused with a kernel correlation filtering algorithm, so that the accuracy of license plate recognition and localization is improved while the computation remains efficient, making the method suitable for real-time processing.

Description

License plate detection and recognition method based on multi-video frame information and kernel correlation filtering algorithm
Technical Field
The invention belongs to the field of intelligent transportation systems, and particularly relates to a license plate detection and recognition method based on multi-video frame information and a kernel correlation filtering algorithm.
Background
In recent years, with the rapid development of the economy, Intelligent Transportation Systems (ITS) have become a popular research topic in the field of transportation control and management worldwide. A vehicle's license plate serves as its "identity card": like a human fingerprint, it uniquely identifies the vehicle. A License Plate Recognition (LPR) system is an important link in a vehicle detection system and works as follows: first, the position of the license plate in the picture is obtained by license plate detection; then the plate number is read by optical character recognition (OCR), yielding the license plate number of the vehicle in the picture. Traditional license plate character recognition methods are generally divided into two parts: character segmentation and character recognition.
When complex backgrounds are not considered, traditional license plate recognition methods, such as techniques based on template matching and character features, can detect license plates; in real images, however, complex backgrounds, blurred plates, tilted angles and other problems are common. The learning capability of convolutional neural networks has been improved by studying the parameter optimization of deep learning models, such as the convolution kernel, network structure, network depth, activation function and optimization function, and by exploiting GPU acceleration on high-performance servers. Deep-learning license plate recognition can be divided into two parts, license plate detection and license plate recognition: license plate detection treats the plate as a target and adopts an object detection method such as Fast-RCNN, YOLO or DenseBox; license plate recognition reads the number from the detected plate, using models such as CNN + RNN + CTC and CNN + RNN + Attention.
License plate recognition methods based on static images perform recognition on single-frame images; the image clarity, capture angle and similar factors largely determine the recognition result, a single frame provides little information, and the recognition accuracy is therefore low. Consequently, much current research focuses on recognition methods based on dynamic video, in which every frame can be recognized; such methods are little affected by any single frame and offer strong adaptability, high speed and good detection quality. If the video information can be fully exploited, the processing speed and accuracy of a license plate recognition system can be greatly improved. Because targets may suffer from motion blur, occlusion and low resolution in some video frames, and the context information in the video can help handle these problems, representative methods include Motion-guided Propagation (MGP) and Multi-context Suppression (MCS) in T-CNN, target tracking to compensate for missed detections between consecutive video frames, and deep-learning-based video object detection models. However, these methods have unsatisfactory real-time performance and are not practical.
In practical applications, existing methods have the following shortcomings: the inter-frame context information in driving video is under-used; natural scenes have complex backgrounds and may be affected by blur, occlusion and bad weather, so the robustness of the methods is weak; and the license plate detection and recognition results are therefore not ideal.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention provides a license plate detection and recognition method based on multi-video frame information and a kernel correlation filtering algorithm. Separate deep learning models perform license plate region detection and license plate information recognition on single-frame images, and multi-frame information is tracked and fused with a kernel correlation filtering algorithm, so that the accuracy of license plate recognition and localization is improved while the computation remains efficient, making the method suitable for real-time processing.
A license plate detection and recognition method based on multi-video frame information and a kernel correlation filtering algorithm, characterized by comprising the following steps:
Step 1: scale each image in the CCPD (Chinese mainland license plate data set) to a size of 512 x 512; all scaled images and their annotation information form the pre-training data set;
Step 2: apply enhancement processing to each image in the pre-training data set; all images before and after enhancement, together with their annotation information, form the final training data set; the enhancement processing comprises flipping at random angles, cropping to random sizes and color transformation of random degree;
Step 3: input the training data set obtained in Step 2 into a license plate detection network model for training to obtain the trained license plate detection model; the license plate detection network model extracts feature maps of different sizes from a VggNet network, up-samples them into feature maps of the same size with different numbers of channels, concatenates them, feeds the concatenated feature map into a fully connected layer, and outputs the confidence and the license plate region coordinates (cx, cy, w, h, score), where cx and cy are the abscissa and ordinate of the center point of the license plate region, w and h are the length and width of the region, and score denotes the confidence;
Step 4: crop the license plate region from each image in the training data set; all cropped images and their license plate number annotations form the training data set of the license plate recognition model, which is input into a character recognition network based on a visual attention mechanism for training, and the trained network is taken as the final license plate recognition model; the character recognition network based on the visual attention mechanism is formed by connecting a 7-layer CNN, an attention module and an LSTM network: the 7-layer CNN extracts image features with representation capacity; the attention module combines the previous output and the state of the RNN hidden layer, applies a linear layer and a softmax to obtain the attention weights W, and multiplies W by the feature matrix output by the CNN to obtain a feature map carrying the attention features, which is fed into the LSTM layer; the LSTM layer acts as a decoder and outputs the final character sequence;
Step 5: input the 1st to F-th frame images of the license plate video data set to be detected into the license plate detection model obtained in Step 3, obtain the coordinates of the 4 vertices of the license plate region of each image, and crop the original images according to the vertex coordinates to obtain the license plate region image of each frame; F ranges from 5 to 30;
for the k-th frame image of the license plate video data set to be detected, with F+1 ≤ k ≤ N−1, input it into the license plate detection model obtained in Step 3 to obtain the coordinates of the 4 vertices of its license plate region, and crop the original image according to the vertex coordinates to obtain the license plate region image of this frame; then use this image and the license plate region images obtained from the previous F frames to track and predict with the KCF kernel correlation filtering algorithm, and take the resulting prediction as the predicted license plate region image of the (k+1)-th frame; input the (k+1)-th frame image into the license plate detection model obtained in Step 3 to obtain the coordinates of the 4 vertices of its license plate region, and take the weighted average of the image cropped from the (k+1)-th frame according to these vertex coordinates and the predicted license plate region image of the (k+1)-th frame as the final license plate region image of the (k+1)-th frame; N denotes the total number of video frames contained in the data set;
Step 6: input the license plate region image of each frame obtained in Step 5 into the license plate recognition model obtained in Step 4, and take the result as the initial recognition result; then, with the initial recognition result of each frame and the initial recognition results of the previous M frames, vote on each character, and take the license plate number formed by the characters with the most votes as the recognition result of that frame; if fewer than M frames precede the current frame, all previous frames vote; M ranges from 5 to 20.
The beneficial effects of the invention are as follows: because multi-frame information in the video is effectively combined with the classical kernel correlation filtering algorithm to detect and recognize license plates in the video, the accuracy of license plate recognition is improved while real-time performance is maintained; because data enhancement preprocessing such as flipping, deformation and color transformation is applied, the method can adapt to license plate detection and recognition in a variety of scenes; because feature maps from different stages are used in the license plate detection stage, the detection model is more robust for small targets such as license plates; because a visual attention mechanism is used in the license plate recognition stage, recognition can focus on the region where the license plate characters are located, improving the accuracy and robustness of license plate recognition in a single-frame image; and because the recognition results of multiple frames in the video are combined by voting, the accuracy of license plate recognition in the video is improved.
Drawings
FIG. 1 is a flow chart of the license plate detection and recognition method based on multi-video frame information and kernel correlation filtering algorithm.
Detailed Description
The present invention is further described below with reference to the drawings and embodiments; the invention includes, but is not limited to, the following embodiments.
As shown in Fig. 1, the present invention provides a license plate detection and recognition method based on multi-video frame information and a kernel correlation filtering algorithm. The basic implementation process is as follows:
1. selecting and constructing pre-training data set
To suit a variety of scenes, a license plate localization and recognition data set with diverse samples must be selected. The present invention adopts the Chinese mainland license plate data set CCPD, proposed in the literature "Zhenbo Xu, Ajin Meng, Nanxue Lu, et al. Towards End-to-End License Plate Detection and Recognition: A Large Dataset and Baseline. 15th European Conference, Munich, Germany, September 8-14, 2018, Proceedings, Part XIII, 2018". Because the images in the original data set do not have a uniform size, the invention resizes them so that they are all the same size: each image is scaled to 512 x 512, and the scaled images together with their annotation information form the pre-training data set.
2. Data set enhancement
The sample data set is enhanced in a targeted manner according to its distribution. The CCPD data set contains challenging natural conditions such as blurred plates, occlusion, rain and snow, strong light and backlight; accordingly, data enhancement such as random-angle flipping, random-size cropping and random-degree color change is applied to the images in the pre-training data set, and all images before and after enhancement, together with their annotation information, form the final training data set.
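As a purely illustrative sketch (not part of the patent), the random-angle flipping, random-size cropping and random-degree color change described above could be assembled from standard torchvision transforms; the probabilities and ranges below are assumed values, and in a detection setting the license plate annotation boxes would also have to be transformed consistently with the image, which this sketch omits.

    import torchvision.transforms as T

    # Minimal augmentation pipeline for the 512 x 512 pre-training images.
    # All numeric parameters are illustrative assumptions; the patent only
    # specifies random flipping, random-size cropping and random color change.
    augment = T.Compose([
        T.RandomHorizontalFlip(p=0.5),                      # random flip
        T.RandomRotation(degrees=15),                       # small random-angle rotation
        T.RandomResizedCrop(size=512, scale=(0.7, 1.0)),    # random-size crop back to 512 x 512
        T.ColorJitter(brightness=0.3, contrast=0.3,         # random-degree color transformation
                      saturation=0.3, hue=0.05),
    ])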
3. License plate detection model training
The training data set obtained after enhancement is input into the license plate detection network model for training, and the trained network parameters are saved to obtain the license plate detection model used in the later steps.
Besides the text characteristics of a fixed set of character classes, a license plate has an elongated rectangular shape and a clear, closed edge contour. Based on these characteristics, the license plate can be detected as text. The invention feeds the enhanced data set into the license plate detection model for supervised training, continuously adjusting the network parameters and refining the model structure and loss function; it also compares the effect of the two classical networks VggNet and ResNet as the backbone, and finally selects VggNet as the backbone in consideration of time performance.
Therefore, the invention takes the classical VggNet as the backbone network, extracts feature maps of different stages and sizes from the backbone and aggregates them in order to cope with the severe scale variation of license plates; the overall network resembles a pyramid structure. In the specific network model, VggNet is the backbone, feature maps of different sizes are extracted (conv3_3, conv4_3 and conv5_3 in this embodiment), an up-sampling operation turns them into feature maps of the same size with different numbers of channels, the feature maps are then concatenated, and the concatenated feature map is sent to a fully connected layer that outputs the confidence and the license plate region coordinates (cx, cy, w, h, score), where cx is the horizontal coordinate of the center point of the license plate region, cy is its vertical coordinate, w is the length of the region, h is its width, and score denotes the confidence.
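A minimal PyTorch sketch of this detection branch is given below for illustration. The conv3_3 / conv4_3 / conv5_3 slice indices of torchvision's vgg16, the adaptive pooling placed before the fully connected head, and the single-box output are assumptions made to keep the sketch small; they are not details fixed by the invention.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F
    from torchvision.models import vgg16   # recent torchvision; older versions use pretrained=False

    class PlateDetector(nn.Module):
        # Multi-scale VGG feature maps are up-sampled to a common size,
        # concatenated, and regressed to (cx, cy, w, h, score).
        def __init__(self):
            super().__init__()
            feats = vgg16(weights=None).features
            self.block3 = feats[:16]    # up to conv3_3 + ReLU -> 256 channels, 1/4 resolution
            self.block4 = feats[16:23]  # up to conv4_3 + ReLU -> 512 channels, 1/8 resolution
            self.block5 = feats[23:30]  # up to conv5_3 + ReLU -> 512 channels, 1/16 resolution
            self.head = nn.Sequential(
                nn.AdaptiveAvgPool2d(8),   # assumed pooling so the fully connected head stays small
                nn.Flatten(),
                nn.Linear((256 + 512 + 512) * 8 * 8, 256),
                nn.ReLU(inplace=True),
                nn.Linear(256, 5),         # cx, cy, w, h, score
            )

        def forward(self, x):
            c3 = self.block3(x)            # e.g. 128 x 128 for a 512 x 512 input
            c4 = self.block4(c3)
            c5 = self.block5(c4)
            size = c3.shape[-2:]
            c4 = F.interpolate(c4, size=size, mode="bilinear", align_corners=False)
            c5 = F.interpolate(c5, size=size, mode="bilinear", align_corners=False)
            fused = torch.cat([c3, c4, c5], dim=1)   # same size, different channel widths
            out = self.head(fused)
            score = torch.sigmoid(out[:, 4])         # confidence in [0, 1]
            return out[:, 0], out[:, 1], out[:, 2], out[:, 3], score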
4. License plate recognition model training
The CCPD data set provides both license plate localization annotations and license plate number annotations. Since the recognition model and the detection model are two independent models, in the training stage of the recognition part the license plate region of each image in the training data set is first cropped according to the annotations; the cropped license plate region images and the license plate number annotations form the training data set of the license plate recognition model, which is input into the license plate recognition network model for training to obtain the trained license plate recognition model.
Because a Chinese license plate number consists of Chinese characters, capital letters and digits arranged in the plate region according to a fixed rule, the invention adopts a character recognition network based on a visual attention mechanism as the license plate recognition network model, namely a network obtained by connecting a 7-layer Convolutional Neural Network (CNN), an attention module and a Long Short-Term Memory network (LSTM). The seven-layer CNN extracts image features with representation capacity: the feature map is sliced along the horizontal direction, each slice corresponds to a feature vector, and the overlapping receptive fields of the convolutions give the features contextual relations; an attention module is then stacked on top of the CNN to learn the region where the license plate characters are located, and finally the LSTM is used as a decoder to output the final character sequence. Specifically, the attention module combines the previous output and the state of the RNN hidden layer, applies a linear layer and a softmax to obtain the attention weights W, multiplies W by the feature matrix output by the CNN to obtain a feature map carrying the attention features, and feeds this feature map into the next LSTM layer, as sketched below.
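The decoding step could look roughly as follows; this is a sketch under assumed dimensions, a fixed number of horizontal slices, a greedy decoding loop and an assumed start token, none of which are specified by the patent.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class AttentionLSTMDecoder(nn.Module):
        # feat: CNN output sliced horizontally into T feature vectors, shape (B, T, C).
        def __init__(self, feat_dim=512, hidden_dim=256, num_slices=16, num_classes=68):
            super().__init__()
            self.embed = nn.Embedding(num_classes, hidden_dim)   # previous output character
            # linear + softmax over the T slices, driven by (previous output, hidden state)
            self.attn = nn.Linear(hidden_dim * 2, num_slices)
            self.lstm = nn.LSTMCell(feat_dim, hidden_dim)
            self.classifier = nn.Linear(hidden_dim, num_classes)

        def step(self, feat, prev_char, state):
            h, c = state
            prev = self.embed(prev_char)                                      # (B, H)
            w = F.softmax(self.attn(torch.cat([prev, h], dim=-1)), dim=-1)    # attention weights W, (B, T)
            context = torch.bmm(w.unsqueeze(1), feat).squeeze(1)              # W times the CNN feature matrix, (B, C)
            h, c = self.lstm(context, (h, c))                                 # attended feature fed into the LSTM
            return self.classifier(h), (h, c)                                 # logits for the next character

        def forward(self, feat, max_len=8):
            B = feat.size(0)
            h = feat.new_zeros(B, self.lstm.hidden_size)
            c = torch.zeros_like(h)
            prev = torch.zeros(B, dtype=torch.long, device=feat.device)       # assumed start token = 0
            logits_seq = []
            for _ in range(max_len):                                          # a Chinese plate has 7-8 characters
                logits, (h, c) = self.step(feat, prev, (h, c))
                prev = logits.argmax(dim=-1)                                  # greedy decoding for the sketch
                logits_seq.append(logits)
            return torch.stack(logits_seq, dim=1)                             # (B, max_len, num_classes)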
5. License plate positioning and tracking
After the license plate region has been detected in a video frame, the displacement of the plate between adjacent frames is small. To make full use of the context information between frames, the invention uses the KCF kernel correlation filtering algorithm to track the license plate across adjacent frames: the detection position of the next frame is predicted from the localization results of the previous frames, and, to strengthen the detection part, the KCF tracking result and the next frame's detection result are combined by a weighted average to give the final detection result. The KCF algorithm achieves a good balance between time performance and tracking quality; it is described in the literature "Henriques, João F., Caseiro, Rui, Martins, Pedro, & Batista, Jorge. High-speed tracking with kernelized correlation filters. IEEE Transactions on Pattern Analysis & Machine Intelligence, 37(3), 583-596". Concretely, a correlation filter is trained from the information of the current frame and of previous frames and is then correlated with the newly input frame; the resulting confidence map is the predicted tracking result. Since frames closer to the current frame provide more relevant localization information, the previous frames are given different weights.
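For illustration only, the tracking-plus-detection fusion could be prototyped with OpenCV's KCF tracker as sketched below. The detect_plate helper, the fusion weight alpha, and re-initialising the tracker on the fused box every frame are assumptions; the invention itself weights the previous F frames inside the correlation filter, which the stock OpenCV tracker does not expose.

    import cv2

    def fuse_detection_and_tracking(frames, detect_plate, alpha=0.6):
        # detect_plate(frame) is assumed to return an (x, y, w, h) box from the
        # trained detection model; alpha is an assumed weight for the detector.
        # Depending on the OpenCV build, the KCF factory lives in cv2 or cv2.legacy.
        create_kcf = getattr(cv2, "TrackerKCF_create", None) or cv2.legacy.TrackerKCF_create

        boxes, tracker = [], None
        for frame in frames:
            box = detect_plate(frame)                     # detector result for this frame
            if tracker is not None:
                ok, pred = tracker.update(frame)          # KCF prediction from previous frames
                if ok:
                    # weighted average of detection and tracking results
                    box = tuple(alpha * d + (1.0 - alpha) * p for d, p in zip(box, pred))
            boxes.append(box)
            tracker = create_kcf()                        # re-train the correlation filter
            tracker.init(frame, tuple(int(round(v)) for v in box))
        return boxes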
The calculation is specifically divided into three cases:
(1) The 1st to F-th frame images of the license plate video data set to be detected are input into the license plate detection model obtained in step 3; the coordinates of the 4 vertices of the license plate region of each image are obtained, and the original images are cropped according to these vertex coordinates to obtain the license plate region image of each frame; F ranges from 5 to 30.
(2) For the k-th frame image of the license plate video data set to be detected, with F+1 ≤ k ≤ N−1, this frame and the previous F frames are each input into the license plate detection model obtained in step 3; the coordinates of the 4 vertices of the license plate region of each image are obtained, and the original images are cropped according to the vertex coordinates to obtain the license plate region image of each frame. These images are then used to track and predict the (k+1)-th frame with the KCF algorithm, yielding the predicted license plate region image of the (k+1)-th frame. The (k+1)-th frame image is input into the license plate detection model obtained in step 3 to obtain the coordinates of the 4 vertices of its license plate region, and the image cropped from the (k+1)-th frame according to these vertex coordinates and the predicted license plate region image of the (k+1)-th frame are combined by weighted average to give the final license plate region image of the (k+1)-th frame; N denotes the total number of video frames in the data set.
(3) The N-th frame image and the previous F frames of the license plate video data set to be detected are each input into the license plate detection model obtained in step 3; the coordinates of the 4 vertices of the license plate region of each image are obtained, and the original images are cropped according to the vertex coordinates to obtain the license plate region image of each frame; tracking prediction with the KCF algorithm is then applied to the N-th frame using these images, and the predicted image is taken as the license plate region image of the N-th frame.
6. Multi-frame identification voting
The license plate region image of each frame obtained in step 5 is input into the license plate recognition model obtained in step 4, and the result is taken as the initial recognition result.
In real scenes, because the background of the license plate changes, the recognition results of adjacent frames can differ considerably. Therefore, the initial recognition result of each frame and the initial recognition results of the previous M frames vote on each character, and the license plate number formed by the characters with the most votes is taken as the recognition result of that frame; if fewer than M frames precede the current frame, all previous frames vote. M ranges from 5 to 20. In this way the license plate recognition information for the complete video segment is obtained.
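A minimal sketch of this per-character voting is shown below; it assumes that all initial recognition results for a plate have the same fixed length, which holds for standard mainland Chinese plates.

    from collections import Counter

    def vote_plate(history, current, M=10):
        # Majority vote per character position over the current result and the
        # results of up to M previous frames (all previous frames if fewer than M).
        window = history[-M:] + [current]
        return "".join(Counter(chars).most_common(1)[0][0]
                       for chars in zip(*window))           # one column per character position

    # Example: the sixth character flickers between '8' and 'B' across frames.
    history = ["AB1238C", "AB123BC", "AB1238C"]
    print(vote_plate(history, "AB1238C"))                   # -> AB1238C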
To verify the effect of the method of the present invention, simulation experiments were carried out with the PyTorch framework and the Python language on a machine with an Intel Xeon CPU E5-2697 v2 @ 2.70 GHz, 128 GB of memory and an NVIDIA GeForce 1080Ti GPU running the Red Hat 6.5 operating system. Three methods from the literature were selected for comparison: comparison method 1 is the method of Gabriel Resende Gonçalves, David Menotti and William Robson Schwartz, "License plate recognition based on temporal redundancy," 2016 IEEE 19th International Conference on Intelligent Transportation Systems (ITSC), IEEE, 2016; comparison method 2 is the real-time license plate detection and recognition method of Sergio Montazzolli Silva and Claudio Rosito Jung; comparison method 3 is a further license plate recognition method published in 2018. The different methods were evaluated on the SSIG-SegPlate license plate video data set provided in Gabriel Resende Gonçalves, et al., "Real-time automatic license plate recognition through deep multi-task networks," 2018 31st SIBGRAPI Conference on Graphics, Patterns and Images (SIBGRAPI), IEEE, 2018, with both F and M fixed to 10. The recognition accuracy and the time performance (FPS) of the different methods over all frames were compared; the experimental results are given in Table 1. The recognition accuracy of the method of the invention is higher than that of all the comparison methods, while its time performance is better than that of comparison methods 1 and 3.
TABLE 1
Method                       Recognition accuracy    FPS
Comparison method 1          81.8%                   28
Comparison method 2          63.1%                   55
Comparison method 3          85.4%                   36
Method of the invention      87.6%                   38
The method of the invention is a license plate detection and recognition method with strong robustness that can adapt to complex natural scenes; although the experiments were carried out in complex driving scenes, the method is not limited to such scenes. In addition, the method fully mines the inter-frame information in the video, strengthens the recognition performance on each frame and generalizes well; at the same time, the design takes time performance into account, so offline video segments can be processed efficiently.

Claims (1)

1. A license plate detection and recognition method based on multi-video frame information and a kernel correlation filtering algorithm, characterized by comprising the following steps:
Step 1: scale each image in the CCPD (Chinese mainland license plate data set) to a size of 512 x 512; all scaled images and their annotation information form the pre-training data set;
Step 2: apply enhancement processing to each image in the pre-training data set; all images before and after enhancement, together with their annotation information, form the final training data set; the enhancement processing comprises flipping at random angles, cropping to random sizes and color transformation of random degree;
Step 3: input the training data set obtained in Step 2 into a license plate detection network model for training to obtain the trained license plate detection model; the license plate detection network model extracts feature maps of different sizes from a VggNet network, up-samples them into feature maps of the same size with different numbers of channels, concatenates them, feeds the concatenated feature map into a fully connected layer, and outputs the confidence and the license plate region coordinates (cx, cy, w, h, score), where cx and cy are the abscissa and ordinate of the center point of the license plate region, w and h are the length and width of the region, and score denotes the confidence;
Step 4: crop the license plate region from each image in the training data set; all cropped images and their license plate number annotations form the training data set of the license plate recognition model, which is input into a character recognition network based on a visual attention mechanism for training, and the trained network is taken as the final license plate recognition model; the character recognition network based on the visual attention mechanism is formed by connecting a 7-layer CNN, an attention module and an LSTM network: the 7-layer CNN extracts image features with representation capacity; the attention module combines the previous output and the state of the RNN hidden layer, applies a linear layer and a softmax to obtain the attention weights W, and multiplies W by the feature matrix output by the CNN to obtain a feature map carrying the attention features, which is fed into the LSTM layer; the LSTM layer acts as a decoder and outputs the final character sequence;
Step 5: input the 1st to F-th frame images of the license plate video data set to be detected into the license plate detection model obtained in Step 3, obtain the coordinates of the 4 vertices of the license plate region of each image, and crop the original images according to the vertex coordinates to obtain the license plate region image of each frame; F ranges from 5 to 30;
for the k-th frame image of the license plate video data set to be detected, with F+1 ≤ k ≤ N−1, input it into the license plate detection model obtained in Step 3 to obtain the coordinates of the 4 vertices of its license plate region, and crop the original image according to the vertex coordinates to obtain the license plate region image of this frame; then use this image and the license plate region images obtained from the previous F frames to track and predict with the KCF kernel correlation filtering algorithm, and take the resulting prediction as the predicted license plate region image of the (k+1)-th frame; input the (k+1)-th frame image into the license plate detection model obtained in Step 3 to obtain the coordinates of the 4 vertices of its license plate region, and take the weighted average of the image cropped from the (k+1)-th frame according to these vertex coordinates and the predicted license plate region image of the (k+1)-th frame as the final license plate region image of the (k+1)-th frame; N denotes the total number of video frames contained in the data set;
Step 6: input the license plate region image of each frame obtained in Step 5 into the license plate recognition model obtained in Step 4, and take the result as the initial recognition result; then, with the initial recognition result of each frame and the initial recognition results of the previous M frames, vote on each character, and take the license plate number formed by the characters with the most votes as the recognition result of that frame; if fewer than M frames precede the current frame, all previous frames vote; M ranges from 5 to 20.
CN202010138492.6A 2020-03-03 2020-03-03 License plate detection and recognition method based on multi-video frame information and kernel correlation filtering algorithm Active CN111368830B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010138492.6A CN111368830B (en) 2020-03-03 2020-03-03 License plate detection and recognition method based on multi-video frame information and kernel correlation filtering algorithm

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010138492.6A CN111368830B (en) 2020-03-03 2020-03-03 License plate detection and recognition method based on multi-video frame information and kernel correlation filtering algorithm

Publications (2)

Publication Number Publication Date
CN111368830A true CN111368830A (en) 2020-07-03
CN111368830B CN111368830B (en) 2024-02-27

Family

ID=71208415

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010138492.6A Active CN111368830B (en) 2020-03-03 2020-03-03 License plate detection and recognition method based on multi-video frame information and kernel correlation filtering algorithm

Country Status (1)

Country Link
CN (1) CN111368830B (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111914837A (en) * 2020-07-10 2020-11-10 北京嘉楠捷思信息技术有限公司 License plate detection method, device, equipment and storage medium
CN112149661A (en) * 2020-08-07 2020-12-29 珠海欧比特宇航科技股份有限公司 License plate recognition method, device and medium
CN112597888A (en) * 2020-12-22 2021-04-02 西北工业大学 On-line education scene student attention recognition method aiming at CPU operation optimization
CN112836683A (en) * 2021-03-04 2021-05-25 广东建邦计算机软件股份有限公司 License plate recognition method, device, equipment and medium for portable camera equipment
CN112997190A (en) * 2020-12-29 2021-06-18 深圳市锐明技术股份有限公司 License plate recognition method and device and electronic equipment
CN113269105A (en) * 2021-05-28 2021-08-17 西安交通大学 Real-time faint detection method, device, equipment and medium in elevator scene
CN113408549A (en) * 2021-07-14 2021-09-17 西安电子科技大学 Few-sample weak and small target detection method based on template matching and attention mechanism
CN114677500A (en) * 2022-05-25 2022-06-28 松立控股集团股份有限公司 Weak surveillance video license plate recognition method based on eye tracker point annotation information
CN115019297A (en) * 2022-08-04 2022-09-06 之江实验室 Real-time license plate detection and identification method and device based on color augmentation

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130266190A1 (en) * 2012-04-06 2013-10-10 Xerox Corporation System and method for street-parking-vehicle identification through license plate capturing
EP2790130A1 (en) * 2013-04-08 2014-10-15 Cogisen SRL Method for object recognition
CN107066953A (en) * 2017-03-22 2017-08-18 北京邮电大学 It is a kind of towards the vehicle cab recognition of monitor video, tracking and antidote and device
CN107240122A (en) * 2017-06-15 2017-10-10 国家新闻出版广电总局广播科学研究院 Video target tracking method based on space and time continuous correlation filtering
CN108447091A (en) * 2018-03-27 2018-08-24 北京颂泽科技有限公司 Object localization method, device, electronic equipment and storage medium
CN108734189A (en) * 2017-04-20 2018-11-02 天津工业大学 Vehicle License Plate Recognition System based on atmospherical scattering model and deep learning under thick fog weather
CN109448027A (en) * 2018-10-19 2019-03-08 成都睿码科技有限责任公司 A kind of adaptive, lasting motion estimate method based on algorithm fusion
CN109544603A (en) * 2018-11-28 2019-03-29 上饶师范学院 Method for tracking target based on depth migration study
US20190251369A1 (en) * 2018-02-11 2019-08-15 Ilya Popov License plate detection and recognition system
CN110427871A (en) * 2019-07-31 2019-11-08 长安大学 A kind of method for detecting fatigue driving based on computer vision
CN110472496A (en) * 2019-07-08 2019-11-19 长安大学 A kind of traffic video intelligent analysis method based on object detecting and tracking
CN110619279A (en) * 2019-08-22 2019-12-27 天津大学 Road traffic sign instance segmentation method based on tracking

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130266190A1 (en) * 2012-04-06 2013-10-10 Xerox Corporation System and method for street-parking-vehicle identification through license plate capturing
EP2790130A1 (en) * 2013-04-08 2014-10-15 Cogisen SRL Method for object recognition
CN107066953A (en) * 2017-03-22 2017-08-18 北京邮电大学 It is a kind of towards the vehicle cab recognition of monitor video, tracking and antidote and device
CN108734189A (en) * 2017-04-20 2018-11-02 天津工业大学 Vehicle License Plate Recognition System based on atmospherical scattering model and deep learning under thick fog weather
CN107240122A (en) * 2017-06-15 2017-10-10 国家新闻出版广电总局广播科学研究院 Video target tracking method based on space and time continuous correlation filtering
US20190251369A1 (en) * 2018-02-11 2019-08-15 Ilya Popov License plate detection and recognition system
CN108447091A (en) * 2018-03-27 2018-08-24 北京颂泽科技有限公司 Object localization method, device, electronic equipment and storage medium
CN109448027A (en) * 2018-10-19 2019-03-08 成都睿码科技有限责任公司 A kind of adaptive, lasting motion estimate method based on algorithm fusion
CN109544603A (en) * 2018-11-28 2019-03-29 上饶师范学院 Method for tracking target based on depth migration study
CN110472496A (en) * 2019-07-08 2019-11-19 长安大学 A kind of traffic video intelligent analysis method based on object detecting and tracking
CN110427871A (en) * 2019-07-31 2019-11-08 长安大学 A kind of method for detecting fatigue driving based on computer vision
CN110619279A (en) * 2019-08-22 2019-12-27 天津大学 Road traffic sign instance segmentation method based on tracking

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
李祥鹏 et al.: "License plate localization and recognition method based on deep learning" (基于深度学习的车牌定位和识别方法), Journal of Computer-Aided Design & Computer Graphics, 15 June 2019 (2019-06-15) *
黄宝生 et al.: "Design of a license plate video tracking and recognition system" (车牌视频跟踪识别系统的设计), Modern Electronics Technique (现代电子技术), 15 May 2013 (2013-05-15) *

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111914837A (en) * 2020-07-10 2020-11-10 北京嘉楠捷思信息技术有限公司 License plate detection method, device, equipment and storage medium
CN112149661A (en) * 2020-08-07 2020-12-29 珠海欧比特宇航科技股份有限公司 License plate recognition method, device and medium
CN112597888A (en) * 2020-12-22 2021-04-02 西北工业大学 On-line education scene student attention recognition method aiming at CPU operation optimization
CN112597888B (en) * 2020-12-22 2024-03-08 西北工业大学 Online education scene student attention recognition method aiming at CPU operation optimization
CN112997190B (en) * 2020-12-29 2024-01-12 深圳市锐明技术股份有限公司 License plate recognition method and device and electronic equipment
CN112997190A (en) * 2020-12-29 2021-06-18 深圳市锐明技术股份有限公司 License plate recognition method and device and electronic equipment
CN112836683A (en) * 2021-03-04 2021-05-25 广东建邦计算机软件股份有限公司 License plate recognition method, device, equipment and medium for portable camera equipment
CN113269105A (en) * 2021-05-28 2021-08-17 西安交通大学 Real-time faint detection method, device, equipment and medium in elevator scene
CN113408549A (en) * 2021-07-14 2021-09-17 西安电子科技大学 Few-sample weak and small target detection method based on template matching and attention mechanism
CN113408549B (en) * 2021-07-14 2023-01-24 西安电子科技大学 Few-sample weak and small target detection method based on template matching and attention mechanism
CN114677500B (en) * 2022-05-25 2022-08-23 松立控股集团股份有限公司 Weak surveillance video license plate recognition method based on eye tracker point annotation information
CN114677500A (en) * 2022-05-25 2022-06-28 松立控股集团股份有限公司 Weak surveillance video license plate recognition method based on eye tracker point annotation information
CN115019297A (en) * 2022-08-04 2022-09-06 之江实验室 Real-time license plate detection and identification method and device based on color augmentation
CN115019297B (en) * 2022-08-04 2022-12-09 之江实验室 Real-time license plate detection and identification method and device based on color augmentation

Also Published As

Publication number Publication date
CN111368830B (en) 2024-02-27

Similar Documents

Publication Publication Date Title
CN111368830A (en) License plate detection and identification method based on multi-video frame information and nuclear phase light filtering algorithm
CN112884064B (en) Target detection and identification method based on neural network
CN109840521B (en) Integrated license plate recognition method based on deep learning
CN109345547B (en) Traffic lane line detection method and device based on deep learning multitask network
CN109815867A (en) A kind of crowd density estimation and people flow rate statistical method
CN110399840B (en) Rapid lawn semantic segmentation and boundary detection method
CN109446922B (en) Real-time robust face detection method
CN108537816A (en) A kind of obvious object dividing method connecting priori with background based on super-pixel
CN114998815B (en) Traffic vehicle identification tracking method and system based on video analysis
CN111915583A (en) Vehicle and pedestrian detection method based on vehicle-mounted thermal infrared imager in complex scene
CN116612292A (en) Small target detection method based on deep learning
CN107609509A (en) A kind of action identification method based on motion salient region detection
CN110837769B (en) Image processing and deep learning embedded far infrared pedestrian detection method
CN114708566A (en) Improved YOLOv 4-based automatic driving target detection method
CN114463800A (en) Multi-scale feature fusion face detection and segmentation method based on generalized intersection-parallel ratio
Xiang et al. A real-time vehicle traffic light detection algorithm based on modified YOLOv3
CN115909072A (en) Improved YOLOv4 algorithm-based impact point water column detection method
Chowdary et al. Sign board recognition based on convolutional neural network using yolo-3
CN111178158B (en) Rider detection method and system
CN114926456A (en) Rail foreign matter detection method based on semi-automatic labeling and improved deep learning
CN114998879A (en) Fuzzy license plate recognition method based on event camera
CN111986233B (en) Large-scene minimum target remote sensing video tracking method based on feature self-learning
CN113888590A (en) Video target tracking method based on data enhancement and twin network
CN113378598A (en) Dynamic bar code detection method based on deep learning
Gao et al. Research on license plate detection and recognition based on deep learning

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant