CN110969164A - Low-illumination imaging license plate recognition method and device based on deep learning end-to-end - Google Patents

Low-illumination imaging license plate recognition method and device based on deep learning end-to-end

Info

Publication number
CN110969164A
CN110969164A
Authority
CN
China
Prior art keywords
license plate
image
low
layer
deep learning
Prior art date
Legal status
Pending
Application number
CN201911327687.9A
Other languages
Chinese (zh)
Inventor
张斯尧
罗茜
王思远
蒋杰
张�诚
李乾
谢喜林
黄晋
Current Assignee
Hunan Qianshitong Information Technology Co Ltd
Original Assignee
Hunan Qianshitong Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Hunan Qianshitong Information Technology Co Ltd filed Critical Hunan Qianshitong Information Technology Co Ltd
Priority to CN201911327687.9A priority Critical patent/CN110969164A/en
Publication of CN110969164A publication Critical patent/CN110969164A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/60Type of objects
    • G06V20/62Text, e.g. of license plates, overlay texts or captions on TV images
    • G06V20/63Scene text, e.g. street names
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/24Aligning, centring, orientation detection or correction of the image
    • G06V10/243Aligning, centring, orientation detection or correction of the image by compensating for image skew or non-uniform image deformations
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/54Surveillance or monitoring of activities, e.g. for recognising suspicious objects of traffic, e.g. cars on the road, trains or boats
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/60Type of objects
    • G06V20/62Text, e.g. of license plates, overlay texts or captions on TV images
    • G06V20/625License plates

Abstract

The invention discloses an end-to-end deep-learning method and device for recognizing license plates in low-illumination images. The method comprises the following steps: inputting the original low-illumination license plate image acquired by the camera module into a deep-learning-based multi-scale context aggregation network for image processing, to obtain a license plate image with improved recognizability; performing license plate localization and tilt correction on that image to obtain a localized license plate image; and recognizing the localized license plate with an integrated deep network model. Embodiments of the invention improve the recognition efficiency and accuracy for license plates imaged under low illumination.

Description

Low-illumination imaging license plate recognition method and device based on deep learning end-to-end
Technical Field
The invention relates to the technical field of computer vision, in particular to a method and a device for recognizing a license plate based on deep learning end-to-end low-illumination imaging, a terminal device and a computer readable storage medium.
Background
With the development of computer vision, digital image processing, and intelligent transportation technology, license plate recognition is applied ever more widely in the intelligent transportation field. License plate recognition products are increasingly common on the market, but a ubiquitous problem remains: existing recognition systems demand high image quality, while in complex application environments the captured images are often of low quality and fail to meet those demands, so the plate recognition rate is low. How to raise the recognition rate on low-quality images, and thereby adapt to complex and changeable environments, is therefore an important open problem in license plate recognition research. Images acquired under low illumination have a narrow gray-scale range, weak gray-scale variation, and high spatial correlation between adjacent pixels; these characteristics squeeze detail, background, and noise into that narrow gray-scale range. To improve the visual quality of images acquired under low illumination, they must be converted into a form better suited to human observation and computer processing, so that useful information can be extracted. Specifically, in license plate recognition, when the plate image quality is poor, the prevailing approach is to apply digital image processing techniques (such as image filtering and image enhancement) to individual frames to improve their quality.
Most of these are traditional approaches, and they generally suffer from insufficiently clear image detail, insufficiently accurate recognition, and processing results that vary greatly with the environment. Moreover, traditional methods must segment the characters in a low-illumination license plate image, which is computationally expensive and thus greatly hurts recognition efficiency and accuracy.
Disclosure of Invention
In view of this, embodiments of the present invention provide an end-to-end deep-learning method, system, terminal device, and computer-readable medium for low-illumination imaging license plate recognition, which can improve the recognition efficiency and accuracy of license plates imaged under low illumination.
The first aspect of the embodiment of the invention provides a deep learning end-to-end low-illumination imaging license plate recognition method, which comprises the following steps:
inputting the original license plate low-illumination image acquired by the camera module into a multi-scale context aggregation network based on deep learning for image processing to acquire a license plate image with improved identifiability;
carrying out license plate positioning and license plate inclination correction processing on the license plate image to obtain a positioned license plate image;
identifying the positioned license plate through an integrated deep network model; the integrated deep network model comprises a convolution layer, a BRNN layer, a linear transformation layer and a CTC layer.
A second aspect of the embodiments of the present invention provides an end-to-end low-illumination imaging license plate recognition device based on deep learning, including:
the identification degree improving module is used for inputting the original license plate low-illumination image acquired by the camera module into a multi-scale context aggregation network based on deep learning for image processing so as to acquire a license plate image with improved identification degree;
the positioning correction module is used for carrying out license plate positioning and license plate inclination correction processing on the license plate image so as to obtain a positioned license plate image;
the recognition module is used for recognizing the positioned license plate through an integrated deep network model; the integrated deep network model comprises a convolution layer, a BRNN layer, a linear transformation layer and a CTC layer.
A third aspect of the embodiments of the present invention provides a terminal device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the method for recognizing a license plate based on deep learning end-to-end low-illumination imaging.
A fourth aspect of the embodiments of the present invention provides a computer-readable medium storing a computer program which, when executed by a processor, implements the steps of the deep-learning end-to-end low-illumination imaging license plate recognition method.
According to the end-to-end deep-learning low-illumination imaging license plate recognition method, an original low-illumination license plate image is input into a deep-learning-based multi-scale context aggregation network for image processing to obtain a license plate image with improved recognizability; license plate localization and tilt correction are then applied to obtain a localized license plate image; and the localized plate is recognized by an integrated deep network model using a BRNN with CTC. In this way, the recognition efficiency and accuracy for low-illumination license plates can be improved.
Drawings
FIG. 1 is a schematic flow chart of a first embodiment of a deep learning end-to-end low-illumination imaging license plate recognition method according to the present invention;
FIG. 2 is a diagram of an embodiment of a deep-learning-based CAN (Context Aggregation Network);
fig. 3 is a schematic diagram of a process of identifying a positioned license plate through an integrated deep network model according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a deep learning end-to-end low-illumination imaging license plate recognition device according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a detailed structure of the identification enhancing module shown in FIG. 4;
FIG. 6 is a schematic diagram of a detailed structure of the positioning correction module in FIG. 4;
FIG. 7 is a schematic diagram of a detailed structure of the recognition module in FIG. 4;
fig. 8 is a schematic diagram of a terminal device according to an embodiment of the present invention.
The objects, features and advantages of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
In the following description, suffixes such as "module", "component", or "unit" used to denote elements are used only to facilitate the explanation of the present invention and have no specific meaning in themselves. Thus, "module", "component", and "unit" may be used interchangeably.
Referring to fig. 1, to achieve the above object, a first embodiment of the present invention provides an end-to-end low-illumination imaging license plate recognition method based on deep learning, including the following steps:
and S10, inputting the original license plate low-illumination image acquired by the camera module into a multi-scale context aggregation network based on deep learning for image processing to acquire a license plate image with improved recognizability.
And S20, carrying out license plate positioning and license plate inclination correction processing on the license plate image to obtain a positioned license plate image.
And S30, identifying the positioned license plate through an integrated deep network model, wherein the integrated deep network model comprises a convolutional layer, a Bidirectional Recurrent Neural Network (BRNN) layer, a linear transformation layer, and a Connectionist Temporal Classification (CTC) layer.
In the technical scheme of the invention, the low-illumination imaging license plate recognition method inputs the original low-illumination license plate image acquired by the camera module into the deep-learning-based multi-scale context aggregation network for image processing, obtaining a license plate image with improved recognizability; this raises the signal-to-noise ratio and visible detail of the plate image, so that the video image renders the plate clearly even in a low-illumination environment. License plate localization and tilt correction are then applied to obtain a localized plate image. Finally, the localized plate image is recognized by the integrated deep network model, with no need for computationally heavy character segmentation. The technical scheme of the invention reads plate characters reliably, with good recognizability and robustness, simple computation, high efficiency, and real-time performance that meets requirements.
Aiming at the specific difficult problems of the existing license plate recognition system in the low-illumination environment, the invention provides a low-illumination imaging license plate recognition system based on a deep learning multi-scale context aggregation network in order to improve the accuracy of license plate recognition of a monitoring system and meet the real-time requirement.
Based on the first embodiment of the low-illumination imaging license plate recognition method of the present invention, in the second embodiment of the low-illumination imaging license plate recognition method of the present invention, the step S10 includes:
and step S11, preprocessing the original license plate low-illumination Bayer (Bayer) image acquired by the camera module, and packing and transforming pixel channels to obtain a pixel image for inputting an FCN (full convolution neural network) model for training. Specifically, preprocessing an original license plate low-illumination Bayer image: packing and transforming pixel channels, and processing the pixel channels into pixel images more suitable for FCN training input; for a Bayer array, the input is packed into four channels and the spatial resolution is reduced by half on each channel. For an X-Trans array, the raw data is made up of 6X 6 arranged blocks; the array of 36 lanes is packed into 9 lanes by swapping the adjacent lane elements. In addition, black pixels are eliminated and the data is scaled by a desired multiple (e.g., x 100 or x 300). The processed data is used as the input of the FCN model, and the output is an image with 12 channels, and the spatial resolution of the image is only half of that of the input. The data volume of the processed image is reduced, and meanwhile, the details of the image are not influenced, so that the method is beneficial to subsequent convolution processing. And outputting the processed low-illumination license plate pixel image.
And step S12, training on the pixel image with the deep-learning-based CAN (Context Aggregation Network) and outputting the processed image. Fig. 2 shows the CAN network architecture. Circles represent the nonlinear function LReLU. The first and last layers have three channels, the remaining layers are multi-channel, and the penultimate layer applies a 1 x 1 convolution without a nonlinear transformation to produce the last layer. The core computation is:

$$L_i^s = \Phi\left(\Psi^s\left(b_i^s + \sum_j L_j^{s-1} *_{r_s} K_{i,j}^s\right)\right)$$

where $L_i^s$ is the $i$-th feature map of layer $s$, $i \ge 0$; $*_{r_s}$ denotes dilated convolution with dilation rate $r_s$; $K_{i,j}^s$ is a $3 \times 3$ convolution kernel; $b_i^s$ is a bias term; $\Psi^s$ is an adaptive normalization function; and $\Phi$ is the pixel-level nonlinear unit LReLU, $\Phi(x) = \max(\alpha x, x)$ with $\alpha = 0.2$.
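A minimal NumPy sketch of one CAN layer under this formula; as a simplifying assumption, the adaptive normalization $\Psi^s$ is taken as the identity here (in the full scheme it mixes the identity with a normalized response):

```python
import numpy as np

def lrelu(x, alpha=0.2):
    """Pixel-level nonlinearity: Phi(x) = max(alpha * x, x)."""
    return np.maximum(alpha * x, x)

def dilated_conv3x3(feat, kernel, dilation):
    """'Same'-padded 3x3 dilated convolution on one 2-D feature map."""
    h, w = feat.shape
    pad = dilation
    padded = np.pad(feat, pad)
    out = np.zeros_like(feat, dtype=np.float32)
    for di in (-1, 0, 1):          # kernel rows, offset by the dilation rate
        for dj in (-1, 0, 1):      # kernel cols
            out += kernel[di + 1, dj + 1] * padded[
                pad + di * dilation: pad + di * dilation + h,
                pad + dj * dilation: pad + dj * dilation + w]
    return out

def can_layer(prev_maps, kernels, bias, dilation, alpha=0.2):
    """One output map: Phi(b + sum_j prev_maps[j] conv K_j), Psi = identity."""
    acc = bias + sum(dilated_conv3x3(f, k, dilation)
                     for f, k in zip(prev_maps, kernels))
    return lrelu(acc, alpha)
```

With an identity kernel (center weight 1) the layer passes a nonnegative map through unchanged, which is a quick sanity check of the padding arithmetic.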
When training the CAN structure, image pairs must be input for supervised training; after trying several loss functions, the mean squared error was found to be the best choice in practice. The loss function of the CAN structure, $\ell(\mathcal{K}, \mathcal{B})$, is:

$$\ell(\mathcal{K}, \mathcal{B}) = \sum_i \frac{1}{N_i} \left\| \hat{f}(I_i; \mathcal{K}, \mathcal{B}) - f(I_i) \right\|^2$$

where $\mathcal{K}$ denotes the convolution kernels, $\mathcal{B}$ the bias terms, $f$ the reference mapping of the CAN structure, $\hat{f}$ the network's estimate of it, $I_i$ the $i$-th training image, and $N_i$ the number of pixels in $I_i$; the whole loss is thus a mean squared error.
After the CAN structure is established, data training begins. The algorithm of the invention uses the Adam optimizer to train the CAN network from scratch. During training, the network input is an original short-exposure image, and the ground truth in sRGB space is the corresponding long-exposure image. The algorithm trains one network per camera and uses the exposure-time ratio between the input and reference images as the amplification factor (e.g., x 100, x 250, or x 300). In each training iteration, a 512 x 512 patch is randomly cropped and randomly augmented by flipping, rotation, etc. The initial learning rate is set to 0.0001, dropping to 0.00001 after 2000 iterations, for 4000 iterations in total. Once the model is trained on the corresponding database, each preprocessed low-illumination Bayer image fed to it yields a corresponding sRGB-space result image.
And step S13, performing wide-dynamic-range enhancement on the processed image and outputting a license plate image with improved restoration fidelity and image quality. Specifically, the processed image undergoes wide-dynamic enhancement, which further improves the fidelity and quality of the low-illumination plate image, and the final license plate image is output directly afterwards.
This part of the invention uses an improved local algorithm to apply wide-dynamic processing to the image. A frame of video image is divided into two cases: a highlight portion and a low-light portion. For each, the invention adjusts with different parameters, together achieving a wide-dynamic effect for the video image. The low-light compensation formula of this part of the invention is as follows:
[low-light compensation formula for Y2; rendered as an image in the source and not reproduced]
where Y2 is the low-light-compensated value, k is a low-light compensation parameter (usually set according to system requirements), I is the pixel value of the input video image, and Y1 is the corrected value of the input video image from the preprocessing stage. The algorithm formula for the highlight portion is as follows:
[highlight-portion formula for Y3; rendered as an image in the source and not reproduced]
where α is a highlight adjustment parameter that scales the maximum value, generally in the range 0.7 to 1, and MaxA is the maximum pixel value of the video image. Finally, the corrected wide-dynamic video image output is:
Y=Y2+Y3
where Y is the video image finally output by the system after wide-dynamic processing. Fig. 3 compares two groups of low-illumination license plate images before and after processing: the algorithm of the invention enhances image contrast while retaining more scene detail, and image brightness improves significantly, making this an efficient deep-learning-based low-illumination imaging algorithm.
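The split-and-merge structure Y = Y2 + Y3 can be illustrated as below. The patent's exact compensation formulas appear only as images in the source, so the linear gain k for the low-light part and the α-scaling for highlights are placeholder curves, not the patent's own:

```python
import numpy as np

def wide_dynamic(img, k=1.5, alpha=0.8, thresh=128):
    """Toy wide-dynamic sketch (assumed curves): gain k on low-light
    pixels (Y2), alpha-scaled highlights (Y3), merged as Y = Y2 + Y3."""
    img = img.astype(np.float32)
    low = img < thresh
    y2 = np.where(low, np.minimum(k * img, 255.0), 0.0)  # low-light compensation
    y3 = np.where(~low, alpha * img, 0.0)                # highlight adjustment
    return np.clip(y2 + y3, 0.0, 255.0)
```

Because each pixel lands in exactly one branch, the sum Y2 + Y3 reassembles the full frame, which is the structure the formulas above describe.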
Based on the first or second embodiment of the low-illumination imaging license plate recognition method of the present invention, in the third embodiment, license plate localization and tilt correction are performed through multi-information fusion, serving as preprocessing for plate recognition. When judging whether a region is a license plate, we can borrow from how humans locate objects. The edge density of a plate region is greater than that of surrounding regions, especially the region below it; this is important environmental information, and a large number of non-plate regions can be excluded with it. For a single-row plate, all characters lie on one straight line; for a double-row plate, all characters of the lower row lie on one straight line; this is the plate's structural information. Every plate character other than the Chinese character is a letter or a digit; this is the plate's component information. With these three kinds of information, a good plate localization result can be obtained. The step S20 includes:
and step S21, roughly positioning the license plate image based on the environmental information to filter a part of background area of the license plate image. Specifically, a gray image is adopted to carry out coarse positioning on a license plate, and an edge image of the license plate image is obtained through a gradient operator [ -101 ]: 1) the edge density of the license plate area is higher, but if the density value is too high, the license plate area is not included; 2) the edge density of the license plate area is larger than that of the adjacent area; 3) the edge density distribution of the license plate area is uniform. Meanwhile, generally, for most license plate located scenes, the size distribution of the license plate in the image is within a certain known range.
According to the above analysis, coarse plate localization proceeds as follows. Let the minimum size of the plate in the image be W_min x H_min and the maximum size be W_max x H_max, where W_min, H_min, W_max, H_max are respectively the minimum width, minimum height, maximum width, and maximum height in the image.
1) Divide the entire image into small cells and compute the edge density of each cell. Each cell has size w x h, where w = h = H_min / 2. For each cell, compute its edge density:
$$E_{m,n} = \frac{1}{wh} \sum_{(i,j) \in \text{cell}(m,n)} e_{i,j}$$
where E_{m,n} is the edge density of the cell at row m, column n; e_{i,j} is the pixel value at row i, column j of the edge map; and m, n, i, j are each greater than or equal to 0.
2) The background area is filtered according to the edge density value. The edge density distribution of the license plate region in a certain range can be determined according to the following formula:
$$A_{i,j} = \begin{cases} 1, & t_1 < E_{i,j} < t_2 \\ 0, & \text{otherwise} \end{cases}$$
where A_{i,j} = 1 indicates that the cell at row i, column j belongs to a plate candidate region, A_{i,j} = 0 indicates that it belongs to the background, and t_1 and t_2 are the low and high edge-density thresholds.
3) Filter the background according to the edge-density contrast between the current cell and the cells below it. By observation, the edge density of a plate region exceeds that of other surrounding regions, particularly the region below it. Thus, this step filters the background mainly by comparing each cell's edge density with the H_max/h cells below it. If the edge-density contrast between the current cell and each of the H_max/h cells below it exceeds a given threshold, the cell is considered part of a plate candidate region; otherwise it is filtered out.
4) Filter the background according to the uniformity of the edge-density distribution in plate regions. Because a plate region's edge density is evenly distributed, when a cell belongs to a plate region, its neighborhood should contain cells of similar edge density. Therefore, count the cells in the current cell's left and right neighborhoods whose edge density is close to its own; if the count exceeds a given threshold, the current cell is judged a plate candidate, otherwise it belongs to the background and is filtered out.
5) Filter the background according to plate size. A plate region has a bounded size: when the connected region containing a cell comprises fewer than (W_min/w) x (H_min/h) cells, or more than (W_max/w) x (H_max/h) cells, that connected region is filtered out.
Through the above steps, most of the background area is filtered.
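Steps 1) and 2) above can be sketched as follows. Cell size and the thresholds t1, t2 are caller-supplied, since the patent only constrains them relative to H_min and the edge-density range:

```python
import numpy as np

def cell_edge_density(edge_map, cell_h, cell_w):
    """E[m, n]: mean edge value of the cell at row m, column n."""
    h, w = edge_map.shape
    rows, cols = h // cell_h, w // cell_w
    # Crop to a whole number of cells, then average within each cell.
    e = edge_map[:rows * cell_h, :cols * cell_w].astype(np.float32)
    return e.reshape(rows, cell_h, cols, cell_w).mean(axis=(1, 3))

def candidate_cells(density, t1, t2):
    """Step 2): a cell is a plate candidate iff t1 < E[m, n] < t2."""
    return (density > t1) & (density < t2)
```

The remaining steps (below-cell contrast, uniformity, size) further prune this boolean candidate mask.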
And step S22, precisely locating the plate in the coarsely located image using plate structural information, to filter out the remaining background. Specifically, coarse localization filters out most of the background; the remaining regions are then precisely located through plate structural information. A license plate consists of characters distributed along one straight line or two, so the plate region can be precisely located via the distribution of its characters.
The top-hat transform extracts locally bright regions by subtracting the opening of the image from the original; the bot-hat transform extracts locally dark regions by subtracting the original from the closing of the image. License plates come in two varieties, bright background with dark characters and dark background with bright characters, so a single morphological operation cannot extract the character regions of both at once. We therefore introduce the notion of pseudo-characters: the gaps between plate characters are treated as pseudo-characters, and both are extracted with paired morphological operations (for plates with bright characters on a dark background, the top-hat transform extracts the character regions and the bot-hat transform the pseudo-character regions; for plates with dark characters on a bright background, the roles are reversed). Combining character information with plate background information (the pseudo-characters) in this explicit way handles both types of license plate.
First, plate candidate regions are computed with the paired morphological operators (top-hat and bot-hat); the result is binarized and connected components are analyzed to obtain candidate regions for each character and pseudo-character, from which plate characters and pseudo-characters are extracted; straight-line detection over all candidate regions via the Hough transform then yields the precise plate position. Because most of the background has already been filtered out, the morphological operations run quickly over a small area. This localization scheme combining coarse and fine positioning effectively speeds up plate localization and, by excluding most of the background, improves its accuracy. Finally, the precisely located plate image is cropped and output.
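The paired top-hat/bot-hat operators can be sketched with plain NumPy min/max filters. The 3 x 3 square structuring element is an assumption for illustration; a real system would match the element to character stroke width:

```python
import numpy as np

def _filt(img, size, func):
    """Sliding min/max filter (grayscale erosion/dilation), edge-padded."""
    pad = size // 2
    padded = np.pad(img, pad, mode='edge')
    h, w = img.shape
    stack = [padded[i:i + h, j:j + w]
             for i in range(size) for j in range(size)]
    return func(np.stack(stack), axis=0)

def top_hat(img, size=3):
    """img - opening(img): pulls out small bright regions (bright chars)."""
    opened = _filt(_filt(img, size, np.min), size, np.max)  # erode, dilate
    return img - opened

def bot_hat(img, size=3):
    """closing(img) - img: pulls out small dark regions (dark chars)."""
    closed = _filt(_filt(img, size, np.max), size, np.min)  # dilate, erode
    return closed - img
```

A single bright pixel on a dark field survives the top-hat; a single dark pixel on a bright field survives the bot-hat, which is exactly the pairing the pseudo-character scheme relies on.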
And step S23, performing non-maximum suppression (NMS) and Hough-transform-based tilt correction on the precisely located plate image to obtain the final localized plate image. NMS is widely applied in object detection; its main purpose is to eliminate redundant interference and find the optimal detection position. It is a post-processing stage of detection and one of its key links. The heuristic window fusion algorithm detects non-overlapping targets well but is unsuitable for license plate detection: it splits the initial detection windows into disjoint subsets, computes the center of each subset, and keeps only one detection window per subset, which clearly invites many missed detections. Dalal et al. proposed mean-shift non-maximum suppression, which is computationally complex (detection windows must be represented in a 3-dimensional space of abscissa, ordinate, and scale, detection scores converted, an uncertainty matrix computed, and an iterative optimization run) and also requires tuning many parameters tied to the detector's step size, so it is rarely used today.
At present, most object detection uses a greedy non-maximum suppression algorithm because it is simple and efficient. The main steps are: (1) sort the initial detection windows by detection score, from high to low; (2) take the first window as the current suppression window; (3) suppress non-maxima: treat every initial window with a lower score than the current suppression window as a suppressed window, compute the overlap ratio between the two (intersection area / union area), and eliminate windows whose overlap ratio exceeds a set threshold; (4) if only the last initial window remains, stop; otherwise take the next unsuppressed window, in sorted order, as the suppression window and return to step (3).
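The four greedy-NMS steps above can be written compactly:

```python
import numpy as np

def greedy_nms(boxes, scores, iou_thresh=0.5):
    """Boxes as (x1, y1, x2, y2) rows; returns kept indices, best first."""
    boxes = np.asarray(boxes, dtype=np.float64)
    order = np.argsort(np.asarray(scores))[::-1]   # step 1: sort by score
    keep = []
    while order.size > 0:
        i = order[0]                               # step 2: top window
        keep.append(int(i))
        if order.size == 1:
            break
        rest = order[1:]
        # step 3: IoU of the suppression window against all lower-scored ones
        xx1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        yy1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        xx2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        yy2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.maximum(xx2 - xx1, 0) * np.maximum(yy2 - yy1, 0)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)
        order = rest[iou <= iou_thresh]            # step 4: recurse on survivors
    return keep
```

Two heavily overlapping plate detections collapse to the higher-scored one, while a distant detection survives untouched.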
The invention likewise uses this simple and efficient greedy non-maximum suppression algorithm. Hough-transform-based tilt correction is then applied to the license plate image output by non-maximum suppression. The Hough transform is a powerful feature extraction method: it uses local image information to accumulate evidence for all possible model instances, so it can readily incorporate additional evidence and remains effective even when only part of an instance is visible. In computer vision, the Hough transform is commonly applied to determining shape, position and geometric transformation parameters. It has been widely used since it was proposed, and in recent years experts and scholars have further studied its theoretical properties and application methods. As an effective algorithm for detecting straight lines, the Hough transform has good noise immunity and robustness.
The Hough transform maps features in image space to sets of points in a parameter space. Each point in the parameter space represents one instance of the model in image space, and image features are mapped into the parameter space by a function that produces all parameter combinations compatible with the observed image feature and the assumed model. Each image feature thus produces a surface in the multidimensional parameter space, but the surfaces produced by all image features belonging to the same model instance intersect at the point describing that common instance. The basis of the Hough transform is to generate these surfaces and identify the parameter points where they intersect. The license plate image after Hough-transform-based tilt correction is the secondarily positioned image of the system.
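For the straight-line case relevant to tilt correction, this image-space to parameter-space mapping can be illustrated with a minimal accumulator in pure Python (a sketch under assumed discretization steps, not the patent's implementation): each point (x, y) votes for every (theta, rho) satisfying rho = x*cos(theta) + y*sin(theta), and the most-voted bin identifies the dominant line, whose angle is the skew to correct.

```python
import math
from collections import Counter

def hough_dominant_angle(points, angle_steps=180, rho_step=1.0):
    """Vote each point into a discretized (theta, rho) parameter space; the bin
    where the most parameter-space curves intersect identifies the dominant line.
    Returns the line's skew angle in degrees."""
    votes = Counter()
    for x, y in points:
        for t in range(angle_steps):
            theta = math.pi * t / angle_steps
            rho = x * math.cos(theta) + y * math.sin(theta)
            votes[(t, round(rho / rho_step))] += 1
    (t_best, _), _ = votes.most_common(1)[0]
    theta_deg = 180.0 * t_best / angle_steps  # normal direction of the line
    return theta_deg - 90.0                   # skew angle of the line itself
```

For example, points lying on the 45-degree line y = x recover a skew of about 45 degrees, which a correction step would then rotate away.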
Based on the first to third embodiments of the low-illumination imaging license plate recognition method of the present invention, the fourth embodiment of the low-illumination imaging license plate recognition method of the present invention can be understood with reference to fig. 3, and the step S30 includes:
Step S31: perform feature extraction after RoI pooling on the positioned license plate (e.g., a plate image reading 'A02U10'), and process the extracted features (region features of size C × X × Y) through two convolutional layers and a rectangular pooling layer between them, transforming them into a feature sequence of size D × L, where D = 512 and L = 19; the feature sequence is denoted V = (v1, v2, ..., vL).
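The collapse of a C × X × Y feature map into a D × L sequence can be illustrated schematically. This is a simplification: plain average pooling over the height stands in for the learned convolution/pooling stack, so here D = C and L = Y rather than the patent's D = 512, L = 19.

```python
def features_to_sequence(feat):
    """Collapse a C x X x Y feature map (channels, height, width) into a
    sequence V = (v1, ..., vL): average over the height so that each image
    column yields one D-dimensional step (here D = C and L = Y)."""
    C = len(feat)
    X = len(feat[0])      # height
    Y = len(feat[0][0])   # width, which becomes the sequence length L
    V = []
    for col in range(Y):
        v = [sum(feat[c][row][col] for row in range(X)) / X for c in range(C)]
        V.append(v)
    return V  # L vectors, each of dimension C
```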
Step S32: apply the feature sequence V to a BRNN layer formed by two mutually separate recurrent neural networks (RNNs), one processing V forward and the other backward. The two hidden states are concatenated and input to a linear transformation layer with 37 outputs, which is passed to a Softmax layer that converts the 37 outputs into probabilities corresponding to 26 letters, 10 digits and one non-character class. Through this BRNN encoding, the feature sequence V is converted into a probability estimate q = (q1, q2, ..., qL) of the same length as L. A long short-term memory network (LSTM) is used to define memory cells containing three multiplicative gates, so as to selectively store relevant information and alleviate the vanishing-gradient problem in RNN training.
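As a toy illustration of the data flow in step S32 (not a trained network: fixed deterministic weights replace the learned BRNN/LSTM and linear-layer parameters), the following sketch concatenates a forward and a backward recurrence and maps each step to a 37-way Softmax distribution:

```python
import math

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def brnn_softmax(V, n_classes=37):
    """Toy bidirectional pass: a forward and a backward running average stand
    in for the two RNNs; their concatenated hidden states feed an illustrative
    fixed linear layer with n_classes outputs, followed by Softmax."""
    L, D = len(V), len(V[0])
    fwd, back = [None] * L, [None] * L
    h = [0.0] * D
    for t in range(L):                       # forward recurrence
        h = [(a + b) / 2.0 for a, b in zip(h, V[t])]
        fwd[t] = h
    h = [0.0] * D
    for t in reversed(range(L)):             # backward recurrence
        h = [(a + b) / 2.0 for a, b in zip(h, V[t])]
        back[t] = h
    probs = []
    for t in range(L):
        concat = fwd[t] + back[t]            # concatenated 2*D hidden state
        # illustrative fixed weights w[k][j] = ((k + j) % 3) - 1
        logits = [sum((((k + j) % 3) - 1) * concat[j] for j in range(len(concat)))
                  for k in range(n_classes)]
        probs.append(softmax(logits))
    return probs  # q = (q1, ..., qL), each a distribution over 37 classes
```

Each output step qt sums to 1, matching the per-step class distributions the CTC layer consumes next.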
Step S33: perform sequence decoding on the probability estimate q through a CTC layer, searching for the approximately optimal path with the maximum probability:
π* = argmax_π P(π | q)
where π* is the approximately optimal path with the maximum probability, whose collapsed form gives the recognized sequence (e.g., 'A02U10'), B is an operator that merges repeated labels at one position and removes non-character (blank) labels, and P is the probability operation. For example: B(a-ab-) = B(-aa--abb) = aab. The specific details of the CTC layer follow the existing CTC structure and are not described here.
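The B operator and the best-path (greedy) CTC decoding described above can be sketched directly, assuming '-' denotes the non-character (blank) class:

```python
def B(path, blank="-"):
    """CTC collapse operator: merge repeated labels, then drop blanks,
    e.g. B('a-ab-') = B('-aa--abb') = 'aab'."""
    out = []
    prev = None
    for ch in path:
        if ch != prev:
            out.append(ch)
        prev = ch
    return "".join(ch for ch in out if ch != blank)

def best_path_decode(q, alphabet):
    """Approximate most-probable labelling: take the argmax class at each
    step (the approximately optimal path) and apply the B operator."""
    path = "".join(alphabet[max(range(len(step)), key=step.__getitem__)]
                   for step in q)
    return B(path)
```

With a 4-step probability estimate whose per-step argmaxes spell "aa-b", the decoder returns "ab".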
Step S34: determine the loss function of the integrated deep network model from the approximately optimal path, and recognize the positioned license plate through this loss function. The way the positioned license plate is recognized through the model's overall loss function is the same as in the prior art, so it is not detailed here. Note that S31-S34 are step numbers added only for ease of reading and are not labeled in the relevant figures. Furthermore, as can be seen from the above, the integrated deep network model may include, in addition to the two main convolutional layers, the BRNN layer, the linear transformation layer and the CTC layer, a Softmax layer and a rectangular pooling layer between the two convolutional layers.
The method provided by the invention can in practice be embedded in an FPGA (field-programmable gate array) and applied in a night-vision camera or video surveillance system with low-illumination license plate recognition and real-time image output functions.
In the deep-learning-based end-to-end low-illumination imaging license plate recognition method described above, the original low-illumination license plate image is input into a deep-learning-based multi-scale context aggregation network for image processing, yielding a license plate image with improved recognizability. License plate positioning and tilt correction are then performed on that image to obtain the positioned license plate image, and the positioned plate is recognized by the integrated deep network model using a BRNN with CTC. The low-illumination license plate can therefore be recognized without segmenting the plate characters, with good recognition accuracy and robustness, while the computation remains simple enough to stay efficient and real-time.
Referring to fig. 4, fig. 4 is a block diagram illustrating the structure of an end-to-end low-illumination imaging license plate recognition device based on deep learning according to an embodiment of the present invention. As shown in fig. 4, the deep learning end-to-end low-illumination imaging license plate recognition device 40 of this embodiment includes an identification degree improving module 401, a positioning correction module 402 and a recognition module 403, which are respectively configured to perform the specific methods of S10, S20 and S30 in fig. 1; details can be found in the description of fig. 1 and are only summarized briefly here:
the identification degree improving module 401 is configured to input the original license plate low-illumination image obtained by the camera module into a multi-scale context aggregation network based on deep learning to perform image processing, so as to obtain a license plate image with improved identification degree.
The positioning correction module 402 is configured to perform license plate positioning and license plate tilt correction processing on the license plate image to obtain a positioned license plate image.
The recognition module 403 is configured to recognize the located license plate through an integrated deep network model; the integrated deep network model comprises a convolution layer, a BRNN layer, a linear transformation layer and a CTC layer.
Further, as can be seen in fig. 5, the identification degree improving module 401 may specifically include a preprocessing unit 4011, a training unit 4012, and an enhancing unit 4013:
the preprocessing unit 4011 is configured to preprocess the original license plate low-illumination Bayer image obtained through the camera module, and perform packing and transformation on pixel channels to obtain a pixel image used for inputting an FCN model for training.
The training unit 4012 is configured to train the pixel image based on the deep-learning CAN network and output a processed image.
The enhancement unit 4013 is configured to perform wide dynamic enhancement processing on the processed image and output the license plate image with improved restoration fidelity and image quality.
Further, referring to fig. 6, the positioning correction module 402 may specifically include a coarse positioning unit 4021, a fine positioning unit 4022, and a correction unit 4023:
the rough positioning unit 4021 is configured to perform rough positioning on the license plate image based on the environment information to filter a part of a background region of the license plate image.
The fine positioning unit 4022 is configured to perform fine positioning on the coarsely positioned license plate image based on license plate structure information, so as to filter the remaining background area of the license plate image.
The correction unit 4023 is configured to perform non-maximum suppression processing and Hough-transform-based tilt correction processing on the precisely positioned license plate image to obtain the positioned license plate image.
Further, referring to fig. 7, the recognition module 403 may specifically include a feature extraction unit 4031, a probability estimation unit 4032, an optimal path unit 4033, and an identification unit 4034:
A feature extraction unit 4031, configured to perform feature extraction after RoI pooling on the positioned license plate, and process the extracted features through two convolutional layers and a rectangular pooling layer between them, transforming them into a feature sequence of size D × L, where D = 512 and L = 19; the feature sequence is denoted V = (v1, v2, ..., vL).
A probability estimation unit 4032, configured to apply the feature sequence V at the BRNN layer to form two mutually separate RNNs, one processing the feature sequence V forward and the other backward; the two hidden states are concatenated and input into a linear transformation layer with 37 outputs, which is passed to a Softmax layer that converts the 37 outputs into probabilities corresponding to 26 letters, 10 digits and one non-character class; through this BRNN encoding, the feature sequence V is converted into a probability estimate q = (q1, q2, ..., qL) of the same length as L, while an LSTM is used to define memory cells containing three multiplicative gates, so as to selectively store relevant information and alleviate the vanishing-gradient problem in RNN training.
An optimal path unit 4033, configured to perform sequence decoding on the probability estimate q through a CTC layer and search for the approximately optimal path with the maximum probability:
π* = argmax_π P(π | q)
where π* is the approximately optimal path with the maximum probability, B is an operator that merges repeated labels at one position and removes non-character (blank) labels, and P is the probability operation.
The identification unit 4034 is configured to determine a loss function of the integrated deep network model from the approximately optimal path and recognize the positioned license plate through the loss function.
The deep-learning end-to-end based low-illumination imaging license plate recognition device of fig. 4 can input an original low-illumination license plate image into a deep-learning-based multi-scale context aggregation network for image processing to obtain a license plate image with improved recognizability, perform license plate positioning and tilt correction on that image to obtain a positioned license plate image, and recognize the positioned plate through the integrated deep network model using a BRNN with CTC. The low-illumination license plate can therefore be recognized without segmenting the plate characters, with good recognition accuracy and robustness, while the computation remains simple enough to stay efficient and real-time.
Fig. 8 is a schematic diagram of a terminal device according to an embodiment of the present invention. As shown in fig. 8, the terminal device 8 of this embodiment includes: a processor 80, a memory 81, and a computer program 82 stored in the memory 81 and operable on the processor 80, such as a program performing the deep-learning-based end-to-end low-illumination imaging license plate recognition method. The processor 80, when executing the computer program 82, implements the steps of the above method embodiments, e.g., S10-S30 shown in fig. 1. Alternatively, the processor 80, when executing the computer program 82, implements the functions of each module/unit in the system embodiments described above, for example, the functions of the modules 401 to 403 shown in fig. 4.
Illustratively, the computer program 82 may be partitioned into one or more modules/units that are stored in the memory 81 and executed by the processor 80 to implement the present invention. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution of the computer program 82 in the terminal device 8. For example, the computer program 82 may be divided into an identification degree improving module 401, a positioning correction module 402 and a recognition module 403 (modules in a virtual system), whose specific functions are as follows:
the identification degree improving module 401 is configured to input the original license plate low-illumination image obtained by the camera module into a multi-scale context aggregation network based on deep learning to perform image processing, so as to obtain a license plate image with improved identification degree.
And the positioning correction module 402 is configured to perform license plate positioning and license plate inclination correction processing on the license plate image to obtain a positioned license plate image.
The recognition module 403 is configured to recognize the located license plate through an integrated deep network model; the integrated deep network model comprises a convolution layer, a BRNN layer, a linear transformation layer and a CTC layer.
The terminal device 8 may be a desktop computer, a notebook, a palm computer, a cloud server, or other computing devices. Terminal device 8 may include, but is not limited to, a processor 80, a memory 81. Those skilled in the art will appreciate that fig. 8 is merely an example of a terminal device 8 and does not constitute a limitation of terminal device 8 and may include more or fewer components than shown, or some components may be combined, or different components, e.g., the terminal device may also include input-output devices, network access devices, buses, etc.
The Processor 80 may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic device, discrete hardware component, etc. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 81 may be an internal storage unit of the terminal device 8, such as a hard disk or a memory of the terminal device 8. The memory 81 may also be an external storage device of the terminal device 8, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card) and the like provided on the terminal device 8. Further, the memory 81 may also include both an internal storage unit of the terminal device 8 and an external storage device. The memory 81 is used for storing the computer programs and other programs and data required by the terminal device 8. The memory 81 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the system is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed system/terminal device and method can be implemented in other ways. For example, the above-described system/terminal device embodiments are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer readable storage medium. Based on such understanding, all or part of the flow of the method according to the embodiments of the present invention may also be implemented by a computer program, which may be stored in a computer-readable storage medium, and when the computer program is executed by a processor, the steps of the method embodiments may be implemented. Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc. The computer-readable medium may include: any entity or device capable of carrying the computer program code, recording medium, usb disk, removable hard disk, magnetic disk, optical disk, computer Memory, Read-Only Memory (ROM), Random Access Memory (RAM), electrical carrier wave signals, telecommunications signals, software distribution medium, and the like. It should be noted that the computer readable medium may contain content that is subject to appropriate increase or decrease as required by legislation and patent practice in jurisdictions, for example, in some jurisdictions, computer readable media does not include electrical carrier signals and telecommunications signals as is required by legislation and patent practice.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present invention, and are intended to be included within the scope of the present invention.

Claims (10)

1. A method for recognizing a license plate based on deep learning end-to-end low-illumination imaging is characterized by comprising the following steps:
inputting the original license plate low-illumination image acquired by the camera module into a multi-scale context aggregation network based on deep learning for image processing to acquire a license plate image with improved identifiability;
carrying out license plate positioning and license plate inclination correction processing on the license plate image to obtain a positioned license plate image;
identifying the positioned license plate through an integrated deep network model; the integrated deep network model comprises a convolutional layer, a bidirectional recurrent neural network (BRNN) layer, a linear transformation layer and a connectionist temporal classification (CTC) layer.
2. The end-to-end low-illumination imaging license plate recognition method based on deep learning of claim 1, wherein the step of inputting the low-illumination image of the original license plate acquired by the camera module into a multi-scale context aggregation network based on deep learning for image processing to obtain the license plate image with improved recognizability comprises the steps of:
preprocessing the original license plate low-illumination Bayer image acquired by the camera module, and packing and transforming pixel channels to obtain a pixel image for inputting a full convolution neural network (FCN) model for training;
the pixel image is trained based on a deep learning CAN network, and a processed image is output;
and carrying out wide dynamic enhancement processing on the processed image, and outputting the license plate image with improved reduction degree and image quality.
3. The end-to-end low-illumination imaging license plate recognition method based on deep learning of claim 1, wherein the license plate positioning and license plate inclination correction processing of the license plate image to obtain a positioned license plate image comprises:
roughly positioning the license plate image based on environmental information to filter a partial background area of the license plate image;
accurately positioning the license plate image after the coarse positioning based on the license plate structure information to filter the residual background area of the license plate image;
and carrying out non-maximum value inhibition processing and Hough transform-based inclination correction processing on the license plate image after accurate positioning to obtain the positioned license plate image.
4. The end-to-end low-illumination imaging license plate recognition method based on deep learning of claim 1, wherein the recognition of the positioned license plate through an integrated deep network model comprises:
performing feature extraction after region-of-interest (RoI) pooling on the positioned license plate, and processing the extracted features through two convolution layers and a rectangular pooling layer between the two convolution layers to transform the extracted features into a feature sequence D × L; wherein D = 512 and L = 19, and the feature sequence is denoted V = (v1, v2, ..., vL);
applying the feature sequence V at a BRNN layer to form two mutually separate recurrent neural networks (RNNs), wherein one RNN processes the feature sequence V forward and the other processes it backward; the two hidden states are concatenated and input into a linear transformation layer with 37 outputs, which is passed to a Softmax layer converting the 37 outputs into probabilities corresponding to 26 letters, 10 numbers and a non-character class; through the BRNN encoding, the feature sequence V is converted into a probability estimate q = (q1, q2, ..., qL) of the same length as L, and a long short-term memory network (LSTM) is used to define memory cells containing three multiplication gates, so as to selectively store relevant information and alleviate the gradient-vanishing problem in RNN training,
performing sequence decoding on the probability estimation q through a CTC layer, and searching an approximate optimal path with the maximum probability through the decoded probability estimation q:
π* = argmax_π P(π | q)
wherein π* is the approximately optimal path with the maximum probability, the B operator merges repeated labels at one position and removes non-character labels, and P is the probability operation;
and determining a loss function of the integrated depth network model through the approximate optimal path, and identifying the positioned license plate through the loss function.
5. A low-illumination imaging license plate recognition device based on deep learning end-to-end is characterized by comprising:
the identification degree improving module is used for inputting the original license plate low-illumination image acquired by the camera module into a multi-scale context aggregation network based on deep learning for image processing so as to acquire a license plate image with improved identification degree;
the positioning correction module is used for carrying out license plate positioning and license plate inclination correction processing on the license plate image so as to obtain a positioned license plate image;
the recognition module is used for recognizing the positioned license plate through an integrated depth network model; the integrated deep network model comprises a convolution layer, a BRNN layer, a linear transformation layer and a CTC layer.
6. The deep learning end-to-end based low-illumination imaging license plate recognition device of claim 5, wherein the identification degree improving module comprises:
the preprocessing unit is used for preprocessing the original license plate low-illumination Bayer image acquired by the camera module, and packing and converting pixel channels to acquire a pixel image for inputting an FCN (fuzzy C-means) model for training;
the training unit is used for training the pixel image based on the CAN network of deep learning and outputting a processed image;
and the enhancement unit is used for carrying out wide dynamic enhancement processing on the processed image and outputting the license plate image with improved reduction degree and image quality.
7. The deep-learning end-to-end based low-illumination imaging license plate recognition device of claim 5, wherein the positioning correction module comprises:
the coarse positioning unit is used for performing coarse positioning on the license plate image based on the environment information so as to filter a partial background area of the license plate image;
the fine positioning unit is used for accurately positioning the license plate image after coarse positioning based on license plate structure information so as to filter the residual background area of the license plate image;
and the correction unit is used for carrying out non-maximum value inhibition processing and Hough transform-based inclination correction processing on the license plate image after accurate positioning so as to obtain the positioned license plate image.
8. The deep learning end-to-end based low-illumination imaging license plate recognition device of claim 5, wherein the recognition module comprises:
the feature extraction unit is configured to perform feature extraction after RoI pooling on the positioned license plate, and process the extracted features through the two convolution layers and the rectangular pooling layer between the two convolution layers, so as to transform the extracted features into a feature sequence D × L; wherein D = 512 and L = 19, and the feature sequence is denoted V = (v1, v2, ..., vL);
a probability estimation unit, configured to apply the feature sequence V at a BRNN layer to form two mutually separate RNNs, wherein one RNN processes the feature sequence V forward and the other processes it backward; the two hidden states are concatenated and input into a linear transformation layer with 37 outputs, which is passed to a Softmax layer converting the 37 outputs into probabilities corresponding to 26 letters, 10 numbers and a non-character class; through the BRNN encoding, the feature sequence V is converted into a probability estimate q = (q1, q2, ..., qL) of the same length as L, and an LSTM is used to define memory cells containing three multiplication gates, so as to selectively store relevant information and alleviate the gradient-vanishing problem in RNN training;
the optimal path unit is used for performing sequence decoding on the probability estimation q through a CTC layer, and searching an approximate optimal path with the maximum probability through the decoded probability estimation q:
π* = argmax_π P(π | q)
wherein π* is the approximately optimal path with the maximum probability, the B operator merges repeated labels at one position and removes non-character labels, and P is the probability operation;
and the identification unit is used for determining a loss function of the integrated depth network model through the approximate optimal path and identifying the positioned license plate through the loss function.
9. A terminal device, comprising: a memory, a processor, and a computer program stored on the memory and executable on the processor; the computer program when executed by the processor implements the steps of the low-illumination imaging license plate recognition method of any of claims 1-4.
10. A computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the low-illumination imaging license plate recognition method according to any one of claims 1 to 4.
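The probability estimation step of claim 8 can be sketched in NumPy. The sizes D=512, L=19 and the 37 output classes (26 letters + 10 digits + 1 non-character class) come from the claims; the hidden size, the random weights, and the plain tanh RNN cells are simplifications for illustration only (the claims specify LSTM cells with three multiplicative gates):

```python
import numpy as np

D, L, NUM_CLASSES = 512, 19, 37   # from the claims: 26 letters + 10 digits + non-character
HIDDEN = 128                      # hidden size is an assumption, not stated in the claims

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def simple_rnn(seq, W_x, W_h):
    """Plain tanh RNN over an (L, D) sequence; returns (L, HIDDEN) hidden states."""
    h, out = np.zeros(HIDDEN), []
    for x in seq:
        h = np.tanh(x @ W_x + h @ W_h)
        out.append(h)
    return np.stack(out)

# Feature sequence V = (v1, ..., vL), each v of dimension D
V = rng.standard_normal((L, D))

# Two separate RNNs: one processes V forward, the other backward
W_xf, W_hf = 0.01 * rng.standard_normal((D, HIDDEN)), 0.01 * rng.standard_normal((HIDDEN, HIDDEN))
W_xb, W_hb = 0.01 * rng.standard_normal((D, HIDDEN)), 0.01 * rng.standard_normal((HIDDEN, HIDDEN))
h_fwd = simple_rnn(V, W_xf, W_hf)
h_bwd = simple_rnn(V[::-1], W_xb, W_hb)[::-1]

# Concatenate the two hidden states, project to 37 outputs, softmax to probabilities
h_cat = np.concatenate([h_fwd, h_bwd], axis=1)   # (L, 2*HIDDEN)
W_out = 0.01 * rng.standard_normal((2 * HIDDEN, NUM_CLASSES))
q = softmax(h_cat @ W_out)                       # (L, 37); each row sums to 1

print(q.shape)  # (19, 37)
```

The result q plays the role of the probability estimate q = (q1, ..., qL) that the CTC layer then decodes.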
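The optimal-path unit's decoding — pick the most probable label per step, then apply the B operator to collapse repeats and drop non-character marks — can be illustrated as follows; the class-index layout (letters 0–25, digits 26–35, non-character mark at index 36) is an assumption for the example:

```python
import numpy as np

BLANK = 36  # index of the non-character (blank) class; layout below is illustrative
ALPHABET = [chr(ord('A') + i) for i in range(26)] + [str(d) for d in range(10)]

def B(path, blank=BLANK):
    """The B operator: collapse consecutive repeated labels, then drop blanks."""
    out, prev = [], None
    for c in path:
        if c != prev and c != blank:
            out.append(c)
        prev = c
    return out

def greedy_decode(q):
    """Best-path CTC decoding of an (L, 37) probability sequence q."""
    path = [int(np.argmax(step)) for step in q]
    return ''.join(ALPHABET[c] for c in B(path))

# Toy probability sequence whose per-step argmax path is
# [blank, 'A', 'A', blank, 'B', '1', '1'] -> B operator -> "AB1"
path = [36, 0, 0, 36, 1, 27, 27]
q = np.eye(37)[path]
print(greedy_decode(q))  # AB1
```

This greedy best-path search is the standard approximation to the maximum-probability path that the claim's formula describes; an exact search over all paths is exponential in L.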
CN201911327687.9A 2019-12-20 2019-12-20 Low-illumination imaging license plate recognition method and device based on deep learning end-to-end Pending CN110969164A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911327687.9A CN110969164A (en) 2019-12-20 2019-12-20 Low-illumination imaging license plate recognition method and device based on deep learning end-to-end

Publications (1)

Publication Number Publication Date
CN110969164A true CN110969164A (en) 2020-04-07

Family

ID=70035722

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911327687.9A Pending CN110969164A (en) 2019-12-20 2019-12-20 Low-illumination imaging license plate recognition method and device based on deep learning end-to-end

Country Status (1)

Country Link
CN (1) CN110969164A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112163584A (en) * 2020-10-13 2021-01-01 安谋科技(中国)有限公司 Electronic device, and method and medium for extracting image features based on wide dynamic range
CN112200192A (en) * 2020-12-03 2021-01-08 南京风兴科技有限公司 License plate recognition method and device
CN112560856A (en) * 2020-12-18 2021-03-26 深圳赛安特技术服务有限公司 License plate detection and identification method, device, equipment and storage medium
CN113159204A (en) * 2021-04-28 2021-07-23 深圳市捷顺科技实业股份有限公司 License plate recognition model generation method, license plate recognition method and related components

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109508715A (en) * 2018-10-30 2019-03-22 南昌大学 A kind of License Plate and recognition methods based on deep learning
CN109840521A (en) * 2018-12-28 2019-06-04 安徽清新互联信息科技有限公司 A kind of integrated licence plate recognition method based on deep learning
CN110097106A (en) * 2019-04-22 2019-08-06 苏州千视通视觉科技股份有限公司 The low-light-level imaging algorithm and device of U-net network based on deep learning
CN110111269A (en) * 2019-04-22 2019-08-09 深圳久凌软件技术有限公司 Low-light-level imaging algorithm and device based on multiple dimensioned context converging network
CN110414451A (en) * 2019-07-31 2019-11-05 深圳市捷顺科技实业股份有限公司 It is a kind of based on end-to-end licence plate recognition method, device, equipment and storage medium

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
HUI LI et al.: "Toward End-to-End Car License Plate Detection and Recognition With Deep Neural Networks", IEEE Transactions on Intelligent Transportation Systems *
WANG Wei: "Research and Implementation of Algorithms for License Plate Character Segmentation and Character Recognition", Wanfang Database *
WANG Yongjie et al.: "Fast License Plate Localization Based on Multi-Information Fusion", Journal of Image and Graphics *

Similar Documents

Publication Publication Date Title
CN109635744B (en) Lane line detection method based on deep segmentation network
US11830230B2 (en) Living body detection method based on facial recognition, and electronic device and storage medium
CN110414507B (en) License plate recognition method and device, computer equipment and storage medium
CN112686812B (en) Bank card inclination correction detection method and device, readable storage medium and terminal
CN110969164A (en) Low-illumination imaging license plate recognition method and device based on deep learning end-to-end
CN111445459B (en) Image defect detection method and system based on depth twin network
CN104978567B (en) Vehicle checking method based on scene classification
CN106815583B (en) Method for positioning license plate of vehicle at night based on combination of MSER and SWT
Khalifa et al. Malaysian Vehicle License Plate Recognition.
CN107944354B (en) Vehicle detection method based on deep learning
CN110689003A (en) Low-illumination imaging license plate recognition method and system, computer equipment and storage medium
CN113592911B (en) Apparent enhanced depth target tracking method
CN116030396B (en) Accurate segmentation method for video structured extraction
CN113537211A (en) Deep learning license plate frame positioning method based on asymmetric IOU
CN112784834A (en) Automatic license plate identification method in natural scene
CN113052170A (en) Small target license plate recognition method under unconstrained scene
CN111027564A (en) Low-illumination imaging license plate recognition method and device based on deep learning integration
CN112613392A (en) Lane line detection method, device and system based on semantic segmentation and storage medium
WO2022121021A1 (en) Identity card number detection method and apparatus, and readable storage medium and terminal
WO2022121025A1 (en) Certificate category increase and decrease detection method and apparatus, readable storage medium, and terminal
CN112733851B (en) License plate recognition method for optimizing grain warehouse truck based on convolutional neural network
CN112053407B (en) Automatic lane line detection method based on AI technology in traffic law enforcement image
Shi et al. License plate localization in complex environments based on improved GrabCut algorithm
CN112528994A (en) Free-angle license plate detection method, license plate identification method and identification system
CN110633705A (en) Low-illumination imaging license plate recognition method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200407