CN110689003A - Low-illumination imaging license plate recognition method and system, computer equipment and storage medium - Google Patents


Info

Publication number
CN110689003A
CN110689003A (application CN201910776944.0A)
Authority
CN
China
Prior art keywords
license plate
image
low
characters
character
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910776944.0A
Other languages
Chinese (zh)
Inventor
张斯尧
王思远
谢喜林
张�诚
文戎
田磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Changsha Qianshitong Intelligent Technology Co Ltd
Original Assignee
Changsha Qianshitong Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Changsha Qianshitong Intelligent Technology Co Ltd filed Critical Changsha Qianshitong Intelligent Technology Co Ltd
Priority to CN201910776944.0A
Publication of CN110689003A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/60Type of objects
    • G06V20/62Text, e.g. of license plates, overlay texts or captions on TV images
    • G06V20/63Scene text, e.g. street names
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10Character recognition
    • G06V30/14Image acquisition
    • G06V30/146Aligning or centring of the image pick-up or image-field
    • G06V30/1475Inclination or skew detection or correction of characters or of image to be recognised
    • G06V30/1478Inclination or skew detection or correction of characters or of image to be recognised of characters or characters lines
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10Character recognition
    • G06V30/14Image acquisition
    • G06V30/148Segmentation of character regions
    • G06V30/153Segmentation of character regions using recognition of characters or words
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/60Type of objects
    • G06V20/62Text, e.g. of license plates, overlay texts or captions on TV images
    • G06V20/625License plates

Abstract

The invention discloses a low-illumination imaging license plate recognition method comprising the following steps: inputting the original low-illumination license plate image captured by a camera module into a deep-learning-based multi-scale context aggregation network for image processing to obtain a license plate image with improved recognizability; performing license plate location and tilt correction on the license plate image to obtain a located license plate image; segmenting the located license plate image into a plurality of license plate characters; and recognizing each license plate character. The invention also discloses a license plate recognition system, computer equipment, and a storage medium. The technical scheme of the invention aims to solve the problems of existing methods that image details are not clear enough, recognition details are not accurate enough, and the processing effect often varies greatly with the environment.

Description

Low-illumination imaging license plate recognition method and system, computer equipment and storage medium
Technical Field
The invention relates to the technical field of computer vision, in particular to a low-illumination imaging license plate recognition method, a license plate recognition system applying the low-illumination imaging license plate recognition method, computer equipment and a computer readable storage medium.
Background
With the development of computer vision, digital image processing, and intelligent transportation technology, license plate recognition is applied ever more widely in the intelligent transportation field. More and more license plate recognition products are on the market, but a ubiquitous problem remains: existing license plate recognition systems place high demands on image quality, while in complex application environments the captured images are often of low quality and cannot meet those demands, so the plate recognition rate is low. How to improve the recognition rate on low-quality images, and thus adapt to complex and changeable application environments, is therefore an important problem in license plate recognition research.
Images acquired under low illumination have a narrow gray-scale range, weak gray-scale variation, and high spatial correlation between adjacent pixels; these characteristics compress details, background, and noise into a narrow gray-scale band. To improve the visual quality of such images, they must be converted into a form better suited to human observation and computer processing so that useful information can be extracted.
Specifically, in license plate recognition, when the quality of a plate image is low, the mainstream approach is to process the single frame with conventional digital image processing techniques (such as filtering and image enhancement) to improve its quality. These are largely traditional methods, and they generally share the defects that image details remain unclear, recognition details are inaccurate, and the processing effect varies greatly with the environment.
Disclosure of Invention
The invention mainly aims to provide a low-illumination imaging license plate recognition method, and aims to solve the problems that image details are not clear enough, recognition details are not accurate enough, and processing effects are often changed greatly according to different environments in the existing method.
In order to achieve the purpose, the low-illumination imaging license plate recognition method provided by the invention comprises the following steps:
inputting the original license plate low-illumination image acquired by the camera module into a multi-scale context aggregation network based on deep learning for image processing to acquire a license plate image with improved identifiability;
carrying out license plate positioning and license plate inclination correction processing on the license plate image to obtain a positioned license plate image;
dividing the positioned license plate image into a plurality of license plate characters;
and identifying each license plate character.
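The four claimed steps can be read as a sequential pipeline. The following Python sketch shows only the data flow; every stage body is a trivial stand-in, and all names and logic are illustrative assumptions, not the patent's algorithms:

```python
import numpy as np

# Hypothetical stage implementations; each is a stand-in for the
# patent's corresponding step (names are illustrative, not from the patent).
def enhance_low_light(raw_image):
    # Step 1 stand-in: normalize the image to improve recognizability.
    img = raw_image.astype(np.float32)
    return (img - img.min()) / max(img.max() - img.min(), 1e-6)

def locate_and_correct(image):
    # Step 2 stand-in: pretend the plate occupies the central region.
    h, w = image.shape
    return image[h // 4: 3 * h // 4, w // 4: 3 * w // 4]

def segment_characters(plate, n_chars=7):
    # Step 3 stand-in: split the plate into n_chars vertical strips.
    return np.array_split(plate, n_chars, axis=1)

def recognize_character(char_img):
    # Step 4 stand-in: a dummy classifier keyed on mean intensity.
    return "0" if char_img.mean() < 0.5 else "1"

def recognize_plate(raw_image):
    plate = locate_and_correct(enhance_low_light(raw_image))
    return "".join(recognize_character(c) for c in segment_characters(plate))

rng = np.random.default_rng(0)
result = recognize_plate(rng.integers(0, 255, (64, 192)))
print(len(result))  # one output symbol per segmented character
```

Each stand-in would be replaced by the corresponding method of the embodiments described below.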
Preferably, the step of inputting the original license plate low-illumination image acquired by the camera module into a multi-scale context aggregation network based on deep learning for image processing to obtain the license plate image with improved recognizability includes:
preprocessing the original license plate low-illumination Bayer image acquired by the camera module, and packing and transforming pixel channels to obtain a pixel image for inputting an FCN model for training;
training the deep-learning CAN network on the pixel image and outputting the processed image;
and performing wide-dynamic enhancement on the processed image and outputting the license plate image with improved fidelity and image quality.
Preferably, the step of performing license plate positioning and license plate inclination correction processing on the license plate image to obtain a positioned license plate image includes:
roughly positioning the license plate image based on environmental information to filter a partial background area of the license plate image;
accurately positioning the license plate image after the coarse positioning based on the license plate structure information to filter the residual background area of the license plate image;
and carrying out non-maximum value inhibition processing and Hough transform-based inclination correction processing on the license plate image after accurate positioning to obtain the positioned license plate image.
Preferably, the step of segmenting the positioned license plate image into a plurality of license plate characters includes:
judging whether the positioned license plate image has a license plate frame or not;
when the positioned license plate image has a license plate frame, removing the license plate frame to obtain a frameless license plate image;
and segmenting the frameless license plate image into a plurality of license plate characters.
Preferably, the step of segmenting the frameless license plate image into a plurality of license plate characters includes:
counting the maximum pixel value maxvalue and the minimum pixel value minvalue over the whole license plate image, where maxvalue ≥ 0 and minvalue ≥ 0;
setting a proportionality coefficient coef, the ratio of the number of pixels to be enhanced to the total number of pixels, where 0 ≤ coef ≤ 1;
acquiring, for each pixel value in 0 to 255, the number of pixels taking that value, and storing the counts in a 1 × 256 array count(1, i), where i ≥ 0;
accumulating pixel counts from count(1, i) starting at i = 255: if the accumulated count pixel < width × height × coef, continue with i − 1; otherwise stop counting and record the current pixel value as index, where width (> 0) and height (> 0) are the width and height of the license plate image;
enhancing each point of the license plate image according to the enhancement formula (which appears only as an image in the original publication), where i and j (i ≥ 0, j ≥ 0) are the pixel coordinates in the license plate image.
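The statistics above can be sketched as follows; because the final per-pixel enhancement formula is an image in the original publication, the linear stretch used here (mapping [minvalue, index] onto [0, 255]) is only a plausible stand-in:

```python
import numpy as np

def find_index(img, coef):
    """Scan pixel values from 255 downward, accumulating histogram counts,
    and return the value at which the accumulated count first reaches
    width * height * coef (the 'index' recorded in the step above)."""
    height, width = img.shape
    count = np.bincount(img.ravel(), minlength=256)  # counts for values 0..255
    total, target = 0, width * height * coef
    for v in range(255, -1, -1):
        total += count[v]
        if total >= target:
            return v
    return 0

def enhance(img, coef=0.05):
    # The patent's per-pixel enhancement formula is not reproduced in the
    # source; a plausible stand-in is a linear stretch that maps
    # [minvalue, index] onto [0, 255].
    minvalue = int(img.min())
    index = find_index(img, coef)
    scale = 255.0 / max(index - minvalue, 1)
    return np.clip((img.astype(np.float32) - minvalue) * scale, 0, 255).astype(np.uint8)

img = np.tile(np.arange(0, 128, dtype=np.uint8), (32, 1))
out = enhance(img, coef=0.05)
print(out.max())  # 255 after the stretch
```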
Preferably, the step of removing the license plate frame when the license plate frame exists in the positioned license plate image to obtain a frameless license plate image includes:
acquiring a binary image of the positioned license plate image;
acquiring the upper and lower frames of the license plate of the binary image, and removing the upper and lower frames of the license plate;
and acquiring the left and right frames of the license plate of the binary image, and removing the left and right frames of the license plate.
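A minimal sketch of border removal on the binarized plate image; since the patent does not spell out how the frame rows and columns are detected, the foreground-ratio criterion below is an assumption:

```python
import numpy as np

def remove_borders(binary):
    """Hedged sketch: keep only rows/columns whose foreground ratio is
    moderate; solid frame lines (ratio near 1) and empty margins
    (ratio near 0) are dropped. The ratio thresholds are assumptions,
    not taken from the patent."""
    def keep(ratios, lo=0.05, hi=0.95):
        return np.where((ratios > lo) & (ratios < hi))[0]

    rows = keep(binary.mean(axis=1))
    cols = keep(binary.mean(axis=0))
    if rows.size == 0 or cols.size == 0:
        return binary
    return binary[rows.min(): rows.max() + 1, cols.min(): cols.max() + 1]

plate = np.zeros((20, 60), dtype=np.uint8)
plate[0, :] = 1; plate[-1, :] = 1; plate[:, 0] = 1; plate[:, -1] = 1  # frame
plate[5:15, 10:12] = 1  # a character stroke inside the frame
trimmed = remove_borders(plate)
print(trimmed.shape)
```

The upper/lower frame rows and left/right frame columns are removed in one pass here; the patent performs them as two separate steps.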
Preferably, the step of recognizing each of the license plate characters includes:
determining a character classifier;
extracting grayscale Histogram of Oriented Gradients (HOG) features, binary HOG features, and 16-value HOG features of the Chinese characters, digits, and letters to determine combined HOG features;
reducing the dimension of the combined HOG characteristic by using a kernel principal component analysis method;
inputting the combined HOG characteristics into a support vector machine for training and predicting to obtain recognition results corresponding to the Chinese characters, the numbers and the letters respectively;
and combining the recognition results respectively corresponding to the Chinese characters, the numbers and the letters to determine a final license plate character recognition result.
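A toy illustration of the combined-HOG idea: the method above combines grayscale, binary, and 16-value HOG features and classifies with a support vector machine after kernel principal component analysis; the sketch below only concatenates a minimal gradient-orientation histogram of the grayscale image with that of a binarized version:

```python
import numpy as np

def hog_descriptor(img, n_bins=9):
    """Minimal gradient-orientation histogram (a toy stand-in for a full
    HOG with cells and block normalization)."""
    gx = np.zeros_like(img, dtype=np.float32)
    gy = np.zeros_like(img, dtype=np.float32)
    gx[:, 1:-1] = img[:, 2:].astype(np.float32) - img[:, :-2]
    gy[1:-1, :] = img[2:, :].astype(np.float32) - img[:-2, :]
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)          # unsigned orientation
    bins = np.minimum((ang / np.pi * n_bins).astype(int), n_bins - 1)
    hist = np.bincount(bins.ravel(), weights=mag.ravel(), minlength=n_bins)
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist

def joint_hog(img):
    # Joint feature: concatenate the HOG of the grayscale image with the
    # HOG of a binarized version, loosely mirroring the combined-HOG idea.
    binary = (img > img.mean()).astype(np.float32) * 255
    return np.concatenate([hog_descriptor(img), hog_descriptor(binary)])

img = np.zeros((16, 16), dtype=np.uint8)
img[:, 8:] = 255          # a vertical edge -> horizontal gradients
feat = joint_hog(img)
print(feat.shape)         # (18,)
```

In the patent the combined feature is then reduced with kernel PCA and fed to an SVM; both are omitted here.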
In addition, in order to achieve the above object, the present invention further provides a license plate recognition system, which applies the low illumination imaging license plate recognition method according to any one of the above aspects.
Further, to achieve the above object, the present invention also provides a computer apparatus comprising: a memory, a processor, and a computer program stored on the memory and executable on the processor; the computer program, when executed by the processor, implements the steps of the low-illumination imaging license plate recognition method of any one of the above.
In addition, to achieve the above object, the present invention further provides a computer readable storage medium having a computer program stored thereon, where the computer program is executed by a processor to implement the steps of the low-illumination imaging license plate recognition method according to any one of the above aspects.
In the technical scheme of the invention, the low-illumination imaging license plate recognition method inputs the original low-illumination license plate image acquired by the camera module into a deep-learning-based multi-scale context aggregation network for image processing to obtain a license plate image with improved recognizability, which raises the signal-to-noise ratio and the displayed detail of the plate image and achieves clear imaging of the plate in video taken under low illumination. License plate location and tilt correction are then performed on the plate image to obtain a located plate image; the located plate image is segmented into a plurality of plate characters; and each plate character is recognized. The technical scheme of the invention reads plate characters with high reliability, good recognizability, and good robustness, uses simple and efficient computation, and meets real-time requirements.
Drawings
FIG. 1 is a schematic flow chart of a license plate recognition method according to a first embodiment of the present invention;
FIG. 2 is a diagram of an embodiment of a deep learning based CAN network;
FIG. 3 is a license plate image of Hough transform tilt correction according to an embodiment of the present invention;
FIG. 4 is a license plate image without being processed by an enhancement algorithm according to an embodiment of the present invention;
FIG. 5 is an effect diagram of the license plate image shown in FIG. 4 after being processed by an enhancement algorithm;
FIG. 6 is a flowchart of a license plate segmentation algorithm in an embodiment of the present invention;
FIG. 7 is a diagram illustrating the effects of a license plate projection curve before and after filtering according to an embodiment of the present disclosure;
fig. 8 is a flowchart of a license plate recognition based on a joint HOG according to an embodiment of the present invention.
The objects, features and advantages of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
In the following description, suffixes such as "module", "component", or "unit" used to denote elements are adopted only to facilitate the description of the invention and have no specific meaning in themselves. Thus "module", "component", and "unit" may be used interchangeably.
Referring to fig. 1, to achieve the above object, a first embodiment of the present invention provides a method for recognizing a license plate with low illumination imaging, including the following steps:
step S10, inputting the original license plate low-illumination image obtained by the camera module into a multi-scale context aggregation network based on deep learning for image processing to obtain a license plate image with improved recognizability;
step S20, license plate positioning and license plate inclination correction processing are carried out on the license plate image to obtain a positioned license plate image;
step S30, dividing the positioned license plate image into a plurality of license plate characters;
and step S40, recognizing each license plate character.
To address the specific difficulties of existing license plate recognition systems in low-illumination environments, improve the license plate recognition accuracy of monitoring systems, and meet real-time requirements, the invention provides a low-illumination imaging license plate recognition system based on a deep-learning multi-scale context aggregation network.
Based on the first embodiment of the low-illumination imaging license plate recognition method of the present invention, in the second embodiment of the low-illumination imaging license plate recognition method of the present invention, the step S10 includes:
and step S11, preprocessing the original license plate low-illumination Bayer image acquired by the camera module, and packing and transforming pixel channels to obtain a pixel image for inputting an FCN model for training. Specifically, preprocessing an original license plate low-illumination Bayer image: packing and transforming pixel channels, and processing the pixel channels into pixel images more suitable for FCN training input; for a Bayer array, the input is packed into four channels and the spatial resolution is reduced by half on each channel. For the X-Trans array, the original data is composed of 6X 6 arrangement blocks; the array of 36 lanes is packed into 9 lanes by swapping the adjacent lane elements. In addition, black pixels are eliminated and the data is scaled by a desired multiple (e.g., x 100 or x 300). The processed data is used as the input of the FCN model, and the output is an image with 12 channels, and the spatial resolution of the image is only half of that of the input. The data volume of the processed image is reduced, and meanwhile, the details of the image are not influenced, so that the method is beneficial to subsequent convolution processing. And outputting the processed low-illumination license plate pixel image.
Step S12: train on the pixel image with the deep-learning CAN network and output the processed image. Fig. 2 shows the CAN network architecture; the circles represent the nonlinear function LReLU. The first and last layers have three channels, the remaining layers are multi-channel, and the penultimate layer uses a 1 × 1 convolution without a nonlinear transformation to obtain the last layer. The core computation of the method is:

$$L_s^i = \Phi\!\left(\Psi^s\!\left(b_s^i + \sum_j L_{s-1}^j *_{r_s} K_s^{i,j}\right)\right)$$

where $L_s^i$ is the $i$-th feature map of layer $s$ ($i \ge 0$); $*_{r_s}$ denotes the dilated convolution operation with dilation rate $r_s$; $K_s^{i,j}$ is a 3 × 3 convolution kernel; $b_s^i$ is a bias term; $\Psi^s$ is an adaptive normalization function; and $\Phi$ is the pixel-level nonlinearity LReLU: $\Phi(x) = \max(\alpha x, x)$, with $\alpha = 0.2$.
Training the CAN structure requires image pairs as input for supervised training; after trying several loss functions, the mean squared error was found to be optimal in practice. The loss function is:

$$\mathcal{L} = \frac{1}{N} \sum_{p} \left( \hat{I}(p) - I^{*}(p) \right)^2$$

where $\hat{I}$ is the network output, $I^{*}$ is the reference image, and the sum runs over all $N$ pixels $p$.
after the CAN structure is established, data training is started. The algorithm of the invention uses an Adam optimizer in the training of the CAN network, and starts training from zero. During training, the network input is the original short exposure image and the real data in the sRGB space is the corresponding long exposure time image. The algorithm trains a network for each camera and uses the difference in the multiple of the exposure time between the original image and the reference image as our magnification factor (e.g., x 100, x 250, or x 300). In each training iteration, a 512 x 512 patch is randomly cropped for training and randomly enhanced with flipping, rotating, etc. The initial learning rate was set to 0.0001, and after 2000 iterations the learning rate dropped to 0.00001, training for a total of 4000 iterations. And after the training model is finished based on the corresponding database, outputting a corresponding sRGB space result image every time the preprocessed low-illumination Bayer image is input.
Step S13: apply wide-dynamic enhancement to the processed image and output the license plate image with improved fidelity and image quality. Specifically, the processed image undergoes wide-dynamic enhancement, which further improves the fidelity and quality of the low-illumination license plate image; the final plate image is output directly after this processing.
This part of the invention applies wide-dynamic processing with an improved local algorithm. A frame of video image is divided into two cases, a highlight portion and a low-light portion, and each portion is adjusted with its own parameters so that together they achieve the wide-dynamic effect. The low-light compensation formula (which appears only as an image in the original publication) yields Y2, the low-light compensation value, where k is a low-light compensation parameter usually set according to system requirements, I is the pixel value of the input video image, and Y1 is the correction value of the video image from the preprocessing stage. The highlight formula (likewise rendered as an image) yields Y3, where α is a highlight adjustment parameter, generally in the range 0.7 to 1, that adjusts the maximum value, and Max is the maximum pixel value of the video image. The corrected wide-dynamic video image output is:

Y = Y2 + Y3

where Y is the video image finally output by the system after wide-dynamic processing. Fig. 3 compares two groups of low-illumination license plate images after processing; the algorithm enhances image contrast while retaining more scene detail, and image brightness is significantly improved, making this an efficient deep-learning-based low-illumination imaging algorithm.
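Since the low-light and highlight formulas appear only as images in the source, the sketch below mirrors just the structure of the wide-dynamic step: split the frame into a low-light portion and a highlight portion, adjust each with its own parameter (k for low light, α in [0.7, 1] for highlights), and sum the two parts as Y = Y2 + Y3. All formulas inside are illustrative stand-ins:

```python
import numpy as np

def wide_dynamic(img, threshold=128, k=1.5, alpha=0.85):
    """Illustrative stand-in for the wide-dynamic step: the exact
    low-light and highlight formulas are not reproduced in the source,
    so this sketch only mirrors the structure of the description."""
    I = img.astype(np.float32)
    low = I < threshold
    y2 = np.where(low, np.clip(I * k, 0, 255), 0)              # boosted shadows
    y3 = np.where(~low, alpha * I + (1 - alpha) * I.max(), 0)  # compressed highlights
    return np.clip(y2 + y3, 0, 255)                            # Y = Y2 + Y3

frame = np.array([[10, 240], [100, 200]], dtype=np.uint8)
out = wide_dynamic(frame)
print(out[0, 0] > 10)   # True: shadows lifted
```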
Based on the first or second embodiment of the low-illumination imaging license plate recognition method, in the third embodiment license plate location and tilt correction are performed through multi-information fusion as preprocessing for license plate recognition. When judging whether a region is a license plate, the way humans locate objects can be borrowed. The edge density of the license plate region is greater than that of the surrounding regions, especially the region below it; this is important environmental information, and a large number of non-plate regions can be excluded by it. For a single-layer plate, all characters lie on one straight line; for a double-layer plate, all characters of the lower layer lie on one straight line; this is the structural information of the plate. Every plate character other than the Chinese character is a letter or a digit; this is the component information of the plate. With these three types of information, a good license plate location result can be obtained. Step S20 includes:
and step S21, roughly positioning the license plate image based on the environmental information to filter a part of background area of the license plate image. Specifically, a gray image is adopted to carry out coarse positioning on a license plate, and an edge image of the license plate image is obtained through a gradient operator [ -101 ]: 1) the edge density of the license plate area is higher, but if the density value is too high, the license plate area is not included; 2) the edge density of the license plate area is larger than that of the adjacent area; 3) the edge density distribution of the license plate area is uniform. Meanwhile, generally, for most license plate located scenes, the size distribution of the license plate in the image is within a certain known range.
According to the analysis, the coarse positioning of the license plate is realized through the following steps: setting the minimum size of the license plate in the image as Wmin,HminMaximum dimension of Wmax,HmaxWherein W ismin,Hmin,Wmax,HmaxRespectively a minimum width, a minimum height, a maximum width and a maximum height in the image.
1) The entire image is divided into small cells (cells), and the edge density of each cell (cell) is calculated. The size of each unit is w × H, wherein w ═ Hmin/2. For each small cell (cell), its edge density is calculated:
$$E_{m,n} = \frac{1}{w \times h} \sum_{(i,j) \in \mathrm{cell}(m,n)} e_{i,j}$$

where $E_{m,n}$ is the edge density of the cell in row $m$ and column $n$, $e_{i,j}$ is the pixel value at row $i$ and column $j$ of the edge map, and $m, n, i, j \ge 0$.
2) Filter the background region according to the edge density value. The edge density of a license plate region lies within a certain range, which can be expressed as:

$$A_{i,j} = \begin{cases} 1, & t_1 \le E_{i,j} \le t_2 \\ 0, & \text{otherwise} \end{cases}$$

where $A_{i,j} = 1$ indicates that the cell in row $i$ and column $j$ belongs to a license plate candidate region, $A_{i,j} = 0$ indicates that it belongs to the background, and $t_1$ and $t_2$ are the low and high thresholds on edge density. Cells outside this range are filtered out as background.
3) Filter the background region according to the edge density contrast between the current cell and the cells below it. By observation, the edge density of the license plate region is greater than that of the surrounding regions, especially the region below it. At this step the background is therefore filtered mainly by comparing the edge density of each cell with that of the H_max/h cells below it. If the edge density contrast between the current cell and the H_max/h cells below it is greater than a given threshold, the cell is considered to belong to a license plate candidate region; otherwise it is filtered out.
4) Filter the background region according to the uniformity of the edge density distribution of the license plate region. Because the edge density of a plate region is evenly distributed, when one cell belongs to a plate region, its neighborhood should contain cells with edge density close to its own. The number of cells in the left and right neighborhoods whose edge density is close to that of the current cell is therefore counted; if this number is greater than a given threshold, the current cell is judged to belong to a license plate candidate region, otherwise it belongs to the background and is filtered out.
5) Filter the background region according to the size of the license plate region. A license plate region has a definite size: when the number of cells contained in the connected region in which a cell lies is less than (W_min/w) × (H_min/h), or greater than (W_max/w) × (H_max/h), that connected region is filtered out.
Through the above steps, most of the background area is filtered.
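Steps 1) and 2) of the coarse positioning can be sketched as follows; the cell size and the thresholds t1, t2 are illustrative values:

```python
import numpy as np

def edge_density_cells(gray, cell_w=8, cell_h=8):
    """Compute the edge map with the horizontal gradient operator [-1 0 1]
    and the per-cell edge density E_{m,n} (mean edge magnitude per cell),
    as in the coarse-positioning steps above."""
    g = gray.astype(np.float32)
    edge = np.zeros_like(g)
    edge[:, 1:-1] = np.abs(g[:, 2:] - g[:, :-2])  # [-1 0 1] response
    H, W = edge.shape
    rows, cols = H // cell_h, W // cell_w
    cells = edge[:rows * cell_h, :cols * cell_w]
    cells = cells.reshape(rows, cell_h, cols, cell_w)
    return cells.mean(axis=(1, 3))

def candidate_mask(density, t1, t2):
    # A_{m,n} = 1 when t1 <= E <= t2 (candidate), else 0 (background).
    return ((density >= t1) & (density <= t2)).astype(np.uint8)

gray = np.zeros((32, 64), dtype=np.uint8)
gray[8:24, 16:48:4] = 255          # periodic strokes: a plate-like texture
E = edge_density_cells(gray)
A = candidate_mask(E, t1=10.0, t2=250.0)
print(E.shape, int(A.sum()) > 0)
```

The cells with character-like periodic edges survive the density test, while flat background cells are marked 0.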
And step S22, accurately positioning the license plate image after the coarse positioning based on the license plate structure information to filter the residual background area of the license plate image. In particular, most of the background area is filtered by the course of coarse positioning. And for the rest areas, accurately positioning through the license plate structure information. A license plate is composed of characters distributed on a straight line or two straight lines, and the license plate region can be accurately positioned through the distribution information of the license plate characters.
Top-hat operation (top-hat) can extract a local bright area by making a difference between an original image and an opening operation image; the low-hat transform (bot-hat) can extract a local dark region by subtracting the original image from the closed-loop computed image. The license plate has two types of bright-bottom dark characters and dark-bottom bright characters, and the character regions cannot be simultaneously and successfully extracted for license plate positioning only by being suitable for single morphological operation. Therefore, the concept of the pseudo characters is put forward, namely, the interval parts between the license plate characters are regarded as the pseudo characters, the character areas of the license plate characters are extracted through paired morphological operations (for the license plate with bright characters on the dark bottom, the character areas of the license plate characters are extracted through top hat operation, the pseudo character areas of the license plate characters are extracted through low hat operation, for the license plate with dark characters on the bright bottom, the pseudo character areas of the license plate characters are extracted through top hat operation, the character areas of the license plate characters are extracted through low hat operation), the character information and the license plate background information (the pseudo characters) are combined in an explicit mode, and two types of license plates.
Firstly, the license plate candidate regions are calculated through the paired morphological operators (top-hat and bottom-hat operations); the result is binarized and analyzed for connected components to obtain a candidate region for each character and pseudo character; the license plate characters and pseudo characters are extracted; and straight-line detection is performed on all candidate regions through the Hough transform to obtain the accurate position of the license plate. Since most of the background area has already been filtered out, the morphological operations run quickly over the small remaining area. This license plate positioning method combining coarse and fine positioning effectively improves positioning speed, and improves positioning accuracy by excluding most of the background image. Finally, the accurately positioned license plate image is cropped and output.
And step S23, performing non-maximum suppression processing and Hough-transform-based tilt correction on the accurately positioned license plate image to obtain the positioned license plate image. In particular, non-maximum suppression is widely applied in object detection; its main purpose is to eliminate unnecessary interference and find the optimal object detection position. Non-maximum suppression is a post-processing step of detection and one of its key links. The heuristic window fusion algorithm detects non-overlapping targets well but is not suitable for vehicle license plate detection: it divides the initial detection windows into several non-overlapping subsets, computes the center of each subset, and keeps only one detection window per subset, so it clearly tends to cause a large number of missed detections. Dalal et al. proposed mean-shift non-maximum suppression, which is computationally complex: the detection windows must be represented in a 3-dimensional space (abscissa, ordinate, scale), the detection scores converted, an uncertainty matrix calculated and an iterative optimization run; it also requires tuning many parameters tied to the detector step size, so it is rarely used at present.
Currently, most target detection uses a non-maximum suppression algorithm based on a greedy strategy, because it is simple and efficient. The main steps are as follows: (1) sort the initial detection windows by detection score from high to low; (2) take the 1st initial detection window as the current suppression window; (3) non-maximum suppression: treat every initial window with a lower detection score than the current suppression window as a suppressed window, calculate the overlap ratio between the current suppression window and each suppressed window (intersection area / union area), and eliminate the windows whose overlap ratio is higher than a set threshold; (4) if only the last initial detection window remains, stop; otherwise take the next unsuppressed window, in sorted order, as the suppression window and go to step (3).
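The four steps above can be sketched directly. Boxes are given as [x1, y1, x2, y2]; the 0.5 overlap threshold is an illustrative choice, not a value fixed by the patent:

```python
import numpy as np

def greedy_nms(boxes, scores, iou_thresh=0.5):
    """Greedy non-maximum suppression; returns indices of kept windows."""
    boxes = np.asarray(boxes, float)
    order = np.argsort(scores)[::-1]          # step (1): sort by score, high to low
    keep = []
    while order.size > 0:
        i = order[0]                          # step (2): current suppression window
        keep.append(int(i))
        rest = order[1:]
        # step (3): overlap ratio = intersection area / union area
        x1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        y1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        x2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        y2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)
        order = rest[iou <= iou_thresh]       # step (4): continue with unsuppressed windows
    return keep
```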
The invention likewise uses this simple and efficient greedy non-maximum suppression algorithm. Hough-transform-based tilt correction is then applied to the license plate image after non-maximum suppression. The Hough transform is a powerful feature extraction method: using local image information, it efficiently accumulates evidence for all possible model instances, so it can conveniently incorporate additional external data and can recover useful information from only part of the instances. The Hough transform is generally applied in computer vision to the determination of shape, position and geometric transformation parameters. Since it was proposed, the Hough transform has been widely used, and in recent years experts and scholars have further studied its theoretical properties and application methods. As an effective algorithm for detecting straight lines, the Hough transform offers good anti-interference performance and robustness.
The Hough transform method involves a mapping from features in image space to a collection of points in parameter space. Each point in the parameter space represents an instance of the model in image space, and the image features are mapped into the parameter space by a function that produces all parameter combinations compatible with the observed image features and the assumed model. Each image feature produces a different plane in the multidimensional parameter space, but all planes produced by the image features belonging to the same model instance intersect at the point describing that common instance. The basis of the Hough transform is to generate these planes and identify the parameter points where they intersect. The license plate image after Hough-transform-based tilt correction is the image after the secondary positioning of the system. An example of a license plate image after Hough-transform tilt correction is shown in fig. 3.
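As a sketch of the tilt-estimation step, the dominant near-horizontal line in a binary edge map can be found with a small NumPy Hough accumulator. The ±30° search range and 0.5° step are illustrative assumptions (the patent fixes neither); a full implementation would then rotate the plate by the returned angle:

```python
import numpy as np

def hough_tilt_angle(edges, max_tilt_deg=30.0, step_deg=0.5):
    """Estimate plate tilt: peak of the (theta, rho) accumulator near theta = 90 deg."""
    ys, xs = np.nonzero(edges)
    thetas = np.deg2rad(
        np.arange(90.0 - max_tilt_deg, 90.0 + max_tilt_deg + step_deg, step_deg))
    diag = int(np.ceil(np.hypot(*edges.shape)))
    acc = np.zeros((thetas.size, 2 * diag + 1), int)
    for t, theta in enumerate(thetas):
        # rho = x*cos(theta) + y*sin(theta); shift by diag so indices are non-negative
        rho = np.round(xs * np.cos(theta) + ys * np.sin(theta)).astype(int) + diag
        np.add.at(acc[t], rho, 1)
    best_t, _ = np.unravel_index(acc.argmax(), acc.shape)
    return np.rad2deg(thetas[best_t]) - 90.0   # tilt of the line relative to horizontal
```

A perfectly horizontal character baseline yields a tilt of 0°; a plate photographed at an angle yields the signed correction angle.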
Based on the first to third embodiments of the low-illumination imaging license plate recognition method of the present invention, in a fourth embodiment of the low-illumination imaging license plate recognition method of the present invention, the step S30 includes:
step S31, judging whether the positioned license plate image has a license plate frame;
step S32, when the positioned license plate image has a license plate frame, removing the license plate frame to obtain a frameless license plate image;
and step S33, dividing the borderless license plate image into a plurality of license plate characters.
Two kinds of license plate images may result from positioning: plates with a frame and plates without a frame. After the license plate candidate area is rotated to horizontal, the license plate can be accurately located, that is, the license plate frame removed. Statistical analysis of the test data shows that the frames of the positioned and rotated license plate candidate areas fall into two categories: one is the frame of the license plate itself; the other is the white background around the license plate, which together with the plate forms the candidate area and can be regarded as a frame. Frame processing covers the upper and lower frames and the left and right frames of the license plate. The upper and lower frames are simpler to handle and fall into two categories: the white frame of the license plate itself, and the white background above and below the license plate. The left and right frames can be classified into the same two categories; however, owing to the characteristics of the image, the upper and lower frames are generally wider than the left and right ones, and the left and right frames are more complicated to handle.
Based on the fourth embodiment of the low-illumination imaging license plate recognition method of the present invention, in the fifth embodiment of the low-illumination imaging license plate recognition method of the present invention, before the characters are segmented it should be noted that, owing to varying illumination, dirty or worn license plates and similar factors, the contrast between background and characters in the gray-scale image of the license plate may be weak. This complicates the projection-based character segmentation of the next step, so the character contrast of the license plate image must be enhanced before segmentation.
Character pixels account for roughly 20% of all pixels in the license plate region. For some pictures the difference between characters and background is not very large for various reasons, but in general the pixel values of the characters are higher than those of the background. This characteristic can therefore be exploited by enhancing the brightest 20% of pixels in the license plate region and suppressing the others, thereby enhancing the target characters and suppressing the background. The license plate enhancement algorithm adopted in the present invention is given in steps S33a to S33e below. The step S33 includes:
step S33a, counting the maximum pixel value maxvalue and the minimum pixel value minvalue of the pixel points in the whole license plate image, wherein maxvalue is more than or equal to 0, and minvalue is more than or equal to 0.
Step S33b, setting a proportionality coefficient coef of the pixel number to be enhanced to all the pixel numbers, wherein coef is more than or equal to 0 and less than or equal to 1;
step S33c, acquiring the number of pixels occurring at each of the pixel values 0-255, and storing these counts in a 1 × 256 array count(1, i), wherein i is more than or equal to 0;
step S33d, counting the number of pixel points from count(1, i) starting at i = 255; if the cumulative count is less than width × height × coef, continue counting with i-1; otherwise stop counting and record the current pixel value as index, wherein width is the width of the license plate image, width is more than 0, height is the height of the license plate image, and height is more than 0;
step S33e, each point of the license plate image is enhanced, wherein the enhancement formula is as follows:
(The enhancement formula appears in the source only as an image and is not reproduced here.)
wherein i and j are the pixel coordinates in the license plate image, i is more than or equal to 0 and j is more than or equal to 0. After the transformation the image is enhanced; if the original image already has good contrast, the transformation will not degrade it. The contrast before and after enhancement is shown in fig. 4 and fig. 5.
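Since the enhancement formula of step S33e survives only as an image in the source, the following NumPy sketch implements steps S33a–S33d literally and substitutes an assumed piecewise mapping for step S33e (the brightest coef fraction of pixels pushed to 255, the rest compressed below the threshold); the exact mapping in the patent may differ:

```python
import numpy as np

def enhance_plate(img, coef=0.2):
    """Enhance the brightest coef fraction of pixels and suppress the rest."""
    img = np.asarray(img)
    minvalue, maxvalue = int(img.min()), int(img.max())   # step S33a
    count = np.bincount(img.ravel(), minlength=256)       # step S33c
    target = img.size * coef                              # step S33b: coef of all pixels
    total, index = 0, 255
    for v in range(255, -1, -1):                          # step S33d: scan from 255 down
        total += count[v]
        if total >= target:
            index = v
            break
    out = img.astype(float)
    bright = out >= index
    # assumed step S33e: compress background below the threshold, saturate characters
    if index > minvalue:
        out[~bright] = (out[~bright] - minvalue) / (index - minvalue) * index
    out[bright] = 255.0
    return out.astype(np.uint8)
```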
As the comparison between fig. 4 and fig. 5 shows, the contrast of the first two license plate regions converted directly from RGB to gray-scale is not very pronounced; after enhancement the contrast between background and characters is clearly improved, and a license plate region with merely average contrast looks better still after enhancement. Such enhancement benefits the next step of character segmentation. Because the character segmentation method adopted by the invention is based on a gray projection algorithm, low character contrast leaves the peaks and valleys of the gray projection curve indistinct; after image enhancement, the gray projection curve expresses its peaks and valleys well, which helps segment the characters accurately.
The gray projection character segmentation used by the invention makes full use of the characteristics of license plate characters and has great advantages over ordinary projection segmentation, which splits characters at the valley points of the gray projection curve. The invention improves on the ordinary projection algorithm and greatly raises segmentation accuracy. As the projection curve of the license plate characters shows, the five characters to the right of the license plate dot are digits (on a few license plates, Chinese characters). For these characters the projection curve has either a bimodal or a unimodal structure, and the invention exploits this property when segmenting the characters. The flow chart of the character segmentation algorithm of the present invention is shown in FIG. 6.
Before character segmentation, the pixel values of the previously enhanced license plate image are accumulated column by column to obtain the projection curve of the license plate. The raw projection curve, however, carries a lot of noise that makes it unsmooth and disturbs character segmentation, so the curve must first be smoothed. The algorithm smooths the projection curve with Gaussian filtering, using the kernel [0.25, 0.5, 1, 0.5, 0.25]. FIG. 7 shows the license plate projection curve before and after filtering. The filtered curve is clearly much smoother than the original, and peaks caused by noise in the original curve disappear after filtering, so noise-induced peak and valley points are not detected during peak-valley detection.
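A minimal sketch of the projection curve and its smoothing with the stated 5-tap kernel. The kernel is normalized here so the curve keeps its scale, an assumption, since the patent lists only the raw taps [0.25, 0.5, 1, 0.5, 0.25]:

```python
import numpy as np

def smoothed_projection(plate):
    """Column-wise gray projection of the plate, smoothed with the 5-tap kernel."""
    proj = plate.sum(axis=0).astype(float)           # accumulate pixel values per column
    kernel = np.array([0.25, 0.5, 1.0, 0.5, 0.25])
    kernel = kernel / kernel.sum()                   # normalization (assumed)
    return np.convolve(proj, kernel, mode="same")    # smooth out noise peaks
```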
And performing character segmentation according to the projection curve by using the filtered license plate gray level projection image. The present invention uses an improved projection method for character segmentation. The general projection method for segmenting characters directly utilizes valley points to segment the characters, and the projection method of the invention fully considers the projection characteristics of license plate characters when segmenting the characters, and comprises the following specific steps:
step 1: according to the gray projection curve of the license plate, at most five double-peak structures appear in the rear part of the plate, so the first five largest valley points are searched; for each, it is judged whether it is a valley point inside a double-peak structure, and if so, the start and end positions of that double-peak structure are recorded.
Step 2: determine the width of the license plate character. If a bimodal structure was detected in step 1, the character width is taken as the average width of all detected bimodal structures; otherwise it is taken as the maximum of the first 3 unimodal widths.
And step 3: set the character starting point to the segmentation point between the second and third characters, and the end point to the last valley point of the license plate. Go to step 4 if a bimodal structure was detected in step 1, otherwise go to step 5.
And 4, step 4: set the starting point of the temporary character segmentation segment to the character starting point and its end point to the start position of the next double-peak structure, then detect within the segment. If one peak structure lies in the segment, that peak is a single character. If two peak structures lie in the segment, judge whether they form one double-peak character or two single-peak characters; the judgment rule compares the two peak widths with each other and with the character width: if the sum of the two peak widths is less than 1.2 times the character width and the two peak widths differ very little, the two peaks are taken as the double-peak projection of one character. Otherwise the two peak structures are not the projection of a double-peak character; the previous peak structure is judged to be a character and is segmented off, and the temporary segmentation segment is then updated as follows: move its starting point to just after the segmented peak, leaving the end point unchanged; if the starting point then equals the end point, move the starting point to the end position of the previous double-peak structure and the end point to the start of the next double-peak structure, or to the character end point if no double-peak structure follows. Repeat step 4 until segmentation reaches the end point.
And 5: reaching this step means that no double-peak structure was detected in step 1, but this does not mean the license plate contains no characters with a double-peak structure, and their presence cannot be excluded. Segmentation therefore starts directly from the character starting point and continues until 5 characters have been segmented. During segmentation it must be checked whether two adjacent peak structures form the double-peak curve of one character; the detection method is the same as in step 4, judging by the two peak widths and their relationship to the character width.
Step 6: segment the first two characters according to the five characters already segmented, taking the maximum width of those five characters as the width of the first two. The first two characters are letters or Chinese characters and also have a bimodal structure, so using the maximum width of the last five characters as the width of the preceding characters is reasonable. The first two characters are segmented as follows: move forward by one character width in pixels from the segmentation point between the second and third characters, and take the nearest valley to that position as the segmentation point between the first and second characters of the license plate. The start position of the first character is determined in the same way.
And 7: check whether the segmented character sequence conforms to the characteristics of a license plate character sequence. These characteristics can be expressed as follows: let dis1 be the width vector of the first two characters, dis2 the width vector of the last five characters, width the license plate width and height the license plate height; then a reasonable license plate character sequence must satisfy:
min(min(dis1), min(dis2)) > width/10
max(dis2) > width/5
height/min(dis1) < 3
In this way the character sequence can be segmented from the license plate region, and the segmentation algorithm is highly robust even for license plates retaining part of the left and right frames.
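The three constraints above can be wrapped into a small check. The inequality directions are kept exactly as printed in the source; dis1 and dis2 are the character-width lists defined in step 7:

```python
def plausible_char_sequence(dis1, dis2, width, height):
    """Check a segmented character sequence against the license plate constraints."""
    return (min(min(dis1), min(dis2)) > width / 10   # no character implausibly narrow
            and max(dis2) > width / 5                # widths consistent with plate width
            and height / min(dis1) < 3)              # aspect ratio of leading characters
```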
Based on the fourth embodiment of the low-illumination imaging license plate recognition method of the present invention, in the sixth embodiment of the low-illumination imaging license plate recognition method of the present invention, the step S32 includes:
step S32a, acquiring a binary image of the positioned license plate image;
step S32b, acquiring the upper and lower frames of the license plate of the binary image, and removing the upper and lower frames of the license plate;
and S32c, acquiring the left and right frames of the license plate of the binary image, and removing the left and right frames of the license plate.
For removing the upper and lower frames of the license plate, the invention adopts the following steps to process:
1. removing the upper and lower frames;
a. A binarization threshold for the license plate candidate area is obtained with the OTSU (Otsu algorithm) method to produce a binary image of the candidate area. To eliminate the influence of the license plate tilt angle, the row sums (rowsum) of the middle part of the binary image are computed and then processed as follows:
(The rowsum formulas appear in the source only as images and are not reproduced here.)
b. The algorithm of the invention searches, within a distance of 0.75 × height from the middle toward both ends, for the boundaries where rowsum drops to zero in the vertical direction; the image height is used as the reference distance because it makes removal of the upper and lower frames of a typical license plate more accurate. After this processing the upper and lower frames of most images are handled, and the left and right frames of the license plate region can then be removed.
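Steps a–b can be sketched under stated assumptions: the rowsum formulas survive only as images in the source, so rowsum is taken here as the per-row foreground count over the middle half of the columns, and the scan runs from the middle toward both ends within 0.75 × height:

```python
import numpy as np

def remove_top_bottom_frame(bw):
    """Trim upper/lower plate frames from a binary plate image (assumed rowsum)."""
    h, w = bw.shape
    rowsum = bw[:, w // 4: 3 * w // 4].sum(axis=1)    # middle columns only (assumed)
    mid, reach = h // 2, int(0.75 * h)
    top, bot = 0, h - 1
    for i in range(mid, max(mid - reach, 0) - 1, -1): # scan upward for a zero row
        if rowsum[i] == 0:
            top = i + 1
            break
    for i in range(mid, min(mid + reach, h - 1) + 1): # scan downward for a zero row
        if rowsum[i] == 0:
            bot = i - 1
            break
    return bw[top:bot + 1]
```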
For removing the left and right frames of the license plate, the invention adopts the following steps to process:
2. removing the left and right frames;
c. The same method used for removing the upper and lower frames is applied to find the boundaries of the left and right frames: left1, right1.
d. A binary image is reconstructed and the same projection method as in step c is used to find the boundaries left2, right2. This binary image is produced by binarizing according to the h value of the HSI model of the license plate region: first the range of h values in the middle of the license plate region is counted, and then the whole license plate region is binarized according to this range to obtain the required binary image.
e. The final boundary is determined from the two sets of boundary information obtained in steps c and d, and can be expressed by the following equations:
left=max(left1,left2)
right=min(right1,right2)
After the frame is removed in these two steps, the resulting license plate area is more accurate than the originally positioned one, though not absolutely accurate; the residue can be regarded as an error introduced during frame removal. The segmentation algorithm adopted by the invention tolerates the small error left when the license plate frame is removed: even if the left and right frames are not removed completely, the correct segmentation of the characters is not affected.
Based on the first to sixth embodiments of the low-illumination imaging license plate recognition method of the present invention, in a seventh embodiment of the low-illumination imaging license plate recognition method of the present invention, the step S40 includes:
step S41, determining a character classifier;
step S42, extracting gray scale direction gradient Histogram (HOG) features, binary HOG features and 16-value HOG features of Chinese characters, numbers and letters to determine combined HOG features;
step S43, reducing the dimension of the combined HOG characteristic by using a kernel principal component analysis method;
step S44, inputting the combined HOG characteristics into a support vector machine for training and prediction to obtain recognition results corresponding to Chinese characters, numbers and letters respectively;
and step S45, combining the recognition results respectively corresponding to the Chinese characters, the numbers and the letters to determine the final license plate character recognition result.
The segmented license plate characters need only be recognized before output. The invention proposes combined histogram-of-oriented-gradients features with a kernel principal component analysis method, integrating the advantages of the HOG features of the binary image, the gray-scale image and the 16-value image, which extracts the structural features of Chinese characters particularly well. Combining the HOG features raises their dimensionality, so to shorten feature extraction time the system reduces the dimension with kernel principal component analysis. For character recognition a support vector machine is adopted, which classifies small-sample problems well.
1. License plate character recognition algorithm based on combined HOG (histogram of oriented gradient) features
A standard vehicle license plate has 7 characters, and the 7 segmented characters are recognized herein. License plate characters consist of English letters, Chinese characters and digits, whose characteristics differ: Chinese characters have dense strokes and complex outlines, while digits and English letters have clear outlines and simple structures. Different classifiers are therefore used herein for Chinese characters and for English letters and digits, extracting features from each separately. The license plate recognition process of the invention is as follows: first the classifier for the character is determined. Then the gray-scale histogram-of-oriented-gradients (HOG) features, binary HOG features and 16-value HOG features of the Chinese characters and of the digits and letters are extracted and combined into joint HOG features, whose dimension is then reduced with kernel principal component analysis. Finally the joint HOG features of the Chinese characters and of the alphanumerics are fed to support vector machines for training and prediction, and the recognition results for Chinese characters and alphanumerics are merged into the final license plate character recognition result. The license plate recognition process based on the joint HOG is shown in fig. 8.
2. Histogram of directional gradients
The core idea of the histogram of oriented gradients is to compute statistics of the local gradients of the detected object in the image. Since gradients correspond to edge contours, the shape of an object can be described by its gradient distribution. For the HOG feature, a single character is divided into small connected regions called cell units; a gradient histogram is accumulated over the pixels of each cell unit, and the concatenation of these histograms represents the feature of the detected object. To improve robustness to illumination variation, the histograms are contrast-normalized over larger areas of the segmented character: the density of each local histogram within a block is calculated, and each cell unit in the block is normalized by this density. After normalization, the HOG features adapt better to illumination changes and shadows.
The specific implementation process of the HOG is as follows:
(1) Calculating the image gradient: the template [-1, 0, 1] is convolved with the segmented single character to obtain the horizontal gradient component Gh(x, y), as shown in formula (1); the template [-1, 0, 1]T is then convolved with the character to obtain the vertical gradient component Gv(x, y), as shown in formula (2); finally the gradient magnitude M(x, y) and gradient direction θ(x, y) of each pixel are computed as in formulas (3) and (4), where f(x, y) is the pixel value at that point:

Gh(x, y) = f(x+1, y) - f(x-1, y)    (1)

Gv(x, y) = f(x, y+1) - f(x, y-1)    (2)

M(x, y) = sqrt(Gh(x, y)^2 + Gv(x, y)^2) ≈ |Gh(x, y)| + |Gv(x, y)|    (3)

θ(x, y) = arctan(Gv(x, y) / Gh(x, y))    (4)
(2) Constructing the gradient direction histogram: each pixel in a cell unit casts a vote into a histogram according to its gradient direction. The gradient direction range can be 0-180 degrees or 0-360 degrees; previous experiments show that 0-180 degrees works well. The single character image is divided into several cell units of 8 × 8 pixels each, and the gradient range is divided into 9 direction bins, so the gradient information of the 8 × 8 pixels is voted into these 9 direction bins. Specifically, the histogram voting is weighted: the gradient magnitude of each pixel serves as its voting weight.
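Steps (1)–(2) can be sketched as a per-cell computation: the [-1, 0, 1] gradients of formulas (1)–(2), the magnitude approximation of formula (3), and magnitude-weighted voting into 9 unsigned-orientation bins over 0–180°. The hard binning (no interpolation between bins) is a simplifying assumption:

```python
import numpy as np

def cell_gradient_histogram(cell, bins=9):
    """HOG histogram of one cell: [-1,0,1] gradients, magnitude-weighted votes."""
    cell = cell.astype(float)
    gh = np.zeros_like(cell)
    gv = np.zeros_like(cell)
    gh[:, 1:-1] = cell[:, 2:] - cell[:, :-2]        # formula (1), horizontal
    gv[1:-1, :] = cell[2:, :] - cell[:-2, :]        # formula (2), vertical
    mag = np.abs(gh) + np.abs(gv)                   # approximation in formula (3)
    ang = np.rad2deg(np.arctan2(gv, gh)) % 180.0    # unsigned direction, formula (4)
    idx = np.minimum((ang / (180.0 / bins)).astype(int), bins - 1)
    hist = np.zeros(bins)
    np.add.at(hist, idx.ravel(), mag.ravel())       # magnitude-weighted voting
    return hist
```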
(3) Assembly of the cell units into blocks: the block structure is of two kinds: a rectangular block (R-HOG) and a ring block (C-HOG). The invention adopts a rectangular block to detect the target, and the rectangular block generally comprises 3 parameters: the number of cell units in each block, the number of pixel points in each cell unit, and the number of azimuth angles of each cell unit.
(4) Intra-block normalization. The common normalization schemes are:

L2_norm: v ← v / sqrt(||v||2^2 + δ^2)

L1_norm: v ← v / (||v||1 + δ)

L1_sqrt: v ← sqrt(v / (||v||1 + δ))

L2_hys: compute L2_norm first, then limit the maximum value of v to 0.2, and normalize again.

Wherein v is the unnormalized vector containing the statistical histogram of a given block; δ is a small constant whose role is to avoid a zero denominator; ||v||k is the k-norm of v. In Dalal's experiments, L2_hys, L2_norm and L1_sqrt perform almost equally well, while the character recognition effect of L1_norm is slightly worse; all 4 normalization methods clearly improve recognition performance over no normalization. In the present invention, L2_norm is used for normalization.
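The four schemes can be sketched as follows; the value of δ is an assumed small constant, and v is a block histogram (non-negative), so the square root in L1_sqrt is well defined:

```python
import numpy as np

DELTA = 1e-5  # small constant avoiding a zero denominator (assumed value)

def l2_norm(v):
    return v / np.sqrt(np.sum(v ** 2) + DELTA ** 2)

def l1_norm(v):
    return v / (np.sum(np.abs(v)) + DELTA)

def l1_sqrt(v):
    return np.sqrt(v / (np.sum(np.abs(v)) + DELTA))

def l2_hys(v, clip=0.2):
    u = np.minimum(l2_norm(v), clip)   # limit each component to 0.2
    return l2_norm(u)                  # then normalize again
```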
Assuming the license plate characters are normalized to 64 × 128, each 8 × 8 pixels form a cell unit and each 2 × 2 cell units form a block; with a block sliding step of 8, the scan slides 15 times vertically and 7 times horizontally, giving a feature vector of 36 × 7 × 15 = 3780 dimensions. The processing effect on a single license plate character is shown in figs. 4 and 5: the gradient magnitude map and gradient angle map of the character's gray-scale image contain rich detail, but the character outline in the angle map is not obvious, which hurts the license plate character recognition rate. To overcome this shortcoming, the joint HOG feature is proposed, combining the HOG features of the gray-scale map, the binary map and the 16-value map.
3. Joint directional gradient histogram
The joint HOG method computes HOG separately on the gray-scale map and the binary map and combines them into a joint feature, as follows: H denotes the resulting combined feature, hi denotes the HOG feature of the gray-scale or binary map, and ωi denotes the corresponding weight, with the weights summing to 1. The weight distribution has a large influence on the subsequent recognition results; experiments show that equal weights of 0.5 give the best recognition effect, better than using the gray-scale map or binary map alone:

H = Σi ωi hi, where Σi ωi = 1;
The HOG features of the 16-value map are also added to the joint HOG feature: HOG is computed separately on the gray-scale map, binary map and 16-value map of the license plate character, and the results are linearly combined as shown in the following formula:

H = ωgray·hgray + ω2·h2 + ω16·h16

where H is the final joint HOG feature, hgray, h2 and h16 are the HOG features of the character's gray-scale, binary and 16-value maps respectively, and ωgray, ω2 and ω16 are the corresponding weights.
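The linear combination can be sketched directly. The equal default weights of 1/3 are only a placeholder: the patent fixes the two-map weights at 0.5 each but determines the three-map weights experimentally:

```python
import numpy as np

def joint_hog(h_gray, h_2, h_16, w_gray=1/3, w_2=1/3, w_16=1/3):
    """H = w_gray*h_gray + w_2*h_2 + w_16*h_16, with the weights summing to 1."""
    assert abs(w_gray + w_2 + w_16 - 1.0) < 1e-9
    return (w_gray * np.asarray(h_gray)
            + w_2 * np.asarray(h_2)
            + w_16 * np.asarray(h_16))
```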
The joint HOG combines the characteristics of the gray-level image, the binary image and the 16-value image, which compensates to some extent for the deficiencies of computing HOG on the gray-level image or the binary image alone, and improves the recognition rate accordingly.
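The linear combination above can be sketched as follows. This is an illustrative sketch only: the `h_*` vectors are random placeholders standing in for real HOG features, and the equal weights are an assumption (the text reports 0.5/0.5 as best for the two-map case but does not give the three-map weights):

```python
import numpy as np

# Joint-HOG combination sketch: placeholder feature vectors stand in for
# the HOG features of the gray, binary, and 16-value character images.
rng = np.random.default_rng(0)
h_gray = rng.random(3780)   # placeholder HOG of the gray-level image
h_2    = rng.random(3780)   # placeholder HOG of the binary image
h_16   = rng.random(3780)   # placeholder HOG of the 16-value image

# Assumed equal weights for illustration; the weights must sum to 1.
w_gray, w_2, w_16 = 1/3, 1/3, 1/3

# The joint feature is a weighted sum, so it keeps the same length.
H = w_gray * h_gray + w_2 * h_2 + w_16 * h_16
print(H.shape)   # (3780,)
```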
4. License plate character feature classification
License plate character classification compares the character features to be recognized against the learned training character features using a classification algorithm. Commonly used classifiers include minimum-distance classifiers, k-nearest-neighbor classifiers, Bayesian classifiers, decision trees, Adaboost cascade classifiers, artificial neural networks and support vector machines (SVMs). Given the characteristics of the license plate characters to be trained and classified and the properties of the different classifiers, the invention mainly adopts a support vector machine. The core idea of the support vector machine is to use a classification hyperplane as the decision surface and maximize the margin between the positive and negative classes. Because the number of training samples in license plate character recognition is limited while the generated HOG feature dimension is large, the invention adopts the support vector machine, which performs well on small-sample problems. For the multi-class problem, classification is performed here in a one-versus-one manner. The SVM sample-processing, training and prediction workflow is roughly as follows: select a training sample set and a test sample set from the license plate character samples, preprocess each and extract features such as HOG, select the optimal parameters c and g by cross-validation, train the SVM with the optimal parameters to obtain a training model, and use that model to predict the test set and obtain the prediction classification accuracy. Commonly used SVM kernel functions include linear, radial basis, polynomial and sigmoid kernels.
The classification accuracy on the test set differs between kernel functions; in license plate character recognition, the radial basis kernel achieves the highest accuracy. The SVM kernel of the present invention therefore adopts an RBF kernel function.
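The RBF kernel referenced above has a standard closed form, K(x, y) = exp(−g·‖x − y‖²), where g corresponds to the "g" parameter tuned by cross-validation in the text. A minimal sketch:

```python
import numpy as np

def rbf_kernel(x, y, g):
    # RBF (radial basis) kernel: exp(-g * squared Euclidean distance).
    diff = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    return float(np.exp(-g * np.dot(diff, diff)))

x = np.array([1.0, 0.0])
y = np.array([0.0, 0.0])
print(rbf_kernel(x, y, g=1.0))   # exp(-1) ~= 0.3679
print(rbf_kernel(x, x, g=1.0))   # 1.0 for identical vectors
```

Larger g makes the kernel more local (similarity decays faster with distance), which is why c and g must be tuned jointly by cross-validation.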
In summary, after feature extraction the SVM performs training and classification. After training, the segmented characters are input into the trained classifier, which outputs the recognized license plate characters.
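The one-versus-one multi-class strategy mentioned above trains one binary SVM per class pair and decides by majority vote. The sketch below illustrates only the voting scheme; the pairwise "classifiers" are stubs and the label set is hypothetical:

```python
from itertools import combinations

# One-versus-one sketch: k classes yield k*(k-1)/2 binary problems,
# one per unordered class pair; prediction is by majority vote.
classes = ["0", "1", "A", "B", "X"]        # hypothetical label set
pairs = list(combinations(classes, 2))     # 10 binary problems for k=5

def predict(votes_for):
    # votes_for maps each class pair to the label its binary classifier
    # (a stub here) chose; the class with the most votes wins.
    tally = {c: 0 for c in classes}
    for pair in pairs:
        tally[votes_for[pair]] += 1
    return max(tally, key=tally.get)

# Toy prediction: every pairwise classifier involving "A" votes for "A",
# every other one votes for the first label of its pair.
votes = {p: ("A" if "A" in p else p[0]) for p in pairs}
print(predict(votes))   # prints A
```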
The method provided by the invention can in practice be embedded in an FPGA (field programmable gate array) and applied to a night-vision camera or a camera monitoring system with a low-illumination license plate recognition function and real-time image output.
In order to achieve the above object, the present invention further provides a license plate recognition system, which applies any one of the above low-illumination imaging license plate recognition methods.
Since the technical solution of the license plate recognition system in this embodiment at least includes all technical solutions of the above-mentioned low-illumination imaging license plate recognition method embodiments, at least all technical effects of the above embodiments are achieved, and details are not repeated here.
Furthermore, to achieve the above object, the present invention also provides a computer device, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor; when executed by the processor, the computer program implements the steps of the low-illumination imaging license plate recognition method according to any one of the embodiments.
Furthermore, to achieve the above object, the present invention further provides a computer readable storage medium having a computer program stored thereon, where the computer program is executed by a processor to implement the steps of any one of the first to eighth embodiments of the low-illumination imaging license plate recognition method.
Since the technical solution of the computer-readable storage medium of this embodiment at least includes all the technical solutions of the above-mentioned embodiments of the low-illumination imaging license plate recognition method, at least all the technical effects of the above embodiments are achieved, and details are not repeated here.

Claims (10)

1. A low-illumination imaging license plate recognition method is characterized by comprising the following steps:
inputting the original license plate low-illumination image acquired by the camera module into a multi-scale context aggregation network based on deep learning for image processing to acquire a license plate image with improved identifiability;
carrying out license plate positioning and license plate inclination correction processing on the license plate image to obtain a positioned license plate image;
dividing the positioned license plate image into a plurality of license plate characters;
and identifying each license plate character.
2. The low-illumination imaging license plate recognition method of claim 1, wherein the step of inputting the original license plate low-illumination image obtained by the camera module into a deep learning-based multi-scale context aggregation network for image processing to obtain the license plate image with improved recognizability comprises:
preprocessing the original license plate low-illumination Bayer image acquired by the camera module, and packing and transforming pixel channels to obtain a pixel image for inputting an FCN model for training;
the pixel image is trained based on a deep learning CAN network, and a processed image is output;
and carrying out wide dynamic enhancement processing on the processed image, and outputting the license plate image with improved restoration fidelity and image quality.
3. The low-illumination imaging license plate recognition method of claim 1, wherein the step of performing license plate positioning and license plate inclination correction processing on the license plate image to obtain a positioned license plate image comprises:
roughly positioning the license plate image based on environmental information to filter a partial background area of the license plate image;
accurately positioning the license plate image after the coarse positioning based on the license plate structure information to filter the residual background area of the license plate image;
and carrying out non-maximum value inhibition processing and Hough transform-based inclination correction processing on the license plate image after accurate positioning to obtain the positioned license plate image.
4. The low-illumination imaging license plate recognition method of claim 1, wherein the step of segmenting the positioned license plate image into a plurality of license plate characters comprises:
judging whether the positioned license plate image has a license plate frame or not;
when the positioned license plate image has a license plate frame, removing the license plate frame to obtain a frameless license plate image;
and dividing the frameless license plate image into a plurality of license plate characters.
5. The low-illumination imaging license plate recognition method of claim 4, wherein the step of segmenting the frameless license plate image into a plurality of license plate characters comprises:
counting the maximum pixel value maxvalue and the minimum pixel value minvalue of pixel points in the whole license plate image, wherein maxvalue is more than or equal to 0, and minvalue is more than or equal to 0;
setting a proportionality coefficient coef of the number of pixels to be enhanced to the number of all pixels, wherein the coef is more than or equal to 0 and less than or equal to 1;
acquiring the number i of pixels correspondingly appearing on the pixel values of 0-255, and storing the number i of the pixels correspondingly appearing in an array count (1, i) of 1 x 255, wherein i is more than or equal to 0;
counting the number of pixel points from count (1, i) starting at i = 255; if the counted pixel number pixelnum is less than width × height × coef, continuing the count with i − 1, otherwise stopping the count and recording the current pixel value index, wherein width is the width of the license plate image, width is more than 0, height is the height of the license plate image, and height is more than 0;
enhancing each point of the license plate image, wherein the enhancing formula is as follows:
Figure FDA0002175386040000021
wherein i and j are pixel point positions in the license plate image respectively, i is more than or equal to 0, and j is more than or equal to 0.
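The threshold search in claim 5 can be sketched as follows. The names pixelnum, index, and coef follow the claim; the per-pixel enhancement formula itself appears only as an image in the original, so it is not reproduced here, and the toy histogram below is an illustrative assumption:

```python
# Sketch of the claim-5 threshold search: scan from the brightest pixel
# value downward until the brightest coef-fraction of pixels is covered,
# then record that value as index.
def find_enhance_threshold(count, width, height, coef):
    # count[v] = number of pixels with gray value v (v in 0..255)
    pixelnum = 0
    for index in range(255, -1, -1):
        pixelnum += count[index]
        if pixelnum >= width * height * coef:
            return index      # stop counting and record the current value
    return 0

# Toy 4x4 image: twelve pixels at value 100, four pixels at value 200.
count = [0] * 256
count[100], count[200] = 12, 4
print(find_enhance_threshold(count, width=4, height=4, coef=0.25))  # 200
```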
6. The low-illumination imaging license plate recognition method of claim 4, wherein the step of removing the license plate frame to obtain a frameless license plate image when the license plate frame exists in the positioned license plate image comprises:
acquiring a binary image of the positioned license plate image;
acquiring the upper and lower frames of the license plate of the binary image, and removing the upper and lower frames of the license plate;
and acquiring the left and right frames of the license plate of the binary image, and removing the left and right frames of the license plate.
7. The low-illumination imaging license plate recognition method of any one of claims 1 to 6, wherein the step of recognizing each license plate character comprises:
determining a character classifier;
extracting gray-level histogram of oriented gradients (HOG) features, binary HOG features and 16-value HOG features of the Chinese characters, numbers and letters to determine a joint HOG feature;
reducing the dimension of the combined HOG characteristic by using a kernel principal component analysis method;
inputting the combined HOG characteristics into a support vector machine for training and predicting to obtain recognition results corresponding to the Chinese characters, the numbers and the letters respectively;
and combining the recognition results respectively corresponding to the Chinese characters, the numbers and the letters to determine a final license plate character recognition result.
8. A license plate recognition system, characterized in that the low-illumination imaging license plate recognition method of any one of claims 1 to 7 is applied.
9. A computer device, comprising: a memory, a processor, and a computer program stored on the memory and executable on the processor; the computer program when executed by the processor implements the steps of the low-illumination imaging license plate recognition method of any of claims 1-7.
10. A computer-readable storage medium, having stored thereon a computer program which, when being executed by a processor, carries out the steps of the low-illumination imaging license plate recognition method according to any one of claims 1 to 7.
CN201910776944.0A 2019-08-22 2019-08-22 Low-illumination imaging license plate recognition method and system, computer equipment and storage medium Pending CN110689003A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910776944.0A CN110689003A (en) 2019-08-22 2019-08-22 Low-illumination imaging license plate recognition method and system, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN110689003A true CN110689003A (en) 2020-01-14

Family

ID=69108330

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910776944.0A Pending CN110689003A (en) 2019-08-22 2019-08-22 Low-illumination imaging license plate recognition method and system, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN110689003A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112785550A (en) * 2020-12-29 2021-05-11 浙江大华技术股份有限公司 Image quality value determination method, image quality value determination device, storage medium, and electronic device
CN113516096A (en) * 2021-07-29 2021-10-19 中国工商银行股份有限公司 Finger vein ROI (region of interest) region extraction method and device
CN113642570A (en) * 2021-07-02 2021-11-12 山东黄金矿业(莱州)有限公司三山岛金矿 Method for recognizing license plate of mine car in dark environment
CN114419624A (en) * 2022-03-28 2022-04-29 天津市北海通信技术有限公司 Image character checking method and system based on image visual algorithm

Citations (5)

Publication number Priority date Publication date Assignee Title
CN103971097A (en) * 2014-05-15 2014-08-06 武汉睿智视讯科技有限公司 Vehicle license plate recognition method and system based on multiscale stroke models
CN105512600A (en) * 2014-09-28 2016-04-20 江苏省兴泽实业发展有限公司 License plate identification method based on mutual information and characteristic extraction
CN107832762A (en) * 2017-11-06 2018-03-23 广西科技大学 A kind of License Plate based on multi-feature fusion and recognition methods
CN109657676A (en) * 2018-12-06 2019-04-19 河池学院 Licence plate recognition method and system based on convolutional neural networks
CN110111269A (en) * 2019-04-22 2019-08-09 深圳久凌软件技术有限公司 Low-light-level imaging algorithm and device based on multiple dimensioned context converging network

Non-Patent Citations (4)

Title
Li Yun: "Research and Implementation of License Plate Location and Character Segmentation Algorithms", China Master's Theses Full-text Database, Information Science and Technology Series *
Yin Yu et al.: "License Plate Recognition Algorithm Based on Joint HOG Features", Computer Engineering and Design *
Wang Wei: "Algorithm Research and Implementation of License Plate Character Segmentation and Character Recognition", Wanfang Database *
Wang Yongjie et al.: "Fast License Plate Location Based on Multi-information Fusion", Journal of Image and Graphics *


Similar Documents

Publication Publication Date Title
CN109086714B (en) Form recognition method, recognition system and computer device
CN107609549B (en) Text detection method for certificate image in natural scene
CN107545239B (en) Fake plate detection method based on license plate recognition and vehicle characteristic matching
CN111401372B (en) Method for extracting and identifying image-text information of scanned document
CN108596166A (en) A kind of container number identification method based on convolutional neural networks classification
CN110689003A (en) Low-illumination imaging license plate recognition method and system, computer equipment and storage medium
CN112686812B (en) Bank card inclination correction detection method and device, readable storage medium and terminal
CN107103317A (en) Fuzzy license plate image recognition algorithm based on image co-registration and blind deconvolution
CN109255350B (en) New energy license plate detection method based on video monitoring
CN106529532A (en) License plate identification system based on integral feature channels and gray projection
CN110766017B (en) Mobile terminal text recognition method and system based on deep learning
CN113592911B (en) Apparent enhanced depth target tracking method
CN109255326B (en) Traffic scene smoke intelligent detection method based on multi-dimensional information feature fusion
CN110659550A (en) Traffic sign recognition method, traffic sign recognition device, computer equipment and storage medium
CN110766016B (en) Code-spraying character recognition method based on probabilistic neural network
CN110866430A (en) License plate recognition method and device
CN104978567A (en) Vehicle detection method based on scenario classification
CN110969164A (en) Low-illumination imaging license plate recognition method and device based on deep learning end-to-end
CN111259893A (en) Intelligent tool management method based on deep learning
WO2022121021A1 (en) Identity card number detection method and apparatus, and readable storage medium and terminal
CN111597875A (en) Traffic sign identification method, device, equipment and storage medium
Mei et al. A novel framework for container code-character recognition based on deep learning and template matching
Fernández-Caballero et al. Display text segmentation after learning best-fitted OCR binarization parameters
CN113537211A (en) Deep learning license plate frame positioning method based on asymmetric IOU
CN110188693B (en) Improved complex environment vehicle feature extraction and parking discrimination method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200114