CN115546780A - License plate recognition method, model and device - Google Patents

License plate recognition method, model and device

Info

Publication number
CN115546780A
Authority
CN
China
Prior art keywords
license plate
resolution
image
recognition model
character
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211506588.9A
Other languages
Chinese (zh)
Other versions
CN115546780B (en)
Inventor
王国梁
陈娜华
陈思瑶
彭大蒙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CCI China Co Ltd
Original Assignee
CCI China Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by CCI China Co Ltd filed Critical CCI China Co Ltd
Priority to CN202211506588.9A priority Critical patent/CN115546780B/en
Publication of CN115546780A publication Critical patent/CN115546780A/en
Application granted granted Critical
Publication of CN115546780B publication Critical patent/CN115546780B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/60 Type of objects
    • G06V20/62 Text, e.g. of license plates, overlay texts or captions on TV images
    • G06V20/625 License plates
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10 Character recognition
    • G06V30/14 Image acquisition
    • G06V30/1444 Selective acquisition, locating or processing of specific regions, e.g. highlighted text, fiducial marks or predetermined fields
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10 Character recognition
    • G06V30/14 Image acquisition
    • G06V30/146 Aligning or centring of the image pick-up or image-field
    • G06V30/1475 Inclination or skew detection or correction of characters or of image to be recognised
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10 Character recognition
    • G06V30/19 Recognition using electronic means
    • G06V30/191 Design or setup of recognition systems or techniques; Extraction of features in feature space; Clustering techniques; Blind source separation
    • G06V30/19147 Obtaining sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Character Discrimination (AREA)

Abstract

The application provides a license plate recognition method, model, and device. A brand-new license plate recognition model is designed that can directly generate a high-definition super-resolution image from a small, blurry image. In the image enhancement module of the license plate recognition model, the discriminator structure and the adversarial learning scheme are optimized and a reconstruction self-coding network is introduced; the license plate character recognition module adopts an end-to-end network framework and can recognize license plate characters quickly and accurately.

Description

License plate recognition method, model and device
Technical Field
The present application relates to the field of image recognition, and in particular, to a license plate recognition method, model, and apparatus.
Background
In urban traffic management, vehicles often need to be identified and their related information acquired. Among the many pieces of vehicle-related information, the license plate, as the unique identification of a vehicle, can be used to identify it. License plate recognition technology is widely applied in scenarios such as parking lot management, toll station management, and illegal-vehicle management.
Conventional license plate recognition techniques typically include two stages, localization and character recognition: the region where the license plate is located is found first, that region is cropped out, and character recognition is then performed on it. In actual monitoring scenarios, the license plate is generally located at the lower-middle part of the vehicle and occupies a relatively small area, so the license plate region in a vehicle image captured by a surveillance camera often has low resolution and poor image quality, and existing license plate recognition algorithms cannot recognize it well.
In other words, conventional license plate recognition technology has serious shortcomings and cannot recognize license plates well. First, the license plate samples used for training form only an incomplete search space and cannot cover all possible license plate conditions. Second, in many cases the inclination angle of the license plate relative to the horizontal can reach 30 degrees, which greatly interferes with character detection and recognition. Third, character localization is difficult and its error rate is high.
Disclosure of Invention
The embodiments of the application provide a license plate recognition method, model, and device, and propose an end-to-end GAN-enhanced license plate recognition model that can perform super-resolution processing on an input vehicle image, locate the text region of the license plate without character segmentation, and recognize the license plate with high accuracy.
In a first aspect, an embodiment of the present application provides a method for constructing a license plate recognition model, including:
obtaining a training sample: acquiring a high-resolution license plate character image, and obtaining a low-resolution license plate character image by down-sampling the high-resolution license plate character image;
constructing an image enhancement module and a license plate character recognition module which are connected in sequence: the image enhancement module comprises a generator and a discriminator trained in an adversarial learning manner, wherein the generator internally comprises, connected in sequence, a high-resolution generation network for generating a super-resolution image and a reconstruction self-coding network for reconstructing the super-resolution image, and the fully connected layer of the discriminator is divided into a parallel character-number calculation branch and a judgment-comparison branch; the counting information output by the character-number calculation branch is input to the last layer of the license plate character recognition module;
training the image enhancement module and the license plate character recognition module: inputting the high-resolution license plate character image into the discriminator for adversarial learning with the generator, inputting the low-resolution license plate character image into the generator, and iteratively optimizing the loss functions of the generator and the discriminator; and inputting the enhanced image output by the image enhancement module and the high-resolution license plate character image into the license plate character recognition module, and iterating the loss function of the license plate character recognition module.
In a second aspect, the embodiment of the present application provides a license plate recognition model, which is constructed by the above-mentioned construction method of a license plate recognition model.
In a third aspect, an embodiment of the present application provides a license plate recognition method, where a to-be-detected image including a license plate to be detected is input to the license plate recognition model to obtain license plate characters.
In a fourth aspect, an embodiment of the present application provides an electronic device, including a memory and a processor, where the memory stores a computer program, and the processor is configured to execute the computer program to perform the license plate recognition model construction method or the license plate recognition method.
In a fifth aspect, an embodiment of the present application provides a readable storage medium having stored therein a computer program comprising program code for controlling a process to execute a process, the process comprising the method for constructing the license plate recognition model or the license plate recognition method.
The main contributions and innovation points of the invention are as follows:
the embodiment of the application provides a license plate recognition method, a license plate recognition model and a license plate recognition device, a brand-new license plate recognition model is designed, the license plate recognition model can directly generate a high-definition super-resolution image from a fuzzy small image, the structure of a discriminator and an antagonistic learning mode are optimized in an image enhancement module in the license plate recognition model, a reconstruction self-coding network is introduced, and the license plate character recognition module adopts an end-to-end network framework and can quickly and accurately recognize license plate characters.
The details of one or more embodiments of the application are set forth in the accompanying drawings and the description below. Other features, objects, and advantages of the application will become apparent from the description, the drawings, and the claims.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
FIG. 1 is a block diagram of a license plate recognition model according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a network architecture of a reconstructed self-encoder according to an embodiment of the present application;
FIG. 3 is a flowchart illustrating a method for constructing a license plate recognition model according to an embodiment of the present disclosure;
fig. 4 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with one or more embodiments of the present specification. Rather, they are merely examples of apparatus and methods consistent with certain aspects of one or more embodiments of the specification, as detailed in the claims which follow.
It should be noted that: in other embodiments, the steps of the corresponding methods are not necessarily performed in the order shown and described herein. In some other embodiments, the method may include more or fewer steps than those described herein. Moreover, a single step described in this specification may be broken down into multiple steps for description in other embodiments; multiple steps described in this specification may be combined into a single step in other embodiments.
Example one
The embodiment of the application provides a method for constructing a license plate recognition model, and specifically, referring to fig. 1 and 3, the method includes:
obtaining a training sample: acquiring a high-resolution license plate character image, and obtaining a low-resolution license plate character image by down-sampling the high-resolution license plate character image;
constructing an image enhancement module and a license plate character recognition module which are connected in sequence: the image enhancement module comprises a generator and a discriminator trained in an adversarial learning manner, wherein the generator internally comprises, connected in sequence, a high-resolution generation network for generating a super-resolution image and a reconstruction self-coding network for reconstructing the super-resolution image, and the fully connected layer of the discriminator is divided into a parallel character-number calculation branch and a judgment-comparison branch; the counting information output by the character-number calculation branch is input to the last layer of the license plate character recognition module;
training the image enhancement module and the license plate character recognition module: inputting the high-resolution license plate character image into the discriminator for adversarial learning with the generator, inputting the low-resolution license plate character image into the generator, and iteratively optimizing the loss functions of the generator and the discriminator; and inputting the enhanced image output by the image enhancement module and the high-resolution license plate character image into the license plate character recognition module, and iterating the loss function of the license plate character recognition module.
The image enhancement module adopts an improved adversarial super-resolution image enhancement method. A reconstruction self-coding network is introduced into the image enhancement module to correct the inclination of the license plate characters in the super-resolution image, which addresses the problem that some license plates are tilted at an excessively large angle; the objective function of the reconstruction self-coding network is the difference between the tilted image of the inclined license plate characters and the straightened image of the corrected license plate characters. The loss function of the discriminator is optimized so that, while distinguishing the super-resolution image from the input high-resolution license plate character image, the discriminator can count the number of license plate characters in parallel to obtain a counting result; this counting result helps improve the character recognition accuracy of the license plate character recognition module. The image enhancement module of the present scheme introduces the concept of adversarial learning and takes the min-max game as the optimization objective, which avoids an over-smoothed image and gives the generated license plate a sharpening effect.
The high-resolution generation network can restore a low-resolution image (LR) into a super-resolution image using an image super-resolution method. Conventional image super-resolution methods take the pixel-wise average as the optimization target, i.e., they minimize the mean squared error between the super-resolution image and the real image, which makes the image smoother; a smoother image means lower contrast, which in turn means lower accuracy in license plate character recognition.
In addition, the license plate character recognition module adopted by the present scheme is an end-to-end single-stage recognition model that can locate the region where the license plate characters are located without segmenting the characters. The license plate character recognition module of the present scheme uses a multi-scale image detection method for image detection.
Specifically, the low-resolution image is input into a high-resolution generation network, and a super-resolution image can be generated by iteratively optimizing a loss function of the high-resolution generation network; the super-resolution image is input into a reconstruction self-coding network and is corrected in a denoising learning mode to obtain an enhanced image.
The high-resolution generation network of the present scheme is composed of a series of convolutional layers and up-sampling layers; each of the two up-sampling layers performs 2x up-sampling, so a 4x-enhanced super-resolution image is obtained. Specifically, the network framework of the high-resolution generation network comprises a plurality of residual blocks, each containing two 3x3 convolutional layers; each convolutional layer is followed by a batch normalization layer with PReLU as the activation function, and two sub-pixel convolutional layers are then connected as the up-sampling network layers.
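By way of illustration only, the following is a minimal PyTorch sketch of such a high-resolution generation network; the number of residual blocks, the 64 feature channels, and the 9x9 head/tail convolutions are assumptions not specified in the patent.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Residual block: two 3x3 convolutions, each followed by batch normalization, with PReLU."""
    def __init__(self, channels: int = 64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.PReLU(),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        return x + self.body(x)

class HighResGenerationNet(nn.Module):
    """Gs1: stacked residual blocks, then two sub-pixel (PixelShuffle) layers for 4x upsampling."""
    def __init__(self, num_blocks: int = 16, channels: int = 64):
        super().__init__()
        self.head = nn.Sequential(nn.Conv2d(3, channels, 9, padding=4), nn.PReLU())
        self.blocks = nn.Sequential(*[ResidualBlock(channels) for _ in range(num_blocks)])
        # Two sub-pixel convolution layers, each performing 2x upsampling -> 4x overall.
        self.upsample = nn.Sequential(
            nn.Conv2d(channels, channels * 4, 3, padding=1), nn.PixelShuffle(2), nn.PReLU(),
            nn.Conv2d(channels, channels * 4, 3, padding=1), nn.PixelShuffle(2), nn.PReLU(),
        )
        self.tail = nn.Conv2d(channels, 3, 9, padding=4)

    def forward(self, lr_image):
        feat = self.head(lr_image)
        feat = feat + self.blocks(feat)
        return self.tail(self.upsample(feat))

# A 24x94 low-resolution plate crop becomes a 96x376 super-resolution image.
sr = HighResGenerationNet()(torch.randn(1, 3, 24, 94))
```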
The reconstruction self-coding network is used to reconstruct the super-resolution image and can be regarded as an image-refinement task on top of the resolution improvement: by denoising learning, it corrects the super-resolution image and yields an enhanced image. The enhanced image obtained by the present scheme is an image in which the license plate tilt present in the super-resolution image has been corrected.
For example, noise can be added to an original image and the image tilted by a certain angle to generate an input sample; the difference loss between the output sample and the original image is then used so that the reconstruction self-coding network learns to denoise and correct the image. The trained reconstruction self-coding network can then reconstruct the super-resolution image into an enhanced image.
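As a concrete example of this sample-generation step, the sketch below builds a (corrupted input, clean target) pair with OpenCV; the noise level and the tilt-angle range are illustrative values, not taken from the patent.

```python
import cv2
import numpy as np

def make_autoencoder_pair(plate_img: np.ndarray, max_angle: float = 30.0, noise_sigma: float = 10.0):
    """Create a (corrupted input, clean target) pair for the reconstruction self-coding network.

    The clean original image is the training target; the input is the same image
    tilted by a random angle and perturbed with Gaussian noise.
    """
    h, w = plate_img.shape[:2]
    angle = np.random.uniform(-max_angle, max_angle)
    rot = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    tilted = cv2.warpAffine(plate_img, rot, (w, h), borderMode=cv2.BORDER_REPLICATE)
    noisy = tilted.astype(np.float32) + np.random.normal(0.0, noise_sigma, tilted.shape)
    corrupted = np.clip(noisy, 0, 255).astype(np.uint8)
    return corrupted, plate_img  # the network learns to map corrupted -> original
```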
As shown in fig. 2, the reconstruction self-coding network provided by the present scheme comprises an encoder and a decoder connected in sequence. Both use convolutional neural networks and consist of the same number of convolutional layers; the encoder adds a MaxPooling2D layer for spatial down-sampling, while the decoder adds an UpSampling2D layer and uses BatchNormalization. In one embodiment of the present scheme, the encoder comprises three convolutional layers: a first convolutional layer of 5 × 3 × 64, a second of 5 × 64 × 128, and a third of 5 × 128 × 256.
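A rough PyTorch approximation of this encoder/decoder is sketched below; the MaxPooling2D, UpSampling2D and BatchNormalization layers named in the text are rendered with their PyTorch equivalents, and the 5 × 3 × 64 style triples are read here, as an assumption, as kernel size × input channels × output channels.

```python
import torch
import torch.nn as nn

class ReconstructionAutoencoder(nn.Module):
    """Gs2: encoder with three 5x5 conv layers plus max pooling, mirrored decoder with upsampling and batch norm."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(2),    # 5 x 3 x 64
            nn.Conv2d(64, 128, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(2),  # 5 x 64 x 128
            nn.Conv2d(128, 256, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(2), # 5 x 128 x 256
        )
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2), nn.Conv2d(256, 128, 5, padding=2), nn.BatchNorm2d(128), nn.ReLU(),
            nn.Upsample(scale_factor=2), nn.Conv2d(128, 64, 5, padding=2), nn.BatchNorm2d(64), nn.ReLU(),
            nn.Upsample(scale_factor=2), nn.Conv2d(64, 3, 5, padding=2),
        )

    def forward(self, sr_image):
        return self.decoder(self.encoder(sr_image))

# The enhanced image keeps the spatial size of the super-resolution input (dimensions divisible by 8).
enhanced = ReconstructionAutoencoder()(torch.randn(1, 3, 96, 376))
```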
The output layer of the discriminator is divided into two parallel branches: a character-number calculation branch and a judgment-comparison branch. The character-number branch calculates the number of characters contained in the input image to obtain counting information, while the judgment-comparison branch compares the enhanced image with the high-resolution license plate character image to obtain a comparison result. In some embodiments, the discriminator used in the present scheme adopts the VGG19 structure.
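A minimal sketch of such a two-branch discriminator head follows, assuming the torchvision implementation of VGG19 for the feature extractor; the hidden-layer width and the maximum character count of 8 are assumptions.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg19

class TwoBranchDiscriminator(nn.Module):
    """VGG19 feature extractor followed by two parallel fully connected branches:
    one predicts the number of characters, the other judges real vs. generated."""
    def __init__(self, max_chars: int = 8):
        super().__init__()
        self.features = vgg19(weights=None).features        # convolutional backbone only
        self.pool = nn.AdaptiveAvgPool2d((7, 7))
        self.count_branch = nn.Sequential(                   # character-number calculation branch
            nn.Flatten(), nn.Linear(512 * 7 * 7, 1024), nn.LeakyReLU(0.2),
            nn.Linear(1024, max_chars + 1),                   # class i = "image contains i characters"
        )
        self.judge_branch = nn.Sequential(                   # judgment-comparison branch
            nn.Flatten(), nn.Linear(512 * 7 * 7, 1024), nn.LeakyReLU(0.2),
            nn.Linear(1024, 1),                               # real (high-resolution) vs. generated
        )

    def forward(self, img):
        feat = self.pool(self.features(img))
        return self.count_branch(feat), self.judge_branch(feat)

count_logits, real_logit = TwoBranchDiscriminator()(torch.randn(1, 3, 96, 376))
```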
The generator and the discriminator are trained through adversarial learning, whose goal is to make the probability distribution produced by the generator match the real data distribution. In addition, the present scheme optimizes the min-max value function of adversarial learning so that the generator creates a super-resolution image from the low-resolution license plate character image while the discriminator distinguishes the super-resolution image from the real high-resolution license plate character image.
The optimized adversarial learning of the present scheme trains the min-max game by alternately updating the generator G and the discriminator D; the specific training formula is as follows:
$$\min_{\theta_G}\max_{\theta_D}\;\mathbb{E}_{I^{HR}\sim p_{train}(I^{HR})}\big[\log D_{\theta_D}(I^{HR})\big]+\mathbb{E}_{I^{LR}\sim p_{G}(I^{LR})}\big[\log\big(1-D_{\theta_D}(G_{\theta_G}(I^{LR}))\big)\big]$$

wherein $I^{HR}$ is a high-resolution license plate character image, $I^{LR}$ is a low-resolution license plate character image, and $\theta_G$ and $\theta_D$ are the parameters of the feed-forward CNNs $G$ and $D$ respectively; the trained parameters correspond to the weight data of the neural networks.
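In code, this alternating update can be sketched roughly as follows; the loop assumes the generator G and the two-branch discriminator D defined as above and optimizers created elsewhere, and shows only the adversarial terms of the formula (the pixel and reconstruction terms would be added to the generator step in practice).

```python
import torch
import torch.nn.functional as F

def train_adversarial_step(G, D, hr_batch, lr_batch, opt_G, opt_D):
    """One alternating min-max update: first the discriminator, then the generator."""
    # Discriminator step: maximize log D(I_HR) + log(1 - D(G(I_LR))).
    sr_batch = G(lr_batch).detach()
    _, real_logit = D(hr_batch)
    _, fake_logit = D(sr_batch)
    loss_D = F.binary_cross_entropy_with_logits(real_logit, torch.ones_like(real_logit)) + \
             F.binary_cross_entropy_with_logits(fake_logit, torch.zeros_like(fake_logit))
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()

    # Generator step: minimize -log D(G(I_LR)).
    sr_batch = G(lr_batch)
    _, fake_logit = D(sr_batch)
    loss_G = F.binary_cross_entropy_with_logits(fake_logit, torch.ones_like(fake_logit))
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()
    return loss_D.item(), loss_G.item()
```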
The value function comprises two terms: the left term guarantees the basic judgment capability of the discriminator and drives it to classify real samples correctly, while the right term ensures that the discriminator can recognize generated (fake) samples. In the step of iteratively optimizing the loss functions of the generator and the discriminator, the loss functions include a pixel loss function, an adversarial loss function, a reconstruction loss function, and a discrimination loss function; the four different loss functions are combined with different weights to obtain the overall loss.
The pixel loss function describes the difference between the super-resolution image and the high-resolution image, making the image generated by the generator more realistic; the adversarial loss function lets the discriminator distinguish real high-resolution images from generated ones; the reconstruction loss function measures the difference between the input and the output of the reconstruction self-coding network and drives denoising and correction; and the discrimination loss function improves the judgment capability of the discriminator and the character counting.
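For concreteness, the weighted combination on the generator side might look like the sketch below; the weight values are placeholders, not values given in the patent, and the discrimination loss is applied to the discriminator separately.

```python
def generator_total_loss(loss_pixel, loss_adv, loss_rec,
                         w_pixel=1.0, w_adv=1e-3, w_rec=0.1):
    """Weighted sum of the generator-side losses (pixel, adversarial, reconstruction).

    The discrimination loss is the discriminator's own objective and is not included here.
    All weights are illustrative placeholders.
    """
    return w_pixel * loss_pixel + w_adv * loss_adv + w_rec * loss_rec
```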
In order to make the super-resolution image generated by the generator as close as possible to the high-resolution license plate character image, the generator optimizes the per-pixel MSE loss between the generated super-resolution image and the corresponding high-resolution license plate character image.
The pixel loss function for the pixel MSE of the generator is as follows:
$$L_{MSE}=\big\|\,I^{HR}-G_{s2}\big(G_{s1}(I^{LR})\big)\,\big\|_2^2$$

where $G_{s1}(\cdot)$ represents the high-resolution generation network, $G_{s2}(\cdot)$ represents the reconstruction self-coding network, $I^{LR}$ represents a low-resolution license plate character image, and $I^{HR}$ represents a high-resolution license plate character image.
In addition, in order to give the generated enhanced image a sharpening effect, different from the MSE loss typically employed by a GAN, the adversarial loss function of the generator is:
$$L_{adv}=-\log D\big(G_{w}(I^{LR})\big)$$

wherein $I^{LR}$ represents a low-resolution license plate character image, $G_{w}$ represents the generator, and $D$ represents the discriminator.
In order to make the quality of the enhanced image more realistic, the present scheme also introduces a reconstruction loss, which corrects variations in the topology of the enhanced image that would otherwise disturb detection. The reconstruction loss function of the generator is:
$$L_{rec}=\big\|\,G_{s1}(I^{LR})-G_{s2}\big(G_{s1}(I^{LR})\big)\,\big\|_2^2$$

where $G_{s1}(\cdot)$ represents the high-resolution generation network, $G_{s2}(\cdot)$ represents the reconstruction self-coding network, and $I^{LR}$ represents a low-resolution license plate character image; the reconstruction loss is the difference between the output of the high-resolution generation network and that of the reconstruction self-coding network.
In addition, the scheme also optimizes the loss function of the discriminator, so that the discriminator can calculate the number of license plate characters in parallel to obtain a counting result while distinguishing the super-resolution image from the input high-resolution license plate character image.
The discrimination loss function of the discriminator at this time applies an AND operation to the character-number prediction and the authenticity prediction: each outputs 1 only when it is predicted correctly, and the counting target is the actual number of characters of the license plate characters.
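Since the formula itself is only reproduced in outline above, the sketch below shows one plausible reading of this combined objective, with the judgment branch trained by binary cross-entropy and the count branch by cross-entropy against the actual character count; the combination weight and the exact form are assumptions.

```python
import torch
import torch.nn.functional as F

def discriminator_loss(D, hr_batch, sr_batch, char_counts, w_count=1.0):
    """Real/fake discrimination plus parallel character counting.

    `char_counts` holds the actual number of characters in each high-resolution plate image.
    """
    count_logits_real, real_logit = D(hr_batch)
    _, fake_logit = D(sr_batch.detach())
    loss_judge = F.binary_cross_entropy_with_logits(real_logit, torch.ones_like(real_logit)) + \
                 F.binary_cross_entropy_with_logits(fake_logit, torch.zeros_like(fake_logit))
    loss_count = F.cross_entropy(count_logits_real, char_counts)
    # Accuracy-style check mirroring the AND of "authenticity correct" and "count correct".
    both_correct = ((torch.sigmoid(real_logit) > 0.5).squeeze(-1) &
                    (count_logits_real.argmax(dim=1) == char_counts)).float().mean()
    return loss_judge + w_count * loss_count, both_correct
```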
The license plate character recognition module provided by the present scheme is an end-to-end single-stage recognition network that integrates character localization and character recognition. The present scheme selects the YOLOv3 network structure for the license plate character recognition module: YOLOv3 detects characters at three different scales, obtains the features of different scales by reducing the dimensionality of the feature maps, and requires no additional pooling layer.
The license plate character recognition module provided by the present scheme has good detection performance on small-size characters; license plate recognition mainly involves locating and recognizing small targets, and the model is optimized for the characters on the license plate with cross-layer output connections.
Specifically, the license plate character recognition module comprises a plurality of convolutional layers connected in sequence; the output features of convolutional layer A, together with the counting information, are input into the last convolutional layer, which increases the accuracy of character recognition and allows the license plate character recognition network to compute and locate the license plate characters effectively.
The input features of the license plate character recognition module pass through multiple convolutional layers to obtain first-scale features, and the convolutional layer outputting the first-scale features is defined as convolutional layer A. The first-scale features are input into convolutional layer B to obtain first features; the first-scale features are also convolved and down-sampled to 1/4 and input into convolutional layer C to obtain second features. The first features and the second features are added to obtain second-scale features, and the second-scale features are up-sampled and added to the first-scale features to obtain third-scale features.
After the feature map corresponding to the fused third-scale features is obtained, it is sent to a detection head to extract the output information: for each point on the feature map, position boxes of three sizes are extracted and the characters inside the boxes are predicted; the position boxes are then de-duplicated with non-maximum suppression (NMS) and combined with the character count obtained from the discriminator to produce the final license plate character boxes and character content (the license plate characters).
The shape of the detection kernel of the present scheme is 1 × 1 × (B × (5 + C)), where B is the number of position boxes for each license plate character, 5 covers the four box attributes (coordinates (x, y), width, and height) plus one object confidence score, and C is the number of character categories, representing the class of the character in each box. In the method of the present scheme, the detection kernel is defined with B = 3 and C = 66 (10 digits (0-9), 24 English letters, and 32 Chinese characters), where B is the number of boxes corresponding to each point in the feature map and C is the character class, giving a 1 × 1 × 213 kernel.
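As a sanity check on these numbers, the following snippet computes the detection-kernel depth and splits one anchor's 71-value slice into its parts; the helper name decode_anchor is illustrative.

```python
NUM_BOXES = 3          # B: position boxes predicted per feature-map point
NUM_CLASSES = 66       # C: 10 digits + 24 English letters + 32 Chinese characters
DEPTH = NUM_BOXES * (5 + NUM_CLASSES)   # 3 * (5 + 66) = 213, i.e. a 1 x 1 x 213 detection kernel

def decode_anchor(vec):
    """Split one anchor's 71-value slice into box geometry, objectness and class scores."""
    x, y, w, h, objectness = vec[:5]
    class_scores = vec[5:5 + NUM_CLASSES]
    return (x, y, w, h), objectness, class_scores

assert DEPTH == 213
```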
According to the present scheme, a large number of high-resolution license plate character images can be collected from surveillance video and the low-resolution license plate character images obtained by down-sampling them; the collected data are input as training samples into the constructed network for training, so that through continuous training the license plate recognition model learns to obtain a super-resolution image from a low-resolution license plate character image and to predict the license plate characters from the super-resolution image.
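A minimal sketch of this sample-preparation step is shown below, assuming OpenCV, bicubic interpolation and a 4x down-sampling factor; the factor matches the 4x super-resolution described above, while the interpolation choice is an assumption.

```python
import cv2

def make_training_pair(hr_plate_path: str, scale: int = 4):
    """Load a high-resolution plate crop and derive its low-resolution counterpart by down-sampling."""
    hr = cv2.imread(hr_plate_path)
    h, w = hr.shape[:2]
    lr = cv2.resize(hr, (w // scale, h // scale), interpolation=cv2.INTER_CUBIC)
    return lr, hr  # (low-resolution input, high-resolution target)
```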
Example two
The license plate recognition model can be used for detecting license plate characters: it converts a low-definition image into a clear super-resolution image and detects the license plate characters end to end from the super-resolution image. For the technical features of the second embodiment that are the same as those of the first embodiment, refer to the technical contents of the first embodiment.
Example three
The third embodiment of the invention also provides a license plate recognition method, which comprises the following steps:
and inputting the image to be detected containing the license plate to be detected into the license plate recognition model to obtain license plate characters.
For the same technical features of the third embodiment as those of the first embodiment, refer to the technical contents of the first embodiment.
Example four
The present embodiment further provides an electronic apparatus, referring to fig. 4, including a memory 404 and a processor 402, where the memory 404 stores a computer program, and the processor 402 is configured to execute the computer program to perform any one of the above-mentioned steps in the license plate recognition model construction method or the license plate recognition method embodiment.
Specifically, the processor 402 may include a Central Processing Unit (CPU) or an Application Specific Integrated Circuit (ASIC), or may be configured as one or more integrated circuits implementing the embodiments of the present application.
Memory 404 may include mass storage for data or instructions. By way of example and not limitation, memory 404 may include a hard disk drive (HDD), a floppy disk drive, a solid state drive (SSD), flash memory, an optical disk, a magneto-optical disk, magnetic tape, a Universal Serial Bus (USB) drive, or a combination of two or more of these. Memory 404 may include removable or non-removable (or fixed) media, where appropriate, and may be internal or external to the data processing apparatus, where appropriate. In a particular embodiment, memory 404 is non-volatile memory. In certain embodiments, memory 404 includes read-only memory (ROM) and random access memory (RAM). The ROM may be mask-programmed ROM, programmable ROM (PROM), erasable PROM (EPROM), electrically erasable PROM (EEPROM), electrically rewritable ROM (EAROM), or flash memory, or a combination of two or more of these, where appropriate. The RAM may be static random-access memory (SRAM) or dynamic random-access memory (DRAM), where the DRAM may be fast page mode DRAM (FPMDRAM), extended data output DRAM (EDO DRAM), synchronous DRAM (SDRAM), or the like.
Memory 404 may be used to store or cache various data files for processing and/or communication use, as well as possibly computer program instructions for execution by processor 402.
The processor 402 reads and executes the computer program instructions stored in the memory 404 to implement any one of the license plate recognition model construction methods or the license plate recognition methods in the above embodiments.
Optionally, the electronic apparatus may further include a transmission device 406 and an input/output device 408, where the transmission device 406 is connected to the processor 402, and the input/output device 408 is connected to the processor 402.
The transmitting device 406 may be used to receive or transmit data via a network. Specific examples of the network described above may include wired or wireless networks provided by communication providers of the electronic devices. In one example, the transmission device includes a Network adapter (NIC) that can be connected to other Network devices through a base station to communicate with the internet. In one example, the transmitting device 406 may be a Radio Frequency (RF) module configured to communicate with the internet via wireless.
The input and output devices 408 are used to input or output information. In this embodiment, the input information may be a license plate image with insufficient definition, and the output information may be a super-resolution image or license plate characters.
Alternatively, in this embodiment, the processor 402 may be configured to execute the following steps by a computer program:
obtaining a training sample: acquiring a high-resolution license plate character image, and obtaining a low-resolution license plate character image by down-sampling the high-resolution license plate character image;
constructing an image enhancement module and a license plate character recognition module which are connected in sequence: the image enhancement module comprises a generator and a discriminator trained in an adversarial learning manner, wherein the generator internally comprises, connected in sequence, a high-resolution generation network for generating a super-resolution image and a reconstruction self-coding network for reconstructing the super-resolution image, and the fully connected layer of the discriminator is divided into a parallel character-number calculation branch and a judgment-comparison branch; the counting information output by the character-number calculation branch is input to the last layer of the license plate character recognition module;
training the image enhancement module and the license plate character recognition module: inputting the high-resolution license plate character image into the discriminator for adversarial learning with the generator, inputting the low-resolution license plate character image into the generator, and iteratively optimizing the loss functions of the generator and the discriminator; and inputting the enhanced image output by the image enhancement module and the high-resolution license plate character image into the license plate character recognition module, and iterating the loss function of the license plate character recognition module.
It should be noted that, for specific examples in this embodiment, reference may be made to examples described in the foregoing embodiments and optional implementations, and details of this embodiment are not described herein again.
In general, the various embodiments may be implemented in hardware or special purpose circuits, software, logic or any combination thereof. Some aspects of the invention may be implemented in hardware, while other aspects may be implemented in firmware or software which may be executed by a controller, microprocessor or other computing device, although the invention is not limited thereto. While various aspects of the invention may be illustrated and described as block diagrams, flow charts, or using some other pictorial representation, it is well understood that these blocks, apparatus, systems, techniques or methods described herein may be implemented in, as non-limiting examples, hardware, software, firmware, special purpose circuits or logic, general purpose hardware or controller or other computing devices, or some combination thereof.
Embodiments of the invention may be implemented by computer software executable by a data processor of the mobile device, such as in a processor entity, or by hardware, or by a combination of software and hardware. Computer software or programs (also referred to as program products) including software routines, applets and/or macros can be stored in any device-readable data storage medium and they include program instructions for performing particular tasks. The computer program product may comprise one or more computer-executable components configured to perform embodiments when the program is run. The one or more computer-executable components may be at least one software code or a portion thereof. Further in this regard it should be noted that any block of the logic flow as in the figures may represent a program step, or an interconnected logic circuit, block and function, or a combination of a program step and a logic circuit, block and function. The software may be stored on physical media such as memory chips or memory blocks implemented within the processor, magnetic media such as hard or floppy disks, and optical media such as, for example, DVDs and data variants thereof, CDs. The physical medium is a non-transitory medium.
It should be understood by those skilled in the art that various features of the above embodiments can be combined arbitrarily, and for the sake of brevity, all possible combinations of the features in the above embodiments are not described, but should be considered as within the scope of the present disclosure as long as there is no contradiction between the combinations of the features.
The above examples only express several embodiments of the present application, and the description thereof is more specific and detailed, but not to be construed as limiting the scope of the present application. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present application shall be subject to the appended claims.

Claims (12)

1. A construction method of a license plate recognition model is characterized by comprising the following steps:
obtaining a training sample: acquiring a high-resolution license plate character image, and obtaining a low-resolution license plate character image by down-sampling the high-resolution license plate character image;
constructing an image enhancement module and a license plate character recognition module which are connected in sequence: the image enhancement module comprises a generator and a discriminator trained in an adversarial learning manner, wherein the generator internally comprises, connected in sequence, a high-resolution generation network for generating a super-resolution image and a reconstruction self-coding network for reconstructing the super-resolution image, and the fully connected layer of the discriminator is divided into a parallel character-number calculation branch and a judgment-comparison branch; the counting information output by the character-number calculation branch is input to the last layer of the license plate character recognition module;
training the image enhancement module and the license plate character recognition module: inputting the high-resolution license plate character image into the discriminator for adversarial learning with the generator, inputting the low-resolution license plate character image into the generator, and iteratively optimizing the loss functions of the generator and the discriminator; and inputting the enhanced image output by the image enhancement module and the high-resolution license plate character image into the license plate character recognition module, and iterating the loss function of the license plate character recognition module.
2. The method for constructing the license plate recognition model of claim 1, wherein the reconstruction self-coding network comprises an encoder and a decoder connected in sequence, the encoder and the decoder use convolutional neural networks and comprise the same number of convolutional layers, the encoder adds a MaxPooling2D layer for spatial down-sampling, and the decoder adds an UpSampling2D layer and uses BatchNormalization.
3. The method for constructing the license plate recognition model according to claim 1, wherein the training formula for the adversarial learning is:
$$\min_{\theta_G}\max_{\theta_D}\;\mathbb{E}_{I^{HR}\sim p_{train}(I^{HR})}\big[\log D_{\theta_D}(I^{HR})\big]+\mathbb{E}_{I^{LR}\sim p_{G}(I^{LR})}\big[\log\big(1-D_{\theta_D}(G_{\theta_G}(I^{LR}))\big)\big]$$

wherein $I^{HR}$ is a high-resolution license plate character image, $I^{LR}$ is a low-resolution license plate character image, and $\theta_G$ and $\theta_D$ respectively represent the weight data of the neural networks $G$ and $D$.
4. The method for constructing the license plate recognition model of claim 1, wherein in the step of iteratively optimizing the loss functions of the generator and the discriminator, the loss functions include a pixel loss function, an adversarial loss function, a reconstruction loss function and a discrimination loss function, and the four loss functions are combined with different weights to obtain the overall loss function.
5. The method for constructing the license plate recognition model of claim 4, wherein the pixel loss function is as follows:
$$L_{MSE}=\big\|\,I^{HR}-G_{s2}\big(G_{s1}(I^{LR})\big)\,\big\|_2^2$$

where $G_{s1}(\cdot)$ represents the high-resolution generation network, $G_{s2}(\cdot)$ represents the reconstruction self-coding network, $I^{LR}$ represents a low-resolution license plate character image, and $I^{HR}$ represents a high-resolution license plate character image.
6. The method for constructing the license plate recognition model of claim 4, wherein the adversarial loss function is:
$$L_{adv}=-\log D\big(G_{w}(I^{LR})\big)$$

wherein $I^{LR}$ represents a low-resolution license plate character image, $G_{w}$ represents the generator, and $D$ represents the discriminator.
7. The method for constructing the license plate recognition model of claim 4, wherein the reconstruction loss function is:
$$L_{rec}=\big\|\,G_{s1}(I^{LR})-G_{s2}\big(G_{s1}(I^{LR})\big)\,\big\|_2^2$$

where $G_{s1}(\cdot)$ represents the high-resolution generation network, $G_{s2}(\cdot)$ represents the reconstruction self-coding network, and $I^{LR}$ represents a low-resolution license plate character image; the reconstruction loss is the difference between the output of the high-resolution generation network and that of the reconstruction self-coding network.
8. The method for constructing the license plate recognition model of claim 4, wherein the discrimination loss function applies an AND operation to the character-number prediction and the authenticity prediction of the discriminator, each outputting 1 only when it is predicted correctly, and the counting target is the actual number of characters of the license plate characters.
9. A license plate recognition model, which is constructed by the method for constructing a license plate recognition model according to any one of claims 1 to 8.
10. A license plate recognition method, characterized in that a to-be-detected image containing a license plate to be detected is input into the license plate recognition model according to claim 9 to obtain license plate characters.
11. An electronic device comprising a memory and a processor, wherein the memory stores a computer program, and the processor is configured to execute the computer program to execute the license plate recognition model construction method according to any one of claims 1 to 8 or the license plate recognition method according to claim 10.
12. A readable storage medium having stored therein a computer program including a program code for controlling a process to execute a process, the process comprising the method of constructing a license plate recognition model according to any one of claims 1 to 8 or the method of recognizing a license plate according to claim 10.
CN202211506588.9A 2022-11-29 2022-11-29 License plate recognition method, model and device Active CN115546780B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211506588.9A CN115546780B (en) 2022-11-29 2022-11-29 License plate recognition method, model and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211506588.9A CN115546780B (en) 2022-11-29 2022-11-29 License plate recognition method, model and device

Publications (2)

Publication Number Publication Date
CN115546780A true CN115546780A (en) 2022-12-30
CN115546780B CN115546780B (en) 2023-04-18

Family

ID=84722187

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211506588.9A Active CN115546780B (en) 2022-11-29 2022-11-29 License plate recognition method, model and device

Country Status (1)

Country Link
CN (1) CN115546780B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190188830A1 (en) * 2017-12-15 2019-06-20 International Business Machines Corporation Adversarial Learning of Privacy Protection Layers for Image Recognition Services
CN111461134A (en) * 2020-05-18 2020-07-28 南京大学 Low-resolution license plate recognition method based on generation countermeasure network
CN111915490A (en) * 2020-08-14 2020-11-10 深圳清研智城科技有限公司 License plate image super-resolution reconstruction model and method based on multi-scale features
CN112232237A (en) * 2020-10-20 2021-01-15 城云科技(中国)有限公司 Vehicle flow monitoring method, system, computer device and storage medium

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190188830A1 (en) * 2017-12-15 2019-06-20 International Business Machines Corporation Adversarial Learning of Privacy Protection Layers for Image Recognition Services
CN111461134A (en) * 2020-05-18 2020-07-28 南京大学 Low-resolution license plate recognition method based on generation countermeasure network
CN111915490A (en) * 2020-08-14 2020-11-10 深圳清研智城科技有限公司 License plate image super-resolution reconstruction model and method based on multi-scale features
CN112232237A (en) * 2020-10-20 2021-01-15 城云科技(中国)有限公司 Vehicle flow monitoring method, system, computer device and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
SHANGZHI TENG等: "Viewpoint and Scale Consistency Reinforcement for UAV Vehicle Re-Identification" *
陈威: "Research on vehicle re-identification algorithms based on deep learning with joint multi-loss functions" *

Also Published As

Publication number Publication date
CN115546780B (en) 2023-04-18

Similar Documents

Publication Publication Date Title
CN110032998B (en) Method, system, device and storage medium for detecting characters of natural scene picture
CN110163213B (en) Remote sensing image segmentation method based on disparity map and multi-scale depth network model
CN111652217A (en) Text detection method and device, electronic equipment and computer storage medium
CN109886330B (en) Text detection method and device, computer readable storage medium and computer equipment
CN113780296A (en) Remote sensing image semantic segmentation method and system based on multi-scale information fusion
CN107808138B (en) Communication signal identification method based on FasterR-CNN
CN112669323B (en) Image processing method and related equipment
CN115457565A (en) OCR character recognition method, electronic equipment and storage medium
US20230281974A1 (en) Method and system for adaptation of a trained object detection model to account for domain shift
CN111738055A (en) Multi-class text detection system and bill form detection method based on same
CN115063786A (en) High-order distant view fuzzy license plate detection method
CN113901972A (en) Method, device and equipment for detecting remote sensing image building and storage medium
CN112598076A (en) Motor vehicle attribute identification method and system
CN110852327A (en) Image processing method, image processing device, electronic equipment and storage medium
CN112200186A (en) Car logo identification method based on improved YOLO _ V3 model
CN114332473A (en) Object detection method, object detection device, computer equipment, storage medium and program product
CN114419430A (en) Cultivated land plot extraction method and device based on SE-U-Net +model
CN115546809A (en) Table structure identification method based on cell constraint and application thereof
CN116612280A (en) Vehicle segmentation method, device, computer equipment and computer readable storage medium
CN116071557A (en) Long tail target detection method, computer readable storage medium and driving device
CN116129280B (en) Method for detecting snow in remote sensing image
CN115346206B (en) License plate detection method based on improved super-resolution deep convolution feature recognition
CN115546780B (en) License plate recognition method, model and device
CN114842198B (en) Intelligent damage assessment method, device, equipment and storage medium for vehicle
CN116246161A (en) Method and device for identifying target fine type of remote sensing image under guidance of domain knowledge

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant