CN113095320A - License plate recognition method and system and computing device - Google Patents

License plate recognition method and system and computing device Download PDF

Info

Publication number
CN113095320A
CN113095320A
Authority
CN
China
Prior art keywords
license plate
image
original
acquiring
characteristic region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110358257.4A
Other languages
Chinese (zh)
Inventor
陈宁
饶展
刘坚
谭云鹏
崔志伦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hunan University
Original Assignee
Hunan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hunan University filed Critical Hunan University
Priority to CN202110358257.4A priority Critical patent/CN113095320A/en
Publication of CN113095320A publication Critical patent/CN113095320A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/60Type of objects
    • G06V20/62Text, e.g. of license plates, overlay texts or captions on TV images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/20Image enhancement or restoration using local operators
    • G06T5/30Erosion or dilatation, e.g. thinning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/12Edge-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/155Segmentation; Edge detection involving morphological operators
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/187Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/194Segmentation; Edge detection involving foreground-background segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • G06T7/62Analysis of geometric attributes of area, perimeter, diameter or volume
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10Character recognition
    • G06V30/14Image acquisition
    • G06V30/148Segmentation of character regions
    • G06V30/153Segmentation of character regions using recognition of characters or words
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/60Type of objects
    • G06V20/62Text, e.g. of license plates, overlay texts or captions on TV images
    • G06V20/625License plates

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Geometry (AREA)
  • Character Input (AREA)

Abstract

The invention discloses a license plate recognition method, executed in a computing device, which comprises the following steps: acquiring an original license plate image; determining the license plate contour position from the original license plate image, and acquiring a feature region image from the original license plate image based on the license plate contour position; segmenting the feature region image to generate a segmented image in which the license plate region is distinguished from the background region; determining the four corner points of the license plate based on the license plate region; performing projection transformation on the license plate in the feature region image according to the four corner points of the license plate and the four vertices of the feature region image to generate a corrected license plate image; and acquiring the license plate characters based on the corrected license plate image. The invention also discloses a corresponding license plate recognition system and a corresponding computing device. The license plate recognition method helps to improve license plate recognition accuracy under large-angle deflection.

Description

License plate recognition method and system and computing device
Technical Field
The invention relates to the technical field of visual recognition, in particular to a license plate recognition method, a license plate recognition system and a computing device.
Background
Although license plate recognition technology has been developing continuously, license plate recognition systems in the prior art are mainly designed to recognize license plate images in which the plate faces the camera nearly head-on or is only slightly skewed. In practical applications, however, outdoor parking and charging systems usually capture license plate images in natural environments, and a large proportion of the acquired images exhibit large-angle deflection. When the license plate image is deflected at a large angle, the recognition performance of existing license plate recognition systems drops significantly, so the license plate recognition accuracy becomes very low.
Therefore, a license plate recognition method is needed to solve the problems in the above technical solutions.
Disclosure of Invention
Accordingly, the present invention provides a license plate recognition method, a license plate recognition system and a computing device to solve or at least alleviate the above-mentioned problems.
According to a first aspect of the present invention, there is provided a license plate recognition method, executed in a computing device, comprising the steps of: acquiring an original license plate image; determining the license plate outline position according to the original license plate image, and acquiring a characteristic region image from the original license plate image based on the license plate outline position; carrying out segmentation processing on the characteristic region image to generate a segmentation image so as to distinguish a license plate region from a background region; determining four corner points of the license plate based on the license plate region; performing projection transformation processing on the license plate in the characteristic region image according to the four corner points of the license plate and the four vertexes of the characteristic region image to generate a corrected license plate image; and acquiring license plate characters based on the corrected license plate image.
Optionally, in the license plate recognition method according to the present invention, after the segmented image is generated, the method further includes: carrying out binarization on the segmentation image to generate a binary image; and performing expansion processing on the binary image to generate an expanded image.
Optionally, in the license plate recognition method according to the present invention, the step of determining four corner points of the license plate includes: acquiring one or more connected domains from the expansion image, and selecting the connected domain with the largest area; and extracting license plate contour lines based on the connected domain with the largest area, and determining four corner points of the license plate according to the extracted license plate contour lines.
Optionally, in the license plate recognition method according to the present invention, before acquiring one or more connected domains from the dilated image, the method includes the steps of: and carrying out filtering processing on the expansion image.
Optionally, in the license plate recognition method according to the present invention, the license plate contour line includes an upper edge line, a right edge line, a lower edge line, and a left edge line; the step of filtering the expansion image includes: filtering the expansion image based on a first filtering algorithm to extract an upper edge line; filtering the expansion image based on a second filtering algorithm to extract a right edge line; filtering the expansion image based on a third filtering algorithm to extract a lower edge line; and filtering the expansion image based on a fourth filtering algorithm to extract a left edge line.
Optionally, in the license plate recognition method according to the present invention, determining the license plate outline position according to the original license plate image includes: detecting a license plate contour in the original license plate image based on a detection model, and judging whether the height ratio of the license plate contour to the original license plate image exceeds a height threshold value or not and whether the width ratio of the license plate contour to the original license plate image exceeds a width threshold value or not; and if the height ratio exceeds a height threshold value and the width ratio exceeds a width threshold value, acquiring a characteristic region image according to the license plate contour position.
Optionally, in the license plate recognition method according to the present invention, the segmenting the feature region image includes: and performing segmentation processing on the characteristic region image based on the segmentation model.
Optionally, in the license plate recognition method according to the present invention, the step of obtaining license plate characters based on the corrected license plate image includes: and processing the corrected license plate image based on the recognition model, and decoding the processing result to obtain license plate characters.
Optionally, in the license plate recognition method according to the present invention, the recognition model includes: the device comprises a characteristic extraction layer, an ASPP layer and a Bi-lstm structure which are sequentially connected.
According to another aspect of the present invention, there is provided a license plate recognition system including: the acquisition module is suitable for acquiring an original license plate image; the detection module is suitable for determining the license plate outline position according to the original license plate image and acquiring a characteristic region image from the original license plate image based on the license plate outline position; the correction module is suitable for segmenting the characteristic region image to generate a segmented image so as to distinguish a license plate region from a background region, determining four corner points of the license plate based on the license plate region, and performing projection transformation processing on the license plate in the characteristic region image according to the four corner points of the license plate and four vertexes of the characteristic region image to generate a corrected license plate image; and the recognition module is suitable for acquiring license plate characters based on the corrected license plate image.
According to an aspect of the invention, there is provided a computing device comprising: at least one processor; and a memory storing program instructions that, when read and executed by the processor, cause the computing device to perform the license plate recognition method as described above.
According to an aspect of the present invention, there is provided a readable storage medium storing program instructions that, when read and executed by a computing device, cause the computing device to perform a license plate recognition method as described above.
According to the technical scheme of the invention, the license plate recognition method and system perform segmentation processing and projection transformation processing on the license plate image through the correction module. This overcomes the strong perspective distortion of license plate images captured under large-angle deflection, improves license plate recognition accuracy in such conditions, and allows the license plate recognition system to be applied in a wide variety of scenarios.
The foregoing description is only an overview of the technical solutions of the present invention, and the embodiments of the present invention are described below in order to make the technical means of the present invention more clearly understood and to make the above and other objects, features, and advantages of the present invention more clearly understandable.
Drawings
To the accomplishment of the foregoing and related ends, certain illustrative aspects are described herein in connection with the following description and the annexed drawings, which are indicative of various ways in which the principles disclosed herein may be practiced, and all aspects and equivalents thereof are intended to be within the scope of the claimed subject matter. The above and other objects, features and advantages of the present disclosure will become more apparent from the following detailed description read in conjunction with the accompanying drawings. Throughout this disclosure, like reference numerals generally refer to like parts or elements.
FIG. 1 shows a schematic diagram of a computing device 100, according to one embodiment of the invention;
FIG. 2 illustrates a flow diagram of a license plate recognition method 200 according to one embodiment of the invention;
FIG. 3 shows a schematic diagram of a license plate recognition system 300 according to one embodiment of the present invention;
FIG. 4 illustrates an original license plate image in accordance with an embodiment of the present invention;
FIG. 5 illustrates a feature region image in accordance with one embodiment of the present invention;
FIG. 6 illustrates a segmented image in accordance with an embodiment of the present invention;
FIG. 7 illustrates a binary image in accordance with an embodiment of the invention;
FIG. 8 shows a dilated image in accordance with an embodiment of the invention;
FIG. 9 is a schematic diagram illustrating four corners of a license plate according to an embodiment of the invention; and
FIG. 10 illustrates a rectified license plate image in accordance with an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
As described above, the license plate recognition scheme in the prior art has certain defects, so the present invention provides a license plate recognition method with improved performance. The license plate recognition method 200 of the present invention is suitable for execution in a computing device.
FIG. 1 shows a schematic diagram of a computing device 100, according to one embodiment of the invention.
As shown in FIG. 1, in a basic configuration 102, a computing device 100 typically includes a system memory 106 and one or more processors 104. A memory bus 108 may be used for communication between the processor 104 and the system memory 106.
Depending on the desired configuration, the processor 104 may be any type of processor, including but not limited to: a microprocessor (μP), a microcontroller (μC), a digital signal processor (DSP), or any combination thereof. The processor 104 may include one or more levels of cache, such as a level one cache 110 and a level two cache 112, a processor core 114, and registers 116. The example processor core 114 may include an Arithmetic Logic Unit (ALU), a Floating Point Unit (FPU), a digital signal processing core (DSP core), or any combination thereof. The example memory controller 118 may be used with the processor 104, or in some implementations the memory controller 118 may be an internal part of the processor 104.
Depending on the desired configuration, system memory 106 may be any type of memory, including but not limited to: volatile memory (such as RAM), non-volatile memory (such as ROM, flash memory, etc.), or any combination thereof. System memory 106 may include an operating system 120, one or more applications 122, and program data 124. In some implementations, the application 122 can be arranged to execute instructions on an operating system with program data 124 by one or more processors 104.
Computing device 100 also includes a storage device 132, storage device 132 including removable storage 136 and non-removable storage 138.
Computing device 100 may also include a storage interface bus 134. The storage interface bus 134 enables communication from the storage devices 132 (e.g., removable storage 136 and non-removable storage 138) to the basic configuration 102 via the bus/interface controller 130. At least a portion of the operating system 120, applications 122, and data 124 may be stored on removable storage 136 and/or non-removable storage 138, and loaded into system memory 106 via storage interface bus 134 and executed by the one or more processors 104 when the computing device 100 is powered on or the applications 122 are to be executed.
Computing device 100 may also include an interface bus 140 that facilitates communication from various interface devices (e.g., output devices 142, peripheral interfaces 144, and communication devices 146) to the basic configuration 102 via the bus/interface controller 130. The example output device 142 includes a graphics processing unit 148 and an audio processing unit 150. They may be configured to facilitate communication with various external devices, such as a display or speakers, via one or more a/V ports 152. Example peripheral interfaces 144 may include a serial interface controller 154 and a parallel interface controller 156, which may be configured to facilitate communication with external devices such as input devices (e.g., keyboard, mouse, pen, voice input device, touch input device) or other peripherals (e.g., printer, scanner, etc.) via one or more I/O ports 158. An example communication device 146 may include a network controller 160, which may be arranged to facilitate communications with one or more other computing devices 162 over a network communication link via one or more communication ports 164.
A network communication link may be one example of a communication medium. Communication media may typically be embodied by computer readable instructions, data structures, or program modules in a modulated data signal, such as a carrier wave or other transport mechanism, and may include any information delivery media. A "modulated data signal" may be a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of non-limiting example, communication media may include wired media such as a wired network or dedicated wired network, and various wireless media such as acoustic, Radio Frequency (RF), microwave, Infrared (IR), or other wireless media. The term computer readable media as used herein may include both storage media and communication media.
Computing device 100 may be implemented as a personal computer including both desktop and notebook computer configurations. Of course, computing device 100 may also be implemented as part of a small-form-factor portable (or mobile) electronic device, such as a cellular telephone, a digital camera, a Personal Digital Assistant (PDA), a personal media player device, a wireless web-watch device, a personal headset device, an application-specific device, or a hybrid device that includes any of the above functions. Computing device 100 may even be implemented as a server, such as a file server, a database server, an application server, or a WEB server. The embodiments of the present invention are not limited thereto.
In an embodiment in accordance with the invention, the computing device 100 is configured to perform the license plate recognition method 200 in accordance with the invention. The application 122 of the computing device 100 includes a plurality of program instructions for executing the license plate recognition method 200 according to the present invention.
In one embodiment, the application 122 of the computing device 100 includes a license plate recognition system 300, and the license plate recognition system 300 includes a plurality of program instructions for executing the license plate recognition method 200 according to the present invention, so that the computing device 100 executes the license plate recognition method 200 according to the present invention.
FIG. 2 shows a flow diagram of a license plate recognition method 200 according to one embodiment of the invention. The method 200 is suitable for execution in a computing device (e.g., the aforementioned computing device 100), and in particular may be executed in a license plate recognition system 300 in the computing device 100.
As shown in fig. 2, the method 200 begins at step S210. In step S210, an original license plate image is acquired. Here, referring to the original license plate image shown in fig. 4, the original license plate image is an original image that is not processed after the license plate image is captured by the camera device.
Subsequently, in step S220, a license plate contour position is determined according to the original license plate image, and a feature region image is acquired from the original license plate image based on the determined license plate contour position. Here, referring to the feature region image shown in fig. 5, the feature region image is an image that is captured from an original license plate image according to a license plate outline position, includes a complete license plate region and a corresponding small portion of background regions, and can reflect license plate features. Therefore, segmentation processing and correction processing can be carried out subsequently according to the intercepted license plate feature region image, so that license plate characters can be recognized according to the processed image, and the license plate number can be determined.
According to one embodiment of the invention, the yolov5l model is trained in advance to serve as the detection model for detecting the license plate contour in the invention. Therefore, in step S220, the pre-trained yolov5l detection model can be used to detect the license plate contours in the original license plate image; when a plurality of license plate contours exist in the original license plate image, the license plate contour with the highest confidence is selected as the final license plate contour. After the final license plate contour is determined, it is further judged whether the ratio of the height of the license plate contour to the height of the original license plate image (the height ratio) exceeds a height threshold, and whether the ratio of the width of the license plate contour to the width of the original license plate image (the width ratio) exceeds a width threshold. If the height ratio exceeds the height threshold and the width ratio exceeds the width threshold, a feature region image including the complete license plate is cropped from the original license plate image according to the license plate contour position.
It should be noted that the height threshold and the width threshold set by the present invention are used to determine whether the detected license plate is the license plate on the current road, so as to avoid detecting the license plate on the opposite road. According to the embodiment of the invention, when the height ratio of the license plate outline to the original license plate image is determined to exceed the height threshold and the width ratio of the license plate outline to the original license plate image exceeds the width threshold, the detected license plate (namely the license plate corresponding to the detected license plate outline) can be determined to be the license plate on the current road, so that the characteristic region image is obtained according to the license plate outline, and the image is further processed. In one embodiment, the height threshold is, for example, 0.04 and the width threshold is, for example, 0.04. It should be noted that the present invention is not limited to the specific values of the height threshold and the width threshold provided herein, which can be set by one skilled in the art according to the actual image capturing location, road and vehicle conditions.
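To make the ratio check concrete, the following Python sketch assumes the detector has already returned candidate plate boxes with confidence scores; only the 0.04 example thresholds come from the text above, and the function and its box format are hypothetical.

```python
# Hedged sketch: select the highest-confidence plate contour and apply the
# height/width ratio check of step S220. The detector interface is hypothetical;
# only the 0.04 thresholds come from the example values above.

HEIGHT_THRESHOLD = 0.04
WIDTH_THRESHOLD = 0.04

def select_feature_region(image_h, image_w, detections,
                          h_thr=HEIGHT_THRESHOLD, w_thr=WIDTH_THRESHOLD):
    """detections: list of (x1, y1, x2, y2, confidence) plate-contour boxes."""
    if not detections:
        return None
    # When several plate contours are detected, keep the most confident one.
    x1, y1, x2, y2, _ = max(detections, key=lambda d: d[4])
    box_h, box_w = y2 - y1, x2 - x1
    # Reject plates that are too small relative to the frame
    # (e.g. plates on the opposite road).
    if box_h / image_h <= h_thr or box_w / image_w <= w_thr:
        return None
    return (x1, y1, x2, y2)  # crop region for the feature region image
```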
According to the embodiment of the invention, after the license plate contour is detected in the original license plate image and the feature region image is obtained, the license plate is corrected based on the feature region image. The license plate correction process mainly includes segmentation processing of the feature region image and projection transformation processing after the segmentation, and can be specifically realized according to the following steps S230 to S250.
In step S230, the feature region image is subjected to segmentation processing to generate a segmented image, as shown in fig. 6. The segmentation in the present invention distinguishes two classes: the background region and the license plate region. In this way, the license plate region and the background region in the feature region image are separated by the segmentation processing, so that they can be distinguished according to the segmented image.
According to one embodiment of the invention, the feature region image may be subjected to segmentation processing using a Unet segmentation model. Specifically, by training the Unet segmentation model in advance, the feature region image can be segmented using the previously trained Unet segmentation model.
In one embodiment, the loss function of the Unet segmentation model is:
Loss = mean(l_0, …, l_n)    (1)

In formula (1), Loss is the average of the losses over all classes, and l_n is the loss of the n-th class; since there are only two classes here, n = 1.

l_n = -[y_n · log(σ(x_n)) + (1 - y_n) · log(1 - σ(x_n))]    (2)

In formula (2), l_n is the cross-entropy loss, y_n is the ground-truth value, and σ(x) is the activation function.

[Formula (3) appears only as an image in the original publication and is not reproduced here.]

L(A, B) = 1 - |A ∩ B| / (|A ∩ B| + α·|A - B| + β·|B - A|)    (4)

α + β = 1    (5)

In formula (4), L(A, B) is the Tversky loss, where A denotes the predicted value, B denotes the true value, |A - B| denotes the false positives, and |B - A| denotes the false negatives. It should be noted that, for segmentation problems, the prior art generally adopts l_n alone as the loss function, whereas the present invention additionally uses the Tversky loss L(A, B) with weights α and β, which helps to improve segmentation accuracy.
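The two loss terms above can be sketched as follows, assuming PyTorch as the framework (the patent does not name one); the cross-entropy of formula (2) and the Tversky term of formulas (4)–(5) follow the definitions given, while how the two terms are weighted against each other during training is not specified and is left to the caller.

```python
import torch

def bce_loss(logits, target):
    # Formula (2): cross-entropy with sigmoid activation, averaged as in formula (1).
    prob = torch.sigmoid(logits)
    eps = 1e-7
    l = -(target * torch.log(prob + eps) + (1 - target) * torch.log(1 - prob + eps))
    return l.mean()

def tversky_loss(logits, target, alpha=0.5, beta=0.5, eps=1e-7):
    # Formulas (4)-(5): A = prediction, B = ground truth, alpha + beta = 1.
    a = torch.sigmoid(logits).flatten()
    b = target.flatten()
    tp = (a * b).sum()          # |A ∩ B|
    fp = (a * (1 - b)).sum()    # |A - B|, false positives
    fn = ((1 - a) * b).sum()    # |B - A|, false negatives
    return 1 - tp / (tp + alpha * fp + beta * fn + eps)
```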
According to one embodiment of the invention, after the feature region image is segmented to generate the segmented image, projection transformation processing is performed on the segmented image to correct the license plate in the feature region image.
Specifically, the segmented image is first binarized to generate a binary image, such as the binary image shown in fig. 7. In addition, in order to prevent the license plate contour line from coinciding with the image border, the invention also pads the peripheral edges of the binary image and of the feature region image (before binarization) outward by a preset width, for example 10 pixels, and fills the padded region with black. The present invention is not limited to a particular padding width. Padding the borders of the image by the preset width prevents the license plate outline from coinciding with the image edge, which facilitates the subsequent extraction of the license plate contour line.
Further, the binary image is subjected to dilation processing, yielding a dilated image as shown in fig. 8. Dilating the binary image ensures that the license plate region in the dilated image fully encloses the license plate contour line.
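The binarization, black border padding and dilation just described can be expressed with OpenCV roughly as below; the 10-pixel padding comes from the example above, whereas the threshold value and kernel size are assumptions.

```python
import cv2
import numpy as np

def binarize_pad_dilate(seg_image, pad=10, kernel_size=5):
    # Binarize the segmented image (license plate region vs. background).
    _, binary = cv2.threshold(seg_image, 127, 255, cv2.THRESH_BINARY)
    # Pad every side with black pixels so the plate contour cannot
    # coincide with the image border.
    binary = cv2.copyMakeBorder(binary, pad, pad, pad, pad,
                                cv2.BORDER_CONSTANT, value=0)
    # Dilate so the plate region fully encloses the plate contour line.
    kernel = np.ones((kernel_size, kernel_size), np.uint8)
    dilated = cv2.dilate(binary, kernel, iterations=1)
    return binary, dilated
```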
Subsequently, in step S240, four corner points of the license plate are determined based on the license plate region in the segmented image.
Specifically, one or more connected domains are obtained from the binary image (i.e., the dilated image) after dilation processing, and the connected domain with the largest area is selected from all the obtained connected domains. And then, extracting license plate contour lines based on the connected domain with the largest area, and determining four corner points of the license plate according to the extracted license plate contour lines, namely determining coordinates of the four corner points of the license plate on the image. Here, fig. 9 shows a schematic diagram of four corner points of a license plate.
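Selecting the connected domain with the largest area from the dilated image can be done with OpenCV's connected-component analysis, as in the sketch below; the helper is illustrative and not the patent's exact implementation.

```python
import cv2
import numpy as np

def largest_connected_region(dilated):
    # Label connected domains in the dilated binary image.
    num, labels, stats, _ = cv2.connectedComponentsWithStats(dilated, connectivity=8)
    if num <= 1:
        return None  # only background found
    # Skip label 0 (background) and keep the component with the largest area.
    areas = stats[1:, cv2.CC_STAT_AREA]
    best = 1 + int(np.argmax(areas))
    mask = (labels == best).astype(np.uint8) * 255
    return mask  # plate region used to extract the contour lines and corner points
```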
The license plate recognition scheme of the invention targets license plate recognition under large-angle deflection, where the captured license plate image suffers from strong perspective distortion; the license plate in the feature region image is therefore corrected by determining the four corner points of the license plate.
According to one embodiment of the invention, before one or more connected domains are obtained from the expansion image, the expansion image is firstly subjected to filtering processing, the connected domains are obtained according to the expansion image after filtering processing, and the license plate contour line is extracted based on the connected domain with the largest area. Note that the license plate outline includes an upper edge line, a right edge line, a lower edge line, and a left edge line.
According to one embodiment of the invention, when the license plate contour line is extracted based on the expansion image, an upper edge line, a right edge line, a lower edge line and a left edge line are respectively extracted. And aiming at the extraction of each edge line, filtering the expansion image by using a corresponding filtering algorithm, wherein only one edge line corresponding to the filtering algorithm appears on the expansion image after filtering, and then extracting the corresponding edge line by searching a connected domain.
Specifically, for the extraction of each edge line, firstly, filtering processing is performed on the expanded image based on a filtering algorithm corresponding to the edge line, only one edge line corresponding to the filtering algorithm appears on the filtered image, one or more connected domains are obtained from the filtered expanded image, the connected domain with the largest area is selected from all the obtained connected domains, and then, one edge line corresponding to the filtering algorithm can be extracted based on the connected domain with the largest area. After four edge lines are extracted based on the four filtering algorithms, a complete license plate contour line can be obtained according to the four edge lines through fitting, four corner points of the license plate are extracted according to the license plate contour line, and coordinates of the four corner points of the license plate are determined. Therefore, each edge line can be extracted more accurately, and the four corner points of the finally obtained license plate are more accurate.
That is, in performing filter processing on the dilated image, filter processing is performed on the dilated image based on the first filter algorithm corresponding to the upper edge line so as to extract the upper edge line based on the filtered image. And performing filtering processing on the expanded image based on a second filtering algorithm corresponding to the right edge line so as to extract the right edge line based on the filtered image. And performing filtering processing on the expanded image based on a third filtering algorithm corresponding to the lower edge line so as to extract the lower edge line based on the filtered image. And performing filtering processing on the expanded image based on a fourth filtering algorithm corresponding to the left edge line so as to extract the left edge line based on the filtered image.
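The four filtering algorithms themselves are not specified in the text, so the sketch below substitutes signed Sobel gradients as one plausible choice — the upper edge corresponds to a positive vertical gradient on the plate mask, the lower edge to a negative one, and the left/right edges to the horizontal analogues — and then takes the corner points as intersections of the fitted edge lines. It is an assumption-laden illustration rather than the claimed filtering method.

```python
import cv2
import numpy as np

def _fit_edge_line(edge_mask):
    # Fit a straight line (vx, vy, x0, y0) to the pixels of one edge response.
    pts = cv2.findNonZero(edge_mask)
    vx, vy, x0, y0 = cv2.fitLine(pts, cv2.DIST_L2, 0, 0.01, 0.01).flatten()
    return float(vx), float(vy), float(x0), float(y0)

def _intersect(l1, l2):
    # Intersection of two parametric lines p = (x0, y0) + t * (vx, vy).
    (vx1, vy1, x1, y1), (vx2, vy2, x2, y2) = l1, l2
    a = np.array([[vx1, -vx2], [vy1, -vy2]], dtype=np.float64)
    b = np.array([x2 - x1, y2 - y1], dtype=np.float64)
    t, _ = np.linalg.solve(a, b)
    return (x1 + t * vx1, y1 + t * vy1)

def plate_corners(plate_mask):
    # Signed gradients: each filter keeps the response of one edge only.
    gy = cv2.Sobel(plate_mask, cv2.CV_32F, 0, 1, ksize=3)
    gx = cv2.Sobel(plate_mask, cv2.CV_32F, 1, 0, ksize=3)
    top    = _fit_edge_line((gy > 0).astype(np.uint8) * 255)  # first filter: upper edge
    bottom = _fit_edge_line((gy < 0).astype(np.uint8) * 255)  # third filter: lower edge
    left   = _fit_edge_line((gx > 0).astype(np.uint8) * 255)  # fourth filter: left edge
    right  = _fit_edge_line((gx < 0).astype(np.uint8) * 255)  # second filter: right edge
    # Corners = intersections of adjacent edge lines, ordered TL, TR, BR, BL.
    return [_intersect(top, left), _intersect(top, right),
            _intersect(bottom, right), _intersect(bottom, left)]
```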
Subsequently, in step S250, a projection transformation process is performed on the license plate in the feature region image according to the four corners of the license plate and the four vertices of the feature region image, so as to generate a corrected license plate image. The corrected license plate image is shown in fig. 10.
It should be noted that, in the invention, it is considered that if the license plate image is corrected only according to the four corner points of the license plate, the license plate image is easily affected by the coordinate error of a single corner point, and further the correction effect of the license plate image is affected. Based on this, according to an embodiment of the present invention, when the license plate in the feature region image is corrected, four vertices of the feature region image are further obtained, and the four corners of the license plate correspond to the four vertices of the feature region image respectively. Thus, the license plate in the characteristic region image is subjected to projection transformation processing according to the corresponding relation between the four corners of the license plate and the four vertexes of the characteristic region image, so that an accurately corrected license plate image (see fig. 10) can be obtained, the corrected license plate image is ensured to overcome the influence of large-size deflection, and license plate characters can be extracted more accurately according to the corrected license plate image.
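Mapping the four plate corner points onto the four vertices of the feature region image is a standard perspective (projection) transformation, which with OpenCV could look like the following sketch:

```python
import cv2
import numpy as np

def rectify_plate(feature_region, corners_tl_tr_br_bl):
    # corners_tl_tr_br_bl: the four plate corner points, ordered
    # top-left, top-right, bottom-right, bottom-left.
    h, w = feature_region.shape[:2]
    src = np.float32(corners_tl_tr_br_bl)
    # The four vertices of the feature region image, in the same order.
    dst = np.float32([[0, 0], [w - 1, 0], [w - 1, h - 1], [0, h - 1]])
    m = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(feature_region, m, (w, h))
```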
Finally, in step S260, a plurality of license plate characters are acquired based on the corrected license plate image, so as to determine license plate number information according to each acquired license plate character. It should be noted that after the collected license plate image is corrected (including segmentation processing and projection transformation processing) according to the above steps, license plate characters can be extracted more accurately, and license plate information can be determined accurately.
According to one embodiment of the invention, the AspNet-lstm recognition model is trained in advance to serve as the license plate recognition model. In this way, in step S260, the corrected license plate image can be processed by the previously trained AspNet-lstm recognition model and the processing result output. Specifically, the corrected license plate image is input into the AspNet-lstm model, which processes it and outputs a 1x64 processing result. The processing result is then decoded to obtain the plurality of license plate characters in the license plate image. When decoding the processing result, any space (blank) character is deleted, and if adjacent characters are identical, only one of them is retained.
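The decoding rules just described — delete the blank/space symbol and keep only one of any identical adjacent characters — correspond to greedy CTC-style decoding; a minimal sketch under that interpretation, with a hypothetical character table, is:

```python
import numpy as np

# Hypothetical character table; index 0 is the blank ("space") symbol.
CHARS = ["-"] + list("0123456789ABCDEFGHJKLMNPQRSTUVWXYZ")

def decode_plate(logits):
    """logits: array of shape (64, len(CHARS)) — one distribution per time step."""
    best = np.argmax(logits, axis=1)
    out, prev = [], -1
    for idx in best:
        # Collapse repeated labels, then drop the blank symbol.
        if idx != prev and idx != 0:
            out.append(CHARS[idx])
        prev = idx
    return "".join(out)
```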
Specifically, the AspNet-lstm recognition model comprises a feature extraction layer, an ASPP layer, and a Bi-lstm structure that are sequentially connected. The feature extraction layer can be one of the MobileNetv2, Resnet34, and VGG-16 models; these three models are of moderate size, so the recognition rate of the license plate recognition system can be guaranteed while ensuring recognition accuracy. The Bi-lstm structure includes a plurality of Bi-lstm layers, for example two Bi-lstm layers. The AspNet-lstm recognition model is finally obtained by taking the last Bi-lstm layer as the output layer. The loss function of the recognition model is:
L(S) = -∑_{(x,z)∈S} ln p(z|x)    (6)

In formula (6), S is the training data set, L(S) represents the final loss value, x is the input space, z is the target space, β^{-1} is the mapping function from z to the set of all of its paths, and L'^T is the complete character set, including the blank character, over the T output time steps; p(z|x) is obtained by summing the probabilities of all paths π ∈ β^{-1}(z).
When the corrected license plate image is processed by the AspNet-lstm recognition model, it is first input to the feature extraction layer; the output of the feature extraction layer is provided to the ASPP layer; the output of the ASPP layer is provided to the Bi-lstm structure; and the final processing result is output after processing by the Bi-lstm layers in the Bi-lstm structure.
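A schematic skeleton of the described pipeline (feature extraction layer → ASPP layer → Bi-lstm structure), written with PyTorch/torchvision as an assumed framework, is shown below; the backbone choice, channel sizes, dilation rates and sequence length are all assumptions, since the text only names the components and a 1x64 output.

```python
import torch
import torch.nn as nn
import torchvision

class ASPP(nn.Module):
    """Minimal ASPP block: parallel dilated convolutions, concatenated then fused."""
    def __init__(self, in_ch, out_ch, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r) for r in rates])
        self.project = nn.Conv2d(out_ch * len(rates), out_ch, 1)

    def forward(self, x):
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))

class PlateRecognizer(nn.Module):
    def __init__(self, num_classes, hidden=128):
        super().__init__()
        # Feature extraction layer; MobileNetv2 is one of the options named above.
        self.backbone = torchvision.models.mobilenet_v2(weights=None).features
        self.aspp = ASPP(1280, 256)
        # Bi-lstm structure: two bidirectional LSTM layers.
        self.bilstm = nn.LSTM(256, hidden, num_layers=2,
                              bidirectional=True, batch_first=True)
        self.head = nn.Linear(hidden * 2, num_classes)

    def forward(self, x):
        f = self.aspp(self.backbone(x))        # (B, 256, H', W')
        f = f.mean(dim=2).permute(0, 2, 1)     # pool height -> (B, W', 256)
        seq, _ = self.bilstm(f)
        return self.head(seq)                  # (B, W', num_classes) per-step scores
```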
According to one embodiment of the invention, the present invention trains the various models used in method 200 through a method of transfer learning.
Specifically, the yolov5 model is first trained on the CCPD data set until the model converges. The model is then fine-tuned on the CTSLD data set by transfer learning until it converges. In this way, the yolov5l detection model suitable for use in the method 200 of the present invention is obtained.
Then, feature region images of the license plate are cropped from all data in the CTSLD data set according to the annotated license plate contour coordinates. Mask data of the license plate is annotated on all the feature region images, the data set is divided into a training set and a validation set, and the Unet model is trained on this data set until it converges. In this way, a Unet segmentation model suitable for application in the method 200 of the present invention can be obtained.
Further, the trained Unet segmentation model predicts all data in the data set and generates a mask data set. And then, generating a corrected license plate image according to the newly generated mask image and the corresponding characteristic region image by adopting a projection transformation method.
Finally, feature region images of the license plate are cropped from the CCPD data set according to the annotated license plate contours and divided into a training set and a validation set; the AspNet-lstm model is trained on this data set and then trained on the corrected license plate images by transfer learning until the model converges. In this way, the AspNet-lstm model suitable for application in the method 200 of the present invention can be obtained.
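The transfer-learning steps above amount to initializing a model from weights trained on the source data set and continuing training on the target data set; a generic, framework-level sketch (model, data loader and checkpoint path are placeholders) is:

```python
import torch

def finetune(model, loader, loss_fn, epochs=10, lr=1e-4, ckpt="pretrained.pt"):
    # Start from weights obtained on the source data set (e.g. CCPD),
    # then continue training on the target data set (e.g. CTSLD).
    model.load_state_dict(torch.load(ckpt))
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for images, targets in loader:
            opt.zero_grad()
            loss = loss_fn(model(images), targets)
            loss.backward()
            opt.step()
    return model
```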
FIG. 3 shows a schematic diagram of a license plate recognition system 300 according to one embodiment of the invention.
As shown in fig. 3, the license plate recognition system 300 includes an acquisition module 310, a detection module 320, a correction module 330, and a recognition module 340, which are coupled in sequence.
Specifically, the acquisition module 310 is adapted to acquire an original license plate image. The detection module 320 is adapted to determine a license plate contour position according to the original license plate image, and obtain a feature region image from the original license plate image based on the license plate contour position. The correction module 330 is adapted to perform segmentation processing on the feature region image to generate a segmented image so as to distinguish a license plate region from a background region, determine four corner points of the license plate based on the license plate region, and perform projection transformation processing on the license plate in the feature region image according to the four corner points of the license plate and four vertices of the feature region image so as to generate a corrected license plate image. The recognition module 340 is adapted to obtain license plate characters based on the corrected license plate image.
It should be noted that the obtaining module 310 is specifically configured to perform step S210 in the method 200, the detecting module 320 is specifically configured to perform step S220 in the method 200, the correcting module 330 is specifically configured to perform steps S230 to S250 in the method 200, and the identifying module 340 is specifically configured to perform step S260 in the method 200. Here, specific execution logics of the obtaining module 310, the detecting module 320, the correcting module 330, and the identifying module 340 are not described again, and the execution logics of each module may specifically refer to the description of the corresponding step in the method 200.
According to the license plate recognition scheme of the invention, the license plate image is segmented and projection-transformed by the correction module, which overcomes the strong perspective distortion of license plate images captured under large-angle deflection, improves license plate recognition accuracy in such conditions, and allows the license plate recognition system to be applied in a wide variety of scenarios.
The various techniques described herein may be implemented in connection with hardware or software or, alternatively, with a combination of both. Thus, the methods and apparatus of the present invention, or certain aspects or portions thereof, may take the form of program code (i.e., instructions) embodied in tangible media, such as removable hard drives, USB flash disks, floppy disks, CD-ROMs, or any other machine-readable storage medium, wherein, when the program is loaded into and executed by a machine, such as a computer, the machine becomes an apparatus for practicing the invention.
In the case of program code execution on programmable computers, the computing device will generally include a processor, a storage medium readable by the processor (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device. The memory is configured to store program code; the processor is configured to execute the license plate recognition method of the present invention according to instructions in the program code stored in the memory.
By way of example, and not limitation, readable media may comprise readable storage media and communication media. Readable storage media store information such as computer readable instructions, data structures, program modules or other data. Communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media. Combinations of any of the above are also included within the scope of readable media.
In the description provided herein, algorithms and displays are not inherently related to any particular computer, virtual system, or other apparatus. Various general purpose systems may also be used with examples of this invention. The required structure for constructing such a system will be apparent from the description above. Moreover, the present invention is not directed to any particular programming language. It is appreciated that a variety of programming languages may be used to implement the teachings of the present invention as described herein, and any descriptions of specific languages are provided above to disclose the best mode of the invention.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. However, the disclosed method should not be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules or units or components of the devices in the examples disclosed herein may be arranged in a device as described in this embodiment or alternatively may be located in one or more devices different from the devices in this example. The modules in the foregoing examples may be combined into one module or may be further divided into multiple sub-modules.
Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and disposed in one or more devices different from the embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and furthermore they may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features included in other embodiments, rather than other features, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.
Furthermore, some of the described embodiments are described herein as a method or combination of method elements that can be performed by a processor of a computer system or by other means of performing the described functions. A processor having the necessary instructions for carrying out the method or method elements thus forms a means for carrying out the method or method elements. Further, the elements of the apparatus embodiments described herein are examples of the following apparatus: the apparatus is used to implement the functions performed by the elements for the purpose of carrying out the invention.
As used herein, unless otherwise specified the use of the ordinal adjectives "first", "second", "third", etc., to describe a common object, merely indicate that different instances of like objects are being referred to, and are not intended to imply that the objects so described must be in a given sequence, either temporally, spatially, in ranking, or in any other manner.
In the description of the present specification, the terms "connected", "fixed", and the like are to be construed broadly unless otherwise explicitly specified or limited. Furthermore, the terms "upper", "lower", "inner", "outer", "top", "bottom", and the like, indicate orientations or positional relationships based on those shown in the drawings, and are only for convenience in describing the present invention and simplifying the description, but do not indicate or imply that the referred device or unit must have a specific orientation, be constructed in a specific orientation, and be operated, and thus, should not be construed as limiting the present invention.
While the invention has been described with respect to a limited number of embodiments, those skilled in the art, having benefit of this description, will appreciate that other embodiments can be devised which do not depart from the scope of the invention as described herein. Furthermore, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes, and may not have been selected to delineate or circumscribe the inventive subject matter. Accordingly, many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the appended claims. The present invention has been disclosed in an illustrative rather than a restrictive sense, and the scope of the present invention is defined by the appended claims.

Claims (10)

1. A license plate recognition method is executed in a computing device and comprises the following steps:
acquiring an original license plate image;
determining the license plate outline position according to the original license plate image, and acquiring a characteristic region image from the original license plate image based on the license plate outline position;
carrying out segmentation processing on the characteristic region image to generate a segmentation image so as to distinguish a license plate region from a background region;
determining four corner points of the license plate based on the license plate region;
performing projection transformation processing on the license plate in the characteristic region image according to the four corner points of the license plate and the four vertexes of the characteristic region image to generate a corrected license plate image; and
and acquiring license plate characters based on the corrected license plate image.
2. The method of claim 1, wherein after generating the segmented image, further comprising the steps of:
carrying out binarization on the segmentation image to generate a binary image;
and performing expansion processing on the binary image to generate an expanded image.
3. The method of claim 2, wherein the step of determining the four corner points of the license plate comprises:
acquiring one or more connected domains from the expansion image, and selecting the connected domain with the largest area;
and extracting license plate contour lines based on the connected domain with the largest area, and determining four corner points of the license plate according to the extracted license plate contour lines.
4. The method of claim 3, wherein prior to acquiring the one or more connected components from the dilated image, comprising the steps of:
and carrying out filtering processing on the expansion image.
5. The method of claim 4, wherein the license plate outline comprises an upper edge line, a right edge line, a lower edge line, and a left edge line; the step of filtering the expansion image includes:
filtering the expansion image based on a first filtering algorithm to extract an upper edge line;
filtering the expansion image based on a second filtering algorithm to extract a right edge line;
filtering the expansion image based on a third filtering algorithm to extract a lower edge line;
and filtering the expansion image based on a fourth filtering algorithm to extract a left edge line.
6. The method of any one of claims 1-5, wherein determining license plate outline locations from the original license plate image comprises:
detecting a license plate contour in the original license plate image based on a detection model, and judging whether the height ratio of the license plate contour to the original license plate image exceeds a height threshold value or not and whether the width ratio of the license plate contour to the original license plate image exceeds a width threshold value or not;
and if the height ratio exceeds a height threshold value and the width ratio exceeds a width threshold value, acquiring a characteristic region image according to the license plate contour position.
7. The method of any one of claims 1-5, wherein segmenting the feature region image comprises:
and performing segmentation processing on the characteristic region image based on the segmentation model.
8. A license plate recognition system comprising:
the acquisition module is suitable for acquiring an original license plate image;
the detection module is suitable for determining the license plate outline position according to the original license plate image and acquiring a characteristic region image from the original license plate image based on the license plate outline position;
the correction module is suitable for segmenting the characteristic region image to generate a segmented image so as to distinguish a license plate region from a background region, determining four corner points of the license plate based on the license plate region, and performing projection transformation processing on the license plate in the characteristic region image according to the four corner points of the license plate and four vertexes of the characteristic region image to generate a corrected license plate image; and
and the recognition module is suitable for acquiring license plate characters based on the corrected license plate image.
9. A computing device, comprising:
at least one processor; and
a memory storing program instructions;
the program instructions, when read and executed by the processor, cause the computing device to perform the method of any of claims 1-7.
10. A readable storage medium storing program instructions that, when read and executed by a computing device, cause the computing device to perform the method of any of claims 1-7.
CN202110358257.4A 2021-04-01 2021-04-01 License plate recognition method and system and computing device Pending CN113095320A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110358257.4A CN113095320A (en) 2021-04-01 2021-04-01 License plate recognition method and system and computing device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110358257.4A CN113095320A (en) 2021-04-01 2021-04-01 License plate recognition method and system and computing device

Publications (1)

Publication Number Publication Date
CN113095320A true 2021-07-09

Family

ID=76672840

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110358257.4A Pending CN113095320A (en) 2021-04-01 2021-04-01 License plate recognition method and system and computing device

Country Status (1)

Country Link
CN (1) CN113095320A (en)

Patent Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103984949A (en) * 2014-06-11 2014-08-13 四川九洲电器集团有限责任公司 License plate positioning method and system based on high and low cap transformation and connected domain
CN106203433A (en) * 2016-07-13 2016-12-07 西安电子科技大学 In a kind of vehicle monitoring image, car plate position automatically extracts and the method for perspective correction
US20180268238A1 (en) * 2017-03-14 2018-09-20 Mohammad Ayub Khan System and methods for enhancing license plate and vehicle recognition
CN108537099A (en) * 2017-05-26 2018-09-14 华南理工大学 A kind of licence plate recognition method of complex background
CN110619327A (en) * 2018-06-20 2019-12-27 湖南省瞬渺通信技术有限公司 Real-time license plate recognition method based on deep learning in complex scene
CN109271984A (en) * 2018-07-24 2019-01-25 广东工业大学 A kind of multi-faceted license plate locating method based on deep learning
CN109145900A (en) * 2018-07-30 2019-01-04 中国科学技术大学苏州研究院 A kind of licence plate recognition method based on deep learning
CN109271991A (en) * 2018-09-06 2019-01-25 公安部交通管理科学研究所 A kind of detection method of license plate based on deep learning
CN109670498A (en) * 2018-11-10 2019-04-23 江苏网进科技股份有限公司 A kind of license plate locating method
CN109886896A (en) * 2019-02-28 2019-06-14 闽江学院 A kind of blue License Plate Segmentation and antidote
CN110222593A (en) * 2019-05-18 2019-09-10 四川弘和通讯有限公司 A kind of vehicle real-time detection method based on small-scale neural network
CN110414507A (en) * 2019-07-11 2019-11-05 和昌未来科技(深圳)有限公司 Licence plate recognition method, device, computer equipment and storage medium
CN110781882A (en) * 2019-09-11 2020-02-11 南京钰质智能科技有限公司 License plate positioning and identifying method based on YOLO model
CN111310754A (en) * 2019-12-31 2020-06-19 创泽智能机器人集团股份有限公司 Method for segmenting license plate characters
CN111768646A (en) * 2020-05-20 2020-10-13 同济大学 Intelligent parking system and parking method based on NB-IOT and intelligent lamp pole
CN112529001A (en) * 2020-11-03 2021-03-19 创泽智能机器人集团股份有限公司 License plate recognition method based on neural network

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
李祥鹏 (Li Xiangpeng): "Research on License Plate Recognition Algorithms Based on Deep Learning", China Masters' Theses Full-text Database, Engineering Science and Technology II *
赵坚勇 (Zhao Jianyong), Xidian University Press *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116702809A (en) * 2023-05-22 2023-09-05 山东黄金矿业(莱州)有限公司三山岛金矿 Simple pattern license plate recognition and detection method

Similar Documents

Publication Publication Date Title
CN108898142B (en) Recognition method of handwritten formula and computing device
CN109829453B (en) Method and device for recognizing characters in card and computing equipment
CN108304814B (en) Method for constructing character type detection model and computing equipment
US20200167596A1 (en) Method and device for determining handwriting similarity
CN108416345B (en) Answer sheet area identification method and computing device
CN107545223B (en) Image recognition method and electronic equipment
CN110427946B (en) Document image binarization method and device and computing equipment
CN111582267B (en) Text detection method, computing device and readable storage medium
CN111259878A (en) Method and equipment for detecting text
CN111461113B (en) Large-angle license plate detection method based on deformed plane object detection network
CN110431563A Method and apparatus for image rectification
CN114038004A (en) Certificate information extraction method, device, equipment and storage medium
WO2021227058A1 (en) Text processing method and apparatus, and electronic device and storage medium
CN111353325A (en) Key point detection model training method and device
CN110766068B (en) Verification code identification method and computing equipment
CN112949649B (en) Text image identification method and device and computing equipment
CN111783561A (en) Picture examination result correction method, electronic equipment and related products
CN114758332A (en) Text detection method and device, computing equipment and storage medium
CN111462098A (en) Method, device, equipment and medium for detecting overlapping of shadow areas of object to be detected
CN113095320A (en) License plate recognition method and system and computing device
CN112801067B (en) Method for detecting iris light spot and computing equipment
CN113052162A (en) Text recognition method and device, readable storage medium and computing equipment
CN111753830A (en) Job image correction method and computing device
CN115239776B (en) Point cloud registration method, device, equipment and medium
CN116342973A (en) Data labeling method and system based on semi-supervised learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
Application publication date: 20210709