CN111160073A - License plate type identification method and device and computer readable storage medium - Google Patents

License plate type identification method and device and computer readable storage medium

Info

Publication number
CN111160073A
CN111160073A (application CN201811324694.9A; granted as CN111160073B)
Authority
CN
China
Prior art keywords
license plate
image
connected domain
determining
color characteristic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811324694.9A
Other languages
Chinese (zh)
Other versions
CN111160073B (en)
Inventor
姚文凯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Uniview Technologies Co Ltd
Original Assignee
Zhejiang Uniview Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Uniview Technologies Co Ltd filed Critical Zhejiang Uniview Technologies Co Ltd
Priority to CN201811324694.9A priority Critical patent/CN111160073B/en
Publication of CN111160073A publication Critical patent/CN111160073A/en
Application granted granted Critical
Publication of CN111160073B publication Critical patent/CN111160073B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/60 Type of objects
    • G06V20/62 Text, e.g. of license plates, overlay texts or captions on TV images
    • G06V20/63 Scene text, e.g. street names
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/60 Type of objects
    • G06V20/62 Text, e.g. of license plates, overlay texts or captions on TV images
    • G06V20/625 License plates

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the invention provides a license plate type identification method and device and a computer readable storage medium, relating to the technical field of license plate recognition. The method comprises the following steps: determining a license plate inspection image region according to the license plate boundary determined from the image to be processed; identifying the license plate background color characteristic parameter, the front and back color characteristic parameter and the layer number characteristic parameter corresponding to the license plate inspection image region by using a deep learning model; and judging the license plate type of the license plate appearing in the image to be processed based on these three characteristic parameters. Through this scheme, the identification accuracy can be effectively improved, the training workload can be reduced, and the maintenance cost can be effectively controlled.

Description

License plate type identification method and device and computer readable storage medium
Technical Field
The invention relates to the technical field of license plate identification, in particular to a license plate type identification method and device and a computer readable storage medium.
Background
With social progress, the number of vehicles and the volume of road traffic in China continue to grow, and the burden of ground traffic management grows with them. To cope with this increasingly heavy workload, intelligent transportation systems have emerged, gradually replacing manual supervision and becoming the mainstream approach to traffic management. Within an intelligent transportation system, license plate recognition is undoubtedly one of the most important tasks. However, with the refined management of vehicles, the management standards for different types of vehicles differ, and vehicles under different standards are generally issued different types of license plates (for example, green-plate new energy vehicles are exempt from traffic restrictions). Clearly, recognizing the license plate number alone cannot meet these business requirements.
The related art proposes using a trained learning model to recognize the license plate and directly output the license plate type. However, the model's recognition capability for each license plate type not only requires training on a large number of samples, but its accuracy is also unsatisfactory. Moreover, as the refined management of vehicles advances, the number of license plate types will gradually increase; whenever a new type is added, all other types must be considered together and sample training must be performed again, which greatly increases development cost.
Disclosure of Invention
The object of the present invention is to provide a license plate type recognition method, a license plate type recognition device, and a computer-readable storage medium so as to solve the above problems.
In order to achieve the above purpose, the embodiment of the present invention adopts the following technical solutions:
In a first aspect, an embodiment of the present invention provides a license plate type recognition method, which is applied to an electronic device in which a deep learning model is stored in advance, the method comprising: determining a license plate inspection image region according to the license plate boundary determined from the image to be processed; identifying the license plate background color characteristic parameter, the front and back color characteristic parameter and the layer number characteristic parameter corresponding to the license plate inspection image region by using the deep learning model; and judging the license plate type of the license plate appearing in the image to be processed based on the license plate background color characteristic parameter, the front and back color characteristic parameter and the layer number characteristic parameter.
In a second aspect, an embodiment of the present invention provides a license plate type recognition apparatus, which is applied to an electronic device in which a deep learning model is stored in advance, the apparatus comprising: a determining module, configured to determine a license plate inspection image region according to the license plate boundary determined from the image to be processed; a characteristic identification module, configured to identify the license plate background color characteristic parameter, the front and back color characteristic parameter and the layer number characteristic parameter corresponding to the license plate inspection image region by using the deep learning model; and a judging module, configured to judge the license plate type of the license plate appearing in the image to be processed based on the license plate background color characteristic parameter, the front and back color characteristic parameter and the layer number characteristic parameter.
In a third aspect, embodiments of the present invention provide a computer-readable storage medium having stored thereon computer instructions, which, when executed by a processor, implement the steps of the aforementioned method.
The license plate type identification method differs from the prior art in that, after the license plate inspection image region is obtained, the license plate background color characteristic parameter, the front and back color characteristic parameter and the layer number characteristic parameter are extracted from that region by a preset deep learning model. The deep learning model is used only to identify element characteristic parameters of the license plate inspection image region, not to identify the license plate type directly; therefore, when a new license plate type composed of already identifiable element characteristics is added, the model does not need to be retrained on samples of the new type, which reduces the training workload and keeps maintenance cost under control. The extracted background color, front and back color and layer number characteristic parameters are then used to judge the license plate type of the license plate appearing in the image to be processed. That is, the combination of several element characteristic parameters of the license plate is used to judge its type, which improves the judgment accuracy.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
To illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present invention and should not be considered as limiting its scope; those of ordinary skill in the art can obtain other related drawings from these drawings without inventive effort.
Fig. 1 shows a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Fig. 2 is a flowchart illustrating steps of a license plate type identification method according to an embodiment of the present invention.
Fig. 3 shows a flowchart of further steps of a license plate type identification method according to an embodiment of the present invention.
Fig. 4 is a flowchart illustrating steps of an application example of a license plate type recognition method according to an embodiment of the present invention.
Fig. 5 is a schematic diagram illustrating functional modules of a license plate type recognition apparatus according to an embodiment of the present invention.
Icon: 100-an electronic device; 111-a memory; 112-a processor; 113-a communication unit; 200-license plate type recognition means; 201-an identification module; 202-a fusion module; 203-a screening module; 204-an obtaining module; 205-a determination module; 206-a feature recognition module; 207-judgment module.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present invention without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures. Meanwhile, in the description of the present invention, the terms "first", "second", and the like are used only for distinguishing the description, and are not to be construed as indicating or implying relative importance.
Referring to fig. 1, an electronic device 100 according to an embodiment of the invention is provided. The electronic device 100 may be a mobile phone, a computer, a server, a mobile smart terminal, or the like. Optionally, the electronic device 100 includes a license plate type recognition device 200, a memory 111, a processor 112, and a communication unit 113.
The memory 111, the processor 112 and the communication unit 113 are electrically connected to each other, directly or indirectly, to realize data transmission or interaction. For example, these components may be electrically connected via one or more communication buses or signal lines. The memory 111 is used to store software function modules of the Operating System (OS) of the electronic device 100. The processor 112 is configured to execute executable modules stored in the memory 111, for example the program segments of the license plate type recognition apparatus 200, so as to implement the license plate type recognition method provided in the present embodiment.
The memory 111 may be, but is not limited to, a Random Access Memory (RAM), a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), and the like. Optionally, the trained deep learning model may be stored in the memory 111 of the electronic device 100 in advance.
The communication unit 113 is configured to establish a communication connection between the electronic apparatus 100 and another communication terminal via the network, and to transceive data via the network.
First embodiment
Referring to fig. 2, fig. 2 shows a license plate type recognition method according to a preferred embodiment of the invention. The license plate type recognition method can be applied to the electronic device 100 shown in fig. 1. Optionally, the method comprises:
and step S101, determining a license plate submission image area according to the license plate boundary determined from the image to be processed.
In the embodiment of the present invention, the image to be processed may be image data in which a vehicle appears, acquired by the electronic device 100 itself or received from an external source. The electronic device 100 then determines a license plate inspection image region based on the license plate boundary determined from the image to be processed, so that the license plate boundary falls completely within the license plate inspection image region.
As one implementation, a license plate inspection image region is preliminarily determined from the image to be processed, and it is then checked in turn whether the distance between the left boundary of the region and the left boundary of the license plate is within 20 pixels, whether the distance between their right boundaries is within 20 pixels, whether the distance between their upper boundaries is within 10 pixels, and whether the distance between their lower boundaries is within 10 pixels. If all of these conditions are satisfied (that is, the left and right margins are within 20 pixels and the upper and lower margins are within 10 pixels), the region needs no adjustment and is used as the final license plate inspection image region. Otherwise, the preliminarily determined region is adjusted until the conditions between the region and the license plate boundary are satisfied. Specifically, the adjustment may expand the license plate inspection image region by 0.2 times the region width or 0.4 times the region height. It should be noted that the specific values mentioned above (for example "20 pixels", "10 pixels", "0.2 times" and "0.4 times") are only used to describe the implementation more intuitively and do not limit the ranges of these values.
It will be understood that this is by way of example only.
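As a rough illustration, the margin check and expansion described above can be sketched as follows. This is a minimal sketch: the function name, the (left, top, right, bottom) box convention, and the symmetric expansion are assumptions; only the margin caps (20 and 10 pixels) and the expansion factors (0.2 times the width, 0.4 times the height) come from the example values in the text.

```python
def ensure_margins(region, plate, img_w, img_h,
                   h_margin=20, v_margin=10,
                   w_step=0.2, h_step=0.4):
    """Expand `region` until it encloses `plate` with the required margins.

    Boxes are (left, top, right, bottom) in pixels. Margin caps and
    expansion factors follow the example values in the text.
    """
    left, top, right, bottom = region
    p_left, p_top, p_right, p_bottom = plate
    for _ in range(20):  # safety bound on the number of expansions
        w, h = right - left, bottom - top
        ok = (0 <= p_left - left <= h_margin and
              0 <= right - p_right <= h_margin and
              0 <= p_top - top <= v_margin and
              0 <= bottom - p_bottom <= v_margin)
        if ok:
            break
        # widen by 0.2x the region width and heighten by 0.4x the region
        # height (split evenly between both sides), clamped to the image
        left = max(0, left - int(w * w_step / 2))
        right = min(img_w, right + int(w * w_step / 2))
        top = max(0, top - int(h * h_step / 2))
        bottom = min(img_h, bottom + int(h * h_step / 2))
    return left, top, right, bottom
```

For example, a preliminary region that clips the plate on all sides is grown once and then passes every margin check, while a region that already satisfies the conditions is returned unchanged.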
It should be noted that, in this embodiment, the directions or positional relationships indicated by "left", "right", "upper" and "lower" are relative, or are the directions usually presented when the license plate inspection image region and the license plate boundary are displayed. They are used only for convenience and simplicity of description and do not indicate or imply any specific orientation.
Step S102: identifying the license plate background color characteristic parameter, the front and back color characteristic parameter and the layer number characteristic parameter corresponding to the license plate inspection image region by using the deep learning model.
In the embodiment of the invention, the finally determined license plate inspection image region is resized to meet the input requirement of the deep learning model, and the deep learning model then processes the region to output the corresponding license plate element characteristics, such as the license plate background color characteristic parameter, the front and back color characteristic parameter and the layer number characteristic parameter.
Optionally, the license plate background color characteristic parameter may be characteristic information representing the color category of the plate background, the front and back color characteristic parameter may be characteristic information representing the color of the characters on the license plate, and the layer number characteristic parameter may be characteristic information representing the number of rows of characters on the license plate.
It can be understood that the background color mentioned in the embodiments of the present invention may refer to the color of the region of the license plate other than the characters, the front and back color may refer to the color of the characters themselves, and the number of layers may refer to the number of rows in which the characters are arranged.
It should be noted that different values of the license plate background color characteristic parameter represent different background color categories. Optionally, the color categories that the background may take can be predefined; for example, they may be defined to include blue, yellow, white, new energy green, civil aviation green, and so on, with parameter values 0, 1, 2, 3 and 4 respectively. That is, the license plate background color characteristic parameter may take the values 0, 1, 2, 3 and 4: if the extracted parameter indicates that the plate background is blue, its value is 0; if it indicates that the background is yellow, its value is 1; and so on.
It should be noted that different values of the front and back color characteristic parameter represent different character color categories. Optionally, the color categories of the characters can be predefined; for example, they may be defined to include white characters and black characters, with parameter value 0 for black characters and 1 for white characters. That is, the front and back color characteristic parameter may take the values 0 and 1: if the extracted parameter indicates that the characters on the license plate are black, its value is 0; if it indicates that the characters are white, its value is 1.
It should be noted that different values of the layer number characteristic parameter represent different row arrangements of the characters on the license plate. Optionally, the arrangement categories can be predefined; for example, they may be defined to include one row of characters (i.e., single-layer) and two rows of characters (i.e., double-layer), with parameter value 0 for one row and 1 for two rows. That is, the layer number characteristic parameter may take the values 0 and 1: if the extracted parameter indicates that the license plate carries only one row of characters, its value is 0; if it indicates two rows, its value is 1. Of course, more or fewer license plate element characteristics can be selected as the characteristic parameters output by the deep learning model as the license plate types in actual use change (for example, to cover a unique element characteristic of a newly added type). For convenience of explanation, the embodiment of the present invention describes the outputs of the deep learning model as the background color, front and back color and layer number characteristic parameters; even if the types and number of output characteristic parameters change, the principle is the same.
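For illustration, the example encodings above can be written as plain lookup maps. The category names and values mirror the examples given in the text; a real deployment would define its own set.

```python
# Illustrative encodings for the three element characteristic parameters,
# following the example values given in the text.
BG_COLOR = {"blue": 0, "yellow": 1, "white": 2,
            "new_energy_green": 3, "civil_aviation_green": 4}
CHAR_COLOR = {"black": 0, "white": 1}   # front/back (character) color
LAYERS = {"single": 0, "double": 1}     # one row vs. two rows of characters
```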
Further, the deep learning model may be a convolutional neural network model. Preferably, the convolutional neural network comprises three convolutional layers, a pooling layer and a softmax layer. After the three convolutional layers process the license plate inspection image region, the pooling layer outputs a plurality of branches, the number of which equals the number of characteristic parameter types to be output. For example, there may be three branches: the first outputs a 1 × N vector, the second a 1 × M vector, and the third a 1 × S vector. Through the softmax layer, the first branch's vector yields the probability of each possible value of the background color characteristic parameter, the second branch's vector yields the probability of each possible value of the front and back color characteristic parameter, and the third branch's vector yields the probability of each possible value of the layer number characteristic parameter. N, M and S are the numbers of possible values of the background color, front and back color, and layer number characteristic parameters, respectively.
It can be understood that, because the present scheme is described in terms of three element characteristics (the background color, front and back color and layer number characteristic parameters), only three branches are mentioned; if further characteristic parameters are added to assist license plate type identification, the deep learning model may include a fourth branch, a fifth branch, and so on.
Further, for each branch, the value with the highest probability output by the softmax layer is taken as the parameter value output by the deep learning model: the background color characteristic parameter, the front and back color characteristic parameter and the layer number characteristic parameter of the license plate inspection image region are each set to the candidate value whose softmax probability is highest.
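This per-branch selection amounts to taking the argmax of each branch's softmax output. The following is a minimal NumPy sketch; the function names and the raw-logit inputs are assumptions for illustration, not the patent's implementation.

```python
import numpy as np

def softmax(v):
    """Numerically stable softmax over a 1-D vector."""
    e = np.exp(v - v.max())
    return e / e.sum()

def decode_heads(bg_logits, color_logits, layer_logits):
    """Pick, for each branch, the parameter value with the highest
    softmax probability (illustrative sketch of the three model heads)."""
    return (int(np.argmax(softmax(np.asarray(bg_logits, float)))),
            int(np.argmax(softmax(np.asarray(color_logits, float)))),
            int(np.argmax(softmax(np.asarray(layer_logits, float)))))
```

For example, three branch outputs of sizes N = 5, M = 2 and S = 2 decode to one value per characteristic parameter.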
It should be noted that using the deep learning model to identify whether a license plate is single-layer or double-layer replaces the traditional method of determining this from the upper and lower boundaries, and also avoids the inaccurate single/double-layer judgments caused by contamination of the plate.
Step S103: judging the license plate type of the license plate appearing in the image to be processed based on the license plate background color characteristic parameter, the front and back color characteristic parameter and the layer number characteristic parameter.
In the embodiment of the invention, the license plate type shown in the license plate inspection image region is determined from the output background color characteristic parameter, front and back color characteristic parameter and layer number characteristic parameter.
Optionally, the correspondence between each combination of license plate background color, front and back color and number of layers and each license plate type may be stored in advance, for example as in the following table:
[Table: correspondence between combinations of background color, front and back color and number of layers and license plate types; reproduced only as images (BDA0001858396650000081, BDA0001858396650000091) in the original document.]
Of course, the above table is only an example; when a license plate type is added, it can be adjusted accordingly.
The corresponding license plate type is then judged from the values of the background color characteristic parameter, the front and back color characteristic parameter and the layer number characteristic parameter output by the deep learning model.
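A hypothetical version of this lookup might look like the following. The specific combinations and type names are illustrative guesses, since the patent's actual correspondence table is reproduced only as an image; the parameter encodings follow the earlier examples (background color 0 = blue, 1 = yellow, 3 = new energy green; character color 0 = black, 1 = white; layers 0 = single row, 1 = double row).

```python
# Hypothetical correspondence table:
# (background color, character color, number of layers) -> plate type.
PLATE_TYPE = {
    (0, 1, 0): "ordinary blue plate (single layer, white characters)",
    (1, 0, 0): "ordinary yellow plate (single layer, black characters)",
    (1, 0, 1): "double-layer yellow plate (e.g. trailer)",
    (3, 0, 0): "new-energy plate (green, black characters)",
}

def classify(bg, char_color, layers):
    """Map a combination of element characteristic values to a type name."""
    return PLATE_TYPE.get((bg, char_color, layers), "unknown type")
```

Adding a new plate type then means adding one dictionary entry, without touching the recognition of existing combinations.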
Extensive experiments demonstrate that when the deep learning model outputs the characteristic parameters (namely the background color, front and back color and layer number characteristic parameters) and the license plate type is then judged from them, the accuracy can reach more than 99%, which is clearly higher than that of using a deep learning model to output the license plate type directly. Meanwhile, when a license plate type needs to be added, it is only necessary to add the new type to the training and test sets and modify the corresponding labels to support judging the new type. The effect on the other recognizable license plate types need not be considered, which reduces the workload of updating the deep learning model and saves development cost.
Further, when the license plate boundary is determined from the image to be processed, light or stains on the license plate may cause a relatively large error in the boundary determination, which in turn affects the accuracy of the subsequent license plate type identification. To avoid this, as shown in fig. 3, the license plate type recognition may further include the following steps:
step S201, a plurality of connected domains in the image to be processed are identified.
As one embodiment, the Otsu algorithm may be employed to determine a plurality of connected domains in the image to be processed. Specifically, a variance function between the foreground pixels and the background pixels of the image to be processed is first constructed:

g = w0 · w1 · (u0 − u1)²

where g is the between-class variance under a variable threshold, w0 is the proportion of foreground pixels among all pixels of the image under that threshold, w1 is the proportion of background pixels, u0 is the average gray value of the foreground pixels, and u1 is the average gray value of the background pixels. It should be noted that the foreground pixels may be the pixels of a determined region of interest; for example, in the embodiment of the present invention, if the region of interest is the region where the license plate is located, the pixels of the image region in which the license plate appears are the foreground pixels. The background pixels may be the pixels of the image outside the region of interest.
A full-image Otsu threshold corresponding to the image to be processed is then determined based on the variance function. Specifically, the variable threshold at which the variance function reaches its maximum may be taken as the full-image Otsu threshold.
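As a sketch of the two steps above, the following hedged example computes the full-image Otsu threshold by maximizing g = w0 * w1 * (u0 - u1)^2 over all candidate thresholds. The function name `otsu_threshold` and the 8-bit gray-value assumption are illustrative, not from the patent:

```python
import numpy as np

def otsu_threshold(gray):
    """Return the threshold t that maximizes the between-class variance
    g = w0 * w1 * (u0 - u1)^2; `gray` is a 2-D array of 8-bit values."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    total = hist.sum()
    best_t, best_g = 0, -1.0
    for t in range(1, 256):
        w0 = hist[:t].sum() / total          # foreground proportion
        w1 = 1.0 - w0                        # background proportion
        if w0 == 0.0 or w1 == 0.0:
            continue                         # one class empty: skip
        u0 = (np.arange(t) * hist[:t]).sum() / hist[:t].sum()
        u1 = (np.arange(t, 256) * hist[t:]).sum() / hist[t:].sum()
        g = w0 * w1 * (u0 - u1) ** 2         # between-class variance
        if g > best_g:
            best_g, best_t = g, t
    return best_t
```

On a clearly bimodal image the returned threshold separates the two gray-value modes, which is the behavior the full-image step relies on.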
The image to be processed is then divided into a plurality of region blocks; optionally, the division follows the image size, for example an image of size M × N may be uniformly divided into m × n region blocks. The region Otsu threshold of each region block is then calculated in turn. The principle of calculating a region Otsu threshold is the same as that of calculating the full-image Otsu threshold, and is not repeated here.
A binarization threshold of each region block is then determined according to the region block, its corresponding region Otsu threshold, and the full-image Otsu threshold. As one embodiment, the binarization threshold of each region block may be calculated with the formula Th_i = w0 * th0 + w_i * th_i, where Th_i is the binarization threshold of the i-th region block, th0 is the full-image Otsu threshold, th_i is the region Otsu threshold of the i-th region block, w0 + w_i = 1, and i ranges from 1 to m × n.
And sequentially carrying out binarization processing on each region block based on the corresponding binarization threshold value.
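The block division and weighted-threshold binarization above might be sketched as follows. The 0.5/0.5 weight split and the mean-based stand-in for the per-block threshold are assumptions for illustration; a real implementation would plug in the Otsu computation, and blocks are assumed to divide the image evenly:

```python
import numpy as np

def blockwise_binarize(gray, m, n, w0=0.5, otsu=None):
    """Split `gray` into m x n region blocks and binarize each block with
    Th_i = w0 * th_global + (1 - w0) * th_block.  The equal weights and the
    mean() stand-in threshold are illustrative choices, not the patent's."""
    if otsu is None:
        otsu = lambda a: int(a.mean())   # placeholder; use real Otsu in practice
    th_global = otsu(gray)
    out = np.zeros_like(gray)
    H, W = gray.shape
    hs, ws = H // m, W // n              # assumes m | H and n | W
    for i in range(m):
        for j in range(n):
            block = gray[i*hs:(i+1)*hs, j*ws:(j+1)*ws]
            th = w0 * th_global + (1 - w0) * otsu(block)
            out[i*hs:(i+1)*hs, j*ws:(j+1)*ws] = (block > th) * 255
    return out
```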
Finally, the connected domains on the binarized image to be processed are determined according to a pre-selected connectivity specification. For example, if 4-connectivity is selected, 4-connected component labeling is performed on the image to be processed obtained after binarization.
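A minimal 4-connected component labeling, as referenced above, could look like the following pure-Python flood fill (the names are illustrative):

```python
from collections import deque

def label_4connected(binary):
    """Label the 4-connected foreground components of a 2-D 0/1 grid.
    Returns (labels, count); labels[y][x] is 0 for background pixels."""
    h, w = len(binary), len(binary[0])
    labels = [[0] * w for _ in range(h)]
    count = 0
    for sy in range(h):
        for sx in range(w):
            if binary[sy][sx] and not labels[sy][sx]:
                count += 1                       # new component found
                labels[sy][sx] = count
                q = deque([(sy, sx)])
                while q:                         # BFS over 4-neighbors
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and binary[ny][nx] and not labels[ny][nx]):
                            labels[ny][nx] = count
                            q.append((ny, nx))
    return labels, count
```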
And step S202, performing fusion processing on the plurality of connected domains to obtain a plurality of connected domains to be selected.
In the embodiment of the invention, fusing the plurality of connected domains may be performed as an ordered fusion between connected domains identified as needing to be fused. Optionally, a specific direction may be selected on the image to be processed, and adjacency between connected domains is judged along that direction; for example, the x-axis may be selected as the specific direction on the image to be processed.
Optionally, it is detected in turn whether each pair of connected domains adjacent in the specified direction has a boundary connection, for example whether the upper boundary of one connected domain touches the lower boundary of the other. Of course, the above is only an example; detection is not limited to the upper and lower boundaries, and any boundaries of two adjacent connected domains may be checked for connection. If a boundary connection exists, the two adjacent connected domains are fused.
Further, a plurality of connected domains to be selected are obtained after the fusion processing. It should be noted that if only some of the connected domains are fused, the connected domains to be selected may include both the unfused connected domains and the new connected domains obtained by fusion; if none of the original connected domains needed fusion, the connected domains to be selected are simply the originally determined connected domains.
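Representing each connected domain by its bounding box, the ordering-and-fusion step might be sketched as follows. The fusion criterion used here, x-adjacency plus vertical overlap, is one reading of "boundary connection", not the patent's exact rule:

```python
def fuse_adjacent(boxes):
    """Merge bounding boxes (x0, y0, x1, y1) that are adjacent along the
    x-axis and whose vertical extents touch or overlap.  A simplified
    reading of the patent's ordered fusion step."""
    boxes = sorted(boxes)                  # order along the chosen direction (x)
    fused = []
    for box in boxes:
        if fused:
            x0, y0, x1, y1 = fused[-1]
            bx0, by0, bx1, by1 = box
            # adjacent in x and connected in y -> fuse into one box
            if bx0 <= x1 and not (by0 > y1 or by1 < y0):
                fused[-1] = (min(x0, bx0), min(y0, by0),
                             max(x1, bx1), max(y1, by1))
                continue
        fused.append(box)
    return fused
```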
And step S203, screening the connected domain to be selected by using preset reference standard information.
In the embodiment of the present invention, the reference standard information may be preset allowable intervals for the aspect ratio, height, and boundary coordinates of a connected domain. The width is the length of the connected domain to be selected along the specified direction, and the height is its length in the direction perpendicular to the specified direction; for example, when the specified direction is the x-axis, the perpendicular direction is the y-axis. Connected domains to be selected whose aspect ratio, height, or boundary coordinates fall outside the corresponding allowable intervals in the reference standard information are then screened out.
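The interval-based screening can be sketched as follows; the interval values passed in are placeholders, since the patent leaves the concrete reference standard information to offline configuration:

```python
def screen_candidates(boxes, ar_range, h_range, x_range):
    """Keep candidate boxes (x0, y0, x1, y1) whose aspect ratio (w/h),
    height, and left-boundary coordinate all fall inside the allowed
    intervals; the interval values are preset reference information."""
    kept = []
    for x0, y0, x1, y1 in boxes:
        w, h = x1 - x0, y1 - y0
        if h == 0:
            continue                        # degenerate box: discard
        ar = w / h
        if (ar_range[0] <= ar <= ar_range[1]
                and h_range[0] <= h <= h_range[1]
                and x_range[0] <= x0 <= x_range[1]):
            kept.append((x0, y0, x1, y1))
    return kept
```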
And step S204, determining at least one target connected domain from the screened connected domains to be selected according to the average height value of the screened connected domains to be selected.
In the embodiment of the invention, the average height of the screened connected domains to be selected is first calculated. Then, for each connected domain to be selected, the height difference between it and the next adjacent connected domain to be selected in the specified direction is calculated and compared with the average height. If the corresponding height difference does not exceed the average height, the connected domain to be selected is taken as a target connected domain.
Further, to improve accuracy, if a connected domain to be selected has a height difference exceeding the average height, that connected domain needs to be split so that the connected domains to be selected can be re-determined. Optionally, the splitting includes: judging whether the connected domain to be selected that exceeds the average height is a fused connected domain. If it is, the earlier fusion is undone and the domain is restored to at least two connected domains, which replace the over-height connected domain as newly added connected domains to be selected. If it is not a fused connected domain, the narrowest point of its width is found and the domain is split there into two new connected domains to be selected, replacing the original one that exceeded the average height. The re-determined connected domains to be selected are thus obtained; it should be noted that they may also include unsplit connected domains. Screening is then repeated on the re-determined connected domains to be selected and at least one target connected domain is re-determined from the screened result, i.e., the flow returns to step S203.
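The height-difference comparison against the average height might look like the following; keeping the last candidate (which has no successor to compare against) is a simplifying assumption over the patent's description, and the splitting-and-rescreening loop is omitted:

```python
def select_targets(boxes):
    """From x-sorted candidate boxes (x0, y0, x1, y1), keep those whose
    height difference from the next candidate does not exceed the mean
    height of all candidates.  The last box, having no successor, is
    kept by assumption."""
    boxes = sorted(boxes)
    heights = [y1 - y0 for _, y0, _, y1 in boxes]
    avg_h = sum(heights) / len(heights)
    targets = []
    for i, box in enumerate(boxes):
        if i + 1 == len(boxes):
            targets.append(box)             # no successor: keep
        elif abs(heights[i] - heights[i + 1]) <= avg_h:
            targets.append(box)
    return targets
```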
And S205, determining the license plate boundary according to the at least one target connected domain.
In the embodiment of the invention, the boundary coordinate value of the determined target connected domain is obtained, and the license plate boundary is determined according to the boundary coordinate value.
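Deriving the license plate boundary from the target connected domains' boundary coordinate values is then a simple union of their bounding boxes, which might be sketched as:

```python
def plate_boundary(targets):
    """Union of the target connected domains' bounding coordinates gives
    the license plate boundary as (left, top, right, bottom)."""
    xs0, ys0, xs1, ys1 = zip(*targets)
    return min(xs0), min(ys0), max(xs1), max(ys1)
```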
The license plate boundary obtained by the above steps reduces the impact of stains on the license plate on the accuracy of boundary determination, reduces the dependence on color extraction when determining the boundary, enhances scene adaptability, and improves robustness. It also reduces, to a certain extent, such interference with the recognition of single-layer and double-layer license plates.
To further describe the license plate type recognition method provided in the embodiment of the present invention, an operation example shown in fig. 4 is described below. As shown in fig. 4, the method may include:
and S1, acquiring a connected domain in the image to be processed.
And S2, performing sequencing fusion processing on the obtained connected domains.
And S3, screening the fused connected domains, and determining a target connected domain from the screened connected domains to be selected.
And S4, determining the boundary of the license plate based on the determined boundary coordinates of the target connected domain.
S5, a license plate inspection image region is determined on the image to be processed, and whether the license plate boundary in the license plate inspection image region is complete is detected. If it is incomplete, the flow advances to step S6; if it is complete, the flow advances to step S7.
S6, the license plate inspection image region is expanded at its edges, and whether the license plate boundary in the expanded region is complete is detected again, repeating until the license plate boundary in the license plate inspection image region is complete. The flow then advances to step S7.
S7, the license plate inspection image region is adjusted to the input size required by the preset convolutional neural network model.
S8, the license plate inspection image region is input into the convolutional neural network model for processing so as to output a feature map.
S9, the license plate background color characteristic parameter, the front and back color characteristic parameter, and the layer number characteristic parameter are output by the convolutional neural network model.
S10, the license plate type is judged based on the license plate background color characteristic parameter, the front and back color characteristic parameter, and the layer number characteristic parameter.
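The final judgment in S10 can be implemented as a lookup from the three characteristic parameters to a license plate type via a pre-stored correspondence table, as claim 8 describes. The entries below are purely illustrative examples, not the patent's actual correspondence table:

```python
# Illustrative lookup table; the real correspondence between feature
# combinations and plate types is configured in advance by the system.
PLATE_TYPES = {
    ("blue", "white", 1): "small ordinary vehicle plate",
    ("yellow", "black", 1): "large vehicle plate (single layer)",
    ("yellow", "black", 2): "large vehicle plate (double layer)",
}

def judge_plate_type(bg_color, fg_color, layers):
    """Map the background color, front/back color, and layer number
    parameters to a plate type; unknown combinations return None."""
    return PLATE_TYPES.get((bg_color, fg_color, layers))
```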
Second embodiment
Referring to fig. 5, a license plate type recognition device 200 according to an embodiment of the present invention is provided. The license plate type recognition apparatus 200 is applied to the electronic device 100. Alternatively, as shown in fig. 5, the license plate type recognition device 200 includes: the system comprises an identification module 201, a fusion module 202, a screening module 203, an acquisition module 204, a determination module 205, a feature identification module 206 and a judgment module 207.
An identifying module 201, configured to identify a plurality of connected domains in the image to be processed.
And a fusion module 202, configured to perform fusion processing on the multiple connected domains to obtain multiple connected domains to be selected.
And the screening module 203 is configured to screen the connected domain to be selected by using preset reference standard information.
An obtaining module 204, configured to determine at least one target connected domain from the screened connected domains to be selected according to the average height value of the screened connected domains to be selected.
The determining module 205 is configured to determine the license plate boundary according to the at least one target connected domain.
The determining module 205 is further configured to determine a license plate inspection image region according to the license plate boundary determined from the image to be processed.
The feature recognition module 206 is configured to recognize, by using the deep learning model, the license plate background color characteristic parameter, the front and back color characteristic parameter, and the layer number characteristic parameter corresponding to the license plate inspection image region.
The judging module 207 is configured to judge the license plate type of the license plate appearing in the image to be processed based on the license plate background color characteristic parameter, the front and back color characteristic parameter, and the layer number characteristic parameter.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working process of the license plate type recognition apparatus 200 described above may refer to the corresponding process in the foregoing method embodiment, and is not described herein again.
The embodiment of the present invention also discloses a computer-readable storage medium, on which a computer program is stored, and the computer program, when executed by the processor 112, implements the license plate type recognition method disclosed in the foregoing embodiment of the present invention.
In summary, embodiments of the present invention provide a license plate type identification method, a license plate type identification device, and a computer-readable storage medium, which are applied to an electronic device in which a deep learning model is stored in advance. The method comprises: determining a license plate inspection image region according to the license plate boundary determined from the image to be processed; identifying, with the deep learning model, the license plate background color characteristic parameter, the front and back color characteristic parameter, and the layer number characteristic parameter corresponding to the license plate inspection image region; and judging the license plate type of the license plate appearing in the image to be processed based on these three characteristic parameters. Judging the license plate type through a combination of multiple element features of the license plate improves recognition accuracy.
Because the deep learning model is only used to output the element features of the license plate inspection image region rather than the license plate type directly, a newly added license plate type composed of already recognizable element features requires no retraining; only newly added element features need to be trained and learned, and the other recognizable license plate types need not be considered, which reduces the training workload and effectively controls the maintenance cost.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method can be implemented in other ways. The apparatus embodiments described above are merely illustrative, and for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, the functional modules in the embodiments of the present invention may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention. It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.

Claims (10)

1. A license plate type recognition method is applied to electronic equipment, a deep learning model is stored in the electronic equipment in advance, and the method is characterized by comprising the following steps:
determining a license plate inspection image area according to the license plate boundary determined from the image to be processed;
identifying license plate background color characteristic parameters, front and back color characteristic parameters and layer number characteristic parameters corresponding to the license plate inspection image area by using the deep learning model;
and judging the license plate type of the license plate appearing in the image to be processed based on the license plate background color characteristic parameter, the front and back color characteristic parameter and the layer number characteristic parameter.
2. The method of claim 1, wherein the method further comprises:
identifying a plurality of connected domains in the image to be processed;
performing fusion processing on the plurality of connected domains to obtain a plurality of connected domains to be selected;
screening the connected domain to be selected by using preset reference standard information;
determining at least one target connected domain from the screened connected domains to be selected according to the average height value of the screened connected domains to be selected;
and determining the license plate boundary according to the at least one target connected domain.
3. The method of claim 2, wherein the identifying a plurality of connected components in the image to be processed comprises:
constructing a variance function between a foreground pixel and a background pixel in the image to be processed;
determining a full-image Otsu threshold corresponding to the image to be processed based on the variance function;
sequentially calculating the region Otsu threshold of each region block obtained by dividing the image to be processed;
determining a binarization threshold of each region block according to each region block, the corresponding region Otsu threshold, and the full-image Otsu threshold;
sequentially carrying out binarization processing on each region block based on the corresponding binarization threshold value;
and determining the connected domain on the image to be processed after the binarization processing based on the specification of the pre-selected connected domain.
4. The method of claim 3, wherein the fusing the plurality of connected domains comprises:
sequentially detecting whether two adjacent connected domains in the image to be processed have boundary connection or not;
if the boundary connection exists, the two adjacent connected domains are fused.
5. The method of claim 4, wherein the step of determining at least one target connected component from the screened connected components to be selected according to the screened average height value of the connected components to be selected comprises:
calculating the height difference between each connected domain to be selected and the next connected domain to be selected adjacent to the connected domain to be selected in the appointed direction;
comparing the height difference corresponding to each connected domain to be selected with the average height value;
and if the corresponding height difference does not exceed the average height value, taking the connected domain to be selected as a target connected domain.
6. The method of claim 5, wherein the step of determining at least one target connected component from the screened connected components to be selected according to the screened average height value of the connected components to be selected further comprises:
if the corresponding connected domain to be selected with the height difference exceeding the average height value exists, splitting the connected domain to be selected with the height difference exceeding the average height value so as to re-determine the connected domain to be selected;
and repeatedly screening based on the re-determined connected domain to be selected and re-determining at least one target connected domain from the screened connected domain to be selected.
7. The method of claim 2, wherein the step of determining the license plate boundary based on the at least one target connected component comprises:
acquiring the determined boundary coordinate value of the target connected domain;
and determining the boundary of the license plate according to the boundary coordinate value.
8. The method of claim 1, wherein the step of determining the license plate type of the license plate appearing in the image to be processed based on the license plate background color characteristic parameter, the front-back color characteristic parameter, and the layer number characteristic parameter comprises:
determining the license plate type corresponding to the acquired license plate background color characteristic parameter, front-back color characteristic parameter, and layer number characteristic parameter according to a pre-stored correspondence between each combination of license plate background color, front-back color, and layer number and the various license plate types.
9. A license plate type recognition device is applied to electronic equipment, a deep learning model is stored in the electronic equipment in advance, and the license plate type recognition device is characterized by comprising:
the determining module is used for determining a license plate inspection image area according to the license plate boundary determined from the image to be processed;
the characteristic identification module is used for identifying license plate background color characteristic parameters, front and back color characteristic parameters and layer number characteristic parameters corresponding to the license plate inspection image area by using the deep learning model;
and the judging module is used for judging the license plate type of the license plate appearing in the image to be processed based on the license plate background color characteristic parameter, the front and back color characteristic parameter, and the layer number characteristic parameter.
10. A computer-readable storage medium having stored thereon computer instructions, which when executed by a processor, perform the steps of the method of any one of claims 1 to 8.
CN201811324694.9A 2018-11-08 2018-11-08 License plate type recognition method and device and computer readable storage medium Active CN111160073B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811324694.9A CN111160073B (en) 2018-11-08 2018-11-08 License plate type recognition method and device and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN111160073A true CN111160073A (en) 2020-05-15
CN111160073B CN111160073B (en) 2023-09-19

Family

ID=70554854

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811324694.9A Active CN111160073B (en) 2018-11-08 2018-11-08 License plate type recognition method and device and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN111160073B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111258319A (en) * 2020-01-23 2020-06-09 中汽数据(天津)有限公司 Intelligent patrol car
CN112070025A (en) * 2020-09-09 2020-12-11 北京字节跳动网络技术有限公司 Image recognition method and device, electronic equipment and computer readable medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120114184A1 (en) * 2009-07-21 2012-05-10 Thomson Licensing Trajectory-based method to detect and enhance a moving object in a video sequence
WO2014000261A1 (en) * 2012-06-29 2014-01-03 中国科学院自动化研究所 Trademark detection method based on spatial connected component pre-location
CN107704860A (en) * 2017-12-06 2018-02-16 四川知创空间孵化器管理有限公司 A kind of number-plate number recognition methods
CN108108729A (en) * 2016-11-25 2018-06-01 杭州海康威视数字技术股份有限公司 A kind of recognition methods of car plate type and device
CN108154160A (en) * 2017-12-27 2018-06-12 苏州科达科技股份有限公司 Color recognizing for vehicle id method and system
CN207816806U (en) * 2017-12-06 2018-09-04 四川知创空间孵化器管理有限公司 A kind of car plate type compartment system
CN108564088A (en) * 2018-04-17 2018-09-21 广东工业大学 Licence plate recognition method, device, equipment and readable storage medium storing program for executing



Also Published As

Publication number Publication date
CN111160073B (en) 2023-09-19


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant