CN110490179B - License plate recognition method and device and storage medium - Google Patents

License plate recognition method and device and storage medium

Info

Publication number
CN110490179B
CN110490179B
Authority
CN
China
Prior art keywords
license plate
feature
sequences
characteristic
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810461160.4A
Other languages
Chinese (zh)
Other versions
CN110490179A (en)
Inventor
钱华
蔡晓蕙
Current Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co Ltd filed Critical Hangzhou Hikvision Digital Technology Co Ltd
Priority to CN201810461160.4A
Publication of CN110490179A
Application granted
Publication of CN110490179B
Legal status: Active

Classifications

    • G06N3/045 Combinations of networks (computing arrangements based on biological models; neural networks; architecture)
    • G06V10/22 Image preprocessing by selection of a specific region containing or referencing a pattern; locating or processing of specific regions to guide the detection or recognition
    • G06V10/40 Extraction of image or video features
    • G08G1/017 Detecting movement of traffic to be counted or controlled, identifying vehicles
    • G06V20/625 License plates
    • G06V30/10 Character recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Character Discrimination (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a license plate recognition method and device and a computer-readable storage medium, and belongs to the technical field of intelligent transportation. The method comprises: acquiring a license plate region containing a license plate in an image, and extracting feature information in the license plate region by using a Convolutional Neural Network (CNN) model, where the feature information comprises a plurality of feature sequences; recognizing characters in the license plate region based on the plurality of feature sequences; and determining the license plate number based on the character recognition result. In the embodiment of the invention, the characters in the license plate region are recognized directly and the license plate number is determined from the character recognition result, without segmenting the license plate region into separate character regions. Because no segmentation is needed, image processing parameters do not have to be tuned for a specific scene, interference of scene factors with license plate recognition is effectively avoided, and both the universality and the accuracy of the method are improved.

Description

License plate recognition method and device and storage medium
Technical Field
The invention relates to the technical field of intelligent transportation, in particular to a license plate recognition method and device and a computer readable storage medium.
Background
The license plate is the vehicle's "identity card": important identification information that distinguishes one vehicle from all others. In the current intelligent transportation field, monitoring equipment can be deployed in many scenes, such as gates, parking lots, or streets; images containing vehicle license plates are acquired by the monitoring equipment, and the license plates in the images are then recognized.
In the related art, license plate recognition can be summarized into three steps: license plate region detection, license plate region segmentation, and character recognition. When a license plate is recognized through these three steps, scene factors such as weather, illumination, tilt of the monitoring equipment, or tilt of the license plate itself make the accuracy of character segmentation hard to guarantee, so the accuracy of license plate recognition is low.
Disclosure of Invention
The embodiment of the invention provides a license plate recognition method, a license plate recognition device, and a computer-readable storage medium, which can solve the problem of low license plate recognition accuracy in the related art. The technical scheme is as follows:
in a first aspect, a license plate recognition method is provided, and the method includes:
acquiring a license plate region containing a license plate in an image, and extracting feature information in the license plate region by using a Convolutional Neural Network (CNN) model, wherein the feature information comprises a plurality of feature sequences;
identifying characters in the license plate region based on the plurality of feature sequences;
and determining the license plate number based on the character recognition result.
Optionally, the recognizing characters in the license plate region based on the plurality of feature sequences includes:
and processing each characteristic sequence in the plurality of characteristic sequences through an Attention model to obtain characters corresponding to each characteristic sequence in the license plate region.
Optionally, the processing, by the Attention model, of each of the plurality of feature sequences to obtain a character corresponding to each feature sequence in the license plate region includes:
for any characteristic sequence A in the plurality of characteristic sequences, determining the weight of the characteristic sequence A and the weight of each of the rest characteristic sequences except the characteristic sequence A through an Attention model, wherein the weight of the characteristic sequence A is greater than the weight of each of the rest characteristic sequences;
determining semantic information of the feature sequence A based on the feature sequence A, the weight of the feature sequence A, the rest of feature sequences and the weight of the rest of feature sequences;
and decoding and identifying the semantic information of the characteristic sequence A to obtain the character corresponding to the characteristic sequence A.
Optionally, after the feature information in the license plate region is extracted by using the convolutional neural network CNN model, the method further includes:
determining the license plate type of the license plate based on the plurality of characteristic sequences;
accordingly, the determining of the license plate number based on the character recognition result includes:
and determining the license plate number of the license plate based on the character recognition result and the license plate type.
Optionally, the determining, based on the plurality of feature sequences, a license plate category to which the license plate belongs includes:
determining a probability value of the license plate belonging to each preset license plate type by using the CNN model based on the plurality of characteristic sequences;
and determining the preset license plate type corresponding to the maximum probability value as the license plate type to which the license plate belongs.
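The two steps above amount to normalizing per-category scores into probabilities and taking the argmax. A minimal sketch in plain Python, with a softmax standing in for the CNN model's probability output (the category names and raw scores are illustrative, not from the patent):

```python
import math

def classify_plate(scores: dict) -> tuple:
    """Turn raw per-category scores into probability values via a numerically
    stable softmax, then pick the preset license plate category with the
    maximum probability value."""
    m = max(scores.values())
    exps = {cat: math.exp(s - m) for cat, s in scores.items()}  # stable softmax
    total = sum(exps.values())
    probs = {cat: e / total for cat, e in exps.items()}
    best = max(probs, key=probs.get)
    return best, probs[best]

# Illustrative raw scores for three hypothetical preset categories
raw = {"mainland_blue": 2.1, "mainland_yellow": 0.3, "hk_double_row": -1.0}
category, prob = classify_plate(raw)
```

In a real system the probability values would come directly from the CNN model's classification head; the sketch only shows the selection of the maximum.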
Optionally, the determining the license plate number of the license plate based on the character recognition result and the license plate type includes:
obtaining a license plate number sample corresponding to the license plate type;
judging whether a license plate number belonging to the license plate type contains a sub-segment and a main segment or not based on the license plate type, wherein the sub-segment and the main segment both refer to continuous character strings in the license plate number, and the number of characters contained in the main segment is larger than that contained in the sub-segment, or the size of an area occupied by the characters contained in the main segment is larger than that of the area occupied by the characters contained in the sub-segment;
and if the license plate number belonging to the license plate type comprises the sub-sections and the main sections, dividing the character recognition result into the sub-sections and the main sections according to the license plate number sample, and determining the divided character recognition result as the license plate number of the license plate.
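The division of a character recognition result into sub-segment and main segment according to a license plate number sample can be sketched as follows. The sample encoding ("S" for sub-segment positions, "M" for main-segment positions) is a hypothetical convention for illustration; the patent does not fix a sample format:

```python
def split_by_sample(chars: str, sample: str) -> tuple:
    """Split a flat character-recognition result into (sub_segment, main_segment)
    using a license plate number sample for the category. Here the sample marks
    sub-segment positions with 'S' and main-segment positions with 'M' — an
    assumed encoding, purely for illustration."""
    assert len(chars) == len(sample)
    sub = "".join(c for c, tag in zip(chars, sample) if tag == "S")
    main = "".join(c for c, tag in zip(chars, sample) if tag == "M")
    return sub, main

# e.g. a 7-character result whose first two characters form the sub-segment
sub, main = split_by_sample("AB12345", "SSMMMMM")
```

Note that, consistent with the claim above, the main segment ends up with more characters than the sub-segment.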
Optionally, after determining the license plate number based on the character recognition result, the method further includes:
acquiring license plate color information and belonging area information corresponding to the license plate type;
and outputting the license plate number of the license plate, the license plate color information and the region information.
In a second aspect, there is provided a license plate recognition device, the device comprising:
the system comprises an acquisition module, a judgment module and a display module, wherein the acquisition module is used for acquiring a license plate region containing a license plate in an image and extracting characteristic information in the license plate region by utilizing a Convolutional Neural Network (CNN) model, and the characteristic information comprises a plurality of characteristic sequences;
the recognition module is used for recognizing characters in the license plate area based on the plurality of characteristic sequences;
and the first determining module is used for determining the license plate number based on the character recognition result.
Optionally, the identification module is configured to:
and processing each characteristic sequence in the plurality of characteristic sequences through an Attention model to obtain characters corresponding to each characteristic sequence in the license plate region.
Optionally, the identification module is specifically configured to:
for any characteristic sequence A in the plurality of characteristic sequences, determining the weight of the characteristic sequence A and the weight of each of the rest characteristic sequences except the characteristic sequence A through an Attention model, wherein the weight of the characteristic sequence A is greater than the weight of each of the rest characteristic sequences;
determining semantic information of the feature sequence A based on the feature sequence A, the weight of the feature sequence A, the rest of feature sequences and the weight of the rest of feature sequences;
and decoding and identifying the semantic information of the characteristic sequence A to obtain the character corresponding to the characteristic sequence A.
Optionally, the apparatus further comprises:
the second determination module is used for determining the license plate type of the license plate based on the plurality of characteristic sequences;
accordingly, the first determining module comprises:
and the determining submodule is used for determining the license plate number of the license plate based on the character recognition result and the license plate type.
Optionally, the second determining module is specifically configured to:
determining a probability value of the license plate belonging to each preset license plate category by using the CNN model based on the plurality of feature sequences;
and determining the preset license plate type corresponding to the maximum probability value as the license plate type to which the license plate belongs.
Optionally, the determining sub-module is specifically configured to:
obtaining a license plate number sample corresponding to the license plate type;
judging whether a license plate number belonging to the license plate type contains a sub-segment and a main segment or not based on the license plate type, wherein the sub-segment and the main segment both refer to continuous character strings in the license plate number, and the number of characters contained in the main segment is larger than that contained in the sub-segment, or the size of an area occupied by the characters contained in the main segment is larger than that of the area occupied by the characters contained in the sub-segment;
and if the license plate number belonging to the license plate type comprises the sub-sections and the main sections, dividing the character recognition result into the sub-sections and the main sections according to the license plate number sample, and determining the divided character recognition result as the license plate number of the license plate.
Optionally, the apparatus is further configured to:
acquiring license plate color information and belonging area information corresponding to the license plate type;
and outputting the license plate number of the license plate, the license plate color information and the region information.
In a third aspect, a license plate recognition device is provided, the device comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform any of the methods of the first aspect above.
In a fourth aspect, there is provided a computer readable storage medium having stored therein a computer program which, when executed by a processor, implements the method of any of the first aspects above.
The technical scheme provided by the embodiment of the invention has the following beneficial effects: a license plate region containing a license plate is acquired in an image, and feature information in the license plate region is extracted by using a CNN model, where the feature information can comprise a plurality of feature sequences; characters in the license plate region are recognized based on the feature sequences, and the license plate number is determined based on the character recognition result. That is, in the embodiment of the present invention, the characters in the license plate region can be recognized directly and the license plate number determined from the character recognition result, without segmenting the license plate region into a plurality of character regions. Because segmentation is unnecessary, interference of scene factors with license plate recognition is avoided and the accuracy of license plate recognition is improved.
Drawings
To illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a system architecture diagram of a license plate recognition method according to an embodiment of the present invention;
fig. 2 is a flowchart of a license plate recognition method according to an embodiment of the present invention;
fig. 3A is a flowchart of a license plate recognition method according to an embodiment of the present invention;
FIG. 3B is a diagram of an Attention model according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a license plate recognition device according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of a terminal for license plate recognition according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention will be described in detail with reference to the accompanying drawings.
Before explaining the embodiments of the present invention in detail, an application scenario related to the embodiments of the present invention will be described.
Currently, license plate recognition technology is widely applied in the field of intelligent transportation. In practical applications, monitoring equipment can be deployed in many scenes, such as gates, parking lots, and roads, to acquire images, and the license plates in those images are recognized to obtain license plate information. The monitoring equipment generally has to work in varied and complex scenes, so the quality of the acquired images is affected by scene factors. For example, monitoring equipment deployed at a gate or on a road may be affected by weather, illumination, and the like, so the acquired images may be unclear; likewise, monitoring equipment in any scene may be tilted by external forces, so the vehicles and license plates in its images may also be tilted. The license plate recognition method provided by the embodiments of the present invention can recognize license plates in images acquired by monitoring equipment in any of these scenes.
Next, a system architecture according to an embodiment of the present invention will be described.
Fig. 1 is a system architecture diagram of a license plate recognition method according to an embodiment of the present invention. As shown in fig. 1, the system may include a monitoring device 101 and a terminal 102.
The monitoring device 101 and the terminal 102 establish a communication connection, and through the communication connection, the monitoring device 101 may send the acquired image to the terminal 102. When receiving the image sent by the monitoring device, the terminal 102 may identify the license plate in the image and output a final identification result.
It should be noted that the monitoring Device 101 may be a CCD (Charge Coupled Device) camera, or may be another camera capable of performing image acquisition and communicating with the terminal 102. The terminal 102 may be a computer device such as a desktop computer, a laptop computer, a network server, etc.
Next, a license plate recognition method provided by an embodiment of the present invention is described.
Fig. 2 is a flowchart of a license plate recognition method according to an embodiment of the present invention. The method can be applied to the terminal shown in fig. 1, and as shown in fig. 2, the method comprises the following steps:
step 201: and acquiring a license plate region containing a license plate in the image, and extracting characteristic information in the license plate region by using a CNN model.
The feature information in the license plate region may include a plurality of feature sequences.
Step 202: and identifying characters in the license plate area based on the plurality of characteristic sequences.
The license plate area contains a license plate number, which is generally composed of a plurality of characters; recognizing the characters in the license plate area yields the characters that make up the license plate number. The characters can be letters, digits, and other special characters.
Step 203: and determining the license plate number based on the character recognition result.
In the embodiment of the invention, the terminal can acquire the license plate region containing a license plate in the image and extract feature information in the license plate region by using the CNN model, where the feature information can comprise a plurality of feature sequences; it then recognizes the characters in the license plate region based on the plurality of feature sequences and determines the license plate number based on the character recognition result. That is, characters in the license plate region are recognized directly and the license plate number is determined from the character recognition result, without segmenting the license plate region into separate character regions. Because segmentation is unnecessary, image processing parameters do not need to be adjusted for a specific scene, interference of scene factors with license plate recognition is effectively avoided, and the universality and accuracy of the method are improved.
Fig. 3A is a flowchart of a license plate recognition method according to an embodiment of the present invention, where the method may be applied to the terminal shown in fig. 1, and as shown in fig. 3A, the method includes the following steps:
step 301: and acquiring a license plate area containing the license plate in the image by using the FRCNN.
The FRCNN model is an object-detection model developed from the RCNN (Region-Based Convolutional Neural Networks) model. When target detection is performed with the FRCNN model, the detection process mainly comprises four basic steps: candidate region generation, feature extraction, classification, and position refinement. In general, the FRCNN model may include a plurality of convolutional layers and a plurality of fully-connected layers.
When detecting the license plate region, the terminal can take an image acquired by the monitoring device as the input image. The first convolutional layer performs a convolution operation on the pixel values of the pixels in the input image and outputs the result as the input of the next convolutional layer. By analogy, the output of each convolutional layer serves as the input of the next, until the last convolutional layer determines a plurality of candidate regions based on the output of the previous layer. The candidate regions are then taken as the input of the first fully-connected layer; for each candidate region, the fully-connected layers judge whether it is the license plate region to be detected and determine its position coordinates. Finally, the last fully-connected layer outputs, for each candidate region, the probability that it is the license plate region, the probability that it is not, and its position coordinates.
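Selecting the final license plate region from the detector's per-candidate outputs (plate probability, non-plate probability, position coordinates) can be sketched as follows. The candidate list and threshold are illustrative stand-ins for what an FRCNN-style detector would produce:

```python
def pick_plate_region(candidates, threshold=0.5):
    """Each candidate is (p_plate, p_not_plate, (x1, y1, x2, y2)).
    Keep candidates whose plate probability clears the threshold and
    return the box with the highest plate probability, or None if no
    candidate qualifies."""
    kept = [c for c in candidates if c[0] >= threshold]
    if not kept:
        return None
    best = max(kept, key=lambda c: c[0])
    return best[2]

# Illustrative detector outputs (probabilities and boxes are made up)
candidates = [
    (0.92, 0.08, (120, 300, 300, 360)),  # likely the plate
    (0.30, 0.70, (10, 10, 80, 40)),      # rejected: below threshold
    (0.61, 0.39, (125, 305, 290, 355)),  # overlapping, lower score
]
box = pick_plate_region(candidates)
```

A production detector would additionally apply non-maximum suppression to overlapping boxes; the sketch only shows threshold-and-argmax selection.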
It should be noted that the license plate region is the region where the vehicle's license plate is located in the image; it contains the character information that makes up the license plate, the license plate texture, and other information. After acquiring the license plate region, the terminal can further analyze this information through steps 302 to 305, thereby recognizing the license plate.
In addition, in the embodiment of the present invention, the FRCNN model is a model obtained by training a plurality of training samples in advance. The plurality of training samples can comprise images of vehicles in different countries and different regions acquired by the monitoring equipment, so that the FRCNN model obtained by training can be used for detecting license plate regions in the images of the vehicles in different countries and different regions.
Step 302: and extracting characteristic information in the license plate area by using the CNN model, wherein the characteristic information comprises a plurality of characteristic sequences.
After the license plate region is obtained, the terminal can take the license plate region as the input of the CNN model, and further extract the characteristic information in the license plate region through the CNN model. The feature information mainly comprises a plurality of feature sequences used for indicating a plurality of characters in the license plate area.
The CNN model can extract the feature information in the license plate region from left to right and from top to bottom, so that a plurality of feature sequences are output according to the extraction sequence.
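The left-to-right, top-to-bottom readout described above can be pictured as slicing a CNN feature map into one feature sequence per column. The toy feature map below is a stand-in for the CNN model's actual output:

```python
def feature_map_to_sequences(feature_map):
    """Given a feature map laid out as rows x columns x channels (nested
    lists here for simplicity), read it out column by column, left to
    right; within each column, read cells top to bottom. Each column
    becomes one feature sequence."""
    rows, cols = len(feature_map), len(feature_map[0])
    sequences = []
    for c in range(cols):                      # left to right
        column = []
        for r in range(rows):                  # top to bottom within a column
            column.extend(feature_map[r][c])   # flatten the channel values
        sequences.append(column)
    return sequences

# A toy 2x3 feature map with 2 channels per cell (values are illustrative)
fmap = [[[1, 2], [3, 4], [5, 6]],
        [[7, 8], [9, 10], [11, 12]]]
seqs = feature_map_to_sequences(fmap)
```

This yields as many feature sequences as the map has columns, in the extraction order the patent describes.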
In a possible implementation manner, the terminal may normalize the acquired license plate region to a specified size, and then input the license plate region of the specified size to the CNN model. For example, the designated size may be 180 × 60, and of course, other sizes are also possible, and embodiments of the present invention are not specifically limited herein.
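The normalization step can be sketched as a nearest-neighbour resize to the specified size (180 x 60 in the example above). A real system would use a library resize with interpolation; this minimal version only illustrates the idea, on a grayscale region:

```python
def normalize_region(pixels, out_w=180, out_h=60):
    """Nearest-neighbour resize of a grayscale region (a list of pixel rows)
    to the specified width and height, so every license plate region fed to
    the CNN model has the same size."""
    in_h, in_w = len(pixels), len(pixels[0])
    out = []
    for y in range(out_h):
        src_y = y * in_h // out_h          # map output row to source row
        row = [pixels[src_y][x * in_w // out_w] for x in range(out_w)]
        out.append(row)
    return out

# A tiny 4x2 toy "license plate region" (values are illustrative)
region = [[0, 255, 0, 255],
          [255, 0, 255, 0]]
resized = normalize_region(region)
```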
Step 303: and identifying characters in the license plate area based on the plurality of characteristic sequences.
After the plurality of feature sequences are extracted from the license plate region through the CNN model, the terminal can determine semantic information of each feature sequence in the plurality of feature sequences, and then the RNN model is used for decoding and identifying the semantic information of each feature sequence, so that a plurality of characters in the license plate region are obtained.
It should be noted that the terminal may determine the semantic information of each of the plurality of feature sequences one by one, through an Attention algorithm, according to the output order of the feature sequences. In the embodiment of the present invention, the specific process of determining semantic information is described below, taking an arbitrary feature sequence A as an example.
For any characteristic sequence A in a plurality of characteristic sequences, the terminal can determine the weight of the characteristic sequence A and the weight of each of the rest characteristic sequences except the characteristic sequence A through an Attention algorithm, wherein the weight of the characteristic sequence A is greater than the weight of each of the rest characteristic sequences; and then, determining semantic information of the feature sequence A based on the feature sequence A, the weight of the feature sequence A, each of the rest feature sequences and the weight of each of the rest feature sequences.
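The step above amounts to a weighted combination of all feature sequences in which the target sequence's weight dominates. A minimal sketch, assuming fixed weights for illustration (in the patent the weights come from the Attention model, not from a constant):

```python
def semantic_info(sequences, target_idx, target_weight=0.7):
    """Compute the semantic information of the feature sequence at target_idx
    as a weighted sum of all sequences, giving the target a larger weight than
    each of the remaining sequences. The weight values here are illustrative
    placeholders for the Attention model's learned weights."""
    n = len(sequences)
    rest_weight = (1.0 - target_weight) / (n - 1) if n > 1 else 0.0
    weights = [target_weight if i == target_idx else rest_weight
               for i in range(n)]
    dim = len(sequences[0])
    # element-wise weighted sum across all feature sequences
    return [sum(w * seq[d] for w, seq in zip(weights, sequences))
            for d in range(dim)]

# Three toy 2-dimensional feature sequences
seqs = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
c0 = semantic_info(seqs, target_idx=0)
```

With three sequences, the target gets weight 0.7 and each remaining sequence 0.15, satisfying the "greater than each of the rest" condition.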
It should be noted that the Attention model is a model that can be used for semantic recognition. The Attention model mainly comprises a semantic synthesis module and a decoding and identifying module, wherein the semantic synthesis module is used for solving the semantic information of input characteristic sequences, and the decoding and identifying module is used for decoding and identifying the semantic information of each solved characteristic sequence so as to output character identifying results. Wherein, the two modules can be realized by RNN model.
Fig. 3B is a schematic diagram of an Attention model according to an embodiment of the present invention, in which the semantic synthesis module is implemented by a first RNN model and the decoding recognition module is implemented by a second RNN model. The first RNN model and the second RNN model each include an input layer, a hidden layer, and an output layer. Suppose the plurality of feature sequences, in output order, are x_1, x_2, x_3, …, x_n. The terminal may input these feature sequences to the input layer of the first RNN model in sequence; after receiving them, the input layer passes them to the hidden-layer nodes h_1, h_2, h_3, …, h_n. Hidden-layer node h_1 processes the feature sequence x_1 to obtain a processing result f(x_1), which serves as the input of hidden-layer node h_2; node h_2 processes x_2 according to f(x_1) to obtain f(x_2). By analogy, hidden-layer node h_n processes x_n according to f(x_{n-1}) to obtain f(x_n). Once f(x_1), f(x_2), f(x_3), …, f(x_n) are obtained, the output layer can compute the semantic information C_1 of the feature sequence x_1 based on the preset weight of each feature sequence and f(x_1), f(x_2), f(x_3), …, f(x_n), where the preset weight of x_1 is greater than the preset weights of the other feature sequences.
C_1 is then output through the output layer as an input value of the second RNN model. After its input layer receives C_1, the semantic information C_1 is passed to hidden-layer node H_1 of the second RNN model for processing, obtaining a processing result S(C_1); the output layer then computes and outputs, according to S(C_1), the character y_1 corresponding to the feature sequence x_1.
After the character recognition result y_1 is obtained, the output layer of the first RNN model compares f(x_1), f(x_2), f(x_3), …, f(x_n) with the S(C_1) output by hidden-layer node H_1 of the second RNN model to determine the weight of each feature sequence at the current moment, where the weight of x_2 at the current moment is greater than the weights of the other feature sequences. The output layer of the first RNN model then computes the semantic information C_2 of the feature sequence x_2 based on the determined weights, and C_2 is output as an input value of the second RNN model. After its input layer receives C_2, the semantic information C_2 is passed to hidden-layer node H_2 of the second RNN model; H_2 processes C_2 according to the processing result S(C_1) output by hidden-layer node H_1 and the character recognition result y_1, obtaining a processing result S(C_2), and the output layer computes and outputs, according to S(C_2), the character y_2 corresponding to the feature sequence x_2. By analogy, the terminal can successively obtain, through the Attention model, the characters y_3, …, y_{n-1}, y_n corresponding to the feature sequences x_3, …, x_{n-1}, x_n.
Optionally, after the plurality of characters are obtained by recognizing each feature sequence through the Attention model, the terminal may further output the plurality of characters according to a preset format through the second RNN model. Specifically, the second RNN model may determine the position coordinates of each character in the license plate region, divide the plurality of characters into different character strings according to the difference between the position coordinates of every two adjacent characters, and output the different character strings in sequential order. For example, if the distance between every two adjacent characters among y1, y2, y3 is less than a specified threshold, the distance between y3 and y4 is greater than the specified threshold, and the distance between every two adjacent characters among y4, y5, y6, y7 is less than the specified threshold, then y1, y2, y3 may be taken as one character string and y4, y5, y6, y7 as another character string, and the character string with fewer characters or the character string occupying the smaller area is output first.
It should be noted that the output follows the above method because the license plate numbers of some regions or countries are divided into a main segment and a sub-segment, where there is often a gap between the main segment and the sub-segment, and the number of characters of the main segment may be greater than the number of characters of the sub-segment, or the area occupied by the characters of the main segment may be greater than the area occupied by the characters of the sub-segment. Based on this, the plurality of recognized characters can be output as different character strings according to these characteristics of the main segment and the sub-segment. Since the sub-segment of a license plate number is often used to indicate the country or region, the character string representing the sub-segment can be output first, and the character string representing the main segment output later. Of course, the license plate numbers of some regions or countries are not divided into a main segment and a sub-segment, in which case the plurality of characters may be output sequentially in the character recognition order.
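The gap-based grouping described above can be sketched as follows: start a new character string wherever the horizontal gap between adjacent characters exceeds a threshold, then output the shorter string (e.g. the sub-segment) first. The character boxes, coordinates, and threshold value are hypothetical placeholders for illustration.

```python
def split_by_gaps(chars, x_coords, gap_threshold):
    """Group characters into strings; begin a new string whenever the
    gap between adjacent character positions exceeds gap_threshold."""
    groups = [[chars[0]]]
    for prev_x, x, ch in zip(x_coords, x_coords[1:], chars[1:]):
        if x - prev_x > gap_threshold:
            groups.append([ch])        # large gap: start a new character string
        else:
            groups[-1].append(ch)
    # output the string with fewer characters (e.g. the sub-segment) first
    return sorted(("".join(g) for g in groups), key=len)

# hypothetical recognized characters and x-coordinates: "ABC" then a wide gap, then "1234"
print(split_by_gaps(list("ABC1234"), [0, 10, 20, 45, 55, 65, 75], 15))
# → ['ABC', '1234']
```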
Step 304: and determining the license plate type of the license plate based on the plurality of characteristic sequences.
After the terminal extracts the plurality of characteristic sequences from the license plate region through the CNN model, the terminal can also determine the license plate type of the license plate based on the plurality of characteristic sequences.
The terminal may determine, by using the CNN model and based on the plurality of feature sequences, the probability value that the license plate belongs to each preset license plate type, and determine the preset license plate type corresponding to the maximum probability value as the license plate type to which the license plate belongs.
Specifically, the CNN model may be a model trained according to license plate samples of different countries and regions, and the trained CNN model includes a plurality of labels, where each label is used to indicate a preset license plate type. The terminal can perform softmax normalization processing on the plurality of characteristic sequences, determine the probability that the license plate belongs to each label according to the normalization result, and then determine the license plate type indicated by the label corresponding to the maximum probability value as the license plate type to which the current license plate belongs.
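The classification step above can be sketched as a softmax over per-label scores. The label names, the random score weights, and the mean-pooling of the feature sequences into a single vector are all assumptions for illustration, not the patent's trained CNN head.

```python
import numpy as np

labels = ["CN_blue", "CN_yellow", "EU_long", "US_standard"]  # hypothetical preset types

def classify_plate(feature_sequences, W):
    """Pool the feature sequences, score each preset license plate type,
    softmax-normalize, and pick the type with the maximum probability."""
    pooled = np.mean(feature_sequences, axis=0)      # one summary vector for the plate
    scores = W @ pooled                              # one raw score per label
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()                             # softmax normalization
    return labels[int(np.argmax(probs))], probs

rng = np.random.default_rng(1)
feats = rng.normal(size=(10, 16))                    # e.g. 10 feature sequences of length 16
W = rng.normal(size=(len(labels), 16))               # stand-in for the trained classifier weights
plate_type, probs = classify_plate(feats, W)
print(plate_type)
```

The returned label would be the "license plate type to which the current license plate belongs" in the description; with these random weights only the probabilities' normalization is meaningful.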
It should be noted that, in the embodiment of the present invention, this step is an optional step. If the terminal executes the step, after the plurality of feature sequences are extracted from the license plate region in step 302, the terminal may first recognize characters in the license plate region according to the plurality of feature sequences, and then determine the type of the license plate based on the plurality of feature sequences. Or, the terminal may determine the type of the license plate based on the plurality of feature sequences, and then recognize the characters in the license plate region according to the plurality of feature sequences. Alternatively, the terminal may perform both operations at the same time. That is, in the embodiment of the present invention, if the terminal performs step 304, the terminal may perform any one of step 303 and step 304 first, or may perform both steps at the same time.
Step 305: and determining the license plate number based on the character recognition result.
After the characters in the license plate region are recognized in step 303, if the terminal does not execute step 304, the terminal may directly determine the character strings sequentially output in step 303 as the license plate number of the license plate. If the terminal performs step 304, the terminal may determine the license plate number by combining the determined license plate type after obtaining the character recognition result.
Specifically, the terminal can obtain a license plate number sample corresponding to the license plate type; judging whether the license plate number belonging to the license plate type contains subsections and a main section based on the license plate number sample, wherein the subsections and the main section both refer to continuous character strings in the license plate number, the number of characters contained in the main section is larger than that contained in the subsections, or the size of the area occupied by the characters contained in the main section is larger than that of the area occupied by the characters contained in the subsections; if the license plate number belonging to the license plate type comprises the sub-sections and the main sections, dividing the character recognition result into the sub-sections and the main sections according to the license plate number sample, and determining the divided character recognition result as the license plate number of the license plate.
The terminal can store license plate number samples corresponding to license plate types and indication information used for indicating whether license plate numbers of corresponding license plate types contain subsections and main sections. The terminal can obtain license plate number samples corresponding to the determined license plate types from the corresponding relations and obtain indication information corresponding to the license plate types, and then the terminal can judge whether the license plate numbers of the types are divided into sub-sections and main sections or not based on the indication information. If the terminal determines that the sub-segment and the main-segment do not exist in the license plate number of the license plate type, the terminal can directly determine the character strings sequentially output in the step 303 as the license plate number of the license plate. If the terminal determines that the license plate number of the license plate type has the sub-segment and main-segment divisions, the terminal may divide the character strings sequentially output in step 303 into the sub-segments and the main-segments according to the license plate number samples, and determine the divided character strings as the license plate number of the license plate.
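A rough sketch of the stored correspondence and division step described above. The table entries, type names, and fixed sub/main lengths are hypothetical; the sketch only illustrates consulting the indication information and splitting the recognition result against a sample pattern.

```python
# hypothetical stored correspondence: license plate type ->
# (indication of sub/main division, segment lengths taken from a number sample)
PLATE_SAMPLES = {
    "EU_long":     {"segmented": True,  "sub_len": 2, "main_len": 5},  # e.g. "AB 12345"
    "US_standard": {"segmented": False},
}

def format_plate_number(recognized, plate_type):
    info = PLATE_SAMPLES[plate_type]
    if not info["segmented"]:
        return recognized              # no sub/main division for this type
    # divide the character recognition result into sub-segment and main segment
    sub = recognized[:info["sub_len"]]
    main = recognized[info["sub_len"]:info["sub_len"] + info["main_len"]]
    return f"{sub} {main}"

print(format_plate_number("AB12345", "EU_long"))      # → AB 12345
print(format_plate_number("XYZ9876", "US_standard"))  # → XYZ9876
```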
Step 306: and acquiring license plate color information and belonging area information corresponding to the license plate type, and outputting the license plate number, the license plate color information and the area information of the license plate.
Specifically, when the terminal executes step 304, after determining the license plate type, the terminal may further obtain other information of the license plate according to the license plate type.
The terminal can store the corresponding relationship among the license plate type, the license plate color information and the area information to which the license plate belongs, and can acquire the color information corresponding to the license plate type and the area information to which the license plate belongs from the corresponding relationship. The regional information to which the license plate belongs may include information such as a country and a city to which the license plate belongs. And then, the terminal can output and display the information and the license plate number together.
Optionally, the corresponding relationship may further include more license plate information that can be confirmed according to the license plate type, so that the terminal can output the license plate information according to the license plate type, thereby satisfying the user requirements as much as possible.
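The stored correspondence among license plate type, color information, and region information can be illustrated as a simple lookup table. The type names, colors, and regions below are hypothetical placeholders, not values from the patent.

```python
# hypothetical correspondence among license plate type, color, and region
PLATE_INFO = {
    "CN_blue":   {"color": "blue",   "region": "China (small passenger vehicle)"},
    "CN_yellow": {"color": "yellow", "region": "China (large vehicle)"},
    "EU_long":   {"color": "white",  "region": "European Union"},
}

def plate_details(plate_number, plate_type):
    """Combine the license plate number with the color and region
    information looked up for the determined license plate type."""
    info = PLATE_INFO.get(plate_type, {})
    return {"number": plate_number,
            "color": info.get("color"),
            "region": info.get("region")}

print(plate_details("AB 12345", "EU_long"))
```

Extending the table with further fields (as the paragraph above suggests) only requires adding keys to each entry.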
In the embodiment of the invention, the terminal can acquire the license plate region containing the license plate in the image through the FRCNN model and extract the feature information in the license plate region by using the CNN model, where the feature information may include a plurality of feature sequences. The terminal can then recognize the characters in the license plate region through the Attention model based on the plurality of feature sequences, determine the license plate type to which the license plate belongs based on the plurality of feature sequences by using the CNN model, and finally determine the license plate information based on the character recognition result and the license plate type. Therefore, in the embodiment of the invention, each module of license plate recognition is implemented by a deep learning method, so that end-to-end license plate recognition is truly achieved, and the interference of natural scenes with license plate recognition is effectively mitigated. In addition, in the embodiment of the invention, the characters in the license plate region can be recognized directly, and the license plate type is determined according to the extracted feature information, so that the license plate information can be determined by combining the license plate type after the character recognition result is obtained, without segmenting the license plate region into a plurality of character regions to recognize the license plate.
Referring to fig. 4, an embodiment of the present invention provides a license plate recognition apparatus 400, where the apparatus 400 includes:
the acquiring module 401 is configured to acquire a license plate region including a license plate in an image, and extract feature information in the license plate region by using a Convolutional Neural Network (CNN) model, where the feature information includes a plurality of feature sequences;
a recognition module 402, configured to recognize characters in a license plate region based on a plurality of feature sequences;
a first determining module 403, configured to determine a license plate number based on the character recognition result.
Optionally, the identifying module 402 is configured to:
and processing each characteristic sequence in the plurality of characteristic sequences through an Attention model to obtain characters corresponding to each characteristic sequence in the license plate region.
Optionally, the identifying module 402 is specifically configured to:
for any characteristic sequence A in a plurality of characteristic sequences, determining the weight of the characteristic sequence A and the weight of each of the rest characteristic sequences except the characteristic sequence A through an Attention model, wherein the weight of the characteristic sequence A is greater than the weight of each of the rest characteristic sequences;
determining semantic information of the characteristic sequence A based on the characteristic sequence A, the weight of the characteristic sequence A, each of the rest characteristic sequences and the weight of each of the rest characteristic sequences;
and decoding and identifying the semantic information of the characteristic sequence A to obtain the character corresponding to the characteristic sequence A.
Optionally, the apparatus 400 further comprises:
the second determining module is used for determining the license plate type of the license plate based on the plurality of characteristic sequences;
accordingly, the first determining module comprises:
and the determining submodule is used for determining the license plate number of the license plate based on the character recognition result and the license plate type.
Optionally, the second determining module is specifically configured to:
determining the probability value of the license plate belonging to each preset license plate type by using a CNN (convolutional neural network) model based on a plurality of characteristic sequences;
and determining the preset license plate type corresponding to the maximum probability value as the license plate type to which the license plate belongs.
Optionally, the determining submodule is specifically configured to:
obtaining license plate number samples corresponding to the license plate types;
judging whether the license plate number belonging to the license plate type contains subsections and main sections based on the license plate type, wherein the subsections and the main sections both refer to continuous character strings in the license plate number, and the number of characters contained in the main sections is larger than that contained in the subsections, or the size of the area occupied by the characters contained in the main sections is larger than that of the area occupied by the characters contained in the subsections;
and if the license plate number belonging to the license plate category comprises the sub-sections and the main sections, dividing the character recognition result into the sub-sections and the main sections according to the license plate number sample, and determining the divided character recognition result as the license plate number of the license plate.
Optionally, the apparatus 400 is further configured to:
acquiring license plate color information and belonging area information corresponding to license plate types;
and outputting the license plate number, the license plate color information and the region information of the license plate.
In summary, in the embodiment of the present invention, the terminal may obtain a license plate region including a license plate in the image, extract feature information in the license plate region by using the CNN model, where the feature information may include a plurality of feature sequences, recognize characters in the license plate region based on the plurality of feature sequences, and determine a license plate number based on a character recognition result. That is, in the embodiment of the present invention, characters in a license plate region may be directly recognized, and a license plate number may be determined according to a character recognition result, without obtaining a plurality of character regions by segmenting the license plate region to recognize the license plate, and since the license plate region segmentation is not necessary, image processing parameters do not need to be adjusted according to a specific scene, thereby effectively avoiding interference of scene factors on license plate recognition, and thus improving the versatility and accuracy of the license plate recognition method.
It should be noted that: in the license plate recognition device provided in the above embodiment, only the division of the functional modules is exemplified when the license plate is recognized, and in practical applications, the function distribution may be completed by different functional modules as needed, that is, the internal structure of the device is divided into different functional modules to complete all or part of the functions described above. In addition, the embodiments of the license plate recognition device and the license plate recognition method provided by the embodiments belong to the same concept, and specific implementation processes thereof are detailed in the embodiments of the methods and are not described herein again.
Fig. 5 shows a block diagram of a terminal 500 according to an exemplary embodiment of the present invention. The terminal may be the terminal in the system architecture described in fig. 1. Among them, the terminal 500 may be: industrial computers, industrial personal computers, notebook computers, desktop computers, smart phones or tablet computers, and the like. Terminal 500 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, desktop terminal, and the like.
In general, the terminal 500 includes: a processor 501 and a memory 502.
The processor 501 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and so on. The processor 501 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 501 may also include a main processor and a coprocessor, where the main processor is a processor for Processing data in an awake state, and is also called a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 501 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content required to be displayed on the display screen. In some embodiments, processor 501 may also include an AI (Artificial Intelligence) processor for processing computational operations related to machine learning.
Memory 502 may include one or more computer-readable storage media, which may be non-transitory. Memory 502 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices or flash memory storage devices. In some embodiments, a non-transitory computer-readable storage medium in memory 502 is used to store at least one instruction for execution by processor 501 to implement the license plate recognition method provided by the method embodiments herein.
In some embodiments, the terminal 500 may further optionally include: a peripheral interface 503 and at least one peripheral. The processor 501, memory 502 and peripheral interface 503 may be connected by a bus or signal lines. Each peripheral may be connected to the peripheral interface 503 by a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 504, touch screen display 505, camera 506, audio circuitry 507, positioning components 508, and power supply 509.
The peripheral interface 503 may be used to connect at least one peripheral related to I/O (Input/Output) to the processor 501 and the memory 502. In some embodiments, the processor 501, memory 502, and peripheral interface 503 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 501, the memory 502, and the peripheral interface 503 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The Radio Frequency circuit 504 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuitry 504 communicates with communication networks and other communication devices via electromagnetic signals. The rf circuit 504 converts an electrical signal into an electromagnetic signal to transmit, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 504 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuitry 504 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: the world wide web, metropolitan area networks, intranets, generations of mobile communication networks (2G, 3G, 4G, and 5G), Wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the rf circuit 504 may further include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display screen 505 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 505 is a touch display screen, the display screen 505 also has the ability to capture touch signals on or over the surface of the display screen 505. The touch signal may be input to the processor 501 as a control signal for processing. At this point, the display screen 505 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, the display screen 505 may be one, providing the front panel of the terminal 500; in other embodiments, the display screens 505 may be at least two, respectively disposed on different surfaces of the terminal 500 or in a folded design; in still other embodiments, the display 505 may be a flexible display disposed on a curved surface or on a folded surface of the terminal 500. Even more, the display screen 505 can be arranged in a non-rectangular irregular figure, i.e. a shaped screen. The Display screen 505 may be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), and other materials.
The camera assembly 506 is used to capture images or video. Optionally, camera assembly 506 includes a front camera and a rear camera. Generally, a front camera is disposed at a front panel of the terminal, and a rear camera is disposed at a rear surface of the terminal. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 506 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
Audio circuitry 507 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 501 for processing, or inputting the electric signals to the radio frequency circuit 504 to realize voice communication. For the purpose of stereo sound collection or noise reduction, a plurality of microphones may be provided at different portions of the terminal 500. The microphone may also be an array microphone or an omni-directional pick-up microphone. The speaker is used to convert electrical signals from the processor 501 or the radio frequency circuit 504 into sound waves. The loudspeaker can be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, audio circuitry 507 may also include a headphone jack.
The positioning component 508 is used for positioning the current geographic Location of the terminal 500 for navigation or LBS (Location Based Service). The Positioning component 508 may be a Positioning component based on the GPS (Global Positioning System) of the united states, the beidou System of china, or the galileo System of the european union.
Power supply 509 is used to power the various components in terminal 500. The power source 509 may be alternating current, direct current, disposable or rechargeable. When power supply 509 includes a rechargeable battery, the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery. The wired rechargeable battery is a battery charged through a wired line, and the wireless rechargeable battery is a battery charged through a wireless coil. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, terminal 500 also includes one or more sensors 510. The one or more sensors 510 include, but are not limited to: acceleration sensor 511, gyro sensor 512, pressure sensor 513, fingerprint sensor 514, optical sensor 515, and proximity sensor 516.
The acceleration sensor 511 may detect the magnitude of acceleration on three coordinate axes of the coordinate system established with the terminal 500. For example, the acceleration sensor 511 may be used to detect components of the gravitational acceleration in three coordinate axes. The processor 501 may control the touch screen 505 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 511. The acceleration sensor 511 may also be used for acquisition of motion data of a game or a user.
The gyro sensor 512 may detect a body direction and a rotation angle of the terminal 500, and the gyro sensor 512 may cooperate with the acceleration sensor 511 to acquire a 3D motion of the user on the terminal 500. The processor 501 may implement the following functions according to the data collected by the gyro sensor 512: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
The pressure sensor 513 may be disposed on a side bezel of the terminal 500 and/or an underlying layer of the touch display screen 505. When the pressure sensor 513 is disposed on the side frame of the terminal 500, a user's holding signal of the terminal 500 may be detected, and the processor 501 performs left-right hand recognition or shortcut operation according to the holding signal collected by the pressure sensor 513. When the pressure sensor 513 is disposed at the lower layer of the touch display screen 505, the processor 501 controls the operability control on the UI interface according to the pressure operation of the user on the touch display screen 505. The operability control comprises at least one of a button control, a scroll bar control, an icon control and a menu control.
The fingerprint sensor 514 is used for collecting a fingerprint of the user, and the processor 501 identifies the identity of the user according to the fingerprint collected by the fingerprint sensor 514, or the fingerprint sensor 514 identifies the identity of the user according to the collected fingerprint. Upon recognizing that the user's identity is a trusted identity, the processor 501 authorizes the user to perform relevant sensitive operations including unlocking the screen, viewing encrypted information, downloading software, paying, and changing settings, etc. The fingerprint sensor 514 may be provided on the front, rear, or side of the terminal 500. When a physical button or a vendor Logo is provided on the terminal 500, the fingerprint sensor 514 may be integrated with the physical button or the vendor Logo.
The optical sensor 515 is used to collect the ambient light intensity. In one embodiment, the processor 501 may control the display brightness of the touch screen 505 based on the ambient light intensity collected by the optical sensor 515. Specifically, when the ambient light intensity is high, the display brightness of the touch display screen 505 is increased; when the ambient light intensity is low, the display brightness of the touch display screen 505 is turned down. In another embodiment, processor 501 may also dynamically adjust the shooting parameters of camera head assembly 506 based on the ambient light intensity collected by optical sensor 515.
A proximity sensor 516, also referred to as a distance sensor, is typically disposed on the front panel of the terminal 500. The proximity sensor 516 is used to collect the distance between the user and the front surface of the terminal 500. In one embodiment, when the proximity sensor 516 detects that the distance between the user and the front surface of the terminal 500 gradually decreases, the processor 501 controls the touch display screen 505 to switch from the bright screen state to the dark screen state; when the proximity sensor 516 detects that the distance between the user and the front surface of the terminal 500 becomes gradually larger, the processor 501 controls the touch display screen 505 to switch from the screen-rest state to the screen-on state.
That is, not only is a license plate recognition apparatus provided in the terminal 500, which includes a processor and a memory for storing processor-executable instructions, where the processor is configured to execute the method in the embodiments shown in fig. 2 and 3A, but a computer-readable storage medium is also provided, in which a computer program is stored, and when the computer program is executed by the processor, the method in the embodiments shown in fig. 2 and 3A can be implemented.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (12)

1. A license plate recognition method is characterized by comprising the following steps:
acquiring a license plate region containing a license plate in an image, and extracting feature information in the license plate region by using a Convolutional Neural Network (CNN) model, wherein the feature information comprises a plurality of feature sequences;
determining the license plate type of the license plate based on the plurality of characteristic sequences;
identifying characters in the license plate area based on the plurality of characteristic sequences to obtain a character identification result;
acquiring a license plate number sample corresponding to the license plate type;
judging whether a license plate number belonging to the license plate type contains a sub-segment and a main segment or not based on the license plate type, wherein the sub-segment and the main segment both refer to continuous character strings in the license plate number, and the number of characters contained in the main segment is larger than that contained in the sub-segment, or the size of an area occupied by the characters contained in the main segment is larger than that of the area occupied by the characters contained in the sub-segment;
and if the license plate number belonging to the license plate type comprises the sub-sections and the main sections, dividing the character recognition result into the sub-sections and the main sections according to the license plate number sample, and determining the divided character recognition result as the license plate number of the license plate.
2. The method of claim 1, wherein the identifying characters within the license plate region based on the plurality of feature sequences comprises:
and processing each feature sequence of the plurality of feature sequences through an Attention model to obtain a character corresponding to each feature sequence in the license plate region.
3. The method of claim 2, wherein the processing each of the plurality of feature sequences through an Attention model to obtain the character corresponding to each feature sequence in the license plate region comprises:
for any feature sequence A among the plurality of feature sequences, determining a weight for feature sequence A and a weight for each remaining feature sequence other than feature sequence A, wherein the weight of feature sequence A is greater than the weight of each remaining feature sequence;
determining semantic information of feature sequence A based on feature sequence A, the weight of feature sequence A, the remaining feature sequences, and the weights of the remaining feature sequences;
and decoding the semantic information of feature sequence A to obtain the character corresponding to feature sequence A.
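The weighting-and-summing step of claim 3 can be illustrated as follows. This is a minimal sketch under stated assumptions: the claim does not fix a scoring function, so the dot-product score used here is an assumption, and real implementations typically learn the attention parameters.

```python
import numpy as np

def semantic_vector(features: np.ndarray, idx: int) -> np.ndarray:
    """Compute the semantic information of feature sequence `idx`.

    features: (T, D) array of T feature sequences of dimension D.
    Scores every sequence against sequence `idx` (dot product, an assumed
    choice), normalizes the scores with a softmax into weights, and returns
    the weighted sum of all feature sequences.
    """
    query = features[idx]
    scores = features @ query                 # relevance of each sequence to A
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                  # softmax: weights sum to 1
    return weights @ features                 # weighted sum = semantic info of A

rng = np.random.default_rng(0)
feats = rng.normal(size=(7, 16))              # 7 feature sequences, 16-dim each
ctx = semantic_vector(feats, 3)
print(ctx.shape)  # (16,)
```

In the claimed method each character position gets its own such semantic vector, which is then decoded into one character; the sketch covers only the weighting and summing.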
4. The method of claim 1, wherein the determining the license plate type to which the license plate belongs based on the plurality of feature sequences comprises:
determining, based on the plurality of feature sequences, a probability value that the license plate belongs to each preset license plate type by using the CNN model;
and determining the preset license plate type corresponding to the maximum probability value as the license plate type to which the license plate belongs.
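The class-selection step of claim 4 is an argmax over per-type probabilities. A short sketch, with a made-up class list and made-up probabilities standing in for the CNN classifier's output:

```python
import numpy as np

# Assumed set of preset license plate types (illustrative, not from the patent).
PLATE_TYPES = ["mainland_standard", "new_energy", "hk_mainland_dual"]

def pick_plate_type(probs):
    """Return the preset type with the maximum probability, and that probability."""
    probs = np.asarray(probs, dtype=float)
    i = int(probs.argmax())
    return PLATE_TYPES[i], float(probs[i])

plate_type, p = pick_plate_type([0.1, 0.7, 0.2])
print(plate_type, p)  # new_energy 0.7
```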
5. The method of claim 1, wherein after the license plate number of the license plate is determined, the method further comprises:
acquiring license plate color information and region information corresponding to the license plate type;
and outputting the license plate number of the license plate, the license plate color information, and the region information.
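The output step of claim 5 amounts to a lookup keyed by plate type. A sketch with illustrative entries (the colors and regions below are assumptions, not data from the patent):

```python
# Hypothetical per-type metadata table.
PLATE_META = {
    "mainland_standard": {"color": "blue", "region": "Chinese mainland"},
    "hk_mainland_dual": {"color": "black", "region": "Hong Kong / mainland"},
}

def describe_plate(number: str, plate_type: str) -> str:
    """Combine the recognized number with the type's color and region info."""
    meta = PLATE_META[plate_type]
    return f"{number} ({meta['color']} plate, {meta['region']})"

print(describe_plate("GZ12345", "hk_mainland_dual"))
```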
6. A license plate recognition device, characterized in that the device comprises:
an acquisition module, configured to acquire a license plate region containing a license plate in an image and to extract feature information from the license plate region by using a convolutional neural network (CNN) model, wherein the feature information comprises a plurality of feature sequences;
a second determining module, configured to determine the license plate type to which the license plate belongs based on the plurality of feature sequences;
a recognition module, configured to identify characters in the license plate region based on the plurality of feature sequences to obtain a character recognition result;
and a first determining module, comprising a determining submodule configured to:
acquire a license plate number sample corresponding to the license plate type;
determine, based on the license plate type, whether a license plate number belonging to the license plate type contains a sub-segment and a main segment, wherein the sub-segment and the main segment each refer to a continuous character string in the license plate number, and the main segment either contains more characters than the sub-segment or occupies a larger area than the characters of the sub-segment;
and if the license plate number belonging to the license plate type contains the sub-segment and the main segment, divide the character recognition result into the sub-segment and the main segment according to the license plate number sample, and determine the divided character recognition result as the license plate number of the license plate.
7. The apparatus of claim 6, wherein the recognition module is configured to:
process each feature sequence of the plurality of feature sequences through an Attention model to obtain a character corresponding to each feature sequence in the license plate region.
8. The apparatus of claim 7, wherein the recognition module is specifically configured to:
for any feature sequence A among the plurality of feature sequences, determine, through the Attention model, a weight for feature sequence A and a weight for each remaining feature sequence other than feature sequence A, wherein the weight of feature sequence A is greater than the weight of each remaining feature sequence;
determine semantic information of feature sequence A based on feature sequence A, the weight of feature sequence A, the remaining feature sequences, and the weights of the remaining feature sequences;
and decode the semantic information of feature sequence A to obtain the character corresponding to feature sequence A.
9. The apparatus of claim 6, wherein the second determining module is specifically configured to:
determine, based on the plurality of feature sequences, a probability value that the license plate belongs to each preset license plate type by using the CNN model;
and determine the preset license plate type corresponding to the maximum probability value as the license plate type to which the license plate belongs.
10. The apparatus of claim 6, wherein the apparatus is further configured to:
acquire license plate color information and region information corresponding to the license plate type;
and output the license plate number of the license plate, the license plate color information, and the region information.
11. A license plate recognition device, characterized in that the device comprises:
A processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the method of any one of claims 1-5.
12. A computer-readable storage medium, characterized in that the storage medium stores a computer program which, when executed by a processor, implements the method of any one of claims 1-5.
CN201810461160.4A 2018-05-15 2018-05-15 License plate recognition method and device and storage medium Active CN110490179B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810461160.4A CN110490179B (en) 2018-05-15 2018-05-15 License plate recognition method and device and storage medium

Publications (2)

Publication Number Publication Date
CN110490179A CN110490179A (en) 2019-11-22
CN110490179B true CN110490179B (en) 2022-08-05

Family

ID=68545110

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810461160.4A Active CN110490179B (en) 2018-05-15 2018-05-15 License plate recognition method and device and storage medium

Country Status (1)

Country Link
CN (1) CN110490179B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111444911B (en) * 2019-12-13 2021-02-26 珠海大横琴科技发展有限公司 Training method and device of license plate recognition model and license plate recognition method and device
CN111368645A (en) * 2020-02-14 2020-07-03 北京澎思科技有限公司 Method and device for identifying multi-label license plate, electronic equipment and readable medium
CN111310766A (en) * 2020-03-13 2020-06-19 西北工业大学 License plate identification method based on coding and decoding and two-dimensional attention mechanism
CN111832568B (en) * 2020-06-12 2024-01-12 北京百度网讯科技有限公司 License plate recognition method, training method and device of license plate recognition model
CN111563504B (en) * 2020-07-16 2020-10-30 平安国际智慧城市科技股份有限公司 License plate recognition method and related equipment
CN112381129A (en) * 2020-11-10 2021-02-19 浙江大华技术股份有限公司 License plate classification method and device, storage medium and electronic equipment
CN112418234A (en) * 2020-11-19 2021-02-26 北京软通智慧城市科技有限公司 Method and device for identifying license plate number, electronic equipment and storage medium
CN113486885A (en) * 2021-06-17 2021-10-08 杭州鸿泉物联网技术股份有限公司 License plate recognition method and device, electronic equipment and storage medium

Citations (6)

Publication number Priority date Publication date Assignee Title
CN106407981A (en) * 2016-11-24 2017-02-15 北京文安智能技术股份有限公司 License plate recognition method, device and system
EP3182334A1 (en) * 2015-12-17 2017-06-21 Xerox Corporation License plate recognition using coarse-to-fine cascade adaptations of convolutional neural networks
CN106960206A (en) * 2017-02-08 2017-07-18 北京捷通华声科技股份有限公司 Character identifying method and character recognition system
CN107704860A (en) * 2017-12-06 2018-02-16 四川知创空间孵化器管理有限公司 A kind of number-plate number recognition methods
CN107944450A (en) * 2017-11-16 2018-04-20 深圳市华尊科技股份有限公司 A kind of licence plate recognition method and device
CN108009543A (en) * 2017-11-29 2018-05-08 深圳市华尊科技股份有限公司 A kind of licence plate recognition method and device


Non-Patent Citations (2)

Title
An End-to-End Trainable Neural Network for Image-Based Sequence Recognition and Its Application to Scene Text Recognition; Baoguang Shi et al.; IEEE Transactions on Pattern Analysis and Machine Intelligence; November 30, 2017; Vol. 39, No. 11; pp. 2298-2304 *
Research on Chinese License Plate Recognition in Complex Scenes; He Saina; China Master's Theses Full-text Database, Engineering Science and Technology II; February 15, 2018 (No. 2); C034-930 *

Similar Documents

Publication Publication Date Title
CN109829456B (en) Image identification method and device and terminal
CN110490179B (en) License plate recognition method and device and storage medium
CN110059685B (en) Character area detection method, device and storage medium
CN110650379B (en) Video abstract generation method and device, electronic equipment and storage medium
CN110490186B (en) License plate recognition method and device and storage medium
CN112749613B (en) Video data processing method, device, computer equipment and storage medium
CN110991457B (en) Two-dimensional code processing method and device, electronic equipment and storage medium
CN114170349A (en) Image generation method, image generation device, electronic equipment and storage medium
CN110839128A (en) Photographing behavior detection method and device and storage medium
CN111027490A (en) Face attribute recognition method and device and storage medium
CN111754386A (en) Image area shielding method, device, equipment and storage medium
CN110705614A (en) Model training method and device, electronic equipment and storage medium
CN110647881A (en) Method, device, equipment and storage medium for determining card type corresponding to image
CN110503159B (en) Character recognition method, device, equipment and medium
CN115497082A (en) Method, apparatus and storage medium for determining subtitles in video
CN111586279B (en) Method, device and equipment for determining shooting state and storage medium
CN113378705B (en) Lane line detection method, device, equipment and storage medium
CN110728167A (en) Text detection method and device and computer readable storage medium
CN111127541A (en) Vehicle size determination method and device and storage medium
CN112053360A (en) Image segmentation method and device, computer equipment and storage medium
CN111639639B (en) Method, device, equipment and storage medium for detecting text area
CN111444749B (en) Method and device for identifying road surface guide mark and storage medium
CN110163192B (en) Character recognition method, device and readable medium
CN111611414A (en) Vehicle retrieval method, device and storage medium
CN113343709B (en) Method for training intention recognition model, method, device and equipment for intention recognition

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant