CN112686252A - License plate detection method and device - Google Patents


Info

Publication number
CN112686252A
CN112686252A
Authority
CN
China
Prior art keywords
license plate
image
detection
suspected
area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011586994.1A
Other languages
Chinese (zh)
Inventor
王达
南一冰
廉士国
Current Assignee
China United Network Communications Group Co Ltd
Unicom Big Data Co Ltd
Original Assignee
China United Network Communications Group Co Ltd
Unicom Big Data Co Ltd
Priority date
Filing date
Publication date
Application filed by China United Network Communications Group Co Ltd and Unicom Big Data Co Ltd
Priority to CN202011586994.1A
Publication of CN112686252A
Legal status: Pending

Landscapes

  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

The embodiments of the present application provide a license plate detection and recognition method and apparatus. The method comprises: performing license plate detection on a target image containing a vehicle; determining a suspected license plate area within a designated area when no license plate is detected; determining whether the suspected license plate area contains a license plate; and, when it does, outputting the position information of the license plate. In this way, license plates missed in the first pass are detected and classified a second time, reducing the missed-detection rate. The method further comprises: acquiring a license plate image based on the position information of the license plate; performing a convolution operation on the license plate image to obtain a feature matrix; and, when the dimension of the feature matrix in the height direction is greater than 1, converting the feature matrix into one whose height dimension is 1 before performing license plate recognition. Single-layer and double-layer license plates can therefore be recognized by one model, alleviating the problems of long model-loading time and heavy use of computing resources.

Description

License plate detection method and device
Technical Field
The present application relates to the field of intelligent traffic management, and more particularly, to a license plate detection method and apparatus.
Background
License plate detection and recognition are important components of intelligent traffic management and have developed rapidly in recent years. In existing intelligent license plate detection technology, a common approach is to detect the license plate with a target detection algorithm: a large number of images containing vehicles are collected and fed into the algorithm. In practice, however, the collected images are degraded by many factors, such as the weather at capture time, mud splashed onto the plate, and aging of the plate itself. The resulting low image quality can cause license plates to be missed, which in turn hampers subsequent license plate recognition.
Disclosure of Invention
The embodiment of the application provides a license plate detection method and device, and aims to solve the problem of missing detection of a license plate.
In a first aspect, the present application provides a license plate detection method, including: detecting a license plate of a target image, wherein the target image is an image containing a vehicle; determining a suspected license plate area in the designated area under the condition that the license plate is not detected; determining whether the suspected license plate area contains the license plate; and outputting the position information of the license plate under the condition that the suspected license plate area contains the license plate.
Based on this scheme, when no license plate is detected in the first pass, a secondary detection is performed within a designated area to obtain a suspected license plate area, which is then classified. On the one hand, this reduces missed license plate detections; on the other hand, because the secondary detection is restricted to the designated area, the search range is narrowed, which reduces computation and shortens computation time.
Optionally, the determining of a suspected license plate area in the designated area includes: performing a convolution operation on the target image with a pre-trained target detection model to obtain at least one layer of feature map; determining, within the designated region of the last of these feature maps, the confidence that the region corresponding to each element point contains a license plate; and taking the region corresponding to the element point with the highest confidence as the suspected license plate area.
Optionally, the determining whether the suspected license plate area contains the license plate includes: and classifying the suspected license plate area by adopting a pre-trained classification model so as to determine whether the license plate is contained.
Optionally, performing license plate recognition on the license plate image includes: performing a convolution operation on the license plate image to obtain a feature matrix of dimension C × H × N, where C is the number of channels, H is the dimension in the height direction, N is the dimension in the width direction, and C, H, and N are positive integers; if H is greater than 1, decomposing the feature matrix into H feature matrices of dimension C × 1 × N; splicing these H feature matrices in a preset order to obtain a feature matrix of dimension C × 1 × (H × N); and performing license plate recognition based on the C × 1 × (H × N) feature matrix. Optionally, the method may further include: normalizing the license plate image to a predefined size specification, to obtain a license plate image meeting that specification.
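The decompose-and-splice step described above can be sketched in NumPy. This is an illustrative sketch following the C × H × N shape convention stated here, not the patent's implementation:

```python
import numpy as np

def flatten_height(feat):
    """Splice the H rows of a C x H x N feature matrix into a
    C x 1 x (H*N) matrix, concatenating the rows in top-to-bottom order."""
    C, H, N = feat.shape
    if H == 1:
        return feat.reshape(C, 1, N)
    rows = [feat[:, h:h + 1, :] for h in range(H)]  # H matrices of C x 1 x N
    return np.concatenate(rows, axis=2)             # C x 1 x (H*N)

# A toy 2-channel, 2-row (double-layer-style), 3-column feature matrix.
feat = np.arange(2 * 2 * 3).reshape(2, 2, 3)
out = flatten_height(feat)
```

For a double-layer plate (H = 2), the second row of features is appended after the first, so a recognizer that consumes a single feature row can handle both plate types.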
Optionally, the method may further include: acquiring an original image containing a vehicle; and carrying out image processing on the original image to obtain the target image.
Optionally, the original image is subjected to image processing, which may be defogging and night enhancement processing.
In a second aspect, the present application provides a license plate recognition method, including: acquiring a license plate image, i.e. an image containing a license plate; performing a convolution operation on the license plate image to obtain a feature matrix of dimension C × H × N, where C is the number of channels, H is the dimension in the height direction, N is the dimension in the width direction, and C, H, and N are positive integers; if H is greater than 1, splicing the H row matrices of the C × H × N feature matrix in a preset order to obtain a feature matrix of dimension C × 1 × (H × N); and performing license plate recognition based on the C × 1 × (H × N) feature matrix.
Based on this scheme, feature dimension conversion is achieved by feature splicing, so that a convolutional network can extract license plate features during recognition and other networks can extract further features afterwards, diversifying the extracted features. Recognition of single-layer and double-layer license plates no longer depends on multiple algorithm models; a single model suffices, alleviating the problems of long model-loading time and heavy use of computing resources.
Optionally, the method further comprises: and according to a predefined size specification, carrying out normalization processing on the license plate image to obtain the license plate image meeting the size specification.
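A minimal sketch of such a size normalization, using nearest-neighbour resampling in NumPy. The 32 × 160 target specification is an assumption for illustration, not a value from the application:

```python
import numpy as np

def normalize_plate(img, spec=(32, 160)):
    """Nearest-neighbour resize of a plate image (H x W x C) to a predefined
    size specification, so single- and double-layer plates share one input size."""
    H, W = spec
    h, w = img.shape[:2]
    ys = (np.arange(H) * h / H).astype(int)   # source row for each target row
    xs = (np.arange(W) * w / W).astype(int)   # source column for each target column
    return img[ys][:, xs]

double_layer = np.ones((40, 88, 3))   # taller, double-layer-style plate image
single_layer = np.ones((22, 140, 3))  # wider, single-layer-style plate image
a = normalize_plate(double_layer)
b = normalize_plate(single_layer)
```

In practice a library resampler (e.g. bilinear interpolation) would likely be preferred; the point is only that both plate types arrive at the same input size.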
Optionally, the method further comprises: and performing convolution operation on the license plate image obtained after the normalization processing by adopting a pre-trained first neural network to obtain a characteristic matrix with dimension of C multiplied by H multiplied by N.
Optionally, the first neural network is a Convolutional Neural Network (CNN).
Optionally, the license plate recognition based on the feature matrix with the dimension of C × 1 × (H × N) includes: and performing license plate recognition based on the feature matrix with the dimension of C multiplied by 1 x (H multiplied by N) by adopting a pre-trained second neural network.
Optionally, the second neural network is a bidirectional long short-term memory network (BLSTM), a long short-term memory network (LSTM), a gated recurrent unit (GRU), or a bidirectional gated recurrent unit (BGRU).
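As a sketch of how the spliced C × 1 × (H × N) features might feed such a bidirectional recurrent recognizer in PyTorch. All sizes (C = 64, H = 2, N = 16, hidden size 128, a 70-class alphabet) are illustrative assumptions, not values from the application:

```python
import torch
import torch.nn as nn

C, H, N = 64, 2, 16                           # assumed feature sizes
feat = torch.randn(1, C, 1, H * N)            # C x 1 x (H*N) after splicing

# Treat each of the H*N columns as one time step with C features.
seq = feat.squeeze(2).permute(0, 2, 1)        # (batch, H*N, C)

blstm = nn.LSTM(input_size=C, hidden_size=128,
                bidirectional=True, batch_first=True)
out, _ = blstm(seq)                           # (1, H*N, 2*128)

num_classes = 70                              # assumed alphabet size
logits = nn.Linear(256, num_classes)(out)     # per-column character scores
```

The per-column scores would then typically be decoded into a character string, e.g. with CTC decoding, though the decoding step is outside this sketch.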
Optionally, the method further comprises: and detecting a license plate of a target image to determine the position information of the license plate, wherein the target image is an image containing a vehicle.
Optionally, the method further comprises: determining a suspected license plate area in a designated area if the license plate is not detected based on the license plate detection; detecting the suspected license plate area to determine whether the license plate is contained; and outputting the position information of the license plate under the condition that the license plate is detected from the suspected license plate area.
Optionally, the acquiring the license plate image includes: and acquiring the license plate image from the target image according to the position information of the license plate.
In a third aspect, a license plate detection apparatus is provided, comprising units or modules for implementing the license plate detection method of the first aspect or any implementation of the first aspect.
In a fourth aspect, a license plate detection apparatus is provided, comprising a processor configured to execute the license plate detection method of the first aspect or any implementation of the first aspect.
In a fifth aspect, a license plate recognition apparatus is provided, comprising units or modules for implementing the license plate recognition method of the second aspect or any implementation of the second aspect.
In a sixth aspect, a license plate recognition apparatus is provided, comprising a processor configured to execute the license plate recognition method of the second aspect or any implementation of the second aspect.
In a seventh aspect, a license plate detection and recognition system is provided, comprising the license plate detection apparatus and/or the license plate recognition apparatus.
In an eighth aspect, a computer-readable storage medium is provided, comprising instructions which, when run on a computer, cause the computer to carry out the method of the first or second aspect or any implementation thereof.
In a ninth aspect, a computer-readable storage medium is provided, having stored thereon a computer program which, when executed by a processor, causes a computer to perform the method of the first or second aspect or any implementation thereof.
Drawings
FIG. 1 is a schematic structural diagram of a system suitable for use in a license plate detection and recognition method provided in an embodiment of the present application;
fig. 2 is a schematic flow chart of a license plate detection method provided in the embodiment of the present application;
FIG. 3 is a schematic diagram of a designated area in a license plate detection process according to an embodiment of the present application;
fig. 4 is a schematic flowchart of a license plate recognition method according to an embodiment of the present application;
fig. 5 is a schematic diagram of a single-layer and double-layer license plate scale normalization method provided in the embodiment of the present application;
FIG. 6 is a diagram illustrating a feature dimension transformation method provided in an embodiment of the present application;
fig. 7 and 8 are schematic block diagrams of a license plate detection device provided in an embodiment of the present application;
fig. 9 and 10 are schematic block diagrams of a license plate recognition device provided in an embodiment of the present application.
Detailed Description
The technical solution in the present application will be described below with reference to the accompanying drawings.
License plate detection and recognition are particularly important for intelligent management. For example, a traffic management platform usually relies on license plate detection and recognition to realize intelligent management; likewise, electronic toll collection (ETC) systems at parking lot entrances and exits, expressway toll gates, and the like are managed intelligently through license plate detection and recognition.
Generally, the object of license plate detection and recognition is an image captured by a device such as a camera. However, vehicles are exposed to rain, snow, haze, and other weather for long periods, so plates suffer from splashed road mud, aging, and discolored paint; night-time capture adds light interference, vehicle body pattern interference, and similar problems. As a result, existing license plate detection algorithms, whether based on traditional methods or deep learning, can miss plates. Once a plate is missed, recognition becomes difficult, which obstructs intelligent management. Separately, current license plates come in single-layer and double-layer formats. To recognize both, the current practice is to first judge the plate type and then run a model that supports only single-layer or only double-layer plates. This can cause slow model loading and heavy occupation of computer cache and/or video memory at run time, and ultimately long recognition times.
In view of this, the present application provides a license plate detection method to solve the problem of missed license plate detection and thereby provide a solid basis for license plate recognition. The application also provides a license plate recognition method by which single-layer and double-layer license plates can be recognized with a single algorithm model, reducing model loading and the occupation of computer cache and/or memory.
For ease of understanding, a license plate detection and recognition system that can be used to implement the method provided by the embodiments of the present application will be briefly described with reference to fig. 1. As shown in fig. 1, the license plate detection and recognition system 100 may include a license plate detection module 110 and a license plate recognition module 120. The license plate detection module 110 and the license plate recognition module 120 can perform data interaction through a communication interface, and the license plate detection module 110 and the license plate recognition module 120 can also acquire images from an image acquisition device through the communication interface.
It should be understood that in some cases, the image capture device may also be defined as a module in a license plate detection and recognition system. The embodiments of the present application do not limit this.
It should also be understood that the license plate detection and recognition system 100 may also include other modules, which are not shown in fig. 1, but should not limit the present application in any way.
Fig. 2 is a schematic flow chart of a license plate detection method provided by the present application. The license plate detection method 200 shown in fig. 2 can be performed, for example, by the license plate detection module 110 shown in fig. 1. The license plate detection module may include a neural network model for implementing the target detection described below. In addition to the existing neural network model, the license plate detection module is provided with a secondary detection unit and a classification unit, to realize the secondary detection and classification described below.
As shown in fig. 2, the method 200 may include steps 210 through 250. The steps in fig. 2 are explained in detail below.
In step 210, a target image containing a vehicle is acquired.
The target image may be an original image including a vehicle acquired from an image acquired by the image acquisition device, or an image obtained by image processing of the original image including the vehicle.
Acquiring an image containing a vehicle from the image captured by the acquisition device can be realized by target detection, for example. The captured image may be input into a pre-trained target detection model for detecting vehicles. The model computes, for each position in the image, a probability vector over the candidate categories and takes the category with the largest probability as the category of that position. From the probability of that category and the corresponding position information, the model determines whether the category is "vehicle" and, if so, outputs the corresponding position information, so that an original image containing the vehicle can be acquired.
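The select-the-winning-class step described above can be sketched as follows. The model outputs, class index, and threshold here are illustrative assumptions, not details from the application:

```python
import numpy as np

def pick_vehicle_boxes(class_probs, boxes, vehicle_class, thresh=0.5):
    """For each candidate position, take the highest-probability class;
    keep the box when that class is 'vehicle' and sufficiently confident."""
    best = class_probs.argmax(axis=1)          # winning class per position
    conf = class_probs.max(axis=1)             # its probability
    keep = (best == vehicle_class) & (conf > thresh)
    return boxes[keep], conf[keep]

probs = np.array([[0.1, 0.8, 0.1],   # position 0: class 1 (vehicle) wins
                  [0.7, 0.2, 0.1]])  # position 1: class 0 wins
boxes = np.array([[10, 20, 100, 60], [5, 5, 40, 30]])
kept, kept_conf = pick_vehicle_boxes(probs, boxes, vehicle_class=1)
```

Only the first box survives, since only its winning class is the assumed vehicle class and its confidence exceeds the threshold.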
Here, since the object detection model is used to detect the vehicle, the object detection model may be referred to as a vehicle detection model for the sake of distinction.
Of course, the acquisition of the original image including the vehicle may also be implemented in other ways, which is not limited in this application. After an original image containing a vehicle is obtained, the license plate detection module can directly use the original image as a target image to carry out license plate detection; the original image may also be subjected to image processing to obtain a clearer image, and the image after the image processing is taken as a target image, which is not limited in the embodiment of the present application.
The image processing includes, but is not limited to, defogging and night enhancement processing, for example.
The defogging treatment can be realized by a defogging algorithm, such as a dark channel prior method, a maximum contrast ratio method, a color attenuation prior method, a deep convolution neural network method and the like. By the defogging processing, the sharpness of the image can be improved.
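As an illustration of the dark channel prior method mentioned above, here is a minimal, unoptimized sketch. The patch size, omega, and the lower transmission bound t0 are conventional choices, not values from the application:

```python
import numpy as np

def dark_channel(img, patch=15):
    """Per-pixel minimum over the color channels, then a min filter
    over a patch x patch neighbourhood."""
    mins = img.min(axis=2)
    pad = patch // 2
    padded = np.pad(mins, pad, mode='edge')
    h, w = mins.shape
    out = np.empty_like(mins)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

def dehaze(img, omega=0.95, t0=0.1):
    """Minimal dark-channel-prior dehazing: estimate atmospheric light A from
    the haziest pixels, then invert the haze model I = J*t + A*(1-t)."""
    dc = dark_channel(img)
    flat = dc.ravel()
    idx = flat.argsort()[-max(1, flat.size // 1000):]   # top ~0.1% haziest
    A = img.reshape(-1, 3)[idx].max(axis=0)
    t = 1.0 - omega * dark_channel(img / A)             # transmission estimate
    t = np.clip(t, t0, 1.0)[..., None]
    return np.clip((img - A) / t + A, 0.0, 1.0)

img = np.full((8, 8, 3), 0.8)   # a uniformly hazy toy image in [0, 1]
clear = dehaze(img)
```

A production implementation would vectorize the min filter and refine the transmission map (e.g. with guided filtering), but the structure above is the core of the method.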
Night enhancement processing can be implemented using some existing image signal processing method or model. Through the night enhancement processing, the brightness and detail visibility of the image can be improved, noise can be suppressed, and the like.
In step 220, license plate detection is performed on the target image.
The process of performing license plate detection on the target image can be understood as target detection in which the target is the license plate. Through license plate detection, the license plate can be located in the target image, and its position information in the target image can be output. License plate detection can be realized by existing target detection means, for example by a neural network designed for target detection. The neural network may be a you-only-look-once (YOLO) network, such as the second version (YOLOv2), the third version (YOLOv3), or the fifth version (YOLOv5); it may also be a faster region-based convolutional neural network (Faster R-CNN), a single-shot multibox detector (SSD) network, a multi-task cascaded convolutional neural network (MTCNN), or the like.
Optionally, step 220 specifically includes:
step 2201, locating a vehicle area in the target image;
step 2202, detecting a license plate within the located vehicle region.
Since the target image includes the vehicle, the vehicle occupies a certain area in it. The vehicle region is the area occupied by the vehicle in the target image; in other words, it is a smaller region of the target image that contains the vehicle. For example, the vehicle region may be exactly the area occupied by the vehicle, with an irregular outer contour that matches or approximates the vehicle's shape; or it may be a region enclosing the vehicle, such as a rectangle whose sides bound the vehicle. The embodiments of the present application do not limit this.
By way of example, one possible implementation of locating the vehicle region in the target image is as follows. The vehicle detection model performs a convolution operation on the target image to obtain a target feature layer; each unit in the feature layer may be called an element point and may correspond to a weight in a convolution kernel. For each element point, the model predicts a region in which a vehicle may exist (hereinafter, a predicted vehicle region) whose center point lies at that element point, and computes the confidence that the predicted vehicle in that region is a real vehicle. Regions whose confidence exceeds a first preset threshold are taken as the final predicted vehicle regions.
It should be noted that the vehicle detection model may be obtained through training. In the training stage, the confidence Ĉ that the predicted vehicle in a certain region is a real vehicle can be calculated by the following formula:

Ĉ = P × IOU

where P is the probability that a vehicle exists in the region, and IOU is the ratio of the intersection to the union of the predicted vehicle region and the real vehicle region, reflecting the difference between the two. Training drives the confidence Ĉ ever closer to the true confidence C, so that the trained vehicle detection model is able to calculate confidence during vehicle detection. For example, when a region is a real vehicle region, its true confidence is C = P = 1.
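The IOU term in the formula can be computed as follows. Boxes are assumed here to be (x1, y1, x2, y2) corner-coordinate tuples, an illustrative convention:

```python
def iou(box_a, box_b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# When the predicted and real regions coincide, IOU = 1 and confidence = P.
assert iou((0, 0, 10, 10), (0, 0, 10, 10)) == 1.0
```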
After the vehicle region is located by the vehicle detection model, the license plate can be further detected within the located vehicle region.
One possible implementation of detecting a license plate in the located vehicle region is as follows. The target detection model divides the vehicle image into a plurality of uniformly sized grids, each of which may be called an element point. For each element point, the license plate detection module predicts a region in which a license plate may exist (hereinafter, a predicted license plate region) whose center point lies at that element point, and computes the confidence that a license plate exists in that region; regions whose confidence exceeds a second preset threshold are taken as the final predicted license plate regions. The confidence calculation is similar to that for the vehicle and is not repeated here.
Here, since the target detection model is used to detect a license plate, the target detection model may be referred to as a license plate detection model for easy discrimination.
It should be understood that the vehicle detection model and the license plate detection model are both models trained based on the principle of target detection.
It should also be understood that the first and second preset thresholds listed above may each be set manually, for example, based on a priori experience, etc. The two may be the same or different. The specific values and the size relationship of the first preset threshold and the second preset threshold are not limited.
It should also be understood that the implementation of license plate detection described above is only one possibility. For example, the license plate detection module may also perform license plate detection directly on the target image, e.g. by running target detection on it to locate the plate; or, using the prior information that a license plate is usually located on the lower half of the vehicle, the plate can be located directly in the region below the middle of the target image. The embodiments of the present application do not limit this.
In one possible situation, when the license plate detection module detects a license plate region, the position information of the license plate region in the target image can be output, and the position information can be used for a subsequent license plate recognition module to acquire a license plate image for license plate recognition; or, the license plate detection module can also output a license plate image; or, the license plate detection module can also output the position information of the license plate area and the license plate image together. The embodiments of the present application do not limit this.
The position information of the license plate area in the target image can be used for positioning the license plate area, so that the license plate image can be intercepted from the target image according to the license plate area.
In general, a license plate is rectangular and the outer contour of the license plate region is also a rectangular frame. The position information of the license plate region may include, for example, the center point of the region together with its length and width, or the coordinates of two diagonal corners of the region, and so on; for brevity, these are not enumerated here. Any information from which the license plate region can be determined in the target image can serve as the position information, which is not limited in the present application.
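For illustration, converting center-plus-size position information into diagonal corner coordinates and cutting the plate image out of the target image might look like this. The coordinate convention (pixel origin at the top-left, corners as x1, y1, x2, y2) is an assumption:

```python
import numpy as np

def center_to_corners(cx, cy, w, h):
    """Convert (center, width, height) position info to diagonal corner coords."""
    return (cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2)

def crop_plate(image, cx, cy, w, h):
    """Cut the license plate region out of a target image (numpy H x W x C)."""
    x1, y1, x2, y2 = map(int, center_to_corners(cx, cy, w, h))
    return image[max(y1, 0):y2, max(x1, 0):x2]

img = np.zeros((100, 200, 3))
plate = crop_plate(img, cx=100.0, cy=50.0, w=40.0, h=20.0)
```

Either representation (center plus size, or two corners) carries the same information; the conversion above is why the patent can leave the exact format open.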
Another possibility is that the license plate detection module does not detect a license plate. For example, because of road mud splashed onto the plate, aging and discoloration of the plate paint, night-time light interference, vehicle body pattern interference, and the like, the module may fail to detect the plate with the existing target detection model. For this case, the present application provides a secondary detection and classification method to reduce the probability of missed license plate detection. Steps 230 to 250 below describe the secondary detection and classification process in detail.
As mentioned above, the secondary detection and classification can be realized by the secondary detection unit and the classification unit. Illustratively, step 230 described below may be implemented by a secondary detection unit, which may be, for example, an object detection model, and steps 240 to 250 may be implemented by a classification unit, which may be, for example, a classification model.
In step 230, if no license plate is detected, a secondary detection is performed in the designated area to determine a suspected license plate area.
If no license plate is detected by the target detection, the positions where a license plate may exist can be judged in advance, and one or more regions (hereinafter, designated regions) are designated for secondary detection. The secondary detection module can traverse all the element points in the designated area, determine for each one the confidence that the predicted license plate there is a real license plate, and select the predicted license plate region with the highest confidence as the suspected license plate area. The confidence can be calculated, for example, with reference to the methods listed above in step 2202.
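The traverse-and-pick-maximum step can be sketched as an argmax over a per-element-point confidence map restricted to the designated region. The `conf_map`, its 13 × 13 grid size, and the region coordinates are illustrative assumptions:

```python
import numpy as np

def suspected_plate_region(conf_map, region):
    """Traverse the element points inside the designated region of a feature
    map and return the point whose predicted plate has the highest confidence."""
    y1, x1, y2, x2 = region
    window = conf_map[y1:y2, x1:x2]
    dy, dx = np.unravel_index(window.argmax(), window.shape)
    return (y1 + dy, x1 + dx), window.max()

conf = np.zeros((13, 13))
conf[10, 6] = 0.9                      # strongest response in the lower half
point, score = suspected_plate_region(conf, region=(7, 0, 13, 13))
```

Restricting the argmax to the region (here, the lower half of the map) is what narrows the search range compared with scanning the whole feature map.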
As mentioned above, license plate detection may be implemented by a license plate detection model (e.g. a CNN or YOLO network) that performs convolution operations on the input target image. Each convolution operation yields a convolutional layer feature map; since each operation extracts features with one or more convolution kernels (or convolution operators), each layer may contain one or more feature maps. At least one layer of feature maps is obtained from at least one convolution of the target image. The target detection module can perform the secondary detection based on part of one or more of these feature maps, or on part of a feature map obtained by fusing multiple layers. For example, a partial region or the whole of one or more feature maps may be designated as the region for secondary detection. The fused feature map may be, for instance, produced by merging several convolutional layer feature maps with a feature pyramid network; other fusion methods are also possible, which the present application does not limit. A partial region may be, for example, the region where the vehicle lies, such as the vehicle region above. The license plate detection module can locate the vehicle region through the target detection described above and then take the corresponding region of that vehicle region in the feature map as the designated region.
Here, the region designated for secondary detection is the designated area described above. The designated area may be determined according to a preset rule. For example, the vehicle region determined in step 210 may be defined as the designated area, or the lower half of the feature map may be defined as the designated area, or the entire feature map may be defined as the designated area. The embodiments of the present application do not limit this.

Since the receptive field of the lower half of the feature map corresponds to the part of the vehicle's front face below the window in the real vehicle image, this is where a license plate is most likely to appear. Performing the secondary detection with this region as the designated area therefore reduces the amount of computation and shortens the computation time.
Fig. 3 shows an example of the designated area. The left side of fig. 3 shows an example of a feature map obtained by the convolution operation on the vehicle image. The feature map is uniformly divided into a plurality of grid cells; the region inside the black dashed box is the designated area for secondary detection, and the region inside the black solid box is the suspected license plate region determined by the secondary detection. Correspondingly, the right side of fig. 3 shows the vehicle image, in which the region inside the black dashed box corresponds to the designated area for secondary detection, and the region inside the white solid box corresponds to the suspected license plate region determined by the secondary detection.
Based on the secondary detection, the license plate detection module may determine that the suspected license plate region is shown in dark shading in fig. 3.
It should be understood that fig. 3 is only an example of one feature map and should not limit the present application in any way. The license plate detection module may perform secondary detection on one or more feature maps, obtain the region with the highest confidence in each feature map, compare the confidences of these regions, and select the region with the highest confidence overall as the suspected license plate region. In one possible implementation, the license plate detection module may define the vehicle region located in one or more of the at least one layer of feature maps obtained by the convolution operation as the designated region, which can achieve a better prediction effect.
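The comparison across multiple feature maps described above can be sketched as follows; the map names, confidences, and box coordinates are purely illustrative assumptions, not values from the patent:

```python
def select_suspected_region(per_map_best):
    """Pick the overall suspected license plate region from the best
    candidate of each feature map: the one with the highest confidence
    wins. Each candidate is a (map_name, confidence, box) tuple."""
    return max(per_map_best, key=lambda cand: cand[1])

candidates = [
    ("layer3", 0.62, (10, 3, 14, 5)),   # best region found in layer 3
    ("layer4", 0.81, (5, 2, 7, 3)),     # best region found in layer 4
    ("fused",  0.47, (21, 6, 29, 9)),   # best region in a fused map
]
print(select_suspected_region(candidates))  # -> ('layer4', 0.81, (5, 2, 7, 3))
```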
In step 240, the suspected license plate area is classified to determine whether the suspected license plate area contains a license plate.
In a specific implementation, the position information of the suspected license plate area obtained through the secondary detection can be input into a classification unit, and the classification unit classifies the suspected license plate area to determine whether the suspected license plate area is a license plate.
Still taking fig. 3 as an example, the classification unit may determine whether the suspected license plate area shown in fig. 3 is a license plate.
The classification unit may be understood as a classifier, which is used to detect the suspected license plate region to determine whether the license plate is included therein. The output of the classification unit may be, for example, "yes" or "no", or "contained" or "not contained", or the like, which may be used to indicate whether the suspected license plate area contains a license plate.
In step 250, the position information of the license plate area is output under the condition that the suspected license plate area contains the license plate.
After the suspected license plate area is determined to be a license plate, the target detection module may output the position information of the suspected license plate area obtained by the secondary detection as the position information of the license plate area; or it may crop the region of the target image corresponding to the suspected license plate area to obtain and output a license plate image; or it may output both the position information of the license plate area and the license plate image. The embodiments of the present application do not limit this.
It should be understood that the detailed description of the position information can refer to the related description in step 220 above, and is omitted here for brevity.

In summary, the license plate detection method provided by the embodiment of the application addresses the problem of missed license plate detections: a suspected license plate area is obtained by performing secondary detection on the designated area, and the suspected license plate area is then classified. On one hand, the added secondary detection and classification unit reduces missed detections; on the other hand, since the secondary detection is restricted to the designated area, the detection range is narrowed, which reduces the amount of computation and shortens the computation time.
It should be understood that the license plate detected by the above-mentioned license plate detection method can be used for license plate recognition. The embodiment of the application further provides a license plate recognition method, which can acquire a license plate image based on a detection result output by the license plate detection module, and further perform license plate recognition. Of course, other ways of license plate recognition may be used, such as using existing character recognition, multiple models, etc.
Fig. 4 is a schematic flow chart of a license plate recognition method provided by the present application. The license plate recognition method 400 shown in fig. 4 may be performed, for example, by the license plate recognition module 120 described above. The license plate recognition module may include a neural network model for implementing license plate recognition as described below. The neural network model may include, for example but not limited to, CNN + BLSTM, CNN + LSTM, CNN + GRU, CNN + BGRU, and the like.
The specific flow of the method 400 is described below by taking CNN + BLSTM as an example. The CNN may be used to perform a convolution operation on the license plate image to obtain a feature matrix. BLSTM may be used to perform license plate recognition.
As shown in fig. 4, the method 400 may include steps 410 through 430. The steps in fig. 4 are explained in detail below.
In step 410, a license plate image is obtained.
One possible implementation is to obtain the license plate image from the target image based on the position information of the license plate output in the method 200.

Another possible implementation is to acquire the license plate image in some other way, for example from a database containing license plate images, or by cropping a portion of some images, etc. These are not enumerated here for brevity. The embodiment of the application does not limit the manner in which the license plate image is acquired.
In step 420, license plate recognition is performed on the license plate image.
One possible implementation manner is that the license plate recognition module can input the license plate image into a pre-trained neural network model, and the neural network performs license plate recognition.
As described above, the license plate recognition module may perform convolution operation on the input license plate image through CNN, and then further extract features from the feature map after convolution operation by BLSTM, LSTM, GRU, BGRU, or the like, and perform license plate recognition.
Of course, the license plate recognition module can also perform license plate recognition in other manners, for example, by using methods such as character recognition. The embodiments of the present application do not limit this.
Exemplarily, step 420 may specifically include:
step 4201, performing size normalization processing on the license plate image;
step 4202, the license plate image after size normalization processing is input to a neural network model (for example, CNN + BLSTM described above), and the neural network model is used to perform license plate recognition on the license plate image.
Step 4201 and step 4202 are explained in detail below.
In step 4201, the size normalization process may refer to normalizing the license plate image according to a predefined size specification so that the normalized image meets that specification. For example, suppose the predefined size specification is 128 × 64. The process of size-normalizing the license plate image is then as follows: calculate the aspect ratio of the detected license plate image; if the aspect ratio is greater than 2, take the long side as the reference side, normalize the long side to 128, scale the short side by the same ratio, and pad the top of the license plate image with black so that the final size is 128 on the long side and 64 on the short side; if the aspect ratio is less than 2, take the short side as the reference side, normalize the short side to 64, scale the long side by the same ratio, and pad the right side of the license plate image with black so that the final size is likewise 128 on the long side and 64 on the short side.
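Under the 128 × 64 specification, the scale-then-pad rule above can be sketched in pure Python; the function name and the sample plate sizes are illustrative assumptions, and only the target geometry (not the pixel resampling itself) is computed:

```python
def normalize_plan(w, h, target_w=128, target_h=64):
    """Compute how a w x h license plate image is size-normalized to
    target_w x target_h. Returns (scaled_w, scaled_h, pad_where): the
    image is scaled to scaled_w x scaled_h, then padded with black on
    pad_where ('top' or 'right') up to the target size."""
    if w / h > 2:
        # Long side is the reference: scale it to 128, pad the height.
        scaled_h = round(h * target_w / w)
        return target_w, scaled_h, "top"
    # Short side is the reference: scale it to 64, pad the width.
    scaled_w = round(w * target_h / h)
    return scaled_w, target_h, "right"

print(normalize_plan(440, 140))  # single-layer plate, ratio ~3.1 -> (128, 41, 'top')
print(normalize_plan(220, 140))  # double-layer plate, ratio ~1.6 -> (101, 64, 'right')
```

An actual implementation would then resize and border-pad the pixels accordingly, e.g. with an image library.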
Fig. 5 shows several examples of the size normalization process. In fig. 5, 501 and 502 show a single-layer license plate and a double-layer license plate, respectively, both with an aspect ratio greater than 2; the long side is therefore taken as the reference side and normalized to 128, the short side is scaled by the same ratio, and black padding is then added along the short side to bring it to 64. In fig. 5, 503 shows a double-layer license plate with an aspect ratio less than 2; the short side is therefore taken as the reference side and normalized to 64, the long side is scaled by the same ratio, and black padding is then added along the long side to bring it to 128.
It should be understood that the above description describes the process of size normalization with a 128 x 64 size specification as an example for ease of understanding and explanation only. The specific values of the predefined dimensional specifications are not limited in this application.
The size normalization of the license plate image may be implemented, for example, by a pre-trained deep license plate recognition network. The embodiments of the present application do not limit this.
For ease of understanding, the process of license plate recognition is described here taking CNN + BLSTM as an example. The license plate image obtained by the size normalization in step 4201 is input to the CNN, and the features of the license plate image are extracted through the convolution operations of one or more convolutional layers, yielding a feature matrix with dimensions C × H × N, where C is the number of channels, H is the dimension in the height direction, and N is the dimension in the length direction.

It should be understood that the dimension in the height direction, H, represents the number of elements included in the height direction.
If H is 1, the license plate in the license plate image is a single-layer license plate, and license plate recognition can be performed directly.

If H is greater than 1, the license plate in the license plate image is a multi-layer license plate. In order to use the same neural network model for license plate recognition, the embodiment of the application provides a method for performing dimension conversion on the feature matrix of a multi-layer license plate.
Specifically, the feature matrix with the dimension C × H × N described above may be decomposed into H feature matrices with the dimension C × 1 × N, and the H feature matrices with the dimension C × 1 × N may be spliced according to a preset sequence to obtain the feature matrix with the dimension C × 1 × (H × N). The feature matrix with dimension C × 1 × (H × N) may be further input into BLSTM for license plate recognition.
Here, the preset order may be, for example: the first row is spliced to the left of the second row, or the second row is spliced to the left of the first row. The embodiments of the present application do not limit this.
For ease of understanding, fig. 6 illustrates the above dimension conversion process taking C = 1, H = 2, and N = 8 as an example. After convolution, a feature matrix with dimensions 1 × 2 × 8, shown at 601, is obtained; this original 1 × 2 × 8 feature matrix is decomposed into two feature matrices 602 and 603, each with dimensions 1 × 1 × 8, which are then spliced in a preset order to obtain the feature matrix 604 with dimensions 1 × 1 × 16. It should be appreciated that the spliced feature matrix 604 shown in fig. 6 is spliced in the order in which the first row is placed to the left of the second row.
It should be understood that the illustration of fig. 6 is merely an example and should not be construed as limiting the present application in any way. For example, the license plate may also be a three-layer license plate. In this case, the dimension conversion can be performed according to the same method as described above, and the license plate can be identified.
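The row-splicing conversion above can be sketched with plain nested lists (an illustrative sketch only; a real model would perform the equivalent reshape on tensors):

```python
def flatten_rows(feat):
    """Convert a C x H x N feature matrix (nested lists) into C x 1 x (H*N):
    split it into H matrices of shape C x 1 x N and splice them in row
    order, i.e. the first row ends up to the left of the second row."""
    out = []
    for channel in feat:          # each channel is an H x N matrix
        spliced = []
        for row in channel:       # splice rows left to right
            spliced.extend(row)
        out.append([spliced])     # the height dimension becomes 1
    return out

# The 1 x 2 x 8 example of fig. 6: one channel, two rows of eight elements.
feat = [[[0, 1, 2, 3, 4, 5, 6, 7],
         [8, 9, 10, 11, 12, 13, 14, 15]]]
print(flatten_rows(feat))  # dimensions become 1 x 1 x 16
```

A three-layer plate (H = 3) flattens the same way, which is why a single recurrent model can then consume the result.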
It should also be understood that the specific implementation of recognizing the license plate through the neural network can refer to the prior art, for example the related techniques of image recognition, and is not described further here for brevity. The neural network used for license plate recognition may be, for example, a BLSTM, LSTM, GRU, or the like, as listed above; these are not enumerated again here.
In step 430, the recognition result is output.
After the license plate image is identified, the color of the license plate, the specific license plate number and other results can be output.
It should be understood that, in the embodiment of the present application, the recognition result output by the license plate recognition is not limited.
Therefore, the license plate recognition method provided by the embodiment of the application can distinguish single-layer and multi-layer (for example, double-layer) license plates based on the feature matrix obtained by the convolution operation, and, in the multi-layer case, perform dimension conversion to turn the feature matrix into one whose dimension in the height direction is 1. License plates of different types can thus be recognized by a single model, which avoids the slow model loading caused by adopting multiple models and reduces the occupation of the computer's cache and/or video memory. On the other hand, since the dimension conversion is added after the convolution operation, the license plate recognition process can not only use the convolutional network to extract license plate features but can also subsequently use other networks to extract features, so the extracted license plate features are diversified.
It should be understood that the license plate detection method and the license plate recognition method provided above can be used in combination or separately. The embodiments of the present application do not limit this.
The following describes in detail a license plate detection device and a license plate recognition device provided in an embodiment of the present application with reference to fig. 7 to 10.
Fig. 7 is a license plate detection device provided by the present application, which can be used to implement the license plate detection function in the above method. The apparatus may be a chip system; in the embodiments of the present application, the chip system may consist of a chip alone, or may include a chip and other discrete devices.
As shown in fig. 7, the apparatus 700 may include: a processing module 710 and an input-output module 720. The processing module 710 may be configured to perform license plate detection on a target image, where the target image is an image containing a vehicle; the processing module 710 may be further configured to determine a suspected license plate region in the designated region if no license plate is detected; the processing module 710 may also be configured to determine whether the suspected license plate region contains the license plate; and the input/output module 720 may be configured to output the position information of the license plate if the suspected license plate region contains the license plate.
Optionally, the processing module 710 may be further configured to: perform a convolution operation on the target image using a pre-trained target detection model to obtain at least one layer of feature maps; determine, within the designated region of a specific layer of the at least one layer of feature maps, the confidence that the region corresponding to each element point contains the license plate; and determine the region corresponding to the element point with the highest confidence as the suspected license plate region.
Optionally, the processing module 710 may be further configured to classify the suspected license plate region by using a pre-trained classification model to determine whether the license plate is included.
Optionally, the processing module 710 may also be used to obtain an original image containing the vehicle; the processing module 710 may be specifically configured to perform image processing on the original image to obtain the target image.
It should be understood that the division of the modules in the embodiments of the present application is illustrative, and is only one logical function division, and there may be other division manners in actual implementation. In addition, functional modules in the embodiments of the present application may be integrated into one processor, may exist alone physically, or two or more modules are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode.
Fig. 8 is another schematic block diagram of a license plate detection device provided in an embodiment of the present application. The device can be used to implement the license plate detection function in the above method. The apparatus may be a chip system; in the embodiments of the present application, the chip system may consist of a chip alone, or may include a chip and other discrete devices.
As shown in fig. 8, the apparatus 800 may include at least one processor 810 for implementing the functions of the license plate detection in the method 200 provided by the embodiments of the present application. For example, the processor 810 may be configured to perform license plate detection on a target image, where the target image is an image containing a vehicle; the processor 810 may be configured to determine a suspected license plate region in the designated region if a license plate is not detected; processor 810 may be configured to determine whether the suspected license plate region contains the license plate; the processor 810 may be configured to output location information of the license plate if the suspected license plate area contains the license plate.
For details, reference is made to the detailed description in the example of the method 200, which is not repeated herein.
The apparatus 800 may also include at least one memory 820 for storing program instructions and/or data. The memory 820 is coupled to the processor 810. The coupling in the embodiments of the present application is an indirect coupling or a communication connection between devices, units or modules, which may be electrical, mechanical or in other forms, for information interaction between the devices, units or modules. The processor 810 may cooperate with the memory 820 and execute the program instructions stored in it. At least one of the memories may be integrated in the processor.
The apparatus 800 may also include a communication interface 830 for communicating with other devices over a transmission medium, so that the apparatus 800 may communicate with other devices. Illustratively, the other device may be a second neural network. The communication interface 830 may be, for example, a transceiver, an interface, a bus, a circuit, or a device capable of performing transceiving functions. The processor 810 may input and output data through the communication interface 830, and is used to implement the license plate detection method described in the embodiment corresponding to fig. 2.
The specific connection medium among the processor 810, the memory 820 and the communication interface 830 is not limited in the embodiments of the present application. In fig. 8, the processor 810, the memory 820 and the communication interface 830 are connected by a bus 840, which is represented by a thick line; the connections between other components are merely illustrative and not intended to be limiting. The bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in fig. 8, but this does not mean that there is only one bus or one type of bus.
Fig. 9 is a license plate recognition device provided by the present application, which can be used to implement the license plate recognition function in the above method. The apparatus may be a chip system; in the embodiments of the present application, the chip system may consist of a chip alone, or may include a chip and other discrete devices.
As shown in fig. 9, the apparatus 900 may include: a processing module 910 and an input-output module 920. The processing module 910 may be configured to obtain a license plate image, where the license plate image contains a license plate; the processing module 910 may be further configured to perform a convolution operation on the license plate image to obtain a feature matrix with dimensions C × H × N, where C is the number of channels, H is the dimension in the height direction, and N is the dimension in the length direction; the processing module 910 may be further configured to, when H is greater than 1, splice the H feature matrices within the C × H × N feature matrix in a preset order to obtain a feature matrix with dimensions C × 1 × (H × N); and the processing module 910 may also be configured to perform license plate recognition based on the feature matrix with dimensions C × 1 × (H × N).
Optionally, the processing module 910 is further configured to perform normalization processing on the license plate image according to a predefined size specification, so as to obtain a license plate image meeting the size specification.
Optionally, the processing module 910 may be further configured to perform a convolution operation on the license plate image using a pre-trained first neural network to obtain the feature matrix with dimensions C × H × N; and, when H is greater than 1, to splice the H feature matrices within the C × H × N feature matrix in a preset order to obtain the feature matrix with dimensions C × 1 × (H × N).
Optionally, the first neural network is a CNN.
Optionally, the processing module 910 is further configured to perform license plate recognition based on the feature matrix with the dimension of C × 1 × (H × N) by using a pre-trained second neural network.
Optionally, the second neural network is BLSTM, LSTM, GRU or BGRU.
Fig. 10 is another schematic block diagram of a license plate recognition device provided in an embodiment of the present application. The device can be used to implement the license plate recognition function in the above method. The apparatus may be a chip system; in the embodiments of the present application, the chip system may consist of a chip alone, or may include a chip and other discrete devices.
As shown in fig. 10, the apparatus 1000 may include at least one processor 1010 for implementing the license plate recognition functions in the method 400 provided by the embodiments of the present application. For example, the processor 1010 may be configured to obtain a license plate image, which is an image containing a license plate; the processor 1010 may be further configured to perform a convolution operation on the license plate image to obtain a feature matrix with dimensions C × H × N, where C is the number of channels, H is the dimension in the height direction, and N is the dimension in the length direction; the processor 1010 may be further configured to, when H is greater than 1, splice the H feature matrices within the C × H × N feature matrix in a preset order to obtain a feature matrix with dimensions C × 1 × (H × N); and the processor 1010 may also be configured to perform license plate recognition based on the feature matrix with dimensions C × 1 × (H × N).
For details, reference is made to the detailed description of the method 400, which is not repeated herein.
The apparatus 1000 may also include at least one memory 1020 for storing program instructions and/or data. The memory 1020 is coupled to the processor 1010. The coupling in the embodiments of the present application is an indirect coupling or a communication connection between devices, units or modules, which may be electrical, mechanical or in other forms, for information interaction between the devices, units or modules. The processor 1010 may cooperate with the memory 1020 and execute the program instructions stored in it. At least one of the memories may be integrated in the processor.
The apparatus 1000 may also include a communication interface 1030 for communicating with other devices over a transmission medium, so that the apparatus 1000 may communicate with other devices. Illustratively, the other device may be a second neural network. The communication interface 1030 may be, for example, a transceiver, an interface, a bus, a circuit, or a device capable of performing transceiving functions. The processor 1010 may input and output data through the communication interface 1030, and is used to implement the license plate recognition method described in the embodiment corresponding to fig. 4.
The specific connection medium among the processor 1010, the memory 1020 and the communication interface 1030 is not limited in the embodiments of the present application. In fig. 10, the processor 1010, the memory 1020, and the communication interface 1030 are connected by a bus 1040, which is represented by a thick line; the connections between other components are merely illustrative and not intended to be limiting. The bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in fig. 10, but this does not mean that there is only one bus or one type of bus.
The application also provides a license plate detection and recognition system, and the system comprises the license plate detection device and/or the license plate recognition device.
The present application further provides a computer program product, the computer program product comprising: a computer program (also referred to as code, or instructions), which when executed, causes a computer to perform the method of any of the embodiments shown in fig. 2 or 4.
The present application also provides a computer-readable storage medium having stored thereon a computer program (also referred to as code, or instructions) which, when executed, causes the computer to perform the method of any of the embodiments shown in fig. 2 or fig. 4.
It should be understood that the processor in the embodiments of the present application may be an integrated circuit chip having signal processing capability. In implementation, the steps of the above method embodiments may be performed by integrated logic circuits in hardware in a processor or by instructions in the form of software. The processor may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. The various methods, steps, and logic blocks disclosed in the embodiments of the present application may be implemented or performed by such a processor. A general-purpose processor may be a microprocessor, or any conventional processor. The steps of the methods disclosed in connection with the embodiments of the present application may be directly implemented by a hardware decoding processor, or by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium well known in the art, such as RAM, flash memory, ROM, PROM, EPROM, or registers. The storage medium is located in a memory, and the processor reads the information in the memory and completes the steps of the above methods in combination with its hardware.
It should also be appreciated that the memory in the embodiments of the present application may be volatile memory or non-volatile memory, or may include both. The non-volatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or a flash memory. The volatile memory may be a random access memory (RAM), which acts as an external cache. By way of example and not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct rambus RAM (DR RAM). It should be noted that the memory of the systems and methods described herein is intended to comprise, without being limited to, these and any other suitable types of memory.
As used in this specification, the terms "unit," "module," and the like are intended to refer to a computer-related entity, either hardware, firmware, a combination of hardware and software, or software in execution.
Those of ordinary skill in the art will appreciate that the various illustrative logical blocks and steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or as combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and the design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application. In the several embodiments provided in the present application, it should be understood that the disclosed apparatus, device and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; the division into units is only a logical functional division, and other divisions may be used in practice. For example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, devices or units, and may be electrical, mechanical or in other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
In the above embodiments, the functions of the functional units may be fully or partially implemented by software, hardware, firmware, or any combination thereof. When implemented in software, the functions may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions (programs). When the computer program instructions are loaded and executed on a computer, the procedures or functions described in accordance with the embodiments of the present application are generated in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the instructions may be transmitted from one website, computer, server, or data center to another by wire (e.g., coaxial cable, optical fiber, or digital subscriber line (DSL)) or wirelessly (e.g., infrared, radio, or microwave). The computer-readable storage medium may be any usable medium accessible to a computer, or a data storage device such as a server or data center that integrates one or more usable media, for example a magnetic medium (e.g., a floppy disk or hard disk), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a solid-state drive (SSD)).
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, or the portion thereof that substantially contributes to the prior art, may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The above description covers only specific embodiments of the present application, but the scope of the present application is not limited thereto; any change or substitution that a person skilled in the art could readily conceive within the technical scope disclosed in the present application shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A license plate detection method is characterized by comprising the following steps:
detecting a license plate of a target image, wherein the target image is an image containing a vehicle;
determining a suspected license plate area in the designated area under the condition that the license plate is not detected;
determining whether the suspected license plate area contains the license plate;
and outputting the position information of the license plate under the condition that the suspected license plate area contains the license plate.
2. The method of claim 1, wherein the determining a suspected license plate area in the designated area comprises:
performing a convolution operation on the target image by using a pre-trained target detection model to obtain at least one layer of feature map;
determining, for each element point in the designated area of a specific-layer feature map among the at least one layer of feature map, a confidence that the region corresponding to the element point contains the license plate;
and determining the region corresponding to the element point with the maximum confidence as the suspected license plate area.
3. The method of claim 1 or 2, wherein said determining whether the suspected license plate region contains the license plate comprises:
and classifying the suspected license plate area by adopting a pre-trained classification model so as to determine whether the license plate is contained.
4. A license plate recognition method is characterized by comprising the following steps:
acquiring a license plate image, wherein the license plate image is an image containing a license plate;
performing a convolution operation on the license plate image to obtain a feature matrix with dimensions of C × H × N, wherein C is the number of channels, H is the dimension in the height direction, and N is the dimension in the width direction; C, H, and N are positive integers;
in the case that H is greater than 1, decomposing the C × H × N feature matrix into H feature matrices each with dimensions of C × 1 × N, and splicing the H feature matrices in a preset order to obtain a feature matrix with dimensions of C × 1 × (H × N);
and performing license plate recognition based on the feature matrix with the dimension of C multiplied by 1 multiplied by (H multiplied by N).
5. The method of claim 4, wherein the method further comprises:
and according to a predefined size specification, carrying out normalization processing on the license plate image to obtain the license plate image meeting the size specification.
6. A license plate detection device, comprising means for implementing the method of any one of claims 1 to 3.
7. A license plate detection device, comprising a processor configured to perform the method of any one of claims 1 to 3.
8. A license plate recognition device, comprising means for implementing the method of claim 4 or 5.
9. A license plate recognition device, comprising a processor configured to perform the method of claim 4 or 5.
10. A computer-readable storage medium comprising instructions which, when executed on a computer, cause the computer to perform the method of any one of claims 1 to 3, or perform the method of claim 4 or 5.
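The two-stage flow of claims 1 to 3 can be sketched as follows. This is an illustrative outline only: `detector`, `propose_region`, and `classifier` are hypothetical callables standing in for the pre-trained detection and classification models, not part of the claimed implementation.

```python
def detect_plate(image, detector, classifier, propose_region):
    """Two-stage license plate detection (sketch of claims 1-3).

    detector:       callable returning a plate box, or None on a miss
    propose_region: callable returning the most plate-like (suspected) area
    classifier:     callable labelling a suspected area ("plate" or not)
    """
    box = detector(image)                  # primary detection pass
    if box is not None:
        return box                         # plate found directly
    suspect = propose_region(image)        # suspected plate area in the designated area
    if classifier(suspect) == "plate":     # secondary classification
        return suspect                     # recover a plate the detector missed
    return None                            # genuinely no plate in the image
```

The secondary classification only runs when the detector misses, so the extra cost is paid exactly on the cases that would otherwise be missed detections.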
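The max-confidence step of claim 2 — scanning the designated area of a feature map and keeping the element point with the highest plate confidence — might look like this minimal NumPy sketch. The `stride` parameter and the cell-to-pixel mapping are assumptions about the detection model's geometry, not taken from the claims.

```python
import numpy as np

def suspected_plate_region(conf_map, region, stride=16):
    """Pick the element point with the highest plate confidence inside a
    designated area of a feature map and return its image-space box.

    conf_map: (H, W) per-cell plate confidences from a detection head
    region:   (r0, r1, c0, c1) cell-index bounds of the designated area
    stride:   downsampling factor of the feature map (assumed value)
    """
    r0, r1, c0, c1 = region
    sub = conf_map[r0:r1, c0:c1]
    idx = np.unravel_index(np.argmax(sub), sub.shape)
    row, col = r0 + idx[0], c0 + idx[1]
    # Map the winning cell back to pixel coordinates (one stride-sized cell).
    box = (col * stride, row * stride, (col + 1) * stride, (row + 1) * stride)
    return box, float(conf_map[row, col])
```

The returned box would then be cropped and passed to the classifier of claim 3.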
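The reshaping in claim 4 — decomposing a C × H × N feature matrix into H matrices of C × 1 × N and splicing them along the width into C × 1 × (H × N) — can be illustrated with NumPy. This is a sketch of the splicing only; the claimed "preset order" is assumed here to be top row first.

```python
import numpy as np

def flatten_double_row(feat):
    """Collapse a C x H x N feature matrix into C x 1 x (H*N) by slicing
    off each of the H rows and splicing them along the width axis, so one
    sequence decoder can read single- and double-row plates alike."""
    C, H, N = feat.shape
    if H == 1:
        return feat                                  # single-row plate: nothing to do
    rows = [feat[:, h:h + 1, :] for h in range(H)]   # H matrices of C x 1 x N
    return np.concatenate(rows, axis=2)              # C x 1 x (H * N)
```

Because the output always has height 1, the same recognition head serves both plate layouts, which is what lets one model replace separate single- and double-row models.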
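The normalization of claim 5 could be as simple as a fixed-size resize of the plate crop before recognition. The 32 × 96 specification and the nearest-neighbour interpolation below are assumptions for illustration; the claim only requires a predefined size specification.

```python
import numpy as np

def normalize_plate(img, out_h=32, out_w=96):
    """Nearest-neighbour resize of a plate crop (H, W, channels) to a fixed
    size specification (out_h x out_w are assumed values)."""
    h, w = img.shape[:2]
    ys = (np.arange(out_h) * h / out_h).astype(int)  # source row per output row
    xs = (np.arange(out_w) * w / out_w).astype(int)  # source column per output column
    return img[ys][:, xs]
```

A production system would more likely use a library resize with bilinear interpolation; the point is only that every crop reaches the recognition network at one fixed size.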
CN202011586994.1A 2020-12-28 2020-12-28 License plate detection method and device Pending CN112686252A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011586994.1A CN112686252A (en) 2020-12-28 2020-12-28 License plate detection method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011586994.1A CN112686252A (en) 2020-12-28 2020-12-28 License plate detection method and device

Publications (1)

Publication Number Publication Date
CN112686252A true CN112686252A (en) 2021-04-20

Family

ID=75454583

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011586994.1A Pending CN112686252A (en) 2020-12-28 2020-12-28 License plate detection method and device

Country Status (1)

Country Link
CN (1) CN112686252A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113380652A (en) * 2021-04-29 2021-09-10 厦门通富微电子有限公司 Product detection method, system and computer readable storage medium
CN113408525A (en) * 2021-06-17 2021-09-17 成都崇瑚信息技术有限公司 Multilayer ternary pivot and bidirectional long-short term memory fused text recognition method
CN113610770A (en) * 2021-07-15 2021-11-05 浙江大华技术股份有限公司 License plate recognition method, device and equipment
CN114758279A (en) * 2022-04-24 2022-07-15 安徽理工大学 Video target detection method based on time domain information transfer
CN117746220A (en) * 2023-12-18 2024-03-22 广东安快智能科技有限公司 Identification detection method, device, equipment and medium for intelligent gateway authenticity license plate

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102509457A (en) * 2011-10-09 2012-06-20 青岛海信网络科技股份有限公司 Vehicle tracking method and device
CN102915433A (en) * 2012-09-13 2013-02-06 中国科学院自动化研究所 Character combination-based license plate positioning and identifying method
CN105335743A (en) * 2015-10-28 2016-02-17 重庆邮电大学 Vehicle license plate recognition method
CN107103317A (en) * 2017-04-12 2017-08-29 湖南源信光电科技股份有限公司 Fuzzy license plate image recognition algorithm based on image co-registration and blind deconvolution
CN109977941A (en) * 2018-12-21 2019-07-05 北京融链科技有限公司 Licence plate recognition method and device
CN110070085A (en) * 2019-04-30 2019-07-30 北京百度网讯科技有限公司 Licence plate recognition method and device
CN110414507A (en) * 2019-07-11 2019-11-05 和昌未来科技(深圳)有限公司 Licence plate recognition method, device, computer equipment and storage medium
CN110866430A (en) * 2018-08-28 2020-03-06 上海富瀚微电子股份有限公司 License plate recognition method and device
CN111523544A (en) * 2020-04-23 2020-08-11 上海眼控科技股份有限公司 License plate type detection method and system, computer equipment and readable storage medium
CN111582272A (en) * 2020-04-30 2020-08-25 平安科技(深圳)有限公司 Double-row license plate recognition method, device and equipment and computer readable storage medium
CN112016432A (en) * 2020-08-24 2020-12-01 高新兴科技集团股份有限公司 License plate character recognition method based on deep learning, storage medium and electronic equipment
CN112115904A (en) * 2020-09-25 2020-12-22 浙江大华技术股份有限公司 License plate detection and identification method and device and computer readable storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
YANG Shuai, "Video License Plate Detection and Recognition in an Open Environment", China Master's Theses Full-text Database, Engineering Science and Technology II, pages 034-984 *
NIE Wenzhen, "Research on Taxi License Plate Occlusion Behavior Determination and Image Forensics Technology", China Master's Theses Full-text Database, Engineering Science and Technology II, pages 034-987 *


Similar Documents

Publication Publication Date Title
CN112686252A (en) License plate detection method and device
CN110619750B (en) Intelligent aerial photography identification method and system for illegal parking vehicle
CN110210474B (en) Target detection method and device, equipment and storage medium
WO2019223586A1 (en) Method and apparatus for detecting parking space usage condition, electronic device, and storage medium
CN113468967B (en) Attention mechanism-based lane line detection method, attention mechanism-based lane line detection device, attention mechanism-based lane line detection equipment and attention mechanism-based lane line detection medium
KR101848019B1 (en) Method and Apparatus for Detecting Vehicle License Plate by Detecting Vehicle Area
CN109711264B (en) Method and device for detecting occupation of bus lane
Mu et al. Lane detection based on object segmentation and piecewise fitting
CN111027535B (en) License plate recognition method and related equipment
US20180129883A1 (en) Detection method and apparatus of a status of a parking lot and electronic equipment
CN112149476B (en) Target detection method, device, equipment and storage medium
CN110826429A (en) Scenic spot video-based method and system for automatically monitoring travel emergency
CN109389122B (en) License plate positioning method and device
CN112733652B (en) Image target recognition method, device, computer equipment and readable storage medium
CN113034378B (en) Method for distinguishing electric automobile from fuel automobile
CN111709377B (en) Feature extraction method, target re-identification method and device and electronic equipment
CN114067282A (en) End-to-end vehicle pose detection method and device
CN116311212B (en) Ship number identification method and device based on high-speed camera and in motion state
CN116824152A (en) Target detection method and device based on point cloud, readable storage medium and terminal
CN116486392A (en) License plate face intelligent recognition method and system based on FPGA
JP2021152826A (en) Information processing device, subject classification method, and subject classification program
CN111444867A (en) Convolutional neural network-based vehicle post-beat brand identification method
CN116030542B (en) Unmanned charge management method for parking in road
CN117671496B (en) Unmanned aerial vehicle application result automatic comparison method
CN115762178B (en) Intelligent electronic police violation detection system and method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210420