CN115346206B - License plate detection method based on improved super-resolution deep convolution feature recognition - Google Patents


Info

Publication number
CN115346206B (application CN202211282956.6A)
Authority
CN
China
Prior art keywords
text
branch
detection
license plate
recognition
Prior art date
Legal status
Active
Application number
CN202211282956.6A
Other languages
Chinese (zh)
Other versions
CN115346206A (en)
Inventor
刘寒松
王永
孙小伟
王国强
刘瑞
Current Assignee
Sonli Holdings Group Co Ltd
Original Assignee
Sonli Holdings Group Co Ltd
Priority date
Filing date
Publication date
Application filed by Sonli Holdings Group Co Ltd filed Critical Sonli Holdings Group Co Ltd
Priority to CN202211282956.6A
Publication of CN115346206A
Application granted
Publication of CN115346206B
Legal status: Active

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 - Scenes; Scene-specific elements
    • G06V 20/60 - Type of objects
    • G06V 20/62 - Text, e.g. of license plates, overlay texts or captions on TV images
    • G06V 20/625 - License plates
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/08 - Learning methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 30/00 - Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V 30/10 - Character recognition
    • G06V 30/14 - Image acquisition
    • G06V 30/148 - Segmentation of character regions
    • G06V 30/153 - Segmentation of character regions using recognition of characters or words
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 30/00 - Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V 30/10 - Character recognition
    • G06V 30/18 - Extraction of features or characteristics of the image
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 30/00 - Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V 30/10 - Character recognition
    • G06V 30/19 - Recognition using electronic means
    • G06V 30/191 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Clustering techniques; Blind source separation
    • G06V 30/19147 - Obtaining sets of training patterns; Bootstrap methods, e.g. bagging or boosting

Abstract

The invention belongs to the technical field of license plate detection, and relates to a license plate detection method based on improved super-resolution deep convolution feature recognition.

Description

License plate detection method based on improved super-resolution deep convolution feature recognition
Technical Field
The invention belongs to the technical field of license plate detection, and relates to a license plate detection method based on improved super-resolution deep convolution feature recognition.
Background
Most early license plate detection and recognition algorithms were based on classical machine learning, locating, detecting and recognizing license plates with manually selected features. Existing license plate recognition technology is mainly applied in specific environments such as toll parking lot entrances and exits and highway ETC lanes; with a fixed front-facing viewing angle and a fixed detection area its accuracy can be very high, but its recognition performance is poor in complex scenes.
Because this is a long-studied problem, many algorithms with robust performance already exist, but they share two serious limitations. First, text detection and text recognition are treated as two separate research topics, whereas in real scenes the two problems are usually coupled. Second, traditional text detection and recognition algorithms can only handle simple scenes, while many real-world scenes are complex.
In a complex scene, the viewing angle may leave the license plate image at low resolution. Existing methods use deep convolutional neural networks for feature extraction, but traditional methods can only detect and recognize at a fixed resolution with a fixed receptive field, while the vehicle position is not fixed and the license plate may be rotated or deformed. In some cases the vehicle is far from the camera, so even when the anchor box matches the object well (high IoU), the character text may still fail to be recognized: the features inside the anchor box cannot adequately describe the whole object, and the final detection and recognition accuracy is low.
Therefore, for unconstrained scenes, the existing license plate recognition technology suffers from low detection and recognition accuracy, and a more effective license plate detection method is urgently needed.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention provides a license plate detection method based on improved super-resolution deep convolution feature recognition, which can be applied to license plate detection and recognition in unconstrained scenes and performs both tasks efficiently.
In order to achieve this purpose, the specific license plate detection process comprises the following steps:
(1) Data set construction: collect images containing conventional, tilted, distorted and low-resolution license plates from traffic monitoring and roadside parking lots, build a license plate data set, and divide it into a training set, a validation set and a test set;
(2) Deep convolution feature extraction: initialize the image size and value range, input the processed image into a backbone network for convolutional feature extraction, and append an hourglass88 network and an hourglass57 network after the backbone to further refine the extracted convolutional features;
(3) Character detection: input the feature map obtained in step (2) into the character detection branch, which feeds the feature map into 3 different sub-branches: a text instance segmentation sub-branch, a character detection sub-branch and a character recognition sub-branch. The text instance segmentation sub-branch outputs a 2-channel feature map indicating whether each pixel contains text; the character detection sub-branch outputs a 5-channel feature map representing the distances from the current position to the top, bottom, left and right edges and the direction of the trunk part; the character recognition sub-branch outputs a 68-channel feature map in which each channel corresponds to a different character, specifically 26 English letters, 10 digits, 32 specific symbols and 34 abbreviations of provinces, municipalities and autonomous regions; the 3 sub-branches finally output feature maps of equal size; finally, rectangular boxes with probability values below 0.95 are filtered out;
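The final probability cut in step (3) is a plain threshold filter. A minimal NumPy sketch (the function name `filter_boxes`, the `[x1, y1, x2, y2]` box layout and the sample data are illustrative, not taken from the patent):

```python
import numpy as np

def filter_boxes(boxes, scores, thresh=0.95):
    """Keep only candidate boxes whose text probability reaches the threshold.

    boxes:  (N, 4) array of [x1, y1, x2, y2]
    scores: (N,) array of per-box text probabilities
    """
    keep = scores >= thresh
    return boxes[keep], scores[keep]

boxes = np.array([[0, 0, 10, 5], [20, 20, 40, 30], [5, 5, 15, 12]], dtype=float)
scores = np.array([0.99, 0.42, 0.97])
kept, kept_scores = filter_boxes(boxes, scores)
# the 0.42 box is discarded, the two high-confidence boxes survive
```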
(4) Text detection and recognition: input the feature map obtained in step (2) into the text detection and recognition branch to obtain the approximate positions of characters in the picture, marking the positions of different characters with irregular shapes; the text detection and recognition branch defines different forms according to the type of text instance and adapts to existing dynamic text detectors with minimal modification;
(5) Training a network structure to obtain trained model parameters;
(6) Network testing: during testing, keeping the aspect ratio of the image unchanged, the long side of the image is resized to 512 and the short side is padded so that the image is 512×512; this image is used as the network input, which outputs the classification confidence of the license plate and its coordinate position; a threshold is then set to filter out license plates with low confidence, and finally the result is refined with non-maximum suppression.
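The test-time preprocessing of step (6) (scale the long side to 512, pad the short side to reach 512×512) can be sketched as follows; nearest-neighbour index sampling stands in for whatever resize routine is actually used, and the H×W×C image layout is an assumption:

```python
import numpy as np

def resize_and_pad(img, target=512):
    """Scale the long side to `target` keeping the aspect ratio,
    then zero-pad the short side so the output is target x target."""
    h, w = img.shape[:2]
    scale = target / max(h, w)
    new_h, new_w = int(round(h * scale)), int(round(w * scale))
    # nearest-neighbour resize via index sampling
    ys = (np.arange(new_h) / scale).astype(int).clip(0, h - 1)
    xs = (np.arange(new_w) / scale).astype(int).clip(0, w - 1)
    resized = img[ys][:, xs]
    canvas = np.zeros((target, target) + img.shape[2:], dtype=img.dtype)
    canvas[:new_h, :new_w] = resized
    return canvas, scale

img = np.random.rand(300, 600, 3)   # wide image: the long side is the width
out, scale = resize_and_pad(img)
# out is 512x512x3; rows from 256 down are zero padding
```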
As a further technical scheme of the invention, the backbone network in step (2) adopts ResNet50 as the feature extraction network.
As a further technical scheme of the invention, the text instance segmentation sub-branch and the character detection sub-branch in step (3) each comprise 3 convolutional layers with filter sizes of 3×3, 3×3 and 1×1 respectively, and the character recognition sub-branch comprises 4 convolutional layers with 3×3 kernels.
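The sub-branch heads can be illustrated with a naive stride-1 'same'-padding convolution; the 3×3, 3×3, 1×1 stack matches the description above, while the channel widths (8 in, 16 hidden) and the random weights are placeholders chosen for the sketch:

```python
import numpy as np

def conv2d_same(x, weight):
    """Naive stride-1 'same' 2-D convolution.
    x: (C_in, H, W); weight: (C_out, C_in, k, k)."""
    c_in, h, w = x.shape
    c_out, _, k, _ = weight.shape
    p = k // 2
    xp = np.pad(x, ((0, 0), (p, p), (p, p)))
    out = np.zeros((c_out, h, w))
    for o in range(c_out):
        for i in range(h):
            for j in range(w):
                out[o, i, j] = np.sum(xp[:, i:i + k, j:j + k] * weight[o])
    return out

rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 16, 16))           # shared feature map
w1 = rng.standard_normal((16, 8, 3, 3)) * 0.1     # 3x3 layer
w2 = rng.standard_normal((16, 16, 3, 3)) * 0.1    # 3x3 layer
w3 = rng.standard_normal((2, 16, 1, 1)) * 0.1     # 1x1 layer -> 2-channel text mask
seg = conv2d_same(conv2d_same(conv2d_same(feat, w1), w2), w3)
# spatial size is preserved; only the channel count changes
```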
As a further technical scheme of the invention, the different forms defined by the text detection and recognition branch in step (4) according to the type of text instance include multi-directional text, curved text and super-resolution text, and the process of recognizing text in these different forms is as follows:
for multi-directional text, a modified EAST (Efficient and Accurate Scene Text detector) algorithm is used as the text detection and recognition branch; it contains a text instance detection branch and two sub-branches that regress the instance-level trunk using IoU loss; the predicted bounding box consists of five parameters: horizontal position, vertical position, width, height and direction; dense predictions are computed at every spatial position; the resulting feature map is produced by two 3×3 convolutional layers and one 1×1 convolutional layer, and the branch finally outputs a 2-channel feature map indicating text/non-text probability and a 5-channel detection map (the oriented trunk part);
for curved text detection, the TextField method with an added direction field is used to encode the direction away from the text boundary; the direction field branch separates adjacent text instances and is predicted by a new branch in parallel with the text detection branch and the character branch; the new branch consists of two 3×3 convolutional layers and one 1×1 convolutional layer;
for super-resolution text, the SRGAN (Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network) algorithm is used to magnify distant license plate information to high resolution.
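The 2-channel score map plus 5-channel geometry of the EAST-style branch can be decoded roughly as below. This is a simplified sketch: it assumes the first four geometry channels are distances from a pixel to the top/bottom/left/right box edges and ignores the fifth (direction) channel, so the decoded boxes are axis-aligned.

```python
import numpy as np

def decode_east(geo, score, thresh=0.95):
    """Turn dense per-pixel geometry into candidate boxes.

    geo:   (5, H, W) distances to top/bottom/left/right plus a direction channel
    score: (H, W) text probability map
    """
    boxes = []
    ys, xs = np.where(score >= thresh)
    for y, x in zip(ys, xs):
        top, bottom, left, right = geo[:4, y, x]
        boxes.append([x - left, y - top, x + right, y + bottom])
    return np.array(boxes)

score = np.zeros((8, 8))
score[4, 4] = 0.99                  # one confident text pixel
geo = np.zeros((5, 8, 8))
geo[:4, 4, 4] = [2, 2, 3, 3]        # top, bottom, left, right distances
boxes = decode_east(geo, score)
# one box: [1, 2, 7, 6]
```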
As a further technical scheme of the invention, the specific process of training the network structure in step (5) is as follows: using the images of the training set, resize each image to 512×512×3 and feed the required number per batch into the network in turn for 4× downsampling; use an IoU threshold as the criterion of the sample assignment strategy; output the classification confidence of the license plate; compute the error between the predicted and ground-truth classes with focal loss and the error between the predicted and ground-truth license plate positions with smooth L1 loss; update the parameters by back-propagation with a learning rate of 0.0002; after 50 training iterations over the complete training set, save the model parameters with the best result on the validation set to obtain the trained license plate detection network parameters.
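The two losses named in the training scheme (focal loss for classification, smooth L1 for box regression) can be written directly in NumPy; the `gamma`, `alpha` and `beta` defaults are the common literature values, not parameters stated in the patent:

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Binary focal loss on predicted probabilities p against labels y in {0, 1}.
    Down-weights easy examples via the (1 - pt)^gamma factor."""
    p = np.clip(p, 1e-7, 1 - 1e-7)
    pt = np.where(y == 1, p, 1 - p)          # probability of the true class
    a = np.where(y == 1, alpha, 1 - alpha)   # class-balance weight
    return float(np.mean(-a * (1 - pt) ** gamma * np.log(pt)))

def smooth_l1(pred, target, beta=1.0):
    """Smooth L1 (Huber) loss: quadratic near zero, linear for large errors."""
    d = np.abs(pred - target)
    loss = np.where(d < beta, 0.5 * d ** 2 / beta, d - 0.5 * beta)
    return float(np.mean(loss))
```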
Compared with common deformable convolution, the invention expands the detection and recognition range so that low-quality text in images can be correctly detected and recognized. Iterative character detection is introduced to avoid sampling wrong feature points and to extract the feature distribution of real targets. The method can be used not only for license plate detection in unconstrained scenes but also for various low-quality detection targets such as scene text detection and supermarket commodity detection.
Drawings
FIG. 1 is a block diagram of an improved text recognition module according to the present invention.
Fig. 2 is a diagram of the whole network structure for implementing vehicle detection according to the present invention.
Fig. 3 is a flow chart of the license plate detection method provided by the invention.
Fig. 4 is a comparison example of the license plate detection result provided by the present invention and the existing algorithm, wherein (a) is the detection result of the existing algorithm, and (b) is the detection result of the present invention.
Fig. 5 is another comparison example of the license plate detection result provided by the present invention and the existing algorithm, wherein (a) is the detection result of the existing algorithm, and (b) is the detection result of the present invention.
Detailed Description
The invention is further illustrated by the following examples in conjunction with the accompanying drawings.
Example 1:
In this embodiment, license plate detection is implemented using the network structure shown in figs. 1 and 2 and the flow shown in fig. 3; the specific process is as follows:
(1) Data set construction
Collect images containing conventional, tilted, distorted and low-resolution license plates from scenes such as traffic monitoring and roadside parking lots, construct a data set containing no fewer than a certain number of license plates, and divide it into a training set, a validation set and a test set;
(2) Deep convolution feature extraction
First initialize the image size and value range, then input the processed image into the backbone network for convolutional feature extraction. The backbone uses ResNet50 as the feature extraction network, with an hourglass88 and an hourglass57 network appended after ResNet50; the hourglass networks strengthen and exploit the multi-scale features formed in ResNet50 to obtain a more expressive set of convolutional feature maps containing multi-scale license plate information, and further extract high-level semantic features (mainly used for instance segmentation);
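The multi-scale strengthening performed by the hourglass networks can be caricatured with pooling, upsampling and skip additions. This toy recursion only illustrates the encode-decode-with-skips idea; the actual hourglass88/hourglass57 structures are not spelled out in the patent:

```python
import numpy as np

def pool2(x):
    """2x2 max-pool halving the spatial size. x: (C, H, W), H and W even."""
    c, h, w = x.shape
    return x.reshape(c, h // 2, 2, w // 2, 2).max(axis=(2, 4))

def up2(x):
    """Nearest-neighbour 2x upsampling back to the skip's resolution."""
    return x.repeat(2, axis=1).repeat(2, axis=2)

def hourglass(x, depth=2):
    """Minimal hourglass: recursively pool, recurse, upsample, and add the
    skip connection so features from several scales are fused."""
    if depth == 0:
        return x
    skip = x
    inner = hourglass(pool2(x), depth - 1)
    return skip + up2(inner)

feat = np.random.rand(4, 32, 32)
fused = hourglass(feat)   # same shape as the input, multi-scale content
```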
(3) Character detection branch
Input the features extracted in step (2) into the character detection branch, which then feeds the feature map into 3 different sub-branches: a text instance segmentation sub-branch, a character detection sub-branch and a character recognition sub-branch. The text instance segmentation and character detection sub-branches each comprise 3 convolutional layers with filter sizes 3×3, 3×3 and 1×1, and the character recognition sub-branch comprises 4 convolutional layers with 3×3 kernels. The text instance segmentation sub-branch outputs a 2-channel feature map indicating whether each pixel contains text; the character detection sub-branch outputs a 5-channel feature map representing the distances from the current position to the 4 edges (top, bottom, left, right) and the direction of the backbone part; the character recognition sub-branch outputs a 68-channel feature map in which each channel corresponds to a different character, specifically 26 English letters, 10 digits, 32 specific symbols and 34 abbreviations of provinces, municipalities and autonomous regions; the 3 sub-branches finally output feature maps of equal size; finally, rectangular boxes with probability values below 0.95 are filtered out;
(4) Text detection and recognition branch
This branch obtains the approximate positions of characters in the picture, marking the positions of different characters with irregular shapes. The text detection branch defines different forms according to the type of text instance and can adapt to existing dynamic text detectors with minimal modification. For multi-directional text, a modified EAST (Efficient and Accurate Scene Text detector) algorithm is used as the text detector branch; it comprises two sub-branches, a text instance detection branch and an instance-level trunk regression using IoU loss. The predicted bounding box consists of five parameters x, y, w, h and d (horizontal position, vertical position, width, height and direction). Dense predictions are computed at every spatial position; the feature map is produced by two 3×3 convolutional layers and one 1×1 convolutional layer, and the text detection branch finally outputs a 2-channel feature map indicating text/non-text probability and a 5-channel detection map. For curved text detection, this embodiment uses the TextField method with an added direction field that encodes directions away from text boundaries; the direction field branch separates adjacent text instances and is predicted in parallel with the text detection branch and the character branch by a new branch consisting of two 3×3 convolutional layers and one 1×1 convolutional layer. For super-resolution text, the SRGAN (Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network) algorithm is used to magnify distant license plate information to high resolution for easier detection and recognition;
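The direction field used for curved text can be computed by brute force on a binary text mask, as a geometric reference for what the learned TextField branch is trained to regress (the actual branch predicts this field with convolutions; the mask and function below are illustrative):

```python
import numpy as np

def direction_field(mask):
    """For each text pixel, a unit vector pointing away from the nearest
    non-text pixel, i.e. away from the text boundary."""
    h, w = mask.shape
    bg = np.argwhere(mask == 0)
    field = np.zeros((2, h, w))
    for y, x in np.argwhere(mask == 1):
        d = bg - np.array([y, x])
        near = bg[np.argmin((d ** 2).sum(axis=1))]
        v = np.array([y, x], dtype=float) - near
        field[:, y, x] = v / np.linalg.norm(v)
    return field

mask = np.zeros((5, 5), dtype=int)
mask[1:4, 1:4] = 1                 # a 3x3 text blob
f = direction_field(mask)
# every text pixel carries a unit vector; adjacent blobs would receive
# opposing vectors at their shared boundary, which separates instances
```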
(5) Training a network structure to obtain trained model parameters;
Using the images of the training set with size 512×512×3, feed the required number per batch into the network in turn for 4× downsampling; use an IoU threshold as the criterion of the sample assignment strategy; output the classification confidence of the license plate; compute the error between the predicted and ground-truth classes with focal loss and the error between the predicted and ground-truth license plate positions with smooth L1 loss; update the parameters by back-propagation with learning rate 0.0002; after the set number (50) of complete training-set iterations, save the model parameters with the best result on the validation set to obtain the trained license plate detection network parameters;
(6) Network testing
During testing, keeping the aspect ratio of the image unchanged, the long side of the image is resized to 512 and the short side is padded so that the image is 512×512; it is fed into the network, which outputs the classification confidence and the coordinate position of the license plate; a threshold is set to filter out low-confidence license plates, and finally non-maximum suppression refines the result.
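The closing non-maximum suppression step is standard greedy NMS. A self-contained sketch (the 0.5 IoU threshold is a conventional default rather than a value given in the embodiment):

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy NMS: repeatedly keep the highest-scoring box and drop any
    remaining box whose IoU with it exceeds the threshold."""
    order = scores.argsort()[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        rest = order[1:]
        xx1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        yy1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        xx2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        yy2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)
        order = rest[iou <= iou_thresh]
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [50, 50, 60, 60]], dtype=float)
scores = np.array([0.9, 0.8, 0.95])
kept = nms(boxes, scores)
# the near-duplicate of the first box is suppressed; kept == [2, 0]
```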
The residual network in the convolutional neural network adopted in this embodiment downsamples the feature map, generates high-quality candidate boxes with a character-set attention mechanism, and obtains a high-quality feature map, thereby solving the problem that low-resolution license plates are recognized incorrectly or not at all. No affine parameters need to be learned, the computational cost of repeated feature extraction is reduced, and the performance of convolutional-network-based detection on low-resolution license plate targets is greatly improved at little extra computational cost.
The text detection and recognition algorithm provided in this embodiment is an end-to-end text processing algorithm that combines a single-stage detector with joint text detection and recognition; it directly outputs the positions of text and characters in pictures together with the corresponding character labels, and allows a model trained on synthetic data to be better applied to real-scene data.
Example 2:
In this embodiment, the technical solution of Example 1 is adopted: license plate detection is performed on images from the ICDAR data set and the super-resolution Set14 data set respectively, and compared with the existing YOLO algorithm; the detection results are shown in fig. 4 and fig. 5 respectively.
Network structures and algorithms not described in detail herein are all common in the art.
It is noted that the disclosed embodiments are intended to aid further understanding of the invention, but those skilled in the art will appreciate that various substitutions and modifications are possible without departing from the spirit and scope of the invention and the appended claims. Therefore, the invention should not be limited to the disclosed embodiments; the scope of the invention is defined by the appended claims.

Claims (3)

1. A license plate detection method based on improved super-resolution deep convolution feature recognition, characterized in that the specific detection process comprises the following steps:
(1) Data set construction: collect images of conventional, tilted, distorted and low-resolution license plates from traffic monitoring and roadside parking lots, construct a license plate data set, and divide it into a training set, a validation set and a test set;
(2) Deep convolution feature extraction: initialize the image size and value range, input the processed image into a backbone network for convolutional feature extraction, and append an hourglass88 and an hourglass57 network after the backbone to further refine the extracted convolutional features;
(3) Character detection: input the feature map obtained in step (2) into the character detection branch, which feeds the feature map into 3 different sub-branches: a text instance segmentation sub-branch, a character detection sub-branch and a character recognition sub-branch; the text instance segmentation and character detection sub-branches each comprise 3 convolutional layers with filter sizes of 3×3, 3×3 and 1×1 respectively, and the character recognition sub-branch comprises 4 convolutional layers with 3×3 kernels; the text instance segmentation sub-branch outputs a 2-channel feature map indicating whether each pixel contains text; the character detection sub-branch outputs a 5-channel feature map representing the distances from the current position to the top, bottom, left and right edges and the direction of the trunk part; the character recognition sub-branch outputs a 68-channel feature map in which each channel corresponds to a different character, specifically 26 English letters, 10 digits, 32 specific symbols and 34 abbreviations of provinces, municipalities and autonomous regions; the 3 sub-branches finally output feature maps of equal size; finally, rectangular boxes with probability values below 0.95 are filtered out;
(4) Text detection and recognition: input the feature map obtained in step (2) into the text detection and recognition branch to obtain the approximate positions of characters in the image, marking the positions of different characters with irregular shapes; the text detection and recognition branch defines different forms according to the type of text instance and adapts to existing dynamic text detectors with minimal modification; the different forms include multi-directional text, curved text and super-resolution text, and the process of recognizing text in these different forms is as follows:
for multi-directional text, a modified EAST algorithm is used as the text detection and recognition branch; it contains a text instance detection branch and two sub-branches that regress the instance-level trunk using IoU loss; the predicted bounding box consists of five parameters: horizontal position, vertical position, width, height and direction; dense predictions are computed at every spatial position; the resulting feature map is produced by two 3×3 convolutional layers and one 1×1 convolutional layer, and the text detection and recognition branch finally outputs a 2-channel feature map indicating text/non-text probability and a 5-channel detection map;
for curved text detection, the TextField method with an added direction field is used to encode the direction away from the text boundary; the direction field branch separates adjacent text instances and is predicted by a new branch in parallel with the text detection branch and the character branch; the new branch consists of two 3×3 convolutional layers and one 1×1 convolutional layer;
for super-resolution text, magnify distant license plate information to high resolution using the SRGAN algorithm;
(5) Training a network structure to obtain trained model parameters;
(6) Network testing: during testing, keeping the aspect ratio of the image unchanged, the long side is resized to 512 and the short side is padded so that the image is 512×512; the image is used as the network input, which outputs the classification confidence of the license plate and its coordinate position; a threshold is then set to filter out low-confidence license plates, and finally non-maximum suppression refines the result.
2. The license plate detection method based on improved super-resolution deep convolution feature recognition of claim 1, wherein the backbone network in step (2) adopts ResNet50 as the feature extraction network.
3. The license plate detection method based on improved super-resolution deep convolution feature recognition of claim 2, wherein the specific process of training the network structure in step (5) is: using the images of the training set, resize each image to 512×512×3 and feed the required number per batch into the network in turn for 4× downsampling; use an IoU threshold as the criterion of the sample assignment strategy; output the classification confidence of the license plate; compute the error between the predicted and ground-truth classes with focal loss and the error between the predicted and ground-truth license plate positions with smooth L1 loss; update the parameters by back-propagation with a learning rate of 0.0002; after 50 training iterations over the complete training set, save the model parameters with the best result on the validation set to obtain the trained license plate detection network parameters.
CN202211282956.6A 2022-10-20 2022-10-20 License plate detection method based on improved super-resolution deep convolution feature recognition Active CN115346206B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211282956.6A CN115346206B (en) 2022-10-20 2022-10-20 License plate detection method based on improved super-resolution deep convolution feature recognition


Publications (2)

Publication Number Publication Date
CN115346206A CN115346206A (en) 2022-11-15
CN115346206B true CN115346206B (en) 2023-01-31

Family

ID=83957071

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211282956.6A Active CN115346206B (en) 2022-10-20 2022-10-20 License plate detection method based on improved super-resolution deep convolution feature recognition

Country Status (1)

Country Link
CN (1) CN115346206B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116071707B (en) * 2023-02-27 2023-11-28 南京航空航天大学 Airport special vehicle identification method and system

Citations (2)

Publication number Priority date Publication date Assignee Title
CN110751232A (en) * 2019-11-04 2020-02-04 哈尔滨理工大学 Chinese complex scene text detection and identification method
CN113792739A (en) * 2021-08-25 2021-12-14 电子科技大学 Universal license plate text recognition method

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
CN108549893B (en) * 2018-04-04 2020-03-31 华中科技大学 End-to-end identification method for scene text with any shape
KR102095685B1 (en) * 2019-12-02 2020-04-01 주식회사 넥스파시스템 vehicle detection method and device
CN113673384A (en) * 2021-08-05 2021-11-19 辽宁师范大学 Oracle character detection method for guiding texture feature autonomous learning by LM filter bank
CN113822278B (en) * 2021-11-22 2022-02-11 松立控股集团股份有限公司 License plate recognition method for unlimited scene
CN115063786A (en) * 2022-08-18 2022-09-16 松立控股集团股份有限公司 High-order distant view fuzzy license plate detection method

Patent Citations (2)

Publication number Priority date Publication date Assignee Title
CN110751232A (en) * 2019-11-04 2020-02-04 哈尔滨理工大学 Chinese complex scene text detection and identification method
CN113792739A (en) * 2021-08-25 2021-12-14 电子科技大学 Universal license plate text recognition method

Also Published As

Publication number Publication date
CN115346206A (en) 2022-11-15


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant