CN114898352A - Method for simultaneously realizing image defogging and license plate detection - Google Patents

Method for simultaneously realizing image defogging and license plate detection Download PDF

Info

Publication number
CN114898352A
Authority
CN
China
Prior art keywords
license plate
image
detection
plate detection
defogging
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210743909.0A
Other languages
Chinese (zh)
Inventor
刘寒松
王国强
王永
翟贵乾
刘瑞
焦安健
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sonli Holdings Group Co Ltd
Original Assignee
Sonli Holdings Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sonli Holdings Group Co Ltd filed Critical Sonli Holdings Group Co Ltd
Priority to CN202210743909.0A priority Critical patent/CN114898352A/en
Publication of CN114898352A publication Critical patent/CN114898352A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/60Type of objects
    • G06V20/62Text, e.g. of license plates, overlay texts or captions on TV images
    • G06V20/625License plates
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/24Aligning, centring, orientation detection or correction of the image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention belongs to the technical field of license plate detection and relates to a method for simultaneously realizing image defogging and license plate detection. The two tasks of image defogging and license plate detection are combined and trained jointly: the same backbone network extracts features that carry both the image structure information needed for defogging and the license plate semantic information needed for detection, after which separate branches defog the image and detect the license plate, each supervised by its own loss function during training. On the one hand, the low-level texture and structure features learned by the defogging network enhance the feature discrimination of the license plate detection task; on the other hand, the semantic features learned by the detection network give the defogging task region-adaptive capability, achieving mutual promotion at both the task and the feature level. The method can be used for license plate detection in foggy scenes and also for target detection tasks in other severe weather such as rain and haze, with high detection accuracy.

Description

Method for simultaneously realizing image defogging and license plate detection
Technical Field
The invention belongs to the technical field of license plate detection, and relates to a method for simultaneously realizing image defogging and license plate detection.
Background
With the development of big data and deep learning technology, a new wave of artificial intelligence is sweeping the globe. The traffic industry, a lifeline of the national economy, has developed rapidly in recent years, and the ever-increasing number of vehicles brings convenience to people's lives but also hidden dangers. Extreme weather such as fog sharply reduces visibility and greatly lowers the contrast between the license plate and the background, which seriously degrades license plate detection algorithms; existing solutions are far from meeting application requirements, which motivates the study of the license plate detection problem.
In foggy weather, urban traffic is seriously affected and license plate detection is prone to false detections and missed detections, because the whole image is blurred, detail features are severely lost, and information such as license plate features is difficult to extract. Existing license plate detection methods for foggy scenes split the problem into two stages, image defogging and license plate detection: the image is first restored by defogging, and license plate detection is then performed on the restored and enhanced image.
In addition, traditional defogging algorithms suffer from incomplete defogging and generally dark restored fog-free images; starting from a low-level vision task, they mainly aim at improving visual quality, make no effective use of high-level feature knowledge of the scene, and therefore cannot effectively benefit tasks such as license plate recognition. Traditional vehicle detection algorithms extract features of the vehicle image with hand-crafted operators; their heavy computation and strict requirements on the environment make them time-consuming, labor-intensive and poor in generalization. In recent years, with the arrival of the big-data era and the development of artificial intelligence, license plate detection methods based on deep learning have made major breakthroughs, and the introduction of deep learning algorithms such as Fast R-CNN and YOLO has brought new progress to license plate detection and recognition.
In summary, in foggy scenes the existing license plate detection technology suffers from false detections and missed detections, and license plate distortion further lowers detection accuracy; a more effective method for modeling and recognizing license plate features is therefore urgently needed.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a method for simultaneously realizing image defogging and license plate detection, which addresses the poor robustness of license plate detection algorithms in foggy scenes, can handle image defogging and distorted license plate detection in foggy scenes at the same time, and achieves efficient license plate detection.
To achieve this purpose, the image defogging task and the license plate detection task are combined and trained jointly: the same backbone network extracts features that carry both the image structure information needed for defogging and the license plate semantic information needed for detection, different branches then defog the image and detect the license plate, and different loss functions provide supervision. To better capture texture features of different scales and adapt to license plates of different sizes, the backbone uses multi-scale dilated (hole) convolution, which captures multi-scale features without increasing the amount of computation, and the license plate is represented by an oriented bounding box, which handles rotation and deformation of the plate. The specific process is as follows:
(1) collecting a license plate data set, carrying out fogging treatment to simulate a foggy scene, and dividing the data set into a training set, a verification set and a test set;
(2) initializing the size and the numerical range of the picture in the data set in the step (1), inputting the processed image into a backbone network for deep convolution feature extraction, and outputting a convolution feature map containing multi-scale scene texture information and license plate information;
(3) respectively inputting the convolution characteristic diagram obtained in the step (2) into an image defogging reconstruction branch and a license plate detection branch, wherein the image defogging reconstruction branch is used for recovering a clear image, and the license plate detection branch realizes classification and position regression of a license plate;
(4) using the images of the training set in the data set, each of size 512 × 512 × 3, the images are fed into the network in sequence, giving a network input of shape B × 3 × 512 × 512; the target detection branch uses an IoU threshold as the criterion of the sample assignment strategy and outputs the license plate classification confidence of shape B × N × Class and the regressed coordinate positions of shape B × N × 5, where B is the number of samples selected in one training batch, Class is 2 (license plate or not), N is the number of predicted license plate targets, and 5 stands for the center-point coordinates, length, width and angle of the oriented license plate box; Focal loss is used to compute the error between the predicted and the true class, Smooth L1 loss is used to compute the error between the predicted and the true license plate position, and the image defogging branch uses an L2 loss to compute the error between the generated defogged image and the true clear image (see the loss sketch after this list); both branches finally update the parameters by back propagation, and after a set number of training iterations over the complete training set, the model parameters that give the best result on the verification set are saved as the final trained model parameters, yielding a trained license plate detection model for foggy scenes;
(5) loading the model parameters trained in step (4), scaling (resizing) the long side of the image to 512 while keeping the ratio of the long side to the short side unchanged, and padding the short side so that the image size becomes 512 × 512 as the input of the network; the classification confidence of the license plate and the coordinate position of the license plate are finally output, a threshold is set to filter out license plates with low confidence, and non-maximum suppression (NMS) finally deletes redundant boxes to obtain accurate license plate detection boxes, realizing high-precision detection and correction in foggy scenes by combining image defogging and license plate detection.
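For illustration, the following is a minimal PyTorch sketch of how the three supervision signals of step (4) could be combined; the tensor shapes follow the B × N × Class and B × N × 5 description above, while the focal-loss hyperparameters, the one-hot float class targets and the equal weighting of the three terms are assumptions not specified by the patent.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    """Binary focal loss on classification logits; alpha/gamma values are assumptions."""
    p = torch.sigmoid(logits)
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p_t = p * targets + (1 - p) * (1 - targets)
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    return (alpha_t * (1 - p_t) ** gamma * ce).mean()

def joint_loss(cls_logits, cls_targets, box_preds, box_targets, dehazed, clear):
    # cls_logits / cls_targets: (B, N, Class), targets as one-hot floats (assumption)
    # box_preds / box_targets: (B, N, 5) = (x, y, w, h, angle) of the oriented box
    # dehazed / clear: (B, 3, 512, 512) restored and ground-truth fog-free images
    l_cls = focal_loss(cls_logits, cls_targets)        # Focal loss for plate classification
    l_box = F.smooth_l1_loss(box_preds, box_targets)   # Smooth L1 for oriented-box regression
    l_dehaze = F.mse_loss(dehazed, clear)              # L2 loss for the defogging branch
    return l_cls + l_box + l_dehaze                    # equal weighting is an assumption
```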
As a further technical scheme of the invention, the process of collecting the license plate data set and carrying out the fogging treatment in step (1) is as follows: clean fog-free images containing regular, inclined and deformed license plates are collected from traffic monitoring and roadside parking lots to build a fog-free license plate data set, and the oriented license plate boxes (x, y, w, h, θ) are labeled for license plate detection, where (x, y) is the center point of the license plate, (w, h) are the length and width of the license plate, and θ is the angle of the long side of the license plate with respect to the horizontal direction; after the original clean fog-free license plate data set is constructed, the images are fogged according to natural principles to generate clear/foggy image pairs for image defogging.
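The patent only states that the clear images are fogged "according to natural principles". A common way to do this, shown below purely as a hedged sketch, is the atmospheric scattering model I = J·t + A·(1 − t); the choice of this model, the β and airlight values, and the fallback depth map are all assumptions.

```python
import numpy as np

def synthesize_fog(clear_rgb, depth=None, beta=1.0, airlight=0.9):
    """Add synthetic fog with the atmospheric scattering model I = J*t + A*(1-t),
    transmission t = exp(-beta * depth). clear_rgb: float array in [0, 1], (H, W, 3).
    Model, beta, airlight and the fallback depth map are assumptions."""
    h, w = clear_rgb.shape[:2]
    if depth is None:
        # Hypothetical proxy depth: fog thickens towards the top of the frame (far scene).
        depth = np.tile(np.linspace(1.0, 0.1, h)[:, None], (1, w))
    t = np.exp(-beta * depth)[..., None]           # transmission map, (H, W, 1)
    return clear_rgb * t + airlight * (1.0 - t)    # hazy image, still in [0, 1]
```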
As a further technical scheme of the invention, the number ratio of the training set, the verification set and the test set in step (1) is 6:2:2.
As a further technical scheme of the invention, the process of initializing the picture size and numerical range in step (2) is as follows: while keeping the ratio of the long side to the short side unchanged, the long side of the picture is scaled to 512 and the short side is padded so that the picture size becomes 512 × 512, and the image pixels are normalized so that the pixel values lie in the range 0-1.
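A small Python/OpenCV sketch of this preprocessing (scale the long side to 512, pad the short side, normalize to [0, 1]) is given below; the use of cv2 and the top-left placement of the padding are assumptions.

```python
import numpy as np
import cv2

def preprocess(image_bgr, target=512):
    """Scale the long side to `target` keeping aspect ratio, zero-pad to target x target,
    and normalize pixels to [0, 1]. Padding placement (top-left) is an assumption."""
    h, w = image_bgr.shape[:2]
    scale = target / max(h, w)
    resized = cv2.resize(image_bgr, (int(round(w * scale)), int(round(h * scale))))
    canvas = np.zeros((target, target, 3), dtype=np.float32)
    canvas[:resized.shape[0], :resized.shape[1]] = resized
    return canvas / 255.0, scale   # keep scale to map detected boxes back to the original image
```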
As a further technical scheme of the invention, the backbone network in step (2) connects ordinary convolutional layers and multi-scale dilated (hole) convolution modules as the feature extraction network, with a down-sampling module inserted after every two to three convolutional layers/multi-scale dilated convolution modules; the multi-scale dilated convolution module uses convolutional layers with several different dilation factors to capture and fuse multi-scale features without increasing the amount of computation, and the serial connection of multiple such modules finally outputs a convolution feature map containing multi-scale scene texture information and license plate information.
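A possible PyTorch realisation of such a multi-scale dilated convolution module is sketched below; the channel count, the dilation rates (1, 2, 4) and the fusion by element-wise summation are assumptions, since the patent only states that several dilation factors are used and their features fused.

```python
import torch
import torch.nn as nn

class MultiScaleDilatedConv(nn.Module):
    """Parallel 3x3 convolutions with different dilation factors, fused by summation,
    gathering multi-scale context at constant resolution and cost."""
    def __init__(self, channels, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, kernel_size=3, padding=d, dilation=d)
            for d in dilations
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        # Fuse the parallel dilated branches by element-wise addition (assumed fusion).
        return self.act(sum(branch(x) for branch in self.branches))
```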
As a further technical scheme of the present invention, the image defogging reconstruction branch in step (3) uses an up-sampling module after every two to three consecutive ordinary convolutional layers, with a ReLU activation applied after each ordinary convolution; features of the corresponding scale from the deep convolution feature extraction are passed to the defogging reconstruction branch through local connections and added to the reconstructed features, reducing the forgetting of the original features, and a clear RGB image with 3 channels is finally output.
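The defogging reconstruction branch could look roughly like the following PyTorch sketch, in which each stage adds the backbone feature of the matching scale (the "local connection") before convolution and upsampling; the channel widths and the number of stages are assumptions.

```python
import torch
import torch.nn as nn

class DefogDecoder(nn.Module):
    """Convolution blocks with ReLU, followed by upsampling; the matching-scale backbone
    feature is added in before each block, and a 3-channel RGB image is produced."""
    def __init__(self, channels=(256, 128, 64)):
        super().__init__()
        self.blocks = nn.ModuleList()
        for c_in, c_out in zip(channels, channels[1:] + (3,)):
            self.blocks.append(nn.Sequential(
                nn.Conv2d(c_in, c_in, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
                nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            ))

    def forward(self, deepest, skips):
        # skips: backbone features at the matching scales, deepest-first (assumed interface).
        x = deepest
        for block, skip in zip(self.blocks, skips):
            x = block(x + skip)   # add the skip feature to reduce forgetting of structure
        return x                  # (B, 3, H, W) restored RGB image
```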
As a further technical scheme of the invention, the license plate detection branch in step (3) uses two fully connected sub-networks with the same structure but unshared parameters to learn classification and position information respectively, completing the classification and position regression of the target box, where the classification decides whether the target is a license plate and the position information consists of the five oriented-box parameters of the license plate (x, y, w, h, θ), from which the accurate coordinate position of the license plate is obtained; each feature point in the convolution feature map is assigned only one anchor box for learning the license plate position, so as to speed up training and testing.
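A sketch of such a detection head is given below. The patent specifies two sub-networks of identical structure but unshared parameters and a single anchor per feature point; realising the fully connected sub-networks as per-location 1×1 convolutions, and the hidden width, are assumptions.

```python
import torch
import torch.nn as nn

class PlateDetectionHead(nn.Module):
    """Two sub-networks of identical structure but unshared weights: one classifies
    plate / not plate, the other regresses the oriented box (x, y, w, h, angle),
    with one anchor per feature-map location."""
    def __init__(self, in_channels=256, hidden=256, num_classes=2):
        super().__init__()
        def subnet(out_dim):
            # Per-location "fully connected" layers realised as 1x1 convolutions (assumption).
            return nn.Sequential(
                nn.Conv2d(in_channels, hidden, 1), nn.ReLU(inplace=True),
                nn.Conv2d(hidden, out_dim, 1),
            )
        self.cls_net = subnet(num_classes)   # -> (B, Class, H, W)
        self.box_net = subnet(5)             # -> (B, 5, H, W): x, y, w, h, angle

    def forward(self, feat):
        cls = self.cls_net(feat).flatten(2).transpose(1, 2)   # (B, N, Class), N = H*W anchors
        box = self.box_net(feat).flatten(2).transpose(1, 2)   # (B, N, 5)
        return cls, box
```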
Compared with the prior art, the invention combines the two tasks of image defogging and license plate detection for joint training: the same backbone network extracts features that carry both the image structure information of defogging and the license plate semantic information of detection, different branches then defog the image and detect the license plate, and different loss functions supervise the training. On the one hand, the low-level texture and structure features of the defogging network enhance the feature discrimination of the license plate detection task; on the other hand, the semantic features of the detection network give the defogging task region-adaptive capability, achieving mutual promotion at both the task and the feature level. The method can be used for license plate detection in foggy scenes as well as for target detection tasks in other severe weather such as rain and haze; on the foggy test set, compared with the existing YOLOv3 method, the detection precision is improved from 94.7% to 98.2% without increasing the test time.
Drawings
Fig. 1 is a schematic diagram of the entire network architecture framework employed in the present invention.
FIG. 2 is a block diagram of the multi-scale hole convolution module according to the present invention.
Fig. 3 is a flow chart of the present invention.
Detailed Description
The invention will be further described by way of examples, without in any way limiting the scope of the invention, with reference to the accompanying drawings.
Example 1:
In this embodiment, the two tasks of image defogging and license plate detection are combined and trained jointly. Features are extracted by the same backbone network (as shown in Fig. 1) and used both for the image structure information of defogging and for the license plate semantic information of detection; different branches then defog the image and detect the license plate, each supervised by its own loss function, realizing end-to-end license plate detection in foggy scenes. The specific flow is shown in Fig. 3 and comprises the following steps:
(1) data set construction:
Clean fog-free images containing regular, inclined and deformed license plates are collected from scenes such as traffic monitoring and roadside parking lots to build a fog-free license plate data set, and the license plate positions are labeled for license plate detection, mainly the oriented license plate box (x, y, w, h, θ), where (x, y) is the center point of the license plate, (w, h) are the length and width of the license plate, and θ is the angle of the long side of the license plate with respect to the horizontal direction; after the original clean fog-free license plate data set is constructed, the images are fogged according to natural principles to generate clear/foggy image pairs for image defogging, and the fogged data set is divided into a training set, a verification set and a test set in the ratio 6:2:2;
(2) deep convolution feature extraction:
The size and numerical range of the pictures in the data set constructed in step (1) are initialized: while keeping the ratio of the long side to the short side unchanged, the long side of the picture is scaled (resized) to 512 and the short side is padded so that the picture size becomes 512 × 512, and the image pixels are normalized to the range 0-1; the processed image is then fed into the backbone network for convolution feature extraction, where the backbone connects ordinary convolutional layers and multi-scale dilated convolution modules as the feature extraction network, with a down-sampling module after every two to three convolutional layers/multi-scale dilated convolution modules; the multi-scale dilated convolution module uses convolutional layers with several different dilation factors to capture and fuse multi-scale features without increasing the amount of computation, and the serial connection of multiple such modules finally outputs a convolution feature map containing multi-scale scene texture information and license plate information;
(3) image defogging reconstruction branch:
The image defogging reconstruction branch is used to recover a clear image: taking the convolution feature map as input, an up-sampling module is used after every two to three consecutive ordinary convolutional layers, with a ReLU activation applied after each ordinary convolution; features of the corresponding scale from the deep convolution feature extraction are passed to the defogging reconstruction branch through local connections and added to the reconstructed features, reducing the forgetting of the original features, and a clear RGB image with 3 channels is finally output;
(4) license plate detection branch:
Based on the convolution feature map obtained in step (2), two fully connected sub-networks with the same structure but unshared parameters are used to learn classification and position information respectively, completing the classification and position regression of the target box, where the classification decides whether the target is a license plate and the position information consists of the five oriented-box parameters of the license plate (x, y, w, h, θ), giving the accurate coordinate position of the license plate; each feature point in the feature map is assigned only one anchor box for learning the license plate position, so as to speed up training and testing;
(5) training the network structure to obtain trained model parameters:
Using the images of the training set in the data set, each of size 512 × 512 × 3, the images are fed into the network in sequence, giving a network input of shape B × 3 × 512 × 512; the target detection branch uses an IoU threshold as the criterion of the sample assignment strategy and outputs the license plate classification confidence of shape B × N × Class and the regressed coordinate positions of shape B × N × 5, where B is the number of samples selected in one training batch, Class is 2 (license plate or not), N is the number of predicted license plate targets, and 5 stands for the five parameters of the oriented license plate box; Focal loss is used to compute the error between the predicted and the true class, and Smooth L1 loss is used to compute the error between the predicted and the true license plate position; the image defogging branch uses an L2 loss to compute the error between the generated defogged image and the true clear image, and both branches finally update the parameters by back propagation; after 50 training iterations over the complete training set, the model parameters that give the best result on the verification set are saved as the final trained model parameters, yielding the trained network parameters for license plate detection in foggy scenes (a sketch of such a training loop follows);
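A minimal sketch of such a training loop is shown below; the 50 iterations over the training set and the selection of the best model on the verification set follow the step above, while the Adam optimiser, the learning rate, the model interface (returning classification, box and defogged outputs) and the eval_fn helper are assumptions.

```python
import torch

def train(model, train_loader, val_loader, joint_loss_fn, eval_fn, epochs=50, lr=1e-4):
    """Iterate `epochs` times over the training set, back-propagate the joint loss of both
    branches, and keep the parameters that score best on the verification set.
    Optimiser, learning rate and eval_fn (e.g. validation precision) are hypothetical."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    best_score, best_state = float("-inf"), None
    for _ in range(epochs):
        model.train()
        for images, cls_t, box_t, clear in train_loader:
            cls_p, box_p, dehazed = model(images)      # assumed model interface
            loss = joint_loss_fn(cls_p, cls_t, box_p, box_t, dehazed, clear)
            opt.zero_grad()
            loss.backward()
            opt.step()
        score = eval_fn(model, val_loader)             # detection precision on the verification set
        if score > best_score:                         # keep the best-performing parameters
            best_score = score
            best_state = {k: v.detach().clone() for k, v in model.state_dict().items()}
    return best_state
```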
(6) testing the network, and outputting the position of the license plate:
The model parameters trained in step (5) are loaded; the long side of the image is scaled (resized) to 512 while keeping the ratio of the long side to the short side unchanged, and the short side is padded so that the image size becomes 512 × 512 as the input of the network; since the method mainly targets license plate detection, the defogging branch performs no forward inference at test time, i.e. only the license plate detection branch runs forward, finally outputting the classification confidence of the license plate and the coordinate position of the license plate; a threshold is set to filter out license plates with low confidence, and non-maximum suppression (NMS) deletes redundant boxes, yielding accurate license plate detection boxes (a post-processing sketch follows).
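The post-processing of step (6) could be sketched as follows; the confidence and IoU thresholds are assumptions, and for brevity the IoU here is computed on the axis-aligned (x, y, w, h) part of each oriented box rather than on the rotated rectangle the patent actually regresses.

```python
import numpy as np

def filter_detections(boxes, scores, score_thresh=0.5, iou_thresh=0.5):
    """Drop low-confidence plates, then apply non-maximum suppression.
    boxes: (N, 5) array of (x, y, w, h, angle); scores: (N,) plate confidences.
    Thresholds and the axis-aligned IoU simplification are assumptions."""
    keep_mask = scores >= score_thresh
    boxes, scores = boxes[keep_mask], scores[keep_mask]
    order = np.argsort(-scores)                         # indices sorted by descending confidence
    x1 = boxes[:, 0] - boxes[:, 2] / 2; y1 = boxes[:, 1] - boxes[:, 3] / 2
    x2 = boxes[:, 0] + boxes[:, 2] / 2; y2 = boxes[:, 1] + boxes[:, 3] / 2
    areas = (x2 - x1) * (y2 - y1)
    kept = []
    while order.size > 0:
        i = order[0]
        kept.append(i)                                  # keep the highest-scoring remaining box
        xx1 = np.maximum(x1[i], x1[order[1:]]); yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]]); yy2 = np.minimum(y2[i], y2[order[1:]])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        iou = inter / (areas[i] + areas[order[1:]] - inter)
        order = order[1:][iou <= iou_thresh]            # suppress boxes overlapping the kept one
    return boxes[kept], scores[kept]
```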
It is noted that the disclosed embodiments are intended to aid in further understanding of the invention, but those skilled in the art will appreciate that: various substitutions and modifications are possible without departing from the spirit and scope of the invention and appended claims. Therefore, the invention should not be limited to the embodiments disclosed, but the scope of the invention is defined by the appended claims.
Example 2:
In this embodiment, 2000 images are collected as the license plate data set, of which 1200 are used for training, 400 for verification and 400 for testing. License plate detection is performed with the technical scheme of Example 1, the results over all images in the test set are counted, and precision is used as the evaluation index; the final test precision is 98.2%.

Claims (7)

1. A method for simultaneously realizing image defogging and license plate detection is characterized by comprising the following steps:
(1) collecting a license plate data set, carrying out fogging treatment to simulate a foggy scene, and dividing the data set into a training set, a verification set and a test set;
(2) initializing the size and the numerical range of the picture in the data set in the step (1), inputting the processed image into a backbone network for deep convolution feature extraction, and outputting a convolution feature map containing multi-scale scene texture information and license plate information;
(3) respectively inputting the convolution characteristic diagram obtained in the step (2) into an image defogging reconstruction branch and a license plate detection branch, wherein the image defogging reconstruction branch is used for recovering a clear image, and the license plate detection branch realizes classification and position regression of a license plate;
(4) using the images of the training set in the data set, each of size 512 × 512 × 3, the images are fed into the network in sequence, giving a network input of shape B × 3 × 512 × 512; the target detection branch uses an IoU threshold as the criterion of the sample assignment strategy and outputs the license plate classification confidence of shape B × N × Class and the regressed coordinate positions of shape B × N × 5, where B is the number of samples selected in one training batch, Class is 2 (license plate or not), N is the number of predicted license plate targets, and 5 stands for the center-point coordinates, length, width and angle of the oriented license plate box; Focal loss is used to compute the error between the predicted and the true class, and Smooth L1 loss is used to compute the error between the predicted and the true license plate position; the image defogging branch uses an L2 loss to compute the error between the generated defogged image and the true clear image; both branches finally update the parameters by back propagation, and after a set number of training iterations over the complete training set, the model parameters that give the best result on the verification set are saved as the final trained model parameters, yielding a well-trained license plate detection model for foggy scenes;
(5) loading the model parameters trained in step (4), scaling the long side of the image to 512 while keeping the ratio of the long side to the short side unchanged, and padding the short side so that the image size becomes 512 × 512 as the input of the network; the classification confidence of the license plate and the coordinate position of the license plate are finally output, a threshold is set to filter out license plates with low confidence, and non-maximum suppression finally deletes redundant boxes to obtain accurate license plate detection boxes, realizing high-precision detection and correction in foggy scenes by combining image defogging and license plate detection.
2. The method for simultaneously realizing image defogging and license plate detection according to claim 1, wherein the process of collecting the license plate data set and performing the fogging treatment in step (1) is as follows: clean fog-free images containing regular, inclined and deformed license plates are collected from traffic monitoring and roadside parking lots to build a fog-free license plate data set, and the oriented license plate boxes (x, y, w, h, θ) are labeled for license plate detection, where (x, y) is the center point of the license plate, (w, h) are the length and width of the license plate, and θ is the angle of the long side of the license plate with respect to the horizontal direction; after the original clean fog-free license plate data set is constructed, the images are fogged according to natural principles to generate clear/foggy image pairs for image defogging.
3. The method for simultaneously realizing image defogging and license plate detection according to claim 2, wherein the number ratio of the training set, the verification set and the test set in step (1) is 6:2:2.
4. The method for simultaneously realizing image defogging and license plate detection according to claim 3, wherein the initialization of the picture size and numerical range in step (2) is as follows: while keeping the ratio of the long side to the short side unchanged, the long side of the picture is scaled to 512 and the short side is padded so that the picture size becomes 512 × 512, and the image pixels are normalized so that the pixel values lie in the range 0-1.
5. The method according to claim 4, wherein the backbone network in step (2) connects ordinary convolutional layers and multi-scale dilated (hole) convolution modules as the feature extraction network, with a down-sampling module after every two to three convolutional layers/multi-scale dilated convolution modules; the multi-scale dilated convolution module uses convolutional layers with different dilation factors to capture and fuse multi-scale features without increasing the amount of computation, and the serial connection of multiple such modules finally outputs a convolution feature map containing multi-scale scene texture information and license plate information.
6. The method according to claim 5, wherein the image defogging reconstruction branch in step (3) uses an up-sampling module after every two to three consecutive ordinary convolutional layers, with a ReLU activation applied after each ordinary convolution; features of the corresponding scale from the deep convolution feature extraction are passed to the defogging reconstruction branch through local connections and added to the reconstructed features, reducing the forgetting of the original features, and a clear RGB image with 3 channels is finally output.
7. The method of claim 6, wherein the license plate detection branch in step (3) uses two fully connected sub-networks with the same structure but unshared parameters to learn classification and position information respectively, completing the classification and position regression of the target box, where the classification decides whether the target is a license plate and the position information consists of the five oriented-box parameters of the license plate (x, y, w, h, θ), giving the accurate coordinate position of the license plate.
CN202210743909.0A 2022-06-29 2022-06-29 Method for simultaneously realizing image defogging and license plate detection Pending CN114898352A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210743909.0A CN114898352A (en) 2022-06-29 2022-06-29 Method for simultaneously realizing image defogging and license plate detection

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210743909.0A CN114898352A (en) 2022-06-29 2022-06-29 Method for simultaneously realizing image defogging and license plate detection

Publications (1)

Publication Number Publication Date
CN114898352A true CN114898352A (en) 2022-08-12

Family

ID=82729868

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210743909.0A Pending CN114898352A (en) 2022-06-29 2022-06-29 Method for simultaneously realizing image defogging and license plate detection

Country Status (1)

Country Link
CN (1) CN114898352A (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115063786A (en) * 2022-08-18 2022-09-16 松立控股集团股份有限公司 High-order distant view fuzzy license plate detection method
CN115170443A (en) * 2022-09-08 2022-10-11 荣耀终端有限公司 Image processing method, shooting method and electronic equipment
CN115171079A (en) * 2022-09-08 2022-10-11 松立控股集团股份有限公司 Vehicle detection method based on night scene
CN115601742A (en) * 2022-11-21 2023-01-13 松立控股集团股份有限公司(Cn) Scale-sensitive license plate detection method based on graph relation ranking
CN115861997A (en) * 2023-02-27 2023-03-28 松立控股集团股份有限公司 License plate detection and identification method for guiding knowledge distillation by key foreground features
CN116129379A (en) * 2022-12-28 2023-05-16 国网安徽省电力有限公司芜湖供电公司 Lane line detection method in foggy environment
CN116704487A (en) * 2023-06-12 2023-09-05 三峡大学 License plate detection and recognition method based on Yolov5s network and CRNN
CN116721355A (en) * 2023-08-09 2023-09-08 江西云眼视界科技股份有限公司 Billboard detection method, billboard detection system, readable storage medium and computer equipment
CN117036952A (en) * 2023-08-15 2023-11-10 石河子大学 Red date water content grade detection method based on RGB image reconstruction hyperspectral image

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111310862A (en) * 2020-03-27 2020-06-19 西安电子科技大学 Deep neural network license plate positioning method based on image enhancement in complex environment
CN113111859A (en) * 2021-05-12 2021-07-13 吉林大学 License plate deblurring detection method based on deep learning
CN113128500A (en) * 2021-04-08 2021-07-16 浙江工业大学 Mask-RCNN-based non-motor vehicle license plate recognition method and system
CN114581904A (en) * 2022-03-31 2022-06-03 西安建筑科技大学 End-to-end license plate detection and identification method based on deep learning

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111310862A (en) * 2020-03-27 2020-06-19 西安电子科技大学 Deep neural network license plate positioning method based on image enhancement in complex environment
CN113128500A (en) * 2021-04-08 2021-07-16 浙江工业大学 Mask-RCNN-based non-motor vehicle license plate recognition method and system
CN113111859A (en) * 2021-05-12 2021-07-13 吉林大学 License plate deblurring detection method based on deep learning
CN114581904A (en) * 2022-03-31 2022-06-03 西安建筑科技大学 End-to-end license plate detection and identification method based on deep learning

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115063786A (en) * 2022-08-18 2022-09-16 松立控股集团股份有限公司 High-order distant view fuzzy license plate detection method
CN115171079B (en) * 2022-09-08 2023-04-07 松立控股集团股份有限公司 Vehicle detection method based on night scene
CN115170443A (en) * 2022-09-08 2022-10-11 荣耀终端有限公司 Image processing method, shooting method and electronic equipment
CN115171079A (en) * 2022-09-08 2022-10-11 松立控股集团股份有限公司 Vehicle detection method based on night scene
CN115170443B (en) * 2022-09-08 2023-01-13 荣耀终端有限公司 Image processing method, shooting method and electronic equipment
CN115601742A (en) * 2022-11-21 2023-01-13 松立控股集团股份有限公司(Cn) Scale-sensitive license plate detection method based on graph relation ranking
CN116129379A (en) * 2022-12-28 2023-05-16 国网安徽省电力有限公司芜湖供电公司 Lane line detection method in foggy environment
CN116129379B (en) * 2022-12-28 2023-11-07 国网安徽省电力有限公司芜湖供电公司 Lane line detection method in foggy environment
CN115861997B (en) * 2023-02-27 2023-05-16 松立控股集团股份有限公司 License plate detection and recognition method for key foreground feature guided knowledge distillation
CN115861997A (en) * 2023-02-27 2023-03-28 松立控股集团股份有限公司 License plate detection and identification method for guiding knowledge distillation by key foreground features
CN116704487A (en) * 2023-06-12 2023-09-05 三峡大学 License plate detection and recognition method based on Yolov5s network and CRNN
CN116704487B (en) * 2023-06-12 2024-06-11 三峡大学 License plate detection and identification method based on Yolov s network and CRNN
CN116721355A (en) * 2023-08-09 2023-09-08 江西云眼视界科技股份有限公司 Billboard detection method, billboard detection system, readable storage medium and computer equipment
CN116721355B (en) * 2023-08-09 2023-10-24 江西云眼视界科技股份有限公司 Billboard detection method, billboard detection system, readable storage medium and computer equipment
CN117036952A (en) * 2023-08-15 2023-11-10 石河子大学 Red date water content grade detection method based on RGB image reconstruction hyperspectral image
CN117036952B (en) * 2023-08-15 2024-04-12 石河子大学 Red date water content grade detection method based on RGB image reconstruction hyperspectral image

Similar Documents

Publication Publication Date Title
CN114898352A (en) Method for simultaneously realizing image defogging and license plate detection
WO2023077816A1 (en) Boundary-optimized remote sensing image semantic segmentation method and apparatus, and device and medium
CN111489301B (en) Image defogging method based on image depth information guide for migration learning
CN112232391A (en) Dam crack detection method based on U-net network and SC-SAM attention mechanism
CN110197505B (en) Remote sensing image binocular stereo matching method based on depth network and semantic information
CN114677502B (en) License plate detection method with any inclination angle
CN112200143A (en) Road disease detection method based on candidate area network and machine vision
CN114494821B (en) Remote sensing image cloud detection method based on feature multi-scale perception and self-adaptive aggregation
CN112489023A (en) Pavement crack detection method based on multiple scales and multiple layers
CN115223063A (en) Unmanned aerial vehicle remote sensing wheat new variety lodging area extraction method and system based on deep learning
CN112749578A (en) Remote sensing image automatic road extraction method based on deep convolutional neural network
CN113723377A (en) Traffic sign detection method based on LD-SSD network
CN111986164A (en) Road crack detection method based on multi-source Unet + Attention network migration
CN115410189B (en) Complex scene license plate detection method
CN115063786A (en) High-order distant view fuzzy license plate detection method
CN116246169A (en) SAH-Unet-based high-resolution remote sensing image impervious surface extraction method
CN112927237A (en) Honeycomb lung focus segmentation method based on improved SCB-Unet network
CN116883650A (en) Image-level weak supervision semantic segmentation method based on attention and local stitching
CN114267025A (en) Traffic sign detection method based on high-resolution network and light-weight attention mechanism
CN115239644A (en) Concrete defect identification method and device, computer equipment and storage medium
CN114419421A (en) Subway tunnel crack identification system and method based on images
CN117557774A (en) Unmanned aerial vehicle image small target detection method based on improved YOLOv8
CN114037834B (en) Semantic segmentation method and device based on fusion of vibration signal and RGB image
CN116597270A (en) Road damage target detection method based on attention mechanism integrated learning network
CN118365543A (en) Crack image shadow removing method based on improvement ENLIGHTENGAN

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20220812