CN114332840A - License plate recognition method under unconstrained scene - Google Patents


Info

Publication number
CN114332840A
CN114332840A (application CN202111662775.1A; granted as CN114332840B)
Authority
CN
China
Prior art keywords
license plate
image
network
vehicle
recognition method
Prior art date
Legal status (assumption, not a legal conclusion)
Granted
Application number
CN202111662775.1A
Other languages
Chinese (zh)
Other versions
CN114332840B (en)
Inventor
黄立勤
王怡冰
Current Assignee (the listed assignee may be inaccurate)
Fuzhou University
Original Assignee
Fuzhou University
Priority date (assumption, not a legal conclusion)
Filing date
Publication date
Application filed by Fuzhou University
Priority to CN202111662775.1A
Publication of CN114332840A
Application granted
Publication of CN114332840B
Legal status: Active (granted)

Classifications

    • Y — General tagging of new technological developments; general tagging of cross-sectional technologies spanning over several sections of the IPC; technical subjects covered by former USPC cross-reference art collections [XRACs] and digests
    • Y02 — Technologies or applications for mitigation or adaptation against climate change
    • Y02T — Climate change mitigation technologies related to transportation
    • Y02T 10/00 — Road transport of goods or passengers
    • Y02T 10/10 — Internal combustion engine [ICE] based vehicles
    • Y02T 10/40 — Engine management systems

Landscapes

  • Image Analysis (AREA)
  • Character Discrimination (AREA)

Abstract

The invention provides a license plate recognition method for unconstrained scenes, comprising the following steps. Step S1: deblur the input license plate image with a multi-stage deblurring network to obtain a clear image. Step S2: feed the clear image into a vehicle detection network to locate the vehicle bounding box. Step S3: scale the vehicle bounding box. Step S4: feed the vehicle bounding-box image into a license plate tilt detection network to locate the license plate and correct its tilt. Step S5: send the corrected license plate image to an OCR network for character segmentation and character recognition to obtain the license plate number. The invention can handle more severely blurred and tilted license plate pictures and effectively improves recognition accuracy in complex scenes.

Description

License plate recognition method under unconstrained scene
Technical Field
The invention relates to the technical field of image recognition, in particular to a license plate recognition method in an unconstrained scene.
Background
With the rapid development of the global economy and continuing urbanization, the number of motor vehicles keeps growing explosively. In developed and developing countries alike, government road construction has never kept pace with this growth, and traffic problems follow. Building an efficient license plate recognition system is one unavoidable part of solving them: for any vehicle, the license plate number is the unique identifier of its identity. License plate recognition systems are therefore widely used in intelligent parking-lot charging, vehicle retrieval, traffic and road management, and similar applications.
License plate recognition generally consists of three stages: license plate detection, character segmentation, and character recognition. The technology has been developed since as early as the 1960s, and many mature techniques accurately recognize the plate number when the image is clear and frontal. Both the traditional methods of past years, such as detecting the plate by color and texture, segmenting characters with character priors, and recognizing characters by template matching, and the currently popular neural-network-based methods handle such simple scenes well. In practice, however, license plate images often suffer from motion blur, poor shooting angles, and similar phenomena, which greatly reduce recognition accuracy. A license plate recognition method that works in arbitrary unconstrained scenes is therefore very important.
Regarding motion blur, the main difficulty is balancing the image's texture and context information while restoring it. Traditional approaches include inverse filtering and channel-mapping methods, but these struggle with the increasingly complex blur in present-day images, so most existing methods are designed around neural networks. Most such networks use a single-stage design, which also falls short of the deblurring requirements in complex scenes. Multi-stage designs, by contrast, have often been used for high-level vision tasks such as pose estimation and motion segmentation.
Regarding license plate tilt, traditional methods mostly use the Hough transform: they detect the license plate frame, derive the tilt angle from the frame information, and rotate and correct the plate accordingly to obtain the rectified image. Deep-learning methods instead extract features from the license plate image with a deep convolutional neural network and construct an optimization problem with a loss function to obtain the tilt angle.
Defect 1 of the prior art: license plate recognition generally comprises three steps, license plate detection, character segmentation, and character recognition; but in object detection the detection rate for small objects is lower than for large ones, so detecting the license plate directly reduces accuracy.
Defect 2 of the prior art: in image deblurring, multi-stage designs mostly adopt encoder-decoder structures that extract rich context information but do not sufficiently preserve spatial texture detail.
Defect 3 of the prior art: in image deblurring, multi-stage designs mostly pass the output of one stage directly to the input of the next, which yields poor results.
Defect 4 of the prior art: for tilted license plate images, algorithms based on deep convolutional neural networks use networks that are too deep, so the network load is too large and the overall recognition speed suffers.
Disclosure of Invention
The invention provides a license plate recognition method for unconstrained scenes, capable of handling more severely blurred and tilted license plate pictures and effectively improving recognition accuracy in complex scenes.
The invention adopts the following technical scheme.
A license plate recognition method in an unconstrained scene comprises the following steps:
Step S1: deblur the input license plate image with a multi-stage deblurring network to obtain a clear image;
Step S2: feed the clear image into a vehicle detection network to locate the vehicle bounding box;
Step S3: scale the vehicle bounding box;
Step S4: feed the vehicle bounding-box image into a license plate tilt detection network to locate the license plate and correct its tilt;
Step S5: send the corrected license plate image to an OCR network for character segmentation and character recognition to obtain the license plate number.
In the multi-stage deblurring network, the first two stages use U-Net encoder-decoder structures, while in the last stage the original-resolution image is fed directly into a feature extraction network to extract the image's detail texture information.
In the U-Net network, the encoder downsamples step by step with pooling layers to enlarge the receptive field and learn broad context information, while the decoder upsamples step by step with deconvolution (transposed convolution) to gradually recover the spatial and edge information of the original input image, so that the low-resolution feature map is finally mapped to a pixel-level segmentation result map.
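The resolution bookkeeping implied above, halving in the encoder and doubling in the decoder, can be sketched as follows. The four-level depth and the 256 × 256 input are assumptions for illustration; the patent does not state either.

```python
def encoder_sizes(h, w, levels):
    """Spatial sizes after each 2x2 pooling step in the encoder (floor division)."""
    sizes = [(h, w)]
    for _ in range(levels):
        h, w = h // 2, w // 2
        sizes.append((h, w))
    return sizes

def decoder_sizes(h, w, levels):
    """Spatial sizes after each 2x2 transposed convolution in the decoder."""
    sizes = [(h, w)]
    for _ in range(levels):
        h, w = h * 2, w * 2
        sizes.append((h, w))
    return sizes

down = encoder_sizes(256, 256, 4)   # 256 -> 128 -> 64 -> 32 -> 16
up = decoder_sizes(*down[-1], 4)    # 16 -> 32 -> 64 -> 128 -> 256
```

The symmetry of the two lists is what lets the skip connections pair each decoder level with an encoder level of the same spatial size.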
Skip connections between the encoder and decoder of the U-Net network concatenate and fuse the feature maps at corresponding positions in the first two stages;
the skip connections let the decoder retain more high-resolution detail from the high-level feature maps during upsampling, restoring the detail of the original image more faithfully and improving segmentation accuracy.
During downsampling, each level uses two 3 × 3 convolutional layers, each followed by a ReLU activation; a 2 × 2 max-pooling layer between levels downsamples the feature map; and the number of feature channels is doubled after each downsampling step.
During upsampling, each step has one 2 × 2 transposed convolution and two 3 × 3 convolutional layers with ReLU activations; the feature information from the corresponding downsampling stage is added to each upsampling step, associated via the skip connections; the final layer of the U-Net network is a 1 × 1 convolution that maps the network's multi-channel feature vectors to the required number of output classes.
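Two of the ingredients above, 2 × 2 max pooling and channel doubling per level, can be shown framework-free in pure Python; a real implementation would of course use a deep-learning library, and the base channel count of 64 is an assumption, not stated in the patent.

```python
def max_pool_2x2(feature_map):
    """2x2 max pooling with stride 2 on a 2-D list of numbers."""
    h, w = len(feature_map), len(feature_map[0])
    return [
        [
            max(feature_map[i][j], feature_map[i][j + 1],
                feature_map[i + 1][j], feature_map[i + 1][j + 1])
            for j in range(0, w - 1, 2)
        ]
        for i in range(0, h - 1, 2)
    ]

def channels_after(level, base=64):
    """Channel count at encoder level `level` when channels double per downsample."""
    return base * (2 ** level)

fm = [[1, 2, 3, 4],
      [5, 6, 7, 8],
      [9, 10, 11, 12],
      [13, 14, 15, 16]]
pooled = max_pool_2x2(fm)   # [[6, 8], [14, 16]]
```

Each output cell keeps the maximum of one non-overlapping 2 × 2 block, which is what halves the spatial resolution between encoder levels.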
an attention mechanism module is arranged behind the U-Net network; the attention mechanism module uses 1 × 1 convolution on input features of a previous stage to generate a residual image Rs, wherein C is the number of channels, and H × W represents the height and width in a space dimension;
and the attention mechanism module adds the residual image Rs to the input blurred image I to obtain a recovered image Xs. The restored image Xs provides supervision information according to the real clear image S to calculate a loss function; next, using a 1 × 1 convolution and an activation function, an attention map is generated from the restored image Xs, which is used to recalibrate the transformed local input features, suppress redundant information in the feature information, and thus obtain an attention-directed output feature, which is passed to the next stage for further processing.
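The gating step can be illustrated without a framework: a 1 × 1 convolution is a per-pixel weighted sum over channels, a sigmoid squashes it into an attention map, and the features are multiplied by that map. This is a minimal sketch of the recalibration idea; the weights and feature values below are illustrative, not trained values, and the single shared attention map is an assumption.

```python
import math

def conv1x1(features, weights, bias=0.0):
    """1x1 convolution: per-pixel weighted sum across channels.
    features: list of C channel planes, each an HxW list of lists."""
    h, w = len(features[0]), len(features[0][0])
    return [[sum(wc * features[c][i][j] for c, wc in enumerate(weights)) + bias
             for j in range(w)] for i in range(h)]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def attention_gate(features, weights):
    """Recalibrate features with a sigmoid attention map (one map for all channels)."""
    amap = [[sigmoid(v) for v in row] for row in conv1x1(features, weights)]
    return [[[features[c][i][j] * amap[i][j]
              for j in range(len(amap[0]))]
             for i in range(len(amap))]
            for c in range(len(features))]

# Two channels over a 2x2 spatial grid (illustrative values):
feats = [[[1.0, -2.0], [0.5, 3.0]],
         [[0.0, 1.0], [-1.0, 2.0]]]
gated = attention_gate(feats, weights=[0.5, 0.5])
```

Because the map lies in (0, 1), every gated value is a scaled-down copy of the input feature; locations the map scores low are suppressed, which is the "redundant information" suppression described above.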
In step S2, a YOLOv2 network detects the vehicle; the network is used as a black box, merging the vehicle-related output categories and ignoring all others.
In step S3, oblique views of the vehicle are enlarged more than frontal views so that the LP (license plate) region remains recognizable; the scaling factor f_sc is given by the following formula (rendered as an image in the original: Figure BDA0003450542250000031):
where W_v and H_v denote the width and height of the vehicle bounding box, and Dmin < f_sc · min{W_v, H_v} < Dmax; Dmin and Dmax bound the smallest dimension of the scaled bounding box. After scaling, the resized vehicle view is fed into the license plate detection network.
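Since the formula itself survives only as an image, the sketch below simply picks f_sc so that the scaled smaller dimension lands at the midpoint of [Dmin, Dmax]. Only the inequality constraint Dmin < f_sc · min{W_v, H_v} < Dmax comes from the text; the midpoint target and the default Dmin/Dmax values are assumptions for illustration.

```python
def scaling_factor(w_v, h_v, d_min=288, d_max=608):
    """Pick f_sc so that f_sc * min(w_v, h_v) falls strictly inside (d_min, d_max).

    Assumption: aim for the midpoint of the allowed range. The patent's exact
    formula is an image in the original; only the inequality is given in text.
    """
    target = (d_min + d_max) / 2.0
    return target / min(w_v, h_v)

# A wide, distant vehicle crop of 900 x 400 pixels (illustrative):
f_sc = scaling_factor(w_v=900, h_v=400)
scaled_min_dim = f_sc * min(900, 400)   # 448.0, inside (288, 608)
```

Note how a small vehicle crop (small min dimension) receives a large f_sc, which is the stated goal: enlarge difficult views until the plate region is big enough to detect.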
In step S4, the license plate tilt detection network first detects the license plate within the vehicle bounding box; once the license plate image is obtained, a tilt correction module rectifies it, estimating the transformation with an affine transformation matrix. With parameters a, b, c, d, e, f, the 2 × 3 affine matrix (rendered as an image in the original: Figure BDA0003450542250000041) is
A_θ = [ a  b  c ; d  e  f ]
The parameters a, b, c, d, e and f of the affine transformation matrix are obtained by training a neural network and define the transformation mapping between the two images.
the license plate inclination detection network obtains an input coordinate point position corresponding to an output coordinate point through a coordinate mapping module, performs corresponding mapping on the output and input coordinates according to parameters of an affine transformation matrix, and is expressed by a formula as follows:
Figure BDA0003450542250000042
(xi t,yi t) Is the coordinates of the output target picture, (x)i s,yi s) Is the coordinate of the original picture, and A theta represents an affine relation; when the target picture is sampled on the original picture, pixels are collected on different coordinates of the original picture to the target picture every time, the target picture is fully pasted, the coordinates of the target picture are fixed coordinates which need to traverse once every sampling, and the coordinates of the collected original picture are not fixed.
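The backward coordinate mapping can be written directly from the 2 × 3 parameterization (a through f): each fixed target-grid coordinate is mapped to a (generally non-integer) source coordinate. A minimal sketch, with illustrative parameter values:

```python
def affine_map(params, x_t, y_t):
    """Map an output (target) coordinate back to an input (source) coordinate
    using the affine parameters (a, b, c, d, e, f)."""
    a, b, c, d, e, f = params
    x_s = a * x_t + b * y_t + c
    y_s = d * x_t + e * y_t + f
    return x_s, y_s

# The identity transform leaves coordinates unchanged:
identity = (1, 0, 0, 0, 1, 0)
assert affine_map(identity, 3.0, 5.0) == (3.0, 5.0)

# A pure translation by (+2, -1) in source space:
src = affine_map((1, 0, 2, 0, 1, -1), 3.0, 5.0)   # (5.0, 4.0)
```

Sampling backward (target to source) rather than forward is what guarantees every target pixel gets exactly one value, matching the fixed-target-grid description above.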
In step S5, the clear and corrected license plate image is sent to OCR-Net for license plate character segmentation and recognition.
Advantages of the invention:
1. Blur in a variety of complex scenes is handled more effectively. The invention adopts a multi-stage design: the improved U-Net extracts rich context information in the first two stages, while the last stage performs no downsampling and extracts features directly from the original-resolution image to obtain its detail texture information.
2. Compared with other multi-stage deblurring methods, the deblurring effect is better. Instead of feeding the output of one stage directly into the next, an attention mechanism network strengthens the useful information extracted at each stage, removes redundant information, and ensures that feature information is passed between stages more effectively.
3. The tilt correction capability is stronger while occupying fewer resources. The invention designs a new network combining a spatial transformer structure with the detection network; by generating a per-pixel affine transformation it detects and corrects tilted license plates more robustly, and because it can be added directly into the detection network, resource usage is reduced. In addition, the image scaling step placed before the network increases the proportion of the image occupied by the license plate, which benefits subsequent license plate detection and correction.
4. In the license plate recognition system, a vehicle detection module is added before license plate detection, and the license plate is searched for within the detected vehicle bounding box. This is because, in real images, a license plate is always contained within a vehicle region.
Drawings
The invention is described in further detail below with reference to the following figures and detailed description:
FIG. 1 is a schematic flow diagram of the present invention;
FIG. 2 is a schematic architectural diagram of the deblurring network of the present invention;
FIG. 3 is a schematic diagram of the U-Net network of the present invention (in the solution of the present invention, the U-Net network is optimized and modified);
FIG. 4 is a schematic diagram of an attention module;
fig. 5 is a schematic diagram of a tilt correction module.
Detailed Description
As shown in the figures, a license plate recognition method in an unconstrained scene comprises the following steps:
Step S1: deblur the input license plate image with a multi-stage deblurring network to obtain a clear image;
Step S2: feed the clear image into a vehicle detection network to locate the vehicle bounding box;
Step S3: scale the vehicle bounding box;
Step S4: feed the vehicle bounding-box image into a license plate tilt detection network to locate the license plate and correct its tilt;
Step S5: send the corrected license plate image to an OCR network for character segmentation and character recognition to obtain the license plate number.
In the multi-stage deblurring network, the first two stages use U-Net encoder-decoder structures, while in the last stage the original-resolution image is fed directly into a feature extraction network to extract the image's detail texture information.
In the U-Net network, the encoder downsamples step by step with pooling layers to enlarge the receptive field and learn broad context information, while the decoder upsamples step by step with deconvolution (transposed convolution) to gradually recover the spatial and edge information of the original input image, so that the low-resolution feature map is finally mapped to a pixel-level segmentation result map.
Skip connections between the encoder and decoder of the U-Net network concatenate and fuse the feature maps at corresponding positions in the first two stages;
the skip connections let the decoder retain more high-resolution detail from the high-level feature maps during upsampling, restoring the detail of the original image more faithfully and improving segmentation accuracy.
During downsampling, each level uses two 3 × 3 convolutional layers, each followed by a ReLU activation; a 2 × 2 max-pooling layer between levels downsamples the feature map; and the number of feature channels is doubled after each downsampling step.
During upsampling, each step has one 2 × 2 transposed convolution and two 3 × 3 convolutional layers with ReLU activations; the feature information from the corresponding downsampling stage is added to each upsampling step, associated via the skip connections; the final layer of the U-Net network is a 1 × 1 convolution that maps the network's multi-channel feature vectors to the required number of output classes.
an attention mechanism module is arranged behind the U-Net network; the attention mechanism module uses 1 × 1 convolution on input features of a previous stage to generate a residual image Rs, wherein C is the number of channels, and H × W represents the height and width in a space dimension;
and the attention mechanism module adds the residual image Rs to the input blurred image I to obtain a recovered image Xs. The restored image Xs provides supervision information according to the real clear image S to calculate a loss function; next, using a 1 × 1 convolution and an activation function, an attention map is generated from the restored image Xs, which is used to recalibrate the transformed local input features, suppress redundant information in the feature information, and thus obtain an attention-directed output feature, which is passed to the next stage for further processing.
In step S2, a YOLOv2 network detects the vehicle; the network is used as a black box, merging the vehicle-related output categories and ignoring all others.
In step S3, oblique views of the vehicle are enlarged more than frontal views so that the LP (license plate) region remains recognizable; the scaling factor f_sc is given by the following formula (rendered as an image in the original: Figure BDA0003450542250000071):
where W_v and H_v denote the width and height of the vehicle bounding box, and Dmin < f_sc · min{W_v, H_v} < Dmax; Dmin and Dmax bound the smallest dimension of the scaled bounding box. After scaling, the resized vehicle view is fed into the license plate detection network.
In step S4, the license plate tilt detection network first detects the license plate within the vehicle bounding box; once the license plate image is obtained, a tilt correction module rectifies it, estimating the transformation with an affine transformation matrix. With parameters a, b, c, d, e, f, the 2 × 3 affine matrix (rendered as an image in the original: Figure BDA0003450542250000072) is
A_θ = [ a  b  c ; d  e  f ]
The parameters a, b, c, d, e and f of the affine transformation matrix are obtained by training a neural network and define the transformation mapping between the two images.
the license plate inclination detection network obtains an input coordinate point position corresponding to an output coordinate point through a coordinate mapping module, performs corresponding mapping on the output and input coordinates according to parameters of an affine transformation matrix, and is expressed by a formula as follows:
Figure BDA0003450542250000073
(xi t,yi t) Is the coordinates of the output target picture, (x)i s,yi s) Is the coordinate of the original picture, and A theta represents an affine relation; when the target picture is sampled on the original picture, pixels are collected on different coordinates of the original picture to the target picture every time, the target picture is fully pasted, the coordinates of the target picture are fixed coordinates which need to traverse once every sampling, and the coordinates of the collected original picture are not fixed.
In step S5, the clear and corrected license plate image is sent to OCR-Net for license plate character segmentation and recognition.
Embodiment:
In this embodiment, the deblurring network of the multi-stage design uses the modified U-Net in its first two stages, extracting context information during downsampling and upsampling. In the last stage, the original-resolution image is fed directly into a feature extraction network to extract detail texture information. This hierarchical design achieves a balance between spatial detail and contextual information.
In this embodiment, an attention mechanism module is added after each U-Net network. For the decoder output, the ground-truth sharp image serves as supervision in this module and a loss function is computed against it. Meanwhile, the attention mechanism converts the resulting feature map into an attention map, suppressing redundant information in the features and thereby enhancing the useful information.
For tilt correction of the license plate, a tilt correction sub-network that can be combined with the detection network is designed. Following the principle of affine transformation, it consists of three parts: parameter prediction, coordinate mapping, and pixel sampling. The parameter prediction part learns the parameters required by the affine transformation matrix; the coordinate mapping part maps output coordinates to input coordinates according to those parameters; and the pixel sampling part transfers values between the corresponding pixels according to that coordinate mapping. Fractional coordinate values produced by the transformation are resolved with bilinear interpolation.
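The bilinear step mentioned above, which turns a fractional source coordinate into a pixel value, can be sketched in pure Python (a real pipeline would vectorize this over the whole target grid):

```python
import math

def bilinear_sample(img, x, y):
    """Bilinearly interpolate img (a 2-D list, img[row][col]) at fractional (x, y),
    where x is the column coordinate and y the row coordinate."""
    x0, y0 = int(math.floor(x)), int(math.floor(y))
    x1, y1 = x0 + 1, y0 + 1
    fx, fy = x - x0, y - y0
    h, w = len(img), len(img[0])

    def px(r, c):
        # Clamp to the image border so edge coordinates stay valid.
        return img[min(max(r, 0), h - 1)][min(max(c, 0), w - 1)]

    top = px(y0, x0) * (1 - fx) + px(y0, x1) * fx
    bottom = px(y1, x0) * (1 - fx) + px(y1, x1) * fx
    return top * (1 - fy) + bottom * fy

img = [[0.0, 10.0],
       [20.0, 30.0]]
center = bilinear_sample(img, 0.5, 0.5)   # average of the four pixels = 15.0
```

Weighting the four neighbors by their distance to the sampled point is what avoids the blocky artifacts that simple rounding to the nearest integer coordinate would introduce.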
In this embodiment, the vehicle detection module uses the YOLOv2 network, so that the vehicle is detected first and the license plate afterwards, ensuring higher accuracy.
In this embodiment, the detected vehicle bounding box is scaled. If the shooting angle is too oblique, the license plate occupies a small proportion of the vehicle bounding box; scaling is needed to increase that proportion and thereby improve subsequent license plate recognition.

Claims (10)

1. A license plate recognition method in an unconstrained scene, characterized by comprising the following steps:
Step S1: deblur the input license plate image with a multi-stage deblurring network to obtain a clear image;
Step S2: feed the clear image into a vehicle detection network to locate the vehicle bounding box;
Step S3: scale the vehicle bounding box;
Step S4: feed the vehicle bounding-box image into a license plate tilt detection network to locate the license plate and correct its tilt;
Step S5: send the corrected license plate image to an OCR network for character segmentation and character recognition to obtain the license plate number.
2. The license plate recognition method in an unconstrained scene of claim 1, characterized in that: in the multi-stage deblurring network, the first two stages use U-Net encoder-decoder structures, while in the last stage the original-resolution image is fed directly into a feature extraction network to extract the image's detail texture information.
3. The license plate recognition method in an unconstrained scene of claim 2, characterized in that: in the U-Net network, the encoder downsamples step by step with pooling layers to enlarge the receptive field and learn broad context information, while the decoder upsamples step by step with deconvolution to gradually recover the spatial and edge information of the original input image, so that the low-resolution feature map is finally mapped to a pixel-level segmentation result map.
4. The license plate recognition method in an unconstrained scene of claim 3, characterized in that: skip connections between the encoder and decoder of the U-Net network concatenate and fuse the feature maps at corresponding positions in the first two stages;
the skip connections let the decoder retain more high-resolution detail from the high-level feature maps during upsampling, restoring the detail of the original image more faithfully and improving segmentation accuracy.
5. The license plate recognition method in an unconstrained scene of claim 3, characterized in that: during downsampling, each level uses two 3 × 3 convolutional layers, each followed by a ReLU activation; a 2 × 2 max-pooling layer between levels downsamples the feature map; and the number of feature channels is doubled after each downsampling step.
6. The license plate recognition method in an unconstrained scene of claim 5, characterized in that: during upsampling, each step has one 2 × 2 transposed convolution and two 3 × 3 convolutional layers with ReLU activations; the feature information from the corresponding downsampling stage is added to each upsampling step, associated via the skip connections; the final layer of the U-Net network is a 1 × 1 convolution that maps the network's multi-channel feature vectors to the required number of output classes;
an attention mechanism module is placed after the U-Net network; the module applies a 1 × 1 convolution to the input features of the previous stage to generate a residual image Rs ∈ R^(C×H×W), where C is the number of channels and H × W is the spatial height and width;
the module adds the residual image Rs to the input blurred image I to obtain a restored image Xs; the restored image Xs is supervised by the ground-truth sharp image S to compute a loss function; next, a 1 × 1 convolution and an activation function generate an attention map from Xs, which recalibrates the transformed local input features and suppresses redundant feature information, yielding attention-guided output features that are passed to the next stage for further processing.
7. The license plate recognition method in an unconstrained scene of claim 3, characterized in that: in step S2, a YOLOv2 network detects the vehicle; the network is used as a black box, merging the vehicle-related output categories and ignoring all others.
8. The license plate recognition method in an unconstrained scene of claim 3, characterized in that: in step S3, oblique views of the vehicle are enlarged more than frontal views so that the LP region remains recognizable; the scaling factor f_sc is given by the following formula (rendered as an image in the original: Figure FDA0003450542240000021):
where W_v and H_v denote the width and height of the vehicle bounding box, and Dmin < f_sc · min{W_v, H_v} < Dmax; Dmin and Dmax bound the smallest dimension of the scaled bounding box; after scaling, the resized vehicle view is fed into the license plate detection network.
9. The license plate recognition method in an unconstrained scene of claim 3, characterized in that: in step S4, the license plate tilt detection network first detects the license plate within the vehicle bounding box; once the license plate image is obtained, a tilt correction module rectifies it, estimating the transformation with an affine transformation matrix; with parameters a, b, c, d, e, f, the 2 × 3 affine matrix (rendered as an image in the original: Figure FDA0003450542240000031) is
A_θ = [ a  b  c ; d  e  f ]
the parameters a, b, c, d, e and f of the affine transformation matrix are obtained by training a neural network and define the transformation mapping between the two images;
the license plate tilt detection network obtains, through a coordinate mapping module, the input coordinate corresponding to each output coordinate, mapping output to input coordinates with the affine parameters; expressed as a formula (rendered as an image in the original: Figure FDA0003450542240000032):
(x_i^s, y_i^s)^T = A_θ · (x_i^t, y_i^t, 1)^T
where (x_i^t, y_i^t) are coordinates in the output target picture, (x_i^s, y_i^s) are coordinates in the original picture, and A_θ denotes the affine relation; when the target picture is sampled from the original picture, each sampling step gathers pixels from different coordinates of the original picture until the target picture is filled; the target coordinates form a fixed grid that is traversed once per sampling pass, while the sampled original-picture coordinates are not fixed.
10. The license plate recognition method under the unconstrained scene of claim 3, characterized in that: in step S5, the clear and corrected license plate image is sent to OCR-Net for license plate character segmentation and recognition.
CN202111662775.1A 2021-12-31 2021-12-31 License plate recognition method under unconstrained scene Active CN114332840B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111662775.1A CN114332840B (en) 2021-12-31 2021-12-31 License plate recognition method under unconstrained scene


Publications (2)

Publication Number Publication Date
CN114332840A true CN114332840A (en) 2022-04-12
CN114332840B CN114332840B (en) 2024-08-02

Family

ID=81020976


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107103317A (en) * 2017-04-12 2017-08-29 湖南源信光电科技股份有限公司 Fuzzy license plate image recognition algorithm based on image co-registration and blind deconvolution
CN112784834A (en) * 2019-11-09 2021-05-11 北京航天长峰科技工业集团有限公司 Automatic license plate identification method in natural scene
CN112801901A (en) * 2021-01-21 2021-05-14 北京交通大学 Image deblurring algorithm based on block multi-scale convolution neural network
US20210390723A1 (en) * 2020-06-15 2021-12-16 Dalian University Of Technology Monocular unsupervised depth estimation method based on contextual attention mechanism


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
陈志鸿 (Chen Zhihong): "Research on Multi-Target Tracking Optimization for Video Images", Computer Simulation (《计算机仿真》), 30 September 2020 (2020-09-30) *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant