CN109325915B - Super-resolution reconstruction method for low-resolution monitoring video - Google Patents

Info

Publication number
CN109325915B
Authority
CN
China
Prior art keywords
resolution
layer
image
convolution
features
Prior art date
Legal status
Active
Application number
CN201811056960.4A
Other languages
Chinese (zh)
Other versions
CN109325915A (en)
Inventor
詹曙 (Zhan Shu)
臧怀娟 (Zang Huaijuan)
朱磊磊 (Zhu Leilei)
Current Assignee
Hefei University of Technology
Original Assignee
Hefei University of Technology
Priority date
Filing date
Publication date
Application filed by Hefei University of Technology
Priority to CN201811056960.4A
Publication of CN109325915A
Application granted
Publication of CN109325915B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00: Geometric image transformation in the plane of the image
    • G06T 3/40: Scaling the whole image or part thereof
    • G06T 3/4053: Super resolution, i.e. output image resolution higher than sensor resolution
    • G06T 3/4076: Super resolution by iteratively correcting the provisional high resolution image using the original low-resolution image
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00: Geometric image transformation in the plane of the image
    • G06T 3/40: Scaling the whole image or part thereof
    • G06T 3/4046: Scaling the whole image or part thereof using neural networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/40: Extraction of image or video features
    • G06V 10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161: Detection; Localisation; Normalisation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168: Feature extraction; Face representation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10016: Video; Image sequence
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20081: Training; Learning
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20084: Artificial neural networks [ANN]
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/30: Subject of image; Context of image processing
    • G06T 2207/30196: Human being; Person
    • G06T 2207/30201: Face

Abstract

The invention discloses a super-resolution reconstruction method for low-resolution surveillance video. Two convolution kernels of different sizes produce feature representations of a low-resolution surveillance video frame, and the features extracted by the two kernels are merged as the input of the next layer. Residual learning makes the network easier to train, and a deconvolution layer performs super-resolution reconstruction of the learned features. The convolutional neural network is optimized with stochastic gradient descent to obtain a trained network model; a low-resolution surveillance picture to be reconstructed is then input into the trained model, which performs the super-resolution reconstruction. The invention raises the image resolution of surveillance video without increasing hardware cost, so that more of the feature information required for face identification can be obtained. This information assists criminal investigation in determining a suspect's identity and improves the accuracy and efficiency of that determination.

Description

Super-resolution reconstruction method for low-resolution monitoring video
Technical Field
The invention relates to the field of computer vision, in particular to a super-resolution reconstruction method for low-resolution surveillance video.
Background
As the Chinese government actively deploys advanced security technology to maintain social stability and protect people's lives and property, fairly complete video surveillance systems have been established in cities across the country. These systems play an important role in criminal investigations by public security authorities. In practice, however, because a suspect is far from the camera or the camera images poorly, surveillance produces many low-resolution images that cannot provide the feature information needed to identify a face. The starting point of the present invention is therefore to apply resolution-enhancement processing to low-resolution surveillance images so as to improve the recognizability of the target.
Image super-resolution reconstruction is a technique that improves image quality with a software algorithm. It avoids the high cost of obtaining high-resolution images through hardware and is significant for improving the visual quality of images. Applying super-resolution reconstruction to low-resolution surveillance images raises the image resolution of surveillance video without raising hardware cost, so that more of the feature information required for face identification can be acquired, assisting criminal investigation in determining a suspect's identity.
Disclosure of Invention
The invention aims to provide a super-resolution reconstruction method for low-resolution surveillance video, addressing the problems that many surveillance pictures have low resolution and cannot provide the feature information required to identify a target's face.
To achieve this aim, the invention adopts the following technical scheme:
A super-resolution reconstruction method for low-resolution surveillance video, characterized by comprising: extracting features from training images with a convolutional neural network containing convolutional layers and residual connections; reconstructing the image through a deconvolution layer to raise its resolution; optimizing the convolutional neural network with a stochastic gradient descent algorithm to obtain a trained network model; and inputting the image frame to be reconstructed into the trained model to obtain the reconstruction result. The method comprises the following steps:
(1) Selecting a plurality of pictures as the training database, which comprises low-resolution images to be input to the network and the corresponding high-resolution images serving as supervised-learning labels;
(2) Inputting the training samples into a convolutional neural network for training, where the network comprises a plurality of convolutional layers and a plurality of residual connections; the processing in the convolutional layers is as follows:
the first layer is 1 convolution layer containing convolution kernels with the size of 3 × 3 and used for extracting global features of an image, the subsequent layers are a plurality of parallel convolution layers with 2 different convolution kernels and used for extracting features with different sizes, the first convolution layer contains a plurality of convolution kernels with the size of 3 × 3, the second convolution layer contains a plurality of convolution kernels with the size of 5 × 5, discrete convolution is carried out on the convolution kernels and an original image respectively, and after an offset term is added, the extracted image features are obtained through a ReLU activation function and are expressed as follows:
X_j^{l,h} = f( Σ_{i ∈ M_j} X_i^{l-1} * k_{ij}^{l,h} + b_j^{l,h} )    (1)

where l = 1, 2, ..., L indexes the network layer and i indexes the pixel position; X_i^{l-1} is the i-th pixel of the image in layer l-1; X_j^{l,h} is the j-th image feature of the h-th convolutional layer in layer l; M_j is the set of all input images; k_{ij}^{l,h} is the i-th value of the j-th convolution kernel in layer l; and b_j^{l,h} is the j-th bias term in layer l. Since each layer contains 2 parallel convolutional layers with different kernels, h = 1, 2. f(x) is the ReLU activation function:

f(x) = max(0, x)    (2)
after the convolution is completed, the results of the 2 parallel convolution layers are merged together at the merge layer as a whole image feature, which is expressed as follows:
Figure BDA0001796037900000026
wherein X L Output representing the l-th layer [. ]]Representing operations that merge the results of multiple parallel convolutional layers together as a whole block of image features.
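The parallel-convolution stage described above (two kernels, bias, ReLU, then a channel merge) can be sketched in plain Python. This is an illustrative sketch only, not the patented implementation: the image, kernels, and biases below are made-up values, and a real layer would learn many kernels per branch.

```python
def conv2d_same(img, kernel, bias=0.0):
    """Discrete 2-D convolution with zero padding, so output size == input size."""
    h, w = len(img), len(img[0])
    kh, kw = len(kernel), len(kernel[0])
    ph, pw = kh // 2, kw // 2
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            s = bias
            for dy in range(kh):
                for dx in range(kw):
                    yy, xx = y + dy - ph, x + dx - pw
                    if 0 <= yy < h and 0 <= xx < w:
                        s += img[yy][xx] * kernel[dy][dx]
            out[y][x] = s
    return out

def relu(fm):
    """ReLU activation, f(x) = max(0, x), applied element-wise."""
    return [[max(0.0, v) for v in row] for row in fm]

def parallel_conv_stage(img, k3, k5, b3=0.0, b5=0.0):
    """Two parallel convolutions with different kernel sizes, ReLU, then merge."""
    f3 = relu(conv2d_same(img, k3, b3))   # h = 1 branch (3x3 kernel)
    f5 = relu(conv2d_same(img, k5, b5))   # h = 2 branch (5x5 kernel)
    return [f3, f5]                       # merge layer: concatenate the feature maps

image = [[1.0, 2.0, 3.0],
         [4.0, 5.0, 6.0],
         [7.0, 8.0, 9.0]]
k3 = [[0.0] * 3 for _ in range(3)]; k3[1][1] = 1.0   # identity 3x3 kernel (toy value)
k5 = [[1.0 / 25.0] * 5 for _ in range(5)]            # 5x5 averaging kernel (toy value)
features = parallel_conv_stage(image, k3, k5)
print(len(features))        # 2 merged feature maps
print(features[0][1][1])    # identity branch reproduces the center pixel: 5.0
```

The merge here is plain channel concatenation; the next layer would treat both feature maps together as its input.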
(3) The output of layer l serves as the input of layer l + 1, and the parallel-convolution computation of step (2) is repeated until the last layer of the network is reached. After the L-th convolutional layer, a residual operation is performed on the output features of layer L and the input features of the first layer:

X = X^L + X^1    (4)

where X denotes the features after the residual operation. X is then fed into a deconvolution layer that enlarges its spatial size, and the enlarged features pass through a convolutional layer with a 3 × 3 kernel to produce the final output image with improved resolution.
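The residual operation X = X^L + X^1 and the deconvolution enlargement can be sketched in the same spirit. The 2 × 2 kernel, stride of 2, and feature values below are illustrative assumptions; the patent does not specify the deconvolution kernel size or stride.

```python
def residual_add(xl, x1):
    """Element-wise residual sum X = X^L + X^1 over two feature maps."""
    return [[a + b for a, b in zip(r1, r2)] for r1, r2 in zip(xl, x1)]

def deconv2x(fm, kernel2):
    """Stride-2 transposed convolution with a 2x2 kernel: each input value
    'paints' a scaled copy of the kernel into a 2x2 block of the output,
    doubling the spatial size (the deconvolution layer's enlargement step)."""
    h, w = len(fm), len(fm[0])
    out = [[0.0] * (2 * w) for _ in range(2 * h)]
    for y in range(h):
        for x in range(w):
            for dy in range(2):
                for dx in range(2):
                    out[2 * y + dy][2 * x + dx] += fm[y][x] * kernel2[dy][dx]
    return out

xL = [[1.0, 2.0], [3.0, 4.0]]   # toy output of the last convolutional layer
x1 = [[0.5, 0.5], [0.5, 0.5]]   # toy input features of the first layer
x = residual_add(xL, x1)                        # [[1.5, 2.5], [3.5, 4.5]]
up = deconv2x(x, [[1.0, 1.0], [1.0, 1.0]])      # all-ones kernel: nearest-neighbour-style 2x enlargement
print(len(up), len(up[0]))   # 4 4
print(up[0][0], up[3][3])    # 1.5 4.5
```

A trained network would of course learn the transposed-convolution kernel rather than use a fixed one; the all-ones kernel just makes the enlargement easy to follow.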
(4) The corresponding high-resolution image, serving as the supervised-learning label, is compared with the output image, and the convolutional neural network is optimized by stochastic gradient descent; after at least one hundred thousand iterations, a trained network is obtained;
(5) Given a low-resolution surveillance image to be super-resolved, it is input into the trained network obtained in step (4), and the convolutional neural network outputs the super-resolution-reconstructed high-resolution surveillance image.
The super-resolution reconstruction method for low-resolution surveillance video is characterized in that: feature extraction of the low-resolution surveillance image is performed with a convolutional neural network in which convolution kernels of different sizes extract features at different scales; the extracted features are then merged, network training is eased by residual connections, and a deconvolution layer performs super-resolution reconstruction of the learned features, yielding a reconstructed image with improved resolution. Finally, the network is optimized by stochastic gradient descent to obtain a trained model, which then performs super-resolution reconstruction of low-resolution surveillance images.
By super-resolving the low-resolution surveillance image, the invention obtains a reconstructed high-resolution image and raises the image resolution of surveillance video without raising hardware cost, so that more of the feature information required for face identification can be obtained, assisting criminal investigation in determining a suspect's identity.
In the invention, the stochastic gradient descent algorithm is an optimization algorithm well suited to problems with many control variables and a complex controlled system. During training, the goal is to minimize the error between the network output and the correct result; the minimum of the objective function is approached over many iterations.
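As a toy illustration of stochastic gradient descent (not the network training itself), the loop below fits a single weight to noiseless data by repeatedly sampling one example at random and stepping against the gradient of its squared error. All values are illustrative assumptions.

```python
import random

random.seed(0)
# Toy data generated by y = 2*x; the "network" is a single weight w.
data = [(x, 2.0 * x) for x in [0.5, 1.0, 1.5, 2.0]]
w, lr = 0.0, 0.1
for step in range(200):
    x, y = random.choice(data)    # "stochastic": one random sample per step
    pred = w * x
    grad = 2.0 * (pred - y) * x   # d/dw of the squared error (pred - y)^2
    w -= lr * grad                # descend along the negative gradient
print(round(w, 3))                # converges to the true weight: 2.0
```

The real objective is the error between the network output and the high-resolution label, and the gradient is propagated through every layer, but the update rule has exactly this shape.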
The invention performs feature extraction and super-resolution reconstruction with a convolutional neural network. The method extracts features progressively, from low-level features to high-level abstract features, and uses convolution kernels of different sizes to capture features at different scales, so effective feature information is extracted more thoroughly and the reconstruction improves. The convolutional neural network is also highly flexible: its parameters can be adjusted to different practical conditions, so the method can be applied in different settings.
The beneficial effects of the invention are:
the invention uses the super-resolution reconstruction of the convolutional neural network on the picture to improve the resolution of the low-resolution monitoring picture, thereby obtaining more characteristic information required by identifying the face, realizing the application of the super-resolution reconstruction of the picture to criminal investigation, and improving the accuracy and efficiency of determining the identity of the criminal suspect in the criminal investigation.
Drawings
FIG. 1 is a flow chart of an implementation of the present invention.
Fig. 2 is a convolutional network structure used by the present invention.
Fig. 3 is a comparison graph of the effects in the surveillance video of the present invention.
Detailed Description
As shown in fig. 1, a super-resolution reconstruction method for low-resolution surveillance video comprises the following steps:
(1) Selecting 700 pictures as the training database, which comprises low-resolution images to be input to the network and the corresponding high-resolution images serving as supervised-learning labels;
(2) Inputting the training samples into a convolutional neural network for training, where the network comprises a plurality of convolutional layers and a plurality of residual connections and the convolution operation uses convolutional layers with 2 different kernels; the processing in the convolutional layers is as follows:
The first layer is a single convolutional layer with 3 × 3 kernels that extracts the global features of the image. Each subsequent layer consists of 2 parallel convolutional layers with different kernels that extract features at different scales: the first contains a number of 3 × 3 kernels and the second a number of 5 × 5 kernels. Each kernel is discretely convolved with the input image, a bias term is added, and the result passes through a ReLU activation function to give the extracted image features:
X_j^{l,h} = f( Σ_{i ∈ M_j} X_i^{l-1} * k_{ij}^{l,h} + b_j^{l,h} )    (1)

where l = 1, 2, ..., L indexes the network layer and i indexes the pixel position; X_i^{l-1} is the i-th pixel of the image in layer l-1; X_j^{l,h} is the j-th image feature of the h-th convolutional layer in layer l; M_j is the set of all input images; k_{ij}^{l,h} is the i-th value of the j-th convolution kernel in layer l; and b_j^{l,h} is the j-th bias term in layer l. Since each layer contains 2 parallel convolutional layers with different kernels, h = 1, 2. f(x) is the ReLU activation function:

f(x) = max(0, x)    (2)
after the convolution is completed, the results of the 2 parallel convolution layers are merged together at the merge layer as a whole block of image features, which are expressed as follows:
Figure BDA0001796037900000046
wherein X L Output representing the l-th layer [. ]]Representing the operation of merging together the results of multiple parallel convolutional layers as an entire block of image features.
(3) The output of layer l serves as the input of layer l + 1, and the parallel-convolution computation of step (2) is repeated until the last layer of the network is reached. After the L-th convolutional layer, a residual operation is performed on the output features of layer L and the input features of the first layer:

X = X^L + X^1    (4)

where X denotes the features after the residual operation. X is then fed into a deconvolution layer that enlarges its spatial size, and the enlarged features pass through a convolutional layer with a 3 × 3 kernel to produce the final output image with improved resolution.
(4) The corresponding high-resolution image, serving as the supervised-learning label, is compared with the output image, and the convolutional neural network is optimized by stochastic gradient descent; after at least one hundred thousand iterations, a trained network is obtained;
(5) Given a known low-resolution surveillance image to be super-resolved, it is input into the trained network, and the convolutional neural network outputs the super-resolution-reconstructed high-resolution surveillance image.
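The patent assesses its results visually (Fig. 3). A common quantitative complement, not part of the claimed method, is the peak signal-to-noise ratio (PSNR) between the reconstruction and the ground-truth high-resolution frame; the tiny 2 × 2 images below are illustrative values.

```python
import math

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between two equal-size grayscale images."""
    mse = sum((a - b) ** 2 for ra, rb in zip(ref, test) for a, b in zip(ra, rb))
    mse /= float(len(ref) * len(ref[0]))
    return float('inf') if mse == 0 else 10.0 * math.log10(peak * peak / mse)

gt  = [[100.0, 110.0], [120.0, 130.0]]   # toy ground-truth high-resolution patch
rec = [[101.0, 109.0], [121.0, 129.0]]   # toy reconstructed patch (off by 1 everywhere)
print(round(psnr(gt, rec), 2))           # 48.13 -- higher is better; identical images give inf
```

Comparing PSNR before and after reconstruction gives a number to put beside the side-by-side pictures of Fig. 3.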
Fig. 2 shows the convolutional network structure used by the invention; the formula to the left of each convolutional layer gives its kernel size. The structure performs feature extraction with convolutional layers of different kernel sizes and residual connections: the merge layer combines the output features of the differently sized convolutional layers into a single block of image features, the addition symbol denotes the residual connection, and the deconvolution layer then enlarges the feature size, finally producing an output image with improved resolution. In Fig. 3, (a) is a low-resolution surveillance picture and (b) is the reconstructed high-resolution picture; the low-resolution picture is shown at the same size as the reconstruction for a more intuitive comparison, and the red-framed face regions are uniformly enlarged for comparison.

Claims (2)

1. A super-resolution reconstruction method for low-resolution surveillance video, characterized by comprising: extracting features from training images with a convolutional neural network method containing convolutional layers and residual connections; reconstructing the image through a deconvolution layer to raise its resolution; then optimizing the convolutional neural network with a stochastic gradient descent algorithm to obtain a trained network model; and inputting an image frame to be reconstructed into the trained network model to obtain a reconstruction result; the method comprising the following steps:
(1) Selecting a plurality of pictures as a training database, the training database comprising low-resolution images to be input to the network and the corresponding high-resolution images serving as supervised-learning labels;
(2) Inputting the training sample into a convolutional neural network for network training, wherein the convolutional neural network comprises a plurality of convolutional layers and a plurality of residual connections, and the processing in the convolutional layers is as follows:
the first layer is a single convolutional layer with 3 × 3 kernels that extracts the global features of the image; each subsequent layer consists of 2 parallel convolutional layers with different kernels that extract features at different scales, the first containing a number of 3 × 3 kernels and the second a number of 5 × 5 kernels; each kernel is discretely convolved with the input image, a bias term is added, and the result passes through a ReLU activation function to give the extracted image features:
X_j^{l,h} = f( Σ_{i ∈ M_j} X_i^{l-1} * k_{ij}^{l,h} + b_j^{l,h} )    (1)

where l = 1, 2, ..., L indexes the network layer and i indexes the pixel position; X_i^{l-1} is the i-th pixel of the image in layer l-1; X_j^{l,h} is the j-th image feature of the h-th convolutional layer in layer l; M_j is the set of all input images; k_{ij}^{l,h} is the i-th value of the j-th convolution kernel in layer l; and b_j^{l,h} is the j-th bias term in layer l; since each layer contains 2 parallel convolutional layers with different kernels, h = 1, 2; f(x) is the ReLU activation function:

f(x) = max(0, x)    (2)
after the convolution is completed, the results of the 2 parallel convolution layers are merged together at the merge layer as a whole block of image features, which are expressed as follows:
Figure FDA0001796037890000016
wherein X L Output representing the l-th layer [. ]]Representing an operation of merging together the results of multiple parallel convolutional layers as a whole block of image features;
(3) Taking the output of layer l as the input of layer l + 1, and repeating the parallel-convolution computation of step (2) until the last layer of the network is reached; after the L-th convolutional layer, performing a residual operation on the output features of layer L and the input features of the first layer:

X = X^L + X^1    (4)

wherein X denotes the features after the residual operation; inputting X into a deconvolution layer to enlarge its size, and then passing the enlarged features through a convolutional layer with a 3 × 3 kernel to finally obtain an output image with improved resolution;
(4) Comparing the corresponding high-resolution image, serving as the supervised-learning label, with the output image, optimizing the convolutional neural network by stochastic gradient descent, and obtaining a trained network after at least one hundred thousand iterations;
(5) When a low-resolution surveillance image to be super-resolved is given, inputting it into the trained network obtained in step (4); the convolutional neural network outputs the super-resolution-reconstructed high-resolution surveillance image.
2. The super-resolution reconstruction method for low-resolution surveillance video according to claim 1, characterized in that: feature extraction of the low-resolution surveillance image is performed with a convolutional neural network, wherein convolution kernels of different sizes extract features at different scales; the features are then merged, network training is optimized by means of residual connections, and a deconvolution layer performs super-resolution reconstruction of the learned features, yielding a reconstructed image with improved resolution; finally, network optimization is performed by stochastic gradient descent to obtain a trained network, which then performs super-resolution reconstruction of the low-resolution surveillance image.
CN201811056960.4A 2018-09-11 2018-09-11 Super-resolution reconstruction method for low-resolution monitoring video Active CN109325915B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811056960.4A CN109325915B (en) 2018-09-11 2018-09-11 Super-resolution reconstruction method for low-resolution monitoring video

Publications (2)

Publication Number Publication Date
CN109325915A CN109325915A (en) 2019-02-12
CN109325915B true CN109325915B (en) 2022-11-08

Family

ID=65264816

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811056960.4A Active CN109325915B (en) 2018-09-11 2018-09-11 Super-resolution reconstruction method for low-resolution monitoring video

Country Status (1)

Country Link
CN (1) CN109325915B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112085652A (en) * 2019-06-14 2020-12-15 深圳市中兴微电子技术有限公司 Image processing method and device, computer storage medium and terminal
CN110647936B (en) * 2019-09-20 2023-07-04 北京百度网讯科技有限公司 Training method and device for video super-resolution reconstruction model and electronic equipment
CN111062867A (en) * 2019-11-21 2020-04-24 浙江大华技术股份有限公司 Video super-resolution reconstruction method
CN111915492B (en) * 2020-08-19 2021-03-30 四川省人工智能研究院(宜宾) Multi-branch video super-resolution method and system based on dynamic reconstruction
CN113408347B (en) * 2021-05-14 2022-03-15 桂林电子科技大学 Method for detecting change of remote building by monitoring camera
CN113869282B (en) * 2021-10-22 2022-11-11 马上消费金融股份有限公司 Face recognition method, hyper-resolution model training method and related equipment

Citations (4)

Publication number Priority date Publication date Assignee Title
WO2016019484A1 (en) * 2014-08-08 2016-02-11 Xiaoou Tang An apparatus and a method for providing super-resolution of a low-resolution image
CN106683067A (en) * 2017-01-20 2017-05-17 福建帝视信息科技有限公司 Deep learning super-resolution reconstruction method based on residual sub-images
CN107578377A (en) * 2017-08-31 2018-01-12 北京飞搜科技有限公司 A kind of super-resolution image reconstruction method and system based on deep learning
EP3319039A1 (en) * 2016-11-07 2018-05-09 UMBO CV Inc. A method and system for providing high resolution image through super-resolution reconstruction

Non-Patent Citations (1)

Title
Super-Resolution Reconstruction of Remote Sensing Images Based on Deep Convolutional Neural Networks; Wang Aili et al.; Journal of Natural Science of Heilongjiang University; 2018-02-25 (No. 01); full text *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant