CN112435192A - Lightweight image definition enhancing method - Google Patents

Lightweight image definition enhancing method

Info

Publication number
CN112435192A
Authority
CN
China
Prior art keywords
loss
image
vgg
model
true
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011373558.6A
Other languages
Chinese (zh)
Other versions
CN112435192B (en)
Inventor
胡耀武
李云夕
熊永春
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Xiaoying Innovation Technology Co ltd
Original Assignee
Hangzhou Xiaoying Innovation Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Xiaoying Innovation Technology Co ltd
Priority to CN202011373558.6A
Publication of CN112435192A
Application granted
Publication of CN112435192B
Legal status: Active (current)

Classifications

    • G — PHYSICS; G06 — COMPUTING; CALCULATING OR COUNTING; G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 — Image enhancement or restoration
    • G06T 3/00 — Geometric image transformation in the plane of the image; G06T 3/40 — Scaling the whole image or part thereof
    • G06T 5/80
    • G06T 2207/00 — Indexing scheme for image analysis or image enhancement; G06T 2207/20 — Special algorithmic details
    • G06T 2207/20081 — Training; Learning
    • G06T 2207/20084 — Artificial neural networks [ANN]
    • G06T 2207/20112 — Image segmentation details; G06T 2207/20132 — Image cropping

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a lightweight image sharpness enhancement method comprising the following steps. Step 1: construct a network and perform training iterations with a preset weighted LOSS function to obtain a trained model; the LOSS function combines VGG Loss, Huber Loss and the mean absolute error loss MAE Loss. Step 2: the trained model receives the image to be processed and outputs a sharpness-enhanced result image. The method corrects the model with several Loss terms; compared with the traditional method of correcting the model with the MAE Loss value alone, this ensures that the corrected model handles the edge features of the image better.

Description

Lightweight image definition enhancing method
Technical Field
The invention relates to the field of deep learning, in particular to a lightweight image definition enhancing method.
Background
With the evolution of image algorithms, sharpness correction for blurred images has been optimized continually. Mobile APPs and PC image-processing software now offer various sharpness enhancement functions, which mostly sharpen the image algorithmically, for example with classical sharpening algorithms or with methods based on deep learning. Compared with a sharpening algorithm, a deep-learning-based sharpness algorithm improves image clarity far more noticeably, but the trained model is large and the algorithm complex, making it difficult to apply on a mobile terminal. Images on mobile terminals are therefore usually sharpened with a traditional sharpening algorithm which, although it can also raise the sharpness of an image, cannot achieve an ideal effect.
On the other hand, as photographed pixel counts increase, image content grows more complex and sharpening an image demands more computation. A lightweight image sharpness enhancement method suitable for mobile terminals is therefore needed.
Disclosure of Invention
The invention aims to overcome the defects of the prior art by providing a lightweight image sharpness enhancement method that is simple in structure and convenient to use, yields a sharpness-enhanced image, and is suitable for mobile terminals.
A lightweight image sharpness enhancement method, characterized by comprising the following steps:
step 1: constructing a network and performing training iterations with a preset weighted LOSS function to obtain a trained model; the LOSS function combines VGG Loss, Huber Loss and the mean absolute error loss MAE Loss;
step 2: the trained model receives the image to be processed and outputs a sharpness-enhanced result image;
the training of the model in step 1 comprises the following steps:
step 11: constructing a definition enhancement network;
step 12: collecting data samples;
step 13: completing model iterative training according to the collected data samples;
step 14: and finishing lightweight setting of the model.
Further, when the sharpness enhancement network is constructed in step 11, the width of the input image is set to Width and the height to Height; the construction of the sharpness enhancement network comprises the following steps:
step 111: inputting RGB24 image data and selecting the number of channels N, i.e., the network input is W × H × N;
step 112: performing convolution downsampling on the Input image data at least twice; each convolution halves W and H, and the number of channels increases with each convolution;
step 113: after convolution downsampling is completed, performing ResBlock (residual block) processing on the output result m times;
step 114: performing Add operations between the ResBlock output and the corresponding downsampling results in sequence; after the Add operations, performing UpSampling and convolution at least twice, and selecting a tanh activation function for the last UpSampling-and-convolution operation to obtain the first output of the sharpness enhancement network;
step 115: performing an Add operation between the first output and the Input, then applying a convolution to complete the construction of the sharpness enhancement network; once constructed, the network outputs the corresponding sharpness-enhanced result image.
Further, in step 111 the Width of the input image is 1024 and the Height is 1024, the number of channels N of the input RGB24 image data is 3, and the network input is 1024 × 1024 × 3; in step 112, convolution downsampling is performed 4 times; in step 113, ResBlock processing is performed 4 times; in step 114, the upsampling and convolution operations are performed 4 times.
Further, the step 12 of acquiring data samples comprises the following steps:
step 121: preparing Z scene pictures and scaling each picture to W × H to obtain a zoom map; the scaling factor is adjusted according to the original size of the picture;
step 122: applying Gaussian blur with radius Radius to the scaled picture to obtain a blur map, where Radius = 1 or 2;
the image obtained by scaling in step 121 is called the zoom map, and the image obtained by the Gaussian blur processing in step 122 is called the blur map.
Further, the model training of step 13 first requires inputting the blur map into the sharpness enhancement network constructed in step 11 to obtain the prediction y_pred, with the corresponding zoom map serving as the ground truth y_true; next, the model is iteratively trained against the LOSS function LOSS of the network to obtain the corrected model; the LOSS function comprises three parts: VGG Loss, Huber Loss and the mean absolute error loss MAE Loss.
Further, the VGG Loss calculation comprises the following steps:
step 131: inputting the model prediction y_pred and the ground truth y_true into a VGG-19 classification network; y_pred is the image output by the sharpness enhancement network from the blur map, and y_true is the zoom map;
step 132: for each of the last three convolutional layers in VGG-19, calculating the mean square error (MSE) between the feature maps of the predicted image and the ground-truth image, giving loss_vgg_1, loss_vgg_2 and loss_vgg_3;
step 133: the VGG Loss is the mean of loss_vgg_1, loss_vgg_2 and loss_vgg_3.
Further, the Huber Loss is obtained from the model prediction y_pred and the ground truth y_true; its value is denoted loss_huber and is calculated as:
L(y_true, y_pred) = ½ (y_true − y_pred)²,  if |y_true − y_pred| ≤ δ
L(y_true, y_pred) = δ·|y_true − y_pred| − ½ δ²,  otherwise
where L(y_true, y_pred) represents loss_huber and δ is a set constant.
Further, the MAE Loss is likewise obtained from the model prediction y_pred and the ground truth y_true; its value is denoted loss_mae and is calculated as:
L′(y_true, y_pred) = |y_true − y_pred|
where L′(y_true, y_pred) represents loss_mae.
Further, the value of the LOSS function LOSS of the sharpness enhancement network is the weighted sum of the VGG Loss, the Huber Loss and the mean absolute error loss MAE Loss, expressed as:
LOSS=loss_huber+loss_mae+(loss_vgg_1+loss_vgg_2+loss_vgg_3)*0.33。
Further, the image to be processed in step 2 is denoted S; before input, each S is preprocessed by cropping its width and height to multiples of 4 and discarding the excess pixels; in step 14, the kernel_size of the first convolution layer is set to 7 and that of the remaining convolution layers to 3; the maximum number of channels is set to 48; and the final model size is set to 0.8 MB.
The invention has the beneficial effects that:
the model is corrected by setting several Loss terms; compared with the traditional method of correcting the model with the MAE Loss value alone, the corrected model handles the edge features of the image better;
by completing the lightweight setting of the model, the computation is limited to within 100M FLOPs, ensuring that the model can run on a mobile terminal.
Drawings
FIG. 1 is a block diagram of the overall structure of a first embodiment of the present invention;
fig. 2 is a schematic diagram of a sharpness enhancement network structure according to a first embodiment of the present invention.
Detailed Description
The embodiments of the present invention are described below with reference to specific examples, and those skilled in the art will readily understand other advantages and effects of the invention from the disclosure of this specification. The invention may also be implemented or applied through other, different embodiments, and the details of this specification may be modified or changed in various respects without departing from the spirit and scope of the invention. It should be noted that, in the absence of conflict, the features of the following embodiments and examples may be combined with each other.
It should be noted that the drawings provided with the following embodiments only illustrate the basic idea of the invention schematically; they show only the components related to the invention rather than the number, shape and size of components in an actual implementation, where the type, quantity and proportion of each component may vary freely and the component layout may be more complicated.
The first embodiment is as follows:
As shown in Fig. 1, a lightweight image sharpness enhancement method comprises the following steps:
step 1: constructing a network and performing training iterations with a preset weighted LOSS function to obtain a trained model; the LOSS function combines VGG Loss, Huber Loss and the mean absolute error loss MAE Loss;
step 2: the trained model receives the image to be processed and outputs the sharpness-enhanced image.
The LOSS in step 1 is the loss function; training the model in step 1 comprises the following steps:
step 11: constructing a definition enhancement network;
step 12: collecting data samples;
step 13: completing model iterative training according to the collected data samples;
step 14: and finishing lightweight setting of the model.
As shown in Fig. 2, the idea behind the sharpness enhancement network of step 11 is to learn the edge deviation law between sharp target pictures and blurred pictures from a large number of data samples, and to apply that law to enhance the sharpness of an arbitrary picture. The width of the input image is set to Width and the height to Height; in this example both Width and Height are 1024 pixels. The construction of the sharpness enhancement network comprises the following steps:
step 111: inputting RGB24 image data and selecting the number of channels N = 3, i.e., the network input W × H × N is 1024 × 1024 × 3;
step 112: performing convolution downsampling on the Input image data at least twice; in this example four stride-2 convolutions C1, C2, C3 and C4 are used, each halving W and H, with the number of channels increasing from one convolution to the next;
step 113: after convolution downsampling is completed, applying ResBlock (residual block) processing to the output result m times; m = 4 in this example;
step 114: performing Add operations between the ResBlock output and the corresponding downsampling results in sequence; after the Add operations, performing UpSampling and convolution at least twice, and selecting a tanh activation function for the last UpSampling-and-convolution operation to obtain the first output of the sharpness enhancement network; in this example four upsampling-and-convolution stages U1, U2, U3 and U4 are used;
step 115: performing an Add operation between the first output and the Input, then applying a convolution to complete the construction of the sharpness enhancement network; once constructed, the network outputs the corresponding sharpness-enhanced result image.
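To make the construction concrete, the following is a minimal Keras sketch of steps 111-115. It is one possible reading of Fig. 2, not the patent's exact architecture: the channel widths (12/24/48/48), the precise skip wiring and the input normalization to [-1, 1] are assumptions; the patent itself fixes only the stage counts, the tanh on the last stage, the 48-channel cap and the kernel sizes given in step 14.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def res_block(x, ch):
    # One ResBlock (step 113): two 3x3 convolutions with an identity shortcut.
    y = layers.Conv2D(ch, 3, padding="same", activation="relu")(x)
    y = layers.Conv2D(ch, 3, padding="same")(y)
    return layers.Add()([x, y])

def build_enhancer(height=1024, width=1024):
    inp = layers.Input((height, width, 3))        # step 111: RGB24 input, N = 3
    enc_chans = [12, 24, 48, 48]                  # assumed widths, capped at 48 (step 14)
    skips, x = [], inp
    for i, ch in enumerate(enc_chans):            # step 112: C1..C4, stride-2 convolutions
        x = layers.Conv2D(ch, 7 if i == 0 else 3, strides=2,
                          padding="same", activation="relu")(x)
        skips.append(x)
    for _ in range(4):                            # step 113: m = 4 ResBlocks
        x = res_block(x, enc_chans[-1])
    dec_chans = [48, 24, 12, 3]                   # decoder widths mirror the encoder
    for j, (ch, skip) in enumerate(zip(dec_chans, reversed(skips))):
        x = layers.Add()([x, skip])               # step 114: Add with C4..C1 outputs
        x = layers.UpSampling2D()(x)              # U1..U4
        x = layers.Conv2D(ch, 3, padding="same",
                          activation="tanh" if j == 3 else "relu")(x)
    x = layers.Add()([x, inp])                    # step 115: Add first output and Input
    out = layers.Conv2D(3, 3, padding="same")(x)  # final convolution
    return Model(inp, out)

model = build_enhancer()                          # 1024 x 1024 x 3 in, 1024 x 1024 x 3 out
```

Because of the tanh on the last stage, this sketch assumes pixel values scaled to [-1, 1] so that the residual Add with the Input in step 115 is meaningful.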
The step 12 of collecting data samples comprises the following steps:
step 121: preparing Z scene pictures, Z = 100,000 in this example; scaling each picture to W × H (1024 × 1024) to obtain the zoom map; the actual scaling factor is adjusted according to the original size of the picture;
step 122: applying Gaussian blur with radius Radius to the scaled picture to obtain the blur map; in this example Radius is 1 or 2.
The equation for the Gaussian blur in step 122 is as follows:
GaussFilter(x) = 1 / (σ·√(2π)) · exp(−(x − μ)² / (2σ²))
where x is the current pixel value; μ is the mean of all pixels in the square region of radius Radius centered on the position of pixel x; σ is the standard deviation of all pixels in that region; and GaussFilter(x) is the blurred result for the pixel.
The 100,000 zoom maps scaled in step 121 and the 100,000 blur maps produced by the Gaussian blur of step 122 form the data sample pairs for model training, with each blur map used as the input and the corresponding zoom map as the target output.
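As a sketch, the sample-pair generation of steps 121-122 can be written with PIL as follows; the bicubic resampling filter and the uniform random choice between the two radii are assumptions, since the patent fixes only the target size and the radius values.

```python
import random
from PIL import Image, ImageFilter

def make_sample_pair(path, size=(1024, 1024)):
    # Step 121: scale the scene picture to W x H -- the zoom map (training target).
    zoom_map = Image.open(path).convert("RGB").resize(size, Image.BICUBIC)
    # Step 122: Gaussian blur with Radius 1 or 2 -- the blur map (training input).
    blur_map = zoom_map.filter(ImageFilter.GaussianBlur(radius=random.choice([1, 2])))
    return blur_map, zoom_map
```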
The model training in step 13 first requires feeding the blur map of each data sample pair into the sharpness enhancement network constructed in step 11 to obtain the prediction y_pred; the corresponding zoom map serves as the ground truth y_true. The model is then iteratively trained against the LOSS function LOSS of the network to obtain the corrected model. The LOSS function comprises three parts: VGG Loss, Huber Loss and the mean absolute error loss (MAE Loss).
The VGG Loss calculation comprises the following steps:
step 131: inputting the model prediction y_pred and the ground truth y_true into a VGG-19 classification network; y_pred is the image output by the sharpness enhancement network from the blur map, and y_true is the zoom map;
step 132: for each of the last three convolutional layers in VGG-19, calculating the mean square error (MSE) between the feature maps of the predicted image and the ground-truth image, giving loss_vgg_1, loss_vgg_2 and loss_vgg_3;
step 133: the VGG Loss is the mean of loss_vgg_1, loss_vgg_2 and loss_vgg_3.
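A hedged sketch of this VGG Loss using the pretrained VGG-19 shipped with Keras follows. Which layers the patent means by "the last three convolutional layers" is an assumption (block5_conv2 to block5_conv4 here), and VGG's usual input preprocessing is omitted for brevity.

```python
import tensorflow as tf
from tensorflow.keras.applications import VGG19
from tensorflow.keras.models import Model

# Frozen feature extractor over the assumed last three convolutional layers.
_vgg = VGG19(weights="imagenet", include_top=False)
_names = ["block5_conv2", "block5_conv3", "block5_conv4"]
_extractor = Model(_vgg.input, [_vgg.get_layer(n).output for n in _names])
_extractor.trainable = False

def vgg_feature_losses(y_true, y_pred):
    # Step 132: MSE between corresponding feature maps -> loss_vgg_1..loss_vgg_3.
    return [tf.reduce_mean(tf.square(a - b))
            for a, b in zip(_extractor(y_true), _extractor(y_pred))]

def vgg_loss(y_true, y_pred):
    # Step 133: the VGG Loss is the mean of the three component losses.
    return tf.add_n(vgg_feature_losses(y_true, y_pred)) / 3.0
```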
The Huber Loss is obtained from the model prediction y_pred and the ground truth y_true; its value is denoted loss_huber and is calculated as:
L(y_true, y_pred) = ½ (y_true − y_pred)²,  if |y_true − y_pred| ≤ δ
L(y_true, y_pred) = δ·|y_true − y_pred| − ½ δ²,  otherwise
where L(y_true, y_pred) represents loss_huber; δ is a set constant, 1.0 in this example.
The MAE Loss is likewise obtained from the model prediction y_pred and the ground truth y_true; its value is denoted loss_mae and is calculated as:
L′(y_true, y_pred) = |y_true − y_pred|
where L′(y_true, y_pred) represents loss_mae.
The value of the final LOSS function LOSS of the sharpness enhancement network is the weighted sum of the VGG Loss, the Huber Loss and the mean absolute error loss (MAE Loss), expressed as:
LOSS=loss_huber+loss_mae+(loss_vgg_1+loss_vgg_2+loss_vgg_3)*0.33
The model is trained with this LOSS function to obtain the trained model.
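Putting the three terms together, a minimal sketch of this weighted objective is shown below; it reuses vgg_feature_losses from the VGG Loss sketch above, and tf.keras.losses.Huber (which implements the same piecewise formula) stands in for loss_huber with δ = 1.0 as in this embodiment.

```python
import tensorflow as tf

def total_loss(y_true, y_pred):
    loss_huber = tf.keras.losses.Huber(delta=1.0)(y_true, y_pred)
    loss_mae = tf.reduce_mean(tf.abs(y_true - y_pred))
    loss_vgg = tf.add_n(vgg_feature_losses(y_true, y_pred))  # loss_vgg_1 + _2 + _3
    return loss_huber + loss_mae + 0.33 * loss_vgg

# Typical usage with the network sketch above (the optimizer is an assumption):
# model.compile(optimizer="adam", loss=total_loss)
```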
In the lightweight model setting of step 14, the kernel_size of the first convolution layer is set to 7 and that of the remaining convolution layers to 3; the maximum number of channels is set to 48; and the final model size is set to 0.8 MB. The lightweight setting limits the computation to within 100M FLOPs, so the model can be adapted to a mobile terminal. When a GPU processes an image at 720P resolution, the processing time is within 200 ms. By applying several weighted Loss terms, including the VGG Loss, Huber Loss and MAE Loss values, in place of a single MSE Loss value, the detailed features at image edges come out richer and more realistic.
The image to be processed in step 2 is denoted S. Because the network is designed as a convolutional network, it can be extended to input of any size, so each S is preprocessed before input: S is cropped so that its width and height are multiples of 4, and the excess pixels are discarded. Note that only the minimum number of pixels satisfying the condition is cropped; for example, if the original image is 9 pixels wide, 1 pixel is cropped (leaving 8) rather than 5 (leaving 4). S is then fed into the model with the completed lightweight setting to obtain the corresponding sharpness-enhanced output image.
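A minimal sketch of this minimal-crop preprocessing and of feeding S to the model, assuming an H × W × 3 uint8 array and the [-1, 1] normalization used in the network sketch above:

```python
import numpy as np

def crop_to_multiple_of_4(img):
    # Crop the fewest pixels so width and height become multiples of 4,
    # e.g. a width of 9 loses 1 pixel (9 -> 8), not 5 (9 -> 4).
    h, w = img.shape[:2]
    return img[: h - h % 4, : w - w % 4]

def enhance(model, img):
    # model = build_enhancer(None, None) gives a variable-size input, as step 2 requires.
    x = crop_to_multiple_of_4(img).astype("float32") / 127.5 - 1.0
    y = model.predict(x[None])[0]                 # add and drop the batch axis
    return np.clip((y + 1.0) * 127.5, 0, 255).astype("uint8")
```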
In implementation, the model is corrected by setting several Loss terms; compared with the traditional method of correcting the model with the MAE Loss value alone, this ensures the corrected model handles the edge features of the image better. Completing the lightweight setting of the model limits the computation to within 100M FLOPs and ensures the model runs on a mobile terminal.
The above description is only a specific example of the present invention and should not be construed as limiting it in any way. Those skilled in the relevant art, guided by the disclosed technical content, can make various modifications and changes in form and detail without departing from the principles and structure of the invention; such modifications nevertheless fall within the scope of the appended claims.

Claims (10)

1. A lightweight image sharpness enhancement method, characterized by comprising the following steps:
step 1: constructing a network and performing training iterations with a preset weighted LOSS function to obtain a trained model; the LOSS function combines VGG Loss, Huber Loss and the mean absolute error loss MAE Loss;
step 2: the trained model receives the image to be processed and outputs a sharpness-enhanced result image;
the training of the model in step 1 comprises the following steps:
step 11: constructing a definition enhancement network;
step 12: collecting data samples;
step 13: completing model iterative training according to the collected data samples;
step 14: and finishing lightweight setting of the model.
2. A lightweight image sharpness enhancement method according to claim 1, wherein when the sharpness enhancement network is constructed in step 11, the width of the input image is set to Width and the height to Height; the construction of the sharpness enhancement network comprises the following steps:
step 111: inputting RGB24 image data and selecting the number of channels N, i.e., the network input is W × H × N;
step 112: performing convolution downsampling on the Input image data at least twice; each convolution halves W and H, and the number of channels increases with each convolution;
step 113: after convolution downsampling is completed, performing ResBlock (residual block) processing on the output result m times;
step 114: performing Add operations between the ResBlock output and the corresponding downsampling results in sequence; after the Add operations, performing UpSampling and convolution at least twice, and selecting a tanh activation function for the last UpSampling-and-convolution operation to obtain the first output of the sharpness enhancement network;
step 115: performing an Add operation between the first output and the Input, then applying a convolution to complete the construction of the sharpness enhancement network; once constructed, the network outputs the corresponding sharpness-enhanced result image.
3. A lightweight image sharpness enhancement method according to claim 2, wherein in step 111 the Width of the input image is 1024 and the Height is 1024, the number of channels N of the input RGB24 image data is 3, and the network input is 1024 × 1024 × 3; in step 112, convolution downsampling is performed 4 times; in step 113, ResBlock processing is performed 4 times; in step 114, the upsampling and convolution operations are performed 4 times.
4. A lightweight image sharpness enhancement method according to claim 2, wherein the step 12 of acquiring data samples comprises the following steps:
step 121: preparing Z scene pictures and scaling each picture to W × H to obtain a zoom map; the scaling factor is adjusted according to the original size of the picture;
step 122: applying Gaussian blur with radius Radius to the scaled picture to obtain a blur map, where Radius is 1 or 2.
5. A lightweight image sharpness enhancement method according to claim 1, wherein the model training of step 13 first requires inputting the blur map into the sharpness enhancement network constructed in step 11 to obtain the prediction y_pred, with the corresponding zoom map serving as the ground truth y_true; next, the model is iteratively trained against the LOSS function LOSS of the network to obtain the corrected model; the LOSS function comprises three parts: VGG Loss, Huber Loss and the mean absolute error loss MAE Loss.
6. A method as claimed in claim 5, wherein the VGG Loss calculation comprises the following steps:
step 131: inputting the model prediction y_pred and the ground truth y_true into a VGG-19 classification network; y_pred is the image output by the sharpness enhancement network from the blur map, and y_true is the zoom map;
step 132: for each of the last three convolutional layers in VGG-19, calculating the mean square error (MSE) between the feature maps of the predicted image and the ground-truth image, giving loss_vgg_1, loss_vgg_2 and loss_vgg_3;
step 133: the VGG Loss is the mean of loss_vgg_1, loss_vgg_2 and loss_vgg_3.
7. The method according to claim 6, wherein the Huber Loss is obtained from the model prediction y_pred and the ground truth y_true; its value is denoted loss_huber and is calculated as:
L(y_true, y_pred) = ½ (y_true − y_pred)²,  if |y_true − y_pred| ≤ δ
L(y_true, y_pred) = δ·|y_true − y_pred| − ½ δ²,  otherwise
where L(y_true, y_pred) represents loss_huber and δ is a set constant.
8. The method according to claim 7, wherein the MAE Loss is likewise obtained from the model prediction y_pred and the ground truth y_true; its value is denoted loss_mae and is calculated as:
L′(y_true, y_pred) = |y_true − y_pred|
where L′(y_true, y_pred) represents loss_mae.
9. A lightweight image sharpness enhancement method according to claim 8, wherein the value of the LOSS function LOSS of the sharpness enhancement network is the weighted sum of the VGG Loss, the Huber Loss and the mean absolute error loss MAE Loss, expressed as:
LOSS=loss_huber+loss_mae+(loss_vgg_1+loss_vgg_2+loss_vgg_3)*0.33。
10. A lightweight image sharpness enhancement method according to claim 2, wherein the image to be processed in step 2 is denoted S; before input, each S is preprocessed by cropping its width and height to multiples of 4 and discarding the excess pixels; in step 14, the kernel_size of the first convolution layer is set to 7 and that of the remaining convolution layers to 3; the maximum number of channels is set to 48; and the final model size is set to 0.8 MB.
CN202011373558.6A — priority and filing date 2020-11-30 — Lightweight image definition enhancing method — Active — granted as CN112435192B (en)

Priority Applications (1)

CN202011373558.6A (granted as CN112435192B) — priority and filing date 2020-11-30 — Lightweight image definition enhancing method

Applications Claiming Priority (1)

CN202011373558.6A (granted as CN112435192B) — priority and filing date 2020-11-30 — Lightweight image definition enhancing method

Publications (2)

Publication Number Publication Date
CN112435192A (en) 2021-03-02
CN112435192B (en) 2023-03-14

Family

ID=74697583

Family Applications (1)

CN202011373558.6A (Active; granted as CN112435192B) — Lightweight image definition enhancing method

Country Status (1)

Country Link
CN (1) CN112435192B (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108376387A (en) * 2018-01-04 2018-08-07 复旦大学 Image deblurring method based on polymerization expansion convolutional network
CN108898579A (en) * 2018-05-30 2018-11-27 腾讯科技(深圳)有限公司 A kind of image definition recognition methods, device and storage medium
CN108549892A (en) * 2018-06-12 2018-09-18 东南大学 A kind of license plate image clarification method based on convolutional neural networks
US10646156B1 (en) * 2019-06-14 2020-05-12 Cycle Clarity, LLC Adaptive image processing in assisted reproductive imaging modalities
CN111192206A (en) * 2019-12-03 2020-05-22 河海大学 Method for improving image definition

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
丁予康 (Ding Yukang): "Research on image super-resolution methods based on convolutional neural networks", China Master's Theses Full-text Database, Information Science and Technology, No. 1, 15 January 2019 (2019-01-15), pages 138-2990 *

Also Published As

Publication number Publication date
CN112435192B (en) 2023-03-14

Similar Documents

Publication Publication Date Title
Wang et al. Real-esrgan: Training real-world blind super-resolution with pure synthetic data
CN112233038B (en) True image denoising method based on multi-scale fusion and edge enhancement
CN109389556B (en) Multi-scale cavity convolutional neural network super-resolution reconstruction method and device
CN110008817B (en) Model training method, image processing method, device, electronic equipment and computer readable storage medium
Yu et al. A unified learning framework for single image super-resolution
CN110136062B (en) Super-resolution reconstruction method combining semantic segmentation
CN112446383B (en) License plate recognition method and device, storage medium and terminal
CN110766632A (en) Image denoising method based on channel attention mechanism and characteristic pyramid
CN112102177A (en) Image deblurring method based on compression and excitation mechanism neural network
CN112419191B (en) Image motion blur removing method based on convolution neural network
CN110942436B (en) Image deblurring method based on image quality evaluation
EP4075373A1 (en) Image processing method and apparatus
CN112529776A (en) Training method of image processing model, image processing method and device
CN113658044A (en) Method, system, device and storage medium for improving image resolution
CN111951164A (en) Image super-resolution reconstruction network structure and image reconstruction effect analysis method
CN112598587A (en) Image processing system and method combining face mask removal and super-resolution
CN116547694A (en) Method and system for deblurring blurred images
CN113096032B (en) Non-uniform blurring removal method based on image region division
CN113724134A (en) Aerial image blind super-resolution reconstruction method based on residual distillation network
CN116029943B (en) Infrared image super-resolution enhancement method based on deep learning
CN112435192B (en) Lightweight image definition enhancing method
CN110717913B (en) Image segmentation method and device
CN109996085B (en) Model training method, image processing method and device and electronic equipment
CN114187174A (en) Image super-resolution reconstruction method based on multi-scale residual error feature fusion
CN113240589A (en) Image defogging method and system based on multi-scale feature fusion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP02 Change in the address of a patent holder

Address after: 22nd floor, block a, Huaxing Times Square, 478 Wensan Road, Xihu District, Hangzhou, Zhejiang 310000

Patentee after: Hangzhou Xiaoying Innovation Technology Co.,Ltd.

Address before: 16 / F, HANGGANG Metallurgical Science and technology building, 294 Tianmushan Road, Xihu District, Hangzhou City, Zhejiang Province, 310012

Patentee before: Hangzhou Xiaoying Innovation Technology Co.,Ltd.