CN112669210A - Image super-resolution method, device and storage medium based on VDSR model applying novel ReLU function - Google Patents

Image super-resolution method, device and storage medium based on VDSR model applying novel ReLU function

Info

Publication number
CN112669210A
Authority
CN
China
Prior art keywords
relu function
image
improved
vdsr
vdsr model
Prior art date
Legal status
Granted
Application number
CN202011576889.XA
Other languages
Chinese (zh)
Other versions
CN112669210B (en)
Inventor
元辉
姜东冉
付丛睿
姜世奇
Current Assignee
Shandong University
Original Assignee
Shandong University
Priority date
Filing date
Publication date
Application filed by Shandong University filed Critical Shandong University
Priority to CN202011576889.XA priority Critical patent/CN112669210B/en
Publication of CN112669210A publication Critical patent/CN112669210A/en
Application granted granted Critical
Publication of CN112669210B publication Critical patent/CN112669210B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

The invention relates to an image super-resolution method, device and storage medium based on a VDSR model applying a novel ReLU function, characterized in that: an image to be processed is input into a trained improved VDSR model, which outputs a high-resolution version of the image; the novel ReLU function in the improved VDSR model is a ReLU function with an adaptively learned quiescent operating point. Inspired by the triode amplifier circuit, the invention introduces the concept of a quiescent operating point into the novel ReLU function, treats the zero point of the conventional ReLU function as the quiescent operating point, and learns the value of this quiescent operating point adaptively during neural network training. The novel ReLU function is applied to a VDSR model, and data augmentation and a learning-rate decay strategy are adopted during network training to avoid overfitting. The invention effectively improves the performance of the VDSR model on super-resolution tasks.

Description

Image super-resolution method, device and storage medium based on VDSR model applying novel ReLU function
Technical Field
The invention relates to an image super-resolution method, device and storage medium based on a VDSR model applying a novel ReLU function, and belongs to the technical field of deep learning.
Background
Deep learning is a family of high-complexity data-modeling algorithms built from multiple layers of nonlinear transformations. By virtue of their strong learning and expressive capabilities, deep neural networks have become one of the most important research directions in the field of deep learning and are widely applied in image processing, video processing, and related fields.
Each neuron in a neural network receives the output values of the neurons in the previous layer as its input and passes its own output to the next layer; neurons in the input layer pass the input attribute values directly to the next layer (a hidden layer or the output layer). In a multi-layer neural network, the output of an upper-layer node and the input of a lower-layer node are related by a function called the activation function (also called the excitation function). Early neural network models, such as the multilayer perceptron (MLP), did not introduce activation functions: each layer of neurons simply applied a linear transformation to the outputs of the previous layer, so a fully connected network of any depth had no more expressive power than a single-layer model. Once an activation function is introduced, the neural network gains nonlinear fitting capability, its expressive power is greatly enhanced, and it can approximate almost any function.
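As a quick illustration of this point (not part of the patent), the following NumPy sketch shows that two stacked linear layers without an activation collapse into a single equivalent linear layer:

```python
# Illustrative sketch: composing linear layers yields another linear map,
# so depth adds no expressive power until a nonlinearity is inserted.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.standard_normal((4, 3))   # weights of "layer 1"
W2 = rng.standard_normal((2, 4))   # weights of "layer 2"
x = rng.standard_normal(3)         # input attribute vector

two_layer = W2 @ (W1 @ x)          # two linear layers, no activation
collapsed = (W2 @ W1) @ x          # one linear layer with weights W2·W1
print(np.allclose(two_layer, collapsed))  # True: identical outputs
```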
Because activation functions bring great improvements to neural network performance, researchers have studied their design extensively, from the early Sigmoid and Tanh functions to the now widely used ReLU function. Several commonly used activation functions are described in detail below.
First, the Sigmoid function, whose mathematical form is as follows:

$$f(x) = \frac{1}{1 + e^{-x}}$$
The Sigmoid function is plotted in fig. 1, where the horizontal and vertical axes represent its input and output respectively. The Sigmoid function maps its input to the interval (0, 1); its gradient vanishes easily during backpropagation in deep neural networks; because its output is not zero-mean (zero-centered), model training converges slowly; and because it contains a power (exponential) operation, its computational cost is high.
Second, the Tanh function, whose mathematical form is as follows:

$$f(x) = \tanh(x) = \frac{e^{x} - e^{-x}}{e^{x} + e^{-x}}$$
The Tanh function is plotted in FIG. 2, where the horizontal and vertical axes represent its input and output respectively. The Tanh function solves the problem that the Sigmoid output is not zero-mean, but the problems of vanishing gradients and power operations remain.
Third, the ReLU function, whose mathematical form is as follows:

$$f(x) = \max(0, x)$$
The ReLU function is plotted in fig. 3, where the abscissa and ordinate represent its input and output respectively; it is in fact a maximum function. Although simple, it is an important result of recent years. Compared with the Sigmoid and Tanh functions, ReLU no longer contains power operations, which greatly reduces computational cost, and it effectively alleviates the vanishing-gradient problem. However, the ReLU function still outputs a non-zero mean, and with improper parameter initialization or an excessively high initial learning rate, some neurons in the network may never be activated. Despite these problems, ReLU remains one of the most popular activation functions, and variants such as Leaky ReLU, RReLU, and PReLU have appeared in subsequent studies.
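For side-by-side reference, the three classical activation functions above can be written as a short NumPy sketch (illustrative, not part of the patent):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))   # maps to (0, 1); needs an exponential

def tanh(x):
    return np.tanh(x)                 # zero-centered output, but still saturates

def relu(x):
    return np.maximum(0.0, x)         # a simple maximum; no power operations

x = np.linspace(-5.0, 5.0, 5)
for f in (sigmoid, tanh, relu):
    print(f.__name__, f(x))
```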
In recent years, convolutional neural networks have made great progress in the image super-resolution task, and researchers often improve performance by deepening the network model; the VDSR model, for example, contains 20 convolutional layers. If the Sigmoid or Tanh function mentioned above were applied as the activation layer, training of the network model would become unstable and gradients would easily vanish, so training could not continue. The original VDSR model selects the ReLU function as its activation layer and obtains relatively good performance. However, the ReLU function zeroes out all feature values smaller than 0, which limits the expressive power of the model.
Disclosure of Invention
To address the shortcomings of the prior art, the invention provides an image super-resolution method based on a VDSR model applying a novel ReLU function;
the invention also provides computer equipment and a computer storage medium.
Interpretation of terms:
1. The VDSR model is a classical image super-resolution network model whose structure is shown in fig. 5. It comprises 20 convolutional layers, 19 ReLU activation layers, and a residual connection. The model takes a bicubically interpolated low-resolution image as input, learns the high-frequency components lost from the low-resolution image through the 20 convolutional layers, and at the end of the network adds the learned high-frequency components to the input pixel-wise to obtain the final high-resolution image.
2. The public dataset BSD300, provided by the Berkeley Computer Vision Group, contains 200 training images and 100 test images.
The technical scheme of the invention is as follows:
an image super-resolution method based on an improved VDSR model applying a novel ReLU function is as follows: inputting an image to be processed into a trained improved VDSR model, and outputting to obtain a high-resolution image of the image;
The novel ReLU function in the improved VDSR model, i.e., the ReLU function f(x) with an adaptively learned quiescent operating point, is shown in formula (I):
$$f(x) = \begin{cases} 0, & x < Q \\ x - Q, & x \geq Q \end{cases} \qquad \text{(I)}$$
In formula (I), Q denotes the quiescent operating point, which corresponds to the zero point of the original ReLU function; Q is obtained through adaptive learning, and x denotes the input feature of a novel ReLU function layer in the improved VDSR model.
That Q is obtained through adaptive learning means the following: Q is set as a learnable parameter and initialized to 0, and x denotes the input feature of a novel ReLU function layer in the improved VDSR model. During training of the improved VDSR model, the novel ReLU layer sets input-feature values smaller than Q to 0 and maps values larger than Q to x − Q; the value of Q is continuously updated during backpropagation, and after training on a large amount of data, the novel ReLU layers at different positions in the VDSR model hold different Q values. Compared with the original ReLU function (the ReLU function described in the Background section), the novel ReLU function proposed by the invention has better nonlinear fitting capability.
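A minimal PyTorch sketch of such a layer is given below; the class name QReLU and the single scalar Q per layer are illustrative assumptions rather than details stated in the patent:

```python
import torch
import torch.nn as nn

class QReLU(nn.Module):
    """Novel ReLU with a learnable quiescent operating point Q (a sketch)."""

    def __init__(self):
        super().__init__()
        # Q is a learnable parameter initialized to 0, so the layer starts
        # out identical to the original ReLU and adapts during training
        self.Q = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        # formula (I): f(x) = 0 for x < Q and f(x) = x - Q for x >= Q,
        # i.e. max(0, x - Q); Q is updated by backpropagation
        return torch.relu(x - self.Q)
```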
The concept of a quiescent operating point stems from the principle of the triode (transistor) amplifier. A triode has two states, static and dynamic. The static state is the DC operating state when no signal is applied to the triode, and the voltages and currents at each electrode in this state are called the quiescent operating values; the dynamic state is the operating state when an AC signal is applied, and the electrode currents in this state are called the dynamic operating currents. If the DC circuit of the triode does not work normally, its AC circuit cannot work normally either. The quiescent operating point means that, in the static state, the circuit is in a DC operating state whose current and voltage values can be represented by a fixed point on the triode's input-output curve, also called the Q point. It determines the static voltage and current values of the amplifier circuit, and selecting a proper quiescent operating point prevents the circuit from producing nonlinear distortion and guarantees the amplification effect. In the original ReLU function, inputs greater than 0 are output unchanged, while input features smaller than 0 yield an output of 0; combining this with the concept of the quiescent operating point in the triode, the zero point of the original ReLU function is called the quiescent operating point of the activation function.
Preferably, according to the invention, the improved VDSR model comprises 20 convolutional layers, 19 novel ReLU function layers, and a residual connection; the improved VDSR model takes a bicubically interpolated low-resolution image as input, learns the lost high-frequency components of the low-resolution image through the 20 convolutional layers, and at the end of the model adds the learned high-frequency components to the input pixel-wise to obtain the final high-resolution image.
The improved VDSR model has exactly the same network structure as the original VDSR model, except that the original ReLU function is replaced with the novel ReLU function proposed by the invention.
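For illustration only, the improved structure can be sketched in PyTorch as follows, reusing the QReLU module from the previous sketch; the 64-channel width, 3×3 kernels, and single-channel (luminance) input are assumptions carried over from the original VDSR design:

```python
import torch.nn as nn

class ImprovedVDSR(nn.Module):
    """VDSR with every original ReLU replaced by QReLU (a sketch)."""

    def __init__(self, channels=64, depth=20):
        super().__init__()
        layers = [nn.Conv2d(1, channels, kernel_size=3, padding=1), QReLU()]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(channels, channels, kernel_size=3, padding=1), QReLU()]
        layers.append(nn.Conv2d(channels, 1, kernel_size=3, padding=1))
        self.body = nn.Sequential(*layers)  # 20 conv layers, 19 QReLU layers

    def forward(self, x):
        # x is the bicubic-interpolated low-resolution image; the body learns
        # the lost high-frequency residual, which is added back pixel-wise
        return x + self.body(x)
```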
Preferably, according to the present invention, the improved VDSR model is trained as follows:
(1) data pre-processing
Selecting a plurality of pictures in the public data set BSD300 as a training set, and selecting a plurality of pictures as a test set;
performing data augmentation (data augmentation) on training data in the training set;
(2) building an improved VDSR model
The improved VDSR model adopts the network structure of the original VDSR model and replaces the original ReLU function with the novel ReLU function;
(3) training procedure
Inputting the training data from the training set processed in step (1) into the improved VDSR model built in step (2) for training, obtaining the trained VDSR model.
Further preferably, in step (3), the initial learning rate is set to 0.0001, Adam is selected as the optimizer, the batch size is set to 16, 200 epochs are trained, and testing is performed after each epoch of training.
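A minimal training-loop sketch consistent with these hyperparameters is shown below, reusing the ImprovedVDSR sketch above; the synthetic tensors, 41×41 patch size, and MSE loss are illustrative assumptions:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# stand-in data for illustration; a real run would use augmented BSD300 patches
lr_patches = torch.rand(64, 1, 41, 41)  # bicubic-interpolated inputs
hr_patches = torch.rand(64, 1, 41, 41)  # high-resolution targets
loader = DataLoader(TensorDataset(lr_patches, hr_patches), batch_size=16, shuffle=True)

model = ImprovedVDSR()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = torch.nn.MSELoss()

for epoch in range(200):
    for x, y in loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()   # the Q in every QReLU layer is updated here as well
        optimizer.step()
    # testing (e.g. PSNR on the test set) would be performed here after each epoch
```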
Further preferably, data augmentation means: performing random horizontal flipping, random brightness adjustment, and random cropping on the training data to expand the training set. This avoids network overfitting and improves the generalization capability of the network.
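As an illustration only, such an augmentation pipeline could be written with torchvision transforms; the flip probability, brightness range, and crop size below are assumed values, not ones stated in the patent:

```python
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),   # random horizontal flipping
    transforms.ColorJitter(brightness=0.2),   # random brightness adjustment
    transforms.RandomCrop(41),                # random cropping of a training patch
])
```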
Preferably, according to the invention, the output high-resolution image uses the peak signal-to-noise ratio (PSNR) as its performance evaluation index. PSNR is defined via the mean square error (MSE): given two single-channel images I and K of size m × n, the mean square error is defined as in formula (II):
$$MSE = \frac{1}{mn}\sum_{i=0}^{m-1}\sum_{j=0}^{n-1}\big[I(i,j) - K(i,j)\big]^{2} \qquad \text{(II)}$$
In formula (II), i and j denote the row and column indices of the image respectively, and I(i, j) and K(i, j) denote the pixel values at row i, column j of images I and K respectively;
the peak signal-to-noise ratio PSNR is defined as shown in formula (III):
$$PSNR = 10 \cdot \log_{10}\!\left(\frac{MAX_{I}^{2}}{MSE}\right) \qquad \text{(III)}$$
In formula (III), $MAX_I$ denotes the maximum possible value of a pixel in the image; for an 8-bit image, the maximum value is 255.
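Formulas (II) and (III) can be computed with a short NumPy sketch (illustrative) for two 8-bit single-channel images:

```python
import numpy as np

def psnr(I, K, max_i=255.0):
    # formula (II): mean square error over all m x n pixel positions
    mse = np.mean((I.astype(np.float64) - K.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")   # identical images
    # formula (III): peak signal-to-noise ratio in dB
    return 10.0 * np.log10(max_i ** 2 / mse)
```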
A computer device comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, implements the steps of the image super-resolution method based on the improved VDSR model applying the novel ReLU function.
A computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the image super-resolution method based on the improved VDSR model applying the novel ReLU function.
The invention has the beneficial effects that:
1. The invention provides a novel ReLU function that can adaptively learn its quiescent operating point during training, improving the expressive power of convolutional neural network models.
2. Compared with the VDSR model applying the original ReLU function, the VDSR model applying the novel ReLU function of the invention obtains higher PSNR values, effectively improving performance on super-resolution tasks and verifying the effectiveness of the invention.
3. The invention performs data augmentation during network training and adopts a learning-rate decay strategy, effectively avoiding overfitting during training and improving the generalization capability of the network.
Drawings
Fig. 1 is a schematic diagram of a conventional Sigmoid function image;
FIG. 2 is a diagram of an image of a conventional Tanh function;
FIG. 3 is a diagram of an image of an original ReLU function;
FIG. 4 is a diagram illustrating an image of a ReLU function for adaptively learning a static operating point according to the present invention;
fig. 5 is a schematic diagram of the network architecture of the improved VDSR model of the present invention;
FIG. 6 is a graph comparing experimental results on the 2-fold super-resolution task;
FIG. 7 is a graph comparing experimental results on the 3-fold super-resolution task.
Detailed Description
The invention is further described below with reference to the figures and examples of the description, without being limited thereto.
Example 1
An image super-resolution method based on an improved VDSR model applying a novel ReLU function proceeds as follows: an image to be processed is input into a trained improved VDSR model, which outputs a high-resolution version of the image;
As shown in fig. 4, the novel ReLU function in the improved VDSR model, i.e. the ReLU function f(x) with an adaptively learned quiescent operating point, is given by formula (I); the abscissa and ordinate represent the input and output of the novel ReLU function respectively:
$$f(x) = \begin{cases} 0, & x < Q \\ x - Q, & x \geq Q \end{cases} \qquad \text{(I)}$$
In formula (I), Q denotes the quiescent operating point, which corresponds to the zero point of the original ReLU function; Q is obtained through adaptive learning, and x denotes the input feature of a novel ReLU function layer in the improved VDSR model.
That Q is obtained through adaptive learning means the following: Q is set as a learnable parameter and initialized to 0, and x denotes the input feature of a novel ReLU function layer in the improved VDSR model. During training of the improved VDSR model, the novel ReLU layer sets input-feature values smaller than Q to 0 and maps values larger than Q to x − Q; the value of Q is continuously updated during backpropagation, and after training on a large amount of data, the novel ReLU layers at different positions in the VDSR model hold different Q values. Compared with the original ReLU function (the ReLU function described in the Background section), the novel ReLU function proposed by the invention has better nonlinear fitting capability.
The concept of a quiescent operating point stems from the principle of the triode (transistor) amplifier. A triode has two states, static and dynamic. The static state is the DC operating state when no signal is applied to the triode, and the voltages and currents at each electrode in this state are called the quiescent operating values; the dynamic state is the operating state when an AC signal is applied, and the electrode currents in this state are called the dynamic operating currents. If the DC circuit of the triode does not work normally, its AC circuit cannot work normally either. The quiescent operating point means that, in the static state, the circuit is in a DC operating state whose current and voltage values can be represented by a fixed point on the triode's input-output curve, also called the Q point. It determines the static voltage and current values of the amplifier circuit, and selecting a proper quiescent operating point prevents the circuit from producing nonlinear distortion and guarantees the amplification effect. In the original ReLU function, inputs greater than 0 are output unchanged, while input features smaller than 0 yield an output of 0; combining this with the concept of the quiescent operating point in the triode, the zero point of the original ReLU function is called the quiescent operating point of the activation function.
As shown in fig. 5, the improved VDSR model includes 20 convolutional layers, 19 novel ReLU function layers, and a residual connection; the improved VDSR model takes a bicubically interpolated low-resolution image as input, learns the lost high-frequency components of the low-resolution image through the 20 convolutional layers, and at the end of the model adds the learned high-frequency components to the input pixel-wise to obtain the final high-resolution image.
The improved VDSR model has exactly the same network structure as the original VDSR model, except that the original ReLU function is replaced with the novel ReLU function proposed by the invention.
Example 2
An image super-resolution method based on an improved VDSR model applying a new ReLU function as described in embodiment 1, which is different in that: the training process for the improved VDSR model is as follows:
(1) data pre-processing
200 pictures in a public data set BSD300 are selected as a training set, and 100 pictures are selected as a test set;
performing data augmentation (data augmentation) on training data in the training set;
(2) building an improved VDSR model
The improved VDSR model adopts the network structure of the original VDSR model and replaces the original ReLU function with the novel ReLU function;
(3) training procedure
Inputting the training data from the training set processed in step (1) into the improved VDSR model built in step (2) for training, obtaining the trained VDSR model.
In step (3), the initial learning rate is set to 0.0001, Adam is selected as the optimizer, the batch size is set to 16, 200 epochs are trained, and testing is performed after each epoch of training.
Data augmentation refers to: performing random horizontal flipping, random brightness adjustment, and random cropping on the training data to expand the training set. This avoids network overfitting and improves the generalization capability of the network.
The output high-resolution image uses the peak signal-to-noise ratio (PSNR) as its performance evaluation index. PSNR is defined via the mean square error (MSE): given two single-channel images I and K of size m × n, the mean square error is defined as in formula (II):
$$MSE = \frac{1}{mn}\sum_{i=0}^{m-1}\sum_{j=0}^{n-1}\big[I(i,j) - K(i,j)\big]^{2} \qquad \text{(II)}$$
In formula (II), i and j denote the row and column indices of the image respectively, and I(i, j) and K(i, j) denote the pixel values at row i, column j of images I and K respectively;
the peak signal-to-noise ratio PSNR is defined as shown in formula (III):
$$PSNR = 10 \cdot \log_{10}\!\left(\frac{MAX_{I}^{2}}{MSE}\right) \qquad \text{(III)}$$
In formula (III), $MAX_I$ denotes the maximum possible value of a pixel in the image; for an 8-bit image, the maximum value is 255.
In this embodiment, 200 pictures from the dataset BSD300 are selected as the training dataset and 100 pictures as the testing dataset, and VDSR models applying the original ReLU function and the novel ReLU function proposed by the invention are trained respectively.
FIG. 6 compares experimental results on the 2-fold super-resolution task, and FIG. 7 compares experimental results on the 3-fold super-resolution task. In fig. 6 and 7, the abscissa and ordinate represent epoch (1 epoch denotes one pass over all samples in the training set) and PSNR (peak signal-to-noise ratio) respectively; the triangle-marked curves show the training results of the improved VDSR model applying the novel ReLU function of the invention, and the circle-marked curves show the training results of the VDSR model applying the original ReLU function;
As can be seen from fig. 6 and 7, in both the 2-fold and 3-fold super-resolution tasks, applying the novel ReLU function proposed by the invention improves the performance of the VDSR model.
Example 3
A computer device comprising a memory storing a computer program and a processor implementing the steps of embodiment 1 or 2 of the image super-resolution method based on an improved VDSR model applying a new ReLU function when executing the computer program.
Example 4
A computer-readable storage medium, having stored thereon a computer program which, when executed by a processor, implements the steps of embodiment 1 or 2 of the image super-resolution method based on the improved VDSR model applying the novel ReLU function.

Claims (8)

1. An image super-resolution method based on an improved VDSR model applying a novel ReLU function, characterized in that: an image to be processed is input into a trained improved VDSR model, which outputs a high-resolution version of the image;
the novel ReLU function in the improved VDSR model, i.e., the ReLU function f(x) with an adaptively learned quiescent operating point, is shown in formula (I):
$$f(x) = \begin{cases} 0, & x < Q \\ x - Q, & x \geq Q \end{cases} \qquad \text{(I)}$$
In formula (I), Q denotes the quiescent operating point, which corresponds to the zero point of the original ReLU function; Q is obtained through adaptive learning, and x denotes the input feature of a novel ReLU function layer in the improved VDSR model.
2. The method of claim 1, wherein the improved VDSR model comprises 20 convolutional layers, 19 novel ReLU function layers, and residual connection; the improved VDSR model takes the low-resolution image as input after bicubic interpolation, learns the lost high-frequency component of the low-resolution image through 20 layers of convolution layers, and performs pixel-level addition on the learned high-frequency component and the input at the end of the improved VDSR model to obtain the final high-resolution image.
3. The method of claim 1, wherein the improved VDSR model is trained as follows:
(1) data pre-processing
Selecting a plurality of pictures in the public data set BSD300 as a training set, and selecting a plurality of pictures as a test set;
performing data augmentation on training data in the training set;
(2) building an improved VDSR model
The improved VDSR model adopts the network structure of the original VDSR model and replaces the original ReLU function with the novel ReLU function;
(3) training procedure
Inputting the training data from the training set processed in step (1) into the improved VDSR model built in step (2) for training, obtaining the trained VDSR model.
4. The method of claim 3, wherein in step (3), the initial learning rate is set to 0.0001, Adam is selected as the optimizer, the batch size is set to 16, 200 epochs are trained, and testing is performed after each epoch of training.
5. The method of claim 1, wherein data augmentation refers to: performing random horizontal flipping, random brightness adjustment, and random cropping on the training data to expand the training set.
6. The image super-resolution method based on the improved VDSR model applying the novel ReLU function of claim 1, wherein the output high-resolution image uses the peak signal-to-noise ratio PSNR as its performance evaluation index, PSNR being defined via the mean square error MSE: given two single-channel images I and K of size m × n, the mean square error is defined as in formula (II):
$$MSE = \frac{1}{mn}\sum_{i=0}^{m-1}\sum_{j=0}^{n-1}\big[I(i,j) - K(i,j)\big]^{2} \qquad \text{(II)}$$
In formula (II), i and j denote the row and column indices of the image respectively, and I(i, j) and K(i, j) denote the pixel values at row i, column j of images I and K respectively;
the peak signal-to-noise ratio PSNR is defined as shown in formula (III):
$$PSNR = 10 \cdot \log_{10}\!\left(\frac{MAX_{I}^{2}}{MSE}\right) \qquad \text{(III)}$$
In formula (III), $MAX_I$ denotes the maximum value of a pixel in the image.
7. A computer device comprising a memory and a processor, characterized in that the memory stores a computer program which when executed by the processor implements the steps of the method for image super resolution based on an improved VDSR model applying a new ReLU function of any of claims 1-6.
8. A computer-readable storage medium, having stored thereon a computer program, wherein the computer program, when being executed by a processor, is adapted to carry out the steps of the method for image super-resolution based on an improved VDSR model applying a new ReLU function as claimed in any one of claims 1-6.
CN202011576889.XA 2020-12-28 2020-12-28 Image super-resolution method, device and medium based on static working point Active CN112669210B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011576889.XA CN112669210B (en) 2020-12-28 2020-12-28 Image super-resolution method, device and medium based on static working point

Publications (2)

Publication Number Publication Date
CN112669210A (en) 2021-04-16
CN112669210B (en) 2022-06-03

Family

ID=75410517

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011576889.XA Active CN112669210B (en) 2020-12-28 2020-12-28 Image super-resolution method, device and medium based on static working point

Country Status (1)

Country Link
CN (1) CN112669210B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109102462A (en) * 2018-08-01 2018-12-28 中国计量大学 A kind of video super-resolution method for reconstructing based on deep learning
US20190347549A1 (en) * 2018-05-10 2019-11-14 Microsoft Technology Licensing, Llc Efficient data encoding for deep neural network training
CN110599401A (en) * 2019-08-19 2019-12-20 中国科学院电子学研究所 Remote sensing image super-resolution reconstruction method, processing device and readable storage medium
US20200027015A1 (en) * 2017-04-07 2020-01-23 Intel Corporation Systems and methods for providing deeply stacked automated program synthesis

Also Published As

Publication number Publication date
CN112669210B (en) 2022-06-03

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant