CN112669210B - Image super-resolution method, device and medium based on static working point - Google Patents


Info

Publication number
CN112669210B
CN112669210B
Authority
CN
China
Prior art keywords
relu function
improved
image
vdsr
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011576889.XA
Other languages
Chinese (zh)
Other versions
CN112669210A (en)
Inventor
元辉
姜东冉
付丛睿
姜世奇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong University
Original Assignee
Shandong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong University
Priority to CN202011576889.XA
Publication of CN112669210A
Application granted
Publication of CN112669210B
Status: Active

Landscapes

  • Image Processing (AREA)

Abstract

The invention relates to an image super-resolution method, device, and storage medium based on a VDSR model applying a static operating point ReLU function. An image to be processed is input into a trained improved VDSR model, which outputs a high-resolution version of the image. The static operating point ReLU function in the improved VDSR model is a ReLU function whose static operating point is learned adaptively. Inspired by the transistor amplifier circuit, the invention introduces the concept of a static (quiescent) operating point into the ReLU function: the zero point of the conventional ReLU function is treated as the static operating point, and its value is learned adaptively during training of the neural network. The static operating point ReLU function is applied to a VDSR model, and data augmentation and a learning-rate decay strategy are adopted during training to avoid overfitting. The invention effectively improves the performance of the VDSR model on super-resolution tasks.

Description

Image super-resolution method, device and medium based on static working point
Technical Field
The invention relates to an image super-resolution method, device, and storage medium based on a VDSR model applying a static operating point ReLU function, and belongs to the technical field of deep learning.
Background
Deep learning is a family of high-complexity data-modeling algorithms built on multi-layer nonlinear transformations. Owing to their strong learning and representation capabilities, deep neural networks have become one of the most important research directions in deep learning and are widely applied in fields such as image and video processing.
Each neuron in a neural network receives the outputs of the neurons in the previous layer as its input and passes its own output to the next layer; neurons in the input layer pass the input attribute values directly to the next layer (hidden or output layer). In a multi-layer network, the output of an upper-layer node and the input of a lower-layer node are related by a function called the activation function (also called the excitation function). Early neural network models, such as the multilayer perceptron (MLP), did not introduce activation functions: each layer simply applied a linear transformation to the outputs of the previous layer, so a fully connected network of any depth had no more expressive power than a single-layer model. Once an activation function is introduced, the network gains nonlinear fitting capability, its expressive power is greatly enhanced, and it can approximate almost any function.
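As a concrete illustration of this point, the following minimal NumPy sketch (illustrative background, not part of the invention) shows that stacking two linear layers without an activation function collapses into a single linear layer:
```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(4)        # input vector
W1 = rng.standard_normal((8, 4))  # weights of "layer 1" (no activation)
W2 = rng.standard_normal((3, 8))  # weights of "layer 2" (no activation)

two_layers = W2 @ (W1 @ x)        # two stacked linear layers
one_layer = (W2 @ W1) @ x         # one equivalent linear layer

print(np.allclose(two_layers, one_layer))  # True: same expressive power
```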
Because the activation function brings a large improvement to neural network performance, researchers have studied its design extensively, from the early Sigmoid and Tanh functions to the ReLU function that has been widely used in recent years. Several commonly used activation functions are described below.
First, the Sigmoid function, whose mathematical form is:
$$f(x) = \frac{1}{1 + e^{-x}}$$
The Sigmoid function image is shown in fig. 1, where the horizontal and vertical coordinates represent the input and output of the Sigmoid function, respectively. The Sigmoid function maps its input to the interval (0,1). In a deep neural network its gradient vanishes easily during back-propagation; because its output is not zero-mean (zero-centered), training converges slowly; and because it contains a power operation, it is computationally expensive.
Second, the Tanh function, whose mathematical form is:
$$\tanh(x) = \frac{e^{x} - e^{-x}}{e^{x} + e^{-x}}$$
As shown in fig. 2, the abscissa and ordinate of the Tanh function image represent the input and output of the Tanh function, respectively. The Tanh function solves the problem that the Sigmoid output is not zero-mean, but the problems of vanishing gradients and power operations remain.
Third, the ReLU function, whose mathematical form is:
$$f(x) = \max(0, x)$$
The ReLU function image is shown in fig. 3; the abscissa and ordinate represent the input and output of the ReLU function, which is in fact a maximum function. Although simple, it has been an important result of recent years. Compared with the Sigmoid and Tanh functions, the ReLU function contains no power operation, which greatly reduces computation, and it effectively alleviates the vanishing-gradient problem. However, the ReLU function still outputs a non-zero mean, and with improper parameter initialization or an excessively high initial learning rate, some neurons in the network may never be activated. Despite these problems, the ReLU function remains one of the most popular activation functions, and variants such as Leaky ReLU, RReLU, and PReLU have appeared in subsequent studies.
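For reference, the three classic activation functions discussed above can be written as plain NumPy functions (a background sketch, not part of the claimed method):
```python
import numpy as np

def sigmoid(x):
    # maps any input into (0, 1); contains a power operation (exp)
    return 1.0 / (1.0 + np.exp(-x))

def tanh(x):
    # zero-centered output in (-1, 1); still uses power operations
    return np.tanh(x)

def relu(x):
    # a maximum function: no power operation, inputs below 0 are zeroed
    return np.maximum(0.0, x)
```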
In recent years, convolutional neural networks have made great progress on the image super-resolution task, and researchers often improve performance by deepening the network; the VDSR model, for example, contains 20 convolutional layers. If the Sigmoid or Tanh function mentioned above were used as the activation layer, training of such a network would become unstable and gradients would easily vanish, so training could not continue. The original VDSR model therefore selects the ReLU function as its activation layer and obtains relatively good performance. However, the ReLU function sets all feature values less than 0 to zero, which limits the expressive power of the model.
Disclosure of Invention
In view of the defects of the prior art, the invention provides an image super-resolution method based on a VDSR model applying a static operating point ReLU function.
The invention also provides a computer device and a computer storage medium.
Interpretation of terms:
1. The VDSR model is a classical image super-resolution network whose structure is shown in fig. 5. It comprises 20 convolutional layers, 19 ReLU activation function layers, and a residual connection. The model takes a bicubically interpolated low-resolution image as input, learns the high-frequency components lost from the low-resolution image through the 20 convolutional layers, and at the end of the network adds the learned high-frequency components to the input pixel-wise to obtain the final high-resolution image.
2. The public data set BSD300, provided by the Berkeley Computer Vision Group, contains 200 training images and 100 test images.
The technical scheme of the invention is as follows:
an image super-resolution method based on an improved VDSR model applying a static working point ReLU function is characterized in that: inputting an image to be processed into a trained improved VDSR model, and outputting to obtain a high-resolution image of the image;
the static operating point ReLU function in the improved VDSR model, i.e. the ReLU function f(x) with an adaptively learned static operating point, is shown in formula (I):
$$f(x) = \begin{cases} x - Q, & x > Q \\ 0, & x \le Q \end{cases} \qquad \text{(I)}$$
in formula (I), Q denotes the static operating point, which corresponds to the zero point of the original ReLU function and is obtained through adaptive learning, and x denotes the input feature of a static operating point ReLU function layer in the improved VDSR model.
Q is obtained through adaptive learning, which means: Q is set as a learnable parameter initialized to 0, and x denotes the input feature of a static operating point ReLU function layer in the improved VDSR model. During training of the improved VDSR model, input-feature values smaller than Q are set to 0 and values larger than Q are set to x - Q; Q is continuously updated during back-propagation, so that after training on a large amount of data the static operating point ReLU layers at different positions in the VDSR model hold different values of Q. Compared with the original ReLU function (the ReLU function described in the background art), the proposed static operating point ReLU function has better nonlinear fitting capability.
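A minimal PyTorch sketch of such a layer is given below; the class name QReLU and the choice of one learnable scalar Q per layer are illustrative assumptions, since the patent text does not prescribe an implementation:
```python
import torch
import torch.nn as nn

class QReLU(nn.Module):
    """Static operating point ReLU, formula (I): f(x) = x - Q if x > Q, else 0."""
    def __init__(self):
        super().__init__()
        # learnable static operating point Q, initialized to 0 as described above
        self.Q = nn.Parameter(torch.zeros(1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # values below Q become 0, values above Q become x - Q;
        # Q is updated by back-propagation, so each layer learns its own Q
        return torch.relu(x - self.Q)
```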
The concept of a static operating point comes from the transistor amplifier principle. A transistor has two states, static and dynamic: the static (quiescent) state is the direct-current operating state with no signal applied, in which the voltages and currents at the electrodes are called the quiescent operating values, while the dynamic state is the operating state with an alternating-current signal applied, in which the electrode currents are called dynamic operating currents. If the direct-current circuit of the transistor does not work normally, its alternating-current circuit cannot work normally either. At the static operating point the circuit is in its direct-current operating state, and the values of current and voltage can be represented by a fixed point on the transistor's input-output curve, also called the Q point. Determining the static voltage and current values of an amplifying circuit and selecting a suitable static operating point prevents nonlinear distortion and guarantees the amplifying effect. In the original ReLU function, inputs greater than 0 are output unchanged while input features less than 0 produce an output of 0; by analogy with the quiescent operating point of the transistor, the zero point of the original ReLU function is called the static operating point of the activation function.
Preferably, according to the present invention, the improved VDSR model comprises 20 convolutional layers, 19 static operating point ReLU function layers, and a residual connection; the improved VDSR model takes a bicubically interpolated low-resolution image as input, learns the lost high-frequency components of the low-resolution image through the 20 convolutional layers, and at the end of the improved VDSR model adds the learned high-frequency components to the input pixel-wise to obtain the final high-resolution image.
The improved VDSR model is identical in network structure to the original VDSR model except that the original ReLU function is replaced with the static operating point ReLU function proposed by the present invention.
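Under these stated assumptions (20 convolutional layers, 19 static operating point ReLU layers, a global residual connection), the improved VDSR model might be sketched as follows, reusing the QReLU module above; the 64-channel width and single-channel luminance input follow the original VDSR paper rather than this patent:
```python
import torch
import torch.nn as nn

class ImprovedVDSR(nn.Module):
    def __init__(self, channels: int = 64):
        super().__init__()
        layers = [nn.Conv2d(1, channels, 3, padding=1), QReLU()]
        for _ in range(18):                       # 18 middle conv + QReLU pairs
            layers += [nn.Conv2d(channels, channels, 3, padding=1), QReLU()]
        layers.append(nn.Conv2d(channels, 1, 3, padding=1))  # 20th conv layer
        self.body = nn.Sequential(*layers)        # 20 convs, 19 QReLU layers

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x is the bicubically interpolated low-resolution image; the network
        # learns the missing high-frequency residual and adds it back pixel-wise
        return x + self.body(x)
```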
Preferably, according to the present invention, the improved VDSR model is trained as follows:
(1) data pre-processing
Selecting a plurality of pictures in the public data set BSD300 as a training set, and selecting a plurality of pictures as a test set;
Performing data augmentation on the training data in the training set;
(2) building an improved VDSR model
The improved VDSR model adopts the network structure of the original VDSR model, and replaces the original ReLU function with the static working point ReLU function;
(3) training procedure
And (3) inputting the training data in the training set processed in the step (1) into the improved VDSR model built in the step (2) for training to obtain the trained VDSR model.
Further preferably, in step (3), the initial learning rate is set to 0.0001, the Adam optimizer is selected, the batch size is set to 16, and 200 epochs are trained, with the model tested after each epoch.
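A hedged sketch of this training procedure with the stated settings (Adam, initial learning rate 0.0001, batch size 16, 200 epochs, a test after every epoch); `train_set`, `evaluate`, and the step-decay schedule are illustrative assumptions, since the exact learning-rate decay is not specified here:
```python
import torch
from torch.utils.data import DataLoader

model = ImprovedVDSR()                       # model sketched earlier
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
# a learning-rate decay strategy is adopted; halving every 50 epochs is assumed
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=50, gamma=0.5)
criterion = torch.nn.MSELoss()
loader = DataLoader(train_set, batch_size=16, shuffle=True)  # train_set assumed

for epoch in range(200):
    for lr_img, hr_img in loader:            # bicubic input and ground truth
        optimizer.zero_grad()
        loss = criterion(model(lr_img), hr_img)
        loss.backward()
        optimizer.step()
    scheduler.step()
    evaluate(model)                          # PSNR test after each epoch (assumed)
```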
Further preferably, data augmentation means: applying random horizontal flipping, random brightness adjustment, and random cropping to the training data to expand the training set, thereby avoiding overfitting and improving the generalization capability of the network.
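One way to realize this augmentation is with torchvision transforms, as sketched below; the brightness range and the 41×41 crop size are assumptions (the latter matching common VDSR patch sizes), not values given in the patent:
```python
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),  # random horizontal flipping
    transforms.ColorJitter(brightness=0.2),  # random brightness adjustment
    transforms.RandomCrop(41),               # random cropping into patches
    transforms.ToTensor(),
])
```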
Preferably, according to the present invention, the output high-resolution image uses the Peak Signal-to-Noise Ratio (PSNR) as the performance evaluation index. PSNR is defined via the Mean Square Error (MSE): for two single-channel images I and K of size m × n, the mean square error is defined as shown in formula (II):
$$\mathrm{MSE} = \frac{1}{mn}\sum_{i=0}^{m-1}\sum_{j=0}^{n-1}\big[I(i,j)-K(i,j)\big]^{2} \qquad \text{(II)}$$
In formula (II), i and j index the rows and columns of the image, and I(i,j) and K(i,j) denote the pixel values at row i, column j of image I and image K, respectively;
the peak signal-to-noise ratio PSNR is defined as shown in formula (III):
$$\mathrm{PSNR} = 10\cdot\log_{10}\!\left(\frac{MAX_I^{2}}{\mathrm{MSE}}\right) \qquad \text{(III)}$$
In formula (III), MAX_I represents the maximum value of a pixel point in the image; if the image is 8-bit, the maximum value is 255.
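Formulas (II) and (III) translate directly into the following NumPy sketch for two single-channel 8-bit images:
```python
import numpy as np

def psnr(I: np.ndarray, K: np.ndarray, max_i: float = 255.0) -> float:
    # formula (II): mean square error over an m x n single-channel image pair
    mse = np.mean((I.astype(np.float64) - K.astype(np.float64)) ** 2)
    # formula (III): PSNR in decibels, MAX_I = 255 for 8-bit images
    return 10.0 * np.log10(max_i ** 2 / mse)
```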
A computer device comprising a memory storing a computer program and a processor implementing the steps of an image super-resolution method based on an improved VDSR model applying a static operating point ReLU function when executing the computer program.
A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of an image super-resolution method based on an improved VDSR model applying a static operating point ReLU function.
The invention has the beneficial effects that:
1. The invention proposes a static operating point ReLU function that adaptively learns its static operating point during training, improving the expressive power of the convolutional neural network model.
2. Compared with a VDSR model applying the original ReLU function, the VDSR model applying the static operating point ReLU function obtains a higher PSNR value, effectively improving performance on the super-resolution task and verifying the effectiveness of the invention.
3. The invention performs data augmentation during network training and adopts a learning-rate decay strategy, effectively avoiding overfitting during training and improving the generalization capability of the network.
Drawings
Fig. 1 is a schematic diagram of a conventional Sigmoid function image;
FIG. 2 is a diagram of an image of a conventional Tanh function;
FIG. 3 is a diagram of an image of an original ReLU function;
FIG. 4 is a diagram illustrating an image of a ReLU function for adaptively learning a static operating point according to the present invention;
fig. 5 is a schematic diagram of the network architecture of the improved VDSR model of the present invention;
FIG. 6 is a graph showing the result of comparing experimental effects of 2-fold super-resolution tasks;
FIG. 7 is a graph showing the results of comparing the experimental effects of 3-fold super-resolution tasks.
Detailed Description
The invention is further described below with reference to the figures and examples of the description, without being limited thereto.
Example 1
An image super-resolution method based on an improved VDSR model applying a static working point ReLU function is characterized in that: inputting an image to be processed into a trained improved VDSR model, and outputting to obtain a high-resolution image of the image;
As shown in fig. 4, the static operating point ReLU function in the improved VDSR model, i.e. the ReLU function f(x) with an adaptively learned static operating point, is shown in formula (I), where the abscissa and ordinate represent the input and output of the static operating point ReLU function, respectively:
$$f(x) = \begin{cases} x - Q, & x > Q \\ 0, & x \le Q \end{cases} \qquad \text{(I)}$$
In formula (I), Q denotes the static operating point, which corresponds to the zero point of the original ReLU function and is obtained through adaptive learning, and x denotes the input feature of a static operating point ReLU function layer in the improved VDSR model.
Q is obtained through adaptive learning, which means: Q is set as a learnable parameter initialized to 0, and x denotes the input feature of a static operating point ReLU function layer in the improved VDSR model. During training of the improved VDSR model, input-feature values smaller than Q are set to 0 and values larger than Q are set to x - Q; Q is continuously updated during back-propagation, so that after training on a large amount of data the static operating point ReLU layers at different positions in the VDSR model hold different values of Q. Compared with the original ReLU function (the ReLU function described in the background art), the proposed static operating point ReLU function has better nonlinear fitting capability.
The concept of a static operating point comes from the transistor amplifier principle. A transistor has two states, static and dynamic: the static (quiescent) state is the direct-current operating state with no signal applied, in which the voltages and currents at the electrodes are called the quiescent operating values, while the dynamic state is the operating state with an alternating-current signal applied, in which the electrode currents are called dynamic operating currents. If the direct-current circuit of the transistor does not work normally, its alternating-current circuit cannot work normally either. At the static operating point the circuit is in its direct-current operating state, and the values of current and voltage can be represented by a fixed point on the transistor's input-output curve, also called the Q point. Determining the static voltage and current values of an amplifying circuit and selecting a suitable static operating point prevents nonlinear distortion and guarantees the amplifying effect. In the original ReLU function, inputs greater than 0 are output unchanged while input features less than 0 produce an output of 0; by analogy with the quiescent operating point of the transistor, the zero point of the original ReLU function is called the static operating point of the activation function.
As shown in fig. 5, the improved VDSR model includes 20 convolutional layers, 19 static operating point ReLU function layers, and a residual connection; the improved VDSR model takes a bicubically interpolated low-resolution image as input, learns the lost high-frequency components of the low-resolution image through the 20 convolutional layers, and at the end of the improved VDSR model adds the learned high-frequency components to the input pixel-wise to obtain the final high-resolution image.
The improved VDSR model is identical in network structure to the original VDSR model except that the original ReLU function is replaced with the static operating point ReLU function proposed by the present invention.
Example 2
An image super-resolution method based on an improved VDSR model applying a static operating point ReLU function according to embodiment 1, differing in that the improved VDSR model is trained as follows:
(1) data pre-processing
200 pictures from the public data set BSD300 are selected as the training set, and 100 pictures are selected as the test set;
Performing data augmentation on the training data in the training set;
(2) building an improved VDSR model
The improved VDSR model adopts the network structure of the original VDSR model, and replaces the original ReLU function with a ReLU function of a static operating point;
(3) training procedure
And (3) inputting the training data in the training set processed in the step (1) into the improved VDSR model built in the step (2) for training to obtain the trained VDSR model.
In step (3), the initial learning rate is set to 0.0001, the Adam optimizer is selected, the batch size is set to 16, 200 epochs are trained, and the model is tested after each epoch.
The data augmentation means: applying random horizontal flipping, random brightness adjustment, and random cropping to the training data to expand the training set, thereby avoiding overfitting and improving the generalization capability of the network.
The output high-resolution image uses the Peak Signal-to-Noise Ratio (PSNR) as the performance evaluation index. PSNR is defined via the Mean Square Error (MSE): for two single-channel images I and K of size m × n, the mean square error is defined as shown in formula (II):
$$\mathrm{MSE} = \frac{1}{mn}\sum_{i=0}^{m-1}\sum_{j=0}^{n-1}\big[I(i,j)-K(i,j)\big]^{2} \qquad \text{(II)}$$
In formula (II), i and j index the rows and columns of the image, and I(i,j) and K(i,j) denote the pixel values at row i, column j of image I and image K, respectively;
the peak signal-to-noise ratio PSNR is defined as shown in formula (III):
$$\mathrm{PSNR} = 10\cdot\log_{10}\!\left(\frac{MAX_I^{2}}{\mathrm{MSE}}\right) \qquad \text{(III)}$$
in formula (III), MAXIThe maximum value of a pixel point in the image is represented. If the image is 8-bit, the maximum value is 255.
In this embodiment, 200 pictures from the data set BSD300 are selected as the training data set and 100 pictures as the testing data set, and VDSR models applying the original ReLU function and the proposed static operating point ReLU function are trained respectively.
FIG. 6 shows the experimental comparison for the 2-fold super-resolution task, and FIG. 7 for the 3-fold task. In figs. 6 and 7, the abscissa and ordinate represent PSNR (peak signal-to-noise ratio) and epoch (1 epoch means one pass over all samples in the training set), respectively; the triangle-marked data are the training results of the improved VDSR model applying the ReLU function of the invention, and the circle-marked data are the training results of the VDSR model applying the original ReLU function;
as can be seen from fig. 6 and 7, in the 2-fold and 3-fold super-resolution tasks, the performance of the VDSR model is improved after the static operating point ReLU function provided by the present invention is applied.
Example 3
A computer device comprising a memory storing a computer program and a processor implementing the steps of embodiment 1 or 2 of the image super-resolution method based on an improved VDSR model applying a static operating point ReLU function when executing the computer program.
Example 4
A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, carries out the steps of embodiment 1 or 2 of the image super-resolution method based on an improved VDSR model applying a static operating point ReLU function.

Claims (4)

1. An image super-resolution method based on an improved VDSR model applying a static working point ReLU function is characterized in that: inputting an image to be processed into a trained improved VDSR model, and outputting to obtain a high-resolution image of the image;
the static operating point ReLU function in the improved VDSR model, i.e. the ReLU function f(x) with an adaptively learned static operating point, is shown in formula (I):
$$f(x) = \begin{cases} x - Q, & x > Q \\ 0, & x \le Q \end{cases} \qquad \text{(I)}$$
in formula (I), Q denotes the static operating point, which corresponds to the zero point of the original ReLU function and is obtained through adaptive learning, and x denotes the input feature of a static operating point ReLU function layer in the improved VDSR model;
the improved VDSR model comprises 20 convolutional layers, 19 static operating point ReLU function layers, and a residual connection; the improved VDSR model takes a bicubically interpolated low-resolution image as input, learns the lost high-frequency components of the low-resolution image through the 20 convolutional layers, and at the end of the improved VDSR model adds the learned high-frequency components to the input pixel-wise to obtain the final high-resolution image;
the training process for the improved VDSR model is as follows:
(1) data pre-processing
Selecting a plurality of pictures in the public data set BSD300 as a training set, and selecting a plurality of pictures as a test set;
performing data augmentation on training data in the training set;
(2) building an improved VDSR model
The improved VDSR model adopts the network structure of the original VDSR model, and replaces the original ReLU function with the static working point ReLU function;
(3) training procedure
Inputting the training data in the training set processed in the step (1) into the improved VDSR model built in the step (2) for training to obtain a trained VDSR model;
in step (3), the initial learning rate is set to 0.0001, the Adam optimizer is selected, the batch size is set to 16, 200 epochs are trained, and the model is tested after each epoch;
the output high-resolution image uses the peak signal-to-noise ratio PSNR as the performance evaluation index, where PSNR is defined via the mean square error MSE; for two single-channel images I and K of size m × n, the mean square error is defined as shown in formula (II):
$$\mathrm{MSE} = \frac{1}{mn}\sum_{i=0}^{m-1}\sum_{j=0}^{n-1}\big[I(i,j)-K(i,j)\big]^{2} \qquad \text{(II)}$$
in formula (II), i and j index the rows and columns of the image, and I(i,j) and K(i,j) denote the pixel values at row i, column j of image I and image K, respectively;
the peak signal-to-noise ratio PSNR is defined as shown in formula (III):
$$\mathrm{PSNR} = 10\cdot\log_{10}\!\left(\frac{MAX_I^{2}}{\mathrm{MSE}}\right) \qquad \text{(III)}$$
in formula (III), MAX_I represents the maximum value of the pixel points in the image.
2. The image super-resolution method based on an improved VDSR model applying a static operating point ReLU function according to claim 1, wherein the data augmentation is: applying random horizontal flipping, random brightness adjustment, and random cropping to the training data to expand the training set.
3. A computer device comprising a memory and a processor, characterized in that the memory stores a computer program which, when executed by the processor, implements the steps of the method for image super-resolution based on an improved VDSR model applying a static operating point ReLU function of claim 1 or 2.
4. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method for image super-resolution based on an improved VDSR model applying a static operating point ReLU function of claim 1 or 2.
CN202011576889.XA 2020-12-28 2020-12-28 Image super-resolution method, device and medium based on static working point Active CN112669210B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011576889.XA CN112669210B (en) 2020-12-28 2020-12-28 Image super-resolution method, device and medium based on static working point

Publications (2)

Publication Number Publication Date
CN112669210A CN112669210A (en) 2021-04-16
CN112669210B true CN112669210B (en) 2022-06-03

Family

ID=75410517

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011576889.XA Active CN112669210B (en) 2020-12-28 2020-12-28 Image super-resolution method, device and medium based on static working point

Country Status (1)

Country Link
CN (1) CN112669210B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109102462A (en) * 2018-08-01 2018-12-28 中国计量大学 A kind of video super-resolution method for reconstructing based on deep learning
CN110599401A (en) * 2019-08-19 2019-12-20 中国科学院电子学研究所 Remote sensing image super-resolution reconstruction method, processing device and readable storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110383296A (en) * 2017-04-07 2019-10-25 英特尔公司 For providing the system and method for the auto-programming synthesis of depth stacking
US11715002B2 (en) * 2018-05-10 2023-08-01 Microsoft Technology Licensing, Llc Efficient data encoding for deep neural network training

Also Published As

Publication number Publication date
CN112669210A (en) 2021-04-16

Similar Documents

Publication Publication Date Title
JP6504590B2 (en) System and computer implemented method for semantic segmentation of images and non-transitory computer readable medium
Gai et al. New image denoising algorithm via improved deep convolutional neural network with perceptive loss
CN112052886A (en) Human body action attitude intelligent estimation method and device based on convolutional neural network
CN109816098B (en) Processing method and evaluation method of neural network, and data analysis method and device
Zuo et al. Convolutional neural networks for image denoising and restoration
CN109741364B (en) Target tracking method and device
CN114861838B (en) Intelligent classification method for pulsatile neural brains based on neuron complex dynamics
Liu et al. Learning hadamard-product-propagation for image dehazing and beyond
CN112581397B (en) Degraded image restoration method, system, medium and equipment based on image priori information
Li et al. A fully trainable network with RNN-based pooling
CN111861886A (en) Image super-resolution reconstruction method based on multi-scale feedback network
CN111695590A (en) Deep neural network feature visualization method for constraint optimization class activation mapping
CN111986085A (en) Image super-resolution method based on depth feedback attention network system
Henderson et al. Spike event based learning in neural networks
JP6942203B2 (en) Data processing system and data processing method
Chartier et al. A sequential dynamic heteroassociative memory for multistep pattern recognition and one-to-many association
CN112669210B (en) Image super-resolution method, device and medium based on static working point
CN115860113B (en) Training method and related device for self-countermeasure neural network model
CN113554104B (en) Image classification method based on deep learning model
CN114092763A (en) Method for constructing impulse neural network model
WO2022208632A1 (en) Inference device, inference method, learning device, learning method, and program
CN112949719B (en) Well testing interpretation proxy model generation method based on GAN
CN117725846B (en) Deep learning-based low cycle fatigue life prediction method
Su et al. Particle Swarm Optimization for Gray-Scale Image Noise Cancellation
Saari The effect of two hyperparameters in the learning performance of the Convolutional Neural Networks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant