CN111192200A - Image super-resolution reconstruction method based on fusion attention mechanism residual error network - Google Patents
- Publication number: CN111192200A (application CN202010002303.2A)
- Authority
- CN
- China
- Prior art keywords: image, layer, resolution, training, feature
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
- G06T3/4053 — Scaling of whole images or parts thereof, e.g. expanding or contracting, based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
- G06T3/4046 — Scaling of whole images or parts thereof, e.g. expanding or contracting, using neural networks
Abstract
The invention provides an image super-resolution reconstruction method based on a residual network with a fused attention mechanism, which addresses the poor reconstruction quality and unsatisfactory visual effect of prior-art methods. The method comprises the following steps: S1: data acquisition and preprocessing, obtaining a training image data set and an image data set to be reconstructed; S2: building a network model whose structure comprises a feature extraction layer, a feature learning layer and an image reconstruction layer; S3: initializing, training and saving model parameters to obtain the optimal model structure and parameter set; S4: image super-resolution reconstruction, inputting an image to be reconstructed and outputting a high-resolution image at the corresponding magnification scale. The proposed super-resolution network combines global and local residual structures and integrates channel and spatial attention mechanisms, so it focuses more on the high-frequency information of the image, retains the important features of the image to the greatest extent, reduces repeated redundant features, and greatly improves the detail and sharpness of the reconstructed image.
Description
Technical Field
The invention relates to an image super-resolution reconstruction method, in particular to an image super-resolution reconstruction method based on a fusion attention mechanism residual error network, and belongs to the technical field of image processing.
Background
Image resolution, commonly expressed as "horizontal pixels × vertical pixels", represents the amount of information stored in an image and is an important indicator of image quality. Generally, the higher the resolution of an image, the more detail it contains, the more information it provides, and the better its quality; images with higher resolution therefore have important application value and research prospects in various computer vision tasks. But owing to cost, the acquisition, storage and transmission of images in practice are inevitably subject to limited conditions or other noise interference, so image quality is degraded to varying degrees.
To increase the resolution of an image and restore its high-frequency details, image quality can be improved through hardware or software. The hardware route places higher demands on the imaging device, so software-based image super-resolution reconstruction can be adopted instead: it breaks through the limitations of the imaging device and environment, increases flexibility, and recovers a visually pleasing high-resolution image from a low-resolution one. Current image super-resolution reconstruction methods fall into three main categories: interpolation-based, reconstruction-based and learning-based methods.
Interpolation-based methods: interpolation is a simple image processing technique that fills in values from the pixels around a target according to some formula. Common interpolation methods include nearest-neighbor, bilinear and bicubic interpolation. Interpolation predicts the high-frequency part of the image from local information; it is simple and feasible, but the local information of an image is limited and no further prior information can be obtained, so the reconstructed image is prone to blur and jagged artifacts, and the result is poor.
Reconstruction-based methods: these methods model the degradation process of the image to constrain the consistency between the high- and low-resolution image variables, and then estimate the high-resolution image. Common reconstruction methods are iterative back-projection, projection onto convex sets, and maximum a posteriori estimation. Such methods obtain a stable solution through regularization constraints, but imposing prior information destroys the original structural features of the image and distorts the reconstruction. Their computational complexity is also high, so they cannot meet real-time processing requirements.
Learning-based methods: these reconstruct a high-resolution image by training on a set of paired high- and low-resolution images and learning the mapping between them. Early machine-learning approaches had strong reconstruction capability because model parameters could be learned adaptively through optimization; dictionary-based learning with sparse coding is one example, but it is limited by the linear representation capacity of dictionary learning, so its reconstruction quality is limited. In recent years, with the application of convolutional neural networks (CNN) to computer vision, representative super-resolution network models such as SRCNN have emerged, but these methods still have problems such as insufficient attention to high-frequency details, slow convergence, and overly long training times.
A search of prior art shows: Chinese patent application 201811439902X discloses a remote-sensing image super-resolution reconstruction method based on a convolutional neural network with channel attention. Its network model consists of a feature extraction layer, a feature learning layer and an image reconstruction layer, but the feature extraction layer uses only one convolution to extract shallow features, a channel attention mechanism is introduced only inside the residual modules of the feature learning layer, and although the overall connection structure borrows from recursive networks it does not share parameters across modules, so the effect of recursive training is not achieved; moreover, the up-sampling of the image reconstruction layer uses transposed convolution (deconvolution), whose extra zero-padding operation can amplify noise interference. Chinese patent application 2019101492716 discloses an image super-resolution reconstruction method based on a convolutional neural network whose model likewise consists of a feature extraction layer, a feature learning layer and an image reconstruction layer; an attention mechanism is introduced only inside the residual modules of the feature learning layer, the image is restored to the target resolution by an up-sampling operation in the decoding stage of a U-shaped module, which greatly increases the number of parameters in the feature learning stage; the overall connection structure borrows DenseNet-style dense connections, leading to excessive training parameters and very high computation cost; and the image reconstruction layer uses only one convolution to restore the high-resolution image.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provide an image super-resolution reconstruction method based on an improved convolutional neural network that combines local and global skip connections, simultaneously integrates channel and spatial attention mechanisms, and performs up-sampling and image reconstruction at the back end of the network.
The invention provides an image super-resolution reconstruction method based on a fusion attention mechanism residual error network, which comprises the following steps:
S1: data acquisition and preprocessing. Acquire an image data set for training and an image data set to be reconstructed, and perform image preprocessing according to the reconstruction target and requirements.
S2: building the network model. Construct a convolutional neural network model for image super-resolution, mainly comprising a feature extraction layer, a feature learning layer and an image reconstruction layer. The feature extraction layer performs shallow feature extraction on the input low-resolution image to obtain its feature map; the feature learning layer performs multiple rounds of residual feature learning on that feature map to obtain the feature map of the high-resolution image; the image reconstruction layer reconstructs the high-resolution feature map and restores it into a high-resolution image at the corresponding magnification.
S3: initializing, training and saving the model parameters. Initialize the model parameters before training; during training, feed the training data into the convolutional neural network for parameter learning, updating the parameter values with gradient descent to minimize the loss function until the network converges; after training, save the optimal model structure and parameter set.
S4: image super-resolution reconstruction. Input the image to be reconstructed into the optimal network model and load the optimal parameter set to output the high-resolution image at the corresponding magnification scale.
The invention solves the poor image reconstruction quality and unsatisfactory visual effect of the prior art. The proposed super-resolution network combines global and local residual structures and integrates channel and spatial attention mechanisms, so it focuses more on the high-frequency information of the image, retains the important features of the image to the greatest extent, reduces repeated redundant features, and greatly improves the detail and sharpness of the reconstructed image.
The further optimized technical scheme of the invention is as follows:
further, the specific method of data acquisition and preprocessing in step S1 is as follows:
S11: adopt the standard public data set DIV2K as the training data set, and Set5, Set14, BSD100, Urban100, etc. as reconstruction data sets;
S12: down-sample the high-resolution images of the data set with bicubic interpolation to generate corresponding low-resolution images, obtaining training data covering multiple magnification scales;
S13: rotate and flip the images of the training data set as data enhancement, expanding the number of samples in the training data set;
S14: split and crop the images in the training data set, saving training time and computation cost;
S15: perform pixel-value normalization on the input images so that the distribution of pixel values is more uniform.
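A minimal NumPy sketch of two of the preprocessing steps above (S13's rotations and flips, and S15's mean-subtraction normalization). The bicubic down-sampling of S12 is usually delegated to an image library and is omitted here; the toy image size is an illustrative assumption.

```python
import numpy as np

def augment(img):
    """S13: rotations by 90/180/270 degrees plus horizontal/vertical flips."""
    return [img,
            np.rot90(img, 1), np.rot90(img, 2), np.rot90(img, 3),
            img[:, ::-1], img[::-1, :]]

def normalize(img, mean_rgb):
    """S15: subtract the dataset-wide RGB mean from every pixel."""
    return img.astype(np.float64) - mean_rgb

hr = np.random.rand(64, 64, 3)                 # stand-in for one training crop
samples = augment(hr)                          # 6x the number of samples
mean_rgb = hr.reshape(-1, 3).mean(axis=0)      # per-channel mean (here of one image)
norm = normalize(hr, mean_rgb)                 # zero-mean pixel distribution
```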
Further, the convolutional neural network model in step S2 sequentially includes:
S21: a feature extraction layer composed of two convolution layers, used to extract shallow features of the low-resolution image;
S22: a feature learning layer composed of m (m a positive integer) cascaded residual blocks that fuse channel and spatial attention mechanisms (CSAFR blocks); a local skip connection structure is used inside each residual block, and a global skip connection structure spans the whole feature learning layer;
S23: an image reconstruction layer composed of an up-sampling module and a reconstruction convolution module, used to restore the high-level feature map to a high-resolution image.
Further, the feature learning layer in step S22 specifically includes:
S221: each residual block is composed of a feature preprocessing unit, a channel attention unit, a spatial attention unit and a local skip connection structure;
S222: the feature learning layer of the convolutional neural network model is formed by cascading multiple residual blocks in a chain;
S223: in the final stage of feature learning, a convolution layer performs high-dimensional feature mapping, and a global skip connection links the bottom input and top output of the whole feature learning layer for global residual learning.
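The skip-connection topology of S221–S223 can be sketched in a few lines; here a matrix multiply plus tanh stands in for the convolution and attention body of each block (an assumption for illustration only — the real blocks are the CSAFR modules described below):

```python
import numpy as np

def residual_block(x, w):
    """One residual block skeleton: a learned residual added back
    through the local skip connection (attention body replaced by a stand-in)."""
    return x + np.tanh(x @ w)                 # local skip: input + residual

def feature_learning_layer(x, weights, w_map):
    """m cascaded residual blocks, a mapping step (matmul stand-in for the
    final convolution), then the global skip from layer input to output."""
    f = x
    for w in weights:                         # chain of m residual blocks
        f = residual_block(f, w)
    f = f @ w_map                             # high-dimensional feature mapping
    return x + f                              # global skip: global residual learning

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 16))              # toy feature matrix, 16 channels
weights = [rng.standard_normal((16, 16)) * 0.01 for _ in range(16)]  # m = 16
out = feature_learning_layer(x, weights, np.eye(16))
```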
Further, the residual block in step S221 specifically includes:
S2211: the feature preprocessing unit is composed of a convolution layer, a PReLU activation layer and a second convolution layer connected in sequence, as commonly used in residual models;
S2212: the channel attention unit is composed of a global average pooling layer, an expansion convolution layer, a PReLU activation layer, a contraction convolution layer and a Sigmoid activation layer connected in sequence; it multiplies the obtained channel attention weights with the input features channel by channel and outputs new features fused with the channel attention mechanism;
S2213: the spatial attention unit is composed of an expansion convolution layer, a PReLU activation layer, a contraction convolution layer and a Sigmoid activation layer connected in sequence; it multiplies the obtained spatial attention mask with the input features pixel by pixel and outputs new features fused with the spatial attention mechanism;
S2214: a local skip connection structure directly connects the input of the feature preprocessing unit with the output of the spatial attention unit, learning residual features fused with the attention mechanisms.
Further, the image reconstruction layer in step S23 specifically includes:
S231: the first layer uses a convolution operation to expand the number of channels of the input feature map;
S232: the second layer uses a sub-pixel shuffle operation to rearrange the channels and pixels of the input feature map, completing the up-sampling;
S233: the third layer uses a convolution operation to recover the real high-resolution image.
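The sub-pixel shuffle of S232 can be written directly as an array rearrangement; this NumPy sketch assumes a channel-last layout and introduces no learned or zero-padded parameters:

```python
import numpy as np

def pixel_shuffle(x, r):
    """Sub-pixel shuffle: rearrange an (H, W, C*r^2) feature map into an
    (H*r, W*r, C) image by interleaving the r x r sub-pixels of each channel group."""
    h, w, c = x.shape
    assert c % (r * r) == 0
    c_out = c // (r * r)
    x = x.reshape(h, w, r, r, c_out)
    x = x.transpose(0, 2, 1, 3, 4)            # (h, r, w, r, c_out)
    return x.reshape(h * r, w * r, c_out)

feat = np.arange(2 * 2 * 12, dtype=np.float64).reshape(2, 2, 12)
up = pixel_shuffle(feat, 2)                   # x2 upsampling: (2, 2, 12) -> (4, 4, 3)
```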
The feature extraction layer of the invention uses a two-layer convolution cascade to extract more and richer low-level features, and compresses the number of channels with a 1 × 1 convolution, forcing the model to discard irrelevant features during training and eliminating inter-channel information redundancy. Channel and spatial attention mechanisms are fused simultaneously in the residual blocks of the feature learning layer, so the model attends not only to which channels' features are important but also to which regions and positions, raising the weight of high-frequency detail features. The overall structure of the feature learning layer cascades 16 residual blocks, with local skip connections inside the modules and a global skip connection across the whole layer, learning the residual between high-resolution and low-resolution images and alleviating the vanishing-gradient problem during training. The up-sampling operation of the image reconstruction layer uses a sub-pixel shuffle, which introduces no artificial factors or extra parameters, effectively reducing noise interference and computation.
Further, the specific method of step S3 is as follows:
S31: before training starts, initialize the model parameters with the Xavier method;
S32: during training, input the low-resolution and high-resolution images of the training set into the constructed neural network and update the model parameters with a gradient descent algorithm until the network converges;
S33: after training finishes, save the optimal model structure and parameter set.
Further, the model training process in step S32 specifically includes the following steps:
S321: adopt the L2 loss as the loss function for updating the optimization parameters; the goal of model training is to minimize the L2 loss;
S322: optimize training with the Adam optimizer, adjust the learning rate with a step-decay strategy, and improve training speed with mini-batch training;
S323: read the training data through a file queue, reducing the dependence on memory.
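A toy illustration of S321–S322: Adam updates minimizing an L2 loss over mini-batches, with a step-decayed learning rate, on a small linear model standing in for the network (all dimensions and hyperparameters are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.standard_normal((256, 4))             # toy training inputs
w_true = np.array([1.0, -2.0, 0.5, 3.0])
y = X @ w_true                                # toy training labels

w = np.zeros(4)                               # parameters to learn
m, v = np.zeros(4), np.zeros(4)               # Adam first/second moment estimates
beta1, beta2, eps, lr = 0.9, 0.999, 1e-8, 0.05
losses = []
for step in range(1, 401):
    idx = rng.choice(256, size=32, replace=False)   # mini-batch (S322)
    xb, yb = X[idx], y[idx]
    pred = xb @ w
    losses.append(float(np.mean((pred - yb) ** 2))) # L2 loss (S321)
    grad = 2 * xb.T @ (pred - yb) / len(yb)
    m = beta1 * m + (1 - beta1) * grad              # Adam moment updates
    v = beta2 * v + (1 - beta2) * grad ** 2
    m_hat = m / (1 - beta1 ** step)                 # bias correction
    v_hat = v / (1 - beta2 ** step)
    w -= lr * m_hat / (np.sqrt(v_hat) + eps)
    if step % 100 == 0:
        lr *= 0.5                                   # step decay of the learning rate
```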
Further, the specific method of step S4 is as follows:
S41: load the optimal parameter set, input the image to be reconstructed into the optimal network model, and output the reconstructed high-resolution image through the forward prediction of the neural network;
S42: compute the corresponding objective evaluation indices, peak signal-to-noise ratio (PSNR) and structural similarity (SSIM), to verify the reconstruction quality of the image.
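The evaluation indices of S42 can be computed as follows. Note the SSIM shown here is a simplified single-window (global) variant for illustration; the standard SSIM averages the same statistic over local windows.

```python
import numpy as np

def psnr(ref, test, max_val=255.0):
    """Peak signal-to-noise ratio in dB between a reference and a test image."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float(10 * np.log10(max_val ** 2 / mse))

def ssim_global(ref, test, max_val=255.0):
    """Simplified single-window SSIM (standard SSIM averages local windows)."""
    c1, c2 = (0.01 * max_val) ** 2, (0.03 * max_val) ** 2
    x, y = ref.astype(np.float64), test.astype(np.float64)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return float(((2 * mx * my + c1) * (2 * cov + c2)) /
                 ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))
```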
Compared with the prior art, the invention adopting the technical scheme has the following technical effects:
(1) The basic network structure is a residual structure formed by cascading multiple CSAFR blocks. The CSAFR blocks recalibrate multi-level weights to capture the more important information, and multiple local skip connections help important feature information propagate across different modules and layers; at the same time, a global skip connection is introduced across the whole feature learning layer to learn the residual between high-resolution and low-resolution images, effectively alleviating gradient vanishing and network degradation;
(2) Connecting the channel attention unit and the spatial attention unit in series inside the residual structure block makes the neural network focus more on the regions and channels carrying more high-frequency information, amplifying the weights of features rich in high-frequency information and reducing the weights of redundant low-frequency information, which accelerates network convergence and improves network performance;
(3) Feature extraction and learning are first carried out in a low-dimensional space, and only at the end of the network is a sub-pixel convolution used for up-sampling, aggregating the learned features and reconstructing the high-resolution image in the high-dimensional space. Sub-pixel shuffling introduces no new parameters, so memory use and computation are greatly reduced, processing is faster, and the results are better.
Drawings
FIG. 1 is a flow chart of the present invention.
Fig. 2 is a network structure diagram of the image super-resolution reconstruction method of the present invention.
Fig. 3 is a network structure diagram of the residual block in the present invention.
Detailed Description
The technical scheme of the invention is further explained in detail with reference to the accompanying drawings. The present embodiment is implemented on the premise of the technical solution of the invention and gives a detailed implementation and a specific operation process, but the protection scope of the invention is not limited to the following embodiments.
The embodiment provides an image super-resolution reconstruction method based on a fusion attention mechanism residual error network, as shown in fig. 1, including the following steps:
S1: data acquisition and preprocessing.
S11: acquire an image data set for training and an image data set to be reconstructed. In this embodiment the standard public data set DIV2K is adopted as the training data set, comprising 800 training images, 100 validation images and 100 test images. The data sets to be reconstructed use other standard public image libraries, including Set5, Set14, BSD100 and Urban100.
S12: the images in public data sets are usually high-resolution images obtained directly from camera shooting, and corresponding low-resolution images do not actually exist. Therefore, to generate corresponding input–label pairs for training the model, the high-resolution images in the data set must be down-sampled with bicubic interpolation, generating corresponding low-resolution images at the different magnification scales (for example × 2, × 3 or × 4) and thereby obtaining training data covering multiple magnifications.
S13: to make full use of the training data and suppress model overfitting, the training-set images are rotated by 90°, 180° and 270° and flipped in the horizontal and vertical directions as data enhancement, expanding the number of training samples.
S14: taking the experimental hardware into account, each image in the training set is split and cropped to reduce computation cost and training time. In this embodiment all low-resolution images are divided into 32 × 32 patches, and the high-resolution images are divided into 64 × 64, 96 × 96 and 128 × 128 patches according to the respective magnifications.
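The patch splitting of S14 reduces to strided slicing; a sketch with the 32 × 32 low-resolution and 64 × 64 (×2) high-resolution sizes from this embodiment (the toy image sizes are illustrative):

```python
import numpy as np

def extract_patches(img, size, stride=None):
    """Cut an image into square patches; stride defaults to non-overlapping."""
    stride = stride or size
    h, w = img.shape[:2]
    return [img[i:i + size, j:j + size]
            for i in range(0, h - size + 1, stride)
            for j in range(0, w - size + 1, stride)]

lr = np.zeros((128, 96, 3))               # toy low-resolution image
hr = np.zeros((256, 192, 3))              # its x2 high-resolution counterpart
lr_patches = extract_patches(lr, 32)      # 4 * 3 = 12 patches of 32 x 32
hr_patches = extract_patches(hr, 64)      # 12 matching 64 x 64 patches
```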
S15: finally, pixel-value normalization is performed on the training data by subtracting the RGB mean of the whole training set from the pixel values of every image, making the distribution of pixel values more uniform, avoiding gradient dispersion and preventing training saturation.
S2: building the network model.
The network model structure is shown in fig. 2 and includes a Feature Extraction (FE) layer, a Feature Learning (FL) layer and an Image Reconstruction (IR) layer. In the feature extraction layer, shallow feature extraction is performed on the input low-resolution image I_LR to obtain the low-resolution feature map F_low; in the feature learning layer, multiple rounds of residual feature learning are performed on F_low to obtain the high-resolution feature map F_high; in the image reconstruction layer, F_high is reconstructed and restored into the corresponding high-resolution image I_SR. This process can be expressed as:

I_SR = S_IR(S_FL(S_FE(I_LR)))

where S_FE denotes the feature extraction layer operation, S_FL the feature learning layer operation, and S_IR the image reconstruction layer operation.
S21: the feature extraction layer. The feature extraction layer extracts shallow features from the low-resolution image I_LR, converting the data of the input image from the pixel domain to the feature domain. As shown in fig. 2, in this embodiment the feature extraction layer contains two convolution layers: the first has 128 convolution filters of size 3 × 3 and extracts shallow features; the second has 64 convolution filters of size 1 × 1 and compresses the number of channels, keeping the model compact. This ensures the model can extract shallow low-level features while reducing model complexity and shortening training time. The forward propagation of shallow feature extraction can be expressed as:

F_low = C_2^FE(C_1^FE(I_LR))

where F_low denotes the shallow feature map extracted after the two convolutions, and C_1^FE and C_2^FE denote the two convolution operations of the feature extraction layer, with kernel weights W_1^FE, W_2^FE and biases b_1^FE, b_2^FE.
S22: the feature learning layer. The shallow feature map F_low obtained above serves as the input of the feature learning layer. As shown in fig. 2, the feature learning stage first consists of m cascaded residual structure modules; inside each, besides the basic convolution operations, a channel attention unit and a spatial attention unit are introduced in series. This module, which fuses channel and spatial attention mechanisms, is called a Channel and Spatial Attention Fused Residual (CSAFR) block. After m layers of residual learning, a convolution layer performs high-dimensional feature mapping. Each CSAFR module uses a local skip connection internally, directly connecting its input and output; the whole feature learning layer uses a global skip connection, directly connecting the bottom and top layers of the network.
In this embodiment the number of CSAFR modules is set to 16: more cascades would make the model overly complex and training too long, while fewer would let the network capture only local image information and degrade the super-resolution reconstruction effect.
S221: as shown in fig. 3, the CSAFR module is composed of a feature preprocessing unit (FP), a channel attention unit (CA), a spatial attention unit (SA), and a local skip connection.
S2211: specifically, the FP unit follows the common residual network structure, combining a convolution layer, a PReLU activation layer and a second convolution layer. Its feature preprocessing process is:

F_fp = C_2^FP(δ(C_1^FP(F_low)))

where F_low and F_fp are the feature input and output respectively, C_1^FP and C_2^FP denote the two convolution operations of the FP unit, and δ(·) denotes the PReLU activation operation.
S2212, specifically, the CA unit is constituted as follows: the input is an H × W × n feature map F_fp. First, a 1 × 1 × n channel statistic map is obtained through a spatial global average pooling layer; this then passes through a down-sampling layer and an up-sampling layer, both realized with 1 × 1 convolutions. The down-sampling layer reduces the number of feature map channels to 1/r of the original, with a PReLU post-activation; the up-sampling layer restores the number of channels to the original n, with a Sigmoid post-activation. The Sigmoid function adjusts the attention weight coefficient of each channel to between 0 and 1, and finally the weight coefficients are multiplied with the original features to obtain the recalibrated feature F_ca. In effect, the whole process re-weights the features of the different channels, so that information-rich features are selectively emphasized and redundant useless information is suppressed. The feature processing procedure of the CA unit is:

F_ca = σ(W_2^CA ∗ δ(W_1^CA ∗ p(F_fp) + b_1^CA) + b_2^CA) ⊗ F_fp

wherein W_1^CA and W_2^CA represent the convolution kernel weights of the two-layer convolution of the CA unit, b_1^CA and b_2^CA are the corresponding biases, p(·) represents the global average pooling operation, δ(·) represents the PReLU activation operation, σ(·) represents the Sigmoid activation operation, and ⊗ represents the channel-level multiplication of each feature channel with its corresponding channel weight coefficient.
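As a minimal illustration of this channel recalibration (not the patented implementation: on the pooled 1 × 1 × n statistic the 1 × 1 convolutions reduce to matrix products, and all weights below are random placeholders), the CA unit can be sketched in NumPy as:

```python
import numpy as np

def prelu(x, a=0.25):
    # PReLU activation: x if x > 0, else a * x
    return np.where(x > 0, x, a * x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(F_fp, W1, b1, W2, b2):
    """Recalibrate an H x W x n feature map with channel attention.

    W1: (n, n//r) down-sampling 1x1 conv weights (shrink channels).
    W2: (n//r, n) up-sampling 1x1 conv weights (restore channels).
    Returns the recalibrated map and the per-channel weights in (0, 1).
    """
    z = F_fp.mean(axis=(0, 1))        # spatial global average pooling -> (n,)
    s = prelu(z @ W1 + b1)            # shrink to n/r channels, PReLU
    w = sigmoid(s @ W2 + b2)          # restore to n channels, weights in (0, 1)
    return F_fp * w[None, None, :], w # channel-level multiplication
```

The test of whether the unit is wired correctly is that every output channel is the input channel scaled by a single coefficient between 0 and 1.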
S2213, specifically, the SA unit is constituted as follows: the input is an H × W × n feature map F_ca. First, a convolution layer extends the number of channels to n × i, followed by a PReLU post-activation; the output, of dimension H × W × ni, generates an attention feature map for each channel. A second convolution layer then compresses the number of channels to 1, followed by a Sigmoid post-activation; the output, of dimension H × W × 1, combines the outputs of the previous layer into a single attention feature map, normalizing the features to between 0 and 1 to produce a spatial attention mask. Finally, the attention mask is multiplied with the input feature map to obtain the recalibrated feature F_sa. The introduction of a spatial attention mechanism gives the network the ability to discriminate between different regions, to better focus on more important and harder-to-reconstruct regions, and to recover the high-frequency details of the high-resolution image. The SA unit performs spatial recalibration of the input features as:

F_sa = σ(W_2^SA ∗ δ(W_1^SA ∗ F_ca + b_1^SA) + b_2^SA) ⊙ F_ca

wherein W_1^SA and W_2^SA represent the convolution kernel weights of the two-layer convolution of the SA unit, b_1^SA and b_2^SA are the corresponding biases, δ(·) and σ(·) are the same operations as in the CA unit, and ⊙ denotes pixel-by-pixel multiplication of each feature map with the spatial locations of the corresponding attention mask.
S2214, a local skip connection directly connects the input of the FP unit and the output of the SA unit, learning the residual features integrated with the attention mechanism:

F_1 = F_low ⊕ F_sa

wherein F_low and F_sa represent the direct input and output of a CSAFR block, F_1 represents the final output of the CSAFR block after the skip connection, and ⊕ denotes pixel-by-pixel addition between feature maps.
S222, the entire CSAFR chain in the network is constructed by cascading a plurality of CSAFR blocks along a chain-like route, which performs channel-level and spatial-level feature recalibration at multiple depths. The input of this CSAFR chain, i.e. the input of the first CSAFR block, is F_low; its final output is the output F_m of the m-th and last CSAFR block. The multi-layer CSAFR cascade in this chain thus operates as:

F_m = H_m^CSAFR(H_{m-1}^CSAFR(⋯H_1^CSAFR(F_low)⋯))

wherein H_α^CSAFR(·) represents the operation of the α-th CSAFR block.
S223, at the end of the feature learning layer, a convolution layer performs a high-dimensional mapping of the features to obtain the residual-learned feature map F_{m+1}. Finally, a global skip connection joins the bottom input and the top output of the whole feature learning layer, combining the low-frequency and high-frequency features of the image to generate the high-resolution image feature map F_high. The final global skip connection procedure can be expressed as:

F_high = F_low ⊕ F_{m+1} = F_low ⊕ (W^L ∗ F_m + b^L)

wherein F_low represents the shallow feature map extracted after the two-layer convolution in step S21, W^L ∗ (·) + b^L represents the endmost convolution operation of the feature learning layer with convolution kernel weight W^L and bias b^L, and ⊕ denotes pixel-by-pixel addition between feature maps.
S23, an image reconstruction layer restores the high-level feature map to a high-resolution image. As shown in fig. 2, the specific structure is: the first convolution layer expands the number of feature map channels; the second layer, a sub-pixel shuffle layer, periodically reorders the input feature maps to complete the up-sampling operation; the third convolution layer converges and combines the input feature maps into a real image; finally, the RGB mean value of the training set is added back to complete the super-resolution reconstruction of the image.
S231: specifically, the input of the first convolution layer is the high-resolution image feature map F_high learned by the preceding layers, with feature map size H × W and n channels. If the magnification factor required in this embodiment is r, the convolution filters of this layer are set to size 3 × 3 and number r² × n; the feature map F_{high+1} output by this convolution therefore keeps the same size while its number of channels is expanded to r² × n, i.e. the obtained feature map has dimension H × W × r²n.
S232, specifically, the second layer, the sub-pixel shuffle layer, reorders the r² channels corresponding to each pixel of the feature map F_{high+1} produced by the first convolution into an r × r region, which corresponds to an r × r sub-block of the generated up-sampled feature map. After this operation, the feature map F_{high+1} of dimension H × W × r²n is rearranged into the up-sampled feature map F_{high+2} of dimension rH × rW × n.
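The periodic reordering performed by this sub-pixel shuffle (disorder) layer is the standard depth-to-space rearrangement; a channel-last NumPy sketch (illustrative, not code from the patent) is a pure reshape and transpose:

```python
import numpy as np

def pixel_shuffle(F, r):
    """Rearrange an H x W x (r^2 * n) feature map into rH x rW x n.

    The r^2 channels of each pixel become an r x r sub-block of the
    up-sampled map (depth-to-space, channel-last layout).
    """
    H, W, C = F.shape
    n = C // (r * r)
    x = F.reshape(H, W, r, r, n)       # split channels into (r, r, n)
    x = x.transpose(0, 2, 1, 3, 4)     # interleave: H, r, W, r, n
    return x.reshape(H * r, W * r, n)  # merge into rH x rW x n
```

With this layout, channel index (a·r + b)·n + k of input pixel (i, j) lands at output position (i·r + a, j·r + b, k).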
S233, specifically, the third convolution layer performs the image reconstruction operation on the sub-pixel-shuffled up-sampled feature map F_{high+2}, restoring it to a real image at the corresponding magnification. Finally, to account for the operation performed during training set preprocessing, the RGB pixel mean value of the whole training set is added to the output to obtain the reconstructed high-resolution image I_SR. The convolution filters of this layer are set to size 3 × 3 with c output channels, and the finally generated high-resolution image has dimension rH × rW × c. The forward propagation process of the image reconstruction layer can be expressed as:
I_SR = S_IR(F_high) = W_3^IR ∗ θ(W_1^IR ∗ F_high + b_1^IR) + b_3^IR

wherein S_IR represents the overall operation of the image reconstruction layer, W_1^IR ∗ (·) + b_1^IR and W_3^IR ∗ (·) + b_3^IR represent the convolution operations of the first and third layers with convolution kernel weights W_1^IR and W_3^IR and corresponding biases b_1^IR and b_3^IR, and θ(·) represents the periodic ordering of the sub-pixel shuffle layer.
S3, initializing, training and saving the model parameters.
S31, before model training starts, the parameters are initialized with the Xavier method.
S32, during model training, the low-resolution images I_LR obtained by down-sampling in the training set and the corresponding true high-resolution images I_HR are input into the network as the input set and label set respectively, and the model parameters are updated using the gradient descent algorithm and error back-propagation until the training process reaches convergence.
S321, specifically, the L2 loss, i.e. the mean square error L_MSE, is adopted as the loss function for optimizing parameter learning. The mean square error is the mean of the squared Euclidean distances between the pixel values of the original high-resolution image I_HR and the reconstructed high-resolution image I_SR at each point:

L_MSE = 1/(s·c·H·W) Σ_{v=1}^{s} Σ_{k=1}^{c} Σ_{i=1}^{H} Σ_{j=1}^{W} (I_HR(v, i, j, k) − I_SR(v, i, j, k))²

wherein H × W is the size of the feature map, c is the number of channels of the feature map, s is the number of mini-batch learning samples, and I(v, i, j, k) is the pixel value at position (i, j) in the k-th channel of the v-th image. The goal of model training is to minimize L_MSE: the smaller the value of L_MSE, the smaller the difference between the reconstructed image and the original image, the higher their similarity, and the better the reconstruction effect.
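The mean square error defined here is simply the average of squared pixel differences over the whole mini-batch; a direct NumPy transcription for a batch of shape (s, H, W, c) might look like:

```python
import numpy as np

def l_mse(I_HR, I_SR):
    """Mean squared error over a batch of images.

    Both arrays have shape (s, H, W, c): s samples, H x W pixels,
    c channels. Averages squared pixel differences over all axes,
    matching the 1/(s*c*H*W) normalization of the loss.
    """
    diff = I_HR.astype(float) - I_SR.astype(float)
    return (diff ** 2).mean()
```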
S322, specifically, to minimize the L2 loss function, an Adam optimizer is employed to optimize model training, with the learning rate adjusted by a step strategy. The Adam optimizer parameters are set as follows: initial learning rate base_lr = 1 × 10⁻⁴, exponential decay rates β₁ = 0.9 and β₂ = 0.999, and division-by-zero guard ε = 10⁻⁸. To fully utilize the computational performance of the GPU to improve training speed while avoiding OOM errors from exceeding the GPU memory limit, this embodiment adopts mini-batch training with the batch size (batch_size) set to 16. The learning rate is decayed to half of its value every 2 × 10⁵ iterations to achieve the best training effect. The input dimension [batch_size, H, W, c] of the neural network is thus 16 × 32 × 32 × 3, and the output dimension corresponds to the input dimension according to the magnification scale.
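The step schedule described in this embodiment (halving the rate every 2 × 10⁵ iterations starting from base_lr = 1 × 10⁻⁴) reduces to a one-line function; the name and signature below are illustrative, not from the patent:

```python
def step_lr(iteration, base_lr=1e-4, gamma=0.5, step=200_000):
    """Step decay: multiply the learning rate by `gamma` once per
    completed `step` iterations, starting from `base_lr`."""
    return base_lr * gamma ** (iteration // step)
```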
S323, in addition, to reduce dependence on memory, the training data are read into memory through a file queue.
And S33, after the model training is finished, storing the optimal model structure and the optimal parameter set for testing and reconstruction.
And S4, reconstructing the super-resolution image.
And S41, loading the stored optimal parameter set, inputting the low-resolution image to be reconstructed into the optimal network model, and outputting the reconstructed high-resolution image through prediction of the neural network.
S42, to verify the reconstruction quality of the images, the corresponding objective evaluation indexes, peak signal-to-noise ratio (PSNR) and structural similarity (SSIM), are calculated respectively; the larger the values of these two indexes, the closer the reconstructed image is to the real high-resolution image.
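Of the two indexes, PSNR follows directly from the mean square error (SSIM additionally requires windowed luminance, contrast and structure statistics and is omitted here); an illustrative NumPy sketch for 8-bit images:

```python
import numpy as np

def psnr(I_HR, I_SR, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images.

    `peak` is the maximum possible pixel value (255 for 8-bit images).
    Returns inf for identical images, and grows as the MSE shrinks,
    so larger values mean a reconstruction closer to the original.
    """
    mse = ((I_HR.astype(float) - I_SR.astype(float)) ** 2).mean()
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)
```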
According to the image super-resolution reconstruction method based on the fused attention mechanism residual network, firstly, a combined local and global residual learning strategy alleviates the problems of gradient vanishing and network degradation, reducing the difficulty of training a deep network. Secondly, channel and spatial attention mechanisms are fused simultaneously within the stacked residual blocks, so that feature learning focuses more on the high-frequency information in the low-resolution image, retaining more effective information and improving the quality of the reconstructed image. Finally, the strategy of performing feature learning in a low-dimensional space and reconstructing the high-resolution image in a high-dimensional space through up-sampling greatly reduces the computation memory and computational load, making network training more efficient.
The above description is only an embodiment of the present invention, but the scope of the present invention is not limited thereto; any modification or substitution that a person skilled in the art can readily conceive within the technical scope disclosed by the present invention shall be covered by the scope of the present invention. Therefore, the scope of the present invention shall be subject to the protection scope of the claims.
Claims (9)
1. The image super-resolution reconstruction method based on the fusion attention mechanism residual error network is characterized by comprising the following steps of:
s1, acquiring and preprocessing data, acquiring an image data set for training and an image data set to be reconstructed, and performing image preprocessing operation according to a reconstruction target and requirements;
s2, building a network model, and constructing a convolutional neural network model for image super-resolution, wherein the model mainly comprises a feature extraction layer, a feature learning layer and an image reconstruction layer; the characteristic extraction layer is used for carrying out shallow characteristic extraction on the input low-resolution image to obtain a characteristic diagram of the low-resolution image; the characteristic learning layer is used for carrying out multiple residual characteristic learning on the characteristic diagram of the low-resolution image to obtain the characteristic diagram of the high-resolution image; the image reconstruction layer is used for reconstructing the characteristic diagram of the high-resolution image and restoring the characteristic diagram into the high-resolution image with the corresponding magnification;
s3, initializing, training and storing model parameters, wherein the model parameters are initialized before training; inputting training data into a convolutional neural network for parameter learning in a training process, updating parameter values by utilizing gradient descent, and minimizing a loss function until the network converges; after training, storing the optimal model structure and the optimal parameter set;
and S4, reconstructing the super-resolution image, inputting the image to be reconstructed into the optimal network model, and loading the optimal parameter set, so that the high-resolution image under the corresponding amplification scale can be output.
2. The method for image super-resolution reconstruction based on the residual network with the fused attention mechanism of claim 1, wherein the specific method of step S1 is as follows:
s11, adopting a standard public data Set DIV2K as a training data Set, and adopting Set5, Set14, BSD100 and Urban100 as reconstruction data sets;
s12, carrying out down-sampling operation on the high-resolution images of the data set by adopting a bicubic interpolation method to generate corresponding low-resolution images, thereby obtaining training data containing multi-scale magnification;
s13, rotating and turning the training data set image by adopting a data enhancement mode, thereby expanding the sample number of the training data set;
s14, carrying out segmentation and cutting operation on the images in the training data set, thereby saving training time and calculation cost;
and S15, performing pixel value regularization operation on the input image, so that the pixel data value distribution is more uniform.
3. The method for image super-resolution reconstruction based on the residual network with the fused attention mechanism of claim 1, wherein the convolutional neural network model in step S2 sequentially comprises:
s21, a feature extraction layer, wherein the feature extraction layer is composed of two convolution layers and is used for extracting shallow features of the low-resolution image;
s22, a feature learning layer, wherein the feature learning layer is composed of a plurality of cascade fusion channels and residual blocks of a space attention mechanism, a local jump connection structure is adopted in each residual block, and a global jump connection structure is adopted in the whole feature learning layer;
and S23, an image reconstruction layer, wherein the image reconstruction layer is composed of an up-sampling module and a reconstruction convolution module and is used for restoring the high-level feature map into a high-resolution image.
4. The method for reconstructing image super resolution based on the residual network with the fused attention mechanism as claimed in claim 3, wherein the feature learning layer in the step S22 specifically comprises:
s221, the residual block is composed of a feature preprocessing unit, a channel attention unit, a space attention unit and a local jump connection structure;
s222, a feature learning layer in the convolutional neural network model is formed by cascading a plurality of residual blocks in a chain route;
and S223, in the final stage of feature learning, performing feature high-dimensional mapping by using a convolution layer, and connecting the bottom input and the top output of the whole feature learning layer by using global jump connection to perform global residual learning.
5. The method for image super-resolution reconstruction based on the fusion attention mechanism residual network according to claim 4, wherein the residual block in step S221 specifically comprises:
s2211, the characteristic preprocessing unit is composed of a convolution layer, a PReLU active layer and a convolution layer which are connected in sequence;
s2212, the channel attention unit is composed of a global average pooling layer, an expansion convolution layer, a PReLU active layer, a contraction convolution layer and a Sigmoid active layer which are connected in sequence, and the channel attention unit is used for outputting a new feature fused into a channel attention mechanism after channel-level multiplication is carried out on the obtained channel attention weight and the input feature;
s2213, the spatial attention unit is composed of an expansion convolutional layer, a PReLU active layer, a contraction convolutional layer and a Sigmoid active layer which are connected in sequence, and the spatial attention unit is used for outputting a new feature fused into a spatial attention mechanism after pixel-level multiplication is carried out on the obtained spatial attention mask and the input feature;
and S2214, directly connecting the input of the feature preprocessing unit with the output of the spatial attention unit by adopting a local jump connection structure, and learning residual features integrated into an attention mechanism.
6. The method for image super-resolution reconstruction based on the residual network with the fused attention mechanism of claim 3, wherein the image reconstruction layer in the step S23 specifically comprises:
s231, adopting convolution operation to expand the number of channels of the input feature map in the first layer;
s232: the second layer adopts a sub-pixel shuffle operation and is used for rearranging the channels and pixels of the input feature map to complete up-sampling;
s233: and the third layer adopts convolution operation and is used for recovering a real high-resolution image.
7. The method for image super-resolution reconstruction based on the residual network with the fused attention mechanism of claim 1, wherein the specific method of step S3 is as follows:
s31, before the model training starts, initializing the parameters by an Xavier method;
s32, when the model is trained, inputting the low-resolution images and the high-resolution images in the training set into the built neural network, and updating model parameters by using a gradient descent algorithm until the network converges;
and S33, after the model training is finished, storing the optimal model structure and the optimal parameter set.
8. The method for image super-resolution reconstruction based on the fusion attention mechanism residual error network according to claim 7, wherein the model training process in the step S32 specifically includes the following steps:
s321, adopting the L2 loss L_MSE as the loss function for optimizing parameter updates, with the goal of model training being to minimize the L_MSE loss value;
s322: optimizing model training by adopting an Adam optimizer, adjusting learning rate by utilizing a step strategy, and increasing training rate by utilizing a mini-batch training mode;
s323: the training data is read into the memory in a file queue mode, so that the dependence on the memory is reduced.
9. The method for image super-resolution reconstruction based on the residual network with the fused attention mechanism of claim 1, wherein the specific method of step S4 is as follows:
s41, loading an optimal parameter set, inputting the image to be reconstructed into an optimal network model, and outputting a reconstructed high-resolution image through prediction of a neural network;
and S42, calculating the peak signal-to-noise ratio and the structural similarity of the corresponding objective evaluation index, and verifying the reconstruction quality effect of the image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010002303.2A CN111192200A (en) | 2020-01-02 | 2020-01-02 | Image super-resolution reconstruction method based on fusion attention mechanism residual error network |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111192200A true CN111192200A (en) | 2020-05-22 |
Family
ID=70708377
CN113837940A (en) * | 2021-09-03 | 2021-12-24 | 山东师范大学 | Image super-resolution reconstruction method and system based on dense residual error network |
CN113888491A (en) * | 2021-09-27 | 2022-01-04 | 长沙理工大学 | Multilevel hyperspectral image progressive and hyper-resolution method and system based on non-local features |
CN113989300A (en) * | 2021-10-29 | 2022-01-28 | 北京百度网讯科技有限公司 | Lane line segmentation method and device, electronic equipment and storage medium |
CN114004784A (en) * | 2021-08-27 | 2022-02-01 | 西安市第三医院 | Method for detecting bone condition based on CT image and electronic equipment |
CN114049261A (en) * | 2022-01-13 | 2022-02-15 | 武汉理工大学 | Image super-resolution reconstruction method focusing on foreground information |
CN114066727A (en) * | 2021-07-28 | 2022-02-18 | 华侨大学 | Multi-stage progressive image super-resolution method |
CN114120406A (en) * | 2021-11-22 | 2022-03-01 | 四川轻化工大学 | Face feature extraction and classification method based on convolutional neural network |
CN114140357A (en) * | 2021-12-02 | 2022-03-04 | 哈尔滨工程大学 | Multi-temporal remote sensing image cloud region reconstruction method based on cooperative attention mechanism |
CN114170089A (en) * | 2021-09-30 | 2022-03-11 | 成都大学附属医院 | Method and electronic device for diabetic retinopathy classification |
CN114187181A (en) * | 2021-12-17 | 2022-03-15 | 福州大学 | Double-path lung CT image super-resolution method based on residual information refining |
CN114198295A (en) * | 2021-12-15 | 2022-03-18 | 中国石油天然气股份有限公司 | Compressor unit whole-system vibration monitoring method and device and electronic equipment thereof |
WO2022057837A1 (en) * | 2020-09-16 | 2022-03-24 | 广州虎牙科技有限公司 | Image processing method and apparatus, portrait super-resolution reconstruction method and apparatus, and portrait super-resolution reconstruction model training method and apparatus, electronic device, and storage medium |
CN114332592A (en) * | 2022-03-11 | 2022-04-12 | 中国海洋大学 | Ocean environment data fusion method and system based on attention mechanism |
CN114494493A (en) * | 2022-01-18 | 2022-05-13 | 清华大学 | Tomographic image reconstruction method, device, readable storage medium and electronic equipment |
CN114511576A (en) * | 2022-04-19 | 2022-05-17 | 山东建筑大学 | Image segmentation method and system for scale self-adaptive feature enhanced deep neural network |
CN114529456A (en) * | 2022-02-21 | 2022-05-24 | 深圳大学 | Super-resolution processing method, device, equipment and medium for video |
CN114549316A (en) * | 2022-02-18 | 2022-05-27 | 中国石油大学(华东) | Remote sensing single image super-resolution method based on channel self-attention multi-scale feature learning |
WO2022133874A1 (en) * | 2020-12-24 | 2022-06-30 | 京东方科技集团股份有限公司 | Image processing method and device and computer-readable storage medium |
CN114723608A (en) * | 2022-04-14 | 2022-07-08 | 西安电子科技大学 | Image super-resolution reconstruction method based on fluid particle network |
CN114724021A (en) * | 2022-05-25 | 2022-07-08 | 北京闪马智建科技有限公司 | Data identification method and device, storage medium and electronic device |
CN114742774A (en) * | 2022-03-30 | 2022-07-12 | 福州大学 | No-reference image quality evaluation method and system fusing local and global features |
CN114785834A (en) * | 2021-01-06 | 2022-07-22 | 南京邮电大学 | Design method of intelligent video monitoring energy-saving system of teaching building under IPv6 environment |
CN114972043A (en) * | 2022-08-03 | 2022-08-30 | 江西财经大学 | Image super-resolution reconstruction method and system based on combined trilateral feature filtering |
CN114972363A (en) * | 2022-05-13 | 2022-08-30 | 北京理工大学 | Image segmentation method and device, electronic equipment and computer storage medium |
CN114972040A (en) * | 2022-07-15 | 2022-08-30 | 南京林业大学 | Speckle image super-resolution reconstruction method for laminated veneer lumber |
CN115100042A (en) * | 2022-07-20 | 2022-09-23 | 北京工商大学 | Pathological image super-resolution method based on channel attention retention network |
CN115131214A (en) * | 2022-08-31 | 2022-09-30 | 南京邮电大学 | Indoor aged person image super-resolution reconstruction method and system based on self-attention |
CN115131198A (en) * | 2022-04-12 | 2022-09-30 | 腾讯科技(深圳)有限公司 | Model training method, image processing method, device, equipment and storage medium |
CN115239557A (en) * | 2022-07-11 | 2022-10-25 | 河北大学 | Light-weight X-ray image super-resolution reconstruction method |
CN115311145A (en) * | 2022-08-12 | 2022-11-08 | 中国电信股份有限公司 | Image processing method and device, electronic device and storage medium |
CN115330635A (en) * | 2022-08-25 | 2022-11-11 | 苏州大学 | Image compression artifact removing method and device and storage medium |
CN115358954A (en) * | 2022-10-21 | 2022-11-18 | 电子科技大学 | Attention-guided feature compression method |
CN115546032A (en) * | 2022-12-01 | 2022-12-30 | 泉州市蓝领物联科技有限公司 | Single-frame image super-resolution method based on feature fusion and attention mechanism |
CN115719309A (en) * | 2023-01-10 | 2023-02-28 | 湖南大学 | Spectrum super-resolution reconstruction method and system based on low-rank tensor network |
WO2023045297A1 (en) * | 2021-09-22 | 2023-03-30 | 深圳市中兴微电子技术有限公司 | Image super-resolution method and apparatus, and computer device and readable medium |
CN116188272A (en) * | 2023-03-15 | 2023-05-30 | 包头市易慧信息科技有限公司 | Two-stage depth network image super-resolution reconstruction method suitable for multiple fuzzy cores |
CN116385272A (en) * | 2023-05-08 | 2023-07-04 | 南京信息工程大学 | Image super-resolution reconstruction method, system and equipment |
CN116385265A (en) * | 2023-04-06 | 2023-07-04 | 北京交通大学 | Training method and device for image super-resolution network |
CN116467946A (en) * | 2023-04-21 | 2023-07-21 | 南京信息工程大学 | Deep learning-based mode prediction product downscaling method |
CN116664397A (en) * | 2023-04-19 | 2023-08-29 | 太原理工大学 | TransSR-Net structured image super-resolution reconstruction method |
WO2023154006A3 (en) * | 2022-02-10 | 2023-09-21 | Lemon Inc. | Method and system for a high-frequency attention network for efficient single image super-resolution |
CN117036162A (en) * | 2023-06-19 | 2023-11-10 | 河北大学 | Residual feature attention fusion method for super-resolution of lightweight chest CT image |
CN117078516A (en) * | 2023-08-11 | 2023-11-17 | 济宁安泰矿山设备制造有限公司 | Mine image super-resolution reconstruction method based on residual mixed attention |
CN117172134A (en) * | 2023-10-19 | 2023-12-05 | 武汉大学 | Moon surface multiscale DEM modeling method and system based on converged terrain features |
CN117274066A (en) * | 2023-11-21 | 2023-12-22 | 北京渲光科技有限公司 | Image synthesis model, method, device and storage medium |
CN117495681A (en) * | 2024-01-03 | 2024-02-02 | 国网山东省电力公司济南供电公司 | Infrared image super-resolution reconstruction system and method |
CN117576467A (en) * | 2023-11-22 | 2024-02-20 | 安徽大学 | Crop disease image identification method integrating frequency domain and spatial domain information |
2020-01-02: Application CN202010002303.2A filed (CN); published as CN111192200A; legal status: not active (Withdrawn)
Cited By (205)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111681166A (en) * | 2020-06-02 | 2020-09-18 | 重庆理工大学 | Image super-resolution reconstruction method of stacked attention mechanism coding and decoding unit |
CN111833246B (en) * | 2020-06-02 | 2022-07-08 | 天津大学 | Single-frame image super-resolution method based on attention cascade network |
CN111833246A (en) * | 2020-06-02 | 2020-10-27 | 天津大学 | Single-frame image super-resolution method based on attention cascade network |
CN111833261A (en) * | 2020-06-03 | 2020-10-27 | 北京工业大学 | Image super-resolution restoration method for generating countermeasure network based on attention |
CN111818298B (en) * | 2020-06-08 | 2021-10-22 | 北京航空航天大学 | High-definition video monitoring system and method based on light field |
CN111818298A (en) * | 2020-06-08 | 2020-10-23 | 北京航空航天大学 | High-definition video monitoring system and method based on light field |
WO2021249523A1 (en) * | 2020-06-12 | 2021-12-16 | 华为技术有限公司 | Image processing method and device |
CN111667412A (en) * | 2020-06-16 | 2020-09-15 | 中国矿业大学 | Method and device for reconstructing image super-resolution based on cross learning network |
CN111709481A (en) * | 2020-06-17 | 2020-09-25 | 云南省烟草农业科学研究院 | Tobacco disease identification method, system, platform and storage medium |
CN111709481B (en) * | 2020-06-17 | 2023-12-12 | 云南省烟草农业科学研究院 | Tobacco disease identification method, system, platform and storage medium |
CN111754404B (en) * | 2020-06-18 | 2022-07-01 | 重庆邮电大学 | Remote sensing image space-time fusion method based on multi-scale mechanism and attention mechanism |
CN111754404A (en) * | 2020-06-18 | 2020-10-09 | 重庆邮电大学 | Remote sensing image space-time fusion method based on multi-scale mechanism and attention mechanism |
CN113837935A (en) * | 2020-06-24 | 2021-12-24 | 四川大学 | Compressed image super-resolution reconstruction method based on attention-enhancing network |
CN111488952A (en) * | 2020-06-28 | 2020-08-04 | 浙江大学 | Depth residual error model construction method suitable for automatic hub identification |
CN111724308A (en) * | 2020-06-28 | 2020-09-29 | 深圳壹账通智能科技有限公司 | Blurred image processing method and system |
CN111754507A (en) * | 2020-07-03 | 2020-10-09 | 征图智能科技(江苏)有限公司 | Light-weight industrial defect image classification method based on strong attention machine mechanism |
CN111861886B (en) * | 2020-07-15 | 2023-08-08 | 南京信息工程大学 | Image super-resolution reconstruction method based on multi-scale feedback network |
CN111986102A (en) * | 2020-07-15 | 2020-11-24 | 万达信息股份有限公司 | Digital pathological image deblurring method |
CN111861886A (en) * | 2020-07-15 | 2020-10-30 | 南京信息工程大学 | Image super-resolution reconstruction method based on multi-scale feedback network |
CN111986102B (en) * | 2020-07-15 | 2024-02-27 | 万达信息股份有限公司 | Digital pathological image deblurring method |
CN111881920B (en) * | 2020-07-16 | 2024-04-09 | 深圳力维智联技术有限公司 | Network adaptation method of large-resolution image and neural network training device |
CN111881920A (en) * | 2020-07-16 | 2020-11-03 | 深圳力维智联技术有限公司 | Network adaptation method of large-resolution image and neural network training device |
CN111932454A (en) * | 2020-07-22 | 2020-11-13 | 杭州电子科技大学 | LOGO pattern reconstruction method based on improved binary closed-loop neural network |
CN111932454B (en) * | 2020-07-22 | 2022-05-27 | 杭州电子科技大学 | LOGO pattern reconstruction method based on improved binary closed-loop neural network |
CN111861961A (en) * | 2020-07-25 | 2020-10-30 | 安徽理工大学 | Multi-scale residual error fusion model for single image super-resolution and restoration method thereof |
CN111861961B (en) * | 2020-07-25 | 2023-09-22 | 安徽理工大学 | Single image super-resolution multi-scale residual error fusion model and restoration method thereof |
CN111882543B (en) * | 2020-07-29 | 2023-12-26 | 南通大学 | Cigarette filter stick counting method based on AA R2Unet and HMM |
CN111882543A (en) * | 2020-07-29 | 2020-11-03 | 南通大学 | Cigarette filter stick counting method based on AA R2Unet and HMM |
CN111899315A (en) * | 2020-08-07 | 2020-11-06 | 深圳先进技术研究院 | Method for reconstructing low-dose image by using multi-scale feature perception depth network |
CN111899315B (en) * | 2020-08-07 | 2024-04-26 | 深圳先进技术研究院 | Method for reconstructing low-dose image by using multi-scale feature perception depth network |
CN111951164A (en) * | 2020-08-11 | 2020-11-17 | 哈尔滨理工大学 | Image super-resolution reconstruction network structure and image reconstruction effect analysis method |
CN112132778A (en) * | 2020-08-12 | 2020-12-25 | 浙江工业大学 | Medical image lesion segmentation method based on space transfer self-learning |
CN111950643A (en) * | 2020-08-18 | 2020-11-17 | 创新奇智(上海)科技有限公司 | Model training method, image classification method and corresponding device |
CN112070690B (en) * | 2020-08-25 | 2023-04-25 | 西安理工大学 | Single image rain removing method based on convolution neural network double-branch attention generation |
CN112070690A (en) * | 2020-08-25 | 2020-12-11 | 西安理工大学 | Single image rain removing method based on convolutional neural network double-branch attention generation |
CN112070670A (en) * | 2020-09-03 | 2020-12-11 | 武汉工程大学 | Face super-resolution method and system of global-local separation attention mechanism |
CN112070670B (en) * | 2020-09-03 | 2022-05-10 | 武汉工程大学 | Face super-resolution method and system of global-local separation attention mechanism |
CN112070676A (en) * | 2020-09-10 | 2020-12-11 | 东北大学秦皇岛分校 | Image super-resolution reconstruction method of two-channel multi-sensing convolutional neural network |
CN112070676B (en) * | 2020-09-10 | 2023-10-27 | 东北大学秦皇岛分校 | Picture super-resolution reconstruction method of double-channel multi-perception convolutional neural network |
WO2022057837A1 (en) * | 2020-09-16 | 2022-03-24 | 广州虎牙科技有限公司 | Image processing method and apparatus, portrait super-resolution reconstruction method and apparatus, and portrait super-resolution reconstruction model training method and apparatus, electronic device, and storage medium |
CN113766250B (en) * | 2020-09-29 | 2022-05-27 | 四川大学 | Compressed image quality improving method based on sampling reconstruction and feature enhancement |
CN113766250A (en) * | 2020-09-29 | 2021-12-07 | 四川大学 | Compressed image quality improving method based on sampling reconstruction and feature enhancement |
CN112183432A (en) * | 2020-10-12 | 2021-01-05 | 中国科学院空天信息创新研究院 | Building area extraction method and system based on medium-resolution SAR image |
CN112200724A (en) * | 2020-10-22 | 2021-01-08 | 长沙理工大学 | Single-image super-resolution reconstruction system and method based on feedback mechanism |
CN112288714A (en) * | 2020-10-28 | 2021-01-29 | 西安电子科技大学 | Hardware Trojan horse detection method based on deep learning |
CN112215755A (en) * | 2020-10-28 | 2021-01-12 | 南京信息工程大学 | Image super-resolution reconstruction method based on back projection attention network |
CN112288714B (en) * | 2020-10-28 | 2022-12-27 | 西安电子科技大学 | Hardware Trojan horse detection method based on deep learning |
CN112785684B (en) * | 2020-11-13 | 2022-06-14 | 北京航空航天大学 | Three-dimensional model reconstruction method based on local information weighting mechanism |
CN112785684A (en) * | 2020-11-13 | 2021-05-11 | 北京航空航天大学 | Three-dimensional model reconstruction method based on local information weighting mechanism |
CN112330542B (en) * | 2020-11-18 | 2022-05-03 | 重庆邮电大学 | Image reconstruction system and method based on CRCSAN network |
CN112330542A (en) * | 2020-11-18 | 2021-02-05 | 重庆邮电大学 | Image reconstruction system and method based on CRCSAN network |
CN112288658A (en) * | 2020-11-23 | 2021-01-29 | 杭州师范大学 | Underwater image enhancement method based on multi-residual joint learning |
CN112419153A (en) * | 2020-11-23 | 2021-02-26 | 深圳供电局有限公司 | Image super-resolution reconstruction method and device, computer equipment and storage medium |
CN112288658B (en) * | 2020-11-23 | 2023-11-28 | 杭州师范大学 | Underwater image enhancement method based on multi-residual joint learning |
CN112435197A (en) * | 2020-12-02 | 2021-03-02 | 携程计算机技术(上海)有限公司 | Image beautifying method and device, electronic equipment and storage medium |
CN112581345A (en) * | 2020-12-04 | 2021-03-30 | 华南理工大学 | Image block-based image steganalysis method, system, device and medium |
CN112633429A (en) * | 2020-12-21 | 2021-04-09 | 安徽七天教育科技有限公司 | Method for recognizing handwriting choice questions of students |
CN112668619B (en) * | 2020-12-22 | 2024-04-16 | 万兴科技集团股份有限公司 | Image processing method, device, terminal and storage medium |
CN112668619A (en) * | 2020-12-22 | 2021-04-16 | 万兴科技集团股份有限公司 | Image processing method, device, terminal and storage medium |
WO2022133874A1 (en) * | 2020-12-24 | 2022-06-30 | 京东方科技集团股份有限公司 | Image processing method and device and computer-readable storage medium |
CN115668272A (en) * | 2020-12-24 | 2023-01-31 | 京东方科技集团股份有限公司 | Image processing method and apparatus, computer readable storage medium |
CN112767259A (en) * | 2020-12-29 | 2021-05-07 | 上海联影智能医疗科技有限公司 | Image processing method, image processing device, computer equipment and storage medium |
CN112801868B (en) * | 2021-01-04 | 2022-11-11 | 青岛信芯微电子科技股份有限公司 | Method for image super-resolution reconstruction, electronic device and storage medium |
CN112801868A (en) * | 2021-01-04 | 2021-05-14 | 青岛信芯微电子科技股份有限公司 | Method for image super-resolution reconstruction, electronic device and storage medium |
CN114785834A (en) * | 2021-01-06 | 2022-07-22 | 南京邮电大学 | Design method of intelligent video monitoring energy-saving system of teaching building under IPv6 environment |
CN114785834B (en) * | 2021-01-06 | 2024-04-09 | 南京邮电大学 | Teaching building intelligent video monitoring energy-saving system design method under IPv6 environment |
CN112767246A (en) * | 2021-01-07 | 2021-05-07 | 北京航空航天大学 | Multi-magnification spatial super-resolution method and device for light field image |
CN112819910A (en) * | 2021-01-08 | 2021-05-18 | 上海理工大学 | Hyperspectral image reconstruction method based on double-ghost attention machine mechanism network |
CN112801945A (en) * | 2021-01-11 | 2021-05-14 | 西北大学 | Depth Gaussian mixture model skull registration method based on dual attention mechanism feature extraction |
CN112734731A (en) * | 2021-01-11 | 2021-04-30 | 牧原食品股份有限公司 | Livestock temperature detection method, device, equipment and storage medium |
CN112734643A (en) * | 2021-01-15 | 2021-04-30 | 重庆邮电大学 | Lightweight image super-resolution reconstruction method based on cascade network |
CN112734645A (en) * | 2021-01-19 | 2021-04-30 | 青岛大学 | Light-weight image super-resolution reconstruction method based on characteristic distillation multiplexing |
CN112734646A (en) * | 2021-01-19 | 2021-04-30 | 青岛大学 | Image super-resolution reconstruction method based on characteristic channel division |
CN112734646B (en) * | 2021-01-19 | 2024-02-02 | 青岛大学 | Image super-resolution reconstruction method based on feature channel division |
CN112734645B (en) * | 2021-01-19 | 2023-11-03 | 青岛大学 | Lightweight image super-resolution reconstruction method based on feature distillation multiplexing |
CN112767251B (en) * | 2021-01-20 | 2023-04-07 | 重庆邮电大学 | Image super-resolution method based on multi-scale detail feature fusion neural network |
CN112767251A (en) * | 2021-01-20 | 2021-05-07 | 重庆邮电大学 | Image super-resolution method based on multi-scale detail feature fusion neural network |
CN112750082A (en) * | 2021-01-21 | 2021-05-04 | 武汉工程大学 | Face super-resolution method and system based on fusion attention mechanism |
CN112950464A (en) * | 2021-01-25 | 2021-06-11 | 西安电子科技大学 | Binary super-resolution reconstruction method without regularization layer |
CN112950464B (en) * | 2021-01-25 | 2023-09-01 | 西安电子科技大学 | Binary super-resolution reconstruction method without regularization layer |
CN112911304A (en) * | 2021-01-29 | 2021-06-04 | 南京信息工程大学滨江学院 | Encoding-based two-way video compression device and compressed video reconstruction method |
CN112911304B (en) * | 2021-01-29 | 2022-03-25 | 南京信息工程大学滨江学院 | Encoding-based two-way video compression device and compressed video reconstruction method |
CN112967184A (en) * | 2021-02-04 | 2021-06-15 | 西安理工大学 | Super-resolution amplification method based on double-scale convolutional neural network |
CN112967184B (en) * | 2021-02-04 | 2022-12-13 | 西安理工大学 | Super-resolution amplification method based on double-scale convolutional neural network |
CN112950471A (en) * | 2021-02-26 | 2021-06-11 | 杭州朗和科技有限公司 | Video super-resolution processing method and device, super-resolution reconstruction model and medium |
CN112767255A (en) * | 2021-03-04 | 2021-05-07 | 山东大学 | Image super-resolution reconstruction method and system based on feature separation fusion network |
CN112767255B (en) * | 2021-03-04 | 2022-11-29 | 山东大学 | Image super-resolution reconstruction method and system based on feature separation fusion network |
CN112862689A (en) * | 2021-03-09 | 2021-05-28 | 南京邮电大学 | Image super-resolution reconstruction method and system |
CN112967295B (en) * | 2021-03-10 | 2024-04-05 | 中国科学院深圳先进技术研究院 | Image processing method and system based on residual network and attention mechanism |
CN112967295A (en) * | 2021-03-10 | 2021-06-15 | 中国科学院深圳先进技术研究院 | Image processing method and system based on residual error network and attention mechanism |
CN113516133B (en) * | 2021-04-01 | 2022-06-17 | 中南大学 | Multi-modal image classification method and system |
CN113516133A (en) * | 2021-04-01 | 2021-10-19 | 中南大学 | Multi-modal image classification method and system |
CN113096017B (en) * | 2021-04-14 | 2022-01-25 | 南京林业大学 | Image super-resolution reconstruction method based on depth coordinate attention network model |
CN113096017A (en) * | 2021-04-14 | 2021-07-09 | 南京林业大学 | Image super-resolution reconstruction method based on depth coordinate attention network model |
CN113052848B (en) * | 2021-04-15 | 2023-02-17 | 山东大学 | Chicken image segmentation method and system based on multi-scale attention network |
CN113052848A (en) * | 2021-04-15 | 2021-06-29 | 山东大学 | Chicken image segmentation method and system based on multi-scale attention network |
CN113052764A (en) * | 2021-04-19 | 2021-06-29 | 东南大学 | Video sequence super-resolution reconstruction method based on residual connection |
CN113223001A (en) * | 2021-05-07 | 2021-08-06 | 西安智诊智能科技有限公司 | Image segmentation method based on multi-resolution residual error network |
CN113222819A (en) * | 2021-05-19 | 2021-08-06 | 厦门大学 | Remote sensing image super-resolution reconstruction method based on deep convolutional neural network |
CN113222819B (en) * | 2021-05-19 | 2022-07-26 | 厦门大学 | Remote sensing image super-resolution reconstruction method based on deep convolution neural network |
CN113516693B (en) * | 2021-05-21 | 2023-01-03 | 郑健青 | Rapid and universal image registration method |
CN113516693A (en) * | 2021-05-21 | 2021-10-19 | 郑健青 | Rapid and universal image registration method |
CN113222821A (en) * | 2021-05-24 | 2021-08-06 | 南京航空航天大学 | Image super-resolution processing method for annular target detection |
CN113379600A (en) * | 2021-05-26 | 2021-09-10 | 北京邮电大学 | Short video super-resolution conversion method, device and medium based on deep learning |
CN113327203A (en) * | 2021-05-28 | 2021-08-31 | 北京百度网讯科技有限公司 | Image processing network model, method, apparatus and medium |
CN113312604B (en) * | 2021-05-31 | 2023-05-09 | 南京信息工程大学 | Distributed secret image sharing method with public reconstruction based on blockchain authentication |
CN113312604A (en) * | 2021-05-31 | 2021-08-27 | 南京信息工程大学 | Block chain authentication-based public reconstruction-based distributed secret image sharing method |
CN113298716A (en) * | 2021-05-31 | 2021-08-24 | 重庆师范大学 | Image super-resolution reconstruction method based on convolutional neural network |
CN113298716B (en) * | 2021-05-31 | 2023-09-12 | 重庆师范大学 | Image super-resolution reconstruction method based on convolutional neural network |
CN113362225A (en) * | 2021-06-03 | 2021-09-07 | 太原科技大学 | Multi-description compressed image enhancement method based on residual recursive compensation and feature fusion |
CN113313691A (en) * | 2021-06-03 | 2021-08-27 | 上海市第一人民医院 | Thyroid color Doppler ultrasound processing method based on deep learning |
CN113298717A (en) * | 2021-06-08 | 2021-08-24 | 浙江工业大学 | Medical image super-resolution reconstruction method based on multi-attention residual error feature fusion |
CN113470044A (en) * | 2021-06-09 | 2021-10-01 | 东北大学 | CT image liver automatic segmentation method based on deep convolutional neural network |
CN113421187A (en) * | 2021-06-10 | 2021-09-21 | 山东师范大学 | Super-resolution reconstruction method, system, storage medium and equipment |
CN113256496A (en) * | 2021-06-11 | 2021-08-13 | 四川省人工智能研究院(宜宾) | Lightweight progressive feature fusion image super-resolution system and method |
CN113256496B (en) * | 2021-06-11 | 2021-09-21 | 四川省人工智能研究院(宜宾) | Lightweight progressive feature fusion image super-resolution system and method |
CN113313129B (en) * | 2021-06-22 | 2024-04-05 | 中国平安财产保险股份有限公司 | Training method, device, equipment and storage medium for disaster damage recognition model |
CN113313129A (en) * | 2021-06-22 | 2021-08-27 | 中国平安财产保险股份有限公司 | Method, device and equipment for training disaster recognition model and storage medium |
CN113487481A (en) * | 2021-07-02 | 2021-10-08 | 河北工业大学 | Circular video super-resolution method based on information construction and multi-density residual block |
CN113487486A (en) * | 2021-07-23 | 2021-10-08 | 河南牧原智能科技有限公司 | Image resolution enhancement processing method, device and equipment and readable storage medium |
CN113538244B (en) * | 2021-07-23 | 2023-09-01 | 西安电子科技大学 | Lightweight super-resolution reconstruction method based on self-adaptive weight learning |
CN113538244A (en) * | 2021-07-23 | 2021-10-22 | 西安电子科技大学 | Lightweight super-resolution reconstruction method based on adaptive weight learning |
CN113436076A (en) * | 2021-07-26 | 2021-09-24 | 柚皮(重庆)科技有限公司 | Image super-resolution reconstruction method with characteristics gradually fused and electronic equipment |
CN114066727A (en) * | 2021-07-28 | 2022-02-18 | 华侨大学 | Multi-stage progressive image super-resolution method |
CN113506222A (en) * | 2021-07-30 | 2021-10-15 | 合肥工业大学 | Multi-mode image super-resolution method based on convolutional neural network |
CN113506222B (en) * | 2021-07-30 | 2024-03-01 | 合肥工业大学 | Multi-mode image super-resolution method based on convolutional neural network |
CN113313102B (en) * | 2021-08-02 | 2021-11-05 | 南京天朗防务科技有限公司 | Random resonance chaotic small signal detection method based on variant differential evolution algorithm |
CN113313102A (en) * | 2021-08-02 | 2021-08-27 | 南京天朗防务科技有限公司 | Random resonance chaotic small signal detection method based on variant differential evolution algorithm |
CN113628296A (en) * | 2021-08-04 | 2021-11-09 | 中国科学院自动化研究所 | Magnetic particle imaging reconstruction method from time-frequency domain signal to two-dimensional image |
CN113628296B (en) * | 2021-08-04 | 2023-12-15 | 中国科学院自动化研究所 | Magnetic particle imaging reconstruction method from time-frequency domain signal to two-dimensional image |
CN113658047A (en) * | 2021-08-18 | 2021-11-16 | 北京石油化工学院 | Crystal image super-resolution reconstruction method |
CN113628115A (en) * | 2021-08-25 | 2021-11-09 | Oppo广东移动通信有限公司 | Image reconstruction processing method and device, electronic equipment and storage medium |
CN113628115B (en) * | 2021-08-25 | 2023-12-05 | Oppo广东移动通信有限公司 | Image reconstruction processing method, device, electronic equipment and storage medium |
CN114004784B (en) * | 2021-08-27 | 2022-06-03 | 西安市第三医院 | Method for detecting bone condition based on CT image and electronic equipment |
CN114004784A (en) * | 2021-08-27 | 2022-02-01 | 西安市第三医院 | Method for detecting bone condition based on CT image and electronic equipment |
CN113837940A (en) * | 2021-09-03 | 2021-12-24 | 山东师范大学 | Image super-resolution reconstruction method and system based on dense residual error network |
CN113706386A (en) * | 2021-09-04 | 2021-11-26 | 大连钜智信息科技有限公司 | Super-resolution reconstruction method based on attention mechanism |
WO2023045297A1 (en) * | 2021-09-22 | 2023-03-30 | 深圳市中兴微电子技术有限公司 | Image super-resolution method and apparatus, and computer device and readable medium |
CN113706388A (en) * | 2021-09-24 | 2021-11-26 | 上海壁仞智能科技有限公司 | Image super-resolution reconstruction method and device |
CN113570505A (en) * | 2021-09-24 | 2021-10-29 | 中国石油大学(华东) | Shale three-dimensional super-resolution digital core grading reconstruction method and system |
CN113888491A (en) * | 2021-09-27 | 2022-01-04 | 长沙理工大学 | Multilevel hyperspectral image progressive and hyper-resolution method and system based on non-local features |
CN114170089A (en) * | 2021-09-30 | 2022-03-11 | 成都大学附属医院 | Method and electronic device for diabetic retinopathy classification |
CN114170089B (en) * | 2021-09-30 | 2023-07-07 | 成都市第二人民医院 | Method for classifying diabetic retinopathy and electronic equipment |
CN113989300A (en) * | 2021-10-29 | 2022-01-28 | 北京百度网讯科技有限公司 | Lane line segmentation method and device, electronic equipment and storage medium |
CN114120406A (en) * | 2021-11-22 | 2022-03-01 | 四川轻化工大学 | Face feature extraction and classification method based on convolutional neural network |
CN114120406B (en) * | 2021-11-22 | 2024-06-07 | 四川轻化工大学 | Face feature extraction and classification method based on convolutional neural network |
CN114140357B (en) * | 2021-12-02 | 2024-04-19 | 哈尔滨工程大学 | Multi-temporal remote sensing image cloud zone reconstruction method based on cooperative attention mechanism |
CN114140357A (en) * | 2021-12-02 | 2022-03-04 | 哈尔滨工程大学 | Multi-temporal remote sensing image cloud region reconstruction method based on cooperative attention mechanism |
CN114198295A (en) * | 2021-12-15 | 2022-03-18 | 中国石油天然气股份有限公司 | Compressor unit whole-system vibration monitoring method and device and electronic equipment thereof |
CN114187181A (en) * | 2021-12-17 | 2022-03-15 | 福州大学 | Double-path lung CT image super-resolution method based on residual information refining |
CN114187181B (en) * | 2021-12-17 | 2024-06-07 | 福州大学 | Dual-path lung CT image super-resolution method based on residual information refining |
CN114049261A (en) * | 2022-01-13 | 2022-02-15 | 武汉理工大学 | Image super-resolution reconstruction method focusing on foreground information |
CN114049261B (en) * | 2022-01-13 | 2022-04-01 | 武汉理工大学 | Image super-resolution reconstruction method focusing on foreground information |
CN114494493A (en) * | 2022-01-18 | 2022-05-13 | 清华大学 | Tomographic image reconstruction method, device, readable storage medium and electronic equipment |
WO2023154006A3 (en) * | 2022-02-10 | 2023-09-21 | Lemon Inc. | Method and system for a high-frequency attention network for efficient single image super-resolution |
CN114549316A (en) * | 2022-02-18 | 2022-05-27 | 中国石油大学(华东) | Remote sensing single image super-resolution method based on channel self-attention multi-scale feature learning |
CN114529456A (en) * | 2022-02-21 | 2022-05-24 | 深圳大学 | Super-resolution processing method, device, equipment and medium for video |
CN114529456B (en) * | 2022-02-21 | 2022-10-21 | 深圳大学 | Super-resolution processing method, device, equipment and medium for video |
CN114332592A (en) * | 2022-03-11 | 2022-04-12 | 中国海洋大学 | Ocean environment data fusion method and system based on attention mechanism |
CN114742774A (en) * | 2022-03-30 | 2022-07-12 | 福州大学 | No-reference image quality evaluation method and system fusing local and global features |
CN115131198A (en) * | 2022-04-12 | 2022-09-30 | 腾讯科技(深圳)有限公司 | Model training method, image processing method, device, equipment and storage medium |
CN115131198B (en) * | 2022-04-12 | 2024-03-22 | 腾讯科技(深圳)有限公司 | Model training method, image processing method, device, equipment and storage medium |
CN114723608B (en) * | 2022-04-14 | 2023-04-07 | 西安电子科技大学 | Image super-resolution reconstruction method based on fluid particle network |
CN114723608A (en) * | 2022-04-14 | 2022-07-08 | 西安电子科技大学 | Image super-resolution reconstruction method based on fluid particle network |
CN114511576A (en) * | 2022-04-19 | 2022-05-17 | 山东建筑大学 | Image segmentation method and system for scale self-adaptive feature enhanced deep neural network |
CN114972363A (en) * | 2022-05-13 | 2022-08-30 | 北京理工大学 | Image segmentation method and device, electronic equipment and computer storage medium |
CN114724021A (en) * | 2022-05-25 | 2022-07-08 | 北京闪马智建科技有限公司 | Data identification method and device, storage medium and electronic device |
CN115239557B (en) * | 2022-07-11 | 2023-10-24 | 河北大学 | Light X-ray image super-resolution reconstruction method |
CN115239557A (en) * | 2022-07-11 | 2022-10-25 | 河北大学 | Light-weight X-ray image super-resolution reconstruction method |
CN114972040A (en) * | 2022-07-15 | 2022-08-30 | 南京林业大学 | Speckle image super-resolution reconstruction method for laminated veneer lumber |
CN115100042B (en) * | 2022-07-20 | 2024-05-03 | 北京工商大学 | Pathological image super-resolution method based on channel attention retention network |
CN115100042A (en) * | 2022-07-20 | 2022-09-23 | 北京工商大学 | Pathological image super-resolution method based on channel attention retention network |
CN114972043A (en) * | 2022-08-03 | 2022-08-30 | 江西财经大学 | Image super-resolution reconstruction method and system based on combined trilateral feature filtering |
CN115311145B (en) * | 2022-08-12 | 2024-06-11 | 中国电信股份有限公司 | Image processing method and device, electronic equipment and storage medium |
CN115311145A (en) * | 2022-08-12 | 2022-11-08 | 中国电信股份有限公司 | Image processing method and device, electronic device and storage medium |
CN115330635B (en) * | 2022-08-25 | 2023-08-15 | 苏州大学 | Image compression artifact removing method, device and storage medium |
CN115330635A (en) * | 2022-08-25 | 2022-11-11 | 苏州大学 | Image compression artifact removing method and device and storage medium |
CN115131214B (en) * | 2022-08-31 | 2022-11-29 | 南京邮电大学 | Indoor elderly-person image super-resolution reconstruction method and system based on self-attention |
CN115131214A (en) * | 2022-08-31 | 2022-09-30 | 南京邮电大学 | Indoor elderly-person image super-resolution reconstruction method and system based on self-attention |
CN115358954B (en) * | 2022-10-21 | 2022-12-23 | 电子科技大学 | Attention-guided feature compression method |
CN115358954A (en) * | 2022-10-21 | 2022-11-18 | 电子科技大学 | Attention-guided feature compression method |
CN115546032A (en) * | 2022-12-01 | 2022-12-30 | 泉州市蓝领物联科技有限公司 | Single-frame image super-resolution method based on feature fusion and attention mechanism |
CN115719309A (en) * | 2023-01-10 | 2023-02-28 | 湖南大学 | Spectrum super-resolution reconstruction method and system based on low-rank tensor network |
CN116188272A (en) * | 2023-03-15 | 2023-05-30 | 包头市易慧信息科技有限公司 | Two-stage deep network image super-resolution reconstruction method suitable for multiple blur kernels |
CN116188272B (en) * | 2023-03-15 | 2023-11-10 | 包头市易慧信息科技有限公司 | Two-stage deep network image super-resolution reconstruction method suitable for multiple blur kernels |
CN116385265A (en) * | 2023-04-06 | 2023-07-04 | 北京交通大学 | Training method and device for image super-resolution network |
CN116385265B (en) * | 2023-04-06 | 2023-10-17 | 北京交通大学 | Training method and device for image super-resolution network |
CN116664397B (en) * | 2023-04-19 | 2023-11-10 | 太原理工大学 | TransSR-Net structured image super-resolution reconstruction method |
CN116664397A (en) * | 2023-04-19 | 2023-08-29 | 太原理工大学 | TransSR-Net structured image super-resolution reconstruction method |
CN116467946B (en) * | 2023-04-21 | 2023-10-27 | 南京信息工程大学 | Deep learning-based mode prediction product downscaling method |
CN116467946A (en) * | 2023-04-21 | 2023-07-21 | 南京信息工程大学 | Deep learning-based mode prediction product downscaling method |
CN116385272B (en) * | 2023-05-08 | 2023-12-19 | 南京信息工程大学 | Image super-resolution reconstruction method, system and equipment |
CN116385272A (en) * | 2023-05-08 | 2023-07-04 | 南京信息工程大学 | Image super-resolution reconstruction method, system and equipment |
CN117036162A (en) * | 2023-06-19 | 2023-11-10 | 河北大学 | Residual feature attention fusion method for super-resolution of lightweight chest CT image |
CN117036162B (en) * | 2023-06-19 | 2024-02-09 | 河北大学 | Residual feature attention fusion method for super-resolution of lightweight chest CT image |
CN117078516B (en) * | 2023-08-11 | 2024-03-12 | 济宁安泰矿山设备制造有限公司 | Mine image super-resolution reconstruction method based on residual mixed attention |
CN117078516A (en) * | 2023-08-11 | 2023-11-17 | 济宁安泰矿山设备制造有限公司 | Mine image super-resolution reconstruction method based on residual mixed attention |
CN117172134A (en) * | 2023-10-19 | 2023-12-05 | 武汉大学 | Multiscale lunar-surface DEM modeling method and system based on fused terrain features |
CN117172134B (en) * | 2023-10-19 | 2024-01-16 | 武汉大学 | Multiscale lunar-surface DEM modeling method based on fused terrain features |
CN117274066A (en) * | 2023-11-21 | 2023-12-22 | 北京渲光科技有限公司 | Image synthesis model, method, device and storage medium |
CN117274066B (en) * | 2023-11-21 | 2024-02-09 | 北京渲光科技有限公司 | Image synthesis model, method, device and storage medium |
CN117576467B (en) * | 2023-11-22 | 2024-04-26 | 安徽大学 | Crop disease image identification method integrating frequency domain and spatial domain information |
CN117576467A (en) * | 2023-11-22 | 2024-02-20 | 安徽大学 | Crop disease image identification method integrating frequency domain and spatial domain information |
CN117495681A (en) * | 2024-01-03 | 2024-02-02 | 国网山东省电力公司济南供电公司 | Infrared image super-resolution reconstruction system and method |
CN117495681B (en) * | 2024-01-03 | 2024-05-24 | 国网山东省电力公司济南供电公司 | Infrared image super-resolution reconstruction system and method |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111192200A (en) | Image super-resolution reconstruction method based on fusion attention mechanism residual error network | |
CN110969577B (en) | Video super-resolution reconstruction method based on deep double attention network | |
CN108122197B (en) | Image super-resolution reconstruction method based on deep learning | |
CN110599401A (en) | Remote sensing image super-resolution reconstruction method, processing device and readable storage medium | |
CN110675321A (en) | Super-resolution image reconstruction method based on progressive depth residual error network | |
CN112037131A (en) | Single-image super-resolution reconstruction method based on generation countermeasure network | |
CN107784628B (en) | Super-resolution implementation method based on reconstruction optimization and deep neural network | |
CN111861884B (en) | Satellite cloud image super-resolution reconstruction method based on deep learning | |
CN109949222B (en) | Image super-resolution reconstruction method based on semantic graph | |
CN111681166A (en) | Image super-resolution reconstruction method of stacked attention mechanism coding and decoding unit | |
CN112132959A (en) | Digital rock core image processing method and device, computer equipment and storage medium | |
CN111127325B (en) | Satellite video super-resolution reconstruction method and system based on cyclic neural network | |
CN111932461A (en) | Convolutional neural network-based self-learning image super-resolution reconstruction method and system | |
Luo et al. | Lattice network for lightweight image restoration | |
CN111951164B (en) | Image super-resolution reconstruction network structure and image reconstruction effect analysis method | |
CN115564649B (en) | Image super-resolution reconstruction method, device and equipment | |
CN113538246B (en) | Remote sensing image super-resolution reconstruction method based on unsupervised multi-stage fusion network | |
CN112699844A (en) | Image super-resolution method based on multi-scale residual error level dense connection network | |
CN114841856A (en) | Image super-pixel reconstruction method based on a densely connected network with deep residual channel-spatial attention | |
CN116681584A (en) | Multistage diffusion image super-resolution algorithm | |
CN112669248A (en) | Hyperspectral and panchromatic image fusion method based on CNN and Laplacian pyramid | |
CN108416736A (en) | Image super-resolution reconstruction method based on secondary anchor-point neighborhood regression | |
Dong et al. | Real-world remote sensing image super-resolution via a practical degradation model and a kernel-aware network | |
CN116757955A (en) | Multi-fusion comparison network based on full-dimensional dynamic convolution | |
CN113379606B (en) | Face super-resolution method based on pre-training generation model |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
WW01 | Invention patent application withdrawn after publication | ||
Application publication date: 20200522 |