CN113592718A - Mine image super-resolution reconstruction method and system based on multi-scale residual error network - Google Patents
- Publication number
- CN113592718A (application CN202110924404.XA)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06T3/4053—Scaling of whole images or parts thereof based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
- G06F18/253—Fusion techniques of extracted features
- G06N3/045—Neural network architectures: combinations of networks
- G06N3/08—Neural network learning methods
- G06T3/4046—Scaling of whole images or parts thereof using neural networks
Abstract
The invention relates to a mine image super-resolution reconstruction method and system based on a multi-scale residual error network. The system reconstructs a low-resolution image into a high-resolution image through an image super-resolution reconstruction model consisting of a shallow feature extraction module, a deep feature extraction module and a feature reconstruction module. The deep feature extraction module consists of m multi-scale residual attention groups, which combine multi-scale convolution, a convolutional attention mechanism and residual connections, and a feature fusion layer; the whole feature extraction module adopts a simplified dense network that passes the extracted shallow features and the deep features of different levels to the feature fusion layer for feature fusion. The super-resolution reconstruction method for mine images comprises the following steps: extracting shallow and deep features of the image, fusing the extracted shallow and deep features, and reconstructing the image. The invention uses feature information more effectively and effectively prevents the loss of image feature information.
Description
Technical Field
The invention relates to an image reconstruction method, in particular to a mine image super-resolution reconstruction method and a reconstruction system based on a multi-scale residual error network.
Background
Mine images visually present the coal mine scene and provide input for intelligent coal mine analysis such as mine monitoring, behavior recognition and personnel detection. However, limited by the performance of image acquisition equipment and the harsh underground environment, the collected images are of low resolution and poor visual quality, which reduces the accuracy of intelligent coal mine analysis. Image super-resolution reconstruction can recover a high-resolution image by exploiting prior knowledge of the characteristics of low-resolution images and the similarity or redundancy among images, so research on super-resolution reconstruction methods suited to mine images has important practical significance.
Among the many studies on super-resolution reconstruction methods, few address super-resolution reconstruction of mine images. In "Research on key technologies of super-resolution restoration of underground coal mine monitoring images" (China University of Mining and Technology (Beijing), 2012), Vanguang corrects image edges with an interpolation method based on partial differential equations and weakens additive Gaussian noise with a Wiener filter, proposing a pre-filtering-based interpolation algorithm that improves the detail information of mine images. "Research and implementation of an image super-resolution restoration method for underground coal mine monitoring systems" (Beijing Jiaotong University, 2016) by Lu Hui Jun proposes an improved neighbor-embedding algorithm to reconstruct underground video monitoring images and achieves good results. However, theory and experiments show that traditional reconstruction methods cannot surpass current deep-learning-based super-resolution methods, which adaptively learn the complex mapping from low-resolution to high-resolution images and have a certain robustness to noise. Current research tends to use deeper convolutional neural networks to improve performance, but blindly increasing the network depth does not effectively improve the network. Convolution kernels of different sizes extract features at different levels, and extracting features with kernels of different scales allows more image details to be learned, giving better results in subsequent image processing. An attention mechanism can strengthen the feature information with a high contribution to the reconstruction result and suppress the feature information with a low contribution. Therefore, research on mine image super-resolution reconstruction based on a multi-scale residual error network is vital for guaranteeing safe coal mining.
Current research tends to use deeper convolutional neural networks to improve performance but neglects to fully exploit the features of the input low-resolution image. As the network depth increases, features gradually disappear during transmission, and how to fully exploit them is crucial for reconstructing high-quality images. Most network models use a single-scale convolution kernel to extract feature information indiscriminately and pass it to deeper layers; treating the features extracted by earlier layers equally wastes them, prevents the high-frequency information that contributes most to the reconstruction effect from being fully learned, and keeps the deep convolutional network from reaching its full performance. Treating the feature information extracted by earlier layers indiscriminately also weakens the network's ability to reconstruct detail information. In addition, the feature details extracted by a single-scale convolution kernel are not rich enough, and some deeper features are ignored. Specifically, the problems are the following:
First, when extracting feature information, indiscriminate extraction wastes computing resources, so high-frequency information with a high contribution to the reconstruction effect cannot be sufficiently learned.
Second, the feature information extracted with a single-scale convolution kernel is monotonous, and rich feature information cannot be acquired.
Third, as the depth of the network increases, the image features gradually disappear during transmission.
Disclosure of Invention
In order to solve the above technical problems, the invention provides a mine image super-resolution reconstruction method and system based on a multi-scale residual error network.
In order to achieve the purpose, the invention is realized by the following technical scheme:
the invention relates to a mine image super-resolution reconstruction system based on a multi-scale residual error network, which reconstructs a low-resolution image into a high-resolution image through an image super-resolution reconstruction model, wherein the image super-resolution reconstruction model consists of a shallow feature extraction module, a deep feature extraction module and a feature reconstruction module, the whole feature extraction module adopts a simplified dense network to transmit features extracted by a shallow feature extraction layer and a plurality of m multi-scale residual error attention groups to a feature fusion layer for feature fusion, the output of the feature fusion layer is used as the input of the feature reconstruction module, and the feature reconstruction module consists of a convolution layer, a pixel reconstruction layer and a feature reconstruction layer and generates a super-resolution image.
Wherein the deep feature extraction module consists of a stack of m multi-scale residual attention groups (MSRAG) and a feature fusion layer, and the stacking process of the m groups is:

S_1 = H_MSRAG^1(S_0)
S_i = H_MSRAG^i(S_{i-1}), i = 2, 3, ..., m

where H_MSRAG^i(·) denotes the mapping function of the i-th MSRAG, S_i denotes the output of the i-th MSRAG, and S_0 denotes the output of the shallow feature extraction layer.
Wherein the feature fusion layer is formulated as

S_f = H_f([S_0, S_1, ..., S_m])

where H_f(·) denotes the mapping function of the feature fusion layer, [·] denotes concatenation, and S_f denotes the output of the feature fusion layer.
In a further improvement of the invention, the multi-scale residual attention group comprises n multi-scale attention modules (MSRA) and local residual learning. Each MSRA adopts four branches whose convolutional layers have kernel sizes of 1×1, 3×3, 5×5 and 7×7, respectively, and an attention module CBAM is added in each branch; each CBAM consists of a channel attention module and a spatial attention module. Adding the convolutional attention module CBAM to the deep convolutional neural network allocates more computing power to the information that contributes most to reconstruction performance, so the feature information of the image is used more effectively.
The invention relates to a mine image super-resolution reconstruction method based on a multi-scale residual error network, which comprises the following steps:
acquiring a low-resolution image;
inputting the obtained low-resolution image into a shallow feature extraction module to extract the shallow features of the image;
sequentially extracting deep features of the image by the m multi-scale residual error attention groups, and then transmitting the features extracted by the shallow feature extraction layer and the m multi-scale residual error attention groups to the feature fusion layer for feature fusion;
after feature fusion, the image is up-sampled and reconstructed by the image reconstruction module: the image is up-sampled with a sub-pixel convolution method and then reconstructed by a reconstruction layer. The image reconstruction process of the image reconstruction module is as follows:
S_FE = H_FE(S_f)
S_PX = H_PX(S_FE)
I_SR = H_RC(S_PX)

where S_f denotes the output of the feature fusion layer; H_FE(·), H_PX(·) and H_RC(·) denote the mapping functions of the first convolutional layer, the pixel reconstruction layer and the reconstruction layer, respectively; S_FE denotes the output of the convolutional layer, S_PX denotes the output of the pixel reconstruction layer, and I_SR denotes the reconstructed image.
The invention is further improved in that: the specific deep feature extraction and feature fusion comprises the following steps:
Step 3-1, sequentially extracting deep features of the image with the m multi-scale residual attention groups, wherein each multi-scale residual attention group consists of n multi-scale attention modules MSRA that are combined through skip connections;
Step 3-2, in step 3-1 each multi-scale attention module comprises four branches and adopts residual learning; the residual connection of the multi-scale attention module is described as

B_i = B_{i-1} + U_i

where B_{i-1} and B_i denote the input and output of the i-th MSRA, respectively, and U_i denotes the output of the third 1×1 convolutional layer in the MSRA. The use of local residual learning greatly reduces the computational complexity and, at the same time, improves the performance of the network.
Step 3-3, transmitting the features extracted by the shallow feature extraction layer and the m multi-scale residual attention groups to the feature fusion layer for fusion, wherein the feature fusion layer consists of a concatenation layer and a 1×1 convolutional layer and fuses the features of the shallow feature extraction layer and the m multi-scale residual attention groups. The whole feature extraction module adopts a simplified dense network: simplified dense skip connections are constructed around the whole module, and the feature fusion layer integrates the features extracted by the shallow feature extraction layer and the m multi-scale residual attention groups, which increases the richness and diversity of the image features.
The invention has the beneficial effects that:
Firstly, a convolutional attention module CBAM is added to the deep convolutional neural network, so that more computing power is allocated to the information that contributes most to reconstruction performance and the feature information of the image is used more effectively;
secondly, a novel multi-scale attention block MSRA is adopted; this module extracts image features at all scales by cross-connecting four convolution kernels of different sizes in series, which guarantees the diversity of the extracted features and makes the extracted image feature information richer;
thirdly, the invention provides a simple feature fusion structure that effectively fuses the features of the m MSRAGs and the shallow feature extraction layer and effectively prevents the loss of image features.
Drawings
FIG. 1 is a diagram of a super-resolution reconstruction system for mine images according to the present invention.
FIG. 2 is a flow chart of the super-resolution reconstruction system for mine images.
FIG. 3 is a block diagram of the multi-scale residual attention group MSRAG of the present invention.
Fig. 4 is a block diagram of a multi-scale attention module MSRA of the present invention.
FIG. 5 is a block diagram of the attention Module CBAM of the present invention.
Detailed Description
In the following description, for purposes of explanation, numerous implementation details are set forth in order to provide a thorough understanding of the embodiments of the invention. It should be understood, however, that these implementation details are not to be interpreted as limiting the invention. That is, in some embodiments of the invention, such implementation details are not necessary.
Part 1: Image shallow feature extraction
The final learning objective is to learn an end-to-end mapping function F between the input low-resolution image I_LR and the output super-resolution image I_HR. Given a training data set {I_LR^i, I_HR^i}, i = 1, 2, ..., N, the network parameters θ are obtained by solving

θ* = argmin_θ (1/N) Σ_{i=1}^{N} L_SR(F(I_LR^i; θ), I_HR^i)    (1)

where θ = {ω_1, ω_2, ω_3, ..., ω_m, b_1, b_2, b_3, ..., b_m} denotes the weights and biases of the m-layer neural network and L_SR is the loss function used to minimize the difference between the reconstructed image F(I_LR^i; θ) and the high-resolution image I_HR^i. The L1-norm loss function is sensitive to fluctuations in the data, effectively guides the updating of the model parameters, keeps the gradient from changing abruptly, and yields a reconstructed image of higher quality; it is therefore adopted during network model training. The L1-norm loss is

L_SR(θ) = (1/N) Σ_{i=1}^{N} ||F(I_LR^i; θ) - I_HR^i||_1    (2)
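For illustration, the following minimal PyTorch sketch shows how the L1 training objective above could be implemented; the model, optimizer and tensor shapes are placeholders chosen for the example and are not specified by the patent.

```python
import torch
import torch.nn as nn

def train_step(model: nn.Module, optimizer: torch.optim.Optimizer,
               lr_batch: torch.Tensor, hr_batch: torch.Tensor) -> float:
    """One optimisation step of the L1 (mean absolute error) objective."""
    criterion = nn.L1Loss()        # mean of || F(I_LR; theta) - I_HR ||_1 over the batch
    optimizer.zero_grad()
    sr_batch = model(lr_batch)     # F(I_LR; theta): reconstructed super-resolution image
    loss = criterion(sr_batch, hr_batch)
    loss.backward()                # gradients w.r.t. the weights and biases theta
    optimizer.step()
    return loss.item()

# Example with a dummy one-layer model and random tensors (shapes are illustrative only)
model = nn.Conv2d(3, 3, kernel_size=3, padding=1)
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_value = train_step(model, opt, torch.randn(2, 3, 48, 48), torch.randn(2, 3, 48, 48))
```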
the method realizes the super-resolution reconstruction of the image through three modules of shallow feature extraction, deep feature extraction and image reconstruction in the figure 1. To extract the shallow features of an image before extracting the deep features of the image, we extract the shallow features of the input low-resolution image with a pre-constructed shallow feature extraction module. The shallow layer feature extraction layer adopts a convolution layer with convolution kernel size of 3 multiplied by 3, and the extraction working principle is as follows:
S_0 = ω_3×3 * I_LR + b_0    (3)

where I_LR denotes the input low-resolution image, S_0 is the output of the shallow feature extraction layer, and ω_3×3 and b_0 denote the weight and bias of the convolutional layer, respectively.
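For illustration only, a minimal PyTorch sketch of such a shallow feature extraction layer follows; the input channel count and the number of feature maps (64) are assumed hyper-parameters, not values given in the text.

```python
import torch
import torch.nn as nn

class ShallowFeatureExtractor(nn.Module):
    """Single 3x3 convolution implementing S_0 = w_3x3 * I_LR + b_0 (formula (3))."""
    def __init__(self, in_channels: int = 3, num_features: int = 64):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, num_features, kernel_size=3, padding=1)

    def forward(self, i_lr: torch.Tensor) -> torch.Tensor:
        return self.conv(i_lr)   # S_0: shallow features of the low-resolution input

# Example: a 3-channel 48x48 low-resolution patch -> 64 shallow feature maps
s0 = ShallowFeatureExtractor()(torch.randn(1, 3, 48, 48))
```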
A second part: deep layer characteristic extraction module
The deep layer feature extraction module comprises m multi-scale residual error attention groups and a feature fusion layer, wherein deep layer features of the image are sequentially extracted by the m multi-scale residual error attention groups, and then the features extracted by the shallow layer feature extraction layer and the m multi-scale residual error attention groups are transmitted to the feature fusion layer for feature fusion.
The MSRAG in the deep feature extraction module of Fig. 1 is a multi-scale residual attention group. Simplified dense skip connections are constructed around the whole feature extraction module, and the feature fusion layer fuses the features extracted by the shallow feature extraction layer and by the m multi-scale residual attention groups, which increases the richness and diversity of the image features.
The whole feature extraction module adopts simplified dense connections: the output of the shallow feature extraction layer and of each multi-scale residual attention group is passed into the feature fusion layer by a single skip connection. The feature fusion layer consists of a concatenation layer and a 1×1 convolutional layer, and fuses the features of the shallow feature extraction layer and the m multi-scale residual attention groups.
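A possible PyTorch sketch of this kind of feature fusion layer (a concatenation followed by a 1×1 convolution) is given below; the channel width, the number of groups and the class name FeatureFusion are assumptions for illustration.

```python
import torch
import torch.nn as nn

class FeatureFusion(nn.Module):
    """Fuses S_0 and the outputs S_1..S_m of the m MSRAGs: S_f = H_f([S_0, S_1, ..., S_m])."""
    def __init__(self, num_features: int = 64, num_groups: int = 4):
        super().__init__()
        # 1x1 convolution reduces the (m + 1) * C concatenated channels back to C
        self.fuse = nn.Conv2d((num_groups + 1) * num_features, num_features, kernel_size=1)

    def forward(self, features):
        return self.fuse(torch.cat(features, dim=1))   # concatenation layer + 1x1 convolution

# Example: shallow features plus four group outputs, each with 64 channels
feats = [torch.randn(1, 64, 48, 48) for _ in range(5)]
s_f = FeatureFusion(num_features=64, num_groups=4)(feats)
```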
The m multi-scale residual attention group (MSRAG) stacking procedure is as follows:
S_1 = H_MSRAG^1(S_0)
S_i = H_MSRAG^i(S_{i-1}), i = 2, 3, ..., m

where H_MSRAG^i(·) denotes the mapping function of the i-th MSRAG, S_i denotes the output of the i-th MSRAG, and S_0 denotes the output of the shallow feature extraction layer.
The feature fusion layer is formulated as

S_f = H_f([S_0, S_1, ..., S_m])

where H_f(·) denotes the mapping function of the feature fusion layer, [·] denotes concatenation, and S_f denotes the output of the feature fusion layer. The MSRAG in Fig. 1 is composed of n MSRAs combined by skip connections, and its detailed structure is shown in Fig. 3: the multi-scale residual attention group MSRAG consists of n multi-scale attention modules MSRA. The whole feature extraction module adopts simplified dense connections, the output of the shallow feature extraction layer and of each multi-scale residual attention group being passed into the feature fusion layer by a single skip connection. The extraction process of the multi-scale residual attention group is expressed as:
B_{i,1} = H_MSRA^{i,1}(S_{i-1})
B_{i,j} = H_MSRA^{i,j}(B_{i,j-1}), j = 2, 3, ..., n
S_i = B_{i,n}

where H_MSRA^{i,j}(·) denotes the mapping function of the j-th MSRA, B_{i,j} denotes the output of the j-th MSRA, and S_i denotes the output of the i-th MSRAG.
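The following sketch illustrates one way such a group could be assembled in PyTorch. Because the internal four-branch MSRA block is only detailed further below, the block is passed in as a factory and a plain residual convolution is used as a stand-in; all names and hyper-parameters here are assumptions for illustration.

```python
import torch
import torch.nn as nn

class MSRAG(nn.Module):
    """Multi-scale residual attention group: a chain of n attention blocks."""
    def __init__(self, make_block, n: int = 4):
        super().__init__()
        self.blocks = nn.ModuleList([make_block() for _ in range(n)])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = x
        for block in self.blocks:   # B_{i,j} = H_MSRA^{i,j}(B_{i,j-1})
            out = block(out)
        return out                  # S_i: output of the group, passed on to the fusion layer

# Stand-in block (a plain residual 3x3 convolution) used only to make the sketch runnable;
# the actual design uses the four-branch MSRA module described below.
class PlainResidualBlock(nn.Module):
    def __init__(self, channels: int = 64):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, x):
        return x + self.conv(x)

group = MSRAG(lambda: PlainResidualBlock(64), n=4)
s_i = group(torch.randn(1, 64, 48, 48))
```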
In order to detect image features at different scales, the invention proposes a multi-scale attention block MSRA, the detailed structure of which is shown in fig. 4.
The module adopts four branches whose convolutional layers have kernel sizes of 1×1, 3×3, 5×5 and 7×7, respectively. First, the first convolutional layer of each branch extracts features of a different scale from the input; the four extracted multi-scale features are passed through an activation function and then concatenated. Next, features of different scales are extracted again, and a convolutional attention module CBAM further enhances the information that contributes strongly to the reconstruction result while suppressing the information with a low contribution. The learned multi-scale features are then fused through a concatenation layer and a convolutional layer, and finally residual learning is performed through a local skip connection.
Different from the previous work, the invention constructs a four-branch network, convolution layers of different branches use convolution kernels with different sizes, and in this way, information among the branches can be shared, so that image characteristics with different scales can be detected. This operation can be defined as:
P_k^i = σ(ω_k^1 * B_{i-1} + b_1), k ∈ {1×1, 3×3, 5×5, 7×7}
Q_k^i = F_CBAM(σ(ω_k^2 * [P_{1×1}^i, P_{3×3}^i, P_{5×5}^i, P_{7×7}^i] + b_2)), k ∈ {1×1, 3×3, 5×5, 7×7}
U_i = ω_{1×1}^3 * [Q_{1×1}^i, Q_{3×3}^i, Q_{5×5}^i, Q_{7×7}^i] + b_3

where ω_k^1, ω_k^2 and b_1, b_2 denote the weights and biases of the eight convolution kernels in the four branches (the subscript of a weight indicates the kernel size and the superscript indicates the layer it belongs to), ω_{1×1}^3 and b_3 denote the weight and bias of the final 1×1 convolutional layer, U_i denotes the output of the third 1×1 convolutional layer, σ(·) denotes the ReLU activation function, B_{i-1} denotes the output of the (i-1)-th MSRA, P_k^i and Q_k^i denote the outputs of the first activation layer of the four branches of the MSRA and the outputs of the CBAM, respectively, F_CBAM(·) denotes the mapping function of the convolutional attention module, * denotes convolution, and [·] denotes concatenation.
Local residual learning: to make the network more efficient, residual learning is adopted for each MSRA. Formally, the multi-scale attention module MSRA is described as

B_i = B_{i-1} + U_i

where B_{i-1} and B_i denote the input and output of the i-th MSRA, respectively, and U_i denotes the output of the third 1×1 convolutional layer within the MSRA. The use of local residual learning greatly reduces the computational complexity and, at the same time, improves the performance of the network.
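As a rough PyTorch sketch of the four-branch MSRA with local residual learning described above: the channel width and the exact wiring of the second-layer convolutions follow one plausible reading of the text, and the per-branch attention is left as a pluggable module (it would be the CBAM sketched in the next part); this is an illustrative sketch, not a definitive implementation.

```python
import torch
import torch.nn as nn

class MSRA(nn.Module):
    """Four-branch multi-scale attention block with local residual learning."""
    def __init__(self, channels: int = 64, attention=nn.Identity):
        super().__init__()
        kernel_sizes = (1, 3, 5, 7)
        # first-layer convolutions: one per branch, operating on the block input
        self.first = nn.ModuleList(
            [nn.Conv2d(channels, channels, k, padding=k // 2) for k in kernel_sizes])
        # second-layer convolutions: one per branch, operating on the concatenated branch outputs
        self.second = nn.ModuleList(
            [nn.Conv2d(4 * channels, channels, k, padding=k // 2) for k in kernel_sizes])
        self.attn = nn.ModuleList([attention() for _ in kernel_sizes])  # per-branch CBAM (stub here)
        self.relu = nn.ReLU(inplace=True)
        self.tail = nn.Conv2d(4 * channels, channels, kernel_size=1)    # final 1x1 fusion convolution

    def forward(self, x):
        p = [self.relu(conv(x)) for conv in self.first]                 # P_k: first-layer features
        cat1 = torch.cat(p, dim=1)                                      # concatenate the four scales
        q = [attn(self.relu(conv(cat1)))                                # Q_k: re-extracted + attended
             for conv, attn in zip(self.second, self.attn)]
        u = self.tail(torch.cat(q, dim=1))                              # U_i: fused multi-scale features
        return x + u                                                    # local residual learning

block = MSRA(64)
out = block(torch.randn(1, 64, 48, 48))
```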
The invention adds a simple but effective attention module CBAM in each branch of the MSRA; the CBAM is composed of a channel attention module and a spatial attention module. The attention module CBAM effectively strengthens the feature information with a high contribution to the reconstruction result and suppresses the feature information with a low contribution. Because the CBAM is a lightweight general-purpose module, the extra computation introduced by adding it to the image super-resolution model is almost negligible.
The detailed structure of the attention module CBAM is shown in fig. 5, and given an input feature map, the CBAM calculates attention weights along two dimensions of space and channel in turn, and then multiplies the attention weights of the channel attention module and the space attention module by the respective inputs to perform adaptive adjustment on the features. MLP in channel attention represents a multi-layered perceptron consisting of two 1 × 1 sized convolutional layers and one ReLU activation function layer. The principle of the attention module CBAM is as follows:
S_CA = S_RL · σ(H_MLP(H_AvgPool(S_RL)) + H_MLP(H_MaxPool(S_RL)))    (21)
S_SA = S_CA · σ(H_7×7([H_AvgPool(S_CA), H_MaxPool(S_CA)]))    (22)

where S_RL denotes the input of the CBAM, H_AvgPool(·) and H_MaxPool(·) denote average pooling and maximum pooling, respectively, H_MLP(·) denotes the mapping function of the multilayer perceptron MLP, σ(·) denotes the Sigmoid activation function, S_CA and S_SA denote the outputs of channel attention and spatial attention, respectively, and H_7×7(·) denotes the mapping function of a convolutional layer with kernel size 7×7.
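A minimal PyTorch sketch of a CBAM block matching formulas (21) and (22) could look as follows; the channel-reduction ratio of the shared MLP is an assumed hyper-parameter.

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    """Channel attention followed by spatial attention, as in formulas (21) and (22)."""
    def __init__(self, channels: int = 64, reduction: int = 16):
        super().__init__()
        # Shared MLP: two 1x1 convolutions with a ReLU, applied to avg- and max-pooled descriptors
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1))
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)   # H_7x7 on [avg, max] maps
        self.avg_pool = nn.AdaptiveAvgPool2d(1)
        self.max_pool = nn.AdaptiveMaxPool2d(1)

    def forward(self, x):
        # Channel attention: S_CA = x * sigmoid(MLP(AvgPool(x)) + MLP(MaxPool(x)))
        ca = torch.sigmoid(self.mlp(self.avg_pool(x)) + self.mlp(self.max_pool(x)))
        x = x * ca
        # Spatial attention: S_SA = x * sigmoid(conv7x7([mean_c(x), max_c(x)]))
        sa_in = torch.cat([x.mean(dim=1, keepdim=True),
                           x.max(dim=1, keepdim=True).values], dim=1)
        return x * torch.sigmoid(self.spatial(sa_in))

attn = CBAM(64)
y = attn(torch.randn(1, 64, 48, 48))
```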
Part 3: Image reconstruction
The network model of the invention does not need to preprocess the image, and the characteristic extraction process is directly carried out on the input low-resolution image, thereby effectively reducing the calculation amount of the algorithm. After the feature extraction is finished, the image is up-sampled and reconstructed through an image reconstruction module. The method adopts a sub-pixel convolution method to perform image up-sampling, and then performs image reconstruction through a reconstruction layer. As shown in fig. 1, the feature reconstruction module of the present invention is composed of a convolution layer, a pixel reconstruction layer, and a reconstruction layer, and generates a super-resolution image, and the detailed principle of the image reconstruction module is as follows:
S_FE = H_FE(S_f)    (23)
S_PX = H_PX(S_FE)    (24)
I_SR = H_RC(S_PX)    (25)

where S_f denotes the output of the feature fusion layer; H_FE(·), H_PX(·) and H_RC(·) denote the mapping functions of the convolutional layer, the pixel reconstruction layer and the reconstruction layer, respectively; S_FE denotes the output of the convolutional layer, S_PX denotes the output of the pixel reconstruction layer, and I_SR denotes the reconstructed image.
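A minimal PyTorch sketch of such a reconstruction module, using sub-pixel (pixel-shuffle) up-sampling as in formulas (23)-(25), is shown below; the scale factor and channel counts are assumed.

```python
import torch
import torch.nn as nn

class Reconstruction(nn.Module):
    """Conv -> pixel shuffle -> reconstruction conv: S_FE, S_PX and I_SR of formulas (23)-(25)."""
    def __init__(self, num_features: int = 64, scale: int = 4, out_channels: int = 3):
        super().__init__()
        self.head = nn.Conv2d(num_features, num_features * scale ** 2, kernel_size=3, padding=1)
        self.shuffle = nn.PixelShuffle(scale)            # sub-pixel convolution up-sampling
        self.tail = nn.Conv2d(num_features, out_channels, kernel_size=3, padding=1)

    def forward(self, s_f):
        s_fe = self.head(s_f)        # S_FE = H_FE(S_f)
        s_px = self.shuffle(s_fe)    # S_PX = H_PX(S_FE)
        return self.tail(s_px)       # I_SR = H_RC(S_PX)

sr = Reconstruction()(torch.randn(1, 64, 48, 48))   # -> shape (1, 3, 192, 192)
```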
The super-resolution reconstruction method of the mine image comprises the following steps:
Step 1, acquiring a low-resolution image;
Step 2, inputting the low-resolution image obtained in step 1 into the shallow feature extraction module for shallow image feature extraction, the shallow feature extraction layer consisting of one convolutional layer;
Step 3, transmitting the features extracted in step 2 to the deep feature extraction module for deep feature extraction, and fusing the features of the shallow feature extraction layer and the m multi-scale residual attention groups, which increases the richness and diversity of the image features;
the specific deep feature extraction and feature fusion comprises the following steps:
Step 3-1, sequentially extracting deep features of the image with the m multi-scale residual attention groups, wherein each multi-scale residual attention group consists of n multi-scale attention modules MSRA that are combined through skip connections;
Step 3-2, in step 3-1 each multi-scale attention module comprises four branches and adopts residual learning; the residual connection of the multi-scale attention module is described as

B_i = B_{i-1} + U_i

where B_{i-1} and B_i denote the input and output of the i-th MSRA, respectively, and U_i denotes the output of the third 1×1 convolutional layer in the MSRA. The use of local residual learning greatly reduces the computational complexity and, at the same time, improves the performance of the network.
Step 3-3, the whole feature extraction module adopts a simplified dense network to transmit the features extracted by the shallow feature extraction layer and the m multi-scale residual attention groups to the feature fusion layer for fusion; simplified dense skip connections are constructed around the whole module, and the feature fusion layer fuses the features extracted by the shallow feature extraction layer and the m multi-scale residual attention groups, which increases the richness and diversity of the image features.
Step 4, after the features of step 3 are fused, the image is up-sampled and reconstructed by the image reconstruction module; specifically, the image is up-sampled with a sub-pixel convolution method and then reconstructed by a reconstruction layer.
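Putting the steps together, the following self-contained sketch wires up the whole pipeline of steps 1-4 in PyTorch. A plain residual block stands in for the MSRAG/MSRA modules described above, and m, the channel width and the scale factor are assumed; it is a structural illustration rather than the patented model.

```python
import torch
import torch.nn as nn

class MineSRNet(nn.Module):
    """Shallow features -> m groups -> dense skips into the fusion layer -> sub-pixel reconstruction."""
    def __init__(self, m: int = 4, channels: int = 64, scale: int = 4):
        super().__init__()
        self.shallow = nn.Conv2d(3, channels, 3, padding=1)                  # step 2
        self.groups = nn.ModuleList([                                        # step 3-1 (stand-ins)
            nn.Sequential(nn.Conv2d(channels, channels, 3, padding=1),
                          nn.ReLU(inplace=True),
                          nn.Conv2d(channels, channels, 3, padding=1)) for _ in range(m)])
        self.fuse = nn.Conv2d((m + 1) * channels, channels, 1)               # step 3-3 fusion
        self.up = nn.Sequential(nn.Conv2d(channels, channels * scale ** 2, 3, padding=1),
                                nn.PixelShuffle(scale))                      # step 4 up-sampling
        self.rec = nn.Conv2d(channels, 3, 3, padding=1)                      # reconstruction layer

    def forward(self, i_lr):
        s0 = self.shallow(i_lr)
        skips, s = [s0], s0
        for group in self.groups:        # each output reaches the fusion layer by one skip connection
            s = s + group(s)             # residual stand-in for an MSRAG
            skips.append(s)
        s_f = self.fuse(torch.cat(skips, dim=1))
        return self.rec(self.up(s_f))

i_sr = MineSRNet()(torch.randn(1, 3, 48, 48))    # -> shape (1, 3, 192, 192)
```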
Firstly, to address the problem that indiscriminate extraction of feature information wastes computing resources, the invention does not simply add a channel attention mechanism containing only average pooling to the residual attention block; instead, it adds a channel attention module that considers both average pooling and maximum pooling, and adopts a two-dimensional channel-and-spatial attention mechanism, so that information with different contributions receives different degrees of training and learning. The utilization rate of feature information in image super-resolution reconstruction is therefore higher, computing power is allocated more reasonably, and the feature information is used more effectively.
Secondly, aiming at the problem that the convolution kernel with a single scale cannot acquire rich characteristic information, the invention does not adopt the traditional convolution kernel with a single scale to extract the characteristic information, but provides a new multi-scale attention block MSRA.
Thirdly, to address the problem that image features gradually disappear during transmission, the invention does not use a traditional long skip connection to realize global residual learning; instead it provides a simple feature fusion structure in which n multi-scale attention modules form a multi-scale residual attention group, and the feature fusion structure fuses the features of the m MSRAGs and the shallow feature extraction layer, effectively preventing the loss of image feature information.
The above description is only an embodiment of the present invention, and is not intended to limit the present invention. Various modifications and alterations to this invention will become apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention should be included in the scope of the claims of the present invention.
Claims (10)
1. A mine image super-resolution reconstruction method based on a multi-scale residual error network, characterized in that the super-resolution reconstruction method comprises the following steps:
step 1, acquiring a low-resolution image;
step 2, inputting the low-resolution image obtained in the step 1 into a shallow feature extraction module for image shallow feature extraction, wherein the shallow feature extraction layer is composed of a convolution layer, and the working principle of feature extraction is as follows:
S_0 = ω_3×3 * I_LR + b_0

where I_LR denotes the input low-resolution image, S_0 is the output of the shallow feature extraction layer, and ω_3×3 and b_0 denote the weight and bias of the convolutional layer, respectively;
step 3, the features extracted in the step 2 are transmitted to a deep feature extraction module for deep feature extraction, and features extracted by a shallow feature extraction layer and m multi-scale residual error attention groups are fused;
step 4, after the features in step 3 are fused, up-sampling and reconstructing the image by an image reconstruction module, the working principle of the image reconstruction module being as follows:
S_FE = H_FE(S_f)
S_PX = H_PX(S_FE)
I_SR = H_RC(S_PX)

where S_f denotes the output of the feature fusion layer; H_FE(·), H_PX(·) and H_RC(·) denote the mapping functions of the convolutional layer, the pixel reconstruction layer and the reconstruction layer, respectively; S_FE denotes the output of the convolutional layer, S_PX denotes the output of the pixel reconstruction layer, and I_SR denotes the reconstructed image.
2. The mine image super-resolution reconstruction method based on the multi-scale residual error network as claimed in claim 1, wherein: the specific feature fusion of the step 3 comprises the following steps:
step 3-1, sequentially extracting deep features of the image with the m multi-scale residual attention groups, wherein each multi-scale residual attention group consists of n multi-scale attention modules MSRA that are combined through skip connections;
step 3-2, in step 3-1 each multi-scale attention module comprises four branches and adopts residual learning; the residual connection of the multi-scale attention module is described as

B_i = B_{i-1} + U_i

where B_{i-1} and B_i denote the input and output of the i-th MSRA, respectively, U_i denotes the output of the third 1×1 convolutional layer within the MSRA, and MSRA denotes a multi-scale attention module; the use of local residual learning greatly reduces the computational complexity and, at the same time, improves the performance of the network;
and step 3-3, transmitting the features extracted by the shallow feature extraction layer and the m multi-scale residual attention groups to the feature fusion layer for fusion, wherein the feature fusion layer consists of a concatenation layer and a 1×1 convolutional layer.
3. The mine image super-resolution reconstruction method based on the multi-scale residual error network as claimed in claim 2, wherein: in the step 3-1, the multi-scale residual attention group is a stack of n multi-scale attention modules MSRA combined through skip connections, and the feature extraction process of the multi-scale residual attention group is expressed as:

B_{i,1} = H_MSRA^{i,1}(S_{i-1})
B_{i,j} = H_MSRA^{i,j}(B_{i,j-1}), j = 2, 3, ..., n
S_i = B_{i,n}

where H_MSRA^{i,j}(·) denotes the mapping function of the j-th MSRA, B_{i,j} denotes the output of the j-th MSRA, and S_i denotes the output of the i-th MSRAG.
4. A mine image super-resolution reconstruction system based on a multi-scale residual error network, characterized in that: the system reconstructs a low-resolution image into a high-resolution image through an image super-resolution reconstruction model, wherein the image super-resolution reconstruction model consists of a shallow feature extraction module, a deep feature extraction module and a feature reconstruction module.
5. The mine image super-resolution reconstruction system based on the multi-scale residual error network of claim 4, wherein: the deep feature extraction module consists of a stack of m multi-scale residual attention groups and a feature fusion layer.
6. The mine image super-resolution reconstruction system based on the multi-scale residual error network of claim 5, wherein: the stacking procedure of the m multi-scale residual attention groups is as follows:

S_1 = H_MSRAG^1(S_0)
S_i = H_MSRAG^i(S_{i-1}), i = 2, 3, ..., m

where H_MSRAG^i(·) denotes the mapping function of the i-th MSRAG, S_i denotes the output of the i-th MSRAG, and S_0 denotes the output of the shallow feature extraction layer.
7. The mine image super-resolution reconstruction system based on the multi-scale residual error network of claim 5, wherein: the feature fusion layer is represented as follows:

S_f = H_f([S_0, S_1, ..., S_m])

where H_f(·) denotes the mapping function of the feature fusion layer, [·] denotes concatenation, and S_f denotes the output of the feature fusion layer.
8. The mine image super-resolution reconstruction system based on the multi-scale residual error network of claim 5, wherein: the multi-scale residual attention group comprises n multi-scale attention modules and local residual learning; each multi-scale attention module adopts four branches whose convolution kernels have different sizes, an attention module is added in each branch of the multi-scale attention module, and the attention module consists of a channel attention module and a spatial attention module; the principle of the multi-scale attention module is defined as follows:
P_k^i = σ(ω_k^1 * B_{i-1} + b_1), k ∈ {1×1, 3×3, 5×5, 7×7}
Q_k^i = F_CBAM(σ(ω_k^2 * [P_{1×1}^i, P_{3×3}^i, P_{5×5}^i, P_{7×7}^i] + b_2)), k ∈ {1×1, 3×3, 5×5, 7×7}
U_i = ω_{1×1}^3 * [Q_{1×1}^i, Q_{3×3}^i, Q_{5×5}^i, Q_{7×7}^i] + b_3

where ω_k^1, ω_k^2 and b_1, b_2 denote the weights and biases of the eight convolution kernels in the four branches (the subscript of a weight indicates the kernel size and the superscript indicates the layer it belongs to), ω_{1×1}^3 and b_3 denote the weight and bias of the final 1×1 convolutional layer, U_i denotes the output of the third 1×1 convolutional layer, σ(·) denotes the ReLU activation function, B_{i-1} denotes the output of the (i-1)-th MSRA, P_k^i and Q_k^i denote the outputs of the first activation layer of the four branches of the MSRA and the outputs of the CBAM, respectively, F_CBAM(·) denotes the mapping function of the convolutional attention module, * denotes convolution, and [·] denotes concatenation.
9. The mine image super-resolution reconstruction system based on the multi-scale residual error network of claim 8, wherein: the principle of the attention module is as follows:
S_CA = S_RL · σ(H_MLP(H_AvgPool(S_RL)) + H_MLP(H_MaxPool(S_RL)))
S_SA = S_CA · σ(H_7×7([H_AvgPool(S_CA), H_MaxPool(S_CA)]))

where S_RL denotes the input of the CBAM, H_AvgPool(·) and H_MaxPool(·) denote average pooling and maximum pooling, respectively, H_MLP(·) denotes the mapping function of the multilayer perceptron MLP, σ(·) denotes the Sigmoid activation function, S_CA and S_SA denote the outputs of channel attention and spatial attention, respectively, and H_7×7(·) denotes the mapping function of a convolutional layer with kernel size 7×7.
10. The mine image super-resolution reconstruction system based on the multi-scale residual error network of claim 4, wherein: the feature reconstruction module is composed of a convolution layer, a pixel reconstruction layer and a feature reconstruction layer and generates a super-resolution image.
Priority and Publication Data
- Application number: CN202110924404.XA (CN), filed 2021-08-12
- Publication number: CN113592718A, published 2021-11-02
- Family ID: 78257458
- Legal status: pending