CN111709882A - Super-resolution fusion calculation method based on sub-pixel convolution and feature segmentation - Google Patents

Super-resolution fusion calculation method based on sub-pixel convolution and feature segmentation

Info

Publication number
CN111709882A
CN111709882A, CN202010780609.0A
Authority
CN
China
Prior art keywords
data
resolution
image
network
convolution
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010780609.0A
Other languages
Chinese (zh)
Other versions
CN111709882B (en)
Inventor
Zhao Zhuang (赵壮)
Han Jing (韩静)
Bai Lianfa (柏连发)
Zhang Yi (张毅)
Qi Haocun (戚浩存)
Zhang Shu (张姝)
Luo Jun (罗隽)
Xie Hui (谢辉)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Science and Technology
Original Assignee
Nanjing University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Science and Technology
Priority to CN202010780609.0A priority Critical patent/CN111709882B/en
Publication of CN111709882A publication Critical patent/CN111709882A/en
Application granted granted Critical
Publication of CN111709882B publication Critical patent/CN111709882B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformation in the plane of the image
    • G06T3/40Scaling the whole image or part thereof
    • G06T3/4053Super resolution, i.e. output image resolution higher than sensor resolution
    • G06T3/4061Super resolution, i.e. output image resolution higher than sensor resolution by injecting details from a different spectral band
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformation in the plane of the image
    • G06T3/40Scaling the whole image or part thereof
    • G06T3/4046Scaling the whole image or part thereof using neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to a super-resolution fusion calculation method based on sub-pixel convolution and feature segmentation, belonging to the technical field of intelligent processing and analysis of spectral data. First, the low-image-resolution hyperspectral data enters the network for data processing; then the high-image-resolution multispectral data enters the network, is fused with the low-image-resolution hyperspectral data, and the fused result is output; finally, the fused output is further fused with the image super-resolution reconstruction result of the hyperspectral data. The non-bottleneck-1D structure used in the network decomposes an original 3 × 3 convolution kernel into a pair of 1D convolution kernels, reducing the number of convolution parameters. With this design, the network can learn the transformation matrix information during fusion, improving the accuracy of spectral-dimension reconstruction and the generalization capability of the network; the overall calculation speed is increased and the accuracy is high.

Description

Super-resolution fusion calculation method based on sub-pixel convolution and feature segmentation
Technical Field
The invention belongs to the technical field of intelligent processing and analysis of spectral data, and particularly relates to a super-resolution fusion calculation method based on sub-pixel convolution and feature segmentation.
Background
Spectra are studied for transient events such as explosions, shock waves, high-voltage discharge, and fluorescence. In these cases, however, conventional single-slit spectrometers cannot acquire high signal-to-noise ratio (SNR) spectra, because the transient processes limit the available exposure time.
In the field of spectral imaging, achieving fast, highly sensitive, high-resolution hyperspectral imaging has long been a research focus. However, direct acquisition methods struggle to acquire data quickly because of the large data volume of the spectral data cube. Although spectral imaging based on compressed sensing can achieve fast imaging, it still suffers from insufficient spectral resolution or a trade-off between imaging resolution and acquisition time.
Because low-image-resolution hyperspectral data (LrHS) and high-image-resolution multispectral data (HrMS) are easy to acquire, a super-resolution fusion calculation method based on sub-pixel convolution and feature segmentation is proposed to address these problems in compressed-sensing spectral imaging.
Disclosure of Invention
The specific technical scheme of the super-resolution fusion calculation method based on sub-pixel convolution and feature segmentation is as follows:
the method comprises the following steps: and (4) entering the low-image-resolution high-spectral data into a network for data processing.
Step two: and performing data preprocessing on the high-image-resolution multispectral data, including channel adjustment and adjustment fusion.
Step three: and the high-image-resolution multispectral data enters a network to be fused with the low-image-resolution multispectral data, and is output.
Step four: and further fusing the fused output and the hyperspectral data image super-resolution reconstruction result. By the structure, the network can learn the transformation matrix information during fusion, and the accuracy of network spectrum dimension reconstruction and the network generalization capability are improved.
Further, in step one: after entering the network, the low-image-resolution hyperspectral data enters a sub-network based on an encoder-decoder structure. In the encoder, the data is down-sampled through pooling layers, and the convolution layers are connected through BatchNorm normalization layers and ReLU activation functions; in the decoder, deconvolution layers are used for up-sampling. A dropout operation is added in each module, randomly discarding a portion of the neurons to reduce the co-adaptation between hidden nodes.
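The dropout operation described above can be sketched as follows; this is a minimal NumPy illustration, and the inverted-dropout scaling and the function name are our assumptions, not the patent's:

```python
import numpy as np

def dropout(x, p, rng):
    # zero each activation with probability p; scale survivors by 1/(1-p)
    # so the expected activation magnitude is unchanged at training time
    mask = rng.random(x.shape) >= p
    return x * mask / (1.0 - p)

rng = np.random.default_rng(0)
x = np.ones(10000)
y = dropout(x, 0.5, rng)
print((y == 0).mean(), y.mean())   # roughly half dropped, mean stays near 1.0
```

At inference time no neurons are dropped, which is why the survivors are scaled during training rather than the outputs being rescaled afterwards.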
Furthermore, the convolution layers all use the non-bottleneck-1D structure: each convolution kernel is a pair of decomposed kernel vectors of sizes (3 × 1) and (1 × 3), and the convolution layers between deconvolution layers also use the non-bottleneck-1D structure. The deconvolution layers in the decoder perform up-sampling while halving the number of feature channels. In the convolution layer at the back end of the decoder, the network raises the number of feature channels again: if the image-resolution upscaling factor is r, the number of channels is raised to r² times the original, for use by the back-end sub-pixel convolution layer. The decoder is followed by the sub-pixel convolution layer, which raises the image resolution to match the high-image-resolution data.
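As a rough illustration of the parameter saving from the non-bottleneck-1D decomposition, assuming equal input and output channel counts and ignoring biases (the helper names below are ours):

```python
def params_3x3(c_in, c_out):
    # one standard 3x3 convolution: c_in * c_out * 3 * 3 weights (biases ignored)
    return c_in * c_out * 9

def params_non_bottleneck_1d(c_in, c_out):
    # decomposed pair: a 3x1 convolution followed by a 1x3 convolution
    return c_in * c_out * 3 + c_out * c_out * 3

for c in (16, 64, 128):
    full = params_3x3(c, c)
    pair = params_non_bottleneck_1d(c, c)
    print(f"{c:>3} channels: {full} -> {pair} weights")
```

With equal channel counts the pair needs 6c² weights against 9c² for the full kernel, a one-third reduction per layer.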
Further, in step two, data preprocessing is performed on the high-image-resolution multispectral data, namely adjustment fusion and channel adjustment.
Further, in step three, the high-image-resolution multispectral data enters the network and its dimension is raised through a convolution layer with a 1 × 1 kernel until its number of spectral channels equals that of the low-image-resolution hyperspectral data. A concat operation is then performed between the channel-raised data and the low-image-resolution hyperspectral data up-sampled to the same image resolution, so that the low-image-resolution hyperspectral data constrains the channel-raised data in the spectral dimension. Finally, the data is fused and features are extracted by a convolution layer with a 3 × 3 kernel, and the dimension is then reduced to the channel count of the low-image-resolution hyperspectral data by a 1 × 1 convolution layer. The outputs of the two modules are fused through a fusion module consisting of a 3 × 3 convolution layer and a 1 × 1 convolution layer, further fused with and constrained by the up-sampled low-image-resolution hyperspectral data, and the final network output completes the fusion task. During fusion, the matrices are concatenated using concat operations rather than matrix addition or subtraction so that more information is retained; the information is then extracted through additional convolution layers and reduced to the output scale.
Further, in the output result of step four, the loss function is:

Loss = (1/N) Σ_{i=1}^{N} [ α·F(Y_i, Ŷ_i) + β·F(X_i, D(Ŷ_i)) ]   (1)

In formula (1), N is the total number of samples, α and β are weight coefficients, Y is the high-image-resolution hyperspectral Label data, Ŷ is the output result, X is the low-image-resolution hyperspectral data, D(Ŷ) is the down-sampled output result, and F denotes the MSE function.
The invention has the beneficial effects that:
For lightweight networks performing spectral image reconstruction, the information loss in a bottleneck structure can be harmful, so the non-bottleneck-1D structure is used in the network herein. This structure decomposes the original 3 × 3 convolution kernel into a pair of 1D convolution kernels, reducing the number of convolution parameters. A traditional super-resolution reconstruction network must up-sample the low-resolution image to the same size as the high-resolution image and then learn the mapping relation; this requires the network to extract features by convolution on the high-resolution image, increasing computational complexity. Therefore, sub-pixel convolution is introduced into the super-resolution reconstruction part of the network, and features are extracted directly from the low-resolution image, improving the running speed. This is equivalent to merging the interpolation-amplification process into the feature-extraction convolution layers and letting the network learn it autonomously; because the convolution kernels extract features in the low-resolution image, the computation is greatly reduced, so the sub-pixel-convolution-based network is more efficient. In addition, eliminating the interpolation step means the network can learn the information useful for super-resolution reconstruction more directly, without interference from interpolated data, and therefore learns the low-to-high-resolution mapping better and obtains more accurate results.
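The sub-pixel convolution rearrangement described above (often called pixel shuffle) can be sketched in NumPy; this illustrates only the channel-to-space rearrangement step, not the patent's full network, and the channel ordering is an assumption:

```python
import numpy as np

def pixel_shuffle(x, r):
    """Rearrange an (H, W, C*r*r) feature map into (H*r, W*r, C).

    The r*r channel groups produced by the last convolution become the
    r x r sub-pixel blocks of the upscaled image, so the upscaling is
    learned by the convolution instead of done by interpolation.
    """
    h, w, crr = x.shape
    c = crr // (r * r)
    x = x.reshape(h, w, c, r, r)     # split channels into (c, r, r)
    x = x.transpose(0, 3, 1, 4, 2)   # interleave: (h, r, w, r, c)
    return x.reshape(h * r, w * r, c)

lr = np.random.rand(64, 64, 31 * 4 * 4)   # r = 4, 31 spectral channels
hr = pixel_shuffle(lr, 4)
print(hr.shape)
```

Since the operation is a pure permutation of values, no pixel information is created or lost; only its layout changes.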
Drawings
FIG. 1 is a schematic structural diagram of a fusion module;
FIG. 2 is a diagram of a super-resolution fusion network of spectral images;
FIG. 3 shows the results of the comparison experiments: (a) low-resolution image; (b) high-resolution image; (c) result of the PCA algorithm; (d) result of the PCA & Wavelet algorithm; (e) result of the SuperResPALM algorithm; (f) result of the 3D-CNN; (g) result of the proposed network;
FIG. 4 shows test results in different scenes: (a) low-resolution image; (b) high-resolution image; (c) result of the proposed network;
FIG. 5 is a schematic diagram of location selection in test data;
FIG. 6 is a comparison graph of the spectral reconstruction results of each point.
Detailed Description
The network structure built by this method is shown in FIG. 2; the two data paths at the network input are the high-image-resolution multispectral data (HrMS) and the low-image-resolution hyperspectral data (LrHS). The network as a whole is divided into a low-resolution image super-resolution reconstruction module and a high-resolution image spectral-constraint fusion module. After the low-image-resolution hyperspectral data is input into the network, it first enters a sub-network based on an encoder-decoder structure. In the encoder, the data is down-sampled through pooling layers, while the number of feature channels doubles as the network depth increases. The convolution layers all use the non-bottleneck-1D structure, with each kernel decomposed into a pair of (3 × 1) and (1 × 3) kernel vectors, and the convolution layers are connected through BatchNorm normalization layers and ReLU activation functions. In the decoder, deconvolution layers are used for up-sampling, halving the number of feature channels; the deconvolution layers also use the non-bottleneck-1D structure.
A dropout operation is added after each module, randomly discarding a portion of the neurons to reduce the co-adaptation between hidden-layer nodes and thereby mitigate overfitting.
Different from the traditional encoder-decoder structure, in the back-end convolution layer of the decoder the network raises the number of feature channels again: if the image-resolution upscaling factor is r, the number of channels is raised to r² times the original, for use by the back-end sub-pixel convolution layer.
Finally, the sub-pixel convolution layer is connected after the decoder, raising the image resolution to match the high-image-resolution data.
The encoder-decoder structure allows the network to learn deeper features while reducing network parameters, further improving the running speed.
Meanwhile, the high-image-resolution multispectral data enters the network and is raised, through a convolution layer with a 1 × 1 kernel, to the same number of spectral channels as the low-image-resolution hyperspectral data. A concat operation is then performed between the channel-raised data and the hyperspectral data up-sampled to the same image resolution, so that the low-image-resolution hyperspectral data constrains the channel-raised data in the spectral dimension. The data is finally fused and features are extracted by a 3 × 3 convolution layer, and the dimension is reduced to the channel count of the low-image-resolution hyperspectral data by a 1 × 1 convolution layer. At this point, the outputs of the two modules are fused through a fusion module consisting of a 3 × 3 convolution layer and a 1 × 1 convolution layer, further fused with and constrained by the up-sampled low-image-resolution hyperspectral data, and finally the network output is obtained, completing the fusion task.
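The concat-then-convolve fusion described above can be sketched in NumPy, modeling a 1 × 1 convolution as a per-pixel matmul over channels; the toy sizes and random weights are assumptions for illustration only:

```python
import numpy as np

h = w = 8          # toy spatial size
s = 31             # spectral channel count

branch_a = np.random.rand(h, w, s)   # super-resolved LrHS branch output
branch_b = np.random.rand(h, w, s)   # spectrally constrained HrMS branch output

# concat keeps both feature sets intact instead of summing them away
fused = np.concatenate([branch_a, branch_b], axis=-1)   # (h, w, 2s)

# a 1x1 convolution is a per-pixel linear map over channels: model it
# as a matmul with a (2s, s) weight matrix reducing back to s channels
w1x1 = np.random.rand(2 * s, s) / (2 * s)
out = fused @ w1x1                                      # (h, w, s)
print(fused.shape, out.shape)
```

This mirrors the design choice stated in the text: concatenation preserves both operands for the following convolutions, whereas addition or subtraction would collapse them immediately.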
In the network output LOSS calculation module, considering the accuracy of the spectral dimension, an MSE function over the three-dimensional spectral data is used for the loss calculation: one loss term is computed between the network output and the high-spectral-resolution, high-image-resolution Label data. Meanwhile, considering the spectral-fusion mathematical model, the high-image-resolution hyperspectral data output by the network is down-sampled to the same size as the low-image-resolution LrHS input data, and a second loss term is computed against that input. The two terms are combined as the overall network loss function, whose expression is as follows:
Loss = (1/N) Σ_{i=1}^{N} [ α·F(Y_i, Ŷ_i) + β·F(X_i, D(Ŷ_i)) ]   (1)

In formula (1), N is the total number of samples, α and β are weight coefficients, Y is the high-image-resolution hyperspectral Label data, Ŷ is the output result, X is the low-image-resolution hyperspectral data, D(Ŷ) is the down-sampled output result, and F denotes the MSE function.
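A minimal sketch of this two-term loss, using a block-average as a stand-in for the down-sampling operator D; the weight values, scale factor, and random data are assumptions for illustration:

```python
import numpy as np

def mse(a, b):
    # F in formula (1): mean squared error over the whole spectral cube
    return ((a - b) ** 2).mean()

def downsample(x, r):
    # simple r x r block-average as a stand-in for the spatial degradation D
    h, w, c = x.shape
    return x.reshape(h // r, r, w // r, r, c).mean(axis=(1, 3))

alpha, beta, r = 1.0, 0.1, 4            # weights and scale are assumptions
label = np.random.rand(256, 256, 31)    # Y: HrHS reference (Label) data
output = np.random.rand(256, 256, 31)   # Y-hat: network output
lrhs = downsample(label, r)             # X: LrHS input

loss = alpha * mse(label, output) + beta * mse(lrhs, downsample(output, r))
print(loss)
```

The second term constrains the output to remain consistent with the LrHS input after degradation, which is what lets the loss be computed even when a perfect HrHS reference is unavailable at test time.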
Example 1:
As shown in FIG. 2, A is the low-image-resolution hyperspectral data of size h × w × s = 64 × 64 × 31 (length × width × number of spectral bands);
B is the high-image-resolution low-spectral data of size H × W × S = 256 × 256 × 3;
the low image resolution hyperspectral data 64 x 31 is convolved with the sub-pixels through the network erf-net to obtain data 1 of 256 x 31,
the low image resolution hyperspectral data 64 x 31 are directly interpolated to obtain data 3 and 3' of 256 x 31,
high image resolution low spectral data 256 x 3 was convolved 1 x 1 to give data 2 of 256 x 31,
2 and 3' concat (1) were further convolved with 3 x 3 and 1 x 1 to give 256 x 31 data 4,
4 and 1 concat (2) and then 3 x 3 and 1 x 1 are convoluted to obtain 256 x 31 data 5,
and 5 and 3 concat (3) are further convoluted by 3 × 3 and 1 × 1 to obtain 256 × 31 data, namely, the result is obtained.
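The shape flow of Example 1 can be checked with a small NumPy sketch; nearest-neighbour interpolation and the random weight matrices below stand in for the network's learned layers and are our assumptions:

```python
import numpy as np

r = 4
lrhs = np.random.rand(64, 64, 31)      # A: low-image-resolution hyperspectral
hrms = np.random.rand(256, 256, 3)     # B: high-image-resolution multispectral

# data 3 / 3': direct nearest-neighbour interpolation of A up to 256 x 256
data3 = lrhs.repeat(r, axis=0).repeat(r, axis=1)

# data 2: lift B to 31 spectral channels with a stand-in 1x1 convolution
w_up = np.random.rand(3, 31)
data2 = hrms @ w_up

# data 4: concat (1) of data 2 and data 3', then channel reduction to 31
w_mix = np.random.rand(62, 31) / 62
data4 = np.concatenate([data2, data3], axis=-1) @ w_mix

print(data3.shape, data2.shape, data4.shape)
```

The same concat-then-reduce pattern repeats for concat (2) and concat (3), so every intermediate tensor in the example keeps the 256 × 256 × 31 shape.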
Example 2:
In order to demonstrate the performance of the proposed network, in the comparison-test part several algorithms based on traditional methods and on convolutional neural networks are selected for comparison, including a fusion algorithm based on PCA principal component analysis, an algorithm combining PCA and wavelet transform, the SuperResPALM algorithm proposed at ICCV15, and a spectral super-resolution fusion algorithm based on a 3D-CNN network. The results of the comparison tests are shown in FIG. 3.
TABLE 1 comparison of evaluation indexes of different algorithm reconstruction results
As can be seen from the comparison results and the evaluation indexes in Table 1, the proposed network has significant advantages on every index. Meanwhile, on the hardware platform described above, the network takes about 0.5 s to reconstruct a set of spectral images, so the reconstruction efficiency is high. In addition, to verify the generalization capability of the network, several groups of different scenes were tested. As can be seen from the mathematical model, the transformation matrices differ between optical systems, so during testing each database was trained with its own data, and the test data are scenes that did not participate in training. The test results are shown in FIG. 4.
TABLE 2 comparison of evaluation indexes of different scene reconstruction results
From the results shown in Table 2, the proposed network has good generalization capability and can meet the reconstruction requirements of different scenes. Meanwhile, to show the reconstruction capability of the network in the spectral dimension more intuitively, pixel points were randomly selected in the test data and their spectral curves drawn for comparison with the standard data; the selected positions and the spectral curves are shown in FIG. 5 and FIG. 6. As can be seen, for test data that did not participate in training, the spectral curves of targets of different materials fit well, indicating that the network has good spectral reconstruction capability.
It will be understood by those skilled in the art that, unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
The term "and/or" as used herein is intended to include either of the individual components or both.
The term "connected" as used herein may mean either a direct connection between components or an indirect connection between components via other components.
In light of the foregoing description of the preferred embodiment of the present invention, many modifications and variations will be apparent to those skilled in the art without departing from the spirit and scope of the invention. The technical scope of the present invention is not limited to the content of the specification, and must be determined according to the scope of the claims.

Claims (4)

1. A super-resolution fusion calculation method based on sub-pixel convolution and feature segmentation is characterized by comprising the following steps:
step one: entering the low-image-resolution hyperspectral data into a network for data processing; after entering the network, the low-image-resolution hyperspectral data enters a sub-network based on an encoder-decoder structure; in the encoder, the data is down-sampled through a pooling layer, and the convolution layers are connected through a BatchNorm normalization layer and a ReLU activation function; in the decoder, deconvolution layers are used for up-sampling; a dropout operation is added in each module, randomly discarding a portion of the neurons to reduce the interaction among hidden-layer nodes; the convolution layers all use the non-bottleneck-1D structure, each convolution kernel being a pair of decomposed kernel vectors of sizes (3 × 1) and (1 × 3), and the convolution layers between deconvolution layers also use the non-bottleneck-1D structure; the deconvolution layers in the decoder perform up-sampling while halving the number of feature channels; in the convolution layer at the back end of the decoder, the network raises the number of feature channels again: if the image-resolution upscaling factor is r, the number of channels is raised to r² times the original, for use by the back-end sub-pixel convolution layer; the decoder is followed by the sub-pixel convolution layer, which raises the image resolution to match the high-image-resolution data;
step two: carrying out data preprocessing on the high-image-resolution multispectral data;
step three: the high-image-resolution multispectral data enters the network, is fused with the low-image-resolution hyperspectral data, and the fused result is output;
step four: and further fusing the fused output and the hyperspectral data image super-resolution reconstruction result.
2. The super-resolution fusion calculation method based on sub-pixel convolution and feature segmentation according to claim 1, characterized in that: in step two, data preprocessing is performed on the high-image-resolution multispectral data, including channel adjustment and adjustment fusion.
3. The super-resolution fusion calculation method based on sub-pixel convolution and feature segmentation according to claim 1, characterized in that: in step three, the high-image-resolution multispectral data enters the network, and its dimension is raised through a convolution layer with a 1 × 1 kernel to the same number of spectral channels as the low-image-resolution hyperspectral data; a concat operation is performed between the channel-raised data and the low-image-resolution hyperspectral data up-sampled to the same image resolution, so that the low-image-resolution hyperspectral data constrains the channel-raised data in the spectral dimension; finally, the data is fused and features are extracted by a convolution layer with a 3 × 3 kernel, and the dimension is then reduced to the channel count of the low-image-resolution hyperspectral data by a 1 × 1 convolution layer; the outputs of the two modules are fused through a fusion module consisting of a 3 × 3 convolution layer and a 1 × 1 convolution layer, further fused with and constrained by the up-sampled low-image-resolution hyperspectral data, and finally the network output is obtained, completing the fusion task.
4. The super-resolution fusion calculation method based on sub-pixel convolution and feature segmentation according to claim 1, characterized in that: in the output result of step four, the loss function is:

Loss = (1/N) Σ_{i=1}^{N} [ α·F(Y_i, Ŷ_i) + β·F(X_i, D(Ŷ_i)) ]   (1)

In formula (1), N is the total number of samples, α and β are weight coefficients, Y is the high-image-resolution hyperspectral Label data, Ŷ is the output result, X is the low-image-resolution hyperspectral data, D(Ŷ) is the down-sampled output result, and F denotes the MSE function.
CN202010780609.0A 2020-08-06 2020-08-06 Super-resolution fusion calculation method based on sub-pixel convolution and feature segmentation Active CN111709882B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010780609.0A CN111709882B (en) 2020-08-06 2020-08-06 Super-resolution fusion calculation method based on sub-pixel convolution and feature segmentation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010780609.0A CN111709882B (en) 2020-08-06 2020-08-06 Super-resolution fusion calculation method based on sub-pixel convolution and feature segmentation

Publications (2)

Publication Number Publication Date
CN111709882A (en) 2020-09-25
CN111709882B (en) 2022-09-27

Family

ID=72548164

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010780609.0A Active CN111709882B (en) 2020-08-06 2020-08-06 Super-resolution fusion calculation method based on sub-pixel convolution and feature segmentation

Country Status (1)

Country Link
CN (1) CN111709882B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112767243A (en) * 2020-12-24 2021-05-07 深圳大学 Hyperspectral image super-resolution implementation method and system
CN113657388A (en) * 2021-07-09 2021-11-16 北京科技大学 Image semantic segmentation method fusing image super-resolution reconstruction
CN114757831A (en) * 2022-06-13 2022-07-15 湖南大学 High-resolution video hyperspectral imaging method, device and medium based on intelligent space-spectrum fusion
CN114913072A (en) * 2022-05-16 2022-08-16 中国第一汽车股份有限公司 Image processing method and device, storage medium and processor
CN115496819A (en) * 2022-11-18 2022-12-20 南京理工大学 Rapid coding spectral imaging method based on energy concentration characteristic

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109903255A (en) * 2019-03-04 2019-06-18 北京工业大学 A kind of high spectrum image Super-Resolution method based on 3D convolutional neural networks

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109903255A (en) * 2019-03-04 2019-06-18 北京工业大学 A kind of high spectrum image Super-Resolution method based on 3D convolutional neural networks

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112767243A (en) * 2020-12-24 2021-05-07 深圳大学 Hyperspectral image super-resolution implementation method and system
CN112767243B (en) * 2020-12-24 2023-05-26 深圳大学 Method and system for realizing super-resolution of hyperspectral image
CN113657388A (en) * 2021-07-09 2021-11-16 北京科技大学 Image semantic segmentation method fusing image super-resolution reconstruction
CN113657388B (en) * 2021-07-09 2023-10-31 北京科技大学 Image semantic segmentation method for super-resolution reconstruction of fused image
CN114913072A (en) * 2022-05-16 2022-08-16 中国第一汽车股份有限公司 Image processing method and device, storage medium and processor
CN114757831A (en) * 2022-06-13 2022-07-15 湖南大学 High-resolution video hyperspectral imaging method, device and medium based on intelligent space-spectrum fusion
CN114757831B (en) * 2022-06-13 2022-09-06 湖南大学 High-resolution video hyperspectral imaging method, device and medium based on intelligent space-spectrum fusion
CN115496819A (en) * 2022-11-18 2022-12-20 南京理工大学 Rapid coding spectral imaging method based on energy concentration characteristic

Also Published As

Publication number Publication date
CN111709882B (en) 2022-09-27

Similar Documents

Publication Publication Date Title
CN111709882B (en) Super-resolution fusion calculation method based on sub-pixel convolution and feature segmentation
CN111768342B (en) Human face super-resolution method based on attention mechanism and multi-stage feedback supervision
Xie et al. Hyperspectral image super-resolution using deep feature matrix factorization
CN109255755B (en) Image super-resolution reconstruction method based on multi-column convolutional neural network
CN112507997B (en) Face super-resolution system based on multi-scale convolution and receptive field feature fusion
CN110428387B (en) Hyperspectral and full-color image fusion method based on deep learning and matrix decomposition
CN108009559B (en) Hyperspectral data classification method based on space-spectrum combined information
Rao et al. A residual convolutional neural network for pan-shaprening
CN114119444B (en) Multi-source remote sensing image fusion method based on deep neural network
CN110544205A (en) Image super-resolution reconstruction method based on visible light and infrared cross input
CN113902622B (en) Spectrum super-resolution method based on depth priori joint attention
CN111914909B (en) Hyperspectral change detection method based on space-spectrum combined three-direction convolution network
CN116309070A (en) Super-resolution reconstruction method and device for hyperspectral remote sensing image and computer equipment
CN113744136A (en) Image super-resolution reconstruction method and system based on channel constraint multi-feature fusion
CN114841860A (en) Hyperspectral remote sensing image super-resolution method based on Laplacian pyramid network
CN115565045A (en) Hyperspectral and multispectral image fusion method based on multi-scale space-spectral transformation
Pan et al. Structure–color preserving network for hyperspectral image super-resolution
Deng et al. Multiple frame splicing and degradation learning for hyperspectral imagery super-resolution
CN113409190B (en) Video super-resolution method based on multi-frame grouping and feedback network
CN114511470B (en) Attention mechanism-based double-branch panchromatic sharpening method
CN110631699A (en) Transient spectrometer based on deep network unmixing
CN113111919B (en) Hyperspectral image classification method based on depth high resolution
Guo et al. Speedy and accurate image super‐resolution via deeply recursive CNN with skip connection and network in network
CN114627370A (en) Hyperspectral image classification method based on TRANSFORMER feature fusion
CN114140359A (en) Remote sensing image fusion sharpening method based on progressive cross-scale neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant