CN111709882B - Super-resolution fusion calculation method based on sub-pixel convolution and feature segmentation - Google Patents
Super-resolution fusion calculation method based on sub-pixel convolution and feature segmentation
- Publication number: CN111709882B
- Application number: CN202010780609.0A
- Authority: CN (China)
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T3/40 — Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4053 — Scaling based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
- G06T3/4061 — Super-resolution by injecting details from different spectral ranges
- G06T3/4046 — Scaling using neural networks
Abstract
The invention relates to a super-resolution fusion calculation method based on sub-pixel convolution and feature segmentation, belonging to the technical field of intelligent processing and analysis of spectral data. First, the low-image-resolution hyperspectral data enters the network for data processing; next, the high-image-resolution multispectral data enters the network and is fused with the low-image-resolution hyperspectral data, and the fused result is output; finally, the fused output is further fused with the super-resolution reconstruction result of the hyperspectral image. The non-bottleneck-1D structure used in the network decomposes each original 3×3 convolution kernel into a pair of 1D kernels, reducing the number of convolution parameters. With this method, the network can learn the transformation-matrix information during fusion, improving the accuracy of spectral-dimension reconstruction and the generalization ability of the network, while raising the overall computation speed and keeping the precision high.
Description
Technical Field
The invention belongs to the technical field of intelligent processing and analysis of spectral data, and particularly relates to a super-resolution fusion calculation method based on sub-pixel convolution and feature segmentation.
Background
Spectra are studied for transient events such as explosions, shock waves, high-voltage discharges and fluorescence. In these cases, however, conventional single-slit spectrometers cannot achieve a high signal-to-noise ratio (SNR), because the transient nature of the process limits the exposure time.
In the field of spectral imaging, achieving fast, highly sensitive, high-resolution hyperspectral imaging has long been a research focus. However, direct acquisition methods struggle to capture data quickly because of the large volume of a spectral data cube. Spectral imaging based on compressed sensing enables fast imaging, but still suffers from insufficient spectral resolution or a trade-off between imaging resolution and acquisition time.
Because low-image-resolution hyperspectral data (LrHS) and high-image-resolution multispectral data (HrMS) are easy to acquire, a super-resolution fusion calculation method based on sub-pixel convolution and feature segmentation is proposed to address these problems in compressed-sensing spectral imaging.
Disclosure of Invention
The specific technical scheme of the super-resolution fusion calculation method based on sub-pixel convolution and feature segmentation is as follows:
Step one: the low-image-resolution hyperspectral data enters the network for data processing.
Step two: data preprocessing is performed on the high-image-resolution multispectral data, including channel adjustment and adjustment for fusion.
Step three: the high-image-resolution multispectral data enters the network, is fused with the low-image-resolution hyperspectral data, and the fused result is output.
Step four: the fused output is further fused with the super-resolution reconstruction result of the hyperspectral image. With this structure, the network can learn the transformation-matrix information during fusion, improving the accuracy of spectral-dimension reconstruction and the generalization ability of the network.
Further, in step one: after entering the network, the low-image-resolution hyperspectral data enters a sub-network based on an encoder-decoder structure. In the encoder, the data is down-sampled through pooling layers, and the convolution layers are connected through a BatchNorm normalization layer and a ReLU activation function; in the decoder, deconvolution layers are used for up-sampling. A dropout operation is added in each module, randomly discarding a portion of the neurons to reduce the interaction between hidden-layer nodes.
Furthermore, the convolution layers all use a non-bottleneck-1D structure, in which the convolution kernel is a pair of decomposed 1D kernels of sizes (3×1) and (1×3); the convolution layers between deconvolution layers also use the non-bottleneck-1D structure. In the decoder, the deconvolution layers perform up-sampling while halving the number of feature channels. In the convolution layer at the back end of the decoder, the network raises the number of feature channels again: if the image-resolution magnification factor is r, the number of channels is raised to r² times the original, for use by the subsequent sub-pixel convolution layer. The decoder is followed by the sub-pixel convolution layer, which raises the image resolution to that of the high-image-resolution data.
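The channel-to-space rearrangement performed by the sub-pixel convolution layer can be sketched in NumPy as follows. This is a minimal illustration of the r² channel expansion described above; the function name and array layout are our own choices, not taken from the patent:

```python
import numpy as np

def pixel_shuffle(x, r):
    """Rearrange a (C*r*r, H, W) array into (C, H*r, W*r).

    This is the channel-to-space step performed after the decoder's
    final convolution has raised the channel count by a factor of r^2.
    """
    c_r2, h, w = x.shape
    assert c_r2 % (r * r) == 0, "channel count must be divisible by r^2"
    c = c_r2 // (r * r)
    x = x.reshape(c, r, r, h, w)        # split channels into (C, r, r)
    x = x.transpose(0, 3, 1, 4, 2)      # -> (C, H, r, W, r)
    return x.reshape(c, h * r, w * r)   # interleave into spatial dims

# one 2x2 feature map with 4 channels becomes one 4x4 map (r = 2)
x = np.arange(16).reshape(4, 2, 2)
y = pixel_shuffle(x, 2)
print(y.shape)   # (1, 4, 4)
```

Each output pixel is taken from one of the r² channels of the corresponding low-resolution location, so no interpolation is involved.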
Further, in step two, data preprocessing is performed on the high-image-resolution multispectral data, namely the adjustment for fusion and the channel adjustment.
Further, in step three, the high-image-resolution multispectral data enters the network and its dimensionality is raised, by a convolution layer with a 1×1 kernel, until its number of spectral channels equals that of the low-image-resolution hyperspectral data. A concat operation is then performed between the up-dimensioned data and the low-image-resolution hyperspectral data that has been up-sampled to the same image resolution, so that the hyperspectral data constrains the up-dimensioned data in the spectral dimension. The data is then fused and features are extracted by a convolution layer with a 3×3 kernel, after which a 1×1 convolution layer reduces the dimensionality to the channel count of the low-image-resolution hyperspectral data. The outputs of the two modules are fused by a fusion module consisting of a 3×3 convolution layer and a 1×1 convolution layer, further fused with and constrained by the up-sampled low-image-resolution hyperspectral data, and the final network output completes the fusion task. During fusion, the matrices are concatenated with concat operations rather than combined by matrix addition or subtraction, so that more information is retained; the information is then extracted by additional convolution layers and reduced to the output scale.
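The concat-based fusion described above can be illustrated with a shape-only sketch. The 31-band, 256×256 sizes follow the example used later in the description; only the concatenation step is shown, the convolutions are described in comments:

```python
import numpy as np

# up-dimensioned multispectral branch and up-sampled LrHS branch,
# both at the high image resolution with 31 spectral channels
branch_ms = np.zeros((256, 256, 31))
branch_hs = np.zeros((256, 256, 31))

# concat keeps both inputs' channels intact (62 in total), unlike
# matrix addition/subtraction, which would collapse them to 31
fused_in = np.concatenate([branch_ms, branch_hs], axis=-1)
print(fused_in.shape)   # (256, 256, 62)
# a 3x3 convolution would then extract features and a 1x1 convolution
# would reduce the result back to 31 channels (not implemented here)
```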
Further, in the output result of step four, the loss function is:

Loss = (1/N) · Σ_{i=1}^{N} [ α·F(Y_i, Ŷ_i) + β·F(X_i, D(Ŷ_i)) ]   (1)

In formula (1), N is the total number of samples, α and β are weight coefficients, Y_i is the high-resolution reference (label) data, Ŷ_i is the network output result, X_i is the low-image-resolution hyperspectral data, D(Ŷ_i) is the down-sampled output result, and F denotes the MSE function.
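A per-sample sketch of this loss in NumPy follows. The average-pooling downsampler and the argument names are our assumptions; the patent only specifies that F is the MSE and that the output is down-sampled to the LrHS scale:

```python
import numpy as np

def mse(a, b):
    return float(np.mean((a - b) ** 2))

def downsample(x, r):
    # simple r-by-r average pooling as a stand-in spatial downsampler
    h, w, s = x.shape
    return x.reshape(h // r, r, w // r, r, s).mean(axis=(1, 3))

def fusion_loss(output, label, lrhs, r, alpha=1.0, beta=1.0):
    """alpha * MSE(output, high-resolution label)
       + beta * MSE(downsampled output, LrHS input)."""
    return alpha * mse(output, label) + beta * mse(downsample(output, r), lrhs)

out = np.ones((8, 8, 4))   # perfect match to the label ...
lab = np.ones((8, 8, 4))
lr = np.zeros((4, 4, 4))   # ... but inconsistent with the LrHS input
print(fusion_loss(out, lab, lr, r=2))   # 1.0
```

The second term is what forces the network output to stay consistent with the measured low-resolution spectra, not just with the label.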
The beneficial effects of the invention are as follows:
For a lightweight network performing spectral-image reconstruction, the information loss in a bottleneck structure can be harmful, so a non-bottleneck-1D structure is used in the network. This structure decomposes each original 3×3 convolution kernel into a pair of 1D kernels, reducing the number of convolution parameters. A traditional super-resolution reconstruction network must up-sample the low-resolution image to the size of the high-resolution image before learning the mapping; the network then has to convolve and extract features on the high-resolution image, which increases computational complexity. Sub-pixel convolution is therefore introduced into the super-resolution reconstruction part of the network, so that features are extracted directly from the low-resolution image and the operation speed is improved. This is equivalent to folding the interpolation-amplification process into the feature-extraction convolution layers and letting the network learn it autonomously; because the convolution kernels extract features in the low-resolution image, the computation is greatly reduced, making the sub-pixel-convolution-based network more efficient. In addition, eliminating the interpolation step means the network can learn information useful for super-resolution reconstruction more directly, without interference from interpolated data, so it learns the low-to-high-resolution mapping better and produces more accurate results.
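The parameter saving from the non-bottleneck-1D factorization is easy to quantify. A back-of-envelope weight count (biases omitted) for an assumed 64-channel layer:

```python
def conv_weights(c_in, c_out, kh, kw):
    # weight count of a 2-D convolution layer with a kh x kw kernel
    return c_in * c_out * kh * kw

c = 64
full = conv_weights(c, c, 3, 3)                                  # one 3x3 layer
factored = conv_weights(c, c, 3, 1) + conv_weights(c, c, 1, 3)   # (3x1)+(1x3) pair
print(full, factored)   # 36864 24576
```

The decomposed pair needs 2·k weights per channel pair instead of k², a one-third saving at k = 3 that grows for larger kernels.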
Drawings
FIG. 1 is a schematic structural diagram of a fusion module;
FIG. 2 is a diagram of a super-resolution fusion network structure of a spectral image;
FIG. 3 shows the comparative experiment results: (a) low-resolution image; (b) high-resolution image; (c) PCA algorithm result; (d) PCA & Wavelet algorithm result; (e) SuperResPALM algorithm result; (f) 3D-CNN result; (g) result of the proposed network;
FIG. 4 shows test results in different scenes: (a) low-resolution image; (b) high-resolution image; (c) result of the proposed network;
FIG. 5 is a schematic diagram of the position selection in the test data;
FIG. 6 is a comparison graph of the spectral reconstruction results at each point.
Detailed Description
The network structure built by the method is shown in FIG. 2. The two data paths at the network input are the high-image-resolution multispectral data (HrMS) and the low-image-resolution hyperspectral data (LrHS). The network as a whole is divided into a low-resolution-image super-resolution reconstruction module and a high-resolution-image spectral-constraint fusion module. After the low-image-resolution hyperspectral data is input into the network, it first enters a sub-network based on an encoder-decoder structure. In the encoder, the data is down-sampled by pooling layers, while the number of feature channels doubles as the network depth increases. The convolution layers all use a non-bottleneck-1D structure, in which the convolution kernel is a pair of decomposed 1D kernels of sizes (3×1) and (1×3), and the convolution layers are connected through a BatchNorm normalization layer and a ReLU activation function. In the decoder, deconvolution layers perform up-sampling while halving the number of feature channels; the deconvolution layers also use the non-bottleneck-1D structure.
A dropout operation is added after each module, randomly discarding a portion of the neurons to reduce the interaction between hidden-layer nodes and thereby reduce overfitting.
Unlike the traditional encoder-decoder structure, in the back-end convolution layer of the decoder the network raises the number of feature channels again: if the image-resolution magnification factor is r, the number of channels is raised to r² times the original, for use by the back-end sub-pixel convolution layer.
Finally, the sub-pixel convolution layer is connected behind the decoder and raises the image resolution to that of the high-image-resolution data.
The encoder-decoder structure lets the network learn deeper features while reducing network parameters, which further improves the running speed.
Meanwhile, the high-image-resolution multispectral data enters the network and its dimensionality is raised, by a convolution layer with a 1×1 kernel, to the same number of spectral channels as the low-image-resolution hyperspectral data. A concat operation is then performed between the up-dimensioned data and the hyperspectral data up-sampled to the same image resolution, so that the low-image-resolution hyperspectral data constrains the up-dimensioned data in the spectral dimension. The data is finally fused and features are extracted by a convolution layer with a 3×3 kernel, after which a 1×1 convolution layer reduces the dimensionality to the channel count of the low-image-resolution hyperspectral data. At this point, the outputs of the two modules are fused by a fusion module consisting of a 3×3 convolution layer and a 1×1 convolution layer, further fused with and constrained by the up-sampled low-image-resolution hyperspectral data, and the final network output completes the fusion task.
In the network-output loss-calculation module, considering the accuracy of the spectral dimension, an MSE function over the three-dimensional spectral data is used for the loss calculation: one loss term is computed between the network output and the label data of high spectral and high image resolution. Meanwhile, considering the spectral-fusion mathematical model, the high-image-resolution hyperspectral data output by the network is down-sampled to the same size as the LrHS data, and a second loss term is computed against the LrHS input. The two terms are combined as the overall network loss function, whose expression is as follows:

Loss = (1/N) · Σ_{i=1}^{N} [ α·F(Y_i, Ŷ_i) + β·F(X_i, D(Ŷ_i)) ]   (1)

In formula (1), N is the total number of samples, α and β are weight coefficients, Y_i is the high-resolution label data, Ŷ_i is the network output result, X_i is the low-image-resolution hyperspectral data, D(Ŷ_i) is the down-sampled output, and F denotes the MSE function.
Example 1:
As shown in FIG. 2, A is the low-image-resolution hyperspectral data of size h × w × s = 64 × 64 × 31 (length × width × number of spectral bands);
B is the high-image-resolution low-spectral data of size H × W × S = 256 × 256 × 3;
the 64 × 64 × 31 LrHS data passes through the erf-net network and the sub-pixel convolution to give data 1 of size 256 × 256 × 31;
the 64 × 64 × 31 LrHS data is directly interpolated to give data 3 and 3' of size 256 × 256 × 31;
the 256 × 256 × 3 HrMS data passes through a 1 × 1 convolution to give data 2 of size 256 × 256 × 31;
2 and 3' are concatenated (concat 1) and convolved with 3 × 3 and 1 × 1 kernels to give data 4 of size 256 × 256 × 31;
4 and 1 are concatenated (concat 2) and convolved with 3 × 3 and 1 × 1 kernels to give data 5 of size 256 × 256 × 31;
5 and 3 are concatenated (concat 3) and convolved with 3 × 3 and 1 × 1 kernels to give the final 256 × 256 × 31 result.
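The data flow above can be traced with a shape-only NumPy sketch. Nearest-neighbour replication via `np.kron` stands in for the interpolation, and the convolutions are represented only by their output shapes (all names here are illustrative):

```python
import numpy as np

h, w, s = 64, 64, 31       # A: LrHS cube (length x width x spectral bands)
H, W = 256, 256            # B: HrMS spatial size, 3 channels
r = H // h                 # magnification factor: 4

lrhs = np.zeros((h, w, s))
hrms = np.zeros((H, W, 3))

# data 3 / 3': LrHS directly interpolated to the high resolution
# (nearest-neighbour replication as a stand-in interpolator)
data3 = np.kron(lrhs, np.ones((r, r, 1)))
print(data3.shape)                       # (256, 256, 31)

# data 1 (encoder-decoder + sub-pixel conv) and data 2 (1x1 conv on HrMS)
# both end up at (256, 256, 31); each concat doubles the channels to 62
# before the 3x3 + 1x1 convolutions reduce them back to 31
data2 = np.zeros((H, W, s))
concat1 = np.concatenate([data2, data3], axis=-1)
print(concat1.shape)                     # (256, 256, 62)
```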
Example 2
To demonstrate the performance of the proposed network, several algorithms based on traditional methods and convolutional neural networks were selected for the comparison tests, including a fusion algorithm based on PCA (principal component analysis), an algorithm combining PCA with wavelet transformation, the SuperResPALM algorithm proposed at ICCV15, and a spectral super-resolution fusion algorithm based on a 3D-CNN network. The comparative test results are shown in FIG. 3.
TABLE 1 comparison of evaluation indexes of different algorithm reconstruction results
As can be seen from the comparison results and the evaluation indexes in Table 1, the proposed network has clear advantages on every index. On the hardware platform described above, the network takes about 0.5 seconds to reconstruct a set of spectral images, so reconstruction is efficient. In addition, to verify the generalization ability of the network, several groups of different scenes were tested. As the mathematical model shows, the transformation matrices differ between optical systems, so during testing each database's data was trained with data from that same database; the test data are scenes that did not participate in training. The test results are shown in FIG. 4.
TABLE 2 comparison of evaluation indexes of different scene reconstruction results
From the results in Table 2, the proposed network has good generalization ability and can meet the reconstruction requirements of different scenes. Meanwhile, to show the network's spectral-dimension reconstruction ability more intuitively, pixel points were selected at random in the test data and their spectral curves were plotted against the standard data; the selected positions and the spectral curves are shown in FIGS. 5 and 6. For test data that did not participate in training, the spectral curves of targets made of different materials fit well, indicating that the network has good spectral reconstruction ability.
It will be understood by those skilled in the art that, unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
The meaning of "and/or" as used herein is intended to include both the individual components or both.
The term "connected" as used herein may mean either a direct connection between components or an indirect connection between components via other components.
In light of the foregoing description of the preferred embodiment of the present invention, many modifications and variations will be apparent to those skilled in the art without departing from the spirit and scope of the invention. The technical scope of the present invention is not limited to the content of the specification, and must be determined according to the scope of the claims.
Claims (4)
1. A super-resolution fusion calculation method based on sub-pixel convolution and feature segmentation is characterized by comprising the following steps:
Step one: the low-image-resolution hyperspectral data enters the network for data processing; after entering the network, it enters a sub-network based on an encoder-decoder structure; in the encoder, the data is down-sampled through pooling layers, and the convolution layers are connected through a BatchNorm normalization layer and a ReLU activation function; in the decoder, deconvolution layers are used for up-sampling; a dropout operation is added in each module, randomly discarding a portion of the neurons to reduce the interaction between hidden-layer nodes; the convolution layers all use a non-bottleneck-1D structure, in which the convolution kernel is a pair of decomposed 1D kernels of sizes (3×1) and (1×3); the convolution layers between deconvolution layers also use the non-bottleneck-1D structure; in the decoder, the deconvolution layers perform up-sampling while halving the number of feature channels; in the convolution layer at the back end of the decoder, the network raises the number of feature channels again: with an image-resolution magnification factor r, the number of channels is raised to r² times the original, for use by the subsequent sub-pixel convolution layer; the decoder is followed by the sub-pixel convolution layer, which raises the image resolution to that of the high-image-resolution data;
Step two: data preprocessing is performed on the high-image-resolution multispectral data;
Step three: the high-image-resolution multispectral data enters the network, is fused with the low-image-resolution hyperspectral data, and the fused result is output;
Step four: the fused output is further fused with the super-resolution reconstruction result of the hyperspectral image.
2. The super-resolution fusion calculation method based on sub-pixel convolution and feature segmentation according to claim 1, characterized in that: in step two, data preprocessing is performed on the high-image-resolution multispectral data, including adjustment for fusion and channel adjustment.
3. The super-resolution fusion calculation method based on sub-pixel convolution and feature segmentation according to claim 1, characterized in that: in step three, the high-image-resolution multispectral data enters the network and its dimensionality is raised, by a convolution layer with a 1×1 kernel, to the same number of spectral channels as the low-image-resolution hyperspectral data; a concat operation is performed between the up-dimensioned data and the low-image-resolution hyperspectral data up-sampled to the same image resolution, so that the hyperspectral data constrains the up-dimensioned data in the spectral dimension; the data is then fused and features are extracted by a convolution layer with a 3×3 kernel, after which a 1×1 convolution layer reduces the dimensionality to the channel count of the low-image-resolution hyperspectral data; the outputs of the two modules are fused by a fusion module consisting of a 3×3 convolution layer and a 1×1 convolution layer, further fused with and constrained by the up-sampled low-image-resolution hyperspectral data, and the final network output completes the fusion task.
4. The super-resolution fusion calculation method based on sub-pixel convolution and feature segmentation according to claim 1, characterized in that: in the output result of step four, the loss function is:

Loss = (1/N) · Σ_{i=1}^{N} [ α·F(Y_i, Ŷ_i) + β·F(X_i, D(Ŷ_i)) ]   (1)

In formula (1), N is the total number of samples and α and β are weight coefficients; Y_i is the high-resolution label data, Ŷ_i is the network output result, X_i is the low-image-resolution hyperspectral data, D(Ŷ_i) is the down-sampled output result, and F denotes the MSE function.
Priority Applications (1)
- CN202010780609.0A (CN111709882B) — priority and filing date 2020-08-06 — Super-resolution fusion calculation method based on sub-pixel convolution and feature segmentation
Publications (2)
- CN111709882A — published 2020-09-25
- CN111709882B — granted 2022-09-27
Family
- Family ID: 72548164
- Family application: CN202010780609.0A, filed 2020-08-06, granted as CN111709882B (Active)
Families Citing this family (5)
- CN112767243B (2020-12-24) — Method and system for realizing super-resolution of hyperspectral image
- CN113657388B (2021-07-09) — Image semantic segmentation method for super-resolution reconstruction of fused image
- CN114913072A (2022-05-16) — Image processing method and device, storage medium and processor
- CN114757831B (2022-06-13) — High-resolution video hyperspectral imaging method, device and medium based on intelligent space-spectrum fusion
- CN115496819B (2022-11-18) — Rapid coding spectral imaging method based on energy concentration characteristic
Family Cites Families (1)
- CN109903255A (2019-03-04) — A hyperspectral image super-resolution method based on 3D convolutional neural networks
Legal Events
- PB01 — Publication
- SE01 — Entry into force of request for substantive examination
- GR01 — Patent grant