CN112950736B - Hyperspectral imaging method based on unsupervised network de-dispersion blurring - Google Patents


Info

Publication number
CN112950736B
CN112950736B
Authority
CN
China
Prior art keywords
dispersion
image
network
hyperspectral
rgb image
Prior art date
Legal status
Active
Application number
CN202110243398.1A
Other languages
Chinese (zh)
Other versions
CN112950736A (en)
Inventor
曹汛
张理清
华夏
王漱明
Current Assignee
Nanjing University
Original Assignee
Nanjing University
Priority date
Filing date
Publication date
Application filed by Nanjing University
Priority to CN202110243398.1A
Publication of CN112950736A
Application granted
Publication of CN112950736B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00: 2D [Two Dimensional] image generation
    • G06T11/001: Texturing; Colouring; Generation of texture or colour
    • G06T7/00: Image analysis
    • G06T7/90: Determination of colour characteristics
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10032: Satellite or aerial image; Remote sensing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a hyperspectral imaging method based on unsupervised network de-dispersion blurring. The method comprises the following steps: S1, collecting a dispersion-blurred RGB image; S2, constructing an unsupervised-learning convolutional neural network whose input is a dispersive RGB image and whose output is a reconstructed hyperspectral image; S3, inputting the single dispersive RGB image acquired in step S1 into the convolutional neural network of step S2, and reconstructing the spectral information of the image by an online training method; S4, driving the parameter optimization of the convolutional neural network according to the physical relationship by which the imaging system generates the dispersive image, and training, with a back-propagation algorithm, the network's ability to reconstruct hyperspectral data from the dispersive RGB image; S5, iterating repeatedly to obtain a reconstruction result that gradually approximates the real hyperspectral image. The hyperspectral imaging method of the invention resolves dispersion blur with an unsupervised network, replacing data driving with model driving; it guarantees reconstruction accuracy while making the system simpler and reducing cost.

Description

Hyperspectral imaging method based on unsupervised network de-dispersion blurring
Technical Field
The invention belongs to the field of spectral imaging, and particularly relates to a hyperspectral imaging method based on unsupervised network de-dispersion blurring.
Background
Hyperspectral imaging technology has important applications in many fields, since hyperspectral images retain far more spectral information than RGB images. Traditional spectrometers and imaging devices suffer from high cost, large volume, and system complexity. With the development of computer science, software algorithms, mainly compressed sensing and computational reconstruction algorithms, have been introduced into spectral imaging, making spectral imaging instruments cheaper and more compact. In compressed-sensing reconstruction, high-dimensional spectral data can be solved through an optimization problem containing a sparsity prior, but hand-designed priors are rarely both effective and robust, so the accuracy of the reconstructed spectral data is low.
With the development of artificial intelligence, deep neural networks have been applied to more and more image-processing problems, and convolutional neural networks are also used for hyperspectral reconstruction in hyperspectral imaging. However, traditional deep-learning-based computational imaging methods mostly adopt supervised learning: a large amount of labeled or ground-truth data must be obtained in advance to train the network, the quality and quantity of the dataset directly affect the training result, the generalization ability of a data-driven model is limited, and good reconstruction accuracy cannot be achieved on all data. Although some unsupervised methods exist, they still obtain the model in a data-driven way, so training takes a long time and the prediction accuracy of the resulting model does not reach that of supervised learning.
Disclosure of Invention
In view of the shortcomings of the prior art, the invention aims to provide a hyperspectral imaging method based on unsupervised network de-dispersion blurring, which reconstructs a hyperspectral image directly from a single dispersive RGB image using unsupervised learning, thereby guaranteeing reconstruction accuracy while keeping the system and algorithm simple and reducing the cost of data acquisition.
The invention adopts the technical scheme that:
a hyperspectral imaging method based on unsupervised network de-dispersion blurring comprises the following steps:
s1, collecting a dispersion blurred RGB image;
s2, constructing an unsupervised learning convolutional neural network, wherein the input of the network is a dispersion RGB image, and the output of the network is a reconstructed hyperspectral image;
s3, inputting the single dispersion RGB image acquired in the step S1 into a convolutional neural network in the step S2, and reconstructing spectrum information of the image by using an online training method;
s4, driving parameter optimization of the convolutional neural network according to the physical relation of the dispersion image generated by the imaging system, and training the capability of the convolutional neural network to reconstruct hyperspectral data from the dispersion RGB image by using a back propagation algorithm;
and S5, repeating the iteration for a plurality of times to obtain a reconstruction result which gradually approximates to the real hyperspectral image.
Further, in step S1, a dispersion component and an image sensor are used to collect a dispersion blurred RGB image; after passing through the dispersion component, the hyperspectral signal forms dispersion in the corresponding dispersion direction, namely, the two-dimensional data of each channel are shifted in two-dimensional space, and then the image sensor acquires the RGB image with blurred dispersion.
Further, in step S2, the convolutional neural network is a symmetric U-shaped convolutional neural network (UNet): the first half downsamples four times, the second half upsamples four times, and skip connections at the same scale stage concatenate and fuse the front and back features along the spectral-channel dimension; the network does not restrict the size of the input image, and the input and output images have the same two-dimensional spatial size, so the dispersion-blurred RGB image is mapped to the hyperspectral image.
Further, in step S4, the physical relationship of chromatic dispersion formed by the acquisition system on the reconstructed hyperspectral image output by the convolutional neural network is recalculated to obtain a dispersive RGB image, and the dispersive RGB image is close to the input dispersive RGB image through back propagation, so that the network optimization objective is:
Loss = min‖MS′-D‖ = min‖D′-D‖
where S′ is the reconstructed hyperspectral image output by the network, of size h × w × c (w and h are the transverse and longitudinal sizes of the image and c is the number of spectral channels); D is the dispersive RGB image input to the network, of size h × w × 3; M is the physical relationship between the hyperspectral image and the dispersive RGB image; and D′ is the recalculated dispersive RGB image, the same size as the input dispersive RGB image D.
Further, in step S5, after multiple iterations of the back-propagation algorithm the loss function is minimized: the goal of approximating the real hyperspectral image is converted into making the dispersive RGB image recalculated from the network's output hyperspectral image gradually approximate the input dispersive RGB image.
The invention has the remarkable advantages that:
(1) Compared with supervised learning in traditional deep learning, the method of reconstructing hyperspectral images by resolving dispersion blur with an unsupervised network needs neither a large amount of real high-precision spectral data as training labels nor a large dataset, reducing the cost of data acquisition.
(2) The model drives network optimization with a physical relationship and reconstructs hyperspectral information from a single dispersion-blurred RGB image by online training, so the trained model does not need good generalization ability across all data.
(3) Image acquisition needs only one dispersion component and one image sensor, so the method achieves high-accuracy reconstruction while reducing the cost and complexity of the imaging system.
Drawings
Fig. 1 is a flow chart of the method of the present invention.
Fig. 2 is a schematic representation of the calculation of dispersive RGB data in an imaging model of the method of the invention.
Fig. 3 is an unsupervised network learning schematic diagram used by the method of the present invention.
FIG. 4 is a schematic diagram of a U-shaped convolutional neural network used in the method of the present invention.
FIG. 5 shows the result of the method of the present invention for resolving dispersion blur: (a) the dispersion-blurred RGB three-channel image input to the network; (b) the dispersive RGB image recalculated from the hyperspectral image output by the network; (c)-(f) spectral curve comparisons, at four randomly selected points, between the network output image (output) and the corresponding real hyperspectral image (target).
Detailed Description
To make the objects, methods and advantages of the invention clearer, the practice of the invention is described in more detail below with reference to the accompanying drawings.
The embodiment provides a hyperspectral imaging method based on unsupervised network de-dispersion blurring; referring to fig. 1, it specifically includes:
s1: the dispersion blurred RGB data is collected.
The dispersive RGB data can be collected with one dispersion component and one image sensor. Fig. 5 (a) is a simulated dispersive RGB image D. The hyperspectral data S is a three-dimensional data cube of size x × y × c, where x and y are the transverse and longitudinal sizes of the image in two-dimensional space, c is the number of spectral channels, and the two-dimensional data of each spectral channel represents the spectral signal in that band. The dispersive RGB image D can be expressed as:
D=ΩΦS
where Φ is the operation matrix describing the dispersion direction and magnitude for each spatial voxel, and Ω is the operation matrix that compresses hyperspectral data to RGB data. If the dispersion device disperses the hyperspectral image in the horizontal direction with a displacement of one pixel between adjacent channels, hyperspectral data of size x × y × c has, after dispersion, size (x+c-1) × y × c. Ω holds the transmission coefficient of each spatial pixel at the camera's three channels; that is, the dispersed data ΦS is multiplied by the camera-channel transmission coefficients on the corresponding bands and then summed (an integration), yielding the dispersion-blurred RGB observation D.
To make the network's input dispersive data D and the hyperspectral data S the same size in the two-dimensional spatial dimensions, the hyperspectral data S is padded with (c-1) columns of blank pixels in the horizontal direction. These steps are illustrated in fig. 2. From here on, the hyperspectral data and the dispersive RGB data share the two-dimensional spatial size h × w, so S is h × w × c and D is h × w × 3.
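The forward model D = ΩΦS can be sketched in NumPy. The function name, the per-channel one-pixel horizontal shift, and the toy sizes below are illustrative assumptions, not anything specified by the patent:

```python
import numpy as np

def disperse_and_project(S, omega, shift_per_channel=1):
    """Sketch of D = Omega * Phi * S.

    S     : hyperspectral cube of shape (h, w, c); the last (c-1) columns
            are assumed zero-padded so the dispersed cube keeps width w.
    omega : RGB response matrix of shape (c, 3), the transmission
            coefficient of each spectral band at the camera's 3 channels.
    """
    h, w, c = S.shape
    # Phi: shift channel k horizontally by k pixels (the dispersion)
    dispersed = np.zeros_like(S)
    for k in range(c):
        shift = k * shift_per_channel
        dispersed[:, shift:, k] = S[:, :w - shift or None, k]
    # Omega: weight each shifted band by the channel response and sum
    return dispersed @ omega        # shape (h, w, 3)
```

A point source lit in every band then appears smeared across the dispersion direction, one channel per pixel, which is exactly the blur the network is later asked to undo.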
S2: and constructing an unsupervised learning convolutional neural network, inputting a dispersion blurred RGB image by the network, and outputting a reconstructed hyperspectral image.
The constructed de-dispersion neural network is a symmetric U-shaped convolutional neural network (UNet); its skip connections fully combine low-level and high-level feature information. The network structure is shown in fig. 4.
The first half of the network (the encoding network) downsamples four times; each downsampling halves the image in the transverse and longitudinal directions while keeping the spectral-channel dimension unchanged. The second half (the decoding network) upsamples four times; each upsampling doubles the image in the transverse and longitudinal directions while keeping the spectral-channel dimension unchanged. Skip connections at the same scale stage concatenate and fuse the front and back features along the channel dimension, so shallow and deep features combine into a richer feature map.
The activation function used in the network is LeakyReLU, which has a small slope for negative inputs and therefore retains some information from values below zero. Its forward pass is LeakyReLU(x) = x for x ≥ 0 and αx for x < 0, where α is a small positive slope.
The downsampling pooling layer is MaxPool (filter size 2 × 2, stride 2), and upsampling uses the PixelShuffle algorithm (upscaling factor 2).
Each downsampling and upsampling operation is followed by two Conv2d-BatchNorm2d-LeakyReLU blocks with 3 × 3 convolution kernels. Besides acting in the two-dimensional spatial dimensions, the first convolution changes the number of channels in the spectral dimension, doubling them in a downsampling layer and halving them in an upsampling layer; the second convolution keeps the number of channels in the spectral dimension unchanged. The BatchNorm2d layer after each Conv2d layer normalizes the data, so that overly large values do not destabilize the network before the LeakyReLU activation, which speeds up the convergence of training.
The network input layer uses two Conv2d-BatchNorm2d-LeakyReLU blocks: the first uses 64 convolution kernels of size 3 × 3 and raises the spectral-dimension channel count of the three-channel input to 64; the second convolution keeps the number of spectral channels of the feature map unchanged.
The output layer of the network uses a 1 × 1 convolution layer with c kernels (the number of reconstruction channels), so the network output has the required number of channels in the spectral dimension.
The network does not limit the size of the input image, and the sizes of the input image and the output image are consistent in two-dimensional space dimension, so that the dispersion blurred RGB image is mapped to the hyperspectral image.
S3: the single dispersion RGB data collected in S1 is input into a convolutional neural network of S2.
Traditional deep-learning methods feed a large amount of data into the training network, and the quantity and quality of that data affect the training result. In this embodiment, only a single piece of data is input during training, with no other label for comparison: each dispersive RGB image is trained on independently by the online training method, so there is no need to first train a model on a large dataset and then use it to predict the target dispersive RGB image.
S4: and training a convolutional neural network, and driving network parameter optimization according to the physical relation of the dispersion image generated by the imaging system.
Unlike supervised learning in traditional deep learning, the network does not need a real hyperspectral image as a training label; instead, it uses the physical relationship by which the real imaging system forms the dispersive RGB image, replacing data driving with model driving to optimize the network parameters. The reconstructed hyperspectral image output by the network is passed back through the physical relationship D = ΩΦS of step S1 to recalculate a dispersive RGB image (the re-dispersed RGB image), and back-propagation makes this image approximate the dispersive RGB image input to the network. That is, training converts minimizing the loss between the reconstructed and real hyperspectral images into minimizing the loss between the re-dispersed RGB image and the network's input dispersive RGB image; the network optimization target is:
Loss = min‖MS′-D‖ = min‖D′-D‖
where S′ is the reconstructed hyperspectral image output by the network, of size h × w × c; M is the physical relationship between the hyperspectral image and the dispersive RGB image, here M = ΩΦ; D′ is the RGB image obtained by compressing and mapping the reconstructed hyperspectral image S′ back to dispersion blur (D′ is the same size as D); and D is the single dispersive RGB image input to the network. The process is illustrated in fig. 3.
After the dispersive RGB image is input, the network's output, the reconstructed hyperspectral image, is obtained by forward propagation. The neural-network back-propagation algorithm then takes partial derivatives of the loss function by the chain rule and updates the network's weight parameters along the direction of gradient descent; after repeated forward and backward propagation iterations, the loss function value keeps decreasing.
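The optimization principle can be illustrated with a toy linear stand-in for the network: here the hyperspectral estimate S′ is optimized directly by gradient descent on ‖MS′-D‖, with a small hand-picked M replacing ΩΦ. All sizes and values are illustrative assumptions, not the patent's configuration:

```python
import numpy as np

# M stands in for the combined dispersion + RGB-projection operator (Omega*Phi)
M = np.array([[1.0, 1.0, 0.0, 0.0],
              [0.0, 1.0, 1.0, 0.0],
              [0.0, 0.0, 1.0, 1.0]])
S_true = np.array([1.0, 2.0, 3.0, 4.0])  # "real" 4-band hyperspectral signal
D = M @ S_true                           # observed dispersion-blurred measurement

S_hat = np.zeros(4)                      # plays the role of the network output S'
lr = 0.2
for _ in range(500):
    residual = M @ S_hat - D             # D' - D
    S_hat -= lr * (M.T @ residual)       # gradient of 0.5 * ||M S' - D||^2
```

After the loop the re-dispersed measurement M·S_hat matches D essentially exactly, even though S_hat need not equal S_true: the inverse problem is underdetermined, which is why the convolutional network's structure matters as an implicit prior in the full method.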
The network model was built with the PyTorch deep-learning framework and trained on a GPU (NVIDIA 1070 Ti).
The loss function is the smooth L1 loss, a piecewise function: inside [-1, 1] it behaves as the L2 loss, avoiding the non-smoothness of the L1 loss at zero; outside [-1, 1] it behaves as the L1 loss, avoiding gradient explosion on outliers. It is expressed as SmoothL1(x) = 0.5x^2 for |x| < 1 and |x| - 0.5 otherwise.
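A minimal NumPy version of this piecewise loss, applied elementwise (this matches PyTorch's `torch.nn.SmoothL1Loss` with beta = 1 and no reduction; the function name is illustrative):

```python
import numpy as np

def smooth_l1(x):
    """Smooth L1 loss: 0.5*x^2 inside [-1, 1], |x| - 0.5 outside.

    The two pieces meet with matching value and slope at |x| = 1,
    so the loss is differentiable everywhere.
    """
    ax = np.abs(x)
    return np.where(ax < 1.0, 0.5 * x * x, ax - 0.5)
```

Near zero the quadratic piece gives smooth, shrinking gradients; far from zero the linear piece caps the gradient magnitude at 1, so outlier pixels cannot blow up the update.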
the network training optimizer is an Adam adaptive optimizer.
Every 500 iterations, the image reconstruction result and the corresponding network model parameters are saved (as a .pth file), while the best network model best_model.pth and the best image reconstruction result are saved or updated whenever the loss function value falls below the best value in the training history.
S5: with the increase of the iteration times, the dispersion RGB image recalculated by the output hyperspectral image gradually approaches the input dispersion RGB image, so that the purpose that the output reconstructed hyperspectral image gradually approaches the real hyperspectral image is realized.
Because the training mode is online training of a single input image, the trained model does not need to have generalization capability.
Experiments on multiple dispersive RGB images show that a high-accuracy reconstruction result can be obtained with a learning rate of 0.01, 3000 iterations, and a training time of 10-15 minutes.
The reconstruction result is shown in fig. 5: (a) is the input dispersive RGB image, of size 512 × 512 × 3, and (b) is the re-dispersed RGB image, also 512 × 512 × 3, i.e. the dispersive RGB image recalculated from the network's output hyperspectral image (of size 512 × 512 × 31); the re-dispersed RGB image shows no obvious dispersion. The 31-channel spectral curves of the network's output image and of the corresponding real hyperspectral image are compared at randomly selected points. As (c)-(f) show, the network output reconstructs the spectral information of the image well, and its spectral curves closely approximate those of the real hyperspectral image.
The foregoing description of the preferred embodiments is not intended to limit the invention; any modifications, equivalent substitutions, improvements and the like made within the spirit and principles of the invention fall within its scope.

Claims (4)

1. A hyperspectral imaging method based on unsupervised network de-dispersion blurring, characterized by comprising the following steps:
s1, collecting a dispersion blurred RGB image;
s2, constructing an unsupervised learning convolutional neural network, wherein the input of the network is a dispersion RGB image, and the output of the network is a reconstructed hyperspectral image;
s3, inputting the single dispersion RGB image acquired in the step S1 into a convolutional neural network in the step S2, and reconstructing spectrum information of the image by using an online training method;
s4, driving parameter optimization of the convolutional neural network according to the physical relation of the dispersion image generated by the imaging system, and training the capability of the convolutional neural network to reconstruct hyperspectral data from the dispersion RGB image by using a back propagation algorithm; the method comprises the following steps: the reconstructed hyperspectral image output by the convolutional neural network is recalculated by the physical relation of chromatic dispersion formed by an acquisition system to obtain a chromatic dispersion RGB image, and the chromatic dispersion RGB image is close to the input chromatic dispersion RGB image through back propagation, and the network optimization target is as follows:
Loss = min‖MS′-D‖ = min‖D′-D‖
wherein S′ is the reconstructed hyperspectral image output by the network, of size h × w × c, where w and h are the transverse and longitudinal sizes of the image and c is the number of spectral channels; D is the dispersive RGB image input to the network, of size h × w × 3; M is the physical relationship between the hyperspectral image and the dispersive RGB image; D′ is the recalculated dispersive RGB image, the same size as the input dispersive RGB image D;
and S5, repeating the iteration for a plurality of times to obtain a reconstruction result which gradually approximates to the real hyperspectral image.
2. The method for hyperspectral imaging based on unsupervised network de-dispersion blur as claimed in claim 1, wherein in step S1, a dispersion component and an image sensor are used to collect the dispersion blurred RGB image; after passing through the dispersion component, the hyperspectral signal forms dispersion in the corresponding dispersion direction, namely, the two-dimensional data of each channel are shifted in two-dimensional space, and then the image sensor acquires the RGB image with blurred dispersion.
3. The hyperspectral imaging method based on unsupervised network de-dispersion blurring as claimed in claim 1, wherein in step S2, the convolutional neural network is a symmetric U-shaped convolutional neural network (UNet): the first half downsamples four times, the second half upsamples four times, and skip connections at the same scale stage concatenate and fuse the front and back features along the spectral-channel dimension; the convolutional neural network does not restrict the size of the input image, and the input and output images have the same two-dimensional spatial size, so the dispersion-blurred RGB image is mapped to the hyperspectral image.
4. The method of claim 1, wherein in step S5, the back-propagation algorithm is iterated multiple times to minimize the loss function, converting the goal of approximating the real hyperspectral image into making the dispersive RGB image recalculated from the network's output hyperspectral image gradually approximate the input dispersive RGB image.
CN202110243398.1A 2021-03-05 2021-03-05 Hyperspectral imaging method based on unsupervised network de-dispersion blurring Active CN112950736B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110243398.1A CN112950736B (en) 2021-03-05 2021-03-05 Hyperspectral imaging method based on unsupervised network de-dispersion blurring

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110243398.1A CN112950736B (en) 2021-03-05 2021-03-05 Hyperspectral imaging method based on unsupervised network de-dispersion blurring

Publications (2)

Publication Number Publication Date
CN112950736A CN112950736A (en) 2021-06-11
CN112950736B (en) 2024-04-09

Family

ID=76247800

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110243398.1A Active CN112950736B (en) 2021-03-05 2021-03-05 Hyperspectral imaging method based on unsupervised network de-dispersion blurring

Country Status (1)

Country Link
CN (1) CN112950736B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115587949A (en) * 2022-10-27 2023-01-10 贵州大学 Agricultural multispectral visual reconstruction method based on visible light image

Citations (5)

Publication number Priority date Publication date Assignee Title
EP3460427A1 (en) * 2017-09-25 2019-03-27 Korea Advanced Institute of Science and Technology Method for reconstructing hyperspectral image using prism and system therefor
CN109697697A (en) * 2019-03-05 2019-04-30 北京理工大学 The reconstructing method of the spectrum imaging system of neural network based on optimization inspiration
CN110490937A (en) * 2019-07-15 2019-11-22 南京大学 A kind of method and device thereof for accelerating EO-1 hyperion video to rebuild
CN110717947A (en) * 2019-09-25 2020-01-21 北京理工大学 High-quality spectrum reconstruction method based on external and internal training
CN111174912A (en) * 2020-01-03 2020-05-19 南京大学 Snapshot type dispersion ambiguity-resolving hyperspectral imaging method

Patent Citations (5)

Publication number Priority date Publication date Assignee Title
EP3460427A1 (en) * 2017-09-25 2019-03-27 Korea Advanced Institute of Science and Technology Method for reconstructing hyperspectral image using prism and system therefor
CN109697697A (en) * 2019-03-05 2019-04-30 北京理工大学 The reconstructing method of the spectrum imaging system of neural network based on optimization inspiration
CN110490937A (en) * 2019-07-15 2019-11-22 南京大学 A kind of method and device thereof for accelerating EO-1 hyperion video to rebuild
CN110717947A (en) * 2019-09-25 2020-01-21 北京理工大学 High-quality spectrum reconstruction method based on external and internal training
CN111174912A (en) * 2020-01-03 2020-05-19 南京大学 Snapshot type dispersion ambiguity-resolving hyperspectral imaging method

Non-Patent Citations (3)

Title
"Dispersion characterization and pulse prediction with machine learning"; Sanjaya Lohani et al.; OSA Continuum; 15 Dec. 2019; Vol. 2, No. 12; pp. 3438-3445 *
"Hyperspectral data mining method based on fuzzy neural network"; Li Junbing et al.; Journal of Xiaogan University; 30 Dec. 2003; Vol. 23, No. 6; pp. 68-71 *
"Research on deep learning in hyperspectral imaging algorithms"; Zhang Liqing; China Masters' Theses Full-text Database, Engineering Science and Technology II; 15 May 2022; No. 05; C028-167 *

Also Published As

Publication number Publication date
CN112950736A (en) 2021-06-11

Similar Documents

Publication Publication Date Title
CN108550115B (en) Image super-resolution reconstruction method
CN111047515B (en) Attention mechanism-based cavity convolutional neural network image super-resolution reconstruction method
CN111402129B (en) Binocular stereo matching method based on joint up-sampling convolutional neural network
CN109389556B (en) Multi-scale cavity convolutional neural network super-resolution reconstruction method and device
CN109886871B (en) Image super-resolution method based on channel attention mechanism and multi-layer feature fusion
CN111784602B (en) Method for generating countermeasure network for image restoration
CN108765296B (en) Image super-resolution reconstruction method based on recursive residual attention network
CN110020989B (en) Depth image super-resolution reconstruction method based on deep learning
CN110599401A (en) Remote sensing image super-resolution reconstruction method, processing device and readable storage medium
CN113222823B (en) Hyperspectral image super-resolution method based on mixed attention network fusion
CN111161146B (en) Coarse-to-fine single-image super-resolution reconstruction method
CN112215755B (en) Image super-resolution reconstruction method based on back projection attention network
Nan et al. Single image super-resolution reconstruction based on the ResNeXt network
Luo et al. Lattice network for lightweight image restoration
CN111709882B (en) Super-resolution fusion calculation method based on sub-pixel convolution and feature segmentation
CN114862731B (en) Multi-hyperspectral image fusion method guided by low-rank priori and spatial spectrum information
Li et al. Underwater image high definition display using the multilayer perceptron and color feature-based SRCNN
CN113902658B (en) RGB image-to-hyperspectral image reconstruction method based on dense multiscale network
CN114841856A (en) Image super-pixel reconstruction method of dense connection network based on depth residual channel space attention
Hu et al. Hyperspectral image super resolution based on multiscale feature fusion and aggregation network with 3-D convolution
CN113744136A (en) Image super-resolution reconstruction method and system based on channel constraint multi-feature fusion
CN112950736B (en) Hyperspectral imaging method based on unsupervised network de-dispersion blurring
CN110599495B (en) Image segmentation method based on semantic information mining
CN113008371B (en) Hyperspectral imaging method for deep learning dispersion-based fuzzy solution
CN116486074A (en) Medical image segmentation method based on local and global context information coding

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant