CN114972022B - Fusion hyperspectral super-resolution method and system based on unaligned RGB image

Fusion hyperspectral super-resolution method and system based on unaligned RGB image

Info

Publication number
CN114972022B
CN114972022B (application CN202210401740.0A)
Authority
CN
China
Prior art keywords
image
resolution
hyperspectral
rgb
hyperspectral image
Prior art date
Legal status
Active
Application number
CN202210401740.0A
Other languages
Chinese (zh)
Other versions
CN114972022A (en)
Inventor
付莹
赖泽强
Current Assignee
Beijing Institute of Technology BIT
Original Assignee
Beijing Institute of Technology BIT
Priority date
Filing date
Publication date
Application filed by Beijing Institute of Technology BIT filed Critical Beijing Institute of Technology BIT
Priority to CN202210401740.0A
Publication of CN114972022A
Application granted
Publication of CN114972022B


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4053 Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • G06T3/4076 Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution using the original low-resolution images to iteratively correct the high-resolution images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/269 Analysis of motion using gradient-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10024 Color image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A40/00 Adaptation technologies in agriculture, forestry, livestock or agroalimentary production
    • Y02A40/10 Adaptation technologies in agriculture, forestry, livestock or agroalimentary production in agriculture

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a fusion hyperspectral super-resolution method and system based on unaligned RGB images, and belongs to the technical field of super-resolution imaging. First, based on deep learning theory, a deep RGB image feature extractor and a hyperspectral image feature extractor are constructed for the RGB image and the hyperspectral image, respectively. The two feature extractors are used to extract multi-level features of the RGB reference image and of the hyperspectral image. A multi-level deep optical flow estimation network then aligns the multi-level features of the RGB reference image with those of the hyperspectral image. After the aligned RGB image features and hyperspectral image features are obtained, a deep adaptive feature decoder is constructed to decode the aligned features and reconstruct a high-resolution hyperspectral image. The method requires no explicit intermediate steps and no manual intervention: using only a hyperspectral camera, an RGB camera, and the necessary fixture, it performs spatial super-resolution of the low-resolution hyperspectral image by means of the captured unaligned high-resolution RGB image.

Description

Fusion hyperspectral super-resolution method and system based on unaligned RGB image
Technical Field
The invention relates to a fusion hyperspectral super-resolution method and system based on unaligned RGB images, and belongs to the technical field of super-resolution imaging in computational photography.
Background
Unlike conventional black-and-white and RGB images, hyperspectral images divide the spectral dimension much more finely and may contain hundreds to thousands of bands, so they capture not only the spatial features of an object but also its spectral features. This property makes hyperspectral images extremely useful in a variety of detection fields, because different materials leave unique "spectral fingerprints" in the electromagnetic spectrum that can be used to identify the constituent components of an object. For example, the spectral features of petroleum can help mineralogists find oil fields.
Existing hyperspectral imaging devices often rely on large numbers of high-sensitivity sensors, high-speed computers, and mass storage to capture hyperspectral images, which makes hyperspectral imaging systems very complex and expensive. To reduce cost, existing commercial hyperspectral cameras often sacrifice part of the spatial resolution while preserving spectral resolution.
Hyperspectral super-resolution aims to improve the spatial resolution of hyperspectral images in software. Existing techniques can be divided into two types by their input. The first takes only a single low-resolution hyperspectral image and reconstructs the missing high-frequency details algorithmically to improve spatial resolution; this is commonly called hyperspectral single-image super-resolution. The second takes both a low-resolution hyperspectral image and a paired high-resolution RGB image, and uses the high-resolution spatial information of the RGB image to assist the hyperspectral super-resolution; this is called hyperspectral fusion.
Most existing hyperspectral single-image super-resolution algorithms rely on deep learning. Such algorithms typically use a carefully designed deep nonlinear neural network to model the mapping from low-resolution hyperspectral images to high-resolution hyperspectral images, then optimize the network parameters on relevant data with an appropriate loss function so that the learned mapping approaches the true one. These methods tend to achieve quite good results at small magnifications (less than four times). At larger magnifications (more than four times), however, single-image super-resolution algorithms do not achieve satisfactory results.
Hyperspectral fusion methods use a paired high-resolution RGB image as an aid to hyperspectral super-resolution. Some of these approaches design various prior constraints within an optimization framework, while others are based on deep learning. Thanks to the paired high-resolution RGB image, they achieve better results than hyperspectral single-image super-resolution methods at high magnifications. Their main disadvantage is that most existing methods rely on the RGB image and the hyperspectral image being precisely aligned; if the RGB reference image is not aligned with the hyperspectral image, the super-resolution quality of such methods is compromised.
Acquiring a precisely aligned high-resolution RGB reference image is not easy in practical applications: the input light path usually has to be split into two by a beam splitter and then imaged simultaneously by the hyperspectral camera and the RGB camera. The whole imaging system must also be precisely calibrated to achieve precise alignment. These requirements greatly increase the complexity and cost of the system, and the beam splitter further reduces the brightness of the input light path, which is detrimental to hyperspectral imaging.
Therefore, to improve the imaging quality of hyperspectral images, reduce overall system cost, and broaden the application scenarios of hyperspectral imaging, a hyperspectral fusion super-resolution method and system that maintain good performance even when the high-resolution RGB reference image and the low-resolution hyperspectral image are not perfectly aligned are urgently needed.
Disclosure of Invention
To overcome the defects of the prior art and reduce the dependence of existing hyperspectral fusion super-resolution techniques on precisely aligned RGB reference images, the invention creatively provides a fusion hyperspectral super-resolution method and system based on unaligned RGB images.
The invention is realized by adopting the following technical scheme.
A fusion hyperspectral super-resolution method based on unaligned RGB images takes a low-resolution hyperspectral image and a corresponding high-resolution RGB reference image as input, and does not require complete alignment of the RGB reference image with the hyperspectral image.
The hyperspectral super-resolution method improves the spatial resolution of hyperspectral images. The image to be processed is called the low-resolution hyperspectral image, and the image after super-resolution is called the relatively high-resolution hyperspectral image. The high-resolution RGB image has the same resolution as the high-resolution hyperspectral image. The resolution gap between high and low resolution depends on the scaling factor specified at model run time.
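The resolution relationships above can be sketched as follows; the scale factor of 4, the 31 bands, and the 512x512 size are illustrative assumptions, not values fixed by the method.

```python
import numpy as np

scale = 4          # assumed scaling factor, specified at model run time
bands = 31         # assumed number of hyperspectral bands
H, W = 512, 512    # assumed spatial size of the high-resolution images

lr_hsi = np.zeros((bands, H // scale, W // scale))  # low-resolution hyperspectral input
hr_rgb = np.zeros((3, H, W))                        # unaligned high-resolution RGB reference
hr_hsi = np.zeros((bands, H, W))                    # reconstructed high-resolution output

# The output keeps the spectral resolution of the hyperspectral input and
# the spatial resolution of the RGB reference.
assert hr_hsi.shape == (lr_hsi.shape[0], hr_rgb.shape[1], hr_rgb.shape[2])
```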
Step 1: constructing a neural network;
First, based on deep learning theory, a deep RGB image feature extractor and a hyperspectral image feature extractor are constructed for the RGB image and the hyperspectral image, respectively; the two feature extractors are used to extract multi-level features of the RGB reference image and of the hyperspectral image;
Then, a multi-level deep optical flow estimation network is used to align the multi-level features of the RGB reference image with those of the hyperspectral image;
After the aligned RGB image features and hyperspectral image features are obtained, a deep adaptive feature decoder is constructed to decode the aligned features and reconstruct a high-resolution hyperspectral image;
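The alignment step warps the RGB features toward the hyperspectral features using the estimated optical flow. Below is a minimal NumPy sketch of such a warp; the function name `warp_features` and the bilinear sampling are illustrative assumptions (a real network would do this differentiably, e.g. with a grid-sampling operator).

```python
import numpy as np

def warp_features(feat, flow):
    """Warp a feature map feat (C, H, W) by a per-pixel flow (2, H, W):
    each output pixel (y, x) is sampled bilinearly from
    (y + flow[0, y, x], x + flow[1, y, x]) in the source map."""
    C, H, W = feat.shape
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    sy = np.clip(ys + flow[0], 0, H - 1)   # sample coordinates, clamped to the image
    sx = np.clip(xs + flow[1], 0, W - 1)
    y0, x0 = np.floor(sy).astype(int), np.floor(sx).astype(int)
    y1, x1 = np.minimum(y0 + 1, H - 1), np.minimum(x0 + 1, W - 1)
    wy, wx = sy - y0, sx - x0              # bilinear interpolation weights
    return ((1 - wy) * (1 - wx) * feat[:, y0, x0]
            + (1 - wy) * wx * feat[:, y0, x1]
            + wy * (1 - wx) * feat[:, y1, x0]
            + wy * wx * feat[:, y1, x1])

# Sanity check: zero flow leaves the features unchanged.
feat = np.arange(2 * 4 * 4, dtype=float).reshape(2, 4, 4)
assert np.allclose(warp_features(feat, np.zeros((2, 4, 4))), feat)
```

In the method, such a warp is applied at every feature level, with a separate flow field estimated per level.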
Step 2: training;
The trainable parameters of the neural network constructed in step 1 are iteratively trained and stored using the processed unaligned hyperspectral fusion data set;
The data set is prepared as follows: the image pairs are aligned using an alignment algorithm based on SIFT and RANSAC, and a synthetic RGB image corresponding to the hyperspectral image is synthesized using a spectral response function; the synthetic RGB image is color-matched to the unaligned reference RGB image using a histogram-based color matching algorithm; the collected high-resolution hyperspectral image is downsampled to obtain a synthetic low-resolution hyperspectral image; the processed data serve as training data;
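The spectral-response synthesis and downsampling steps can be sketched as follows; the random spectral response function (SRF) and the 4x block-average downsampling are illustrative assumptions (a real SRF comes from the RGB camera's calibration data).

```python
import numpy as np

# Hypothetical SRF: each RGB channel is a weighted sum over the spectral bands.
bands, H, W = 31, 8, 8
rng = np.random.default_rng(0)
hsi = rng.random((bands, H, W))          # high-resolution hyperspectral image
srf = rng.random((3, bands))
srf /= srf.sum(axis=1, keepdims=True)    # each channel's weights sum to 1

# Synthesize the RGB image corresponding to the hyperspectral image.
synthetic_rgb = np.tensordot(srf, hsi, axes=([1], [0]))   # shape (3, H, W)

# Downsample the high-resolution HSI (here: 4x4 block averaging) to obtain
# the synthetic low-resolution hyperspectral training input.
scale = 4
lr_hsi = hsi.reshape(bands, H // scale, scale, W // scale, scale).mean(axis=(2, 4))

assert synthetic_rgb.shape == (3, H, W)
assert lr_hsi.shape == (bands, H // scale, W // scale)
```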
Step 3: use stage: using the model parameters obtained in the training stage, a corresponding high-resolution hyperspectral image is predicted from the input hyperspectral image and the RGB reference image;
During training, first, the synthetic low-resolution hyperspectral image and the paired RGB image are input into the deep neural network to obtain a predicted high-resolution hyperspectral image; then, the mean squared error loss is computed between the predicted image and the real high-resolution image of the training scene; next, backpropagation is used to compute the gradients of all nodes of the deep neural network, and a parameter optimizer updates the network parameters; each sample in the data set is reused, and the update is repeated until the loss falls below a set threshold.
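The iteration just described (forward pass, MSE loss, gradient computation, optimizer update, repeat until the loss falls below a threshold) can be sketched with a linear stand-in model; the stand-in data and the plain gradient-descent update are illustrative assumptions, not the patented deep network.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((64, 8))        # stand-in for the network inputs
true_w = rng.random((8, 4))
Y = X @ true_w                 # stand-in for the real high-resolution images

w = np.zeros((8, 4))           # trainable parameters of the stand-in "network"
lr, threshold = 0.5, 1e-6
loss = np.inf
for step in range(20000):
    pred = X @ w                       # forward pass: predicted image
    err = pred - Y
    loss = np.mean(err ** 2)           # mean squared error loss
    if loss < threshold:               # stop once the loss is below the threshold
        break
    grad = 2.0 * X.T @ err / err.size  # gradient of the loss w.r.t. w
    w -= lr * grad                     # parameter-optimizer update (plain SGD)

assert loss < threshold
```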
Furthermore, in order to implement the method effectively, the invention provides a fusion hyperspectral super-resolution system based on unaligned RGB images, which comprises a data acquisition subsystem, a data processing subsystem, a training subsystem, and an inference subsystem.
The data acquisition subsystem is used for acquiring paired unaligned high-resolution RGB images and low-resolution hyperspectral images. These data will be used for training.
Optionally, the data acquisition subsystem comprises a hyperspectral camera, an RGB camera and a camera fixture, wherein the hyperspectral camera and the RGB camera are fixed on the fixture in parallel. By adjusting the angles and focal lengths of the two cameras, the two cameras can shoot clear images containing the same scene.
The data processing subsystem is used to process the unaligned image pairs acquired by the data acquisition subsystem. Specifically, the processing may include: aligning the image pairs using an alignment algorithm based on SIFT (Scale-Invariant Feature Transform) and RANSAC (RANdom SAmple Consensus), and synthesizing the synthetic RGB image corresponding to the hyperspectral image using a spectral response function; color-matching the synthetic RGB image to the unaligned reference RGB image using a histogram-based color matching algorithm; downsampling the collected high-resolution hyperspectral image to obtain a synthetic low-resolution hyperspectral image; and taking the processed data as training data.
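The histogram-based color matching step can be sketched as classic quantile matching, applied per RGB channel; the function `match_histogram` below is an illustrative assumption, not the exact algorithm used by the system.

```python
import numpy as np

def match_histogram(source, reference):
    """Quantile-based histogram matching: remap the values of `source` so
    their empirical distribution matches that of `reference`."""
    src = source.ravel()
    ref = np.sort(reference.ravel())
    ranks = np.argsort(np.argsort(src))          # rank of each source pixel
    quantiles = ranks / max(src.size - 1, 1)     # rank -> quantile in [0, 1]
    matched = ref[np.round(quantiles * (ref.size - 1)).astype(int)]
    return matched.reshape(source.shape)

rng = np.random.default_rng(0)
synthetic = rng.normal(0.3, 0.05, (16, 16))      # stand-in synthetic RGB channel
reference = rng.normal(0.6, 0.10, (16, 16))      # stand-in reference RGB channel
matched = match_histogram(synthetic, reference)

# The matched channel now has exactly the reference's value distribution.
assert np.allclose(np.sort(matched.ravel()), np.sort(reference.ravel()))
```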
The training subsystem uses the training data processed by the data processing subsystem to train the deep neural network model. Specifically, first, the synthetic low-resolution hyperspectral image and the paired RGB image are input into the deep neural network to obtain a predicted high-resolution hyperspectral image. Then, the mean squared error loss is computed between the predicted image and the true high-resolution image of the training scene. After the loss is computed, backpropagation is used to compute the gradients of all nodes of the deep neural network, and a parameter optimizer updates the network parameters. Each sample in the data set is reused, and the update is repeated until the loss falls below a set threshold.
The inference subsystem performs inference with the trained deep neural network model; its inputs are a low-resolution hyperspectral image and an unaligned paired high-resolution RGB image from a practical application scene. The inference subsystem does not need to be retrained for each inference; the same deep neural network model is used every time.
Advantageous effects
Compared with the prior art, the invention has the following advantages:
1. The invention is an end-to-end solution: in the use stage, the low-resolution hyperspectral image and the high-resolution RGB reference image are input directly, with no explicit intermediate steps and no manual intervention.
2. Without special equipment, using only a hyperspectral camera, an RGB camera, and the necessary fixture, the invention achieves spatial super-resolution of the low-resolution hyperspectral image by means of the captured unaligned high-resolution RGB image.
Drawings
FIG. 1 is a schematic diagram of a core algorithm model of the method of the present invention.
Fig. 2 is a schematic diagram of the composition of the system of the present invention.
Detailed Description
For a better description of the objects and advantages of the present invention, the invention is further described below with reference to the accompanying drawings and examples.
Traditional RGB-guided hyperspectral fusion super-resolution methods generally use manually designed prior constraints to predict the complete spectral information of each pixel of the RGB image from the spectral characteristics of the hyperspectral image. In recent years, deep learning has pushed this class of methods in a data-driven direction: existing hyperspectral fusion super-resolution techniques model the mapping from RGB information to spectral information with a specifically designed deep neural network and fit it on large amounts of training data. However, both kinds of methods assume that the RGB image is perfectly aligned with the spectral image. Given how difficult perfect alignment is to achieve in practice, this assumption greatly limits their application.
The fusion hyperspectral super-resolution method based on unaligned RGB images provided by this embodiment uses, based on deep learning theory, a high-resolution RGB reference image to assist the spatial super-resolution of a low-resolution hyperspectral image, without requiring the RGB image and the hyperspectral image to be completely aligned, as shown in fig. 1.
The present embodiment includes a network construction phase, a training phase, and a use phase.
Step 1: constructing a neural network;
First, based on deep learning theory, a deep RGB image feature extractor and a hyperspectral image feature extractor are constructed for the RGB image and the hyperspectral image, respectively; the two feature extractors are used to extract multi-level features of the RGB reference image and of the hyperspectral image;
Then, a multi-level deep optical flow estimation network is used to align the multi-level features of the RGB reference image with those of the hyperspectral image;
After the aligned RGB image features and hyperspectral image features are obtained, a deep adaptive feature decoder is constructed to decode the aligned features and reconstruct a high-resolution hyperspectral image;
Step 2: training;
The trainable parameters of the neural network constructed in step 1 are iteratively trained and stored using the processed unaligned hyperspectral fusion data set;
The data set is prepared as follows: the image pairs are aligned using an alignment algorithm based on SIFT and RANSAC, and a synthetic RGB image corresponding to the hyperspectral image is synthesized using a spectral response function; the synthetic RGB image is color-matched to the unaligned reference RGB image using a histogram-based color matching algorithm; the collected high-resolution hyperspectral image is downsampled to obtain a synthetic low-resolution hyperspectral image; the processed data serve as training data;
Step 3: use stage: using the model parameters obtained in the training stage, a corresponding high-resolution hyperspectral image is predicted from the input hyperspectral image and the RGB reference image;
During training, first, the synthetic low-resolution hyperspectral image and the paired RGB image are input into the deep neural network to obtain a predicted high-resolution hyperspectral image; then, the mean squared error loss is computed between the predicted image and the real high-resolution image of the training scene; next, backpropagation is used to compute the gradients of all nodes of the deep neural network, and a parameter optimizer updates the network parameters; each sample in the data set is reused, and the update is repeated until the loss falls below a set threshold.
Examples
This embodiment discloses a fusion hyperspectral super-resolution system based on unaligned RGB images, comprising a data acquisition subsystem, a data processing subsystem, a training subsystem, and an inference subsystem, as shown in fig. 2.
The data acquisition subsystem is used for acquiring paired unaligned high-resolution RGB images and low-resolution hyperspectral images. These data will be used for training.
Optionally, the data acquisition subsystem comprises a hyperspectral camera, an RGB camera and a camera fixture, wherein the hyperspectral camera and the RGB camera are fixed on the fixture in parallel. By adjusting the angles and focal lengths of the two cameras, the two cameras can shoot clear images containing the same scene.
The data processing subsystem is used to process the unaligned image pairs acquired by the data acquisition subsystem. Specifically, the processing may include: aligning the image pairs using an alignment algorithm based on SIFT (Scale-Invariant Feature Transform) and RANSAC (RANdom SAmple Consensus), and synthesizing the synthetic RGB image corresponding to the hyperspectral image using a spectral response function; color-matching the synthetic RGB image to the unaligned reference RGB image using a histogram-based color matching algorithm; downsampling the collected high-resolution hyperspectral image to obtain a synthetic low-resolution hyperspectral image; and taking the processed data as training data.
The training subsystem uses the training data processed by the data processing subsystem to train the deep neural network model. Specifically, first, the synthetic low-resolution hyperspectral image and the paired RGB image are input into the deep neural network to obtain a predicted high-resolution hyperspectral image. Then, the mean squared error loss is computed between the predicted image and the true high-resolution image of the training scene. After the loss is computed, backpropagation is used to compute the gradients of all nodes of the deep neural network, and a parameter optimizer updates the network parameters. Each sample in the data set is reused, and the update is repeated until the loss falls below a set threshold.
The inference subsystem performs inference with the trained deep neural network model; its inputs are a low-resolution hyperspectral image and an unaligned paired high-resolution RGB image from a practical application scene. The inference subsystem does not need to be retrained for each inference; the same deep neural network model is used every time.
The subsystems are connected as follows: the output of the data acquisition subsystem is connected to the input of the data processing subsystem, and the latter processes the data acquired by the former. The output of the data processing subsystem is connected to the input of the training subsystem, which receives the processed data and completes model training. The output of the training subsystem is connected to the input of the inference subsystem, which uses the trained model for inference in actual deployment.
The working process of the system is as follows:
Step 1: using the data acquisition subsystem, a misaligned hyperspectral image and RGB reference image pair is acquired.
Step 2: using the data processing subsystem, process the data acquired by the data acquisition subsystem: perform preliminary alignment, crop out the common area, normalize, and format the data to build a data set.
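The crop-and-normalize part of this step can be sketched as follows; the `bounds` argument (the pixel extent of the common area, assumed to come from the preliminary alignment) and the min-max normalization are illustrative assumptions.

```python
import numpy as np

def crop_and_normalize(img, bounds):
    """Crop an image (C, H, W) to the common area bounds = (y0, y1, x0, x1)
    and min-max normalize the crop into [0, 1]."""
    y0, y1, x0, x1 = bounds
    patch = img[:, y0:y1, x0:x1].astype(float)
    lo, hi = patch.min(), patch.max()
    if hi == lo:                      # constant patch: avoid division by zero
        return np.zeros_like(patch)
    return (patch - lo) / (hi - lo)

rng = np.random.default_rng(0)
rgb = rng.integers(0, 256, (3, 12, 12))           # stand-in RGB frame
common = crop_and_normalize(rgb, (2, 10, 1, 11))  # assumed overlap region

assert common.shape == (3, 8, 10)
assert common.min() == 0.0 and common.max() == 1.0
```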
Step 3: the data set is sent to the training subsystem, which trains the fusion hyperspectral super-resolution network model that aligns the unaligned RGB images via optical flow.
Step 4: the inference subsystem uses the trained model to run inference and prediction on the hyperspectral image and RGB reference image of the actual scene, obtaining the predicted high-resolution hyperspectral image.
While the foregoing is directed to embodiments of the present invention, other and further embodiments of the invention may be devised without departing from the basic scope thereof, and the scope thereof is determined by the claims that follow.

Claims (1)

1. A fusion hyperspectral super-resolution method based on unaligned RGB images, characterized in that:
the image to be processed is called the low-resolution hyperspectral image, and the image after super-resolution is called the relatively high-resolution hyperspectral image; the high-resolution RGB image and the high-resolution hyperspectral image have the same resolution; the resolution gap between high resolution and low resolution depends on the scaling factor specified at model run time;
the method takes a low-resolution hyperspectral image and its corresponding high-resolution RGB reference image as input, and does not require complete alignment of the RGB reference image with the hyperspectral image;
Step 1: constructing a neural network;
first, based on deep learning theory, a deep RGB image feature extractor and a hyperspectral image feature extractor are constructed for the RGB image and the hyperspectral image, respectively; the two feature extractors are used to extract multi-level features of the RGB reference image and of the hyperspectral image;
then, a multi-level deep optical flow estimation network is used to align the multi-level features of the RGB reference image with those of the hyperspectral image;
after the aligned RGB image features and hyperspectral image features are obtained, a deep adaptive feature decoder is constructed to decode the aligned features and reconstruct a high-resolution hyperspectral image;
Step 2: training;
the trainable parameters of the neural network constructed in step 1 are iteratively trained and stored using the processed unaligned hyperspectral fusion data set;
the data set is prepared as follows: the image pairs are aligned using an alignment algorithm based on SIFT and RANSAC, and a synthetic RGB image corresponding to the hyperspectral image is synthesized using a spectral response function; the synthetic RGB image is color-matched to the unaligned reference RGB image using a histogram-based color matching algorithm; the collected high-resolution hyperspectral image is downsampled to obtain a synthetic low-resolution hyperspectral image; the processed data serve as training data;
Step 3: use stage: using the model parameters obtained in the training stage, a corresponding high-resolution hyperspectral image is predicted from the input hyperspectral image and the RGB reference image;
during training, first, the synthetic low-resolution hyperspectral image and the paired RGB image are input into the deep neural network to obtain a predicted high-resolution hyperspectral image; then, the mean squared error loss is computed between the predicted image and the real high-resolution image of the training scene; next, backpropagation is used to compute the gradients of all nodes of the deep neural network, and a parameter optimizer updates the network parameters; each sample in the data set is reused, and the update is repeated until the loss falls below a set threshold.
CN202210401740.0A 2022-04-18 2022-04-18 Fusion hyperspectral super-resolution method and system based on unaligned RGB image Active CN114972022B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210401740.0A CN114972022B (en) 2022-04-18 2022-04-18 Fusion hyperspectral super-resolution method and system based on unaligned RGB image


Publications (2)

Publication Number Publication Date
CN114972022A CN114972022A (en) 2022-08-30
CN114972022B true CN114972022B (en) 2024-06-07

Family

ID=82978259

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210401740.0A Active CN114972022B (en) 2022-04-18 2022-04-18 Fusion hyperspectral super-resolution method and system based on unaligned RGB image

Country Status (1)

Country Link
CN (1) CN114972022B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115479906A (en) * 2022-09-27 2022-12-16 同济大学 Broken plastic and micro-plastic detection method based on RGB and hyperspectral image fusion
CN116433551B (en) * 2023-06-13 2023-08-22 湖南大学 High-resolution hyperspectral imaging method and device based on double-light-path RGB fusion

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111598138A (en) * 2020-04-24 2020-08-28 山东易华录信息技术有限公司 Neural network learning image identification method and device
CN112184560A (en) * 2020-12-02 2021-01-05 南京理工大学 Hyperspectral image super-resolution optimization method based on deep closed-loop neural network
JPWO2022064901A1 (en) * 2020-09-28 2022-03-31

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10891527B2 (en) * 2019-03-19 2021-01-12 Mitsubishi Electric Research Laboratories, Inc. Systems and methods for multi-spectral image fusion using unrolled projected gradient descent and convolutional neural network

Similar Documents

Publication Publication Date Title
Chen et al. Real-world single image super-resolution: A brief review
Zheng et al. Crossnet: An end-to-end reference-based super resolution network using cross-scale warping
CN114972022B (en) Fusion hyperspectral super-resolution method and system based on unaligned RGB image
Biasutti et al. Lu-net: An efficient network for 3d lidar point cloud semantic segmentation based on end-to-end-learned 3d features and u-net
Joze et al. Imagepairs: Realistic super resolution dataset via beam splitter camera rig
Yu et al. Hallucinating unaligned face images by multiscale transformative discriminative networks
CN105678728A (en) High-efficiency super-resolution imaging device and method with regional management
Albluwi et al. Image deblurring and super-resolution using deep convolutional neural networks
CN111861880A (en) Image super-fusion method based on regional information enhancement and block self-attention
CN111105354A (en) Depth image super-resolution method and device based on multi-source depth residual error network
Shangguan et al. Learning cross-video neural representations for high-quality frame interpolation
Ziwei et al. Overview on image super resolution reconstruction
CN113763300B (en) Multi-focusing image fusion method combining depth context and convolution conditional random field
CN110378850A (en) A kind of zoom image generation method of combination Block- matching and neural network
CN110060208A (en) A method of improving super-resolution algorithms reconstruction property
CN112734636A (en) Fusion method of multi-source heterogeneous remote sensing images
Zhao et al. A simple yet effective pipeline for radial distortion correction
Tu et al. RGTGAN: Reference-Based Gradient-Assisted Texture-Enhancement GAN for Remote Sensing Super-Resolution
CN110895790A (en) Scene image super-resolution method based on posterior degradation information estimation
Xiu et al. Double discriminative face super-resolution network with facial landmark heatmaps
WO2022252362A1 (en) Geometry and texture based online matching optimization method and three-dimensional scanning system
Dong et al. Learning Multi-Modal Cross-Scale Deformable Transformer Network for Unregistered Hyperspectral Image Super-resolution
Wang et al. A Lightweight Recurrent Aggregation Network for Satellite Video Super-Resolution
Lu et al. Event Camera Demosaicing via Swin Transformer and Pixel-focus Loss
Wang et al. Joint Defocus Deblurring and Superresolution Learning Network for Autonomous Driving

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant