CN117197269A - Hyperspectral image simulation method based on deep learning model

Info

Publication number
CN117197269A
Authority
CN
China
Prior art keywords
hyperspectral
multispectral
value
image
spectrum
Prior art date
Legal status
Pending
Application number
CN202311113098.7A
Other languages
Chinese (zh)
Inventor
田晓敏
李朋
金永涛
杨健
顾行发
杨秀峰
占玉林
李国洪
余涛
王延仓
宋玉彬
李浩
胡嘉欣
Current Assignee
Aerospace Information Research Institute of CAS
North China Institute of Aerospace Engineering
Original Assignee
Aerospace Information Research Institute of CAS
North China Institute of Aerospace Engineering
Priority date
Filing date
Publication date
Application filed by Aerospace Information Research Institute of CAS, North China Institute of Aerospace Engineering filed Critical Aerospace Information Research Institute of CAS
Priority to CN202311113098.7A
Publication of CN117197269A
Legal status: Pending

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a hyperspectral image simulation method based on a deep learning model, relating to the technical field of earth observation and remote sensing. End-member hyperspectral reflectances are extracted from an existing hyperspectral image to construct a hyperspectral spectral library. Using the multispectral spectral response function, the end-member multispectral reflectances of different ground objects are calculated with the same number of bands as the multispectral sensor to construct a multispectral spectral library. An attention mechanism is added to preprocess the multispectral image data, with the multispectral spectral library and the hyperspectral spectral library serving as the query, key and value of the attention mechanism, so that the input multispectral image data are optimized. A neural network then optimizes the details of the simulated spectral image obtained after the linear combination of the end members, achieving a very good simulation effect.

Description

Hyperspectral image simulation method based on deep learning model
Technical Field
The invention relates to the technical field of earth observation and remote sensing, in particular to a hyperspectral image simulation method based on a deep learning model.
Background
Hyperspectral remote sensing data have higher spectral resolution and contain richer information. However, owing to limitations of observation conditions, a hyperspectral image of a required region often cannot be obtained, so simulating hyperspectral remote sensing images from multispectral images has long been one of the most important research directions in the remote sensing field. Existing models show a large gap between the ground-spectrometer data used by their algorithms and satellite-borne data, and it is difficult for them to accurately restore the spectral curve of a real satellite-borne hyperspectral image.
Disclosure of Invention
The invention aims to provide a hyperspectral image simulation method based on a deep learning model, so as to solve the problems described in the background art.
A hyperspectral image simulation method based on a deep learning model comprises the following steps:
s1-1, hyperspectral image data and multispectral image data are obtained;
s1-2, constructing a hyperspectral spectrum library and a multispectral spectrum library;
s1-3, inputting the preprocessed data into a neural network layer NET to obtain simulated hyperspectral reflectivity;
s1-4, obtaining a simulated hyperspectral image according to the simulated hyperspectral reflectivity, and ending.
Specifically, in step s1-2, the construction of the hyperspectral spectrum library and the multispectral spectrum library includes the following steps:
s2-1, clustering hyperspectral images into N types of ground objects, extracting class centers of the N types of ground objects as end member hyperspectral reflectivity of the ground objects, and constructing a hyperspectral spectrum library;
s2-2, taking the spectrum in the hyperspectral spectrum library as input, and utilizing a multispectral spectral response function to obtain the end member multispectral reflectivities of different ground objects with the same multispectral wave band number through calculation so as to construct the multispectral spectrum library;
S2-3, the reflectances of the multispectral image pixels are taken as queries q_t, t = 1, …, n, where n is the number of multispectral pixel reflectance records; the multispectral end-member spectral library and the hyperspectral spectral library are used as key-value pairs, giving m key-value pairs (k_1, v_1), …, (k_m, v_m), where m is the number of end-member spectra in the multispectral end-member spectral library and the hyperspectral spectral library.
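For illustration, a minimal NumPy sketch of how the query and key-value data of step S2-3 might be organised is given below; the array shapes and the names ms_image, ms_library and hs_library are assumptions made for the example, not part of the patent.

```python
# Illustrative sketch (assumed shapes and names) of the query / key-value
# organisation described in step S2-3.
import numpy as np

ms_bands, hs_bands = 4, 100                    # assumed band counts
m = 8                                          # assumed number of end-member spectra
h, w = 64, 64                                  # assumed multispectral image size

ms_image = np.random.rand(h, w, ms_bands)      # stand-in multispectral image
hs_library = np.random.rand(m, hs_bands)       # end-member hyperspectral reflectances (values v_i)
ms_library = np.random.rand(m, ms_bands)       # end-member multispectral reflectances (keys k_i)

# Queries q_t: one multispectral reflectance vector per pixel, t = 1..n.
queries = ms_image.reshape(-1, ms_bands)       # shape (n, ms_bands)
n = queries.shape[0]

# m key-value pairs (k_1, v_1), ..., (k_m, v_m).
key_value_pairs = list(zip(ms_library, hs_library))
print(n, len(key_value_pairs))                 # n pixels, m pairs
```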
Specifically, in step S2-1, the K-means clustering method is selected to cluster the hyperspectral image into N clusters; the class center of each cluster is extracted as the end-member spectrum of that class, and N is a positive integer;
in step S2-2, the spectral response function of the multispectral band is:
L_B = ∫ β(λ) · L(λ) dλ / ∫ β(λ) dλ
where L_B and L(λ) are respectively the remote-sensing signal energy values (radiance or reflectance) of the image band and of the finer bands in the spectral library, and β(λ) represents the weight of the spectral response function at the corresponding wavelength; β(λ) is obtained by experimental measurement or simulation. By this method the spectral energy can be redistributed over different band ranges.
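As a rough numerical illustration of the band averaging above, the sketch below convolves a fine spectrum with an assumed Gaussian β(λ); the wavelength grid, band centres and widths are invented for the example and are not taken from the patent.

```python
# Numerical sketch of the spectral response function (SRF) band averaging:
# L_B = ∫ β(λ) L(λ) dλ / ∫ β(λ) dλ, discretised with the trapezoidal rule.
import numpy as np

wavelengths = np.linspace(400.0, 2500.0, 100)   # assumed library wavelength grid (nm)
hs_reflectance = np.random.rand(100)            # stand-in end-member hyperspectral spectrum L(λ)

def band_average(reflectance, wavelengths, center, fwhm):
    """Convolve a fine spectrum with a Gaussian SRF β(λ) for one multispectral band."""
    sigma = fwhm / 2.355
    beta = np.exp(-0.5 * ((wavelengths - center) / sigma) ** 2)   # assumed Gaussian β(λ)
    return np.trapz(beta * reflectance, wavelengths) / np.trapz(beta, wavelengths)

# Assumed band centres and widths of a 4-band multispectral sensor (nm).
bands = [(490, 65), (560, 35), (665, 30), (842, 115)]
ms_reflectance = np.array([band_average(hs_reflectance, wavelengths, c, f) for c, f in bands])
print(ms_reflectance)                           # one multispectral reflectance per band
```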
Specifically, in step s1-3, the following steps are further included:
S5-1, set t = 1;
S5-2, the query q_t and the key-value pairs are input into an additive attention module to obtain the weighted value α_t of the end-member spectra of the hyperspectral spectral library;
S5-3, the query q_t is feature-spliced with the weighted value to obtain the input α_t ⊙ q_t of the neural network layer NET, where ⊙ denotes addition of the corresponding elements;
S5-4, the spliced feature is input into the neural network layer NET, which comprises H linear layers and ReLU layers, to obtain the hyperspectral pixel reflectance predicted value ŷ_t; H is a positive integer, and each hyperspectral pixel reflectance predicted value corresponds to one multispectral pixel reflectance;
S5-5, the loss value between the reflectance predicted value of the hyperspectral pixel and the reflectance true value of the hyperspectral pixel is calculated, and the linear-layer parameters are updated by a small-batch gradient descent algorithm;
S5-6, t is assigned the value t + 1, and steps S5-2 to S5-6 are repeated until t = n + 1, at which point all spliced features have been used to update the parameters by the small-batch gradient descent method;
S5-7, the reflectance predicted values of all hyperspectral pixels are recalculated and the mean of the loss values between the predicted values and the true values of all hyperspectral pixels is computed; a threshold and a maximum number of iterations are set; if the mean loss value is smaller than the threshold or the maximum number of iterations is reached, model training ends and the parameters are saved; otherwise the pixel reflectance data are re-ordered and the procedure returns to S5-1.
In step S5-2, the additive attention module expression is:
α_t = Σ_{i=1}^{m} α(q_t, k_i) v_i
where α is the attention weighting function; q_t belongs to the vector space of all multispectral image reflectances, k_i to the vector space formed by all end-member spectra of the multispectral end-member spectral library, and v_i to the vector space formed by all end-member spectra of the hyperspectral spectral library.
The attention weighting function is obtained by:
S7-1, the query q_t and the key k_i are input into linear layers W_q and W_k respectively, giving two output vectors W_q q_t and W_k k_i of the same dimension; W_q and W_k are updatable parameters;
S7-2, the two output vectors are added and then passed through a tanh activation function layer;
S7-3, the result of step S7-2 is passed through a linear layer whose output feature size is the same as the feature size of the value v_i, giving the pre-normalization α(q_t, k_i); the feature size of the value v_i is equal to the number of bands of the hyperspectral image, and this linear layer is an updatable parameter;
S7-4, a softmax function is used for normalization to obtain the final α(q_t, k_i); finally, α_t = Σ_{i=1}^{m} α(q_t, k_i) v_i.
The softmax function is used to normalize all values to between 0 and 1, and all values after normalization add up to 1.
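A minimal PyTorch sketch of the additive attention scoring of steps S7-1 to S7-4 is given below; the module name, hidden dimension and band counts are assumptions, and the sketch is not the patent's reference implementation.

```python
import torch
import torch.nn as nn

class AdditiveAttention(nn.Module):
    """Additive attention sketch of S7-1..S7-4: score = linear(tanh(W_q q + W_k k)),
    softmax over the m end members, output = sum_i alpha(q, k_i) * v_i."""
    def __init__(self, query_dim, key_dim, value_dim, hidden_dim):
        super().__init__()
        self.W_q = nn.Linear(query_dim, hidden_dim, bias=False)   # updatable W_q
        self.W_k = nn.Linear(key_dim, hidden_dim, bias=False)     # updatable W_k
        # linear layer whose output size equals the value feature size (hyperspectral bands)
        self.score = nn.Linear(hidden_dim, value_dim, bias=False)

    def forward(self, q, keys, values):
        # q: (batch, query_dim); keys: (m, key_dim); values: (m, value_dim)
        scores = self.score(torch.tanh(self.W_q(q).unsqueeze(1) + self.W_k(keys)))  # (batch, m, value_dim)
        alpha = torch.softmax(scores, dim=1)                # normalise over the m end members
        return (alpha * values.unsqueeze(0)).sum(dim=1)     # weighted value alpha_t, (batch, value_dim)

# Usage with stand-in shapes: 4 multispectral bands, 100 hyperspectral bands, 8 end members.
attn = AdditiveAttention(query_dim=4, key_dim=4, value_dim=100, hidden_dim=32)
q = torch.rand(16, 4)
keys = torch.rand(8, 4)
values = torch.rand(8, 100)
print(attn(q, keys, values).shape)                          # torch.Size([16, 100])
```

Because the scoring layer outputs one value per hyperspectral band, the softmax is taken over the m end members independently for each band, which matches the value feature size described in step S7-3.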
In step S5-4, the linear layer is a full connection layer, and the calculation formula of the full connection layer is as follows:
z_i = Σ_{j=1}^{h} (U^T)_{ij} · x_j + p_i
where z_i is the output of the i-th neuron of the fully connected layer; x_j is the j-th input feature of that neuron and is taken from the input X of the fully connected layer, X being either the output of the preceding fully connected layer or the spliced feature α_t ⊙ q_t formed from the query q_t and the weighted value; h is the total number of inputs of the neuron; U^T is the weight of the fully connected layer and p represents its bias, both initialized randomly;
after passing through the full connection layer, a ReLU function is used as an activation function, and the calculation formula of the function is as follows:
ReLU(z_i) = max(0, z_i).
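A possible NET, i.e. a stack of H fully connected layers with ReLU activations, could be sketched as follows; the layer widths, the value of H and the input size are assumptions for illustration.

```python
import torch
import torch.nn as nn

def build_net(in_dim, hs_bands, hidden=256, H=3):
    """NET sketch: H linear (fully connected) layers with ReLU activations,
    mapping the spliced feature to a hyperspectral reflectance prediction."""
    layers, dim = [], in_dim
    for _ in range(H - 1):
        layers += [nn.Linear(dim, hidden), nn.ReLU()]
        dim = hidden
    layers.append(nn.Linear(dim, hs_bands))    # final layer outputs one value per hyperspectral band
    return nn.Sequential(*layers)

net = build_net(in_dim=100, hs_bands=100)      # assumed spliced-feature size and band count
print(net(torch.rand(16, 100)).shape)          # torch.Size([16, 100])
```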
the method for updating the weight and the bias of the linear layer by adopting a small batch gradient descent method comprises the following steps:
S8-1, a batch size A and a learning rate η are specified, and every A data records form one batch;
S8-2, the gradients of the weight U^T and of the bias p with respect to the loss value are computed for every data record in the batch; by the chain rule,
∂L/∂U^T = (∂L/∂y) · (∂y/∂U^T),   ∂L/∂p = (∂L/∂y) · (∂y/∂p)
where L is the loss function and y is the output of the linear layer;
S8-3, the averages of the weight gradients and of the bias gradients over all data in the batch are computed;
S8-4, the weight and the bias are updated from the averaged gradients and the learning rate:
U^T ← U^T − η · avg(∂L/∂U^T),   p ← p − η · avg(∂L/∂p).
Specifically, the loss function is the mean square error function MSELoss, with the formula:
L = (1/d) · Σ_{j=1}^{d} (ŷ_{t,j} − y_{t,j})²
where d is the number of hyperspectral bands, ŷ_t is the reflectance predicted value of the hyperspectral pixel, and y_t is the reflectance true value of the hyperspectral pixel.
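The NumPy sketch below applies small-batch (mini-batch) gradient descent with an MSE loss to a single linear layer, mirroring steps S8-1 to S8-4; the synthetic data, batch size and learning rate are example values only, and the patent's actual network contains several layers.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((1000, 100))                    # stand-in spliced features
Y = rng.random((1000, 100))                    # stand-in true hyperspectral reflectances

U = rng.normal(scale=0.01, size=(100, 100))    # weight (random initialisation)
p = np.zeros(100)                              # bias
A, eta = 200, 0.05                             # batch size and learning rate (example values)

for start in range(0, len(X), A):              # S8-1: every A records form one batch
    xb, yb = X[start:start + A], Y[start:start + A]
    pred = xb @ U + p                          # linear layer output
    err = pred - yb
    # S8-2 / S8-3: gradients of the mean squared error, averaged over the batch
    grad_U = xb.T @ (2 * err / err.size)       # d(MSE)/dU
    grad_p = (2 * err / err.size).sum(axis=0)  # d(MSE)/dp
    U -= eta * grad_U                          # S8-4: update with averaged gradients
    p -= eta * grad_p

print(float(np.mean((X @ U + p - Y) ** 2)))    # mean loss after one pass
```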
The small-batch gradient descent method uses a small part of samples in each training iteration to estimate the gradient and update parameters, so that parameter adjustment can be performed more frequently, and the convergence speed of an algorithm is increased; the small-batch gradient descent method updates parameters by using the average gradient of a small sample set, so that gradient estimation is more stable, and compared with random gradient descent of a single sample, the small-batch gradient descent can reduce variance of parameter update, thereby generating a more stable training process; the small-batch gradient descent method provides a way to introduce randomness and diversity in training, which helps to prevent the model from sinking into a locally optimal solution, improves the generalization capability of the model, and reduces the risk of overfitting.
Compared with the prior art, the invention has the following beneficial effects: the end-member spectra used to generate the simulated image are extracted from the remote-sensing image to be simulated, so that the spectral curve of the simulated image is closer to that of the real hyperspectral image; and the neural network refines the details of the simulated spectral image obtained from the linear combination of the end members, achieving a very good simulation effect.
Drawings
The accompanying drawings are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate the invention and together with the embodiments of the invention, serve to explain the invention. In the drawings:
fig. 1 is a network structure diagram of a hyperspectral image simulation method based on a deep learning model according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The hyperspectral image simulation method based on the deep learning model comprises the following steps of:
s1-1, hyperspectral image data and multispectral image data are obtained;
s1-2, constructing a hyperspectral spectrum library and a multispectral spectrum library;
s1-3, inputting the preprocessed data into a neural network layer NET to obtain simulated hyperspectral reflectivity;
s1-4, obtaining a simulated hyperspectral image according to the simulated hyperspectral reflectivity, and ending.
In step S1-2, constructing a hyperspectral spectrum library and a multispectral spectrum library comprises the following steps:
s2-1, clustering hyperspectral images into N types of ground objects, extracting class centers of the N types of ground objects as end member hyperspectral reflectivity of the ground objects, and constructing a hyperspectral spectral library;
s2-2, taking the spectrum in the hyperspectral spectrum library as input, and utilizing a multispectral spectral response function to obtain the end member multispectral reflectivities of different ground objects with the same multispectral wave band number through calculation, so as to construct the multispectral spectrum library;
S2-3, the reflectances of the multispectral image pixels are taken as queries q_t, t = 1, …, n, where n is the number of multispectral pixel reflectance records; the multispectral end-member spectral library and the hyperspectral spectral library are used as key-value pairs, giving m key-value pairs (k_1, v_1), …, (k_m, v_m), where m is the number of end-member spectra in the multispectral end-member spectral library and the hyperspectral spectral library.
In step S2-1, the K-means clustering method is selected to cluster the hyperspectral image into N clusters; the class center of each cluster is extracted as the end-member spectrum of that class, and N is a positive integer. The within-cluster sum of squared errors (SSE), or another measure of clustering error, is calculated for different numbers of clusters and a plot of clustering error against cluster number is drawn; when increasing the number of clusters no longer clearly reduces the clustering error, an obvious inflection point appears, and this elbow gives the appropriate cluster number N.
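A scikit-learn sketch of this end-member extraction and elbow check might look as follows; the random image, the cluster range and the chosen N are placeholders, not data from the patent.

```python
import numpy as np
from sklearn.cluster import KMeans

hs_image = np.random.rand(32, 32, 100)              # stand-in hyperspectral image (H, W, bands)
pixels = hs_image.reshape(-1, hs_image.shape[-1])   # one spectrum per pixel

# Elbow check: SSE (inertia) versus cluster number; pick N at the inflection point.
sse = {k: KMeans(n_clusters=k, n_init=10, random_state=0).fit(pixels).inertia_
       for k in range(2, 11)}
print(sse)

N = 8                                               # assumed elbow read from the curve above
km = KMeans(n_clusters=N, n_init=10, random_state=0).fit(pixels)
endmember_library = km.cluster_centers_             # N end-member hyperspectral reflectances
print(endmember_library.shape)                      # (N, bands)
```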
In step S2-2, the spectral response function of the multispectral band is:
L_B = ∫ β(λ) · L(λ) dλ / ∫ β(λ) dλ
where L_B and L(λ) are respectively the remote-sensing signal energy values (radiance or reflectance) of the image band and of the finer bands in the spectral library, and β(λ) represents the weight of the spectral response function at the corresponding wavelength; β(λ) is obtained by experimental measurement or simulation. By this method the spectral energy can be redistributed over different band ranges.
In step S1-3, the method further comprises the following steps:
s5-1, set t=1;
S5-2, the query q_t and the key-value pairs are input into an additive attention module to obtain the weighted value α_t of the end-member spectra of the hyperspectral spectral library;
S5-3, the query q_t is feature-spliced with the weighted value to obtain the input α_t ⊙ q_t of the neural network layer NET, where ⊙ denotes addition of the corresponding elements;
S5-4, the spliced feature is input into the neural network layer NET, which in this embodiment comprises H = 3 linear layers and a ReLU layer, to obtain the hyperspectral pixel reflectance predicted value ŷ_t; each hyperspectral pixel reflectance predicted value corresponds to one multispectral pixel reflectance;
S5-5, the loss value between the reflectance predicted value of the hyperspectral pixel and the reflectance true value of the hyperspectral pixel is calculated, and the linear-layer parameters are updated by a small-batch gradient descent algorithm;
S5-6, t is assigned the value t + 1, and steps S5-2 to S5-6 are repeated until t = n + 1, at which point all spliced features have been used to update the parameters by the small-batch gradient descent method;
S5-7, the reflectance predicted values of all hyperspectral pixels are recalculated and the mean of the loss values between the predicted values and the true values of all hyperspectral pixels is computed; a threshold and a maximum number of iterations are set; if the mean loss value is smaller than the threshold or the maximum number of iterations is reached, model training ends and the parameters are saved; otherwise the pixel reflectance data are re-ordered and the procedure returns to S5-1.
In step S5-2, the additive attention module expression is:
α_t = Σ_{i=1}^{m} α(q_t, k_i) v_i
where α is the attention weighting function; q_t belongs to the vector space of all multispectral image reflectances, k_i to the vector space formed by all end-member spectra of the multispectral end-member spectral library, and v_i to the vector space formed by all end-member spectra of the hyperspectral spectral library;
the attention weighting function is obtained by:
S7-1, the query q_t and the key k_i are input into linear layers W_q and W_k respectively, giving two output vectors W_q q_t and W_k k_i of the same dimension; W_q and W_k are updatable parameters;
S7-2, the two output vectors are added and then passed through a tanh activation function layer;
S7-3, the result of step S7-2 is passed through a linear layer whose output feature size is the same as the feature size of the value v_i, giving the pre-normalization α(q_t, k_i); the feature size of the value v_i is equal to the number of bands of the hyperspectral image, and this linear layer is an updatable parameter;
S7-4, a softmax function is used for normalization to obtain the final α(q_t, k_i); finally, α_t = Σ_{i=1}^{m} α(q_t, k_i) v_i.
The softmax function is used to normalize all values to between 0 and 1, and all values after normalization add up to 1.
In step S5-4, the linear layer is a full connection layer, and the calculation formula of the full connection layer is as follows:
z_i = Σ_{j=1}^{h} (U^T)_{ij} · x_j + p_i
where z_i is the output of the i-th neuron of the fully connected layer; x_j is the j-th input feature of that neuron and is taken from the input X of the fully connected layer, X being either the output of the preceding fully connected layer or the spliced feature α_t ⊙ q_t formed from the query q_t and the weighted value; h is the total number of inputs of the neuron; U^T is the weight of the fully connected layer and p represents its bias, both initialized randomly;
after passing through the full connection layer, a ReLU function is used as an activation function, and the calculation formula of the function is as follows:
ReLU(z_i) = max(0, z_i).
the method for updating the weight and the bias of the linear layer by adopting a small batch gradient descent method comprises the following steps:
S8-1, the batch size A is set to 200 and the learning rate η to 0.05, so that every 200 data records form one batch; the batch size A and the learning rate η may be set to other values according to the actual conditions;
S8-2, the gradients of the weight U^T and of the bias p with respect to the loss value are computed for every data record in the batch; by the chain rule,
∂L/∂U^T = (∂L/∂y) · (∂y/∂U^T),   ∂L/∂p = (∂L/∂y) · (∂y/∂p)
where L is the loss function and y is the output of the linear layer;
S8-3, the averages of the weight gradients and of the bias gradients over all data in the batch are computed;
S8-4, the weight and the bias are updated from the averaged gradients and the learning rate:
U^T ← U^T − η · avg(∂L/∂U^T),   p ← p − η · avg(∂L/∂p).
Specifically, the loss function is the mean square error function MSELoss, with the formula:
L = (1/d) · Σ_{j=1}^{d} (ŷ_{t,j} − y_{t,j})²
where d is the number of hyperspectral bands, ŷ_t is the reflectance predicted value of the hyperspectral pixel, and y_t is the reflectance true value of the hyperspectral pixel.
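Putting the pieces of this embodiment together, the following PyTorch sketch trains an attention-plus-NET model with small-batch gradient descent and an MSE loss; all shapes, the synthetic data, the fusion of α_t and q_t by concatenation, and the stopping constants are assumptions made for illustration rather than the patent's reference implementation.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

torch.manual_seed(0)
ms_bands, hs_bands, m = 4, 100, 8                     # assumed band / end-member counts

# Stand-in data: multispectral pixels (queries) and their true hyperspectral reflectances.
queries = torch.rand(2000, ms_bands)
targets = torch.rand(2000, hs_bands)
keys    = torch.rand(m, ms_bands)                     # multispectral end-member library
values  = torch.rand(m, hs_bands)                     # hyperspectral end-member library

class SimulationModel(nn.Module):
    """Additive attention (S5-2) + feature fusion (S5-3, here by concatenation, an assumption)
    + NET with 3 linear layers and ReLU (S5-4)."""
    def __init__(self, hidden=32):
        super().__init__()
        self.W_q = nn.Linear(ms_bands, hidden, bias=False)
        self.W_k = nn.Linear(ms_bands, hidden, bias=False)
        self.score = nn.Linear(hidden, hs_bands, bias=False)
        self.net = nn.Sequential(nn.Linear(hs_bands + ms_bands, 256), nn.ReLU(),
                                 nn.Linear(256, 256), nn.ReLU(),
                                 nn.Linear(256, hs_bands))

    def forward(self, q):
        scores = self.score(torch.tanh(self.W_q(q).unsqueeze(1) + self.W_k(keys)))
        alpha_t = (torch.softmax(scores, dim=1) * values.unsqueeze(0)).sum(dim=1)
        return self.net(torch.cat([alpha_t, q], dim=1))

model = SimulationModel()
loss_fn = nn.MSELoss()
opt = torch.optim.SGD(model.parameters(), lr=0.05)            # small-batch gradient descent
loader = DataLoader(TensorDataset(queries, targets), batch_size=200, shuffle=True)

threshold, max_epochs = 1e-3, 50                              # S5-7 stopping rule (example values)
for epoch in range(max_epochs):
    for q_b, y_b in loader:                                   # S5-5 / S5-6: per-batch updates
        opt.zero_grad()
        loss_fn(model(q_b), y_b).backward()
        opt.step()
    with torch.no_grad():
        mean_loss = loss_fn(model(queries), targets).item()   # S5-7: mean loss over all pixels
    if mean_loss < threshold:
        break
print(mean_loss)
```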
It is noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
Finally, it should be noted that the foregoing description covers only preferred embodiments of the present invention, and the present invention is not limited thereto. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art may still modify the technical solutions described in the foregoing embodiments or replace some of the technical features with equivalents. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention shall be included in the protection scope of the present invention.

Claims (9)

1. A hyperspectral image simulation method based on a deep learning model is characterized by comprising the following steps:
s1-1, hyperspectral image data and multispectral image data are obtained;
s1-2, constructing a hyperspectral spectrum library and a multispectral spectrum library;
s1-3, inputting the preprocessed data into a neural network layer NET to obtain simulated hyperspectral reflectivity;
s1-4, obtaining a simulated hyperspectral image according to the simulated hyperspectral reflectivity, and ending.
2. The hyperspectral image simulation method based on the deep learning model as claimed in claim 1, wherein in step S1-2, the construction of the hyperspectral spectrum library and the multispectral spectrum library includes the following steps:
s2-1, clustering hyperspectral images into N types of ground objects, extracting class centers of the N types of ground objects as end member hyperspectral reflectivity of the ground objects, and constructing a hyperspectral spectral library;
s2-2, taking the spectrum in the hyperspectral spectrum library as input, and utilizing a multispectral spectral response function to obtain the end member multispectral reflectivities of different ground objects with the same multispectral wave band number through calculation so as to construct the multispectral spectrum library;
S2-3, the reflectances of the multispectral image pixels are taken as queries q_t, t = 1, …, n, where n is the number of multispectral pixel reflectance records; the multispectral end-member spectral library and the hyperspectral spectral library are used as key-value pairs, giving m key-value pairs (k_1, v_1), …, (k_m, v_m), where m is the number of end-member spectra in the multispectral end-member spectral library and the hyperspectral spectral library.
3. The hyperspectral image simulation method based on the deep learning model according to claim 2, wherein in the step S2-1, the clustering method is to cluster hyperspectral images into N clusters by using a K-means clustering method, the class center of each cluster is extracted as an end member spectrum of the class, and N is a positive integer.
4. The hyperspectral image simulation method based on the deep learning model as claimed in claim 2, wherein in step S2-2, the spectral response function of the multispectral band is:
L_B = ∫ β(λ) · L(λ) dλ / ∫ β(λ) dλ
where L_B and L(λ) are respectively the remote-sensing signal energy values (radiance or reflectance) of the image band and of the finer bands in the spectral library, and β(λ) represents the weight of the spectral response function at the corresponding wavelength; β(λ) is obtained by experimental measurement or simulation. By this method the spectral energy can be redistributed over different band ranges.
5. The hyperspectral image simulation method based on the deep learning model as claimed in claim 1, wherein step S1-3 further comprises the following steps:
S5-1, set t = 1;
S5-2, the query q_t and the key-value pairs are input into an additive attention module to obtain the weighted value α_t of the end-member spectra of the hyperspectral spectral library;
S5-3, the query q_t is feature-spliced with the weighted value to obtain the input α_t ⊙ q_t of the neural network layer NET, where ⊙ denotes addition of the corresponding elements;
S5-4, the spliced feature is input into the neural network layer NET, which comprises H linear layers and ReLU layers, to obtain the hyperspectral pixel reflectance predicted value ŷ_t; H is a positive integer, and each hyperspectral pixel reflectance predicted value corresponds to one multispectral pixel reflectance;
S5-5, the loss value between the reflectance predicted value of the hyperspectral pixel and the reflectance true value of the hyperspectral pixel is calculated, and the linear-layer parameters are updated by a small-batch gradient descent algorithm;
S5-6, t is assigned the value t + 1, and steps S5-2 to S5-6 are repeated until t = n + 1, at which point all spliced features have been used to update the parameters by the small-batch gradient descent method;
S5-7, the reflectance predicted values of all hyperspectral pixels are recalculated and the mean of the loss values between the predicted values and the true values of all hyperspectral pixels is computed; a threshold and a maximum number of iterations are set; if the mean loss value is smaller than the threshold or the maximum number of iterations is reached, model training ends and the parameters are saved; otherwise the pixel reflectance data are re-ordered and the procedure returns to S5-1.
6. The hyperspectral image simulation method based on the deep learning model as claimed in claim 5, wherein in step S5-2 the expression of the additive attention module is:
α_t = Σ_{i=1}^{m} α(q_t, k_i) v_i
where α is the attention weighting function; q_t belongs to the vector space of all multispectral image reflectances, k_i to the vector space formed by all end-member spectra of the multispectral end-member spectral library, and v_i to the vector space formed by all end-member spectra of the hyperspectral spectral library.
7. The hyperspectral image simulation method based on the deep learning model as claimed in claim 6, wherein the attention weighting function is obtained by:
S7-1, the query q_t and the key k_i are input into linear layers W_q and W_k respectively, giving two output vectors W_q q_t and W_k k_i of the same dimension; W_q and W_k are updatable parameters;
S7-2, the two output vectors are added and then passed through a tanh activation function layer;
S7-3, the result of step S7-2 is passed through a linear layer whose output feature size is the same as the feature size of the value v_i, giving the pre-normalization α(q_t, k_i); the feature size of the value v_i is equal to the number of bands of the hyperspectral image, and this linear layer is an updatable parameter;
S7-4, a softmax function is used for normalization to obtain the final α(q_t, k_i); finally, α_t = Σ_{i=1}^{m} α(q_t, k_i) v_i.
8. The hyperspectral image simulation method based on the deep learning model as claimed in claim 5, wherein the updating of the weights and the bias by the small-batch gradient descent method comprises the following steps:
s8-1, designating a batch size A and a learning rate eta, wherein each A piece of data is used as a batch;
s8-2, traversing and calculating the gradient of the weight and the bias of all data in a batch relative to the loss value;
s8-3, calculating the average number of all data weights and bias gradients of the batch;
and S8-4, updating the weight and the bias according to the average number of gradients and the learning rate.
9. The hyperspectral image simulation method as claimed in claim 5, wherein in step S5-7, the loss value is obtained by calculating the mean square error between the reflectance predicted value of the hyperspectral pixel and the reflectance true value of the hyperspectral pixel.
CN202311113098.7A 2023-08-31 2023-08-31 Hyperspectral image simulation method based on deep learning model Pending CN117197269A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311113098.7A CN117197269A (en) 2023-08-31 2023-08-31 Hyperspectral image simulation method based on deep learning model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311113098.7A CN117197269A (en) 2023-08-31 2023-08-31 Hyperspectral image simulation method based on deep learning model

Publications (1)

Publication Number Publication Date
CN117197269A true CN117197269A (en) 2023-12-08

Family

ID=88984284

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311113098.7A Pending CN117197269A (en) 2023-08-31 2023-08-31 Hyperspectral image simulation method based on deep learning model

Country Status (1)

Country Link
CN (1) CN117197269A (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200065326A1 (en) * 2018-03-27 2020-02-27 EBA Japan Co., Ltd. Information search system and information search program
CN110427818A (en) * 2019-06-17 2019-11-08 青岛星科瑞升信息科技有限公司 The deep learning satellite data cloud detection method of optic that high-spectral data is supported
CN113222836A (en) * 2021-04-25 2021-08-06 自然资源部国土卫星遥感应用中心 Hyperspectral and multispectral remote sensing information fusion method and system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
FANG Shuai et al., "Hyperspectral and multispectral image fusion algorithm based on class unmixing", CNKI, vol. 32, no. 1, 15 January 2020 (2020-01-15), pages 54-67 *

Similar Documents

Publication Publication Date Title
Hegde Photonics inverse design: pairing deep neural networks with evolutionary algorithms
US20210089922A1 (en) Joint pruning and quantization scheme for deep neural networks
Aizenberg et al. CNN based on multi-valued neuron as a model of associative memory for grey scale images
Maca et al. Forecasting SPEI and SPI drought indices using the integrated artificial neural networks
Vanzella et al. Photometric redshifts with the Multilayer Perceptron Neural Network: Application to the HDF-S and SDSS
CN107833208B (en) Hyperspectral anomaly detection method based on dynamic weight depth self-encoding
CN112347888B (en) Remote sensing image scene classification method based on bi-directional feature iterative fusion
CN108985457B (en) Deep neural network structure design method inspired by optimization algorithm
CN113159389A (en) Financial time sequence prediction method based on deep forest generation countermeasure network
Zhang et al. Self-blast state detection of glass insulators based on stochastic configuration networks and a feedback transfer learning mechanism
CN114937204A (en) Lightweight multi-feature aggregated neural network remote sensing change detection method
CN109977989B (en) Image tensor data processing method
CN110929798A (en) Image classification method and medium based on structure optimization sparse convolution neural network
CN114092283A (en) Knowledge graph matching-based legal case similarity calculation method and system
CN112784907A (en) Hyperspectral image classification method based on spatial spectral feature and BP neural network
CN116469561A (en) Breast cancer survival prediction method based on deep learning
CN115561005A (en) Chemical process fault diagnosis method based on EEMD decomposition and lightweight neural network
CN110188621B (en) Three-dimensional facial expression recognition method based on SSF-IL-CNN
Jang et al. Deep neural networks with a set of node-wise varying activation functions
CN108470209B (en) Convolutional neural network visualization method based on gram matrix regularization
Harikrishnan et al. Handwritten digit recognition with feed-forward multi-layer perceptron and convolutional neural network architectures
Banumathi et al. An Intelligent Deep Learning Based Xception Model for Hyperspectral Image Analysis and Classification.
CN117197269A (en) Hyperspectral image simulation method based on deep learning model
CN111062888B (en) Hyperspectral image denoising method based on multi-target low-rank sparsity and spatial-spectral total variation
Wang et al. Learning of recurrent convolutional neural networks with applications in pattern recognition

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination