CN117095306B - Mixed pixel decomposition method based on attention mechanism - Google Patents

Mixed pixel decomposition method based on attention mechanism

Info

Publication number
CN117095306B
CN117095306B
Authority
CN
China
Prior art keywords
loss
attention
spectrum
abundance
dimension reduction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311177107.9A
Other languages
Chinese (zh)
Other versions
CN117095306A (en)
Inventor
田晓敏
李朋
金永涛
杨健
米晓飞
刘苗
胡嘉欣
汪青宇
杨楠杰
底薇萌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Aerospace Information Research Institute of CAS
North China Institute of Aerospace Engineering
Original Assignee
Aerospace Information Research Institute of CAS
North China Institute of Aerospace Engineering
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Aerospace Information Research Institute of CAS, North China Institute of Aerospace Engineering filed Critical Aerospace Information Research Institute of CAS
Priority to CN202311177107.9A priority Critical patent/CN117095306B/en
Publication of CN117095306A publication Critical patent/CN117095306A/en
Application granted granted Critical
Publication of CN117095306B publication Critical patent/CN117095306B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G06V20/13 Satellite images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/0464 Convolutional networks [CNN, ConvNet]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/048 Activation functions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A40/00 Adaptation technologies in agriculture, forestry, livestock or agroalimentary production
    • Y02A40/10 Adaptation technologies in agriculture, forestry, livestock or agroalimentary production in agriculture

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Multimedia (AREA)
  • General Engineering & Computer Science (AREA)
  • Molecular Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Mathematical Physics (AREA)
  • Medical Informatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Astronomy & Astrophysics (AREA)
  • Remote Sensing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a mixed pixel decomposition method based on an attention mechanism, and belongs to the technical field of mixed pixel decomposition of hyperspectral images. The method comprises the following steps: the spectrum of each pixel in the hyperspectral image and the spectrum data of the different end members are put into characteristic dimension reduction network blocks for dimension reduction; the dimension-reduced pixel spectrum and end member spectra are put into an attention scoring function to obtain one-dimensional features equal in number to the end members; the obtained one-dimensional features are normalized to obtain attention scores; network parameters are updated by a back propagation algorithm according to the Loss value Loss between the attention scores and the abundance in the real tag; the average Loss value Loss over all pixels is calculated; the abundance values of the different end members corresponding to each pixel are output to obtain abundance images of the different end members. The invention overcomes the defect of training only on the image spectrum and reduces the influence of redundant spectral bands on spectral unmixing.

Description

Mixed pixel decomposition method based on attention mechanism
Technical Field
The invention relates to the technical field of mixed pixel decomposition of hyperspectral images, in particular to a mixed pixel decomposition method based on an attention mechanism.
Background
A large number of mixed pixels exist in remote sensing images, for example in images of limited spectrometer spatial resolution or of complex ground features, which severely hampers further exploitation of the imagery. To solve this problem, mixed pixel decomposition technology extracts, from a mixed-pixel image containing a plurality of classes, the proportion of each specified class in every pixel, laying a foundation for further processing of the image.
At present, a great deal of research on mixed pixel decomposition technology has been carried out, including statistical methods, geometric methods, machine learning methods, deep learning methods and the like. Deep learning methods are excellent in terms of unmixing accuracy and efficiency; however, in existing deep-learning-based mixed pixel decomposition methods, the input features of the model are only the spectral data in the hyperspectral image, the end member spectral information is not used in training, and the available features in the data are insufficiently mined.
Disclosure of Invention
The invention aims to provide a mixed pixel decomposition method based on an attention mechanism, so as to solve the problems set forth in the background art.
In order to solve the technical problems, the invention provides the following technical scheme: a mixed pixel decomposition method based on an attention mechanism, the method comprising the following steps:
S10, putting the spectrum of one pixel in the hyperspectral image and different end member spectrum data into three characteristic dimension reduction network blocks, and reducing the dimension of the data;
S20, putting the spectrum of the pixel after the dimension reduction in the step S10 and the spectrum of the end member into an attention scoring function to obtain one-dimensional characteristics with the same number as the end member;
S30, normalizing the one-dimensional features obtained in the step S20 to obtain attention scores;
S40, updating network parameters by using a back propagation algorithm according to the Loss value Loss between the attention score and the abundance in the real tag;
S50, executing steps S10 to S40 on all pixels in the hyperspectral image to obtain the Loss values Loss of all pixels, and calculating the average Loss value Loss;
S60, comparing the average Loss value Loss with K1; if the average Loss value Loss is smaller than the threshold K1, executing step S70; if the average Loss value Loss is not smaller than the threshold K1, repeating from step S10;
S70, outputting abundance values of different end members corresponding to each pixel, and sorting the output end member abundance data to obtain abundance images of different end members;
Wherein K1 represents a predefined maximum threshold on the average Loss value Loss.
In step S10, each feature dimension reduction network block includes a linear layer and a nonlinear activation layer;
The three characteristic dimension reduction network blocks output 128, 64 and 32 features respectively; the band information in the hyperspectral image is reduced to 32 dimensions, so that the influence of redundant bands in mixed pixel decomposition is reduced.
Further, the linear layer in the feature dimension reduction network block is a full-connection layer, and the calculation formula of the full-connection layer is as follows:
Linear(x)=wx+b;
Wherein, w is the weight of the full connection layer and is obtained by adopting a random initialization mode; b represents a bias value; x represents the deep features extracted from the spectrum by the feature extraction network.
After the spectra of the pixels and the different end member spectrum data pass through the full connection layer in the network block, the data pass through the nonlinear activation layer; the nonlinear activation layer is a ReLU function, and the calculation formula of the function is as follows:
ReLU(X0)=max(0,X0);
Wherein X0 is the input of the ReLU function, i.e., Linear(x); max(0, X0) returns the larger of 0 and X0.
The network block calculation formula of the input characteristic dimension reduction is as follows:
y=ReLU(Linear(x));
Wherein y represents the feature vector after dimension reduction.
In step S20, taking the hyperspectral image after dimension reduction as query q, taking the different end member spectra after dimension reduction as query keys ki, inputting q and ki into an attention scoring function a, and mapping the two vectors into scalar quantities through the attention scoring function a; the expression formula of the attention scoring function a is:
a=B(cat(q,ki));
Wherein q and ki are the input features; cat(q, ki) is used to splice the features of q and ki; B represents the network block calculation process for feature dimension reduction.
In step S30, the obtained one-dimensional features are normalized by a function softmax; the attention score obtained after normalization of the obtained one-dimensional features is used as the abundance value of different end members; wherein,
The calculation formula of the attention score is:
αi = softmax(a(q, ki)) = exp(a(q, ki)) / Σj exp(a(q, kj));
Wherein αi represents the attention score of the i-th end member; q and ki are the inputs of softmax; j is the summation index over the corresponding abundances; m represents the number of end members, so the sum runs over j = 1 to m.
In step S40, the Loss value Loss is selected as the mean square error MSELoss; the calculation formula of MSELoss is:
MSELoss = (1/n) Σi (ŷi - yi)²;
Wherein ŷi represents the predicted abundance value of each end member; yi represents the true abundance value of each end member in the tag; n represents the number of end members, over which the sum runs.
Further, the method for updating network parameters by using the back propagation algorithm comprises the following steps:
S401, calculating the gradient of each parameter layer by layer; the gradients of the loss value L with respect to the weight w and the bias b of the full connection layer are calculated using the chain derivative rule, according to the formulas:
∂L/∂w = ∂L/∂y0 · ∂y0/∂w; ∂L/∂b = ∂L/∂y0 · ∂y0/∂b;
S402, updating the weights by using a gradient descent algorithm, according to the formulas:
w = w - η·∂L/∂w; b = b - η·∂L/∂b;
Wherein L represents the loss function; y0 represents the output of this layer; w and b represent the weight and bias of this layer; η represents the learning rate.
In step S60, a maximum training number K2 of the hyperspectral image data is set; the number of repeated training passes over the hyperspectral image data is recorded when the average Loss value Loss is not smaller than the threshold K1; when the number of repeated training passes reaches K2, step S70 is performed directly.
Compared with the prior art, the invention has the following beneficial effects: 1. an attention mechanism is added in the training process, so that both the image spectrum and the features of the different end member spectra are fed into the model for training; the correlation between the image spectrum and the end member spectra is learned automatically through a multi-layer network, overcoming the defect of training only on the image spectrum; 2. a feature dimension reduction module is added, which automatically learns effective features and reduces the influence of redundant spectral bands on spectral unmixing.
Drawings
The accompanying drawings are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate the invention and together with the embodiments of the invention, serve to explain the invention. In the drawings:
FIG. 1 is a schematic diagram of a network structure of a method for decomposing mixed pixels based on an attention mechanism according to the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Referring to fig. 1, the present invention provides the following technical solutions: a method of mixed pel decomposition based on an attention mechanism, the method comprising the steps of:
S10, putting the spectrum of one pixel in the hyperspectral image and different end member spectrum data into three characteristic dimension reduction network blocks, and reducing the dimension of the data;
S20, putting the spectrum of the pixel after the dimension reduction in the step S10 and the spectrum of the end member into an attention scoring function to obtain one-dimensional characteristics with the same number as the end member;
S30, normalizing the one-dimensional features obtained in the step S20 to obtain attention scores;
S40, updating network parameters by using a back propagation algorithm according to the Loss value Loss between the attention score and the abundance in the real tag;
S50, executing steps S10 to S40 on all pixels in the hyperspectral image to obtain the Loss values Loss of all pixels, and calculating the average Loss value Loss;
S60, comparing the average Loss value Loss with K1; if the average Loss value Loss is smaller than the threshold K1, executing step S70; if the average Loss value Loss is not smaller than the threshold K1, repeating from step S10;
S70, outputting abundance values of different end members corresponding to each pixel, and sorting the output end member abundance data to obtain abundance images of different end members;
Where K1 represents the maximum threshold on the average Loss value Loss.
In step S10, each feature dimension reduction network block includes a linear layer and a nonlinear activation layer;
The three characteristic dimension reduction network blocks output 128, 64 and 32 features respectively; the band information in the hyperspectral image is reduced to 32 dimensions.
The linear layer in the characteristic dimension reduction network block is a full-connection layer, and the calculation formula of the full-connection layer is as follows:
Linear(x)=wx+b;
Wherein, w is the weight of the full connection layer and is obtained by adopting a random initialization mode; b represents a bias value; x represents the deep features extracted from the spectrum by the feature extraction network.
After the spectra of the pixels and the different end member spectrum data pass through the full connection layer in the network block, the data pass through the nonlinear activation layer; the nonlinear activation layer is a ReLU function, and the calculation formula of the function is as follows:
ReLU(X0)=max(0,X0);
Wherein X0 is the input of the ReLU function; max(0, X0) returns the larger of 0 and X0.
The network block calculation formula of the input characteristic dimension reduction is as follows:
y=ReLU(Linear(x));
Wherein y represents the feature vector after dimension reduction.
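As a concrete illustration, the three blocks can be sketched in PyTorch; this is a minimal, non-limiting sketch, assuming an input band count n_bands of 224 (the text does not fix the band count), and nn.Linear already initializes its weight w randomly as described above:
```python
import torch
import torch.nn as nn

class FeatureDimReduction(nn.Module):
    """Three feature dimension reduction blocks, each y = ReLU(Linear(x)),
    shrinking a spectrum to 128, 64 and then 32 features."""
    def __init__(self, n_bands: int):
        super().__init__()
        self.blocks = nn.Sequential(
            nn.Linear(n_bands, 128), nn.ReLU(),  # block 1
            nn.Linear(128, 64), nn.ReLU(),       # block 2
            nn.Linear(64, 32), nn.ReLU(),        # block 3
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.blocks(x)  # (..., n_bands) -> (..., 32)

reducer = FeatureDimReduction(n_bands=224)
q = reducer(torch.rand(224))  # reduced pixel spectrum, 32 features
```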
In step S20, taking the hyperspectral image after dimension reduction as query q, taking the different end member spectra after dimension reduction as query keys ki, inputting q and ki into an attention scoring function a, and mapping the two vectors into scalar quantities through the attention scoring function a; the expression formula of the attention scoring function a is:
a=B(cat(q,ki));
Wherein q and ki are the input features; cat(q, ki) is used to splice the features of q and ki; B represents the network block calculation process for feature dimension reduction.
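Continuing the sketch above, the scoring function a = B(cat(q, ki)) can be written as follows; the patent only states that B reuses the dimension reduction block structure, so the layer widths (64 to 32 to 1) and the plain Linear output head are assumptions:
```python
class AttentionScore(nn.Module):
    """Maps one spliced pair (q, ki), each of 32 features, to a scalar score."""
    def __init__(self, feat_dim: int = 32):
        super().__init__()
        self.B = nn.Sequential(
            nn.Linear(2 * feat_dim, 32), nn.ReLU(),  # dimension reduction block
            nn.Linear(32, 1),                        # scalar head (assumed)
        )

    def forward(self, q: torch.Tensor, ki: torch.Tensor) -> torch.Tensor:
        return self.B(torch.cat([q, ki], dim=-1)).squeeze(-1)  # a = B(cat(q, ki))
```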
In step S30, the obtained one-dimensional features are normalized by a function softmax; the attention score obtained after normalization of the obtained one-dimensional features is used as the abundance value of different end members; wherein,
The calculation formula of the attention score is:
αi = softmax(a(q, ki)) = exp(a(q, ki)) / Σj exp(a(q, kj));
Wherein αi represents the attention score of the i-th end member; q and ki are the inputs of softmax; j is the summation index over the corresponding abundances; m represents the number of end members, so the sum runs over j = 1 to m.
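Continuing the sketch, the m scalar scores are normalized with softmax, so the resulting attention scores are non-negative, sum to one, and can be read directly as abundance values:
```python
def abundances(q: torch.Tensor, end_members: torch.Tensor,
               score: AttentionScore) -> torch.Tensor:
    """end_members holds the m reduced end member spectra, shape (m, 32)."""
    scores = torch.stack([score(q, ki) for ki in end_members])  # m scalars
    return torch.softmax(scores, dim=-1)  # alpha_i = exp(a_i) / sum_j exp(a_j)
```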
In step S40, the Loss value Loss is selected as the mean square error MSELoss; the calculation formula of MSELoss is:
MSELoss = (1/n) Σi (ŷi - yi)²;
Wherein ŷi represents the predicted abundance value of each end member; yi represents the true abundance value of each end member in the tag; n represents the number of end members, over which the sum runs.
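For reference, PyTorch's built-in nn.MSELoss implements exactly this mean square error; a small numeric check with illustrative abundance values:
```python
criterion = nn.MSELoss()
y_hat = torch.tensor([0.6, 0.3, 0.1])  # predicted abundances (illustrative)
y = torch.tensor([0.5, 0.4, 0.1])      # true abundances from the tag
loss = criterion(y_hat, y)             # (0.1**2 + 0.1**2 + 0**2) / 3 ~= 0.0067
```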
The method for updating network parameters by using the back propagation algorithm comprises the following steps:
S401, calculating the gradient of each parameter layer by layer; the gradients of the loss value L with respect to the weight w and the bias b of the full connection layer are calculated using the chain derivative rule, according to the formulas:
∂L/∂w = ∂L/∂y0 · ∂y0/∂w; ∂L/∂b = ∂L/∂y0 · ∂y0/∂b;
S402, updating the weights by using a gradient descent algorithm, according to the formulas:
w = w - η·∂L/∂w; b = b - η·∂L/∂b;
Wherein L represents the loss function; y0 represents the output of this layer; w and b represent the weight and bias of this layer; η represents the learning rate.
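The two formulas can be checked on a single fully connected layer; in the sketch below autograd applies the chain derivative rule, and the update is one plain gradient descent step (shapes, targets, and learning rate are illustrative):
```python
w = torch.randn(3, 5, requires_grad=True)  # layer weight (illustrative shape)
b = torch.zeros(3, requires_grad=True)     # layer bias
x = torch.rand(5)
y0 = w @ x + b                             # Linear(x) = wx + b
L = ((y0 - torch.rand(3)) ** 2).mean()     # mean square error loss
L.backward()                               # chain rule: dL/dw = dL/dy0 * dy0/dw
eta = 0.01                                 # learning rate
with torch.no_grad():
    w -= eta * w.grad                      # w = w - eta * dL/dw
    b -= eta * b.grad                      # b = b - eta * dL/db
```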
In step S60, a maximum training number K2 of the hyperspectral image data is set; the number of repeated training passes over the hyperspectral image data is recorded when the average Loss value Loss is not smaller than the threshold K1; when the number of repeated training passes reaches K2, step S70 is performed directly.
In this embodiment:
In order to put the data into the network model for training, the dataset needs to be processed first; the dataset comprises the hyperspectral image, the abundance maps, and the end member spectra; the processing steps are as follows:
Tiling the two-dimensional hyperspectral image data and abundance map data into one-dimensional data, and taking the tiled hyperspectral image data as the samples X = {x1, x2, …, xm}; taking the tiled abundance map data as the labels Y = {y1, y2, …, ym}; wherein m is the total number of samples; taking the spectrum of each end member as a query key, K = {k1, k2, …, kn}; wherein n is the number of end members; the maximum threshold on the average Loss value Loss is K1; the maximum training number of the hyperspectral image data is K2;
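A minimal tiling sketch follows; all shapes (image height H = 100, width W = 100, band count B = 224, end member count n = 4) and the constants K1 and K2 are assumptions for illustration:
```python
import numpy as np

cube = np.random.rand(100, 100, 224)    # hyperspectral image, (H, W, B)
abund = np.random.rand(100, 100, 4)     # abundance maps, (H, W, n)
ends = np.random.rand(4, 224)           # end member spectra k1..kn

X = cube.reshape(-1, cube.shape[-1])    # samples x1..xm, m = H*W
Y = abund.reshape(-1, abund.shape[-1])  # labels y1..ym
K1, K2 = 1e-3, 100                      # loss threshold and max training number
```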
Placing a sample xi of a pixel and the end member spectra K into the three characteristic dimension reduction network blocks for dimension reduction; according to the calculation formula:
y=ReLU(Linear(xi));
Obtaining the tiled abundance map data label yi corresponding to the sample xi;
Mapping the two vectors, the hyperspectral image q after dimension reduction and the different end member spectra ki after dimension reduction, into scalars through the attention scoring function a; according to the formula:
a=B(cat(q,ki));
Normalizing the obtained one-dimensional features through the function softmax; the attention scores obtained after normalization are used as the abundance values of the different end members; according to the formula:
αi = exp(a(q, ki)) / Σj exp(a(q, kj));
Taking the output abundance values of the different end members as the predicted value ŷi, the Loss value Loss between the predicted value ŷi and the true abundance map data label yi is calculated with the mean square error MSELoss; according to the formula:
MSELoss = (1/n) Σi (ŷi - yi)²;
Updating the network parameters using a back propagation algorithm;
Calculating the gradients of the loss value L with respect to the weight w and the bias b by using the chain derivative rule; according to the formulas:
∂L/∂w = ∂L/∂y0 · ∂y0/∂w; ∂L/∂b = ∂L/∂y0 · ∂y0/∂b;
Updating the weights using a gradient descent algorithm; according to the formulas:
w = w - η·∂L/∂w; b = b - η·∂L/∂b;
Recording the obtained Loss values Loss of all pixels, and calculating the average Loss value Loss;
Comparing the average Loss value Loss with K1; in this embodiment the average Loss value Loss is less than the threshold K1, so training ends;
Sorting the output end member abundance data, and outputting the abundance images of the different end members.
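Putting the pieces together, the following self-contained sketch reproduces the embodiment loop end to end. Whether the pixel branch and the end member branch share one set of dimension reduction blocks is not fixed by the text, so the shared branch, all shapes, the optimizer, the learning rate, and the constants K1 and K2 are assumptions:
```python
import torch
import torch.nn as nn

def reduction_blocks(dims):
    """Chain of Linear + ReLU feature dimension reduction blocks."""
    layers = []
    for d_in, d_out in zip(dims, dims[1:]):
        layers += [nn.Linear(d_in, d_out), nn.ReLU()]
    return nn.Sequential(*layers)

class AttentionUnmixing(nn.Module):
    def __init__(self, n_bands: int):
        super().__init__()
        self.reduce = reduction_blocks([n_bands, 128, 64, 32])  # shared (assumed)
        self.score = nn.Sequential(reduction_blocks([64, 32]),
                                   nn.Linear(32, 1))            # B(cat(q, ki))

    def forward(self, x: torch.Tensor, ends: torch.Tensor) -> torch.Tensor:
        q = self.reduce(x)                                  # reduced pixel, (32,)
        ks = self.reduce(ends)                              # reduced end members, (n, 32)
        pairs = torch.cat([q.expand(ks.shape[0], -1), ks], dim=-1)
        a = self.score(pairs).squeeze(-1)                   # n scalar scores
        return torch.softmax(a, dim=-1)                     # abundances per end member

# toy data standing in for the tiled dataset of the embodiment
m, n_bands, n_end = 64, 224, 4
X = torch.rand(m, n_bands)                                  # tiled pixel spectra
Y = torch.softmax(torch.rand(m, n_end), dim=-1)             # true abundances
ends = torch.rand(n_end, n_bands)                           # end member spectra

model = AttentionUnmixing(n_bands)
opt = torch.optim.SGD(model.parameters(), lr=0.01)
mse, K1, K2 = nn.MSELoss(), 1e-3, 50
for epoch in range(K2):                                     # at most K2 passes
    total = 0.0
    for x, y in zip(X, Y):                                  # steps S10-S40 per pixel
        loss = mse(model(x, ends), y)
        opt.zero_grad()
        loss.backward()                                     # back propagation
        opt.step()                                          # gradient descent update
        total += loss.item()
    if total / m < K1:                                      # S60: average Loss vs K1
        break
with torch.no_grad():                                       # S70: abundance output
    abundance_maps = torch.stack([model(x, ends) for x in X])  # reshape to (H, W, n)
```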
It is noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
Finally, it should be noted that: the foregoing description is only a preferred embodiment of the present invention and is not intended to limit it. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art may still modify the technical solutions described in the foregoing embodiments or make equivalent replacements of some of their technical features. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention shall be included in the protection scope of the present invention.

Claims (10)

1. A mixed pixel decomposition method based on an attention mechanism is characterized in that: the method comprises the following steps:
S10, putting the spectrum of one pixel in the hyperspectral image and different end member spectrum data into three characteristic dimension reduction network blocks, and reducing the dimension of the data;
S20, putting the spectrum of the pixel after the dimension reduction in the step S10 and the spectrum of the end member into an attention scoring function to obtain one-dimensional characteristics with the same number as the end member;
S30, normalizing the one-dimensional features obtained in the step S20 to obtain attention scores;
S40, updating network parameters by using a back propagation algorithm according to the Loss value Loss between the attention score and the abundance in the real tag;
S50, executing steps S10 to S40 on all pixels in the hyperspectral image to obtain the Loss values Loss of all pixels, and calculating the average Loss value Loss;
S60, comparing the average Loss value Loss with K1; if the average Loss value Loss is smaller than the threshold K1, executing step S70; if the average Loss value Loss is not smaller than the threshold K1, repeating from step S10;
S70, outputting abundance values of different end members corresponding to each pixel, and sorting the output end member abundance data to obtain abundance images of different end members;
Where K1 represents the maximum threshold on the average Loss value Loss.
2. The attention mechanism based mixed pixel decomposition method of claim 1, wherein: in step S10, each feature dimension reduction network block includes a linear layer and a nonlinear activation layer;
The three characteristic dimension reduction network blocks output 128, 64 and 32 features respectively; the band information in the hyperspectral image is reduced to 32 dimensions.
3. The attention mechanism-based mixed pixel decomposition method of claim 2, wherein: the linear layer in the characteristic dimension reduction network block is a full-connection layer, and the calculation formula of the full-connection layer is as follows:
Linear(x)=wx+b;
Wherein, w is the weight of the full connection layer and is obtained by adopting a random initialization mode; b represents a bias value; x represents the deep features extracted from the spectrum by the feature extraction network.
4. The mixed pixel decomposition method based on an attention mechanism according to claim 3, wherein: after the spectra of the pixels and the different end member spectrum data pass through the full connection layer in the network block, the data pass through the nonlinear activation layer; the nonlinear activation layer is a ReLU function, and the calculation formula of the function is as follows:
ReLU(X0)=max(0,X0);
Wherein X0 is the input of the ReLU function; max(0, X0) returns the larger of 0 and X0.
5. The attention mechanism based mixed pixel decomposition method of claim 4, wherein: the network block calculation formula of the input characteristic dimension reduction is as follows:
y=ReLU(Linear(x));
Wherein y represents the feature vector after dimension reduction.
6. The attention mechanism based mixed pixel decomposition method of claim 1, wherein: in step S20, taking the hyperspectral image after dimension reduction as query q, taking the different end member spectra after dimension reduction as query keys ki, inputting q and ki into an attention scoring function a, and mapping the two vectors into scalar quantities through the attention scoring function a; the expression formula of the attention scoring function a is:
a=B(cat(q,ki));
Wherein q and ki are the input features; cat(q, ki) is used to splice the features of q and ki; B represents the network block calculation process for feature dimension reduction.
7. The attention mechanism based mixed pixel decomposition method of claim 1, wherein: in step S30, the obtained one-dimensional features are normalized by a function softmax; the attention score obtained after normalization of the obtained one-dimensional features is used as the abundance value of different end members; wherein,
The calculation formula of the attention score is:
αi = softmax(a(q, ki)) = exp(a(q, ki)) / Σj exp(a(q, kj));
Wherein αi represents the attention score of the i-th end member; q and ki are the inputs of softmax; j is the summation index over the corresponding abundances; m represents the number of end members, so the sum runs over j = 1 to m.
8. The attention mechanism based mixed pixel decomposition method of claim 5, wherein: in step S40, the Loss value Loss is selected as the mean square error MSELoss; the calculation formula of MSELoss is:
MSELoss = (1/n) Σi (ŷi - yi)²;
Wherein ŷi represents the predicted abundance value of each end member; yi represents the true abundance value of each end member in the tag; n represents the number of end members, over which the sum runs.
9. The attention mechanism based mixed pixel decomposition method of claim 8, wherein: the method for updating network parameters by using the back propagation algorithm comprises the following steps:
S401, calculating the gradient of each parameter layer by layer; the gradients of the loss value L with respect to the weight w and the bias b of the full connection layer are calculated using the chain derivative rule, according to the formulas:
∂L/∂w = ∂L/∂y0 · ∂y0/∂w; ∂L/∂b = ∂L/∂y0 · ∂y0/∂b;
S402, updating the weights by using a gradient descent algorithm, according to the formulas:
w = w - η·∂L/∂w; b = b - η·∂L/∂b;
Wherein L represents the loss function; y0 represents the output of this layer; w and b represent the weight and bias of this layer; η represents the learning rate.
10. The attention mechanism based mixed pixel decomposition method of claim 1, wherein: in step S60, a maximum training number K2 of the hyperspectral image data is set; the number of repeated training passes over the hyperspectral image data is recorded when the average Loss value Loss is not smaller than the threshold K1; when the number of repeated training passes reaches K2, step S70 is performed directly.
CN202311177107.9A 2023-09-13 2023-09-13 Mixed pixel decomposition method based on attention mechanism Active CN117095306B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311177107.9A CN117095306B (en) 2023-09-13 2023-09-13 Mixed pixel decomposition method based on attention mechanism

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311177107.9A CN117095306B (en) 2023-09-13 2023-09-13 Mixed pixel decomposition method based on attention mechanism

Publications (2)

Publication Number Publication Date
CN117095306A CN117095306A (en) 2023-11-21
CN117095306B true CN117095306B (en) 2024-05-24

Family

ID=88783421

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311177107.9A Active CN117095306B (en) 2023-09-13 2023-09-13 Mixed pixel decomposition method based on attention mechanism

Country Status (1)

Country Link
CN (1) CN117095306B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2319406A1 (en) * 2004-12-28 2011-05-11 Hyperspectral Imaging, Inc Hyperspectral/multispectral imaging in determination, assessment and monitoring of systemic physiology and shock
CN109583380A (en) * 2018-11-30 2019-04-05 广东工业大学 A kind of hyperspectral classification method based on attention constrained non-negative matrix decomposition
CN113850202A (en) * 2021-09-28 2021-12-28 中国地质大学(武汉) Semi-supervised hyperspectral image mixed pixel decomposition method based on deep learning
CN114422784A (en) * 2022-01-19 2022-04-29 北华航天工业学院 Unmanned aerial vehicle multispectral remote sensing image compression method based on convolutional neural network

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2319406A1 (en) * 2004-12-28 2011-05-11 Hyperspectral Imaging, Inc Hyperspectral/multispectral imaging in determination, assessment and monitoring of systemic physiology and shock
CN109583380A (en) * 2018-11-30 2019-04-05 广东工业大学 A kind of hyperspectral classification method based on attention constrained non-negative matrix decomposition
CN113850202A (en) * 2021-09-28 2021-12-28 中国地质大学(武汉) Semi-supervised hyperspectral image mixed pixel decomposition method based on deep learning
CN114422784A (en) * 2022-01-19 2022-04-29 北华航天工业学院 Unmanned aerial vehicle multispectral remote sensing image compression method based on convolutional neural network

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Miao, L. et al. A maximum entropy approach to unsupervised mixed-pixel decomposition. IEEE Transactions on Image Processing. 2007, 1008-1021. *
Research progress of cloud detection in remote sensing images based on machine learning; Bing Fangfei et al.; Remote Sensing Technology and Application; 2023-02-28; 129-142 *
Remote sensing estimation methods of forest above-ground biomass; Tian Xiaomin et al.; Journal of Beijing Forestry University; 2021-08-31; 137-148 *
Multidimensional convolutional network collaborative decomposition method for mixed pixels of hyperspectral images; Liu Shuai et al.; Acta Geodaetica et Cartographica Sinica; 2020-12-31; 1600-1608 *

Also Published As

Publication number Publication date
CN117095306A (en) 2023-11-21

Similar Documents

Publication Publication Date Title
CN111695467B (en) Spatial spectrum full convolution hyperspectral image classification method based on super-pixel sample expansion
CN111368896B (en) Hyperspectral remote sensing image classification method based on dense residual three-dimensional convolutional neural network
CN108280396B (en) Hyperspectral image classification method based on depth multi-feature active migration network
CN107145836B (en) Hyperspectral image classification method based on stacked boundary identification self-encoder
CN111680176A (en) Remote sensing image retrieval method and system based on attention and bidirectional feature fusion
CN112347970B (en) Remote sensing image ground object identification method based on graph convolution neural network
CN111783884B (en) Unsupervised hyperspectral image classification method based on deep learning
CN107590515A (en) The hyperspectral image classification method of self-encoding encoder based on entropy rate super-pixel segmentation
CN102930275B (en) Based on the characteristics of remote sensing image system of selection of Cramer ' s V index
CN112949414B (en) Intelligent surface water body drawing method for wide-vision-field high-resolution six-satellite image
CN112949416A (en) Supervised hyperspectral multi-scale graph volume integral classification method
CN113988147B (en) Multi-label classification method and device for remote sensing image scene based on graph network, and multi-label retrieval method and device
CN112329818B (en) Hyperspectral image non-supervision classification method based on graph convolution network embedded characterization
CN115019104A (en) Small sample remote sensing image classification method and system based on multi-source domain self-attention
CN115512226B (en) LiDAR point cloud filtering method integrated with attention mechanism multi-scale CNN
CN114299398B (en) Small sample remote sensing image classification method based on self-supervision contrast learning
CN115457311A (en) Hyperspectral remote sensing image band selection method based on self-expression transfer learning
CN114596463A (en) Image-based land parcel type classification method
CN117953369A (en) Remote sensing image change detection method integrating twin coding and decoding and attention mechanism
CN117095306B (en) Mixed pixel decomposition method based on attention mechanism
CN113362915A (en) Material performance prediction method and system based on multi-modal learning
CN110866552B (en) Hyperspectral image classification method based on full convolution space propagation network
CN1472634A (en) High spectrum remote sensing image combined weighting random sorting method
CN116977747A (en) Small sample hyperspectral classification method based on multipath multi-scale feature twin network
CN116563649A (en) Tensor mapping network-based hyperspectral image lightweight classification method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant