CN117034778A - Method for inverting aboveground biomass based on HyperGraph-Transformer structure - Google Patents
Method for inverting aboveground biomass based on a HyperGraph-Transformer structure Info
- Publication number
- CN117034778A CN117034778A CN202311089810.4A CN202311089810A CN117034778A CN 117034778 A CN117034778 A CN 117034778A CN 202311089810 A CN202311089810 A CN 202311089810A CN 117034778 A CN117034778 A CN 117034778A
- Authority
- CN
- China
- Prior art keywords
- feature
- image
- features
- information
- biomass
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F30/00—Computer-aided design [CAD]
- G06F30/20—Design optimisation, verification or simulation
- G06F30/27—Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/042—Knowledge-based neural networks; Logical representations of neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0464—Convolutional networks [CNN, ConvNet]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/42—Global feature extraction by analysis of the whole pattern, e.g. using frequency domain transformations or autocorrelation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
- G06V20/13—Satellite images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
- G06V20/188—Vegetation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10032—Satellite or aerial image; Remote sensing
- G06T2207/10036—Multispectral image; Hyperspectral image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
Abstract
The invention discloses a method for inverting aboveground biomass based on a HyperGraph-Transformer structure. Oriented toward a biomass inversion model for fused SAR and optical satellite remote sensing images, the method constructs a deep learning model based on Transformer-related structures, fully exploits the Transformer's ability to extract global information, mines the deep spatial feature information in the fused remote sensing image data, builds a high-dimensional feature representation of the fusion image, and realizes biomass estimation and inversion through a deep learning algorithm. The invention further characterizes the geometric topology information in the fused remote sensing image data through a HyperGraph-based network feature enhancement structure: a hypergraph structure is constructed over the original features and hypergraph learning is performed, providing support for the subsequent acquisition of high-dimensional feature space information.
Description
Technical Field
The invention relates to a method for inverting aboveground biomass, in particular to a method for inverting aboveground biomass based on a HyperGraph-Transformer structure.
Background
Aboveground forest biomass, as the most basic quantitative characteristic of the forest ecosystem, is the energy basis and material source for the operation of the entire forest ecosystem. It reflects the complex relationship of material circulation and energy flow between the forest and its environment, and is an important index for measuring the carbon-sequestration capacity, productivity and structural function of the forest ecosystem. Changes in forest biomass reflect changes in the quality and condition of the forest ecosystem.
At present, inversion estimation of aboveground forest biomass is mostly realized with regression algorithms or empirical models coupled with machine learning. Using a single vegetation index, inversion accuracy suffers from saturation effects and insufficient band information; if the spectral information of sensitive bands or several vegetation indices is used as the input of a traditional regression or machine learning method for biomass estimation, a large amount of spectral information is usually discarded, selecting sensitive bands often requires prior knowledge, and the data processing is complex. In addition, compared with inversion from a single remote sensing image, biomass inversion combining multi-modal remote sensing images can obtain vegetation information of both the horizontal and the vertical structure. However, features obtained by simply combining remote sensing images lack relevance and consistency across data samples. Therefore, to improve inversion accuracy and efficiency, forest aboveground biomass inversion methods based on deep learning and remote sensing image fusion need to be studied.
Disclosure of Invention
To improve the inversion accuracy of aboveground forest biomass, the invention provides a method for inverting aboveground biomass based on a HyperGraph-Transformer structure. The method fuses SAR images and multispectral images to generate high-resolution image data containing both spectral information and vegetation feature information, inverts aboveground forest biomass through a deep learning Transformer model, and thereby builds a novel remote sensing quantitative inversion approach based on artificial intelligence.
The invention aims at realizing the following technical scheme:
a method for inverting aboveground biomass based on a HyperGraph-Transformer structure, comprising the steps of:
step one, based on the fused high-resolution remote sensing image data and measured biomass data, combined with contemporaneous enhanced vegetation index, photosynthetically active radiation, temperature, precipitation and altitude data, constructing a biomass inversion model using a Transformer-based deep learning network;
step two, using a hypergraph-based feature enhancement structure, comparing hyperedge construction modes, selecting the optimal feature representation result to construct hyperedges and their associated weights, establishing the adjacency matrix and performing hypergraph learning;
step three, determining the spatial relationship between vertices using distances in feature space, finding adjacent vertices, and constructing hyperedges to connect them;
step four, inputting the hypergraph-convolved features, which are rich in geometric topology information, into a Transformer-based deep learning network and extracting global features for biomass inversion;
step five, performing model fitting on the extracted features together with temperature, altitude and other data to realize biomass inversion.
Compared with the prior art, the invention has the following advantages:
1. The invention is oriented toward a biomass inversion model for fused SAR and optical satellite remote sensing images: a deep learning model based on Transformer-related structures is constructed, the Transformer's ability to extract global information is fully exploited, the deep spatial feature information in the fused remote sensing image data is mined, a high-dimensional feature representation of the fusion image is built, and biomass estimation and inversion are realized through a deep learning algorithm.
2. The invention further characterizes the geometric topology information in the fused remote sensing image data through a HyperGraph-based network feature enhancement structure. A hypergraph structure is constructed over the original features and hypergraph learning is performed, providing support for the subsequent acquisition of high-dimensional feature space information.
Drawings
FIG. 1 is a Transformer-based network structure;
FIG. 2 is a framework of the dual-branch convolutional neural network algorithm.
Detailed Description
The present invention is described below with reference to the accompanying drawings, but is not limited to the following description; any modification or equivalent substitution that does not depart from the spirit and scope of the present invention shall be included in the scope of protection of the present invention.
The invention provides a method for inverting aboveground biomass based on a HyperGraph-Transformer structure, which comprises the following steps:
Step one, based on the fused high-resolution remote sensing image data and measured biomass data, combined with contemporaneous enhanced vegetation index, photosynthetically active radiation, temperature, precipitation and altitude data, constructing a biomass inversion model using a Transformer-based deep learning network.
As shown in fig. 1, the Transformer-based deep learning network mainly consists of three consecutive Transformer layers. Taking the features learned by the hypergraph as network input, the Transformer layers further extract feature information. Unlike the hypergraph features, which are rich in local geometric topology information, the Transformer focuses on the global information of the features, so combining the two can highly characterize the features of the fused remote sensing image data. Each Transformer layer is made up of a Patch Merging Block and a Transformer Block. For each incoming feature, the Patch Merging Block downsamples the feature size to one quarter of the original, and the Transformer Block serves as the feature learning module for feature extraction.
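The patch-merging downsampling described above can be sketched in a few lines of numpy. This is an illustrative sketch only: it groups each 2×2 patch so the spatial size drops to one quarter, but omits the linear projection a real Patch Merging Block would apply afterwards; all names are hypothetical.

```python
import numpy as np

def patch_merging(x):
    """Sketch of a Patch Merging Block: every 2x2 patch of an (H, W, C)
    feature map becomes one position, so the spatial size drops to one
    quarter (H/2 x W/2) while the channels grow to 4C."""
    merged = np.concatenate([
        x[0::2, 0::2],  # top-left pixel of each 2x2 patch
        x[1::2, 0::2],  # bottom-left
        x[0::2, 1::2],  # top-right
        x[1::2, 1::2],  # bottom-right
    ], axis=-1)
    return merged

feat = np.zeros((8, 8, 16))          # toy hypergraph-learned feature map
print(patch_merging(feat).shape)     # (4, 4, 64)
```

The quarter-size spatial reduction with a 4× channel increase matches the downsampling behaviour described for the Transformer layer.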
In this step, as shown in fig. 2, high-resolution remote sensing image data are acquired by a dual-branch-CNN-based fusion method for SAR and optical images:
step 1, passing the original SAR image and the multispectral image through a high-pass filter to obtain the high-frequency information of the forest images, and upsampling the multispectral high-frequency information to the same resolution as that of the SAR high-frequency information, as shown in the following formulas:
S_hp = HP(S)
MS_hp = HP(MS)
↑MS_hp = ↑(MS_hp)
wherein HP(·) denotes the high-pass filtering operation applied to the two forest remote sensing images, S denotes the input SAR image, MS denotes the input multispectral image, and ↑(·) denotes the 3× upsampling operation by bicubic interpolation;
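A minimal numpy sketch of the two operations above, with two stated simplifications: the high-pass filter is approximated as the image minus a box blur, and nearest-neighbour replication stands in for the bicubic interpolation named in the text.

```python
import numpy as np

def high_pass(img, k=3):
    """HP(.): approximated here as the image minus a k x k box blur,
    which keeps only the high-frequency detail."""
    pad = k // 2
    padded = np.pad(img, pad, mode='edge')
    h, w = img.shape
    low = np.zeros((h, w))
    for dy in range(k):
        for dx in range(k):
            low += padded[dy:dy + h, dx:dx + w]
    return img - low / (k * k)

def upsample3(img):
    """3x upsampling; nearest-neighbour replication stands in for the
    bicubic interpolation used in the patent."""
    return np.kron(img, np.ones((3, 3)))

s_hp = high_pass(np.random.rand(30, 30))                 # S_hp = HP(S)
ms_hp_up = upsample3(high_pass(np.random.rand(10, 10)))  # upsampled MS_hp
print(s_hp.shape, ms_hp_up.shape)                        # (30, 30) (30, 30)
```

After the upsampling step, both high-frequency maps share the same resolution, which is the precondition for the pixel-level fusion that follows.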
step 2, extracting the deep features of the SAR and multispectral image high-frequency information respectively with an encoder module, the specific steps being as follows:
step 21, extracting small-scale shallow features from the SAR image high-frequency information with a 3×3 convolution kernel;
step 22, extracting large-scale shallow features from the multispectral image high-frequency information with a 9×9 convolution kernel;
step 23, further extracting features from the SAR and multispectral shallow features through a multiscale feature extraction module to obtain multiscale deep features;
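The two encoder branches of steps 21–22 can be illustrated with a toy single-channel "same"-padding convolution; the 3×3 and 9×9 kernels below are simple box filters standing in for trained encoder weights, and all variable names are illustrative.

```python
import numpy as np

def conv2d_same(x, kernel):
    """Single-channel 'same'-padding 2-D convolution used to sketch the
    two encoder branches (toy box kernels, not trained weights)."""
    k = kernel.shape[0]
    pad = k // 2
    xp = np.pad(x, pad, mode='edge')
    h, w = x.shape
    out = np.zeros((h, w))
    for dy in range(k):
        for dx in range(k):
            out += kernel[dy, dx] * xp[dy:dy + h, dx:dx + w]
    return out

sar_hp = np.random.rand(32, 32)  # SAR high-frequency information
ms_hp = np.random.rand(32, 32)   # upsampled multispectral high-frequency info
f_sar = conv2d_same(sar_hp, np.ones((3, 3)) / 9)   # small-scale branch (3x3)
f_ms = conv2d_same(ms_hp, np.ones((9, 9)) / 81)    # large-scale branch (9x9)
print(f_sar.shape, f_ms.shape)   # (32, 32) (32, 32)
```

The "same" padding keeps the feature maps at the input resolution, so features from the two branches can later be added pixel by pixel.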
step 3, fusing the deep features of the different source images in a feature fusion layer module, the specific steps being as follows:
step 31, adding, pixel by pixel, the deep features of the same scale output by the encoder module, thereby superimposing the high-resolution detail information of the SAR image, which the multispectral image lacks, onto the multispectral features;
step 32, performing channel cascade on all the superimposed and fused feature maps, the process being expressed as:

F_add^i = F_S^i ⊕ F_MS^i = E_i(S_hp) ⊕ E_i(↑MS_hp), i ∈ {3, 5, 7}

F_c = C(F_add^3, F_add^5, F_add^7)

wherein E_i(·) represents the feature extraction operation of the encoder module; ⊕ represents the feature pixel-level superposition operation; i takes the values 3, 5 and 7, denoting the convolution kernel sizes; F_S^i and F_MS^i respectively represent the deep features of the SAR image and the multispectral image extracted by convolution with an i×i kernel; F_add^i represents the feature map obtained by fusion in a pixel-superposition mode; C(·) represents the feature map channel cascade operation; and F_c represents the final fused feature map;
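The per-scale pixel superposition and channel cascade of steps 31–32 amount to the following sketch (toy random features at the three kernel scales; names are illustrative):

```python
import numpy as np

scales = (3, 5, 7)  # the three encoder kernel sizes
# toy per-scale deep features from the SAR and multispectral branches
f_sar = {i: np.random.rand(16, 16) for i in scales}
f_ms = {i: np.random.rand(16, 16) for i in scales}

# per-scale pixel-level superposition of same-scale features
f_add = {i: f_sar[i] + f_ms[i] for i in scales}

# channel cascade C(.): stack the fused maps along a channel axis
f_c = np.stack([f_add[i] for i in scales], axis=-1)
print(f_c.shape)  # (16, 16, 3)
```

Stacking along a new channel axis preserves each scale's fused map, so the decoder can draw on all three scales at once.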
step 4, the decoder module maps the fused feature map back to the spatial information of the image using the image super-resolution network SRCNN, wherein: the first layer uses a 9×9 convolution kernel to generate a 64-channel feature map, the second layer uses a 1×1 convolution kernel to generate a 32-channel feature map, and the last layer uses a 5×5 convolution kernel to reduce the feature channels to 3 dimensions and obtain clear image detail information. The three-layer convolution in the decoding process is expressed as:

F_1 = max(0, ω_1 * F_c + b_1)
F_2 = max(0, ω_2 * F_1 + b_2)
F_hp = ω_3 * F_2 + b_3

wherein ω_1, ω_2 and ω_3 represent the three different convolution kernels, b_1, b_2 and b_3 are the bias values of each layer, * denotes convolution, and F_hp represents the spatial detail information;
step 5, superimposing the spatial detail information on the upsampled multispectral image by means of a skip connection to obtain the final fusion result, i.e. transferring the multispectral spectral information into the fusion image, as expressed by the following formula:

F = F_hp + ↑MS

wherein F_hp represents the spatial detail information, ↑MS represents the multispectral image after 3× bicubic-interpolation upsampling, and F represents the final fused image.
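A toy end-to-end sketch of the SRCNN-style three-layer decoder and the skip connection (steps 4–5). Random kernels stand in for trained weights, so only the channel shapes (64 → 32 → 3) and the additive skip are meaningful here.

```python
import numpy as np

def conv_layer(x, out_ch, k, rng, relu=True):
    """Toy SRCNN-style layer: random k x k kernel ('same' padding) plus
    bias and optional ReLU; random weights stand in for trained ones."""
    h, w, cin = x.shape
    pad = k // 2
    xp = np.pad(x, ((pad, pad), (pad, pad), (0, 0)), mode='edge')
    wgt = rng.standard_normal((k, k, cin, out_ch)) * 0.01
    bias = rng.standard_normal(out_ch) * 0.01
    out = np.zeros((h, w, out_ch)) + bias
    for dy in range(k):
        for dx in range(k):
            out += np.tensordot(xp[dy:dy + h, dx:dx + w], wgt[dy, dx], axes=1)
    return np.maximum(out, 0.0) if relu else out

rng = np.random.default_rng(0)
f_c = rng.random((16, 16, 3))                 # fused feature map
f1 = conv_layer(f_c, 64, 9, rng)              # layer 1: 9x9 -> 64 channels
f2 = conv_layer(f1, 32, 1, rng)               # layer 2: 1x1 -> 32 channels
f_hp = conv_layer(f2, 3, 5, rng, relu=False)  # layer 3: 5x5 -> 3 channels
up_ms = rng.random((16, 16, 3))               # 3x-upsampled multispectral image
fused = f_hp + up_ms                          # skip connection: F = F_hp + ↑MS
print(f1.shape, f2.shape, fused.shape)
```

The final addition injects the multispectral spectral content into the detail map, mirroring the skip-connection fusion described in the text.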
Step two, using a hypergraph-based feature enhancement structure, comparing hyperedge construction modes such as node-distance-based and feature-based construction, selecting the optimal feature representation result to construct hyperedges and their associated weights, establishing the adjacency matrix and performing hypergraph learning.
Step three, the distance-based hypergraph generation method uses distances in feature space to determine the spatial relationship between vertices; its main objective is to find adjacent vertices in the feature space and construct hyperedges to connect them, typically via nearest-neighbour search or clustering. In the nearest-neighbour search method, constructing a hyperedge requires finding the points closest to each vertex: for each given vertex, a hyperedge connects the vertex itself with its nearest points in feature space. Unlike nearest-neighbour search, the clustering-based hyperedge construction method directly groups a set of vertices into clusters with a clustering algorithm such as k-means, and uses one hyperedge to connect all vertices in the same cluster.
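The nearest-neighbour hyperedge construction described above can be sketched as an incidence matrix H (rows = vertices, columns = hyperedges); this is an illustrative implementation, not the patent's exact one.

```python
import numpy as np

def knn_hyperedges(feats, k=3):
    """Distance-based hyperedge construction: each vertex spawns one
    hyperedge joining itself to its k nearest neighbours in feature
    space; returns the incidence matrix H (vertices x hyperedges)."""
    n = len(feats)
    dist = np.linalg.norm(feats[:, None, :] - feats[None, :, :], axis=-1)
    H = np.zeros((n, n))
    for v in range(n):
        members = np.argsort(dist[v])[:k + 1]  # vertex itself + k neighbours
        H[members, v] = 1.0
    return H

feats = np.random.rand(10, 8)  # 10 vertices with 8-dim fused-image features
H = knn_hyperedges(feats, k=3)
print(H.shape)                 # (10, 10); each hyperedge holds k+1 vertices
```

The same incidence matrix H is what a hypergraph convolution consumes, together with per-hyperedge weights, during the hypergraph learning of step two.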
Step four, the hypergraph-convolved features are rich in geometric topology information; inputting these features into a Transformer-based deep learning network allows global features to be better extracted for biomass inversion.
Step five, on this basis, model fitting is performed on the extracted features together with temperature, altitude and other data using methods such as Random Forest (RF) and Support Vector Machine (SVM) to realize biomass inversion; finally, test data are input into the fitted model for accuracy testing.
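A dependency-free stand-in for this final fitting stage: the patent names Random Forest and SVM regressors, but ordinary least squares is used here purely to illustrate the fit-then-invert flow on synthetic data; every variable is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
deep_feats = rng.random((100, 8))  # global features from the Transformer
temp = rng.random((100, 1))        # temperature
alt = rng.random((100, 1))         # altitude
X = np.hstack([deep_feats, temp, alt])

true_w = rng.random(10)            # hidden linear relation for the demo
y = X @ true_w                     # synthetic measured-biomass targets

w, *_ = np.linalg.lstsq(X, y, rcond=None)  # model fitting
pred = X @ w                               # biomass inversion
print(np.allclose(pred, y))                # True on this synthetic data
```

In practice the least-squares fit would be replaced by an RF or SVM regressor, and held-out plot measurements would serve as the accuracy test set.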
Claims (7)
1. A method for inverting aboveground biomass based on a HyperGraph-Transformer structure, said method comprising the steps of:
step one, based on the fused high-resolution remote sensing image data and measured biomass data, combined with contemporaneous enhanced vegetation index, photosynthetically active radiation, temperature, precipitation and altitude data, constructing a biomass inversion model using a Transformer-based deep learning network;
step two, using a hypergraph-based feature enhancement structure, comparing hyperedge construction modes, selecting the optimal feature representation result to construct hyperedges and their associated weights, establishing the adjacency matrix and performing hypergraph learning;
step three, determining the spatial relationship between vertices using distances in feature space, finding adjacent vertices, and constructing hyperedges to connect them;
step four, inputting the hypergraph-convolved features, which are rich in geometric topology information, into a Transformer-based deep learning network and extracting global features for biomass inversion;
step five, performing model fitting on the extracted features together with the temperature and altitude data to realize biomass inversion.
2. The method for inverting aboveground biomass based on a HyperGraph-Transformer structure according to claim 1, characterized in that the Transformer-based deep learning network consists of three consecutive Transformer layers.
3. The method for inverting aboveground biomass based on a HyperGraph-Transformer structure according to claim 2, wherein each Transformer layer consists of a Patch Merging Block and a Transformer Block; for each input feature, the Patch Merging Block downsamples the feature size to one quarter of the original, and the Transformer Block serves as the feature learning module for feature extraction.
4. The method for inverting aboveground biomass based on a HyperGraph-Transformer structure according to claim 1, wherein in the first step, high-resolution remote sensing image data are obtained by a dual-branch-CNN-based fusion method for SAR and optical images:
step 1, passing the original SAR image and the multispectral image through a high-pass filter to obtain the high-frequency information of the forest images, and upsampling the multispectral high-frequency information to the same resolution as that of the SAR high-frequency information, as shown in the following formulas:
S_hp = HP(S)
MS_hp = HP(MS)
↑MS_hp = ↑(MS_hp)
wherein HP(·) denotes the high-pass filtering operation applied to the two forest remote sensing images, S denotes the input SAR image, MS denotes the input multispectral image, and ↑(·) denotes the 3× upsampling operation by bicubic interpolation;
step 2, extracting the deep features of the SAR and multispectral image high-frequency information respectively with an encoder module;
step 3, fusing the deep features of the different source images in a feature fusion layer module;
step 4, the decoder module maps the fused feature map back to the spatial information of the image using the image super-resolution network SRCNN, wherein: the first layer uses a 9×9 convolution kernel to generate a 64-channel feature map, the second layer uses a 1×1 convolution kernel to generate a 32-channel feature map, and the last layer uses a 5×5 convolution kernel to reduce the feature channels to 3 dimensions and obtain clear image detail information;
step 5, superimposing the spatial detail information on the upsampled multispectral image by means of a skip connection to obtain the final fusion result, i.e. transferring the multispectral spectral information into the fusion image, as expressed by the following formula:

F = F_hp + ↑MS

wherein F_hp represents the spatial detail information, ↑MS represents the multispectral image after 3× bicubic-interpolation upsampling, and F represents the final fused image.
5. The method for inverting aboveground biomass based on a HyperGraph-Transformer structure according to claim 4, wherein the specific steps of step 2 are as follows:
step 21, extracting small-scale shallow features from the SAR image high-frequency information with a 3×3 convolution kernel;
step 22, extracting large-scale shallow features from the multispectral image high-frequency information with a 9×9 convolution kernel;
step 23, further extracting features from the SAR and multispectral shallow features through a multiscale feature extraction module to obtain multiscale deep features.
6. The method for inverting aboveground biomass based on a HyperGraph-Transformer structure according to claim 4, wherein the specific steps of step 3 are as follows:
step 31, adding, pixel by pixel, the deep features of the same scale output by the encoder module, thereby superimposing the high-resolution detail information of the SAR image, which the multispectral image lacks, onto the multispectral features;
step 32, performing channel cascade on all the superimposed and fused feature maps, the process being expressed as:

F_add^i = F_S^i ⊕ F_MS^i = E_i(S_hp) ⊕ E_i(↑MS_hp), i ∈ {3, 5, 7}

F_c = C(F_add^3, F_add^5, F_add^7)

wherein E_i(·) represents the feature extraction operation of the encoder module; ⊕ represents the feature pixel-level superposition operation; i takes the values 3, 5 and 7, denoting the convolution kernel sizes; F_S^i and F_MS^i respectively represent the deep features of the SAR image and the multispectral image extracted by convolution with an i×i kernel; F_add^i represents the feature map obtained by fusion in a pixel-superposition mode; C(·) represents the feature map channel cascade operation; and F_c represents the final fused feature map.
7. The method for inverting aboveground biomass based on a HyperGraph-Transformer structure according to claim 4, wherein in said step 4, the three-layer convolution in the decoding process is represented by the following formulas:

F_1 = max(0, ω_1 * F_c + b_1)
F_2 = max(0, ω_2 * F_1 + b_2)
F_hp = ω_3 * F_2 + b_3

wherein ω_1, ω_2 and ω_3 represent the three different convolution kernels, b_1, b_2 and b_3 are the bias values of each layer, * denotes convolution, and F_hp represents the spatial detail information.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311089810.4A CN117034778A (en) | 2023-08-28 | 2023-08-28 | Method for inverting aboveground biomass based on HyperGraph-Transformer structure
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311089810.4A CN117034778A (en) | 2023-08-28 | 2023-08-28 | Method for inverting aboveground biomass based on HyperGraph-Transformer structure
Publications (1)
Publication Number | Publication Date |
---|---|
CN117034778A true CN117034778A (en) | 2023-11-10 |
Family
ID=88626348
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311089810.4A Pending CN117034778A (en) | 2023-08-28 | 2023-08-28 | Method for inverting aboveground biomass based on HyperGraph-Transformer structure
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117034778A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117571641A (en) * | 2024-01-12 | 2024-02-20 | 自然资源部第二海洋研究所 | Sea surface nitrate concentration distribution detection method |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113919441A (en) * | 2021-11-03 | 2022-01-11 | 北京工业大学 | Classification method based on hypergraph transformation network |
CN114821261A (en) * | 2022-05-20 | 2022-07-29 | 合肥工业大学 | Image fusion algorithm |
CN115331063A (en) * | 2022-09-02 | 2022-11-11 | 安徽大学 | Hyperspectral image classification method and system based on dynamic hypergraph convolution network |
CN115759526A (en) * | 2022-10-31 | 2023-03-07 | 苏州深蓝空间遥感技术有限公司 | Crop harvest index inversion method based on convolutional neural network and crop model |
WO2023098098A1 (en) * | 2021-12-02 | 2023-06-08 | 南京邮电大学 | Tag-aware recommendation method based on attention mechanism and hypergraph convolution |
-
2023
- 2023-08-28 CN CN202311089810.4A patent/CN117034778A/en active Pending
Non-Patent Citations (2)
Title |
---|
YU-JUNG HEO: "Hypergraph Transformer:Weakly-supervised Multi-hop Reasoning for Knowledge-based Visual Question Answering", Retrieved from the Internet <URL:https://arxiv.org/abs/2204.10448> * |
- Yang Haolong: "Research on ground-object classification based on an improved TransUnet network", China Master's Theses Full-text Database, Engineering Science and Technology II, no. 2, pages 1-46 *
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination |