CN110210420A - Classification method based on fusion of hyperspectral image and DSM data

Classification method based on fusion of hyperspectral image and DSM data

Info

Publication number
CN110210420A
CN110210420A (application number CN201910487219.1A)
Authority
CN
China
Prior art keywords
layer
information
input end
output end
convolution
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910487219.1A
Other languages
Chinese (zh)
Inventor
张钧萍 (Zhang Junping)
王金哲 (Wang Jinzhe)
吴斯凡 (Wu Sifan)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harbin Institute of Technology
Original Assignee
Harbin Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin Institute of Technology filed Critical Harbin Institute of Technology
Priority to CN201910487219.1A
Publication of CN110210420A
Legal status: Pending (current)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/25 Fusion techniques
    • G06F 18/253 Fusion techniques of extracted features
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G06N 3/08 Learning methods
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/10 Terrestrial scenes
    • G06V 20/13 Satellite images
    • G06V 20/194 Terrestrial scenes using hyperspectral data, i.e. more or other wavelengths than RGB

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Astronomy & Astrophysics (AREA)
  • Remote Sensing (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

A classification method based on fusion of a hyperspectral image and DSM data, relating to the field of hyperspectral image information processing. It addresses the problems that existing pixel-level fusion methods lose hyperspectral spectral information, make it difficult to jointly exploit the spatial and spectral information of a hyperspectral image, and introduce varying degrees of spectral distortion during fusion. The classification method includes the following steps: step one, feature extraction is performed simultaneously on the hyperspectral image and the LiDAR-derived DSM data through a dual-branch neural network to obtain the space-spectrum joint information F_spec-spat of the hyperspectral image and the multi-scale elevation information F_elv of the DSM data; step two, F_spec-spat and F_elv are fully connected to realize feature fusion of the space-spectrum joint information F_spec-spat and the multi-scale elevation information F_elv, obtaining the fused data information F_all; step three, the data information F_all is classified by a classifier to obtain sample class labels, thereby completing the classification of the fused data information F_all. The method is mainly used for terrain classification.

Description

Classification method based on fusion of hyperspectral image and DSM data
Technical Field
The invention relates to the field of hyperspectral image information processing.
Background
With the continuous progress of aviation and aerospace platforms and sensor technologies, more and more data types can be obtained, and effective use of multi-source remote sensing data can overcome the shortcomings of a single data source in applications such as land use and military reconnaissance.
Existing data fusion methods can be divided into three main types: pixel-level fusion, feature-level fusion and decision-level fusion. Most conventional fusion methods are pixel-level: they lose hyperspectral spectral information, and joint utilization of the spatial and spectral information of a hyperspectral image is difficult to realize. For example, in the prior art, spatial and spectral features are mainly combined by stacking or by mixed kernels; both approaches combine spatial and spectral information, but they increase the complexity of data processing, damage the original data structure, and cannot fully exploit the original features. In addition, varying degrees of spectral distortion arise during fusion, so these problems of pixel-level fusion in conventional methods need to be solved.
Disclosure of Invention
The invention provides a classification method based on fusion of hyperspectral images and DSM data, and aims to solve the problems that existing pixel-level fusion methods lose hyperspectral spectral information, that joint utilization of the spatial and spectral information of hyperspectral images is difficult to realize, and that varying degrees of spectral distortion arise during fusion.
The classification method based on the fusion of the hyperspectral image and the DSM data comprises the following steps:
Step one: feature extraction is performed simultaneously on the hyperspectral image and the LiDAR-derived DSM data through a dual-branch neural network to obtain the space-spectrum joint information F_spec-spat of the hyperspectral image and the multi-scale elevation information F_elv of the DSM data.
Step two: the space-spectrum joint information F_spec-spat and the multi-scale elevation information F_elv are fully connected to realize their feature fusion, obtaining the fused data information F_all.
Step three: the data information F_all is classified by a classifier to obtain sample class labels, thereby completing the classification of the fused data information F_all.
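To make steps two and three concrete, the following is a minimal PyTorch sketch in which the two branch outputs are fused through a fully connected layer and then classified. The feature widths, the fused width, the class count and the ReLU nonlinearity are illustrative assumptions; the patent fixes only the structure (full connection of F_spec-spat and F_elv, then a classifier).

```python
# Minimal sketch (assumed PyTorch implementation) of steps two and three.
# All dimensions and the choice of ReLU are illustrative assumptions.
import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    def __init__(self, spec_dim=128, elev_dim=64, fused_dim=128, num_classes=15):
        super().__init__()
        # Step two: feature-level fusion via full connection of the two vectors.
        self.fuse = nn.Linear(spec_dim + elev_dim, fused_dim)
        # Step three: classifier producing per-sample class scores.
        self.classify = nn.Linear(fused_dim, num_classes)

    def forward(self, f_spec_spat, f_elv):
        f_all = torch.relu(self.fuse(torch.cat([f_spec_spat, f_elv], dim=1)))
        return self.classify(f_all)  # logits; argmax yields the class label

# Dummy branch outputs for a batch of 4 samples:
f_spec_spat = torch.randn(4, 128)  # space-spectrum joint information F_spec-spat
f_elv = torch.randn(4, 64)         # multi-scale elevation information F_elv
labels = FusionClassifier()(f_spec_spat, f_elv).argmax(dim=1)
```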
Preferably, the dual-branch neural network comprises a 3D CNN branch and a 2D CNN branch;
the 3D CNN branch is used for performing feature extraction on the hyperspectral image to obtain the space-spectrum joint information F_spec-spat of the hyperspectral image;
the 2D CNN branch is used for performing feature extraction on the LiDAR-derived DSM data to obtain the multi-scale elevation information F_elv of the DSM data.
Preferably, the 3D CNN branch is a six-layer network structure;
the six-layer network structure comprises two convolution layers, a normalization layer, two activation layers and a max pooling layer;
wherein,
in order of information flow, the layers of the six-layer network structure are: a first convolution layer, a normalization layer, a first activation layer, a second convolution layer, a second activation layer and a max pooling layer;
the input end of the first convolution layer in the six-layer network structure receives the hyperspectral image; the output end of the first convolution layer is connected to the input end of the normalization layer; the output end of the normalization layer is connected to the input end of the first activation layer; the output end of the first activation layer is connected to the input end of the second convolution layer; the output end of the second convolution layer is connected to the input end of the second activation layer; the output end of the second activation layer is connected to the input end of the max pooling layer; and the output end of the max pooling layer outputs the space-spectrum joint information F_spec-spat of the hyperspectral image.
Preferably, the 2D CNN branch comprises a convolution layer, an activation layer, a max pooling layer and two cascade modules; in order of information flow, the layers of the 2D CNN branch are: a convolution layer, an activation layer, a first cascade module, a max pooling layer and a second cascade module;
wherein,
the input end of the convolution layer in the 2D CNN branch receives the LiDAR-derived DSM data; the output end of the convolution layer is connected to the input end of the activation layer; the output end of the activation layer is connected to the input end of the first cascade module; the output end of the first cascade module is connected to the input end of the max pooling layer; the output end of the max pooling layer is connected to the input end of the second cascade module; and the output end of the second cascade module outputs the multi-scale elevation information F_elv.
Preferably, the cascade module is a seven-layer network structure comprising four convolution layers, two activation layers and a normalization layer; in order of information flow, the layers of the seven-layer network structure are: a first convolution layer, a second convolution layer, a first activation layer, a third convolution layer, a normalization layer, a fourth convolution layer and a second activation layer;
wherein,
the output end of the first convolution layer is connected to both the input end of the second convolution layer and the input end of the third convolution layer; the output end of the second convolution layer is connected to the input end of the first activation layer; the output end of the first activation layer is connected to both the input end of the third convolution layer and the input end of the second activation layer; the output end of the third convolution layer is connected to the input end of the normalization layer; the output end of the normalization layer is connected to the input end of the fourth convolution layer; and the output end of the fourth convolution layer is connected to the input end of the second activation layer.
The invention has the advantages that the spatial and spectral information of the hyperspectral image and the elevation information of the LiDAR data are effectively utilized: a dual-branch convolutional neural network (CNN) is constructed that simultaneously extracts the space-spectrum features of the hyperspectral image and the elevation information of the LiDAR data, so the integrated space-spectrum characteristics of the hyperspectral image are fully exploited and the correlation among the data is not lost. The deep learning method can extract deep features from the image, reduces the complexity of manual feature selection, and performs data fusion and classification end to end, meaning that the whole process, from information extraction through classification, takes one input and produces one output.
On the one hand, the spatial and spectral information is extracted jointly, which avoids the loss of hyperspectral spectral information that occurs in the prior art when spatial and spectral information are extracted independently; this ensures the accuracy of feature extraction and thereby the accuracy of the subsequent classification result. On the other hand, feature extraction is performed on the hyperspectral image and the LiDAR-derived DSM data simultaneously through the dual-branch neural network, which increases the speed of feature extraction.
LiDAR stands for Light Detection and Ranging (laser radar); DSM stands for Digital Surface Model.
Drawings
FIG. 1 is a schematic diagram of a classification method based on fusion of hyperspectral images and DSM data according to the invention;
FIG. 2 is a schematic diagram of the 3D CNN branch;
FIG. 3 is a schematic diagram of the 2D CNN branch;
FIG. 4 is a schematic diagram of a cascade module.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the embodiments and features of the embodiments may be combined with each other without conflict.
The invention is further described with reference to the following drawings and specific examples, which are not intended to be limiting.
Referring to FIG. 1, the classification method based on fusion of the hyperspectral image and DSM data according to the present embodiment includes the following steps:
Step one: feature extraction is performed simultaneously on the hyperspectral image and the LiDAR-derived DSM data through a dual-branch neural network to obtain the space-spectrum joint information F_spec-spat of the hyperspectral image and the multi-scale elevation information F_elv of the DSM data.
Step two: the space-spectrum joint information F_spec-spat and the multi-scale elevation information F_elv are fully connected to realize their feature fusion, obtaining the fused data information F_all.
Step three: the data information F_all is classified by a classifier to obtain sample class labels, thereby completing the classification of the fused data information F_all.
In this embodiment, LiDAR stands for Light Detection and Ranging (laser radar); DSM stands for Digital Surface Model.
According to the invention, on the one hand, the spatial and spectral information is extracted jointly, which avoids the loss of hyperspectral spectral information that occurs in the prior art when spatial and spectral information are extracted independently; this ensures the accuracy of feature extraction and thereby the accuracy of the subsequent classification result. On the other hand, feature extraction is performed on the hyperspectral image and the LiDAR-derived DSM data simultaneously through the dual-branch neural network, which increases the speed of feature extraction.
In the whole classification process, feature extraction is first performed simultaneously on the hyperspectral image and the LiDAR-derived DSM data to obtain the space-spectrum joint information F_spec-spat and the multi-scale elevation information F_elv; the two are then fused; finally, the fused data information is classified. The whole processing flow is simple and easy to implement.
Referring to FIG. 2 and FIG. 3, in the preferred embodiment the dual-branch neural network includes a 3D CNN branch and a 2D CNN branch;
the 3D CNN branch is used for performing feature extraction on the hyperspectral image to obtain the space-spectrum joint information F_spec-spat of the hyperspectral image;
the 2D CNN branch is used for performing feature extraction on the LiDAR-derived DSM data to obtain the multi-scale elevation information F_elv of the DSM data.
In the preferred embodiment, 3D CNN stands for three-dimensional convolutional neural network, and 2D CNN stands for two-dimensional convolutional neural network.
The dual-branch neural network contains two channels: the 3D CNN branch jointly extracts spatial and spectral features, while the 2D CNN branch extracts elevation information; the features extracted by the two branches are fully connected in the feature fusion stage and finally fed into a classifier, which outputs the classification result.
Referring to FIG. 2, in the preferred embodiment the 3D CNN branch is a six-layer network structure;
the six-layer network structure comprises two convolution layers, a normalization layer, two activation layers and a max pooling layer;
wherein,
in order of information flow, the layers of the six-layer network structure are: a first convolution layer, a normalization layer, a first activation layer, a second convolution layer, a second activation layer and a max pooling layer;
the input end of the first convolution layer in the six-layer network structure receives the hyperspectral image; the output end of the first convolution layer is connected to the input end of the normalization layer; the output end of the normalization layer is connected to the input end of the first activation layer; the output end of the first activation layer is connected to the input end of the second convolution layer; the output end of the second convolution layer is connected to the input end of the second activation layer; the output end of the second activation layer is connected to the input end of the max pooling layer; and the output end of the max pooling layer outputs the space-spectrum joint information F_spec-spat of the hyperspectral image.
In the preferred embodiment, the 3D CNN branch thus sequentially applies convolution, normalization, activation, a second convolution, a second activation and max pooling to the hyperspectral image to complete feature extraction and obtain the space-spectrum joint information, as sketched below.
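A minimal sketch of this six-layer branch, assuming PyTorch and hyperspectral patches shaped (batch, 1, bands, height, width); the channel counts, kernel sizes, patch size, batch normalization and ReLU are illustrative assumptions, since the text fixes only the layer order.

```python
# Sketch of the six-layer 3D CNN branch (assumed PyTorch; all sizes illustrative).
import torch
import torch.nn as nn

hsi_branch = nn.Sequential(
    nn.Conv3d(1, 8, kernel_size=3, padding=1),   # first convolution layer
    nn.BatchNorm3d(8),                           # normalization layer (assumed batch norm)
    nn.ReLU(),                                   # first activation layer (assumed ReLU)
    nn.Conv3d(8, 16, kernel_size=3, padding=1),  # second convolution layer
    nn.ReLU(),                                   # second activation layer
    nn.MaxPool3d(kernel_size=2),                 # max pooling layer
)

patch = torch.randn(4, 1, 30, 11, 11)  # 4 patches, 30 bands, 11x11 spatial window
f_spec_spat = hsi_branch(patch).flatten(start_dim=1)  # F_spec-spat, one vector per patch
```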
Referring to FIG. 3, in the preferred embodiment the 2D CNN branch comprises a convolution layer, an activation layer, a max pooling layer and two cascade modules; in order of information flow, the layers of the 2D CNN branch are: a convolution layer, an activation layer, a first cascade module, a max pooling layer and a second cascade module;
wherein,
the input end of the convolution layer in the 2D CNN branch receives the LiDAR-derived DSM data; the output end of the convolution layer is connected to the input end of the activation layer; the output end of the activation layer is connected to the input end of the first cascade module; the output end of the first cascade module is connected to the input end of the max pooling layer; the output end of the max pooling layer is connected to the input end of the second cascade module; and the output end of the second cascade module outputs the multi-scale elevation information F_elv.
In the preferred embodiment, the cascade module is used for performing multi-scale feature extraction on the input data.
The 2D CNN branch sequentially convolves and activates the LiDAR-derived DSM data, performs one multi-scale feature extraction through the first cascade module, applies a max pooling operation, and performs a second multi-scale feature extraction on the pooled result through the second cascade module, so as to output the multi-scale elevation information F_elv. A sketch of this layout follows.
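A minimal sketch of the branch layout, assuming PyTorch and single-band DSM patches; the cascade module is stubbed here as one channel-preserving convolution, and its full seven-layer internal structure is sketched after the cascade-module description below. Channel counts, kernel sizes and the patch size are illustrative assumptions.

```python
# Sketch of the 2D CNN branch layout (assumed PyTorch; all sizes illustrative).
import torch
import torch.nn as nn

def cascade_stub(channels):
    # Placeholder for the cascade module; its real structure (four convolutions,
    # two activations, one normalization, two shortcuts) is sketched further below.
    return nn.Conv2d(channels, channels, kernel_size=3, padding=1)

dsm_branch = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1),  # convolution layer
    nn.ReLU(),                                   # activation layer (assumed ReLU)
    cascade_stub(16),                            # first cascade module
    nn.MaxPool2d(kernel_size=2),                 # max pooling layer
    cascade_stub(16),                            # second cascade module
)

dsm_patch = torch.randn(4, 1, 11, 11)               # 4 single-band DSM patches
f_elv = dsm_branch(dsm_patch).flatten(start_dim=1)  # F_elv, one vector per patch
```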
Referring to FIG. 4, in the preferred embodiment the cascade module is a seven-layer network structure comprising four convolution layers, two activation layers and one normalization layer;
in order of information flow, the layers of the seven-layer network structure are: a first convolution layer, a second convolution layer, a first activation layer, a third convolution layer, a normalization layer, a fourth convolution layer and a second activation layer;
wherein,
the output end of the first convolution layer is connected to both the input end of the second convolution layer and the input end of the third convolution layer; the output end of the second convolution layer is connected to the input end of the first activation layer; the output end of the first activation layer is connected to both the input end of the third convolution layer and the input end of the second activation layer; the output end of the third convolution layer is connected to the input end of the normalization layer; the output end of the normalization layer is connected to the input end of the fourth convolution layer; and the output end of the fourth convolution layer is connected to the input end of the second activation layer.
In the present preferred embodiment, the cascade module combines features from different levels of different layers, enabling feature reuse and transfer, and performs one round of multi-scale feature extraction and stacking. The module performs seven layers of operations, comprising four convolutions, one normalization and two activations, with two shortcut connections: one from the first convolution layer to the third convolution layer, and one from the first activation layer to the second activation layer. Element-wise addition is performed between two feature maps with identically shaped channels, and the fused features are then propagated to the next layer in the forward pass.
The third convolution layer adds the output of the first convolution layer to the output of the first activation layer and then convolves the sum.
The second activation layer adds the output of the first activation layer to the output of the fourth convolution layer and then applies the activation to the sum, as sketched below.
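A minimal sketch of the cascade module with its two shortcut connections, assuming PyTorch, channel-preserving 3x3 convolutions (so the element-wise additions are shape-compatible), batch normalization and ReLU; the actual kernel sizes, and hence the scales being mixed, are assumptions not fixed by the text.

```python
# Sketch of the seven-layer cascade module (assumed PyTorch; sizes illustrative).
import torch
import torch.nn as nn

class CascadeModule(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)  # first convolution
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)  # second convolution
        self.act1 = nn.ReLU()                                     # first activation
        self.conv3 = nn.Conv2d(channels, channels, 3, padding=1)  # third convolution
        self.norm = nn.BatchNorm2d(channels)                      # normalization
        self.conv4 = nn.Conv2d(channels, channels, 3, padding=1)  # fourth convolution
        self.act2 = nn.ReLU()                                     # second activation

    def forward(self, x):
        y1 = self.conv1(x)
        a1 = self.act1(self.conv2(y1))
        # First shortcut: conv1 output added to act1 output, then convolved by conv3.
        y4 = self.conv4(self.norm(self.conv3(y1 + a1)))
        # Second shortcut: act1 output added to conv4 output, then activated by act2.
        return self.act2(a1 + y4)

# A cascade module preserves the input shape:
out = CascadeModule(16)(torch.randn(4, 16, 11, 11))  # -> (4, 16, 11, 11)
```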
Although the invention herein has been described with reference to particular embodiments, it is to be understood that these embodiments are merely illustrative of the principles and applications of the present invention. It is therefore to be understood that numerous modifications may be made to the illustrative embodiments and that other arrangements may be devised without departing from the spirit and scope of the present invention as defined by the appended claims. It should be understood that features described in different dependent claims and herein may be combined in ways different from those described in the original claims. It is also to be understood that features described in connection with individual embodiments may be used in other described embodiments.

Claims (5)

1. The classification method based on the fusion of the hyperspectral image and the DSM data is characterized by comprising the following steps of:
step one: feature extraction is performed simultaneously on the hyperspectral image and the LiDAR-derived DSM data through a dual-branch neural network to obtain the space-spectrum joint information F_spec-spat of the hyperspectral image and the multi-scale elevation information F_elv of the DSM data;
step two: the space-spectrum joint information F_spec-spat and the multi-scale elevation information F_elv are fully connected to realize their feature fusion, obtaining the fused data information F_all;
step three: the data information F_all is classified by a classifier to obtain sample class labels, thereby completing the classification of the fused data information F_all.
2. The classification method based on fusion of the hyperspectral image and DSM data according to claim 1, wherein the dual-branch neural network comprises a 3D CNN branch and a 2D CNN branch;
the 3D CNN branch is used for performing feature extraction on the hyperspectral image to obtain the space-spectrum joint information F_spec-spat of the hyperspectral image;
the 2D CNN branch is used for performing feature extraction on the LiDAR-derived DSM data to obtain the multi-scale elevation information F_elv of the DSM data.
3. The classification method based on fusion of the hyperspectral image and DSM data according to claim 2, wherein the 3D CNN branch is a six-layer network structure;
the six-layer network structure comprises two convolution layers, a normalization layer, two activation layers and a max pooling layer;
wherein,
in order of information flow, the layers of the six-layer network structure are: a first convolution layer, a normalization layer, a first activation layer, a second convolution layer, a second activation layer and a max pooling layer;
the input end of the first convolution layer in the six-layer network structure receives the hyperspectral image; the output end of the first convolution layer is connected to the input end of the normalization layer; the output end of the normalization layer is connected to the input end of the first activation layer; the output end of the first activation layer is connected to the input end of the second convolution layer; the output end of the second convolution layer is connected to the input end of the second activation layer; the output end of the second activation layer is connected to the input end of the max pooling layer; and the output end of the max pooling layer outputs the space-spectrum joint information F_spec-spat of the hyperspectral image.
4. The classification method based on fusion of the hyperspectral image and DSM data according to claim 2, wherein
the 2D CNN branch comprises a convolution layer, an activation layer, a max pooling layer and two cascade modules; in order of information flow, the layers of the 2D CNN branch are: a convolution layer, an activation layer, a first cascade module, a max pooling layer and a second cascade module;
wherein,
the input end of the convolution layer in the 2D CNN branch receives the LiDAR-derived DSM data; the output end of the convolution layer is connected to the input end of the activation layer; the output end of the activation layer is connected to the input end of the first cascade module; the output end of the first cascade module is connected to the input end of the max pooling layer; the output end of the max pooling layer is connected to the input end of the second cascade module; and the output end of the second cascade module outputs the multi-scale elevation information F_elv.
5. The classification method based on fusion of the hyperspectral image and DSM data according to claim 4, wherein
the cascade module is a seven-layer network structure comprising four convolution layers, two activation layers and a normalization layer; in order of information flow, the layers of the seven-layer network structure are: a first convolution layer, a second convolution layer, a first activation layer, a third convolution layer, a normalization layer, a fourth convolution layer and a second activation layer;
wherein,
the output end of the first convolution layer is connected to both the input end of the second convolution layer and the input end of the third convolution layer; the output end of the second convolution layer is connected to the input end of the first activation layer; the output end of the first activation layer is connected to both the input end of the third convolution layer and the input end of the second activation layer; the output end of the third convolution layer is connected to the input end of the normalization layer; the output end of the normalization layer is connected to the input end of the fourth convolution layer; and the output end of the fourth convolution layer is connected to the input end of the second activation layer.
CN201910487219.1A 2019-06-05 2019-06-05 Classification method based on fusion high spectrum image and DSM data Pending CN110210420A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910487219.1A CN110210420A (en) 2019-06-05 2019-06-05 Classification method based on fusion high spectrum image and DSM data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910487219.1A CN110210420A (en) 2019-06-05 2019-06-05 Classification method based on fusion high spectrum image and DSM data

Publications (1)

Publication Number Publication Date
CN110210420A true CN110210420A (en) 2019-09-06

Family

ID=67791113

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910487219.1A Pending CN110210420A (en) 2019-06-05 2019-06-05 Classification method based on fusion high spectrum image and DSM data

Country Status (1)

Country Link
CN (1) CN110210420A (en)



Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190138830A1 (en) * 2015-01-09 2019-05-09 Irvine Sensors Corp. Methods and Devices for Cognitive-based Image Data Analytics in Real Time Comprising Convolutional Neural Network
BR102017015268A2 (en) * 2017-07-17 2019-01-29 Fundação Universidade Regional De Blumenau method for real-time flood monitoring by unmanned aerial vehicle
CN107909015A (en) * 2017-10-27 2018-04-13 广东省智能制造研究所 Hyperspectral image classification method based on convolutional neural networks and empty spectrum information fusion
CN108846352A (en) * 2018-06-08 2018-11-20 广东电网有限责任公司 A kind of vegetation classification and recognition methods
CN109164459A (en) * 2018-08-01 2019-01-08 南京林业大学 A kind of method that combination laser radar and high-spectral data classify to forest species

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
FRANZ ROTTENSTEINER ET AL.: "A New Method for Building Extraction in Urban Areas from High-resolution LIDAR Data", ResearchGate *
HAO LI ET AL.: "Hyperspectral and LiDAR Fusion Using Deep", Remote Sensing *
XIAODONG XU ET AL.: "Multisource Remote Sensing Data Classification Based on Convolutional Neural Network", IEEE Transactions on Geoscience and Remote Sensing *
YING LI ET AL.: "Spectral-Spatial Classification of Hyperspectral Imagery with 3D Convolutional Neural Network", Remote Sensing *
LI CHUNYANG: "Research on Joint Classification of Multi-/Hyperspectral and Elevation Data Based on Deep Neural Networks", China Master's Theses Full-text Database, Information Science and Technology *
ZOU XIONGGAO: "Research on Filtering Classification and Building Extraction Techniques Based on Airborne LiDAR Data", China Master's Theses Full-text Database, Information Science and Technology *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113963262A (en) * 2021-12-20 2022-01-21 中国地质大学(武汉) Mining area land coverage classification method based on depth feature fusion model
CN113963262B (en) * 2021-12-20 2022-08-23 中国地质大学(武汉) Mining area land coverage classification method, equipment, device and storage medium

Similar Documents

Publication Publication Date Title
Qingyun et al. Cross-modality attentive feature fusion for object detection in multispectral remote sensing imagery
CN111325751B (en) CT image segmentation system based on attention convolution neural network
Du et al. Car detection for autonomous vehicle: LIDAR and vision fusion approach through deep learning framework
CN109902806B (en) Method for determining target bounding box of noise image based on convolutional neural network
CN109509156B (en) Image defogging processing method based on generation countermeasure model
CN111274921A (en) Method for recognizing human body behaviors by utilizing attitude mask
WO2024040828A1 (en) Method and device for fusion and classification of remote sensing hyperspectral image and laser radar image
Zhang et al. Exploration of deep learning-based multimodal fusion for semantic road scene segmentation
CN111027581A (en) 3D target detection method and system based on learnable codes
Zuo et al. Fast residual forests: Rapid ensemble learning for semantic segmentation
CN111553869A (en) Method for complementing generated confrontation network image under space-based view angle
CN114663514A (en) Object 6D attitude estimation method based on multi-mode dense fusion network
CN113361466A (en) Multi-modal cross-directed learning-based multi-spectral target detection method
CN112861774A (en) Method and system for identifying ship target by using remote sensing image
CN116109925A (en) Multi-mode remote sensing image classification method based on heterogeneous feature learning network
Liu et al. A multi-modality sensor system for unmanned surface vehicle
CN110210420A (en) Classification method based on fusion high spectrum image and DSM data
CN116665185A (en) Three-dimensional target detection method, system and storage medium for automatic driving
Li et al. Automatic rocks segmentation based on deep learning for planetary rover images
CN116433904A (en) Cross-modal RGB-D semantic segmentation method based on shape perception and pixel convolution
CN116824356A (en) Method and system for extracting and classifying spatial elevation spectrum features of multi-source remote sensing image
CN115861861A (en) Lightweight acceptance method based on unmanned aerial vehicle distribution line inspection
CN114373080B (en) Hyperspectral classification method of lightweight hybrid convolution model based on global reasoning
Khryaschev et al. Urban areas analysis using satellite image segmentation and deep neural network
CN116167927A (en) Image defogging method and system based on mixed double-channel attention mechanism

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20190906)