CN115588136A: Neural network hyperspectral image classification method based on multi-level spatial spectrum fusion


Info

Publication number: CN115588136A
Application number: CN202211225424.9A
Authority: CN (China)
Original language: Chinese (zh)
Inventor: 孙泊远
Applicant and assignee: Nantong Academy of Intelligent Sensing
Priority / filing date: 2022-10-09
Legal status: Pending

Classifications

    • G06V 20/194: Terrestrial scenes using hyperspectral data, i.e. more or other wavelengths than RGB
    • G06N 3/084: Learning methods; backpropagation, e.g. using gradient descent
    • G06V 10/764: Image or video recognition using pattern recognition or machine learning, using classification, e.g. of video objects
    • G06V 10/774: Generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
    • G06V 10/82: Image or video recognition using pattern recognition or machine learning, using neural networks


Abstract

The invention belongs to the technical field of image processing and discloses a neural network hyperspectral image classification method based on multi-level spatial spectrum fusion, comprising the following steps: acquiring a hyperspectral image and extracting the required images to build a training data set and a test data set; normalizing all data sets; training a neural network model with the training data set; and classifying the test data set with the trained neural network model. The invention introduces a neural network that extracts and fuses multi-level features, enhancing the representation capability of the model and improving the classification result.

Description

Neural network hyperspectral image classification method based on multi-level spatial spectrum fusion
Technical Field
The invention belongs to the technical field of image processing, and relates to a neural network hyperspectral image classification method based on multilevel spatio-spectral fusion.
Background
Hyperspectral images contain rich spectral and spatial information and support multiple tasks in the field of hyperspectral image analysis, such as image segmentation, object identification and anomaly detection. Hyperspectral image classification is one of the important basic research topics for hyperspectral images and an important means of analyzing them in depth. Methods for classifying hyperspectral images can generally be divided into two categories. The first comprises traditional methods based on hand-crafted feature extraction, such as the support vector machine, decision tree, minimum distance classification, maximum likelihood classification, spectral angle classification and mixed distance classification. The second comprises neural network methods based on automatic feature extraction, chiefly represented by the convolutional neural network, which has a strong capability for extracting features.
In the prior art, Bor-Chen Kuo et al. proposed a kernel-based feature selection method and applied it to a support vector machine classification model with a radial basis function (RBF) kernel. In this model, the separability of the RBF kernel's feature space is measured by a criterion that combines inter-class and intra-class information. Onuwa Okwuashi et al. proposed a deep support vector machine classification method that combines the characteristics of the support vector machine and the deep neural network, using individual support vector machines as the internal connections of a deep neural network. These methods train a classifier on manually designed features to capture the spectral information of a hyperspectral image; the spatial information is not effectively utilized, and no efficient extraction is performed on the spectral features themselves.
In recent years, deep learning has reached a remarkable level in the field of computer vision, and more and more deep learning models are applied to hyperspectral image processing. Shiqi Yu et al. proposed constructing a convolutional neural network with three 1 × 1 convolutional layers, followed by a global average pooling layer, to process the spatial and spectral information of the hyperspectral image. Jingxiang Yang et al. proposed a deep convolutional neural network with a two-branch structure to extract the joint spectral-spatial features of a hyperspectral image. The two branches of the network are dedicated to extracting features in the spectral and spatial domains, respectively. The extracted spectral and spatial features are then concatenated into a combined spectral-spatial feature and input to a fully connected layer for classification. These methods separate the processing of spatial and spectral information and do not consider their correlation in a hierarchical manner, which weakens the representation capability of the extracted fused features and in turn limits the classification accuracy and generalization capability of the resulting model.
Disclosure of Invention
The embodiment of the invention provides a neural network hyperspectral image classification method based on multi-level spatial spectrum fusion, aiming to solve the technical problem that the prior art neither comprehensively and effectively utilizes spatial and spectral information nor avoids manually designed feature extraction, resulting in low classification accuracy. The content of the invention is as follows:
the invention aims to provide a neural network hyperspectral image classification method based on multilevel space-spectrum fusion, which is technically characterized by comprising the following steps of:
step one, acquiring a hyperspectral image from a hyperspectral camera for preprocessing, extracting different types of hyperspectral images from the preprocessed hyperspectral image according to classification information, and dividing each type of hyperspectral image into a training data set and a test data set;
step two, normalizing the training data set and the test data set obtained in step one, so that the values in all data are normalized to the range [0, 1];
step three, establishing a neural network model, and training the neural network model by using the training data set subjected to normalization processing in the step two to obtain a trained neural network model; the neural network model comprises an input module, three spectral feature extraction modules, a spatial feature extraction module, two spatial pooling modules, two feature fusion extraction modules and a classification module;
and step four, inputting the test data set subjected to normalization processing in the step two into the trained neural network model in the step three to obtain a classification result.
In some embodiments of the present invention, the hyperspectral image preprocessing in step one of the above neural network hyperspectral image classification method based on multi-level spatial spectrum fusion is as follows: the red, green and blue channel images of the hyperspectral image are fused to obtain an RGB image, the remaining spectral channel images are kept unchanged, and the RGB image is manually labeled.
In some embodiments of the invention, the normalization in step two of the above neural network hyperspectral image classification method based on multi-level spatial spectrum fusion is performed according to the following formula:

$$X'_{(i,j)} = \frac{X_{(i,j)} - \min(X)}{\max(X) - \min(X)}$$

where $X_{(i,j)}$ is the value of the pixel in the i-th row and j-th column of the current image $X$, $\min(X)$ is the minimum pixel value in image $X$, and $\max(X)$ is the maximum pixel value in image $X$;
in some embodiments of the present invention, the neural network model training in step three of the above neural network hyperspectral image classification method based on multi-level spatial spectrum fusion includes the following steps:
step 1, inputting the training data set subjected to normalization processing in the step two into a first spectral feature extraction module and a spatial feature extraction module through an input module, and performing first spectral feature extraction and spatial feature extraction to obtain a feature map A and a feature map B;
step 2, inputting the feature map A obtained in the step 1 into a first space pooling module and a second spectral feature extraction module to perform first pooling operation and second spectral feature extraction to obtain a feature map C and a feature map D;
step 3, inputting the feature map B in the step 1 and the feature map C in the step 2 into a first feature fusion module for carrying out first feature fusion and further feature extraction to obtain a feature map E;
step 4, inputting the feature map D in the step 2 into a second spatial pooling module and a third spectral feature extraction module for second pooling operation and third spectral feature extraction to obtain a feature map F and a feature map G;
step 5, inputting the feature map F from step 4 and the feature map E from step 3 into a second feature fusion module for a second feature fusion and further feature extraction to obtain a feature map H;
and 6, inputting the feature map G in the step 4 and the feature map H in the step 5 into a classification module for processing to obtain a trained neural network model.
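Steps 1 to 6 above fix the wiring between the modules and the feature maps A to H. As a minimal sketch (the module names below are illustrative labels chosen here, not identifiers from the patent), the dataflow can be written down and checked as a small graph:

```python
# Sketch of the feature-map wiring in training steps 1-6. Each entry maps an
# output feature map to (producing module, list of input feature maps);
# "input" is the normalized training patch. Module internals are elided.
DATAFLOW = {
    "A": ("spectral_extract_1", ["input"]),
    "B": ("spatial_extract",    ["input"]),
    "C": ("spatial_pool_1",     ["A"]),
    "D": ("spectral_extract_2", ["A"]),
    "E": ("feature_fusion_1",   ["B", "C"]),
    "F": ("spatial_pool_2",     ["D"]),
    "G": ("spectral_extract_3", ["D"]),
    "H": ("feature_fusion_2",   ["F", "E"]),
    "out": ("classifier",       ["G", "H"]),
}

def ancestry(node, graph=DATAFLOW):
    """All feature maps that (transitively) feed the given node."""
    _, inputs = graph.get(node, (None, []))
    seen = set()
    for inp in inputs:
        seen.add(inp)
        seen |= ancestry(inp, graph)
    return seen

# The classifier ultimately sees every intermediate map - multi-level fusion.
print(sorted(ancestry("out")))
# ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'input']
```

Traversing the graph confirms the multi-level character of the design: the classification module depends, directly or through the fusion modules, on every intermediate spatial and spectral feature map.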
In some embodiments of the present invention, in step 1 of the neural network model training of the above neural network hyperspectral image classification method based on multi-level spatial spectrum fusion, the first spectral feature extraction module has the structure: convolution layer -> batch normalization -> activation function -> pooling layer -> convolution layer -> batch normalization -> activation function.
In some embodiments of the present invention, the first spatial pooling module in step 2 and the second spatial pooling module in step 4 of the neural network model training of the above method have the same structure: pooling layer -> convolution layer -> batch normalization -> activation function.
In some embodiments of the present invention, the spatial feature extraction module in step 1 of the neural network model training of the above method has the structure: convolution layer -> batch normalization -> activation function -> pooling layer.
In some embodiments of the present invention, the first feature fusion module in step 3 and the second feature fusion module in step 5 of the neural network model training of the above method have the same structure: feature fusion layer -> convolution layer -> batch normalization -> activation function.
In some embodiments of the present invention, the second spectral feature extraction module in step 2 and the third spectral feature extraction module in step 4 of the neural network model training of the above method have the same structure: pooling layer -> convolution layer -> batch normalization -> activation function.
In some embodiments of the present invention, the classification module in step 6 of the neural network model training of the above method has the structure: feature fusion layer -> fully connected layer -> batch normalization -> activation function -> fully connected layer -> Softmax layer.
Compared with the prior art, the neural network hyperspectral image classification method based on multi-level spatial spectrum fusion can achieve the following beneficial effects:
1. The method extracts and fuses the spatial and spectral information of the input hyperspectral image multiple times, so that the resulting network model has a stronger representation capability. This overcomes the shortcomings of existing methods in extracting and fusing the spatial and spectral features of hyperspectral images, and gives the method the advantage of extracting multi-level fused spectral-spatial feature information.
2. The neural network model in the method is a convolutional neural network. Using the convolution layers, batch normalization and activation functions in the model, the method realizes self-learning, automatic feature extraction and can effectively extract the features of hyperspectral images. Feature extraction is a fully automatic process that is continuously optimized during training, which avoids the bias that manually designed feature extraction introduces in traditional methods and gives the method greater universality.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the invention and not to limit the invention. In the drawings:
FIG. 1 is a flow chart of a neural network hyperspectral image classification method based on multi-level spatial spectrum fusion according to the invention;
FIG. 2 is a flow chart of neural network model training of the present invention;
FIG. 3 shows the classification results for fungal types on wood obtained by the neural network hyperspectral image classification method based on multi-level spatial spectrum fusion.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below with reference to the specific embodiments of the present invention and the accompanying drawings. It is to be understood that the described embodiments are merely exemplary of the invention, and not restrictive of the full scope of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The technical solutions provided by the embodiments of the present invention are described in detail below with reference to the accompanying drawings.
The neural network hyperspectral image classification method based on multi-level spatial spectrum fusion shown in FIG. 1 comprises the following steps:
firstly, collecting and acquiring a hyperspectral image from a hyperspectral camera for preprocessing, preferably, the preprocessing method of the hyperspectral image comprises the following steps: and fusing the red channel image, the green channel image and the blue channel image of the hyperspectral image to obtain an RGB image, keeping the rest spectral channel images unchanged, and manually marking the RGB image. 4 types of hyperspectral images (four types shown in figure 3 are: clearwood, softrot, brown stain and Blue stain) are extracted from the preprocessed hyperspectral images according to classification information, each type of hyperspectral image is divided into a training data set and a test data set according to extracted target image data, and preferably each type of hyperspectral image is cut into data blocks of 32 multiplied by 320.
Step two, normalizing the training data set and the test data set obtained in step one, so that the values in all data are normalized to the range [0, 1]. Preferably, the normalization formula of the present invention is:

$$X'_{(i,j)} = \frac{X_{(i,j)} - \min(X)}{\max(X) - \min(X)}$$

where $X_{(i,j)}$ is the value of the pixel in the i-th row and j-th column of the current image $X$, $\min(X)$ is the minimum pixel value in image $X$, and $\max(X)$ is the maximum pixel value in image $X$.
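The per-image min-max normalization above can be sketched in a few lines of pure Python (an illustration, not the patent's implementation; in practice it would be vectorized over the full data block):

```python
def minmax_normalize(image):
    """Scale every pixel of a 2-D image (a list of rows) into [0, 1]
    using the per-image min-max formula X' = (X - min) / (max - min)."""
    flat = [v for row in image for v in row]
    lo, hi = min(flat), max(flat)
    span = hi - lo if hi != lo else 1.0  # guard against constant images
    return [[(v - lo) / span for v in row] for row in image]

band = [[10.0, 20.0], [30.0, 50.0]]
print(minmax_normalize(band))  # [[0.0, 0.25], [0.5, 1.0]]
```

The guard for constant images is an added safety detail not discussed in the patent; without it a flat band would divide by zero.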
Step three, establishing a neural network model, and training the neural network model by using the training data set subjected to normalization processing in the step two to obtain a trained neural network model; the neural network model comprises an input module, three spectral feature extraction modules, a spatial feature extraction module, two spatial pooling modules, two feature fusion extraction modules and a classification module, and the training steps of the neural network model disclosed by the invention are as follows as shown in figure 2:
and step 1, inputting the training data set subjected to normalization processing in the step two into a first spectral feature extraction module and a spatial feature extraction module through an input module, and performing first spectral feature extraction and spatial feature extraction to obtain a feature map A and a feature map B. Preferably, the first spectral feature extraction module structure of the present invention is: convolution layer- > batch processing- > activation function- > pooling layer- > convolution layer- > batch processing- > activation function; the convolution kernels of the convolution layers are convolution kernels with the size of 1 x 1, the activation functions are ReLU functions, and the pooling layer is the maximum pooling of 1 x 4; the spatial feature extraction module structure of the invention is as follows: convolution layer- > batch processing- > activation function- > pooling layer; the convolution kernel size of the convolution layer is 3 multiplied by 320 convolution kernels, the activation functions are all ReLU functions, and the pooling layers are all maximal pooling of 2 multiplied by 1.
Step 2: feature map A from step 1 is input into the first spatial pooling module and the second spectral feature extraction module for the first pooling operation and the second spectral feature extraction, giving feature map C and feature map D. Preferably, the first spatial pooling module of the present invention has the structure: pooling layer -> convolution layer -> batch normalization -> activation function; the downsampling sizes of the pooling layers are 2 × 2 × 1 and 4 × 4 × 1 respectively, the convolution kernels of the convolution layers are 3 × 3 × 80 and 3 × 3 × 20 respectively, and the activation functions are ReLU functions. The second spectral feature extraction module of the present invention preferably has the structure: pooling layer -> convolution layer -> batch normalization -> activation function; the convolution kernels of the convolution layers are 1 × 3 and 32 × 5 respectively, the activation functions are ReLU functions, and the pooling layers are 1 × 4 max pooling.
Step 3: feature map B from step 1 and feature map C from step 2 are input into the first feature fusion module for the first feature fusion and further feature extraction, giving feature map E. Preferably, the first feature fusion module has the structure: feature fusion layer -> convolution layer -> batch normalization -> activation function; the convolution kernels of the convolution layers are 3 × 1 and 8 × 1 respectively, and the activation functions are ReLU functions.
step 4, inputting the feature map D in the step 2 into a second spatial pooling module and a third spectral feature extraction module for second pooling operation and third spectral feature extraction to obtain a feature map F and a feature map G, wherein the second spatial pooling module has the same structure as the first spatial pooling module in the step 2; the third spectral feature extraction module of the present invention has the same structure as the second spectral feature extraction module in step 2.
Step 5: feature map F from step 4 and feature map E from step 3 are input into the second feature fusion module for the second feature fusion and further feature extraction, giving feature map H. The second feature fusion module has the same structure as the first feature fusion module in step 3.
Step 6: feature map G from step 4 and feature map H from step 5 are input into the classification module for processing to obtain the trained neural network model. Preferably, the classification module has the structure: feature fusion layer -> fully connected layer -> batch normalization -> activation function -> fully connected layer -> Softmax layer; the outputs of the fully connected layers are 128 and 4 respectively, i.e. the extracted features are fused and then passed through the two fully connected layers and the Softmax layer.
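The classification head in step 6 can be sketched in pure Python. This is an illustration under stated assumptions, not the patent's code: the feature fusion layer is taken to be concatenation (the patent does not specify the fusion operation), batch normalization and biases are omitted, and toy dimensions stand in for the 128 hidden units and 4 classes:

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    s = sum(exps)
    return [e / s for e in exps]

def dense(x, weights):
    """Fully connected layer: one output per weight row (bias omitted)."""
    return [sum(w * v for w, v in zip(row, x)) for row in weights]

def classify(feature_g, feature_h, w_hidden, w_out):
    fused = feature_g + feature_h                           # fusion by concatenation (assumed)
    hidden = [max(0.0, v) for v in dense(fused, w_hidden)]  # FC -> ReLU
    return softmax(dense(hidden, w_out))                    # FC -> Softmax, one prob per class

# Toy dimensions: fused length 4, hidden width 3, 4 classes
# (the patent uses 128 hidden units and 4 output classes).
g, h = [0.2, 0.5], [0.1, 0.9]
w_hidden = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 1]]
w_out = [[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]]
probs = classify(g, h, w_hidden, w_out)
print(len(probs), round(sum(probs), 6))  # 4 1.0
```

The Softmax output gives one probability per class, so the predicted fungal type is simply the class with the largest probability.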
And step four, inputting the test data set subjected to normalization processing in the step two into the trained neural network model in the step three to obtain a classification result.
Simulation experiment
The effect of the invention can be further illustrated by the following simulation experiment:
1. Simulation conditions:
CPU: Intel Core i9-9700K @ 4.9 GHz
Graphics card: GeForce 2080 Ti / 11 GB
Memory: 32 GB
Software platform: MATLAB
2. Simulation content and results:
the experiment is carried out under the simulation condition by using the method of the invention, namely, 10% of data blocks are randomly selected from each category of the hyperspectral data as training samples and the data blocks are left as test samples to obtain the classification result shown in figure 3, wherein group channel represents a true value, clear wood represents an unaffected wood area, soft rot represents a wood area with Soft rot, brown stain represents a wood area with Brown stain, and Blue stain represents a wood area with Blue stain.
Table 1. Classification accuracy obtained in the simulation, compared with the prior art

Method                                      Accuracy
Method of the invention                     93.41%
Support vector machine-based method         90.27%
Convolutional neural network-based method   91.47%
As can be seen from Table 1, on the wood hyperspectral data the method of the present invention achieves an accuracy of 93.41%, an improvement in classification accuracy over both the existing traditional method and the existing neural network method.
In conclusion, the method introduces multi-level extraction and fusion of spatial and spectral information, effectively improves the representation of the image, enables the model to learn more discriminative hyperspectral image features, and obtains more accurate classification results than the prior art.
The above description is only an example of the present invention, and is not intended to limit the present invention. Various modifications and alterations to this invention will become apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention should be included in the scope of the claims of the present invention.

Claims (10)

1. A neural network hyperspectral image classification method based on multilevel space-spectrum fusion is characterized by comprising the following steps:
step one, acquiring a hyperspectral image from a hyperspectral camera for preprocessing, extracting different types of hyperspectral images from the preprocessed hyperspectral image according to classification information, and dividing each type of hyperspectral image into a training data set and a test data set;
step two, normalizing the training data set and the test data set obtained in step one, so that the values in all data are normalized to the range [0, 1];
step three, establishing a neural network model, and training the neural network model by using the training data set subjected to normalization processing in the step two to obtain a trained neural network model; the neural network model comprises an input module, three spectral feature extraction modules, a spatial feature extraction module, two spatial pooling modules, two feature fusion extraction modules and a classification module;
and step four, inputting the test data set subjected to normalization processing in the step two into the trained neural network model in the step three to obtain a classification result.
2. The neural network hyperspectral image classification method based on multi-level spatial spectrum fusion according to claim 1, characterized in that the hyperspectral image preprocessing in step one is as follows: the red, green and blue channel images of the hyperspectral image are fused to obtain an RGB image, the remaining spectral channel images are kept unchanged, and the RGB image is manually labeled.
3. The neural network hyperspectral image classification method based on multi-level spatial spectrum fusion according to claim 1, wherein the normalization in step two is performed according to the following formula:

$$X'_{(i,j)} = \frac{X_{(i,j)} - \min(X)}{\max(X) - \min(X)}$$

where $X_{(i,j)}$ is the value of the pixel in the i-th row and j-th column of the current image $X$, $\min(X)$ is the minimum pixel value in image $X$, and $\max(X)$ is the maximum pixel value in image $X$.
4. The method for classifying the hyperspectral images of the neural network based on multi-level spatial-spectral fusion according to claim 1, wherein the step of training the neural network model in the third step is as follows:
step 1, inputting the training data set subjected to normalization processing in the step two into a first spectral feature extraction module and a spatial feature extraction module through an input module, and performing first spectral feature extraction and spatial feature extraction to obtain a feature map A and a feature map B;
step 2, inputting the feature map A obtained in step 1 into a first spatial pooling module and a second spectral feature extraction module to perform a first pooling operation and a second spectral feature extraction to obtain a feature map C and a feature map D;
step 3, inputting the feature map B in the step 1 and the feature map C in the step 2 into a first feature fusion module to perform first feature fusion and further feature extraction to obtain a feature map E;
step 4, inputting the feature map D in the step 2 into a second spatial pooling module and a third spectral feature extraction module for second pooling operation and third spectral feature extraction to obtain a feature map F and a feature map G;
step 5, inputting the feature map F from step 4 and the feature map E from step 3 into a second feature fusion module to perform a second feature fusion and further feature extraction to obtain a feature map H;
and 6, inputting the feature map G in the step 4 and the feature map H in the step 5 into a classification module for processing to obtain a trained neural network model.
5. The neural network hyperspectral image classification method based on multi-level spatial spectrum fusion according to claim 4, wherein the first spectral feature extraction module in step 1 has the structure: convolution layer -> batch normalization -> activation function -> pooling layer -> convolution layer -> batch normalization -> activation function.
6. The neural network hyperspectral image classification method based on multi-level spatial spectrum fusion according to claim 4, wherein the first spatial pooling module in step 2 and the second spatial pooling module in step 4 have the same structure, both being: pooling layer -> convolution layer -> batch normalization -> activation function.
7. The method for classifying hyperspectral images with a neural network based on multi-level spatial-spectral fusion according to claim 4, wherein the spatial feature extraction module in step 1 has the structure: convolutional layer -> batch normalization -> activation function -> pooling layer.
8. The method for classifying hyperspectral images with a neural network based on multi-level spatial-spectral fusion according to claim 4, wherein the first feature fusion module in step 3 and the second feature fusion module in step 5 have the same structure, namely: feature fusion layer -> convolutional layer -> batch normalization -> activation function.
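A sketch of this fusion module follows. The claim only names a "feature fusion layer"; channel-wise concatenation is assumed here as the fusion operation, with the subsequent 1x1 convolution restoring the channel count.

```python
import torch
import torch.nn as nn

class FeatureFusionModule(nn.Module):
    """feature fusion layer -> conv -> batch norm -> activation, as in
    claim 8. Concatenation as the fusion operation is an assumption."""
    def __init__(self, ch=32):
        super().__init__()
        self.refine = nn.Sequential(
            nn.Conv2d(2 * ch, ch, kernel_size=1),  # 1x1 conv maps 2*ch channels back to ch
            nn.BatchNorm2d(ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, b, c):
        # fuse the two branches along the channel axis, then refine
        return self.refine(torch.cat([b, c], dim=1))

e = FeatureFusionModule()(torch.randn(2, 32, 9, 9), torch.randn(2, 32, 9, 9))  # feature map E
```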
9. The method for classifying hyperspectral images with a neural network based on multi-level spatial-spectral fusion according to claim 4, wherein the second spectral feature extraction module in step 2 and the third spectral feature extraction module in step 4 have the same structure: pooling layer -> convolutional layer -> batch normalization -> activation function.
10. The method for classifying hyperspectral images with a neural network based on multi-level spatial-spectral fusion according to claim 1, wherein the classification module in step 6 has the structure: feature fusion layer -> fully connected layer -> batch normalization -> activation function -> fully connected layer -> Softmax layer.
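The classification head can be sketched as below. The flatten step bridging the fused feature maps and the first fully connected layer, and all layer sizes, are assumptions; the claim fixes only the layer sequence.

```python
import torch
import torch.nn as nn

class ClassificationModule(nn.Module):
    """feature fusion -> fully connected -> batch norm -> activation ->
    fully connected -> softmax, as in claim 10. Sizes are illustrative."""
    def __init__(self, ch=32, spatial=9, hidden=128, classes=9):
        super().__init__()
        self.flatten = nn.Flatten()
        self.head = nn.Sequential(
            nn.Linear(2 * ch * spatial * spatial, hidden),
            nn.BatchNorm1d(hidden),
            nn.ReLU(inplace=True),
            nn.Linear(hidden, classes),
            nn.Softmax(dim=1),  # per-pixel class probabilities
        )

    def forward(self, g, h):
        fused = torch.cat([g, h], dim=1)  # feature fusion layer
        return self.head(self.flatten(fused))

m = ClassificationModule().eval()  # eval mode so BatchNorm1d uses running stats
probs = m(torch.randn(2, 32, 9, 9), torch.randn(2, 32, 9, 9))
```

In practice the explicit Softmax layer is often dropped during training in favour of a cross-entropy loss on the raw logits, which is numerically more stable.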
CN202211225424.9A 2022-10-09 2022-10-09 Neural network hyperspectral image classification method based on multi-level spatial spectrum fusion Pending CN115588136A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211225424.9A CN115588136A (en) 2022-10-09 2022-10-09 Neural network hyperspectral image classification method based on multi-level spatial spectrum fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211225424.9A CN115588136A (en) 2022-10-09 2022-10-09 Neural network hyperspectral image classification method based on multi-level spatial spectrum fusion

Publications (1)

Publication Number Publication Date
CN115588136A true CN115588136A (en) 2023-01-10

Family

ID=84778193

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211225424.9A Pending CN115588136A (en) 2022-10-09 2022-10-09 Neural network hyperspectral image classification method based on multi-level spatial spectrum fusion

Country Status (1)

Country Link
CN (1) CN115588136A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117372789A (en) * 2023-12-07 2024-01-09 北京观微科技有限公司 Image classification method and image classification device
CN117372789B (en) * 2023-12-07 2024-03-08 北京观微科技有限公司 Image classification method and image classification device

Similar Documents

Publication Publication Date Title
CN108615010B (en) Facial expression recognition method based on parallel convolution neural network feature map fusion
CN107564025B (en) Electric power equipment infrared image semantic segmentation method based on deep neural network
CN108256482B (en) Face age estimation method for distributed learning based on convolutional neural network
EP3872650A1 (en) Method for footprint image retrieval
CN111414862B (en) Expression recognition method based on neural network fusion key point angle change
CN111680706B (en) Dual-channel output contour detection method based on coding and decoding structure
CN110059586B (en) Iris positioning and segmenting system based on cavity residual error attention structure
CN109784197B (en) Pedestrian re-identification method based on hole convolution and attention mechanics learning mechanism
WO2022083202A1 (en) Fine water body extraction method based on u-net neural network
CN111695407B (en) Gender identification method, system, storage medium and terminal based on multispectral fusion
CN104156734A (en) Fully-autonomous on-line study method based on random fern classifier
CN106127159A (en) A kind of gender identification method based on convolutional neural networks
CN111105389B (en) Road surface crack detection method integrating Gabor filter and convolutional neural network
CN104598885A (en) Method for detecting and locating text sign in street view image
CN108447048B (en) Convolutional neural network image feature processing method based on attention layer
CN109815923B (en) Needle mushroom head sorting and identifying method based on LBP (local binary pattern) features and deep learning
CN109344845A (en) A kind of feature matching method based on Triplet deep neural network structure
CN116012653A (en) Method and system for classifying hyperspectral images of attention residual unit neural network
CN115588136A (en) Neural network hyperspectral image classification method based on multi-level spatial spectrum fusion
CN108363962B (en) Face detection method and system based on multi-level feature deep learning
CN110348448A (en) A kind of license plate character recognition method based on convolutional neural networks
CN110008912B (en) Social platform matching method and system based on plant identification
CN111368776B (en) High-resolution remote sensing image classification method based on deep ensemble learning
CN103617417A (en) Automatic plant identification method and system
CN111191510B (en) Relation network-based remote sensing image small sample target identification method in complex scene

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination