CN113723469A - Interpretable hyperspectral image classification method and device based on space-spectrum combined network - Google Patents


Info

Publication number
CN113723469A
Authority
CN
China
Prior art keywords
hyperspectral image
hyperspectral
network
classification method
image data
Prior art date
Legal status
Pending
Application number
CN202110907624.1A
Other languages
Chinese (zh)
Inventor
谢伟
刘晋铭
梅勇
周亭
尹青
俞煌
邢岩
Current Assignee
National Academy of Defense Engineering of PLA Academy of Military Science
Original Assignee
National Academy of Defense Engineering of PLA Academy of Military Science
Priority date
Filing date
Publication date
Application filed by National Academy of Defense Engineering of PLA Academy of Military Science
Priority to CN202110907624.1A
Publication of CN113723469A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/213 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06F18/2135 Feature extraction based on approximation criteria, e.g. principal component analysis
    • G06F18/23 Clustering techniques
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G06N3/048 Activation functions
    • G06N3/08 Learning methods


Abstract

The application discloses an interpretable hyperspectral image classification method and device based on a space-spectrum combined network. The method comprises the following steps: acquiring the hyperspectral image data to be classified; extracting the spectral features of the hyperspectral image data with a SincNet network and the spatial features with a DS-CNN network; and superposing the spectral and spatial features, inputting them into a fully connected layer for feature fusion, and completing classification. The method can alleviate the problem of large intra-class differences in hyperspectral image classification and, to a certain extent, meets interpretability requirements while obtaining a high-accuracy hyperspectral image classification result.

Description

Interpretable hyperspectral image classification method and device based on space-spectrum combined network
Technical Field
The application relates to the technical field of computer image processing, in particular to a method and a device for interpretable hyperspectral image classification based on a space-spectrum combined network.
Background
Hyperspectral remote sensing image classification is an important link in hyperspectral remote sensing theory and application research. It plays a fundamental role in hyperspectral image analysis and can be effectively applied to environmental monitoring, urban planning, geological rock and mineral identification, fine vegetation classification, military target detection, and other fields, so it has great research value.
At present, popular deep learning methods can classify hyperspectral images, but they easily overfit on categories with large intra-class differences, so classification accuracy is difficult to improve further.
Disclosure of Invention
In view of this, an interpretable hyperspectral image classification method and device based on a space-spectrum combined network are provided to improve the accuracy of hyperspectral image classification.
According to a first aspect, an embodiment of the present application provides a hyperspectral image classification method, including:
acquiring hyperspectral image data to be detected;
extracting the spectral characteristics of the hyperspectral image data by using a SincNet network, and extracting the spatial characteristics of the hyperspectral image data by using a DS-CNN network;
superposing the spectral features and the spatial features and inputting them into a fully connected layer for feature fusion; and
and classifying the data after feature fusion.
With reference to the first aspect, in an optional implementation manner, the method further includes performing first preprocessing on the obtained hyperspectral image data, and the SincNet network extracts the spectral features based on the first-preprocessed hyperspectral image data.
Further, the first pre-processing comprises: automatically clustering the original hyperspectral image to obtain a new true value image; the SincNet network extracts the spectral features based on the original hyperspectral image and the new truth map.
The automatic clustering adopts a density-peak-based automatic clustering algorithm.
With reference to the first aspect, in an optional implementation manner, the method further includes performing second preprocessing on the obtained hyperspectral image data, and the DS-CNN network extracts the spatial feature based on the first and second preprocessed hyperspectral image data.
Further, the second pre-processing comprises: carrying out PCA (principal component analysis) dimension reduction on the original hyperspectral image data, and cutting the original hyperspectral image data into image blocks; and the DS-CNN network extracts the spatial features based on the image blocks and a new truth diagram obtained by the first preprocessing.
With reference to the first aspect, in an optional implementation manner, the first layer of the SincNet network performs feature extraction using a sinc function filter as a one-dimensional convolution kernel.
With reference to the first aspect, in an optional implementation manner, the DS-CNN network extracts image spatial information using six two-dimensional convolution layers, where the first layer is a 3 × 3 square convolution kernel, the second and fourth layers are 1 × 5 horizontal strip convolution kernels, the third and fifth layers are 5 × 1 vertical strip convolution kernels, and the last layer is a 2 × 2 square convolution kernel.
According to a second aspect, an embodiment of the present application provides a hyperspectral image classification apparatus, including:
the image acquisition equipment is used for acquiring a hyperspectral image to be detected;
a memory having computer instructions stored therein;
and the processor is in data connection with the image acquisition equipment and the memory, and executes the computer instructions so as to execute the hyperspectral image classification method according to any one of the technical schemes and automatically classify the hyperspectral images.
According to a third aspect, embodiments of the present application provide a computer-readable storage medium storing computer instructions, which when executed by a processor, implement the hyperspectral image classification method according to any of the above technical solutions.
The application provides a space-spectrum combined network model for hyperspectral image classification, improving the accuracy of hyperspectral image classification.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 is a schematic flow chart of a hyperspectral image classification method according to a first embodiment of the application;
FIG. 2 is a schematic diagram of an exemplary reduced-dimension projection of the PCA algorithm.
FIG. 3 is a schematic diagram of a spatial-spectral union network-based interpretable hyperspectral image classification network model according to an embodiment of the application;
FIG. 4 is a schematic diagram of a SincNet network model according to an embodiment of the present application;
FIG. 5 is a diagram of a DS-CNN network model according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a one-dimensional convolution process;
FIG. 7 is a schematic diagram of a two-dimensional convolution process;
FIG. 8 is an example of hyperspectral image classification using the method provided by the embodiment of the application; wherein, (a) is a hyperspectral image, (b) is a truth value diagram, and (c) is a classified image;
FIG. 9 is a schematic flow chart diagram of a method for training a classification model of the present application;
FIG. 10 is a schematic structural diagram of a hyperspectral image classification device according to another embodiment of the application.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
Referring to fig. 1, fig. 1 shows a schematic flow of an interpretable hyperspectral image classification method based on a spatial-spectral combination network according to a first embodiment of the present application. As shown in fig. 1, the classification method is used for automatically classifying the ground object types in the hyperspectral image, and specifically includes the following steps:
101, acquiring original hyperspectral image data to be detected;
further, the application is directed at preprocessing the hyperspectral image data, and the preprocessing specifically comprises the following steps:
firstly, automatically clustering the obtained original hyperspectral images, and secondarily dividing the categories with larger intra-category differences in the original hyperspectral images to obtain a new truth-value chart.
Secondly, the original hyperspectral image is compressed and dimensionality reduced, redundant spectral information is removed, so that the calculated amount is reduced, and the spatial information extraction process is accelerated.
And then, carrying out mirror image expansion on the image subjected to dimension reduction, and cutting the image into image blocks with proper sizes by taking each pixel point as a center. That is, if the original image size is m × n, the number of cropped image blocks is m × n.
The automatic clustering method selects an automatic clustering algorithm based on density peak values. The hyperspectral dimension reduction method adopts a Principal Component Analysis (PCA) algorithm.
The density-peak clustering algorithm determines cluster membership by calculating, for each point, the density of the points around it and its distance to surrounding points. The algorithm assumes that, for the centre point of each class, the surrounding points are denser than the other points of the class, and that the centre point is far enough from the centre points of other classes. The specific formula for the local density is as follows.
ρ_i = Σ_{j≠i} χ(d_ij − d_c)
where χ(x) = 1 if x < 0 and χ(x) = 0 otherwise, d_c is the cutoff distance, and d_ij is the distance between points i and j. ρ_i is therefore the number of points whose distance from point i is less than d_c.
The specific formula for calculating the distance is as follows.
δ_i = min_{j: ρ_j > ρ_i} d_ij, and for the point of maximum density, δ_i = max_j d_ij
That is, when a point does not have the maximum density, its distance δ_i is set to the distance to the nearest point of higher density; when a point has the maximum density, it is a candidate centre point, and its δ_i is set to the distance to the farthest point.
With density ρ_i as the abscissa and distance δ_i as the ordinate, a decision graph is drawn; points near the upper-right corner have both large local density and large distance and can be regarded as cluster centres. The algorithm can divide data into appropriate clusters without iterative computation or a preset number of centre points, and it involves few empirical components.
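The two density-peak quantities above can be computed directly. The sketch below is illustrative only (the toy points and cutoff distance d_c are assumptions, not the patent's data):

```python
# Density-peak quantities: for each point i,
#   rho_i  = number of points within the cutoff distance d_c
#   delta_i = distance to the nearest point of higher density
#             (for the densest point: distance to the farthest point)
import math

points = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1),   # dense cluster
          (5.0, 5.0), (5.1, 5.0),               # second, looser cluster
          (9.0, 0.0)]                            # isolated point
d_c = 0.5

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

n = len(points)
rho = [sum(1 for j in range(n) if j != i and dist(points[i], points[j]) < d_c)
       for i in range(n)]

delta = []
for i in range(n):
    # distances to all points of strictly higher density
    higher = [dist(points[i], points[j]) for j in range(n) if rho[j] > rho[i]]
    if higher:                       # not a maximum-density point
        delta.append(min(higher))
    else:                            # maximum-density point: farthest distance
        delta.append(max(dist(points[i], points[j]) for j in range(n) if j != i))

# Cluster centres are the points with both large rho and large delta
# (upper-right corner of the decision graph described above).
```

The decision-graph reading then amounts to picking points whose ρ_i · δ_i product stands out.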
A pseudo-truth map is obtained using the density-peak-based automatic clustering method, errors are calculated against this pseudo-truth map, and the network weights are updated accordingly. Because the differences within each category of the pseudo-truth map are small, the network does not overfit, which alleviates the large intra-class difference problem common in hyperspectral image classification.
The hyperspectral image dimension reduction method is the PCA algorithm, and the first three principal components are retained as the dimension-reduced data. FIG. 2 shows an exemplary dimension-reduction projection diagram of the PCA algorithm.
PCA (Principal Component Analysis) is one of the most widely used data dimension reduction algorithms. Its main idea is to map n-dimensional features onto k completely new, mutually orthogonal features, also called principal components, which are reconstructed from the original n-dimensional features.
The principle of the PCA method is to sequentially find a set of mutually orthogonal axes in the original space, where the choice of each new axis depends closely on the data itself. The first new axis is the direction of largest variance in the original data; the second is the direction orthogonal to the first with the largest remaining variance; the third is the direction orthogonal to the first two with the largest remaining variance; and so on, yielding n axes.
Examining the axes obtained this way, one finds that most of the variance is contained in the first k axes, while the variance along the remaining axes is almost zero. The remaining axes can therefore be ignored, keeping only the first k axes with the most variance. This retains the feature dimensions containing most of the variance and discards those with near-zero variance, achieving dimension reduction of the data features.
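The PCA procedure described above can be sketched with an eigendecomposition of the covariance matrix. The toy data below (random mixture with three dominant directions) is an assumption for illustration:

```python
# Minimal PCA sketch: project n-dimensional "pixels" onto the k axes of
# largest variance, as described in the text.
import numpy as np

rng = np.random.default_rng(0)
# 200 "pixels" with 10 "bands"; only 3 latent directions carry variance.
latent = rng.normal(size=(200, 3)) * np.array([5.0, 2.0, 1.0])
mixing = rng.normal(size=(3, 10))
X = latent @ mixing + 0.01 * rng.normal(size=(200, 10))

Xc = X - X.mean(axis=0)                      # centre the data
cov = np.cov(Xc, rowvar=False)               # 10 x 10 covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)       # eigenvalues in ascending order
order = np.argsort(eigvals)[::-1]            # sort descending by variance

k = 3                                        # keep first three components
components = eigvecs[:, order[:k]]           # the k principal axes
X_reduced = Xc @ components                  # 200 x 3 dimension-reduced data

# Fraction of total variance retained by the first k axes
explained = eigvals[order[:k]].sum() / eigvals.sum()
```

With three dominant latent directions, the first three axes capture nearly all the variance, matching the observation above that the remaining axes can be ignored.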
And 102, processing the hyperspectral image data by using a hyperspectral image classification model, and outputting a classification result.
In step 102, the hyperspectral image classification model performs the following operations on the hyperspectral image data:
step 1021, extracting the spectral characteristics of the hyperspectral image data by using a SincNet network, and extracting the spatial characteristics of the hyperspectral image data by using a DS-CNN network;
Step 1022, superposing the spectral features and the spatial features, inputting them into the fully connected layer for feature fusion, and completing classification.
FIG. 3 illustrates a framework structure of a hyperspectral classification model for automatically classifying hyperspectral images according to an embodiment of the application.
As shown in fig. 3, for the preprocessed hyperspectral image data, the model extracts spectral features with a SincNet network and spatial features with a DS-CNN network, then superimposes the extracted spectral and spatial features and inputs them into a fully connected layer for feature fusion, obtaining a pre-classification result.
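The superposition-and-fusion step can be sketched as concatenating the two feature vectors and applying a fully connected layer. All shapes, weights, and the class count below are illustrative assumptions, not the patent's values:

```python
# Fusion sketch: concatenate the 1-D spectral feature vector with the
# flattened spatial feature vector, then apply a fully connected layer.
import numpy as np

rng = np.random.default_rng(42)
spectral_feat = rng.normal(size=(60,))       # e.g. output of the SincNet branch
spatial_feat = rng.normal(size=(128,))       # e.g. flattened DS-CNN output

fused_in = np.concatenate([spectral_feat, spatial_feat])   # 188-dim vector

n_classes = 16                               # assumed number of land-cover classes
W = rng.normal(size=(n_classes, fused_in.size)) * 0.1      # FC layer weights
b = np.zeros(n_classes)
logits = W @ fused_in + b

# Softmax turns the fused logits into per-class probabilities.
probs = np.exp(logits - logits.max())
probs /= probs.sum()
predicted_class = int(np.argmax(probs))
```

In training, W and b would be learned jointly with both branches by backpropagation.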
The model uses the SincNet network to extract spectral features from the original image data and the new truth map obtained by the automatic clustering preprocessing.
The model uses the DS-CNN network to extract spatial features from the hyperspectral image data after dimension-reduction and cropping preprocessing, together with the new truth map obtained by the automatic clustering preprocessing.
Fig. 4 and 5 show schematic structural diagrams of two network models in a classification model according to an embodiment of the present application.
As shown in fig. 4, the SincNet network is a convolutional neural network built for interpretability by using a sinc function filter as a convolution layer. DS-CNN (Double-Strip CNN) is a double-strip convolutional neural network whose model contains strip convolution layers in two different directions.
The SincNet network consists mainly of three convolution layers. The first is a one-dimensional convolution layer whose kernel is a sinc function filter, with the filter's highest and lowest frequencies as the trainable parameters. The latter two are standard one-dimensional convolution layers, and a max pooling layer and batch normalization are added after each convolution layer.
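The sinc-based first layer can be sketched as follows. A band-pass kernel is the difference of two low-pass sinc filters, so only the two cut-off frequencies are learnable; the normalized frequencies, kernel length, and Hamming window below are assumptions, not the patent's exact parameterisation:

```python
# SincNet-style band-pass kernel: difference of two low-pass sinc filters
# with learnable low/high cut-offs f1 < f2 (normalised frequencies assumed).
import numpy as np

def sinc_bandpass(f1, f2, length=65):
    # time axis centred on zero, in samples
    n = np.arange(length) - (length - 1) / 2
    # np.sinc(x) = sin(pi x) / (pi x); subtracting two low-pass filters
    # leaves the band between f1 and f2
    h = 2 * f2 * np.sinc(2 * f2 * n) - 2 * f1 * np.sinc(2 * f1 * n)
    window = np.hamming(length)          # smooth the truncation edges
    return h * window

kernel = sinc_bandpass(f1=0.05, f2=0.20)

# The kernel is then used as a 1-D convolution over a spectral curve.
spectrum = np.sin(np.linspace(0, 20, 200))     # toy 1-D "spectral" signal
response = np.convolve(spectrum, kernel, mode='valid')
```

Because the kernel is fully determined by f1 and f2, each filter has just two parameters, which is the source of the speed and interpretability benefits claimed below.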
According to the method, the SincNet network is used for spectral feature extraction, so that the number of network layers can be reduced, the operation speed is increased, the interpretability of the network is improved, and the problem of 'black boxes' commonly existing in a deep network method is reduced.
As shown in fig. 5, the DS-CNN network consists mainly of three convolution modules. The initial module consists of a 3 × 3 standard two-dimensional convolution layer, a Dropout layer, and a ReLU activation layer. The middle module consists of four strip convolution layers: layers one and three are 1 × 5 horizontal strip convolution kernels, layers two and four are 5 × 1 vertical strip convolution kernels, and each layer is followed by a Dropout layer and a ReLU activation layer. The last module consists of a max pooling layer, a 2 × 2 standard two-dimensional convolution layer, a Dropout layer, and a ReLU activation layer. Finally, the two-dimensional output is flattened into a one-dimensional vector so it can be combined with the one-dimensional spectral features.
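A quick shape walk-through shows how this stack shrinks a patch. The kernel sizes come from the text; the input patch size, 'valid' padding, stride 1, and a stride-2 pooling are assumptions for illustration only:

```python
# Back-of-envelope output-shape check for the DS-CNN layer stack
# (assumed: 'valid' convolutions, stride 1, 2x2 pooling with stride 2).
def conv_out(h, w, kh, kw):
    """Output size of a valid convolution with a kh x kw kernel."""
    return h - kh + 1, w - kw + 1

h, w = 15, 15                     # assumed input patch size
h, w = conv_out(h, w, 3, 3)       # initial 3x3 square conv   -> 13 x 13
h, w = conv_out(h, w, 1, 5)       # 1x5 horizontal strip conv -> 13 x 9
h, w = conv_out(h, w, 5, 1)       # 5x1 vertical strip conv   -> 9 x 9
h, w = conv_out(h, w, 1, 5)       # 1x5 horizontal strip conv -> 9 x 5
h, w = conv_out(h, w, 5, 1)       # 5x1 vertical strip conv   -> 5 x 5
h, w = h // 2, w // 2             # 2x2 max pooling, stride 2 -> 2 x 2
h, w = conv_out(h, w, 2, 2)       # final 2x2 square conv     -> 1 x 1
```

Under these assumptions a 15 × 15 patch collapses to a single spatial position per channel, which is then flattened for fusion with the spectral branch.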
In the classification model constructed according to the embodiment of the application, the main operation of each module is convolution. As shown in FIG. 6, in one-dimensional convolution a 1 × 8 input vector with values a_i is processed by sliding a 1 × 5 convolution kernel k along it, multiplying and accumulating the corresponding values; the convolution result is
b_i = Σ_{m=1}^{5} a_{i+m−1} · k_m, i = 1, 2, 3, 4
As shown in FIG. 7, two-dimensional convolution is similar in principle: over a 5 × 5 region of the previous layer's feature map, with pixel values a_ij, a 3 × 3 convolution operator k slides across the region, multiplying and accumulating the corresponding values, producing the next-layer feature map
b_ij = Σ_{m=1}^{3} Σ_{n=1}^{3} a_{i+m−1, j+n−1} · k_{mn}
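The two multiply-and-add operations above can be written out directly (toy inputs and kernels assumed; 'valid' sliding, stride 1):

```python
# Direct implementations of the 1-D and 2-D convolutions described above.
def conv1d(a, k):
    """Slide kernel k over vector a, multiplying and accumulating."""
    return [sum(a[i + m] * k[m] for m in range(len(k)))
            for i in range(len(a) - len(k) + 1)]

def conv2d(a, k):
    """Slide square kernel k over square feature map a."""
    n, kn = len(a), len(k)
    return [[sum(a[i + m][j + p] * k[m][p]
                 for m in range(kn) for p in range(kn))
             for j in range(n - kn + 1)]
            for i in range(n - kn + 1)]

vec = [1, 2, 3, 4, 5, 6, 7, 8]                 # 1x8 input vector
k1 = [1, 0, 0, 0, 1]                           # 1x5 kernel
out1 = conv1d(vec, k1)                         # 1x4 output: [6, 8, 10, 12]

img = [[r * 5 + c for c in range(5)] for r in range(5)]   # 5x5 feature map
k2 = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]                    # 3x3 centre-pick kernel
out2 = conv2d(img, k2)                                    # 3x3 output
```

With the centre-pick kernel, each output pixel is simply the centre of its 3 × 3 window, which makes the sliding behaviour easy to verify by eye.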
The classification model extracts image spatial information with the DS-CNN network and, combined with the SincNet network, extracts the spectral and spatial information of the hyperspectral image along two separate paths. This fully captures discriminative information, enables refined ground-object classification, and improves classification accuracy.
The following describes a method for training the hyperspectral image classification model, and the method includes the following steps:
step 201, hyperspectral image data acquired by a satellite are acquired.
The images are high-resolution hyperspectral images acquired by satellites such as EO-1, and the category truth map is obtained by manual labelling according to visual information. Alternatively, at least part of the hyperspectral images and the corresponding truth maps may be downloaded directly from the network.
A hyperspectral image is obtained by hyperspectral sensors mounted on different space platforms, which image the target area simultaneously in tens to hundreds of continuous, finely divided spectral bands across the ultraviolet, visible, near-infrared, and mid-infrared regions of the electromagnetic spectrum. Surface image information and spectral information are obtained at the same time, truly combining spectrum and image. Hyperspectral images not only greatly increase information richness but also make more reasonable and effective analysis of image data possible at the processing level.
Step 202, preprocessing the acquired hyperspectral image data.
The preprocessing specifically comprises:
First, the obtained original hyperspectral image is automatically clustered, and categories with large intra-class differences are subdivided to obtain a new truth map.
Second, the original hyperspectral image is compressed and reduced in dimension to remove redundant spectral information, thereby reducing the amount of computation and accelerating spatial information extraction.
Then, the dimension-reduced image is mirror-expanded and cut into image blocks of suitable size, each centred on one pixel. That is, if the original image size is m × n, the number of cropped image blocks is m · n, one per pixel.
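The mirror-expansion and cropping step can be sketched with reflective padding, which guarantees exactly m · n patches, one per pixel. The patch size and toy image below are assumptions:

```python
# Mirror-expand the dimension-reduced image, then cut one patch per pixel.
import numpy as np

m, n, bands = 6, 4, 3            # toy image size after PCA dimension reduction
img = np.arange(m * n * bands, dtype=float).reshape(m, n, bands)

patch = 5                        # assumed (odd) patch size
half = patch // 2
# mirror the borders so edge pixels also get full patches
padded = np.pad(img, ((half, half), (half, half), (0, 0)), mode='reflect')

# one patch centred on each of the m*n original pixels
patches = [padded[i:i + patch, j:j + patch, :]
           for i in range(m) for j in range(n)]
```

Each patch's centre pixel coincides with the original pixel it was cut for, so the patch label can be taken from the truth map at that pixel.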
The automatic clustering method selects an automatic clustering algorithm based on density peak values. The hyperspectral dimension reduction method adopts a Principal Component Analysis (PCA) algorithm.
The hyperspectral image dimension reduction method is the PCA algorithm, and the first three principal components are retained as the dimension-reduced data. FIG. 2 shows an exemplary dimension-reduction projection diagram of the PCA algorithm.
And 203, training the network model by using the preprocessed hyperspectral image data to obtain a classification model capable of realizing the automatic classification of the hyperspectral images.
The preprocessed image data is input into the network model shown in fig. 3, 4 and 5. The constructed network model is used to learn the labeled training image data set (as shown in fig. 8(a) and (b)), so as to obtain the network model of the initialization parameters.
And carrying out ground feature classification prediction on the verification set image by using the model to obtain a prediction classification result.
The prediction error loss value is obtained by comparing the difference between the prediction result (as shown in fig. 8(d)) and the new true value image after the clustering is completed (as shown in fig. 8 (c)).
Finally, the prediction error loss value is propagated back through the network by gradient backpropagation to correct the parameters of each module unit in the network. The accuracy of the final network test result is determined by comparing the predictions with the original truth map.
After a preset number of training cycles, the final prediction error falls within the set threshold and the accuracy of the prediction results reaches the expected range, yielding a network model capable of accurate ground-object classification.
By calculating the error between the pre-classification result and the new truth map and propagating it back by gradient backpropagation, the parameters of each module are corrected, and the optimal model is reached through iteration.
The number of correctly classified pixels is counted according to the label categories of the original truth map, and the accuracy of the test result is the number of correctly classified pixels divided by the total number of pixels. For example, if the original truth map labels are A, B, C, … and the new truth map labels are A1, A2, B1, B2, C, …, then a pixel whose truth label is A1 or A2 is counted as correct if its prediction is A1 or A2; predicting A1 where the truth is A2 is not counted as a classification error.
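The sub-label accuracy rule above amounts to mapping each clustered sub-class back to its original class before counting. The labels below are illustrative:

```python
# Accuracy with clustered sub-labels: A1/A2 both map back to A, etc.
parent = {'A1': 'A', 'A2': 'A', 'B1': 'B', 'B2': 'B', 'C': 'C'}

truth = ['A1', 'A2', 'B1', 'C', 'B2', 'A1']     # new truth-map labels
pred = ['A2', 'A1', 'B1', 'C', 'A1', 'A1']      # network predictions

# An A1 pixel predicted as A2 still counts as correct, since both are A.
correct = sum(parent[t] == parent[p] for t, p in zip(truth, pred))
accuracy = correct / len(truth)
```

Here the fifth pixel (truth B2, predicted A1) is the only error, so 5 of 6 pixels count as correct.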
Although the above embodiments describe the methods as an ordered sequence of steps, those skilled in the art will appreciate that the steps need not be performed in exactly the order described.
The trained classification model has the optimal combination of parameters. A remote sensing image (fig. 8(a)) is input into the network model, processed by the parameter-tuned convolution, pooling, and other modules to extract features, and finally passed to the classifier to achieve accurate classification of hyperspectral ground objects.
Meanwhile, in the network model adopted by the application, the sinc function filter has only two adjustable parameters (the highest and lowest cut-off frequencies), so it has fewer parameters, fits faster, and reduces running time. In addition, the sinc function filter is more sensitive to the peaks of the spectral curve, and its explicit physical meaning gives the neural network better interpretability.
In addition, the strip convolution modules added to the network model can extract semantic features of the local area around a pixel in different directions; combining the spatial information of the strip regions enables effective discrimination of the centre pixel and improves category judgement accuracy.
According to another embodiment of the application, the device for classifying the interpretable hyperspectral images based on the space-spectrum combined network is further provided, and accurate classification of the ground object classes of the hyperspectral images is achieved. The hyperspectral classification apparatus may be implemented by software and/or hardware.
As shown in fig. 10, the apparatus 300 includes an image acquisition device 301, a memory 302, and a processor 303. The image capturing device 301, the memory 302, and the processor 303 may be connected by a bus or other means.
The image acquiring device 301 is configured to acquire the hyperspectral image to be classified and send it to the processor 303.
The Processor 303 may be a Central Processing Unit (CPU) or other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof.
The memory 302 is a non-transitory computer-readable storage medium, and can be used to store non-transitory software programs, non-transitory computer-executable programs, and modules, such as programs or instructions corresponding to the hyperspectral image classification method according to the first embodiment of the present application.
The processor 303 executes the various functional applications and data processing of the device by running the non-transitory software programs or instructions stored in the memory 302, that is, it implements the hyperspectral image classification method of the above method embodiment.
The memory 302 may include a program storage area and a data storage area, wherein the program storage area may store an operating system and an application program required for at least one function, and the data storage area may store data created by the processor 303, and the like.
Further, the memory 302 may include high speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device.
In some aspects, the memory 302 optionally includes memory located remotely from the processor 303, and such remote memory may be connected to the processor 303 over a network.
Optionally, the network includes, but is not limited to, the internet, an intranet, a local area network, a mobile communications network, and combinations thereof.
Although the present application has been described in detail through the above embodiments, it is not limited to them; modifications and equivalent substitutions may be made to the technical solutions of the embodiments without departing from the spirit and scope of the inventive concept of the present application.

Claims (10)

1. A hyperspectral image classification method is characterized by comprising the following steps:
acquiring hyperspectral image data to be detected;
extracting spectral features of the hyperspectral image data by using a SincNet network, and extracting spatial features of the hyperspectral image data by using a DS-CNN network;
superposing the spectral features and the spatial features and inputting them into a fully connected layer for feature fusion; and
classifying the data after feature fusion.
2. The hyperspectral image classification method according to claim 1, further comprising performing a first preprocessing on the obtained hyperspectral image data, wherein the SincNet network extracts the spectral features based on the first preprocessed hyperspectral image data.
3. The hyperspectral image classification method according to claim 2, wherein the first preprocessing comprises: automatically clustering the original hyperspectral image to obtain a new truth map; and the SincNet network extracts the spectral features based on the original hyperspectral image and the new truth map.
4. The hyperspectral image classification method according to claim 3, wherein the automatic clustering adopts a density-peak-based automatic clustering algorithm.
5. The hyperspectral image classification method according to claim 3, further comprising performing second preprocessing on the obtained hyperspectral image data, wherein the DS-CNN network extracts the spatial features based on the first and second preprocessed hyperspectral image data.
6. The hyperspectral image classification method according to claim 5, wherein the second preprocessing comprises: performing PCA (principal component analysis) dimension reduction on the original hyperspectral image data and cutting it into image blocks; and the DS-CNN network extracts the spatial features based on the image blocks and the new truth map obtained by the first preprocessing.
7. The hyperspectral image classification method according to claim 1, wherein the first layer of the SincNet network performs feature extraction by using a sinc function filter as a one-dimensional convolution kernel.
8. The hyperspectral image classification method according to claim 1 or 7, wherein the DS-CNN network adopts six two-dimensional convolution layers to extract image spatial information, the first layer is a 3 x 3 square convolution kernel, the second and fourth layers are 1 x 5 horizontal strip convolution kernels, the third and fifth layers are 5 x 1 vertical strip convolution kernels, and the last layer is a 2 x 2 square convolution kernel.
9. A hyperspectral image classification device, comprising:
the image acquisition equipment is used for acquiring a hyperspectral image to be detected;
a memory having computer instructions stored therein;
a processor, in data connection with the image acquisition device and the memory, for automatically classifying the hyperspectral image by executing the computer instructions to perform the hyperspectral image classification method according to any of claims 1 to 8.
10. A computer-readable storage medium, characterized in that it stores computer instructions which, when executed by a processor, implement the hyperspectral image classification method according to any one of claims 1 to 8.
CN202110907624.1A 2021-08-09 2021-08-09 Interpretable hyperspectral image classification method and device based on space-spectrum combined network Pending CN113723469A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110907624.1A CN113723469A (en) 2021-08-09 2021-08-09 Interpretable hyperspectral image classification method and device based on space-spectrum combined network

Publications (1)

Publication Number Publication Date
CN113723469A true CN113723469A (en) 2021-11-30

Family

ID=78675216

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110907624.1A Pending CN113723469A (en) 2021-08-09 2021-08-09 Interpretable hyperspectral image classification method and device based on space-spectrum combined network

Country Status (1)

Country Link
CN (1) CN113723469A (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110084159A (en) * 2019-04-15 2019-08-02 西安电子科技大学 Hyperspectral image classification method based on the multistage empty spectrum information CNN of joint
CN111160273A (en) * 2019-12-31 2020-05-15 北京云智空间科技有限公司 Hyperspectral image space spectrum combined classification method and device
US20210012487A1 (en) * 2019-07-12 2021-01-14 Mayo Foundation For Medical Education And Research Deep Learning-Based Medical Image Quality Evaluation and Virtual Clinical Trial

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Yuan Li: "Automatic Clustering-Based Two-Branch CNN for Hyperspectral Image Classification", IEEE, pages 7803-7814 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116883852A (en) * 2023-08-29 2023-10-13 北京建工环境修复股份有限公司 Core data acquisition method and system based on hyperspectrum
CN116883852B (en) * 2023-08-29 2024-03-08 北京建工环境修复股份有限公司 Core data acquisition method and system based on hyperspectrum

Similar Documents

Publication Publication Date Title
CN109584248B (en) Infrared target instance segmentation method based on feature fusion and dense connection network
Hong et al. Multimodal GANs: Toward crossmodal hyperspectral–multispectral image segmentation
Wang et al. Multiscale visual attention networks for object detection in VHR remote sensing images
Bazi et al. Convolutional SVM networks for object detection in UAV imagery
Xie et al. Multilevel cloud detection in remote sensing images based on deep learning
CN108009559B (en) Hyperspectral data classification method based on space-spectrum combined information
Sirmacek et al. Urban-area and building detection using SIFT keypoints and graph theory
US9607228B2 (en) Parts based object tracking method and apparatus
EP4091109A1 (en) Systems for multiclass object detection and alerting and methods therefor
CN111476251A (en) Remote sensing image matching method and device
Shahab et al. How salient is scene text?
Capobianco et al. Target detection with semisupervised kernel orthogonal subspace projection
CN112580480B (en) Hyperspectral remote sensing image classification method and device
de Carvalho et al. Bounding box-free instance segmentation using semi-supervised iterative learning for vehicle detection
CN114037640A (en) Image generation method and device
Larabi et al. High-resolution optical remote sensing imagery change detection through deep transfer learning
Nayan et al. Real time detection of small objects
US11908178B2 (en) Verification of computer vision models
Deepthi et al. Detection and classification of objects in satellite images using custom CNN
Chen et al. Improved fast r-cnn with fusion of optical and 3d data for robust palm tree detection in high resolution uav images
CN113723469A (en) Interpretable hyperspectral image classification method and device based on space-spectrum combined network
CN116758419A (en) Multi-scale target detection method, device and equipment for remote sensing image
CN109934147B (en) Target detection method, system and device based on deep neural network
Jafrasteh et al. Generative adversarial networks as a novel approach for tectonic fault and fracture extraction in high resolution satellite and airborne optical images
Nayan et al. Real time multi-class object detection and recognition using vision augmentation algorithm

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination