CN106815601B - Hyperspectral image classification method based on recurrent neural network - Google Patents
- Publication number
- CN106815601B CN106815601B CN201710014713.7A CN201710014713A CN106815601B CN 106815601 B CN106815601 B CN 106815601B CN 201710014713 A CN201710014713 A CN 201710014713A CN 106815601 B CN106815601 B CN 106815601B
- Authority
- CN
- China
- Prior art keywords
- sample
- feature
- high spectrum
- recurrent neural
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2411—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/213—Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
- G06F18/2136—Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on sparsity criteria, e.g. with an overcomplete basis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/28—Determining representative reference patterns, e.g. by averaging or distorting; Generating dictionaries
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Physics & Mathematics (AREA)
- Evolutionary Computation (AREA)
- Bioinformatics & Computational Biology (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a hyperspectral image classification method based on a recurrent neural network, mainly solving the problems that the input features of existing methods are weakly discriminative and that local spatial features are insufficiently extracted. The implementation steps are: 1. extract the spatial texture features and sparse representation features of the hyperspectral image, and stack them into low-level features; 2. extract the local spatial sequence feature of each sample from the low-level features; 3. construct a recurrent neural network model according to the local spatial sequence features, and train its parameters with the local spatial sequence features of the training samples; 4. input the local spatial sequence features of the test samples into the trained recurrent neural network model, obtaining highly abstract high-level semantic features and the class labels of the test samples. By using deep learning, the invention improves the accuracy of hyperspectral image classification and can be used for vegetation survey, disaster monitoring, cartography, and information acquisition.
Description
Technical field
The invention belongs to the technical field of image processing and relates to a hyperspectral image classification method, which can be used for the classification of hyperspectral remote sensing images.
Background art
At present, as the spectral resolution of remote sensors continues to improve, people's understanding of the spectral properties and features of ground objects also deepens continuously. Many ground-object features hidden within narrow spectral ranges are gradually being discovered, which has greatly accelerated the development of remote sensing technology and made hyperspectral remote sensing one of the most important research directions of the remote sensing field in the 21st century.
Unlike multispectral remote sensing, hyperspectral remote sensing uses imaging spectrometers with nanoscale spectral resolution to image surface objects simultaneously in tens or hundreds of bands, obtaining the continuous spectral information of ground objects and unifying image and spectrum. It plays an important role in the national economy and has been widely used in fields such as land-cover classification, target detection, agricultural monitoring, mineral mapping, environmental management, and national defense construction.
The classification of hyperspectral images is an important part of hyperspectral remote sensing image processing and application; its ultimate goal is to assign a unique class label to each pixel in the image. Research on the hyperspectral image classification task mainly aims to distinguish the ground objects in the image from the hyperspectral data, that is, to separate different surface areas such as grassland, farmland, water, towns, and bridges by analyzing the original spectra or other feature information, helping people identify and analyze surface conditions.
Facing the characteristics of hyperspectral image data such as high dimensionality, strong data correlation, high redundancy, and local spatial consistency, common classification methods mainly proceed from the following aspects: (1) sparse representation and dictionary learning, which select a small number of labeled samples as a dictionary and represent other samples by linear combinations of dictionary samples; (2) support vector machine (SVM) classifiers and related kernel transformations, which construct novel kernel functions, such as polynomial kernels, to adapt to the nonlinear distribution of hyperspectral data; (3) semi-supervised and active learning, which improve classification performance under small-sample conditions; (4) newly proposed classifiers and combinations of multiple classifiers; (5) feature extraction and combined transformations. Commonly used features include: 1) spectral features, i.e., the spectral information of the hyperspectral image itself and related derived features; 2) spatial features, including texture, shape, and morphological features of the hyperspectral image; 3) spatial-spectral features, i.e., features combining spatial and spectral information.
These features start from the inherent characteristics of the hyperspectral image and are low-level features obtained by simple extraction and transformation. With the recent rise of deep learning, how to use deep learning frameworks to fully extract more representative high-level features of hyperspectral images and improve classification accuracy has increasingly become a research hotspot for scholars at home and abroad.
Deep learning is a feature extraction technique developed from neural network techniques; it abstracts low-level features layer by layer to obtain better feature representations. Common deep learning frameworks mainly include the stacked autoencoder (SAE), deep belief network (DBN), convolutional neural network (CNN), and recurrent neural network (RNN). They are widely used in fields such as natural language processing, computer vision, speech recognition, and bioinformatics, and have achieved very good results.
At present, scholars have introduced deep learning models such as the stacked autoencoder (SAE), deep belief network (DBN), and convolutional neural network (CNN) into hyperspectral image classification. In "Deep Learning-Based Classification of Hyperspectral Data", Yushi Chen applies PCA dimensionality reduction to the hyperspectral data, concatenates the pixels within a rectangular window into a feature vector as the local spatial feature, then concatenates it with the original spectral feature, and uses the result as the low-level feature input to a stacked autoencoder (SAE) model. In "Spectral-Spatial Classification of Hyperspectral Data Based on Deep Belief Network", Yushi Chen constructs the low-level feature by the same method and inputs it to a deep belief network (DBN) model. The classification performance of these methods is mediocre and their accuracy is not high, and they have several shortcomings: they directly adopt spectral features as input features, which contain too much random noise and are weakly discriminative, so good classification results cannot be obtained; and in local spatial feature extraction they simply take all pixels of the neighborhood without further processing, so pixels that differ greatly from the center pixel can seriously degrade classification accuracy.
Summary of the invention
The object of the invention is, in view of the above shortcomings of the prior art, to propose a hyperspectral image classification method based on a recurrent neural network, which constructs purer low-level features with better classification performance, strengthens the exploration of correlations between pixels in the local space, enhances the effect of important pixels, reduces the influence of useless pixels, and abstracts the low-level features into more discriminative high-level semantic features, so as to make fuller use of the characteristics of hyperspectral images and improve classification accuracy.
To achieve the above object, the technical solution of the invention includes the following:
(1) Input a hyperspectral image that contains K pixels, B hyperspectral bands, and c classes of ground objects, where K = K1 × K2, K1 is the length of the image and K2 its width. Each pixel of the image is a sample, each sample is represented by a feature vector, and the feature dimensionality of a sample is B. Select 10% of the samples of each ground-object class to form the training sample set and the remaining 90% to form the test sample set;
(2) Filter the principal-component grayscale images of the hyperspectral image with Gabor filters to obtain the spatial texture feature F1 ∈ R^(K1×K2×g), where R denotes the real number field and g is the dimension of the spatial texture feature vector;
(3) Calculate the sparse representation coefficients of each pixel of the hyperspectral image by the sparse representation method to obtain the sparse representation feature F2 ∈ R^(K1×K2×m), where m is the dimension of the sparse representation feature vector;
(4) Stack the spatial texture feature F1 and the sparse representation feature F2 of the hyperspectral image into the low-level feature F ∈ R^(K1×K2×l), where l is the dimension of the low-level feature vector, l = g + m;
(5) In the low-level feature matrix F of the hyperspectral image, construct a window centered on each sample, extract the local spatial feature block of the sample, and use the similarity between samples to build the local spatial sequence feature of the sample;
(6) Construct a recurrent neural network model whose number of time steps equals the number of samples in the window, and iteratively train the model parameters with the local spatial sequence features of the training samples and the corresponding class labels, obtaining a trained recurrent neural network model;
(7) Input the local spatial sequence features of the test samples into the trained recurrent neural network model to obtain their class labels, completing the classification.
The invention has the following advantages:
(1) The invention integrates the spatial texture feature and the sparse representation feature of the hyperspectral image. This low-level feature contains both the local spatial information of the pixel samples and the sparse representation information of the pixel samples with respect to other samples, so it is purer and more discriminative and works better for the classification task;
(2) The invention extracts the local spatial sequence feature of the hyperspectral image on the basis of its local spatial feature. It not only obtains local spatial information but also explores the similarity information between pixel samples in the local space, enhancing the effect of important pixels, reducing the influence of useless pixels, and improving classification performance;
(3) The invention uses the recurrent neural network model commonly applied in natural language processing, combining the temporal characteristics of the recurrent neural network with the local spatial sequence information of the hyperspectral image. It can effectively integrate the contextual relations of the local space of the hyperspectral image, abstract the low-level features into high-level semantic features, make full use of the characteristics of hyperspectral data, and improve classification accuracy.
Brief description of the drawings
Fig. 1 is a flow diagram of the invention;
Fig. 2 is a schematic diagram of the recurrent neural network model;
Fig. 3 is the Indian Pines image used in the simulation of the invention;
Fig. 4 is a comparison of the classification results of the invention and existing methods on the Indian Pines image.
Specific embodiments
Referring to Fig. 1, the specific implementation steps of the invention are as follows:
Step 1: Input the hyperspectral image.
Input a hyperspectral image as a three-dimensional matrix containing K pixel samples, B hyperspectral bands, and c classes of ground objects, where K = K1 × K2, K1 is the length of the image and K2 its width. Select 10% of the samples of each ground-object class as training samples and the remaining 90% as test samples.
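Step 1's per-class 10%/90% split can be sketched as follows. This is a minimal illustration, not the patent's code; the function name `split_per_class` and the fixed random seed are assumptions made for the example:

```python
import numpy as np

def split_per_class(labels, train_frac=0.10, seed=0):
    """Return (train_idx, test_idx): train_frac of each class's samples
    go to the training set, the rest to the test set."""
    rng = np.random.default_rng(seed)
    train_idx, test_idx = [], []
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        rng.shuffle(idx)
        n_train = max(1, int(round(train_frac * idx.size)))
        train_idx.extend(idx[:n_train].tolist())
        test_idx.extend(idx[n_train:].tolist())
    return np.array(train_idx), np.array(test_idx)
```

Splitting per class rather than globally keeps even the rare classes (e.g. Oats with 20 samples in Indian Pines) represented in the training set.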
Step 2: Obtain the spatial texture feature F1 of the hyperspectral image.
2a) Transform the hyperspectral image by principal component analysis and extract the first k = 10 principal-component grayscale images;
2b) Set Gabor filters with 4 orientations and 3 scales, i.e., set 4 different Gabor kernel orientations and 3 different sinusoidal plane-wave wavelengths, obtaining 12 Gabor filters. The kernel function of each Gabor filter is:

g(x, y; λ, θ, φ, σ, γ) = exp(−(x'² + γ²y'²) / (2σ²)) · cos(2πx'/λ + φ)

where x' = x cos θ + y sin θ, y' = −x sin θ + y cos θ, x and y denote coordinate positions, λ is the wavelength of the sinusoidal plane wave, θ is the orientation of the Gabor kernel, φ is the phase offset, σ is the standard deviation of the Gaussian envelope, and γ is the spatial aspect ratio;
2c) Apply each of the 12 Gabor filters to each of the first k principal-component grayscale images, obtaining 12 filtered images per principal-component grayscale image;
2d) Stack the 12 × k filtered images together to obtain the spatial texture feature F1 ∈ R^(K1×K2×g), where R denotes the real number field and g = 12 × k is the length of the spatial texture feature vector.
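The Gabor kernel of step 2b) can be implemented directly in NumPy. The sketch below builds the 4-orientation × 3-wavelength bank of 12 filters; the kernel size of 15 and the σ and γ defaults are illustrative assumptions, since the patent does not state them:

```python
import numpy as np

def gabor_kernel(ksize, lam, theta, phi=0.0, sigma=2.0, gamma=0.5):
    """Real Gabor kernel: exp(-(x'^2 + gamma^2 y'^2)/(2 sigma^2)) * cos(2 pi x'/lam + phi),
    with x' = x cos(theta) + y sin(theta), y' = -x sin(theta) + y cos(theta)."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xp = x * np.cos(theta) + y * np.sin(theta)
    yp = -x * np.sin(theta) + y * np.cos(theta)
    return (np.exp(-(xp ** 2 + gamma ** 2 * yp ** 2) / (2 * sigma ** 2))
            * np.cos(2 * np.pi * xp / lam + phi))

# 12-filter bank: 4 orientations x 3 wavelengths (wavelength = 1 / frequency)
bank = [gabor_kernel(15, 1.0 / f, th)
        for th in (0.0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)
        for f in (0.25, 0.5, 0.75)]
```

Each filter is then convolved with each principal-component grayscale image (e.g. via `scipy.ndimage.convolve`), giving the 12 responses per image that are stacked in step 2d).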
Step 3: Obtain the sparse representation feature F2 of the hyperspectral image.
3a) Randomly select 1% of the samples of each class from the c classes of training samples of the hyperspectral image as dictionary atoms to construct sub-dictionaries, where the sub-dictionary of the i-th class is Di = [d_{i,1}, ..., d_{i,j}, ..., d_{i,mi}], d_{i,j} denotes the j-th dictionary atom of the i-th class, j = 1, 2, ..., mi, and mi is the number of dictionary atoms of the i-th class;
3b) Line up the c sub-dictionaries and merge them into one whole, obtaining the overall structured dictionary D = [D1 ... Di ... Dc], where D ∈ R^(B×m) is a two-dimensional matrix denoting the overall structured dictionary formed by the sub-dictionaries of all classes, and m = m1 + ... + mi + ... + mc is the total number of sub-dictionary atoms of all classes;
3c) Solve the sparse representation vector of each pixel by the orthogonal matching pursuit algorithm, i.e., optimize the following formula by orthogonal matching pursuit to obtain the sparse representation vector α of each pixel x with respect to the structured dictionary D:

min_α ||α||₀   s.t.   Dα = x

where ||α||₀ denotes the ℓ0 norm of α;
3d) Arrange the sparse representation vectors α of all samples of the hyperspectral image according to their positions in the original image data, forming a three-dimensional sparse representation feature matrix F2 ∈ R^(K1×K2×m).
Step 4: Stack the spatial texture feature F1 ∈ R^(K1×K2×g) and the sparse representation feature F2 ∈ R^(K1×K2×m) together, obtaining the low-level feature matrix F ∈ R^(K1×K2×l) of the hyperspectral image, where l = g + m is the length of the low-level feature vector.
Step 5: Construct the local spatial sequence feature of each sample.
5a) In the low-level feature matrix F of the hyperspectral image, construct a window of size w × w with side length w = 9 centered on each sample x, and extract the local spatial feature block of x; the local spatial feature block of x then contains w² = 81 samples, each of which is a low-level feature vector of length l;
5b) Compute the similarity between each sample in the local spatial feature block and the central sample x according to the Euclidean distance formula — the smaller the Euclidean distance, the greater the similarity — and line up the samples from greatest to least similarity, obtaining the local spatial sequence feature of sample x.
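Steps 5a)–5b) amount to flattening a w × w block of feature vectors and sorting it by distance to the center. A minimal sketch, assuming an interior pixel (the patent does not specify how image borders are padded):

```python
import numpy as np

def local_sequence(F, row, col, w=9):
    """Return the (w*w, l) local spatial sequence feature of the sample at
    (row, col): the w x w neighborhood's feature vectors ordered from most
    to least similar to the center (ascending Euclidean distance)."""
    half = w // 2
    block = F[row - half: row + half + 1, col - half: col + half + 1]
    flat = block.reshape(-1, F.shape[-1])
    dists = np.linalg.norm(flat - F[row, col], axis=1)
    return flat[np.argsort(dists, kind="stable")]
```

The first element of the sequence is always the center sample itself (distance 0), so the most reliable evidence enters the recurrent network first and the pixels most likely to be from a different class enter last.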
Step 6: Train the recurrent neural network model with the training sample set and its class labels.
6a) Using the number of samples in the window as the number of time steps T, construct a recurrent neural network model whose input layer and hidden layer both have l nodes, where T = w × w = 81;
6b) Input the local spatial sequence features of the training samples and the corresponding class labels into the recurrent neural network model, i.e., input each low-level feature vector of a sample's local spatial sequence feature into its corresponding time step. Concretely, the t-th low-level feature x_t in the local spatial sequence feature of any training sample x is input into the t-th time step of the recurrent neural network, and the input x_t of that time step together with the hidden state s_{t−1} of the previous, (t−1)-th, time step forms the hidden state s_t of that time step:

s_t = σ(U x_t + W s_{t−1}),

where U is the weight matrix from the input layer to the hidden layer, W is the weight matrix from the hidden layer to the hidden layer, and σ is a nonlinear activation function, chosen in the invention as the ReLU function. The final output o_81 is then determined by the hidden state s_81 of the last time step:

o_81 = φ(V s_81),

where V is the weight matrix from the hidden layer to the output layer and φ is a nonlinear activation function, chosen in this example as the softmax function. The parameters of the recurrent neural network model are then iteratively trained by the backpropagation-through-time method, stopping after 200 iterations, obtaining the trained recurrent neural network model.
Step 7: Input the local spatial sequence features of the test samples into the trained recurrent neural network model to obtain their class labels, completing the classification.
The effect of the invention can be further illustrated by the following simulation experiments:
1. Simulation conditions:
The simulation experiments use the Indian Pines image, which was acquired in June 1992 over northwestern Indiana by the airborne visible/infrared imaging spectrometer AVIRIS of NASA's Jet Propulsion Laboratory, as shown in Fig. 3. The image size is 145 × 145 with 220 bands in total; after removing noisy bands and bands absorbed by the atmosphere and water, 200 bands remain, covering 16 classes of ground objects, as shown in Table 1.
The simulation experiments were carried out in Python on a Windows 10 system with an Intel Core i5-4210 CPU at 2.90 GHz and 8 GB of memory.
Table 1: The 16 classes in the Indian Pines image

Class | Class name | Number of samples | Class | Class name | Number of samples |
---|---|---|---|---|---|
1 | Alfalfa | 46 | 9 | Oats | 20 |
2 | Corn-notill | 1428 | 10 | Soybean-notill | 972 |
3 | Corn-mintill | 830 | 11 | Soybean-mintill | 2455 |
4 | Corn | 237 | 12 | Soybean-clean | 593 |
5 | Grass-pasture | 483 | 13 | Wheat | 205 |
6 | Grass-trees | 730 | 14 | Woods | 1265 |
7 | Grass-pasture-mowed | 28 | 15 | Buildings-Grass-Trees-Drives | 386 |
8 | Hay-windrowed | 478 | 16 | Stone-Steel-Towers | 93 |
2. Simulation parameters:
All simulation experiments uniformly select 10% of the samples for training and the remaining 90% for testing. The SVM penalty factor is set to 491. In the SRC method, the dictionary is formed directly from the training samples and the sparsity is set to 10. In the SOMP method, the dictionary is also formed directly from the training samples, the window size w is set to 9, and the sparsity is set to 30. In the invention, the PCA transform retains the first 10 principal components; the Gabor filters use the 3 frequencies {0.25, 0.5, 0.75} and 4 orientations, 12 filters in total; 1% of the samples of each class are randomly selected to build the sparse representation dictionary, with the sparsity set to 30; the window size w is set to 9; and the number of time steps T is set to 81.
3. Simulation content and results:
The Indian Pines hyperspectral image is classified with the invention and three existing common methods: the support vector machine (SVM) classification method, the sparse representation classification (SRC) method, and the local-region sparse representation (SOMP) classification method.
The results of classifying the Indian Pines image with the invention and the above three common methods are shown in Fig. 4, where Fig. 4(a) is the classification result of the SVM method, Fig. 4(b) that of the sparse representation SRC method, Fig. 4(c) that of the SOMP method, and Fig. 4(d) that of the invention. As can be seen from Fig. 4, compared with the three common methods, the result map of the invention is cleaner and more complete, its local spatial consistency and edge consistency are both better than those of the existing methods, and its classification accuracy is higher.
The invention and each of the other methods were run in 10 simulation experiments, and the average of the classification results is taken as the final classification accuracy, including the overall accuracy (OA), the average per-class accuracy (AA), and the Kappa coefficient, as shown in Table 2.
Table 2: Classification accuracy of the invention and other methods

Method | OA (%) | AA (%) | Kappa |
---|---|---|---|
SVM | 81.24 | 74.06 | 0.79 |
SRC | 68.53 | 64.23 | 0.64 |
SOMP | 95.27 | 83.48 | 0.95 |
The invention | 97.17 | 95.84 | 0.97 |
As can be seen from Table 2, the invention and the existing SOMP method, which both incorporate local spatial information, achieve obviously higher classification accuracy than the SVM and SRC methods, which use only single-pixel information. Moreover, the invention both fuses the information of the spatial texture feature and the sparse representation feature and fully mines the information between pixels in the local space, using the recurrent neural network model to extract the low-level features into more discriminative and more representative high-level semantic features. It therefore obtains higher classification accuracy and outperforms the other three methods in overall accuracy, average accuracy, and the Kappa coefficient.
In summary, the invention integrates the spatial texture feature and the sparse representation feature of the hyperspectral image into a low-level feature, extracts the local spatial sequence feature on the basis of the local spatial feature, and classifies the hyperspectral image with the recurrent neural network model from the deep learning framework. It not only improves the purity and discriminability of the low-level feature but also explores the similarity information between pixel samples in the local space of the hyperspectral image, enhancing the effect of important pixels and reducing the influence of useless pixels. Meanwhile, by combining the temporal characteristics of the recurrent neural network with the local spatial sequence information of the hyperspectral image, the low-level features can be abstracted into high-level semantic features that effectively integrate the contextual relations of the local space. The characteristics of hyperspectral data are thus fully utilized, a higher recognition rate is obtained, and the invention has obvious advantages over existing methods.
Claims (5)
1. A hyperspectral image classification method based on a recurrent neural network, comprising:
(1) inputting a hyperspectral image that contains K pixels, B hyperspectral bands, and c classes of ground objects, where K = K1 × K2, K1 is the length of the image and K2 its width; each pixel of the image is a sample, each sample is represented by a feature vector, and the feature dimensionality of a sample is B; selecting 10% of the samples of each ground-object class to form the training sample set and the remaining 90% to form the test sample set;
(2) filtering the principal-component grayscale images of the hyperspectral image with Gabor filters to obtain the spatial texture feature F1 ∈ R^(K1×K2×g), where R denotes the real number field and g is the dimension of the spatial texture feature vector;
(3) calculating the sparse representation coefficients of each pixel of the hyperspectral image by the sparse representation method to obtain the sparse representation feature F2 ∈ R^(K1×K2×m), where m is the dimension of the sparse representation feature vector;
(4) stacking the spatial texture feature F1 and the sparse representation feature F2 of the hyperspectral image into the low-level feature F ∈ R^(K1×K2×l), where l is the dimension of the low-level feature vector, l = g + m;
(5) in the low-level feature matrix F of the hyperspectral image, constructing a window centered on each sample, extracting the local spatial feature block of the sample, and using the similarity between samples to build the local spatial sequence feature of the sample;
(6) constructing a recurrent neural network model whose number of time steps equals the number of samples in the window, and iteratively training the model parameters with the local spatial sequence features of the training samples and the corresponding class labels, obtaining a trained recurrent neural network model;
(7) inputting the local spatial sequence features of the test samples into the trained recurrent neural network model to obtain their class labels, completing the classification.
2. The method according to claim 1, wherein the filtering of the principal-component grayscale images of the hyperspectral image with Gabor filters in step (2) is carried out as follows:
2a) transforming the hyperspectral image by principal component analysis and extracting the first k = 10 principal-component grayscale images;
2b) setting 4 different Gabor kernel orientations and 3 different sinusoidal plane-wave wavelengths, obtaining 12 Gabor filters, the kernel function of each Gabor filter being:

g(x, y; λ, θ, φ, σ, γ) = exp(−(x'² + γ²y'²) / (2σ²)) · cos(2πx'/λ + φ)

where x' = x cos θ + y sin θ, y' = −x sin θ + y cos θ, x and y denote coordinate positions, λ is the wavelength of the sinusoidal plane wave, θ is the orientation of the Gabor kernel, φ is the phase offset, σ is the standard deviation of the Gaussian envelope, and γ is the spatial aspect ratio;
2c) applying each of the 12 Gabor filters to each of the k principal-component grayscale images, obtaining 12 filtered images per principal-component grayscale image;
2d) stacking the 12 × k filtered images together to obtain the spatial texture feature F1 ∈ R^(K1×K2×g), where g = 12 × k is the dimension of the spatial texture feature vector.
3. The method according to claim 1, wherein the calculation of the sparse representation coefficients of each pixel of the hyperspectral image in step (3) is carried out as follows:
3a) randomly selecting 1% of the samples of each class from the c classes of training samples of the hyperspectral image as dictionary atoms to construct sub-dictionaries, where the sub-dictionary of the i-th class is Di = [d_{i,1}, ..., d_{i,j}, ..., d_{i,mi}], d_{i,j} denotes the j-th dictionary atom of the i-th class, j = 1, 2, ..., mi, and mi is the number of dictionary atoms of the i-th class;
3b) lining up the c sub-dictionaries and merging them into one whole, obtaining the overall structured dictionary D = [D1 ... Di ... Dc], where D ∈ R^(B×m) is a two-dimensional matrix denoting the overall structured dictionary formed by the sub-dictionaries of all classes, and m = m1 + ... + mi + ... + mc is the dimension of the sparse representation feature vector, obtained by summing the numbers of sub-dictionary atoms of all classes;
3c) solving the sparse representation vector of each pixel by the orthogonal matching pursuit algorithm, i.e., optimizing the following formula by orthogonal matching pursuit to obtain the sparse representation vector α of each pixel x with respect to the structured dictionary D:

min_α ||α||₀   s.t.   Dα = x

where ||α||₀ denotes the ℓ0 norm of α;
3d) arranging the sparse representation vectors α of all samples of the hyperspectral image according to their positions in the original image data, forming a three-dimensional sparse representation feature matrix F2 ∈ R^(K1×K2×m).
4. The method according to claim 1, wherein the local spatial sequence feature of a sample is constructed in step 5 as follows:
5a) in the low-level feature matrix F of the hyperspectral image, centered on any sample x, construct a rectangular window of size w × w with window side length w = 9, and extract the local spatial feature block of x, i.e. a three-dimensional matrix of size w × w × l; the local spatial feature block of x thus contains w² = 81 pixel samples, each of which is a low-level feature vector of length l, with l = g + m, where g denotes the dimension of the spatial texture feature vector and m denotes the dimension of the sparse representation feature vector;
5b) compute the similarity of each pixel sample in the local spatial feature block to the center pixel sample x, and arrange the pixels in descending order of similarity, obtaining the local spatial sequence feature of the pixel sample x.
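Steps 5a)-5b) amount to extracting a w × w neighborhood around each pixel and sorting the neighbors by similarity to the center. A minimal sketch follows; the border-clamping and the negative-Euclidean-distance similarity are illustrative assumptions, since this excerpt does not fix the similarity measure.

```python
import numpy as np

def local_spatial_sequence(F, row, col, w=9):
    """Extract the w x w feature block centered at (row, col) from the
    low-level feature matrix F (H x W x l) and order its w*w pixel
    samples by decreasing similarity to the center pixel.
    Similarity is taken here as negative Euclidean distance (assumed)."""
    r = w // 2
    H, W, l = F.shape
    # clamp the window at image borders (an illustrative choice)
    rows = np.clip(np.arange(row - r, row + r + 1), 0, H - 1)
    cols = np.clip(np.arange(col - r, col + r + 1), 0, W - 1)
    block = F[np.ix_(rows, cols)].reshape(-1, l)   # (w*w, l)
    center = F[row, col]
    dist = np.linalg.norm(block - center, axis=1)
    order = np.argsort(dist)            # smallest distance = most similar
    return block[order]                 # (w*w, l) sequence, center first

# toy low-level feature matrix: 12 x 12 image with l = 4 features per pixel
rng = np.random.default_rng(1)
F = rng.normal(size=(12, 12, 4))
seq = local_spatial_sequence(F, 6, 6, w=9)
print(seq.shape)                        # sequence of 81 feature vectors
```

The center pixel has distance zero to itself, so it always heads the sequence, matching the "sorted relative to the center sample x" construction.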
5. The method according to claim 1, wherein the recurrent neural network model is constructed and its parameters trained in step 6 as follows:
6a) build a recurrent neural network model whose number of time steps is T and whose input-layer and hidden-node numbers are both l, where T = w × w = 81, w denotes the window side length, and l = g + m, with g the dimension of the spatial texture feature vector and m the dimension of the sparse representation feature vector;
6b) input the local spatial sequence features of the training samples into the recurrent neural network model, i.e. feed each low-level feature vector of a sample's local spatial sequence feature into its corresponding time step, and iteratively train the parameters of the recurrent neural network model with the backpropagation-through-time method, obtaining the trained recurrent neural network model.
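The model of 6a)-6b) can be sketched as a plain Elman-style recurrent network that consumes the 81-step local spatial sequence and classifies from the final hidden state. The tanh nonlinearity, softmax readout, and toy dimensions below are illustrative assumptions; the claim fixes only the step count T = w × w and the layer width l = g + m.

```python
import numpy as np

class SimpleRNN:
    """Minimal Elman RNN: l-dimensional inputs, l hidden units,
    softmax over c classes read from the final hidden state."""
    def __init__(self, l, c, seed=0):
        rng = np.random.default_rng(seed)
        s = 1.0 / np.sqrt(l)
        self.Wx = rng.uniform(-s, s, size=(l, l))   # input-to-hidden
        self.Wh = rng.uniform(-s, s, size=(l, l))   # hidden-to-hidden
        self.b = np.zeros(l)
        self.Wo = rng.uniform(-s, s, size=(c, l))   # hidden-to-output
        self.bo = np.zeros(c)

    def forward(self, seq):
        """seq: (T, l) local spatial sequence; returns class probabilities."""
        h = np.zeros_like(self.b)
        for x_t in seq:        # one low-level feature vector per time step
            h = np.tanh(self.Wx @ x_t + self.Wh @ h + self.b)
        logits = self.Wo @ h + self.bo
        e = np.exp(logits - logits.max())
        return e / e.sum()

# toy use: T = 81 time steps, l = 10 features, c = 3 land-cover classes
rng = np.random.default_rng(2)
seq = rng.normal(size=(81, 10))
probs = SimpleRNN(l=10, c=3).forward(seq)
print(probs)                   # softmax class probabilities, sum to 1
```

Training by backpropagation through time would unroll this loop over the 81 steps and accumulate gradients of a cross-entropy loss into Wx, Wh, and Wo; the forward pass above is the part the claim's architecture fixes.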
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710014713.7A CN106815601B (en) | 2017-01-10 | 2017-01-10 | Hyperspectral image classification method based on recurrent neural network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106815601A CN106815601A (en) | 2017-06-09 |
CN106815601B true CN106815601B (en) | 2019-10-11 |
Family
ID=59110109
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710014713.7A Active CN106815601B (en) | 2017-01-10 | 2017-01-10 | Hyperspectral image classification method based on recurrent neural network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106815601B (en) |
Families Citing this family (30)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107194437B (en) * | 2017-06-22 | 2020-04-07 | 重庆大学 | Image classification method based on Gist feature extraction and concept machine recurrent neural network |
CN107169535B (en) * | 2017-07-06 | 2023-11-03 | 谈宜勇 | Deep learning classification method and device for biological multispectral image |
CN107844751B (en) * | 2017-10-19 | 2021-08-27 | 陕西师范大学 | Hyperspectral remote sensing image classification method using guided filtering and long short-term memory neural network |
CN107798348B (en) * | 2017-10-27 | 2020-02-18 | 广东省智能制造研究所 | Hyperspectral image classification method based on neighborhood information deep learning |
CN110399929B (en) * | 2017-11-01 | 2023-04-28 | 腾讯科技(深圳)有限公司 | Fundus image classification method, fundus image classification apparatus, and computer-readable storage medium |
CN107679525B (en) * | 2017-11-01 | 2022-11-29 | 腾讯科技(深圳)有限公司 | Image classification method and device and computer readable storage medium |
CN108171270B (en) * | 2018-01-05 | 2021-08-27 | 大连海事大学 | Hyperspectral image classification method based on Hash learning |
CN108256454B (en) * | 2018-01-08 | 2020-08-14 | 浙江大华技术股份有限公司 | Training method based on CNN model, and face posture estimation method and device |
CN108460342B (en) * | 2018-02-05 | 2021-01-01 | 西安电子科技大学 | Hyperspectral image classification method based on convolutional neural network and cyclic neural network |
CN108764303A (en) * | 2018-05-10 | 2018-11-06 | 电子科技大学 | Remote sensing image spatial description method based on attention mechanism |
US10643092B2 (en) | 2018-06-21 | 2020-05-05 | International Business Machines Corporation | Segmenting irregular shapes in images using deep region growing with an image pyramid |
US10776923B2 (en) | 2018-06-21 | 2020-09-15 | International Business Machines Corporation | Segmenting irregular shapes in images using deep region growing |
CN109002771B (en) * | 2018-06-26 | 2022-04-08 | 中国科学院遥感与数字地球研究所 | Remote sensing image classification method based on recurrent neural network |
CN109460471B (en) * | 2018-11-01 | 2021-09-24 | 信融源大数据科技(北京)有限公司 | Method for establishing fiber category map library based on self-learning mode |
CN109670042A (en) * | 2018-12-04 | 2019-04-23 | 广东宜教通教育有限公司 | Exam question classification and difficulty grading method based on recurrent neural network |
CN109615008B (en) * | 2018-12-11 | 2022-05-13 | 华中师范大学 | Hyperspectral image classification method and system based on stack width learning |
CN109711466B (en) * | 2018-12-26 | 2023-04-14 | 陕西师范大学 | CNN hyperspectral image classification method based on edge preserving filtering |
CN109816002B (en) * | 2019-01-11 | 2022-09-06 | 广东工业大学 | Single sparse self-encoder weak and small target detection method based on feature self-migration |
CN109978041B (en) * | 2019-03-19 | 2022-11-29 | 上海理工大学 | Hyperspectral image classification method based on alternative updating convolutional neural network |
CN110188794B (en) * | 2019-04-23 | 2023-02-28 | 深圳大学 | Deep learning model training method, device, equipment and storage medium |
CN110163293A (en) * | 2019-05-28 | 2019-08-23 | 武汉轻工大学 | Red meat classification method, device, equipment and storage medium based on deep learning |
CN110363078B (en) * | 2019-06-05 | 2023-08-04 | 广东三姆森科技股份有限公司 | Method and device for classifying hyperspectral images based on ADMM-Net |
CN110866439B (en) * | 2019-09-25 | 2023-07-28 | 南京航空航天大学 | Hyperspectral image joint classification method based on multi-feature learning and super-pixel kernel sparse representation |
CN110852451B (en) * | 2019-11-27 | 2022-03-01 | 电子科技大学 | Recursive kernel self-adaptive filtering method based on kernel function |
CN111582330A (en) * | 2020-04-22 | 2020-08-25 | 北方民族大学 | Integrated ResNet-NRC method for dividing sample space based on lung tumor image |
CN111860654B (en) * | 2020-07-22 | 2024-02-02 | 河南大学 | Hyperspectral image classification method based on cyclic neural network |
CN113128669A (en) * | 2021-04-08 | 2021-07-16 | 中国科学院计算技术研究所 | Neural network model for semi-supervised learning and semi-supervised learning method |
CN113139532B (en) * | 2021-06-22 | 2021-09-21 | 中国地质大学(武汉) | Classification method based on multi-output classification model, computer equipment and medium |
CN113887656B (en) * | 2021-10-21 | 2024-04-05 | 江南大学 | Hyperspectral image classification method combining deep learning and sparse representation |
CN117649943B (en) * | 2024-01-30 | 2024-04-30 | 吉林大学 | Shaping data intelligent analysis system and method based on machine learning |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103514456A (en) * | 2013-06-30 | 2014-01-15 | 安科智慧城市技术(中国)有限公司 | Image classification method and device based on compressed sensing multi-core learning |
CN104036289A (en) * | 2014-06-05 | 2014-09-10 | 哈尔滨工程大学 | Hyperspectral image classification method based on spatial and spectral features and sparse representation |
CN104091151A (en) * | 2014-06-30 | 2014-10-08 | 南京信息工程大学 | Vehicle identification method based on Gabor feature extraction and sparse representation |
CN104298999A (en) * | 2014-09-30 | 2015-01-21 | 西安电子科技大学 | Hyperspectral feature learning method based on recursive autoencoding |
US9152881B2 (en) * | 2012-09-13 | 2015-10-06 | Los Alamos National Security, Llc | Image fusion using sparse overcomplete feature dictionaries |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB2532075A (en) * | 2014-11-10 | 2016-05-11 | Lego As | System and method for toy recognition and detection based on convolutional neural networks |
2017-01-10: CN CN201710014713.7A patent/CN106815601B/en active Active
Non-Patent Citations (2)
Title |
---|
Preprocessing-free surface material classification using convolutional neural networks pretrained by sparse Autoencoder;Mengqi Ji etal.;《2015 IEEE 25th International Workshop on Machine Learning for Signal Processing (MLSP)》;20150920;第1-6页 * |
Facial expression recognition based on sparse representation; Zhu Ke; China Master's Theses Full-text Database, Information Science and Technology Series; 20131215; pp. 9-43 *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106815601B (en) | Hyperspectral image classification method based on recurrent neural network | |
Wang et al. | Auto-AD: Autonomous hyperspectral anomaly detection network based on fully convolutional autoencoder | |
Wu et al. | ORSIm detector: A novel object detection framework in optical remote sensing imagery using spatial-frequency channel features | |
Zhao et al. | Object-based convolutional neural network for high-resolution imagery classification | |
Shen et al. | Efficient deep learning of nonlocal features for hyperspectral image classification | |
Xu et al. | A lightweight and robust lie group-convolutional neural networks joint representation for remote sensing scene classification | |
Zhang et al. | Weakly supervised learning based on coupled convolutional neural networks for aircraft detection | |
CN106529508B (en) | Hyperspectral image classification method based on local and non-local multi-feature semantics | |
Liu et al. | Stacked Fisher autoencoder for SAR change detection | |
CN103971123B (en) | Hyperspectral image classification method based on linear regression Fisher discrimination dictionary learning (LRFDDL) | |
CN106023065B (en) | Tensor-based spectral-spatial dimensionality reduction method for hyperspectral images using deep convolutional neural networks | |
CN109389080A (en) | Hyperspectral image classification method based on semi-supervised WGAN-GP | |
Huang et al. | Multi-scale local context embedding for LiDAR point cloud classification | |
CN107247930A (en) | SAR image object detection method based on CNN and Selective Attention Mechanism | |
CN102208034A (en) | Semi-supervised dimension reduction-based hyper-spectral image classification method | |
CN108537121A (en) | Self-adaptive remote sensing scene classification method based on meteorological environment parameter and image information fusion | |
CN107767416A (en) | Method for recognizing pedestrian orientation in low-resolution images | |
CN109034213B (en) | Hyperspectral image classification method and system based on correlation entropy principle | |
CN108427913A (en) | Hyperspectral image classification method combining spectral, spatial and hierarchical information | |
CN108596195A (en) | Scene recognition method based on sparse coding feature extraction | |
Deng | A survey of convolutional neural networks for image classification: Models and datasets | |
CN105160351A (en) | Semi-supervised hyperspectral classification method based on anchor-point sparse graph | |
CN114358211B (en) | Multi-mode deep learning-based aircraft behavior intention recognition method | |
Tun et al. | Hyperspectral remote sensing images classification using fully convolutional neural network | |
Xu et al. | UCDFormer: Unsupervised change detection using a transformer-driven image translation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||