CN108470192B - Hyperspectral classification method and device - Google Patents

Hyperspectral classification method and device

Info

Publication number
CN108470192B
CN108470192B (granted from application CN201810206243.9A)
Authority
CN
China
Prior art keywords
matrix
hyperspectral
feature matrix
spectral
end member
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810206243.9A
Other languages
Chinese (zh)
Other versions
CN108470192A (en)
Inventor
杨祖元
陈松灿
李珍妮
谢胜利
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ante Laser Co ltd
Original Assignee
Guangdong University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong University of Technology filed Critical Guangdong University of Technology
Priority to CN201810206243.9A priority Critical patent/CN108470192B/en
Publication of CN108470192A publication Critical patent/CN108470192A/en
Application granted granted Critical
Publication of CN108470192B publication Critical patent/CN108470192B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/24: Classification techniques
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213: Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06F18/2135: Feature extraction by subspace methods based on approximation criteria, e.g. principal component analysis
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/25: Fusion techniques
    • G06F18/253: Fusion techniques of extracted features
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/10: Terrestrial scenes
    • G06V20/13: Satellite images
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/10: Terrestrial scenes
    • G06V20/194: Terrestrial scenes using hyperspectral data, i.e. more or other wavelengths than RGB

Abstract

The invention discloses a hyperspectral classification method and device. The dimensionality and volume of current hyperspectral remote sensing data have grown greatly, and reducing the dimensionality of such data can cause information loss. The method provided by the invention overcomes the loss of label information after dimension reduction of a hyperspectral image through the idea of constrained non-negative matrix factorization. In addition, a first end-member spectral matrix computed by the vertex component analysis algorithm is used as the initial value of the constrained non-negative matrix factorization algorithm, which accelerates the algorithm; the label constraint matrix is constructed from the classification results of a machine learning algorithm, which effectively improves the classification accuracy and speed for hyperspectral images; and the end-member classes obtained by the constrained non-negative matrix factorization updates effectively reflect the spatial distribution of the various ground objects. The invention thereby solves the technical problem that current spectral classification methods lose the label information of a hyperspectral image after non-negative matrix factorization and dimension reduction, resulting in low classification accuracy.

Description

Hyperspectral classification method and device
Technical Field
The invention relates to the technical field of remote sensing imaging combined with machine learning, and in particular to a hyperspectral classification method and device.
Background
In the past, remote sensing measurements were made in wide wavebands. With the development of science and technology, the advent of hyperspectral remote sensing images achieved a breakthrough in the spectral resolution of remote sensing imagery: data can be acquired from an object of interest using many very narrow electromagnetic wavebands in the ultraviolet, visible, and mid-infrared regions of the electromagnetic spectrum, so that substances undetectable by broadband remote sensing can be detected by hyperspectral remote sensing.
Besides the two-dimensional spatial image, a hyperspectral remote sensing image also includes a spectral dimension, unifying image and spectrum: the continuous spectrum of each ground object is obtained together with the spatial image of the surface. This gives hyperspectral imagery a natural advantage for analysing ground-object information, and it is widely applied in fields such as military affairs, agriculture, mineral resources, and the ecological environment.
The ground reflection spectrum obtained by hyperspectral remote sensing is recorded pixel by pixel and is a composite of the spectra of the surface materials corresponding to each pixel. Because of the limited spatial resolution of remote sensors and the complex diversity of natural ground objects, the surface area corresponding to a pixel does not necessarily contain a single material but is often a mixture of the spectra of several different materials. If a pixel contains only one ground-object type, such as mineral, water body, or vegetation, it is called an end member (a pure pixel); if it contains more than one type, it is called a mixed pixel. Mixed pixels are ubiquitous in remote sensing images; even an image of bare ground is a mixed spectrum of different types of soil and minerals. Hyperspectral classification is the process of dividing the pixels of a hyperspectral image into different categories, where each category corresponds to an end member and each end member corresponds to a ground-object type.
However, as the spectral resolution of hyperspectral remote sensing data improves, its dimensionality and data volume also increase greatly (AVIRIS, for example, has 224 wavebands). This significantly increases the computational load, complicates hyperspectral identification and classification, and makes many traditional spectral classification methods unsuitable for hyperspectral data, while dimension reduction of the data can cause information loss. Current spectral classification methods therefore lose the label information of a hyperspectral image after non-negative matrix factorization and dimension reduction, resulting in the technical problem of low classification accuracy.
Disclosure of Invention
The invention provides a hyperspectral classification method and device, solving the technical problem that current spectral classification methods lose the label information of a hyperspectral image after non-negative matrix factorization and dimension reduction, resulting in low classification accuracy.
The invention provides a hyperspectral classification method, which comprises the following steps:
s1: extracting a hyperspectral feature matrix from the hyperspectral image to be classified to obtain a first hyperspectral feature matrix, deleting every waveband having a negative-valued element from the first hyperspectral feature matrix, and updating to obtain a second hyperspectral feature matrix, wherein a row of a hyperspectral feature matrix represents a waveband and a column represents a pixel;
s2: extracting a first end member spectral matrix in the second hyperspectral feature matrix by a vertex component analysis method;
s3: calculating, through a machine learning algorithm, the distance between each pixel in the second hyperspectral feature matrix and each end member in the first end-member spectral matrix, as well as the distances between the end members, and classifying the kth end member of the first end-member spectral matrix, together with each pixel in the second hyperspectral feature matrix whose distance to the kth end member is less than a preset distance threshold, into the kth end-member class;
s4: constructing an indicator matrix C of L rows and r columns from the classification result: if the second hyperspectral feature matrix contains m pixels of the kth end-member class, the elements of the kth row of C from column j+1 to column j+m take the value 1 and all other elements take the value 0; and constructing a label constraint matrix A from the indicator matrix C, wherein L is the number of end-member classes, r is the total number of end-member pixels in the second hyperspectral feature matrix, j is the total number of end-member pixels of classes 1 to k-1 in the second hyperspectral feature matrix, and the expression of the label constraint matrix A is:
A = [ C  0
      0  I ]
wherein I is an identity matrix whose numbers of rows and columns are both (n-L), and n is the total number of pixels in the second hyperspectral feature matrix;
s5: adjusting the column positions of the second hyperspectral feature matrix: the pixels classified into end-member classes are placed first, ordered by class and, within the same class, by their original column positions in the second hyperspectral feature matrix; the pixels not classified into any end-member class follow, starting from the column after the last end-member column, in the order of their original column positions; and updating to obtain a third hyperspectral feature matrix;
s6: taking the first end-member spectral matrix as an initial value, iteratively updating the first end-member spectral matrix through a constrained non-negative matrix factorization algorithm according to the third hyperspectral feature matrix and the label constraint matrix A, taking the end-member spectral matrix obtained when the iteration converges as a second end-member spectral matrix, and taking the second end-member spectral matrix as the hyperspectral classification result of the hyperspectral image to be classified.
Preferably, step S1 specifically includes:
s11: extracting a hyperspectral feature matrix in a hyperspectral image to be classified to obtain a first hyperspectral feature matrix, wherein rows in the hyperspectral feature matrix represent wave bands, and columns represent pixels;
s12: calculating the signal-to-noise ratio of each wave band of the first hyperspectral feature matrix, deleting the wave bands lower than a preset signal-to-noise ratio threshold value and the wave bands with negative values in the first hyperspectral feature matrix, and updating to obtain a fourth hyperspectral feature matrix;
s13: and carrying out normalization processing on the fourth hyperspectral feature matrix, and updating to obtain a second hyperspectral feature matrix.
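A minimal numpy sketch of the preprocessing in steps S11-S13 may make the band screening concrete. The per-band SNR estimator (band mean over band standard deviation, in dB) and the global min-max normalization are assumptions; the patent names these operations but does not specify their exact formulas, and the function name is illustrative.

```python
import numpy as np

def preprocess(X, snr_thresh_db):
    """Steps S11-S13 sketch: drop bands (rows) that contain negative
    elements or whose estimated SNR falls below a threshold, then
    normalize. SNR estimator and min-max normalization are assumptions."""
    keep = (X >= 0).all(axis=1)                 # no negative-valued elements
    std = X.std(axis=1)
    snr_db = np.full(X.shape[0], np.inf)        # constant bands: treat as noise-free
    nz = std > 0
    snr_db[nz] = 20 * np.log10(np.abs(X[nz].mean(axis=1)) / std[nz])
    keep &= snr_db >= snr_thresh_db
    X4 = X[keep]                                # "fourth" feature matrix
    return (X4 - X4.min()) / (X4.max() - X4.min())  # "second" feature matrix
```

The band axis is rows and the pixel axis is columns, matching the matrix convention stated in step S11.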
Preferably, after step S11 and before step S12, the method further includes:
s14: calculating a spectral feature vector and spatial feature information in the first hyperspectral feature matrix through a PCA algorithm, and reversely integrating the spectral feature vector and the spatial feature information to obtain a filtered first hyperspectral feature matrix;
s12 specifically includes: and calculating the signal-to-noise ratio of each wave band of the filtered first hyperspectral feature matrix, and deleting the wave band which is lower than a preset signal-to-noise ratio threshold value and the wave band with a negative value element in the filtered first hyperspectral feature matrix to obtain a fourth hyperspectral feature matrix.
Preferably, step S3 specifically includes: calculating, through a k-nearest-neighbour algorithm, the distance between each pixel in the second hyperspectral feature matrix and each end member in the first end-member spectral matrix, as well as the distances between the end members, and listing the kth end member of the first end-member spectral matrix, together with each pixel in the second hyperspectral feature matrix whose distance to the kth end member is less than a preset distance threshold, as the kth end-member class, wherein the preset distance threshold is the minimum of α times the average distance between end members and β times the average distance between pixels and end members, α being a first preset multiple and β a second preset multiple.
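The threshold rule just described can be sketched directly; `preset_distance_threshold` is an illustrative name, and Euclidean distance is assumed since the patent does not name the metric.

```python
import numpy as np

def preset_distance_threshold(X, E, alpha, beta):
    """min(alpha * mean pairwise end-member distance,
           beta  * mean pixel-to-end-member distance).
    X: bands x pixels, E: bands x end members."""
    r = E.shape[1]
    dEE = np.linalg.norm(E[:, :, None] - E[:, None, :], axis=0)   # r x r
    mean_ee = dEE[np.triu_indices(r, k=1)].mean()                 # distinct pairs only
    dXE = np.linalg.norm(X[:, :, None] - E[:, None, :], axis=0)   # n x r
    return min(alpha * mean_ee, beta * mean_xe) if False else min(alpha * mean_ee, beta * dXE.mean())
```

Averaging over distinct end-member pairs only (the upper triangle) avoids diluting the mean with the zero self-distances.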
Preferably, step S6 specifically includes:
s61: adding one row of elements at the bottom of both the third hyperspectral feature matrix and the first end-member spectral matrix, the value of every element in the added row being the mean value of the third hyperspectral feature matrix, to obtain a fifth hyperspectral feature matrix and a third end-member spectral matrix;
s62: taking the first end-member spectral matrix as an initial value, iteratively updating the third end-member spectral matrix through a constrained non-negative matrix factorization algorithm according to the fifth hyperspectral feature matrix and the label constraint matrix A, taking the end-member spectral matrix obtained when the iteration converges as a second end-member spectral matrix, and taking the second end-member spectral matrix as the hyperspectral classification result of the hyperspectral image to be classified.
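The row augmentation of step S61 is the classic device for softly enforcing the sum-to-one constraint on abundances during unmixing: the appended constant row makes each pixel's abundance coefficients sum to approximately one. A sketch, with illustrative names:

```python
import numpy as np

def augment_sum_to_one(X3, E1):
    """Step S61: append one constant-valued row (the mean of the third
    feature matrix, as stated) to both the feature matrix and the
    end-member matrix, yielding the fifth feature matrix and the
    third end-member matrix."""
    c = X3.mean()
    X5 = np.vstack([X3, np.full((1, X3.shape[1]), c)])
    E3 = np.vstack([E1, np.full((1, E1.shape[1]), c)])
    return X5, E3
```

With the extra row, fitting a pixel column c ≈ c * sum(s) pushes the abundance vector s toward sum(s) = 1.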
The invention provides a hyperspectral classification device, which comprises:
the characteristic extraction unit is used for extracting a hyperspectral feature matrix in a hyperspectral image to be classified to obtain a first hyperspectral feature matrix, deleting a wave band with a negative value element in the first hyperspectral feature matrix, and updating to obtain a second hyperspectral feature matrix, wherein a row in the hyperspectral feature matrix represents a wave band, and a column in the hyperspectral feature matrix represents a pixel;
the analysis and extraction unit is used for extracting a first end-member spectral matrix in the second hyperspectral feature matrix by a vertex component analysis method;
the distance calculation unit is used for calculating, through a machine learning algorithm, the distance between each pixel in the second hyperspectral feature matrix and each end member in the first end-member spectral matrix, as well as the distances between the end members, and classifying the kth end member of the first end-member spectral matrix, together with each pixel in the second hyperspectral feature matrix whose distance to the kth end member is less than a preset distance threshold, into the kth end-member class;
the label matrix unit is used for constructing an indicator matrix C of L rows and r columns from the classification result, wherein if the second hyperspectral feature matrix contains m pixels of the kth end-member class, the elements of the kth row of C from column j+1 to column j+m take the value 1 and all other elements take the value 0, and for constructing a label constraint matrix A from the indicator matrix C, wherein L is the number of end-member classes, r is the total number of end-member pixels in the second hyperspectral feature matrix, j is the total number of end-member pixels of classes 1 to k-1 in the second hyperspectral feature matrix, and the expression of the label constraint matrix A is:
A = [ C  0
      0  I ]
wherein I is an identity matrix whose numbers of rows and columns are both (n-L), and n is the total number of pixels in the second hyperspectral feature matrix;
the position adjusting unit is used for adjusting the column positions of the second hyperspectral feature matrix: the pixels classified into end-member classes are placed first, ordered by class and, within the same class, by their original column positions in the second hyperspectral feature matrix; the pixels not classified into any end-member class follow, starting from the column after the last end-member column, in the order of their original column positions; and for updating to obtain a third hyperspectral feature matrix;
the iterative updating unit is used for taking the first end-member spectral matrix as an initial value, iteratively updating the first end-member spectral matrix through a constrained non-negative matrix factorization algorithm according to the third hyperspectral feature matrix and the label constraint matrix A, taking the end-member spectral matrix obtained when the iteration converges as a second end-member spectral matrix, and taking the second end-member spectral matrix as the hyperspectral classification result of the hyperspectral image to be classified.
Preferably, the feature extraction unit specifically includes:
the characteristic subunit is used for extracting a hyperspectral characteristic matrix in a hyperspectral image to be classified to obtain a first hyperspectral characteristic matrix, wherein a row in the hyperspectral characteristic matrix represents a wave band, and a column represents a pixel;
the deleting subunit is used for calculating the signal-to-noise ratio of each wave band of the first hyperspectral feature matrix, deleting the wave bands which are lower than a preset signal-to-noise ratio threshold value in the first hyperspectral feature matrix and the wave bands with negative value elements, and updating to obtain a fourth hyperspectral feature matrix;
and the normalizing subunit is used for performing normalization processing on the fourth hyperspectral feature matrix and updating to obtain a second hyperspectral feature matrix.
Preferably, the feature extraction unit further includes:
the filtering subunit is used for calculating a spectral feature vector and spatial feature information in the first hyperspectral feature matrix through a PCA algorithm, and reversely integrating the spectral feature vector and the spatial feature information to obtain a filtered first hyperspectral feature matrix;
and the deleting subunit is specifically configured to calculate a signal-to-noise ratio of each waveband of the filtered first hyperspectral feature matrix, delete a waveband lower than a preset signal-to-noise ratio threshold value in the filtered first hyperspectral feature matrix and a waveband having a negative value element, and obtain a fourth hyperspectral feature matrix.
Preferably, the distance calculation unit is specifically configured to calculate, through a k-nearest-neighbour algorithm, the distance between each pixel in the second hyperspectral feature matrix and each end member in the first end-member spectral matrix, as well as the distances between the end members, and to list the kth end member of the first end-member spectral matrix, together with each pixel in the second hyperspectral feature matrix whose distance to the kth end member is less than a preset distance threshold, as the kth end-member class, wherein the preset distance threshold is the minimum of α times the average distance between end members and β times the average distance between pixels and end members, α being a first preset multiple and β a second preset multiple.
Preferably, the iteration updating unit specifically includes:
the reconstruction subunit is used for adding one row of elements at the bottom of both the third hyperspectral feature matrix and the first end-member spectral matrix, the value of every element in the added row being the mean value of the third hyperspectral feature matrix, to obtain a fifth hyperspectral feature matrix and a third end-member spectral matrix;
the iteration subunit is used for taking the first end-member spectral matrix as an initial value, iteratively updating the third end-member spectral matrix through a constrained non-negative matrix factorization algorithm according to the fifth hyperspectral feature matrix and the label constraint matrix A, taking the end-member spectral matrix obtained when the iteration converges as a second end-member spectral matrix, and taking the second end-member spectral matrix as the hyperspectral classification result of the hyperspectral image to be classified.
According to the technical scheme, the invention has the following advantages:
the invention provides a hyperspectral classification method, which comprises the following steps: s1: extracting a hyperspectral feature matrix in a hyperspectral image to be classified to obtain a first hyperspectral feature matrix, deleting a wave band with a negative value element in the first hyperspectral feature matrix, and updating to obtain a second hyperspectral feature matrix, wherein a row in the hyperspectral feature matrix represents a wave band, and a column in the hyperspectral feature matrix represents a pixel; s2: extracting a first end member spectral matrix in the second hyperspectral feature matrix by a vertex component analysis method; s3: calculating the distance between each pixel in the second hyperspectral feature matrix and each end member in the first end member spectral matrix and the distance between each end member and each other through a machine learning algorithm, and classifying the kth end member of the first end member spectral matrix and the pixel in the second hyperspectral feature matrix, the distance between which and the kth end member is less than a preset distance threshold value, into kth end members; s4: constructing an indication matrix C of an r column in L rows according to the classification result, if the second hyperspectral feature matrix comprises m kth end members, dereferencing the elements from the j +1 th column to the j + m th column in the kth row in the indication matrix C to be 1, dereferencing the rest elements to be 0, and constructing a label constraint matrix A according to the indication matrix C; s5: adjusting the column positions of the second hyperspectral feature matrix, arranging the pixels classified into end members in sequence from the first column according to the category sequence, arranging the pixels in the same end member in sequence according to the sequence of the column positions in the second hyperspectral feature matrix, arranging the pixels not classified into end members in 
sequence from the next column of the last column of end members according to the sequence of the column positions in the second hyperspectral feature matrix, and updating to obtain a third hyperspectral feature matrix; s6: and taking the first end member spectral matrix as an initial value, iteratively updating the first end member spectral matrix through a constraint non-negative matrix decomposition algorithm according to the third high spectral feature matrix and the label constraint matrix A, taking the end member spectral matrix obtained after iteration to convergence as a second end member spectral matrix, and taking the second end member spectral matrix as a high spectral classification result of the high spectral image to be classified.
The hyperspectral classification method provided by the invention constructs the label constraint matrix from the classification information of a machine learning algorithm, which effectively improves the classification accuracy and speed for hyperspectral images. Through the idea of constrained non-negative matrix factorization, it overcomes the loss of hyperspectral image label information after non-negative matrix factorization and dimension reduction; the updates yield the end members of each class, reflecting the spatial distribution of the various ground objects and making the hyperspectral image practically useful. Meanwhile, the first end-member spectral matrix computed by the vertex component analysis algorithm is used as the initial value of the constrained non-negative matrix factorization algorithm, accelerating its operation. This solves the technical problem that existing spectral classification methods lose hyperspectral image label information after non-negative matrix factorization and dimension reduction, resulting in low classification accuracy.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without inventive exercise.
Fig. 1 is a schematic flowchart of an embodiment of a hyperspectral classification method according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating an embodiment of another hyperspectral classification method according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of an embodiment of a hyperspectral classification apparatus according to an embodiment of the present invention.
Detailed Description
The embodiment of the invention provides a hyperspectral classification method and device, solving the technical problem that current spectral classification methods lose the label information of a hyperspectral image after non-negative matrix factorization and dimension reduction, resulting in low classification accuracy.
In order to make the objects, features and advantages of the present invention more obvious and understandable, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is obvious that the embodiments described below are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, an embodiment of the present invention provides an embodiment of a hyperspectral classification method, including:
step 101: extracting a hyperspectral feature matrix in a hyperspectral image to be classified to obtain a first hyperspectral feature matrix, deleting a wave band with a negative value element in the first hyperspectral feature matrix, and updating to obtain a second hyperspectral feature matrix, wherein a row in the hyperspectral feature matrix represents a wave band, and a column in the hyperspectral feature matrix represents a pixel;
it should be noted that rows included in the hyperspectral feature matrix represent wave bands existing in the pixels, columns represent the pixels, and how to extract the hyperspectral feature matrix in the hyperspectral image is a common technical means, and details are not repeated here.
Step 102: extracting a first end member spectral matrix in the second hyperspectral feature matrix by a vertex component analysis method;
it should be noted that a first end member spectral matrix in the second hyperspectral feature matrix can be extracted by a vertex component analysis method, a row in the first end member spectral matrix represents a waveband, and a column represents an end member;
the end members in the first end member spectral matrix are also the picture elements in the second hyperspectral feature matrix.
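The full vertex component analysis algorithm involves additional SNR-dependent projection steps; the sketch below captures only its core idea, namely that the pixel with the largest projection onto a direction orthogonal to the subspace of the end members already found is taken as the next end member. It is a simplified stand-in, not the published VCA, and the function name is illustrative.

```python
import numpy as np

def vca_like_endmembers(X, r, seed=0):
    """Simplified sketch of the projection idea behind vertex component
    analysis. X: bands x pixels; r: number of end members to extract.
    Returns the end-member spectra (columns) and their pixel indices."""
    rng = np.random.default_rng(seed)
    L = X.shape[0]
    E = np.zeros((L, r))                    # end-member spectra as columns
    idx = []
    for k in range(r):
        w = rng.standard_normal(L)          # random probe direction
        if k > 0:
            Q, _ = np.linalg.qr(E[:, :k])   # basis of found end members
            w = w - Q @ (Q.T @ w)           # project onto their complement
        w /= np.linalg.norm(w)
        j = int(np.argmax(np.abs(w @ X)))   # extreme pixel = a simplex vertex
        E[:, k] = X[:, j]
        idx.append(j)
    return E, idx
```

For pure-pixel data, the maximum of a linear functional over the convex hull of the pixel cloud is attained at a vertex, which is why the extreme pixel along each probe direction is an end-member candidate.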
Step 103: calculating, through a machine learning algorithm, the distance between each pixel in the second hyperspectral feature matrix and each end member in the first end-member spectral matrix, as well as the distances between the end members, and classifying the kth end member of the first end-member spectral matrix, together with each pixel in the second hyperspectral feature matrix whose distance to the kth end member is less than a preset distance threshold, into the kth end-member class;
It should be noted that these distances measure the similarity between a pixel and an end member: if the distance between a pixel and the kth end member is smaller than the preset distance threshold, both the kth end member and that pixel are listed in the kth end-member class.
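A sketch of this distance-based grouping follows, assuming Euclidean distance and using -1 to mark pixels left unclassified (both are assumptions; the patent fixes neither the metric nor a label for unassigned pixels).

```python
import numpy as np

def assign_to_endmembers(X, E, d_thresh):
    """Assign each pixel (column of X) to its nearest end member
    (column of E) when the distance is below d_thresh; other pixels
    get label -1. X: bands x n, E: bands x r."""
    diff = X[:, :, None] - E[:, None, :]          # bands x n x r
    d = np.linalg.norm(diff, axis=0)              # n x r distance table
    nearest = np.argmin(d, axis=1)
    labels = np.where(d[np.arange(X.shape[1]), nearest] < d_thresh,
                      nearest, -1)
    return labels
```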
Step 104: constructing an indicator matrix C of L rows and r columns from the classification result: if the second hyperspectral feature matrix contains m pixels of the kth end-member class, the elements of the kth row of C from column j+1 to column j+m take the value 1 and all other elements take the value 0; and constructing a label constraint matrix A from the indicator matrix C, wherein L is the number of end-member classes, r is the total number of end-member pixels in the second hyperspectral feature matrix, j is the total number of end-member pixels of classes 1 to k-1 in the second hyperspectral feature matrix, and the expression of the label constraint matrix A is:
A = [ C  0
      0  I ]
wherein I is an identity matrix whose numbers of rows and columns are both (n-L), and n is the total number of pixels in the second hyperspectral feature matrix;
Constructing the label constraint matrix from the classification information of the machine learning algorithm effectively improves the classification accuracy and speed for the hyperspectral image.
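Constructing C and A can be sketched as below. The block sizes follow the patent's stated dimensions (C is L x r and the identity block is (n-L) x (n-L), so A has n rows and r+n-L columns); related constrained-NMF literature instead sizes the identity block by the number of unlabelled pixels, so the exact convention should be treated as an assumption, as should the function and argument names.

```python
import numpy as np

def build_label_constraint(labels_sorted, L_classes, n):
    """Indicator matrix C (L x r) and label constraint matrix
    A = blkdiag(C, I), assuming the first r columns of the reordered
    feature matrix are the labelled pixels sorted by class."""
    r = len(labels_sorted)                       # number of labelled pixels
    C = np.zeros((L_classes, r))
    C[np.asarray(labels_sorted), np.arange(r)] = 1   # class k marks its columns
    A = np.zeros((n, r + n - L_classes))
    A[:L_classes, :r] = C                        # upper-left block: C
    A[L_classes:, r:] = np.eye(n - L_classes)    # lower-right block: I
    return C, A
```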
Step 105: adjusting the column positions of the second hyperspectral feature matrix, arranging the pixels classified into end members in sequence from the first column according to the category sequence, arranging the pixels in the same end member in sequence according to the sequence of the column positions in the second hyperspectral feature matrix, arranging the pixels not classified into end members in sequence from the next column of the last column of end members according to the sequence of the column positions in the second hyperspectral feature matrix, and updating to obtain a third hyperspectral feature matrix;
it should be noted that end members of the same class are placed adjacently, in the order of their column positions in the second hyperspectral feature matrix: if the 4th and 6th pixels belong to class 1 and the 1st and 2nd pixels belong to class 2, then the 4th and 6th pixels are placed adjacently in that order, and the 1st and 2nd pixels are placed adjacently in that order;
end members of different classes are arranged in ascending class order, i.e. class 1, then class 2, then class 3: if the 4th and 6th pixels belong to class 1 and the 1st and 2nd pixels belong to class 2, the order is: 4th pixel, 6th pixel, 1st pixel, 2nd pixel;
the non-end-member pixels are then appended, in the order of their column positions in the second hyperspectral feature matrix, starting from the column after the last end-member column: if the 3rd and 5th pixels are not end members, the final order is: 4th pixel, 6th pixel, 1st pixel, 2nd pixel, 3rd pixel, 5th pixel.
Step 106: taking the first end-member spectral matrix as the initial value, iteratively updating it by a constrained non-negative matrix factorization algorithm according to the third hyperspectral feature matrix and the label constraint matrix A; the end-member spectral matrix obtained at convergence is taken as the second end-member spectral matrix, and the second end-member spectral matrix is taken as the hyperspectral classification result of the hyperspectral image to be classified.
The label constraint matrix A, used in the constrained non-negative matrix factorization algorithm, overcomes the defect that the label information of the hyperspectral image is lost after dimensionality reduction by plain non-negative matrix factorization; iteration yields a true and effective second end-member spectral matrix, used as the hyperspectral classification result of the hyperspectral image to be classified;
the high-spectrum classification method provided by the invention constructs the label constraint matrix through the classification information of the machine learning algorithm, can effectively improve the classification precision and the classification speed of the high-spectrum image, overcomes the defect that the label information of the high-spectrum image is lost after the non-negative matrix is decomposed and reduced in dimension through the idea of constraining the non-negative matrix, updates to obtain various end members, reflects the space distribution condition of various ground objects, enables the high-spectrum image to have application value, and simultaneously takes the first end member spectral matrix obtained by the calculation of the vertex component analysis algorithm as the initial value of the constraint non-negative matrix algorithm, enables the operation speed of the algorithm to be accelerated, and solves the technical problem that the prior spectrum classification method can cause the label information of the high-spectrum image to be lost after the non-negative matrix is decomposed and reduced in dimension, thereby causing the low classification precision.
The foregoing is an embodiment of a hyperspectral classification method provided by an embodiment of the present invention, and the following is another embodiment of a hyperspectral classification method provided by an embodiment of the present invention.
Referring to fig. 2, another embodiment of the present invention provides a hyperspectral classification method, including:
step 201: extracting a hyperspectral feature matrix in a hyperspectral image to be classified to obtain a first hyperspectral feature matrix, wherein rows in the hyperspectral feature matrix represent wave bands, and columns represent pixels;
it should be noted that rows included in the hyperspectral feature matrix represent wave bands existing in the pixels, columns represent the pixels, and how to extract the hyperspectral feature matrix in the hyperspectral image is a common technical means, and is not described herein again;
a first hyperspectral feature matrix is, for example, X_a = [x_1^T, ..., x_m^T]^T ∈ R^(m×n), a matrix with m rows and n columns, where m is the number of bands of the hyperspectral image, n is the total number of pixels in the hyperspectral image data, R denotes the real number field, and T denotes matrix transposition.
Step 202: calculating a spectral feature vector and spatial feature information in the first hyperspectral feature matrix through a PCA algorithm, and reversely integrating the spectral feature vector and the spatial feature information to obtain a filtered first hyperspectral feature matrix;
it should be noted that the eigenvalues and eigenvectors of the first hyperspectral feature matrix can be calculated by the PCA algorithm; the eigenvectors are the spectral feature vectors, the spatial feature information can be obtained by neighbourhood extraction on the spectral feature vectors, and re-integrating the spectral feature vectors with the spatial feature information gives the filtered first hyperspectral feature matrix;
filtering via the PCA algorithm removes noise from the first hyperspectral feature matrix and improves the accuracy of the final classification; if the data contain few impurities, whether or not filtering is performed has little influence on the result.
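The PCA filtering step can be sketched as a low-rank reconstruction: project the pixel spectra onto the leading spectral eigenvectors and map back. This is one plausible reading of the step; the patent's exact spatial "neighbourhood extraction / reverse integration" procedure is not specified, and the function name is illustrative.

```python
import numpy as np

def pca_denoise(X, k):
    """Project pixels onto the top-k principal components and map back.

    X : (m, n) array, rows = bands, columns = pixels.
    Keeping only the top-k spectral eigenvectors suppresses noise that
    lies in the discarded low-variance directions.
    """
    mu = X.mean(axis=1, keepdims=True)
    Xc = X - mu
    # eigenvectors of the band covariance matrix = spectral feature vectors
    cov = Xc @ Xc.T / X.shape[1]
    w, V = np.linalg.eigh(cov)      # eigenvalues in ascending order
    Vk = V[:, -k:]                  # top-k spectral eigenvectors
    return Vk @ (Vk.T @ Xc) + mu    # low-rank reconstruction
```

For data that are already (nearly) low-rank, the reconstruction returns the input almost unchanged, which matches the remark that filtering has little effect on clean data.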
Step 203: calculating the signal-to-noise ratio of each band of the filtered first hyperspectral feature matrix, and deleting the bands whose signal-to-noise ratio is below a preset threshold as well as the bands containing negative-valued elements, to obtain a fourth hyperspectral feature matrix;
it should be noted that, since non-negative matrix factorization requires that the matrix contain no negative elements, the bands with negative-valued elements in the first hyperspectral feature matrix are deleted;
the signal-to-noise ratio (SNR, also written S/N) of an image is the ratio of signal power to noise power; the higher the SNR, the less the noise. Deleting the bands of the first hyperspectral feature matrix whose SNR is below the preset threshold therefore improves the precision of the final hyperspectral classification.
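The band-deletion step can be sketched as follows. The patent does not fix a particular SNR estimator, so the per-band estimate mean²/variance used here is an assumption, as is the function name.

```python
import numpy as np

def remove_bad_bands(X, snr_threshold):
    """Drop bands (rows) whose estimated SNR falls below the threshold
    or which contain negative elements (required for NMF).

    X : (m, n) array, rows = bands, columns = pixels.
    SNR is estimated per band as mean^2 / variance, a simple power-ratio
    proxy; constant (noise-free) bands get SNR = infinity.
    """
    mean = X.mean(axis=1)
    var = X.var(axis=1)
    snr = np.divide(mean**2, var,
                    out=np.full_like(mean, np.inf), where=var > 0)
    keep = (snr >= snr_threshold) & (X >= 0).all(axis=1)
    return X[keep], np.flatnonzero(keep)
```

The returned index array records which original bands survive, so later steps can map results back to physical wavelengths.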
Step 204: performing normalization processing on the fourth hyperspectral feature matrix, and updating to obtain a second hyperspectral feature matrix;
it should be noted that normalizing the fourth hyperspectral feature matrix reduces the complexity of subsequent calculation; the normalization may take the form X = X ./ max(X), i.e. every element of the fourth hyperspectral feature matrix is divided by its maximum element.
Step 205: extracting a first end member spectral matrix in the second hyperspectral feature matrix by a vertex component analysis method;
it should be noted that vertex component analysis (VCA) is a fast algorithm for unsupervised extraction of end members from hyperspectral images; it applies two simple geometric facts: first, the end members are the vertices of a simplex; second, an affine transformation of a simplex is still a simplex;
the first end-member spectral matrix in the second hyperspectral characteristic matrix can be rapidly extracted through a vertex component analysis method, and the model of the VCA is as follows:
X_b = MS + p (1)
where X_b is the second hyperspectral feature matrix, M = [m_1, m_2, ..., m_r] is the first end-member spectral matrix (m_i denotes the ith end-member spectral signal; r is the number of end members, i.e. the number of classes, and r is prior information), S = [s_1, s_2, ..., s_p]^T is the abundance vector corresponding to the first end-member spectral matrix, and p is noise;
according to the information in the first end-member spectral matrix, the r end members of the hyperspectral image correspond to r classes; vertex component analysis takes extreme-valued pixels of the hyperspectral image as end members, i.e. each extracted end member is an actual pixel of the hyperspectral image.
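The core VCA idea can be sketched as follows: repeatedly project the pixels onto a random direction orthogonal to the subspace spanned by the end members found so far, and pick the pixel with the most extreme projection. This is a simplified sketch of that idea, not the full published VCA algorithm (no SNR-dependent dimensionality reduction), and the function name is illustrative.

```python
import numpy as np

def vca_like(X, r, seed=0):
    """Pick r extreme pixels of X (columns) as end-member candidates.

    Each round draws a random direction, removes its component in the
    subspace of already-chosen end members, and selects the pixel with
    the largest absolute projection onto the remaining direction.
    """
    rng = np.random.default_rng(seed)
    m, n = X.shape
    idx = []
    E = np.zeros((m, 0))
    for _ in range(r):
        f = rng.standard_normal(m)
        if E.shape[1]:
            # make f orthogonal to the current end-member subspace
            f -= E @ np.linalg.lstsq(E, f, rcond=None)[0]
        j = int(np.argmax(np.abs(f @ X)))   # most extreme pixel
        idx.append(j)
        E = X[:, idx]
    return X[:, idx], idx
```

Because mixtures lie inside the simplex of their end members, their projections can never exceed those of the vertices, so pure pixels are selected first.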
Step 206: calculating, by a k-nearest-neighbour algorithm, the distance between each pixel in the second hyperspectral feature matrix and each end member of the first end-member spectral matrix, and the distances between the end members themselves; and classing the kth end member of the first end-member spectral matrix, together with every pixel of the second hyperspectral feature matrix whose distance to the kth end member is smaller than a preset distance threshold, as kth-class end members, wherein the preset distance threshold is the minimum of α times the average distance between end members and β times the average distance between pixels and end members, α being a first preset multiple and β a second preset multiple;
it should be noted that the k-nearest-neighbour algorithm (KNN) is a theoretically mature method and one of the simplest machine learning algorithms; its idea is: if most of the k most similar samples of a sample in feature space (i.e. its nearest neighbours) belong to a certain class, then the sample also belongs to that class;
the distance between each pixel and each end member and the distance between each end member can be rapidly calculated through a k-nearest neighbor algorithm;
if a pixel is to be assigned to the kth-class end members, its distance to the kth end member must be smaller than the preset distance threshold, which is the minimum of the following three quantities: 1. the distances between end members, i.e. the distance between any two end members; 2. α times the average distance between end members, where α is a first preset multiple whose value is chosen according to actual requirements; 3. β times the average distance between pixels and end members, where β is a second preset multiple whose value is chosen according to actual requirements;
the classification according to the calculation result of the k-nearest neighbor algorithm can improve the classification precision.
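The distance-based labelling of steps 206–207 can be sketched as follows. Plain Euclidean distances are assumed, and the adaptive threshold follows the three-quantity minimum described above; the function name is illustrative.

```python
import numpy as np

def label_pixels(X, E, alpha, beta):
    """Assign each pixel column of X to its nearest end-member column of E
    when the distance is below the adaptive threshold: the minimum of
    (i) the pairwise end-member distances, (ii) alpha times their mean,
    (iii) beta times the mean pixel-to-end-member distance.
    Pixels left unlabelled get -1.
    """
    # pixel-to-end-member distances, shape (n_pixels, r)
    d_pe = np.linalg.norm(X[:, :, None] - E[:, None, :], axis=0)
    # pairwise end-member distances (upper triangle, excluding zeros)
    d_ee = np.linalg.norm(E[:, :, None] - E[:, None, :], axis=0)
    iu = np.triu_indices(E.shape[1], k=1)
    pairwise = d_ee[iu]
    thresh = min(pairwise.min(), alpha * pairwise.mean(),
                 beta * d_pe.mean())
    nearest = d_pe.argmin(axis=1)
    return np.where(d_pe.min(axis=1) < thresh, nearest, -1)
```

Pixels far from every end member stay unlabelled, which is exactly the pool of "non-end-member" pixels reordered to the back in the following step.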
Step 207: constructing an indicator matrix C of L rows and r columns according to the classification result: if the second hyperspectral feature matrix contains m kth-class end members, the elements of the (j+1)th to (j+m)th columns of the kth row of C take the value 1 and the remaining elements take the value 0; and constructing a label constraint matrix A from the indicator matrix C, wherein L is the number of end-member classes, r is the total number of end members of all classes in the second hyperspectral feature matrix, j is the total number of 1st-class to (k-1)th-class end members in the second hyperspectral feature matrix, and the expression of the label constraint matrix A is as follows:
A = [ C  0
      0  I ]
where I is an identity matrix with (n-L) rows and columns, and n is the total number of pixels in the second hyperspectral feature matrix;
If the classified second hyperspectral feature matrix contains 2 class-1 end members, 2 class-2 end members and 1 class-3 end member, and no other pixels are classified as end members, then the label constraint matrix A is:
A = [ 1 1 0 0 0  0
      0 0 1 1 0  0
      0 0 0 0 1  0
      0 0 0 0 0  I ]
where the first five columns correspond to the five labelled end-member pixels, the upper-left 3×5 block is the indicator matrix C, and I is the identity block for the remaining pixels.
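The construction of C and A can be sketched as follows, following the block structure stated in the text literally (identity block of size n−L, so A has r+n−L columns); the function name and the assumption that labelled pixels occupy the first sum of class-count columns are illustrative.

```python
import numpy as np

def label_constraint_matrix(class_counts, n):
    """Build the indicator matrix C (one row per class, one column per
    labelled pixel) and the block label-constraint matrix
    A = [[C, 0], [0, I]].

    class_counts[k] is the number of pixels labelled as class k;
    n is the total number of pixels.
    """
    L, r = len(class_counts), int(sum(class_counts))
    C = np.zeros((L, r))
    j = 0
    for k, m in enumerate(class_counts):
        C[k, j:j + m] = 1.0   # ones for the m class-k end members
        j += m
    # block matrix [[C, 0], [0, I]] with I of size (n - L), as in the text
    A = np.zeros((n, r + n - L))
    A[:L, :r] = C
    A[L:, r:] = np.eye(n - L)
    return C, A
```

Running this with the example above (class counts 2, 2, 1) reproduces the 3×5 indicator block shown there.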
Step 208: adjusting the column positions of the second hyperspectral feature matrix: the pixels classified as end members are arranged first, starting from the first column, in class order; pixels within the same end-member class are arranged in the order of their column positions in the second hyperspectral feature matrix; pixels not classified as end members are arranged, in the order of their column positions in the second hyperspectral feature matrix, starting from the column after the last end-member column; updating gives the third hyperspectral feature matrix;
it should be noted that end members of the same class are placed adjacently, in the order of their original column positions in the second hyperspectral feature matrix: if the 4th and 6th pixels belong to class 1 and the 1st and 2nd pixels belong to class 2, then the 4th and 6th pixels are placed adjacently in that order, and the 1st and 2nd pixels are placed adjacently in that order;
end members of different classes are arranged in ascending class order, i.e. class 1, then class 2, then class 3: if the 4th and 6th pixels belong to class 1 and the 1st and 2nd pixels belong to class 2, the order is: 4th pixel, 6th pixel, 1st pixel, 2nd pixel;
the non-end-member pixels are then appended, in the order of their column positions in the second hyperspectral feature matrix, starting from the column after the last end-member column: if the 3rd and 5th pixels are not end members, the final order is: 4th pixel, 6th pixel, 1st pixel, 2nd pixel, 3rd pixel, 5th pixel.
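The column reordering described in step 208 can be sketched as a stable permutation of pixel columns; the function name is illustrative, and the label -1 is assumed to mark pixels not classified as end members.

```python
import numpy as np

def reorder_columns(X, labels):
    """Reorder pixel columns: labelled pixels first, grouped by class in
    ascending class order (original column order preserved within each
    class), then unlabelled pixels (label -1) in original order.

    Returns the reordered matrix and the permutation applied.
    """
    labels = np.asarray(labels)
    order = []
    for k in sorted(set(labels[labels >= 0])):
        order.extend(np.flatnonzero(labels == k))   # class k, original order
    order.extend(np.flatnonzero(labels < 0))        # non-end-member pixels
    order = np.array(order)
    return X[:, order], order
```

With the example labels of the text (4th and 6th pixels class 1, 1st and 2nd pixels class 2, 3rd and 5th unlabelled) the permutation is exactly 4th, 6th, 1st, 2nd, 3rd, 5th. Keeping the permutation allows the final classification map to be projected back to the original pixel order.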
Step 209: appending a row of elements at the bottom of the third hyperspectral feature matrix and of the first end-member spectral matrix, the value of every element in the appended row being the mean value of the third hyperspectral feature matrix, to obtain a fifth hyperspectral feature matrix and a third end-member spectral matrix;
it should be noted that a constrained non-negative matrix factorization algorithm is used to decompose the mixed pixels of the hyperspectral image into a second end-member spectral matrix and a corresponding abundance matrix; in order to constrain each column of the abundance matrix to sum to one, the structures of the third hyperspectral feature matrix and of the first end-member spectral matrix must be changed: a row of elements is appended at the bottom of each, the value of every element in the appended row being the mean value of the third hyperspectral feature matrix, giving the fifth hyperspectral feature matrix and the third end-member spectral matrix:
X_E = [ X_C
        d·1ᵀ ],   U_3 = [ U_1
                          d·1ᵀ ]   (4)
where X_C is the third hyperspectral feature matrix, X_E is the fifth hyperspectral feature matrix, U_1 is the first end-member spectral matrix, U_3 is the third end-member spectral matrix, 1ᵀ is a row of ones, and d is the mean value of the third hyperspectral feature matrix.
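The augmentation of step 209 can be sketched directly; the function name is illustrative.

```python
import numpy as np

def append_mean_row(X_C, U_1):
    """Append one extra row, filled with the mean d of the feature
    matrix, to both the feature matrix and the end-member matrix — the
    device used here to push the abundance columns toward sum-to-one.
    """
    d = X_C.mean()
    X_E = np.vstack([X_C, np.full((1, X_C.shape[1]), d)])
    U_3 = np.vstack([U_1, np.full((1, U_1.shape[1]), d)])
    return X_E, U_3, d
```

Intuitively, the appended constant row of X_E can only be reproduced by the constant row of U_3 when the abundance weights of each pixel sum to (approximately) one.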
Step 210: taking the first end-member spectral matrix as the initial value, iteratively updating the third end-member spectral matrix by a constrained non-negative matrix factorization algorithm according to the fifth hyperspectral feature matrix and the label constraint matrix A; the end-member spectral matrix obtained at convergence is taken as the second end-member spectral matrix, and the second end-member spectral matrix is taken as the hyperspectral classification result of the hyperspectral image to be classified.
It should be noted that the model of the constrained non-negative matrix factorization algorithm is:
X_E ≈ UV, V = ZA (5)
where X_E is the fifth hyperspectral feature matrix, Z is an introduced auxiliary matrix whose initial value is a randomly generated zero-mean Gaussian random matrix, and V is the abundance matrix, with r rows and n columns, corresponding to the end-member spectral matrix;
the update formula of the constraint non-negative matrix factorization algorithm is as follows:
U_ij ← U_ij · (X_E Aᵀ Zᵀ)_ij / (U Z A Aᵀ Zᵀ)_ij   (6)
Z_ij ← Z_ij · (Uᵀ X_E Aᵀ)_ij / (Uᵀ U Z A Aᵀ)_ij   (7)
where U denotes the end-member spectral matrix, U_ij the element in the ith row and jth column of the end-member spectral matrix, and Z_ij the element in the ith row and jth column of the auxiliary matrix;
with V = ZA, repeated iteration drives UV toward X_E; when the change in U falls below a first preset change threshold, or the difference between UV and X_E falls below a preset threshold, the iteration is considered converged, and the end-member spectral matrix U at that point is the second end-member spectral matrix U_2; the second end-member spectral matrix together with the corresponding abundance matrix is taken as the hyperspectral classification result of the hyperspectral image to be classified.
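The constrained-factorization loop can be sketched as follows. Several assumptions are made: A is taken as a fixed non-negative constraint matrix with n columns (square n×n when there is one labelled pixel per class, so that V = ZA has the abundance shape r×n); the update rules are the standard multiplicative rules for minimizing ‖X − UZA‖² with A fixed, consistent with model (5) but not guaranteed identical to the patent's formulas; Z is initialised from |N(0,1)| draws so the multiplicative rules keep everything non-negative, whereas the text initialises Z with a zero-mean Gaussian; and a fixed iteration count stands in for the convergence test. The function name is illustrative.

```python
import numpy as np

def constrained_nmf(X, U0, A, iters=300, eps=1e-9):
    """Multiplicative updates for X ≈ U Z A with A fixed (the label
    constraint), minimising the Frobenius reconstruction error.

    X  : (m, n) feature matrix, U0 : (m, r) VCA-derived initial
    end-member matrix, A : (q, n) label constraint matrix.
    Returns the updated end-member matrix U and the abundance V = Z A.
    """
    rng = np.random.default_rng(0)
    U = U0.copy()
    r, q = U.shape[1], A.shape[0]
    Z = np.abs(rng.standard_normal((r, q)))   # non-negative init
    for _ in range(iters):
        ZA = Z @ A
        U *= (X @ ZA.T) / (U @ ZA @ ZA.T + eps)
        Z *= (U.T @ X @ A.T) / (U.T @ U @ Z @ A @ A.T + eps)
    return U, Z @ A
```

With A equal to the identity this reduces to plain NMF, which gives a quick sanity check of the update rules.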
The hyperspectral classification method provided by the invention constructs the label constraint matrix from the classification information of a machine learning algorithm, combining a small number of labelled samples with a large number of unlabelled samples to improve the generalization ability of the classification; this can effectively improve the classification precision and speed for hyperspectral images;
meanwhile, the first end-member spectral matrix computed by the vertex component analysis algorithm is used as the initial value of the constrained non-negative matrix algorithm, which accelerates the algorithm;
the abundance matrix is constrained so that each column sums to one, matching the physical fact that the distribution percentages of the various ground objects in a hyperspectral image sum to one;
the idea of constrained non-negative matrix factorization overcomes the defect that label information of the hyperspectral image is lost after non-negative matrix factorization dimensionality reduction; the updates yield the end members of each class, reflecting the spatial distribution of the various ground objects and giving the hyperspectral image practical application value. In a hyperspectral image, pixels within a certain range of an end member are very likely to share its label, so combining this label information with non-negative matrix factorization achieves a better classification effect;
in summary, the hyperspectral classification method of this embodiment solves the technical problem that existing spectral classification methods lose label information after dimensionality reduction of a hyperspectral image by non-negative matrix factorization, resulting in low classification precision.
The foregoing is another embodiment of the hyperspectral classification method provided by the embodiments of the present invention; the following is an embodiment of a hyperspectral classification apparatus provided by the embodiments of the present invention.
Referring to fig. 3, another embodiment of the present invention provides a hyperspectral classification apparatus, including:
the feature extraction unit 301 is configured to extract a hyperspectral feature matrix in a hyperspectral image to be classified to obtain a first hyperspectral feature matrix, delete a waveband with a negative value element in the first hyperspectral feature matrix, and update the waveband to obtain a second hyperspectral feature matrix, where a row in the hyperspectral feature matrix represents a waveband and a column in the hyperspectral feature matrix represents a pixel;
an analysis extraction unit 302, configured to extract a first end-member spectral matrix in the second hyperspectral feature matrix by using a vertex component analysis method;
a distance calculating unit 303, configured to calculate, through a machine learning algorithm, distances between each pixel in the second hyperspectral feature matrix and each end member in the first end member spectral matrix and distances between each end member and each other, and classify a kth end member of the first end member spectral matrix and a pixel in the second hyperspectral feature matrix, whose distance from the kth end member is smaller than a preset distance threshold, as a kth class end member;
a label matrix unit 304, configured to construct an indicator matrix C of L rows and r columns according to the classification result: if the second hyperspectral feature matrix contains m kth-class end members, the elements of the (j+1)th to (j+m)th columns of the kth row of C take the value 1 and the remaining elements take the value 0; and to construct a label constraint matrix A from the indicator matrix C, wherein L is the number of end-member classes, r is the total number of end members of all classes in the second hyperspectral feature matrix, j is the total number of 1st-class to (k-1)th-class end members in the second hyperspectral feature matrix, and the expression of the label constraint matrix A is:
A = [ C  0
      0  I ]
where I is an identity matrix with (n-L) rows and columns, and n is the total number of pixels in the second hyperspectral feature matrix;
a position adjusting unit 305, configured to perform column position adjustment on the second hyperspectral feature matrix, sequentially arrange the pixels classified as end members from the first column according to the category sequence, sequentially arrange the pixels in the same category end member according to the sequence of the column positions in the second hyperspectral feature matrix, sequentially arrange the pixels not classified as end members from the next column of the last column end member according to the sequence of the column positions in the second hyperspectral feature matrix, and update the third hyperspectral feature matrix;
and the iteration updating unit 306, configured to take the first end-member spectral matrix as the initial value, iteratively update it by a constrained non-negative matrix factorization algorithm according to the third hyperspectral feature matrix and the label constraint matrix A, take the end-member spectral matrix obtained at convergence as the second end-member spectral matrix, and take the second end-member spectral matrix as the hyperspectral classification result of the hyperspectral image to be classified.
Further, the feature extraction unit 301 specifically includes:
the characteristic subunit 3011 is configured to extract a hyperspectral characteristic matrix in a hyperspectral image to be classified, to obtain a first hyperspectral characteristic matrix, where a row in the hyperspectral characteristic matrix represents a waveband and a column represents a pixel;
a deleting subunit 3012, configured to calculate a signal-to-noise ratio of each waveband of the first hyperspectral feature matrix, delete a waveband in the first hyperspectral feature matrix that is lower than a preset signal-to-noise ratio threshold and a waveband in which a negative value element exists, and update to obtain a fourth hyperspectral feature matrix;
and the normalizing subunit 3013 is configured to perform normalization processing on the fourth hyperspectral feature matrix, and update the fourth hyperspectral feature matrix to obtain a second hyperspectral feature matrix.
Further, the feature extraction unit 301 further includes:
the filtering subunit 3014 is configured to calculate a spectral feature vector and spatial feature information in the first hyperspectral feature matrix through a PCA algorithm, and reversely integrate the spectral feature vector and the spatial feature information to obtain a filtered first hyperspectral feature matrix;
the deleting subunit 3012 is specifically configured to calculate a signal-to-noise ratio of each waveband of the filtered first hyperspectral feature matrix, delete a waveband lower than a preset signal-to-noise ratio threshold in the filtered first hyperspectral feature matrix and a waveband having a negative value element, and obtain a fourth hyperspectral feature matrix.
Further, the distance calculating unit 303 is specifically configured to calculate, by a k-nearest-neighbour algorithm, the distance between each pixel in the second hyperspectral feature matrix and each end member of the first end-member spectral matrix and the distances between the end members themselves, and to class the kth end member of the first end-member spectral matrix, together with every pixel of the second hyperspectral feature matrix whose distance to the kth end member is smaller than a preset distance threshold, as kth-class end members, wherein the preset distance threshold is the minimum of α times the average distance between end members and β times the average distance between pixels and end members, α being a first preset multiple and β a second preset multiple.
Further, the iteration updating unit 306 specifically includes:
the reconstruction subunit 3061, configured to append a row of elements at the bottom of the third hyperspectral feature matrix and of the first end-member spectral matrix, the value of each element in the appended row being the mean value of the third hyperspectral feature matrix, to obtain a fifth hyperspectral feature matrix and a third end-member spectral matrix;
the iteration subunit 3062, configured to take the first end-member spectral matrix as the initial value, iteratively update the third end-member spectral matrix by a constrained non-negative matrix factorization algorithm according to the fifth hyperspectral feature matrix and the label constraint matrix A, take the end-member spectral matrix obtained at convergence as the second end-member spectral matrix, and take the second end-member spectral matrix as the hyperspectral classification result of the hyperspectral image to be classified.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (8)

1. A hyperspectral classification method is characterized by comprising the following steps:
s1: extracting a hyperspectral feature matrix in a hyperspectral image to be classified to obtain a first hyperspectral feature matrix, deleting a wave band with a negative value element in the first hyperspectral feature matrix, and updating to obtain a second hyperspectral feature matrix, wherein a row in the hyperspectral feature matrix represents a wave band, and a column in the hyperspectral feature matrix represents a pixel;
s2: extracting a first end member spectral matrix in the second hyperspectral feature matrix by a vertex component analysis method;
s3: calculating, by a k-nearest-neighbour algorithm, the distance between each pixel in the second hyperspectral feature matrix and each end member of the first end-member spectral matrix, and the distances between the end members themselves; and classing the kth end member of the first end-member spectral matrix, together with every pixel of the second hyperspectral feature matrix whose distance to the kth end member is smaller than a preset distance threshold, as kth-class end members, wherein the preset distance threshold is the minimum of α times the average distance between end members and β times the average distance between pixels and end members, α being a first preset multiple and β a second preset multiple;
s4: constructing an indicator matrix C of L rows and r columns according to the classification result: if the second hyperspectral feature matrix contains m kth-class end members, the elements of the (j+1)th to (j+m)th columns of the kth row of C take the value 1 and the remaining elements take the value 0; and constructing a label constraint matrix A from the indicator matrix C, wherein L is the number of end-member classes, r is the total number of end members of all classes in the second hyperspectral feature matrix, j is the total number of 1st-class to (k-1)th-class end members in the second hyperspectral feature matrix, and the expression of the label constraint matrix A is as follows:
A = [ C 0 ; 0 I ], a block matrix with the indication matrix C in the upper-left block, the identity matrix I in the lower-right block, and zeros in the off-diagonal blocks,
wherein I is an identity matrix with (n-L) rows and (n-L) columns, and n is the total number of pixels in the second hyperspectral feature matrix;
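The indication matrix C and the label constraint matrix A of step S4 can be sketched as follows; the block layout [[C, 0], [0, I]] follows the claim's description of C and the identity matrix I, and `build_label_constraint` is an illustrative name, not the patent's:

```python
import numpy as np

def build_label_constraint(labels, n, L):
    """Build the indication matrix C (L x r) of step S4 and the block matrix
    A = [[C, 0], [0, I]].  `labels` holds the class index of every labeled
    pixel, already ordered class-by-class as step S5 requires."""
    r = len(labels)                        # total number of labeled pixels
    C = np.zeros((L, r))
    C[labels, np.arange(r)] = 1.0          # row k marks the class-k pixels
    # Assemble A: C in the top-left block, an (n-L) identity in the
    # bottom-right block, zeros elsewhere (sizes follow the claim text).
    A = np.block([
        [C,                     np.zeros((L, n - L))],
        [np.zeros((n - L, r)),  np.eye(n - L)],
    ])
    return C, A
```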
s5: adjusting the column positions of the second hyperspectral feature matrix: arranging the pixels classified as end members from the first column onward in class order, with pixels of the same class kept in their original column order in the second hyperspectral feature matrix; arranging the pixels not classified as end members, in their original column order, starting from the column after the last end member column; and updating to obtain a third hyperspectral feature matrix;
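The column reordering of step S5 might be sketched as below, assuming labeled pixels carry a class index and unlabeled pixels the marker -1 (an illustrative convention, not fixed by the claim):

```python
import numpy as np

def reorder_columns(X, labels):
    """Step S5 sketch: move labeled pixels to the front, class by class,
    preserving the original column order within each class and among the
    unlabeled pixels (label -1)."""
    order = np.concatenate(
        [np.flatnonzero(labels == k) for k in range(labels.max() + 1)]
        + [np.flatnonzero(labels == -1)]
    )
    return X[:, order]
```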
s6: and taking the first end member spectral matrix as an initial value, iteratively updating the first end member spectral matrix through a constrained nonnegative matrix factorization algorithm according to the third hyperspectral feature matrix and the label constraint matrix A, taking the end member spectral matrix obtained after iteration to convergence as a second end member spectral matrix, and taking the second end member spectral matrix as a hyperspectral classification result of the hyperspectral image to be classified.
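The claim does not spell out the update rules of step S6. A sketch under the assumption of the standard constrained-NMF multiplicative updates (Liu et al.-style, with abundances modeled as S = (AZ)^T so same-class labeled pixels share abundance rows) could look like this; it is not the patent's exact algorithm:

```python
import numpy as np

def constrained_nmf(Y, M0, A, n_iter=300, eps=1e-9):
    """Label-constrained NMF sketch for step S6.
    Y: bands x n data, M0: bands x L initial end member spectra (from VCA),
    A: n x (r + n - L) label constraint matrix."""
    M = M0.copy()
    L = M.shape[1]
    Z = np.random.default_rng(0).random((A.shape[1], L))  # auxiliary factor
    for _ in range(n_iter):
        AZ = A @ Z                                        # n x L abundances (transposed)
        # Standard multiplicative updates for ||Y - M (A Z)^T||_F^2.
        M *= (Y @ AZ) / (M @ (AZ.T @ AZ) + eps)
        Z *= (A.T @ Y.T @ M) / (A.T @ A @ Z @ (M.T @ M) + eps)
    S = (A @ Z).T                                         # L x n abundance matrix
    return M, S
```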
2. The hyperspectral classification method according to claim 1, wherein the step S1 specifically comprises:
s11: extracting a hyperspectral feature matrix in a hyperspectral image to be classified to obtain a first hyperspectral feature matrix, wherein rows in the hyperspectral feature matrix represent wave bands, and columns represent pixels;
s12: calculating the signal-to-noise ratio of each wave band of the first hyperspectral feature matrix, deleting the wave bands whose signal-to-noise ratio is lower than a preset threshold and the wave bands containing negative-valued elements, and updating to obtain a fourth hyperspectral feature matrix;
s13: and carrying out normalization processing on the fourth hyperspectral feature matrix, and updating to obtain a second hyperspectral feature matrix.
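Steps S12 and S13 might be sketched as follows; the per-band SNR estimate (mean/std) and global max-normalization are assumptions, since the claim fixes neither formula:

```python
import numpy as np

def preprocess_bands(X, snr_threshold):
    """Sketch of steps S12-S13: drop bands whose SNR falls below the
    threshold or which contain negative values, then normalize."""
    snr = X.mean(axis=1) / (X.std(axis=1) + 1e-12)   # crude per-band SNR
    keep = (snr >= snr_threshold) & (X.min(axis=1) >= 0)
    X4 = X[keep]                       # fourth hyperspectral feature matrix
    return X4 / X4.max()               # second hyperspectral feature matrix
```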
3. The hyperspectral classification method according to claim 2, wherein after step S11, before step S12, the method further comprises:
s14: calculating a spectral feature vector and spatial feature information in the first hyperspectral feature matrix through a PCA algorithm, and reversely integrating the spectral feature vector and the spatial feature information to obtain a filtered first hyperspectral feature matrix;
s12 specifically includes: calculating the signal-to-noise ratio of each wave band of the filtered first hyperspectral feature matrix, and deleting the wave bands whose signal-to-noise ratio is lower than a preset threshold and the wave bands containing negative-valued elements in the filtered first hyperspectral feature matrix, to obtain a fourth hyperspectral feature matrix.
4. The hyperspectral classification method according to claim 1, wherein the step S6 specifically comprises:
s61: adding a row of elements at the bottom of the third hyperspectral feature matrix and of the first end member spectral matrix, the value of each element in the added row being the mean value of the third hyperspectral feature matrix, to obtain a fifth hyperspectral feature matrix and a third end member spectral matrix;
s62: and taking the first end member spectral matrix as an initial value, iteratively updating the third end member spectral matrix through a constrained nonnegative matrix factorization algorithm according to the fifth hyperspectral feature matrix and the label constraint matrix A, taking the end member spectral matrix obtained after iteration to convergence as a second end member spectral matrix, and taking the second end member spectral matrix as a hyperspectral classification result of the hyperspectral image to be classified.
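The row augmentation of step S61 can be sketched as below; appending a constant extra row of this kind is a common way to softly encourage the abundance sum-to-one constraint in NMF unmixing (the function name is illustrative):

```python
import numpy as np

def augment_with_mean_row(Y3, M1):
    """Step S61 sketch: append one extra row to both the data matrix and the
    end member matrix, every entry equal to the mean of the data matrix."""
    mu = Y3.mean()
    Y5 = np.vstack([Y3, np.full((1, Y3.shape[1]), mu)])  # fifth feature matrix
    M3 = np.vstack([M1, np.full((1, M1.shape[1]), mu)])  # third end member matrix
    return Y5, M3
```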
5. A hyperspectral classification apparatus, comprising:
the characteristic extraction unit is used for extracting a hyperspectral feature matrix in a hyperspectral image to be classified to obtain a first hyperspectral feature matrix, deleting a wave band with a negative value element in the first hyperspectral feature matrix, and updating to obtain a second hyperspectral feature matrix, wherein a row in the hyperspectral feature matrix represents a wave band, and a column in the hyperspectral feature matrix represents a pixel;
the analysis and extraction unit is used for extracting a first end-member spectral matrix in the second hyperspectral feature matrix by a vertex component analysis method;
the distance calculation unit is used for calculating, through a k-nearest neighbor algorithm, the distance between each pixel in the second hyperspectral feature matrix and each end member in the first end member spectral matrix, and the distance between each pair of end members, and listing the kth end member of the first end member spectral matrix, together with the pixels in the second hyperspectral feature matrix whose distance to the kth end member is less than a preset distance threshold, as kth class end members, wherein the preset distance threshold is the minimum of alpha times the average distance between end members and beta times the average distance between pixels and end members, alpha being a first preset multiple and beta being a second preset multiple;
the label matrix unit is used for constructing an indication matrix C of L rows and r columns according to the classification result, wherein if the second hyperspectral feature matrix comprises m end members of the kth class, the elements from the (j+1)th column to the (j+m)th column in the kth row of the indication matrix C take the value 1 and all other elements take the value 0, and constructing a label constraint matrix A according to the indication matrix C, wherein L is the number of end member classes, r is the total number of end members in the second hyperspectral feature matrix, j is the total number of end members of the 1st class to the (k-1)th class in the second hyperspectral feature matrix, and the expression of the label constraint matrix A is as follows:
A = [ C 0 ; 0 I ], a block matrix with the indication matrix C in the upper-left block, the identity matrix I in the lower-right block, and zeros in the off-diagonal blocks,
wherein I is an identity matrix with (n-L) rows and (n-L) columns, and n is the total number of pixels in the second hyperspectral feature matrix;
the position adjusting unit is used for adjusting the column positions of the second hyperspectral feature matrix: arranging the pixels classified as end members from the first column onward in class order, with pixels of the same class kept in their original column order in the second hyperspectral feature matrix; arranging the pixels not classified as end members, in their original column order, starting from the column after the last end member column; and updating to obtain a third hyperspectral feature matrix;
and the iteration updating unit is used for taking the first end member spectral matrix as an initial value, iteratively updating the first end member spectral matrix through a constrained nonnegative matrix factorization algorithm according to the third hyperspectral feature matrix and the label constraint matrix A, taking the end member spectral matrix obtained after iteration to convergence as a second end member spectral matrix, and taking the second end member spectral matrix as a hyperspectral classification result of the hyperspectral image to be classified.
6. The hyperspectral classification apparatus according to claim 5, wherein the feature extraction unit specifically comprises:
the characteristic subunit is used for extracting a hyperspectral characteristic matrix in a hyperspectral image to be classified to obtain a first hyperspectral characteristic matrix, wherein a row in the hyperspectral characteristic matrix represents a wave band, and a column represents a pixel;
the deleting subunit is used for calculating the signal-to-noise ratio of each wave band of the first hyperspectral feature matrix, deleting the wave bands whose signal-to-noise ratio is lower than a preset threshold and the wave bands containing negative-valued elements, and updating to obtain a fourth hyperspectral feature matrix;
and the normalizing subunit is used for performing normalization processing on the fourth hyperspectral feature matrix and updating to obtain a second hyperspectral feature matrix.
7. The hyperspectral classification apparatus according to claim 6, wherein the feature extraction unit further comprises:
the filtering subunit is used for calculating a spectral feature vector and spatial feature information in the first hyperspectral feature matrix through a PCA algorithm, and reversely integrating the spectral feature vector and the spatial feature information to obtain a filtered first hyperspectral feature matrix;
and the deleting subunit is specifically configured to calculate the signal-to-noise ratio of each wave band of the filtered first hyperspectral feature matrix, and to delete the wave bands whose signal-to-noise ratio is lower than a preset threshold and the wave bands containing negative-valued elements in the filtered first hyperspectral feature matrix, to obtain a fourth hyperspectral feature matrix.
8. The hyperspectral classification apparatus according to claim 5, wherein the iterative update unit specifically comprises:
the reconstruction subunit is used for adding a row of elements at the bottom of the third hyperspectral feature matrix and of the first end member spectral matrix, the value of each element in the added row being the mean value of the third hyperspectral feature matrix, to obtain a fifth hyperspectral feature matrix and a third end member spectral matrix;
and the iteration subunit is used for taking the first end member spectral matrix as an initial value, iteratively updating the third end member spectral matrix through a constrained nonnegative matrix factorization algorithm according to the fifth hyperspectral feature matrix and the label constraint matrix A, taking the end member spectral matrix obtained after iteration to convergence as a second end member spectral matrix, and taking the second end member spectral matrix as a hyperspectral classification result of the hyperspectral image to be classified.
CN201810206243.9A 2018-03-13 2018-03-13 Hyperspectral classification method and device Active CN108470192B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810206243.9A CN108470192B (en) 2018-03-13 2018-03-13 Hyperspectral classification method and device

Publications (2)

Publication Number Publication Date
CN108470192A CN108470192A (en) 2018-08-31
CN108470192B true CN108470192B (en) 2022-04-19

Family

ID=63265288

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110263777B (en) * 2019-06-26 2021-04-27 中国人民解放军火箭军工程大学 Target detection method and system based on space-spectrum combination local preserving projection algorithm
CN112633045A (en) * 2019-10-09 2021-04-09 华为技术有限公司 Obstacle detection method, device, equipment and medium
CN112417188B (en) * 2020-12-10 2022-05-24 桂林电子科技大学 Hyperspectral image classification method based on graph model
CN113743325B (en) * 2021-09-07 2024-01-12 中国人民解放军火箭军工程大学 Supervisory and unsupervised hyperspectral mixed pixel decomposition method

Citations (9)

Publication number Priority date Publication date Assignee Title
CN101221243A (en) * 2007-11-01 2008-07-16 复旦大学 Remote sensing image mixed pixels decomposition method based on nonnegative matrix factorization
CN101540049A (en) * 2009-04-29 2009-09-23 北京师范大学 End member extract method of hyperspectral image
CN101692125A (en) * 2009-09-10 2010-04-07 复旦大学 Fisher judged null space based method for decomposing mixed pixels of high-spectrum remote sensing image
CN105261000A (en) * 2015-09-17 2016-01-20 哈尔滨工程大学 Hyperspectral image fusion method based on end member extraction and spectrum unmixing
CN105550693A (en) * 2015-11-09 2016-05-04 天津商业大学 Cuckoo search hyperspectral unmixing method based on nonnegative independent component analysis
WO2016091017A1 (en) * 2014-12-09 2016-06-16 山东大学 Extraction method for spectral feature cross-correlation vector in hyperspectral image classification
CN106557782A (en) * 2016-11-22 2017-04-05 青岛理工大学 Hyperspectral image classification method and device based on category dictionary
CN106650811A (en) * 2016-12-26 2017-05-10 大连海事大学 Hyperspectral mixed pixel classification method based on neighbor cooperation enhancement
CN107341510A (en) * 2017-07-05 2017-11-10 西安电子科技大学 Image clustering method based on sparse orthogonal digraph Non-negative Matrix Factorization

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US11206976B2 (en) * 2014-10-16 2021-12-28 New York University Method and system for simultaneous decomposition of multiple hyperspectral datasets and signal recovery of unknown fluorophores in a biochemical system

Non-Patent Citations (8)

Title
Adaptive Method for Nonsmooth Nonnegative Matrix Factorization; Zuyuan Yang et al.; IEEE Transactions on Neural Networks and Learning Systems; 2016-01-28; Vol. 28, No. 4; 948-960 *
Constrained Concept Factorization for Image Representation; Haifeng Liu et al.; IEEE Transactions on Cybernetics; 2013-11-01; Vol. 44, No. 7; 1214-1224 *
Non-negative Matrix Factorization for Hyperspectral Unmixing Using Prior Knowledge of Spectral Signatures; Wei Tang et al.; Optical Engineering; 2012-08-03; Vol. 51, No. 8; 1-35 *
Robust Semi-supervised Nonnegative Matrix Factorization; Jing Wang et al.; 2015 International Joint Conference on Neural Networks; 2015-10-01; 1-8 *
Feature Extraction of Hyperspectral Data Based on Matrix Factorization; Wei Feng et al.; Journal of Infrared and Millimeter Waves; 2014-12-15; Vol. 33, No. 6; 674-679 *
Fast Algorithm for Hyperspectral Image Unmixing Based on Constrained Nonnegative Matrix Factorization; Liu Jianjun et al.; Acta Electronica Sinica; 2013-03-15; Vol. 41, No. 3; 432-439 *
Hyperspectral Image Classification Based on Neighbor-Preserving PNMF Feature Extraction; Wen Jinhuan et al.; Journal of Northwestern Polytechnical University; 2012-02-15; Vol. 30, No. 1; 138-144 *
A Review of Multiple Endmember Spectral Mixture Analysis; Qi Wenchao et al.; Remote Sensing Information; 2016-10-15; Vol. 31, No. 5; 11-18 *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20231213

Address after: F201, No. 11 Caibin Road, Science City, Guangzhou Economic and Technological Development Zone, Guangzhou City, Guangdong Province, 510700

Patentee after: ANTE LASER Co.,Ltd.

Address before: No.729, Dongfeng East Road, Yuexiu District, Guangzhou City, Guangdong Province 510060

Patentee before: GUANGDONG University OF TECHNOLOGY
