CN106529508B - Hyperspectral image classification method based on local and non-local multi-feature semantics - Google Patents


Info

Publication number
CN106529508B
CN106529508B (application CN201611119573.1A)
Authority
CN
China
Prior art keywords
sample
local
semantic representation
image
samples
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201611119573.1A
Other languages
Chinese (zh)
Other versions
CN106529508A (en)
Inventor
张向荣
焦李成
高泽宇
冯婕
白静
侯彪
马文萍
李阳阳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xidian University
Original Assignee
Xidian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xidian University filed Critical Xidian University
Priority to CN201611119573.1A priority Critical patent/CN106529508B/en
Publication of CN106529508A publication Critical patent/CN106529508A/en
Application granted granted Critical
Publication of CN106529508B publication Critical patent/CN106529508B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/13Satellite images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/2155Generating training patterns; Bootstrap methods, e.g. bagging or boosting characterised by the incorporation of unlabelled data, e.g. multiple instance learning [MIL], semi-supervised techniques using expectation-maximisation [EM] or naïve labelling
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/254Fusion techniques of classification results, e.g. of results related to same input data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/194Terrestrial scenes using hyperspectral data, i.e. more or other wavelengths than RGB

Abstract

The invention discloses a hyperspectral image classification method based on local and non-local multi-feature semantics. It mainly addresses the low accuracy, poor robustness, and weak spatial consistency of existing hyperspectral image classification. Its steps are: input an image and extract multiple features; split the data set into a training set and a test set; map the features of all samples to corresponding semantic representations with probabilistic support vector machines; construct local and non-local neighbor sets; build a denoising Markov random field model and perform semantic fusion and denoising; iteratively optimize the semantic representations; and obtain the class of every sample from its semantic representation, completing accurate hyperspectral image classification. By fusing multiple features and fully exploiting the spatial information present in the image, the invention achieves very high classification accuracy under small-sample conditions, with good robustness and spatial consistency, and can be used for military reconnaissance, mapping, vegetation surveys, mineral exploration, and similar applications.

Description

Hyperspectral image classification method based on local and non-local multi-feature semantics
Technical field
The invention belongs to the technical field of image processing and relates to machine learning and hyperspectral image processing; it is specifically a hyperspectral image classification method based on local and non-local multi-feature semantics, used for the classification and identification of different ground objects in hyperspectral images.
Background technique
Hyperspectral remote sensing has become a research hotspot in the Earth observation field over the past few decades. Using imaging spectrometers with nanometer-scale spectral resolution, hyperspectral remote sensing images surface objects in tens or hundreds of bands simultaneously, obtains the continuous spectrum of each ground object, and acquires the spatial, radiometric, and spectral information of the scene synchronously, giving it the "image-spectrum merging" characteristic. Since different ground objects have different reflectance signatures, the high spectral resolution of hyperspectral images provides extremely important discriminant information for distinguishing different ground objects or targets. Terrain classification of hyperspectral images has good application prospects in fields such as geological surveying, crop disaster monitoring, atmospheric pollution, and military target strikes.
Hyperspectral remote sensing image classification is the process of assigning every pixel of a hyperspectral image to one of the classes. Because the images obtained by hyperspectral remote sensing contain abundant spatial, radiometric, and spectral information, they provide a great deal of discriminant information for the classification task, but major challenges and difficulties remain. First, the data volume is large, with at least tens of bands, making the computational complexity very high and challenging the storage, transmission, and display of the data. Second, the dimensionality is very high; the redundant data and the noise it carries can reduce classification accuracy. Finally, there are many bands with high inter-band correlation, so the number of required training samples grows; if training samples are insufficient, problems common in machine learning such as overfitting appear, and the subsequent classification accuracy degrades severely.
Among traditional hyperspectral classification methods, classifiers based only on spectral information, such as the support vector machine, can resolve the above difficulties to some degree, but their classification accuracy is low and the spatial consistency of the resulting classification maps is poor, which cannot satisfy application demands.
In recent years, at the feature level, researchers have proposed many classification methods based on multi-feature information, which improve classification accuracy to some extent. These methods combine multi-feature information in two main ways: the first is feature-level fusion, where several feature vectors are concatenated and fed directly into the classifier; the second is decision-level fusion, where the feature vectors are fed into separate classifiers and their classification results are then merged. In both fusion modes some information is lost, so the multi-feature information is not fully exploited.
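The two fusion modes described above can be illustrated with a toy numpy sketch (all array values are invented for demonstration; the merge rule for decision-level fusion, averaging of posteriors, is just one common choice):

```python
import numpy as np

# two feature vectors of one pixel (hypothetical values)
f_spectral = np.array([0.2, 0.8, 0.1])
f_texture = np.array([1.0, 3.0])

# feature-level fusion: concatenate the vectors, then feed one classifier
fused_input = np.concatenate([f_spectral, f_texture])   # shape (5,)

# decision-level fusion: classify per feature, then merge the outputs,
# e.g. by averaging the per-class probabilities
p_spectral = np.array([0.6, 0.3, 0.1])
p_texture = np.array([0.4, 0.5, 0.1])
fused_decision = (p_spectral + p_texture) / 2           # [0.5, 0.4, 0.1]
```

Either way, part of the joint information between features is discarded, which is the loss the patent refers to.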
At the model level, because of the importance of spatial information in hyperspectral images, many classification methods with spatial constraints have been proposed, such as the joint sparse representation model, the composite-kernel support vector machine, and Markov random field models. These methods exploit the local spatial information of hyperspectral images, greatly improving classification accuracy. However, on the one hand, their treatment of local spatial information is coarse: most use a traditional square window as the local neighborhood of a pixel, which harms the spatial consistency of the classification map and hinders further accuracy gains. On the other hand, hyperspectral images contain abundant non-local spatial information that none of these methods exploit effectively, which lowers the attainable accuracy and robustness of classification.
Therefore, how to extract multiple useful features from high-dimensional redundant data, combine those features reasonably, and effectively use the abundant spatial information (local and non-local) together with the scarce and precious label information, so as to improve the accuracy, robustness, and spatial consistency of hyperspectral image classification under small-sample conditions, is a technical problem that remains to be solved.
Summary of the invention
To address the low accuracy, poor robustness, and weak spatial consistency of classification results in the prior art, the present invention proposes a hyperspectral image classification method based on local and non-local multi-feature semantics that combines multiple kinds of feature information reasonably and effectively exploits local and non-local spatial information as well as the associated label information.
The present invention is a hyperspectral image classification method based on local and non-local multi-feature semantics, characterized by the following steps:
(1) Input an image and extract its multiple features: input the hyperspectral image {h_q}, q = 1, 2, ..., M, where M is the total number of pixels in the hyperspectral image and h_q is a column vector of the reflectance values of pixel q at every band. Extract multiple features from the hyperspectral image, including the original spectral feature, Gabor texture features, and differential morphological features. The hyperspectral image contains c classes of pixels, of which N are labeled and m are unlabeled; each pixel of the image is one sample, and each sample consists of V feature vectors representing the sample under different feature descriptions, where V is the number of feature types.
(2) Split the hyperspectral data set into a training set and a test set: the N labeled pixels form the training set {x_i}, i = 1, 2, ..., N, with the corresponding label set {l_i}; the m unlabeled pixels form the test set {y_j}, j = 1, 2, ..., m. Here x_i is the i-th training sample, y_j the j-th test sample, l_i the class label of the i-th training sample, D_v the dimensionality of the v-th feature type, and R the real field in which the feature vectors lie.
(3) Map the multiple features of all samples to corresponding semantic representations with probabilistic support vector machines (SVMs): using the V feature vectors of all training samples and their label set, construct V probabilistic SVM classifiers, whose kernel is the radial Gaussian kernel, with kernel and penalty parameters chosen by k-fold cross-validation. Feed the V feature vectors of every test sample into the corresponding V classifiers; the resulting probabilities of each test sample y_j, j = 1, 2, ..., m, belonging to each class under each feature description are taken as the sample's semantic representations. Every training sample x_i, i = 1, 2, ..., N, belongs to its own class l_i with probability 1 and to every other class with probability 0, so its semantic representations under all features are vectors whose row l_i is 1 and whose other rows are 0. This yields the multiple semantic representations of all samples in the hyperspectral image.
(4) Construct the local and non-local neighbor sets of all samples in the test set: for each test sample y_j, j = 1, 2, ..., m, construct its local adaptive neighbor set B_j and its non-local similar-structure neighbor set C_j.
(5) Build the denoising Markov random field model and perform fusion and denoising of the test samples' semantic representations: for each test sample y_j, j = 1, 2, ..., m, feed the semantic representations of y_j, of all samples in its local adaptive neighbor set B_j, and of all samples in its non-local similar-structure neighbor set C_j into the local energy function, and minimize it to obtain the first-order denoised semantic representation of y_j. Keeping the semantic representations of the training samples unchanged, this yields the first-order denoised semantic representations of all samples of the hyperspectral image.
(6) Iteratively optimize the first-order denoised semantic representations of all samples: set a maximum number of iterations T_max and let t be the current iteration. For each test sample, feed the t-th order denoised semantics of y_j and of all samples in B_j and C_j into the local energy function of the denoising Markov random field; minimizing it gives the (t+1)-th order semantic representation of y_j. The training samples' semantics again stay fixed, yielding the (t+1)-th order denoised semantics of all samples of the hyperspectral image. Repeat until t = T_max - 1, obtaining the T_max-th order, i.e. final, semantic representations of all samples.
(7) Use the final semantic representations to determine the class of every sample in the test set: for each test sample y_j, j = 1, 2, ..., m, the final semantic representation is the column vector of probabilities that y_j belongs to each class; the label of the position of its largest element is taken as the class of y_j. This yields the test-set prediction set and completes the classification task of the hyperspectral image.
The present invention maps the multiple features of a hyperspectral image from their different feature spaces into a common semantic space via weak classifiers, then performs semantic fusion and denoising with a Markov random field model to obtain semantic representations that are information-rich and carry little noise, and on this basis classifies and identifies the different ground objects in the hyperspectral image.
Compared with the prior art, the present invention has the following advantages:
1. By fully exploiting the correlation between pixels, the invention finds the local adaptive neighbors and non-local similar-structure neighbors of every pixel; at the same time, the label information of the few labeled samples propagates into their local and non-local neighbors, so that very high classification accuracy can still be obtained on the hyperspectral image with few labeled samples.
2. The invention improves the Markov random field model in the multi-feature and non-local directions, and replaces the traditional discrete same/different-class regularizer with a geodesic-distance regularizer between semantic vectors, making the Markov random field model better suited to hyperspectral image classification; this solves the over-smoothing caused by traditional Markov models and improves the spatial consistency of the classification map.
3. The denoising Markov random field model proposed in the invention can be solved by simple gradient descent, whose computational complexity is lower than the graph-cut algorithms used by traditional Markov random field models, and the optimization iterations avoid falling into local optima well, improving the robustness of the model and hence of the classification results.
Comparative experiments show that the invention significantly improves the classification accuracy and robustness on hyperspectral remote sensing images and produces classification maps with good spatial consistency.
Detailed description of the invention
Fig. 1 is the flow diagram of the invention;
Fig. 2 shows the Indian Pines data set used in the simulations, where Fig. 2a is the one-dimensional gray-scale image of the data set obtained by principal component analysis (PCA) and Fig. 2b is its ground-truth class label map, each color corresponding to a different ground-object type;
Fig. 3 compares the classification maps of the present invention and existing methods on the Indian Pines data set, where Figs. 3a-3f correspond to three existing classification methods, namely the composite-kernel support vector machine (SVM+CK), the support vector machine combined with a Markov random field (SVM+MRF), and joint sparse representation (SOMP), plus two simplified versions of the proposed method and the proposed method itself.
Specific embodiment
The invention is described in detail below with reference to the accompanying drawings.
Embodiment 1:
For the terrain classification problem of hyperspectral images, most existing methods suffer from unsatisfactory classification accuracy, poor robustness of the results, and weak spatial consistency of the classification map. Targeting these problems, the present invention combines multi-feature fusion in the semantic space with local and non-local spatial constraint methods, and proposes a hyperspectral image classification method based on local and non-local multi-feature semantics.
The present invention is a hyperspectral image classification method based on local and non-local multi-feature semantics; referring to Fig. 1, it includes the following steps:
(1) Input an image and extract its multiple features.
Commonly used hyperspectral data sets include the Indian Pines and Salinas data sets acquired by the NASA Jet Propulsion Laboratory's Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) and the University of Pavia data set acquired by the ROSIS spectrometer.
Input the hyperspectral image {h_q}, q = 1, 2, ..., M, where M is the total number of pixels in the hyperspectral image and h_q is a column vector of the reflectance values of pixel q at every band, i.e. the pixel's original spectral feature. Extract multiple features from the hyperspectral image, including the original spectral feature, Gabor texture features, and differential morphological profile (DMP) features; these three features reflect the spectral, textural, and shape information of the hyperspectral image, respectively. The hyperspectral image contains c classes of pixels, of which N are labeled and m are unlabeled; each pixel of the image is one sample, and each sample consists of V feature vectors representing the sample under different feature descriptions, where V is the number of feature types.
The number of feature types used in this embodiment is 3, so the number of feature types V mentioned here and below equals 3 unless otherwise stated.
The N labeled pixels mentioned in this embodiment are selected from each class of the hyperspectral image in equal proportion; the m remaining pixels serve as unlabeled pixels.
(2) Split the hyperspectral data set into a training set and a test set: the N labeled pixels form the training set {x_i}, i = 1, 2, ..., N, with the corresponding label set {l_i}; the m unlabeled pixels form the test set {y_j}, j = 1, 2, ..., m. Here x_i is the i-th labeled training sample and y_j the j-th unlabeled test sample. Every sample of the hyperspectral image is represented by V column vectors, each representing one feature; l_i is the class label of the i-th labeled training sample, D_v the dimensionality of the v-th feature type, and R the real field. N and m are the numbers of labeled and unlabeled pixels from step (1); here, N is the total number of labeled training samples and m the total number of test samples.
(3) Map the multiple features of all samples to the corresponding semantic space, specifically to semantic representations, with probabilistic support vector machines (SVMs). Using the V feature vectors of all training samples and the corresponding label set, construct V classifiers, i.e. probabilistic SVMs, whose kernel is the radial Gaussian kernel, with kernel and penalty parameters chosen by k-fold cross-validation. Feed the V feature vectors of all test samples into the corresponding V probabilistic SVM classifiers; the resulting probabilities of each test sample y_j, j = 1, 2, ..., m, belonging to each class under each feature description are taken as its semantic representations. Every training sample x_i, i = 1, 2, ..., N, belongs to its own class l_i with probability 1 and to every other class with probability 0, so the semantic representations corresponding to all features are 0-1 coded vectors whose row l_i is 1 and whose other rows are 0. This yields the multiple semantic representations of all samples in the hyperspectral image. Note that because the class labels of all training samples are known, their semantic representations are exactly correct.
In this embodiment, the mapping from sample features to sample semantic representations uses probabilistic SVMs because, for the hyperspectral terrain classification problem, the probabilistic SVM possesses good robustness and strong classification ability and is commonly used as the benchmark classifier in the hyperspectral processing field. Other classifiers that output class probabilities can also complete this step in place of the probabilistic SVM, e.g. multinomial logistic regression, random forests, and their variants.
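A minimal scikit-learn sketch of this step, for one feature type on synthetic data (the class counts, feature dimensionality, and SVM parameters are invented; the patent chooses the kernel and penalty parameters by cross-validation):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
c, N, m, D = 3, 60, 30, 10          # classes, train size, test size, feature dim

# synthetic training data: each class shifted along a different axis
labels = np.repeat(np.arange(c), N // c)
X_train = rng.normal(size=(N, D)) + 4.0 * np.eye(c, D)[labels]
Y_test = rng.normal(size=(m, D))

# one RBF-kernel probabilistic SVM per feature type (here: one)
clf = SVC(kernel="rbf", C=10.0, gamma="scale", probability=True)
clf.fit(X_train, labels)

# semantic representations of test samples: class-membership probabilities
P_test = clf.predict_proba(Y_test)   # shape (m, c), each row sums to 1
# training samples get exact one-hot semantics
P_train = np.eye(c)[labels]          # shape (N, c)
```

In the full method this is repeated for each of the V feature types, giving V semantic representations per sample.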
(4) To extract the local and non-local spatial information of all test samples and thereby introduce local and non-local spatial constraints, construct the local and non-local neighbor sets of all samples in the test set: for each test sample y_j, j = 1, 2, ..., m, construct its local adaptive neighbor set B_j and its non-local similar-structure neighbor set C_j, obtaining the local adaptive neighbor set and non-local similar-structure neighbor set of every test sample.
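This section does not spell out the exact construction of B_j and C_j; a plausible minimal sketch, under assumed choices (spectral Euclidean distance, a 3x3 window for the local set, fixed neighbor counts), is:

```python
import numpy as np

rng = np.random.default_rng(1)
H, W, D = 8, 8, 5
feats = rng.normal(size=(H * W, D))   # toy per-pixel feature vectors

def local_adaptive_neighbors(idx, radius=1, keep=4):
    """Spectrally closest pixels inside a (2r+1)^2 window; keeping only the
    closest ones is the 'adaptive' part (dissimilar window pixels are dropped)."""
    r, c = divmod(idx, W)
    cand = [rr * W + cc
            for rr in range(max(0, r - radius), min(H, r + radius + 1))
            for cc in range(max(0, c - radius), min(W, c + radius + 1))
            if rr * W + cc != idx]
    dists = [np.linalg.norm(feats[j] - feats[idx]) for j in cand]
    return [cand[k] for k in np.argsort(dists)[:keep]]

def nonlocal_neighbors(idx, keep=4):
    """Most similar pixels anywhere in the image (non-local similar structure)."""
    d = np.linalg.norm(feats - feats[idx], axis=1)
    d[idx] = np.inf
    return list(np.argsort(d)[:keep])

B0 = local_adaptive_neighbors(0)   # corner pixel: only 3 window candidates
C0 = nonlocal_neighbors(0)
```

The key design point is that B_j carries spatial smoothness while C_j lets distant pixels of the same structure reinforce each other.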
(5) Build the denoising Markov random field model and perform fusion and denoising of the test samples' multiple semantic representations, where the denoising is realized by introducing the local and non-local spatial constraints. Combining the multiple semantic representations of all samples obtained in step (3) with the local adaptive and non-local similar-structure neighbor sets of all test samples obtained in step (4), feed the multiple semantic representations of all test samples and of their local and non-local neighbors into the denoising Markov random field model, and maximize the joint probability of the Markov random field, i.e. minimize its global energy. The global energy is minimized with the iterated conditional modes (ICM) method, which converts the global minimization into many local minimizations, i.e. minimizing the energy of the clique at each sample, where a clique consists of the test sample itself and all its corresponding neighbor samples. For each test sample y_j, j = 1, 2, ..., m, feed the multiple semantic representations of y_j, of all samples in its local adaptive neighbor set B_j, and of all samples in its non-local similar-structure neighbor set C_j into the local energy function of the denoising Markov random field, and minimize it to obtain the semantic representation of y_j after the first denoising. Minimizing the clique energies of all test samples yields the first-order denoised semantic representations of all test samples. So that the semantic representations of the training samples, which are exactly correct, keep exerting a positive influence on their local and non-local neighbors through the denoising Markov random field, the training samples' semantic representations are kept unchanged; combined with the first-order denoised semantics of all test samples, this gives the first-order denoised semantic representations of all samples of the hyperspectral image.
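The patent's local energy function (with its geodesic-distance regularizer) is given in a later embodiment; as a stand-in, the sketch below minimizes a simplified quadratic local energy by gradient descent, the solver the patent prescribes. The quadratic form, the weight lambda, and the step size are all assumptions:

```python
import numpy as np

def denoise_semantics(P_self, P_neigh, lam=1.0, lr=0.1, steps=200):
    """Minimize a toy local energy
       E(s) = sum_v ||s - p_v||^2 + lam * sum_n ||s - q_n||^2
    by gradient descent, where p_v are the sample's V semantic vectors and
    q_n its neighbors' semantics; then renormalize to a probability vector."""
    s = P_self.mean(axis=0).copy()
    for _ in range(steps):
        grad = 2 * (len(P_self) * s - P_self.sum(axis=0))
        grad += 2 * lam * (len(P_neigh) * s - P_neigh.sum(axis=0))
        s -= lr * grad
    s = np.clip(s, 0, None)
    return s / s.sum()

P_self = np.array([[0.70, 0.20, 0.10],    # V = 2 semantic vectors of y_j
                   [0.60, 0.30, 0.10]])
P_neigh = np.array([[0.80, 0.10, 0.10],   # neighbors' semantics
                    [0.75, 0.15, 0.10]])
s_star = denoise_semantics(P_self, P_neigh)
```

For this quadratic energy the minimizer is the weighted mean of all input vectors, so gradient descent converges to the fused, denoised semantic vector.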
(6) One round of local energy minimization with the iterated conditional modes (ICM) method used in step (5) does not guarantee that the global energy is minimized, so the first-order denoised semantic representations of all samples, i.e. the result of step (5), are further optimized iteratively. Referring to Fig. 1, set a maximum number of iterations T_max and let t be the current iteration. Feed the t-th order denoised semantics of test sample y_j, of all samples in its local adaptive neighbor set B_j, and of all samples in its non-local similar-structure neighbor set C_j into the local energy function of the denoising Markov random field, and minimize it to obtain the (t+1)-th order semantic representation of y_j, and thereby the (t+1)-th order denoised semantics of all test samples. All training samples are handled exactly as in step (5), yielding the (t+1)-th order denoised semantics of all samples of the hyperspectral image. This step is the iteration; repeat it until t = T_max - 1, obtaining the T_max-th order, i.e. final, semantic representations of all samples. Note that the local energy function of the denoising Markov random field used in this step differs from the one in step (5): the function in step (5) must also fuse the multiple semantics, while the function in this step only performs iterative optimization; for the specific difference, see the first-order and (t+1)-th order local energy functions in embodiment 4.
The invention requires setting the parameter T_max, i.e. the number of iterations in step (6). In general, if T_max is too small the result does not converge, and if it is too large the computational complexity rises and much meaningless computation is performed. The parameter is set by checking whether the result has stabilized: randomly select a small number of prediction samples (5 to 10 percent); if the gap between the prediction sets obtained from the (t+1)-th order and t-th order semantic representations of these samples is below a certain threshold, no further iteration is needed and T_max is set to (t+1). In general, the fewer the training samples, the larger the required T_max, and vice versa.
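The stopping rule just described can be sketched as follows (the subsample fraction and threshold follow the values suggested above; the helper name is invented):

```python
import numpy as np

def converged(P_prev, P_curr, frac=0.1, tol=0.01, seed=0):
    """Compare predicted labels of a random subsample between two iterations;
    report convergence when the fraction of changed labels is below tol."""
    rng = np.random.default_rng(seed)
    m = P_prev.shape[0]
    idx = rng.choice(m, size=max(1, int(frac * m)), replace=False)
    changed = np.mean(P_prev[idx].argmax(axis=1) != P_curr[idx].argmax(axis=1))
    return changed < tol

# identical semantics between iterations -> converged
P = np.tile([0.7, 0.2, 0.1], (100, 1))
Q = np.tile([0.1, 0.2, 0.7], (100, 1))
```

With this check, T_max need not be fixed in advance: iteration stops at the first t where `converged` holds.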
(7) Use the final semantic representations obtained in step (6) to determine the class of every sample in the test set. For each test sample y_j, j = 1, 2, ..., m, the final semantic representation is a c x 1 column vector of the probabilities that y_j belongs to each class; the label of the position of its largest element is taken as the class of the test sample. This yields the classes of all samples in the test set, forms the test-set prediction set, and completes the classification task of the hyperspectral image.
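The final decision is simply a per-sample argmax over the semantic vector; a toy sketch (the probability values are invented):

```python
import numpy as np

# final semantics of three test samples (rows: samples, cols: c = 3 classes)
S = np.array([[0.10, 0.70, 0.20],
              [0.55, 0.30, 0.15],
              [0.25, 0.25, 0.50]])

pred = S.argmax(axis=1)   # index of the largest probability per row
print(pred)               # [1 0 2]
```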
For example, on the Indian Pines data set, Fig. 2b gives the ground-truth class label map of the data set; whichever classification method is used, its result can be compared with Fig. 2b to verify the classification effect.
Because the present invention fully exploits the correlation between pixels, it finds for each pixel a local adaptive neighbour set and a non-local similar-structure neighbour set, and the local and non-local spatial constraints built from these two neighbour sets are added to the designed denoising Markov random field model. At the same time, the class information of the small number of labelled samples propagates to their local and non-local neighbours during the denoising Markov optimization process. The invention therefore obtains high classification accuracy on hyperspectral images even with few labelled samples, and the classification result map possesses good spatial consistency.
Embodiment 2:
The method based on local and non-local multi-feature semantics for hyperspectral image classification is as in Embodiment 1, where the multiple features in step (1) include, but are not limited to: the original spectral feature, the Gabor texture feature, and the differential morphological profile (DMP) feature. The Gabor texture feature and the DMP feature are expressed as follows:
Gabor texture feature: principal component analysis (PCA) is applied to the hyperspectral image, and the first 3 principal components after processing are taken as 3 base images. A Gabor transform with 16 orientations and 5 scales is applied to each base image, so that each base image yields 80 feature dimensions; stacking them gives a Gabor texture feature of 240 dimensions in total.
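The PCA-then-filter-bank pipeline above can be sketched as follows. The kernel frequencies and bandwidths are illustrative choices, not the patent's values; only the structure (3 principal components x 16 orientations x 5 scales = 240 maps) follows the text:

```python
import numpy as np
from scipy.ndimage import convolve

def gabor_kernel(freq, theta, sigma, size=9):
    """Real part of a Gabor kernel; freq/sigma values are illustrative."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * xr)

def gabor_features(cube, n_orient=16, n_scale=5, n_pc=3):
    """cube: (H, W, B) hyperspectral image. PCA to n_pc base images, then
    filter each with an n_orient x n_scale Gabor bank and stack the
    responses: 3 * 16 * 5 = 240 maps, matching the 240-dim feature."""
    H, W, B = cube.shape
    X = cube.reshape(-1, B).astype(float)
    X -= X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)   # PCA via SVD
    pcs = (X @ Vt[:n_pc].T).reshape(H, W, n_pc)
    maps = []
    for p in range(n_pc):
        for s in range(n_scale):
            for o in range(n_orient):
                k = gabor_kernel(freq=0.1 + 0.05 * s,
                                 theta=np.pi * o / n_orient,
                                 sigma=2.0 + s)
                maps.append(convolve(pcs[:, :, p], k, mode='nearest'))
    return np.stack(maps, axis=-1)   # (H, W, 240)

feat = gabor_features(np.random.rand(16, 16, 8))
```

Per pixel, the 240 stacked responses form the texture feature vector referred to in the text.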
Differential morphological profile (DMP) feature: PCA is applied to the hyperspectral image, and the first 3 processed principal components are taken as 3 base images. Morphological openings at 5 scales are computed and differenced against one another, and likewise closings at 5 scales; each base image thus yields an 8-dimensional differential feature, and stacking them gives a differential morphological feature of 24 dimensions in total.
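A short sketch of the DMP construction, under the assumption of square structuring elements and consecutive-scale differences (the patent does not fix the element shape or scale values; the 3 x (4 + 4) = 24-map count follows the text):

```python
import numpy as np
from scipy.ndimage import grey_opening, grey_closing

def dmp_features(pcs, scales=(3, 5, 7, 9, 11)):
    """pcs: (H, W, 3) first three principal-component images. For each base
    image, compute openings and closings at 5 structuring-element sizes
    (sizes are illustrative), difference consecutive scales, and stack:
    3 * (4 + 4) = 24 maps, matching the 24-dim DMP feature."""
    maps = []
    for p in range(pcs.shape[2]):
        img = pcs[:, :, p]
        opens = [grey_opening(img, size=s) for s in scales]
        closes = [grey_closing(img, size=s) for s in scales]
        for seq in (opens, closes):
            for a, b in zip(seq[:-1], seq[1:]):
                maps.append(np.abs(b - a))   # differential profile
    return np.stack(maps, axis=-1)

f = dmp_features(np.random.rand(16, 16, 3))
```

Per pixel, the 24 stacked differences form the DMP feature vector.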
In selecting the multiple features, the present invention, besides using the original spectral information, places particular emphasis on texture and shape information, since different feature descriptors describe an image differently. The Gabor texture feature extracts the local texture information in a hyperspectral image well, i.e., the correlation between local pixels, while the differential morphological profile (DMP) feature reflects the edges and sizes of the shaped regions in the hyperspectral image very well. By combining spectral, texture and shape features, the present invention improves the discriminability between pixels of different classes in the hyperspectral image and ultimately raises the classification accuracy and robustness of the method based on local and non-local multi-feature semantics.
Besides the three features mentioned in this embodiment, other features can also be used in the present invention, such as the grey-level co-occurrence matrix and the three-dimensional wavelet transform commonly used in hyperspectral image analysis; likewise, the number of features used in the present invention is not limited to three. Although more kinds of features can improve the discriminative power of the method, they also needlessly increase the computational complexity and add largely redundant information. The three features used by the present invention — the original spectral feature, the Gabor texture feature and the differential morphological profile (DMP) feature — essentially cover most of the information in a hyperspectral image.
Embodiment 3:
The method based on local and non-local multi-feature semantics for hyperspectral image classification is as in Embodiments 1-2, where the local and non-local neighbour sets in step (4) are constructed as follows:
4a) Principal component analysis (PCA) is applied to the hyperspectral image and the first principal component is extracted as a base image, i.e., a grey-level image that reflects the basic ground-object contours of the hyperspectral image. The superpixel number LP is set, and entropy-rate superpixel segmentation is performed, yielding LP superpixel blocks.
In this embodiment, the entropy-rate superpixel segmentation method is applied to the first-principal-component grey image of the hyperspectral image. The resulting superpixel blocks preserve the edge and structural information of the image well, differ little in size, and are fairly regular in shape, which makes them well suited to hyperspectral images; the method is also widely used by researchers in hyperspectral image analysis. In the present invention, other image segmentation methods can be used in place of entropy-rate superpixel segmentation, such as mean shift and other graph-based superpixel segmentation methods.
4b) A local window parameter Wlocal is set. For a sample yj in the test set, all samples that lie in the Wlocal x Wlocal square window centred on yj and that belong to the same superpixel block Pu as yj constitute the local adaptive neighbour set Bj of yj; the local adaptive neighbour sets of all samples in the test set are obtained in this way. Compared with the traditional square-window neighbour set, the local neighbour set treated this way adds a degree of adaptivity for each test sample, so that the local neighbours of each test sample are identified according to its own surroundings, and a fixed window parameter no longer causes the local neighbour sets of some test samples to contain a large amount of wrong spatial information. Compared with fully adaptive local-neighbour construction methods, a suitable local window parameter can be set according to the resolution and ground-object types of each hyperspectral image, introducing a degree of prior knowledge to improve the quality of the local spatial information; moreover, fully adaptive construction methods are mostly of higher complexity, whereas the complexity of the construction method of the present invention is almost identical to the conventional square-window neighbour set — computational complexity is barely increased, yet a degree of adaptivity is introduced.
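The intersection rule just described — square window AND same superpixel block — can be sketched directly (the label map would come from the entropy-rate segmentation; here it is a toy array):

```python
import numpy as np

def local_adaptive_neighbors(j_rc, superpixels, w_local):
    """j_rc: (row, col) of test sample y_j; superpixels: (H, W) label map;
    w_local: odd window size. A pixel is a local adaptive neighbour iff it
    lies in the w_local x w_local window centred on y_j AND belongs to the
    same superpixel block as y_j (the sample itself is excluded)."""
    H, W = superpixels.shape
    r, c = j_rc
    half = w_local // 2
    label = superpixels[r, c]
    nbrs = []
    for rr in range(max(0, r - half), min(H, r + half + 1)):
        for cc in range(max(0, c - half), min(W, c + half + 1)):
            if (rr, cc) != (r, c) and superpixels[rr, cc] == label:
                nbrs.append((rr, cc))
    return nbrs

# toy 4x4 label map with two superpixel blocks
sp = np.array([[0, 0, 1, 1],
               [0, 0, 1, 1],
               [0, 0, 1, 1],
               [0, 0, 1, 1]])
B = local_adaptive_neighbors((1, 1), sp, w_local=3)
```

For the sample at (1, 1), the 3 x 3 window contains eight candidates, but the three in the right-hand superpixel block are rejected, leaving five neighbours.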
The local window parameter Wlocal must be set in the present invention. The larger the parameter, the more local neighbours are included and the richer the local neighbourhood information, but the more easily wrong neighbours are introduced; the smaller the parameter, the fewer local neighbours are included, and sufficient local neighbourhood information cannot be extracted. Setting this parameter requires a compromise between the two; typical values are 3, 5, ..., 15, adjusted to different degrees according to prior information such as the resolution and ground-object types of each hyperspectral image.
4c) The non-local structure window parameter Wnonlocal and the non-local neighbour number K are set. For each sample hq, q = 1, 2, ..., M in the original hyperspectral image, mean pooling is performed in its Wnonlocal x Wnonlocal neighbourhood, yielding the structural information of all samples.
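The mean-pooling step can be sketched as below; the border handling (clipping the window at the image edge) is an assumption the patent does not spell out:

```python
import numpy as np

def structure_info(cube, w_nonlocal):
    """cube: (H, W, B) hyperspectral image. For every pixel, mean-pool the
    spectra inside its w_nonlocal x w_nonlocal neighbourhood (clipped at
    the image border), giving a B-dim structural descriptor per pixel."""
    H, W, B = cube.shape
    half = w_nonlocal // 2
    out = np.empty_like(cube, dtype=float)
    for r in range(H):
        for c in range(W):
            patch = cube[max(0, r - half):r + half + 1,
                         max(0, c - half):c + half + 1]
            out[r, c] = patch.reshape(-1, B).mean(axis=0)
    return out

cube = np.random.rand(8, 8, 5)
info = structure_info(cube, w_nonlocal=3)
```

Swapping `mean` for `max` or a weighted mean gives the alternative pooling methods mentioned below.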
The non-local structure window parameter Wnonlocal and the non-local neighbour number K must be set in the present invention. The selection of Wnonlocal is similar to that of the local window parameter Wlocal in 4b) of this embodiment, except that, in general, as much structural information as possible should be included when extracting the information of the sub-block to which each sample belongs, so the value of Wnonlocal is larger than Wlocal, e.g., 15, 17, ..., 25. Likewise, the parameter is adjusted to different degrees according to prior information such as the resolution and ground-object types of each hyperspectral image. The non-local neighbour number K typically takes values such as 20, 30, ..., 100; this parameter must be adjusted according to the number of samples per class in the hyperspectral image: if every class has many samples, a larger K is taken, whereas if some classes have few samples, a smaller K is taken.
The present invention extracts the structural information of the sub-block to which each sample belongs using mean pooling; the mean-pooled result roughly exhibits the essential information of each sample's sub-block and provides a degree of discrimination between sub-blocks. Other aggregation methods can also replace mean pooling for extracting each sub-block's information, such as max pooling, weighted mean pooling, or other more elaborate pooling methods.
4d) For each sample yj, j = 1, 2, ..., m in the test set, its structural information is compared with the structural information of all other samples, and the degree of similarity between the samples' structural information is computed:
where the metric is the geodesic distance and the square-root operation is applied element-wise to each row of the vector x; the value SGDjq represents the degree of similarity between the structural information of test sample yj and that of sample q. Compared with common metrics such as the Euclidean distance or the cosine angle, the geodesic distance — given that research has shown the reflectance information contained in a hyperspectral image can be regarded as a manifold — is a manifold distance that better expresses the distance between the band-reflectance vectors of a hyperspectral image. The K samples most similar to test sample yj, i.e., the K samples with the smallest SGD values, are taken as the non-local similar-structure neighbour set Cj of yj. At the same time, since the degree of similarity between each non-local neighbour sample and test sample yj differs, so does its importance to yj; each of the K non-local neighbours is therefore given a different weight according to its SGD value — the higher the similarity, i.e., the smaller the SGD value, the larger the weight — with the weight computed as follows:
where ωjh represents the weight of non-local similar-structure neighbour h of test sample yj, and γ is the Gaussian kernel parameter.
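Since the patent's formula images are not reproduced above, the following is only a plausible instantiation of the SGD-plus-weights step: a spherical geodesic distance after an element-wise square root (matching the "root operation on each row"), and a Gaussian-kernel weight that grows as the distance shrinks. Both exact forms are assumptions:

```python
import numpy as np

def geodesic_distance(p, q):
    """Spherical geodesic distance between two non-negative vectors after an
    element-wise square root; this exact form is an assumption."""
    sp = np.sqrt(p / p.sum())
    sq = np.sqrt(q / q.sum())
    return np.arccos(np.clip(sp @ sq, -1.0, 1.0))

def nonlocal_neighbors(j, info, K, gamma):
    """info: (m, B) structural descriptors. Returns the K samples with the
    smallest SGD to sample j, plus normalised Gaussian-kernel weights
    (a hypothetical weighting: more similar => larger weight)."""
    d = np.array([geodesic_distance(info[j], info[q]) if q != j else np.inf
                  for q in range(len(info))])
    C = np.argsort(d)[:K]            # K most similar samples (self excluded)
    w = np.exp(-d[C] ** 2 / gamma)
    return C, w / w.sum()

rng = np.random.default_rng(0)
info = rng.random((10, 6))
C, w = nonlocal_neighbors(0, info, K=3, gamma=0.05)
```

Because the distances are sorted ascending, the returned weights are non-increasing: the most similar neighbour carries the largest weight.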
The present invention focuses on designing, for every sample in the test set, a local adaptive neighbour set and a non-local similar-structure neighbour set. The local adaptive neighbour set combines the traditional square window with superpixel blocks, fully extracting the local spatial information around the sample's position while reducing the introduction of wrong spatial information. The design of the non-local similar-structure neighbour set considers the abundant redundant non-local spatial information present in hyperspectral images, which the present invention fully extracts. The design of these two neighbour sets lets the final denoising Markov random field make full use of local and non-local spatial information, and thereby improves the classification accuracy of the hyperspectral image and the spatial consistency of the classification result map.
Embodiment 4:
The method based on local and non-local multi-feature semantics for hyperspectral image classification is as in Embodiments 1-3, where in steps (5) and (6) the global energy of the Markov random field is minimized with the iterated conditional modes (ICM) method, converting the global energy minimization into the minimization of each local clique energy. For test sample yj, the corresponding first-order local energy function, i.e., the local energy function used in step (5), is:
The corresponding (t+1)-th order local energy function, i.e., the local energy function used in step (6), is:
Note that the distance metric in the two local energy functions above is the geodesic distance, identical to the geodesic distance described in Embodiment 3. Compared with traditional distance metrics such as the Euclidean distance and the cosine angle, the geodesic distance has proven better suited to measuring the distance between two semantic vectors, i.e., probability vectors.
Minimizing the two local energy functions above is a convex optimization problem, and the present invention minimizes them with the simple and fast gradient descent method. Taking the first-order local energy function of test sample yj as an example, the gradient is:
where the quantity above denotes the k-th element of the corresponding semantic vector, and:
Using the formula above, the minimization of the global energy of the Markov random field is completed.
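Since the energy-function images are not reproduced above, the following sketch uses a squared-Euclidean stand-in for the geodesic regulariser (an assumption chosen so the sketch stays short and convex) but keeps the three-term structure — self term, local term, weighted non-local term — and the gradient-descent solver the text describes:

```python
import numpy as np

def minimize_local_energy(s_j, nbr_B, nbr_C, w_C, beta=1.0, lam=1.0,
                          lr=0.1, n_iter=200):
    """Gradient descent on a quadratic stand-in for the local energy:
    E(s) = ||s - s_j||^2 + beta * sum_{n in B} ||s - s_n||^2
                         + lam  * sum_{h in C} w_h ||s - s_h||^2."""
    s = s_j.copy()
    for _ in range(n_iter):
        grad = 2 * (s - s_j)                                   # self term
        grad = grad + 2 * beta * (len(nbr_B) * s - nbr_B.sum(axis=0))
        grad = grad + 2 * lam * (w_C.sum() * s - w_C @ nbr_C)  # non-local
        s = s - lr * grad
    s = np.clip(s, 0, None)
    return s / s.sum()            # keep it a probability vector

s_j = np.array([0.8, 0.1, 0.1])
B = np.array([[0.6, 0.3, 0.1], [0.7, 0.2, 0.1]])      # local neighbours
C = np.array([[0.9, 0.05, 0.05], [0.5, 0.4, 0.1]])    # non-local neighbours
w_C = np.array([0.5, 0.5])
s_new = minimize_local_energy(s_j, B, C, w_C)
```

For this quadratic the minimiser is the weighted average of all the vectors involved, so the iteration converges to (s_j + sum(B) + w_C @ C) / 4 here.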
Because the present invention improves the Markov random field model in the multi-feature and non-local directions, and replaces the traditional discrete same/different-class regulariser with a geodesic-distance regulariser between semantic vectors, the Markov random field model becomes better suited to hyperspectral image classification, solving problems such as the over-smoothing caused by traditional Markov models, improving the spatial consistency of the classification result map, and raising the classification accuracy. Afterwards, the global energy minimization problem is converted into multiple local minimization problems by the iterated conditional modes (ICM) method and solved with simple gradient descent, whose computational complexity is lower than the graph-cut algorithms used in traditional Markov random field models; the optimization iterations also largely avoid falling into local optima, improving the robustness of the model and hence of the classification results.
A complete embodiment is given below to further describe the present invention.
Embodiment 5:
The method based on local and non-local multi-feature semantics for hyperspectral image classification is as in Embodiments 1-4.
Referring to Fig. 1, the specific implementation steps of the present invention are as follows:
Step 1. Input the hyperspectral image, where M is the total number of pixels in the hyperspectral image and hq is a column vector holding the reflectance of every band of pixel q. Multiple features are extracted from the hyperspectral image with different feature descriptors. In the simulation experiments of the present invention, the Gabor texture feature, the differential morphological profile (DMP) feature and the original spectral feature are extracted.
1a) For the Gabor texture feature, the original hyperspectral image is reduced in dimension by principal component analysis (PCA), and the first three principal-component images are transformed by a Gabor filter at multiple orientations and scales — 16 orientations and 5 scales in this embodiment. The filtering results are stacked to give the Gabor texture feature of the hyperspectral image, and correspondingly the texture feature vector of each pixel.
1b) For the differential morphological profile (DMP) feature, openings and closings at different scales are likewise applied to the first three principal-component images — openings at 5 scales and closings at 5 scales in this embodiment. The difference between the opening (or closing) operations of neighbouring scales is taken, and all the differences are stacked to give the differential morphological feature of the hyperspectral image, and correspondingly the DMP feature vector of each pixel.
1c) The original spectral feature simply takes the reflectance of every band of each pixel directly as that pixel's spectral feature.
Step 2. Pixels are selected in equal proportion from each class of the hyperspectral image as labelled pixels; the total number of labelled pixels is N, and the remaining m pixels of the hyperspectral image serve as unlabelled pixels. Each pixel is represented by its V feature vectors, where Dv denotes the dimension of the v-th feature.
Step 3. Choose the labelled training set X and the test set Y, and obtain the semantic representation set S of all samples.
3a) The N labelled samples constitute the labelled training set, with corresponding class label set L, where R denotes the real field. Using each single-feature vector set Xv of the labelled training set together with the class label set L, V probabilistic support vector machine (SVM) models are trained. The kernel of the support vector machine is the radial Gaussian kernel, and the kernel parameter r and penalty parameter c are obtained by multi-fold cross validation.
3b) The m unlabelled samples constitute the test set. The different feature vector sets of the test set are input separately into the corresponding V probabilistic support vector machine models trained in step 3a), giving, for every sample in the test set and under each feature description, the probability of belonging to each class — the semantic representation of that test sample, e.g., the semantic representation of test sample yj. For a sample in the training set, the semantic representation is a 0-1 coding vector with a 1 at the position of its class and 0 elsewhere, indicating that the probability of belonging to its own class is 1 and the probability of belonging to any other class is 0. The semantic representation set of all samples of the hyperspectral image is thus obtained.
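The per-feature probabilistic SVM step can be sketched with scikit-learn's `SVC(probability=True)`; the hyperparameters here are library defaults, not the cross-validated values of the patent, and the synthetic data is purely illustrative:

```python
import numpy as np
from sklearn.svm import SVC

def semantic_representations(views_train, y_train, views_test):
    """One probabilistic RBF-SVM per feature view (spectral, Gabor, DMP...).
    Each model maps a test sample to a vector of per-class probabilities -
    its semantic representation under that feature description."""
    semantics = []
    for Xtr, Xte in zip(views_train, views_test):
        clf = SVC(kernel='rbf', probability=True, random_state=0)
        clf.fit(Xtr, y_train)
        semantics.append(clf.predict_proba(Xte))   # shape (m, c)
    return semantics

rng = np.random.default_rng(0)
y = np.array([0] * 10 + [1] * 10)
views_tr = [rng.random((20, 5)) + y[:, None],      # "feature view 1"
            rng.random((20, 3)) + y[:, None]]      # "feature view 2"
views_te = [rng.random((4, 5)), rng.random((4, 3)) + 1.0]
S = semantic_representations(views_tr, y, views_te)
```

Each returned matrix has one row per test sample, rows summing to 1, matching the probability-vector semantics described above.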
Step 4. For each sample in the unlabelled test set Y, construct the local adaptive neighbour set B and the non-local similar-structure neighbour set C.
4a) The original hyperspectral image is reduced in dimension by principal component analysis (PCA), and the first-principal-component image (a grey image) is selected as the base image. The superpixel number LP is set, and entropy-rate superpixel segmentation is applied to this image, giving LP superpixel blocks.
4b) A local adaptive window parameter Wlocal is set, and the local adaptive neighbour set of each test sample is constructed from the superpixel blocks obtained in step 4a). Taking test sample yj as an example: if a sample n lies in the Wlocal x Wlocal square window centred on yj and belongs to the same superpixel block Pu as yj, then sample n is a local adaptive neighbour of yj, n ∈ Bj. Proceeding in this way gives the local adaptive neighbour set of each test sample.
4c) The non-local structure window parameter Wnonlocal and the number K of non-local neighbours are set. For the original hyperspectral image, where M denotes the number of all samples, mean pooling is performed within an adaptive window of size Wnonlocal x Wnonlocal — constructed in the same way as the adaptive window in step 4b), but with parameter Wnonlocal — giving the local structural information of each sample.
4d) Using the structural information obtained in step 4c), the similarity between the local structural information of each test sample and that of all other points is computed. For each sample yj, j = 1, 2, ..., m in the test set, its structural information is compared with the structural information of all other samples, and the similarity between the two is computed with the following formula:
where the metric is the geodesic distance and the square-root operation is applied element-wise to each row of the column vector x. Computing the similarity between every two samples yields a similarity matrix SD; the matrix is symmetric, i.e., SD(j, q) = SD(q, j), and SD(j, q) represents the structural similarity between test sample yj and sample q.
4e) According to the similarity matrix SD obtained in step 4d), the K most similar samples of each test sample are selected as its non-local neighbours. Taking test sample yj as an example, the j-th column of the similarity matrix SD is selected, each of whose elements represents the similarity between yj and one sample; excluding the sample itself (whose similarity to itself is the largest), the K samples with the largest values are selected as the non-local neighbours of yj, giving the non-local neighbour set Cj of yj. Proceeding in this way gives the non-local neighbour set of each test sample.
4f) At the same time, according to the magnitude of the similarity values, the K non-local neighbours are given different weights, so that more similar neighbours receive higher weights while, in contrast, the weights of less similar neighbours are reduced, improving the reasonableness and adaptivity of the non-local spatial constraint. The weights are computed as follows (where γ is the Gaussian kernel parameter):
Step 5. Using the semantic representation of each sample computed in Step 3 and the corresponding local and non-local neighbour sets from Step 4, construct the denoising Markov random field model, in which the local neighbour set enters the model as the local spatial constraint and the non-local neighbour set as the non-local spatial constraint. The iterated conditional modes (ICM) method is used to minimize each local energy, i.e., the energy of the clique containing each sample, thereby achieving the goal of minimizing the global energy, and the denoised semantic representation of each test sample is computed.
5a) Taking test sample yj as an example, the V semantic representations of yj, together with the semantic representations of all samples in its local and non-local neighbour sets, are input into the denoising Markov local energy function. The energy of the clique containing test sample yj is:
In formula (3), the first term on the right is the self-constraint term, the second is the local spatial constraint term, and the third is the non-local spatial constraint term; note that all constraint terms in this formula are constraints under multiple semantics. Minimizing this function with gradient descent gives the first-order denoised semantic representation of test sample yj. Traversing all test samples in the same way gives the first-order denoised semantic representations of all test samples, while the semantic representations of the training samples remain unchanged; the first-order denoised semantic representation set of all samples is thus obtained.
5b) Because the iterated conditional modes (ICM) method is a progressive optimization process, a single iteration cannot complete the minimization of the global energy or reach a converged result, so the first-order denoised semantic representation set obtained in step 5a) is fed back into the denoising Markov random field model for further iterative optimization. A maximum iteration number Tmax is set. Taking test sample yj as an example, its first-order denoised semantic representation, together with the first-order denoised semantic representations of the samples in its local and non-local neighbour sets, is input into the denoising Markov random field local energy function; minimizing this energy function gives the second-order denoised semantic representation of yj. Traversing all test samples in the same way — noting that when computing denoised semantic representations of any order, the semantic representations of all training samples remain unchanged as 0-1 coded column vectors — gives the second-order denoised semantic representations of all samples. Since the fusion of the multiple semantics was completed in step 5a) of this example, the energy function at this point no longer needs to fuse multiple semantics, so the local energy function simplifies to:
The three terms on the right of formula (4) have the same meaning as in formula (3) but no longer perform multi-semantic fusion, so each term is a constraint under a single semantic. It can be seen from the formula that, given the t-th order denoised semantic representations of test sample yj and of the samples in its local and non-local neighbour sets, minimizing this function gives the (t+1)-th order denoised semantic representation of yj; obtaining the second-order representation from the first-order representation, as above, is the special case t = 1. The iteration loops until t = Tmax - 1, giving the Tmax-th order denoised semantic representation of test sample yj.
Repeating this process for every test sample gives the Tmax-th order denoised semantic representation of each test sample, and hence the Tmax-th order denoised semantic representations of all samples.
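The ICM control flow just described can be sketched as follows. `avg_update` is a toy stand-in for the gradient-descent local-energy minimiser; what the sketch shows is the iteration structure: training vectors stay fixed as 0-1 codes, and every test vector is updated each round from the previous round's neighbour values:

```python
import numpy as np

def avg_update(s, B, C, w):
    """Toy local-energy minimiser: weighted average of the sample's own
    vector, its local neighbours and its weighted non-local neighbours."""
    acc = s + B.sum(axis=0) + w @ C
    return acc / acc.sum()

def icm(semantics, is_train, nbrs_B, nbrs_C, weights_C, update_fn, T_max=3):
    S = semantics.copy()
    for _ in range(T_max):
        S_new = S.copy()
        for j in range(len(S)):
            if is_train[j]:
                continue                  # training semantics stay fixed
            S_new[j] = update_fn(S[j], S[nbrs_B[j]], S[nbrs_C[j]],
                                 weights_C[j])
        S = S_new                         # (t+1)-th order from t-th order
    return S

S0 = np.array([[1.0, 0.0],               # training sample, 0-1 coded
               [0.60, 0.40],
               [0.45, 0.55]])
out = icm(S0, [True, False, False],
          nbrs_B=[[], [0], [1]], nbrs_C=[[], [2], [0]],
          weights_C=[None, np.array([1.0]), np.array([1.0])],
          update_fn=avg_update, T_max=2)
```

All updates in one round read the previous round's vectors, matching the text's computation of the (t+1)-th order representations from the t-th order ones.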
Step 6. In the present invention, the semantic representation of a sample is the column vector of probabilities that the sample belongs to each class. Therefore, from the Tmax-th order denoised semantic representations obtained in Step 5, for each test sample yj the position of the largest element of its semantic representation vector is selected as the class label of yj; proceeding in this way gives the class labels of all test samples.
The present invention proposes a hyperspectral image classification method based on local and non-local multi-feature semantics, in which a novel Markov random field — the denoising Markov random field — is constructed. By minimizing the global energy of the Markov random field with the iterated conditional modes (ICM) method, the model reasonably fuses the multi-feature semantics under local and non-local spatial constraints, obtains more comprehensive, less noisy semantic representations for all test samples, and thereby obtains the predicted class of every test sample. This method makes full use of both kinds of information: the multi-feature information describes the hyperspectral image more comprehensively, while the local and non-local spatial constraints fully exploit the mutual relationships between hyperspectral image pixels. Finally, compared with traditional classification methods, the proposed method based on local and non-local multi-feature semantics improves the accuracy, robustness and spatial consistency of the classification results.
The effect of the present invention can be further illustrated by the following simulation experiments:
Embodiment 6:
The method based on local and non-local multi-feature semantics for hyperspectral image classification is as in Embodiments 1-5.
1. Simulation conditions:
The simulation experiments use the Indian Pines image acquired in June 1992 over northwestern Indiana by the airborne visible/infrared imaging spectrometer AVIRIS of NASA's Jet Propulsion Laboratory, as shown in Fig. 2a. The image size is 145 x 145 with 220 bands in total; after removing noisy bands and bands of atmospheric and water absorption, 200 bands remain. Referring to Fig. 2b, after manual labelling there are 16 classes of ground objects in total.
The simulation experiments are carried out with MATLAB 2014a on a WINDOWS 7 system with an Intel Core(TM) i5-4200H CPU at 2.80 GHz and 12 GB of memory.
Table 1 gives the 16 classes of data in the Indian Pines image.
Table 1. The 16 classes of data in the Indian Pines image
Class | Name | Number of samples | Number of training samples
1 | Alfalfa | 46 | 3
2 | Corn-notill | 1428 | 72
3 | Corn-mintill | 830 | 42
4 | Corn | 237 | 12
5 | Grass-pasture | 483 | 25
6 | Grass-trees | 730 | 37
7 | Grass-pasture-mowed | 28 | 2
8 | Hay-windrowed | 478 | 24
9 | Oats | 20 | 1
10 | Soybean-notill | 972 | 49
11 | Soybean-mintill | 2455 | 123
12 | Soybean-clean | 593 | 30
13 | Wheat | 205 | 11
14 | Woods | 1265 | 64
15 | Buildings-Grass-Trees-Drives | 386 | 20
16 | Stone-Steel-Towers | 93 | 5
2. Simulation content and analysis:
The present invention and three existing methods are used to classify the hyperspectral image Indian Pines. The three existing methods are: the support vector machine with composite kernel (SVM+CK), the support vector machine combined with a Markov random field (SVM+MRF), and joint sparse representation (SOMP). The method proposed in the present invention, which classifies the hyperspectral image using multi-feature semantic representations and spatial constraints, is abbreviated NE-MFAS. To verify the effectiveness of the proposed method, two simplified versions within the framework of the present invention, MFAS and MFS, are added to the simulation experiments. MFAS constructs only the local neighbour set in step (4) of Embodiment 6 and does not use the information of the non-local neighbour set, in order to verify the influence of the non-local neighbour-set information on the classification results; its classification results are shown in Fig. 3e. MFS uses neither the non-local neighbour-set information in step (4) of Embodiment 6 nor the superpixel information when constructing the local neighbour set, selecting only the square window as the local neighbour set, in order to verify the influence of the superpixel constraint on the classification results; its classification results are shown in Fig. 3d. The penalty factor and kernel parameter of the support vector machine (SVM) method are determined by 5-fold cross validation; the sparsity parameter of the SOMP method is set to 30 and its spatial scale parameter to 7 x 7. The superpixel number LP used in the present invention is set to 75, the local window Wlocal size to 7 x 7, the non-local structure window Wnonlocal size to 21 x 21, the non-local neighbour number K to 30, the Gaussian kernel parameter γ to 0.05, and the maximum iteration number Tmax to 3. Training samples are selected as follows: 5% of the pixels are randomly selected from the 16 classes of data as training samples, and the remaining 95% serve as test samples.
The classification result maps of each method, shown in Fig. 3, are obtained with exactly the same training samples. Figs. 3a-3c are the classification result maps of the three existing methods SVM+CK, SVM+MRF and SOMP respectively; it can be seen that these three classification methods perform well in most regions. The classification result map of SOMP, Fig. 3c, has many discrete noise points and poor spatial consistency. The classification result map of SVM+MRF, Fig. 3b, has almost no discrete noise points compared with Fig. 3c, but shows substantial over-smoothing and more serious edge erosion. The classification result map of SVM+CK, Fig. 3a, is the best of the three compared with Figs. 3b and 3c, but still shows some discrete noise points and considerable edge erosion. Figs. 3d-3f correspond to the classification result maps of MFS, MFAS and NE-MFAS respectively. Compared with Figs. 3a-3c, the method proposed by the present invention performs better than the existing methods in local-region continuity, edge preservation and within-region homogeneity. The classification result map of MFS, Fig. 3d, has almost no discrete noise points compared with the three traditional methods, and the spatial consistency of most regions is good, but a certain edge-erosion phenomenon remains. MFAS adds the superpixel constraint information to MFS; referring to Fig. 3e, compared with Fig. 3d the classification result map of MFAS is greatly improved in edge preservation, with hardly any edge erosion, but in some small-sample class regions — the yellow rectangular region on the right side of the image — it can hardly classify correctly. After the non-local spatial constraint is added to MFAS, the classification result map of NE-MFAS of the present invention, Fig. 3f, while maintaining good spatial consistency compared with Figs. 3d and 3e, classifies the small-sample class regions — the yellow rectangular region on the right side of the image — almost entirely correctly.
The progression from Fig. 3d to Fig. 3f shows that the superpixel-based adaptive local spatial constraint proposed in the present invention preserves the edge information of the image very well and enhances spatial consistency, and that the proposed non-local spatial constraint further raises the upper bound of classification accuracy, performing especially well on classes with relatively few samples.
Embodiment 7:
The hyperspectral image classification method based on local and non-local multiple-feature semantics is the same as in embodiments 1-6, and the simulation conditions and content are the same as in embodiment 6. The differences between the classification maps described in embodiment 6 can only be judged by eye; here, a quantitative comparison of classification accuracy demonstrates the advantage of the method of the present invention over the other methods from the data. All experiments are repeated 10 times and the results averaged; note that the training samples of every run are drawn at random, so the training sets of different runs are not identical.
Table 2 compares the experimental accuracy of the three existing methods, the two simplified versions of the present invention, and the method of the present invention:
Table 2: Accuracy of the three existing methods and the present invention on the Indian Pines image
Method Classification accuracy (%)
SVM+CK 91.59±0.94
SVM+MRF 84.91±1.43
SOMP 87.55±1.24
Simplified version 1 of the invention (MFS) 96.87±0.79
Simplified version 2 of the invention (MFAS) 97.24±0.64
The invention (NE-MFAS) 98.20±0.58
Table 2 shows that, over the 10 repeated experiments, the mean accuracy of the method of the invention is much higher than that of the three existing methods, and its standard deviation is lower; the method of the present invention is therefore optimal in both classification accuracy and robustness. On the one hand, the method fully combines the large amount of discriminative information contained in the multiple features; on the other hand, it is far superior to the other three spatial-information-based methods at exploiting spatial information. From MFS to MFAS the classification accuracy improves by about 0.4%, showing that the superpixel constraint has a positive effect on the extraction of spatial information: compared with a traditional square window, the adaptive local neighbour set designed in the present invention rejects the neighbour points that would have a negative effect and extracts richer local spatial information, including edge and structural information. From MFAS to NE-MFAS the accuracy improves by about 1%, showing that the non-local neighbours contain a large amount of valuable information whose extraction further raises the upper bound of classification accuracy. With only 5% of the samples used for training, the classification accuracy of the invention reaches 98.20% with a standard deviation of only 0.58%, better than most existing methods.
In conclusion disclosed by the invention a kind of based on part and non local multiple features semanteme classification hyperspectral imagery side Method.It is low mainly to solve existing hyperspectral image classification method accuracy, poor robustness, the weak problem of Space Consistency.It is walked It suddenly include: to be utilized respectively various features extracting method to original high spectrum image to extract different features;By high spectrum image Each pixel indicated with multiple feature vectors, selected label training set and test set;There to be the input of label training set Into probability support vector machine, multiple Feature Mappings of each pixel in test set to semantic space, each is obtained Multiple semantic expressiveness of test sample;Spatial information and multi-semantic meaning information are introduced simultaneously using noise reduction markov field model, most The energy of the smallization Markov Random Fields obtains the noise reduction semantic expressiveness of each test sample, and then obtains the class of entire test set Other information.The perfection of semantic expressiveness and markov field model that the present invention is obtained using support vector machine agrees with, and adequately ties Multicharacteristic information and part, non local airspace contextual information are closed, it is more accurate finally to carry out to high spectrum image Ground classification.The comparison for simplifying version with two the method for the present invention, demonstrates the validity of the method for the present invention.With existing three kinds of methods Comparison illustrates that the method for the present invention under Small Sample Size, has pinpoint accuracy, high robust and outstanding space are consistent Property.It can be used for military detection, mapping, vegetation investigation, mineral detection etc..
Parts of this embodiment that are not described in detail are conventional means well known in the field and are not recounted here one by one. The foregoing examples are merely illustrative of the present invention and do not limit its scope of protection; all designs identical or similar to the present invention fall within the scope of protection of the present invention.

Claims (4)

1. A hyperspectral image classification method based on local and non-local multiple-feature semantics, characterized by comprising the following steps:
(1) Input the image and extract its multiple features: input the hyperspectral image H = {h_q}, q = 1, 2, …, M, where M is the total number of pixels in the hyperspectral image and h_q is a column vector holding the reflectance of pixel q in each band. Extract multiple features from the hyperspectral image, including: the original spectral feature, Gabor texture features, and differential morphological features. The hyperspectral image contains c classes of pixels, of which N pixels are labelled and m pixels are unlabelled; each pixel of the image is a sample, and each sample consists of V feature vectors, each representing the sample under a different feature description, where V is the number of feature types;
(2) Divide the hyperspectral data set into a training set and a test set: the N labelled pixels serve as training samples and form the training set X = {x_i}, i = 1, 2, …, N, with corresponding class-label set {l_i}; the m unlabelled pixels serve as test samples and form the test set Y = {y_j}, j = 1, 2, …, m, where x_i is the i-th training sample, y_j is the j-th test sample, l_i is the class label of the i-th training sample, D_v is the dimensionality of the v-th feature type, and R denotes the real field;
(3) Map the multiple features of all samples to corresponding semantic representations with probabilistic support vector machines (SVMs): using the V feature vectors of all samples in the training set X and the corresponding class-label set {l_i}, construct V probabilistic SVM classifiers whose kernel function is the radial Gaussian kernel, with the kernel parameter and the penalty parameter obtained by multi-fold cross-validation. Input the V feature vectors of all samples in the test set Y into the V corresponding classifiers to obtain, under each feature description, the probability that each test sample y_j, j = 1, 2, …, m, belongs to each class; these probabilities serve as the semantic representations of each test sample. For each training sample x_i, i = 1, 2, …, N, the probability of its own class l_i is 1 and the probability of every other class is 0, giving the multiple semantic representations corresponding to the multiple features, in which the l_i-th row is 1 and the other rows are 0. The multiple semantic representations of all samples in the hyperspectral image are thus obtained;
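As an illustrative sketch only (not the patented implementation), the semantic mapping of step (3) can be mimicked with scikit-learn's probabilistic SVM. The fixed `C` and `gamma` values below are placeholders for the cross-validated parameters the claim calls for, and class labels are assumed to be integers 0..c-1:

```python
import numpy as np
from sklearn.svm import SVC

def semantic_representations(train_feats, labels, test_feats, C=10.0, gamma=0.1):
    """For each of the V feature types, fit a probabilistic RBF-SVM and map
    every sample to a class-probability (semantic) vector.
    train_feats/test_feats: lists of V arrays, one per feature type."""
    n_classes = len(np.unique(labels))
    sem_train, sem_test = [], []
    for Xtr, Xte in zip(train_feats, test_feats):
        clf = SVC(kernel="rbf", C=C, gamma=gamma, probability=True)
        clf.fit(Xtr, labels)
        sem_test.append(clf.predict_proba(Xte))   # rows sum to 1
        # training samples keep a hard one-hot semantic vector (claim step 3)
        sem_train.append(np.eye(n_classes)[labels])
    return sem_train, sem_test
```

The one-hot training representations implement the claim's rule that a labelled sample has probability 1 for its own class and 0 elsewhere.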
(4) Construct the local and non-local neighbour sets of all samples in the test set: for each test sample y_j, j = 1, 2, …, m, construct its local adaptive neighbour set B_j and its non-local similar-structure neighbour set C_j. The local and non-local neighbour sets are constructed as follows:
4a) Apply principal component analysis to the hyperspectral image and extract the first principal component as the base image; set the superpixel number LP and perform entropy-rate superpixel segmentation to obtain LP superpixel blocks;
4b) Set the local window parameter W_local. For a test sample y_j, all samples that lie in the W_local × W_local square window centred on y_j and belong to the same superpixel block P_u as y_j constitute the local adaptive neighbour set B_j of y_j;
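Step 4b is a set intersection: the square window around the pixel, intersected with its superpixel. A minimal sketch, assuming `seg` is the label map produced by the segmentation of step 4a:

```python
import numpy as np

def local_adaptive_neighbors(seg, r, c, w=7):
    """Pixels inside the w×w window centred at (r, c) that also share the
    superpixel label of (r, c); the centre pixel itself is excluded."""
    rows, cols = seg.shape
    half = w // 2
    label = seg[r, c]
    nbrs = []
    for i in range(max(0, r - half), min(rows, r + half + 1)):
        for j in range(max(0, c - half), min(cols, c + half + 1)):
            if (i, j) != (r, c) and seg[i, j] == label:
                nbrs.append((i, j))
    return nbrs
```

Pixels in the window but outside the superpixel are rejected, which is how the adaptive neighbour set avoids mixing pixels across edges.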
4c) Set the non-local window parameter W_nonlocal and the non-local neighbour number K. For each sample h_q, q = 1, 2, …, M, of the original hyperspectral image, perform mean pooling in its W_nonlocal × W_nonlocal neighbourhood to obtain the structural information of all samples. For each test sample y_j, j = 1, 2, …, m, compare its structural information with the structural information of all other samples and compute the degree of similarity between the structural information of the samples:
where SGD denotes the geodesic distance, the square-root operation is applied to each row of the vector, and the value SGD_jq represents the degree of similarity between the structural information of test sample y_j and that of sample q. Find the K samples most similar to test sample y_j, i.e. the K samples with the smallest SGD values; they constitute the non-local similar-structure neighbour set C_j of y_j. At the same time, according to the SGD values, assign a different weight to each of the K non-local neighbours; the weight calculation formula is as follows:
where ω_jh is the weight of the h-th non-local similar-structure neighbour of sample y_j and γ is the Gaussian kernel parameter;
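A hedged sketch of step 4c. The patent's SGD (structural geodesic distance) formula is not reproduced in this text, so plain Euclidean distance between mean-pooled descriptors is used below as a stand-in, and the Gaussian weighting `exp(-d²/γ)` is likewise an assumption about the elided weight formula:

```python
import numpy as np

def nonlocal_neighbors(cube, r, c, w_nl=21, K=30, gamma=0.05):
    """Mean-pool every pixel's w_nl×w_nl neighbourhood to get a structural
    descriptor, then return the indices of the K most similar pixels plus
    normalised Gaussian weights. Euclidean distance stands in for SGD."""
    rows, cols, bands = cube.shape
    half = w_nl // 2
    pooled = np.empty_like(cube)
    for i in range(rows):                       # brute-force mean pooling,
        for j in range(cols):                   # kept simple for clarity
            patch = cube[max(0, i - half):i + half + 1,
                         max(0, j - half):j + half + 1]
            pooled[i, j] = patch.mean(axis=(0, 1))
    flat = pooled.reshape(-1, bands)
    d = np.linalg.norm(flat - pooled[r, c], axis=1)
    d[r * cols + c] = np.inf                    # exclude the query pixel
    idx = np.argsort(d)[:K]
    weights = np.exp(-d[idx] ** 2 / gamma)
    return idx, weights / weights.sum()         # weights normalised to 1
```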
(5) Construct the denoising Markov random field model, fuse the multiple semantic representations of each test sample, and denoise the semantic representations: for each test sample y_j, j = 1, 2, …, m, input the semantic representations of y_j, the semantic representations of all samples in the local adaptive neighbour set B_j, and the semantic representations of all samples in the non-local similar-structure neighbour set C_j into the local energy function, and minimize that energy function to obtain the first-order denoised semantic representation of y_j. At the same time keep the semantic representations of the training samples unchanged, obtaining the first-order denoised semantic representations of all samples of the hyperspectral image;
(6) Further iteratively optimize the first-order denoised semantic representations of all samples of the hyperspectral image: set the maximum iteration number T_max and let t be the current iteration index; for each test sample, take the t-th-order denoised semantic representations of y_j and of all samples in the sets B_j and C_j as the input of the local energy function of the denoising Markov random field, and minimize that energy function to obtain the (t+1)-th-order semantic representation of y_j. Meanwhile continue to keep the semantic representations of the training samples unchanged, thereby obtaining the (t+1)-th-order denoised semantic representations of all samples of the hyperspectral image. Iterate until t = T_max − 1 and stop, obtaining the T_max-th-order denoised semantic representations of all samples of the hyperspectral image, i.e. the final semantic representations;
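Since the exact MRF energy functions of steps (5)-(6) are not reproduced in this text, the sketch below illustrates only the overall iteration scheme: a fixed-point update that averages each test sample's semantic vector with its local and non-local neighbours (the fidelity weight of 1 and the `beta`/`eta` weights are illustrative assumptions), keeping the training rows fixed and repeating `T_max` times:

```python
import numpy as np

def denoise_semantics(S, local_nbrs, nonlocal_nbrs, nl_weights,
                      train_mask, beta=1.0, eta=1.0, T_max=3):
    """S: (n_samples, c) semantic vectors. local_nbrs[q]: list of indices;
    nonlocal_nbrs[q]: index array; nl_weights[q]: weights summing to 1.
    NOT the patented energy minimization -- an illustrative smoothing."""
    S = S.copy()
    for _ in range(T_max):
        S_new = S.copy()
        for q in range(len(S)):
            if train_mask[q]:
                continue                      # labelled samples stay fixed
            num, den = S[q].copy(), 1.0
            if local_nbrs[q]:
                num += beta * S[local_nbrs[q]].mean(axis=0)
                den += beta
            if len(nonlocal_nbrs[q]):
                num += eta * (nl_weights[q][:, None] * S[nonlocal_nbrs[q]]).sum(axis=0)
                den += eta
            S_new[q] = num / den              # rows remain probability vectors
        S = S_new
    return S
```

Because every term in the update is itself a probability vector, each denoised row still sums to 1 and can be fed directly to the argmax decision of step (7).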
(7) Use the final semantic representations to obtain the class of every sample in the test set: for each test sample y_j, j = 1, 2, …, m, the final semantic representation is the column vector of the probabilities that y_j belongs to each class; select the label of the position of the largest element of that vector as the class of y_j, thereby obtaining the class-prediction set of the test set and completing the classification of the hyperspectral image.
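Step (7) is a per-sample argmax over the final semantic (probability) vectors:

```python
import numpy as np

def predict_labels(final_semantics):
    """Each row of final_semantics is a class-probability vector; the
    predicted class is the index of its largest element."""
    return np.argmax(final_semantics, axis=1)
```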
2. The hyperspectral image classification method based on local and non-local multiple-feature semantics according to claim 1, characterized in that the Gabor texture features and the differential morphological features among the multiple features of step (1) are respectively expressed as follows:
Gabor texture features: with the first 3 principal components of the principal-component-analysed hyperspectral image as base images, perform Gabor transforms in 16 orientations and at 5 scales; each base image yields an 80-dimensional texture feature, and stacking them gives Gabor texture features with a total dimensionality of 240;
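A sketch of the Gabor texture stack for one base image, using `skimage.filters.gabor`. The claim fixes 16 orientations and 5 scales but not the scale values, so the frequency list below is an assumption:

```python
import numpy as np
from skimage.filters import gabor

def gabor_features(img, n_orient=16, frequencies=(0.05, 0.1, 0.2, 0.3, 0.4)):
    """80-dim Gabor texture stack (16 orientations × 5 frequencies) for one
    base image; the frequency values are assumed, not taken from the patent."""
    feats = []
    for f in frequencies:
        for k in range(n_orient):
            theta = k * np.pi / n_orient
            real, imag = gabor(img, frequency=f, theta=theta)
            feats.append(np.sqrt(real ** 2 + imag ** 2))  # magnitude response
    return np.stack(feats, axis=-1)  # (rows, cols, n_orient * len(frequencies))
```

Applying this to the first 3 principal components and concatenating along the last axis gives the 240-dimensional stack described in the claim.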
Differential morphological features: with the first 3 principal components of the principal-component-analysed hyperspectral image as base images, perform opening operations at 5 scales and take the differences between them, and closing operations at 5 scales and take the differences between them; each base image yields an 8-dimensional differential feature, and stacking them gives differential morphological features with a total dimensionality of 24.
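A sketch of the differential morphological profile for one base image. The claim fixes 5 scales but not the structuring elements, so the disk radii below are an assumption; differences between openings (and closings) at successive scales give 4 + 4 = 8 channels per base image:

```python
import numpy as np
from skimage.morphology import disk, opening, closing

def differential_morphological_profile(img, radii=(1, 2, 3, 4, 5)):
    """8-dim differential morphological profile for one base image:
    absolute differences between openings (and closings) at successive
    scales. Disk structuring elements of the given radii are assumed."""
    opens = [opening(img, disk(r)) for r in radii]
    closes = [closing(img, disk(r)) for r in radii]
    feats = [np.abs(opens[i + 1] - opens[i]) for i in range(len(radii) - 1)]
    feats += [np.abs(closes[i + 1] - closes[i]) for i in range(len(radii) - 1)]
    return np.stack(feats, axis=-1)  # (rows, cols, 8)
```

Stacking the profiles of the first 3 principal components yields the 24-dimensional feature of the claim.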
3. The hyperspectral image classification method based on local and non-local multiple-feature semantics according to claim 1, characterized in that the first-order local energy function used in step (5) to compute the denoised semantic representation of each test sample y_j is:
The energy function is minimized using the gradient descent method, with gradient:
where the above expression denotes the k-th element of the gradient, and:
4. The hyperspectral image classification method based on local and non-local multiple-feature semantics according to claim 1, characterized in that, in the further iterative optimization of the first-order denoised semantic representation of each test sample y_j in step (6), the (t+1)-th-order local energy function is:
CN201611119573.1A 2016-12-07 2016-12-07 Based on local and non local multiple features semanteme hyperspectral image classification method Active CN106529508B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611119573.1A CN106529508B (en) 2016-12-07 2016-12-07 Based on local and non local multiple features semanteme hyperspectral image classification method


Publications (2)

Publication Number Publication Date
CN106529508A CN106529508A (en) 2017-03-22
CN106529508B true CN106529508B (en) 2019-06-21

Family

ID=58342718

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611119573.1A Active CN106529508B (en) 2016-12-07 2016-12-07 Based on local and non local multiple features semanteme hyperspectral image classification method

Country Status (1)

Country Link
CN (1) CN106529508B (en)


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101770584A (en) * 2009-12-30 2010-07-07 重庆大学 Extraction method for identification characteristic of high spectrum remote sensing data
CN102542288A (en) * 2011-11-28 2012-07-04 北京航空航天大学 Construction and merging classification method for high spectrum data multi-characteristic space


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Hyperspectral image band selection based on the artificial bee colony algorithm; Wang Liguo et al.; Journal of Harbin Institute of Technology; 2015-11-30; pp. 82-88
Optimized selection of hyperspectral image bands using sparse non-negative matrix factorization clustering; Shi Beiqi et al.; Acta Geodaetica et Cartographica Sinica; 2013-06-30; pp. 353-366



Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant