CN106529508A - Local and non-local multi-feature semantics-based hyperspectral image classification method - Google Patents


Publication number: CN106529508A (application CN201611119573.1A)
Authority: CN (China)
Prior art keywords: local, sample, samples, semantic, image
Legal status: Granted
Application number: CN201611119573.1A
Other languages: Chinese (zh)
Other versions: CN106529508B (en)
Inventor
张向荣
焦李成
高泽宇
冯婕
白静
侯彪
马文萍
李阳阳
Current Assignee: Xidian University
Original Assignee: Xidian University
Application events:
  • Application filed by Xidian University
  • Priority to CN201611119573.1A
  • Publication of CN106529508A
  • Application granted
  • Publication of CN106529508B
  • Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/10: Terrestrial scenes
    • G06V20/13: Satellite images
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/2155: Generating training patterns characterised by the incorporation of unlabelled data, e.g. multiple instance learning [MIL], semi-supervised techniques using expectation-maximisation [EM] or naïve labelling
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/24: Classification techniques
    • G06F18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411: Classification techniques based on the proximity to a decision surface, e.g. support vector machines
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/25: Fusion techniques
    • G06F18/254: Fusion techniques of classification results, e.g. of results related to same input data
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G06V10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443: Local feature extraction by matching or filtering
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/10: Terrestrial scenes
    • G06V20/194: Terrestrial scenes using hyperspectral data, i.e. more or other wavelengths than RGB

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Astronomy & Astrophysics (AREA)
  • Remote Sensing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a hyperspectral image classification method based on local and non-local multi-feature semantics. The method mainly addresses the low classification accuracy, poor robustness, and weak spatial consistency of prior-art hyperspectral image classification. It comprises the steps of: inputting an image and extracting multiple features from it; dividing the data set into a training set and a test set; mapping every feature of every sample to a corresponding semantic representation with a probabilistic support vector machine; constructing local and non-local neighbor sets; building a denoising Markov random field model; performing semantic fusion and denoising; iteratively optimizing the semantic representations; and obtaining the category of every sample from its semantic representation, thereby completing accurate classification of the hyperspectral image. The technical scheme fuses multiple features and fully exploits the spatial information of the image. Under small-sample conditions it achieves high classification accuracy, good robustness, and excellent spatial consistency. The method can be applied to military reconnaissance, map plotting, vegetation surveying, mineral exploration, and similar fields.

Description

Hyperspectral image classification method based on local and non-local multi-feature semantics
Technical field
The invention belongs to the technical field of image processing and relates to machine learning and hyperspectral image processing, specifically to a hyperspectral image classification method based on local and non-local multi-feature semantics, used for the classification and recognition of different ground objects in hyperspectral images.
Background art
Hyperspectral remote sensing has increasingly become a research hotspot in Earth observation over the past few decades. Using imaging spectrometers with nanometer-level spectral resolution, hyperspectral remote sensing images the Earth's surface in tens or hundreds of bands simultaneously, obtaining the continuous spectrum of ground objects and acquiring their spatial, radiometric, and spectral information synchronously; it therefore has the combined image-and-spectrum characteristic. Since different ground objects have different reflectance spectra, the high spectral resolution of hyperspectral images provides extremely important discriminative information for distinguishing different ground objects or targets. Terrain classification of hyperspectral images has promising applications in geological surveying, crop disaster monitoring, atmospheric pollution assessment, military target strike, and other fields.
Hyperspectral remote sensing image classification is the process of assigning each pixel of a hyperspectral remote sensing image to a category. Because the images obtained by hyperspectral remote sensing contain rich spatial, radiometric, and spectral information, they provide a large amount of discriminative information for the classification task, but great challenges and difficulties remain. First, the data volume is large, with at least tens of bands, which makes the computational complexity very high and also challenges storage, transmission, and display. Second, the dimensionality is too high; redundant data and noise are present and can reduce classification accuracy. Finally, there are many bands and the correlation between bands is high, so the number of required training samples increases; if training samples are insufficient, fitting problems common in machine learning occur and degrade subsequent classification accuracy.
Among traditional hyperspectral classification methods, classifiers based on spectral information, such as the support vector machine, can alleviate the above difficulties to some degree, but their classification accuracy is relatively low and the spatial consistency of the resulting classification maps is poor, which cannot meet application requirements.
In recent years, at the feature level, researchers have proposed many classification methods based on multi-feature information, which improve classification accuracy to some extent. These methods combine multi-feature information mainly in two ways. The first is feature-level fusion: multiple feature vectors are directly concatenated and used as the classifier input. The second is decision-level fusion: the multiple feature vectors are input into classifiers separately and the classification results are then merged. In both fusion modes some of the multi-feature information is lost, so it is not fully exploited.
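As a hedged illustration of the two fusion modes described above (this is not code from the patent), the following sketch contrasts feature-level concatenation with a simple decision-level average of per-feature class probabilities; the arrays and the stand-in "classifier" are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 4 samples, two feature types of different dimensions.
spectral = rng.random((4, 6))   # e.g. a spectral feature, 6 dims
texture = rng.random((4, 3))    # e.g. a texture feature, 3 dims

# Feature-level fusion: concatenate the vectors before classification.
fused = np.concatenate([spectral, texture], axis=1)
assert fused.shape == (4, 9)

# Decision-level fusion: classify per feature, then merge the results.
# Stand-in "classifier": row-normalized scores over c = 3 classes.
def fake_class_probs(features, c=3):
    scores = np.abs(features[:, :c]) + 1e-9
    return scores / scores.sum(axis=1, keepdims=True)

p_spectral = fake_class_probs(spectral)
p_texture = fake_class_probs(texture)
p_merged = (p_spectral + p_texture) / 2   # one simple merging rule
assert np.allclose(p_merged.sum(axis=1), 1.0)  # rows remain probabilities
```

Either way, the merge happens before or after a single classifier pass, which is where the information loss described above can occur.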
At the model level, owing to the importance of spatial information in hyperspectral images, many classification methods that add spatial constraints have been proposed, such as the joint sparse representation model, the composite-kernel support vector machine, and Markov random field models. These methods exploit the local spatial information in hyperspectral images, greatly improving classification accuracy. However, on the one hand, these methods handle local spatial information rather coarsely, mostly using a traditional square window as the local neighborhood of a pixel, which harms the spatial consistency of the classification maps and hinders further accuracy gains. On the other hand, hyperspectral images contain a large amount of redundant non-local spatial information that these methods do not exploit effectively, so the upper limits of their accuracy and robustness are relatively low.
Therefore, how to extract multiple useful features from high-dimensional redundant data, combine them reasonably, and effectively exploit the abundant spatial information (both local and non-local) together with the small amount of precious label information, so as to improve the accuracy, robustness, and spatial consistency of hyperspectral image classification under small-sample conditions, is a technical problem to be solved.
Summary of the invention
To address the relatively low accuracy, poor robustness, and weak spatial consistency of classification maps in the prior art, the present invention proposes a hyperspectral image classification method based on local and non-local multi-feature semantics that reasonably combines multiple kinds of feature information and effectively exploits local and non-local spatial information as well as the associated label information.
The present invention is a hyperspectral image classification method based on local and non-local multi-feature semantics, characterized by comprising the following steps:
(1) Input the image and extract multiple features: input a hyperspectral image H = {h_q | q = 1, …, M}, where M is the total number of pixels in the hyperspectral image and h_q is a column vector containing the reflectance values of pixel q in every band. Extract multiple features from the hyperspectral image, including the original spectral feature, Gabor texture features, and differential morphological profiles. The hyperspectral image contains c classes of pixels, among which N pixels are labeled and m are unlabeled. Each pixel of the image is a sample, and each sample consists of V feature vectors representing the sample under the different feature descriptors, where V is the number of feature types.
(2) Divide the hyperspectral data set into a training set and a test set: the N labeled pixels serve as training samples and form the training set X = {x_i}_{i=1}^N with corresponding class-label set L = {l_i}_{i=1}^N, and the m unlabeled pixels serve as test samples and form the test set Y = {y_j}_{j=1}^m. Here x_i denotes the i-th sample of the training set, y_j the j-th sample of the test set, and l_i the class label of the i-th training sample; the v-th feature vector of a sample lies in R^{D_v}, where D_v is the dimension of the v-th feature type and R is the real field.
(3) Map the multiple features of all samples to corresponding semantic representations with probabilistic support vector machines (SVMs): using the V feature vectors of all training samples and the corresponding class-label set, build V probabilistic SVM classifiers whose kernel is the radial Gaussian kernel, the kernel and penalty parameters being obtained by multi-fold cross-validation. Input the V feature vectors of all test samples into the V corresponding classifiers, obtaining for each feature descriptor the probability that each test sample y_j, j = 1, 2, …, m belongs to each class, which serves as a semantic representation of that test sample. For each training sample x_i, i = 1, 2, …, N, the probability of its own class l_i is 1 and the probability of every other class is 0, giving one-hot semantic representations for each of its features, in which the l_i-th row is 1 and all other rows are 0. The multiple semantic representations of all samples of the hyperspectral image are thus obtained.
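A minimal sketch of step (3), under stated assumptions: scikit-learn's `SVC` with `probability=True` stands in for the probabilistic SVM (the patent tunes kernel and penalty parameters by cross-validation, omitted here), the data and dimensions are synthetic, and all variable names (`dims`, `semantic`, `train_semantic`) are hypothetical.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
c, m, V = 3, 12, 2                 # classes, test samples, feature types
dims = [8, 5]                      # per-feature-type dimensions (hypothetical)
labels = np.repeat(np.arange(c), 10)   # l_i: 10 labeled samples per class
N = len(labels)

# Synthetic per-feature training/test matrices, shifted by class for separability.
train = [rng.random((N, d)) + labels[:, None] for d in dims]
test = [rng.random((m, d)) for d in dims]

semantic = []                      # one (m, c) probability matrix per feature type
for v in range(V):
    clf = SVC(kernel="rbf", C=1.0, probability=True).fit(train[v], labels)
    semantic.append(clf.predict_proba(test[v]))  # class probabilities = "semantics"

# Training samples get exact one-hot semantics, since their labels are known.
train_semantic = np.eye(c)[labels]

assert all(s.shape == (m, c) for s in semantic)
assert np.allclose(train_semantic.sum(axis=1), 1.0)
```

Each test sample thus carries V semantic vectors, one per feature type, which later steps fuse.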
(4) Construct the local and non-local neighbor sets of all test samples: for each test sample y_j, j = 1, 2, …, m, construct its local adaptive neighbor set B_j and its non-local similar-structure neighbor set C_j.
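The exact construction of B_j and C_j is not specified in this part of the document, so the following is only a plausible sketch under assumed definitions: local adaptive neighbors as the spectrally closest pixels inside a small window, and non-local similar-structure neighbors as the most spectrally similar pixels anywhere in the image. The toy cube, window size, and neighbor counts are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)
H, W, bands = 8, 8, 4
img = rng.random((H, W, bands))            # toy hyperspectral cube

def local_adaptive_neighbors(r, c0, win=1, k=3):
    """k spectrally closest pixels inside a (2*win+1)^2 window (sketch)."""
    center = img[r, c0]
    cand = []
    for dr in range(-win, win + 1):
        for dc in range(-win, win + 1):
            rr, cc = r + dr, c0 + dc
            if (dr, dc) != (0, 0) and 0 <= rr < H and 0 <= cc < W:
                cand.append(((rr, cc), np.linalg.norm(img[rr, cc] - center)))
    cand.sort(key=lambda item: item[1])
    return [pos for pos, _ in cand[:k]]

def nonlocal_similar_neighbors(r, c0, k=3):
    """k most spectrally similar pixels anywhere in the image (sketch)."""
    d = np.linalg.norm(img.reshape(-1, bands) - img[r, c0], axis=1)
    order = np.argsort(d)
    return [divmod(i, W) for i in order if i != r * W + c0][:k]

B = local_adaptive_neighbors(4, 4)
C = nonlocal_similar_neighbors(4, 4)
assert len(B) == 3 and len(C) == 3
assert (4, 4) not in B and (4, 4) not in C
```

The key contrast is that B is constrained to a spatial window while C searches the whole image, matching the local versus non-local distinction made in the text.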
(5) Build the denoising Markov random field model, fuse the multiple semantic representations of the test samples, and denoise them: for each test sample y_j, j = 1, 2, …, m, input the semantic representations of y_j, of all samples in its local adaptive neighbor set B_j, and of all samples in its non-local similar-structure neighbor set C_j into the local energy function and minimize it, obtaining the first-order denoised semantic representation of y_j. Meanwhile keep the semantic representations of the training samples unchanged, obtaining the first-order denoised semantic representations of all samples of the hyperspectral image.
(6) Further iteratively optimize the first-order denoised semantic representations of all samples of the hyperspectral image: set the maximum number of iterations T_max and let t be the current iteration index. For each test sample, take the t-th-order denoised semantic representations of y_j and of all samples in sets B_j and C_j as the input of the local energy function of the denoising Markov random field and minimize it, obtaining the (t+1)-th-order semantic representation of y_j; throughout, keep the semantic representations of the training samples unchanged, thereby obtaining the (t+1)-th-order denoised semantic representations of all samples of the hyperspectral image. Iterate this process until t = T_max − 1, obtaining the T_max-th-order denoised semantic representations of all samples, i.e. the final semantic representations.
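The local energy function itself (with the geodesic-distance regularizer mentioned later) is not given in this chunk, so the iteration in steps (5)–(6) can only be sketched with a stand-in update: here a simple convex combination of a sample's own semantics with the mean semantics of its B and C neighbors, keeping the labeled (training) rows fixed across iterations. The neighbor sets, weights, and row layout are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)
n, c = 10, 3
S = rng.random((n, c))
S /= S.sum(axis=1, keepdims=True)          # initial semantic representations

labeled = {0: 1, 1: 2}                     # sample index -> known class label
for i, l in labeled.items():
    S[i] = np.eye(c)[l]                    # exact one-hot, held fixed below

# Hypothetical local (B) and non-local (C) neighbor index sets per sample.
B = {j: [(j - 1) % n, (j + 1) % n] for j in range(n)}
C = {j: [(j + 3) % n] for j in range(n)}

T_max, alpha = 5, 0.5                      # iteration count, smoothing weight
for t in range(T_max):
    S_new = S.copy()
    for j in range(n):
        if j in labeled:
            continue                       # training semantics stay exactly correct
        nb = np.mean(S[B[j] + C[j]], axis=0)   # fuse local + non-local neighbors
        S_new[j] = (1 - alpha) * S[j] + alpha * nb
    S = S_new

assert np.allclose(S.sum(axis=1), 1.0)     # rows remain probability vectors
assert np.allclose(S[0], np.eye(c)[1])     # labeled rows never changed
```

Because the labeled rows never move, their label information diffuses into neighboring rows over the iterations, mirroring the label-propagation effect the text attributes to the fixed training semantics.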
(7) Use the final semantic representations to determine the class of every test sample: for each test sample y_j, j = 1, 2, …, m, its final semantic representation is the column vector of the probabilities that y_j belongs to each class; select the index of the largest element of this vector as the class of y_j. This yields the test-set class prediction set and completes the classification of the hyperspectral image.
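Step (7) reduces to an argmax over each final semantic vector; a tiny sketch with hypothetical final representations (one probability row per test sample):

```python
import numpy as np

# Hypothetical final semantic representations for three test samples, c = 3 classes.
final = np.array([[0.1, 0.7, 0.2],
                  [0.5, 0.3, 0.2],
                  [0.2, 0.2, 0.6]])

predicted = final.argmax(axis=1)   # class = index of the largest probability
print(predicted.tolist())          # [1, 0, 2]
```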
Starting from multiple features in the different feature spaces of the hyperspectral image, the present invention maps them into the same semantic space with weak classifiers, then performs semantic fusion and denoising with a Markov random field model, obtaining semantic representations that are rich in information and contain little noise, on the basis of which the different ground objects in the hyperspectral image are classified and recognized.
Compared with the prior art, the present invention has the following advantages:
1. By fully mining the relations between pixels, the present invention finds the local adaptive neighbors and the non-local similar-structure neighbors of every pixel; at the same time, the label information of the few labeled samples is propagated to their local and non-local neighbors, so that very high classification accuracy can still be obtained on a hyperspectral image with only a few labeled samples.
2. Because the Markov random field model is improved in the multi-feature and non-local directions, and the traditional discrete same/different-class regularizer is replaced by a geodesic-distance regularizer between semantic vectors, the Markov random field model becomes better suited to hyperspectral image classification; problems such as the over-smoothing caused by traditional Markov models are solved, and the spatial consistency of the classification maps is improved.
3. The denoising Markov random field model proposed in the present invention can be solved by simple gradient descent, whose computational complexity is lower than that of the graph-cut algorithms used by traditional Markov random field models; moreover, during the optimization it can well avoid being trapped in local optima, improving the robustness of the model and thereby of the classification results.
Comparative experiments show that the invention significantly improves the classification accuracy and robustness on hyperspectral remote sensing images, and produces classification maps with good spatial consistency.
Description of the drawings
Fig. 1 is the flowchart of the present invention;
Fig. 2 shows the Indian Pines data set used in the simulations of the present invention, where Fig. 2a is a one-dimensional gray-scale image of the Indian Pines data set obtained by principal component analysis (PCA), and Fig. 2b is the ground-truth category map of the Indian Pines data set, each color corresponding to one ground-object type;
Fig. 3 compares the classification maps of the present invention and of existing methods on the Indian Pines data set, where Figs. 3a to 3f correspond respectively to three existing classification methods, namely the composite-kernel support vector machine (SVM+CK), the support vector machine combined with a Markov random field (SVM+MRF), and joint sparse representation (SOMP); two simplified versions of the method of the invention; and the classification map of the Indian Pines data set obtained by the proposed method.
Specific embodiment
The invention is described in detail below with reference to the accompanying drawings.
Embodiment 1:
For the terrain classification of hyperspectral images, most existing methods currently suffer from unsatisfactory classification accuracy, poor robustness of the results, and weak spatial consistency of the classification maps. Addressing these problems, the present invention combines a fusion technique for multiple features in semantic space with local and non-local spatial constraint methods, and proposes a hyperspectral image classification method based on local and non-local multi-feature semantics.
The present invention is a hyperspectral image classification method based on local and non-local multi-feature semantics; referring to Fig. 1, it comprises the following steps:
(1) Input the image and extract multiple features from it.
Commonly used hyperspectral data sets include the Indian Pines and Salinas data sets acquired by the airborne visible/infrared imaging spectrometer (AVIRIS) of the NASA Jet Propulsion Laboratory, and the University of Pavia data set acquired by the ROSIS spectrometer.
Input a hyperspectral image H = {h_q | q = 1, …, M}, where M is the total number of pixels in the hyperspectral image and h_q is a column vector of the reflectance values of pixel q in every band, i.e. the original spectral feature of that pixel. Extract multiple features from the hyperspectral image, including the original spectral feature, Gabor texture features, and differential morphological profiles (DMP); these three features reflect the spectral, texture, and shape information of the hyperspectral image respectively. The hyperspectral image contains c classes of pixels, among which N are labeled and m are unlabeled; each pixel of the image is a sample, and each sample consists of V feature vectors representing the sample under the different feature descriptors, V being the number of feature types.
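The Gabor texture feature listed above can be illustrated with the standard 2-D Gabor kernel; embodiment 2 below defers the patent's exact formulas, so this is the generic textbook definition with hypothetical parameter values, not the patent's parameterization.

```python
import numpy as np

def gabor_kernel(size=7, sigma=2.0, theta=0.0, lam=4.0, psi=0.0):
    """Real part of a standard 2-D Gabor filter (generic form, illustrative only)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)      # rotate coordinates by theta
    yr = -x * np.sin(theta) + y * np.cos(theta)
    gauss = np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
    wave = np.cos(2 * np.pi * xr / lam + psi)       # sinusoidal carrier
    return gauss * wave

k = gabor_kernel()
assert k.shape == (7, 7)
assert np.isclose(k[3, 3], 1.0)   # at the center: exp(0) * cos(0) = 1
```

Convolving each band (or a principal component) with a bank of such kernels at several orientations and scales gives a texture feature vector per pixel, which is one common way such features are assembled.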
The number of feature types used in this embodiment is 3, so here and in what follows the number of feature types V equals 3 unless stated otherwise.
The N labeled pixels mentioned in this embodiment are selected from each pixel class of the hyperspectral image in moderate proportion, and the remaining m pixels serve as the unlabeled pixels.
(2) Divide the hyperspectral data set into a training set and a test set. The N labeled pixels serve as training samples and form the training set X = {x_i}_{i=1}^N with corresponding class-label set L = {l_i}_{i=1}^N; the m unlabeled pixels serve as test samples and form the test set Y = {y_j}_{j=1}^m. Here x_i denotes the i-th labeled training sample and y_j the j-th unlabeled test sample. Every sample of the hyperspectral image is represented by V column vectors, each representing one feature; l_i is the class label of the i-th labeled training sample, D_v is the dimension of the v-th feature type, and R is the real field. N corresponds to the number of labeled pixels in step (1) and m to the number of unlabeled pixels; here N is the total number of labeled training samples and m the total number of test samples.
(3) Map the multiple features of all samples to the corresponding semantic space, specifically to corresponding semantic representations, with probabilistic support vector machines (SVMs). Using the V feature vectors of all training samples and the corresponding class-label set, build V probabilistic SVM classifiers whose kernel is the radial Gaussian kernel, the kernel and penalty parameters being obtained by multi-fold cross-validation. Input the V feature vectors of all test samples into the V corresponding probabilistic SVM classifiers, obtaining for each feature descriptor the probability that each test sample y_j, j = 1, 2, …, m belongs to each class, which serves as a semantic representation of that test sample. For each training sample x_i, i = 1, 2, …, N, the probability of its own class l_i is 1 and the probability of every other class is 0, so its multiple semantic representations are vectors whose l_i-th row is 1 and whose other rows are 0; the semantic representations of the training samples are thus all 0-1 coded vectors. The multiple semantic representations of all samples of the hyperspectral image are thereby obtained. Note that since the class labels of all training samples are known, their corresponding semantic representations are exactly correct.
In this embodiment, the probabilistic SVM is used to map sample features to sample semantic representations because, for the hyperspectral terrain classification problem, the probabilistic SVM possesses good robustness and strong classification ability, and it is widely used as a benchmark classifier in the hyperspectral processing field. The present invention can also complete this step with other classifiers that output class probabilities, such as multinomial logistic regression, random forests, and their variants, in place of the probabilistic SVM.
(4) To extract the local and non-local spatial information of all test samples, introduce local and non-local spatial constraints and construct the local and non-local neighbor sets of all test samples. For each test sample y_j, j = 1, 2, …, m, construct its local adaptive neighbor set B_j and its non-local similar-structure neighbor set C_j, obtaining the local adaptive and non-local similar-structure neighbor sets of every test sample.
(5) Build the denoising Markov random field model, fuse the multiple semantic representations of the test samples, and perform semantic denoising, the denoising being realized by introducing the local and non-local spatial constraints. Combining the multiple semantic representations of all samples obtained in step (3) with the local adaptive and non-local similar-structure neighbor sets of all test samples obtained in step (4), input the multiple semantic representations of all test samples and of their local and non-local neighbors into the denoising Markov random field model, and maximize the joint probability of the Markov random field, i.e. minimize its global energy. To minimize the global energy, use the iterated conditional modes (ICM) method, converting the global minimization into multiple local minimizations, i.e. minimizing the energy of the clique at each sample, the clique consisting of the test sample itself and all of its neighbor samples. For each test sample y_j, j = 1, 2, …, m, input the multiple semantic representations of y_j, of all samples in its local adaptive neighbor set B_j, and of all samples in its non-local similar-structure neighbor set C_j into the local energy function of the denoising Markov random field and minimize it, obtaining the semantic representation of y_j after the first denoising; minimizing the clique energies of all test samples thus yields the first-order denoised semantic representations of all test samples. So that the semantic representations of the training samples, which are exactly correct, continue to positively influence their local and non-local neighbors through the denoising Markov random field, keep the semantic representations of the training samples unchanged; combined with the first-order denoised semantic representations of all test samples, this gives the first-order denoised semantic representations of all samples of the hyperspectral image.
(6) Since the iterated conditional modes (ICM) method used in step (5) does not guarantee a global energy minimum after one round of local energy minimization, the first-order denoised semantic representations of all samples of the hyperspectral image, i.e. the result obtained in step (5), are further iteratively optimized. Referring to Fig. 1, set the maximum number of iterations T_max and let t be the current iteration index. Take the t-th-order denoised semantic representations of test sample y_j, of all samples in its local adaptive neighbor set B_j, and of all samples in its non-local similar-structure neighbor set C_j as the input of the local energy function of the denoising Markov random field, and likewise minimize it, obtaining the (t+1)-th-order semantic representation of y_j and hence the (t+1)-th-order denoised semantic representations of all test samples of the hyperspectral image. All samples of the training set are handled exactly as described in step (5), giving the (t+1)-th-order denoised semantic representations of all samples of the hyperspectral image. Repeat this step until t = T_max − 1, then stop, obtaining the T_max-th-order denoised semantic representations of all samples of the hyperspectral image, i.e. the final semantic representations. Note that the local energy function of the denoising Markov random field mentioned in this step differs from the one mentioned in step (5): the local energy function of step (5) must perform multi-semantic fusion, whereas the local energy function of this step only performs iterative optimization; for the precise difference see the first-order and (t+1)-th-order local energy functions in embodiment 4.
The parameter T_max, i.e. the number of iterations referred to in step (6), must be set in the present invention. Generally, if T_max is too small the result does not converge, while if T_max is too large the computational complexity becomes high and much meaningless computation is performed. T_max can be set by checking whether the computed result has stabilized: randomly select a small number of prediction samples (5 to 10 percent) and, when the gap between the prediction label set obtained from the (t+1)-th-order semantic representations of these samples and the prediction label set obtained from their t-th-order semantic representations falls below a certain threshold, stop iterating and set T_max equal to (t+1). In general, the fewer the training samples, the larger the required T_max, and vice versa.
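The stopping rule above can be sketched as follows: monitor a small random subset, compare its predicted labels between successive iterations, and stop when the disagreement falls below a threshold. The drifting "semantics" here are synthetic stand-ins (shrinking random updates), and the subset size, threshold, and iteration cap are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(4)
m, c = 200, 3
subset = rng.choice(m, size=m // 10, replace=False)   # ~10% monitoring samples

def disagreement(S_prev, S_next):
    """Fraction of monitored samples whose predicted class changed."""
    prev = S_prev[subset].argmax(axis=1)
    nxt = S_next[subset].argmax(axis=1)
    return np.mean(prev != nxt)

# Hypothetical iteration: semantic representations drift less and less each round.
S = rng.random((m, c))
threshold, T_max = 0.01, None
for t in range(1, 200):
    S_next = S + rng.normal(0, 0.5 / t**2, size=S.shape)  # shrinking updates
    if disagreement(S, S_next) < threshold:
        T_max = t + 1          # T_max = (t + 1), as described in the text
        break
    S = S_next

assert T_max is not None and T_max >= 2
```

Monitoring only a subset keeps the check cheap relative to re-predicting every test sample at every iteration.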
(7) The final semantic representations obtained in step (6) are used to determine the class of every sample in the test set. For each test sample y_j, j = 1, 2, ..., m, its final semantic representation is a c x 1 column vector whose entries are the probabilities that y_j belongs to each class. The index of the largest entry is taken as the class label of that test sample, yielding the classes of all samples in the test set and forming the test-set class prediction set, which completes the classification task for the hyperspectral image.
For example, for the Indian Pines data set, Fig. 2b gives the ground-truth class map of the data set; whichever classification method is used, its result can be compared against Fig. 2b to verify the classification effect.
Because the present invention fully exploits the mutual relations between pixels, it finds for each pixel a local adaptive neighbor set and a non-local similar-structure neighbor set, and adds the local and non-local spatial constraints constructed from these two neighbor sets into the designed denoising Markov random field model. At the same time, the class information of the few labeled samples propagates to their local and non-local neighbors along with the optimization of the denoising Markov random field. As a result, high classification accuracy is still obtained for hyperspectral images with only a few labeled samples, and the classification map possesses good spatial consistency.
Embodiment 2:
The hyperspectral image classification method based on local and non-local multi-feature semantics is as in Embodiment 1, where the multiple features described in step (1) include, but are not limited to, the original spectral feature, the Gabor texture feature, and the differential morphological profile (DMP) feature. The Gabor texture feature and the DMP feature are computed as follows:
Gabor texture feature: principal component analysis (PCA) is applied to the hyperspectral image, and the first three principal components after processing are taken as three base images. A Gabor transform with 16 orientations and 5 scales is applied to each base image, giving an 80-dimensional texture feature per base image; stacking them gives a 240-dimensional Gabor texture feature.
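The filter-bank bookkeeping of this paragraph can be sketched as follows (a hand-rolled real-valued Gabor kernel in NumPy; the kernel size, sigma, and the frequency values are illustrative assumptions, since the patent does not specify them):

```python
import numpy as np

def gabor_kernel(frequency, theta, sigma=3.0, size=15):
    """Real part of a 2-D Gabor kernel at one orientation and scale."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2))
    return envelope * np.cos(2 * np.pi * frequency * xr)

# 16 orientations x 5 scales, as in the embodiment: 80 responses per
# base image; 3 PCA base images then give a 240-dimensional feature.
thetas = [np.pi * k / 16 for k in range(16)]
freqs = [0.05, 0.1, 0.15, 0.2, 0.25]   # assumed scale frequencies
bank = [gabor_kernel(f, t) for t in thetas for f in freqs]
```

Convolving each base image with every kernel in `bank` and stacking the responses per pixel yields the 240-dimensional texture vector.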
Differential morphological profile (DMP) feature: PCA is applied to the hyperspectral image, and the first three principal components after processing are taken as three base images. Each base image undergoes openings at 5 scales, with differences taken between adjacent scales, and closings at 5 scales, likewise differenced, giving an 8-dimensional differential feature per base image; stacking them gives a 24-dimensional differential morphological feature.
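A minimal sketch of such a profile, using SciPy's grey-scale morphology (the structuring-element sizes are an assumption, as the patent does not give them):

```python
import numpy as np
from scipy import ndimage

def dmp(base_image, scales=(1, 2, 3, 4, 5)):
    """Differential morphological profile of one base image.

    Adjacent-scale differences of grey openings and closings:
    (len(scales)-1) opening differences plus (len(scales)-1) closing
    differences, i.e. 8 feature maps per base image for 5 scales.
    """
    openings = [ndimage.grey_opening(base_image, size=2 * s + 1) for s in scales]
    closings = [ndimage.grey_closing(base_image, size=2 * s + 1) for s in scales]
    diffs = [np.abs(openings[i + 1] - openings[i]) for i in range(len(scales) - 1)]
    diffs += [np.abs(closings[i + 1] - closings[i]) for i in range(len(scales) - 1)]
    return np.stack(diffs, axis=-1)  # (H, W, 8)

img = np.random.default_rng(0).random((20, 20))
profile = dmp(img)
```

With three PCA base images this gives the 24-dimensional DMP vector per pixel.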
In its choice of features, the present invention uses not only the original spectral information but also, with emphasis, texture and shape information, since different feature descriptors describe an image differently. The Gabor texture feature extracts the local texture information of the hyperspectral image well, i.e. the correlation between neighboring pixels, while the differential morphological profile (DMP) reflects well the edges and sizes of the shaped blocks in the image. Combining spectral, texture, and shape features raises the discriminability between pixels of different classes, ultimately improving the accuracy and robustness of the classification method based on local and non-local multi-feature semantics.
Besides the three features of this embodiment, other features may also be used in the present invention, such as the gray-level co-occurrence matrix and the 3-D wavelet transform commonly used in hyperspectral image analysis; nor is the number of features limited to three. Although a greater variety of features may improve the discriminative power of the method, it also endlessly increases the computational complexity and introduces a great deal of redundant information. The three features used by the present invention, namely the original spectral feature, the Gabor texture feature, and the differential morphological profile (DMP), already cover most of the information in a hyperspectral image.
Embodiment 3:
The hyperspectral image classification method based on local and non-local multi-feature semantics is as in Embodiments 1-2, where the local and non-local neighbor sets described in step (4) are constructed as follows:
4a) PCA is applied to the hyperspectral image and the first principal component is extracted as a base image, i.e. a gray-scale image reflecting the basic ground-object contours of the hyperspectral image. The superpixel number LP is set and entropy-rate superpixel segmentation is performed, giving LP superpixel blocks.
The entropy-rate superpixel segmentation used in this embodiment segments the first-principal-component gray-scale image of the hyperspectral image into superpixels. The resulting superpixel blocks preserve the edge and structure information of the image well, differ little in size, and are fairly regular in shape, which suits hyperspectral images well; the method has also been widely adopted by researchers in the field of hyperspectral image analysis. Other image segmentation methods could replace entropy-rate superpixel segmentation in the present invention, such as mean shift or other graph-based superpixel segmentation methods.
4b) A local window parameter W_local is set. For a sample y_j in the test set, all samples that lie in the W_local x W_local square window centered at y_j and belong to the same superpixel block P_u as y_j constitute the local adaptive neighbor set B_j of y_j; in this way the local adaptive neighbor sets of all test samples are obtained. Compared with the traditional square-window neighbor set, this construction adds a degree of adaptivity: the local neighbors of each test sample are determined by its own surroundings, so a fixed window setting no longer causes the local neighbor sets of some test samples to contain a large amount of wrong spatial information. Compared with fully adaptive local neighbor constructions, on the other hand, a suitable window parameter can still be chosen per image according to its resolution and ground-object types, introducing some prior knowledge that raises the quality of the local spatial information; moreover, fully adaptive constructions are mostly of higher complexity, whereas the complexity of the present construction is almost identical to that of the traditional square-window neighbor set. Hardly any computation is added, yet a degree of adaptivity is gained.
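The window-and-superpixel intersection can be sketched as follows (assuming a label map produced by some superpixel segmentation; function and variable names are illustrative):

```python
import numpy as np

def local_adaptive_neighbors(i, j, superpixel_map, w_local=7):
    """Pixels inside the w_local x w_local window centred at (i, j)
    that also lie in the same superpixel block as (i, j) itself."""
    H, W = superpixel_map.shape
    half = w_local // 2
    r0, r1 = max(0, i - half), min(H, i + half + 1)
    c0, c1 = max(0, j - half), min(W, j + half + 1)
    rows, cols = np.mgrid[r0:r1, c0:c1]
    same_block = superpixel_map[rows, cols] == superpixel_map[i, j]
    centre = (rows == i) & (cols == j)
    keep = same_block & ~centre
    return list(zip(rows[keep].tolist(), cols[keep].tolist()))
```

Pixels of the window that fall into a different superpixel block are simply dropped, which is what removes the "wrong" spatial neighbors near object edges.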
The local window parameter W_local must be set in the present invention. The larger the parameter, the more local neighbors are included and the richer the local information, but the more easily wrong neighbors are introduced as well; the smaller the parameter, the fewer the local neighbors, and sufficient local information cannot be extracted. Setting this parameter therefore requires a compromise between the two. Typical values are 3, 5, ..., 15, adjusted to varying degrees according to prior information such as the resolution and ground-object types of each hyperspectral image.
4c) A non-local structure window parameter W_nonlocal and a non-local neighbor number K are set. For each sample h_q, q = 1, 2, ..., M, of the original hyperspectral image, average pooling is performed over its W_nonlocal x W_nonlocal neighborhood, giving the structural information of every sample.
The non-local structure window parameter W_nonlocal and the non-local neighbor number K must be set in the present invention. The selection of W_nonlocal is similar to that of W_local in 4b) of this embodiment, but in general, since the information extracted for the block around each sample should contain as much structure as possible, W_nonlocal is taken larger than W_local, e.g. 15, 17, ..., 25; likewise, it is adjusted according to prior information such as the resolution and ground-object types of each hyperspectral image. The non-local neighbor number K typically takes values such as 20, 30, ..., 100 and is adjusted to the per-class sample counts of the image: if every class has many samples, a larger K is chosen; if some classes have few samples, a smaller K is chosen.
The present invention uses average pooling to extract the structural information of the block around each sample: the pooled result largely exhibits the essential information of the block and shows a certain discriminability relative to other blocks. Other aggregation methods could replace average pooling here, such as max pooling, weighted average pooling, or more elaborate pooling methods.
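A straightforward (unoptimized) average-pooling sketch in NumPy; the edge padding mode is an assumption the patent does not address:

```python
import numpy as np

def structural_info(cube, w_nonlocal=21):
    """Average-pool each pixel's w x w neighbourhood, per band.

    cube: (H, W, B) hyperspectral image. Returns an (H, W, B) array
    whose (i, j) vector summarises the block around pixel (i, j).
    """
    H, W, B = cube.shape
    half = w_nonlocal // 2
    padded = np.pad(cube, ((half, half), (half, half), (0, 0)), mode="edge")
    out = np.empty((H, W, B), dtype=float)
    for i in range(H):
        for j in range(W):
            out[i, j] = padded[i:i + w_nonlocal, j:j + w_nonlocal].mean(axis=(0, 1))
    return out
```

Swapping the `.mean(...)` call for `.max(...)` or a weighted mean gives the alternative pooling schemes mentioned above.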
4d) For each sample y_j in the test set, j = 1, 2, ..., m, its structural information is compared with that of all remaining samples, and the degree of dissimilarity between the samples' structural information is computed:
Here SGD_jq = GD(sqrt(s_j), sqrt(s_q)), where GD(., .) denotes the geodesic distance and sqrt(x) denotes taking the square root of every entry of the vector x; the value SGD_jq represents the degree of dissimilarity between the structural information of test sample y_j and that of sample q. Compared with common measures such as the Euclidean distance or the cosine angle, the geodesic distance is preferable because research has shown that the reflectance information contained in a hyperspectral image can be regarded as a manifold, and the geodesic distance, as a manifold distance, better describes the distance between the band-reflectance vectors of a hyperspectral image. The K samples most similar to y_j, i.e. the K samples with the smallest SGD values, form the non-local similar-structure neighbor set C_j of y_j. At the same time, since each non-local neighbor differs in its degree of similarity to y_j, its importance to y_j differs as well, so each of the K non-local neighbors is given a different weight according to its SGD value: the higher the similarity (the smaller the SGD value), the larger the weight. The weight is computed as follows:
where ω_jh denotes the weight of the non-local similar-structure neighbor h of test sample y_j, and γ is the Gaussian kernel parameter.
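Since the SGD and weight formulas themselves are not reproduced in this translation, the following is only a plausible reading of them: an element-wise square root followed by a Euclidean norm for the distance, and a Gaussian kernel for the weights; the normalization of the weights in particular is an assumption made for this sketch:

```python
import numpy as np

def sgd(s_j, s_q):
    """Square-root geodesic distance between two structural-information
    vectors: Euclidean distance after an element-wise square root
    (a plausible reading of the patent's elided formula)."""
    return np.linalg.norm(np.sqrt(s_j) - np.sqrt(s_q))

def nonlocal_weights(dists, gamma=0.05):
    """Gaussian-kernel weights for the K non-local neighbours: the more
    similar (smaller SGD) the neighbour, the larger its weight."""
    w = np.exp(-np.asarray(dists) ** 2 / gamma)
    return w / w.sum()
```

The value gamma = 0.05 matches the Gaussian kernel parameter used in the simulation experiments of Embodiment 6.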
The present invention thus carefully designs the local adaptive neighbor set and the non-local similar-structure neighbor set of every sample in the test set. The local adaptive neighbor set combines the traditional square window with superpixel blocks, fully extracting the local spatial information around each sample's position while reducing the introduction of wrong spatial information. The non-local similar-structure neighbor set, in turn, takes account of the large amount of redundant non-local spatial information present in a hyperspectral image, which the present invention extracts fully. Together, the two neighbor sets allow the final denoising Markov random field to make full use of both local and non-local spatial information, thereby raising the classification accuracy of the hyperspectral image and the spatial consistency of the classification map.
Embodiment 4:
The hyperspectral image classification method based on local and non-local multi-feature semantics is as in Embodiments 1-3. In steps (5) and (6) the global energy of the Markov random field is minimized with the iterated conditional modes (ICM) method, which converts the global energy minimization into minimizing the energy of each local clique. For a test sample y_j, the corresponding first-order local energy function, i.e. the local energy function used in step (5), is:
The corresponding (t+1)-th-order local energy function, i.e. the local energy function used in step (6), is:
Note that the distance measure in both local energy functions above is the geodesic distance, identical to the geodesic distance described in Embodiment 3. Compared with traditional distance measures such as the Euclidean distance or the cosine angle, the geodesic distance has been shown to be better suited to measuring the distance between two semantic vectors, i.e. probability vectors.
Minimizing the two local energy functions above is a convex optimization problem, and the present invention minimizes them with the simple and fast gradient descent method. Taking the first-order local energy function of test sample y_j as an example, its gradient is:
where the subscript denotes the k-th element of the corresponding semantic vector, and:
Using the formulas above, the minimization of the global energy of the Markov random field is completed.
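Because the local energy and gradient formulas are not legible in this translation, the following sketch substitutes a quadratic energy with squared Euclidean distances in place of the patent's geodesic-distance terms, purely to illustrate the gradient-descent minimization; beta1, beta2, the step-size rule, and the simplex renormalization are all assumptions of this sketch:

```python
import numpy as np

def denoise_semantic(s_self, S_local, S_nonlocal, w_nonlocal,
                     beta1=0.5, beta2=0.5, steps=200):
    """Minimise a quadratic stand-in for the patent's local energy:

        E(s) = ||s - s_self||^2
             + beta1 * sum_n ||s - s_n||^2        (local term)
             + beta2 * sum_h w_h ||s - s_h||^2    (non-local term)

    by plain gradient descent, renormalising s so it stays a
    probability vector."""
    a = 1 + beta1 * S_local.shape[0] + beta2 * w_nonlocal.sum()
    lr = 0.4 / a                       # keeps the iteration contractive
    s = s_self.astype(float).copy()
    for _ in range(steps):
        grad = 2 * (s - s_self)
        grad += 2 * beta1 * (S_local.shape[0] * s - S_local.sum(0))
        grad += 2 * beta2 * (w_nonlocal.sum() * s - w_nonlocal @ S_nonlocal)
        s = np.clip(s - lr * grad, 1e-12, None)
        s /= s.sum()
    return s
```

For this quadratic stand-in the minimizer is a weighted average of the input vectors; gradient descent is used only to mirror the procedure the patent describes.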
Because the present invention improves the Markov random field model in the multi-feature and non-local directions, and replaces the traditional discrete same/different-label regularization term with a geodesic-distance regularization term between semantic vectors, the model becomes better suited to the hyperspectral image classification problem. This resolves the over-smoothing caused by traditional Markov models, improves the spatial consistency of the classification map, and raises the classification accuracy. Furthermore, the ICM method converts the global energy minimization into many local minimizations, each solved with simple gradient descent; the computational complexity is lower than that of the graph-cut algorithms used by traditional Markov random field models, and the optimization process well avoids being trapped in locally optimal solutions, improving the robustness of the model and hence of the classification results.
A complete embodiment is given below to describe the present invention further.
Embodiment 5:
The hyperspectral image classification method based on local and non-local multi-feature semantics is as in Embodiments 1-4.
With reference to Fig. 1, the concrete implementation steps of the present invention are as follows:
Step 1. Input the hyperspectral image, where M is the total number of pixels and h_q is a column vector holding the reflectance of every band of pixel q. Multiple features are extracted from the hyperspectral image with different feature descriptors. In the simulation experiments of the present invention, the Gabor texture feature, the differential morphological profile (DMP) feature, and the original spectral feature are extracted.
1a) For the Gabor texture feature, PCA is first used to reduce the dimensionality of the original hyperspectral image, and Gabor filtering at multiple orientations and scales is applied to the first three principal-component images; 16 orientations and 5 scales are used in this embodiment. The filtering results are stacked to obtain the Gabor texture feature of the hyperspectral image, and correspondingly the texture feature vector of every pixel.
1b) For the differential morphological profile (DMP) feature, openings and closings at different scales are likewise applied to the first three principal-component images; openings at 5 scales and closings at 5 scales are used in this embodiment, and the differences between adjacent scales of the openings and of the closings are taken. All differences are stacked to obtain the differential morphological feature of the hyperspectral image, and correspondingly the DMP feature vector of every pixel.
1c) The original spectral feature simply takes the reflectance of every band of a pixel as that pixel's spectral feature.
Step 2. Select an equal proportion of pixels from every class of the hyperspectral image as labeled pixels, their total number being N; the remaining m pixels of the image serve as unlabeled pixels. Each pixel is represented by its V feature vectors, with D_v denoting the dimensionality of the v-th feature.
Step 3. Select the labeled training set X and the test set Y, and obtain the semantic representation set S of all samples.
3a) The N labeled samples constitute the labeled training set X, with corresponding class-label set L; R denotes the field of real numbers. Each single-feature vector set X_v of the labeled training set, together with the label set L, is used to train one of V probabilistic support vector machine (SVM) models. The kernel of the support vector machines is the radial basis (Gaussian) kernel, with kernel parameter r and penalty parameter c obtained by multi-fold cross-validation.
3b) The m unlabeled samples constitute the test set Y. The different feature vector sets of the test set are separately input into the corresponding V trained probabilistic SVM models from step 3a), giving, for every test sample under the description of every feature descriptor, the probabilities of belonging to each class, i.e. the semantic representations of the test samples. For the samples in the training set, the semantic representation is a 0-1 coded vector: the position of its own class is 1 and all other positions are 0, indicating that the probability of belonging to its own class is 1 and that of belonging to any other class is 0. The semantic representation set S of all samples of the hyperspectral image is thereby obtained.
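Step 3b) can be illustrated with scikit-learn's probabilistic SVM on toy data (the data, class count, and parameter choices below are placeholders; the patent's cross-validated r and c are not reproduced):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Toy stand-ins for one feature view: 40 labelled samples, 10 test samples.
X_train = np.vstack([rng.normal(0, 1, (20, 5)), rng.normal(3, 1, (20, 5))])
y_train = np.array([0] * 20 + [1] * 20)
X_test = rng.normal(1.5, 1, (10, 5))

# One probabilistic RBF-kernel SVM per feature view; its predict_proba
# output is that view's semantic representation of each test sample.
svm = SVC(kernel="rbf", probability=True, random_state=0).fit(X_train, y_train)
S_test = svm.predict_proba(X_test)   # (m, c) class-probability rows
S_train = np.eye(2)[y_train]         # training samples: one-hot 0-1 codes
```

In the full method this is repeated once per feature descriptor, giving V semantic representations per test sample.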
Step 4. Construct the local adaptive neighbor set B and the non-local similar-structure neighbor set C of every sample in the unlabeled test set Y.
4a) PCA is used to reduce the dimensionality of the original hyperspectral image, and the first-principal-component image (a gray-scale image) is selected as the base image. The superpixel number LP is set, and entropy-rate superpixel segmentation is applied to this image, giving LP superpixel blocks.
4b) The local adaptive window parameter W_local is set, and the local adaptive neighbor set of each test sample is built from the superpixel blocks obtained in step 4a). Taking test sample y_j as an example: if a sample n lies in the W_local x W_local square window centered at y_j and belongs to the same superpixel block P_u as y_j, then n is called a local adaptive neighbor of y_j, n ∈ B_j. Proceeding likewise gives the local adaptive neighbor set of every test sample.
4c) The non-local structure window parameter W_nonlocal and the non-local neighbor number K are set. For the original hyperspectral image, where M denotes the number of all samples, average pooling is first performed in each sample's adaptive window of size W_nonlocal x W_nonlocal, i.e. a local adaptive neighbor set with parameter W_nonlocal, built exactly as described in 4b). This gives the local structural information of every sample.
4d) Using the structural information obtained in step 4c), the similarity between the local structural information of each test sample and that of all remaining samples is computed. For each test sample y_j, j = 1, 2, ..., m, its structural information is compared with the structural information of all remaining samples; the similarity is computed as follows:
where GD(., .) is the geodesic distance and sqrt(x) denotes taking the square root of every entry of the column vector x. Computing this quantity between every two samples yields a similarity matrix SD; the matrix is symmetric, SD(j, q) = SD(q, j), and describes the structural similarity between test sample y_j and sample q.
4e) From the similarity matrix SD obtained in step 4d), the K most similar samples of each test sample are selected as its non-local neighbors. Taking test sample y_j as an example: the j-th row of SD is selected, each element of which describes the similarity between y_j and one sample; excluding y_j itself (a sample is always most similar to itself), the K most similar samples are taken as the non-local neighbors of y_j, giving the non-local neighbor set C_j. Proceeding likewise gives the non-local neighbor set of every test sample.
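The row-wise top-K selection of step 4e) can be sketched for a matrix that stores the SGD distances (so a smaller value means a more similar sample):

```python
import numpy as np

def nonlocal_neighbors(SD, j, K):
    """Indices of the K most similar samples to sample j: the K
    smallest entries of row j of the symmetric distance matrix SD,
    excluding sample j itself."""
    order = np.argsort(SD[j])
    return [q for q in order if q != j][:K]
```

If SD were instead stored as a similarity (larger = more similar), the sort order would simply be reversed.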
4f) At the same time, according to the similarity values, the K non-local neighbors are each given a different weight, so that more similar neighbors receive higher weights while less similar neighbors receive comparatively lower weights, improving the reasonableness and adaptivity of the non-local spatial constraint. The weight is computed as follows (where γ is the Gaussian kernel parameter):
Step 5. From the semantic representation of every sample computed in step 3 and the corresponding local and non-local neighbor sets from step 4, build the denoising Markov random field model, in which the local neighbor sets enter the model as the local spatial constraint and the non-local neighbor sets as the non-local spatial constraint. The iterated conditional modes (ICM) method is used to minimize each local energy, i.e. the energy of the clique containing each sample, so as to minimize the global energy, and the denoised semantic representation of every test sample is computed.
5a) Taking test sample y_j as an example, its V semantic representations and the semantic representations of all samples in its local and non-local neighbor sets are input into the local energy function of the denoising Markov random field. The energy of the clique containing y_j is:
On the right side of formula (3), the first term is the self-constraint term, the second is the local spatial constraint term, and the third is the non-local spatial constraint term; note that in this formula every constraint term is a constraint over the multiple semantics. Gradient descent is used to minimize this function, giving the first-order denoised semantic representation of y_j. Traversing all test samples in the same way gives the first-order denoised semantic representations of all test samples; the semantic representations of the training samples remain unchanged, so the first-order denoised semantic representation set of all samples is obtained.
5b) Since ICM seeks the optimum gradually, a single iteration cannot complete the global energy minimization and does not reach the final convergent result, so the first-order denoised semantic representation set obtained in step 5a) is fed back into the denoising Markov random field model for continued iterative optimization. A maximum iteration number T_max is set. Taking test sample y_j as an example, its first-order denoised semantic representation and those of the samples in its local and non-local neighbor sets are input into the local energy function of the denoising Markov random field; minimizing this function gives the second-order denoised semantic representation of y_j. Traversing all test samples in the same way, and noting that when computing denoised semantic representations of any order the semantic representations of all training samples remain fixed as 0-1 coded column vectors, gives the second-order denoised semantic representations of all samples. Since the fusion of the multiple semantics was completed in step 5a) of this example, the energy function no longer needs to fuse multiple semantics, so the local energy function can be simplified to:
The three terms on the right side of formula (4) have the same meaning as in formula (3), but no multi-semantic fusion is performed any more, so each term is a constraint under a single semantic. As can be seen from the formula, given the t-th-order denoised semantic representations of test sample y_j and of its local and non-local neighbors, minimizing this function yields the (t+1)-th-order denoised semantic representation of y_j; obtaining the second-order representation from the first-order one, as above, is the special case t = 1. This iteration loops until t = T_max - 1, at which point the T_max-th-order denoised semantic representation of y_j is obtained.
Repeating this process for every test sample gives the T_max-th-order denoised semantic representation of each test sample, and hence of all samples.
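The overall step-5/6 loop can be sketched as follows; for legibility this sketch uses the closed-form minimizer of a quadratic stand-in energy (a plain weighted average) instead of the patent's geodesic-distance energy and gradient descent, so it illustrates the iteration structure rather than the exact update:

```python
import numpy as np

def icm_iterate(S0, local_nbrs, nonlocal_nbrs, weights, is_training, T_max=3):
    """Skeleton of the step-5/6 ICM loop.

    Each round replaces every test sample's semantic vector by the
    weighted average of its own vector and its local / non-local
    neighbours' vectors; training samples keep their 0-1 codes
    throughout, exactly as the embodiment prescribes.
    """
    S = S0.astype(float).copy()
    for _ in range(T_max):
        S_new = S.copy()
        for j in range(S.shape[0]):
            if is_training[j]:
                continue
            B, C, w = local_nbrs[j], nonlocal_nbrs[j], weights[j]
            num = S[j] + S[B].sum(axis=0) + w @ S[C]
            den = 1.0 + len(B) + w.sum()
            S_new[j] = num / den
        S = S_new  # (t+1)-order representations computed from t-order ones
    return S
```

T_max = 3 matches the value used in the simulation experiments of Embodiment 6.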
Step 6. In the present invention the semantic representation of a sample is exactly the column vector of probabilities that the sample belongs to each class. Therefore, from the T_max-th-order denoised semantic representations obtained in step 5, the position of the largest entry of the semantic representation vector of test sample y_j is taken as its class label. Proceeding likewise gives the class labels of all test samples.
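Step 6 in code is a single argmax over each final semantic vector (the toy probabilities below are illustrative):

```python
import numpy as np

# The predicted class of each test sample is the index of the largest
# entry of its final (T_max-order) semantic representation.
S_final = np.array([[0.1, 0.7, 0.2],
                    [0.6, 0.3, 0.1]])
labels = np.argmax(S_final, axis=1) + 1   # 1-based class labels
```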
The present invention proposes a hyperspectral image classification method based on local and non-local multi-feature semantics, in which a new Markov random field, called the denoising Markov random field, is constructed. By minimizing the global energy of this field with the iterated conditional modes (ICM) method, the model reasonably fuses the multi-feature semantics under the local and non-local spatial constraints, obtains for every test sample a more comprehensive semantic representation with lower noise, and thence the predicted class of every test sample. The method makes full use of two kinds of information: the multi-feature information describes the hyperspectral image more comprehensively, and the local and non-local spatial constraints fully mine the mutual relations between the pixels of the image. Compared with traditional classification methods, the proposed method improves the accuracy, robustness, and spatial consistency of the classification results.
The effect of the present invention is further illustrated by the following simulation experiments:
Embodiment 6:
The hyperspectral image classification method based on local and non-local multi-feature semantics is as in Embodiments 1-5.
1. Simulation conditions:
The simulation experiments use the Indian Pines image acquired in June 1992 over northwestern Indiana by AVIRIS, the airborne visible/infrared imaging spectrometer of the NASA Jet Propulsion Laboratory, shown in Fig. 2a. The image size is 145 x 145 with 220 bands in total; after removing noisy bands and bands absorbed by the atmosphere and water, 200 bands remain. Referring to Fig. 2b, manual labeling gives 16 classes of ground objects.
The simulation experiments were carried out with MATLAB 2014a on a Windows 7 system with an Intel Core(TM) i5-4200H CPU at 2.80 GHz and 12 GB of memory.
Table 1 lists the 16 classes of the Indian Pines image.
Table 1. The 16 classes of the Indian Pines image

Class  Name                          Samples  Training samples
1      Alfalfa                       46       3
2      Corn-notill                   1428     72
3      Corn-mintill                  830      42
4      Corn                          237      12
5      Grass-pasture                 483      25
6      Grass-trees                   730      37
7      Grass-pasture-mowed           28       2
8      Hay-windrowed                 478      24
9      Oats                          20       1
10     Soybean-notill                972      49
11     Soybean-mintill               2455     123
12     Soybean-clean                 593      30
13     Wheat                         205      11
14     Woods                         1265     64
15     Buildings-Grass-Trees-Drives  386      20
16     Stone-Steel-Towers            93       5
2. Simulation content and analysis:
The Indian Pines hyperspectral image is classified with the present invention and with three existing methods: the support vector machine with composite kernel (SVM+CK), the support vector machine combined with a Markov random field (SVM+MRF), and joint sparse representation (SOMP). The method proposed in the present invention, which classifies the hyperspectral image using multi-feature semantic representations and spatial constraints, is abbreviated NE-MFAS. To verify its effectiveness, two simplified variants within the framework of the present invention, MFAS and MFS, are added to the simulation experiments. The MFAS variant constructs only the local neighbor set in step (4), without using the information of the non-local neighbor sets, to verify the influence of non-local neighbor information on the classification results; its results are shown in Fig. 3e. The MFS variant uses neither the non-local neighbor information in step (4) nor the superpixel information when constructing the local neighbor set, taking only the square window as the local neighbor set, to verify the influence of the superpixel constraint; its results are shown in Fig. 3d. The penalty factor and kernel parameter of the support vector machine (SVM) methods are determined by 5-fold cross-validation; the sparsity parameter of SOMP is set to 30 and its spatial window to 7 x 7. In the present invention, the superpixel number is set to 75, the local window W_local to 7 x 7, the non-local structure window W_nonlocal to 21 x 21, the non-local neighbor number K to 30, the Gaussian kernel parameter γ to 0.05, and the maximum iteration number T_max to 3. The training samples are selected by randomly drawing 5% of the pixels of each of the 16 classes, with the remaining 95% used as test samples.
Fig. 3 shows the classification result maps of the methods, obtained with identical training samples. Figs. 3a-3c are the classification result maps of the three existing methods SVM+CK, SVM+MRF, and SOMP, respectively. It can be seen that these three classifiers perform well in most regions. The result map of SOMP (Fig. 3c) contains many isolated noise points and has poor spatial consistency. The result map of SVM+MRF (Fig. 3b) has almost no isolated noise points compared with Fig. 3c, but exhibits considerable over-smoothing, and its edges are severely eroded. The result map of SVM+CK (Fig. 3a) is the best of the three, yet still shows some isolated noise points and substantial edge erosion. Figs. 3d-3f correspond to the classification result maps of MFS, MFAS, and NE-MFAS, respectively. Compared with Figs. 3a-3c, the methods proposed in the present invention perform better than the existing methods in local-region continuity, edge preservation, and within-region consistency. The result map of MFS (Fig. 3d), compared with the three traditional methods, contains almost no isolated noise points and shows good spatial consistency in most regions, but still suffers from some edge erosion. MFAS adds the superpixel constraint information to MFS; compared with Fig. 3d, the result map of MFAS (Fig. 3e) is greatly improved in edge preservation, with almost no edge erosion, but some small-sample class regions, such as the yellow rectangular region on the right of the image, are almost entirely misclassified. After the non-local spatial constraint is further added to MFAS, the result map of NE-MFAS of the present invention (Fig. 3f), compared with Figs. 3d and 3e, maintains good spatial consistency while classifying the small-sample class regions, such as the yellow rectangular region on the right of the image, almost entirely correctly.
From the progression of Figs. 3d to 3f, it can be seen that the superpixel-based adaptive local spatial constraint proposed in the present invention preserves the edge information of the image well and strengthens spatial consistency, while the proposed non-local spatial constraint further raises the upper limit of the classification accuracy, performing particularly well in improving the accuracy on classes with relatively few samples.
Embodiment 7:
The hyperspectral image classification method based on local and non-local multi-feature semantics is as in Embodiments 1-6, and the simulation conditions and content are as in Embodiment 6. The differences between the classification result maps described in Embodiment 6 can only be judged by visual inspection; here, a comparative analysis of classification accuracy demonstrates the advantage of the method of the present invention over the other methods quantitatively. All experiments are repeated 10 times and the results averaged. Note that the training samples of each experiment are drawn at random, that is, the training samples differ from one run to the next.
Table 2 compares the experimental accuracy of the three existing methods, the two simplified variants of the present invention, and the method of the present invention:
Table 2. Classification accuracy of the three existing methods and the present invention on the Indian Pines image
Method Overall accuracy (%)
SVM+CK 91.59±0.94
SVM+MRF 84.91±1.43
SOMP 87.55±1.24
Simplified variant 1 of the present invention (MFS) 96.87±0.79
Simplified variant 2 of the present invention (MFAS) 97.24±0.64
The present invention (NE-MFAS) 98.20±0.58
As can be seen from Table 2, the mean of the method of the present invention over the 10 repeated experiments is far higher than that of the three existing methods, and its standard deviation is lower, showing that the method of the present invention is optimal in both classification accuracy and robustness. On the one hand, the method fully combines the large amount of discriminative information contained in the multiple features; on the other hand, it is far superior to the other three spatial-information-based methods in exploiting spatial information. From MFS to MFAS, the accuracy improves by roughly 0.4% (from 96.87% to 97.24%), showing that the superpixel constraint plays a positive role in extracting spatial information: compared with a traditional square window, the adaptive local neighbor set designed in the present invention rejects neighbor points that would have a negative effect and extracts richer local spatial information, including edge and structural information. From MFAS to NE-MFAS, the accuracy improves by roughly 1%, showing that the non-local neighbors contain a large amount of valuable information and that extracting it further raises the upper limit of the classification accuracy. With only 5% of the samples for training, the classification accuracy of the present invention reaches 98.20% with a standard deviation of only 0.58%, better than most existing methods.
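The mean ± standard deviation protocol of Embodiment 7 can be sketched in a few lines; the per-run accuracies below are hypothetical stand-ins for the 10 repeated experiments behind Table 2, not the measured values.

```python
import statistics

# Hypothetical overall accuracies (%) of one method over 10 repeated runs,
# each with an independently drawn 5% training set.
runs = [98.1, 97.9, 98.6, 98.3, 97.5, 98.8, 98.4, 97.8, 98.2, 98.4]

mean_acc = statistics.mean(runs)
std_acc = statistics.stdev(runs)  # sample standard deviation (n - 1 denominator)

print(f"{mean_acc:.2f} +/- {std_acc:.2f}")  # -> 98.20 +/- 0.39
```

Reporting the sample (rather than population) standard deviation matches the ±-style entries of Table 2, where each run draws its training pixels independently.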
In summary, the present invention discloses a hyperspectral image classification method based on local and non-local multi-feature semantics, which mainly solves the problems of low accuracy, poor robustness, and weak spatial consistency of existing hyperspectral image classification methods. Its steps include: extracting different features from the original hyperspectral image with several feature extraction methods, so that each pixel of the hyperspectral image is represented by multiple feature vectors, and selecting a labeled training set and a test set; inputting the labeled training set into probabilistic support vector machines and mapping the multiple features of each pixel of the test set into the semantic space, obtaining multiple semantic representations for each test sample; introducing spatial information and multi-semantic information simultaneously with a denoising Markov random field model, and minimizing the energy of the Markov random field to obtain the denoised semantic representation of each test sample and thereby the class information of the whole test set. The semantic representations obtained with the support vector machines fit the Markov random field model perfectly, and the present invention fully combines multi-feature information with local and non-local spatial contextual information, so that the hyperspectral image can finally be classified more accurately. Comparison with the two simplified variants verifies the effectiveness of the method of the present invention; comparison with the three existing methods shows that, under small-sample conditions, the method of the present invention achieves high accuracy, high robustness, and excellent spatial consistency. It can be used in military detection, surveying and mapping, vegetation survey, mineral detection, and the like.
In the present embodiment, parts not described in detail belong to conventional means well known in the art and are not recounted here one by one. The above examples are merely illustrative of the present invention and do not limit its protection scope; any design identical or similar to the present invention falls within the protection scope of the present invention.

Claims (5)

1. A hyperspectral image classification method based on local and non-local multi-feature semantics, characterized by comprising the following steps:
(1) Input an image and extract multiple features from it: input a hyperspectral image $H = \{h_q\}_{q=1}^{M}$, where M is the total number of pixels in the hyperspectral image and $h_q$ is a column vector formed by the reflectance values of pixel q in all bands; extract several features from the hyperspectral image, including: the original spectral feature, Gabor texture features, and differential morphological features; the hyperspectral image contains c classes of pixels, of which N pixels are labeled and m pixels are unlabeled; each pixel of the image is a sample, and each sample is composed of V feature vectors, which represent the sample under the different feature descriptors, V being the number of feature types;
(2) Divide the hyperspectral data set into a training set and a test set: the N labeled pixels serve as training samples and form the training set $X = \{x_i\}_{i=1}^{N}$, with the corresponding class-label set $L = \{l_i\}_{i=1}^{N}$; the m unlabeled pixels serve as test samples and form the test set $Y = \{y_j\}_{j=1}^{m}$; here $x_i$ denotes the i-th sample of the training set, $y_j$ the j-th sample of the test set, $l_i$ the class label of the i-th training sample, the v-th feature vector of a sample lies in $\mathbb{R}^{D_v}$, $D_v$ is the dimensionality of the v-th feature, and $\mathbb{R}$ denotes the real field;
(3) Map the multiple features of all samples to the corresponding semantic representations with probabilistic support vector machines (SVMs): using the V feature vectors of all samples of the training set $X$ and the corresponding class-label set $L$, build V probabilistic SVM classifiers, whose kernel function is the radial Gaussian kernel and whose kernel and penalty parameters are obtained by multi-fold cross-validation; input the V feature vectors of all samples of the test set $Y$ into the V corresponding classifiers, obtaining, under each feature descriptor, the probability that each test sample $y_j$, j = 1, 2, ..., m belongs to each class, which serves as the semantic representation $s_j^v \in \mathbb{R}^{c}$ of that test sample; for each sample $x_i$, i = 1, 2, ..., N of the training set, the probability that it belongs to its own class $l_i$ is 1 and the probability that it belongs to any other class is 0, giving the semantic representations $s_i^v \in \mathbb{R}^{c}$ whose $l_i$-th entry is 1 and whose other entries are 0; the multiple semantic representations of all samples of the hyperspectral image are thus obtained;
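Step (3) can be sketched with scikit-learn's probabilistic SVM. The library choice, data shapes, and fixed kernel/penalty values below are illustrative assumptions; the claim itself prescribes only one radial-Gaussian-kernel probabilistic SVM per feature type, with parameters chosen by cross-validation.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
c, N, m, V = 3, 60, 20, 2     # classes, training samples, test samples, feature types
dims = [8, 5]                 # hypothetical dimensionality D_v of each feature type

labels = np.repeat(np.arange(c), N // c)                  # 20 labeled samples per class
train = [rng.normal(labels[:, None], 1.0, (N, D)) for D in dims]
test = [rng.normal(1.0, 1.5, (m, D)) for D in dims]

semantic = []                 # semantic[v][j] is the class-probability vector s_j^v
for v in range(V):
    clf = SVC(kernel="rbf", C=10.0, gamma=0.1, probability=True, random_state=0)
    clf.fit(train[v], labels)
    semantic.append(clf.predict_proba(test[v]))           # shape (m, c)

print(semantic[0].shape)      # each test sample now carries V probability vectors
```

Each row of `semantic[v]` sums to 1, so the V outputs per pixel are exactly the probability-simplex vectors the later MRF denoising operates on.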
(4) Construct the local and non-local neighbor sets of all samples of the test set: for each sample $y_j$, j = 1, 2, ..., m of the test set, construct its local adaptive neighbor set $B_j$ and its non-local similar-structure neighbor set $C_j$;
(5) Build the denoising Markov random field model, fuse the multiple semantic representations of each test sample, and denoise them: for each test sample $y_j$, j = 1, 2, ..., m, input the semantic representations $s_j^v$ of $y_j$, the semantic representations of all samples of the local adaptive neighbor set $B_j$, and the semantic representations of all samples of the non-local similar-structure neighbor set $C_j$ into the local energy function, and minimize this energy function to obtain the first-order denoised semantic representation $\bar{s}_j^{(1)}$ of the test sample; at the same time keep the semantic representations of the training samples unchanged, thus obtaining the first-order denoised semantic representations of all samples of the hyperspectral image;
(6) Further iteratively optimize the first-order denoised semantic representations of all samples of the hyperspectral image: set the maximum number of iterations $T_{max}$ and let t be the current iteration index; perform the following operation on each test sample: take the t-th-order denoised semantic representations of the test sample $y_j$ and of all samples of the sets $B_j$ and $C_j$ as the input of the local energy function of the denoising Markov random field, and minimize this energy function to obtain the (t+1)-th-order semantic representation $\bar{s}_j^{(t+1)}$ of the test sample; at the same time continue to keep the semantic representations of the training samples unchanged, thereby obtaining the (t+1)-th-order denoised semantic representations of all samples of the hyperspectral image; repeat the iteration until t = $T_{max} - 1$, obtaining the $T_{max}$-th-order denoised semantic representations of all samples of the hyperspectral image, that is, the final semantic representations;
(7) Determine the classes of all samples of the test set from the final semantic representations: for each sample $y_j$, j = 1, 2, ..., m of the test set, its final semantic representation is the column vector formed by the probabilities that $y_j$ belongs to each class; select the label of the position of the largest element of this vector as the class of $y_j$, thereby obtaining the class-prediction set of the test set and completing the classification task of the hyperspectral image.
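Step (7) reduces to an argmax over the final probability vectors. The toy sketch below uses a plain average of the V semantic vectors as a deliberate stand-in for the MRF denoising of steps (5)-(6); all numbers are hypothetical.

```python
import numpy as np

# Semantic representations of m = 2 test samples under V = 2 feature types,
# each row a probability vector over c = 3 classes (hypothetical values).
s1 = np.array([[0.7, 0.2, 0.1], [0.1, 0.3, 0.6]])  # feature type v = 1
s2 = np.array([[0.5, 0.4, 0.1], [0.2, 0.2, 0.6]])  # feature type v = 2

fused = (s1 + s2) / 2.0           # naive fusion; the patent minimizes an MRF energy instead
predicted = fused.argmax(axis=1)  # step (7): label of the largest probability

print(predicted)                  # -> [0 2]
```

The denoised representations produced by the energy minimization are consumed in exactly the same way: one argmax per test pixel.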
2. The hyperspectral image classification method based on local and non-local multi-feature semantics according to claim 1, characterized in that the Gabor texture features and the differential morphological features among the multiple features described in step (1) are respectively expressed as follows:
Gabor texture features: with the first three principal components of the principal component analysis (PCA) of the hyperspectral image as base images, apply Gabor transforms with 16 orientations and 5 scales to each base image, obtaining 80-dimensional texture features per base image, which are stacked into Gabor texture features with a total dimensionality of 240;
Differential morphological features: with the first three principal components of the PCA of the hyperspectral image as base images, apply opening operations at 5 scales and closing operations at 5 scales to each base image and take the differences between them, obtaining 8-dimensional differential features per base image, which are stacked into differential morphological features with a total dimensionality of 24.
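The differential-morphological bookkeeping in claim 2 (3 base images × 8 differential features = 24 dimensions) can be sketched with SciPy's grey-scale morphology. The random base images, the structuring-element sizes, and the consecutive-scale differencing are assumptions for illustration, since the claim fixes only the counts.

```python
import numpy as np
from scipy.ndimage import grey_closing, grey_opening

rng = np.random.default_rng(0)
base_images = [rng.random((16, 16)) for _ in range(3)]  # stand-ins for the first 3 PCs
sizes = [3, 5, 7, 9, 11]                                # 5 hypothetical SE scales

features = []
for img in base_images:
    openings = [grey_opening(img, size=s) for s in sizes]
    closings = [grey_closing(img, size=s) for s in sizes]
    # differencing consecutive scales gives 4 + 4 = 8 features per base image
    features += [openings[i] - openings[i + 1] for i in range(4)]
    features += [closings[i + 1] - closings[i] for i in range(4)]

stack = np.stack(features)  # shape (24, 16, 16): a 24-dim morphological profile per pixel
print(stack.shape)
```

The Gabor branch follows the same layout: 16 orientations × 5 scales over the three base images stack into 240 dimensions per pixel.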
3. The hyperspectral image classification method based on local and non-local multi-feature semantics according to claim 1, characterized in that the local and non-local neighbor sets described in step (4) are constructed as follows:
4a) Apply principal component analysis to the hyperspectral image and extract the first principal component as the base image; set the superpixel number LP and perform entropy-rate-based superpixel segmentation, obtaining LP superpixel blocks $\{P_u\}_{u=1}^{LP}$;
4b) Set the local window parameter $W_{local}$; for a sample $y_j$ of the test set, all samples that lie in the square window $W_{local} \times W_{local}$ centered on $y_j$ and belong to the same superpixel block $P_u$ as $y_j$ constitute the local adaptive neighbor set $B_j$ of $y_j$;
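Step 4b) can be sketched as follows, with a hypothetical 6 × 6 superpixel label map and $W_{local} = 3$: the neighbor set is the square window restricted to the pixels sharing the center pixel's superpixel block.

```python
import numpy as np

# Hypothetical superpixel label map (values are superpixel block indices).
superpixel = np.array([
    [0, 0, 0, 1, 1, 1],
    [0, 0, 0, 1, 1, 1],
    [0, 0, 2, 2, 1, 1],
    [3, 3, 2, 2, 2, 1],
    [3, 3, 2, 2, 2, 2],
    [3, 3, 3, 2, 2, 2],
])

def local_adaptive_neighbors(seg, row, col, w=3):
    """Pixels in the w x w window that share the center pixel's superpixel block."""
    half = w // 2
    r0, r1 = max(row - half, 0), min(row + half + 1, seg.shape[0])
    c0, c1 = max(col - half, 0), min(col + half + 1, seg.shape[1])
    block = seg[r0:r1, c0:c1]
    return [(r0 + i, c0 + j)
            for i in range(block.shape[0])
            for j in range(block.shape[1])
            if block[i, j] == seg[row, col] and (r0 + i, c0 + j) != (row, col)]

neighbors = local_adaptive_neighbors(superpixel, 2, 2, w=3)
print(neighbors)  # -> [(2, 3), (3, 2), (3, 3)]
```

Window neighbors from a different superpixel block (here the pixels labeled 0, 1, and 3) are rejected, which is how the set adapts to edges instead of smoothing across them.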
4c) Set the non-local search window parameter $W_{nonlocal}$ and the non-local neighbor number K; for each sample $h_q$, q = 1, 2, ..., M of the original hyperspectral image, perform mean pooling over its $W_{nonlocal} \times W_{nonlocal}$ neighborhood, obtaining the structural information $\bar{h}_q$ of every sample; for each sample $y_j$, j = 1, 2, ..., m of the test set, compare its structural information $\bar{h}_j$ with the structural information of all the other samples and compute the degree of similarity between the structural information of the samples:

$$SGD_{jq} = g(\bar{h}_j, \bar{h}_q)$$

where $g(x, y) = \arccos\left(\sqrt{x}^{T}\sqrt{y}\right)$ is the geodesic distance and $\sqrt{x}$ denotes taking the square root of each entry of the vector x; the value $SGD_{jq}$ represents the degree of similarity between the structural information of test sample $y_j$ and that of sample q; find the K samples most similar to test sample $y_j$, that is, the K samples with the smallest SGD values, which constitute the non-local similar-structure neighbor set $C_j$ of $y_j$; at the same time, according to the magnitudes of the SGD values, assign a different weight to each of the K non-local neighbors, with the weights computed as:

$$\omega_{jh} = \exp\left(-\frac{|SGD_{jh}|}{\gamma}\right), \quad h \in C_j$$

where $\omega_{jh}$ denotes the weight of the non-local similar-structure neighbor h of sample $y_j$ and $\gamma$ is the Gaussian kernel parameter.
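Step 4c) can be sketched in NumPy as follows. The pooled structural vectors here are hypothetical stand-ins for the mean-pooled spectra, and g is the square-root geodesic distance referenced in the claim; K and γ take the values used in the embodiments' spirit but are otherwise arbitrary.

```python
import numpy as np

def geodesic(x, y):
    """g(x, y) = arccos(sqrt(x) . sqrt(y)) for nonnegative vectors."""
    u = np.clip(np.sqrt(x) @ np.sqrt(y), -1.0, 1.0)
    return float(np.arccos(u))

# Hypothetical pooled structural vector of one test pixel and of 4 candidates.
target = np.array([0.60, 0.30, 0.10])
candidates = np.array([[0.58, 0.32, 0.10],
                       [0.10, 0.10, 0.80],
                       [0.55, 0.35, 0.10],
                       [0.33, 0.33, 0.34]])

sgd = np.array([geodesic(target, cand) for cand in candidates])
K, gamma = 2, 0.05
nearest = np.argsort(sgd)[:K]                    # the K most similar candidates
weights = np.exp(-np.abs(sgd[nearest]) / gamma)  # w_jh, larger for closer neighbors

print(nearest, weights)
```

The two nearly identical candidates (indices 0 and 2) are selected, and the closer of the two receives the larger Gaussian weight, matching the intent of the weighting formula above.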
4. The hyperspectral image classification method based on local and non-local multi-feature semantics according to claim 1, characterized in that the first-order local energy function used in step (5) to compute the denoised semantic representation of test sample $y_j$ is:
$$E\left(\bar{s}_j^{(1)}; (B_j, C_j)\right) = \sum_{v=1}^{V} g\left(\bar{s}_j^{(1)}, s_j^{v}\right) + \sum_{r \in B_j} \sum_{v=1}^{V} g\left(\bar{s}_j^{(1)}, s_r^{v}\right) + \sum_{h \in C_j} \sum_{v=1}^{V} g\left(\bar{s}_j^{(1)}, \omega_{jh} \cdot s_h^{v}\right)$$
This energy function is minimized by gradient descent; its gradient is:
$$\frac{\partial E\left(\bar{s}_j^{(1)}; (B_j, C_j)\right)}{\partial \bar{s}_{j,k}^{(1)}} = \frac{1}{|V|}\sum_{v=1}^{V} f\left(\bar{s}_{j,k}^{(1)}, s_{j,k}^{v}\right) + \frac{1}{|V|}\sum_{r \in B_j} \sum_{v=1}^{V} f\left(\bar{s}_{j,k}^{(1)}, s_{r,k}^{v}\right) + \frac{1}{|V|}\sum_{h \in C_j} \sum_{v=1}^{V} f\left(\bar{s}_{j,k}^{(1)}, \omega_{jh} \cdot s_{h,k}^{v}\right)$$
where $\bar{s}_{j,k}^{(1)}$ denotes the k-th element of $\bar{s}_j^{(1)}$, and:
$$f(x, y) = \frac{\partial g(x, y)}{\partial x} = -\frac{\sqrt{y}}{2\sqrt{x}\,\sqrt{1 - \left(\sqrt{x}^{T}\sqrt{y}\right)^{2}}}$$
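The gradient f above is consistent with the geodesic distance $g(x, y) = \arccos(\sqrt{x}^{T}\sqrt{y})$ of claim 3. The check below (an illustration, not part of the patent) compares the per-component analytic derivative against a central finite difference of g on hypothetical probability vectors.

```python
import numpy as np

def g(x, y):
    """Geodesic distance g(x, y) = arccos(sqrt(x) . sqrt(y))."""
    return np.arccos(np.clip(np.sqrt(x) @ np.sqrt(y), -1.0, 1.0))

def f(x, y, k):
    """Analytic partial derivative of g with respect to x_k."""
    u = np.sqrt(x) @ np.sqrt(y)
    return -np.sqrt(y[k]) / (2.0 * np.sqrt(x[k]) * np.sqrt(1.0 - u * u))

x = np.array([0.5, 0.3, 0.2])  # hypothetical semantic (probability) vectors
y = np.array([0.2, 0.5, 0.3])

k, eps = 0, 1e-6
xp, xm = x.copy(), x.copy()
xp[k] += eps
xm[k] -= eps
numeric = (g(xp, y) - g(xm, y)) / (2 * eps)  # central finite difference

print(abs(numeric - f(x, y, k)) < 1e-5)      # -> True
```

Agreement between the two values is what makes the gradient-descent minimization of the energy function well-founded.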
5. The hyperspectral image classification method based on local and non-local multi-feature semantics according to claim 1, characterized in that, in the further iterative optimization of the first-order denoised semantic representation of test sample $y_j$ in step (6), the (t+1)-th-order local energy function is:
$$E\left(\bar{s}_j^{(t+1)}; (B_j, C_j)\right) = g\left(\bar{s}_j^{(t+1)}, \bar{s}_j^{(t)}\right) + \sum_{r \in B_j} g\left(\bar{s}_j^{(t+1)}, \bar{s}_r^{(t)}\right) + \sum_{h \in C_j} g\left(\bar{s}_j^{(t+1)}, \omega_{jh} \cdot \bar{s}_h^{(t)}\right)$$
CN201611119573.1A 2016-12-07 2016-12-07 Local and non-local multi-feature semantics-based hyperspectral image classification method Active CN106529508B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611119573.1A CN106529508B (en) 2016-12-07 2016-12-07 Local and non-local multi-feature semantics-based hyperspectral image classification method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201611119573.1A CN106529508B (en) 2016-12-07 2016-12-07 Local and non-local multi-feature semantics-based hyperspectral image classification method

Publications (2)

Publication Number Publication Date
CN106529508A true CN106529508A (en) 2017-03-22
CN106529508B CN106529508B (en) 2019-06-21

Family

ID=58342718

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611119573.1A Active CN106529508B (en) 2016-12-07 2016-12-07 Local and non-local multi-feature semantics-based hyperspectral image classification method

Country Status (1)

Country Link
CN (1) CN106529508B (en)

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107124612A (en) * 2017-04-26 2017-09-01 东北大学 The method for compressing high spectrum image perceived based on distributed compression
CN107194936A (en) * 2017-05-24 2017-09-22 哈尔滨工业大学 The high spectrum image object detection method represented based on super-pixel joint sparse
CN107301368A (en) * 2017-06-28 2017-10-27 昂纳自动化技术(深圳)有限公司 A kind of recognition methods of DataMatrix Quick Response Codes
CN107451614A (en) * 2017-08-01 2017-12-08 西安电子科技大学 The hyperspectral classification method merged based on space coordinates with empty spectrum signature
CN107679538A (en) * 2017-09-05 2018-02-09 深圳大学 The forming method and formation system of high spectrum image local feature description
CN107909576A (en) * 2017-11-22 2018-04-13 南开大学 Indoor RGB D method for segmenting objects in images based on support semantic relation
CN108171270A (en) * 2018-01-05 2018-06-15 大连海事大学 A kind of hyperspectral image classification method based on Hash study
CN108416746A (en) * 2018-02-07 2018-08-17 西北大学 Based on high-spectrum image dimensionality reduction and the polychrome cultural relics pattern Enhancement Method that merges
CN108960073A (en) * 2018-06-05 2018-12-07 大连理工大学 Cross-module state image steganalysis method towards Biomedical literature
WO2019047025A1 (en) * 2017-09-05 2019-03-14 深圳大学 Method for forming local feature descriptor of hyperspectral image and forming system
CN110263709A (en) * 2019-06-19 2019-09-20 百度在线网络技术(北京)有限公司 Driving Decision-making method for digging and device
CN110378294A (en) * 2019-07-22 2019-10-25 大连海事大学 A kind of EO-1 hyperion object detection method and system based on local energy constraint and Fusion Features
CN111104984A (en) * 2019-12-23 2020-05-05 东软集团股份有限公司 CT image classification method, device and equipment for electronic computer tomography
CN111259936A (en) * 2020-01-09 2020-06-09 北京科技大学 Image semantic segmentation method and system based on single pixel annotation
CN111414936A (en) * 2020-02-24 2020-07-14 北京迈格威科技有限公司 Determination method of classification network, image detection method, device, equipment and medium
CN111885390A (en) * 2020-07-30 2020-11-03 河南大学 High spectral image compression method based on fractal multi-wavelet
CN111968057A (en) * 2020-08-24 2020-11-20 浙江大华技术股份有限公司 Image noise reduction method and device, storage medium and electronic device
CN113095408A (en) * 2021-04-14 2021-07-09 中国工商银行股份有限公司 Risk determination method and device and server
CN115019077A (en) * 2022-08-09 2022-09-06 江苏思伽循环科技有限公司 Method for identifying and controlling shaking table separator in waste battery recycling process
WO2022227914A1 (en) * 2021-04-25 2022-11-03 浙江师范大学 Hyperspectral image band selection method and system based on latent feature fusion
CN118432767A (en) * 2024-07-03 2024-08-02 湖南雷诺科技发展有限公司 5G big data low-delay optimized transmission method and system

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101770584A (en) * 2009-12-30 2010-07-07 重庆大学 Extraction method for identification characteristic of high spectrum remote sensing data
CN102542288A (en) * 2011-11-28 2012-07-04 北京航空航天大学 Construction and merging classification method for high spectrum data multi-characteristic space


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
SHI Beiqi et al.: "Optimized selection of hyperspectral image bands using sparse nonnegative matrix factorization clustering", Acta Geodaetica et Cartographica Sinica *
WANG Liguo et al.: "Band selection of hyperspectral images based on the artificial bee colony algorithm", Journal of Harbin Institute of Technology *


Also Published As

Publication number Publication date
CN106529508B (en) 2019-06-21

Similar Documents

Publication Publication Date Title
CN106529508A (en) Local and non-local multi-feature semantics-based hyperspectral image classification method
Rocchini et al. Uncertainty in ecosystem mapping by remote sensing
CN104881865B (en) Forest pest and disease monitoring method for early warning and its system based on unmanned plane graphical analysis
CN108460391B (en) Hyperspectral image unsupervised feature extraction method based on generation countermeasure network
CN111476170A (en) Remote sensing image semantic segmentation method combining deep learning and random forest
CN106203523A (en) The classification hyperspectral imagery of the semi-supervised algorithm fusion of decision tree is promoted based on gradient
CN103208011B (en) Based on average drifting and the hyperspectral image space-spectral domain classification method organizing sparse coding
CN105069468A (en) Hyper-spectral image classification method based on ridgelet and depth convolution network
CN111639587B (en) Hyperspectral image classification method based on multi-scale spectrum space convolution neural network
CN106611423B (en) SAR image segmentation method based on ridge ripple filter and deconvolution structural model
Ozdarici-Ok Automatic detection and delineation of citrus trees from VHR satellite imagery
Nurmasari et al. Oil palm plantation detection in Indonesia using Sentinel-2 and Landsat-8 optical satellite imagery (case study: Rokan Hulu regency, Riau Province)
CN106683102A (en) SAR image segmentation method based on ridgelet filters and convolution structure model
Feng et al. Embranchment cnn based local climate zone classification using sar and multispectral remote sensing data
CN103593853A (en) Remote-sensing image multi-scale object-oriented classification method based on joint sparsity representation
CN108229551A (en) A kind of Classification of hyperspectral remote sensing image method based on compact dictionary rarefaction representation
Zhang et al. Mapping freshwater marsh species in the wetlands of Lake Okeechobee using very high-resolution aerial photography and lidar data
CN106096612A (en) Trypetid image identification system and method
CN107273919A (en) A kind of EO-1 hyperion unsupervised segmentation method that generic dictionary is constructed based on confidence level
Han et al. Integration of texture and landscape features into object-based classification for delineating Torreya using IKONOS imagery
CN109034213A (en) Hyperspectral image classification method and system based on joint entropy principle
He et al. Bilinear squeeze-and-excitation network for fine-grained classification of tree species
CN116030355B (en) Ground object classification method and system
Jenicka Land Cover Classification of Remotely Sensed Images
Ouchra et al. Comparison of Machine Learning Methods for Satellite Image Classification: A Case Study of Casablanca Using Landsat Imagery and Google Earth Engine

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant