CN103440625A - Hyperspectral image processing method based on textural feature strengthening - Google Patents

Hyperspectral image processing method based on textural feature strengthening

Info

Publication number
CN103440625A
Authority
CN
China
Granted
Application number
CN201310358810XA
Other languages
Chinese (zh)
Other versions
CN103440625B (en)
Inventor
邓水光
徐亦飞
尹建伟
李莹
吴健
吴朝晖
Current Assignee
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Application filed by Zhejiang University ZJU
Priority to CN201310358810.XA, patent CN103440625B
Publication of CN103440625A
Application granted
Publication of CN103440625B
Legal status: Active

Landscapes

  • Image Analysis (AREA)
  • Complex Calculations (AREA)

Abstract

The invention provides a hyperspectral image processing method based on textural feature strengthening. A textural feature matrix composed of the feature values of all pixels in a two-dimensional image is used to describe that image; the textural feature matrices of all two-dimensional images are subjected to textural feature strengthening to obtain textural-feature-strengthened matrices; main textural features are extracted from all the strengthened matrices to form a main textural feature vector, which is used to represent the hyperspectral image. The method makes reasonable use of the multi-wavelength information of the hyperspectral image, captures rich textural features accurately, makes texture details easy to distinguish, and is particularly suitable for the analysis of fine-texture images.

Description

Hyperspectral image processing method based on textural feature strengthening
Technical field
The invention belongs to the technical field of image processing and relates to a hyperspectral image processing method based on textural feature strengthening.
Background technology
A hyperspectral image is a three-dimensional image that combines ordinary two-dimensional spatial information with wavelength information. While the spatial features of a target are being imaged, every spatial pixel is dispersed into tens or even hundreds of narrow bands to form a continuous spectrum. A hyperspectral image is therefore a three-dimensional stack of two-dimensional images, one per wavelength. Because different materials absorb the spectrum differently, certain defects are reflected more clearly in the image at particular wavelengths, and the spectral information fully reflects differences in the internal physical structure and chemical composition of a sample.
Near-infrared hyperspectral imaging is widely used in industries such as food, medicine, and petrochemicals because it is fast and non-destructive. Hyperspectral image analysis is usually divided into spectral data analysis and hyperspectral texture analysis. Compared with spectral data, image texture is closer to human visual perception and reflects microstructure more accurately. Current texture analysis methods are mainly applied to traditional two-dimensional macroscopic images. For hyperspectral textures, current methods target aerial remote-sensing data, in which all samples are represented in a single spectral image, so texture analysis there values the relations between pixel vectors rather than the texture itself. In hyperspectral applications in fields such as food and agriculture, by contrast, one hyperspectral image represents one sample, with far higher precision than aerial remote-sensing images, and texture analysis there is concerned with the texture structure itself rather than with pixel vectors. To date, little research has addressed hyperspectral texture analysis for fields such as agriculture and food.
Current hyperspectral analysis methods fall mainly into three categories: 1) choosing a small, representative subset of the two-dimensional images for texture analysis; this approach generally assumes that images with good spectral reflectance values also have prominent textural features, an assumption that lacks valid theoretical or practical support; 2) applying three-dimensional texture methods directly; these methods extend classical two-dimensional methods by treating wavelength as a third dimension, but are too coarse and cause substantial information loss; 3) extending existing two-dimensional methods to represent the three-dimensional hyperspectral image effectively by defining relations between bands.
The third, extension-based approach can represent hyperspectral textural features effectively, but it faces three main challenges: 1) defining a good descriptor for fine textures, one that satisfies the basic properties an excellent texture descriptor should have, such as rotation invariance; 2) making reasonable use of multi-wavelength information, which requires defining a correlation model between bands through which the rich textural features of the hyperspectral image can be captured effectively; 3) reducing the dimensionality of the large number of extracted features; a supervised dimension-reduction method can both cut the execution time of the classification model and improve classification accuracy.
Summary of the invention
To address the deficiencies of the prior art, the invention provides a hyperspectral image processing method based on textural feature strengthening that can effectively capture and describe fine textures.
In the hyperspectral image processing method based on textural feature strengthening provided by the invention, the hyperspectral image comprises several two-dimensional images corresponding to different wavelengths, and the method comprises:
1) filtering any one of the two-dimensional images to obtain the local direction responses of all its pixels; the local direction responses in the multiple directions obtained by repeated filtering are combined into the local direction response vector of each pixel;
2) applying normalization and then N-state encoding to the local direction response vectors to obtain digitized direction vectors, computing the feature value of every pixel from its direction vector, and building the textural feature matrix;
3) repeating steps 1)–2) to obtain the textural feature matrices of all two-dimensional images, and strengthening each textural feature matrix according to the wavelength correlation between matrices to obtain the corresponding textural-feature-strengthened matrix;
4) extracting the main textural features from all the strengthened matrices to form the main textural feature vector that represents the hyperspectral image.
The hyperspectral image processing method based on textural feature strengthening of the invention describes the textural features of each two-dimensional image with the feature values of all its pixels, which guarantees rotation invariance; it strengthens the textural features of all two-dimensional images and extracts the main textural features from the strengthened results to form the main textural feature vector representing the hyperspectral image. By making reasonable use of the multi-wavelength information, the rich textural features of the hyperspectral image can be captured effectively.
In step 1), filtering is carried out according to the formula

L_θ = (G_2^θ * I)^2 + (H_2^θ * I)^2

where:
I denotes the current two-dimensional image;
L_θ denotes the local direction response of every pixel of the current two-dimensional image in direction θ, −π ≤ θ ≤ π;
G_2^θ is the second-order Gaussian-like function:

G_2^θ = k_a^θ G_2a + k_b^θ G_2b + k_c^θ G_2c,
k_a^θ = cos^2(θ), k_b^θ = −2 cos(θ) sin(θ), k_c^θ = sin^2(θ),

where G_2a, G_2b, G_2c are the results of rotating the second-order Gaussian function G(x, y) counterclockwise by 0, π/2 and 3π/4 respectively, and (x, y) are pixel coordinates in the two-dimensional image;
H_2^θ is the Hilbert transform of the second-order Gaussian-like function:

H_2^θ = h_a^θ H_2a + h_b^θ H_2b + h_c^θ H_2c + h_d^θ H_2d,
h_a^θ = cos^3(θ), h_b^θ = −3 cos^2(θ) sin(θ), h_c^θ = 3 cos(θ) sin^2(θ), h_d^θ = −sin^3(θ),

where H_2a, H_2b, H_2c and H_2d are basis functions separable in x and y, as given in Freeman, W.T., & Adelson, E.H. (1991), The design and use of steerable filters, IEEE Transactions on Pattern Analysis and Machine Intelligence, 13, 891–906.
By varying the value of θ, the local direction response of every pixel in each direction is obtained, and the responses in the multiple directions are combined into the local direction response vector of each pixel.
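The steering construction in the formulas above can be checked numerically. The sketch below is a minimal illustration, not the patent's implementation: it uses an unnormalized x–y separable G2 basis (the normalization constants of Freeman & Adelson (1991) are dropped, which does not affect the steering property) and verifies that the interpolated filter k_a G_2a + k_b G_2b + k_c G_2c equals the basis filter G_2a evaluated in rotated coordinates.

```python
import numpy as np

# Unnormalized x-y separable basis for the second derivative of a Gaussian.
def g2_basis(x, y):
    e = np.exp(-(x**2 + y**2))
    return (2 * x**2 - 1) * e, 2 * x * y * e, (2 * y**2 - 1) * e

def g2_steered(x, y, theta):
    """Interpolate the oriented filter from the three basis filters."""
    g2a, g2b, g2c = g2_basis(x, y)
    ka = np.cos(theta) ** 2
    kb = -2 * np.cos(theta) * np.sin(theta)
    kc = np.sin(theta) ** 2
    return ka * g2a + kb * g2b + kc * g2c

# Sample the filters on a grid and check the steering property:
# the interpolated filter equals G2a evaluated in rotated coordinates.
ax = np.linspace(-3, 3, 61)
x, y = np.meshgrid(ax, ax)
theta = np.pi / 5                      # an arbitrary test orientation
xr = x * np.cos(theta) - y * np.sin(theta)
yr = x * np.sin(theta) + y * np.cos(theta)
assert np.allclose(g2_steered(x, y, theta), g2_basis(xr, yr)[0])
```

Only three fixed basis filters need to be convolved with the image; the response at any orientation θ then follows by interpolation, which is what makes computing P direction responses per pixel cheap.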
In step 2), each local direction response in the local direction response vector is normalized by the formula

L_Norm,θp = L_θp / ( Σ_{p=0}^{P−1} L_θp + std(L_θp) ),

giving the normalized direction response vector, where L_θp is the local direction response of a pixel in direction θ_p, L_Norm,θp is the corresponding normalized response, P is the number of directions, and std(L_θp) denotes taking the standard deviation of the responses L_θp.
Adding the standard deviation to the denominator avoids the problem of the numerator and denominator changing in equal proportion during normalization.
In step 2), the N-state encoding is carried out according to a probability model and comprises:
2-1) dividing the value range of all elements of the normalized local direction response vector into N regions according to the probability model;
2-2) numbering the N regions 0, 1, …, N−1 from small to large;
2-3) taking the number of the region an element falls into as the N-state encoding result of that element.
The N-state encoding results of all elements of the local direction response vector form the digitized direction vector.
Encoding through the probability model improves the accuracy of the N-state encoding.
In step 2), the LRP method is used to obtain the feature value of every pixel:

LRP_{P,N}^{ri} = min{ LOL(LRP_{P,N}, i) | i = 0, 1, …, P−1 },

where LOL(LRP_{P,N}, i) denotes a circular left shift by i positions of the P-digit number LRP_{P,N}, and

LRP_{P,N} = Σ_{p=0}^{P−1} D_p N^p,

where D_p is the p-th element of the digitized direction vector and N is the number of states of the N-state encoding.
With the LRP method, a pixel represented by its direction vector is represented instead by a single feature value. Because each pixel has local direction responses in P different directions, and because each pixel must have exactly one textural feature value to guarantee the rotation invariance of the hyperspectral image, the P-digit number LRP_{P,N} is shifted (with step length i):

LOL(LRP_{P,N}, i) = D_i N^0 + D_{i+1} N^1 + … + D_{P−1} N^{P−1−i} + D_0 N^{P−i} + … + D_{i−1} N^{P−1}.

After the P shift operations, the minimum of the shift results is taken as the feature value of the pixel. A shift corresponds to a rotation of the two-dimensional image, with the step length corresponding to the rotation angle; taking the minimum over all shifts guarantees the rotation invariance of the image.
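The shift-and-minimize step can be sketched in a few lines. This is a minimal illustration (the function name `lrp_value` is hypothetical), reproducing the worked example given later in the description.

```python
def lrp_value(d, n_states):
    """Rotation-invariant LRP feature of one pixel: encode the digitized
    direction vector d in base N, try all P circular shifts, keep the minimum."""
    p = len(d)
    shifts = []
    for i in range(p):
        rotated = d[i:] + d[:i]   # circular left shift by i positions
        shifts.append(sum(dp * n_states**k for k, dp in enumerate(rotated)))
    return min(shifts)

# Example from the description: V_D = (0, 1, 1, 3) with N = 4 states.
assert lrp_value([0, 1, 1, 3], 4) == 53
```

Because every circular shift of the same direction vector yields the same minimum, all P rotations of a local pattern map to one feature value, which is exactly the rotation-invariance property claimed.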
In step 3), when strengthening a textural feature matrix, the correlation between the current textural feature matrix and every other textural feature matrix is judged first, and the current matrix is then fused point-to-point with all textural feature matrices whose wavelengths are correlated with it, yielding the textural-feature-strengthened matrix.
In the point-to-point fusion, the element-wise mean of the current matrix and all wavelength-correlated matrices is taken; the resulting new matrix is the strengthened textural feature matrix corresponding to the current one. The method is simple and easy to implement.
In step 3), whether any two textural feature matrices are wavelength-correlated is determined from their correlation coefficient, computed as

C_ij = cov(a_i, a_j) / sqrt( cov(a_i, a_i) × cov(a_j, a_j) ),

where C_ij denotes the correlation coefficient of the textural feature matrices corresponding to the i-th and j-th wavelengths, and a_i and a_j are the row vectors into which those matrices are stretched. If C_ij > 0, the two matrices are judged wavelength-correlated; otherwise they are judged uncorrelated.
The invention fuses the textural features of the two-dimensional images at all wavelengths and extracts from them the main textural features of the three-dimensional hyperspectral image. Compared with prior-art processing methods, it makes reasonable use of the multi-wavelength information, captures rich textural features accurately, and makes texture details easy to distinguish; it is particularly suitable for analyzing fine-texture images (such as texture images of fish fillets). The dimensionality reduction applied to the large number of extracted features both represents the hyperspectral image accurately and reduces the data volume, which helps speed up subsequent applications.
Brief description of the drawings
Fig. 1 is a flowchart of the hyperspectral image processing method based on textural feature strengthening of this embodiment;
Fig. 2 illustrates the probability statistics model of the normalized local direction responses.
Detailed description
The hyperspectral image processing method based on textural feature strengthening of the invention is further described below with reference to a specific embodiment, but the scope of protection of the invention is not limited to this embodiment; any variation or replacement that a person skilled in the art could readily conceive within the technical scope disclosed by the invention shall be covered by the scope of protection of the invention.
The embodiment takes as an example an experiment distinguishing fish flesh stored under different environments.
Instrument preparation
The experimental equipment consists of a computer, a hyperspectral spectrometer, halogen lamps, and a black-and-white calibration board. The spectrometer is a Handheld FieldSpec from ASD (Analytical Spectral Devices, USA), with a spectral sampling interval of 1.5 nm and a sampling range of 380 nm–1030 nm, operated in diffuse reflection mode; 14.5 V halogen lamps matched to the spectrometer are used, and before spectrum collection the spectrometer must be routinely calibrated with the black-and-white board.
Material preparation
54 fresh live turbot (a flatfish) were prepared, slaughtered, bled, gutted, cleaned, and kept frozen until use. The fish weighed between 372 g and 580 g (average 512 g) and measured 27.5 cm to 32 cm (average 30.5 cm) in length. They were then cut from right to left on a chopping board into 240 fillet samples. The first 96 samples served as fresh, never-frozen samples (Fresh); of the remaining 144 samples, 72 were placed at −70 °C and 72 at −20 °C to produce fast-frozen and slow-frozen samples respectively. Room temperature was held constant at 20 °C. After 9 days, all frozen samples were thawed overnight at 4 °C, forming the fast-frozen-then-thawed sample set (FFT) and the slow-frozen-then-thawed sample set (SFT). Of the Fresh samples, 64 random samples formed the training set and the remaining 32 the test set; for FFT and SFT, 48 samples each were used for training and 24 each for testing.
Hyperspectral image preprocessing
For every hyperspectral image, a region of interest of fixed size was selected with the hyperspectral image processing software ENVI 5.0. To guarantee the accuracy of the experiment and eliminate instrument and illumination effects, the spectral images of the first 100 wavelengths were discarded, and only the 412 clear hyperspectral band images corresponding to bands 101–512 were kept. To remove the influence of noise, Minimum Noise Fraction Rotation (MNF Rotation) was applied to all hyperspectral images as a high-signal-to-noise-ratio correction.
In this embodiment, the hyperspectral image to be processed has n wavelengths; the two-dimensional image at each wavelength is represented by an M × M matrix I, and each pixel of the two-dimensional image has P direction responses.
As shown in Fig. 1, the hyperspectral image processing method based on textural feature strengthening of this embodiment comprises:
1) Filtering to obtain the local direction response vector
The second-order Gaussian-like function G_2^θp and its Hilbert transform H_2^θp are convolved with the matrix I as a quadrature filter pair, filtering the two-dimensional image at every wavelength to obtain the local direction response of each pixel:

L_θp = (G_2^θp * I)^2 + (H_2^θp * I)^2,   (1)

where p indexes the direction, 0 ≤ p ≤ P−1, and θ_p is the angle of the p-th direction response, −π ≤ θ_p ≤ π.
Varying the value of θ_p and computing P times gives the local direction responses of every pixel in the P directions, which form the local direction response vector of each pixel:

V = (L_θ0, L_θ1, L_θ2, …, L_θP−1),   (2)

G_2^θp and H_2^θp are obtained from the second-order Gaussian filter function G(x, y) and its Hilbert transform H_2:

G(x, y) = (4x^2 − 2) exp(−(x^2 + y^2)/σ^2),   (3)

where σ^2 is the variance of the Gaussian-like function and (x, y) are pixel coordinates in the two-dimensional image.
Define:

G_2^θ = k_a^θ G_2a + k_b^θ G_2b + k_c^θ G_2c,   (4)

where k_a^θ = cos^2(θ), k_b^θ = −2 cos(θ) sin(θ), k_c^θ = sin^2(θ), and G_2a, G_2b, G_2c are the results of rotating G(x, y) counterclockwise by 0, π/2 and 3π/4 respectively. A third-order polynomial is used to approximate the Hilbert transform H_2 of the second-order Gaussian function, giving:

H_2^θ = h_a^θ H_2a + h_b^θ H_2b + h_c^θ H_2c + h_d^θ H_2d,   (5)

where h_a^θ = cos^3(θ), h_b^θ = −3 cos^2(θ) sin(θ), h_c^θ = 3 cos(θ) sin^2(θ) and h_d^θ = −sin^3(θ), and H_2a, H_2b, H_2c and H_2d are basis functions separable in x and y.
2) Quantization to obtain the textural feature matrix
2-1) Normalization: the local direction responses are preprocessed with a normalization that adds the standard deviation. Let:

L_Norm,θp = L_θp / ( Σ_{p=0}^{P−1} L_θp + std(L_θp) ),   (6)

where std(L_θp) denotes taking the standard deviation of the responses L_θp; this yields the normalized local direction response vector:

V_Norm = (L_Norm,θ0, L_Norm,θ1, L_Norm,θ2, …, L_Norm,θP−1).   (7)
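Equation (6) can be sketched in NumPy as follows. This is a minimal sketch assuming, as the text suggests, that the sum and standard deviation are taken over the P responses of the same pixel; the function name is illustrative.

```python
import numpy as np

def normalize_responses(v):
    """Normalization with added standard deviation (eq. 6): each response is
    divided by the sum of all P responses plus their standard deviation."""
    return v / (v.sum() + v.std())

v = np.array([2.0, 5.0, 1.0, 4.0])        # toy local direction responses
v_norm = normalize_responses(v)
assert np.all(v_norm >= 0) and np.all(v_norm < 1)
assert np.allclose(v_norm * (v.sum() + v.std()), v)   # inverse check
```

Because the standard deviation term breaks the pure sum-normalization, two response vectors that differ only by a common scale factor no longer normalize to exactly the same vector, which is the point made about equal-proportion scaling above.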
2-2) N-state encoding: a dynamic N-state encoding based on a probability model.
The probability model is defined as follows. Let F(t) be defined over the values of the elements of the normalized local direction response vector V_Norm, whose range is [minV_Norm, maxV_Norm] with 0 ≤ minV_Norm < maxV_Norm < 1, where minV_Norm and maxV_Norm are the minimum and maximum of the elements of the normalized local direction response vector and t is a particular normalized local direction response. The integral defining F(t) is solved with the composite trapezoid rule, and the probability density f(ω) of the normalized local direction responses is estimated non-parametrically with a Parzen window.
The probability model is built from the probability density f(ω) of the normalized direction responses:

F(t) = ∫_{minV_Norm}^{t} f(ω) dω.   (8)

The range of F(t) over [minV_Norm, maxV_Norm] is divided into N equal parts, giving N regions (in this embodiment an equal division into N regions of identical size is used); each t is then assigned to one of the N regions according to F(t), the regions are numbered 0, 1, 2, …, N−1, and the number of the region containing t is its N-state encoding result. As shown in Fig. 2, a normalized local direction response that falls into the region numbered N−2 has N-state encoding result N−2. In this way the normalized local direction response vector V_Norm is converted by the probability model into the digitized direction vector V_D:

V_D = (D_0, D_1, …, D_{P−1}).   (9)
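The encoding above can be sketched compactly. In this minimal sketch the Parzen-window estimate of F(t) in eq. (8) is replaced by an empirical CDF for brevity, and the function name is illustrative; the essential behavior — equal division of the CDF range and region numbers as codes — is the same.

```python
import numpy as np

def n_state_encode(values, n_states):
    """N-state encoding via a probability model: estimate F(t) (here with an
    empirical CDF instead of a Parzen-window density), split the range of F
    into N equal regions, and use the region number as the code."""
    values = np.asarray(values, dtype=float)
    ranks = values.argsort().argsort()            # rank of each value
    f = (ranks + 1) / len(values)                 # empirical CDF in (0, 1]
    return np.minimum((f * n_states).astype(int), n_states - 1)

resp = np.array([0.05, 0.30, 0.10, 0.90, 0.40, 0.20, 0.70, 0.60])
codes = n_state_encode(resp, 4)
assert codes.min() >= 0 and codes.max() <= 3
```

Encoding through the CDF rather than the raw value range makes the N states roughly equally populated, so the codes adapt to the actual distribution of responses.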
2-3) Computing the textural features: the LRP method is used to compute the textural feature value of each pixel, guaranteeing the rotation invariance of the image.
According to the formula

LRP_{P,N} = Σ_{p=0}^{P−1} D_p N^p,   (10)

the textural feature value LRP_{P,N} of each pixel is computed, where D_p is an element of the digitized direction vector V_D. The P-digit number LRP_{P,N} is then subjected to P shift operations, and the minimum of the P resulting values is taken as the feature value of the pixel:

LRP_{P,N}^{ri} = min{ LOL(LRP_{P,N}, i) | i = 0, 1, …, P−1 },   (11)

where LOL(LRP_{P,N}, i) denotes a circular left shift of the P-digit number LRP_{P,N} by i positions, with i taking the values 0, 1, …, P−1 in turn.
As shown in Fig. 3, if V_D = (0, 1, 1, 3), then i = 0 corresponds to no rotation of the two-dimensional image, i = 1 to a clockwise rotation of the image by π/2, and so on, giving the direction vectors of the different rotation angles:

LOL(LRP_{P,N}, 0) = 0×4^0 + 1×4^1 + 1×4^2 + 3×4^3 = 212
LOL(LRP_{P,N}, 1) = 1×4^0 + 1×4^1 + 3×4^2 + 0×4^3 = 53
LOL(LRP_{P,N}, 2) = 1×4^0 + 3×4^1 + 0×4^2 + 1×4^3 = 77
LOL(LRP_{P,N}, 3) = 3×4^0 + 0×4^1 + 1×4^2 + 1×4^3 = 83

Taking the minimum gives the feature value of this pixel:

LRP_{P,N}^{ri} = min{ LOL(LRP_{P,N}, i) | i = 0, 1, …, P−1 } = 53.   (12)

The textural feature values of all pixels of the two-dimensional image at each wavelength are combined to form its M × M textural feature matrix.
3) Textural feature strengthening to obtain the textural-feature-strengthened matrices
3-1) Judging wavelength correlation
Define the symmetric correlation matrix C:

C =
| C_11  C_12  …  C_1n |
| C_21  C_22  …  C_2n |
|  …     …    …   …   |
| C_n1  C_n2  …  C_nn |   (13)

where the element C_ij is the correlation coefficient of the textural feature matrices corresponding to wavelengths i and j:

C_ij = cov(a_i, a_j) / sqrt( cov(a_i, a_i) × cov(a_j, a_j) ),  i, j = 1, 2, …, n,   (14)

where a_i denotes the row vector into which the textural feature matrix G_i of wavelength i is stretched, a_j the row vector into which the matrix G_j of wavelength j is stretched, and cov(a_i, a_j) the covariance between a_i and a_j. If C_ij is greater than 0, G_i and G_j are judged wavelength-correlated.
3-2) Point-to-point fusion
All textural feature matrices wavelength-correlated with G_i are found from the correlation coefficients, and all of them (including G_i itself) are added and averaged element by element; this point-to-point fusion forms a new textural feature matrix, the textural-feature-strengthened matrix. For a hyperspectral image with n wavelengths, the point-to-point fusion is performed n times in turn, yielding n textural-feature-strengthened matrices.
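Steps 3-1) and 3-2) together can be sketched as follows (a minimal illustration with made-up matrices; the function name is hypothetical):

```python
import numpy as np

def strengthen(feature_mats):
    """Point-to-point fusion: for each wavelength i, average (element-wise)
    all texture feature matrices whose correlation with matrix i is positive,
    including matrix i itself (C_ii = 1 > 0)."""
    flat = np.array([g.ravel() for g in feature_mats])
    c = np.corrcoef(flat)                     # symmetric correlation matrix C
    out = []
    for i in range(len(feature_mats)):
        related = [feature_mats[j]
                   for j in range(len(feature_mats)) if c[i, j] > 0]
        out.append(np.mean(related, axis=0))  # fused (strengthened) matrix
    return out

rng = np.random.default_rng(1)
mats = [rng.random((4, 4)) for _ in range(5)]
strengthened = strengthen(mats)
assert len(strengthened) == 5 and strengthened[0].shape == (4, 4)
```

Because C_ii = 1, every matrix always participates in its own fusion, so the strengthened matrix degenerates gracefully to the original when no other wavelength is correlated with it.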
4) Extracting the main textural features to form the main textural feature vector of the hyperspectral image
4-1) Each of the n textural-feature-strengthened matrices formed by the fusion is stretched into an M^2-dimensional vector, and together they form an M^2 × n matrix R:

R =
| R_11      R_12      …  R_1n     |
| R_21      R_22      …  R_2n     |
|  …         …        …   …       |
| R_{M^2,1} R_{M^2,2} …  R_{M^2,n} |   (15)

This matrix represents the hyperspectral image after textural feature strengthening.
4-2) The PCA algorithm is applied to the matrix R; the three principal components P_1, P_2, P_3 are chosen and stretched into a row vector, forming the textural feature vector X_k, where P_1, P_2, P_3 are mutually pairwise-orthogonal M^2-dimensional vectors.
Steps 1)–4) are carried out repeatedly to obtain the textural feature vectors X_k of the hyperspectral images of all samples, X_k denoting the textural feature vector of the hyperspectral image of the k-th sample.
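Step 4) can be sketched with an SVD-based PCA. This is a minimal sketch under one possible reading of the step: the patent does not fully specify how the three M^2-dimensional components P_1, P_2, P_3 are obtained from R, so the scaled left singular vectors used here (which are mutually orthogonal, as required) are an assumption, and the function name is illustrative.

```python
import numpy as np

def main_texture_vector(r):
    """PCA on the M^2 x n matrix R of strengthened feature matrices; keep
    three mutually orthogonal M^2-dim principal component vectors and
    concatenate them into the texture feature vector X_k of length 3*M^2."""
    r_centered = r - r.mean(axis=1, keepdims=True)
    u, s, vt = np.linalg.svd(r_centered, full_matrices=False)
    p = u[:, :3] * s[:3]          # three orthogonal component vectors
    return p.T.ravel()            # stretched into one row vector

m, n = 5, 12                      # toy sizes: M = 5 (M^2 = 25 pixels), n = 12 bands
r = np.random.default_rng(2).random((m * m, n))
xk = main_texture_vector(r)
assert xk.shape == (3 * m * m,)
```

Whatever the exact component convention, the output length 3M^2 matches the feature vector size used in the dimension-reduction step that follows.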
5) The textural feature vectors of all hyperspectral images are combined into a new matrix and reduced in dimension; this dimension reduction of the main textural features greatly reduces the data volume, shortens the execution time of subsequent applications (mainly the classification of fine-texture hyperspectral images), and improves their efficiency.
The textural feature vectors of all hyperspectral images are combined into a new matrix X of order l (l = k × 3M^2), where k is the number of samples, 240 in this embodiment. In this embodiment the supervised manifold learning method DLPP (discriminant locality preserving projections) reduces the matrix X to a d-dimensional data set Y = (y_1, y_2, …, y_m), y_i ∈ R^d (d ≪ l), where:

Y^T = A^T X,   (16)

and the projection A is obtained as follows.
Under the constraint A^T X L_W X^T A = 1, solve the equation in which

Σ_ij (y_i − y_j)^2 W_ij / Σ_ij (y_i − y_j)^2 B_ij = ( A^T X (D_W − W) X^T A ) / ( A^T X (D_B − B) X^T A ) = ( A^T X L_W X^T A ) / ( A^T X L_B X^T A )   (17)

is minimized.
B and W are weight matrices: B_ij indicates that samples i and j belong to different classes, and W_ij that samples i and j belong to the same class. D_B and D_W are diagonal matrices whose entries are the row (or column) sums of B and W respectively, and L_B = D_B − B and L_W = D_W − W are Laplacian matrices.
Algebraic transformation of (17) gives:

X L_B X^T A = λ X L_W X^T A,   (18)

where λ is an eigenvalue.
Solving (18) for the eigenvectors of the d largest eigenvalues gives A; A = (a_0, a_1, …, a_d) consists of the eigenvectors arranged according to the eigenvalues λ_1 > λ_2 > … > λ_d.
Substituting the obtained A into (16) yields Y^T, and transposing Y^T gives the d-dimensional main textural feature matrix Y after dimension reduction.
This method reduces the l-dimensional main textural feature vector X to the d-dimensional Y while guaranteeing that points far apart in the high-dimensional space remain far apart and points close together remain close.
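The DLPP step above reduces to a generalized symmetric eigenproblem, which can be sketched as follows. This is a rough sketch under simplifying assumptions: binary same-class/different-class weights replace the heat-kernel weights of full DLPP, a small ridge keeps the right-hand matrix positive definite, and all names (`dlpp`, the toy data) are illustrative, not the patent's implementation.

```python
import numpy as np
from scipy.linalg import eigh

def dlpp(x, labels, d):
    """Sketch of the DLPP step: build same-class (W) and different-class (B)
    weight matrices, form the Laplacians L_W and L_B, and solve the
    generalized eigenproblem  X L_B X^T A = lambda X L_W X^T A  for the
    d largest eigenvalues. X has one column per sample."""
    labels = np.asarray(labels)
    w = (labels[:, None] == labels[None, :]).astype(float)   # same class
    b = 1.0 - w                                              # different class
    lw = np.diag(w.sum(axis=1)) - w
    lb = np.diag(b.sum(axis=1)) - b
    sb = x @ lb @ x.T
    sw = x @ lw @ x.T + 1e-6 * np.eye(x.shape[0])  # ridge for definiteness
    vals, vecs = eigh(sb, sw)                  # generalized symmetric solver
    a = vecs[:, ::-1][:, :d]                   # eigenvectors of d largest vals
    return (a.T @ x).T                         # Y: one d-dim row per sample

rng = np.random.default_rng(3)
x = rng.random((10, 30))                       # 10 features, 30 samples
labels = np.repeat([0, 1, 2], 10)
y = dlpp(x, labels, d=3)
assert y.shape == (30, 3)
```

Maximizing the between-class Laplacian quotient while the within-class term is constrained to 1 is what pushes different-class samples apart and keeps same-class samples close after projection.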
In this embodiment, the reduced result Y is classified with the least-squares support vector machine library lib-LSSVM (library Least-Squares Support Vector Machine; lib-LSSVM builds a least-squares support vector machine, LSSVM) to distinguish the different fillet samples, and accurate classification results were obtained.

Claims (7)

1. A hyperspectral image processing method based on textural feature strengthening, the hyperspectral image comprising several two-dimensional images corresponding to different wavelengths, characterized in that the method comprises:
1) filtering any one of the two-dimensional images to obtain the local direction responses of all its pixels, the local direction responses in the multiple directions obtained by repeated filtering being combined into the local direction response vector of each pixel;
2) applying normalization and then N-state encoding to the local direction response vectors to obtain digitized direction vectors, computing the feature value of every pixel from its direction vector, and building the textural feature matrix;
3) repeating steps 1)–2) to obtain the textural feature matrices of all two-dimensional images, and strengthening each textural feature matrix according to the wavelength correlation between matrices to obtain the corresponding textural-feature-strengthened matrix;
4) extracting the main textural features from all the strengthened matrices to form the main textural feature vector that represents the hyperspectral image.
2. The hyperspectral image processing method based on textural characteristics strengthening as claimed in claim 1, characterized in that in step 1) the filtering is carried out according to the formula

L_θ = (G_2^θ * I)^2 + (H_2^θ * I)^2

wherein:

I denotes the current two-dimensional image, and * denotes convolution;

L_θ denotes the local direction response of all pixels of the current two-dimensional image in the direction θ, with -π ≤ θ ≤ π;

G_2^θ is the second-order Gaussian-like function

G_2^θ = k_a(θ)·G_2a + k_b(θ)·G_2b + k_c(θ)·G_2c,

k_a(θ) = cos²(θ), k_b(θ) = -2cos(θ)sin(θ), k_c(θ) = sin²(θ),

G_2a, G_2b and G_2c are respectively the results of rotating the second-order Gaussian function G(x, y) counterclockwise by 0, π/2 and 3π/4, where (x, y) indexes the pixel values of the two-dimensional image;

H_2^θ is the Hilbert transform of the second-order Gaussian-like function

H_2^θ = h_a(θ)·H_2a + h_b(θ)·H_2b + h_c(θ)·H_2c + h_d(θ)·H_2d,

h_a(θ) = cos³(θ), h_b(θ) = -3cos²(θ)sin(θ), h_c(θ) = 3cos(θ)sin²(θ), h_d(θ) = -sin³(θ),

H_2a, H_2b, H_2c and H_2d are basis functions separable in x and y.
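The oriented filtering of claim 2 can be sketched as follows. This assumes the standard Freeman–Adelson steerable-filter reading, in which the basis G_2a, G_2b, G_2c corresponds to the Gaussian second derivatives Gxx, Gxy, Gyy; the Hilbert-pair term (H_2^θ * I)² is omitted for brevity, and the image is a synthetic stripe pattern:

```python
import numpy as np

sigma, rad = 2.0, 6
x = np.arange(-rad, rad + 1, dtype=float)
g0 = np.exp(-x**2 / (2 * sigma**2)); g0 /= g0.sum()   # Gaussian
g1 = -x * np.exp(-x**2 / (2 * sigma**2))              # first derivative (unnormalized)
g2 = (x**2 / sigma**2 - 1) * np.exp(-x**2 / (2 * sigma**2))  # second derivative

def sepconv(img, kx, ky):
    # Separable 2-D convolution: kx along rows (x axis), ky along columns (y axis).
    out = np.apply_along_axis(np.convolve, 1, img, kx, mode="same")
    return np.apply_along_axis(np.convolve, 0, out, ky, mode="same")

def g2_response(img, theta):
    # Steer the G2 basis with the k_a, k_b, k_c weights from claim 2.
    ka = np.cos(theta) ** 2
    kb = -2 * np.cos(theta) * np.sin(theta)
    kc = np.sin(theta) ** 2
    Gxx = sepconv(img, g2, g0)
    Gxy = sepconv(img, g1, g1)
    Gyy = sepconv(img, g0, g2)
    return ka * Gxx + kb * Gxy + kc * Gyy

# Vertical stripes: intensity varies only along x, so the energy at
# theta = 0 should dominate the energy at theta = pi/2.
xx = np.arange(64)
img = np.tile(np.sin(2 * np.pi * xx / 8.0), (64, 1))
e0 = (g2_response(img, 0.0) ** 2).mean()
e90 = (g2_response(img, np.pi / 2) ** 2).mean()
```

Because the basis is steerable, the response at any θ is a fixed linear combination of three convolutions, so all P directions in step 1) can be evaluated from a single set of filtered images.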
3. The hyperspectral image processing method based on textural characteristics strengthening as claimed in claim 2, characterized in that in step 2) each local direction response in the local direction response vector is normalized by the standard deviation according to the formula

L̂_θp = L_θp / σ(L_θ0, L_θ1, ..., L_θ(P-1))

to obtain the normalized direction response vector, wherein L_θp is the local direction response of a pixel of the two-dimensional image in the direction θ_p, L̂_θp is the local direction response after normalization, P is the number of directions of the local direction responses of the two-dimensional image, and σ(·) denotes taking the standard deviation over L_θ0, ..., L_θ(P-1).
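A minimal NumPy sketch of the standard-deviation normalization in claim 3. The exact formula survives only as an image in the source, so dividing each pixel's direction vector by its own standard deviation is an assumed reading:

```python
import numpy as np

# Hypothetical stand-in: L holds the local direction responses, one row
# per pixel and one column per direction theta_p (P = 8 directions).
rng = np.random.default_rng(1)
L = rng.uniform(size=(5, 8))

# Standard-deviation normalization per pixel (assumed reading of claim 3).
std = L.std(axis=1, keepdims=True)
L_hat = L / std
```

After this step every pixel's direction vector has unit standard deviation, which removes local contrast differences before the N-state encoding.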
4. The hyperspectral image processing method based on textural characteristics strengthening as claimed in claim 3, characterized in that in step 2) the N-state encoding is carried out according to a probability model, comprising:
2-1) dividing the value range of all elements of the normalized local direction response vector into N regions according to the probability model;
2-2) numbering the N regions 0, 1, ..., N-1 in ascending order;
2-3) taking the number of the region an element falls into as the N-state encoding result of that element;
the N-state encoding results of all elements of the local direction response vector form the numerical direction vector.
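Steps 2-1) to 2-3) can be sketched with quantile binning. An equal-probability partition is one possible "probability model"; the claim does not pin the model down, so this is an assumption:

```python
import numpy as np

rng = np.random.default_rng(2)
v = rng.normal(size=1000)     # stand-in for normalized direction responses
N = 4                         # number of encoding states

# Equal-probability regions: interior quantile edges split the value
# range into N bins, each containing roughly the same number of samples.
edges = np.quantile(v, np.linspace(0, 1, N + 1)[1:-1])
codes = np.digitize(v, edges)  # region numbers 0 .. N-1, ascending
```

`np.digitize` returns the index of the region each element falls into, which is exactly the region number assigned in step 2-2).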
5. The hyperspectral image processing method based on textural characteristics strengthening as claimed in claim 4, characterized in that in step 2) the LRP method is adopted to obtain the eigenvalue of each pixel:

LRP_{P,N}^{ri} = min{ LOL(LRP_{P,N}, i) | i = 0, 1, ..., P-1 },

where LOL(LRP_{P,N}, i) denotes circularly shifting the P-digit code LRP_{P,N} left by i positions, LRP_{P,N} = Σ_{p=0}^{P-1} d_p·N^p, d_p is the p-th element of the direction vector, and N is the number of states of the N-state encoding.
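A sketch of the rotation-invariant LRP code of claim 5. The base-N weighting d_p·N^p for LRP_{P,N} is an assumed reading of the image-only formula, chosen because it makes the circular left shift LOL act on the base-N digit string:

```python
def lrp_value(d, N):
    # Base-N integer formed by the N-state direction codes d_0 .. d_{P-1}
    # (assumed reading: LRP_{P,N} = sum_p d_p * N**p).
    return sum(dp * N**p for p, dp in enumerate(d))

def lrp_rotation_invariant(d, N):
    # Minimum over all circular shifts of the digit string; rotating the
    # underlying texture pattern then leaves the eigenvalue unchanged.
    d = list(d)
    P = len(d)
    return min(lrp_value(d[i:] + d[:i], N) for i in range(P))
```

Taking the minimum over all P rotations mirrors the rotation-invariant variant of LBP: any two direction vectors that are circular shifts of each other map to the same pixel eigenvalue.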
6. The hyperspectral image processing method based on textural characteristics strengthening as claimed in claim 5, characterized in that in step 3), when strengthening each textural characteristics matrix, first the correlation between the current textural characteristics matrix and each other textural characteristics matrix is judged; the current textural characteristics matrix is then fused point-to-point with all textural characteristics matrices whose wavelengths are correlated with it, yielding the textural characteristics strengthened matrix.
7. The hyperspectral image processing method based on textural characteristics strengthening as claimed in claim 6, characterized in that in step 3) whether the wavelengths of any two textural characteristics matrices are correlated is determined from their correlation coefficient, calculated according to the formula

C_ij = cov(a_i, a_j) / (σ(a_i)·σ(a_j)),

wherein C_ij denotes the correlation coefficient of the textural characteristics matrices corresponding to the i-th and j-th wavelengths, and a_i and a_j are respectively the row vectors into which those two matrices are flattened. If C_ij > 0, the wavelengths of the two textural characteristics matrices are judged to be correlated; otherwise they are judged to be uncorrelated.
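Claims 6 and 7 together can be sketched as follows. The Pearson form of the correlation coefficient is assumed (the source formula survives only as an image), and a point-wise mean stands in for the unspecified point-to-point fusion operator:

```python
import numpy as np

rng = np.random.default_rng(3)
# Stand-in stack of textural feature matrices, one 16x16 matrix per wavelength.
F = rng.normal(size=(6, 16, 16))
F[1] = F[0] + 0.1 * rng.normal(size=(16, 16))  # make bands 0 and 1 correlated

flat = F.reshape(len(F), -1)   # flatten each matrix into a row vector a_i
C = np.corrcoef(flat)          # pairwise Pearson correlation coefficients

i = 0
# Claim 7: wavelengths j with C_ij > 0 count as correlated with band i.
related = [j for j in range(len(F)) if j != i and C[i, j] > 0]

# Claim 6: point-to-point fusion of band i with its correlated bands
# (mean is an assumed fusion operator, not specified by the patent).
strengthened = F[[i] + related].mean(axis=0)
```

Flattening each matrix into a row vector lets `np.corrcoef` compute every pairwise coefficient in one call, after which the fusion is a simple element-wise reduction over the selected bands.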
CN201310358810.XA 2013-08-16 2013-08-16 The Hyperspectral imagery processing method strengthened based on textural characteristics Active CN103440625B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310358810.XA CN103440625B (en) 2013-08-16 2013-08-16 The Hyperspectral imagery processing method strengthened based on textural characteristics


Publications (2)

Publication Number Publication Date
CN103440625A true CN103440625A (en) 2013-12-11
CN103440625B CN103440625B (en) 2016-08-10

Family

ID=49694317

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310358810.XA Active CN103440625B (en) 2013-08-16 2013-08-16 The Hyperspectral imagery processing method strengthened based on textural characteristics

Country Status (1)

Country Link
CN (1) CN103440625B (en)


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102411782A (en) * 2011-11-01 2012-04-11 哈尔滨工程大学 Three layer color visualization method of hyperspectral remote sensing image
WO2013084233A1 (en) * 2011-12-04 2013-06-13 Digital Makeup Ltd Digital makeup


Non-Patent Citations (2)

Title
HUANG YUANCHENG: "High-resolution Hyper-spectral Image Classification with Parts-based Feature and Morphology Profile in Urban Area", Geo-spatial Information Science *
LIU Xiaoming et al.: "Locality-sensitive discriminant analysis based on matrix representation", Journal of Zhejiang University *

Cited By (8)

Publication number Priority date Publication date Assignee Title
CN104463814A (en) * 2014-12-08 2015-03-25 西安交通大学 Image enhancement method based on local texture directionality
CN104463814B (en) * 2014-12-08 2017-04-19 西安交通大学 Image enhancement method based on local texture directionality
CN105718531A (en) * 2016-01-14 2016-06-29 广州市万联信息科技有限公司 Image database building method and image recognition method
CN105718531B (en) * 2016-01-14 2019-12-17 广州市万联信息科技有限公司 Image database establishing method and image identification method
CN111460966A (en) * 2020-03-27 2020-07-28 中国地质大学(武汉) Hyperspectral remote sensing image classification method based on metric learning and neighbor enhancement
CN111460966B (en) * 2020-03-27 2024-02-02 中国地质大学(武汉) Hyperspectral remote sensing image classification method based on metric learning and neighbor enhancement
CN112037211A (en) * 2020-09-04 2020-12-04 中国空气动力研究与发展中心超高速空气动力研究所 Damage characteristic identification method for dynamically monitoring small space debris impact event
CN112037211B (en) * 2020-09-04 2022-03-25 中国空气动力研究与发展中心超高速空气动力研究所 Damage characteristic identification method for dynamically monitoring small space debris impact event

Also Published As

Publication number Publication date
CN103440625B (en) 2016-08-10

Similar Documents

Publication Publication Date Title
Wang et al. Locality and structure regularized low rank representation for hyperspectral image classification
Wang et al. Hyperspectral image restoration via total variation regularized low-rank tensor decomposition
Zhao et al. Look across elapse: Disentangled representation learning and photorealistic cross-age face synthesis for age-invariant face recognition
Fan et al. Spatial–spectral total variation regularized low-rank tensor decomposition for hyperspectral image denoising
Ma et al. Local-manifold-learning-based graph construction for semisupervised hyperspectral image classification
CN107341786A (en) The infrared and visible light image fusion method that wavelet transformation represents with joint sparse
CN103177458B (en) A kind of visible remote sensing image region of interest area detecting method based on frequency-domain analysis
CN103218609B (en) A kind of Pose-varied face recognition method based on hidden least square regression and device thereof
CN106557784A (en) Fast target recognition methodss and system based on compressed sensing
CN102289807B (en) Method for detecting change of remote sensing image based on Treelet transformation and characteristic fusion
Gao et al. Small sample classification of hyperspectral image using model-agnostic meta-learning algorithm and convolutional neural network
CN103440625A (en) Hyperspectral image processing method based on textural feature strengthening
CN102393911A (en) Background clutter quantization method based on compressive sensing
Jin et al. Spatial-spectral feature extraction of hyperspectral images for wheat seed identification
Sun et al. An artificial target detection method combining a polarimetric feature extractor with deep convolutional neural networks
CN103425995A (en) Hyperspectral image classification method based on area similarity low rank expression dimension reduction
CN102819840B (en) Method for segmenting texture image
He et al. Multiple data-dependent kernel for classification of hyperspectral images
Elkholy et al. Unsupervised hyperspectral band selection with deep autoencoder unmixing
Shi et al. F 3 Net: Fast Fourier filter network for hyperspectral image classification
CN105975940A (en) Palm print image identification method based on sparse directional two-dimensional local discriminant projection
Tuna et al. An efficient feature extraction approach for hyperspectral images using Wavelet High Dimensional Model Representation
CN103606189A (en) Track base selection method facing non-rigid body three-dimensional reconstruction
CN104537377B (en) A kind of view data dimension reduction method based on two-dimentional nuclear entropy constituent analysis
Venkateswaran et al. Performance analysis of k-means clustering for remotely sensed images

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Deng Shuiguang; Li Yujin; Xu Yifei; Yin Jianwei; Li Ying; Wu Jian; Wu Chaohui

Inventor before: Deng Shuiguang; Xu Yifei; Yin Jianwei; Li Ying; Wu Jian; Wu Chaohui

COR Change of bibliographic data
C14 Grant of patent or utility model
GR01 Patent grant