Summary of the invention
In view of the deficiencies of the prior art, the invention provides a hyperspectral image processing method based on texture feature enhancement that can effectively capture and describe fine textures.
The hyperspectral image processing method based on texture feature enhancement provided by the invention, wherein the hyperspectral image comprises several two-dimensional images corresponding to different wavelengths, comprises:
1) filtering any one of the two-dimensional images to obtain the local direction responses of all pixels of the two-dimensional image; the local direction responses in the multiple directions obtained by repeated filtering are combined to form the local direction response vector of each pixel;
2) successively applying normalization and N-state encoding to the local direction response vector to obtain a digitized direction vector, obtaining the feature value of each pixel from the direction vector, and building the texture feature matrix;
3) cycling through steps 1)~2) to obtain the texture feature matrices of all two-dimensional images, and applying texture feature enhancement to each texture feature matrix according to the wavelength correlation of the texture feature matrices to obtain the corresponding texture feature enhancement matrices;
4) extracting the main texture features from all texture feature enhancement matrices to form the main texture feature vector used to represent the hyperspectral image.
The hyperspectral image processing method based on texture feature enhancement of the present invention describes the texture features of each two-dimensional image by the feature values of its pixels, which guarantees the rotational invariance of the image; the texture features of all two-dimensional images are enhanced, and the main texture features extracted from the enhanced texture features form the main texture feature vector representing the hyperspectral image. By reasonably exploiting the multi-wavelength information of the hyperspectral image, the rich texture features of the hyperspectral image can be captured effectively.
In step 1), filtering is carried out according to the formula:

Lθ = (G2^θ * I)^2 + (H2^θ * I)^2

where * denotes convolution, and:
I denotes the current two-dimensional image;
Lθ denotes the local direction response of each pixel of the current two-dimensional image in direction θ, -π ≤ θ ≤ π;
G2^θ is the second-order Gaussian-like function, a combination of the basis functions G2a, G2b and G2c, which are respectively the results of rotating the second-order Gaussian function G(x, y) counterclockwise by 0, π/2 and 3π/4, (x, y) being the pixel coordinates in the two-dimensional image;
H2^θ is the Hilbert transform of the second-order Gaussian-like function, a combination of the basis functions H2a, H2b, H2c and H2d, which are basis functions separable in x and y. H2a, H2b, H2c and H2d are as given in Freeman, W.T., & Adelson, E.H. (1991). The design and use of steerable filters. IEEE Transactions on Pattern Analysis and Machine Intelligence, 13, 891-906.
By varying the value of θ, the local direction response of each pixel in each of the different directions is obtained, and the local direction responses in the multiple directions are combined to form the local direction response vector of each pixel.
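As an illustration of step 1), the sketch below computes a per-pixel local direction response vector with an oriented second-derivative-of-Gaussian filter. It is a simplified stand-in for the full steerable quadrature pair of Freeman and Adelson: only the G2 half is used, the kernel is built by direct coordinate rotation rather than by steering basis functions, and all names (`g2_kernel`, `direction_response_vector`) and parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import convolve

def g2_kernel(theta, sigma=1.0, size=9):
    """Second derivative of a Gaussian, oriented at angle theta (radians)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    # rotate coordinates so the derivative is taken along direction theta
    u = x * np.cos(theta) + y * np.sin(theta)
    g = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return (u**2 / sigma**4 - 1 / sigma**2) * g

def direction_response_vector(image, P=8, sigma=1.0):
    """Local direction responses of every pixel in P directions (step 1)."""
    thetas = [-np.pi + 2 * np.pi * p / P for p in range(P)]
    # squared filter output as a stand-in for the quadrature energy
    responses = [convolve(image, g2_kernel(t, sigma)) ** 2 for t in thetas]
    return np.stack(responses, axis=-1)   # shape (M, M, P)

img = np.random.default_rng(0).random((16, 16))
V = direction_response_vector(img, P=8)
print(V.shape)  # (16, 16, 8)
```

Each pixel of `V` holds its P-direction local response vector, the input to the normalization of step 2).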
In step 2), each local direction response in the local direction response vector is normalized with the standard deviation added to the denominator, giving the normalized direction response vector:

V_p = L_θp / (Σ_{q=0}^{P-1} L_θq + σ(L_θ0, ..., L_θ(P-1)))

where L_θp is the local direction response of a pixel of the two-dimensional image in direction θp, V_p is the value of L_θp after normalization, P is the number of directions of the local direction responses of the two-dimensional image, and σ(·) denotes taking the standard deviation.
By adopting the normalization method with an added standard deviation, the problem of the numerator and denominator scaling in equal proportion during normalization is avoided.
In step 2), the N-state encoding is carried out according to a probability model, comprising:
2-1) dividing the value interval of the elements of the normalized local direction response vector into N regions according to the probability model;
2-2) numbering the N regions 0, 1, ..., N-1 from small to large;
2-3) taking the number of the region an element belongs to as the N-state encoding result of that element;
the N-state encoding results of all elements of the local direction response vector form the digitized direction vector.
Encoding by the probability model improves the accuracy of the N-state encoding.
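Steps 2-1) to 2-3) can be sketched as follows for the simple equal-division case (the probability model of the embodiment additionally shapes the regions by an estimated density); `n_state_encode` is an illustrative name.

```python
import numpy as np

def n_state_encode(v_norm, N=4):
    """Assign each normalized response to one of N equal regions over
    [min, max] and return the region numbers 0..N-1 (steps 2-1 to 2-3)."""
    v = np.asarray(v_norm, dtype=float)
    edges = np.linspace(v.min(), v.max(), N + 1)
    # np.digitize against the interior edges yields region indices 0..N-1
    return np.digitize(v, edges[1:-1])

v = np.array([0.05, 0.30, 0.32, 0.95])
print(n_state_encode(v, N=4))  # [0 1 1 3]
```

The result is the digitized direction vector fed to the LRP computation.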
In step 2), the LRP method is used to obtain the feature value of each pixel:

LRP_P,N = Σ_{p=0}^{P-1} D_p × N^p

where D_p is the p-th element of the direction vector and N is the number of states of the N-state encoding. LOL(LRP_P,N, i) denotes a circular left shift of the P-digit LRP_P,N by i positions.
With the LRP method, a pixel represented by a direction vector is instead represented by a single feature value. Because each pixel has local direction responses in P different directions, and because to guarantee the rotational invariance of the hyperspectral image each pixel must have only one texture feature value, the P-digit LRP_P,N is shifted (with a shift step length of i), that is:

LOL(LRP_P,N, i) = D_i N^0 + D_{i+1} N^1 + ... + D_{P-1} N^{P-1-i} + D_0 N^{P-i} + ... + D_{i-1} N^{P-1}

After P shift operations, the minimum of the shift results is taken as the feature value of the pixel. The shift operation corresponds to a rotation of the two-dimensional image, the shift step length corresponding to a rotation angle; taking the minimum over the multiple shifts guarantees the rotational invariance of the image.
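A minimal sketch of the shift-and-minimize operation described above, assuming the base-N digits are stored least-significant first in a Python list (`lrp_feature` and `lol` are hypothetical names):

```python
def lrp_feature(D, N):
    """Rotation-invariant LRP feature value: minimum over all P circular
    left shifts of the base-N number with digits D (the LOL operation)."""
    P = len(D)
    def lol(i):
        shifted = D[i:] + D[:i]               # circular left shift by i
        return sum(d * N**p for p, d in enumerate(shifted))
    return min(lol(i) for i in range(P))

print(lrp_feature([0, 1, 1, 3], 4))  # 53
```

Any rotation of the digit sequence yields the same minimum, which is exactly the rotational invariance the method relies on.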
In step 3), when applying texture feature enhancement to a texture feature matrix, the correlation between the current texture feature matrix and each of the other texture feature matrices is first judged; the current texture feature matrix and all texture feature matrices whose wavelengths are correlated with it are then fused point-to-point to obtain the texture feature enhancement matrix.
In the point-to-point fusion method, the matrix mean of the current texture feature matrix and all texture feature matrices correlated with it in wavelength is computed; the new matrix obtained is the enhanced texture feature matrix corresponding to that texture feature matrix. The method is simple and easy to implement.
In step 3), whether any two texture feature matrices are wavelength-correlated is decided from their correlation coefficient, calculated according to the formula:

C_ij = cov(a_i, a_j) / (σ(a_i) σ(a_j))

where C_ij denotes the correlation coefficient of the texture feature matrices corresponding to the i-th and j-th wavelengths, and a_i and a_j are respectively the row vectors into which the texture feature matrices corresponding to the i-th and j-th wavelengths are stretched. If C_ij > 0, the two texture feature matrices are judged wavelength-correlated; otherwise they are judged uncorrelated.
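The correlation test can be sketched with NumPy's `corrcoef`, which computes exactly cov/(σσ); `wavelength_correlated` is an illustrative helper name:

```python
import numpy as np

def wavelength_correlated(G_i, G_j):
    """Judge wavelength correlation of two texture feature matrices via the
    correlation coefficient of their flattened row vectors (step 3)."""
    a_i, a_j = G_i.ravel(), G_j.ravel()
    c_ij = np.corrcoef(a_i, a_j)[0, 1]        # cov(a_i, a_j) / (sigma_i * sigma_j)
    return c_ij > 0

rng = np.random.default_rng(1)
A = rng.random((8, 8))
print(wavelength_correlated(A, A + 0.1 * rng.random((8, 8))))  # True
```

A matrix lightly perturbed by noise stays strongly positively correlated with the original, while its negation does not.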
The present invention fuses the texture features of the two-dimensional images corresponding to all wavelengths and obtains from them the main texture features of the three-dimensional hyperspectral image. Compared with prior-art processing methods, it makes reasonable use of the multi-wavelength information of the hyperspectral image, can accurately capture rich texture features, and facilitates the discrimination of texture details; it is particularly suitable for the analysis of fine-texture images (such as texture images of fish fillets). The dimensionality reduction applied to the large number of extracted features both represents the hyperspectral image accurately and reduces the data volume, which helps improve the speed of subsequent applications.
Embodiment
The hyperspectral image processing method based on texture feature enhancement of the present invention is further described below in conjunction with specific embodiments, but the protection scope of the present invention is not limited to these embodiments; any variation or replacement readily conceivable to those skilled in the art within the technical scope disclosed by the present invention shall be encompassed within the protection scope of the present invention.
The present embodiment takes as an example an experiment distinguishing fish flesh stored under different environments.
Instrument preparation
The experimental equipment consists of a computer, a hyperspectral spectrometer, halogen lamps and a calibration black-and-white board. The spectrometer is a HandHeld FieldSpec spectrometer from ASD (Analytical Spectral Devices), USA, with a spectral sampling interval of 1.5 nm and a sampling range of 380 nm to 1030 nm; spectra are sampled in diffuse-reflection mode. The 14.5 halogen lamps matched to the spectral instrument are used, and before spectral collection the spectrometer must be routinely calibrated with the calibration black-and-white board.
Material preparation
54 fresh live flatfish (turbot) were prepared, slaughtered, bled, gutted, cleaned and frozen for later use. The body weight of the fish ranged from 372 g to 580 g (average 512 g), and the length from 27.5 cm to 32 cm (average 30.5 cm). They were then cut on a chopping board, from right to left, into 240 fillet samples. The first 96 samples served as fresh, unfrozen samples (Fresh); of the remaining 144 samples, 72 were placed in a -70 °C environment and 72 in a -20 °C environment to produce quick-frozen and slow-frozen samples respectively. Room temperature was kept constant at 20 °C. After 9 days, all frozen samples were thawed overnight in a 4 °C environment, forming the quick-frozen-and-thawed sample set (FFT) and the slow-frozen-and-thawed sample set (SFT). Of the Fresh samples, 64 random samples formed the training set and 32 random samples the test set. For the FFT and SFT samples, 48 samples each were used as the training set and 24 each as the test set.
Hyperspectral image preprocessing
For all hyperspectral images, a region-of-interest hyperspectral image of the required size was selected with the hyperspectral image processing software ENVI 5.0. To guarantee the accuracy of the experiment and to eliminate the instrument and illumination effects on the spectral images of the first 100 wavelengths, only the 412 clear hyperspectral images corresponding to light waves 101 to 512 were selected. To remove the influence of noise, a Minimum Noise Fraction rotation (MNF Rotation) was applied to all hyperspectral images as a signal-to-noise-ratio correction.
In the present embodiment, the hyperspectral image to be processed consists of n hyperspectral images corresponding to n wavelengths; the two-dimensional hyperspectral image corresponding to each wavelength is represented by an M × M matrix I, and each pixel of the two-dimensional hyperspectral image has P directional responses.
As shown in Figure 1, the hyperspectral image processing method based on texture feature enhancement of the present embodiment comprises:
1) Filtering to obtain the local direction response vector
The second-order Gaussian-like function G2^θ and the Hilbert transform H2^θ of the second-order Gaussian-like function are convolved with the matrix I; this quadrature filter pair filters the two-dimensional hyperspectral image corresponding to each wavelength, giving the local direction response of each pixel of the two-dimensional hyperspectral image:

L_θp = (G2^θp * I)^2 + (H2^θp * I)^2

where p denotes the p-th direction, 0 ≤ p ≤ P-1, and θp is the angle of the p-th direction response, -π ≤ θp ≤ π.
By changing the value of θp and computing P times, the local direction responses of all pixels in the P directions are obtained, forming the local direction response vector of each pixel:

V = (L_θ0, L_θ1, ..., L_θ(P-1))

G2^θ and H2^θ are obtained from the second-order Gaussian filter function G(x, y) and its Hilbert transform H2; σ^2 is the variance of the Gaussian-like function and (x, y) are the pixel coordinates in the two-dimensional image.
Definitions:
G2a, G2b and G2c correspond respectively to the results of rotating G(x, y) counterclockwise by 0, π/2 and 3π/4. A third-order polynomial is used to approximate the Hilbert transform H2 of the second-order Gaussian function, giving H2a, H2b, H2c and H2d, which are basis functions separable in x and y.
2) Quantization to obtain the texture feature matrix
2-1) Normalization: the local direction responses L_θp are preprocessed with the normalization operation that adds the standard deviation. Let:

V_p = L_θp / (Σ_{q=0}^{P-1} L_θq + σ(L_θ0, ..., L_θ(P-1)))

where σ(·) denotes taking the standard deviation; this further gives the normalized local direction response vector V_norm = (V_0, V_1, ..., V_{P-1}).
2-2) N-state encoding: dynamic N-state encoding based on a probability model.
The probability model is defined by the following procedure:
Let t be the value of an element of the normalized local direction response vector V_norm, with value range [minV_norm, maxV_norm] and 0 ≤ minV_norm < maxV_norm < 1, where minV_norm and maxV_norm are respectively the minimum and maximum of the elements of the normalized local direction response vector, and let F(t) be defined on the same range. The integral defining F(t) is evaluated with the composite trapezoidal rule, and the probability density f(ω) of the normalized local direction responses is estimated non-parametrically with a Parzen window.
The probability model is established from the probability density f(ω) of the normalized direction responses: the value range of F(t), [minV_norm, maxV_norm], is divided into N equal parts (in the present embodiment an equal N-division is adopted, giving N regions of identical size); t is then assigned to one of the N corresponding regions according to the regions into which F(t) is divided, the regions are numbered 0, 1, 2, ..., N-1, and the number of the region containing t is taken as its N-state encoding result. As shown in Figure 2, a normalized local direction response falling in the region numbered N-2 has the N-state encoding result N-2. In this way, according to the probability model, the normalized local direction response vector V_norm is converted into the digitized direction vector V_D:

V_D = (D_0, D_1, ..., D_{P-1}). (9)
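A sketch of the dynamic N-state encoding of step 2-2), assuming `scipy.stats.gaussian_kde` as the Parzen-window estimator and a trapezoidal integration for F(t); the grid size and all names are illustrative:

```python
import numpy as np
from scipy.stats import gaussian_kde

def probability_encode(v_norm, N=4):
    """Dynamic N-state encoding: estimate the density of the normalized
    responses with a Parzen (Gaussian) window, integrate it with the
    trapezoidal rule to get F(t), then split F's range into N equal parts."""
    v = np.asarray(v_norm, dtype=float)
    kde = gaussian_kde(v)                     # Parzen-window density estimate
    grid = np.linspace(v.min(), v.max(), 256)
    pdf = kde(grid)
    # composite trapezoidal rule for the cumulative integral F(t)
    cdf = np.concatenate(([0.0], np.cumsum(np.diff(grid) * (pdf[:-1] + pdf[1:]) / 2)))
    F = np.interp(v, grid, cdf / cdf[-1])     # F(t) for each response
    return np.minimum((F * N).astype(int), N - 1)

v = np.array([0.05, 0.30, 0.32, 0.60, 0.95])
print(probability_encode(v, N=4))
```

Unlike the plain equal-width split, this partition adapts the region boundaries to where the responses actually concentrate.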
2-3) Calculating the texture features: the LRP method is used to calculate the texture feature value of each pixel, guaranteeing the rotational invariance of the image.
According to the formula:

LRP_P,N = Σ_{p=0}^{P-1} D_p × N^p

the texture feature value LRP_P,N of each pixel is calculated, where D_p is an element of the digitized direction vector V_D. The P-digit LRP_P,N is subjected to P shift operations, and the minimum of the P resulting values of LRP_P,N is taken; this minimum is the feature value of the corresponding pixel, that is:

feature value = min over i of LOL(LRP_P,N, i)

where LOL(LRP_P,N, i) denotes a circular left shift of the P-digit LRP_P,N with a shift step length of i positions, i taking the values 0, 1, ..., P-1 in turn.
As shown in Figure 3, if V_D = (0, 1, 1, 3), then i = 0 can be taken to mean that the two-dimensional image is not rotated; i = 1 means the two-dimensional image is rotated clockwise by π/2; and so on, giving the direction vectors of the different rotation angles:

LOL(LRP_P,N, 0) = 0×4^0 + 1×4^1 + 1×4^2 + 3×4^3 = 212
LOL(LRP_P,N, 1) = 1×4^0 + 1×4^1 + 3×4^2 + 0×4^3 = 53
LOL(LRP_P,N, 2) = 1×4^0 + 3×4^1 + 0×4^2 + 1×4^3 = 77
LOL(LRP_P,N, 3) = 3×4^0 + 0×4^1 + 1×4^2 + 1×4^3 = 83

Taking the minimum gives the feature value of this pixel: 53.
The texture feature values of all pixels of the two-dimensional image corresponding to each wavelength are combined to form an M × M texture feature matrix.
3) Texture feature enhancement to obtain the texture feature enhancement matrices
3-1) Judging wavelength correlation
Define the symmetric correlation matrix C, whose element C_ij denotes the correlation coefficient of the texture feature matrices corresponding to wavelengths i and j:

C_ij = cov(a_i, a_j) / (σ(a_i) σ(a_j)),  i, j = 1, 2, ..., n

where a_i denotes the row vector into which the texture feature matrix G_i corresponding to wavelength i is stretched, a_j denotes the row vector into which the texture feature matrix G_j corresponding to wavelength j is stretched, and cov(a_i, a_j) is the covariance between a_i and a_j. If C_ij is greater than 0, G_i and G_j are judged wavelength-correlated.
3-2) Point-to-point fusion
All texture feature matrices wavelength-correlated with G_i are found according to the correlation coefficients; all the wavelength-correlated texture feature matrices (including G_i) are added and averaged, carrying out the point-to-point fusion, and the new texture feature matrix formed is the texture feature enhancement matrix. For a hyperspectral image of n wavelengths, the point-to-point fusion process is carried out n times in turn, giving n texture feature enhancement matrices.
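The point-to-point fusion of step 3-2) can be sketched as an element-wise mean over each group of wavelength-correlated matrices; the correlation matrix `C` here is a hypothetical input and all names are illustrative:

```python
import numpy as np

def enhance(feature_mats, corr):
    """Point-to-point fusion: each enhancement matrix is the element-wise
    mean of all wavelength-correlated matrices (the matrix itself included)."""
    G = np.asarray(feature_mats)               # shape (n, M, M)
    enhanced = []
    for i in range(len(G)):
        related = [j for j in range(len(G)) if corr[i][j] > 0 or j == i]
        enhanced.append(G[related].mean(axis=0))
    return np.stack(enhanced)

G = np.arange(2 * 2 * 2).reshape(2, 2, 2).astype(float)
C = np.array([[1.0, 0.5], [0.5, 1.0]])         # hypothetical correlations
print(enhance(G, C)[0])                        # mean of both matrices
```

With both wavelengths correlated, each enhanced matrix is simply the mean of the two input matrices.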
4) Extracting the main texture features to form the main texture feature vector representing the hyperspectral image
4-1) The n texture feature enhancement matrices formed after fusion are each stretched into an M^2-dimensional vector, forming an M^2 × n matrix R whose columns are these vectors. This matrix represents the hyperspectral image after texture feature enhancement.
4-2) The PCA algorithm is applied to the matrix R; the three principal components P_1, P_2 and P_3 are chosen and stretched into a row to form the texture feature vector X_k, where P_1, P_2 and P_3 are mutually pairwise-orthogonal M^2-dimensional vectors.
Steps 1)~4) are carried out repeatedly to obtain the texture feature vectors X_k of the hyperspectral images of all samples, X_k denoting the texture feature vector of the hyperspectral image corresponding to the k-th sample.
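Step 4) can be sketched with a plain SVD-based PCA; the concrete PCA variant used by the embodiment is not specified, so the centering choice, scaling and names below are assumptions:

```python
import numpy as np

def main_texture_vector(R, n_components=3):
    """Extract the first principal components of the M^2 x n matrix R and
    concatenate them into one row vector X_k (a plain SVD-based PCA sketch)."""
    Rc = R - R.mean(axis=1, keepdims=True)    # center each of the M^2 rows
    U, s, Vt = np.linalg.svd(Rc, full_matrices=False)
    comps = U[:, :n_components] * s[:n_components]   # M^2-dim components
    return comps.T.ravel()                    # stretched into one vector

M2, n = 16, 6
R = np.random.default_rng(2).random((M2, n))
X_k = main_texture_vector(R)
print(X_k.shape)  # (48,)
```

The resulting vector has length 3M^2, matching the l = k × 3M^2 dimension used in step 5).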
5) The texture feature vectors of all hyperspectral images are combined into a new matrix and subjected to dimensionality reduction; reducing the dimension of the main texture feature vectors greatly reduces the data volume, shortens the execution time of subsequent applications (mainly the classification of fine-texture hyperspectral images), and improves their efficiency.
The texture feature vectors of all hyperspectral images are combined into a new matrix X of order l (l = k × 3M^2), where k is the number of samples, 240 in the present embodiment. In the present embodiment the supervised manifold learning method DLPP is used to reduce X to a d-dimensional data set Y = (y_1, y_2, ..., y_m), y_i ∈ R^d (d << l), where:

Y^T = A^T X, (16)

A is a d-dimensional row vector, obtained by the following method:
under the restrictive condition A^T X L_W X^T A = 1, the objective of formula (17) is minimized.
B and W denote weight matrices of size l × k, where B_ij indicates that points i and j belong to different classes and W_ij indicates that points i and j belong to the same class. D_B and D_W are diagonal matrices whose entries are the row-wise (or column-wise) sums of B and W respectively, and L_B = D_B - B and L_W = D_W - W are Laplacian matrices.
Algebraic transformation of (17) yields:

X L_B X^T A = λ X L_W X^T A, (18)

where λ is an eigenvalue. A is obtained by solving (18) for the eigenvectors corresponding to the d largest eigenvalues: A = (a_0, a_1, ..., a_d) consists of the eigenvectors arranged according to the eigenvalues λ_1 > λ_2 > ... > λ_d.
Substituting the obtained A into (16) gives Y^T, and transposing Y^T gives the d-dimensional main texture feature matrix Y after dimensionality reduction.
By this method the l-dimensional main texture feature vector X is reduced to the d-dimensional Y, guaranteeing that points far apart in the high-dimensional space remain far apart and that points close together remain close.
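The projection of formulas (16) and (18) can be sketched as a generalized symmetric eigenproblem; the weight matrices below are random placeholders rather than the class-based B and W of the method, and the small ridge added to keep the solver stable is an implementation assumption:

```python
import numpy as np
from scipy.linalg import eigh

def dlpp_projection(X, L_B, L_W, d):
    """Solve X L_B X^T A = lambda X L_W X^T A (formula (18)) and keep the
    eigenvectors of the d largest eigenvalues as the projection A."""
    S_B = X @ L_B @ X.T
    S_W = X @ L_W @ X.T
    # small ridge keeps S_W positive definite for the generalized solver
    w, V = eigh(S_B, S_W + 1e-8 * np.eye(len(S_W)))
    order = np.argsort(w)[::-1]
    return V[:, order[:d]]

rng = np.random.default_rng(3)
X = rng.random((5, 10))                       # 5 features, 10 samples
W = rng.random((10, 10)); W = (W + W.T) / 2   # placeholder symmetric weights
B = rng.random((10, 10)); B = (B + B.T) / 2
L_W = np.diag(W.sum(1)) - W                   # Laplacians L = D - W
L_B = np.diag(B.sum(1)) - B
A = dlpp_projection(X, L_B, L_W, d=2)
Y = A.T @ X                                   # formula (16): Y^T = A^T X
print(Y.shape)  # (2, 10)
```

Because the Laplacians of nonnegative symmetric weights are positive semidefinite, the ridged S_W is positive definite and `eigh` can solve the generalized problem directly.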
In the present embodiment, the least-squares support vector machine library lib-LSSVM (library - Least Squares Support Vector Machine), which builds a least squares support vector machine (LSSVM), is applied to the dimension-reduced result Y to classify the different fillet samples, and accurate classification results are obtained.