CN103440625B - Hyperspectral image processing method based on texture feature enhancement - Google Patents

Hyperspectral image processing method based on texture feature enhancement Download PDF

Info

Publication number
CN103440625B
CN103440625B CN201310358810.XA
Authority
CN
China
Prior art keywords
theta
textural characteristics
matrix
vector
local direction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201310358810.XA
Other languages
Chinese (zh)
Other versions
CN103440625A (en)
Inventor
邓水光
李钰金
徐亦飞
尹建伟
李莹
吴健
吴朝晖
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN201310358810.XA priority Critical patent/CN103440625B/en
Publication of CN103440625A publication Critical patent/CN103440625A/en
Application granted granted Critical
Publication of CN103440625B publication Critical patent/CN103440625B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Landscapes

  • Complex Calculations (AREA)
  • Image Analysis (AREA)

Abstract

The hyperspectral image processing method based on texture feature enhancement provided by the present invention describes each two-dimensional image with a texture feature matrix composed of the feature values of all pixels in that image, applies texture feature enhancement to the texture feature matrices of all two-dimensional images to obtain texture feature enhancement matrices, extracts the main texture features from all enhancement matrices to form a main texture feature vector, and represents the hyperspectral image with this main texture feature vector. The method makes reasonable use of the multi-wavelength information of the hyperspectral image, captures rich texture features accurately, facilitates the discrimination of texture details, and is particularly suitable for the analysis of fine-texture images.

Description

Hyperspectral image processing method based on texture feature enhancement
Technical field
The invention belongs to the technical field of image processing and relates to a hyperspectral image processing method based on texture feature enhancement.
Background technology
A hyperspectral image is a three-dimensional image that contains both ordinary two-dimensional plane image information and wavelength information. While the spatial characteristics of the target are imaged, each spatial pixel is dispersed into tens or even hundreds of narrow bands to provide continuous spectral coverage. A hyperspectral image is therefore a three-dimensional image composed of the two-dimensional images corresponding to a number of wavelengths. Because different substances absorb different wavelengths differently, a given defect is reflected more prominently in the image at a particular wavelength, and the spectral information can fully reflect differences in the internal physical structure and chemical composition of a sample.
Near-infrared hyperspectral imaging is widely used in industries such as food, medicine, and petrochemicals because it is fast and non-destructive. Hyperspectral image analysis is generally divided into spectral data analysis and hyperspectral texture analysis. Compared with spectral data, image texture is closer to human visual perception and reflects microstructure more accurately. Current texture analysis methods are mainly applied to traditional two-dimensional macroscopic images; for hyperspectral texture, existing methods mainly analyze and process aerial remote sensing data, in which all samples are concentrated in a single spectral image, and such texture analysis emphasizes the relations between pixel vectors rather than the texture itself. In contrast, for hyperspectral imaging applied in fields such as food and agriculture, one hyperspectral image represents only one sample, its precision is significantly higher than that of aerial remote sensing images, and its texture analysis is concerned with the texture structure itself rather than with pixel vectors. To date, little research has addressed hyperspectral texture analysis applied to fields such as agriculture and food.
At present, hyperspectral texture analysis mainly falls into the following three kinds of methods: 1) selecting a small, supposedly representative subset of the two-dimensional spectral images for texture analysis; this approach generally assumes that images with good spectral reflectance values also have outstanding texture features, but the assumption lacks valid theoretical and practical support; 2) directly applying three-dimensional texture methods, which treat wavelength as the third dimension and extend classical two-dimensional methods, but are too coarse and therefore cause a substantial loss of information; 3) extending existing two-dimensional methods by defining relations between bands, so as to effectively represent the three-dimensional hyperspectral image.
The third, extension-based approach can effectively represent hyperspectral texture features, but it mainly faces the following three challenges: 1) a good descriptor of fine texture must be defined, and it must satisfy the basic properties of an outstanding texture descriptor, such as rotation invariance; 2) the multi-wavelength information must be used reasonably, which requires defining a correlation model between bands through which the rich texture features of the hyperspectral image can be effectively captured; 3) the large number of features extracted from the hyperspectral image must be reduced in dimension; a supervised dimension-reduction method can both reduce the execution time of the classification model and improve classification accuracy.
Summary of the invention
In view of the deficiencies of the prior art, the invention provides a hyperspectral image processing method based on texture feature enhancement that can effectively capture and describe fine texture.
The hyperspectral image processing method based on texture feature enhancement provided by the invention processes a hyperspectral image comprising a number of two-dimensional images corresponding to different wavelengths, and includes:

1) filtering any one two-dimensional image to obtain the local direction responses of all pixels in the two-dimensional image; the local direction responses of the multiple directions obtained by repeated filtering are combined to form the local direction response vectors of all pixels;

2) normalizing and N-state encoding the local direction response vectors in turn to obtain numerical direction vectors, obtaining the feature value of every pixel from the direction vectors, and building a texture feature matrix;

3) repeating steps 1)-2) to obtain the texture feature matrices of all two-dimensional images, and, according to the wavelength correlation of the texture feature matrices, applying texture feature enhancement to each texture feature matrix to obtain the corresponding texture feature enhancement matrix;

4) extracting the main texture features from all texture feature enhancement matrices to form the main texture feature vector used to represent the hyperspectral image.
The hyperspectral image processing method based on texture feature enhancement of the invention uses the feature values of all pixels to describe the texture features of the corresponding two-dimensional image, which guarantees the rotation invariance of the image; the texture features of all two-dimensional images are enhanced, and the main texture features are extracted from the enhanced texture features to form the main texture feature vector used to represent the hyperspectral image. By making reasonable use of the multi-wavelength information of the hyperspectral image, the rich texture features of the hyperspectral image can be captured effectively.
In step 1), the filtering is performed according to the formula:

$$L_\theta = (G_2^\theta * I)^2 + (H_2^\theta * I)^2$$

where:

I represents the current two-dimensional image;

L_θ represents the local direction responses of all pixels of the current two-dimensional image in the direction θ, with −π ≤ θ ≤ π;
G_2^θ is the second-order Gaussian-like function:

$$G_2^\theta = k_a^\theta G_{2a} + k_b^\theta G_{2b} + k_c^\theta G_{2c},$$

$$k_a^\theta = \cos^2(\theta),\quad k_b^\theta = -2\cos(\theta)\sin(\theta),\quad k_c^\theta = \sin^2(\theta),$$

G_2a, G_2b, G_2c are respectively the results of rotating the second-order Gaussian function G(x, y) counterclockwise by 0, π/2 and 3π/4, where (x, y) is a pixel point in the two-dimensional image;
H_2^θ is the Hilbert transform of the second-order Gaussian-like function:

$$H_2^\theta = h_a^\theta H_{2a} + h_b^\theta H_{2b} + h_c^\theta H_{2c} + h_d^\theta H_{2d},$$

$$h_a^\theta = \cos^3(\theta),\quad h_b^\theta = -3\cos^2(\theta)\sin(\theta),\quad h_c^\theta = 3\cos(\theta)\sin^2(\theta),\quad h_d^\theta = -\sin^3(\theta),$$

H_2a, H_2b, H_2c and H_2d are basis functions in which x and y are separable, as described in Freeman, W.T., & Adelson, E.H. (1991), The design and use of steerable filters, IEEE Transactions on Pattern Analysis and Machine Intelligence, 13, 891-906.
By changing the value of θ, the local direction response of each pixel in each of the different directions is obtained, and the local direction responses of the multiple directions so obtained are combined to form the local direction response vector of each pixel.
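For illustration only, the following Python sketch combines precomputed basis responses into the oriented energy L_θ of step 1) and stacks the responses for P directions; the basis responses (the image convolved with the G2 and H2 basis kernels of Freeman and Adelson) are assumed to be available already, and the function names and the example value P = 8 are assumptions, not part of the patent.

```python
import numpy as np

def oriented_energy(basis_g, basis_h, theta):
    """Steer precomputed basis responses to angle theta and return L_theta.

    basis_g: tuple (G2a*I, G2b*I, G2c*I) of 2-D arrays (image convolved with
             the three G2 basis kernels).
    basis_h: tuple (H2a*I, H2b*I, H2c*I, H2d*I) for the four H2 basis kernels.
    """
    c, s = np.cos(theta), np.sin(theta)
    # steering coefficients: k_a = cos^2, k_b = -2 cos sin, k_c = sin^2
    g = c**2 * basis_g[0] - 2.0 * c * s * basis_g[1] + s**2 * basis_g[2]
    # steering coefficients: h_a = cos^3, h_b = -3 cos^2 sin,
    #                        h_c = 3 cos sin^2, h_d = -sin^3
    h = (c**3 * basis_h[0] - 3.0 * c**2 * s * basis_h[1]
         + 3.0 * c * s**2 * basis_h[2] - s**3 * basis_h[3])
    return g**2 + h**2  # quadrature energy L_theta

def direction_responses(basis_g, basis_h, P=8):
    """Stack L_theta for P equally spaced angles in [-pi, pi): shape (P, M, M)."""
    thetas = [-np.pi + 2.0 * np.pi * p / P for p in range(P)]
    return np.stack([oriented_energy(basis_g, basis_h, t) for t in thetas])
```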
In step 2), each local direction response in the local direction response vector is normalized by the formula $L_{\mathrm{Norm}\theta_p} = L_{\theta_p} / \bigl(\sum_{p=0}^{P-1} L_{\theta_p} + \mathrm{std}(L_{\theta_p})\bigr)$, which adds the standard deviation, giving the normalized direction response vector; L_θp is the local direction response of a pixel of the two-dimensional image in the direction θ_p, L_Normθp is the local direction response after normalizing L_θp, P is the number of directions of the local direction response vector of the two-dimensional image, and std(L_θp) denotes taking the standard deviation of L_θp.

By using a normalization method that adds the standard deviation, the problem of the numerator and denominator scaling in equal proportion during normalization is avoided.
The N-state coding of step 2) is carried out according to a probabilistic model and includes:

2-1) dividing the interval of all elements of the normalized local direction response vector into N regions according to the probabilistic model;

2-2) numbering the N regions 0, 1, ..., N−1 in turn;

2-3) taking the number of the region to which an element belongs as the N-state coding result of that element;

The N-state coding results of all elements of the local direction response vector constitute the numerical direction vector.

Coding according to the probabilistic model improves the accuracy of the N-state coding.
In step 2), the LRP method is used to obtain the feature value of every pixel: $LRP_{P,N} = \sum_{p=0}^{P-1} D_p N^p$ and $LRP_{P,N}^{ri} = \min\{LOL(LRP_{P,N}, i) \mid i = 0, 1, \ldots, P-1\}$, where LOL(LRP_{P,N}, i) denotes performing a circular left shift of i positions on the P-digit LRP_{P,N}, D_p is the p-th element of the direction vector, and N is the number of states of the N-state coding.

The LRP method represents each pixel, expressed by its direction vector, with a single feature value. Since each pixel has local direction responses in P different directions, to guarantee the rotation invariance of the hyperspectral image and ensure that each pixel has a unique texture feature value, shift operations with step i are applied to the P-digit LRP_{P,N}, that is:

$$LOL(LRP_{P,N}, i) = D_i N^0 + D_{i+1} N^1 + \ldots + D_{P-1} N^{P-i-1} + D_0 N^{P-i} + \ldots + D_{i-1} N^{P-1}.$$

Through the P shift operations, the minimum of the shift results is taken, so that each pixel corresponds to a single feature value. A shift operation corresponds to a rotation of the two-dimensional image, and the shift step corresponds to the rotation angle; taking the minimum over the multiple shifts guarantees the rotation invariance of the image.
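A minimal sketch of this rotation-invariant LRP value follows, assuming the numerical direction vector V_D has already been obtained; the function name lrp_value is hypothetical. With V_D = (0, 1, 1, 3) and N = 4 it returns 53, matching the worked example given later in the description.

```python
def lrp_value(vd, n_states):
    """Rotation-invariant LRP feature value of one pixel.

    vd: list of P integer codes D_0 .. D_{P-1} (the numerical direction vector).
    n_states: N, the number of states of the N-state coding.
    Returns the minimum base-N value over all circular left shifts.
    """
    P = len(vd)
    best = None
    for i in range(P):                    # shift step i = 0 .. P-1
        shifted = vd[i:] + vd[:i]         # circular left shift by i positions
        value = sum(d * n_states**p for p, d in enumerate(shifted))
        best = value if best is None else min(best, value)
    return best

assert lrp_value([0, 1, 1, 3], 4) == 53   # worked example from the description
```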
In step 3), when texture feature enhancement is applied to a texture feature matrix, the correlation between the current texture feature matrix and every other texture feature matrix is first judged; the current texture feature matrix and all texture feature matrices that are wavelength-correlated with it are then fused point-to-point to obtain the texture feature enhancement matrix.

In the point-to-point fusion, the current texture feature matrix is averaged element-wise with all texture feature matrices correlated with its wavelength; the new matrix obtained is the enhanced texture feature matrix corresponding to that texture feature matrix. The method is simple and easy to implement.
In step 3), whether any two texture feature matrices are wavelength-correlated is determined from the correlation coefficient, computed according to the formula $C_{ij} = \mathrm{cov}(a_i, a_j) / \sqrt{\mathrm{cov}(a_i, a_i)\,\mathrm{cov}(a_j, a_j)}$, where C_ij denotes the correlation coefficient of the texture feature matrices corresponding to the i-th and j-th wavelengths, and a_i and a_j are the row vectors obtained by stretching those texture feature matrices. If C_ij > 0, the two texture feature matrices are judged to be wavelength-correlated; otherwise they are judged uncorrelated.
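A small sketch of this wavelength-correlation test, assuming the two texture feature matrices are 2-D NumPy arrays; np.corrcoef computes the same Pearson coefficient as the covariance formula above.

```python
import numpy as np

def wavelength_correlated(G_i, G_j):
    """True if the texture feature matrices of wavelengths i and j are correlated,
    i.e. the correlation coefficient of their stretched row vectors is > 0."""
    a_i = G_i.ravel().astype(float)   # stretch the matrix into a row vector
    a_j = G_j.ravel().astype(float)
    c_ij = np.corrcoef(a_i, a_j)[0, 1]
    return c_ij > 0
```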
The invention fuses the texture features of the two-dimensional images corresponding to all wavelengths and obtains from them the main texture features of the three-dimensional hyperspectral image. Compared with prior-art processing methods, it makes reasonable use of the multi-wavelength information of the hyperspectral image, captures rich texture features accurately, facilitates the discrimination of texture details, and is particularly suitable for the analysis of fine-texture images (such as texture images of fish fillets). By reducing the dimension of the large number of extracted features, the hyperspectral image can be represented accurately while the data volume is reduced, which helps to improve the speed of subsequent applications.
Accompanying drawing explanation
Fig. 1 is a flow chart of the hyperspectral image processing method based on texture feature enhancement of the present embodiment;

Fig. 2 is the probability statistics model of the normalized local direction responses.
Detailed description of the invention
The hyperspectral image processing method based on texture feature enhancement of the invention is described further below in conjunction with specific embodiments, but the scope of protection of the invention is not limited to these embodiments; any changes or replacements that those familiar with the art can readily conceive within the technical scope disclosed by the invention shall be covered by the scope of protection of the invention.
The present embodiment takes the discrimination of fish flesh under different environments as an example.
Instrument preparation
The experimental equipment consists of a computer, a hyperspectral instrument, halogen lamps, and a calibration black-and-white board. The hyperspectral instrument is a Handheld FieldSpec spectrograph from ASD (Analytical Spectral Devices, USA), with a spectral sampling interval of 1.5 nm and a sampling range of 380 nm to 1030 nm; spectral sampling is performed in diffuse reflectance mode. The matching 14.5 halogen lamps supplied with the spectral instrument are used, and the hyperspectral instrument must be calibrated with the black-and-white board in the usual way before spectra are collected.
Material preparation
54 fresh, live turbot (flatfish) were prepared, slaughtered, bled, gutted, cleaned, and stored frozen for later use. Body weight ranged from 372 g to 580 g (average 512 g) and length from 27.5 cm to 32 cm (average 30.5 cm). The fish were then cut on a chopping board from right to left into 240 fillet samples. The first 96 samples were used as fresh, unfrozen samples (Fresh); of the remaining 144 samples, 72 were placed at −70 °C and 72 at −20 °C to produce fast-frozen and slow-frozen samples respectively. Room temperature was kept constant at 20 °C. After 9 days, all frozen samples were thawed overnight at 4 °C, forming the fast-frozen-and-thawed sample set (FFT) and the slow-frozen-and-thawed sample set (SFT). For the Fresh samples, 64 randomly chosen samples formed the training set and 32 the test set; for the FFT and SFT samples, 48 randomly chosen samples were used as the training set and 24 as the test set in each case.
Hyperspectral image preprocessing
For all hyperspectral images, a region of interest is selected from each hyperspectral image using the hyperspectral image processing software ENVI 5.0. To ensure the accuracy of the experiment, the spectral images corresponding to the first 100 wavelengths, which are affected by the instrument and the illumination, are eliminated, and only the clear hyperspectral images corresponding to the 412 wavelengths numbered 101-512 are kept. To remove the influence of noise, minimum noise fraction rotation (MNF Rotation) is applied to all hyperspectral images for high signal-to-noise-ratio correction.
The hyperspectral images to be processed in the present embodiment contain n wavelengths; the two-dimensional hyperspectral image corresponding to each wavelength is represented by an M × M matrix I, and each pixel in a two-dimensional hyperspectral image has P direction responses.
The hyperspectral image processing method based on texture feature enhancement of the present embodiment, as shown in Fig. 1, includes:
1) Filtering to obtain the local direction response vectors
The second-order Gaussian-like function G_2^θp and its Hilbert transform H_2^θp are convolved with the matrix I; through this quadrature filtering, the two-dimensional hyperspectral image corresponding to each wavelength is filtered, giving one local direction response of a single pixel in the two-dimensional hyperspectral image:

$$L_{\theta_p} = (G_2^{\theta_p} * I)^2 + (H_2^{\theta_p} * I)^2 \qquad (1)$$

where p denotes the p-th direction, 0 ≤ p ≤ P−1, and θ_p is the angle of the p-th direction response, −π ≤ θ_p ≤ π.
Changing the value of θ_p and computing P times gives the local direction responses of all pixels in the P directions, which form the local direction response vector of each pixel:

$$V = (L_{\theta_0}, L_{\theta_1}, L_{\theta_2}, \ldots, L_{\theta_{P-1}}), \qquad (2)$$
G_2^θp and H_2^θp are obtained from the second-order Gaussian filter function G(x, y) and its Hilbert transform H_2,

$$G(x, y) = (4x^2 - 2)\exp\!\left(-\frac{x^2 + y^2}{\sigma^2}\right), \qquad (3)$$

where σ² is the variance of the Gaussian-like function and (x, y) is a pixel point in the two-dimensional image.
Definition:
$$G_2^{\theta_p} = k_a^{\theta_p} G_{2a} + k_b^{\theta_p} G_{2b} + k_c^{\theta_p} G_{2c}, \qquad (4)$$

where k_a^θp = cos²(θ_p), k_b^θp = −2cos(θ_p)sin(θ_p), k_c^θp = sin²(θ_p), and G_2a, G_2b, G_2c are respectively the results of rotating G(x, y) counterclockwise by 0, π/2 and 3π/4. A third-order polynomial is used to approximate the Hilbert transform H_2 of the second-order Gaussian function, giving:

$$H_2^{\theta_p} = h_a^{\theta_p} H_{2a} + h_b^{\theta_p} H_{2b} + h_c^{\theta_p} H_{2c} + h_d^{\theta_p} H_{2d}, \qquad (5)$$

where h_a^θp = cos³(θ_p), h_b^θp = −3cos²(θ_p)sin(θ_p), h_c^θp = 3cos(θ_p)sin²(θ_p), h_d^θp = −sin³(θ_p), and H_2a, H_2b, H_2c and H_2d are basis functions in which x and y are separable.
2) Quantization to obtain the texture feature matrix
2-1) Normalization: the local direction response vector V is preprocessed with a normalization operation that adds the standard deviation.

Let:

$$L_{\mathrm{Norm}\theta_p} = \frac{L_{\theta_p}}{\sum_{p=0}^{P-1} L_{\theta_p} + \mathrm{std}(L_{\theta_p})}, \qquad (6)$$

where std(L_θp) denotes taking the standard deviation of L_θp; this gives the normalized local direction response vector:

$$V_{\mathrm{Norm}} = (L_{\mathrm{Norm}\theta_0}, L_{\mathrm{Norm}\theta_1}, L_{\mathrm{Norm}\theta_2}, \ldots, L_{\mathrm{Norm}\theta_{P-1}}). \qquad (7)$$
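A small sketch of the normalization of eqs. (6)-(7), assuming the responses of all P directions are stacked in an array of shape (P, M, M); reading std(L_θp) as the per-pixel standard deviation across the P directions is an assumption of this sketch, not stated explicitly in the patent.

```python
import numpy as np

def normalize_direction_responses(V):
    """Eq. (6): divide each direction response by the per-pixel sum over the
    P directions plus a standard deviation term.

    V: array of shape (P, M, M) holding L_theta_0 .. L_theta_{P-1}.
    Returns V_Norm with the same shape.
    """
    # assumption: the standard deviation is taken across the P directions per pixel
    denom = V.sum(axis=0) + V.std(axis=0)
    return V / denom
```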
2-2) N-state coding: dynamic N-state coding based on a probabilistic model.

The probabilistic model is defined by the following process:

Let F(t) be defined over the values of all elements of the normalized local direction response vector V_Norm, whose range is [minV_Norm, maxV_Norm] with 0 ≤ minV_Norm < maxV_Norm < 1, where minV_Norm and maxV_Norm are respectively the minimum and maximum of the elements of the normalized local direction response vector, and t is a normalized local direction response, i.e. L_Normθp, with the same range as F(t). The integral defining F(t) is solved by the composite trapezoid rule, and a Parzen window is used to obtain a non-parametric estimate of the probability density f(ω) of the normalized local direction responses.
The probabilistic model is established from the probability density f(ω) of the normalized direction responses:

$$F(t) = \int_{\min V_{\mathrm{Norm}}}^{t} f(\omega)\, d\omega, \qquad (8)$$

The codomain of F(t) over [minV_Norm, maxV_Norm] is divided into N equal parts, giving N regions (the present embodiment uses an N-equal division, so the N regions are of identical size); t is then assigned to one of the N regions according to the region into which F(t) falls, the regions are numbered 0, 1, 2, ..., N−1, and the number of the region containing t is taken as its N-state coding result. As shown in Fig. 2, the value corresponding to one normalized local direction response falls into the region numbered N−2, so the N-state coding result of that normalized local direction response is N−2. In this way, the normalized local direction response vector V_Norm is converted according to the probabilistic model into the numerical direction vector V_D:

$$V_D = (D_0, D_1, \ldots, D_{P-1}). \qquad (9)$$
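The sketch below turns one normalized direction vector into the numerical direction vector V_D: a Parzen (Gaussian kernel) estimate gives f(ω), the composite trapezoid rule gives the cumulative F(t), and the codomain of F is split into N equal parts. The kernel bandwidth and grid size are illustrative assumptions.

```python
import numpy as np

def n_state_encode(v_norm, n_states=4, bandwidth=0.05, grid_size=512):
    """Encode each element of a normalized direction vector into 0 .. N-1.

    v_norm: 1-D array of normalized local direction responses.
    Returns an integer array V_D of the same length.
    """
    lo, hi = float(v_norm.min()), float(v_norm.max())
    if hi <= lo:                                # degenerate vector: all codes are 0
        return np.zeros(len(v_norm), dtype=int)
    grid = np.linspace(lo, hi, grid_size)
    # Parzen-window (Gaussian kernel) estimate of the density f(omega)
    diffs = (grid[:, None] - v_norm[None, :]) / bandwidth
    f = np.exp(-0.5 * diffs**2).sum(axis=1) / (len(v_norm) * bandwidth * np.sqrt(2 * np.pi))
    # composite trapezoid rule: cumulative F(t) evaluated on the grid
    F = np.concatenate(([0.0], np.cumsum((f[1:] + f[:-1]) / 2.0 * np.diff(grid))))
    F /= F[-1]                                  # scale the codomain of F to [0, 1]
    # N equal divisions of the codomain; the region number is the N-state code
    F_t = np.interp(v_norm, grid, F)
    return np.minimum((F_t * n_states).astype(int), n_states - 1)
```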
2-3) Calculating texture features: the LRP method is used to calculate the texture feature value of each pixel, which guarantees the rotation invariance of the image.
According to formula:
$$LRP_{P,N} = \sum_{p=0}^{P-1} D_p N^p \qquad (10)$$

the texture feature value LRP_{P,N} of each pixel is calculated, where D_p is an element of the numerical direction vector V_D; P shift operations are applied to the P-digit LRP_{P,N}, and the minimum of the P resulting values is taken as the feature value of the corresponding pixel, that is:

$$LRP_{P,N}^{ri} = \min\{\, LOL(LRP_{P,N}, i) \mid i = 0, 1, \ldots, P-1 \,\}, \qquad (11)$$

where LOL(LRP_{P,N}, i) denotes performing a circular left shift on the P-digit LRP_{P,N} with a step of i positions, and i takes the values 0, 1, ..., P−1 in turn.
If V_D = (0, 1, 1, 3), then i = 0 means the two-dimensional image is not rotated and the direction vector is (0, 1, 1, 3); i = 1 means the two-dimensional image is rotated by π/2 and the direction vector is (1, 1, 3, 0); the direction vectors for the other rotation angles follow by analogy. Then:

LOL(LRP_{P,N}, 0) = 0×4⁰ + 1×4¹ + 1×4² + 3×4³ = 212

LOL(LRP_{P,N}, 1) = 1×4⁰ + 1×4¹ + 3×4² + 0×4³ = 53

LOL(LRP_{P,N}, 2) = 1×4⁰ + 3×4¹ + 0×4² + 1×4³ = 77

LOL(LRP_{P,N}, 3) = 3×4⁰ + 0×4¹ + 1×4² + 1×4³ = 83
Taking the minimum gives the feature value of this pixel:

$$LRP_{P,N}^{ri} = \min\{\, LOL(LRP_{P,N}, i) \mid i = 0, 1, \ldots, P-1 \,\} = 53. \qquad (12)$$
The texture feature values of all pixels in the two-dimensional image corresponding to each wavelength are combined to form an M × M texture feature matrix.
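Putting steps 2-1) to 2-3) together, the texture feature matrix of one wavelength could be assembled as below; this sketch reuses the hypothetical helpers sketched earlier (normalize_direction_responses, n_state_encode, lrp_value) and is not taken from the patent itself.

```python
import numpy as np

def texture_feature_matrix(V, n_states=4):
    """V: direction responses of one two-dimensional image, shape (P, M, M).
    Returns the M x M matrix of rotation-invariant LRP feature values."""
    V_norm = normalize_direction_responses(V)                  # eqs. (6)-(7)
    P, M, _ = V_norm.shape
    out = np.zeros((M, M), dtype=int)
    for r in range(M):
        for c in range(M):
            codes = n_state_encode(V_norm[:, r, c], n_states)  # eqs. (8)-(9)
            out[r, c] = lrp_value(list(codes), n_states)       # eqs. (10)-(11)
    return out
```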
3) Texture feature enhancement to obtain the texture feature enhancement matrices
3-1) Judging wavelength correlation
Define the symmetric correlation matrix C:

$$C = \begin{pmatrix} C_{11} & C_{12} & \cdots & C_{1,n} \\ C_{21} & C_{22} & \cdots & C_{2,n} \\ \vdots & \vdots & \ddots & \vdots \\ C_{n1} & C_{n2} & \cdots & C_{n,n} \end{pmatrix}, \qquad (13)$$

where C_ij is an element of the correlation matrix C, representing the correlation coefficient of the texture feature matrices corresponding to the i-th and j-th wavelengths:

$$C_{ij} = \frac{\mathrm{cov}(a_i, a_j)}{\sqrt{\mathrm{cov}(a_i, a_i)\,\mathrm{cov}(a_j, a_j)}}, \quad i, j = 1, 2, \ldots, n \qquad (14)$$
a_i denotes the row vector obtained by stretching the texture feature matrix G_i corresponding to wavelength i, a_j denotes the row vector obtained by stretching the texture feature matrix G_j corresponding to wavelength j, and cov(a_i, a_j) is the covariance between a_i and a_j. If C_ij is greater than 0, G_i and G_j are judged to be wavelength-correlated.
3-2) point-to-point fusion
According to the correlation coefficients, all texture feature matrices that are wavelength-correlated with G_i are found; the wavelength-correlated texture feature matrices (including G_i) are added and averaged, performing a point-to-point fusion, and the new texture feature matrix so formed is the texture feature enhancement matrix. For a hyperspectral image with n wavelengths, n point-to-point fusion processes are carried out in turn, giving n texture feature enhancement matrices.
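A sketch of the n fusion passes, assuming the texture feature matrices of all wavelengths are held in a Python list; wavelength_correlated is the hypothetical helper sketched earlier.

```python
import numpy as np

def enhance_texture_features(matrices):
    """matrices: list of n texture feature matrices, one per wavelength.
    Returns the n texture feature enhancement matrices: each is the element-wise
    average of the matrix itself and all matrices wavelength-correlated with it."""
    enhanced = []
    for i, G_i in enumerate(matrices):
        group = [G_j for j, G_j in enumerate(matrices)
                 if j == i or wavelength_correlated(G_i, G_j)]
        enhanced.append(np.mean(group, axis=0))   # point-to-point fusion
    return enhanced
```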
4) Extracting the main texture features to form the main texture feature vector representing the hyperspectral image
4-1) The n texture feature enhancement matrices formed by the fusion are each stretched into an M²-dimensional vector, and together they form an M² × n matrix R:

$$R = \begin{pmatrix} R_{11} & R_{12} & \cdots & R_{1,n} \\ R_{21} & R_{22} & \cdots & R_{2,n} \\ \vdots & \vdots & \ddots & \vdots \\ R_{M^2,1} & R_{M^2,2} & \cdots & R_{M^2,n} \end{pmatrix}, \qquad (15)$$

This matrix represents the hyperspectral image after texture feature enhancement.
4-2) The PCA algorithm is applied to the matrix R, and the three principal components P_1, P_2, P_3 are chosen and stretched into row vectors to form the texture feature vector X_k; P_1, P_2, P_3 are mutually orthogonal M²-dimensional vectors.
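A sketch of step 4-2), using scikit-learn's PCA as a stand-in for the PCA algorithm named in the description; treating the n stretched enhancement matrices (the columns of R) as the samples is an assumption of this sketch.

```python
import numpy as np
from sklearn.decomposition import PCA

def main_texture_vector(R):
    """R: array of shape (M*M, n), the enhancement matrices stretched into columns.
    Returns X_k, the concatenation of the first three principal components,
    each of which is an M*M-dimensional vector."""
    pca = PCA(n_components=3)
    pca.fit(R.T)                      # the n wavelength columns are the samples
    P1, P2, P3 = pca.components_      # mutually orthogonal M*M-dimensional vectors
    return np.concatenate([P1, P2, P3])
```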
Steps 1)-4) are repeated to obtain the texture feature vectors X_k of the hyperspectral images of all samples, where X_k denotes the texture feature vector of the hyperspectral image corresponding to the k-th sample.
5) The texture feature vectors of all hyperspectral images are combined into a new matrix and subjected to dimension reduction; applying dimension reduction to the main texture feature vectors greatly reduces the data volume, shortens the execution time of subsequent applications (mainly the classification of fine-texture hyperspectral images), and improves their efficiency.
The texture feature vectors of all hyperspectral images are combined into a new matrix X of order l, where l = k × 3M² and k is the number of samples, 240 in the present embodiment. In the present embodiment the supervised manifold learning method DLPP is used to reduce the matrix X to a d-dimensional data set Y = (y_1, y_2, ..., y_m), y_i ∈ R^d (d << l), where:

$$Y^T = A^T X, \qquad (16)$$

A is a set of d projection vectors, obtained by the following method:
Under the constraint A^T X L_W X^T A = 1, the following objective is minimized:

$$\frac{\sum_{ij}(y_i - y_j)^2 W_{ij}}{\sum_{ij}(y_i - y_j)^2 B_{ij}} = \frac{A^T X (D_W - W) X^T A}{A^T X (D_B - B) X^T A} = \frac{A^T X L_W X^T A}{A^T X L_B X^T A} \qquad (17)$$
B and W are weight matrices of size l × k, where B_ij indicates that points i and j belong to different classes and W_ij indicates that points i and j belong to the same class. D_B and D_W are diagonal matrices whose entries are the row (or column) sums of B and W respectively, and L_B = D_B − B and L_W = D_W − W are Laplacian matrices.
Algebraic transformation of (17) gives:

$$X L_B X^T A = \lambda X L_W X^T A, \qquad (18)$$

where λ is an eigenvalue.
A is obtained by solving (18) for the eigenvectors corresponding to the d largest eigenvalues; A = (a_1, a_2, ..., a_d) is composed of the eigenvectors arranged according to the eigenvalues λ_1 > λ_2 > ... > λ_d.

Substituting the resulting A into (16) gives Y^T, and transposing Y^T yields the d-dimensional main texture feature matrix Y after dimension reduction.
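A sketch of the projection step of eq. (18), assuming the Laplacians L_W and L_B and the data matrix X (one column per sample) have already been built; the small ridge term is an implementation choice added here because X L_W X^T can be singular in practice, and is not stated in the patent.

```python
import numpy as np
from scipy.linalg import eigh

def dlpp_projection(X, L_W, L_B, d, ridge=1e-6):
    """Solve X L_B X^T A = lambda X L_W X^T A for the d largest eigenvalues.

    Returns A with the d leading eigenvectors as columns, ordered by eigenvalue.
    """
    S_B = X @ L_B @ X.T
    S_W = X @ L_W @ X.T + ridge * np.eye(X.shape[0])   # regularize for stability
    w, v = eigh(S_B, S_W)              # generalized symmetric eigenproblem
    order = np.argsort(w)[::-1][:d]    # indices of the d largest eigenvalues
    return v[:, order]

# eq. (16): A = dlpp_projection(X, L_W, L_B, d); Y = (A.T @ X).T
```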
This method reduces the l-dimensional main texture feature vectors X to the d-dimensional Y while ensuring that points far apart in the high-dimensional space remain far apart and points close together remain close.
In the present embodiment, a least squares support vector machine library, lib-LSSVM (library for Least Squares Support Vector Machines), is used on the dimension-reduced result Y to build a least squares support vector machine (LSSVM) that classifies the different fillet samples, and accurate classification results are obtained.
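For completeness, a classification sketch on the reduced features; since the lib-LSSVM interface is not reproduced in the patent, an ordinary RBF-kernel SVM from scikit-learn is used here as a stand-in for the least squares support vector machine, and the argument names are hypothetical.

```python
from sklearn.svm import SVC

def classify_samples(Y_train, labels_train, Y_test):
    """Stand-in classifier for the LSSVM used in the embodiment.
    Rows of Y_train / Y_test are the dimension-reduced texture features."""
    clf = SVC(kernel="rbf", C=1.0, gamma="scale")
    clf.fit(Y_train, labels_train)
    return clf.predict(Y_test)        # predicted Fresh / FFT / SFT labels
```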

Claims (5)

1. A hyperspectral image processing method based on texture feature enhancement, the hyperspectral image comprising a number of two-dimensional images corresponding to different wavelengths, characterized in that the hyperspectral image processing method includes:

1) filtering any one two-dimensional image to obtain the local direction responses of all pixels in the two-dimensional image, the local direction responses of the multiple directions obtained by repeated filtering being combined to form the local direction response vectors of all pixels;

2) normalizing and N-state encoding the local direction response vectors in turn to obtain numerical direction vectors, obtaining the feature value of every pixel from the direction vectors, and building a texture feature matrix;

the N-state coding being carried out according to a probabilistic model and including:

2-1) dividing the interval of all elements of the normalized local direction response vector into N regions according to the probabilistic model;

2-2) numbering the N regions 0, 1, ..., N−1 in turn;

2-3) taking the number of the region to which an element belongs as the N-state coding result of that element;

the N-state coding results of all elements of the local direction response vector constituting the numerical direction vector;

3) repeating steps 1)-2) to obtain the texture feature matrices of all two-dimensional images, and, according to the wavelength correlation of the texture feature matrices, applying texture feature enhancement to each texture feature matrix to obtain the corresponding texture feature enhancement matrix;

4) extracting the main texture features from all texture feature enhancement matrices to form the main texture feature vector used to represent the hyperspectral image.
2. The hyperspectral image processing method based on texture feature enhancement as claimed in claim 1, characterized in that in step 1) the filtering is performed according to the formula:

$$L_{\theta_p} = (G_2^{\theta_p} * I)^2 + (H_2^{\theta_p} * I)^2$$

where:

I represents the current two-dimensional image;

L_θp represents the local direction responses of all pixels of the current two-dimensional image in the direction θ_p, with −π ≤ θ_p ≤ π, and p denotes the index of the direction;

G_2^θp is the second-order Gaussian-like function:

$$G_2^{\theta_p} = k_a^{\theta_p} G_{2a} + k_b^{\theta_p} G_{2b} + k_c^{\theta_p} G_{2c},$$

$$k_a^{\theta_p} = \cos^2(\theta_p),\quad k_b^{\theta_p} = -2\cos(\theta_p)\sin(\theta_p),\quad k_c^{\theta_p} = \sin^2(\theta_p),$$

G_2a, G_2b, G_2c are respectively the results of rotating the second-order Gaussian function G(x, y) counterclockwise by 0, π/2 and 3π/4, and (x, y) is a pixel point in the two-dimensional image;

H_2^θp is the Hilbert transform of the second-order Gaussian-like function:

$$H_2^{\theta_p} = h_a^{\theta_p} H_{2a} + h_b^{\theta_p} H_{2b} + h_c^{\theta_p} H_{2c} + h_d^{\theta_p} H_{2d},$$

$$h_a^{\theta_p} = \cos^3(\theta_p),\quad h_b^{\theta_p} = -3\cos^2(\theta_p)\sin(\theta_p),\quad h_c^{\theta_p} = 3\cos(\theta_p)\sin^2(\theta_p),\quad h_d^{\theta_p} = -\sin^3(\theta_p),$$

H_2a, H_2b, H_2c and H_2d are basis functions in which x and y are separable.
3. The hyperspectral image processing method based on texture feature enhancement as claimed in claim 2, characterized in that in step 2) each local direction response in the local direction response vector is normalized by adding the standard deviation according to the formula

$$L_{\mathrm{Norm}\theta_p} = \frac{L_{\theta_p}}{\sum_{p=0}^{P-1} L_{\theta_p} + \mathrm{std}(L_{\theta_p})},$$

giving the normalized local direction response vector, where L_θp is the local direction response of all pixels of the current two-dimensional image in the direction θ_p, L_Normθp is the local direction response after normalizing L_θp, P is the number of directions of the local direction response vector of the two-dimensional image, and std(L_θp) denotes taking the standard deviation of L_θp.
4. The hyperspectral image processing method based on texture feature enhancement as claimed in claim 3, characterized in that in step 3), when texture feature enhancement is applied to a texture feature matrix, the correlation between the current texture feature matrix and every other texture feature matrix is first judged, and the current texture feature matrix and all texture feature matrices wavelength-correlated with it are fused point-to-point to obtain the texture feature enhancement matrix.
5. The hyperspectral image processing method based on texture feature enhancement as claimed in claim 4, characterized in that in step 3) whether any two texture feature matrices are wavelength-correlated is determined from the correlation coefficient, computed according to the formula $C_{ij} = \mathrm{cov}(a_i, a_j) / \sqrt{\mathrm{cov}(a_i, a_i)\,\mathrm{cov}(a_j, a_j)}$, where C_ij denotes the correlation coefficient of the texture feature matrices corresponding to the i-th and j-th wavelengths, and a_i and a_j are respectively the row vectors obtained by stretching the texture feature matrices corresponding to the i-th and j-th wavelengths; if C_ij > 0 the two texture feature matrices are judged to be wavelength-correlated, otherwise they are judged uncorrelated.
CN201310358810.XA 2013-08-16 2013-08-16 Hyperspectral image processing method based on texture feature enhancement Active CN103440625B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310358810.XA CN103440625B (en) 2013-08-16 2013-08-16 Hyperspectral image processing method based on texture feature enhancement

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310358810.XA CN103440625B (en) 2013-08-16 2013-08-16 Hyperspectral image processing method based on texture feature enhancement

Publications (2)

Publication Number Publication Date
CN103440625A CN103440625A (en) 2013-12-11
CN103440625B true CN103440625B (en) 2016-08-10

Family

ID=49694317

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310358810.XA Active CN103440625B (en) 2013-08-16 2013-08-16 Hyperspectral image processing method based on texture feature enhancement

Country Status (1)

Country Link
CN (1) CN103440625B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104463814B (en) * 2014-12-08 2017-04-19 西安交通大学 Image enhancement method based on local texture directionality
CN105718531B (en) * 2016-01-14 2019-12-17 广州市万联信息科技有限公司 Image database establishing method and image identification method
CN111460966B (en) * 2020-03-27 2024-02-02 中国地质大学(武汉) Hyperspectral remote sensing image classification method based on metric learning and neighbor enhancement
CN112037211B (en) * 2020-09-04 2022-03-25 中国空气动力研究与发展中心超高速空气动力研究所 Damage characteristic identification method for dynamically monitoring small space debris impact event

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102411782A (en) * 2011-11-01 2012-04-11 哈尔滨工程大学 Three layer color visualization method of hyperspectral remote sensing image

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9495582B2 (en) * 2011-12-04 2016-11-15 Digital Makeup Ltd. Digital makeup

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102411782A (en) * 2011-11-01 2012-04-11 哈尔滨工程大学 Three layer color visualization method of hyperspectral remote sensing image

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
HUANG Yuancheng. High-resolution Hyper-spectral Image Classification with Parts-based Feature and Morphology Profile in Urban Area. Geo-Spatial Information Science, 2010, Vol. 13, No. 2, pp. 111-122. *
Locality-sensitive discriminant analysis based on matrix representation; Liu Xiaoming et al.; Journal of Zhejiang University; 2009-02-15; Vol. 43, No. 2; pp. 290-296 *

Also Published As

Publication number Publication date
CN103440625A (en) 2013-12-11

Similar Documents

Publication Publication Date Title
Li et al. Color texture image retrieval based on Gaussian copula models of Gabor wavelets
CN105740799B (en) Classification of hyperspectral remote sensing image method and system based on the selection of three-dimensional Gabor characteristic
CN107341786A (en) The infrared and visible light image fusion method that wavelet transformation represents with joint sparse
WO2018045626A1 (en) Super-pixel level information fusion-based hyperspectral image classification method and system
CN106022391A (en) Hyperspectral image characteristic parallel extraction and classification method
CN104751181B (en) A kind of high spectrum image solution mixing method based on relative abundance
CN103440625B (en) The Hyperspectral imagery processing method strengthened based on textural characteristics
CN103336968A (en) Hyperspectral data dimensionality reduction method based on tensor distance patch alignment
CN106981058A (en) A kind of optics based on sparse dictionary and infrared image fusion method and system
CN106127179A (en) Based on the Classification of hyperspectral remote sensing image method that adaptive layered is multiple dimensioned
CN114821198B (en) Cross-domain hyperspectral image classification method based on self-supervision and small sample learning
Jin et al. Spatial-spectral feature extraction of hyperspectral images for wheat seed identification
CN111460966B (en) Hyperspectral remote sensing image classification method based on metric learning and neighbor enhancement
CN106886793A (en) Hyperspectral image band selection method based on discriminant information and manifold information
Hou et al. Spatial–spectral weighted and regularized tensor sparse correlation filter for object tracking in hyperspectral videos
CN103854265A (en) Novel multi-focus image fusion technology
Etemad et al. Color texture image retrieval based on Copula multivariate modeling in the Shearlet domain
CN102819840B (en) Method for segmenting texture image
Luo et al. Infrared and Visible Image Fusion: Methods, Datasets, Applications, and Prospects
CN105975940A (en) Palm print image identification method based on sparse directional two-dimensional local discriminant projection
CN104537377B (en) A kind of view data dimension reduction method based on two-dimentional nuclear entropy constituent analysis
Deng et al. Biological modeling of human visual system for object recognition using GLoP filters and sparse coding on multi-manifolds
CN114373080B (en) Hyperspectral classification method of lightweight hybrid convolution model based on global reasoning
CN113628111B (en) Hyperspectral image super-resolution method based on gradient information constraint
Chen et al. Polarization image fusion based on grouped densely connected network

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Deng Shuiguang

Inventor after: Li Yujin

Inventor after: Xu Yifei

Inventor after: Yin Jianwei

Inventor after: Li Ying

Inventor after: Wu Jian

Inventor after: Wu Chaohui

Inventor before: Deng Shuiguang

Inventor before: Xu Yifei

Inventor before: Yin Jianwei

Inventor before: Li Ying

Inventor before: Wu Jian

Inventor before: Wu Chaohui

COR Change of bibliographic data
C14 Grant of patent or utility model
GR01 Patent grant