CN103440625B - Hyperspectral image processing method based on texture feature enhancement - Google Patents
- Publication number: CN103440625B (application CN201310358810.XA)
- Authority: CN (China)
- Legal status: Expired - Fee Related (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Abstract
The hyperspectral image processing method based on texture feature enhancement provided by the present invention describes each two-dimensional band image with a texture feature matrix composed of the feature values of all its pixels, enhances the texture feature matrices of all the two-dimensional images to obtain texture feature enhancement matrices, extracts the principal texture features from all the enhancement matrices to form a principal texture feature vector, and represents the hyperspectral image with that vector. The method makes rational use of the multi-wavelength information of the hyperspectral image, accurately captures rich texture features, and facilitates the discrimination of texture details; it is especially suitable for the analysis of finely textured images.
Description
Technical field
The invention belongs to the technical field of image processing and relates to a hyperspectral image processing method based on texture feature enhancement.
Background
A hyperspectral image is a three-dimensional image that combines ordinary two-dimensional spatial image information with wavelength information. While the spatial features of the target are imaged, each spatial pixel is dispersed into dozens or even hundreds of narrow bands that provide continuous spectral coverage, so a hyperspectral image is a three-dimensional stack of two-dimensional images, one per wavelength. Because different components absorb light differently, a defect may show up much more clearly in the image at one particular wavelength, and the spectral information can fully reflect differences in the physical structure and chemical composition inside a sample.
Near-infrared hyperspectral imaging is widely used in the food, pharmaceutical, petrochemical and other industries because it is fast and non-destructive. Hyperspectral image analysis is usually divided into spectral data analysis and hyperspectral texture analysis. Compared with spectral data, image texture is closer to human visual perception and reflects microstructure more accurately. Current texture analysis methods are mainly designed for traditional two-dimensional macroscopic images. In hyperspectral texture work, existing methods target aerial remote sensing data, where all the samples appear together in a single spectral image; such analysis therefore emphasizes the relationships between pixel vectors rather than the texture itself. For hyperspectral sources in fields such as food and agriculture, however, one hyperspectral image represents a single sample at far higher resolution than aerial remote sensing imagery, and its texture analysis is concerned with the texture structure itself rather than with pixel vectors. So far there has been little research on hyperspectral texture analysis for agriculture and food.
Current hyperspectral analysis falls mainly into three approaches: 1) select a small, representative subset of the two-dimensional band images for texture analysis, on the usual assumption that an image with good spectral reflectance also has excellent texture features, an assumption that lacks solid theoretical or practical support; 2) apply three-dimensional texture methods directly, extending classic two-dimensional methods by treating wavelength as a third dimension, a treatment that is too coarse and loses a large amount of information; 3) extend existing two-dimensional methods by defining relations between bands, so that three-dimensional hyperspectral images can be represented effectively.
The third, extension-based approach can represent hyperspectral texture features effectively, but it faces three main challenges: 1) a good descriptor of fine texture must be defined, one that satisfies the basic properties expected of an excellent texture descriptor, such as rotation invariance; 2) the multi-band information must be used rationally, which requires an inter-band correlation model through which the rich texture characteristics of a hyperspectral image can be captured effectively; 3) the large number of features extracted from the hyperspectrum must be reduced in dimension; a supervised dimensionality reduction both cuts the execution time of the classification model and improves classification accuracy.
Summary of the invention
To address the deficiencies of the prior art, the present invention provides a hyperspectral image processing method based on texture feature enhancement that can effectively capture and describe fine textures.
In the hyperspectral image processing method based on texture feature enhancement provided by the present invention, the hyperspectral image comprises several two-dimensional images corresponding to different wavelengths, and the method includes:
1) filtering any two-dimensional image to obtain the local direction responses of all its pixels; the local direction response vectors of the multiple directions obtained by repeated filtering are combined into the local direction response vectors of all pixels;
2) normalizing and then N-state encoding the local direction response vectors to obtain encoded direction vectors, computing the feature value of every pixel from its direction vector, and assembling the feature values into a texture feature matrix;
3) repeating steps 1) and 2) to obtain the texture feature matrices of all two-dimensional images, and enhancing each texture feature matrix according to the wavelength correlation between matrices to obtain the corresponding texture feature enhancement matrix;
4) extracting the principal texture features from all the texture feature enhancement matrices to form the principal texture feature vector that represents the hyperspectral image.
The hyperspectral image processing method based on texture feature enhancement of the present invention describes the texture features of each two-dimensional image with the feature values of all its pixels, which guarantees the rotation invariance of the image; it enhances the texture features of all the two-dimensional images and extracts from the enhanced features the principal texture features, forming a principal texture feature vector that represents the hyperspectral image. By making rational use of the multi-wavelength information of the hyperspectral image, the rich texture features of the hyperspectral image can be captured effectively.
In step 1), filtering is performed according to

L_θ = sqrt( (G_2^θ ∗ I)^2 + (H_2^θ ∗ I)^2 ),

where:
I denotes the current two-dimensional image;
L_θ denotes the local direction response of all pixels of the current two-dimensional image in direction θ, with -π ≤ θ ≤ π;
G_2^θ is the second-order Gaussian-like function:

G_2^θ = cos^2(θ)·G_2a - 2cos(θ)sin(θ)·G_2b + sin^2(θ)·G_2c,

where G_2a, G_2b and G_2c are the results of rotating the second-order Gaussian function G(x, y) counterclockwise by 0, π/2 and 3π/4 respectively, and (x, y) are the pixel coordinates in the two-dimensional image;
H_2^θ is the Hilbert transform of the second-order Gaussian-like function:

H_2^θ = cos^3(θ)·H_2a - 3cos^2(θ)sin(θ)·H_2b + 3cos(θ)sin^2(θ)·H_2c - sin^3(θ)·H_2d,

whose steering coefficients are independent of x and y; the basis functions H_2a, H_2b, H_2c and H_2d are given in Freeman, W.T., & Adelson, E.H. (1991), The design and use of steerable filters, IEEE Transactions on Pattern Analysis and Machine Intelligence, 13, 891-906.
By varying θ, the local direction response of every pixel in each direction is obtained, and the local direction responses over the multiple directions are combined to form the local direction response vectors of all pixels.
In step 2), each local direction response is normalized with an added standard deviation according to

L'_θp = L_θp / ( sqrt( Σ_{q=0}^{P-1} L_θq^2 ) + σ(L_θ0, ..., L_θ(P-1)) ),

giving the normalized direction response vector, where L_θp is the local direction response of a pixel of the two-dimensional image in direction θ_p, L'_θp is the normalized local direction response, P is the number of directions of the local direction response vector of the two-dimensional image, and σ(·) denotes taking the standard deviation.
Adding the standard deviation to the denominator of the normalization avoids the problem of the numerator and denominator remaining in fixed proportion during normalization.
In step 2), the N-state encoding is carried out according to a probability model:
2-1) according to the probability model, the value range of all the elements of the normalized local direction response vector is divided into N regions;
2-2) the N regions are numbered 0, 1, ..., N-1 from small to large;
2-3) the number of the region an element falls into is the N-state encoding result of that element.
The N-state encoding results of all the elements of a local direction response vector constitute the encoded direction vector.
Encoding through a probability model improves the accuracy of the N-state encoding.
In step 2), the LRP method obtains the feature value of every pixel:

LRP_{P,N} = D_0·N^0 + D_1·N^1 + ... + D_{P-1}·N^{P-1},

where D_p is the p-th element of the direction vector, N is the number of states of the N-state encoding, and LOL(LRP_{P,N}, i) denotes a circular left shift by i digits of the P-digit LRP_{P,N}.
With the LRP method, a pixel represented by a direction vector is represented by a single feature value. Since every pixel has P local direction responses in different directions, the P-digit LRP_{P,N} is shifted (with step size i) to ensure the rotation invariance of the hyperspectral image and to guarantee that each pixel has exactly one texture feature value:

LOL(LRP_{P,N}, i) = D_i·N^0 + D_{i+1}·N^1 + ... + D_{P-1}·N^{P-1-i} + D_0·N^{P-i} + ... + D_{i-1}·N^{P-1}.

Through P shift operations, the minimum over the shift results is taken, so that each pixel corresponds to one feature value. A shift operation corresponds to a rotation of the two-dimensional image, with the shift step corresponding to the rotation angle; taking the minimum over multiple shifts guarantees the rotation invariance of the image.
In step 3), when enhancing the texture features of each texture feature matrix, the correlation between the current texture feature matrix and every other texture feature matrix is judged first, and the current texture feature matrix is fused point-to-point with all the other texture feature matrices that are wavelength-correlated with it to obtain the texture feature enhancement matrix.
In the point-to-point fusion, the current matrix and all the texture feature matrices wavelength-correlated with it are averaged element-wise; the resulting new matrix is the enhanced texture feature matrix of that texture feature matrix. The method is simple and easy to implement.
In step 3), whether any two texture feature matrices are wavelength-correlated is determined from the correlation coefficient

C_ij = cov(a_i, a_j) / ( σ(a_i)·σ(a_j) ),

where C_ij is the correlation coefficient of the texture feature matrices of the i-th and j-th wavelengths, and a_i and a_j are the row vectors obtained by stretching those matrices. If C_ij > 0, the two texture feature matrices are judged wavelength-correlated; otherwise they are judged uncorrelated.
The present invention fuses the texture features of the two-dimensional images of all wavelengths and obtains from them the principal texture features of the three-dimensional hyperspectral image. Compared with prior-art processing methods, it makes rational use of the multi-wavelength information of the hyperspectral image, accurately captures rich texture features, and facilitates the discrimination of texture details; it is especially suitable for the analysis of finely textured images (such as texture images of fish fillets). Reducing the dimensionality of the large number of extracted features both represents the hyperspectral image precisely and reduces the data volume, which helps speed up subsequent applications.
Description of drawings
Fig. 1 is a flowchart of the hyperspectral image processing method based on texture feature enhancement of this embodiment;
Fig. 2 is the probability-statistics model of the normalized local direction responses.
Detailed description
The hyperspectral image processing method based on texture feature enhancement of the present invention is further described below with reference to a specific embodiment, but the protection scope of the present invention is not limited to this embodiment; any change or substitution readily conceivable by those skilled in the art within the technical scope disclosed by the present invention shall fall within the protection scope of the present invention.
This embodiment takes as its example an experiment that distinguishes fish flesh stored under different conditions.
Instrument preparation
The experimental equipment consists of a computer, a hyperspectral spectrometer, a halogen lamp, and black and white calibration references. The spectrometer is a Handheld FieldSpec from ASD (Analytical Spectral Devices, USA), with a spectral sampling interval of 1.5 nm and a sampling range of 380 nm to 1030 nm; sample spectra are acquired in diffuse reflection mode using the 14.5 halogen lamp supplied with the instrument. Before spectral acquisition the spectrometer must be routinely calibrated with the black and white references.
Material preparation
54 live flounder (turbot) were slaughtered, bled, gutted, cleaned, and frozen until use. The fish weighed between 372 g and 580 g (mean 512 g) and measured 27.5 cm to 32 cm in length (mean 30.5 cm). They were then cut on a board from right to left into 240 slice samples. The first 96 samples served as fresh, unfrozen samples (Fresh); of the remaining 144 samples, 72 were placed at -70 °C and 72 at -20 °C to produce fast-frozen and slow-frozen samples. Room temperature was held constant at 20 °C. After 9 days, all the frozen samples were thawed overnight at 4 °C, forming the fast freeze-thaw sample set (FFT) and the slow freeze-thaw sample set (SFT). For the Fresh samples, 64 random samples formed the training set and 32 random samples the test set; for the FFT and SFT samples, 48 samples each were used for training and 24 for testing.
Hyperspectral image preprocessing
For all the hyperspectral images, regions of interest of a fixed size were selected with the hyperspectral image processing software ENVI 5.0. To ensure the accuracy of the experiment, the spectral images of the first 100 wavelengths, which are affected by the instrument and the illumination, were discarded, and only the clear hyperspectral images of the 412 bands numbered 101 to 512 were retained. To remove the influence of noise, all the hyperspectral images were corrected to a high signal-to-noise ratio with the minimum noise fraction rotation (MNF Rotation).
The hyperspectral image to be processed in this embodiment has n wavelengths; the two-dimensional image of each wavelength is represented by an M×M matrix I, and every pixel of a two-dimensional image has P direction responses.
The hyperspectral image processing method based on texture feature enhancement of this embodiment, as shown in Fig. 1, includes:
1) Filtering to obtain the local direction response vectors
The second-order Gaussian-like function G_2^θp and its Hilbert transform H_2^θp are convolved with the matrix I; filtering the two-dimensional image of every wavelength with the quadrature pair G_2^θp and H_2^θp yields one local direction response of each pixel of the two-dimensional image:

L_θp = sqrt( (G_2^θp ∗ I)^2 + (H_2^θp ∗ I)^2 ),

where p indexes the p-th direction, 0 ≤ p ≤ P-1, and θ_p is the angle of the p-th direction response, -π ≤ θ_p ≤ π.
Changing the value of θ_p and computing P times gives the local direction responses of all pixels in the P directions, which form the local direction response vector of each pixel:

V = (L_θ0, L_θ1, ..., L_θ(P-1)).

G_2^θp and H_2^θp are obtained from the second-order Gaussian filter function G(x, y) and its Hilbert transform H_2, with

G(x, y) = exp( -(x^2 + y^2)/σ^2 ),

where σ^2 is the variance of the Gaussian-like function and (x, y) are the pixel coordinates in the two-dimensional image.
Define:

G_2^θp = cos^2(θ_p)·G_2a - 2cos(θ_p)sin(θ_p)·G_2b + sin^2(θ_p)·G_2c,

where G_2a, G_2b and G_2c correspond to rotating G(x, y) counterclockwise by 0, π/2 and 3π/4 respectively. Approximating the Hilbert transform H_2 of the second-order Gaussian function with a third-order polynomial gives:

H_2^θp = cos^3(θ_p)·H_2a - 3cos^2(θ_p)sin(θ_p)·H_2b + 3cos(θ_p)sin^2(θ_p)·H_2c - sin^3(θ_p)·H_2d,

where H_2a, H_2b, H_2c and H_2d are basis functions whose steering coefficients are independent of x and y.
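The quadrature-pair response above can be sketched as follows. The sampled basis filters and steering coefficients follow Freeman & Adelson (1991); the grid size, the scale, and the naive same-size convolution are illustrative assumptions, not part of the patent.

```python
import numpy as np

def gaussian_basis(size=9, scale=2.0):
    # Sample the G2/H2 basis filters of Freeman & Adelson (1991) on a grid.
    r = np.linspace(-scale, scale, size)
    x, y = np.meshgrid(r, r)
    g = np.exp(-(x**2 + y**2))
    G2 = [0.9213 * (2*x**2 - 1) * g,
          1.843 * x * y * g,
          0.9213 * (2*y**2 - 1) * g]
    H2 = [0.9780 * (-2.254*x + x**3) * g,
          0.9780 * (-0.7515 + x**2) * y * g,
          0.9780 * (-0.7515 + y**2) * x * g,
          0.9780 * (-2.254*y + y**3) * g]
    return G2, H2

def conv_same(img, k):
    # Naive 'same'-size correlation with edge padding; fine for a small demo.
    kh, kw = k.shape
    p = np.pad(img, ((kh // 2, kh // 2), (kw // 2, kw // 2)), mode='edge')
    out = np.zeros(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(p[i:i + kh, j:j + kw] * k)
    return out

def local_direction_response(img, theta):
    # Quadrature-pair (oriented energy) response L_theta at every pixel:
    # steer G2 and H2 to angle theta, filter, and combine.
    (G2a, G2b, G2c), (H2a, H2b, H2c, H2d) = gaussian_basis()
    c, s = np.cos(theta), np.sin(theta)
    G2t = c*c*G2a - 2*c*s*G2b + s*s*G2c
    H2t = c**3*H2a - 3*c*c*s*H2b + 3*c*s*s*H2c - s**3*H2d
    return np.sqrt(conv_same(img, G2t)**2 + conv_same(img, H2t)**2)
```

Applied to a synthetic vertical step edge, the response is zero in flat regions and positive along the edge, as expected of an oriented-energy filter.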
2) Quantization to obtain the texture feature matrix
2-1) Normalization: the local direction response vectors are preprocessed with a normalization that adds a standard deviation.
Let σ_V denote the standard deviation of the P local direction responses of a pixel; the normalized local direction response vector is then

V_Norm = ( L_θ0, ..., L_θ(P-1) ) / ( sqrt( Σ_{q=0}^{P-1} L_θq^2 ) + σ_V ).
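A minimal sketch of this normalization, assuming one concrete reading of the formula (the L2 norm of the response vector plus its standard deviation in the denominator); the function name and the small epsilon guard are illustrative, not from the patent.

```python
import numpy as np

def normalize_with_std(v, eps=1e-12):
    # Divide each direction response by the vector's L2 norm plus its
    # standard deviation (assumed reading of the patent's "normalization
    # with an added standard deviation"); eps guards the all-zero vector.
    v = np.asarray(v, dtype=float)
    return v / (np.linalg.norm(v) + np.std(v) + eps)
```

Because the denominator strictly exceeds any single component, nonnegative responses map into [0, 1), and their ordering is preserved.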
2-2) N-state encoding: dynamic N-state encoding based on a probability model.
The probability model is defined as follows. Let F(t) be defined over the values of all the elements of the normalized local direction response vectors V_Norm, whose range is [minV_Norm, maxV_Norm] with 0 ≤ minV_Norm < maxV_Norm < 1, where minV_Norm and maxV_Norm are the minimum and maximum element values of the normalized local direction response vectors, and t is a normalized local direction response L_Norm,θp with the same range as F(t). The integral defining F(t) is solved by compound trapezoidal approximation, and a Parzen window gives a non-parametric estimate of the probability density f(ω) of the normalized local direction responses.
A probability model is built from the probability density f(ω) of the normalized direction responses: the range of F(t), namely [minV_Norm, maxV_Norm], is divided into N equal parts, giving N regions (this embodiment uses N regions of equal size); t is then divided into N corresponding regions according to the region of F(t), the regions are numbered 0, 1, 2, ..., N-1, and the number of the region containing t is its N-state encoding result. As shown in Fig. 2, a normalized local direction response that falls into the region numbered N-2 is encoded as N-2. In this way the probability model converts the normalized local direction response vector V_Norm into the encoded direction vector V_D:

V_D = (D_0, D_1, ..., D_{P-1}). (9)
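The encoding step can be sketched with an equal-probability quantizer: the empirical CDF of the observed responses stands in for the patent's Parzen-window density estimate, and `n_state_encode` is a hypothetical helper name.

```python
import numpy as np

def n_state_encode(v, sample, N=4):
    # Equal-probability quantization: the empirical CDF of the observed
    # responses (a stand-in for the Parzen-window model) is split into N
    # equal parts, and each value is encoded by the index of its CDF bin.
    sample = np.sort(np.asarray(sample, dtype=float))
    F = np.searchsorted(sample, np.asarray(v, dtype=float), side='right') / len(sample)
    return np.clip((F * N).astype(int), 0, N - 1)
```

For responses drawn uniformly over a range, this reduces to splitting the range into N equal bins.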
2-3) Computing the texture features: the LRP method computes the texture feature value of every pixel, ensuring the rotation invariance of the image.
According to the formula

LRP_{P,N} = D_0·N^0 + D_1·N^1 + ... + D_{P-1}·N^{P-1},

the texture feature value LRP_{P,N} of each pixel is computed, with D_p an element of the encoded direction vector V_D. The P-digit LRP_{P,N} is shifted P times, and the minimum of the resulting P values of LRP_{P,N} is taken as the feature value of the corresponding pixel:

LRP^ri_{P,N} = min{ LOL(LRP_{P,N}, i) | i = 0, 1, ..., P-1 },

where LOL(LRP_{P,N}, i) performs a circular left shift on the P-digit LRP_{P,N}, with step size i digits and i taking the values 0, 1, ..., P-1 in turn.
If V_D = (0, 1, 1, 3) (here P = 4 and N = 4), i = 0 can be taken to mean that the two-dimensional image is not rotated, with direction vector (0, 1, 1, 3); i = 1 corresponds to rotating the two-dimensional image clockwise by π/2, with direction vector (1, 1, 3, 0); and so on for the direction vectors of the other rotation angles. Then:
LOL(LRP_{P,N}, 0) = 0×4^0 + 1×4^1 + 1×4^2 + 3×4^3 = 212
LOL(LRP_{P,N}, 1) = 1×4^0 + 1×4^1 + 3×4^2 + 0×4^3 = 53
LOL(LRP_{P,N}, 2) = 1×4^0 + 3×4^1 + 0×4^2 + 1×4^3 = 77
LOL(LRP_{P,N}, 3) = 3×4^0 + 0×4^1 + 1×4^2 + 1×4^3 = 83
Taking the minimum gives the feature value of this pixel: 53.
The texture feature values of all pixels of the two-dimensional image of each wavelength are combined to form an M×M texture feature matrix.
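The LRP computation with its rotation-invariant minimum over circular shifts can be sketched as (function name illustrative):

```python
def lrp_value(D, N):
    # Rotation-invariant LRP: minimum over all circular left shifts of the
    # base-N number whose digits are the encoded direction states D_p.
    P = len(D)
    def value(shift):
        return sum(D[(p + shift) % P] * N**p for p in range(P))
    return min(value(i) for i in range(P))
```

On the worked example above, `lrp_value([0, 1, 1, 3], 4)` returns 53, and any circular rotation of the direction vector yields the same value, which is exactly the rotation invariance the minimum provides.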
3) Texture feature enhancement to obtain the texture feature enhancement matrices
3-1) Judging wavelength correlation
Define the symmetric correlation matrix C, whose element C_ij is the correlation coefficient of the texture feature matrices of the i-th and j-th wavelengths:

C_ij = cov(a_i, a_j) / ( σ(a_i)·σ(a_j) ),

where a_i is the row vector obtained by stretching the texture feature matrix G_i of wavelength i, a_j is the row vector obtained by stretching the texture feature matrix G_j of wavelength j, and cov(a_i, a_j) is the covariance between a_i and a_j. If C_ij is greater than 0, G_i and G_j are judged wavelength-correlated.
3-2)点对点融合3-2) Point-to-point fusion
Using the correlation coefficients, find all texture feature matrices that are wavelength-correlated with G_i, add up all of these wavelength-correlated matrices (including G_i itself) and average them; this point-to-point fusion forms a new texture feature matrix, the texture feature enhancement matrix. For a hyperspectral image with n wavelengths, the point-to-point fusion is carried out n times in turn, yielding n texture feature enhancement matrices.
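A minimal sketch of this fusion step (the function name and the use of a strict C[i, j] > 0 test are taken from the text; since C[i, i] = 1, G_i is always included in its own average):

```python
import numpy as np

def enhance_all(feature_mats, C):
    """For each wavelength i, average G_i together with every texture
    feature matrix G_j judged wavelength-correlated (C[i, j] > 0)."""
    n = len(feature_mats)
    return [np.mean([feature_mats[j] for j in range(n) if C[i, j] > 0], axis=0)
            for i in range(n)]
```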
4) Extract the main texture features to form the main texture feature vector representing the hyperspectral image
4-1) Stretch each of the n texture feature enhancement matrices re-formed by the fusion into an M^2-dimensional column vector, and assemble these into an M^2×n matrix R:
This matrix represents the hyperspectral image matrix after texture feature enhancement.
4-2) Apply the PCA algorithm to the matrix R, select three principal components P_1, P_2, P_3, and stretch them into row vectors to compose the texture feature vector X_k; P_1, P_2, P_3 are pairwise-orthogonal M^2-dimensional column vectors.
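Step 4) can be sketched with an SVD-based PCA (an illustrative sketch; the row-centering and the way samples are laid out as columns of R are assumptions not spelled out in the text):

```python
import numpy as np

def texture_feature_vector(enhanced_mats, n_components=3):
    """Stack the n enhanced M×M matrices as M^2-dimensional columns of R,
    extract the leading principal components by SVD, and concatenate them
    into a single row vector X_k."""
    R = np.column_stack([g.ravel() for g in enhanced_mats])  # M^2 × n
    Rc = R - R.mean(axis=1, keepdims=True)                   # center each row
    U, s, Vt = np.linalg.svd(Rc, full_matrices=False)
    P = U[:, :n_components]   # pairwise-orthogonal M^2-dim components
    return P.T.ravel()        # components stretched into one row vector
```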
Steps 1) to 4) are carried out repeatedly to obtain the texture feature vectors of the hyperspectral images of all samples, where X_k denotes the texture feature vector of the hyperspectral image of the k-th sample.
5) Combine all hyperspectral texture feature vectors into a new matrix and reduce its dimensionality. Reducing the dimensionality of the main texture feature vectors greatly cuts the data volume and the execution time of subsequent applications (chiefly the classification of fine-texture hyperspectral images), improving their efficiency.
Combine all hyperspectral texture feature vectors into a new matrix X of order l (l = k×3M^2), where k is the number of samples, 240 in this embodiment. In this embodiment, the supervised manifold learning method DLPP is used to reduce the dimensionality of X, giving a d-dimensional data set Y = (y_1, y_2, ..., y_m), y_i ∈ R^d (d ≪ l), where:
Y^T = A^T X,  (16)
A is a d-dimensional row vector, obtained as follows:
Solve, subject to the constraint A^T X L_W X^T A = 1, for the minimum of objective function (17).
B and W denote weight matrices of size l×k: B_ij indicates that points i and j belong to different classes, while W_ij indicates that points i and j belong to the same class. D_B and D_W are diagonal matrices holding the row (equivalently column) sums of B and W, and L_B = D_B - B and L_W = D_W - W are the corresponding Laplacian matrices.
Algebraic transformation of (17) gives:
X L_B X^T A = λ X L_W X^T A,  (18)
where λ is an eigenvalue.
A is obtained by solving (18) for the eigenvectors corresponding to the d largest eigenvalues: A = (a_0, a_1, ..., a_d) consists of the eigenvectors arranged according to the eigenvalues λ_1 > λ_2 > ... > λ_d.
Substituting the obtained A into (16) gives Y^T; transposing Y^T yields the d-dimensional main texture feature matrix Y after dimensionality reduction.
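The eigenproblem (18) and projection (16) can be sketched as follows (an illustrative sketch: the generalized problem is reduced to a standard eigenproblem, and the small ridge term added to X L_W X^T is an assumption made purely for numerical stability):

```python
import numpy as np

def dlpp_project(X, L_B, L_W, d):
    """Solve X L_B X^T a = lambda X L_W X^T a, keep the eigenvectors of
    the d largest eigenvalues as columns of A, and project via (16):
    Y = (A^T X)^T, one d-dimensional row per sample."""
    M_B = X @ L_B @ X.T
    M_W = X @ L_W @ X.T + 1e-9 * np.eye(X.shape[0])  # stabilizing ridge
    w, V = np.linalg.eig(np.linalg.solve(M_W, M_B))  # generalized -> standard
    order = np.argsort(w.real)[::-1]                 # descending eigenvalues
    A = V[:, order[:d]].real
    return (A.T @ X).T
```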
In this way the l-dimensional main texture feature vector X is reduced to the d-dimensional Y, guaranteeing that points far apart in the high-dimensional space remain far apart, while points that were close together remain close.
In this embodiment, the least-squares support vector machine library lib-LSSVM (library - Least Squares Support Vector Machine) is applied to the dimension-reduced result Y to build a least-squares support vector machine (LSSVM), with which the different fish fillet samples are classified; accurate classification results were obtained.
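lib-LSSVM itself is a MATLAB toolbox; as a hedged, self-contained stand-in, a minimal binary LS-SVM can be written directly, since its training reduces to solving a single linear system (linear kernel, regression-style formulation; all names and the choice of gamma are illustrative, not taken from the source):

```python
import numpy as np

def lssvm_train(X, y, gamma=10.0):
    """Train a binary least-squares SVM with a linear kernel.
    Labels y are in {-1, +1}; returns the dual weights alpha and bias b
    from the bordered linear system [[0, 1^T], [1, K + I/gamma]]."""
    n = X.shape[0]
    K = X @ X.T
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[1:], sol[0]          # alpha, b

def lssvm_predict(X_train, alpha, b, X_new):
    """Classify new points by the sign of the decision function."""
    return np.sign(X_new @ X_train.T @ alpha + b)
```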
Claims (5)
Priority Applications (1)

- CN201310358810.XA, priority/filing date 2013-08-16: The Hyperspectral imagery processing method strengthened based on textural characteristics

Publications (2)

- CN103440625A, published 2013-12-11
- CN103440625B, granted 2016-08-10 (status: Expired - Fee Related)

Family ID: 49694317
Legal Events

- C06 / PB01: Publication
- C10 / SE01: Entry into substantive examination
- CB03 / COR: Change of inventor information (Li Yujin added; inventors: Deng Shuiguang, Li Yujin, Xu Yifei, Yin Jianwei, Li Ying, Wu Jian, Wu Chaohui)
- C14 / GR01: Grant of patent
- CF01: Termination of patent right due to non-payment of annual fee (granted publication date: 2016-08-10)