CN104881634A - Illumination face recognition method based on completed local convex-and-concave pattern - Google Patents
Illumination face recognition method based on completed local convex-and-concave pattern
- Publication number
- CN104881634A (application number CN201510223240.2A)
- Authority
- CN
- China
- Prior art keywords
- clccp
- image
- pixel
- feature
- block
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G06V40/162—Detection; Localisation; Normalisation using pixel segmentation or colour matching
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Image Analysis (AREA)
Abstract
The invention relates to an illumination face recognition method based on a completed local convex-and-concave pattern, and belongs to the field of pattern recognition. The image is first divided into blocks, and bilinear interpolation is performed on each block. By encoding the sign and magnitude of the local differences at every pixel of each block, the sign feature matrix and the magnitude feature matrix of the block are obtained. The pixels of each block are then encoded to obtain the central pixel feature matrix of the block. Histogram features are extracted from these three feature matrices, giving three feature vectors that are concatenated in turn to form the histogram feature vector of the block. Finally, the histogram feature vectors of all blocks are concatenated to form the histogram feature vector of the original image, which is fed to a nearest-neighbour classifier to identify the original face image. The invention is an image texture description method based on the second-order differential of the image and performs face recognition effectively under varying illumination.
Description
Technical Field
The invention relates to an illumination face recognition method based on a completed local convex-and-concave pattern, and belongs to the technical field of pattern recognition.
Background Art
The local binary pattern (LBP) [L. Wang and D. C. He, "Texture classification using texture spectrum", Pattern Recognition, vol. 23, pp. 905-910, 1990.] is an important image feature extraction operator that is effective and computationally cheap. Although LBP has been very successful in computer vision and pattern recognition, its working mechanism still leaves room for improvement. Dominant local binary patterns (DLBP) [S. Liao, M. W. K. Law, and A. C. S. Chung, "Dominant local binary patterns for texture classification," IEEE Trans. Image Process., vol. 18, no. 5, pp. 1107-1118, May 2009.] compute the statistics of all LBP patterns in an image, select the patterns with higher frequencies, and build the final feature vector from the high-frequency patterns whose cumulative frequency reaches 80%. LBP considers only the sign of the differences between the central pixel and its neighbours; the completed local binary pattern (CLBP) [Z. Guo, L. Zhang and D. Zhang, "A completed modeling of local binary pattern operator for texture classification," IEEE Trans. Image Process., vol. 19, no. 6, pp. 1657-1663, 2010.] considers not only the sign information but also the magnitude of the differences and the characteristics of the central pixel. LBP extracts first-order differential information of the image; the local derivative pattern (LDP) [B. Zhang, Y. Gao, S. Zhao, and J. Liu, "Local derivative pattern versus local binary pattern: Face recognition with higher-order local pattern descriptor," IEEE Trans. Image Process., vol. 19, no. 2, pp. 533-544, Feb. 2010.] improves on LBP by extracting the second-order differential information of the image. To reduce the number of patterns in the LBP algorithm, researchers proposed the center-symmetric local derivative pattern (CS-LDP) [G. Xue, L. Song, J. Sun, M. Wu, "Hybrid center-symmetric local pattern for dynamic background subtraction," ICME, Barcelona, Spain, pp. 1-6, July 2011.] and the center-symmetric local binary pattern (CS-LBP) [Marko H., Matti P., Cordelia S., "Description of interest regions with center-symmetric local binary pattern," Conference on Computer Vision, Graphics and Image Processing, 2006, 4338: 58-69.]. The local binary count (LBC) [Zhao Y., Huang D. S., Jia W., "Completed local binary count for rotation invariant texture classification," IEEE Trans. Image Process., vol. 21, no. 10, pp. 4492-4497, 2012.] counts only the number of "1" bits in the binary pattern. The uniform local binary pattern reduces the number of patterns and the amount of computation [T. Ojala, M. Pietikäinen, T. Mäenpää, "Gray scale and rotation invariant texture classification with local binary patterns," in: D. Vernon (Ed.), Proceedings of the Sixth European Conference on Computer Vision (ECCV 2000), Dublin, Ireland, pp. 404-420, 2000.]. To enhance the discriminative power of the textures extracted by LBP, the LBP algorithm has also been combined with Gabor filters and dimensionality reduction algorithms [Zhang W. C., Shan S. G., Gao W., et al., "Local Gabor binary pattern histogram sequence (LGBPHS): a novel non-statistical model for face representation and recognition," Proc. of the 10th IEEE Int'l Conf. on Computer Vision, 2005: 786-791.; B. Zhang, S. Shan, X. Chen, and W. Gao, "Histogram of Gabor phase patterns (HGPP): a novel object representation approach for face recognition," IEEE Trans. Image Process., vol. 16, no. 1, pp. 57-68, 2007.].
LBP considers only the first-order differential information of the image. The purpose of the present invention is to provide a second-order differential texture description method for face images based on the completed local convex-and-concave pattern (CLCCP).
Summary of the Invention
The invention provides an illumination face recognition method based on a completed local convex-and-concave pattern, intended to solve the problem of face recognition under varying illumination. Whereas the local binary pattern can only describe the first-order differential of an image, the completed local convex-and-concave pattern proposed by the invention effectively describes the second-order differential characteristics of the image. It considers not only the sign of the local differences, but also their magnitude and the discriminative information of the central pixel.
The illumination face recognition method based on the completed local convex-and-concave pattern is realised as follows. The image is first divided into blocks. Each block is then bilinearly interpolated so that eight symmetric directions can be constructed around every pixel, and the local difference of every pixel in the block is computed along the eight directions. The sign feature CLCCP-S and the magnitude feature CLCCP-M of these local differences are encoded, and every pixel of each block is also encoded to obtain the central pixel feature CLCCP-C of the block. Histogram feature vectors are extracted from the CLCCP-S, CLCCP-M and CLCCP-C feature matrices of each block and concatenated in turn to form the histogram feature vector of that block. Finally, the histogram feature vectors of all blocks are concatenated to form the histogram feature vector of the original image, which is fed to a nearest-neighbour classifier based on the chi-square statistic to identify the original face image.
The specific steps of the illumination face recognition method based on the completed local convex-and-concave pattern are as follows:
Step 1. Divide the image into blocks: partition the image I(l) evenly into 4×4 non-overlapping square blocks, 16 blocks in total.
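A minimal sketch of the block partition in Step 1, written in Python with NumPy for illustration (the function name and the 64×64 example size are assumptions taken from the experimental section, not part of the claimed method):

```python
import numpy as np

def partition_into_blocks(image, grid=4):
    """Split a 2-D image into grid x grid equal, non-overlapping blocks, row by row."""
    h, w = image.shape
    bh, bw = h // grid, w // grid          # block height/width; assumes divisibility by grid
    return [image[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw]
            for r in range(grid) for c in range(grid)]

# Example: a 64x64 face image (as in the extended Yale B experiments) gives 16 blocks of 16x16.
img = np.random.randint(0, 256, (64, 64)).astype(float)
blocks = partition_into_blocks(img)
assert len(blocks) == 16 and blocks[0].shape == (16, 16)
```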
Step 2. Perform bilinear interpolation on each block so that eight directions symmetric about every pixel can be constructed; then compute the local difference of every pixel along the different directions and decompose it into a sign part and a magnitude part.
As shown in Fig. 2, a new pixel Q1 can be added between pixels P1 and P2 by interpolation. The interpolation scheme is shown in Fig. 4, where P11, P12, P21 and P22 are four neighbouring pixels of the original image and the new pixel Q0 is obtained by bilinear interpolation:
R1 = ((x2 - x)/(x2 - x1))·P11 + ((x - x1)/(x2 - x1))·P21
R2 = ((x2 - x)/(x2 - x1))·P12 + ((x - x1)/(x2 - x1))·P22
Q0 = ((y2 - y)/(y2 - y1))·R1 + ((y - y1)/(y2 - y1))·R2
where P11, P12, P21, P22, R1, R2 and Q0 denote the pixel values at the corresponding positions, x1, x and x2 denote the abscissas of P11, R1 and P21 respectively, and y1, y and y2 denote the ordinates of P11, Q0 and P12 respectively. Fig. 3 shows that pixel X0 of the original image has eight neighbours P0, P1, P2, P3, P4, P5, P6 and P7, which can only form four directions symmetric about X0. Fig. 2 shows that after interpolation eight interpolated points Q0, Q1, Q2, Q3, Q4, Q5, Q6 and Q7 surround X0, so that there are sixteen neighbours in total around X0 and eight symmetric directions about X0 can be obtained. The added interpolation points also enhance the resolution of the image.
The local differences of pixel X0 in the image block along the eight symmetric directions are computed from the pairs of opposite neighbours along each direction, indexed by i = 0, 1, 2, 3 for the original neighbours and j = 0, 1, 2, 3 for the interpolated neighbours;
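A hedged sketch of Step 2: here the sixteen neighbours are sampled on a radius-1 circle with standard bilinear interpolation, and the directional difference is taken as d_k = n_k + n_{k+8} - 2*x0; the patent's exact sampling positions and difference formula appear only in its figures and equations, so both choices are assumptions made for illustration.

```python
import numpy as np

def bilinear(img, y, x):
    """Standard bilinear interpolation of img at the real-valued position (y, x)."""
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1 = min(y0 + 1, img.shape[0] - 1)
    x1 = min(x0 + 1, img.shape[1] - 1)
    wy, wx = y - y0, x - x0
    return ((1 - wy) * (1 - wx) * img[y0, x0] + (1 - wy) * wx * img[y0, x1]
            + wy * (1 - wx) * img[y1, x0] + wy * wx * img[y1, x1])

def directional_differences(img, r, c, radius=1.0):
    """Eight second-order differences at interior pixel (r, c) from 16 circular neighbours."""
    angles = [2 * np.pi * k / 16 for k in range(16)]
    nbrs = [bilinear(img, r + radius * np.sin(a), c + radius * np.cos(a)) for a in angles]
    center = float(img[r, c])
    # Each symmetric direction pairs neighbour k with the opposite neighbour k + 8.
    return np.array([nbrs[k] + nbrs[k + 8] - 2.0 * center for k in range(8)])
```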
Step 3. Apply the corresponding local convex-and-concave pattern coding separately to the sign part and the magnitude part to obtain the sign feature CLCCP-S and the magnitude feature CLCCP-M of each block, where CLCCP-S1,8(X0)D denotes the local convexity-concavity sign feature at pixel X0 and CLCCP-M1,8(X0)D denotes the local convexity-concavity magnitude feature at pixel X0, each being a binary code over the eight directions built from the sign parts and the magnitude parts, respectively, of the local differences at X0;
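A hedged sketch of the Step 3 coding: each code below sets one bit per direction, the sign bit from the sign of the directional difference and the magnitude bit by comparing the absolute difference with a threshold. Using the mean magnitude over the block as that threshold follows the CLBP convention the description parallels and is an assumption, since the patent's exact CLCCP-S and CLCCP-M formulas are not reproduced in this text.

```python
import numpy as np

def clccp_s(diffs):
    """Sign code: bit k is 1 when the k-th directional difference is non-negative."""
    return sum(1 << k for k in range(8) if diffs[k] >= 0)

def clccp_m(diffs, mag_threshold):
    """Magnitude code: bit k is 1 when the k-th absolute difference reaches the threshold."""
    return sum(1 << k for k in range(8) if abs(diffs[k]) >= mag_threshold)

# Typical use (assumed): threshold the magnitudes against their mean over the block.
diffs = np.array([3.0, -1.5, 0.0, 7.2, -0.3, 2.1, -4.4, 0.9])
threshold = np.mean(np.abs(diffs))
s_code, m_code = clccp_s(diffs), clccp_m(diffs, threshold)
```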
Step 4. Encode every pixel of each image block to obtain the central pixel feature CLCCP-C of the block, where the code compares the pixel value at X0 with c_I, the mean value of the whole image;
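A hedged sketch of the Step 4 central-pixel code; the binary thresholding of the pixel value against the global mean c_I is assumed here by analogy with the CLBP_C operator that the description parallels:

```python
import numpy as np

def clccp_c(pixel_value, image):
    """Central pixel code: compare the pixel value with the mean grey level of the whole image."""
    c_I = float(np.mean(image))
    return 1 if pixel_value >= c_I else 0
```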
Step 5. Through Steps 2, 3 and 4, the completed local convex-and-concave features of the image block, namely the sign feature, the magnitude feature and the central pixel feature, have been extracted. When the pixel X0 traverses the whole image block, the CLCCP-S, CLCCP-M and CLCCP-C feature matrices of each block image are obtained;
Step 6. Next, extract the histogram feature vectors of the three feature matrices of each image block and concatenate these three histogram feature vectors in turn to obtain the histogram feature vector of the block;
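A minimal sketch of Steps 5-7 under stated assumptions: the code maps produced for one block are histogrammed (256 bins are assumed for the two 8-bit codes and 2 bins for the binary central-pixel code, bin counts the patent text does not spell out) and concatenated, and the per-block vectors are then concatenated over the 16 blocks.

```python
import numpy as np

def block_histogram(s_map, m_map, c_map):
    """Concatenated histogram feature vector of one block from its three code maps."""
    hs, _ = np.histogram(s_map, bins=256, range=(0, 256))
    hm, _ = np.histogram(m_map, bins=256, range=(0, 256))
    hc, _ = np.histogram(c_map, bins=2, range=(0, 2))
    return np.concatenate([hs, hm, hc])

def image_feature(per_block_maps):
    """per_block_maps: a list of (s_map, m_map, c_map) triples, one triple per block."""
    return np.concatenate([block_histogram(s, m, c) for s, m, c in per_block_maps])
```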
Step 7. Concatenate the histogram feature vectors of all image blocks to obtain the completed local convex-and-concave pattern histogram feature vector of the original image;
Step 8. Feed this feature vector into a nearest-neighbour classifier based on the chi-square statistic for classification, so as to identify the original face image;
In Step 8, when classification is performed with the nearest-neighbour classifier based on the chi-square statistic, the chi-square statistic is computed first. Let the completed local convex-and-concave pattern histogram feature vectors of two face images I(0) and I(1) be I(0)CLCCP and I(1)CLCCP; the chi-square statistic between them is
χ²(I(0)CLCCP, I(1)CLCCP) = Σi=1..K' [I(0)CLCCP(i) - I(1)CLCCP(i)]² / [I(0)CLCCP(i) + I(1)CLCCP(i) + eps]
where I(0)CLCCP(i) and I(1)CLCCP(i) denote the i-th elements of the texture feature vectors I(0)CLCCP and I(1)CLCCP respectively, K' denotes the length of the texture vectors, and eps is a fixed value, the smallest positive number in Matlab, i.e. a very small positive constant that ensures I(0)CLCCP(i) + I(1)CLCCP(i) + eps ≠ 0.
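A minimal sketch of the Step 8 matching stage: the chi-square statistic above as the distance and a nearest-neighbour decision over a labelled gallery; NumPy's float64 machine epsilon stands in for Matlab's eps.

```python
import numpy as np

def chi_square(h0, h1, eps=float(np.finfo(np.float64).eps)):
    """Chi-square statistic between two histogram feature vectors."""
    h0, h1 = np.asarray(h0, dtype=float), np.asarray(h1, dtype=float)
    return float(np.sum((h0 - h1) ** 2 / (h0 + h1 + eps)))

def nearest_neighbour(probe_feature, gallery_features, gallery_labels):
    """Return the label of the gallery feature closest to the probe under the chi-square distance."""
    distances = [chi_square(probe_feature, g) for g in gallery_features]
    return gallery_labels[int(np.argmin(distances))]
```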
The beneficial effects of the present invention are:
1. The completed local convex-and-concave pattern texture feature extraction algorithm constructed by the invention is a texture description operator based on the second-order differential of the image, overcoming the limitation that LBP can only describe first-order differential information;
2. The algorithm describes not only the sign information of the local convex-and-concave features of the image but also their magnitude, and takes the discriminative ability of the central pixel into account; fusing the three improves the discriminative power of the texture;
3. The method considers both the convexity-concavity of the local texture of the face image and the magnitude of that convexity-concavity. Face recognition experiments show that the algorithm has low computational complexity and high recognition accuracy for illumination face recognition, and is insensitive to illumination;
4. In the matching and recognition stage, the invention uses the chi-square statistic as the distance measure between two texture feature vectors and a nearest-neighbour classifier for classification; the algorithm is simple and computationally convenient, allowing real-time image matching and recognition.
Description of the Drawings
Fig. 1 is a schematic block diagram of the completed local convex-and-concave pattern feature extraction steps of the invention;
Fig. 2 is a schematic diagram of the eight symmetric directions of pixel X0 in the invention;
Fig. 3 is a schematic diagram of the four symmetric directions of a pixel in the original image;
Fig. 4 is a schematic diagram of bilinear interpolation in the invention;
Fig. 5 shows the 64 sample images of one subject in the illumination subset of the extended Yale B face database used in the embodiments;
Fig. 6 shows the cumulative match characteristic curves of the local binary pattern, the uniform local binary pattern, the completed local binary pattern and the present method on the extended Yale B database;
Fig. 7 shows the correct recognition rate curves of the local binary pattern, the uniform local binary pattern, the completed local binary pattern and the present method on the extended Yale B database.
Detailed Description of the Embodiments
Embodiment 1: As shown in Figs. 1-7, in an illumination face recognition method based on the completed local convex-and-concave pattern, the image is first divided into blocks; each block is then bilinearly interpolated so that eight symmetric directions can be constructed around every pixel, and the local difference of every pixel in the block is computed along the eight directions; the sign feature and the magnitude feature of these local differences are encoded; every pixel of each block is encoded to obtain the central pixel feature of the block; histogram feature vectors are then extracted from the feature matrices of the sign feature, the magnitude feature and the central pixel feature of each block and concatenated in turn to obtain the histogram feature vector of the block; finally, the histogram feature vectors of all blocks are concatenated to obtain the histogram feature vector of the original image, which is fed into a nearest-neighbour classifier based on the chi-square statistic for classification, so as to identify the original face image.
The specific steps of the illumination face recognition method based on the completed local convex-and-concave pattern are as follows:
Step 1. Divide the image into blocks: partition the image I(l) evenly into 4×4 non-overlapping square blocks, 16 blocks in total.
Step 2. Perform bilinear interpolation on each block so that eight directions symmetric about every pixel can be constructed; then compute the local difference of every pixel along the different directions and decompose it into a sign part and a magnitude part.
Step 3. Apply the corresponding local convex-and-concave pattern coding separately to the sign part and the magnitude part to obtain the sign feature CLCCP-S and the magnitude feature CLCCP-M of each block, where CLCCP-S1,8(X0)D denotes the local convexity-concavity sign feature at pixel X0 and CLCCP-M1,8(X0)D denotes the local convexity-concavity magnitude feature at pixel X0, each being a binary code over the eight directions built from the sign parts and the magnitude parts of the local differences at X0.
Step 4. Encode every pixel of each image block to obtain the central pixel feature CLCCP-C of the block, where the code compares the pixel value at X0 with c_I, the mean value of the whole image.
Step 5. Through Steps 2, 3 and 4, the completed local convex-and-concave features of the image block, namely the sign feature, the magnitude feature and the central pixel feature, have been extracted. When the pixel X0 traverses the whole image block, the CLCCP-S, CLCCP-M and CLCCP-C feature matrices of each block image are obtained.
Step 6. Extract the histogram feature vectors of the three feature matrices of each image block and concatenate these three histogram feature vectors in turn to obtain the histogram feature vector of the block.
Step 7. Concatenate the histogram feature vectors of all image blocks to obtain the completed local convex-and-concave pattern histogram feature vector of the original image.
Step 8. Feed this feature vector into a nearest-neighbour classifier based on the chi-square statistic for classification, so as to identify the original face image.
In Step 8, when classification is performed with the nearest-neighbour classifier based on the chi-square statistic, the chi-square statistic is computed first. Let the completed local convex-and-concave pattern histogram feature vectors of two face images I(0) and I(1) be I(0)CLCCP and I(1)CLCCP; the chi-square statistic between them is χ²(I(0)CLCCP, I(1)CLCCP) = Σi=1..K' [I(0)CLCCP(i) - I(1)CLCCP(i)]² / [I(0)CLCCP(i) + I(1)CLCCP(i) + eps], where I(0)CLCCP(i) and I(1)CLCCP(i) denote the i-th elements of the texture feature vectors, K' denotes the length of the texture vectors, and eps is a fixed value, the smallest positive number in Matlab.
Embodiment 2: As shown in Figs. 1-7, in an illumination face recognition method based on the completed local convex-and-concave pattern, the image is first divided into blocks; each block is then bilinearly interpolated so that eight symmetric directions can be constructed around every pixel, and the local difference of every pixel in the block is computed along the eight directions; the sign feature and the magnitude feature of these local differences are encoded; every pixel of each block is encoded to obtain the central pixel feature of the block; histogram feature vectors are then extracted from the feature matrices of the sign feature, the magnitude feature and the central pixel feature of each block and concatenated in turn to obtain the histogram feature vector of the block; finally, the histogram feature vectors of all blocks are concatenated to obtain the histogram feature vector of the original image, which is fed into a nearest-neighbour classifier based on the chi-square statistic for classification, so as to identify the original face image.
The specific steps of the illumination face recognition method based on the completed local convex-and-concave pattern are as follows:
Step 1. Divide the image into blocks: partition the image I(l) evenly into 4×4 non-overlapping square blocks, 16 blocks in total.
Step 2. Perform bilinear interpolation on each block so that eight directions symmetric about every pixel can be constructed; then compute the local difference of every pixel along the different directions and decompose it into a sign part and a magnitude part.
As shown in Fig. 2, a new pixel Q1 can be added between pixels P1 and P2 by interpolation. The interpolation scheme is shown in Fig. 4, where P11, P12, P21 and P22 are four neighbouring pixels of the original image and the new pixel Q0 is obtained by bilinear interpolation:
R1 = ((x2 - x)/(x2 - x1))·P11 + ((x - x1)/(x2 - x1))·P21
R2 = ((x2 - x)/(x2 - x1))·P12 + ((x - x1)/(x2 - x1))·P22
Q0 = ((y2 - y)/(y2 - y1))·R1 + ((y - y1)/(y2 - y1))·R2
where P11, P12, P21, P22, R1, R2 and Q0 denote the pixel values at the corresponding positions, x1, x and x2 denote the abscissas of P11, R1 and P21 respectively, and y1, y and y2 denote the ordinates of P11, Q0 and P12 respectively. Fig. 3 shows that pixel X0 of the original image has eight neighbours P0, P1, P2, P3, P4, P5, P6 and P7, which can only form four directions symmetric about X0. Fig. 2 shows that after interpolation eight interpolated points Q0, Q1, Q2, Q3, Q4, Q5, Q6 and Q7 surround X0, so that there are sixteen neighbours in total around X0 and eight symmetric directions about X0 can be obtained. The added interpolation points also enhance the resolution of the image.
The local differences of pixel X0 in the image block along the eight symmetric directions are computed from the pairs of opposite neighbours along each direction, indexed by i = 0, 1, 2, 3 for the original neighbours and j = 0, 1, 2, 3 for the interpolated neighbours.
Step 3. Apply the corresponding local convex-and-concave pattern coding separately to the sign part and the magnitude part to obtain the sign feature CLCCP-S and the magnitude feature CLCCP-M of each block, where CLCCP-S1,8(X0)D denotes the local convexity-concavity sign feature at pixel X0 and CLCCP-M1,8(X0)D denotes the local convexity-concavity magnitude feature at pixel X0, each being a binary code over the eight directions built from the sign parts and the magnitude parts of the local differences at X0.
Step 4. Encode every pixel of each image block to obtain the central pixel feature CLCCP-C of the block, where the code compares the pixel value at X0 with c_I, the mean value of the whole image.
Step 5. Through Steps 2, 3 and 4, the completed local convex-and-concave features of the image block, namely the sign feature, the magnitude feature and the central pixel feature, have been extracted. When the pixel X0 traverses the whole image block, the CLCCP-S, CLCCP-M and CLCCP-C feature matrices of each block image are obtained.
Step 6. Extract the histogram feature vectors of the three feature matrices of each image block and concatenate these three histogram feature vectors in turn to obtain the histogram feature vector of the block.
Step 7. Concatenate the histogram feature vectors of all image blocks to obtain the completed local convex-and-concave pattern histogram feature vector of the original image.
Step 8. Feed this feature vector into a nearest-neighbour classifier based on the chi-square statistic for classification, so as to identify the original face image.
In Step 8, when classification is performed with the nearest-neighbour classifier based on the chi-square statistic, the chi-square statistic is computed first. Let the completed local convex-and-concave pattern histogram feature vectors of two face images I(0) and I(1) be I(0)CLCCP and I(1)CLCCP; the chi-square statistic between them is χ²(I(0)CLCCP, I(1)CLCCP) = Σi=1..K' [I(0)CLCCP(i) - I(1)CLCCP(i)]² / [I(0)CLCCP(i) + I(1)CLCCP(i) + eps], where I(0)CLCCP(i) and I(1)CLCCP(i) denote the i-th elements of the texture feature vectors, K' denotes the length of the texture vectors, and eps is a fixed value, the smallest positive number in Matlab.
To demonstrate the beneficial effects of the method, the recognition rate and the cumulative match recognition rate of the method on an illumination face database are computed and compared with those of other related algorithms;
First, the recognition rate of the method on the illumination face database is computed and compared with related algorithms, and the corresponding recognition performance curves are drawn. This embodiment uses the MATLAB software environment, and the threshold is set to 0. The face images used are the illumination subset of the extended Yale B face database, which contains 38 subjects, each photographed 64 times under different illumination conditions, 2432 images in total, each of size 64×64; Fig. 5 shows the 64 sample images of one subject in the database. All cropped face images can be downloaded from the database website (http://vision.ucsd.edu/~leekc/ExtYaleDatabase/ExtYaleB.html). In this embodiment, the correct recognition rates and cumulative match characteristic curves of four algorithms are computed: the present method, the local binary pattern, the uniform local binary pattern and the completed local binary pattern. A nearest-neighbour classifier is used to compute the recognition rate. When computing the recognition rate, the training set consists of 1, 2, 3, 4 or 5 samples per subject, and the remaining images are used for testing. Each test sample is compared with all training samples; if the training sample closest to the test sample has the same identity as the test sample, the recognition is considered correct. The number of correctly recognised samples divided by the total number of test samples gives the correct recognition rate.
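A sketch of the recognition-rate protocol just described, assuming one feature vector and one identity label per image (the data layout and function names are illustrative, not taken from the patent):

```python
import numpy as np

def chi2(a, b, eps=float(np.finfo(np.float64).eps)):
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(np.sum((a - b) ** 2 / (a + b + eps)))

def recognition_rate(train_feats, train_labels, test_feats, test_labels):
    """Fraction of test images whose chi-square nearest training sample has the same identity."""
    correct = 0
    for feat, true_label in zip(test_feats, test_labels):
        distances = [chi2(feat, g) for g in train_feats]
        if train_labels[int(np.argmin(distances))] == true_label:
            correct += 1
    return correct / len(test_labels)
```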
The cumulative match characteristic curves of the present method, the local binary pattern, the completed local binary pattern and the uniform local binary pattern are also computed. Computing a cumulative match characteristic curve requires a gallery set and a probe set. The gallery set consists of one image of each subject in the extended Yale B database, and the remaining 63 images of each subject form the probe set. Let the number of images in the gallery set be L, and let P be an all-zero vector of length L. An image I in the probe set is matched against all images in the gallery set, giving a distance vector D = {d1, d2, ..., dL}. Let d be the distance between the probe image I and the gallery image of the same identity; then d is necessarily an element of D. If the elements of D are sorted in ascending order and d falls at position l, the element at position l of P is incremented by 1. This is repeated for every image in the probe set, and each element of P is then divided by the total number of probe images; the "rank 1" recognition rate is then the first element of P, the "rank 2" recognition rate the second element, and so on. In this embodiment, when computing the cumulative match characteristic curve, one image of each subject is randomly selected to form the gallery set and the remaining 63 images of each subject form the probe set. The cumulative match curves of the local binary pattern, the uniform local binary pattern, the completed local binary pattern and the present method under this gallery and probe set are shown in Fig. 6;
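A sketch of the cumulative match characteristic computation just described, normalised by the number of probe images and accumulated over ranks in the standard CMC form (the accumulation step is an assumption about what the description intends):

```python
import numpy as np

def cmc_curve(gallery_feats, gallery_ids, probe_feats, probe_ids):
    """Cumulative match characteristic: entry k is the fraction of probes matched within rank k+1."""
    eps = float(np.finfo(np.float64).eps)
    gallery_ids = np.asarray(gallery_ids)
    rank_hits = np.zeros(len(gallery_feats))
    for feat, pid in zip(probe_feats, probe_ids):
        f = np.asarray(feat, dtype=float)
        d = np.array([np.sum((f - np.asarray(g, float)) ** 2 / (f + np.asarray(g, float) + eps))
                      for g in gallery_feats])
        order = np.argsort(d)                   # gallery sorted from nearest to farthest
        rank = int(np.where(gallery_ids[order] == pid)[0][0])
        rank_hits[rank] += 1                    # position at which the same-identity image appears
    return np.cumsum(rank_hits) / len(probe_ids)
```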
As can be seen from Fig. 6, the local binary pattern and the uniform local binary pattern perform similarly, but both are weaker than the completed local binary pattern, which in turn is considerably weaker than the present method. As the rank approaches the number of images in the gallery set, the algorithms perform similarly, but at that point the comparison has little practical value.
This embodiment also simulates the correct recognition rate of each algorithm for different numbers of training samples. The simulation is repeated five times, the average correct recognition rate and the standard deviation are computed, and the results are plotted in Fig. 7. As can be seen from Fig. 7, the present method performs considerably better than the other algorithms. When the number of training samples is 5, the average recognition rates of the local binary pattern, the uniform local binary pattern, the completed local binary pattern and the present method, computed with the nearest-neighbour classifier based on the chi-square statistic, are 66.09%, 62.34%, 70.92% and 76.76% respectively. The present method is 10.67% higher than the local binary pattern, 14.42% higher than the uniform local binary pattern and 5.84% higher than the completed local binary pattern, which shows that the method is a highly effective illumination face recognition method.
The specific embodiments of the invention have been described in detail above with reference to the accompanying drawings, but the invention is not limited to these embodiments; various changes can be made within the knowledge of a person of ordinary skill in the art without departing from the spirit of the invention.
Claims (3)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510223240.2A CN104881634B (en) | 2015-05-05 | 2015-05-05 | Illumination face recognition method based on completed local convex-and-concave pattern |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510223240.2A CN104881634B (en) | 2015-05-05 | 2015-05-05 | Illumination face recognition method based on completed local convex-and-concave pattern |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104881634A true CN104881634A (en) | 2015-09-02 |
CN104881634B CN104881634B (en) | 2018-02-09 |
Family
ID=53949122
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510223240.2A Expired - Fee Related CN104881634B (en) | 2015-05-05 | 2015-05-05 | Illumination face recognition method based on completed local convex-and-concave pattern |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104881634B (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107292230A (en) * | 2017-05-09 | 2017-10-24 | 华南理工大学 | Embedded finger vein identification method based on convolutional neural network and having counterfeit detection capability |
CN109410258A (en) * | 2018-09-26 | 2019-03-01 | 重庆邮电大学 | Texture image feature extracting method based on non local binary pattern |
CN110059606A (en) * | 2019-04-11 | 2019-07-26 | 新疆大学 | A kind of improved increment Non-negative Matrix Factorization face recognition algorithms |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101317183A (en) * | 2006-01-11 | 2008-12-03 | 三菱电机株式会社 | Method for localizing pixels representing an iris in an image acquired of an eye |
CN101551858A (en) * | 2009-05-13 | 2009-10-07 | 北京航空航天大学 | Target recognition method based on differential code and differential code mode |
US20100074496A1 (en) * | 2008-09-23 | 2010-03-25 | Industrial Technology Research Institute | Multi-dimensional empirical mode decomposition (emd) method for image texture analysis |
CN101835037A (en) * | 2009-03-12 | 2010-09-15 | 索尼株式会社 | Method and system for carrying out reliability classification on motion vector in video |
CN102722699A (en) * | 2012-05-22 | 2012-10-10 | 湖南大学 | Face identification method based on multiscale weber local descriptor and kernel group sparse representation |
CN103246880A (en) * | 2013-05-15 | 2013-08-14 | 中国科学院自动化研究所 | Human face recognizing method based on multi-level local obvious mode characteristic counting |
- 2015-05-05 CN CN201510223240.2A patent/CN104881634B/en not_active Expired - Fee Related
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101317183A (en) * | 2006-01-11 | 2008-12-03 | 三菱电机株式会社 | Method for localizing pixels representing an iris in an image acquired of an eye |
US20100074496A1 (en) * | 2008-09-23 | 2010-03-25 | Industrial Technology Research Institute | Multi-dimensional empirical mode decomposition (emd) method for image texture analysis |
CN101835037A (en) * | 2009-03-12 | 2010-09-15 | 索尼株式会社 | Method and system for carrying out reliability classification on motion vector in video |
CN101551858A (en) * | 2009-05-13 | 2009-10-07 | 北京航空航天大学 | Target recognition method based on differential code and differential code mode |
CN102722699A (en) * | 2012-05-22 | 2012-10-10 | 湖南大学 | Face identification method based on multiscale weber local descriptor and kernel group sparse representation |
CN103246880A (en) * | 2013-05-15 | 2013-08-14 | 中国科学院自动化研究所 | Human face recognizing method based on multi-level local obvious mode characteristic counting |
Non-Patent Citations (2)
Title |
---|
WU Xiaosheng et al., "Texture image classification based on concave-convex local binary patterns", Journal of Optoelectronics·Laser *
WU Xiaosheng et al., "Forward-looking infrared target recognition based on texture and feature selection", Journal of Optoelectronics·Laser *
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107292230A (en) * | 2017-05-09 | 2017-10-24 | 华南理工大学 | Embedded finger vein identification method based on convolutional neural network and having counterfeit detection capability |
CN109410258A (en) * | 2018-09-26 | 2019-03-01 | 重庆邮电大学 | Texture image feature extracting method based on non local binary pattern |
CN109410258B (en) * | 2018-09-26 | 2021-12-10 | 重庆邮电大学 | Texture image feature extraction method based on non-local binary pattern |
CN110059606A (en) * | 2019-04-11 | 2019-07-26 | 新疆大学 | A kind of improved increment Non-negative Matrix Factorization face recognition algorithms |
Also Published As
Publication number | Publication date |
---|---|
CN104881634B (en) | 2018-02-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104881676B (en) | A kind of facial image convex-concave pattern texture feature extraction and recognition methods | |
CN104778457B (en) | Video face identification method based on multi-instance learning | |
CN103310236A (en) | Mosaic image detection method and system based on local two-dimensional characteristics | |
CN110070091B (en) | Semantic segmentation method and system based on dynamic interpolation reconstruction and used for street view understanding | |
CN101630405B (en) | Multi-focusing image fusion method utilizing core Fisher classification and redundant wavelet transformation | |
CN106529447A (en) | Small-sample face recognition method | |
CN104091169A (en) | Behavior identification method based on multi feature fusion | |
CN103996018A (en) | Human-face identification method based on 4DLBP | |
CN103150713A (en) | Image super-resolution method of utilizing image block classification sparse representation and self-adaptive aggregation | |
CN107135664A (en) | The method and face identification device of a kind of recognition of face | |
CN106971158B (en) | A pedestrian detection method based on CoLBP co-occurrence feature and GSS feature | |
CN103258202B (en) | A kind of texture characteristic extracting method of robust | |
CN105488519B (en) | A Video Classification Method Based on Video Scale Information | |
CN107729993A (en) | Utilize training sample and the 3D convolutional neural networks construction methods of compromise measurement | |
CN104504669A (en) | Median filtering detection method based on local binary pattern | |
CN106548445A (en) | Spatial domain picture general steganalysis method based on content | |
CN102663399A (en) | Image local feature extracting method on basis of Hilbert curve and LBP (length between perpendiculars) | |
CN102063627B (en) | Method for recognizing natural images and computer generated images based on multi-wavelet transform | |
CN104881634B (en) | A kind of illumination face recognition method based on complete Local Convex diesinking | |
CN104778472B (en) | Human face expression feature extracting method | |
CN104200233A (en) | Clothes classification and identification method based on Weber local descriptor | |
CN114565973A (en) | Motion recognition system, method and device and model training method and device | |
CN116597258A (en) | A method and system for ore sorting model training based on multi-scale feature fusion | |
CN104463091B (en) | A kind of facial image recognition method based on image LGBP feature subvectors | |
CN103632156B (en) | Froth images texture characteristic extracting method based on multiple dimensioned neighborhood correlation matrix |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
EXSB | Decision made by sipo to initiate substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
CB03 | Change of inventor or designer information | ||
CB03 | Change of inventor or designer information |
Inventor after: Chen Xi; Jin Jie; Pan Xiaolu. Inventor before: Chen Xi; Jin Jie |
GR01 | Patent grant | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 20180209 |