CN108710823A - A kind of face similarity system design method - Google Patents

A kind of face similarity system design method

Info

Publication number
CN108710823A
Authority
CN
China
Prior art keywords
feature
blocks
feature points
points
block
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810311000.1A
Other languages
Chinese (zh)
Other versions
CN108710823B (en)
Inventor
郭婧
刘尉
陈祖希
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jinling Institute of Technology
Original Assignee
Jinling Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jinling Institute of Technology filed Critical Jinling Institute of Technology
Priority to CN201810311000.1A priority Critical patent/CN108710823B/en
Publication of CN108710823A publication Critical patent/CN108710823A/en
Application granted granted Critical
Publication of CN108710823B publication Critical patent/CN108710823B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G06V40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F18/24147 Distances to closest patterns, e.g. nearest neighbour classification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The invention relates to a method for comparing the similarity of human faces. Feature points are extracted from a face image and grouped into feature blocks; the blocks are then processed hierarchically, with the feature points inside them examined further, while the proportions of similar blocks and matching feature points are computed, so that the similarity between two face images can be compared. The method is highly practical: it compares facial features in finer detail, and its comparison and calculation procedure is more rigorous.

Description

A Method for Comparing Human Face Similarity

Technical Field

The invention belongs to the field of artificial intelligence, and in particular relates to a novel method for comparing the similarity of human faces.

Background Art

With the rapid development of computer networks and multimedia technology, image-based face detection, recognition, and retrieval have become particularly active research areas. One important research topic is face similarity measurement, which is a key foundation of face detection, recognition, and retrieval technology; the study of face similarity therefore has significant practical value and research significance.

Summary of the Invention

In view of this, the present invention provides a novel face similarity comparison method that solves, or partially solves, the problem of face similarity evaluation.

Specifically, the present invention adopts the following technical solution:

A method for comparing face similarity, characterized in that the method comprises:

1) Feature point setting: two face images are captured, and feature points are set on each image, where a feature point is a point on the face image with a landmark characteristic.

2) Feature block division: each of the two face images is divided into multiple feature blocks, each containing at least two feature points. The shape of each block may vary, but the blocks on the two faces correspond one-to-one, and corresponding blocks contain the same feature points.

3) Feature block comparison: each pair of corresponding feature blocks is compared for similarity. Both blocks of a pair are enlarged at the same expansion rate, so the magnification factor is identical. After enlargement, corresponding feature points are matched by connecting them with a line; two feature points whose connecting line is horizontal are matching feature points, and the number of matching feature points is recorded as m. If all corresponding feature points in a pair of blocks match, the pair is a similar block; the number of similar blocks is recorded as n, and the proportion of similar blocks among all feature blocks is computed. If this proportion exceeds 50%, matching feature points in the remaining (non-similar) blocks are counted further, and the ratio of matching feature points to all feature points within a block is taken as that block's weight. The blocks are sorted by weight in descending order; blocks in the top 50% that contain more than one feature point are divided further, such that each divided block contains only 50% of the feature points of the block before division. Feature point matching is then repeated: if all feature points in a pair of divided blocks match, the divided block is a second-level similar block, and the proportion of second-level similar blocks among all divided blocks is recorded; the numbers of matching feature points in the other divided blocks are then counted, and face similarity is measured by the following formula:

(The formula itself appears only as an image in the source and is not reproduced here.) Here m is the number of matching feature points in a feature block, n is the number of similar blocks, j is the number of matching feature points in a divided feature block, c is the number of second-level similar blocks, N is the number of divided feature blocks, the adjustment coefficients of the feature blocks and the divided feature blocks are arbitrary real numbers, and w is the value of the face similarity measure.

Preferably, the landmark points on the face used as feature points include points related to the facial features. Further, the feature points include the two points at the edges of each eyebrow, the midpoint of each eyebrow, the points where the eyeballs lie within the eyes, the point at the tip of the nose, the two points at the corners of the mouth, and the midpoint of the mouth.

In addition, when matching feature points, if a matched feature point is not sufficiently clear, it is locally enlarged; second-level feature points are then taken from the locally enlarged feature point and matched further, and the matched second-level feature points are taken as matching feature points.

The beneficial effects of the present invention are as follows: the novel face similarity comparison method provided by the present invention compares similar feature points of face images by grouping feature points into feature blocks and processing the blocks hierarchically, so as to compare the similarity between two face images. The method is highly practical, compares facial features in finer detail, and uses a more rigorous comparison and calculation procedure.

Detailed Description of the Embodiments

Face recognition is currently a research hotspot in computer vision and machine learning, with broad application prospects. Obtaining effective facial feature representations and designing powerful classifiers are the keys to this research, while uncontrollable factors in real environments make both harder. With the introduction and development of compressed sensing theory, face recognition based on sparse coding models has attracted widespread attention and great interest among researchers. A face recognition method based on Sparse Representation Based Classification (SRC) was first proposed; it performs well in robust face recognition and handles faces with brightness changes, noise, and occlusion. Its basic idea is as follows: if training samples with known class labels, drawn from different classes, are vectorized in the spatial or feature domain to form a representation dictionary, then a test image belonging to one of those classes, vectorized in the same way, can be sparsely coded over that dictionary, and the resulting non-zero coefficients concentrate on the representation coefficients of the training samples from the test image's own class. Consequently, the linear representation error of the test image over the training samples of its corresponding class is smallest, and the correct class of the test image can thereby be determined.
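The SRC decision rule just described can be sketched minimally as follows. This is an illustrative sketch, not the patent's method: a greedy orthogonal matching pursuit stands in for the l1-regularized solver, and the dictionary, labels, and dimensions are toy values.

```python
import numpy as np

def omp(D, y, n_nonzero=5):
    """Greedy orthogonal matching pursuit, standing in for the l1 solver."""
    residual = y.astype(float).copy()
    support, x = [], np.zeros(D.shape[1])
    for _ in range(n_nonzero):
        j = int(np.argmax(np.abs(D.T @ residual)))  # most correlated atom
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x[support] = coef
    return x

def src_classify(D, labels, y):
    """Assign y to the class whose training atoms reconstruct it best."""
    x = omp(D, y)
    errs = {}
    for c in set(labels):
        xc = np.where(np.array(labels) == c, x, 0.0)  # keep class-c coefficients
        errs[c] = np.linalg.norm(y - D @ xc)          # class-wise residual
    return min(errs, key=errs.get)

# toy dictionary: 3 atoms near e0 (class 0), 3 atoms near e4 (class 1)
e = np.eye(8)
cols = [e[0] + 0.1 * e[k] for k in (1, 2, 3)] + [e[4] + 0.1 * e[k] for k in (5, 6, 7)]
D = np.stack(cols, axis=1)
D /= np.linalg.norm(D, axis=0)
labels = [0, 0, 0, 1, 1, 1]
print(src_classify(D, labels, e[4] + 0.05 * e[5]))  # expected class: 1
```

The probe is a noisy copy of the class-1 prototype, so its sparse code concentrates on class-1 atoms and the class-1 residual is smallest.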

Many experts and scholars at home and abroad have carried out extensive research on face recognition within the SRC framework. To address the excessively high dimensionality caused by the occlusion dictionary, Yang et al. proposed an occlusion dictionary based on the Gabor transform to reduce the computational complexity of the system. Because SRC solves for coding coefficients under l1-norm regularization, which is computationally expensive, Zhang et al. proposed a coding method that replaces the regularized l1 norm with a regularized l2 norm, introducing the concept of Collaborative Representation Based Classification (CRC). SRC can be viewed as a generalization of nearest-neighbor and nearest-feature-subspace classification. When the feature dimension is sufficiently high, the choice of feature representation has little effect on the final recognition performance of SRC; when the selected feature dimension is low, however, the degrees of freedom of the sparse representation increase, and the classification performance of sparse-representation-based recognition drops considerably. Wang et al. proposed Locality Constrained Linear Coding (LLC), which exploits locality constraints between samples so that the regularized coding coefficients exhibit sparsity similar to that of sparse coding coefficients; LLC can also be used effectively for image classification.
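The l2 substitution attributed to Zhang et al. above makes the coding step a ridge regression with a closed-form solution. The sketch below shows that idea; the regularized-residual decision rule is the one commonly associated with CRC and is stated here as an assumption, and all data are illustrative.

```python
import numpy as np

def crc_classify(D, labels, y, lam=0.01):
    """CRC coding: l2 regularization gives a closed-form code, no iterative solver."""
    P = np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T)  # precomputable
    x = P @ y
    errs = {}
    for c in set(labels):
        xc = np.where(np.array(labels) == c, x, 0.0)  # keep only class-c coefficients
        # decision by regularized residual: reconstruction error over coefficient energy
        errs[c] = np.linalg.norm(y - D @ xc) / (np.linalg.norm(xc) + 1e-12)
    return min(errs, key=errs.get)

# toy dictionary: 3 atoms near e0 (class 0), 3 atoms near e4 (class 1)
e = np.eye(8)
cols = [e[0] + 0.1 * e[k] for k in (1, 2, 3)] + [e[4] + 0.1 * e[k] for k in (5, 6, 7)]
D = np.stack(cols, axis=1)
D /= np.linalg.norm(D, axis=0)
labels = [0, 0, 0, 1, 1, 1]
print(crc_classify(D, labels, e[4] + 0.05 * e[5]))  # expected class: 1
```

Unlike SRC, the projector P depends only on the dictionary, so it can be computed once and reused for every probe; this is the computational advantage the paragraph refers to.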

Chao et al. proposed an SRC face recognition method based on locality constraints and group sparsity constraints. Similarly, Lu et al. and Guo et al. proposed Weighted SRC (WSRC) methods based on locality (similarity) weighting, and Timofte et al. and Waqas et al. proposed weighted CRC variants for face recognition. Evidently, embedding the locality (or similarity) information between test and training samples in linear/sparse coding classification helps improve the discriminative power of the coding coefficients and thus enhances classification performance. In practical face recognition applications, however, face images acquired in uncontrolled scenes may exhibit expression changes and deliberate partial occlusion or disguise, so a similarity measure computed over the whole image can hardly reflect the true relationship between images, and weighted coding classification based on global image similarity therefore degrades. Finding an effective locality representation between images under uncontrolled conditions, especially when the acquired images contain expression changes, partial occlusion, and disguise, has become a problem worth exploring in weighted-coding face recognition under the locality-constraint framework. To address these problems in uncontrolled face images, face recognition based on maximum block similarity embedded in a sparse representation is discussed here: the training and test images are partitioned into non-overlapping blocks, the similarity between each pair of corresponding blocks is computed, and the maximum of these values is taken as the similarity between the two images; the extracted maximum block similarity is then embedded into the sparse coding classification, which effectively improves the stability of sparse coding under low-dimensional feature selection and the recognition performance of the system.
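The maximum-block-similarity idea above can be sketched as follows. The text does not fix the per-block similarity measure, so cosine similarity is an assumption, as are the image and block sizes.

```python
import numpy as np

def block_max_similarity(img_a, img_b, block=8):
    """Split two same-size images into non-overlapping blocks and return the
    maximum cosine similarity over corresponding block pairs."""
    h, w = img_a.shape
    sims = []
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            a = img_a[i:i + block, j:j + block].ravel().astype(float)
            b = img_b[i:i + block, j:j + block].ravel().astype(float)
            denom = np.linalg.norm(a) * np.linalg.norm(b)
            if denom > 0:                       # skip all-zero blocks
                sims.append(float(a @ b) / denom)
    return max(sims)

# an occluded quadrant barely affects the maximum: intact blocks still match
rng = np.random.default_rng(1)
face = rng.random((16, 16))
occluded = face.copy()
occluded[:8, :8] = 0.0
print(block_max_similarity(face, occluded))  # ~1.0: an unoccluded block matches itself
```

Taking the maximum rather than the mean is what makes the measure robust to partial occlusion: one unoccluded corresponding block is enough to register high similarity.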

In order to make the technical problems to be solved, the technical solutions, and the beneficial effects of the present invention clearer, the present invention is described in detail below with reference to embodiments. It should be noted that the specific embodiments described here are only intended to explain the present invention, not to limit it; products that achieve the same function are equivalent replacements and improvements and fall within the protection scope of the present invention. The specific method is as follows:

Two face images are captured with a camera acquisition device, and feature points are set on each image. A feature point is a landmark point on the face image, including the two points at the edges of each eyebrow, the midpoint of each eyebrow, the points where the eyeballs lie within the eyes, the point at the tip of the nose, the two points at the corners of the mouth, and the midpoint of the mouth. Each face image is divided into multiple feature blocks of indeterminate shape, each containing at least two feature points. The blocks on the two face images are divided in one-to-one correspondence: blocks occur in pairs, one block of each pair on each face image, and both blocks of a pair contain the same feature points. "The same feature points" means any combination of the eyebrow edge points, the eyebrow midpoints, the eyeball points, the nose tip point, the mouth corner points, and the mouth midpoint.

A pair of feature blocks is matched to compare their similarity, as follows:

Both feature blocks of a pair are expanded at the same expansion rate, where the expansion rate is the speed at which a block is enlarged (i.e., the magnification speed). After enlargement, the feature points contained in the enlarged blocks are matched, and corresponding feature points are connected; corresponding feature points are, for example, both eyebrow edge points or both eyebrow midpoints. Two feature points whose connecting line is horizontal are matching feature points. Let the number of matching feature points be m, a positive integer greater than 0. If a matching feature point is not sufficiently clear, it must be locally enlarged; second-level feature points, selected on the locally enlarged feature point for further matching, are then connected to their counterparts. If all the connecting lines are horizontal, the feature blocks containing the matching feature points are set as similar blocks.

The number of similar blocks is counted as n, a positive integer, and dividing n by the number of feature blocks gives the proportion of similar blocks. If this proportion exceeds 50%, matching feature points in the remaining (non-similar) feature blocks are counted further, and the ratio of the number of matching feature points to the number of feature points in a block is used as that block's weight. The blocks are sorted by weight in descending order, and blocks in the top 50% that contain more than one feature point are divided further; the feature points in each divided block account for only 50% of the feature points in the block before division. Feature point matching is then repeated to detect matching feature points. A divided block in which every feature point is a matching feature point is a second-level similar block, and the proportion of second-level similar blocks among the divided blocks is recorded. For the remaining divided blocks, the ratio of matching feature points to all feature points is counted, and all data from the whole process are collected in one table. Finally, face similarity is measured from these statistics using the following formula:

(The formula itself appears only as an image in the source and is not reproduced here.) Here m is the number of matching feature points in a pair of feature blocks, j is the number of matching feature points in a divided feature block, c is the number of second-level similar blocks, N is the number of divided feature blocks, the adjustment coefficients of the feature blocks and the divided feature blocks are arbitrary real numbers, k is a real number, and w is the value of the face similarity measure; the higher the value, the more similar the two face images.
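Since the patent's scoring formula survives only as an image, the sketch below implements the bookkeeping the embodiment describes (horizontal-line matching, similar-block proportion, second-level division of non-similar blocks) with a stand-in score. The combination of terms, the coefficients alpha and beta, the tolerance, and all landmark coordinates are assumptions, not the patent's values.

```python
from typing import List, Tuple

Point = Tuple[float, float]   # (x, y) landmark position
Block = List[Point]

def is_matching(p: Point, q: Point, tol: float = 1.0) -> bool:
    """Corresponding points 'joined by a horizontal line': equal height within tol."""
    return abs(p[1] - q[1]) <= tol

def split(block: Block) -> List[Block]:
    """Divide a block so each part holds 50% of the points it held before."""
    half = len(block) // 2
    return [block[:half], block[half:]]

def score_faces(blocks_a: List[Block], blocks_b: List[Block],
                alpha: float = 1.0, beta: float = 0.5) -> float:
    total_pts = sum(len(b) for b in blocks_a)
    m = sum(is_matching(p, q)
            for a, b in zip(blocks_a, blocks_b) for p, q in zip(a, b))
    similar = [all(is_matching(p, q) for p, q in zip(a, b))
               for a, b in zip(blocks_a, blocks_b)]
    n = sum(similar)
    # stand-in score: similar-block proportion plus matched-point proportion
    score = alpha * (n / len(blocks_a)) + beta * (m / total_pts)
    if n / len(blocks_a) > 0.5:  # second-level pass over non-similar blocks
        rest = [(a, b) for (a, b), s in zip(zip(blocks_a, blocks_b), similar)
                if not s and len(a) > 1]
        for a, b in rest:
            for sa, sb in zip(split(a), split(b)):
                j = sum(is_matching(p, q) for p, q in zip(sa, sb))
                score += beta * j / total_pts
    return score

# illustrative landmark blocks for two faces (an eyebrow pair, a mouth region)
face_a = [[(10, 20), (30, 20)], [(15, 40), (25, 40), (35, 41)]]
face_b = [[(12, 20), (33, 20)], [(16, 40), (26, 47), (36, 41)]]
print(round(score_faces(face_a, face_b), 3))  # 0.9
```

In the example, the eyebrow block matches fully (a similar block), while one mouth point fails the horizontal-line test; because similar blocks are exactly half of the blocks, the 50% threshold is not exceeded and no second-level pass runs.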

Although preferred embodiments of the present invention have been described, those skilled in the art can make further changes and modifications to these embodiments once they grasp the basic inventive concept. Therefore, the appended claims are intended to be construed as covering the preferred embodiments and all changes and modifications that fall within the scope of the present invention.

Obviously, those skilled in the art can make various changes and modifications to the embodiments of the present invention without departing from their spirit and scope. If such modifications and variations fall within the scope of the claims of the present invention and their technical equivalents, the present invention is also intended to include them.

The embodiments of the present invention have been described in detail above with reference to specific implementations, but the present invention is not limited to the above embodiments; various changes can be made within the scope of knowledge possessed by those of ordinary skill in the art without departing from the gist of the present invention.

Claims (4)

1. A method for comparing face similarity, characterized in that the method comprises: 1) feature point setting: two face images are captured, and feature points are set on each image, a feature point being a point on the face image with a landmark characteristic; 2) feature block division: each of the two face images is divided into multiple feature blocks, each containing at least two feature points, where the shape of each block may vary, but the blocks on the two faces correspond one-to-one and corresponding blocks contain the same feature points; 3) feature block comparison: each pair of corresponding feature blocks is compared for similarity, in which both blocks of a pair are enlarged at the same expansion rate so that the magnification factor is identical; after enlargement, corresponding feature points are matched by connecting them with a line, two feature points whose connecting line is horizontal being matching feature points, the number of which is recorded as m; if all corresponding feature points in a pair of blocks match, the pair is a similar block, the number of similar blocks is recorded as n, and the proportion of similar blocks among all feature blocks is computed; if this proportion exceeds 50%, matching feature points in the remaining feature blocks are counted further, the ratio of matching feature points to all feature points in a block is taken as that block's weight, and the blocks are sorted by weight in descending order; blocks in the top 50% that contain more than one feature point are divided further so that each divided block contains only 50% of the feature points of the block before division, and feature point matching is repeated; if all feature points in a pair of divided blocks match, the divided block is a second-level similar block, the proportion of second-level similar blocks among the divided blocks is recorded, the numbers of matching feature points in the other divided blocks are counted, and face similarity is measured by the following formula (reproduced only as an image in the source), where m is the number of matching feature points in a feature block, n is the number of similar blocks, j is the number of matching feature points in a divided feature block, c is the number of second-level similar blocks, N is the number of divided feature blocks, the adjustment coefficients of the feature blocks and the divided feature blocks are arbitrary real numbers, and w is the value of the face similarity measure.

2. The face similarity comparison method according to claim 1, characterized in that the landmark points on the face used as feature points include points related to the facial features.

3. The face similarity comparison method according to claim 2, characterized in that the feature points include the two points at the edges of each eyebrow, the midpoint of each eyebrow, the points where the eyeballs lie within the eyes, the point at the tip of the nose, the two points at the corners of the mouth, and the midpoint of the mouth.

4. The face similarity comparison method according to claim 1, characterized in that, when matching feature points, if a matched feature point is not sufficiently clear, it is locally enlarged; second-level feature points are then taken from the locally enlarged feature point and matched further, and the matched second-level feature points are taken as matching feature points.
CN201810311000.1A 2018-04-09 2018-04-09 Face similarity comparison method Active CN108710823B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810311000.1A CN108710823B (en) 2018-04-09 2018-04-09 Face similarity comparison method


Publications (2)

Publication Number Publication Date
CN108710823A true CN108710823A (en) 2018-10-26
CN108710823B CN108710823B (en) 2022-04-19

Family

ID=63866534

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810311000.1A Active CN108710823B (en) 2018-04-09 2018-04-09 Face similarity comparison method

Country Status (1)

Country Link
CN (1) CN108710823B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070183653A1 (en) * 2006-01-31 2007-08-09 Gerard Medioni 3D Face Reconstruction from 2D Images
CN101833672A (en) * 2010-04-02 2010-09-15 清华大学 Sparse Representation Face Recognition Method Based on Constrained Sampling and Shape Features
CN105740808A (en) * 2016-01-28 2016-07-06 北京旷视科技有限公司 Human face identification method and device
CN106980819A (en) * 2017-03-03 2017-07-25 竹间智能科技(上海)有限公司 Similarity judgement system based on human face five-sense-organ
CN107729855A (en) * 2017-10-25 2018-02-23 成都尽知致远科技有限公司 Mass data processing method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
JIAN LAI et al.: "Discriminative Sparsity Preserving Embedding for Face Recognition", IEEE *
PENGYUE ZHANG: "Sparse discriminative multi-manifold embedding for one-sample", IEEE *

Also Published As

Publication number Publication date
CN108710823B (en) 2022-04-19

Similar Documents

Publication Publication Date Title
Song et al. Region-based quality estimation network for large-scale person re-identification
CN102637251B (en) Face recognition method based on reference features
CN104866829B (en) A cross-age face verification method based on feature learning
Yang et al. Fine-grained evaluation on face detection in the wild
WO2019134327A1 (en) Facial expression recognition feature extraction method employing edge detection and sift
CN101807256B (en) Object identification detection method based on multiresolution frame
CN101894276B (en) Training method of human action recognition and recognition method
CN104036255B (en) A kind of facial expression recognizing method
CN111126240B (en) Three-channel feature fusion face recognition method
CN102938065B (en) Face feature extraction method and face identification method based on large-scale image data
CN102156885B (en) Image classification method based on cascaded codebook generation
CN103279768B (en) A kind of video face identification method based on incremental learning face piecemeal visual characteristic
CN105956552B (en) A kind of face blacklist monitoring method
CN104504362A (en) Face detection method based on convolutional neural network
CN109522853A (en) Face datection and searching method towards monitor video
CN102831447A (en) Method for identifying multi-class facial expressions at high precision
CN108960201A (en) A kind of expression recognition method extracted based on face key point and sparse expression is classified
CN104966052A (en) Attributive characteristic representation-based group behavior identification method
CN104156690B (en) A kind of gesture identification method based on image space pyramid feature bag
CN107239741A (en) A kind of single sample face recognition method based on sparse reconstruct
CN108960142A (en) Pedestrian based on global characteristics loss function recognition methods again
CN106649665A (en) Object-level depth feature aggregation method for image retrieval
CN105160290A (en) Mobile boundary sampling behavior identification method based on improved dense locus
CN109740672B (en) Multi-stream feature distance fusion system and fusion method
CN111461162B (en) Zero-sample target detection model and establishing method thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant