CN102867173B - Human face recognition method and system thereof - Google Patents


Info

Publication number: CN102867173B (application CN201210310643.7A)
Authority: CN (China)
Prior art keywords: face, class, average, user, analyzer
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number: CN201210310643.7A
Other languages: Chinese (zh)
Other versions: CN102867173A (en)
Inventors: 徐向民, 罗梦娜, 郭咏诗, 尹飞云, 张阳东, 吴丹丹
Current Assignee: South China University of Technology SCUT (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original Assignee: South China University of Technology SCUT
Priority date: 2012-08-28; Filing date: 2012-08-28 (the priority date is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed)
Application filed by South China University of Technology SCUT
Priority to CN201210310643.7A
Publication of CN102867173A: 2013-01-09
Application granted; publication of CN102867173B: 2015-01-28

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a face recognition method that has a self-learning function and spatio-temporal characteristics, and that recognizes faces through the steps of face detection, face tracking, data collection and analysis, recognition, online learning, and human-computer interaction. The invention also discloses a system implementing the above face recognition method, comprising a detector, a tracker, a collector, an analyzer, an online learning module, a recognizer, and a human-computer interaction module. Compared with the prior art, the invention has the advantages of strong anti-interference ability and high recognition efficiency.

Description

A face recognition method and system thereof

Technical Field

The invention relates to human-computer interaction technology, and in particular to a face recognition method and a system implementing it.

Background Art

Human-computer interaction is currently a hot topic in computer science research worldwide. Within human-computer interaction technology, face recognition, as the most convenient way for a computer to identify a user and provide personalized services, is gradually being applied in smart homes and similar settings. At the core of computer-vision-based face recognition, techniques such as computer vision and image processing are used to process the video sequences captured by an image acquisition device and to classify users, so that the system can respond accordingly.

Existing face recognition technology has separately achieved face detection, tracking of detected faces, and user identification by extracting facial features from a given image or video sequence and comparing them against a face database. Here we call a face recognition method that combines these three modules a traditional face recognition system. However, current traditional face recognition systems cannot fuse some of the better-performing techniques at the module level, which leads to the following problems:

(1) Poor performance. Because a traditional face recognition system cannot effectively fuse its modules, it is prone to losing track of faces, tracking drift, and failed, uncertain, or incorrect recognition, all of which seriously degrade the recognition results.

(2) Poor interactivity. Traditional face recognition systems cannot interact well with people. When recognition fails, is uncertain, or is wrong, the system cannot obtain timely feedback from the user, so its performance cannot improve. Worse, when recognition is inaccurate the system may believe it is correct and use the result to modify its own parameters (that is, self-learn), so the system degrades the more it is used.

(3) Poor adaptability. Traditional face recognition systems are easily affected by external conditions such as lighting, beards, glasses, hairstyles, and facial expressions, which lowers the recognition rate and limits their practicality.

Summary of the Invention

To overcome the above shortcomings and deficiencies of the prior art, an object of the present invention is to provide a face recognition method that has a self-learning function, spatio-temporal characteristics, and strong anti-interference ability.

Another object of the present invention is to provide a face recognition system implementing the above face recognition method.

The objects of the present invention are achieved through the following technical solutions:

A face recognition method comprises the following steps:

S1: The detector checks whether a face is present in the frame sequence; if so, go to step S2; if not, repeat step S1.

S2: The tracker tracks the detected face.

S3: The collector collects face images.

S4: The analyzer checks whether the collected face images are reliable samples; if so, go to step S5; if not, repeat steps S2 to S4.

S5: The analyzer extracts shape parameters and texture parameters from the target face in the reliable samples, models the shape and texture of the target face separately, and obtains the average face of the target through model fusion.

S6: The recognizer obtains the matching degree C between the average face and the closest face class in the face class library using the linear discriminant eigenface method.

If C < B, go to step S7. If C > A, the user's identity is recognized and recognition ends. If B < C < A, the human-computer interaction module asks the user to enter a name; if the face class corresponding to the entered name already exists in the face class library, go to step S8, otherwise go to step S7. The values of A and B are set empirically by the user.

S7: The online learning module creates a new face class in the face class library, adds the average face of the target to the new class, labels it with the user name, and passes the class to the recognizer; go to step S9.

S8: The online learning module updates the face class corresponding to the entered name and passes the updated class to the recognizer; go to step S9.

S9: The recognizer updates the face class library.
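The S1-S9 control flow above can be sketched as a small driver loop. This is an illustrative reconstruction, not the patent's implementation: `match_fn`, `ask_name`, and the list-of-samples library layout are hypothetical stand-ins, and only the three-way branch on the matching degree C (with thresholds A and B) follows the text.

```python
# Sketch of the S1-S9 branching logic. All components are hypothetical
# stubs for the detector/tracker/analyzer/recognizer pipeline; only the
# thresholds A and B on the matching degree C follow the description.

def recognize(avg_faces, face_library, match_fn, a=0.95, b=0.50, ask_name=None):
    """Run the recognition branch over pre-computed average faces.
    `match_fn(avg_face, library)` returns (best_class_name, C)."""
    results = []
    for avg_face in avg_faces:                    # stands in for S1-S5 output
        best_class, c = match_fn(avg_face, face_library)   # S6
        if c > a:                                 # confident match: identify
            results.append(("identified", best_class))
        elif c < b:                               # no match: new class (S7)
            name = ask_name() if ask_name else "unknown"
            face_library[name] = [avg_face]
            results.append(("new_class", name))
        else:                                     # uncertain: ask the user
            name = ask_name() if ask_name else "unknown"
            if name in face_library:              # S8: update existing class
                face_library[name].append(avg_face)
                results.append(("updated", name))
            else:                                 # S7: create and label class
                face_library[name] = [avg_face]
                results.append(("new_class", name))
    return results
```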

In step S5, the analyzer extracts shape and texture features from the target face in the reliable samples, models the shape and texture separately, and obtains the average face through model fusion. This specifically comprises the following steps:

S5.1: Annotate landmark points on the target face in the reliable samples.

S5.2: Model the face shape: first apply pairwise Procrustes transformations to the annotated face images to obtain the mean face shape, then reduce dimensionality with principal component analysis (PCA) to obtain the shape parameters and the shape model.

S5.3: Model the face texture: first apply Delaunay triangulation to the mean face shape, then fill in the texture with piecewise affine warping, and finally reduce dimensionality with PCA to obtain the mean texture model and the texture parameters.

S5.4: Combine the shape parameters and texture parameters with weights, reduce dimensionality with PCA to obtain the fusion parameters, and finally obtain the average face.
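A minimal sketch of steps S5.1-S5.2 (Procrustes alignment of landmark shapes followed by PCA) is given below. It is an illustrative implementation, not the patent's exact procedure: shapes are aligned to the first sample rather than to an iteratively re-estimated mean, and the 68-point annotation is just a special case of the generic `(n_points, 2)` arrays used here.

```python
import numpy as np

def procrustes_align(shape, reference):
    """Translate, scale, and rotate `shape` to best match `reference`
    (both are (n_points, 2) landmark arrays)."""
    s = shape - shape.mean(axis=0)
    r = reference - reference.mean(axis=0)
    s = s / np.linalg.norm(s)
    r = r / np.linalg.norm(r)
    u, _, vt = np.linalg.svd(r.T @ s)       # optimal rotation via SVD
    return s @ (u @ vt).T

def build_shape_model(shapes, n_components=2):
    """Align all shapes to the first one, then run PCA on the flattened
    aligned shapes; returns (mean_shape_vector, principal_components)."""
    ref = np.asarray(shapes[0], dtype=float)
    aligned = np.array([procrustes_align(np.asarray(s, float), ref)
                        for s in shapes])
    flat = aligned.reshape(len(shapes), -1)
    mean = flat.mean(axis=0)
    _, _, vt = np.linalg.svd(flat - mean, full_matrices=False)
    return mean, vt[:n_components]          # shape parameters = projections
```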

The linear discriminant eigenface method of step S6 comprises the following steps:

S6.1: For the average face from the analyzer and the face classes in the face class library, obtain measures of between-class and within-class differences using a between-class and within-class nearest-neighbor sample algorithm.

S6.2: From the between-class and within-class differences obtained for each face class, build the between-class scatter matrix and the within-class scatter matrix.

S6.3: From the scatter matrices of step S6.2, obtain the optimal discriminant vector set using the Fisher discriminant criterion.

S6.4: Project the average face from the analyzer onto the optimal discriminant vector set to obtain low-dimensional feature data.

S6.5: Apply the nearest-neighbor matching rule to obtain the matching degree C between the average face and the closest face class in the face class library.
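Steps S6.2-S6.5 can be sketched with a classical Fisher discriminant: build within-class and between-class scatter matrices, take the leading eigenvectors of inv(Sw) Sb as the discriminant vector set, project, and match by nearest projected class mean. The regularization, the matching by class means rather than nearest-neighbor samples, and the `1 / (1 + distance)` similarity score are simplifying assumptions, not the patent's exact formulas.

```python
import numpy as np

def fisher_vectors(class_samples, n_vectors=1, reg=1e-6):
    """Optimal discriminant vectors from the Fisher criterion (sketch)."""
    all_x = np.vstack([np.asarray(c, dtype=float) for c in class_samples])
    mean = all_x.mean(axis=0)
    d = all_x.shape[1]
    sw = np.zeros((d, d)); sb = np.zeros((d, d))
    for c in class_samples:
        c = np.asarray(c, dtype=float)
        mu = c.mean(axis=0)
        sw += (c - mu).T @ (c - mu)                 # within-class scatter
        diff = (mu - mean).reshape(-1, 1)
        sb += len(c) * (diff @ diff.T)              # between-class scatter
    m = np.linalg.inv(sw + reg * np.eye(d)) @ sb
    evals, evecs = np.linalg.eig(m)
    order = np.argsort(evals.real)[::-1]
    return evecs.real[:, order[:n_vectors]]

def match_degree(avg_face, class_samples, w):
    """Project onto the discriminant vectors and return
    (index of closest class, similarity score in (0, 1])."""
    z = np.asarray(avg_face, dtype=float) @ w
    mus = [np.asarray(c, dtype=float).mean(axis=0) @ w for c in class_samples]
    dists = [float(np.linalg.norm(z - m)) for m in mus]
    i = int(np.argmin(dists))
    return i, 1.0 / (1.0 + dists[i])                # assumed score form
```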

The landmark annotation of step S5.1 is specifically:

Annotate points on the contour, eyebrows, eyes, nose, and lips of the target face in the reliable sample, and write the point coordinates in vector form.

The update of step S8, in which the online learning module updates the face class corresponding to the entered name, is specifically:

The online learning module computes the difference between the average face from the analyzer and every face sample in the face class corresponding to the entered name; if the difference exceeds the intra-class distance, the face class is updated, otherwise it is not.
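The update rule above can be sketched as follows. The "difference" is taken as Euclidean distance and the "intra-class distance" as the mean pairwise distance between stored samples; both are assumptions, since the patent does not pin down these definitions.

```python
import numpy as np

def intra_class_distance(samples):
    """Mean pairwise distance between stored samples (assumed definition)."""
    s = [np.asarray(x, dtype=float) for x in samples]
    if len(s) < 2:
        return 0.0
    dists = [np.linalg.norm(a - b)
             for i, a in enumerate(s) for b in s[i + 1:]]
    return float(np.mean(dists))

def maybe_update_class(face_class, avg_face):
    """Append avg_face only if it differs from every stored sample by more
    than the intra-class distance; returns True if the class was updated."""
    t = intra_class_distance(face_class)
    x = np.asarray(avg_face, dtype=float)
    if all(np.linalg.norm(x - np.asarray(s, dtype=float)) > t
           for s in face_class):
        face_class.append(list(avg_face))
        return True
    return False
```

This way near-duplicate faces do not bloat the class, while genuinely new appearances (lighting, glasses, expression) are added.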

A face recognition system implementing the above method comprises a detector, a tracker, a collector, an analyzer, an online learning module, a recognizer, and a human-computer interaction module. The detector, tracker, collector, analyzer, recognizer, human-computer interaction module, and online learning module are connected in sequence, and the online learning module is also connected to the recognizer.

Compared with the prior art, the present invention has the following advantages and beneficial effects:

(1) The invention has a self-learning function and spatio-temporal characteristics. It recognizes faces by combining six modules: face detection, face tracking, data collection and analysis, recognition, online learning, and supervision, and it has strong anti-interference ability.

(2) The invention has good interactivity. When the system is uncertain, it queries the user and proceeds according to the user's feedback. If system performance nonetheless begins to degrade for any reason, the user can issue a reset command through the human-computer interaction module; the system then deletes the data added to the library during use and restores the initialized state, so that performance does not keep deteriorating.
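The reset behaviour described above implies that learned data can be separated from the initial data. One hedged way to sketch that bookkeeping (the `initial`/`learned` split is an assumption about how the library might be stored, not something the patent specifies):

```python
# Hypothetical face class library that keeps factory/initial data separate
# from samples learned during use, so a reset can drop only the latter.

class FaceClassLibrary:
    def __init__(self, initial_classes):
        self.initial = {k: list(v) for k, v in initial_classes.items()}
        self.learned = {}                  # classes/samples added in use

    def add_sample(self, name, avg_face):
        self.learned.setdefault(name, []).append(avg_face)

    def samples(self, name):
        return self.initial.get(name, []) + self.learned.get(name, [])

    def reset(self):
        """Restore the initialized state (drops everything learned)."""
        self.learned = {}
```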

Brief Description of the Drawings

Fig. 1 is a block diagram of the face recognition system of the present invention.

Fig. 2 is a flow chart of face recognition in the present invention.

Fig. 3 is an example of a destructive sample.

Fig. 4 is an example of a reliable sample.

Fig. 5 is an example of an annotated face.

Detailed Description

The present invention is described in further detail below with reference to the embodiments and the accompanying drawings, but the embodiments of the present invention are not limited thereto.

Embodiment

As shown in Fig. 1, the face recognition system of the present invention comprises a detector, a tracker, a collector, an analyzer, an online learning module, a recognizer, and a human-computer interaction module. The detector, tracker, collector, analyzer, recognizer, human-computer interaction module, and online learning module are connected in sequence, and the online learning module is also connected to the recognizer.

As shown in Fig. 2, the face recognition method of the present invention comprises the following steps:

S1: The detector checks whether a face is present in the frame sequence; if so, go to step S2; if not, repeat step S1.

The detector uses the face detection algorithm proposed by Viola and Jones, in which AdaBoost is used to train the classifier. Combined with multi-pose classification, faces can be detected quickly with a high detection rate.
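What makes the Viola-Jones detector fast is its attentional cascade: each boosted stage can reject a non-face window early, so most windows are dismissed after only a few cheap tests. The toy cascade below illustrates only that control structure; real stages evaluate Haar-like features on an integral image, which is omitted here, and the `score` functions are hypothetical.

```python
# Toy attentional cascade: cheapest stages first, early rejection of
# non-face windows. Real Viola-Jones stages use boosted Haar features.

def cascade_detect(window, stages):
    """`stages` is a list of (score_fn, threshold) pairs. A window is a
    face candidate only if it passes every stage."""
    for score, threshold in stages:
        if score(window) < threshold:
            return False              # early rejection: most windows stop here
    return True

def scan(windows, stages):
    """Return indices of the windows the full cascade accepts."""
    return [i for i, w in enumerate(windows) if cascade_detect(w, stages)]
```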

S2: The tracker tracks the detected face.

Tracking uses an improved Camshift algorithm combined with Kalman filtering. When a large background area with a color similar to the target causes interference, a region-of-interest (ROI) frame-difference step is activated: the frame difference is computed only over the Kalman-predicted region, the moving object's silhouette is extracted from its edge information, and the result is ANDed with the target probability distribution map to filter out the static interfering background. When the target is severely occluded, Camshift fails, so the Kalman prediction is used in place of the optimal position computed by Camshift, and the prediction is also fed back as the observation for the Kalman filter update; this effectively overcomes Kalman filter failure caused by severe occlusion. The algorithm tracks the face found by the detector in real time, ensuring that the image region always contains the same person's face.
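The Kalman side of this scheme can be sketched with a constant-velocity filter that, when the measurement is missing (severe occlusion, where Camshift fails), feeds its own prediction back in as the observation, as the text describes. The state model, matrices, and noise levels below are illustrative assumptions, and the Camshift measurement itself is outside this sketch.

```python
import numpy as np

class ConstantVelocityKalman:
    """Constant-velocity Kalman filter over state [px, py, vx, vy]."""

    def __init__(self, x0, q=1e-3, r=1e-1):
        self.x = np.array([x0[0], x0[1], 0.0, 0.0])
        self.P = np.eye(4)
        self.F = np.eye(4); self.F[0, 2] = self.F[1, 3] = 1.0  # dt = 1 frame
        self.H = np.zeros((2, 4)); self.H[0, 0] = self.H[1, 1] = 1.0
        self.Q = q * np.eye(4); self.R = r * np.eye(2)

    def step(self, measurement=None):
        # predict
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # if the target is occluded, use the prediction as the observation
        z = self.x[:2] if measurement is None else np.asarray(measurement)
        s = self.H @ self.P @ self.H.T + self.R
        k = self.P @ self.H.T @ np.linalg.inv(s)
        self.x = self.x + k @ (z - self.H @ self.x)
        self.P = (np.eye(4) - k @ self.H) @ self.P
        return self.x[:2]
```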

S3: The collector collects face images.

S4: The analyzer checks whether the collected face images are reliable samples; if so, go to step S5; if not, repeat steps S2 to S4.

Through reliability analysis, destructive samples in which the head is rotated by more than 90 degrees (such as the sample shown in Fig. 3) are filtered out, and reliable samples usable for face recognition (such as the sample shown in Fig. 4) are retained.
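The reliability check reduces to a simple filter once a head-rotation estimate is available per sample. How the rotation angle is estimated is outside this sketch; the sample records with a `rotation_deg` field are hypothetical.

```python
# Sketch of the step-S4 reliability filter: drop destructive samples whose
# estimated head rotation exceeds 90 degrees, keep the rest.

def reliable_samples(samples, max_rotation_deg=90.0):
    """Keep samples usable for recognition; drop over-rotated heads."""
    return [s for s in samples if abs(s["rotation_deg"]) <= max_rotation_deg]
```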

S5: The analyzer extracts shape parameters and texture parameters from the target face in the reliable samples, models the shape and texture separately, and obtains the average face through model fusion. This specifically comprises the following steps:

S5.1: Annotate 68 points on the contour, eyebrows, eyes, nose, and lips of the target face in the reliable samples (as shown in Fig. 5), and write the point coordinates in vector form:

x = {x1, y1, x2, y2, x3, y3, ..., x68, y68}.

S5.2: Model the face shape: first apply pairwise Procrustes transformations to the annotated face images to obtain the mean face shape, then reduce dimensionality with principal component analysis (PCA) to obtain the shape parameters and the shape model.

S5.3: Model the face texture: first apply Delaunay triangulation to the mean face shape, then fill in the texture with piecewise affine warping, and finally reduce dimensionality with PCA to obtain the mean texture model and the texture parameters.

S5.4: Combine the shape parameters and texture parameters with weights, reduce dimensionality with PCA to obtain the fusion parameters, and finally obtain the average face.
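The piecewise affine warp of step S5.3 maps every point inside a mesh triangle to the mean shape by the unique affine transform that carries the triangle's three vertices onto their mean-shape positions. The sketch below shows that per-triangle transform; the Delaunay triangulation itself, and iterating over pixels and triangles, are assumed to be handled elsewhere.

```python
import numpy as np

def affine_from_triangles(src_tri, dst_tri):
    """3x3 homogeneous affine matrix M sending the three src_tri vertices
    to the dst_tri vertices (M @ [x, y, 1] = [x', y', 1])."""
    s = np.hstack([np.asarray(src_tri, float), np.ones((3, 1))])  # 3x3
    d = np.hstack([np.asarray(dst_tri, float), np.ones((3, 1))])
    return np.linalg.solve(s, d).T

def warp_points(points, src_tri, dst_tri):
    """Apply the triangle-to-triangle affine map to (n, 2) points."""
    m = affine_from_triangles(src_tri, dst_tri)
    p = np.hstack([np.asarray(points, float), np.ones((len(points), 1))])
    return (p @ m.T)[:, :2]
```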

S6: The recognizer obtains the matching degree C between the average face and the closest face class in the face class library using the linear discriminant eigenface method, which comprises the following steps:

S6.1: For the average face from the analyzer and the face classes in the face class library, obtain measures of between-class and within-class differences using a between-class and within-class nearest-neighbor sample algorithm.

S6.2: From the between-class and within-class differences obtained for each face class, build the between-class scatter matrix and the within-class scatter matrix.

S6.3: From the scatter matrices of step S6.2, obtain the optimal discriminant vector set using the Fisher discriminant criterion.

S6.4: Project the average face from the analyzer onto the optimal discriminant vector set to obtain low-dimensional feature data.

S6.5: Apply the nearest-neighbor matching rule to obtain the matching degree C between the average face and the closest face class in the face class library.

If the matching degree C is below 50%, go to step S7. If C is above 95%, the user's identity is recognized and recognition ends. If C is between 50% and 95%, the human-computer interaction module asks the user to enter a name; if the face class corresponding to the entered name already exists in the face class library, go to step S8, otherwise go to step S7.

S7: The online learning module creates a new face class in the face class library, adds the average face of the target to the new class, labels it with the user name, and passes the class to the recognizer; go to step S9.

S8: The online learning module updates the face class corresponding to the entered name and passes the updated class to the recognizer; go to step S9.

The update is specifically: the online learning module computes the difference between the average face from the analyzer and every face sample in the face class corresponding to the entered name; if the difference exceeds the intra-class distance, the face class is updated, otherwise it is not.

S9: The recognizer updates the face class library.

The above embodiment is a preferred embodiment of the present invention, but the embodiments of the present invention are not limited thereto. Any other change, modification, substitution, combination, or simplification that does not depart from the spirit and principles of the present invention shall be an equivalent replacement and falls within the protection scope of the present invention.

Claims (5)

1. A face recognition method, characterized by comprising the following steps:

S1: A detector checks whether a face is present in the frame sequence; if so, go to step S2; if not, repeat step S1.

S2: A tracker tracks the detected face.

S3: A collector collects face images.

S4: An analyzer checks whether the collected face images are reliable samples; if so, go to step S5; if not, repeat steps S2 to S4.

S5: The analyzer extracts shape parameters and texture parameters from the target face in the reliable samples, models the shape and texture of the target face separately, and obtains the average face of the target through model fusion, specifically comprising the following steps:

S5.1: Annotate landmark points on the target face in the reliable samples.

S5.2: Model the face shape: first apply pairwise Procrustes transformations to the annotated face images to obtain the mean face shape, then reduce dimensionality with principal component analysis (PCA) to obtain the shape parameters and the shape model.

S5.3: Model the face texture: first apply Delaunay triangulation to the mean face shape, then fill in the texture with piecewise affine warping, and finally reduce dimensionality with PCA to obtain the mean texture model and the texture parameters.

S5.4: Combine the shape parameters and texture parameters with weights, reduce dimensionality with PCA to obtain the fusion parameters, and finally obtain the average face.

S6: A recognizer obtains the matching degree C between the average face and the closest face class in the face class library using the linear discriminant eigenface method.

If C < B, go to step S7. If C > A, the user's identity is recognized and recognition ends. If B < C < A, the human-computer interaction module asks the user to enter a name; if the face class corresponding to the entered name already exists in the face class library, go to step S8, otherwise go to step S7. The values of A and B are set empirically by the user.

S7: An online learning module creates a new face class in the face class library, adds the average face of the target to the new class, labels it with the user name, and passes the class to the recognizer; go to step S9.

S8: The online learning module updates the face class corresponding to the entered name and passes the updated class to the recognizer; go to step S9.

S9: The recognizer updates the face class library.

2. The face recognition method according to claim 1, characterized in that the linear discriminant eigenface method of step S6 comprises the following steps:

S6.1: For the average face from the analyzer and the face classes in the face class library, obtain measures of between-class and within-class differences using a between-class and within-class nearest-neighbor sample algorithm.

S6.2: From the between-class and within-class differences obtained for each face class, build the between-class scatter matrix and the within-class scatter matrix.

S6.3: From the scatter matrices of step S6.2, obtain the optimal discriminant vector set using the Fisher discriminant criterion.

S6.4: Project the average face from the analyzer onto the optimal discriminant vector set to obtain low-dimensional feature data.

S6.5: Apply the nearest-neighbor matching rule to obtain the matching degree C between the average face and the closest face class in the face class library.

3. The face recognition method according to claim 1, characterized in that the landmark annotation of step S5.1 is specifically: annotate points on the contour, eyebrows, eyes, nose, and lips of the target face in the reliable sample, and write the point coordinates in vector form.

4. The face recognition method according to claim 1, characterized in that the update of step S8, in which the online learning module updates the face class corresponding to the entered name, is specifically: the online learning module computes the difference between the average face from the analyzer and every face sample in the face class corresponding to the entered name; if the difference exceeds the intra-class distance, the face class is updated, otherwise it is not.

5. A face recognition system implementing the face recognition method of any one of claims 1 to 4, characterized by comprising a detector, a tracker, a collector, an analyzer, an online learning module, a recognizer, and a human-computer interaction module, wherein the detector, tracker, collector, analyzer, recognizer, human-computer interaction module, and online learning module are connected in sequence, and the online learning module is also connected to the recognizer.
CN201210310643.7A 2012-08-28 2012-08-28 Human face recognition method and system thereof Active CN102867173B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210310643.7A CN102867173B (en) 2012-08-28 2012-08-28 Human face recognition method and system thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201210310643.7A CN102867173B (en) 2012-08-28 2012-08-28 Human face recognition method and system thereof

Publications (2)

Publication Number Publication Date
CN102867173A CN102867173A (en) 2013-01-09
CN102867173B (en, granted) 2015-01-28

Family

ID=47446037

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210310643.7A Active CN102867173B (en) 2012-08-28 2012-08-28 Human face recognition method and system thereof

Country Status (1)

Country Link
CN (1) CN102867173B (en)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103745235B (en) * 2013-12-18 2017-07-04 小米科技有限责任公司 Face identification method, device and terminal device
CN104765739B (en) * 2014-01-06 2018-11-02 南京宜开数据分析技术有限公司 Extensive face database search method based on shape space
CN104091164A (en) * 2014-07-28 2014-10-08 北京奇虎科技有限公司 Face picture name recognition method and system
CN104794468A (en) * 2015-05-20 2015-07-22 成都通甲优博科技有限责任公司 Human face detection and tracking method based on unmanned aerial vehicle mobile platform
CN106326815B (en) * 2015-06-30 2019-09-13 芋头科技(杭州)有限公司 A kind of facial image recognition method
CN105354543A (en) * 2015-10-29 2016-02-24 小米科技有限责任公司 Video processing method and apparatus
CN106778470A (en) * 2016-11-15 2017-05-31 东软集团股份有限公司 A kind of face identification method and device
CN106778653A (en) * 2016-12-27 2017-05-31 北京光年无限科技有限公司 Towards the exchange method and device based on recognition of face Sample Storehouse of intelligent robot
CN106950844A (en) * 2017-04-01 2017-07-14 东莞市四吉电子设备有限公司 A smart home monitoring method and device
CN107665341A (en) * 2017-09-30 2018-02-06 珠海市魅族科技有限公司 One kind identification control method, electronic equipment and computer product
CN109358649A (en) * 2018-12-14 2019-02-19 电子科技大学 UAV ground station control management system for aerial photography
CN109903412A (en) * 2019-02-01 2019-06-18 北京清帆科技有限公司 A kind of intelligent check class attendance system based on face recognition technology
CN110271557B (en) * 2019-06-12 2021-05-14 浙江亚太机电股份有限公司 Vehicle user feature recognition system
CN110334626B (en) * 2019-06-26 2022-03-04 北京科技大学 Online learning system based on emotional state

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2006097902A3 (en) * 2005-03-18 2007-03-29 Philips Intellectual Property Method of performing face recognition
CN101377814A (en) * 2007-08-27 2009-03-04 索尼株式会社 Face image processing apparatus, face image processing method, and computer program
CN101587485A (en) * 2009-06-15 2009-11-25 无锡骏聿科技有限公司 Face information automatic login method based on face recognition technology
CN102004899A (en) * 2010-11-03 2011-04-06 无锡中星微电子有限公司 Human face identifying system and method
CN102402691A (en) * 2010-09-08 2012-04-04 中国科学院自动化研究所 Method for tracking human face posture and motion

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2006097902A3 (en) * 2005-03-18 2007-03-29 Philips Intellectual Property Method of performing face recognition
CN101377814A (en) * 2007-08-27 2009-03-04 索尼株式会社 Face image processing apparatus, face image processing method, and computer program
CN101587485A (en) * 2009-06-15 2009-11-25 无锡骏聿科技有限公司 Face information automatic login method based on face recognition technology
CN102402691A (en) * 2010-09-08 2012-04-04 中国科学院自动化研究所 Method for tracking human face posture and motion
CN102004899A (en) * 2010-11-03 2011-04-06 无锡中星微电子有限公司 Human face identifying system and method

Also Published As

Publication number Publication date
CN102867173A (en) 2013-01-09

Similar Documents

Publication Publication Date Title
CN102867173B (en) Human face recognition method and system thereof
Jalal et al. Multi-features descriptors for human activity tracking and recognition in Indoor-outdoor environments
Shehzed et al. Multi-person tracking in smart surveillance system for crowd counting and normal/abnormal events detection
CN109472198B (en) Gesture robust video smiling face recognition method
Urtasun et al. 3D tracking for gait characterization and recognition
CN110991315A (en) Method for detecting wearing state of safety helmet in real time based on deep learning
CN105809144A (en) Gesture recognition system and method adopting action segmentation
CN105739702A (en) Multi-posture fingertip tracking method for natural man-machine interaction
CN101697199A (en) Detection method of head-face gesture and disabled assisting system using same to manipulate computer
CN108171133A (en) A kind of dynamic gesture identification method of feature based covariance matrix
CN104298964B (en) A kind of human body behavior act method for quickly identifying and device
Pandey et al. Hand gesture recognition for sign language recognition: A review
CN102592115B (en) Hand positioning method and system
CN110232308A (en) Robot gesture track recognizing method is followed based on what hand speed and track were distributed
CN103198330B (en) Real-time human face attitude estimation method based on deep video stream
CN102831408A (en) Human face recognition method
CN107886558A (en) A kind of human face expression cartoon driving method based on RealSense
CN104346602A (en) Face recognition method and device based on feature vectors
CN105654505B (en) A kind of collaboration track algorithm and system based on super-pixel
CN103426008A (en) Vision human hand tracking method and system based on on-line machine learning
Wang et al. A new hand gesture recognition algorithm based on joint color-depth superpixel earth mover's distance
CN105261038B (en) Finger tip tracking based on two-way light stream and perception Hash
Zhu et al. Human action recognition with skeletal information from depth camera
Tang et al. Hand tracking and pose recognition via depth and color information
CN103456012B (en) Based on visual human hand detecting and tracking method and the system of maximum stable area of curvature

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant