WO2019153175A1 - Machine learning-based occluded face recognition system, method and storage medium (基于机器学习的遮挡人脸识别系统、方法及存储介质) - Google Patents


Info

Publication number
WO2019153175A1
Authority
WO
WIPO (PCT)
Prior art keywords
face
subset
feature
machine learning
recognized
Prior art date
Application number
PCT/CN2018/075772
Other languages
English (en)
French (fr)
Inventor
陈少伟
梁洁
Original Assignee
国民技术股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 国民技术股份有限公司
Priority to PCT/CN2018/075772
Publication of WO2019153175A1


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition

Definitions

  • The invention belongs to the field of computer technology and relates to a machine learning-based occluded face recognition system, method, and storage medium.
  • Face recognition is an important part of biometrics. It has advantages in universality, uniqueness, and ease of collection, carries high academic research value and market application prospects, and has gradually become one of the most representative and challenging research topics in the field of pattern recognition. After years of research, face recognition technology has achieved fruitful results, but these results have mainly been obtained in strictly controlled laboratory environments. With the spread of digital cameras, smartphones, and intelligent monitoring systems, face recognition in real-life scenes and unconstrained environments has gradually become a research hotspot.
  • In existing deep learning-based face recognition, a multidimensional vector is computed for each person. To decide whether two images show the same person, the distance between the vectors of the two images is calculated; if the distance exceeds a given threshold, the two images are judged to show the same person. When the face is heavily occluded, the algorithm will fail to match even images of the same person.
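The whole-face decision rule described above can be sketched as follows. This is a minimal illustration, not the patented implementation: the 0.5 threshold is an arbitrary assumption, and, consistent with the cosine distance the document uses later, a larger score is treated as more similar.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def same_person(v1, v2, threshold=0.5):
    """Declare a match when the similarity exceeds the (assumed) threshold."""
    return cosine_similarity(v1, v2) > threshold
```

With a heavily occluded face, the whole-face vector `v1` drifts far from its unoccluded counterpart, so this single-vector rule fails, which is exactly the problem the patent sets out to solve.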
  • An object of the present invention is to provide a machine learning-based occluded face recognition system, method, and storage medium, to solve the above problem of inaccurate recognition, especially when the face is heavily occluded. If only the two eyes are exposed, the whole-face feature vector loses most of its information and no longer describes a complete face; comparison against the whole face will then certainly fail.
  • In the solution provided by the invention, the feature vector obtained from the two eyes is compared with the two eyes of the target face, greatly improving recognition accuracy.
  • The invention provides a machine learning-based occluded face recognition system comprising an image acquisition unit, a machine learning unit, a database, and a parameter comparison unit, wherein:
  • the image acquisition unit acquires a face image to be recognized and sends it to the machine learning unit;
  • the machine learning unit trains on the face image to be recognized using machine learning, obtains a feature set, feature subsets, and weight parameters of the image, and sends these parameters to the parameter comparison unit;
  • the database stores training data sets and training data subsets for a large number of face images;
  • the parameter comparison unit compares the feature set, feature subsets, and weight parameters of the face to be recognized with the training data sets and training data subsets in the database, and outputs the face recognition result.
  • The face image to be recognized may be either unoccluded or occluded.
  • The machine learning technique adopted by the machine learning unit is an object recognition technique based on a convolutional neural network (CNN).
  • The training data subsets and feature subsets are divided according to the facial features, and include one or more (or a combination) of a left eyebrow subset, a left eye subset, a right eyebrow subset, a right eye subset, a nose subset, and a mouth subset.
  • The present invention also provides a method for occluded face recognition using the machine learning-based occluded face recognition system, comprising the following steps:
  • Step S1: the data acquisition unit acquires a face image to be recognized and sends it to the machine learning unit;
  • Step S2: the machine learning unit trains on the face image to be recognized using machine learning, obtains a feature set, feature subsets, and weight parameters of the image, and sends these parameters to the parameter comparison unit;
  • Step S3: the parameter comparison unit compares the feature set and subsets of the face to be recognized with the training data sets and training data subsets in the database;
  • Step S4: the feature set and subsets whose comparison results match are weighted and summed to obtain the final comparison result and identify the face.
  • In step S2, the machine learning unit trains on the face image to be recognized as follows:
  • First, face recognition technology is used to annotate the feature set of the face and frame the parts as feature subsets of the face;
  • Second, the object recognition algorithm is retrained to obtain the coordinates of the face feature set and of the subsets, together with the deep learning parameters corresponding to those coordinates, i.e. the new weight parameters;
  • Third, the feature set of the face and the bounding boxes (coordinates) of each subset are input into the machine learning-based object recognition algorithm for training, yielding new parameters; running inference with these parameters on a test face photo produces the feature set of the face in the image and the coordinates of each subset;
  • Fourth, the new coordinates, covering the whole face and the subsets, are used for retraining with a loss function consistent with that of the original whole face, yielding the multidimensional feature vector of the whole-face feature set and the multidimensional feature vector of each subset.
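As a sketch of the annotation produced by the first and third steps, each training image can be represented as a whole-face box plus one box per detected subset. The part names and the (x_min, y_min, x_max, y_max) box format are illustrative assumptions, not a format specified by the patent.

```python
from dataclasses import dataclass, field
from typing import Dict, Tuple

# Pixel coordinates of a bounding box: (x_min, y_min, x_max, y_max).
Box = Tuple[int, int, int, int]

@dataclass
class FaceAnnotation:
    """One annotated training image: the face frame plus per-part subset frames."""
    face_box: Box
    subset_boxes: Dict[str, Box] = field(default_factory=dict)

    def present_subsets(self):
        """Names of the subsets that were actually annotated or detected."""
        return sorted(self.subset_boxes)

# Example: a face with only the two eyes visible (other parts occluded).
ann = FaceAnnotation(
    face_box=(40, 30, 200, 240),
    subset_boxes={"left_eye": (70, 90, 110, 110), "right_eye": (130, 90, 170, 110)},
)
```

Records of this shape are what a box-based object detector would be trained on; the absent subsets simply have no entry.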
  • The objects of comparison are multidimensional vector sets, each composed of the face's multidimensional feature vector, the presence flags of the feature subsets, and the subsets' multidimensional feature vectors.
  • In step S3, the comparison proceeds as follows:
  • the parameter comparison unit first determines whether all of the feature subsets are present;
  • if so, the parameter comparison unit searches the training data sets stored in the database;
  • if not, the parameter comparison unit searches the training data subsets stored in the database to obtain the result of comparing against the feature set and feature subsets.
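The branching in step S3 can be sketched as follows: a hypothetical nearest-neighbour search stands in for the database lookup, assuming cosine similarity and the presence flags described later in this document. The record layout is an assumption for illustration.

```python
import math

def cos_sim(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def search(query, gallery):
    """Step S3 dispatch: search the full-face set when all subsets are present,
    otherwise search only the subset vectors of the parts that were detected.

    `query` = {"flags": [i1, ...], "face": V0, "subsets": [V1, ...]};
    `gallery` maps a person id to a record of the same (illustrative) shape.
    """
    if all(query["flags"]):
        # Complete face: compare the whole-face vectors directly.
        return max(gallery, key=lambda pid: cos_sim(query["face"], gallery[pid]["face"]))
    # Occluded face: average similarity over the detected subsets only.
    present = [k for k, flag in enumerate(query["flags"]) if flag]
    def score(pid):
        rec = gallery[pid]
        return sum(cos_sim(query["subsets"][k], rec["subsets"][k]) for k in present) / len(present)
    return max(gallery, key=score)
```

Vectors of absent subsets are never touched, so their all-zero placeholders cannot distort the result.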
  • The present invention also provides a computer readable storage medium storing a processor program which, when executed by a computer, implements the following steps:
  • Step S1: the data acquisition unit acquires a face image to be recognized and sends it to the machine learning unit;
  • Step S2: the machine learning unit trains on the face image to be recognized using machine learning, obtains a feature set, feature subsets, and weight parameters of the image, and sends these parameters to the parameter comparison unit;
  • Step S3: the parameter comparison unit compares the feature set and subsets of the face to be recognized with the training data sets and training data subsets in the database;
  • Step S4: the feature set and subsets whose comparison results match are weighted and summed to obtain the final comparison result and identify the face.
  • In step S2, the machine learning unit trains on the face image to be recognized as follows:
  • First, face recognition technology is used to annotate the feature set of the face and frame the parts as feature subsets of the face;
  • Second, the object recognition algorithm is retrained to obtain the coordinates of the face feature set and of the subsets, together with the deep learning parameters corresponding to those coordinates, i.e. the new weight parameters;
  • Third, the feature set of the face and the bounding boxes (coordinates) of each subset are input into the machine learning-based object recognition algorithm for training, yielding new parameters; running inference with these parameters on a test face photo produces the feature set of the face in the image and the coordinates of each subset;
  • Fourth, the new coordinates, covering the whole face and the subsets, are used for retraining with a loss function consistent with that of the original whole face, yielding the multidimensional feature vector of the whole-face feature set and the multidimensional feature vector of each subset.
  • The objects of comparison are multidimensional vector sets, each composed of the face's multidimensional feature vector, the presence flags of the feature subsets, and the subsets' multidimensional feature vectors.
  • In step S3, the comparison proceeds as follows:
  • the parameter comparison unit first determines whether all of the feature subsets are present;
  • if so, the parameter comparison unit searches the training data sets stored in the database;
  • if not, the parameter comparison unit searches the training data subsets stored in the database to obtain the result of comparing against the feature set and feature subsets.
  • The invention achieves very significant benefits: it adopts machine learning technology, broadening the application domain of artificial intelligence; and it recognizes occluded faces more accurately, improving recognition efficiency and accuracy.
  • Figure 1 is a schematic diagram of the system composition of the present invention.
  • FIG. 2 is a schematic diagram of the subset annotation of an embodiment of the present invention.
  • FIG. 3 is a schematic flowchart of an identification method according to an embodiment of the present invention.
  • The present invention provides an occluded face recognition system based on machine learning.
  • As shown in FIG. 1, the system includes an image acquisition unit 1, a machine learning unit 2, a database 3, and a parameter comparison unit 4, wherein: the image acquisition unit 1 acquires a face image to be recognized and sends it to the machine learning unit 2; the machine learning unit 2 trains on the face image using machine learning, obtains the feature set, feature subsets, and weight parameters of the image, and sends these parameters to the parameter comparison unit; the database 3 stores training data sets and training data subsets for a large number of face images; and the parameter comparison unit 4 compares the feature set, feature subsets, and weight parameters of the face to be recognized with the training data sets and training data subsets in the database 3 and outputs the face recognition result.
  • The face image to be recognized may be either unoccluded or occluded.
  • The machine learning technique adopted by the machine learning unit 2 is an object recognition technique based on a convolutional neural network (CNN).
  • The training data subsets and feature subsets are divided according to the facial features, and include one or more (or a combination) of a left eyebrow subset, a left eye subset, a right eyebrow subset, a right eye subset, a nose subset, and a mouth subset.
  • Other division methods may also be used; the subsets may be chosen according to the actual situation and salient features of the face, and are not limited to six parts.
  • The present invention further provides a method for occluded face recognition using the machine learning-based occluded face recognition system.
  • The recognition method is shown in FIG. 3.
  • FIG. 3 is a schematic flowchart of the recognition method according to an embodiment of the present invention. It includes the following steps:
  • Step S1: the data acquisition unit 1 acquires a face image to be recognized and sends it to the machine learning unit 2;
  • Step S2: the machine learning unit 2 trains on the face image to be recognized using machine learning, obtains the feature set, feature subsets, and weight parameters of the image, and sends these parameters to the parameter comparison unit 4.
  • Specifically: first, face recognition technology is used to annotate the feature set of the face and frame the parts as feature subsets; second, the object recognition algorithm is retrained to obtain the coordinates of the face feature set and of the subsets, together with the corresponding deep learning parameters, i.e. the new weight parameters; third, the feature set of the face and the bounding boxes (coordinates) of each subset are input into the machine learning-based object recognition algorithm for training, yielding new parameters, and running inference with these parameters on a test face photo produces the feature set of the face in the image and the coordinates of each subset;
  • fourth, the new coordinates, covering the whole face and the subsets, are used for retraining with a loss function consistent with that of the original whole face.
  • The multidimensional feature vector of the whole-face feature set and the multidimensional feature vector of each subset are thereby obtained.
  • Step S3: the parameter comparison unit 4 compares the feature set and subsets of the face to be recognized with the training data sets and training data subsets in the database 3. The objects of comparison are multidimensional vector sets, each composed of the face's multidimensional feature vector, the presence flags of the feature subsets, and the subsets' multidimensional feature vectors.
  • The parameter comparison unit 4 first determines whether all of the feature subsets are present; if so, it searches the training data sets stored in the database 3; if not, it searches the training data subsets stored in the database 3 to obtain the result of comparing against the feature set and feature subsets.
  • Step S4: the feature set and subsets whose comparison results match are weighted and summed to obtain the final comparison result and identify the face.
  • The present invention also provides a computer readable storage medium storing a processor program which, when executed by a computer, implements the following steps:
  • Step S1: the data acquisition unit 1 acquires a face image to be recognized and sends it to the machine learning unit 2;
  • Step S2: the machine learning unit 2 trains on the face image to be recognized using machine learning, obtains the feature set, feature subsets, and weight parameters of the image, and sends these parameters to the parameter comparison unit 4.
  • Specifically: first, face recognition technology is used to annotate the feature set of the face and frame the parts as feature subsets; second, the object recognition algorithm is retrained to obtain the coordinates of the face feature set and of the subsets, together with the corresponding deep learning parameters, i.e. the new weight parameters; third, the feature set of the face and the bounding boxes (coordinates) of each subset are input into the machine learning-based object recognition algorithm for training, yielding new parameters, and running inference with these parameters on a test face photo produces the feature set of the face in the image and the coordinates of each subset; fourth, the new coordinates, covering the whole face and the subsets, are used for retraining with a loss function consistent with that of the original whole face.
  • The multidimensional feature vector of the whole-face feature set and the multidimensional feature vector of each subset are thereby obtained.
  • Step S3: the parameter comparison unit 4 compares the feature set and subsets of the face to be recognized with the training data sets and training data subsets in the database 3. The objects of comparison are multidimensional vector sets, each composed of the face's multidimensional feature vector, the presence flags of the feature subsets, and the subsets' multidimensional feature vectors.
  • The parameter comparison unit 4 first determines whether all of the feature subsets are present; if so, it searches the training data sets stored in the database 3; if not, it searches the training data subsets stored in the database 3 to obtain the result of comparing against the feature set and feature subsets.
  • Step S4: the feature set and subsets whose comparison results match are weighted and summed to obtain the final comparison result and identify the face.
  • The image acquisition unit 1 acquires a picture of the face to be recognized and sends it to the machine learning unit 2.
  • The machine learning unit 2 trains on the picture of the face to be recognized, obtains the feature set, feature subsets, and weight parameters of the image, and sends these parameters to the parameter comparison unit.
  • First, as shown in FIG. 2, the picture is annotated with subsets; FIG. 2 is a schematic diagram of the subset annotation in this embodiment.
  • Using face recognition technology, the left eyebrow, left eye, right eyebrow, right eye, nose, and mouth of the face are annotated and framed as subsets of the face frame.
  • The face frame and the six subset frames (left eyebrow, left eye, right eyebrow, right eye, nose, and mouth), i.e. their coordinates, are input into a machine learning-based object detection algorithm such as YOLO or SSD for training, yielding new parameters.
  • Running inference with these parameters on a given face photo produces the coordinates of the face and of the six subsets in the image, together with the deep learning parameters corresponding to those coordinates, i.e. the new weight parameters of the convolutional neural network (CNN).
  • Each subset is retrained, and the training loss function can be kept consistent with that of the original whole face.
  • The multidimensional feature vector of the whole face and the multidimensional feature vector of each subset are obtained.
  • If only a single multidimensional vector V0 is extracted, recognition computes the distance between this vector and the target face's multidimensional vector V, generally using the cosine distance. If the face is partially occluded, V0 is likely to differ greatly from its unoccluded counterpart, so the face cannot be recognized correctly.
  • In addition to the whole-face multidimensional vector V0, the present invention also obtains feature vectors for the local face subsets, yielding multiple results.
  • The presence of each subset is also flagged, as in (V0, i1, i2, i3, i4, i5, i6, V1, V2, V3, V4, V5, V6), where i1 indicates whether the left eyebrow was detected (0 for not detected, 1 for detected); likewise i2 stands for the left eye, i3 for the right eyebrow, i4 for the right eye, i5 for the nose, and i6 for the mouth. V1, V2, V3, V4, V5, and V6 are their corresponding feature vectors.
  • When every subset is present, i.e. i1 through i6 are all 1, the face is complete. If a subset is absent, its corresponding feature vector Vi is all zeros: for example, if the left eyebrow is not detected, i1 is 0, the corresponding V1 is a zero vector, and that subset no longer participates in the comparison.
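The (V0, i1…i6, V1…V6) record and the zeroing of absent subsets can be sketched as follows; the feature dimension of 4 and the part names are illustrative assumptions.

```python
PARTS = ["left_eyebrow", "left_eye", "right_eyebrow", "right_eye", "nose", "mouth"]
DIM = 4  # illustrative feature dimension

def build_vector_set(v0, detected):
    """Assemble (V0, i1..i6, V1..V6): flag 1 and the extracted vector for a
    detected part, flag 0 and an all-zero vector for an absent one."""
    flags = [1 if p in detected else 0 for p in PARTS]
    vectors = [detected.get(p, [0.0] * DIM) for p in PARTS]
    return {"V0": v0, "flags": flags, "vectors": vectors}

# Only the two eyes detected, e.g. a face wearing a mask and a hat.
record = build_vector_set(
    v0=[0.2, 0.1, 0.4, 0.3],
    detected={"left_eye": [0.9, 0.1, 0.0, 0.0], "right_eye": [0.8, 0.2, 0.0, 0.0]},
)
```

A zero flag tells the comparison stage to skip that slot, so the zero vectors are never compared against the database.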
  • The parameter comparison unit 4 compares the feature set and subsets of the face to be recognized with the training data sets and training data subsets in the database: the multidimensional vector set is compared against the picture parameters in the database 3 to decide whether the images show the same person.
  • In traditional algorithms, each person yields a single multidimensional vector V0, which is then compared with the vectors V in the database.
  • The present invention first checks whether every subset is present. If all are present, the whole face is compared directly; if the face is incomplete and only parts of it are visible, the whole-face vector is not compared, and the vectors of the corresponding subsets are compared instead.
  • Finally, the parameter comparison unit 4 performs a weighted summation over the per-subset comparison results to obtain the final comparison result.
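The final weighted summation can be sketched as follows. The weight values are hypothetical (the patent does not specify them), and only subsets flagged as present contribute to the score.

```python
def weighted_score(scores, flags, weights):
    """Weighted sum of per-subset similarity scores, renormalised over the
    subsets that are actually present (flags of absent parts are 0)."""
    total = sum(w * s * f for w, s, f in zip(weights, scores, flags))
    norm = sum(w * f for w, f in zip(weights, flags))
    return total / norm if norm else 0.0

# Hypothetical weights favouring the eyes over eyebrows, nose, and mouth,
# in the order: left eyebrow, left eye, right eyebrow, right eye, nose, mouth.
weights = [0.1, 0.3, 0.1, 0.3, 0.1, 0.1]
flags = [0, 1, 0, 1, 0, 0]                   # only the two eyes were detected
scores = [0.0, 0.92, 0.0, 0.88, 0.0, 0.0]    # per-subset similarities
final = weighted_score(scores, flags, weights)
```

Renormalising by the weight mass of the present subsets keeps an occluded face's score on the same scale as a complete face's, so one decision threshold can serve both cases.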
  • The advantage of this comparison is that when the face is heavily occluded, for example with only the two eyes exposed, the whole-face feature vector loses most of its information and no longer describes a complete face, so comparison against the whole face would certainly fail.
  • In the solution provided by the invention, the feature vector obtained from the two eyes is compared with the two eyes of the target face, greatly improving recognition accuracy.
  • The machine learning-based occluded face recognition system and method of the present invention can also be applied by analogy in other environments and settings: recognition of other animals, plants, or other occluded categories can likewise adopt this system and method, comparing global and/or local feature values to improve recognition accuracy.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

A machine learning-based occluded face recognition system, method, and storage medium. The system includes an image acquisition unit (1), a machine learning unit (2), a database (3), and a parameter comparison unit (4), wherein: the image acquisition unit (1) acquires a face image to be recognized and sends it to the machine learning unit (2); the machine learning unit (2) trains on the face image using machine learning, obtains the feature set, feature subsets, and weight parameters of the image, and sends them to the parameter comparison unit (4); the database (3) stores training data sets and training data subsets for a large number of face images; and the parameter comparison unit (4) recognizes the face by comparing the feature set of the face to be recognized with the training data sets and training data subsets in the database. The system, method, and storage medium adopt machine learning technology, broadening the application domain of artificial intelligence, and recognize occluded faces more accurately, improving recognition efficiency and accuracy.

Description

Machine learning-based occluded face recognition system, method and storage medium

Technical Field
The invention belongs to the field of computer technology and relates to a machine learning-based occluded face recognition system, method, and storage medium.
Background
Face recognition is an important part of biometrics. It has advantages in universality, uniqueness, and ease of collection, carries high academic research value and market application prospects, and has gradually become one of the most representative and challenging research topics in the field of pattern recognition. After years of research, face recognition technology has achieved fruitful results, but these results have mainly been obtained in strictly controlled laboratory environments. With the spread of digital cameras, smartphones, and intelligent monitoring systems, face recognition in real-life scenes and unconstrained environments has gradually become a research hotspot.
In the published literature, most research on face recognition focuses on illumination, expression, and pose variation, with relatively little work on occlusion. Yet partial occlusion of the face is common in real-life scenes and is a key problem in face recognition research under unconstrained environments, so handling occlusion in face recognition has drawn increasing attention from researchers. Occlusions in face images collected in practice, such as glasses, scarves, or other large-area interference noise, make facial information incomplete and recognition more difficult; moreover, occlusions can vary in type, position, and size, making it hard to model the occluded region effectively.
In existing deep learning-based face recognition, a multidimensional vector is computed for each person. To decide whether two images show the same person, the distance between the vectors of the two images is calculated; if the distance exceeds a given threshold, the two images are judged to show the same person. When the face is heavily occluded, the algorithm will fail to match even images of the same person.
Summary of the Invention
An object of the present invention is to provide a machine learning-based occluded face recognition system, method, and storage medium, to solve the above problem of inaccurate recognition, especially when the face is heavily occluded. If only the two eyes are exposed, the whole-face feature vector loses most of its information and no longer describes a complete face; comparison against the whole face will then certainly fail. In the solution provided by the invention, the feature vector obtained from the two eyes is compared with the two eyes of the target face, greatly improving recognition accuracy.
The invention provides a machine learning-based occluded face recognition system comprising an image acquisition unit, a machine learning unit, a database, and a parameter comparison unit, wherein:
the image acquisition unit acquires a face image to be recognized and sends it to the machine learning unit;
the machine learning unit trains on the face image to be recognized using machine learning, obtains the feature set, feature subsets, and weight parameters of the image, and sends these parameters to the parameter comparison unit;
the database stores training data sets and training data subsets for a large number of face images;
the parameter comparison unit compares the feature set, feature subsets, and weight parameters of the face to be recognized with the training data sets and training data subsets in the database, and outputs the face recognition result.
Further, the face image to be recognized may be either unoccluded or occluded.
Further, the machine learning technique adopted by the machine learning unit is an object recognition technique based on a convolutional neural network (CNN).
Further, the training data subsets and feature subsets are divided according to the facial features, and include one or more (or a combination) of a left eyebrow subset, a left eye subset, a right eyebrow subset, a right eye subset, a nose subset, and a mouth subset.
The invention also provides a method for occluded face recognition using the machine learning-based occluded face recognition system, comprising the following steps:
Step S1: the data acquisition unit acquires a face image to be recognized and sends it to the machine learning unit;
Step S2: the machine learning unit trains on the face image to be recognized using machine learning, obtains the feature set, feature subsets, and weight parameters of the image, and sends these parameters to the parameter comparison unit;
Step S3: the parameter comparison unit compares the feature set and subsets of the face to be recognized with the training data sets and training data subsets in the database;
Step S4: the feature set and subsets whose comparison results match are weighted and summed to obtain the final comparison result and identify the face.
Further, in step S2, the machine learning unit trains on the face image to be recognized as follows:
First, face recognition technology is used to annotate the feature set of the face and frame the parts as feature subsets of the face;
Second, the object recognition algorithm is retrained to obtain the coordinates of the face feature set and of the subsets, together with the deep learning parameters corresponding to those coordinates, i.e. the new weight parameters;
Third, the feature set of the face and the bounding boxes (coordinates) of each subset are input into the machine learning-based object recognition algorithm for training, yielding new parameters; running inference with these parameters on a test face photo produces the feature set of the face in the image and the coordinates of each subset;
Fourth, the new coordinates, covering the whole face and the subsets, are used for retraining with a loss function consistent with that of the original whole face, yielding the multidimensional feature vector of the whole-face feature set and the multidimensional feature vector of each subset.
Further, in step S3, the objects of comparison are multidimensional vector sets, each composed of the face's multidimensional feature vector, the presence flags of the feature subsets, and the subsets' multidimensional feature vectors.
Further, in step S3, the comparison proceeds as follows:
the parameter comparison unit first determines whether all of the feature subsets are present;
if so, the parameter comparison unit searches the training data sets stored in the database;
if not, the parameter comparison unit searches the training data subsets stored in the database to obtain the result of comparing against the feature set and feature subsets.
The invention also provides a computer readable storage medium storing a processor program which, when executed by a computer, implements the following steps:
Step S1: the data acquisition unit acquires a face image to be recognized and sends it to the machine learning unit;
Step S2: the machine learning unit trains on the face image to be recognized using machine learning, obtains the feature set, feature subsets, and weight parameters of the image, and sends these parameters to the parameter comparison unit;
Step S3: the parameter comparison unit compares the feature set and subsets of the face to be recognized with the training data sets and training data subsets in the database;
Step S4: the feature set and subsets whose comparison results match are weighted and summed to obtain the final comparison result and identify the face.
Further, in step S2, the machine learning unit trains on the face image to be recognized as follows:
First, face recognition technology is used to annotate the feature set of the face and frame the parts as feature subsets of the face;
Second, the object recognition algorithm is retrained to obtain the coordinates of the face feature set and of the subsets, together with the deep learning parameters corresponding to those coordinates, i.e. the new weight parameters;
Third, the feature set of the face and the bounding boxes (coordinates) of each subset are input into the machine learning-based object recognition algorithm for training, yielding new parameters; running inference with these parameters on a test face photo produces the feature set of the face in the image and the coordinates of each subset;
Fourth, the new coordinates, covering the whole face and the subsets, are used for retraining with a loss function consistent with that of the original whole face, yielding the multidimensional feature vector of the whole-face feature set and the multidimensional feature vector of each subset.
In step S3, the objects of comparison are multidimensional vector sets, each composed of the face's multidimensional feature vector, the presence flags of the feature subsets, and the subsets' multidimensional feature vectors.
In step S3, the comparison proceeds as follows:
the parameter comparison unit first determines whether all of the feature subsets are present;
if so, the parameter comparison unit searches the training data sets stored in the database;
if not, the parameter comparison unit searches the training data subsets stored in the database to obtain the result of comparing against the feature set and feature subsets.
The invention achieves very significant benefits:
it adopts machine learning technology, broadening the application domain of artificial intelligence;
it recognizes occluded faces more accurately, improving recognition efficiency and accuracy.
Brief Description of the Drawings
FIG. 1 is a schematic diagram of the system composition of the present invention.
FIG. 2 is a schematic diagram of the subset annotation of an embodiment of the present invention.
FIG. 3 is a schematic flowchart of the recognition method according to an embodiment of the present invention.
Detailed Description
The specific embodiments of the present invention are described below in more detail with reference to the drawings and examples, so that the solution of the present invention and its advantages in various aspects can be better understood. However, the specific embodiments and examples described below are for illustration only and do not limit the present invention.
The invention provides a machine learning-based occluded face recognition system. As shown in FIG. 1, the system includes an image acquisition unit 1, a machine learning unit 2, a database 3, and a parameter comparison unit 4, wherein: the image acquisition unit 1 acquires a face image to be recognized and sends it to the machine learning unit 2; the machine learning unit 2 trains on the face image using machine learning, obtains the feature set, feature subsets, and weight parameters of the image, and sends these parameters to the parameter comparison unit; the database 3 stores training data sets and training data subsets for a large number of face images; and the parameter comparison unit 4 compares the feature set, feature subsets, and weight parameters of the face to be recognized with the training data sets and training data subsets in the database 3 and outputs the face recognition result.
The face image to be recognized may be either unoccluded or occluded.
Further, the machine learning technique adopted by the machine learning unit 2 is an object recognition technique based on a convolutional neural network (CNN). The training data subsets and feature subsets are divided according to the facial features, and include one or more (or a combination) of a left eyebrow subset, a left eye subset, a right eyebrow subset, a right eye subset, a nose subset, and a mouth subset. In practice, other division methods may also be used; the subsets may be chosen according to the actual situation and salient features of the face, and are not limited to six parts.
The invention also provides a method for occluded face recognition using the machine learning-based occluded face recognition system. The recognition method is shown in FIG. 3, which is a schematic flowchart of the recognition method according to an embodiment of the present invention. It includes the following steps:
Step S1: the data acquisition unit 1 acquires a face image to be recognized and sends it to the machine learning unit 2;
Step S2: the machine learning unit 2 trains on the face image to be recognized using machine learning, obtains the feature set, feature subsets, and weight parameters of the image, and sends these parameters to the parameter comparison unit 4. Specifically: first, face recognition technology is used to annotate the feature set of the face and frame the parts as feature subsets; second, the object recognition algorithm is retrained to obtain the coordinates of the face feature set and of the subsets, together with the corresponding deep learning parameters, i.e. the new weight parameters; third, the feature set of the face and the bounding boxes (coordinates) of each subset are input into the machine learning-based object recognition algorithm for training, yielding new parameters, and running inference with these parameters on a test face photo produces the feature set of the face in the image and the coordinates of each subset; fourth, the new coordinates, covering the whole face and the subsets, are used for retraining with a loss function consistent with that of the original whole face, yielding the multidimensional feature vector of the whole-face feature set and the multidimensional feature vector of each subset.
Step S3: the parameter comparison unit 4 compares the feature set and subsets of the face to be recognized with the training data sets and training data subsets in the database 3. The objects of comparison are multidimensional vector sets, each composed of the face's multidimensional feature vector, the presence flags of the feature subsets, and the subsets' multidimensional feature vectors. The parameter comparison unit 4 first determines whether all of the feature subsets are present; if so, it searches the training data sets stored in the database 3; if not, it searches the training data subsets stored in the database 3 to obtain the result of comparing against the feature set and feature subsets.
Step S4: the feature set and subsets whose comparison results match are weighted and summed to obtain the final comparison result and identify the face.
The invention also provides a computer readable storage medium storing a processor program which, when executed by a computer, implements the following steps:
Step S1: the data acquisition unit 1 acquires a face image to be recognized and sends it to the machine learning unit 2;
Step S2: the machine learning unit 2 trains on the face image to be recognized using machine learning, obtains the feature set, feature subsets, and weight parameters of the image, and sends these parameters to the parameter comparison unit 4. Specifically: first, face recognition technology is used to annotate the feature set of the face and frame the parts as feature subsets; second, the object recognition algorithm is retrained to obtain the coordinates of the face feature set and of the subsets, together with the corresponding deep learning parameters, i.e. the new weight parameters; third, the feature set of the face and the bounding boxes (coordinates) of each subset are input into the machine learning-based object recognition algorithm for training, yielding new parameters, and running inference with these parameters on a test face photo produces the feature set of the face in the image and the coordinates of each subset; fourth, the new coordinates, covering the whole face and the subsets, are used for retraining with a loss function consistent with that of the original whole face, yielding the multidimensional feature vector of the whole-face feature set and the multidimensional feature vector of each subset.
Step S3: the parameter comparison unit 4 compares the feature set and subsets of the face to be recognized with the training data sets and training data subsets in the database 3. The objects of comparison are multidimensional vector sets, each composed of the face's multidimensional feature vector, the presence flags of the feature subsets, and the subsets' multidimensional feature vectors. The parameter comparison unit 4 first determines whether all of the feature subsets are present; if so, it searches the training data sets stored in the database 3; if not, it searches the training data subsets stored in the database 3 to obtain the result of comparing against the feature set and feature subsets.
Step S4: the feature set and subsets whose comparison results match are weighted and summed to obtain the final comparison result and identify the face.
Example 1
As shown in FIG. 1, the image acquisition unit 1 acquires a picture of the face to be recognized and sends it to the machine learning unit 2.
The machine learning unit 2 trains on the picture of the face to be recognized, obtains the feature set, feature subsets, and weight parameters of the image, and sends these parameters to the parameter comparison unit. First, as shown in FIG. 2, the picture is annotated with subsets; FIG. 2 is a schematic diagram of the subset annotation in this example.
Using face recognition technology, the left eyebrow, left eye, right eyebrow, right eye, nose, and mouth of the face are annotated and framed as subsets of the face frame.
The face frame and the six subset frames (left eyebrow, left eye, right eyebrow, right eye, nose, and mouth), i.e. their coordinates, are input into a machine learning-based object detection algorithm such as YOLO or SSD for training, yielding new parameters.
Running inference with these parameters on a given face photo produces the coordinates of the face and of the six subsets in the image, together with the deep learning parameters corresponding to those coordinates, i.e. the new weight parameters of the convolutional neural network (CNN).
Each subset is retrained, and the training loss function can be kept consistent with that of the original whole face. The multidimensional feature vector of the whole face and the multidimensional feature vector of each subset are obtained.
If only a single multidimensional vector V0 is extracted, recognition computes the distance between this vector and the target face's multidimensional vector V, generally using the cosine distance. If the face is partially occluded, V0 is likely to differ greatly from its unoccluded counterpart, so the face cannot be recognized correctly.
In addition to the whole-face multidimensional vector V0, the present invention also obtains feature vectors for the local face subsets, yielding multiple results. The presence of each subset is flagged, as in (V0, i1, i2, i3, i4, i5, i6, V1, V2, V3, V4, V5, V6), where i1 indicates whether the left eyebrow was detected (0 for not detected, 1 for detected); likewise i2 stands for the left eye, i3 for the right eyebrow, i4 for the right eye, i5 for the nose, and i6 for the mouth, and V1 through V6 are their corresponding feature vectors. When every subset is present, i.e. i1 through i6 are all 1, the face is complete; if a subset is absent, its corresponding feature vector Vi is all zeros. For example, if the left eyebrow is not detected, i1 is 0, the corresponding V1 is a zero vector, and that subset no longer participates in the comparison.
The parameter comparison unit 4 compares the feature set and subsets of the face to be recognized with the training data sets and training data subsets in the database: the multidimensional vector set is compared against the picture parameters in the database 3 to decide whether the images show the same person.
In traditional algorithms, each person yields a single multidimensional vector V0, which is then compared with the vectors V in the database. The present invention first checks whether every subset is present; if all are present, the whole face is compared directly. If the face is incomplete and only parts of it are visible, the whole-face vector is not compared, and the vectors of the corresponding subsets are compared instead.
Finally, the parameter comparison unit 4 performs a weighted summation over the per-subset comparison results to obtain the final comparison result.
The advantage of this comparison is that when the face is heavily occluded, for example with only the two eyes exposed, the whole-face feature vector loses most of its information and no longer describes a complete face, so comparison against the whole face would certainly fail. In the solution provided by the invention, the feature vector obtained from the two eyes is compared with the two eyes of the target face, greatly improving recognition accuracy.
The machine learning-based occluded face recognition system and method of the present invention can also be applied by analogy in other environments and settings: recognition of other animals, plants, or other occluded categories can likewise adopt this system and method, comparing global and/or local feature values to improve recognition accuracy.
Finally, it should be noted that the above examples are merely illustrations given to clearly explain the present invention and do not limit the embodiments. Variations or modifications of other forms made on the basis of the above description still fall within the scope of protection of the present invention.

Claims (10)

  1. A machine learning-based occluded face recognition system, the system comprising an image acquisition unit, a machine learning unit, a database, and a parameter comparison unit, wherein:
    the image acquisition unit acquires a face image to be recognized and sends it to the machine learning unit;
    the machine learning unit trains on the face image to be recognized using machine learning, obtains the feature set, feature subsets, and weight parameters of the image, and sends these parameters to the parameter comparison unit;
    the database stores training data sets and training data subsets for a large number of face images;
    the parameter comparison unit compares the feature set and feature subsets of the face to be recognized with the training data sets and training data subsets in the database, performs a weight calculation on the comparison results, and outputs the face recognition result.
  2. The system according to claim 1, wherein the face image to be recognized includes unoccluded pictures and occluded pictures.
  3. The system according to claim 1, wherein the machine learning technique adopted by the machine learning unit is an object recognition technique based on a convolutional neural network (CNN).
  4. The system according to claim 1, wherein the training data subsets and feature subsets are divided according to the facial features, and include one or more (or a combination) of a left eyebrow subset, a left eye subset, a right eyebrow subset, a right eye subset, a nose subset, and a mouth subset; each subset's features are represented by a multidimensional vector, and the whole-face multidimensional vector, the presence flags of the subsets, and the subsets' multidimensional vectors together constitute a multidimensional vector set.
  5. A method for occluded face recognition using the machine learning-based occluded face recognition system according to any one of claims 1 to 3, comprising the following steps:
    Step S1: the data acquisition unit acquires a face image to be recognized and sends it to the machine learning unit;
    Step S2: the machine learning unit trains on the face image to be recognized using machine learning, obtains the feature set, feature subsets, and weight parameters of the image, and sends these parameters to the parameter comparison unit;
    Step S3: the parameter comparison unit compares the feature set and subsets of the face to be recognized with the training data sets and training data subsets in the database;
    Step S4: the feature set and subsets whose comparison results match are weighted and summed to obtain the final comparison result and identify the face.
  6. The method according to claim 5, wherein in step S2 the machine learning unit trains on the face image to be recognized as follows:
    First, face recognition technology is used to annotate the feature set of the face and frame the parts as feature subsets of the face;
    Second, the object recognition algorithm is retrained to obtain the coordinates of the face feature set and of the subsets, together with the deep learning parameters corresponding to those coordinates, i.e. the new weight parameters;
    Third, the feature set of the face and the bounding boxes (coordinates) of each subset are input into the machine learning-based object recognition algorithm for training, yielding new parameters; running inference with these parameters on a test face photo produces the feature set of the face in the image and the coordinates of each subset;
    Fourth, the new coordinates, covering the whole face and the subsets, are used for retraining with a loss function consistent with that of the original whole face, yielding the multidimensional feature vector of the whole-face feature set and the multidimensional feature vector of each subset.
  7. The method according to claim 5, wherein in step S3 the objects of comparison are multidimensional vector sets, each composed of the face's multidimensional feature vector, the presence flags of the feature subsets, and the subsets' multidimensional feature vectors.
  8. The method according to claim 5, wherein in step S3 the comparison proceeds as follows:
    the parameter comparison unit first determines whether all of the feature subsets are present;
    if so, the parameter comparison unit searches the training data sets stored in the database;
    if not, the parameter comparison unit searches the training data subsets stored in the database to obtain the result of comparing against the feature set and feature subsets.
  9. A computer readable storage medium storing a processor program which, when executed by a computer, implements the following steps:
    Step S1: the data acquisition unit acquires a face image to be recognized and sends it to the machine learning unit;
    Step S2: the machine learning unit trains on the face image to be recognized using machine learning, obtains the feature set, feature subsets, and weight parameters of the image, and sends these parameters to the parameter comparison unit;
    Step S3: the parameter comparison unit compares the feature set and subsets of the face to be recognized with the training data sets and training data subsets in the database;
    Step S4: the feature set and subsets whose comparison results match are weighted and summed to obtain the final comparison result and identify the face.
  10. The computer readable storage medium according to claim 9, wherein:
    in step S2, the machine learning unit trains on the face image to be recognized as follows: first, face recognition technology is used to annotate the feature set of the face and frame the parts as feature subsets of the face;
    second, the object recognition algorithm is retrained to obtain the coordinates of the face feature set and of the subsets, together with the deep learning parameters corresponding to those coordinates, i.e. the new weight parameters;
    third, the feature set of the face and the bounding boxes (coordinates) of each subset are input into the machine learning-based object recognition algorithm for training, yielding new parameters; running inference with these parameters on a test face photo produces the feature set of the face in the image and the coordinates of each subset;
    fourth, the new coordinates, covering the whole face and the subsets, are used for retraining with a loss function consistent with that of the original whole face, yielding the multidimensional feature vector of the whole-face feature set and the multidimensional feature vector of each subset.
    In step S3, the objects of comparison are multidimensional vector sets, each composed of the face's multidimensional feature vector, the presence flags of the feature subsets, and the subsets' multidimensional feature vectors.
    In step S3, the comparison proceeds as follows:
    the parameter comparison unit first determines whether all of the feature subsets are present;
    if so, the parameter comparison unit searches the training data sets stored in the database;
    if not, the parameter comparison unit searches the training data subsets stored in the database to obtain the result of comparing against the feature set and feature subsets.
PCT/CN2018/075772 2018-02-08 2018-02-08 Machine learning-based occluded face recognition system, method and storage medium WO2019153175A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/075772 WO2019153175A1 (zh) 2018-02-08 2018-02-08 Machine learning-based occluded face recognition system, method and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/075772 WO2019153175A1 (zh) 2018-02-08 2018-02-08 Machine learning-based occluded face recognition system, method and storage medium

Publications (1)

Publication Number Publication Date
WO2019153175A1 true WO2019153175A1 (zh) 2019-08-15

Family

ID=67548106

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/075772 WO2019153175A1 (zh) 2018-02-08 2018-02-08 基于机器学习的遮挡人脸识别系统、方法及存储介质

Country Status (1)

Country Link
WO (1) WO2019153175A1 (zh)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111127308A (zh) * 2019-12-08 2020-05-08 复旦大学 Mirror feature rearrangement and repair method for single-sample face recognition under partial occlusion
CN111814571A (zh) * 2020-06-12 2020-10-23 深圳禾思众成科技有限公司 Mask face recognition method and system based on background filtering
CN111914628A (zh) * 2020-06-19 2020-11-10 北京百度网讯科技有限公司 Training method and apparatus for a face recognition model
CN112132157A (zh) * 2020-08-26 2020-12-25 浙江工业大学 Gait and face fusion recognition method based on Raspberry Pi
CN113033375A (zh) * 2021-03-22 2021-06-25 广州大学华软软件学院 YOLOv3-based face mask detection method, system, device and medium
CN113033582A (zh) * 2019-12-09 2021-06-25 杭州海康威视数字技术股份有限公司 Model training method, feature extraction method and apparatus
CN113536824A (zh) * 2020-04-13 2021-10-22 南京行者易智能交通科技有限公司 Improved YOLOv3-based passenger detection model and model training method

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102855496A (zh) * 2012-08-24 2013-01-02 苏州大学 Occluded face authentication method and system
CN106096535A (zh) * 2016-06-07 2016-11-09 广东顺德中山大学卡内基梅隆大学国际联合研究院 Face verification method based on a bilinear joint CNN
CN106355138A (zh) * 2016-08-18 2017-01-25 电子科技大学 Face recognition method based on deep learning and key-point feature extraction
CN106485215A (zh) * 2016-09-29 2017-03-08 西交利物浦大学 Face occlusion detection method based on a deep convolutional neural network
CN106951826A (zh) * 2017-02-14 2017-07-14 清华大学 Face detection method and apparatus
CN107016370A (zh) * 2017-04-10 2017-08-04 电子科技大学 Partially occluded face recognition method based on data augmentation
CN107066955A (zh) * 2017-03-24 2017-08-18 武汉神目信息技术有限公司 Method for restoring a whole face from a partial face region
CN107292287A (zh) * 2017-07-14 2017-10-24 深圳云天励飞技术有限公司 Face recognition method and apparatus, electronic device, and storage medium


Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111127308A (zh) * 2019-12-08 2020-05-08 复旦大学 Mirror feature rearrangement and repair method for single-sample face recognition under partial occlusion
CN111127308B (zh) * 2019-12-08 2023-06-30 复旦大学 Mirror feature rearrangement and repair method for single-sample face recognition under partial occlusion
CN113033582A (zh) * 2019-12-09 2021-06-25 杭州海康威视数字技术股份有限公司 Model training method, feature extraction method and apparatus
CN113033582B (zh) * 2019-12-09 2023-09-26 杭州海康威视数字技术股份有限公司 Model training method, feature extraction method and apparatus
CN113536824A (zh) * 2020-04-13 2021-10-22 南京行者易智能交通科技有限公司 Improvement method for a YOLOv3-based passenger detection model, and model training method
CN113536824B (zh) * 2020-04-13 2024-01-12 南京行者易智能交通科技有限公司 Improvement method for a YOLOv3-based passenger detection model, and model training method
CN111814571A (zh) * 2020-06-12 2020-10-23 深圳禾思众成科技有限公司 Masked face recognition method and system based on background filtering
CN111914628A (zh) * 2020-06-19 2020-11-10 北京百度网讯科技有限公司 Training method and apparatus for a face recognition model
CN111914628B (zh) * 2020-06-19 2023-06-20 北京百度网讯科技有限公司 Training method and apparatus for a face recognition model
CN112132157A (zh) * 2020-08-26 2020-12-25 浙江工业大学 Gait and face fusion recognition method based on a Raspberry Pi
CN113033375A (zh) * 2021-03-22 2021-06-25 广州大学华软软件学院 YOLOv3-based face mask detection method, system, device, and medium

Similar Documents

Publication Publication Date Title
WO2019153175A1 (zh) Machine learning-based occluded face recognition system and method, and storage medium
KR102174595B1 (ko) System and method for identifying faces in unconstrained media
CN108520226B (zh) Pedestrian re-identification method based on body decomposition and saliency detection
Sandbach et al. Markov random field structures for facial action unit intensity estimation
CN108614999B (zh) Deep learning-based eye open/closed state detection method
Gou et al. Learning-by-synthesis for accurate eye detection
CN102194131A (zh) Fast face recognition method based on geometric proportion features of facial organs
Chiranjeevi et al. Neutral face classification using personalized appearance models for fast and robust emotion detection
Wang et al. Head pose estimation with combined 2D SIFT and 3D HOG features
Presti et al. Boosting Hankel matrices for face emotion recognition and pain detection
Araujo et al. Fast eye localization without a face model using inner product detectors
Kacete et al. Real-time eye pupil localization using Hough regression forest
Cherniavsky et al. Semi-supervised learning of facial attributes in video
Xia et al. Face occlusion detection using deep convolutional neural networks
Wang et al. Embedding metric learning into set-based face recognition for video surveillance
Sheeba et al. Hybrid features-enabled dragon deep belief neural network for activity recognition
Xia et al. Face recognition and application of film and television actors based on Dlib
Taylor et al. Pose-sensitive embedding by nonlinear nca regression
Jiang et al. Orientation-guided similarity learning for person re-identification
El-Barkouky et al. Selective part models for detecting partially occluded faces in the wild
Zhu et al. A Joint Cascaded Framework for Simultaneous Eye State, Eye Center, and Gaze Estimation
Roy et al. A tutorial review on face detection
TW201931193A (zh) Face recognition device and method, method for improving the image recognition rate, and computer-readable storage medium
Xu et al. A facial expression recognition method based on improved HOG features and geometric features
CN108710823B (zh) Face similarity comparison method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18905524

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 08/12/2020)

122 Ep: pct application non-entry in european phase

Ref document number: 18905524

Country of ref document: EP

Kind code of ref document: A1