CN108830151A - Mask detection method based on Gaussian mixture models - Google Patents

Mask detection method based on Gaussian mixture models

Info

Publication number
CN108830151A
Authority
CN
China
Prior art keywords
frame
face
mask
mixture model
key frame
Prior art date
Legal status
Pending
Application number
CN201810426435.0A
Other languages
Chinese (zh)
Inventor
章姝俊
姚杨
姚一杨
戴波
王彦波
江樱
邱兰馨
Current Assignee
State Grid Zhejiang Electric Power Co Ltd
Information and Telecommunication Branch of State Grid Zhejiang Electric Power Co Ltd
Original Assignee
State Grid Zhejiang Electric Power Co Ltd
Information and Telecommunication Branch of State Grid Zhejiang Electric Power Co Ltd
Priority date
Filing date
Publication date
Application filed by State Grid Zhejiang Electric Power Co Ltd and Information and Telecommunication Branch of State Grid Zhejiang Electric Power Co Ltd
Priority to CN201810426435.0A
Publication of CN108830151A
Legal status: Pending


Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 — Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40 — Spoof detection, e.g. liveness detection
    • G06V40/45 — Detection of the body part being alive
    • G06V40/10 — Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G06V40/16 — Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 — Feature extraction; face representation
    • G06V40/172 — Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to the field of computer vision, and in particular to a mask detection method based on a Gaussian mixture model, comprising the following steps: establishing a Gaussian mixture model from face image samples; screening key frames containing a face from a video stream and extracting facial features from the key frames; and feeding the extracted facial features into the Gaussian mixture model for matching, and judging from the matching result whether the face in each key frame is wearing a mask. The invention achieves the following effects: the Gaussian mixture model classifies the face library and can effectively distinguish real faces from masks; three rounds of key-frame screening remove redundant frames, reducing computation and improving detection efficiency.

Description

Mask Detection Method Based on a Gaussian Mixture Model

Technical Field

The present invention relates to the field of computer vision, and in particular to a mask detection method based on a Gaussian mixture model.

Background Art

With the rapid development of e-commerce and mobile payment, capturing a user's face image is an effective means of fraud prevention. If a user wears a mask while the image is captured, fraud is likely to follow. However, existing face detection techniques cannot reliably distinguish a real face from a mask, so an effective mask detection technique is currently lacking.

Summary of the Invention

To solve the above problems, the present invention proposes a mask detection method based on a Gaussian mixture model, used to detect whether a face in a video stream is wearing a mask.

A mask detection method based on a Gaussian mixture model comprises the following steps: establishing a Gaussian mixture model from face image samples; screening key frames containing a face from a video stream, and extracting facial features from the key frames; and feeding the facial features extracted from the key frames into the Gaussian mixture model for matching, and judging from the matching result whether the face in each key frame is wearing a mask.

Preferably, the method for extracting facial features from a key frame is: converting the color image in the key frame to a grayscale image; performing face detection on the grayscale image with a face detector, locating the face region in the grayscale image and marking it with a rectangular box; and, within the marked rectangular box, extracting facial features with a facial landmark detection algorithm.

Preferably, the Gaussian mixture model is:

p(x) = Σ_{k=1}^{K} π_k N(x; μ_k, Σ_k)

where K is the number of component models; π_k is the weight of the k-th component, satisfying Σ_{k=1}^{K} π_k = 1; and N(x; μ_k, Σ_k) is the k-th component of the mixture.

Preferably, the method for screening key frames containing a face from the video stream is: extracting video frames from the video stream; filtering out, with a face detector, the redundant frames that do not contain a face; and filtering out duplicate redundant frames from the face-containing video frames to obtain the key frames.

Preferably, the method for filtering out duplicate redundant frames from the face-containing video frames to obtain the key frames is: obtaining the feature vector of each frame and substituting it into the following formula to obtain the degree of similarity d between the facial features of two adjacent frames:

d = sqrt( Σ_j (x_ij − x_(i+1)j)² )

where x_ij is the j-th feature value of the i-th frame and x_(i+1)j is the j-th feature value of the (i+1)-th frame. A threshold T_f is set; when d < T_f, the (i+1)-th frame is a redundant frame and is deleted; otherwise, the (i+1)-th frame is retained.

Preferably, the method for judging whether the face in a key frame is wearing a mask is: feeding the facial features extracted from the key frame into the Gaussian mixture model for matching to obtain several probability densities, and taking the maximum value P_max of those probability densities; an empirical threshold T is set, and if P_max > T the face is judged not to be wearing a mask; otherwise, it is judged to be wearing a mask.

The use of the present invention achieves the following effects:

1. The present invention uses a Gaussian mixture model to classify the face library, effectively distinguishing real faces from masks;

2. Three rounds of key-frame screening remove redundant frames, reducing computation and improving detection efficiency.

Brief Description of the Drawings

The present invention is described in further detail below in conjunction with the accompanying drawings and specific embodiments.

图1是本发明的流程示意图。Fig. 1 is a schematic flow chart of the present invention.

Detailed Description of the Embodiments

The technical solution of the present invention is further described below in conjunction with the accompanying drawings, but the present invention is not limited to these embodiments.

The basic idea of the mask detection method based on a Gaussian mixture model of the present invention is: establish a Gaussian mixture model from face image samples; obtain key frames containing a face from the video stream through multiple rounds of screening, and extract facial features from the key frames; feed the extracted facial features into the Gaussian mixture model for matching, and judge from the matching result whether the face in each key frame is wearing a mask, thereby achieving fast and effective discrimination between real faces and masks.

Fig. 1 is a schematic flow chart of the present invention; as shown in Fig. 1, the present invention mainly comprises the following steps.

Step 1. Establish a Gaussian mixture model from face image samples.

Specifically, in this embodiment, the present invention uses the face database of the Chinese Academy of Sciences, from which 10,000 face images covering different genders, ages, head poses, expressions, and so on are selected as the training set. First, each color image is converted to a grayscale image. Second, a face detector performs face detection on the grayscale image, locating the face region and marking it with a rectangular box; because the proportion of an image occupied by the face is uncertain, marking it with a face detector facilitates feature extraction. Finally, within the marked box, facial features are extracted with a facial landmark detection algorithm.
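The grayscale conversion that precedes detection can be sketched as follows; this is a minimal stand-in using the standard luminosity weights (the face detector and landmark extractor themselves are assumed to come from an external library such as OpenCV or dlib, which are not reproduced here):

```python
import numpy as np

def to_grayscale(rgb):
    """Convert an H x W x 3 color image to grayscale with the standard
    luminosity weights (a stand-in for a library call like cv2.cvtColor)."""
    weights = np.array([0.299, 0.587, 0.114])
    gray = rgb[..., :3] @ weights          # weighted sum over the channel axis
    return np.rint(gray).astype(np.uint8)  # round to an 8-bit intensity

# A uniform gray pixel maps to the same intensity:
img = np.full((2, 2, 3), 100, dtype=np.uint8)
print(to_grayscale(img)[0, 0])  # 100
```

The detector would then run on the grayscale array and return rectangular face regions for landmark extraction.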

A Gaussian mixture model is then built from the extracted facial features. In this embodiment, the Gaussian mixture model is:

p(x) = Σ_{k=1}^{K} π_k N(x; μ_k, Σ_k)

where K is the number of component models; π_k is the weight of the k-th component, satisfying Σ_{k=1}^{K} π_k = 1; and N(x; μ_k, Σ_k) is the k-th component of the mixture.

A Gaussian mixture model (GMM) estimates the probability density distribution of the samples, the estimating model being a weighted sum of several Gaussian models. Each Gaussian model represents one class. Projecting a sample onto the several Gaussian models yields its probability under each class, and the class with the highest probability can then be selected as the decision result. In this embodiment, once clustering of the facial feature data under the established model is complete, the data are divided into multiple classes: for example, men in one class, women in another, children in another, older people in another, round-faced people in another, and so on. Each class of images corresponds to a set of sub-band energy features from a two-level discrete wavelet transform, yielding multiple sets of feature vectors; these sets are used as training samples, and the EM algorithm trains one Gaussian mixture model per set. Subdividing the facial feature data into multiple classes with the Gaussian mixture model makes the final discrimination more accurate.
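The mixture density p(x) = Σ_k π_k N(x; μ_k, Σ_k) described above can be evaluated directly. The sketch below assumes diagonal covariances for simplicity; a full implementation would fit the parameters with EM (e.g. scikit-learn's GaussianMixture) rather than supply them by hand:

```python
import numpy as np

def gmm_density(x, weights, means, variances):
    """Evaluate p(x) = sum_k pi_k * N(x; mu_k, Sigma_k), diagonal covariances."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    d = x.size
    total = 0.0
    for pi_k, mu_k, var_k in zip(weights, means, variances):
        mu_k = np.atleast_1d(np.asarray(mu_k, dtype=float))
        var_k = np.atleast_1d(np.asarray(var_k, dtype=float))
        norm = 1.0 / np.sqrt((2 * np.pi) ** d * np.prod(var_k))
        total += pi_k * norm * np.exp(-0.5 * np.sum((x - mu_k) ** 2 / var_k))
    return total

# One standard-normal component evaluated at its mean: 1/sqrt(2*pi) ≈ 0.3989
print(round(gmm_density(0.0, [1.0], [0.0], [1.0]), 4))  # 0.3989
```

In the method above, a feature vector would be scored against each trained component and the per-component densities compared.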

Step 2. Screen key frames containing a face from the video stream, and extract facial features from the key frames.

Specifically, in this embodiment, to reduce model computation and improve detection efficiency, the video frames of the video stream are screened three times to obtain key frames containing only faces.

First screening: video frames are extracted from the video stream at a rate of n frames per second, where n is generally 1. Sampling at a fixed rate ensures that no face appearing in the video stream is missed.

Second screening: a face detector filters out the redundant frames that contain no face. Because the video stream contains many scenes without a face, many of the extracted frames contain no face either; such frames require no mask detection. In this embodiment, these redundant frames are screened out with a face detector, reducing the computation of the Gaussian mixture model and improving the efficiency of distinguishing real faces from masks. Since the face detector makes errors during recognition, frames without a face may remain after a single pass; in practice, the detector screening can be run several times.
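The first two screenings can be sketched as follows. The face-detector predicate here is an assumption (in practice it would wrap a real detector such as OpenCV's Haar cascade); the sketch only shows the sampling-and-filtering logic:

```python
def screen_frames(frames, fps, has_face, n=1):
    """First screening: sample n frames per second from the stream.
    Second screening: keep only frames where the detector finds a face."""
    step = max(1, round(fps / n))
    sampled = frames[::step]               # fixed-rate sampling
    return [f for f in sampled if has_face(f)]

# Toy stream: 6 "frames" (labels) at 2 fps, with a placeholder detector.
frames = ["f0_face", "f1", "f2", "f3", "f4_face", "f5"]
kept = screen_frames(frames, fps=2, has_face=lambda f: "face" in f)
print(kept)  # ['f0_face', 'f4_face']
```

With real video, `frames` would be image arrays and `has_face` a detector call.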

Third screening: duplicate redundant frames are filtered out of the face-containing video frames to obtain the key frames. Specifically, the feature vector of each of the N candidate frames is obtained and substituted into the following formula to obtain the degree of similarity d between the facial features of two adjacent frames:

d = sqrt( Σ_j (x_ij − x_(i+1)j)² )

where x_ij is the j-th feature value of the i-th frame and x_(i+1)j is the j-th feature value of the (i+1)-th frame. A threshold T_f is set; when d < T_f, the (i+1)-th frame is a redundant frame and is deleted; otherwise, the (i+1)-th frame is retained. After the second screening, video frames of the same or similar scenes remain, and duplicate frames need not undergo mask detection. In this embodiment, the 128-dimensional feature vectors of two consecutive frames are compared; if the result is smaller than the set threshold, the two frames are judged too similar and the latter frame is marked redundant. Because similar redundant frames need not be processed, the efficiency of distinguishing real faces from masks improves. In this embodiment redundancy is determined by comparing the feature values of consecutive frames, but the applicant does not limit the determination method; other image-processing methods may also be used.
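The third screening can be sketched as below. Comparing each frame against the last kept frame via Euclidean distance over the feature vectors is one plausible reading of the comparison described above, not necessarily the patent's exact formulation:

```python
import numpy as np

def drop_redundant(features, t_f):
    """Keep a frame only if its feature vector differs from the last kept
    frame's by at least t_f (Euclidean distance); otherwise it is redundant."""
    kept = [0]                             # always keep the first frame
    for i in range(1, len(features)):
        d = np.linalg.norm(features[i] - features[kept[-1]])
        if d >= t_f:
            kept.append(i)                 # sufficiently different: keep it
    return kept

# Frames 1 and 3 nearly duplicate their predecessors and are dropped.
feats = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
print(drop_redundant(feats, t_f=1.0))  # [0, 2]
```

In the method above, `features` would be the 128-dimensional face descriptors of the face-containing frames.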

A frame is a single still picture. In this embodiment, facial features are extracted from the key frames in the same way as from the face image samples, so the description is not repeated here.

Step 3. Feed the facial features extracted from the key frames into the Gaussian mixture model for matching, and judge from the matching result whether the face in each key frame is wearing a mask.

Specifically, the facial features extracted from a key frame are fed into the Gaussian mixture model, and distance measurement against the cluster center of each component yields several probability densities, of which the maximum P_max is taken. An empirical threshold T is set; in this embodiment, T = 96. If P_max > T, the face is judged not to be wearing a mask; otherwise, it is judged to be wearing a mask.
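The decision rule reduces to thresholding the largest component score; a minimal sketch (the threshold 96 is taken from the embodiment above, and the densities would in practice come from the trained mixture):

```python
def judge_mask(component_densities, t=96.0):
    """Return True if the face is judged to be wearing a mask.
    A mask is declared when the best match P_max does not exceed t."""
    p_max = max(component_densities)
    return not (p_max > t)

print(judge_mask([12.0, 98.5, 40.0]))  # False: P_max = 98.5 > 96, a real face
print(judge_mask([12.0, 55.0, 40.0]))  # True: no component matches well enough
```

The intuition is that a real face should match at least one cluster of the real-face training data strongly, while a mask matches none.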

Those skilled in the art to which the present invention belongs may make various modifications or supplements to the described embodiments, or substitute them in similar ways, without departing from the spirit of the present invention or exceeding the scope defined by the appended claims.

Claims (6)

1. A mask detection method based on a Gaussian mixture model, characterized in that it comprises the following steps:
establishing a Gaussian mixture model from face image samples;
screening key frames containing a face from a video stream, and extracting facial features from the key frames;
feeding the facial features extracted from the key frames into the Gaussian mixture model for matching, and judging from the matching result whether the face in each key frame is wearing a mask.
2. The mask detection method based on a Gaussian mixture model according to claim 1, characterized in that the method for extracting facial features from a key frame is:
converting the color image in the key frame to a grayscale image;
performing face detection on the grayscale image with a face detector, locating the face region in the grayscale image and marking it with a rectangular box;
extracting facial features within the marked rectangular box with a facial landmark detection algorithm.
3. The mask detection method based on a Gaussian mixture model according to claim 1, characterized in that the Gaussian mixture model is:
p(x) = Σ_{k=1}^{K} π_k N(x; μ_k, Σ_k)
where K is the number of component models; π_k is the weight of the k-th component, satisfying Σ_{k=1}^{K} π_k = 1; and N(x; μ_k, Σ_k) is the k-th component of the mixture.
4. The mask detection method based on a Gaussian mixture model according to claim 1, characterized in that the method for screening key frames containing a face from the video stream is:
extracting video frames from the video stream;
filtering out, with a face detector, the redundant frames that do not contain a face;
filtering out duplicate redundant frames from the face-containing video frames to obtain the key frames.
5. The mask detection method based on a Gaussian mixture model according to claim 4, characterized in that the method for filtering out duplicate redundant frames from the face-containing video frames to obtain the key frames is:
obtaining the feature vector of each frame and substituting it into the following formula to obtain the degree of similarity d between the facial features of two adjacent frames:
d = sqrt( Σ_j (x_ij − x_(i+1)j)² )
where x_ij is the j-th feature value of the i-th frame and x_(i+1)j is the j-th feature value of the (i+1)-th frame;
a threshold T_f is set; when d < T_f, the (i+1)-th frame is a redundant frame and is deleted; otherwise, the (i+1)-th frame is retained.
6. The mask detection method based on a Gaussian mixture model according to claim 1, characterized in that the method for judging whether the face in a key frame is wearing a mask is: feeding the facial features extracted from the key frame into the Gaussian mixture model for matching to obtain several probability densities; taking the maximum value P_max of the probability densities; and setting an empirical threshold T such that if P_max > T the face is judged not to be wearing a mask, and otherwise it is judged to be wearing a mask.
CN201810426435.0A 2018-05-07 2018-05-07 Mask detection method based on Gaussian mixture models (Pending)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810426435.0A CN108830151A (en) 2018-05-07 2018-05-07 Mask detection method based on Gaussian mixture models

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810426435.0A CN108830151A (en) 2018-05-07 2018-05-07 Mask detection method based on Gaussian mixture models

Publications (1)

Publication Number Publication Date
CN108830151A 2018-11-16

Family

ID=64147636

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810426435.0A Pending CN108830151A (en) 2018-05-07 Mask detection method based on Gaussian mixture models

Country Status (1)

Country Link
CN (1) CN108830151A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110879972A (en) * 2019-10-24 2020-03-13 深圳云天励飞技术有限公司 A face detection method and device
CN112650461A (en) * 2020-12-15 2021-04-13 广州舒勇五金制品有限公司 Relative position-based display system

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101464950A (en) * 2009-01-16 2009-06-24 北京航空航天大学 Video human face identification and retrieval method based on on-line learning and Bayesian inference
CN104361326A (en) * 2014-11-18 2015-02-18 新开普电子股份有限公司 Method for distinguishing living human face
CN105426515A (en) * 2015-12-01 2016-03-23 小米科技有限责任公司 Video classification method and apparatus
CN106446772A (en) * 2016-08-11 2017-02-22 天津大学 Cheating-prevention method in face recognition system
CN107194376A (en) * 2017-06-21 2017-09-22 北京市威富安防科技有限公司 Mask fraud convolutional neural networks training method and human face in-vivo detection method
CN107194985A (en) * 2017-04-11 2017-09-22 中国农业大学 A kind of three-dimensional visualization method and device towards large scene
CN107844779A (en) * 2017-11-21 2018-03-27 重庆邮电大学 A kind of video key frame extracting method


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
GIRIJA CHETTY ET AL: "Liveness Verification in Audio-Video Speaker Authentication", 10th Australian International Conference on Speech Science & Technology *
LIU Weifeng et al.: "Facial expression analysis based on Gabor features and Gaussian mixture models", Computer Engineering and Applications *
TANG Xu: "SAR image retrieval based on Gaussian mixture model classification", China Master's Theses Full-text Database, Information Science and Technology *
QU Youjia: "Research on key-frame extraction algorithms based on SIFT features", China Doctoral and Master's Theses Full-text Database (Master), Information Science and Technology *


Similar Documents

Publication Publication Date Title
CN108038476B (en) A feature extraction method for facial expression recognition based on edge detection and SIFT
US8682029B2 (en) Rule-based segmentation for objects with frontal view in color images
CN103971386B (en) A kind of foreground detection method under dynamic background scene
CN111931758B (en) Face recognition method and device combining facial veins
WO2018072233A1 (en) Method and system for vehicle tag detection and recognition based on selective search algorithm
CN113095263B (en) Training method and device for pedestrian re-recognition model under shielding and pedestrian re-recognition method and device under shielding
CN102194108B (en) Smile face expression recognition method based on clustering linear discriminant analysis of feature selection
CN101142584A (en) Methods for Facial Feature Detection
CN102073841A (en) Poor video detection method and device
CN105205480A (en) Complex scene human eye locating method and system
CN102968637A (en) Complicated background image and character division method
CN102034107B (en) Unhealthy image differentiating method based on robust visual attention feature and sparse representation
CN101853397A (en) A bionic face detection method based on human visual characteristics
CN107220598B (en) Iris image classification method based on deep learning features and Fisher Vector coding model
CN105718866A (en) Visual target detection and identification method
CN112861605A (en) Multi-person gait recognition method based on space-time mixed characteristics
CN102004925A (en) Method for training object classification model and identification method using object classification model
CN106022223A (en) High-dimensional local-binary-pattern face identification algorithm and system
CN112990120B (en) A Cross-Domain Person Re-identification Method Using Camera Style Separation Domain Information
CN113743365A (en) Method and device for detecting fraudulent behavior in face recognition process
CN107392105A (en) A kind of expression recognition method based on reverse collaboration marking area feature
CN104680189B (en) Based on the bad image detecting method for improving bag of words
CN110458064B (en) Combining data-driven and knowledge-driven low-altitude target detection and recognition methods
CN108830151A Mask detection method based on Gaussian mixture models
CN107203788B (en) Medium-level visual drug image identification method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20181116)