CN108830151A - Mask detection method based on Gaussian mixture models - Google Patents

Mask detection method based on Gaussian mixture models

Info

Publication number
CN108830151A
Authority
CN
China
Prior art keywords
frame
face
mixture models
Gaussian mixture
key frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810426435.0A
Other languages
Chinese (zh)
Inventor
章姝俊
姚杨
姚一杨
戴波
王彦波
江樱
邱兰馨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
State Grid Zhejiang Electric Power Co Ltd
Information and Telecommunication Branch of State Grid Zhejiang Electric Power Co Ltd
Original Assignee
State Grid Zhejiang Electric Power Co Ltd
Information and Telecommunication Branch of State Grid Zhejiang Electric Power Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by State Grid Zhejiang Electric Power Co Ltd, Information and Telecommunication Branch of State Grid Zhejiang Electric Power Co Ltd filed Critical State Grid Zhejiang Electric Power Co Ltd
Priority to CN201810426435.0A priority Critical patent/CN108830151A/en
Publication of CN108830151A publication Critical patent/CN108830151A/en
Pending legal-status Critical Current

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40Spoof detection, e.g. liveness detection
    • G06V40/45Detection of the body part being alive
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to the technical field of computer vision, and in particular to a mask detection method based on Gaussian mixture models, comprising the following steps: establishing a Gaussian mixture model from face image samples; screening key frames containing faces from a video stream, and extracting facial features from the key frames; feeding the facial features extracted from a key frame into the Gaussian mixture model for matching, and judging from the matching result whether the face in the key frame wears a mask. The present invention achieves the following effects: the face image library is classified with Gaussian mixture models, so that real faces and masks can be effectively distinguished; and three rounds of key-frame screening remove redundant frames, reducing computation and improving detection efficiency.

Description

Mask detection method based on Gaussian mixture models
Technical field
The present invention relates to the technical field of computer vision, and in particular to a mask detection method based on Gaussian mixture models.
Background technique
With the rapid development of e-commerce and mobile payment, acquiring an image of the user's face has become an effective means of preventing fraud. If the user wears a mask while the picture is taken, fraud becomes likely. However, existing face detection techniques cannot reliably distinguish a real face from a mask, so an effective mask detection technique is currently lacking.
Summary of the invention
To solve the above problems, the present invention proposes a mask detection method based on Gaussian mixture models for detecting whether a face in a video stream wears a mask.
A mask detection method based on Gaussian mixture models comprises the following steps: establishing a Gaussian mixture model from face image samples; screening key frames containing faces from a video stream, and extracting facial features from the key frames; feeding the facial features extracted from a key frame into the Gaussian mixture model for matching, and judging from the matching result whether the face in the key frame wears a mask.
Preferably, the method of extracting facial features from a key frame is: converting the color image in the key frame into a grayscale image; performing face detection on the grayscale image with a face detector, locating the face region in the grayscale image and marking it with a rectangular box; and, according to the marked rectangular box, extracting facial features within the box by a facial landmark detection algorithm.
Preferably, the Gaussian mixture model is:
p(x) = Σ_{k=1}^{K} π_k N(x; μ_k, Σ_k)
where K represents the number of components; π_k represents the weight of the k-th component, satisfying Σ_{k=1}^{K} π_k = 1; and N(x; μ_k, Σ_k) is the k-th component of the mixture model.
Preferably, the method of screening key frames containing faces from the video stream is: extracting video frames from the video stream; filtering out, with a face detector, the redundant frames that contain no face; and filtering out duplicate redundant frames from the video frames containing faces to obtain the key frames.
Preferably, the method of filtering out duplicate redundant frames from the video frames containing faces to obtain the key frames is: obtaining the feature values of each frame, and substituting the feature values into the following formula to obtain the degree of similarity d of the facial features in two adjacent frames:
d = Σ_j |x_ij − x_(i+1)j|
where x_ij represents the j-th feature value of the i-th frame, and x_(i+1)j represents the j-th feature value of the (i+1)-th frame. A threshold T_f is set; when d < T_f, the (i+1)-th frame is a redundant frame and is deleted; otherwise, the (i+1)-th frame is retained.
Preferably, the method of judging whether the face in a key frame wears a mask is: feeding the facial features extracted from the key frame into the Gaussian mixture model for matching to obtain several probability densities, and taking the maximum value P_max among them; an empirical threshold T is set; if P_max > T, the face is judged not to be wearing a mask; otherwise, the face is judged to be wearing a mask.
By using the present invention, the following effects can be achieved:
1. The face image library is classified with Gaussian mixture models, so real faces and masks can be effectively distinguished;
2. Three rounds of key-frame screening remove redundant frames, reducing computation and improving detection efficiency.
Detailed description of the invention
The present invention will be further described in detail below with reference to the accompanying drawings and specific embodiments.
Fig. 1 is a flow diagram of the invention.
Specific embodiment
The technical scheme of the present invention will be further described below in conjunction with the accompanying drawings, but the present invention is not limited to these embodiments.
The basic idea of the mask detection method based on Gaussian mixture models of the present invention is as follows: a Gaussian mixture model is established from face image samples; key frames containing faces are obtained from the video stream by repeated screening, and facial features are extracted from the key frames; the facial features extracted from a key frame are fed into the Gaussian mixture model for matching, and it is judged from the matching result whether the face in the key frame wears a mask, thereby achieving fast and effective discrimination between real faces and masks.
Fig. 1 is a flow diagram of the invention; as can be seen from Fig. 1, the invention mainly comprises the following steps.
Step 1: establish a Gaussian mixture model from face image samples.
Specifically, in the present embodiment, the face database of the Chinese Academy of Sciences is adopted, from which 10,000 face images covering different genders, ages, head poses, expressions, etc. are selected as the training set. First, each color image is converted into a grayscale image. Second, face detection is performed on the grayscale image with a face detector, and the face region is located in the grayscale image and marked with a rectangular box. The proportion of an image occupied by the face is generally uncertain, so marking with the face detector facilitates the extraction of facial features. Finally, according to the mark, facial features are extracted from the image by a facial landmark detection algorithm.
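The grayscale conversion in this step can be sketched as follows; the ITU-R BT.601 luminance weights used here are an assumed choice, since the embodiment does not specify a conversion formula:

```python
import numpy as np

def to_grayscale(rgb):
    """Convert an H x W x 3 color image to grayscale as a weighted sum of
    the R, G and B channels (BT.601 weights; an assumption, since the
    patent only states that color images are converted to grayscale)."""
    weights = np.array([0.299, 0.587, 0.114])
    return rgb @ weights

# Example: a 2 x 2 color image
img = np.array([[[255, 0, 0], [0, 255, 0]],
                [[0, 0, 255], [255, 255, 255]]], dtype=float)
gray = to_grayscale(img)
```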
A Gaussian mixture model is established from the extracted facial features. In the present embodiment, the Gaussian mixture model is:
p(x) = Σ_{k=1}^{K} π_k N(x; μ_k, Σ_k)
where K represents the number of components; π_k represents the weight of the k-th component, satisfying Σ_{k=1}^{K} π_k = 1; and N(x; μ_k, Σ_k) is the k-th component of the mixture model.
A Gaussian mixture model (GMM) estimates the probability density distribution of the samples, and the estimated model is a weighted sum of several Gaussian components, each of which represents one class. The data in the sample are projected onto each Gaussian component, yielding a probability for each class; the class with the highest probability can then be chosen as the decision result. In the present embodiment, after the facial feature data are clustered, the established Gaussian mixture model divides them into several classes: for example, men into one class, women into another, children into another, the elderly into another, round-faced people into another, and so on. For each class of images, the sub-band energy features of a two-level discrete wavelet transform yield several groups of feature vectors; using these groups of feature vectors as training samples, the Gaussian mixture models corresponding to the groups are trained with the EM algorithm. Subdividing the facial feature data into several classes with Gaussian mixture models makes the final discrimination more accurate.
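The mixture density described above can be evaluated directly. A minimal numpy sketch of p(x) = Σ_k π_k N(x; μ_k, Σ_k) for an illustrative two-component model follows (the EM training of the components, which the embodiment delegates to the EM algorithm, is omitted; all parameter values are toy values, not values from the patent):

```python
import numpy as np

def gaussian_pdf(x, mu, cov):
    """Density of the multivariate normal N(x; mu, cov)."""
    d = len(mu)
    diff = x - mu
    norm = 1.0 / np.sqrt((2 * np.pi) ** d * np.linalg.det(cov))
    return norm * np.exp(-0.5 * diff @ np.linalg.inv(cov) @ diff)

def gmm_pdf(x, weights, means, covs):
    """p(x) = sum_k pi_k N(x; mu_k, Sigma_k), with the weights summing to 1."""
    return sum(w * gaussian_pdf(x, m, c)
               for w, m, c in zip(weights, means, covs))

# Illustrative two-component mixture in 2-D
weights = [0.6, 0.4]
means = [np.zeros(2), np.array([3.0, 3.0])]
covs = [np.eye(2), np.eye(2)]
p = gmm_pdf(np.zeros(2), weights, means, covs)
```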
Step 2: screen key frames containing faces from the video stream, and extract facial features from the key frames.
Specifically, in the present embodiment, to reduce the computation of the model and improve detection efficiency, the video frames in the video stream are screened three times to obtain key frames that contain only faces.
First screening: video frames are extracted from the video stream at a frequency of n frames per second, where n typically takes the value 1. Frames are extracted from the video stream at a fixed frequency, which ensures that no face appearing in the video stream is missed.
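The first screening amounts to fixed-rate sampling of the frame sequence; a sketch follows (the function name and the 25 fps example are illustrative, not from the patent):

```python
def sample_frame_indices(total_frames, fps, n=1):
    """First screening: extract n frames per second of video by keeping
    every round(fps / n)-th frame index."""
    step = max(1, round(fps / n))
    return list(range(0, total_frames, step))

# A 5-second clip at 25 fps, sampled at n = 1 frame/s, yields 5 frames
idx = sample_frame_indices(total_frames=125, fps=25, n=1)
```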
Second screening: the redundant frames containing no face are filtered out with a face detector. Because the video stream contains many scenes without faces, many of the extracted video frames contain no face; such frames do not need mask detection. In the present embodiment, these face-free redundant frames are removed by the face detector, which reduces the computation of the Gaussian mixture model and improves the efficiency of distinguishing real faces from masks. Since the face detector makes errors during recognition, frames without faces may still remain after one pass of screening; in practice, the screening by the face detector can be repeated several times.
Third screening: duplicate redundant frames are filtered out from the video frames containing faces to obtain the key frames. Specifically, the feature values of each of the N candidate frames are obtained and substituted into the following formula to obtain the degree of similarity d of the facial features in two adjacent frames:
d = Σ_j |x_ij − x_(i+1)j|
where x_ij represents the j-th feature value of the i-th frame, and x_(i+1)j represents the j-th feature value of the (i+1)-th frame. A threshold T_f is set; when d < T_f, the (i+1)-th frame is a redundant frame and is deleted; otherwise, the (i+1)-th frame is retained. After the second screening there remain video frames of the same or similar scenes, and duplicate frames do not need mask detection. In the present embodiment, the 128-dimensional feature values of two consecutive frames are compared; if the result is smaller than the set threshold, the two frames are judged to be too similar and the later frame is determined to be redundant. Since no computation needs to be performed on similar redundant frames, the efficiency of distinguishing real faces from masks is improved. In the present embodiment, redundancy is determined by comparing the feature values of two consecutive frames, but the decision procedure for redundant frames is not limited to this; other image processing methods may also be used.
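The third screening can be sketched as follows; the absolute-difference distance used here is one plausible reading of the formula above (the patent's formula image is not reproduced), and the toy 2-dimensional features and threshold are illustrative:

```python
import numpy as np

def filter_redundant(features, t_f):
    """Third screening: compare the features of each pair of adjacent
    frames and drop frame i+1 as redundant when the feature difference
    falls below the threshold t_f. features: N x D array, one row per frame."""
    kept = [0]  # the first frame is always retained
    for i in range(len(features) - 1):
        d = np.abs(features[i + 1] - features[i]).sum()
        if d >= t_f:
            kept.append(i + 1)
    return kept

# Four frames; frames 1 and 3 barely differ from their predecessors
# and are dropped as redundant.
features = np.array([[0.0, 0.0], [0.0, 0.1], [5.0, 5.0], [5.0, 5.05]])
kept = filter_redundant(features, t_f=1.0)
```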
One frame is one static picture. In the present embodiment, the method of extracting facial features from a key frame is the same as that of extracting facial features from the face image samples, and the explanation is not repeated here.
Step 3: feed the facial features extracted from a key frame into the Gaussian mixture model for matching, and judge from the matching result whether the face in the key frame wears a mask.
Specifically, the facial features extracted from the key frame are fed into the Gaussian mixture model, and a distance measure against the cluster center of each component yields several probability densities; the maximum value P_max among these probability densities is taken. An empirical threshold T is set; in the present embodiment, T = 96. If P_max > T, the face is judged not to be wearing a mask; otherwise, the face is judged to be wearing a mask.
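The decision rule of Step 3 can be sketched as:

```python
def wears_mask(densities, threshold):
    """Step 3 decision: take P_max, the largest of the probability
    densities returned by the mixture model, and judge that a mask is
    worn when P_max does not exceed the empirical threshold T."""
    p_max = max(densities)
    return not (p_max > threshold)

# With the embodiment's threshold T = 96:
print(wears_mask([10.0, 120.0, 50.0], 96))  # False: no mask
print(wears_mask([10.0, 20.0], 96))         # True: mask worn
```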
Those skilled in the art may make various modifications or additions to the described embodiments, or substitute them in a similar manner, without departing from the spirit of the invention or exceeding the scope defined by the appended claims.

Claims (6)

1. A mask detection method based on Gaussian mixture models, characterized by comprising the following steps:
establishing a Gaussian mixture model from face image samples;
screening key frames containing faces from a video stream, and extracting facial features from the key frames;
feeding the facial features extracted from a key frame into the Gaussian mixture model for matching, and judging from the matching result whether the face in the key frame wears a mask.
2. The mask detection method based on Gaussian mixture models according to claim 1, characterized in that the method of extracting facial features from a key frame is:
converting the color image in the key frame into a grayscale image;
performing face detection on the grayscale image with a face detector, locating the face region in the grayscale image and marking it with a rectangular box;
according to the marked rectangular box, extracting facial features within the box by a facial landmark detection algorithm.
3. The mask detection method based on Gaussian mixture models according to claim 1, characterized in that the Gaussian mixture model is:
p(x) = Σ_{k=1}^{K} π_k N(x; μ_k, Σ_k)
where K represents the number of components; π_k represents the weight of the k-th component, satisfying Σ_{k=1}^{K} π_k = 1; and N(x; μ_k, Σ_k) is the k-th component of the mixture model.
4. The mask detection method based on Gaussian mixture models according to claim 1, characterized in that the method of screening key frames containing faces from the video stream is:
extracting video frames from the video stream;
filtering out, with a face detector, the redundant frames that contain no face;
filtering out duplicate redundant frames from the video frames containing faces to obtain the key frames.
5. The mask detection method based on Gaussian mixture models according to claim 4, characterized in that the method of filtering out duplicate redundant frames from the video frames containing faces to obtain the key frames is:
obtaining the feature values of each frame, and substituting the feature values into the following formula to obtain the degree of similarity d of the facial features in two adjacent frames:
d = Σ_j |x_ij − x_(i+1)j|
where x_ij represents the j-th feature value of the i-th frame, and x_(i+1)j represents the j-th feature value of the (i+1)-th frame;
setting a threshold T_f; when d < T_f, the (i+1)-th frame is a redundant frame and is deleted; otherwise, the (i+1)-th frame is retained.
6. The mask detection method based on Gaussian mixture models according to claim 1, characterized in that the method of judging whether the face in a key frame wears a mask is: feeding the facial features extracted from the key frame into the Gaussian mixture model for matching to obtain several probability densities, and taking the maximum value P_max among them; setting an empirical threshold T; if P_max > T, judging that the face is not wearing a mask; otherwise, judging that the face is wearing a mask.
CN201810426435.0A 2018-05-07 2018-05-07 Mask detection method based on Gaussian mixture models Pending CN108830151A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810426435.0A CN108830151A (en) 2018-05-07 2018-05-07 Mask detection method based on Gaussian mixture models

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810426435.0A CN108830151A (en) 2018-05-07 2018-05-07 Mask detection method based on Gaussian mixture models

Publications (1)

Publication Number Publication Date
CN108830151A true CN108830151A (en) 2018-11-16

Family

ID=64147636

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810426435.0A Pending CN108830151A (en) 2018-05-07 2018-05-07 Mask detection method based on Gaussian mixture models

Country Status (1)

Country Link
CN (1) CN108830151A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110879972A (en) * 2019-10-24 2020-03-13 深圳云天励飞技术有限公司 Face detection method and device
CN112650461A (en) * 2020-12-15 2021-04-13 广州舒勇五金制品有限公司 Relative position-based display system

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101464950A (en) * 2009-01-16 2009-06-24 北京航空航天大学 Video human face identification and retrieval method based on on-line learning and Bayesian inference
CN104361326A (en) * 2014-11-18 2015-02-18 新开普电子股份有限公司 Method for distinguishing living human face
CN105426515A (en) * 2015-12-01 2016-03-23 小米科技有限责任公司 Video classification method and apparatus
CN106446772A (en) * 2016-08-11 2017-02-22 天津大学 Cheating-prevention method in face recognition system
CN107194985A (en) * 2017-04-11 2017-09-22 中国农业大学 A kind of three-dimensional visualization method and device towards large scene
CN107194376A (en) * 2017-06-21 2017-09-22 北京市威富安防科技有限公司 Mask fraud convolutional neural networks training method and human face in-vivo detection method
CN107844779A (en) * 2017-11-21 2018-03-27 重庆邮电大学 A kind of video key frame extracting method


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
GIRIJA CHETTY ET AL: "Liveness Verification in Audio-Video Speaker Authentication", 《10TH AUSTRALIAN INTERNATIONAL CONFERENCE ON SPEECH SCIENCE & TECHNOLOGY》 *
LIU WEIFENG ET AL: "Facial Expression Analysis Based on Gabor Features and Gaussian Mixture Models", 《Computer Engineering and Applications》 *
TANG XU: "SAR Image Retrieval Based on Gaussian Mixture Model Classification", 《China Master's Theses Full-text Database, Information Science and Technology》 *
QU YOUJIA: "Research on Key Frame Extraction Algorithms Based on SIFT Features", 《China Master's and Doctoral Theses Full-text Database (Master), Information Science and Technology》 *


Similar Documents

Publication Publication Date Title
Yang et al. Exposing GAN-synthesized faces using landmark locations
Huang et al. Tracknet: A deep learning network for tracking high-speed and tiny objects in sports applications
Li et al. Multiple-human parsing in the wild
Agarwal et al. Swapped! digital face presentation attack detection via weighted local magnitude pattern
Luo et al. Group sparsity and geometry constrained dictionary learning for action recognition from depth maps
CN109815826B (en) Method and device for generating face attribute model
Wang et al. Human action recognition by semilatent topic models
Vrigkas et al. Matching mixtures of curves for human action recognition
CN109815874A (en) A kind of personnel identity recognition methods, device, equipment and readable storage medium storing program for executing
CN111401144A (en) Escalator passenger behavior identification method based on video monitoring
CN102034107B (en) Unhealthy image differentiating method based on robust visual attention feature and sparse representation
Winarno et al. Multi-view faces detection using Viola-Jones method
Zhu et al. Action recognition in broadcast tennis video using optical flow and support vector machine
CN106845456A (en) A kind of method of falling over of human body monitoring in video monitoring system
CN108830151A (en) Mask detection method based on Gaussian mixture models
Mallet et al. Using deep learning to detecting deepfakes
CN101950448A (en) Detection method and system for masquerade and peep behaviors before ATM (Automatic Teller Machine)
CN108681928A (en) A kind of intelligent advertisement put-on method
CN104680189A (en) Pornographic image detection method based on improved bag-of-words model
Gunay et al. Facial age estimation based on decision level fusion of amm, lbp and gabor features
CN110008876A (en) A kind of face verification method based on data enhancing and Fusion Features
CN103971100A (en) Video-based camouflage and peeping behavior detection method for automated teller machine
Siddiquie et al. Recognizing plays in american football videos
Hu et al. CoverTheFace: face covering monitoring and demonstrating using deep learning and statistical shape analysis
Panicker et al. Cardio-pulmonary resuscitation (CPR) scene retrieval from medical simulation videos using local binary patterns over three orthogonal planes

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20181116