CN109558813A - AI deep face-swap video forensics method based on pulse signals - Google Patents

AI deep face-swap video forensics method based on pulse signals

Info

Publication number
CN109558813A
CN109558813A (application CN201811352507.8A)
Authority
CN
China
Prior art keywords
video
face
signal
pulse signal
power spectrum
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811352507.8A
Other languages
Chinese (zh)
Inventor
叶登攀
刘昌瑞
梅园
李世钰
江顺之
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan University WHU
Original Assignee
Wuhan University WHU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan University WHU filed Critical Wuhan University WHU
Priority to CN201811352507.8A priority Critical patent/CN109558813A/en
Publication of CN109558813A publication Critical patent/CN109558813A/en
Legal status: Pending (current)

Classifications

    • G06V 20/41 — Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06F 18/2411 — Classification techniques relating to the classification model based on the proximity to a decision surface, e.g. support vector machines
    • G06V 10/25 — Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V 40/161 — Human faces: detection; localisation; normalisation
    • G06V 40/168 — Human faces: feature extraction; face representation
    • G06V 40/15 — Biometric patterns based on physiological signals, e.g. heartbeat, blood flow

Abstract

The invention discloses an AI deep face-swap video forensics method based on pulse signals. Because the cardiovascular pulse wave propagating through the human body periodically stretches the blood-vessel walls, the light absorption of tissue containing many blood vessels fluctuates in synchrony, reflecting a regular pulse signal. During face-video capture these minute changes are invisible to the naked eye yet are recorded by an ordinary camera, whereas forged faces generated by AI methods destroy these regular, law-governed fluctuations. Exploiting this property, the invention trains a classifier with the machine-learning algorithm SVM to effectively identify the absence of the normal human pulse signal in realistic forged videos, typified by deep-learning face swaps, thereby achieving the goal of video forensics. The method requires neither digital watermark information in the video under test nor any additional detection hardware, and therefore has good application prospects.

Description

AI deep face-swap video forensics method based on pulse signals
Technical field
The invention belongs to the field of information security technology and relates to a forensics method based on pulse signals, and in particular to a pulse-signal-based forensics method for videos whose faces have been forged by deep AI face-swapping.
Background technique
With the continuous development of computer vision and image-processing techniques, deep-learning methods can now generate extremely realistic faces in video. Behind this technology lies unlimited application space, but certain abuses of it have sounded a security alarm: tampered videos spread over the Internet mislead the public, not only disturbing daily life but also seriously threatening social harmony and stability. Designing a reliable video forensics method is therefore extremely urgent.
To guarantee the security of multimedia content, academia has carried out a large amount of research on video forensics. Current techniques fall broadly into the following classes:
1. Active video forensics. Such techniques require the content provider to preprocess the video, e.g. by extracting a digest or embedding verification information; representative methods are digital signatures and video watermarking. These methods are unsuitable for forensic work on the huge volume of Internet video from diverse sources.
2. Passive video forensics. Such techniques carry out forensic work mainly by examining the compression-domain, temporal or spatial characteristics of the video under test. They are applicable to the huge volume of Internet video from diverse sources, and leave considerable room for exploration and improvement.
Summary of the invention
Addressing the characteristics and shortcomings of existing video-forgery forensics research, the present invention provides a forensics method based on pulse signals, aimed mainly at forged face videos generated with deep learning. The method requires neither digital watermark information in the video under test nor any additional detection hardware, and can effectively guarantee the security of multimedia content.
The technical solution adopted by the invention is an AI deep face-swap video forensics method based on pulse signals, characterised by comprising the following steps:
Step 1: collect videos containing real and forged faces, divide the video data into a training set and a test set, and preprocess each video to select a region of interest (ROI);
Step 2: extract the preprocessed data signal and obtain the corresponding power spectrum;
Step 3: select a feature vector according to the power spectrum and input the features into an SVM for model training;
Step 4: use the model obtained in step 3 to predict whether a video under test is a forged face video.
Because the cardiovascular pulse wave propagating through the human body periodically stretches the blood-vessel walls, the light absorption of tissue containing many blood vessels fluctuates in synchrony, reflecting a regular pulse signal. During face-video capture these minute changes are invisible to the naked eye yet are recorded by an ordinary camera, whereas forged faces generated by AI methods destroy these regular, law-governed fluctuations. Exploiting this property, the invention trains a classifier with the machine-learning algorithm SVM to effectively identify the absence of the normal human pulse signal in realistic forged videos, typified by deep-learning face swaps, thereby achieving the goal of video forensics.
Compared with existing forensics methods, the present invention has the following advantages:
1. It proposes a forensics method based on pulse signals. Although deep-learning methods can now forge face videos that are hard to distinguish, these methods destroy the pulse signal of a normal human body, which can therefore serve as forensic evidence;
2. The proposed method needs no complicated human-computer interaction and offers a better user experience;
3. The proposed method does not require the video under test to contain digital watermark information;
4. The proposed method needs no additional detection hardware, and is easy to operate and low in cost;
5. The proposed method uses an SVM classifier from machine learning, training the model on samples, which reduces labour cost while improving detection efficiency and accuracy.
Detailed description of the invention
Fig. 1 is the flow chart of the embodiment of the present invention;
Fig. 2 is a diagram of the signal-processing pipeline from raw signal to power spectrum in the embodiment;
Fig. 3 is the SVM training flow chart of the embodiment;
Fig. 4 is a diagram of the decision procedure of video forensics in the embodiment.
Specific embodiment
To help those of ordinary skill in the art understand and implement the present invention, it is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the implementation examples described here serve only to illustrate and explain the invention, not to limit it.
Referring to Fig. 1, the AI deep face-swap video forensics method based on pulse signals provided by the invention comprises the following steps:
Step 1: collect videos containing real and forged faces ([document 1]), divide the video data into a training set and a test set, and preprocess each video to select a region of interest (ROI);
The acquired real and forged video data are divided into a training set and a test set and preprocessed to obtain the region of interest (ROI). The specific implementation comprises the following sub-steps:
Step 1.1: following the conventional machine-learning treatment of data sets, take 2/3 to 4/5 of the samples as the training set and the remaining samples as the test set;
Step 1.2: preprocess the video using Haar features ([document 2]) to detect the face in the video. Because the blood volume pulse (BVP) signal used to estimate the pulse is unevenly distributed over the facial skin, according to the skin's ability to absorb light, the detected face is further processed using the detectMultiScale function in OpenCV to determine the forehead and cheeks as the required regions of interest (ROI).
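Step 1.2 specifies Haar-feature face detection but leaves open exactly how the forehead and cheek boxes are cut from the detected face. The sketch below (not part of the patent) illustrates that geometric step, assuming fixed fractions of the `(x, y, w, h)` box returned by OpenCV's `cv2.CascadeClassifier.detectMultiScale`; the fractions themselves are illustrative assumptions.

```python
def face_rois(x, y, w, h):
    """Derive forehead and cheek ROIs (x, y, w, h) from a detected face box.
    The fractional offsets below are illustrative assumptions, not values
    taken from the patent."""
    forehead = (x + round(0.25 * w), y + round(0.05 * h),
                round(0.50 * w), round(0.20 * h))
    left_cheek = (x + round(0.12 * w), y + round(0.50 * h),
                  round(0.25 * w), round(0.25 * h))
    right_cheek = (x + round(0.63 * w), y + round(0.50 * h),
                   round(0.25 * w), round(0.25 * h))
    return forehead, left_cheek, right_cheek
```

In practice the face box would come from `detectMultiScale` on a grayscale frame; only the cropping geometry is sketched here.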
Step 2: extract the preprocessed data signal and obtain the corresponding power spectrum;
Fig. 2 shows the process of extracting the preprocessed data signal and obtaining the corresponding power spectrum. The specific steps are as follows:
Step 2.1: separate each frame of the ROI image chosen in step 1 into its three primary colours to generate R, G and B channel images, then take the average grey value of all pixels in each channel as that frame's signal value, forming three raw signals: the raw pulse signals;
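Step 2.1 can be sketched as follows; this is a minimal illustration (not the patent's implementation), assuming OpenCV-style frames whose channels are ordered B, G, R.

```python
import numpy as np

def raw_rgb_signals(frames):
    """frames: iterable of HxWx3 ROI images in OpenCV's B,G,R channel order.
    Returns the per-frame mean intensity of the R, G and B channels as
    three 1-D arrays -- the three raw pulse traces of step 2.1."""
    means = np.array([f.reshape(-1, 3).mean(axis=0) for f in frames])
    return means[:, 2], means[:, 1], means[:, 0]  # r, g, b
```

Each trace has one sample per video frame, so its sampling rate equals the video frame rate.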
Step 2.2: detrend the raw pulse signals extracted in step 2.1 in order to observe the inherent characteristics of the pulse signal;
Step 2.3: separate pure raw pulse signals from the detrended signals of step 2.2 using the independent component analysis (ICA) method ([document 3]);
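Steps 2.2 and 2.3 can be sketched together. The patent only names "independent component analysis (ICA)", so scikit-learn's `FastICA` is used here as a stand-in, and linear detrending stands in for the unspecified detrending step.

```python
import numpy as np
from scipy.signal import detrend
from sklearn.decomposition import FastICA

def separate_pulse_sources(r, g, b, seed=0):
    """Remove the linear trend from each raw channel trace (step 2.2),
    then unmix the three traces with ICA (step 2.3). Returns an
    (n_frames, 3) array of estimated source signals."""
    x = np.column_stack([detrend(r), detrend(g), detrend(b)])
    ica = FastICA(n_components=3, random_state=seed, max_iter=1000)
    return ica.fit_transform(x)
```

One of the recovered components is then expected to carry the pulse; step 2.4's correlation-based screening selects among them.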
Step 2.4: screen the signals obtained in step 2.3 using correlation analysis;
Step 2.5: apply the fast Fourier transform (FFT) to the signals screened in step 2.4 to obtain the corresponding power spectra.
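Step 2.5's FFT power-spectrum computation can be sketched as follows (a minimal version, not the patent's implementation):

```python
import numpy as np

def power_spectrum(signal, fps):
    """One-sided FFT power spectrum of a real-valued trace sampled at
    fps Hz (the video frame rate). Returns (frequencies, power)."""
    sig = np.asarray(signal, dtype=float)
    sig = sig - sig.mean()                     # drop the DC component
    power = np.abs(np.fft.rfft(sig)) ** 2
    freqs = np.fft.rfftfreq(sig.size, d=1.0 / fps)
    return freqs, power
```

For a 30 fps video clip of N frames the spectral resolution is 30/N Hz, which determines how precisely the pulse peak can be located.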
Step 3: select a feature vector according to the power spectrum and input the features into an SVM for model training ([document 4]);
Step 3.1: select the feature vector according to the power spectrum;
The selected features are the pulse-signal spectral peaks P(r), P(g), P(b) of the R, G and B channels, and the sums S(r), S(g), S(b) of each channel's power spectrum over 0.7 Hz to 4 Hz (i.e. a pulse rate of 42 bpm to 240 bpm).
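The six features of step 3.1 can be computed directly from the spectra, restricting attention to the 0.7-4 Hz band (42-240 bpm, the plausible human pulse range). A minimal sketch:

```python
import numpy as np

def band_features(freqs, spec_r, spec_g, spec_b, lo=0.7, hi=4.0):
    """Per-channel spectral peak P(.) and per-channel power sum S(.)
    over the lo-hi Hz band, in the order
    [P(r), S(r), P(g), S(g), P(b), S(b)]."""
    band = (freqs >= lo) & (freqs <= hi)
    feats = []
    for spec in (spec_r, spec_g, spec_b):
        feats.extend([spec[band].max(), spec[band].sum()])
    return np.array(feats)
```

The intuition is that a real face shows a pronounced, concentrated peak in this band, while an AI-forged face does not.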
Step 3.2: input the features into the SVM for model training;
The features proposed in step 3.1 are input into the SVM for model training; the training process is shown in Fig. 3. The training goal is to obtain a hyperplane that divides the video samples into the two classes, real and forged, with maximum margin. The formula is expressed as follows:

min over ω, b of (1/2)‖ω‖²,  subject to  γ_i(ω·χ_i + b) ≥ 1,  i = 1, …, n

where ω is the normal vector and b the offset term, together determining the distance between the hyperplane and the origin, (χ_i, γ_i) is the corresponding data set, and n is the number of video samples.
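The max-margin training of step 3.2 maps directly onto a linear SVM. The sketch below uses synthetic stand-in features (not data from the patent), under the assumption that real videos show strong in-band spectral power and forgeries show weak, irregular power:

```python
import numpy as np
from sklearn.svm import SVC

# Synthetic six-dimensional pulse-band features: two separable clusters.
rng = np.random.default_rng(0)
real = rng.normal(loc=5.0, scale=0.5, size=(40, 6))   # strong pulse band
fake = rng.normal(loc=1.0, scale=0.5, size=(40, 6))   # weak pulse band
X = np.vstack([real, fake])
y = np.array([1] * 40 + [0] * 40)                     # 1 = real, 0 = forged

# A linear SVC learns the separating hyperplane (ω, b) with maximum margin.
clf = SVC(kernel="linear").fit(X, y)
```

After fitting, `clf.coef_` and `clf.intercept_` correspond to the normal vector ω and offset b of the hyperplane in the formula above.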
Step 4: use the model obtained in step 3 to predict whether a video under test is a forged face video;
As shown in Fig. 4, the specific steps are as follows:
Step 4.1: first preprocess the video under test and select the region of interest;
Step 4.2: extract the signal of the preprocessed video, chiefly: primary-colour separation, detrending, blind-signal separation, signal screening and power-spectrum extraction;
Step 4.3: according to the power spectrum, choose the pulse-signal spectral peaks P(r), P(g), P(b) of the R, G and B channels and the sums S(r), S(g), S(b) of each channel's power spectrum over 0.7 Hz to 4 Hz (i.e. a pulse rate of 42 bpm to 240 bpm);
Step 4.4: input the features into the classifier trained in step 3, compute the confidence and compare it with a threshold, thereby determining whether the video under test is a forged face video.
If the confidence is greater than the threshold, the video is a real face video; otherwise, it is a forged face video.
It should be understood that the parts not elaborated in this specification belong to the prior art.
Addressing the characteristics and shortcomings of the above video-forgery forensics methods, the present invention proposes an AI deep face-swap video forensics method based on pulse signals, aimed primarily at forensics on forged faces generated with deep learning. Because the cardiovascular pulse wave propagating through the human body periodically stretches the blood-vessel walls, the internal blood volume and blood-oxygen content change accordingly, so the light absorption of tissue containing many blood vessels fluctuates in synchrony. This is what enables photoplethysmography (PPG) to measure the heart rate of a face in video without contact ([document 5]), whereas deep AI forged-face videos, typified by those synthesised with generative adversarial networks (GAN) ([document 6]), lack the regular, law-governed pulse information possessed by real faces. The method can therefore serve as a new approach to face-forgery forensics. Moreover, it requires neither digital watermark information in the video under test nor any additional detection hardware, and thus has good application prospects.
[1] Thies J, Zollhofer M, Stamminger M, et al. Face2Face: Real-Time Face Capture and Reenactment of RGB Videos[C]//Computer Vision and Pattern Recognition. IEEE, 2016: 1-2.
[2] Mita T, Kaneko T, Hori O. Joint Haar-like features for face detection[C]//Tenth IEEE International Conference on Computer Vision. 2005: 1619-1626.
[3] Alghoul K, Alharthi S, Osman H A, et al. Heart Rate Variability Extraction From Videos Signals: ICA vs. EVM Comparison[J]. IEEE Access, 2017, 5(99): 4711-4719.
[4] Bennett K P, Campbell C. Support vector machines: hype or hallelujah[J]. ACM SIGKDD Explorations Newsletter, 2000, 2(2): 1-13.
[5] Verkruysse W, Svaasand L O, Nelson J S. Remote plethysmographic imaging using ambient light[J]. Optics Express, 2008, 16(26): 21434-45.
[6] Korshunova I, Shi W, Dambre J, et al. Fast Face-Swap Using Convolutional Neural Networks[J]. 2016: 3697-3705.
It should be understood that the above description of the preferred embodiment is relatively detailed and therefore should not be regarded as limiting the scope of patent protection of the invention. Under the inspiration of the present invention, those skilled in the art may make substitutions or variations without departing from the scope protected by the claims, all of which fall within the protection scope of the invention. The claimed scope of the invention is determined by the appended claims.

Claims (5)

  1. An AI deep face-swap video forensics method based on pulse signals, characterised by comprising the following steps:
    Step 1: collect videos containing real and forged faces, divide the video data into a training set and a test set, and preprocess each video to obtain a region of interest (ROI);
    Step 2: extract the preprocessed data signal and obtain the corresponding power spectrum;
    Step 3: select a feature vector according to the power spectrum and input the features into an SVM for model training;
    Step 4: use the model obtained in step 3 to predict whether a video under test is a forged face video.
  2. The AI deep face-swap video forensics method based on pulse signals according to claim 1, characterised in that in step 1 the acquired real and forged video data are divided into a training set and a test set and preprocessed to obtain the region of interest (ROI), the specific implementation comprising the following sub-steps:
    Step 1.1: take 2/3 to 4/5 of the samples of the data set as the training set and the remaining samples as the test set;
    Step 1.2: preprocess the video using Haar features to detect the face in the video; because the blood volume pulse (BVP) signal used to estimate the pulse is unevenly distributed over the facial skin, according to the skin's ability to absorb light, further process the detected face to obtain the region of interest (ROI).
  3. The AI deep face-swap video forensics method based on pulse signals according to claim 1, characterised in that the specific implementation of step 2 comprises the following sub-steps:
    Step 2.1: separate each frame of the ROI image chosen in step 1 into its three primary colours to generate R, G and B channel images, then take the average grey value of all pixels in each channel as that frame's signal value, forming three raw signals: the raw pulse signals;
    Step 2.2: detrend the raw pulse signals extracted in step 2.1 in order to observe the inherent characteristics of the pulse signal;
    Step 2.3: separate pure raw pulse signals from the detrended signals of step 2.2;
    Step 2.4: apply a more refined signal screening to the signals obtained in step 2.3;
    Step 2.5: obtain the corresponding power spectra from the signals screened in step 2.4.
  4. The AI deep face-swap video forensics method based on pulse signals according to claim 1, characterised in that the specific implementation of step 3 comprises the following sub-steps:
    Step 3.1: select the feature vector according to the power spectrum;
    the selected features are the pulse-signal spectral peaks P(r), P(g), P(b) of the R, G and B channels, and the sums S(r), S(g), S(b) of each channel's power spectrum over 0.7 Hz to 4 Hz;
    Step 3.2: input the features into the SVM for model training;
    the features proposed in step 3.1 are input into the SVM for model training, so as to obtain a hyperplane that divides the video samples into the two classes, real and forged, with maximum margin; the formula is expressed as follows:

    min over ω, b of (1/2)‖ω‖²,  subject to  γ_i(ω·χ_i + b) ≥ 1,  i = 1, …, n

    where ω is the normal vector and b the offset term, together determining the distance between the hyperplane and the origin, (χ_i, γ_i) is the corresponding data set, and n is the number of video samples.
  5. The AI deep face-swap video forensics method based on pulse signals according to claim 1, characterised in that the specific implementation of step 4 comprises the following sub-steps:
    Step 4.1: first preprocess the video under test and select the region of interest;
    Step 4.2: extract the signal of the preprocessed video, chiefly: primary-colour separation, detrending, blind-signal separation, signal screening and power-spectrum extraction;
    Step 4.3: according to the power spectrum, choose the pulse-signal spectral peaks P(r), P(g), P(b) of the R, G and B channels and the sums S(r), S(g), S(b) of each channel's power spectrum over 0.7 Hz to 4 Hz;
    Step 4.4: input the features into the classifier trained in step 3, compute the confidence and compare it with a threshold, thereby determining whether the video under test is a forged face video.
CN201811352507.8A 2018-11-14 2018-11-14 AI deep face-swap video forensics method based on pulse signals Pending CN109558813A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811352507.8A CN109558813A (en) 2018-11-14 2018-11-14 AI deep face-swap video forensics method based on pulse signals

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811352507.8A CN109558813A (en) 2018-11-14 2018-11-14 AI deep face-swap video forensics method based on pulse signals

Publications (1)

Publication Number Publication Date
CN109558813A (en) 2019-04-02

Family

ID=65866229

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811352507.8A Pending CN109558813A (en) AI deep face-swap video forensics method based on pulse signals

Country Status (1)

Country Link
CN (1) CN109558813A (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150193669A1 (en) * 2011-11-21 2015-07-09 Pixart Imaging Inc. System and method based on hybrid biometric detection
CN105989357A (en) * 2016-01-18 2016-10-05 合肥工业大学 Human face video processing-based heart rate detection method
CN106473750A (en) * 2016-10-08 2017-03-08 西安电子科技大学 Personal identification method based on photoplethysmographic optimal period waveform
CN107506713A (en) * 2017-08-15 2017-12-22 哈尔滨工业大学深圳研究生院 Living body faces detection method and storage device
CN108038456A (en) * 2017-12-19 2018-05-15 中科视拓(北京)科技有限公司 A kind of anti-fraud method in face identification system


Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
GUILLAUME GIBERT et al.: "Face detection method based on photoplethysmography", 2013 10th IEEE International Conference on Advanced Video and Signal Based Surveillance *
KARIM ALGHOUL et al.: "Heart Rate Variability Extraction From Videos Signals: ICA vs. EVM Comparison", IEEE Access *
MING-ZHER POH et al.: "Advancements in Noncontact, Multiparameter Physiological Measurements Using a Webcam", IEEE Transactions on Biomedical Engineering *
MING-ZHER POH et al.: "Non-contact, automated cardiac pulse measurements using video imaging and blind source separation", https://doi.org/10.1364/OE.18.010762 *
TAKESHI MITA et al.: "Joint Haar-like Features for Face Detection", Tenth IEEE International Conference on Computer Vision *
WANG Liwen et al.: "Automatic cloud image recognition and extraction method based on support vector machines", Acta Astronomica Sinica *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110279406A (en) * 2019-05-06 2019-09-27 Suning Financial Services (Shanghai) Co., Ltd. Camera-based contactless pulse-rate measurement method and device
CN114402359A (en) * 2019-07-18 2022-04-26 Nuralogix Corporation System and method for detecting synthesized videos of humans
US11676690B2 2019-07-18 2023-06-13 Nuralogix Corporation System and method for detection of synthesized videos of humans
CN114402359B (en) * 2019-07-18 2023-11-17 Nuralogix Corporation System and method for detecting synthesized videos of humans
CN111311549A (en) * 2020-01-20 2020-06-19 National University of Defense Technology Image authentication method, system and storage medium
CN111797735A (en) * 2020-06-22 2020-10-20 OneConnect Smart Technology Co., Ltd. (Shenzhen) Face video recognition method, device, equipment and storage medium
CN111783644A (en) * 2020-06-30 2020-10-16 Baidu Online Network Technology (Beijing) Co., Ltd. Detection method, device, equipment and computer storage medium
CN113963427A (en) * 2021-12-22 2022-01-21 Zhejiang Gongshang University Method and system for rapid liveness detection
CN115410260A (en) * 2022-09-05 2022-11-29 Institute of Automation, Chinese Academy of Sciences Face-forgery discrimination and forensics method and device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN109558813A (en) AI deep face-swap video forensics method based on pulse signals
Jee et al. Liveness detection for embedded face recognition system
CN108921041A (en) A liveness detection method and device based on RGB and IR binocular cameras
CN111967427A (en) Fake face video identification method, system and readable storage medium
CN105518710B (en) Video detecting method, video detection system and computer program product
CN108446690B (en) Human face in-vivo detection method based on multi-view dynamic features
CN105975938A (en) Smart community manager service system with dynamic face identification function
Monwar et al. Pain recognition using artificial neural network
CN110287918A (en) Vivo identification method and Related product
CN107862298B (en) Winking living body detection method based on infrared camera device
CN109409343A (en) A face recognition method based on liveness detection
Das et al. A framework for liveness detection for direct attacks in the visible spectrum for multimodal ocular biometrics
Shivakumara et al. A new RGB based fusion for forged IMEI number detection in mobile images
CN117095471B (en) Face counterfeiting tracing method based on multi-scale characteristics
CN106709480B (en) Intersected human face recognition methods based on weighed intensities PCNN models
CN109522865A (en) A kind of characteristic weighing fusion face identification method based on deep neural network
Bhaskar et al. Advanced algorithm for gender prediction with image quality assessment
HR et al. A novel hybrid biometric software application for facial recognition considering uncontrollable environmental conditions
Sikander et al. Facial feature detection: A facial symmetry approach
CN110135362A (en) A fast face recognition method based on infrared cameras
CN112861588A (en) Living body detection method and device
Das et al. Face liveness detection based on frequency and micro-texture analysis
El-Sayed et al. An identification system using eye detection based on wavelets and neural networks
Salih et al. Color model based convolutional neural network for image spam classification
Yu et al. Research on face anti-spoofing algorithm based on image fusion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190402