CN110163126A - Face-based liveness detection method, device and equipment - Google Patents

Face-based liveness detection method, device and equipment

Info

Publication number
CN110163126A
CN110163126A (Application CN201910370820.2A)
Authority
CN
China
Prior art keywords
heart rate
signal
face
detection region
pixel data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910370820.2A
Other languages
Chinese (zh)
Inventor
佟金广
张玏
李骊
Current Assignee
Beijing HJIMI Technology Co Ltd
Original Assignee
Beijing HJIMI Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing HJIMI Technology Co Ltd
Priority to CN201910370820.2A
Publication of CN110163126A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168: Feature extraction; Face representation
    • G06V40/172: Classification, e.g. identification
    • G06V40/40: Spoof detection, e.g. liveness detection
    • G06V40/45: Detection of the body part being alive

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The invention discloses a face-based liveness detection method, device and equipment, belonging to the technical field of face recognition. The method includes: selecting heart-rate detection regions based on a face image; extracting the heart-rate value of each heart-rate detection region from the pixel data of that region; and composing the extracted heart-rate values into a heart-rate feature vector, which is input into a trained classifier for classification to obtain the liveness detection result. The invention can effectively improve the accuracy and reliability of existing liveness detection methods and can effectively resist three-dimensional attacks such as 3D face masks.

Description

Face-based liveness detection method, device and equipment
Technical field
The present invention relates to the technical field of face recognition, and more particularly to a face-based liveness detection method, device and equipment.
Background art
With the rapid development of artificial intelligence technology, people experience more and more of the convenience that artificial intelligence brings to daily life. Face recognition, one of the most widely used artificial intelligence technologies, has been broadly applied in authentication scenarios such as finance and security. Existing face recognition systems no longer simply match the collected face information against a database; a liveness detection function has been added to them. Liveness detection analyzes the features of the images fed into the system to distinguish whether they come from the legitimate user or from an illegal user attempting intrusion through camouflage. At the present level of computer information technology, the face images of an attack target are easy to obtain, so a liveness detection module is essential in a face recognition system. At present, the means of intruding into a face recognition system can be roughly divided into two classes. On the one hand, attacks are mounted mainly with photos or videos of the attack target's face; such intrusions are mainly carried out through two-dimensional media such as electronic screens, and we collectively call them planar attacks. On the other hand, intrusions carried out with 3D face masks or head models differ significantly from planar attacks, and we call them three-dimensional attacks.
Existing liveness detection methods already achieve good results against planar attacks such as photos and videos, but in complex environments or against high-quality attack means many shortcomings remain, embodied in the following aspects. (1) The robustness of existing liveness detection methods is poor: most detection methods only target one or several specific attack modes. Once the attack mode changes, the assumptions originally used to design the detection features no longer hold, making the detection result inaccurate. For example, detection methods that extract facial actions as the classification feature assume that the face of a non-live attack object exhibits essentially no non-rigid motion; this assumption holds for attacks such as photos and head models, but fails for face-video attacks. (2) Against three-dimensional attacks, especially 3D face-mask attacks, there is currently no broadly effective liveness detection method. Realistic 3D face masks on the market are practically indistinguishable from real faces in shape, material, detail and other aspects; sometimes even the human eye struggles to tell them apart. Naturally, liveness detection methods that extract texture, motion or depth features are helpless against attacks of this quality. (3) Some liveness detection methods require additional data acquisition tools such as depth cameras or infrared cameras; these additional hardware requirements raise the cost of, and the barrier to using, a face detection system. (4) In existing heart-rate-based liveness detection methods, the heart-rate extraction process and the classification judgment are too crude: heart-rate extraction is highly susceptible to ambient lighting and to motion of the object under detection, and the judgment result cannot be fully guaranteed.
Summary of the invention
In view of the deficiencies of the prior art, the object of the present invention is to propose an accurate, reliable and robust liveness detection method, device and equipment.
One aspect of the present invention provides a face-based liveness detection method, the method comprising:
selecting heart-rate detection regions based on a face image; extracting the heart-rate value of each heart-rate detection region from the pixel data of that region; and composing the extracted heart-rate values into a heart-rate feature vector, which is input into a trained classifier for classification to obtain the liveness detection result;
wherein extracting the heart-rate value of each heart-rate detection region from the pixel data of the heart-rate detection region specifically includes: extracting the pixel data of the current heart-rate detection region and forming a pixel data sequence after multiple frames are acquired continuously; separating, by a signal-separation method, the AC signal containing the heart-rate information from the generated pixel data sequence; and obtaining, by a frequency-domain transform method, the characteristic frequency of maximum amplitude in the AC signal as the heart-rate value of the current heart-rate detection region.
Preferably, selecting heart-rate detection regions based on the face image includes: detecting face keypoints from the face image, and using the face keypoints to select several polygonal regions from the face image as the heart-rate detection regions, the polygonal regions including a forehead region and cheek regions.
Optionally, after the face keypoints are detected, the method further includes performing face pose correction, according to the face keypoints, on the face image containing the face keypoints;
in this case, selecting several polygonal regions from the face image as heart-rate detection regions specifically means selecting several polygonal regions from the corrected face image as the heart-rate detection regions.
Preferably, separating the AC signal containing the heart-rate information from the generated pixel data sequence by the signal-separation method specifically includes: separating out the light-intensity component and the specular-reflection component from the AC signal using skin-color standardized mapping and vector-space projection, thereby obtaining the AC signal containing the heart-rate information.
Preferably, separating the AC signal containing the heart-rate information from the generated pixel data sequence by the signal-separation method specifically includes:
A1. calculating the AC signal of the pixel values from the pixel data sequence, the AC signal of the pixel values including a light-intensity component, a specular-reflection component and a heart-rate component;
A2. defining a standardized skin-color vector, obtaining a standardized mapping matrix from the standardized skin-color vector, and mapping the AC signal of the pixel values into the standardized vector space using the standardized mapping matrix;
A3. projecting the AC signal of the pixel values, using a projection matrix, onto two directions in the plane perpendicular to the white-light vector;
A4. tuning a linear combination of the AC signals in the two directions using a tuning parameter to obtain the AC signal containing the heart-rate information.
In another aspect, the present application further provides a face-based liveness detection device, which includes:
a heart-rate detection region selection module, configured to select heart-rate detection regions based on a face image;
a heart-rate value extraction module, configured to extract the heart-rate value of each heart-rate detection region from the pixel data of the heart-rate detection regions selected by the heart-rate detection region selection module;
a classification module, configured to compose the heart-rate values extracted by the heart-rate value extraction module into a heart-rate feature vector and input it into the trained classifier for classification to obtain the liveness detection result.
Preferably, the above device further includes an image acquisition module for obtaining the face image.
Preferably, the heart-rate value extraction module specifically includes:
a first extraction unit, configured to extract the pixel data of the image in the heart-rate detection region;
a judging unit, configured to judge whether the pixel data extracted by the first extraction unit forms a pixel data sequence of the preset frame count, and if so, to trigger the second extraction unit, otherwise to continue acquiring pixel data until a pixel data sequence of the preset frame count is formed;
a second extraction unit, configured to extract, from the pixel data sequence corresponding to each heart-rate detection region, the heart-rate value corresponding to that region.
Preferably, the second extraction unit is specifically configured to: calculate the AC signal of the pixel values from the pixel data sequence, the AC signal including a light-intensity component, a specular-reflection component and a heart-rate component; define a standardized skin-color vector, obtain a standardized mapping matrix from it, and map the AC signal of the pixel values into the standardized vector space using the standardized mapping matrix; project the AC signal of the pixel values, using a projection matrix, onto two directions in the plane perpendicular to the white-light vector; tune a linear combination of the AC signals in the two directions using a tuning parameter to obtain the AC signal containing the heart-rate information; and apply a Fourier transform to the obtained AC signal containing the heart-rate information to obtain its frequency-domain signal, find the frequency corresponding to the point of maximum amplitude, and thereby obtain the heart-rate value.
In another aspect, the present application provides a face-based liveness detection equipment, including an image acquisition device and a processor;
the image acquisition device is configured to acquire color images and send them to the processor;
the processor is configured to run a computer program which, when running, performs the face-based liveness detection method described above.
The present application has the following advantages: for liveness detection, it proposes a liveness detection method based on contactless heart-rate measurement, which can effectively improve the accuracy and reliability of existing liveness detection methods and can effectively resist three-dimensional attacks such as 3D face masks.
Brief description of the drawings
Fig. 1 is a flowchart of a face-based liveness detection method in an embodiment of the present application;
Fig. 2 is a schematic diagram of light being reflected at the skin surface in an embodiment of the present application;
Fig. 3 is a flowchart of a method for extracting a heart-rate value from a heart-rate detection region in an embodiment of the present application;
Fig. 4 is a flowchart of a face-based liveness detection method in another embodiment of the present application;
Fig. 5 is a schematic diagram of the structure of a face-based liveness detection device in an embodiment of the present application;
Fig. 6 is a schematic diagram of the structure of a face-based liveness detection equipment in an embodiment of the present application.
Specific embodiment
In order to describe the technical solutions in the embodiments of the present application or in the prior art more clearly, the drawings needed for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments recorded in the present application; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
The present application relates to a face-based liveness detection method, which uses contactless face-based heart-rate measurement to achieve the purpose of liveness detection. The method analyzes consecutive frames to calculate the heart-rate information at key positions of the face in the selected images, and classifies the heart-rate information as a feature vector, thereby achieving the purpose of liveness detection.
Embodiment one
The specific implementation process of this embodiment is as follows. First, consecutive frames of image data are obtained by the image acquisition module, and the face bounding box and face keypoint positions are extracted from the acquired image data. Then, several regions under detection are selected on the image as heart-rate detection regions according to the face bounding box and keypoint positions, and the pixel data of the selected heart-rate detection regions are stored. Multiple frames are acquired continuously, the time interval between adjacent images is calculated from the frame rate, and a pixel data sequence is formed. The AC signal containing the heart-rate information is separated from the sequence using signal-separation techniques, and the frequency feature is extracted by frequency-domain transform processing to obtain the heart-rate value. Finally, the heart-rate values of the several heart-rate detection regions are combined into a feature vector and input into the trained feature classifier, which achieves the purpose of liveness detection.
Referring to Fig. 1, a face-based liveness detection method provided by an embodiment of the present application comprises:
Step S1: selecting heart-rate detection regions based on a face image;
In this embodiment, as a possible implementation, image acquisition is performed with a color camera. Face detection and face keypoint localization are performed on each acquired frame to obtain the face bounding box and the face keypoint positions, and a face image containing 68 face keypoints P(i) (i = 1, 2, 3, ..., 68) is cropped from the original acquired image using the face bounding box.
Specifically, heart-rate detection regions are selected based on the face image: using the above 68 face keypoints, several polygonal regions, such as the forehead and cheek regions, are selected from the face image as the regions under detection, i.e. the heart-rate detection regions. This method of selecting regions with face keypoints can effectively avoid regions such as the eyes and mouth, thereby minimizing the influence of non-rigid facial motion.
As a preferred implementation, after the face image containing the face keypoints is obtained in this step, the face image can first be corrected before the heart-rate detection regions are selected. Those skilled in the art will appreciate that there is a transformation relation between the pre-calibrated face keypoints of the standard pose (for example, 5 pre-calibrated keypoints: left eye, right eye, nose tip, left mouth corner and right mouth corner) and the face keypoints of the n-th frame, where T is the image transformation matrix. By correcting the detected face with the transformation matrix T, it can be transformed to the standard face pose, which eliminates the influence of rigid face motion on the detection result.
Step S2: extracting the heart-rate value of each heart-rate detection region from the pixel data of that region;
Fig. 2 shows a schematic diagram of light being reflected at the skin surface. The absorptance of light by the skin and tissue remains essentially constant, but the blood volume in the arteries changes with the heartbeat cycle, causing the arteries' absorptance of light to vary periodically as well. Therefore, by detecting the signal in the reflected light that varies with the pulse cycle, heart-rate information can be obtained.
The implementation of this step is shown in Fig. 3 and specifically includes:
Step 101: extracting the pixel data of the heart-rate detection region and, after multiple frames are acquired continuously, generating a pixel data sequence according to the image frame rate;
Step 102: separating the AC signal containing the heart-rate information from the generated pixel data sequence using a signal-separation method;
Step 103: obtaining, by a frequency-domain transform method, the characteristic frequency of maximum amplitude in the AC signal as the heart-rate value of the heart-rate detection region.
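Steps 101-103 can be sketched in Python as follows. This is a minimal illustration assuming a NumPy environment: the patent's full signal-separation step (Step 102) is omitted, so the function operates on an already-separated AC signal and shows only the frequency-domain extraction of Step 103. The function name and the 0.7-4.0 Hz search band are illustrative choices, not taken from the patent.

```python
import numpy as np

def heart_rate_from_sequence(ac_signal, fps):
    """Step 103 (simplified): given the AC signal of one heart-rate detection
    region and the camera frame rate, return the dominant frequency in bpm."""
    n = len(ac_signal)
    # windowed FFT amplitude spectrum of the AC signal
    spectrum = np.abs(np.fft.rfft(ac_signal * np.hanning(n)))
    freqs = np.fft.rfftfreq(n, d=1.0 / fps)
    # restrict the peak search to a plausible human heart-rate band (42-240 bpm)
    band = (freqs >= 0.7) & (freqs <= 4.0)
    f0 = freqs[band][np.argmax(spectrum[band])]
    return 60.0 * f0

# usage: a synthetic 72 bpm pulse sampled at 30 fps for 10 seconds
fps, t = 30.0, np.arange(300) / 30.0
pulse = 0.01 * np.cos(2 * np.pi * 1.2 * t)   # 1.2 Hz = 72 bpm
print(round(heart_rate_from_sequence(pulse, fps)))  # prints 72
```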
Step S3: composing the extracted heart-rate values into a heart-rate feature vector and inputting it into the trained classifier for classification to obtain the liveness detection result.
It can be understood that this embodiment trains the classifier using machine-learning or deep-learning methods.
Previous liveness detection methods essentially adopt the strategy of finding visual features in the face image that can distinguish a live body from a non-live one. However, such visual features can usually be deceived by sufficiently perfect camouflage. The liveness detection method of the embodiments of the present application proposes a new approach: using a physiological signal, the human heart rate, to distinguish live from non-live. This detection feature captures the essential difference between live and non-live bodies, making the method hard to deceive with three-dimensional attack modes such as 3D face masks and head models.
During detection, the liveness detection method of the embodiments of the present application performs face detection and face keypoint detection with a frame-by-frame analysis method, and performs face pose correction using the face keypoints, overcoming the influence of some rigid face motions during detection. When selecting the heart-rate detection regions, regions with frequent motion on the face, such as the eyes and mouth, are avoided, preventing the adverse effect of non-rigid facial motion on detection. In the heart-rate extraction method, the mixed AC signal components containing the heart-rate signal are derived by theoretical analysis; the AC signal containing the heart-rate information is precisely separated from the mixed AC signal using skin-color standardized mapping and vector-space projection, and the heart-rate information of each detection region is then successfully extracted by frequency-domain processing.
In addition, the liveness detection method of the embodiments of the present application places no special requirements on the image acquisition hardware. Compared with other detection methods based on infrared images or depth images, the detection system is simplified, which is conducive to promotion and application.
Embodiment two
Another face-based liveness detection method provided by an embodiment of the present application is executed by a face-based liveness detection equipment, which includes a camera for acquiring image data and a processor for completing liveness detection based on the acquired image data. It can be understood that this embodiment uses a pre-designed face detector, a pre-designed face keypoint detector, and a pre-trained liveness detection classifier.
As shown in Fig. 4, the method provided by this embodiment comprises the following steps:
Step 201: acquiring image data;
In this embodiment, a color camera is preferably used to acquire the image data.
Step 202: performing face detection and face keypoint localization on each acquired frame;
Face detection and face keypoint detection are performed on each frame acquired by the color camera to obtain the face bounding box and the face keypoint positions, and a face image containing 68 face keypoints P(i) (i = 1, 2, 3, ..., 68) is cropped from the original image using the face bounding box. It can be understood that the face detection and face keypoint detection of this step can be completed with any face detector and face keypoint detector that those skilled in the art can design.
Among face keypoint detection methods, this scheme uses 68 keypoints; detection methods with other numbers of points can also be used, 98-point and 108-point keypoint detection being common.
Step 203: judging whether a face is detected; if so, proceeding to the next step, otherwise returning to Step 201;
Specifically, if a face bounding box is detected in Step 202 with the pre-designed face detector, a face is considered detected; otherwise, no face is detected.
Step 204: performing face pose correction on the face image;
In order to exclude rigid face motion during detection, the face needs to be corrected using the face keypoints. For N consecutive frames acquired by the camera, N face images and N groups of face keypoints P(i)_n (n = 1, 2, ..., N) are obtained by frame-by-frame detection. From the 68 face keypoints, 5 reference face keypoints can be computed: the left eye P_le, the right eye P_re, the nose tip P_nose, and the left and right mouth corners P_lm, P_rm. The calculation is:
P_le = (P(43) + P(44) + P(45) + P(46) + P(47) + P(48)) / 6    Formula (1)
P_re = (P(37) + P(38) + P(39) + P(40) + P(41) + P(42)) / 6    Formula (2)
P_nose = P(31)    Formula (3)
P_lm = (P(55) + P(65)) / 2    Formula (4)
P_rm = (P(49) + P(61)) / 2    Formula (5)
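Formulas (1)-(5) translate directly into code. A minimal sketch follows; the indices use the patent's 1-based numbering of the 68 keypoints, and the (68, 2) array layout and function name are assumptions made for illustration:

```python
import numpy as np

def five_landmarks(P):
    """Formulas (1)-(5): reduce 68 keypoints to 5 reference points.
    P is a (68, 2) array of (x, y) coordinates; the formulas use
    1-based indices, hence the P[i - 1] lookup."""
    p = lambda i: P[i - 1]
    left_eye  = sum(p(i) for i in range(43, 49)) / 6.0   # Formula (1)
    right_eye = sum(p(i) for i in range(37, 43)) / 6.0   # Formula (2)
    nose      = p(31)                                    # Formula (3)
    left_mouth  = (p(55) + p(65)) / 2.0                  # Formula (4)
    right_mouth = (p(49) + p(61)) / 2.0                  # Formula (5)
    return np.stack([left_eye, right_eye, nose, left_mouth, right_mouth])
```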
Assume that P_o denotes the 5 pre-calibrated face keypoints of the standard pose. Between the standard keypoints P_o and the face keypoints P_n of the n-th frame there is the transformation relation:
P_o = T·P_n    Formula (6)
In Formula (6), T is the image transformation matrix. By correcting the detected face with the transformation matrix T, it can be transformed to the standard face pose, eliminating the influence of rigid face motion on the detection result.
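Formula (6) leaves open how T is obtained. One common choice, sketched below under the assumption that T is a 2x3 affine matrix acting on homogeneous coordinates, is to solve for it by least squares over the 5 point correspondences; this is an illustrative implementation, not necessarily the patentee's:

```python
import numpy as np

def estimate_T(P_n, P_o):
    """Formula (6): find a 2x3 affine matrix T with P_o ~= T * [P_n; 1],
    solved by least squares over the keypoint correspondences."""
    A = np.hstack([P_n, np.ones((len(P_n), 1))])   # (5, 3) homogeneous coords
    T, *_ = np.linalg.lstsq(A, P_o, rcond=None)    # solution is (3, 2)
    return T.T                                     # return as (2, 3)

def apply_T(T, P):
    """Apply the 2x3 affine correction T to an (n, 2) point array."""
    return np.hstack([P, np.ones((len(P), 1))]) @ T.T
```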
Step 205: selecting heart-rate detection regions from the corrected face image, and extracting the pixel data of the regions;
The key to heart-rate detection is to recover the pulse signal containing the pulse-wave information from the image data. This signal is very weak and is easily disturbed by non-rigid face motion such as blinking and smiling. Using the 68 face keypoints, several polygonal regions, such as the forehead and cheek regions, are selected from the corrected face as the regions under detection, i.e. the heart-rate detection regions. This method of selecting regions with face keypoints can effectively avoid regions where motion frequently occurs, such as the eyes and mouth, thereby minimizing the influence of non-rigid facial motion.
Step 206: judging whether the extracted pixel data forms a pixel data sequence of the preset frame count; if so, proceeding to the next step, otherwise returning to Step 201;
In general, the preset frame count is not less than 50 frames; that is, multiple frames are acquired continuously and the pixel data sequence is generated according to the frame rate obtained from the images.
Step 207: extracting a heart-rate value from the pixel data sequence corresponding to each heart-rate detection region, composing a heart-rate feature vector and inputting it into the classifier for classification to obtain the liveness detection result.
As a preferred implementation, this embodiment uses a color-space vector-projection method to extract the AC signal containing the heart-rate information. To form the heart-rate feature vector, this embodiment combines the heart-rate values of the individual heart-rate detection regions into a heart-rate feature vector, which is then input into an SVM classifier; the obtained detection result is the liveness detection result.
This step is implemented as follows:
First, the AC signal of the pixel values is calculated from the pixel data sequence; the AC signal of the pixel values includes a light-intensity component, a specular-reflection component and a heart-rate component.
Under normal conditions, the pixel value of a point on the image can be described in the RGB color mode. The value of a pixel on the face image can be expressed as:

C_k(t) = u_c·I_0·c_0 + u_c·I_0·c_0·i(t) + u_s·I_0·s(t) + u_p·I_0·p(t) + v_n(t)    Formula (7)

In Formula (7), the three rows of C_k(t) represent the three color channels of the k-th pixel on the image; i(t) represents the variation of the illumination intensity, mainly influenced by the light source itself and by the distances between the light source, the object and the image sensor; s(t) represents the change in specular reflectance caused by face motion; p(t) is the pulse signal, i.e. the reflectance change caused by the pulse; and v_n(t) represents the output error of the image sensor itself. During detection, the pixel values within each detection region are averaged, so the error term introduced by the image sensor can be neglected; that is, the pixel data obtained after averaging can be expressed as:

C(t) = u_c·I_0·c_0 + u_c·I_0·c_0·i(t) + u_s·I_0·s(t) + u_p·I_0·p(t)    Formula (8)
When the selected sampling period is long enough, the means of i(t), s(t) and p(t) over the sampling period can be considered to be zero. The temporal mean C̄ of the pixel value C(t) over the sampling period is therefore:

C̄ ≈ u_c·I_0·c_0    Formula (9)

Using C̄, a diagonal normalization matrix N can be defined such that:

N·C̄ = 1    Formula (10)

where 1 is the all-ones vector. Normalizing C(t) with the normalization matrix N gives:

C_n(t) = N·C(t) = 1·(1 + i(t)) + N·u_s·I_0·s(t) + N·u_p·I_0·p(t)    Formula (11)

Removing the DC component yields the AC component of the pixel value:

C̃_n(t) = C_n(t) − 1 = 1·i(t) + N·u_s·I_0·s(t) + N·u_p·I_0·p(t)    Formula (12)

So far, the AC signal of the pixel values on the image has been decomposed into three parts: the light-intensity component 1·i(t), the specular-reflection component N·u_s·I_0·s(t), and the heart-rate component N·u_p·I_0·p(t).
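The normalization chain of Formulas (9)-(12) can be sketched as follows; the function name and the (3, T) array layout (one row per color channel) are assumptions made for illustration:

```python
import numpy as np

def normalized_ac(C):
    """Formulas (9)-(12): temporal normalization of the averaged RGB trace.
    C is a (3, T) array of spatially averaged pixel values; returns the
    AC component with the DC level divided out and subtracted."""
    C_bar = C.mean(axis=1, keepdims=True)   # Formula (9): temporal mean
    N = np.diagflat(1.0 / C_bar)            # Formula (10): N such that N @ C_bar = 1
    C_n = N @ C                             # Formula (11): normalized trace
    return C_n - 1.0                        # Formula (12): remove the DC component
```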
Next, a standardized skin-color vector is defined, a standardized mapping matrix is obtained from the standardized skin-color vector, and the AC signal of the pixel values is mapped into the standardized vector space using the standardized mapping matrix, specifically as follows:
To extract the component p(t) containing the heart-rate information, i.e. the pulse-wave signal, from the mixed AC signal components, the method first defines a standardized skin-color vector u_skin under white-light illumination. Taking the RGB color mode as an example, the three components Rs, Gs and Bs of u_skin are determined from measurement results obtained through a large number of experiments (Formula (13)), yielding a representative standardized skin-color vector u_skin that represents the skin color under all white-light illumination conditions; from it, the standardized mapping matrix M can also be obtained.

The AC signal of the pixel values is then mapped with the standardized mapping matrix M:

C̃_s(t) = M·C̃_n(t)    Formula (14)
Then, the AC signal of the pixel values is projected, using a projection matrix, onto the two directions of the plane perpendicular to the white-light vector, specifically as follows:
Since the skin-color vector has been mapped into the standardized skin-color space under white-light illumination, the specular-reflection component is now essentially mapped onto the direction of white light, that is:

M·N·u_s·I_0 ≈ κ·1    Formula (15)

In Formula (15), κ is a proportionality coefficient. After the above transformation, the part containing the specular component has been projected onto the direction of the white-light vector 1. Therefore, by projecting the entire AC signal of the pixel values onto the plane perpendicular to the vector 1, the specular component can be separated out of the AC signal; the mathematical expression is:

S(t) = P_c·C̃_s(t)    Formula (16)

In Formula (16), P_c is a 2×3 projection matrix satisfying P_c·M·N·u_s·I_0 ≈ κ·P_c·1 = 0, and the two rows of the resulting S(t) contain the AC signals in the two directions of the plane perpendicular to the white-light vector 1.
Finally, the linear combination of the AC signals along those two directions is tuned with a tuning parameter, yielding the AC signal that contains the heart-rate information. Specifically:
From S(t), an estimate p̂(t) of the component p(t) containing the heart-rate information can be obtained as p̂(t) = S1(t) + α·S2(t) (formula 17).

In formula (17), the tuning parameter is α = σ(S1)/σ(S2), where σ(S1) and σ(S2) are the standard deviations of the two rows of S(t). At this point the light-intensity component i(t) has also been separated from the AC signal, and the method has successfully isolated the AC signal p̂(t) containing the heart-rate information.
After this, a Fourier transform is applied to p̂(t) to obtain its frequency-domain signal, and the frequency f0 corresponding to the point of maximum amplitude is found. The heart-rate value is then:

H = 60 × f0   (formula 18)
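The tuning and frequency-domain steps of formulas (17) and (18) can be sketched in numpy as follows. The combination p̂(t) = S1(t) + α·S2(t) is an assumption consistent with the tuning-parameter definition above, and the 50 Hz frame rate is illustrative:

```python
import numpy as np

def heart_rate_from_S(S, fps):
    """S: (2, T) projected AC signal; fps: frame rate in Hz.
    Returns the heart-rate value in beats per minute."""
    s1, s2 = S[0], S[1]
    alpha = np.std(s1) / np.std(s2)          # tuning parameter (formula 17)
    p_hat = s1 + alpha * s2                  # AC signal with heart-rate info
    p_hat = p_hat - p_hat.mean()
    spectrum = np.abs(np.fft.rfft(p_hat))
    freqs = np.fft.rfftfreq(p_hat.size, d=1.0 / fps)
    f0 = freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC bin
    return 60.0 * f0                         # formula (18)

# Synthetic check: a 1.2 Hz pulse should read as 72 beats per minute.
fps = 50.0
t = np.arange(500) / fps
S = np.vstack([np.sin(2 * np.pi * 1.2 * t),
               0.5 * np.sin(2 * np.pi * 1.2 * t)])
```

In practice the search for the maximum would be restricted to a plausible heart-rate band (roughly 0.7–3 Hz) to reject motion and illumination artifacts.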
If R heart-rate detection regions have been selected, the R heart-rate values extracted from them form an R-dimensional heart-rate feature vector H(i) (i = 1, 2, …, R). This vector is input into a trained SVM classifier for classification, which yields the result of whether the face image under detection belongs to a living body.
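As a toy sketch of this classification step: a nearest-centroid classifier stands in for the trained SVM here (training an SVM requires a fitted model and labelled data the patent does not provide), and the centroid values are made up for illustration.

```python
import numpy as np

# Hypothetical class centroids of R = 3 heart-rate feature vectors
# (e.g. forehead, left cheek, right cheek), learned offline.
CENTROIDS = {
    "live":  np.array([72.0, 71.0, 73.0]),   # coherent, plausible rates
    "spoof": np.array([10.0, 150.0, 40.0]),  # incoherent / implausible
}

def classify(H):
    """H: R-dimensional heart-rate feature vector -> class label."""
    return min(CENTROIDS, key=lambda c: np.linalg.norm(H - CENTROIDS[c]))
```

A real system would replace `classify` with the decision function of an SVM trained on live and spoof recordings, as the embodiment describes.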
It should be understood that other methods can also be used for heart-rate signal extraction, including blind-source-separation methods such as PCA (Principal Component Analysis) and ICA (Independent Component Analysis), separation methods based on color spaces, and data-driven machine-learning methods.
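To illustrate the PCA alternative mentioned here, a numpy-only sketch of extracting principal components from the per-frame mean RGB traces of a region (the heart-rate-band selection that would normally follow is omitted):

```python
import numpy as np

def pca_components(rgb_traces):
    """rgb_traces: (3, T) array of mean R, G, B per frame for one region.
    Returns (3, T) principal-component time series, strongest first."""
    X = rgb_traces - rgb_traces.mean(axis=1, keepdims=True)  # center channels
    U, _, _ = np.linalg.svd(X, full_matrices=False)          # principal axes
    return U.T @ X                                           # projected series
```

The component whose spectrum peaks inside a plausible heart-rate band (say 0.7–3 Hz) would then be kept as the pulse signal.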
The construction of the heart-rate feature vector is likewise flexible: for example, the heart-rate values of several randomly chosen detection regions can be combined into the feature vector, or one or more heart-rate values can be chosen according to region size. In addition, a deep-learning method can learn heart-rate feature maps directly from the entire region to be detected for liveness classification.

Moreover, the choice of classifier is not limited to the SVM used in this embodiment; alternatives such as decision trees, logistic regression, random forests, or deep neural networks can also be used for classification.
Embodiment three
As shown in Figure 5, on the basis of the methods provided by the preceding embodiments, this embodiment of the application provides a face-based liveness detection apparatus, comprising:
An image acquisition module 301 for obtaining a face image;

Optionally, the image acquisition module 301 specifically includes:

A sensor unit for acquiring image data to be detected;

A face detection unit for performing face-box detection on the current frame of image data acquired by the sensor unit; if a face box is detected, the key-point detection unit is triggered; otherwise the next frame of image data acquired by the sensor unit is examined.

A face key-point detection unit for detecting a predetermined number of face key points, for example 68, from the current frame of image data according to the position of the face box.

A face-image cropping unit for cropping, from the original image acquired by the sensor unit and using the face box detected by the face detection unit, a face image containing the predetermined number of face key points;

Optionally, the image acquisition module 301 further includes a face-pose correction unit for correcting, using the face key points, the face image cropped by the face-image cropping unit, obtaining a face image in the standard pose.
Suppose P0 denotes the 5 pre-calibrated face key points of the standard pose, namely the left eye, right eye, nose, and the left and right mouth corners. Between P0 and the key points Pn of the current n-th frame there exists the transform relation P0 = T·Pn, where T is a graphic transformation matrix. Correcting the detected current-frame face image with T transforms it into the standard pose, which eliminates the influence of rigid face motion on the detection result.
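A minimal sketch of estimating the transform T from the 5 corresponding key points by least squares. An affine fit is assumed here (the patent only states P0 = T·Pn), and the key-point coordinates below are invented for the check:

```python
import numpy as np

def estimate_transform(Pn, P0):
    """Pn, P0: (5, 2) current-frame and standard-pose key points.
    Least-squares fit of a 2-D affine transform A with P0 ~= [Pn | 1] @ A."""
    Pn_h = np.hstack([Pn, np.ones((Pn.shape[0], 1))])   # homogeneous coords
    A, *_ = np.linalg.lstsq(Pn_h, P0, rcond=None)       # (3, 2) matrix
    return A

def apply_transform(A, pts):
    pts_h = np.hstack([pts, np.ones((pts.shape[0], 1))])
    return pts_h @ A

# Check: a rotation plus translation is recovered exactly.
theta = np.deg2rad(10.0)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
P0 = np.array([[30.0, 30.0], [70.0, 30.0], [50.0, 55.0],
               [35.0, 75.0], [65.0, 75.0]])     # eyes, nose, mouth corners
Pn = P0 @ R.T + np.array([5.0, -3.0])           # simulated current frame
A = estimate_transform(Pn, P0)
```

In the apparatus, the same matrix would be applied to every pixel coordinate (or used with an image-warping routine) to bring the cropped face image into the standard pose.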
A heart-rate detection region selection module 302 for selecting heart-rate detection regions based on the face image obtained by the image acquisition module 301;

Preferably, it is specifically used to select, using the face key points, several polygonal regions from the corrected face image as the regions to be detected, i.e. the heart-rate detection regions; such polygonal regions include, for example, the forehead region and the cheek regions. Selecting the regions by means of the face key points effectively avoids areas with frequent motion, such as the eyes and the mouth, and thus minimizes the influence of non-rigid facial motion.
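A numpy-only sketch of rasterizing one polygonal detection region and taking its mean RGB per frame. The even-odd ray-casting mask below is a generic routine, not something prescribed by the patent (a practical implementation would more likely use a library call such as OpenCV's polygon fill):

```python
import numpy as np

def polygon_mask(h, w, poly):
    """poly: (N, 2) array of (x, y) vertices. Even-odd ray casting."""
    ys, xs = np.mgrid[0:h, 0:w]
    inside = np.zeros((h, w), dtype=bool)
    n = len(poly)
    for i in range(n):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
        crosses = (y1 > ys) != (y2 > ys)        # edge spans this scanline
        with np.errstate(divide="ignore", invalid="ignore"):
            x_at = x1 + (ys - y1) * (x2 - x1) / (y2 - y1)
        inside ^= crosses & (xs < x_at)         # toggle on each crossing
    return inside

def region_mean_rgb(frame, poly):
    """frame: (H, W, 3) image. Mean (R, G, B) inside the polygon."""
    m = polygon_mask(frame.shape[0], frame.shape[1], np.asarray(poly, float))
    return frame[m].mean(axis=0)
```

Calling `region_mean_rgb` once per frame produces the per-region RGB traces from which the AC signal is later computed.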
A heart-rate extraction module 303 for extracting the heart-rate value of each heart-rate detection region according to the pixel data of the regions selected by the selection module 302;

Optionally, the heart-rate extraction module 303 specifically includes:

A first extraction unit for extracting the pixel data of the image within a heart-rate detection region; the pixel value of any point of the image can be described in the RGB color model.

A judging unit for judging whether the extracted pixel data forms a pixel-data sequence of the preset number of frames; if so, the second extraction unit is triggered; otherwise pixel data continues to be acquired until a pixel-data sequence of the preset number of frames has been formed;

In general, the preset number of frames is no less than 50; that is, the pixel-data sequence is generated by continuously acquiring multiple frames at the image frame rate.
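The judging unit's buffering logic can be sketched as a small class; the 50-frame threshold follows the text, while everything else (names, return protocol) is an assumption:

```python
from collections import deque

class PixelSequenceBuffer:
    """Accumulates per-frame region pixel data until the preset frame
    count is reached, then hands the full sequence to the caller."""

    def __init__(self, preset_frames=50):
        self.preset_frames = preset_frames
        self.frames = deque(maxlen=preset_frames)

    def push(self, region_pixels):
        """Returns the pixel-data sequence once preset_frames frames have
        been buffered; returns None while acquisition must continue."""
        self.frames.append(region_pixels)
        if len(self.frames) == self.preset_frames:
            return list(self.frames)
        return None
```

Because the deque is bounded, after the threshold is first reached every new frame yields an updated sliding window, which suits continuous heart-rate monitoring.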
A second extraction unit for extracting, from the pixel-data sequence corresponding to each heart-rate detection region, the heart-rate value of that region.
In this embodiment, the second extraction unit is implemented as follows:

It is used to calculate the AC signal of the pixel values according to the pixel-data sequence, the AC signal of the pixel values comprising a light-intensity component, a specular reflection component and a heart-rate component;

It is used to define a standardized skin-tone vector, derive a standardized mapping matrix from that vector, and map the AC signal of the pixel values into a standardized vector space using the standardized mapping matrix;

It is used to project, using a projection matrix, the AC signal of the pixel values onto two directions in the plane perpendicular to the white-light vector;

It is used to tune, using a tuning parameter, the linear combination of the AC signals along the two directions, obtaining the AC signal containing the heart-rate information;

It is used to apply a Fourier transform to the obtained AC signal containing the heart-rate information to obtain its frequency-domain signal, and to find the frequency corresponding to the point of maximum amplitude, which yields the heart-rate value.
A classification module 304 for assembling the heart-rate values extracted by the heart-rate extraction module 303 into a heart-rate feature vector, inputting it into a trained classifier for classification, and obtaining the liveness detection result.

Specifically, if the three heart-rate detection regions of the forehead and the left and right cheeks are selected, the heart-rate feature vector H(i) (i = 1, 2, 3) composed of the three heart-rate values extracted from those regions is input into the trained classifier for classification to obtain the liveness detection result. In this embodiment the classifier is preferably an SVM; besides this, other methods such as decision trees, logistic regression, random forests, or deep neural networks can also be chosen for classification.
Embodiment four
On the basis of the liveness detection method and apparatus provided by the preceding embodiments, the application correspondingly also provides a terminal device. The specific implementation of the terminal device is described below with reference to the embodiments and the drawings.

Referring to Fig. 6, which is a structural diagram of a face-based liveness detection device provided by an embodiment of the application.
As shown in Fig. 6, the device provided in this embodiment comprises:
Image collecting device 401 and processor 402;
The image acquisition device 401 is used to acquire color images and send them to the processor 402;

The processor 402 is used to run a computer program which, when executed, performs the liveness detection method described in method embodiment one or two.

In practical applications, the device may be a mobile phone, a tablet computer, or similar equipment; this embodiment does not limit the specific type of the device.
Optionally, the liveness detection device provided in this embodiment may further comprise a display device 403.
As an example, the display device 403 may be a display screen. After the processor 402 runs the computer program and obtains the liveness detection result, the result may be sent to the display device 403 for display.

Optionally, the terminal device provided in this embodiment may further comprise a memory 404 for storing the aforementioned computer program.

The above is a further detailed description of the invention in conjunction with specific preferred embodiments, and it cannot be held that the specific embodiments of the invention are limited thereto. For those of ordinary skill in the art to which the invention belongs, several simple deductions or substitutions may be made without departing from the inventive concept, and all of these shall be regarded as falling within the protection scope determined by the appended claims.

Claims (10)

1. A face-based liveness detection method, characterized in that the method comprises:
selecting heart-rate detection regions based on a face image; extracting the heart-rate value of each heart-rate detection region according to the pixel data of the heart-rate detection regions; and assembling the extracted heart-rate values into a heart-rate feature vector that is input into a trained classifier for classification, obtaining a liveness detection result;
wherein extracting the heart-rate value of each heart-rate detection region according to the pixel data of the heart-rate detection regions specifically comprises: extracting the pixel data of the current heart-rate detection region and forming a pixel-data sequence after continuously acquiring multiple frames; separating, by a signal-separation method, the AC signal containing the heart-rate information from the generated pixel-data sequence; and obtaining, by a frequency-domain transform method, the characteristic frequency of maximum amplitude in the AC signal as the heart-rate value of the current heart-rate detection region.
2. The method according to claim 1, characterized in that selecting heart-rate detection regions based on a face image comprises: detecting face key points from the face image, and selecting, using the face key points, several polygonal regions from the face image as the heart-rate detection regions, the several polygonal regions including the forehead region and the cheek regions.
3. The method according to claim 2, characterized in that, after detecting the face key points, it further comprises performing face-pose correction, according to the face key points, on the face image containing the face key points;
and selecting several polygonal regions from the face image as heart-rate detection regions is specifically selecting several polygonal regions from the corrected face image as heart-rate detection regions.
4. The method according to claim 1, characterized in that separating, by a signal-separation method, the AC signal containing the heart-rate information from the generated pixel-data sequence specifically comprises: separating the light-intensity component and the specular reflection component from the AC signal by the methods of skin-tone standard mapping and vector-space projection, thereby obtaining the AC signal containing the heart-rate information.
5. The method according to claim 1, characterized in that separating, by a signal-separation method, the AC signal containing the heart-rate information from the generated pixel-data sequence specifically comprises:
A1, calculating the AC signal of the pixel values according to the pixel-data sequence, the AC signal of the pixel values comprising a light-intensity component, a specular reflection component and a heart-rate component;
A2, defining a standardized skin-tone vector, deriving a standardized mapping matrix from the standardized skin-tone vector, and mapping the AC signal of the pixel values into a standardized vector space using the standardized mapping matrix;
A3, projecting, using a projection matrix, the AC signal of the pixel values onto two directions in the plane perpendicular to the white-light vector;
A4, tuning, using a tuning parameter, the linear combination of the AC signals along the two directions, obtaining the AC signal containing the heart-rate information.
6. A face-based liveness detection apparatus, characterized in that the apparatus comprises:
a heart-rate detection region selection module for selecting heart-rate detection regions based on a face image;
a heart-rate extraction module for extracting the heart-rate value of each heart-rate detection region according to the pixel data of the heart-rate detection regions selected by the selection module;
a classification module for assembling the heart-rate values extracted by the heart-rate extraction module into a heart-rate feature vector that is input into a trained classifier for classification, obtaining a liveness detection result.
7. The apparatus according to claim 6, characterized in that the apparatus further comprises an image acquisition module for obtaining the face image.
8. The apparatus according to claim 6, characterized in that the heart-rate extraction module specifically comprises:
a first extraction unit for extracting the pixel data of the image within a heart-rate detection region;
a judging unit for judging whether the pixel data extracted by the first extraction unit forms a pixel-data sequence of the preset number of frames; if so, the second extraction unit is triggered; otherwise pixel data continues to be acquired until a pixel-data sequence of the preset number of frames has been formed;
a second extraction unit for extracting, from the pixel-data sequence corresponding to each heart-rate detection region, the heart-rate value of that region.
9. The apparatus according to claim 8, characterized in that the second extraction unit is specifically used to calculate the AC signal of the pixel values according to the pixel-data sequence, the AC signal of the pixel values comprising a light-intensity component, a specular reflection component and a heart-rate component; to define a standardized skin-tone vector, derive a standardized mapping matrix from the standardized skin-tone vector, and map the AC signal of the pixel values into a standardized vector space using the standardized mapping matrix; to project, using a projection matrix, the AC signal of the pixel values onto two directions in the plane perpendicular to the white-light vector; to tune, using a tuning parameter, the linear combination of the AC signals along the two directions, obtaining the AC signal containing the heart-rate information; and to apply a Fourier transform to the obtained AC signal containing the heart-rate information to obtain its frequency-domain signal, and find the frequency corresponding to the point of maximum amplitude, which yields the heart-rate value.
10. A face-based liveness detection device, characterized in that it comprises an image acquisition device and a processor;
the image acquisition device is used to acquire color images and send them to the processor;
the processor is used to run a computer program which, when executed, performs the face-based liveness detection method according to any one of claims 1-5.
CN201910370820.2A 2019-05-06 2019-05-06 A kind of biopsy method based on face, device and equipment Pending CN110163126A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910370820.2A CN110163126A (en) 2019-05-06 2019-05-06 A kind of biopsy method based on face, device and equipment


Publications (1)

Publication Number Publication Date
CN110163126A true CN110163126A (en) 2019-08-23

Family

ID=67633712

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910370820.2A Pending CN110163126A (en) 2019-05-06 2019-05-06 A kind of biopsy method based on face, device and equipment

Country Status (1)

Country Link
CN (1) CN110163126A (en)


Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105320950A (en) * 2015-11-23 2016-02-10 天津大学 A video human face living body detection method
CN106491114A (en) * 2016-10-25 2017-03-15 Tcl集团股份有限公司 A kind of heart rate detection method and device
CN106845395A (en) * 2017-01-19 2017-06-13 北京飞搜科技有限公司 A kind of method that In vivo detection is carried out based on recognition of face
CN107122709A (en) * 2017-03-17 2017-09-01 上海云从企业发展有限公司 Biopsy method and device
CN107644191A (en) * 2016-07-21 2018-01-30 中兴通讯股份有限公司 A kind of face identification method and system, terminal and server
CN109145817A (en) * 2018-08-21 2019-01-04 佛山市南海区广工大数控装备协同创新研究院 A kind of face In vivo detection recognition methods
CN109363660A (en) * 2018-10-26 2019-02-22 石家庄昊翔网络科技有限公司 Rhythm of the heart method and server based on BP neural network
CN109409343A (en) * 2018-12-11 2019-03-01 福州大学 A kind of face identification method based on In vivo detection


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
WENJIN WANG et al.: "Algorithmic Principles of Remote PPG", IEEE Transactions on Biomedical Engineering *
Liu Jingwei et al.: «"Internet Plus" Artificial Intelligence Technology Implementation», Capital University of Economics and Business Press, 15 April 2019 *

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110688957A (en) * 2019-09-27 2020-01-14 腾讯科技(深圳)有限公司 Living body detection method and device applied to face recognition and storage medium
CN110688957B (en) * 2019-09-27 2023-06-30 腾讯科技(深圳)有限公司 Living body detection method, device and storage medium applied to face recognition
CN111428577B (en) * 2020-03-03 2022-05-03 电子科技大学 Face living body judgment method based on deep learning and video amplification technology
CN111428577A (en) * 2020-03-03 2020-07-17 电子科技大学 Face living body judgment method based on deep learning and video amplification technology
CN111797794A (en) * 2020-07-13 2020-10-20 中国人民公安大学 Facial dynamic blood flow distribution detection method
CN112052830A (en) * 2020-09-25 2020-12-08 北京百度网讯科技有限公司 Face detection method, device and computer storage medium
CN112052830B (en) * 2020-09-25 2022-12-20 北京百度网讯科技有限公司 Method, device and computer storage medium for face detection
CN112381011A (en) * 2020-11-18 2021-02-19 中国科学院自动化研究所 Non-contact heart rate measurement method, system and device based on face image
CN112381011B (en) * 2020-11-18 2023-08-22 中国科学院自动化研究所 Non-contact heart rate measurement method, system and device based on face image
CN114557685A (en) * 2020-11-27 2022-05-31 上海交通大学 Non-contact motion robust heart rate measuring method and measuring device
CN114557685B (en) * 2020-11-27 2023-11-14 上海交通大学 Non-contact type exercise robust heart rate measurement method and measurement device
CN115424335A (en) * 2022-11-03 2022-12-02 智慧眼科技股份有限公司 Living body recognition model training method, living body recognition method and related equipment
CN115424335B (en) * 2022-11-03 2023-08-04 智慧眼科技股份有限公司 Living body recognition model training method, living body recognition method and related equipment

Similar Documents

Publication Publication Date Title
CN110163126A (en) A kind of biopsy method based on face, device and equipment
CN106778518B (en) Face living body detection method and device
CN109640821B (en) Method and apparatus for face detection/recognition system
CN109858439A (en) A kind of biopsy method and device based on face
JP5955133B2 (en) Face image authentication device
US8515127B2 (en) Multispectral detection of personal attributes for video surveillance
JP2000259814A (en) Image processor and method therefor
EP3241151A1 (en) An image face processing method and apparatus
CN106599785B (en) Method and equipment for establishing human body 3D characteristic identity information base
JP6822482B2 (en) Line-of-sight estimation device, line-of-sight estimation method, and program recording medium
CN104123543A (en) Eyeball movement identification method based on face identification
CN108416291A (en) Face datection recognition methods, device and system
CN109325408A (en) A kind of gesture judging method and storage medium
Shrivastava et al. Conceptual model for proficient automated attendance system based on face recognition and gender classification using Haar-Cascade, LBPH algorithm along with LDA model
Mohsin et al. Pupil detection algorithm based on feature extraction for eye gaze
Campadelli et al. A face recognition system based on local feature characterization
Gul et al. A machine learning approach to detect occluded faces in unconstrained crowd scene
Siegmund et al. Face presentation attack detection in ultraviolet spectrum via local and global features
WO2022120532A1 (en) Presentation attack detection
Subasic et al. Expert system segmentation of face images
CN210442821U (en) Face recognition device
Thomas et al. Real Time Face Mask Detection and Recognition using Python
CN115968487A (en) Anti-spoofing system
Park Face Recognition: face in video, age invariance, and facial marks
Batista Locating facial features using an anthropometric face model for determining the gaze of faces in image sequences

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190823