CN109784302A - Face liveness detection method and face recognition device - Google Patents

Face liveness detection method and face recognition device

Info

Publication number
CN109784302A
CN109784302A (application CN201910082329.XA)
Authority
CN
China
Prior art keywords
face
relevant
coordinate value
key point
random action
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910082329.XA
Other languages
Chinese (zh)
Other versions
CN109784302B (en)
Inventor
曹诚
占广
陈涛
陈炳轩
吴梦溪
李发成
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Leopard Internet Technology Co Ltd
Original Assignee
Shenzhen Leopard Internet Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Leopard Internet Technology Co Ltd
Priority to CN201910082329.XA priority Critical patent/CN109784302B/en
Publication of CN109784302A publication Critical patent/CN109784302A/en
Application granted granted Critical
Publication of CN109784302B publication Critical patent/CN109784302B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Processing (AREA)

Abstract

The present invention relates to the field of image processing and provides a face liveness detection method and a face recognition device. The method includes: detecting a face in an image; generating a face-related random action instruction; acquiring an image sequence; detecting the user's facial action from the acquired image sequence, computing a statistic associated with each random action instruction from the positions of facial key points, computing the relative variation of all statistics associated with each random action instruction, and judging from the relative variation whether the user's facial action matches the random action instruction; if they match, the current face is determined to be live. The face liveness detection method provided by the present invention needs no state machine or similar mechanism to describe the different statistics, which greatly reduces the computation load, lowers the algorithm's complexity, and improves its running efficiency, so that efficient face liveness detection can be achieved on resource-constrained embedded terminals such as mobile phones and tablets.

Description

Face liveness detection method and face recognition device
Technical field
The invention belongs to the field of image processing, and more particularly relates to a face liveness detection method and a face recognition device.
Background art
To defend a face recognition system against attacks with non-live faces such as photographs and videos, face liveness detection methods must be studied. Common face liveness detection methods include methods based on binocular cameras, methods based on near-infrared cameras, methods based on machine learning, and methods based on random action instructions.
Methods based on binocular cameras compute depth information for each part of the face according to the principle of binocular vision and use it to distinguish a flat photograph or video from a live face. However, such methods require two cameras capturing face images simultaneously. Methods based on near-infrared cameras detect whether a face is live mainly from the texture difference between live and non-live faces under near-infrared light; they require a near-infrared light source and a filter. Thus both binocular and near-infrared methods require special imaging devices and are unsuitable for common embedded terminals such as mobile phones and tablets.
Methods based on machine learning usually train a face liveness classifier on large numbers of live and non-live face images. Their advantage is that they enable blind detection; their disadvantages are that a complete training data set is hard to build, detection performance is affected by image quality, resource occupancy is high, and running efficiency is relatively low, so they are hard to deploy on embedded terminals such as mobile phones and tablets.
Methods based on random action instructions require the user to act according to randomly issued instructions, for example opening the mouth or blinking; if the user cooperates with the instructions correctly, the face is determined to be live. This approach makes low demands on imaging conditions and device resources and is widely applied to face liveness detection on embedded terminals such as mobile phones and tablets.
However, existing face liveness detection methods based on random action instructions are computationally inefficient. For example, Chinese patent application CN 105989264 A, "Biometric liveness detection method and system", trains a pose-estimation classifier by SVM, regression, or the like and then uses it to estimate the pose and expression of the face image, which is computationally heavy. Chinese patent CN 100592322 C, "Automatic computer authentication method for photograph faces and live faces", builds a model for judging blink actions using conditional random field theory, which has high computational complexity. Chinese patent application CN 106874876 A, "Face liveness detection method and device", additionally requires face recognition to retrieve target face information from historical data, and the computation load of face recognition is considerable.
Summary of the invention
The purpose of the present invention is to provide a face liveness detection method, a computer-readable storage medium, and a face recognition device that reduce the algorithm's complexity and improve its running efficiency.
In a first aspect, the present invention provides a face liveness detection method. The method includes:
S101: detecting a face in an image;
S102: generating a face-related random action instruction;
S103: acquiring an image sequence;
S104: detecting the user's facial action from the acquired image sequence, computing a statistic associated with each random action instruction from the positions of facial key points, computing the relative variation of all statistics associated with each random action instruction, and judging from the relative variation whether the user's facial action matches the random action instruction; if they match, determining that the current face is live.
In a second aspect, the present invention provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the above face liveness detection method.
In a third aspect, the present invention provides a face recognition device, comprising:
one or more processors;
a memory; and
one or more computer programs, wherein the processors and the memory are connected by a bus, the one or more computer programs are stored in the memory and configured to be executed by the one or more processors, and the processors implement the steps of the above face liveness detection method when executing the computer programs.
In the present invention, a statistic associated with each random action instruction is computed from the positions of facial key points, the relative variation of all statistics associated with each random action instruction is computed, and whether the user's facial action matches the random action instruction is judged from the relative variation; if they match, the current face is determined to be live. Different statistics therefore need not be described by a state machine or similar mechanism, which greatly reduces the computation load, lowers the algorithm's complexity, and improves its running efficiency, so that efficient face liveness detection can be achieved on resource-constrained embedded terminals such as mobile phones and tablets.
Brief description of the drawings
Fig. 1 is a flowchart of the face liveness detection method provided by Embodiment 1 of the present invention.
Fig. 2 is a block diagram of the face recognition device provided by Embodiment 3 of the present invention.
Detailed description of the embodiments
To make the purpose, technical solution, and beneficial effects of the present invention clearer, the present invention is described in further detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described herein are only intended to explain the present invention, not to limit it.
To illustrate the technical solution of the present invention, specific embodiments are described below.
Embodiment one:
Referring to Fig. 1, the face liveness detection method provided by Embodiment 1 of the present invention includes the following steps. It should be noted that the face liveness detection method of the present invention is not limited to the step order shown in Fig. 1, provided substantially the same result is obtained.
S101: detect a face in an image.
In Embodiment 1 of the present invention, S101 may specifically include the following steps:
S1011: obtain an image;
For example, obtain a frame of image captured by a camera. The camera may be built into the face recognition device or be an external camera connected to it; the face recognition device may be a mobile terminal (such as a mobile phone or tablet computer), a desktop computer, or the like;
S1012: detect a face in the image;
In Embodiment 1 of the present invention, face detection may use the Viola-Jones (VJ) algorithm (see "Rapid object detection using a boosted cascade of simple features", P. Viola, M. Jones, Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2001). The algorithm combines Haar features with an AdaBoost classifier for face detection, accelerates feature extraction with an integral image, and cascades the strong classifiers built by AdaBoost, which greatly speeds up detection while improving face detection performance. It offers high running efficiency and low resource occupancy and is suitable for real-time face detection on embedded terminals such as mobile phones and tablet computers.
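The integral-image acceleration mentioned above can be sketched as follows. This is a generic illustration of the Viola-Jones building block, not code from the patent; `integral_image` and `rect_sum` are hypothetical helper names.

```python
def integral_image(img):
    """Build a summed-area table: ii[y][x] = sum of img[0..y][0..x]."""
    h, w = len(img), len(img[0])
    ii = [[0] * w for _ in range(h)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y][x] = row_sum + (ii[y - 1][x] if y > 0 else 0)
    return ii

def rect_sum(ii, x1, y1, x2, y2):
    """Sum of pixels in the inclusive rectangle (x1, y1)-(x2, y2), in O(1)."""
    total = ii[y2][x2]
    if x1 > 0:
        total -= ii[y2][x1 - 1]
    if y1 > 0:
        total -= ii[y1 - 1][x2]
    if x1 > 0 and y1 > 0:
        total += ii[y1 - 1][x1 - 1]
    return total
```

The constant-time `rect_sum` is what lets Haar features be evaluated quickly at every scale and position.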
S1013: judge whether the size of the detected face fits the preset window; if it fits, execute S102.
To improve the accuracy of face liveness detection, a preset window is provided, and the user is required to place the face within the preset window while performing the instructed actions. The preset window may be a circle, an ellipse, a rectangle, and so on.
The top-left vertex of the bounding rectangle of the preset window is denoted (X1, Y1) and the bottom-right vertex (X2, Y2). In Embodiment 1 of the present invention, X1 = 69, Y1 = 169, X2 = 254, Y2 = 408; other values may of course be used. Suppose the top-left vertex of the rectangle bounding the detected face region is denoted (x1, y1) and the bottom-right vertex (x2, y2). Then the overlap area A between the detected face region and the preset window can be expressed as:
A = (min(X2, x2) - max(X1, x1) + 1)(min(Y2, y2) - max(Y1, y1) + 1), where max and min denote the maximum and minimum operations respectively.
The overlap ratio I between the detected face region and the preset window can then be expressed as:
If I < a preset value, the size of the detected face is considered not to fit the preset window, and the method returns to S1011; otherwise, S102 is executed. The preset value may be set to 0.4, or to another empirical value.
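The window-fit check of S1013 can be sketched as below. The overlap area A follows the formula given above; the ratio I is illustrated here as A divided by the face-rectangle area, which is an assumption, since the patent's formula for I is reproduced only as an image.

```python
def overlap_area(win, face):
    """Overlap area A of two inclusive pixel rectangles (x1, y1, x2, y2)."""
    (X1, Y1, X2, Y2), (x1, y1, x2, y2) = win, face
    w = min(X2, x2) - max(X1, x1) + 1
    h = min(Y2, y2) - max(Y1, y1) + 1
    return max(0, w) * max(0, h)  # clamp to 0 when the rectangles are disjoint

def face_fits_window(win, face, preset=0.4):
    """Hypothetical overlap ratio I: overlap area over face-rectangle area."""
    fx1, fy1, fx2, fy2 = face
    face_area = (fx2 - fx1 + 1) * (fy2 - fy1 + 1)
    return overlap_area(win, face) / face_area >= preset

window = (69, 169, 254, 408)  # the embodiment's example window
print(face_fits_window(window, (100, 200, 300, 400)))  # True (I ~ 0.77 >= 0.4)
```

Clamping negative extents to zero handles faces entirely outside the window, a case the one-line formula alone does not cover.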
S102: generate a face-related random action instruction.
In Embodiment 1 of the present invention, the face-related random action instruction may include one of shaking the head, nodding, blinking, and opening the mouth, or any combination thereof.
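Generating the instruction of S102 can be as simple as a uniform random draw over the four actions; a minimal sketch (the action labels are hypothetical names, not from the patent):

```python
import random

ACTIONS = ("shake_head", "nod", "blink", "open_mouth")

def generate_instruction(rng=random):
    """Pick one face-related random action instruction."""
    return rng.choice(ACTIONS)

rng = random.Random(42)  # seeded only so the example is reproducible
print(generate_instruction(rng) in ACTIONS)  # True
```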
S103: acquire an image sequence.
The image sequence contains the images of the user completing the action in response to the random action instruction. Considering the typical duration of waiting for and completing an action, Embodiment 1 of the present invention acquires an image sequence of 100 frames; other empirical values may of course be used.
S104: detect the user's facial action from the acquired image sequence, compute a statistic associated with each random action instruction from the positions of facial key points, compute the relative variation of all statistics associated with each random action instruction, and judge from the relative variation whether the user's facial action matches the random action instruction; if they match, determine that the current face is live.
In Embodiment 1 of the present invention, S104 may specifically include the following steps:
S1041: read one frame of the acquired image sequence, then execute S1042; if all images in the sequence have been read, execute S1045 directly.
S1042: locate the facial key points.
At present, facial key point localization usually positions 68 key points, yet most of them are meaningless for face liveness detection; locating these redundant key points raises resource occupancy and lowers running efficiency. Other schemes position only 12 key points, but these are insufficient to represent the eye and mouth regions: for example, the upper and lower eyelids and the upper and lower lips each receive only one key point, and since these parts have no distinctive localization features, localization errors easily cause the eye and mouth states to be judged wrongly. This in turn degrades the accuracy of face liveness detection.
Embodiment 1 of the present invention positions 19 facial key points for the four actions of shaking the head, nodding, blinking, and opening the mouth: 6 key points each for the left eye, the right eye, and the mouth, and 1 key point for the nose tip. Specifically: the upper and lower eyelids of the left eye have 2 key points each, and the two corners of the left eye have 1 key point each; the upper and lower eyelids of the right eye have 2 key points each, and the two corners of the right eye have 1 key point each; the upper and lower lips have 2 key points each, and the two corners of the mouth have 1 key point each. The eyes and mouth receive more key points mainly because the motion amplitudes of blinking and mouth opening are relatively small, so the localization accuracy requirements for the eye and mouth key points are higher. Giving the upper and lower eyelids and the upper and lower lips 2 key points each not only avoids wrong eye or mouth state judgments caused by the localization error of a single key point, but also distinguishes wrong judgments caused by differences in eye and mouth shapes among users. The changes in the relative position of the nose tip with respect to the eyes and mouth can be used to judge head-shaking and nodding actions. Alternatively, 25 facial key points may be positioned, namely 8 key points each for the left eye, the right eye, and the mouth and 1 key point for the nose tip. Specifically: the upper and lower eyelids of the left eye have 3 key points each, and the two corners of the left eye have 1 key point each; the upper and lower eyelids of the right eye have 3 key points each, and the two corners of the right eye have 1 key point each; the upper and lower lips have 3 key points each, and the two corners of the mouth have 1 key point each. Of course, 18 or 24 facial key points may also be positioned, i.e. without the nose-tip key point. Other numbers of facial key points are also possible, as long as the left eye, the right eye, and the mouth each have at least 6 key points.
Embodiment 1 of the present invention locates the facial key points using the Active Shape Model (ASM) method, with the hand-labeled key point positions of 500 face images as the training data set.
Since the facial action takes place mainly within the preset window, the facial key points are located only within the preset window in order to reduce the computation load.
S1043: compute, from the positions of the facial key points, the statistic associated with the currently generated random action instruction.
The present invention computes only the statistic associated with the random action instruction generated each time; it neither computes statistics associated with the other random action instructions nor describes the different statistics with a state machine or the like. This greatly reduces the computation load and improves running efficiency. For example, if the random action instruction currently generated in S102 is to shake the head, then S1043 computes only the statistic associated with head shaking.
For head-shaking and nodding actions, a common approach is to compute the three Euler angles of the face pose (pitch, yaw, roll); computing these values involves complicated angle calculations and matrix operations of high computational complexity. Since the present invention only needs to judge whether the user's facial action matches the random action instruction, and not the specific action of the current face, it computes the statistics only from the positions of the face when shaking or nodding and judges from the relative variation whether the user's facial action matches the instruction.
Specifically, when the currently generated random action instruction is to shake the head, a head-shaking statistic U1 is computed from the positions of the facial key points: x1 is the x-axis coordinate value of the right corner of the right eye, x10 is the x-axis coordinate value of the left corner of the left eye, and x13 is the x-axis coordinate value of the nose tip. When the face pose is approximately frontal, U1 is close to 1; when the head shakes, U1 departs from 1.
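The patent's formula for U1 is reproduced only as an image. One plausible form consistent with the described behavior (close to 1 for a frontal pose, departing from 1 as the head turns) is the ratio of the nose tip's horizontal distances to the two outer eye corners; the sketch below is this assumed form, not the patent's exact formula.

```python
def head_shake_stat(x1, x10, x13):
    """Hypothetical U1: ratio of the nose tip's horizontal distances to the
    two outer eye corners. x1: right corner of the right eye, x10: left
    corner of the left eye, x13: nose tip x-coordinate."""
    return (x13 - x1) / (x10 - x13)

print(head_shake_stat(100, 200, 150))  # 1.0: nose tip centered (frontal pose)
```

As the head turns, the nose tip drifts toward one eye corner and the ratio moves away from 1 in either direction.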
When the currently generated random action instruction is to nod, a nodding statistic U2 is computed from the positions of the facial key points: y13 is the y-axis coordinate value of the nose tip, y4 is the y-axis coordinate value of the left corner of the right eye, y7 is the y-axis coordinate value of the right corner of the left eye, y14 is the y-axis coordinate value of the right corner of the mouth, and y17 is the y-axis coordinate value of the left corner of the mouth. When the face pose is approximately frontal, U2 is close to 1; when the head nods, U2 departs from 1.
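As with U1, the patent's U2 formula appears only as an image. A form consistent with the listed variables and the described behavior would be the vertical distance from the inner eye corners to the nose tip over the distance from the nose tip to the mouth corners; the sketch below is this assumed form.

```python
def nod_stat(y13, y4, y7, y14, y17):
    """Hypothetical U2: eye-to-nose vertical distance over nose-to-mouth
    vertical distance. y4, y7: inner eye corners; y13: nose tip;
    y14, y17: mouth corners (image y grows downward)."""
    eye_y = (y4 + y7) / 2.0
    mouth_y = (y14 + y17) / 2.0
    return (y13 - eye_y) / (mouth_y - y13)

print(nod_stat(150, 100, 100, 200, 200))  # 1.0: nose tip midway (frontal pose)
```

Tilting the head up or down changes the projected eye-nose and nose-mouth distances in opposite directions, so U2 departs from 1.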
When the currently generated random action instruction is to blink, a blinking statistic U3 is computed from the positions of the facial key points: (x2, y2) and (x3, y3) are the coordinate values of the 2 key points of the upper eyelid of the right eye, (x5, y5) and (x6, y6) are the coordinate values of the 2 key points of the lower eyelid of the right eye, (x4, y4) is the coordinate value of the left corner of the right eye, (x1, y1) is the coordinate value of the right corner of the right eye, (x8, y8) and (x9, y9) are the coordinate values of the 2 key points of the upper eyelid of the left eye, (x11, y11) and (x12, y12) are the coordinate values of the 2 key points of the lower eyelid of the left eye, (x10, y10) is the coordinate value of the left corner of the left eye, and (x7, y7) is the coordinate value of the right corner of the left eye. The larger U3 is, the more open the eyes are; conversely, the smaller U3 is, the less open the eyes are.
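The patent's U3 formula also appears only as an image. With six points per eye, it has the same structure as the well-known eye aspect ratio (EAR, Soukupová and Čech, 2016); the per-eye sketch below uses EAR as an assumed stand-in, not as the patent's exact formula.

```python
from math import dist  # Euclidean distance, Python 3.8+

def eye_open_stat(corner_a, upper1, upper2, corner_b, lower2, lower1):
    """Hypothetical per-eye U3 in the spirit of the eye aspect ratio:
    mean vertical eyelid gap over the eye-corner distance. Each argument
    is an (x, y) tuple; upper1/lower1 and upper2/lower2 pair vertically."""
    vertical = dist(upper1, lower1) + dist(upper2, lower2)
    horizontal = 2.0 * dist(corner_a, corner_b)
    return vertical / horizontal

open_eye = eye_open_stat((0, 0), (1, -1), (3, -1), (4, 0), (3, 1), (1, 1))
closed_eye = eye_open_stat((0, 0), (1, 0), (3, 0), (4, 0), (3, 0), (1, 0))
print(open_eye > closed_eye)  # True: larger U3 means more open eyes
```

A full-face U3 would average this quantity over the left and right eyes.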
When the currently generated random action instruction is to open the mouth, a mouth-opening statistic U4 is computed from the positions of the facial key points: (x15, y15) and (x16, y16) are the coordinate values of the 2 key points of the upper lip, (x18, y18) and (x19, y19) are the coordinate values of the 2 key points of the lower lip, (x14, y14) is the coordinate value of the right corner of the mouth, and (x17, y17) is the coordinate value of the left corner of the mouth. The larger U4 is, the more open the mouth is; conversely, the smaller U4 is, the less open the mouth is.
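The U4 formula is likewise given only as an image; since the mouth has the same six-point layout as each eye, the sketch below assumes a mouth-aspect-ratio analogue of the eye statistic.

```python
from math import dist  # Euclidean distance, Python 3.8+

def mouth_open_stat(corner_r, up1, up2, corner_l, low2, low1):
    """Hypothetical U4, analogous to the eye statistic: mean vertical lip
    gap over the mouth-corner distance. Arguments are (x, y) tuples."""
    vertical = dist(up1, low1) + dist(up2, low2)
    horizontal = 2.0 * dist(corner_r, corner_l)
    return vertical / horizontal

open_mouth = mouth_open_stat((0, 0), (2, -2), (4, -2), (6, 0), (4, 2), (2, 2))
closed_mouth = mouth_open_stat((0, 0), (2, 0), (4, 0), (6, 0), (4, 0), (2, 0))
print(open_mouth > closed_mouth)  # True: larger U4 means a wider-open mouth
```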
As it can be seen that calculating the calculating for instructing relevant statistic to the random action being currently generated in the embodiment of the present invention one Complexity is low, and operand is small, convenient for realizing rapid solving on the built-in terminals such as mobile phone.
S1044: cache the statistic associated with the currently generated random action instruction.
S1045: after the image sequence has been read, read all cached statistics associated with the random action instruction, and compute the relative variation of all statistics associated with each random action instruction.
Suppose the random action instruction is k (k = 1 denotes shaking the head, k = 2 nodding, k = 3 blinking, and k = 4 opening the mouth), the corresponding statistic is Uk, and the number of statistics in the cache space is N (since the image sequence acquired in S103 is 100 frames long and key point localization in S1042 may fail, N ≤ 100). All the statistics of each random action instruction can be joined into a curve that records how the statistic changes during the facial action. The peaks and troughs of the curve reflect the extreme states of the facial action. For example: when a head shake reaches the leftmost position, U1 is minimal, reaching a trough; when it reaches the rightmost position, U1 is maximal, reaching a peak. When a nod reaches the topmost position, U2 is minimal, reaching a trough; when it reaches the bottommost position, U2 is maximal, reaching a peak. U3 reaches a peak when the eyes are most open and a trough when the eyes are closed. U4 reaches a peak when the mouth is most open and a trough when the mouth is closed.
To reduce data calculation errors, Embodiment 1 of the present invention first filters the statistics using the mean filter method with a filter window of 3 (other empirical values such as 4 or 5, and other filtering methods, may of course be used) to obtain the filtered statistics, where N is the number of statistics in the cache space.
Considering the continuity of the facial action, for simplicity Embodiment 1 of the present invention uses the maximum and minimum of the filtered statistics in place of the peak and trough.
Considering that, in the approximately frontal pose, the degree to which the eyes are open, the degree to which the mouth is closed, and the relative positions of the facial features all differ among users, Embodiment 1 of the present invention takes the mean of all statistics associated with each random action instruction as the user's reference value.
The relative variation ΔUk of all statistics associated with each random action instruction is then computed from these maximum, minimum, and mean values.
The physical meaning of the relative variation is that the more significant the facial movement, the larger the difference between the maximum and the minimum, and the larger the relative variation ΔUk. The relative variation ΔUk can therefore serve as the basis for action judgment.
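The patent's formula for ΔUk appears only as an image. A form consistent with the surrounding description (the peak-trough spread, normalized by the mean used as the user's reference value) would be ΔUk = (max − min) / mean; the sketch below is this assumed form.

```python
def relative_variation(stats):
    """Hypothetical dUk: spread of the (filtered) statistics, normalized by
    the sequence mean, which serves as the user's reference value."""
    mean = sum(stats) / len(stats)
    return (max(stats) - min(stats)) / mean

print(relative_variation([1.0, 1.0, 1.0]))  # 0.0: no facial movement
print(relative_variation([0.5, 1.0, 1.5]))  # 1.0: significant movement
```

Normalizing by the per-user mean is what makes the same threshold usable across users with different resting eye and mouth shapes.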
S1046: when the relative variation of all statistics associated with the random action instruction is greater than or equal to the preset threshold of the action corresponding to the random action instruction, determine that the user's facial action matches the random action instruction and that the current face is live; otherwise, determine that the user's facial action does not match the random action instruction.
Embodiment 1 of the present invention judges with a simple threshold decision scheme. Let Tk be the preset threshold of the k-th action class (k = 1 denotes shaking the head, k = 2 nodding, k = 3 blinking, and k = 4 opening the mouth). When the relative variation ΔUk is greater than or equal to the preset threshold Tk, the user's facial action is determined to match the random action instruction; otherwise they are determined not to match. Tk takes empirical values; in Embodiment 1 of the present invention, T1 = T2 = 0.6, T3 = 0.3, and T4 = 0.9. Since the present invention judges from the relative variation whether the user's facial action matches the random action instruction, it avoids complicated angle and matrix operations and improves running efficiency. Since Embodiment 1 of the present invention only needs to compute the mean, maximum, and minimum of each statistic and realizes the action judgment with a simple threshold decision scheme, its efficiency is greatly improved over machine learning methods such as classifiers, no training data is needed, and resource occupancy is very small.
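The threshold decision of S1046, with the embodiment's stated values T1 = T2 = 0.6, T3 = 0.3, T4 = 0.9, can be sketched as below; the string action labels are hypothetical names for k = 1..4.

```python
# Per-action preset thresholds from Embodiment 1: T1 = T2 = 0.6, T3 = 0.3, T4 = 0.9
THRESHOLDS = {"shake_head": 0.6, "nod": 0.6, "blink": 0.3, "open_mouth": 0.9}

def is_live(action, delta_u):
    """Declare the face live when the relative variation of the instructed
    action's statistic reaches that action's preset threshold."""
    return delta_u >= THRESHOLDS[action]

print(is_live("blink", 0.35))      # True: 0.35 >= 0.3
print(is_live("open_mouth", 0.5))  # False: 0.5 < 0.9
```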
Embodiment two:
Embodiment 2 of the present invention provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the face liveness detection method provided by Embodiment 1 of the present invention.
Embodiment three:
Fig. 2 shows a block diagram of the face recognition device provided by Embodiment 3 of the present invention. A face recognition device 100 includes one or more processors 101, a memory 102, and one or more computer programs, wherein the processors 101 and the memory 102 are connected by a bus, the one or more computer programs are stored in the memory 102 and configured to be executed by the one or more processors 101, and the processors 101 implement the steps of the face liveness detection method provided by Embodiment 1 of the present invention when executing the computer programs.
In Embodiment 3 of the present invention, the face recognition device may be a mobile terminal (such as a mobile phone or tablet computer), a desktop computer, or the like.
In the present invention, a statistic associated with each random action instruction is computed from the positions of facial key points, the relative variation of all statistics associated with each random action instruction is computed, and whether the user's facial action matches the random action instruction is judged from the relative variation; if they match, the current face is determined to be live. The face liveness detection method provided by the present invention therefore needs no state machine or similar mechanism to describe the different statistics, which greatly reduces the computation load, lowers the algorithm's complexity, and improves its running efficiency, so that efficient face liveness detection can be achieved on resource-constrained embedded terminals such as mobile phones and tablets.
Those of ordinary skill in the art will appreciate that all or part of the steps of the methods in the above embodiments can be completed by instructing the relevant hardware through a program, which may be stored in a computer-readable storage medium; the storage medium may include read-only memory (ROM), random access memory (RAM), a magnetic disk, an optical disc, and the like.
The foregoing is merely preferred embodiments of the present invention and is not intended to limit the present invention. Any modifications, equivalent replacements, and improvements made within the spirit and principles of the present invention shall be included in the protection scope of the present invention.

Claims (10)

1. A face liveness detection method, characterized in that the method includes:
S101: detecting a face in an image;
S102: generating a face-related random action instruction;
S103: acquiring an image sequence;
S104: detecting the user's facial action from the acquired image sequence, computing a statistic associated with each random action instruction from the positions of facial key points, computing the relative variation of all statistics associated with each random action instruction, and judging from the relative variation whether the user's facial action matches the random action instruction; if they match, determining that the current face is live.
2. The method of claim 1, characterized in that S101 specifically includes:
S1011: obtaining an image;
S1012: detecting a face in the image;
S1013: judging whether the size of the detected face fits a preset window, and executing S102 if it fits.
3. The method of claim 1, characterized in that the face-related random action instruction includes one of shaking the head, nodding, blinking, and opening the mouth, or any combination thereof.
4. The method of any one of claims 1 to 3, wherein S104 specifically comprises:
S1041: reading one frame of the captured image sequence, then executing S1042; if all images in the image sequence have been read, directly executing S1045;
S1042: locating facial key points;
S1043: computing, from the positions of the facial key points, a statistic relevant to the currently generated random action instruction;
S1044: caching said statistic relevant to the currently generated random action instruction;
S1045: after the image sequence has been read, reading all cached statistics relevant to the random action instructions, and separately computing the relative variation of all statistics relevant to each random action instruction;
S1046: when the relative variation of all statistics relevant to a random action instruction is greater than or equal to a preset threshold for the action corresponding to that random action instruction, determining that the user's facial action is consistent with the random action instruction and that the current face is a live face; otherwise, determining that the user's facial action is inconsistent with the random action instruction.
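The control flow of steps S1041–S1046 can be sketched as below. The key-point locator, the statistic function, the relative-variation function, and the threshold are all placeholders; only the structure (compute one statistic per frame, cache it, then make a single decision from the cached series, with no state machine) follows the claim.

```python
# Hedged sketch of the S1041–S1046 loop with hypothetical callables.
def liveness_decision(frames, locate_keypoints, statistic_fn,
                      relative_variation, threshold):
    cached = []                               # S1044: per-frame statistic cache
    for frame in frames:                      # S1041: read one frame at a time
        keypoints = locate_keypoints(frame)   # S1042: locate facial key points
        cached.append(statistic_fn(keypoints))  # S1043: action-relevant statistic
    # S1045–S1046: one decision over the whole cached series
    return relative_variation(cached) >= threshold
```

Because every frame contributes only one scalar per action, the final comparison is a single pass over the cache, which is what lets the method avoid describing the action with a state machine.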
5. The method of claim 3, wherein 19 facial key points are located for the four actions of shaking the head, nodding, blinking, and opening the mouth, with 6 key points each for the left eye, the right eye, and the mouth, and 1 key point for the nose tip; specifically: the upper eyelid and lower eyelid of the left eye each have 2 key points, and the two corners of the left eye each have 1 key point; the upper eyelid and lower eyelid of the right eye each have 2 key points, and the two corners of the right eye each have 1 key point; the upper lip and the lower lip each have 2 key points, and the two corners of the mouth each have 1 key point; or,
25 facial key points are located for the four actions of shaking the head, nodding, blinking, and opening the mouth, with 8 key points each for the left eye, the right eye, and the mouth, and 1 key point for the nose tip; specifically: the upper eyelid and lower eyelid of the left eye each have 3 key points, and the two corners of the left eye each have 1 key point; the upper eyelid and lower eyelid of the right eye each have 3 key points, and the two corners of the right eye each have 1 key point; the upper lip and the lower lip each have 3 key points, and the two corners of the mouth each have 1 key point.
6. The method of claim 4, wherein locating facial key points specifically comprises: locating facial key points within the preset window.
7. The method of claim 5, wherein when the currently generated random action instruction is shaking the head, the statistic U1 relevant to shaking the head is computed from the positions of the facial key points as:
wherein x1 is the x-axis coordinate of the right corner of the right eye, x10 is the x-axis coordinate of the left corner of the left eye, and x13 is the x-axis coordinate of the nose tip;
when the currently generated random action instruction is nodding, the statistic U2 relevant to nodding is computed from the positions of the facial key points as:
wherein y13 is the y-axis coordinate of the nose tip, y4 is the y-axis coordinate of the left corner of the right eye, y7 is the y-axis coordinate of the right corner of the left eye, y14 is the y-axis coordinate of the right corner of the mouth, and y17 is the y-axis coordinate of the left corner of the mouth;
when the currently generated random action instruction is blinking, the statistic U3 relevant to blinking is computed from the positions of the facial key points as:
wherein (x2, y2) and (x3, y3) are the coordinates of the 2 key points on the upper eyelid of the right eye, (x5, y5) and (x6, y6) are the coordinates of the 2 key points on the lower eyelid of the right eye, (x4, y4) is the coordinate of the left corner of the right eye, (x1, y1) is the coordinate of the right corner of the right eye, (x8, y8) and (x9, y9) are the coordinates of the 2 key points on the upper eyelid of the left eye, (x11, y11) and (x12, y12) are the coordinates of the 2 key points on the lower eyelid of the left eye, (x10, y10) is the coordinate of the left corner of the left eye, and (x7, y7) is the coordinate of the right corner of the left eye;
when the currently generated random action instruction is opening the mouth, the statistic U4 relevant to opening the mouth is computed from the positions of the facial key points as:
wherein (x15, y15) and (x16, y16) are the coordinates of the 2 key points on the upper lip, (x18, y18) and (x19, y19) are the coordinates of the 2 key points on the lower lip, (x14, y14) is the coordinate of the right corner of the mouth, and (x17, y17) is the coordinate of the left corner of the mouth.
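The formulas for U1–U4 appear as equation images in the original publication and are not reproduced in this text. As a plausible, non-authoritative illustration only, the blink statistic U3 is commonly built as an eye-aspect-ratio: the eyelid gap normalised by the eye width, averaged over both eyes. The sketch below uses the 19-point naming of claims 5 and 7 (keys `'p1'`…`'p12'` map to the (x1, y1)…(x12, y12) points above); the patented formula may differ.

```python
import math

def dist(p, q):
    """Euclidean distance between two (x, y) points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

# Hypothetical eye-aspect-ratio style blink statistic; NOT the patent's
# exact U3 formula (which is an image in the original document).
def blink_statistic(kp):
    # Right eye: upper eyelid p2, p3; lower eyelid p6, p5; corners p1, p4.
    right = (dist(kp['p2'], kp['p6']) + dist(kp['p3'], kp['p5'])) \
        / (2 * dist(kp['p1'], kp['p4']))
    # Left eye: upper eyelid p8, p9; lower eyelid p12, p11; corners p7, p10.
    left = (dist(kp['p8'], kp['p12']) + dist(kp['p9'], kp['p11'])) \
        / (2 * dist(kp['p7'], kp['p10']))
    return (right + left) / 2     # shrinks toward 0 as the eyes close
```

A statistic of this shape is scale-invariant (both numerator and denominator are distances on the same face), which is why its variation over the frame sequence can be compared against a fixed threshold.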
8. The method of claim 4, wherein after reading all cached statistics relevant to the random action instructions, the method further comprises: filtering the statistics;
the separately computing the relative variation of all statistics relevant to each random action instruction specifically comprises:
computing the maximum value and the minimum value of the statistics;
computing the mean of all statistics relevant to each random action instruction as the user's reference value;
computing the relative variation of all statistics relevant to each random action instruction.
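The symbols and the exact relative-variation formula of claim 8 are equation images in the original publication and are not reproduced in this text. A natural reading of the three steps, offered here only as an assumption, is the statistic's range normalised by the user's own mean (the "reference value"):

```python
# Hedged sketch of claim 8's relative variation; the normalisation by the
# mean is an assumption, not the patent's reproduced formula.
def relative_variation(stats):
    u_max, u_min = max(stats), min(stats)
    u_ref = sum(stats) / len(stats)   # per-user reference value (mean)
    return (u_max - u_min) / u_ref    # range relative to the reference
```

Normalising by the user's own mean is what makes the threshold in S1046 comparable across users with different face geometry.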
9. A computer-readable storage medium storing a computer program, characterized in that when the computer program is executed by a processor, the steps of the face liveness detection method of any one of claims 1 to 8 are implemented.
10. A face recognition device, comprising:
one or more processors;
a memory; and
one or more computer programs, the processors and the memory being connected by a bus, wherein the one or more computer programs are stored in the memory and configured to be executed by the one or more processors, characterized in that when a processor executes the computer programs, the steps of the face liveness detection method of any one of claims 1 to 8 are implemented.
CN201910082329.XA 2019-01-28 2019-01-28 Face living body detection method and face recognition device Active CN109784302B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910082329.XA CN109784302B (en) 2019-01-28 2019-01-28 Face living body detection method and face recognition device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910082329.XA CN109784302B (en) 2019-01-28 2019-01-28 Face living body detection method and face recognition device

Publications (2)

Publication Number Publication Date
CN109784302A true CN109784302A (en) 2019-05-21
CN109784302B CN109784302B (en) 2023-08-15

Family

ID=66502750

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910082329.XA Active CN109784302B (en) 2019-01-28 2019-01-28 Face living body detection method and face recognition device

Country Status (1)

Country Link
CN (1) CN109784302B (en)



Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080037836A1 (en) * 2006-08-09 2008-02-14 Arcsoft, Inc. Method for driving virtual facial expressions by automatically detecting facial expressions of a face image
CN102254151A (en) * 2011-06-16 2011-11-23 清华大学 Driver fatigue detection method based on face video analysis
CN103679118A (en) * 2012-09-07 2014-03-26 汉王科技股份有限公司 Human face in-vivo detection method and system
CN103440479A (en) * 2013-08-29 2013-12-11 湖北微模式科技发展有限公司 Method and system for detecting living body human face
CN105740779A (en) * 2016-01-25 2016-07-06 北京天诚盛业科技有限公司 Method and device for human face in-vivo detection
US20180308107A1 (en) * 2017-04-24 2018-10-25 Guangdong Matview Intelligent Science & Technology Co., Ltd. Living-body detection based anti-cheating online research method, device and system
CN107358152A (en) * 2017-06-02 2017-11-17 广州视源电子科技股份有限公司 A kind of vivo identification method and system
CN107392089A (en) * 2017-06-02 2017-11-24 广州视源电子科技股份有限公司 A kind of eyebrow movement detection method and device and vivo identification method and system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
黄叶珏: "基于交互式随机动作的人脸活体检测", 《软件导刊》 *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111353404A (en) * 2020-02-24 2020-06-30 支付宝实验室(新加坡)有限公司 Face recognition method, device and equipment
CN111353404B (en) * 2020-02-24 2023-12-01 支付宝实验室(新加坡)有限公司 Face recognition method, device and equipment
CN111539386A (en) * 2020-06-03 2020-08-14 黑龙江大学 Identity authentication system integrating fingerprint and face living body detection
CN112329727A (en) * 2020-11-27 2021-02-05 四川长虹电器股份有限公司 Living body detection method and device
CN112287909A (en) * 2020-12-24 2021-01-29 四川新网银行股份有限公司 Double-random in-vivo detection method for randomly generating detection points and interactive elements
CN112287909B (en) * 2020-12-24 2021-09-07 四川新网银行股份有限公司 Double-random in-vivo detection method for randomly generating detection points and interactive elements

Also Published As

Publication number Publication date
CN109784302B (en) 2023-08-15

Similar Documents

Publication Publication Date Title
CN109784302A (en) A kind of human face in-vivo detection method and face recognition device
KR102299847B1 (en) Face verifying method and apparatus
KR102483642B1 (en) Method and apparatus for liveness test
US8836777B2 (en) Automatic detection of vertical gaze using an embedded imaging device
US11580203B2 (en) Method and apparatus for authenticating a user of a computing device
CN105184246B (en) Living body detection method and living body detection system
JP7165742B2 (en) LIFE DETECTION METHOD AND DEVICE, ELECTRONIC DEVICE, AND STORAGE MEDIUM
JP4807167B2 (en) Impersonation detection device
US11715231B2 (en) Head pose estimation from local eye region
WO2018218839A1 (en) Living body recognition method and system
US20220043895A1 (en) Biometric authentication system, biometric authentication method, and program
WO2019011073A1 (en) Human face live detection method and related product
WO2018078857A1 (en) Line-of-sight estimation device, line-of-sight estimation method, and program recording medium
US20220019771A1 (en) Image processing device, image processing method, and storage medium
CN112257696A (en) Sight estimation method and computing equipment
CN112329727A (en) Living body detection method and device
CN110276313B (en) Identity authentication method, identity authentication device, medium and computing equipment
CN111639582A (en) Living body detection method and apparatus
JP6098133B2 (en) Face component extraction device, face component extraction method and program
WO2015181729A1 (en) Method of determining liveness for eye biometric authentication
CN112183200A (en) Eye movement tracking method and system based on video image
CN109409322B (en) Living body detection method and device, face recognition method and face detection system
JP2004157778A (en) Nose position extraction method, program for operating it on computer, and nose position extraction device
CA2931457C (en) Measuring cervical spine posture using nostril tracking
Cheung et al. Pose-tolerant non-frontal face recognition using EBGM

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 1301, Middle Block, Fujing Building, No. 29 Fuzhong Road, Xintian Community, Huafu Street, Futian District, Shenzhen City, Guangdong Province, 518000

Applicant after: Shenzhen Xinheyuan Technology Co.,Ltd.

Address before: 1301, Middle Block, Fujing Building, No. 29 Fuzhong Road, Xintian Community, Huafu Street, Futian District, Shenzhen City, Guangdong Province, 518000

Applicant before: SHENZHEN FENGBAO INTERNET TECHNOLOGY CO.,LTD.

CB02 Change of applicant information

Address after: C301, Floor 3, Xindu Commercial Plaza, 2008 Xinzhou South Road, Xingang Community, Fubao Street, Futian District, Shenzhen, Guangdong 518000

Applicant after: Shenzhen Xinheyuan Technology Co.,Ltd.

Address before: 1301, Middle Block, Fujing Building, No. 29 Fuzhong Road, Xintian Community, Huafu Street, Futian District, Shenzhen City, Guangdong Province, 518000

Applicant before: Shenzhen Xinheyuan Technology Co.,Ltd.

GR01 Patent grant