CN107358154A - Head movement detection method and device, and liveness recognition method and system - Google Patents
- Publication number
- CN107358154A (application CN201710406496.6A)
- Authority
- CN
- China
- Prior art keywords
- head
- video
- numerical value
- face
- frame
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/46—Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
- G06V40/171—Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
Abstract
The invention discloses a head movement detection method, comprising: extracting a number of video frames from a face video under test; obtaining the positions of several key points on the head in each video frame extracted from the face video under test; obtaining, from the positions of the several key points on the head, the yaw angle value and pitch angle value of the head in three-dimensional space for each extracted video frame; and judging the head movement of the face video under test based on the yaw angle value and pitch angle value of the head in each extracted video frame. Correspondingly, the invention also discloses a head movement detection device. The disclosed head movement detection method and device are computationally simple and efficient and place low requirements on hardware.
Description
Technical field
The present invention relates to the field of face recognition, and more particularly to a head movement detection method and device and a liveness recognition method and system.
Background technology
With the development of face recognition technology, more and more scenarios use face detection to quickly identify a person. However, an impostor may attempt face recognition with a picture or a video in place of a real person, so the security of the whole face recognition system cannot be guaranteed. Face liveness recognition can detect whether the current face under test is a live face rather than a face in a photograph or video, thereby ensuring the security of the face recognition system. During face recognition, detecting the head movement of the face under test helps determine whether the face is live.
An existing head movement detection technique obtains a three-dimensional image of the face through a depth camera or a dual camera, from which the head pose can be determined to judge head movement; this scheme is highly dependent on hardware and is therefore costly.
Summary of the invention
The purpose of the embodiments of the present invention is to provide a head movement detection method and device with low hardware requirements, so as to reduce cost.
To achieve the above object, an embodiment of the invention provides a head movement detection method, comprising the steps of:
extracting a number of video frames from a face video under test;
obtaining the positions of several key points on the head in each video frame extracted from the face video under test;
obtaining, from the positions of the several key points on the head, the yaw angle value and pitch angle value of the head in three-dimensional space for each extracted video frame;
judging the head movement of the face video under test based on the yaw angle value and pitch angle value of the head in each extracted video frame.
Compared with the prior art, the head movement detection method disclosed in the embodiment of the present invention obtains a number of video frames from the face video under test; determines the positions of several key points on the head of the face under test in each extracted video frame; obtains, from these key point positions, the yaw angle value and pitch angle value of the head in three-dimensional space for the corresponding video frame; and finally judges the head movement of the face video under test based on the yaw and pitch angle values of the head. This scheme is computationally simple and efficient; any ordinary camera, or the camera of a mobile phone, can serve as the input hardware for the face video under test, so the hardware requirements are simple and the cost is reduced.
Further, judging the head movement of the face video under test based on the yaw angle value and pitch angle value of the head in each extracted video frame comprises:
when the yaw angle value of the head is less than a first preset yaw angle value, judging that the head state of the corresponding video frame is a head left turn; when the yaw angle value of the head is greater than a second preset yaw angle value, judging that the head state of the corresponding video frame is a head right turn;
when the pitch angle value of the head is less than a first preset pitch angle value, judging that the head state of the corresponding video frame is the head bowed; when the pitch angle value of the head is greater than a second preset pitch angle value, judging that the head state of the corresponding video frame is the head raised;
when the yaw angle value of the head is greater than the first preset yaw angle value and less than the second preset yaw angle value, and the pitch angle value of the head is greater than the first preset pitch angle value and less than the second preset pitch angle value, judging that the head state of the corresponding video frame is the head facing forward;
if the video frames extracted from the face video under test include both a video frame whose head state is the head facing forward and a video frame whose head state is a head left turn, head right turn, head bowed, or head raised, judging that the head in the face video under test has moved, the head movement being the corresponding left turn, right turn, bow, or raise.
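As a sketch, the per-frame head-state rule above can be written as a small classifier. The concrete threshold values and the precedence of the yaw checks over the pitch checks are illustrative assumptions; the patent only refers to first and second preset yaw and pitch angle values without fixing them.

```python
# Illustrative preset values (degrees); the patent does not specify them.
YAW_LEFT, YAW_RIGHT = -20.0, 20.0    # first / second preset yaw angle values
PITCH_DOWN, PITCH_UP = -15.0, 15.0   # first / second preset pitch angle values

def head_state(yaw, pitch):
    """Classify one frame's head state from its yaw and pitch angles.

    Yaw is checked before pitch; this ordering is a sketch assumption,
    the patent does not say how simultaneous turn+nod frames are labelled.
    """
    if yaw < YAW_LEFT:
        return "turn_left"
    if yaw > YAW_RIGHT:
        return "turn_right"
    if pitch < PITCH_DOWN:
        return "head_down"
    if pitch > PITCH_UP:
        return "head_up"
    return "front"
```

For example, a frame with yaw −30° would be labelled a left turn, while a frame with yaw 5° and pitch −5° falls inside both preset intervals and is labelled facing forward.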
Further, obtaining the positions of the several key points on the head in each video frame extracted from the face video under test comprises:
performing face detection and facial key point detection with the dlib library on each video frame extracted from the face video under test, to obtain the positions of several key points of the face under test;
obtaining the positions of the several key points on the head from the facial key point positions in each extracted video frame; wherein the several key points on the head comprise one key point for the left eye, one key point for the right eye, one key point for the nose, and two key points for the mouth.
Further, obtaining, from the positions of the several key points on the head, the yaw angle value and pitch angle value of the head in three-dimensional space for each extracted video frame comprises:
using the pinhole camera model of the open-source image library OpenCV to obtain, from the positions of the several key points on the head, the yaw angle value and pitch angle value of the head in three-dimensional space for each extracted video frame.
Accordingly, an embodiment of the present invention also provides a head movement detection device, comprising:
a video frame extraction unit, for extracting a number of video frames from a face video under test;
a head key point position acquisition unit, for obtaining the positions of several key points on the head in each video frame extracted from the face video under test;
a head value acquisition unit, for obtaining, from the positions of the several key points on the head, the yaw angle value and pitch angle value of the head in three-dimensional space for each extracted video frame;
a head movement judgment unit, for judging the head movement of the face video under test based on the yaw angle value and pitch angle value of the head in each extracted video frame.
Compared with the prior art, in the head movement detection device disclosed in the embodiment of the present invention, the video frame extraction unit first extracts a number of video frames from the face video under test; the head key point position acquisition unit then obtains the positions of several key points on the head in each extracted video frame; the head value acquisition unit obtains the yaw angle value and pitch angle value of the head in each extracted video frame; and finally the head movement judgment unit judges the head movement of the face video under test based on the yaw and pitch angle values of the head. The scheme judges head movement by detecting the positions of key points on the head; the computation is simple and efficient, and any ordinary camera, or the camera of a mobile phone, can serve as the input hardware for the face video under test, so the hardware requirements are simple.
Further, the head movement judgment unit comprises:
a head state judgment module, for judging that the head state of the corresponding video frame is a head left turn when the yaw angle value of the head is less than a first preset yaw angle value, and judging that the head state of the corresponding video frame is a head right turn when the yaw angle value of the head is greater than a second preset yaw angle value;
judging that the head state of the corresponding video frame is the head bowed when the pitch angle value of the head is less than a first preset pitch angle value, and judging that the head state of the corresponding video frame is the head raised when the pitch angle value of the head is greater than a second preset pitch angle value;
and judging that the head state of the corresponding video frame is the head facing forward when the yaw angle value of the head is greater than the first preset yaw angle value and less than the second preset yaw angle value, and the pitch angle value of the head is greater than the first preset pitch angle value and less than the second preset pitch angle value;
a head movement judgment module, for judging, if the video frames extracted from the face video under test include both a video frame whose head state is the head facing forward and a video frame whose head state is a head left turn, head right turn, head bowed, or head raised, that the head in the face video under test has moved, the head movement being the corresponding left turn, right turn, bow, or raise.
Further, the head key point position acquisition unit comprises:
a facial key point position detection module, for performing face detection and facial key point position detection with the dlib library on each video frame extracted from the face video under test, to obtain the positions of several key points of the face under test;
a head key point position acquisition module, for obtaining the positions of the several key points on the head from the facial key point positions in each extracted video frame; wherein the several key points on the head comprise one key point for the left eye, one key point for the right eye, one key point for the nose, and two key points for the mouth.
Further, the head value acquisition unit is specifically configured to:
use the pinhole camera model of the open-source image library OpenCV to obtain, from the positions of the several key points on the head, the yaw angle value and pitch angle value of the head in three-dimensional space for each extracted video frame.
Correspondingly, an embodiment of the present invention also provides a liveness recognition method, comprising the steps of:
detecting the head movement of the face under test in the face video under test and the movement of at least one other facial part, wherein the head movement of the face under test is detected using a head movement detection method provided by the invention;
obtaining, based on the detected part movements, a motion score for each part movement of the face under test;
computing the weighted sum of the motion scores of the part movements, and taking the computed sum as the liveness score; wherein each part movement is assigned a preset weight;
judging the face under test to be live if the liveness score is not less than a preset threshold.
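The weighted score fusion in the steps above can be sketched as follows. The part names, weights, scores, and threshold are illustrative assumptions, not values given by the patent.

```python
def liveness_score(part_scores, weights):
    """Weighted sum of per-part motion scores, used as the liveness score."""
    return sum(weights[part] * score for part, score in part_scores.items())

def is_live(part_scores, weights, threshold=0.5):
    """Judge the face live when the liveness score reaches the preset threshold."""
    return liveness_score(part_scores, weights) >= threshold

# Hypothetical example: head movement detected, mouth movement not detected.
scores = {"head": 1.0, "mouth": 0.0}
weights = {"head": 0.6, "mouth": 0.4}   # preset weights (illustrative)
```

With these illustrative numbers the liveness score is 0.6, so the face passes a 0.5 threshold but would fail a stricter 0.7 threshold.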
Compared with the prior art, the liveness recognition method disclosed in the embodiment of the present invention detects the head movement of the face under test using the head movement detection method disclosed in the embodiment of the present invention, detects the movement of other parts of the face under test, obtains a motion score for each part movement, and takes the weighted sum of the part motion scores as the liveness score, which serves as the criterion for whether the face under test is live. The head movement detection computation is simple and efficient and the hardware requirements are simple; detecting the head together with at least one other part movement overcomes the prior-art problems of a single algorithm and low security, and offers strong scalability; the detection of facial part movement is based on two-dimensional images, so the hardware requirements are low. In addition, fusing the scores by weighting the different part movements gives a high liveness recognition accuracy. This liveness recognition method therefore has high accuracy, low hardware requirements, and high security.
Correspondingly, the present invention also provides a liveness recognition system, comprising:
at least two facial part movement detection devices, each for detecting the corresponding part movement of the face under test, wherein one of the facial part movement detection devices is a head movement detection device provided by the invention;
a part movement score acquisition device, for obtaining, based on the detection result of each part movement, a motion score for each part movement of the face under test;
a liveness score computation device, for computing the weighted sum of the motion scores of the part movements and taking the computed sum as the liveness score; wherein the liveness score computation device is preset with a weight for each part movement;
a liveness judgment device, for judging the face under test to be live if the liveness score is not less than a preset threshold.
Compared with the prior art, the liveness recognition system disclosed in the embodiment of the present invention obtains the motion scores of at least two parts of the face under test through at least two facial part movement detection devices, one of which is the head movement detection device; the liveness score computation device takes the weighted sum of the part motion scores as the liveness score; and the liveness judgment device uses the liveness score as the criterion for whether the face under test is live. The head movement detection device is computationally simple and efficient with simple hardware requirements; detecting the movement of at least two parts overcomes the prior-art problems of a single algorithm and low security, and offers strong scalability; the detection of facial part movement is based on two-dimensional images, so the hardware requirements are low. In addition, fusing the scores by weighting the different part movements through the liveness score computation device gives a high liveness recognition accuracy. The system thus achieves high liveness recognition accuracy, low hardware requirements, and high security.
Brief description of the drawings
Fig. 1 is a flow diagram of a head movement detection method provided by Embodiment 1 of the present invention;
Fig. 2 is a schematic diagram of the coordinate system of the head in three-dimensional space;
Fig. 3 is a flow diagram of step S14 of the head movement detection method provided by Embodiment 1 of the present invention;
Fig. 4 is a flow diagram of step S12 of the head movement detection method provided by Embodiment 1 of the present invention;
Fig. 5 is a schematic diagram of the 68-key-point model of the face under test;
Fig. 6 is a structural diagram of a head movement detection device provided by Embodiment 2 of the present invention;
Fig. 7 is a flow diagram of a liveness recognition method provided by Embodiment 3 of the present invention;
Fig. 8 is a flow diagram of step S24 of the liveness recognition method provided by Embodiment 3 of the present invention;
Fig. 9 is a structural diagram of a liveness recognition system provided by Embodiment 4 of the present invention.
Embodiments
The technical schemes in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. The described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the scope of protection of the present invention.
Embodiment 1 of the present invention provides a head movement detection method; referring to Fig. 1, the flow diagram of this embodiment, it comprises the steps:
S11, extracting a number of video frames from the face video under test;
S12, obtaining the positions of several key points on the head in each video frame extracted from the face video under test;
S13, obtaining, from the positions of the several key points on the head, the yaw angle value and pitch angle value of the head in three-dimensional space for each extracted video frame;
S14, judging the head movement of the face video under test based on the yaw angle value and pitch angle value of the head in each extracted video frame.
The yaw angle and pitch angle in three-dimensional space involved in steps S13 and S14 are shown in Fig. 2, a schematic diagram of the coordinate system of the head in three-dimensional space: the yaw angle (yaw) of the head is the angle of rotation about the Y axis, and the pitch angle (pitch) of the head is the angle of rotation about the X axis.
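Under the Fig. 2 convention (yaw about Y, pitch about X), the two angles can be read off a head rotation matrix. This is a minimal sketch assuming the factorisation R = Ry(yaw) @ Rx(pitch) with roll ignored; the patent does not specify a decomposition order, so this convention is an assumption.

```python
import numpy as np

def Ry(a):
    """Rotation about the Y axis (yaw, per Fig. 2), angle in radians."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def Rx(b):
    """Rotation about the X axis (pitch, per Fig. 2), angle in radians."""
    c, s = np.cos(b), np.sin(b)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def yaw_pitch(R):
    """Recover (yaw, pitch) in degrees, assuming R = Ry(yaw) @ Rx(pitch)."""
    yaw = np.degrees(np.arctan2(-R[2, 0], R[0, 0]))
    pitch = np.degrees(np.arctan2(-R[1, 2], R[1, 1]))
    return yaw, pitch
```

For example, composing a 20° yaw with a −10° pitch and decomposing the product recovers the same two angles.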
Referring to Fig. 3, the flow diagram of step S14 of this embodiment, step S14 specifically comprises the steps:
S141, when the yaw value of the head is less than the first preset yaw angle value, judging that the head state of the corresponding video frame is a head left turn; when the yaw value of the head is greater than the second preset yaw angle value, judging that the head state of the corresponding video frame is a head right turn;
S142, when the pitch value of the head is less than the first preset pitch angle value, judging that the head state of the corresponding video frame is the head bowed; when the pitch value of the head is greater than the second preset pitch angle value, judging that the head state of the corresponding video frame is the head raised;
S143, when the yaw value of the head is greater than the first preset yaw angle value and less than the second preset yaw angle value, and the pitch value of the head is greater than the first preset pitch angle value and less than the second preset pitch angle value, judging that the head state of the corresponding video frame is the head facing forward;
S144, if the video frames extracted from the face video under test include both a video frame whose head state is the head facing forward and a video frame whose head state is a head left turn, head right turn, head bowed, or head raised, judging that the head in the face video under test has moved, the head movement being the corresponding left turn, right turn, bow, or raise.
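Step S144 can be sketched as a check over the sequence of per-frame head states; the state labels are hypothetical names for the five states defined in S141 to S143.

```python
def head_movement(states):
    """Return the detected head movement, or None.

    Per S144: the head has moved only if the extracted frames contain both a
    facing-forward frame and a turn/bow/raise frame; the movement reported is
    the first such non-forward state encountered.
    """
    moves = {"turn_left", "turn_right", "head_down", "head_up"}
    if "front" in states:
        for s in states:
            if s in moves:
                return s
    return None
```

Note that a sequence containing only turned frames (no facing-forward frame) does not count as a detected movement under this rule, which guards against a statically rotated photograph.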
Step S14 judges whether the head of the face in the face video under test has moved based on the pitch value and the yaw value; in addition, the direction of rotation of the head of the face under test can be obtained from the pitch and yaw values. According to practical needs, the degree of head rotation can further be obtained from the pitch and yaw values of the head in each video frame, which greatly enriches the detection results of this embodiment for the head movement of the face under test.
When extracting a number of video frames from the face video under test in step S11, it is preferred either to take consecutive frames from the face video under test, or to extract video frames from the face video under test at a fixed time frequency.
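The fixed-time-frequency extraction preference can be sketched as an index sampler; the sampling rate is an illustrative parameter, not a value from the patent.

```python
def sample_frame_indices(total_frames, video_fps, sample_rate_hz):
    """Pick frame indices at a fixed time frequency.

    E.g. a 30 fps video sampled at 5 Hz keeps every 6th frame; a sampling
    rate at or above the video frame rate keeps consecutive frames.
    """
    step = max(1, round(video_fps / sample_rate_hz))
    return list(range(0, total_frames, step))
```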
Referring to Fig. 4, the flow diagram of step S12 of this embodiment, step S12 specifically comprises the steps:
S121, performing face detection and facial key point detection with the dlib library on each video frame extracted from the face video under test, to obtain the positions of several key points of the face under test;
referring to Fig. 5, the model diagram of the 68 key points of the face under test obtained by face detection and facial key point detection with the dlib library, the facial key point positions obtained in step S121 are the positions of key points 1 to 68 shown in Fig. 5;
S122, obtaining the positions of the several key points on the head from the facial key point positions in each extracted video frame; wherein the several key points on the head comprise one key point for the left eye, one key point for the right eye, one key point for the nose, and two key points for the mouth.
Preferably, the key point of the left eye is set to the position of the centre of the left eye, the key point of the right eye is set to the position of the centre of the right eye, the key point of the nose represents the nose, and the two key points of the mouth are the positions of the left and right mouth corners, respectively. These 5 head key points can be found in Fig. 5. Specifically, the x coordinate of the left eye centre is the average of the x coordinates of the 6 key points 37 to 42 that represent the left eye in Fig. 5, and the y coordinate of the left eye centre is the average of their y coordinates; the key point position of the left eye centre is thus the acquired x and y coordinates of the left eye centre. Similarly, the position of the right eye centre is obtained from the 6 key points 43 to 48 that represent the right eye. The key point of the nose is set to the position of key point 34; the key point of the left mouth corner is set to the position of key point 49; and the key point of the right mouth corner is set to the position of key point 55. Here, an x-y coordinate system is established by default in each extracted video frame, with the horizontal direction as the x axis and the vertical direction as the y axis; the key point positions of the face under test obtained from each extracted video frame are key point coordinates.
When the head of the face under test in the face video under test rotates, the relative positions among the eyes, nose, and mouth of the face acquired in different video frames change with the rotation. Therefore, in step S13 of this embodiment, the yaw value and pitch value of the head in three-dimensional space are obtained for each extracted video frame from the 5 key point positions representing the left eye centre, the right eye centre, the nose, the left mouth corner, and the right mouth corner, respectively.
Specifically, step S13 comprises: using the pinhole camera model of the open-source image library OpenCV to obtain, from the positions of the several key points on the head, the yaw angle value and pitch angle value of the head in three-dimensional space for each extracted video frame.
In specific implementation, this embodiment obtains a number of video frames from the face video under test; determines the positions of several key points on the head of the face under test in each extracted video frame, specifically one key point for the left eye, one for the right eye, one for the nose, and two for the mouth; obtains, from these head key point positions, the yaw value and pitch value of the head in three-dimensional space for the corresponding video frame; judges the state of the head in each video frame based on the yaw and pitch values; and finally judges, from the head states of the extracted video frames, whether the head of the face under test in the face video has moved, and the corresponding movement.
Compared with the prior art, the head key point positions of the extracted video frames are obtained to compute the yaw and pitch values of the head in three-dimensional space; the head state of the face under test in each extracted video frame is then determined from the yaw and pitch values acquired in the different extracted video frames, and the head movement of the face video under test is thereby obtained. The computation of this embodiment is simple and efficient, and any ordinary camera, or the camera of a mobile phone, can serve as the input hardware for the face video under test, so the hardware requirements are simple.
Embodiment 2 of the present invention provides a head movement detection device; referring to Fig. 6, the structural diagram of this embodiment, it specifically comprises:
a video frame extraction unit 11, for extracting a number of video frames from the face video under test;
a head key point position acquisition unit 12, for obtaining the positions of several key points on the head in each video frame extracted from the face video under test;
a head value acquisition unit 13, for obtaining, from the positions of the several key points on the head, the yaw angle value and pitch angle value of the head in three-dimensional space for each extracted video frame;
a head movement judgment unit 14, for judging the head movement of the face video under test based on the yaw angle value and pitch angle value of the head in each extracted video frame.
The yaw angle and pitch angle of the head in three-dimensional space are shown in Fig. 2, the schematic diagram of the coordinate system of the head in three-dimensional space: the yaw angle (yaw) of the head is the angle of rotation about the Y axis, and the pitch angle (pitch) of the head is the angle of rotation about the X axis.
Further, the head movement judging unit 14 specifically includes the following modules:
A head state judging module 141, configured to: when the yaw value of the head is less than a first preset yaw angle value, judge the head state of the corresponding video frame to be a head left turn; and when the yaw value of the head is greater than a second preset yaw angle value, judge the head state of the corresponding video frame to be a head right turn;
when the pitch value of the head is less than a first preset pitch angle value, judge the head state of the corresponding video frame to be a head down; and when the pitch value of the head is greater than a second preset pitch angle value, judge the head state of the corresponding video frame to be a head up;
when the yaw value of the head is greater than the first preset yaw angle value and less than the second preset yaw angle value, and the pitch value of the head is greater than the first preset pitch angle value and less than the second preset pitch angle value, judge the head state of the corresponding video frame to be head facing front;
A head movement judging module 142, configured to: if the video frames extracted from the face video to be measured include both video frames whose head state is head facing front and video frames whose head state is head left turn / head right turn / head down / head up, judge that the head of the face video to be measured has moved, the head movement being, correspondingly, a head left turn / head right turn / head down / head up.
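The threshold logic of modules 141 and 142 can be sketched as follows; the concrete threshold values (±15 degrees here), the priority given to yaw over pitch when both exceed their thresholds, and the function names are illustrative assumptions, not values fixed by this embodiment:

```python
def head_state(yaw, pitch,
               yaw_lo=-15.0, yaw_hi=15.0,      # first/second preset yaw angle values (assumed)
               pitch_lo=-15.0, pitch_hi=15.0):  # first/second preset pitch angle values (assumed)
    """Module 141: classify one video frame's head state from yaw/pitch in degrees."""
    if yaw < yaw_lo:
        return "left"    # head left turn
    if yaw > yaw_hi:
        return "right"   # head right turn
    if pitch < pitch_lo:
        return "down"    # head down
    if pitch > pitch_hi:
        return "up"      # head up
    return "front"       # head facing front

def head_moved(states):
    """Module 142: motion is judged when both a 'front' frame and a turned frame occur."""
    turned = {"left", "right", "down", "up"}
    moves = turned.intersection(states)
    return ("front" in states and bool(moves)), sorted(moves)
```

For example, frames classified `["front", "front", "left"]` yield a detected head left turn, while `["front", "front"]` yields no motion.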
The head movement judging unit 14 judges, based on the pitch and yaw values, whether the head of the face in the face video to be measured has moved. In addition, the direction of head rotation of the face to be measured can be obtained from the pitch and yaw values; and, depending on practical requirements, the degree of head rotation can further be derived from the pitch and yaw values of the head in each video frame, which greatly enriches the detection results of this embodiment for the head movement of the face to be measured.
When extracting video frames from the face video to be measured by the video frame extracting unit 11, it is preferable to take consecutive frames from the face video to be measured, or to sample video frames from the face video to be measured at a fixed time frequency.
The head key point position acquiring unit 12 specifically includes:
A face key point detection module 121, configured to perform face detection and face key point detection with the dlib library on each video frame extracted from the face video to be measured, obtaining several face key point positions;
referring to Fig. 5, which is a schematic diagram of the 68-key-point model of the face to be measured obtained by face detection and face key point detection using the dlib library, the several face key point positions obtained by the face key point detection module 121 are the positions of key points 1 to 68 shown in Fig. 5;
A head key point position acquiring module 122, configured to obtain several head key point positions from the several face key points of each extracted video frame.
Preferably, the head key point position acquiring module 122 obtains one key point for the left eye, set to the key point position representing the center of the left eye; one key point for the right eye, set to the key point position representing the center of the right eye; one key point for the nose, set to the key point position representing the nose; and two key points for the mouth, namely the key point positions representing the left and right mouth corners. These 5 head key point positions can be found in Fig. 5. Specifically, the average of the x coordinates of the 6 left-eye key points 37 to 42 in Fig. 5 is taken as the x coordinate of the left eye center, and the average of their y coordinates as the y coordinate of the left eye center; the key point position of the left eye center is the x and y coordinates thus obtained. Similarly, the key point position of the right eye center is obtained from the 6 right-eye key points 43 to 48. The nose key point is set to the position of key point 34, the left mouth corner key point to the position of key point 49, and the right mouth corner key point to the position of key point 55. Here, an xy coordinate system is assumed in each extracted video frame, with the horizontal direction as the x-axis and the vertical direction as the y-axis; the key point positions of the face to be measured obtained from each extracted video frame are key point coordinates.
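The reduction of the 68 dlib landmarks to the 5 head key points described above can be sketched as follows (1-based landmark numbering as in Fig. 5; the dict layout and key names are illustrative assumptions):

```python
def head_keypoints(pts):
    """pts: dict mapping 1-based dlib landmark number -> (x, y) pixel coordinate.
    Returns the 5 head key points: eye centers are averages of the 6 eye landmarks,
    nose and mouth corners are taken directly from single landmarks."""
    def mean(nums):
        xs = [pts[i][0] for i in nums]
        ys = [pts[i][1] for i in nums]
        return (sum(xs) / len(xs), sum(ys) / len(ys))
    return {
        "left_eye":  mean(range(37, 43)),  # landmarks 37-42
        "right_eye": mean(range(43, 49)),  # landmarks 43-48
        "nose":      pts[34],              # landmark 34
        "mouth_l":   pts[49],              # landmark 49, left mouth corner
        "mouth_r":   pts[55],              # landmark 55, right mouth corner
    }
```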
When the head of the face to be measured rotates in the face video to be measured, the relative positions among the eyes, nose and mouth of the face acquired in different video frames change with the rotation. Therefore, in this embodiment, the head value acquiring unit 13 obtains the yaw and pitch values of the head in three-dimensional space for each extracted video frame from the 5 key point positions representing the left eye center, right eye center, nose, left mouth corner and right mouth corner respectively.
Specifically, the head value acquiring unit 13 is configured to obtain the yaw and pitch values of the head in three-dimensional space for each extracted video frame from the several head key point positions, using the pinhole camera model of the open-source image library OpenCV.
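In practice this pinhole-model step is typically done by matching the 5 image points against a generic 3D face model with `cv2.solvePnP`, whose rotation vector `cv2.Rodrigues` turns into a 3x3 rotation matrix, from which yaw and pitch are read off. The final decomposition step can be sketched as below; the Rz(roll)·Ry(yaw)·Rx(pitch) factoring and the sign conventions are assumptions (the patent only fixes yaw about Y and pitch about X, per Fig. 2):

```python
import math

def yaw_pitch_from_R(R):
    """R: 3x3 rotation matrix as nested lists, factored as Rz(roll)*Ry(yaw)*Rx(pitch).
    Returns (yaw, pitch) in degrees: yaw about the Y-axis, pitch about the X-axis."""
    # For this factoring: R[2][0] = -sin(yaw), R[2][1] = cos(yaw)*sin(pitch),
    # R[2][2] = cos(yaw)*cos(pitch).
    yaw = math.atan2(-R[2][0], math.hypot(R[2][1], R[2][2]))
    pitch = math.atan2(R[2][1], R[2][2])
    return math.degrees(yaw), math.degrees(pitch)
```

A pure rotation of 30 degrees about Y should report yaw 30, pitch 0, and a pure rotation about X should report only pitch.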
In specific implementation, this embodiment first extracts several video frames from the face video to be measured by the video frame extracting unit 11; then obtains the face key point positions of each extracted video frame by the face key point detection module 121 of the head key point position acquiring unit 12, and obtains several head key point positions from the face key points by the head key point position acquiring module 122, specifically one key point position of the left eye, one of the right eye, one of the nose, and two of the mouth; next, the head value acquiring unit 13 computes, from the several head key point positions, the yaw and pitch values of the head in three-dimensional space for each extracted video frame; finally, the head state judging module 141 of the head movement judging unit 14 judges the head state of each corresponding video frame from the yaw and pitch values, and the head movement judging module 142 judges, based on the head states of the extracted video frames, whether the head of the face to be measured in the face video to be measured has moved, and the corresponding movement.
Compared with the prior art, the computation of this embodiment is simple and efficient; any ordinary camera or mobile phone camera can serve as the input hardware for the face video to be measured, so the hardware requirements of the apparatus are modest.
Embodiment 3 of the present invention provides a living body identification method; referring to Fig. 7, which is a schematic flowchart of this embodiment, the method includes the steps:
S21, detecting the head movement of the face to be measured in the face video to be measured and the movement of at least one other part, wherein the head movement of the face to be measured in the face video to be measured is detected using the head movement detection method provided by embodiment 1 of the present invention; the detailed process of head movement detection may refer to embodiment 1 and is not repeated here;
S22, obtaining, based on the detected movements, a motion score corresponding to each part movement of the face to be measured;
S23, calculating the weighted sum of the motion scores of the part movements, and taking the calculated sum as the living body identification score, wherein a weight is preset for each part movement;
S24, judging the face to be measured whose living body identification score is not less than a preset threshold to be a living body.
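Steps S22 to S24 reduce to a weighted sum followed by a threshold test; a minimal sketch, in which the part names, weights and preset value are illustrative assumptions:

```python
def liveness_decision(scores, weights, e=0.8):
    """scores/weights: dicts keyed by part movement ('mouth', 'eye', 'head', ...),
    with each motion score in [0, 1].
    S23: liveness score s = weighted sum of motion scores;
    S241: confidence f = s / s_max; S242-S243: living body iff f >= preset value e."""
    s = sum(weights[p] * scores[p] for p in weights)   # S23 weighted sum
    s_max = sum(weights.values())                      # maximum obtainable score
    f = s / s_max                                      # S241 confidence
    return f, f >= e                                   # S242/S243 judgment
```

With the weights and detections of the phone-side example given later in this embodiment (mouth 3, eye 2, head 1; mouth and eye moved, head did not), this returns a confidence of 5/6 and a living body judgment at e = 0.8.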
In step S21 of this embodiment, the at least one other detected part movement is at least one of mouth movement, eye movement, facial movement, eyebrow movement and forehead movement; as a rule, the mouth and eye movements of a face are pronounced and easy to detect, so detecting at least one of mouth movement and eye movement is preferable.
Similar to the head movement detection method provided by embodiment 1 of the present invention, detecting the at least one other part movement of the face to be measured in step S21 specifically includes: detecting, in each video frame extracted from the face video of the face to be measured at intervals of a preset number of frames, the part key point positions corresponding to the part movement, and determining the movement from the degree of change of those part key point positions across the extracted video frames; or detecting, in each video frame extracted at intervals of a preset number of frames, the gray-value features corresponding to the part movement, and determining the movement from the degree of change of the gray values of that part across the extracted video frames. The above implementations are only examples of detecting the at least one other part movement; realizing the detection of the at least one other part movement by other specific implementations on the basis of the living body identification method of this embodiment also falls within the protection scope of this embodiment.
In step S23 of this embodiment, a preferred way of setting the weight of each part movement is according to how pronounced that part movement is. For example, suppose the part movements of the face to be measured detected in step S21 are mouth movement, eye movement and head movement; generally, mouth movement is the most pronounced, so its weight is the largest, while head movement detection is the least precise, so its weight is the smallest, and the weights of the part movements are set accordingly: mouth movement > eye movement > head movement.
Alternatively, another preferred way of setting the weight of each part movement in step S23 is to adjust the weights automatically according to the application scenario. The specific practice is: under a given scenario, collect normal input videos of the various part movements of faces to be measured as positive samples and attack videos as negative samples; take (number of accepted positive samples + number of rejected negative samples) / (total positive samples + total negative samples) as the accuracy of each part movement; then sort the accuracies of the part movements in descending order, and reassign the weights of the part movements in the same descending order. Using the readjusted weights to calculate the living body identification score lets the identification adapt to the detection accuracy of each part movement under different scenarios, increasing the accuracy of the living body identification result of this embodiment.
Either of the above two preferred ways of setting the weight of each part movement falls within the protection scope of this embodiment.
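The scenario-driven re-weighting can be sketched as follows; the accuracy formula is the one given in the text, while reassigning the existing weight values in descending order of accuracy is one plausible reading of "in the same order" (an assumption):

```python
def part_accuracy(pos_pass, pos_total, neg_reject, neg_total):
    """Accuracy of one part movement:
    (accepted positive samples + rejected negative samples) / (all samples)."""
    return (pos_pass + neg_reject) / (pos_total + neg_total)

def readjust_weights(weights, accuracies):
    """Reassign the existing weight values so the most accurate part movement
    receives the largest weight, the second most accurate the second largest, etc."""
    parts_by_acc = sorted(accuracies, key=accuracies.get, reverse=True)
    values_desc = sorted(weights.values(), reverse=True)
    return dict(zip(parts_by_acc, values_desc))
```

If, under some scenario, eye movement turns out more accurate than mouth movement, the weight 3 migrates from the mouth to the eye.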
Specifically, referring to Fig. 8, which is a schematic flowchart of step S24, the step includes:
S241, calculating the living body identification confidence of the face to be measured as the ratio of the living body identification score to the total living body identification score;
S242, when the living body identification confidence is not less than a preset value, determining that the living body identification score is not less than the preset threshold;
S243, judging the face to be measured whose living body identification score is not less than the preset threshold to be a living body.
Specifically, in step S241, the total living body identification score is the maximum score obtainable when this embodiment identifies the face to be measured; the living body identification confidence of the face to be measured is calculated by the following formula:
f = (s / s_max) * 100%
where s_max denotes the total living body identification score, f denotes the living body identification confidence, and 0 < f < 1.
Denoting the preset value by e: when f ≥ e, i.e. the living body identification confidence is not less than the preset value, it is determined that the living body identification score is not less than the preset threshold, and the face to be measured is judged to be a living body; when f < e, i.e. the living body identification confidence is less than the preset value, it is determined that the living body identification score is less than the preset threshold, and the face to be measured is judged to be a non-living body.
The living body identification confidence obtained from the living body identification score can also be further exploited: a grading system can be established for this embodiment to perform living body judgment and living body classification, yielding richer living body identification results.
In step S22, obtaining the motion score corresponding to each part movement of the face to be measured based on the detected movement includes:
obtaining the corresponding motion score based on the head movement: when the head movement of the face to be measured detected in step S21 is that the head has moved, the motion score obtained for the head movement is 1 point; otherwise the motion score obtained for the head movement is 0 points.
Similarly, obtaining the corresponding motion scores based on the at least one other part movement: when the corresponding part of the face to be measured detected in step S21 has moved, the motion score obtained for that part movement is 1 point; otherwise the motion score obtained is 0 points. Alternatively, when living body detection in practice places specific requirements on the part movements of the face to be measured, the motion score obtained for a part movement is 1 point when the corresponding part detected in step S21 has moved and the detected movement meets the specific requirement; otherwise the motion score obtained is 0 points.
Besides obtaining the motion score from a moved / not-moved judgment, if the movement obtained in step S21 includes the degree of the part movement, a motion score within a score interval can also be obtained according to that degree; for example, the score can be divided into 10 levels with values between 0 and 1. This way also falls within the protection scope of this embodiment.
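The graded alternative (10 levels with values between 0 and 1) can be sketched as follows; the ceiling-based mapping from a raw movement degree to a level is an assumption, since the text does not fix the quantization rule:

```python
import math

def graded_score(degree, levels=10):
    """Quantize a raw movement degree in [0, 1] into one of `levels` motion scores
    in [0, 1]: degree 0.0 stays 0.0 (no motion); anything in ((k-1)/levels, k/levels]
    maps to k/levels."""
    degree = min(max(degree, 0.0), 1.0)  # clamp to [0, 1]
    return math.ceil(degree * levels) / levels
```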
In specific implementation, several video frames are first extracted from the face video to be measured, and part movement detection is performed on each extracted video frame to obtain the movement of the corresponding part, wherein detecting the part movements includes detecting the head movement of the face to be measured: first obtain the 68 key points of the face to be measured, then obtain from them several head key point positions, determine the head yaw and pitch values of the corresponding video frame from the several head key point positions, and thereby judge the head movement in the face video to be measured. The corresponding motion score is then obtained from each detected part movement, specifically 1 point if the part has moved and 0 points otherwise; next, the weighted sum of the motion scores obtained above is calculated, this sum being the living body identification score; finally the living body identification confidence is calculated as the ratio of the living body identification score to the total living body identification score, wherein, when the living body identification confidence is not less than the preset value, it is determined that the living body identification score is not less than the preset threshold, so the face to be measured is judged to be a living body; otherwise the face to be measured is judged to be a non-living body.
This embodiment can be applied on many kinds of device; the implementation scenario of applying it on a mobile phone is taken as an example for illustration. During living body identification on the mobile phone, a living body action request is issued at random, for example requiring the face to be measured to perform the living body actions of turning the head left, blinking and opening the mouth. Suppose the preset weights of the part movements are: weight w1 = 3 for the mouth movement corresponding to opening the mouth, weight w2 = 2 for the eye movement corresponding to blinking, and weight w3 = 1 for the head movement corresponding to turning the head left. The total living body identification score, i.e. the maximum living body identification score s_max, is calculated as 3*1 + 2*1 + 1*1 = 6 points. Suppose opening the mouth is detected and scored 1 point, blinking is scored 1 point, and turning the head left is scored 0 points; the living body identification score s is the weighted sum of the part movements, and substituting the above motion scores gives s = 3*1 + 2*1 + 1*0 = 5 points. Finally the living body identification confidence f = s/s_max = 5/6 = 83.33% is calculated. If the preset value e is set to 80%, the face to be measured is judged to be a living body, with a living body confidence of 83.33%.
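The arithmetic of this phone-side example can be checked directly:

```python
# Weights from the example: mouth (open mouth) w1=3, eye (blink) w2=2, head (turn left) w3=1.
w = {"mouth": 3, "eye": 2, "head": 1}
score = {"mouth": 1, "eye": 1, "head": 0}  # detected: mouth yes, blink yes, head turn no

s_max = sum(w.values())                    # 3*1 + 2*1 + 1*1 = 6
s = sum(w[p] * score[p] for p in w)        # 3*1 + 2*1 + 1*0 = 5
f = s / s_max                              # 5/6
print(round(f * 100, 2))                   # 83.33; not less than e = 80, so a living body
```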
This embodiment solves the prior-art problems of a single algorithm and low security, and is highly extensible; the head movement detection method for the face to be measured is computationally simple and efficient, with modest hardware requirements; in addition, in this embodiment, the detection of multiple part movements is used for living body identification, with the part movements weighted and their scores fused, so the living body identification accuracy is high, which helps improve security.
Embodiment 4 of the present invention provides a living body identification system; referring to Fig. 9, which is a schematic structural diagram of this embodiment, the embodiment includes:
At least two face part motion detection apparatuses 1, each face part motion detection apparatus 1 being used to detect the corresponding part movement of the face to be measured; the face part motion detection apparatuses 1a and 1b in Fig. 9 represent 2 face part motion detection apparatuses 1 detecting two different part movements; one of the face part motion detection apparatuses 1 is the head movement detection apparatus provided by embodiment 2 of the present invention, for which refer to embodiment 2; it is not repeated here.
A part motion score acquiring device 2, for obtaining the motion score corresponding to each part movement of the face to be measured based on the detection result of each part movement;
A living body identification score calculating device 3, for calculating the weighted sum of the motion scores corresponding to the part movements and taking the calculated sum as the living body identification score, wherein the living body identification score calculating device 3 presets a weight corresponding to each part movement;
A living body judging device 4, for judging the face to be measured whose living body identification score is not less than a preset threshold to be a living body.
The at least one other part movement detected by the face part motion detection apparatuses 1 other than the head movement detection apparatus 1 includes at least one of mouth movement, eye movement, eyebrow movement, forehead movement and facial movement. Since mouth movement and eye movement are pronounced, detecting at least one of mouth movement and eye movement is preferable.
Similar to the head movement detection apparatus 1, the at least one other face part motion detection apparatus 1 is specifically used to detect, in each video frame extracted from the face video of the face to be measured at intervals of a preset number of frames, the part key point positions corresponding to the part movement, and to determine the part movement from the degree of change of those part key point positions across the extracted video frames; or the face part motion detection apparatus 1 can also specifically be used to detect, in each video frame extracted at intervals of a preset number of frames, the gray-value features corresponding to the part movement, and to determine the part movement from the degree of change of the gray values of that part across the extracted video frames. The above implementations are only examples of what the at least one other face part motion detection apparatus 1 detects; realizing the detection of the at least one other part movement by face part motion detection apparatuses 1 of other embodiments also falls within the protection scope of this embodiment.
The part motion score acquiring device 2 is specifically used to obtain the corresponding motion score based on the head movement: when the head movement of the face to be measured is that the head has moved, the motion score obtained for the head movement is 1 point; otherwise the motion score obtained for the head movement is 0 points. The part motion score acquiring device 2 is also specifically used to obtain the corresponding motion scores based on the at least one other part movement: when the corresponding part of the face to be measured has moved, the motion score obtained for that part movement is 1 point; otherwise the motion score obtained is 0 points.
Besides the above embodiment in which the part motion score acquiring device 2 directly obtains a moved / not-moved motion score based on whether each part has moved, when the movement obtained by the face part motion detection apparatus 1 includes the degree of the part movement, the part motion score acquiring device 2 can also obtain a motion score between 0 and 1 based on that degree, for example dividing the motion score into 10 levels with values between 0 and 1; this alternative embodiment can indicate not only whether there is movement but also its degree.
In the living body identification score calculating device 3, the weight corresponding to each part movement is set according to how pronounced that part movement is. For example, when the detected part movements are mouth movement, eye movement and head movement, mouth movement is the most pronounced, so its weight is the largest, while head movement detection is the least precise, so its weight is the smallest, and the weights of the part movements are set accordingly: mouth movement > eye movement > head movement.
Alternatively, in the living body identification score calculating device 3, the weights corresponding to the part movements are adjusted automatically according to the application scenario. The specific practice is: under a given scenario, collect normal input videos of the various part movements of faces to be measured as positive samples and attack videos as negative samples; take (number of accepted positive samples + number of rejected negative samples) / (total positive samples + total negative samples) as the accuracy of each part movement; then sort the accuracies of the part movements in descending order, and reassign the weights of the part movements in the same descending order.
Either of the above two preferred ways of setting the weight of each part movement falls within the protection scope of this embodiment.
The living body judging device 4 specifically includes:
A living body identification confidence calculating unit 41, for calculating the living body identification confidence of the face to be measured as the ratio of the living body identification score to the total living body identification score;
wherein the total living body identification score is the maximum weighted sum of the motion scores of all part movements obtainable by the living body identification score calculating device 3, denoted s_max; f denotes the living body identification confidence, and 0 < f < 1; the living body identification confidence calculating unit 41 calculates the living body identification confidence of the face to be measured by the following formula:
f = (s / s_max) * 100%
A living body judging unit 42, for determining, when the living body identification confidence is not less than a preset value, that the living body identification score is not less than the preset threshold, and judging the face to be measured whose living body identification score is not less than the preset threshold to be a living body.
Denoting the preset value by e, the living body judging unit 42 judges as follows: when f ≥ e, i.e. the living body identification confidence is not less than the preset value, it is determined that the living body identification score is not less than the preset threshold, and the face to be measured is judged to be a living body; when f < e, i.e. the living body identification confidence is less than the preset value, it is determined that the living body identification score is less than the preset threshold, and the face to be measured is judged to be a non-living body.
The living body identification confidence obtained by the living body identification confidence calculating unit 41 can also be further exploited: a grading system can be established for the living body identification system of this embodiment to perform living body judgment and living body classification, yielding richer living body identification results.
In specific implementation, first, the movements of the corresponding parts are obtained by the at least two face part motion detection apparatuses 1, one of which is the head movement detection apparatus of embodiment 2 of the present invention, and the corresponding motion scores are obtained by the part motion score acquiring device 2 based on the detected movements; then, the weighted sum of the obtained motion scores of the part movements is taken as the living body identification score by the living body identification score calculating device 3; finally, the living body identification confidence of the face to be measured is calculated as the ratio of the living body identification score to the total living body identification score by the living body identification confidence calculating unit 41 of the living body judging device 4, and the face to be measured whose calculated living body identification confidence is not less than the preset value is judged by the living body judging unit 42 to be a living body.
By using at least two face part motion detection apparatuses, this embodiment solves the prior-art problems of a single algorithm and low security and is highly extensible; the computation of the head movement detection apparatus is simple and efficient, with modest hardware requirements; in addition, the part movements are weighted and their scores fused by the living body identification score calculating device, so the living body identification accuracy is high, obtaining the beneficial effects of high identification accuracy, low hardware requirements and good security.
The above are preferred embodiments of the present invention; it should be noted that, for those of ordinary skill in the art, several improvements and modifications can be made without departing from the principles of the present invention, and these improvements and modifications are also regarded as falling within the protection scope of the present invention.
Claims (10)
1. A head movement detection method, characterized in that the head movement detection method includes the steps of:
extracting several video frames from a face video to be measured;
obtaining several key point positions on the head in each video frame extracted from the face video to be measured;
obtaining, from the several head key point positions, the yaw angle value and pitch angle value of the head in three-dimensional space for each extracted video frame;
judging the head movement of the face video to be measured based on the yaw angle value and pitch angle value of the head in each extracted video frame.
2. The head movement detection method of claim 1, characterized in that judging the head-motion status of the face video to be measured on the basis of the yaw angle value and the pitch angle value of the head in each extracted video frame comprises:
when the yaw angle value of the head is smaller than a first preset yaw angle value, judging that the head state of the corresponding video frame is a left turn; when the yaw angle value of the head is larger than a second preset yaw angle value, judging that the head state of the corresponding video frame is a right turn;
when the pitch angle value of the head is smaller than a first preset pitch angle value, judging that the head state of the corresponding video frame is a lowered head; when the pitch angle value of the head is larger than a second preset pitch angle value, judging that the head state of the corresponding video frame is a raised head;
when the yaw angle value of the head lies between the first and second preset yaw angle values and the pitch angle value of the head lies between the first and second preset pitch angle values, judging that the head state of the corresponding video frame is frontal;
if the video frames extracted from the face video to be measured include both a frame whose head state is frontal and a frame whose head state is a left turn / right turn / lowered head / raised head, judging that the head in the face video to be measured is moving, the head movement being a left turn / right turn / lowered head / raised head, respectively.
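The per-frame thresholding and the motion decision described above can be sketched as follows. The ±15° defaults and the state names are assumptions; the claim only speaks of first and second preset yaw and pitch angle values.

```python
def classify_head_state(yaw, pitch, yaw_low=-15.0, yaw_high=15.0,
                        pitch_low=-15.0, pitch_high=15.0):
    """Map one frame's head yaw/pitch (degrees) to a head state.

    The +/-15 degree presets are illustrative assumptions, not values
    taken from the patent.
    """
    if yaw < yaw_low:
        return "turn_left"
    if yaw > yaw_high:
        return "turn_right"
    if pitch < pitch_low:
        return "head_down"
    if pitch > pitch_high:
        return "head_up"
    return "frontal"


def judge_head_motion(states):
    """Decision rule of claim 2: the head moved iff the extracted frames
    contain both a frontal frame and at least one non-frontal frame."""
    non_frontal = sorted({s for s in states if s != "frontal"})
    return ("frontal" in states and bool(non_frontal), non_frontal)
```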
3. The head movement detection method of claim 1, characterized in that obtaining the positions of the head key points in each video frame extracted from the face video to be measured comprises:
performing face detection and facial key point position detection with the dlib library on each video frame extracted from the face video to be measured, obtaining the positions of a number of key points of the face to be measured;
obtaining the positions of the head key points from the positions of the face key points in each extracted video frame; wherein the head key points comprise one key point of the left eye, one key point of the right eye, one key point of the nose, and two key points of the mouth.
4. The head movement detection method of claim 1, characterized in that obtaining, from the positions of the head key points, the yaw angle value and the pitch angle value of the head in three-dimensional space for each extracted video frame comprises:
using the pinhole camera model in the open-source image library OpenCV to obtain, from the positions of the head key points, the yaw angle value and the pitch angle value of the head in three-dimensional space for each extracted video frame.
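In practice the OpenCV route would be `cv2.solvePnP` over assumed 3D reference coordinates of the five landmarks, followed by `cv2.Rodrigues` to obtain a rotation matrix. The final step, decomposing that matrix into yaw and pitch, might look like this sketch (assuming an R = Rz·Ry·Rx Euler convention, which the patent does not fix):

```python
import math


def euler_from_rotation(R):
    """Decompose a 3x3 rotation matrix (nested lists) into
    (pitch, yaw, roll) in degrees, assuming R = Rz @ Ry @ Rx.
    Yaw and pitch are the values the claims then threshold."""
    sy = math.hypot(R[0][0], R[1][0])
    if sy > 1e-6:
        pitch = math.atan2(R[2][1], R[2][2])  # rotation about x
        yaw = math.atan2(-R[2][0], sy)        # rotation about y
        roll = math.atan2(R[1][0], R[0][0])   # rotation about z
    else:  # gimbal lock: yaw near +/-90 degrees, roll not recoverable
        pitch = math.atan2(-R[1][2], R[1][1])
        yaw = math.atan2(-R[2][0], sy)
        roll = 0.0
    return tuple(map(math.degrees, (pitch, yaw, roll)))
```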
5. A head movement detection device, characterized by comprising:
a video frame extraction unit, configured to extract a number of video frames from a face video to be measured;
a head key point position acquisition unit, configured to obtain the positions of a number of head key points in each video frame extracted from the face video to be measured;
a head value acquisition unit, configured to obtain, from the positions of the head key points, the yaw angle value and the pitch angle value of the head in three-dimensional space for each extracted video frame;
a head movement judging unit, configured to judge the head-motion status of the face video to be measured on the basis of the yaw angle value and the pitch angle value of the head in each extracted video frame.
6. The head movement detection device of claim 5, characterized in that the head movement judging unit comprises:
a head state judging module, configured to judge that the head state of the corresponding video frame is a left turn when the yaw angle value of the head is smaller than the first preset yaw angle value, and a right turn when the yaw angle value of the head is larger than the second preset yaw angle value;
to judge that the head state of the corresponding video frame is a lowered head when the pitch angle value of the head is smaller than the first preset pitch angle value, and a raised head when the pitch angle value of the head is larger than the second preset pitch angle value;
and to judge that the head state of the corresponding video frame is frontal when the yaw angle value of the head lies between the first and second preset yaw angle values and the pitch angle value of the head lies between the first and second preset pitch angle values;
a head movement judging module, configured to judge that the head in the face video to be measured is moving if the extracted video frames include both a frame whose head state is frontal and a frame whose head state is a left turn / right turn / lowered head / raised head, the head movement being a left turn / right turn / lowered head / raised head, respectively.
7. The head movement detection device of claim 5, characterized in that the head key point position acquisition unit comprises:
a face key point position detection module, configured to perform face detection and facial key point position detection with the dlib library on each video frame extracted from the face video to be measured, obtaining the positions of a number of key points of the face to be measured;
a head key point position acquisition module, configured to obtain the positions of the head key points from the positions of the face key points in each extracted video frame; wherein the head key points comprise one key point of the left eye, one key point of the right eye, one key point of the nose, and two key points of the mouth.
8. The head movement detection device of claim 5, characterized in that the head value acquisition unit is specifically configured to:
use the pinhole camera model in the open-source image library OpenCV to obtain, from the positions of the head key points, the yaw angle value and the pitch angle value of the head in three-dimensional space for each extracted video frame.
9. A vivo identification (liveness) method, characterized in that the method comprises the steps of:
detecting the head-motion status of the face to be measured in a face video to be measured together with the motion status of at least one other facial part, wherein the head-motion status of the face to be measured is detected using the head movement detection method of any one of claims 1 to 4;
obtaining, from each part's motion status, the motion score corresponding to each part motion of the face to be measured;
computing the weighted sum of the motion scores corresponding to the part motions and taking the computed sum as the vivo identification score; wherein each part motion is assigned a preset weight;
judging the face to be measured whose vivo identification score is not less than a preset threshold to be a living body.
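The score fusion in claim 9 is a plain weighted sum compared against a threshold. A minimal sketch, in which the part names, weights, scores and threshold are assumed example values, not values from the patent:

```python
def vivo_identification_score(motion_scores, weights):
    """Weighted sum of the per-part motion scores (claim 9)."""
    return sum(weights[part] * score for part, score in motion_scores.items())


def is_live(motion_scores, weights, threshold):
    """A face is judged live when its score is not less than the preset threshold."""
    return vivo_identification_score(motion_scores, weights) >= threshold
```

For example, a detected head motion (score 1.0, weight 0.6) combined with no mouth motion (score 0.0, weight 0.4) yields a vivo identification score of 0.6.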
10. A vivo identification system, characterized in that the system comprises:
at least two facial part motion detection devices, each configured to detect the motion of the corresponding part of the face to be measured, one of which is a head movement detection device according to any one of claims 5 to 8;
a part motion score acquisition device, configured to obtain, from the detection result of each part motion, the motion score corresponding to each part motion of the face to be measured;
a vivo identification score computing device, configured to compute the weighted sum of the motion scores corresponding to the part motions and to take the computed sum as the vivo identification score; wherein the vivo identification score computing device is preset with the weight corresponding to each part motion;
a living body judging device, configured to judge the face to be measured whose vivo identification score is not less than the preset threshold to be a living body.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710406496.6A CN107358154A (en) | 2017-06-02 | 2017-06-02 | A kind of head movement detection method and device and vivo identification method and system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107358154A true CN107358154A (en) | 2017-11-17 |
Family
ID=60271697
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710406496.6A Pending CN107358154A (en) | 2017-06-02 | 2017-06-02 | A kind of head movement detection method and device and vivo identification method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107358154A (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103514439A (en) * | 2012-06-26 | 2014-01-15 | Google Inc. | Facial recognition |
CN104978548A (en) * | 2014-04-02 | 2015-10-14 | Hanwang Technology Co., Ltd. | Visual line estimation method and visual line estimation device based on three-dimensional active shape model |
CN105159452A (en) * | 2015-08-28 | 2015-12-16 | Chengdu Tongjia Youbo Technology Co., Ltd. | Control method and system based on estimation of human face posture |
CN105487665A (en) * | 2015-12-02 | 2016-04-13 | Nanjing University of Posts and Telecommunications | Method for controlling intelligent mobile service robot based on head posture recognition |
CN105868677A (en) * | 2015-01-19 | 2016-08-17 | Alibaba Group Holding Ltd. | Live human face detection method and device |
Non-Patent Citations (2)
Title |
---|
WU, JUN et al.: "Experimental Comparison and Analysis of the EPNP and POIST Algorithms for Head Pose Estimation", Journal of North China University of Technology * |
XING, SHIMENG: "Detection, Analysis and Design of Head Movement and Posture", China Masters' Theses Full-text Database * |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108197534A (en) * | 2017-12-19 | 2018-06-22 | Maiju (Shenzhen) Technology Co., Ltd. | Head pose detection method, electronic device and storage medium |
CN109086727A (en) * | 2018-08-10 | 2018-12-25 | Beijing QIYI Century Science & Technology Co., Ltd. | Method, apparatus and electronic device for determining the movement angle of a human head |
CN109299323A (en) * | 2018-09-30 | 2019-02-01 | Guangdong OPPO Mobile Telecommunications Corp., Ltd. | Data processing method, terminal, server and computer storage medium |
CN109299323B (en) * | 2018-09-30 | 2021-05-25 | Guangdong OPPO Mobile Telecommunications Corp., Ltd. | Data processing method, terminal, server and computer storage medium |
CN109829439A (en) * | 2019-02-02 | 2019-05-31 | BOE Technology Group Co., Ltd. | Method and device for calibrating predicted values of a head motion trajectory |
CN109829439B (en) * | 2019-02-02 | 2020-12-29 | BOE Technology Group Co., Ltd. | Method and device for calibrating predicted values of a head motion trajectory |
CN110263691A (en) * | 2019-06-12 | 2019-09-20 | Hefei Zhongke Benba Technology Co., Ltd. | Head movement detection method based on the Android system |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107358154A (en) | A kind of head movement detection method and device and vivo identification method and system | |
CN107330920B (en) | Monitoring video multi-target tracking method based on deep learning | |
CN105930767B (en) | A kind of action identification method based on human skeleton | |
CN106548182B (en) | Pavement crack detection method and device based on deep learning and main cause analysis | |
CN103824070B (en) | A kind of rapid pedestrian detection method based on computer vision | |
CN109389185B (en) | Video smoke identification method using three-dimensional convolutional neural network | |
CN107330914A (en) | Face position method for testing motion and device and vivo identification method and system | |
CN104361332B (en) | A kind of face eye areas localization method for fatigue driving detection | |
CN107358155A (en) | A kind of funny face motion detection method and device and vivo identification method and system | |
CN107392089A (en) | A kind of eyebrow movement detection method and device and vivo identification method and system | |
CN107330370A (en) | A kind of brow furrows motion detection method and device and vivo identification method and system | |
CN105046206B (en) | Based on the pedestrian detection method and device for moving prior information in video | |
CN107909027A (en) | It is a kind of that there is the quick human body target detection method for blocking processing | |
Chen et al. | Obstacle detection system for visually impaired people based on stereo vision | |
CN106886216A (en) | Robot automatic tracking method and system based on RGBD Face datections | |
CN109460704A (en) | A kind of fatigue detection method based on deep learning, system and computer equipment | |
CN109993061B (en) | Face detection and recognition method, system and terminal equipment | |
CN107358153A (en) | A kind of mouth method for testing motion and device and vivo identification method and system | |
CN103105924B (en) | Man-machine interaction method and device | |
CN103150572A (en) | On-line type visual tracking method | |
CN105868734A (en) | Power transmission line large-scale construction vehicle recognition method based on BOW image representation model | |
CN109308718A (en) | A kind of space personnel positioning apparatus and method based on more depth cameras | |
CN107368777A (en) | A kind of smile motion detection method and device and vivo identification method and system | |
CN109271941A (en) | A kind of biopsy method for taking the photograph attack based on anti-screen | |
CN107358151A (en) | A kind of eye motion detection method and device and vivo identification method and system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20171117 |