CN107358155A - Method and device for detecting ghost face action and method and system for recognizing living body - Google Patents
- Publication number
- CN107358155A CN107358155A CN201710412025.6A CN201710412025A CN107358155A CN 107358155 A CN107358155 A CN 107358155A CN 201710412025 A CN201710412025 A CN 201710412025A CN 107358155 A CN107358155 A CN 107358155A
- Authority
- CN
- China
- Prior art keywords
- face
- video
- measured
- mouth
- area
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/40—Spoof detection, e.g. liveness detection
- G06V40/45—Detection of the body part being alive
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/174—Facial expression recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/46—Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
Abstract
The invention discloses a method for detecting a grimace action, comprising the steps of: extracting several video frames from a face video under test; obtaining the face region position, several eye key point positions and several mouth key point positions of each extracted video frame; calculating the face region area, the eye area and the mouth area from the face region position, the eye key point positions and the mouth key point positions, respectively; obtaining a measurement score by calculating the ratio of the sum of the eye area and the mouth area of each extracted video frame to the face region area; and judging the grimace action condition of the face video under test based on the measurement scores. Correspondingly, the invention also discloses a device for detecting a grimace action. The invention is computationally simple and highly efficient.
Description
Technical field
The present invention relates to the field of face recognition, and more particularly to a grimace motion detection method and device, and to a living body recognition method and system.
Background technology
With the development of face recognition technology, more and more scenarios use face detection to quickly verify a person's identity. However, malicious users may attempt face recognition using a picture or a video instead of a real person, so the security of the whole recognition system cannot be guaranteed. Face liveness recognition can verify that the current face under test is a living face rather than a face in a photograph or video, thereby ensuring the security of the face recognition system. During face recognition, detecting a grimace action of the face under test helps to determine whether the face is a living body. To identify whether a face is a living body efficiently and simply, an efficient and simple grimace motion detection scheme is needed.
Summary of the invention
The purpose of the embodiments of the present invention is to provide a grimace motion detection method and device that are computationally simple and highly efficient.
To achieve the above object, an embodiment of the invention provides a grimace motion detection method, including the steps of:
extracting several video frames from a face video under test;
obtaining the face region position, several eye key point positions and several mouth key point positions of each video frame extracted from the face video under test;
calculating the face region area, the eye area and the mouth area from the face region position, the eye key point positions and the mouth key point positions, respectively;
obtaining a measurement score for each extracted video frame by calculating the ratio of the sum of the eye area and the mouth area to the face region area;
judging the grimace action condition of the face video under test based on the measurement scores of the extracted video frames.
Compared with the prior art, the grimace motion detection method disclosed in the embodiment of the present invention obtains the face region position, the eye key point positions and the mouth key point positions of the face under test in each extracted video frame; then obtains the face region area, the eye area and the mouth area; obtains a measurement score as the ratio of the sum of the eye area and the mouth area to the face region area; and finally judges, based on the measurement scores of the extracted frames, whether the face video under test contains a grimace action. Because the areas of the face region, the eyes and the mouth are calculated directly from the obtained key point positions, the calculation process is simple and efficient, and judging the grimace action from the measurement scores yields accurate detection results with a small amount of computation. Moreover, any ordinary camera, or the camera of a mobile phone, can serve as the input hardware for the face video under test, so the hardware requirements of the device are simple.
Further, judging the grimace action condition of the face video under test based on the measurement scores of the extracted video frames includes:
judging whether the measurement score of each extracted video frame is within a preset measurement score range; if so, the face under test in the corresponding video frame is in a normal state; if not, the face under test in the corresponding video frame is in a grimace state;
when the extracted video frames include both frames in which the face under test is in a normal state and frames in which it is in a grimace state, judging that the face under test performs a grimace action.
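The per-frame judgment and the whole-video rule above can be sketched in a few lines of Python. The score range [0.08, 0.25] is an illustrative assumption; the patent leaves the preset measurement score range unspecified.

```python
def frame_state(score, lo=0.08, hi=0.25):
    """Classify one frame by its measurement score, i.e. the ratio
    (eye area + mouth area) / face region area. The [lo, hi] range
    stands in for the patent's unspecified preset range."""
    return "normal" if lo <= score <= hi else "grimace"

def has_grimace_action(scores, lo=0.08, hi=0.25):
    """A grimace action is judged only when the extracted frames
    contain BOTH normal-state and grimace-state frames."""
    states = {frame_state(s, lo, hi) for s in scores}
    return states == {"normal", "grimace"}
```

A video whose frames are all normal, or all grimacing, is not judged to contain a grimace action; only the transition between the two states counts.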
Further, obtaining the face region position, the eye key point positions and the mouth key point positions of each video frame extracted from the face video under test includes:
performing face detection and face key point detection with the dlib library on each video frame extracted from the face video under test, to obtain the face region position and several key point positions of the face under test;
obtaining the eye key point positions and the mouth key point positions from the key point positions of the face under test in each extracted video frame.
Further, calculating the face region area, the eye area and the mouth area from the face region position, the eye key point positions and the mouth key point positions respectively includes:
obtaining a face length and a face width from the face region position, and obtaining the face region area as the product of the face length and the face width;
obtaining a left-eye length and width and a right-eye length and width from the eye key point positions, obtaining the left-eye area as the product of the left-eye length and width, obtaining the right-eye area as the product of the right-eye length and width, and obtaining the eye area as the sum of the left-eye area and the right-eye area;
obtaining a mouth length and a mouth width from the mouth key point positions, and obtaining the mouth area as the product of the mouth length and the mouth width.
Accordingly, an embodiment of the present invention also provides a grimace motion detection device, including:
a video frame extraction unit for extracting several video frames from a face video under test;
a key point position acquisition unit for obtaining the face region position, the eye key point positions and the mouth key point positions of each video frame extracted from the face video under test;
an area acquisition unit for calculating the face region area, the eye area and the mouth area from the face region position, the eye key point positions and the mouth key point positions, respectively;
a measurement score acquisition unit for obtaining a measurement score for each extracted video frame by calculating the ratio of the sum of the eye area and the mouth area to the face region area;
a grimace action judgment unit for judging the grimace action condition of the face video under test based on the measurement scores of the extracted video frames.
Compared with the prior art, in the grimace motion detection device disclosed in the embodiment of the present invention, the video frame extraction unit obtains several video frames from the face video under test; the key point position acquisition unit then obtains the face region position, the eye key point positions and the mouth key point positions of the face under test in each extracted frame; the area acquisition unit then obtains the face region area, the eye area and the mouth area; the measurement score acquisition unit calculates the ratio of the sum of the eye area and the mouth area to the face region area to obtain the measurement score; and finally the grimace action judgment unit judges, based on the measurement scores of the extracted frames, whether the face video under test contains a grimace action. The calculation process of this scheme is simple and efficient, and any ordinary camera or mobile phone camera can serve as the input hardware for the face video under test, so the hardware requirements of the device are simple.
Further, the grimace action judgment unit includes:
a grimace state judgment module for judging whether the measurement score of each extracted video frame is within the preset measurement score range; if so, the face under test in the corresponding video frame is in a normal state; if not, the face under test in the corresponding video frame is in a grimace state;
a grimace action judgment module for judging that the face under test performs a grimace action when the extracted video frames include both frames in which the face under test is in a normal state and frames in which it is in a grimace state.
Further, the key point position acquisition unit includes:
a face key point position detection module for performing face detection and face key point detection with the dlib library on each video frame extracted from the face video under test, to obtain the face region position and several key point positions of the face under test;
an eye and mouth key point position acquisition module for obtaining the eye key point positions and the mouth key point positions from the key point positions of the face under test in each extracted video frame.
Further, the area acquisition unit includes:
a face region area acquisition module for obtaining a face length and a face width from the face region position and obtaining the face region area as the product of the face length and the face width;
an eye area acquisition module for obtaining a left-eye length and width and a right-eye length and width from the eye key point positions, obtaining the left-eye area as the product of the left-eye length and width, obtaining the right-eye area as the product of the right-eye length and width, and obtaining the eye area as the sum of the left-eye area and the right-eye area;
a mouth area acquisition module for obtaining a mouth length and a mouth width from the mouth key point positions and obtaining the mouth area as the product of the mouth length and the mouth width.
Accordingly, the present invention also provides a living body recognition method, including the steps of:
detecting the grimace action condition of the face under test in a face video under test and the motion condition of at least one other facial part, wherein the grimace action condition of the face under test is detected using a grimace motion detection method provided by the present invention;
obtaining, based on the motion condition of each part, a motion score corresponding to each part motion of the face under test;
calculating the weighted sum of the motion scores of the part motions, and taking the calculated sum as the living body recognition score, wherein each part motion is assigned a preset weight;
judging the face under test to be a living body when the living body recognition score is not less than a preset threshold.
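The weighted score fusion in the steps above reduces to a weighted sum followed by a threshold test; a minimal sketch, with hypothetical part names, weights and threshold (the patent fixes none of them):

```python
def liveness_score(part_scores, weights):
    """Weighted sum of the per-part motion scores; each part motion
    carries a preset weight."""
    return sum(weights[part] * s for part, s in part_scores.items())

def is_live(part_scores, weights, threshold):
    """A face is judged to be a living body when the liveness score
    is not less than the preset threshold."""
    return liveness_score(part_scores, weights) >= threshold

# Hypothetical example: grimace detection fused with head movement.
scores = {"grimace": 1.0, "head": 0.5}
weights = {"grimace": 0.6, "head": 0.4}
print(is_live(scores, weights, threshold=0.7))
```

Weighting lets stronger liveness cues (e.g. a grimace) dominate weaker ones, which is the stated reason the fused score is more accurate than any single detector.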
Compared with the prior art, the living body recognition method disclosed in the embodiment of the present invention detects the grimace action condition of the face under test in the face video under test using the grimace motion detection method disclosed by the invention, detects the motion conditions of other parts of the face under test, obtains the motion score corresponding to each part motion, and takes the weighted sum of the part motion scores as the living body recognition score, which serves as the criterion for whether the face under test is a living body. The grimace motion detection calculation is simple and efficient, and its hardware requirements are simple. Detecting a grimace action together with at least one other part motion solves the prior art problems of a single algorithm and low security, and offers strong scalability; detection based on facial part motion can be realized from two-dimensional images, so the hardware requirements are low. In addition, weighting the motions of different parts before score fusion makes the living body recognition highly accurate, so this living body recognition method achieves high accuracy, low hardware requirements and high security.
The present invention also provides a living body recognition system, including:
at least two facial part motion detection devices, each for detecting the motion of a corresponding part of the face under test, wherein one of the facial part motion detection devices is a grimace motion detection device provided by the present invention;
a part motion score acquisition device for obtaining, based on the detection result of each part motion, the motion score corresponding to each part motion of the face under test;
a living body recognition score calculation device for calculating the weighted sum of the motion scores of the part motions and taking the calculated sum as the living body recognition score, wherein the living body recognition score calculation device presets a weight corresponding to each part motion;
a living body judgment device for judging the face under test to be a living body when the living body recognition score is not less than a preset threshold.
Compared with the prior art, the living body recognition system disclosed in the embodiment of the present invention obtains motion scores for at least two parts of the face under test through at least two facial part motion detection devices, one of which is the grimace motion detection device of the present invention; the living body recognition score calculation device takes the weighted sum of the part motion scores as the living body recognition score, and the living body judgment device uses this score as the criterion for whether the face under test is a living body. The grimace motion detection device is computationally simple and efficient, and its hardware requirements are simple. Detecting the motions of at least two parts solves the prior art problems of a single algorithm and low security, and offers strong scalability; detection based on facial part motion can be realized from two-dimensional images, so the hardware requirements are low. In addition, weighting the motions of different parts before score fusion by the living body recognition score calculation device makes the recognition highly accurate, achieving the beneficial effects of high accuracy, low hardware requirements and high security.
Brief description of the drawings
Fig. 1 is a schematic flow chart of a grimace motion detection method provided by embodiment 1 of the present invention;
Fig. 2 is a schematic flow chart of step S15 of the grimace motion detection method provided by embodiment 1 of the present invention;
Fig. 3 is a schematic flow chart of step S12 of the grimace motion detection method provided by embodiment 1 of the present invention;
Fig. 4 is a schematic diagram of the 68-key-point model of the face under test;
Fig. 5 is a schematic flow chart of step S13 of the grimace motion detection method provided by embodiment 1 of the present invention;
Fig. 6 is a schematic structural diagram of a grimace motion detection device provided by embodiment 2 of the present invention;
Fig. 7 is a schematic flow chart of a living body recognition method provided by embodiment 3 of the present invention;
Fig. 8 is a schematic flow chart of step S24 of the living body recognition method provided by embodiment 3 of the present invention;
Fig. 9 is a schematic structural diagram of a living body recognition system provided by embodiment 4 of the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention will be described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
Embodiment 1 of the present invention provides a grimace motion detection method. Referring to Fig. 1, a schematic flow chart of this embodiment, the method includes the steps:
S11, extracting several video frames from a face video under test;
S12, obtaining the face region position, the eye key point positions and the mouth key point positions of each video frame extracted from the face video under test;
S13, calculating the face region area, the eye area and the mouth area from the face region position, the eye key point positions and the mouth key point positions, respectively;
S14, obtaining a measurement score for each extracted video frame by calculating the ratio of the sum of the eye area and the mouth area to the face region area;
S15, judging the grimace action condition of the face video under test based on the measurement scores of the extracted video frames.
Generally, a grimace is a facial expression deliberately contorted for comic effect, most visibly a twisting or distortion of the eyes and the mouth. Based on this phenomenon, this embodiment defines a grimace as follows: when the ratio of the sum of the eye area and the mouth area of the face under test to the face region area is not within the preset measurement score range, the face state is judged to be a grimace state. Other embodiments of the present invention may refer to this definition of a grimace, which is not repeated below.
Referring to Fig. 2, a specific flow chart of step S15, step S15 includes:
S151, judging whether the measurement score of each extracted video frame is within the preset measurement score range; if so, the face under test in the corresponding video frame is in a normal state; if not, the face under test in the corresponding video frame is in a grimace state;
S152, when the extracted video frames include both frames in which the face under test is in a normal state and frames in which it is in a grimace state, judging that the face under test performs a grimace action.
This embodiment judges whether each extracted video frame shows a grimace state according to the preset measurement score range. When the extracted video frames include both frames in which the face under test is in a normal state and frames in which it is in a grimace state, the face under test in the video exhibits both a normal state and a grimace state, that is, the face under test makes a grimace action.
In step S11, several video frames are extracted from the face video under test, preferably as consecutive frames, or preferably by extracting frames from the face video under test at a certain time frequency.
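The fixed-frequency sampling option can be sketched as a pure index computation; actual decoding would be done with any video library, and `fps` and `every_seconds` below are illustrative parameters, not values from the patent:

```python
def sample_frame_indices(total_frames, fps, every_seconds):
    """Indices of the frames to extract when sampling a video at a
    fixed time frequency (one frame every `every_seconds` seconds)."""
    step = max(1, int(round(fps * every_seconds)))  # frames per sample
    return list(range(0, total_frames, step))
```

With `every_seconds = 0` this degenerates to taking every frame, i.e. the consecutive-frame option mentioned above.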
Referring to Fig. 3, a specific flow chart of step S12, step S12 specifically includes:
S121, performing face detection and face key point detection with the dlib library on each video frame extracted from the face video under test, to obtain the face region position and several key point positions of the face under test;
The dlib library is a cross-platform general-purpose library written in C++.
Referring to Fig. 4, a schematic diagram of the 68-key-point model of the face under test: the face key point positions obtained in step S121 are the positions of key points 1 to 68 shown in Fig. 4. In addition, performing face detection on each extracted video frame yields the face region position. In this embodiment, the face region is preferably a rectangular frame representing the face; correspondingly, the positions of the four points H, I, J and K illustrated in Fig. 4 determine the rectangular frame region of the face, that is, the face region position.
S122, obtaining the eye key point positions and the mouth key point positions from the key point positions of the face under test in each extracted video frame.
In Fig. 4, the eye key point positions obtained in step S122 are the 12 key points 37 to 48, of which the 6 key points 37 to 42 represent the left eye and the 6 key points 43 to 48 represent the right eye; the mouth key point positions obtained are the 20 key points 49 to 68.
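Under the 68-point numbering above, the eye and mouth key points can be sliced out of a detected landmark list as follows; the indices are 0-based in Python, hence the shift from the 1-based numbering of Fig. 4 (a sketch, assuming the standard 68-point ordering that the dlib shape predictor outputs):

```python
def split_landmarks(points):
    """points: a list of 68 (x, y) key point coordinates in the
    standard 68-point ordering."""
    left_eye  = points[36:42]   # key points 37-42
    right_eye = points[42:48]   # key points 43-48
    mouth     = points[48:68]   # key points 49-68
    return left_eye, right_eye, mouth
```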
Referring to Fig. 5, a flow chart of step S13, step S13 specifically includes:
S131, obtaining a face length and a face width from the face region position, and obtaining the face region area as the product of the face length and the face width.
Corresponding to step S121, the face region position is preferably obtained as the rectangular frame region of the face, determined by the positions of its four vertices, i.e. the points H, I, J and K illustrated in Fig. 4. In step S131 the face length is then preferably represented by the length of segment HK and the face width by segment HI, and the face region area is obtained as the area of the face rectangle HIJK.
S132, obtaining a left-eye length and width and a right-eye length and width from the eye key point positions, obtaining the left-eye area as the product of the left-eye length and width and the right-eye area as the product of the right-eye length and width, and obtaining the eye area as the sum of the left-eye area and the right-eye area;
S133, obtaining a mouth length and a mouth width from the mouth key point positions, and obtaining the mouth area as the product of the mouth length and the mouth width.
Referring to Fig. 4, in steps S132 and S133, the left-eye length is defined as the maximum x-coordinate minus the minimum x-coordinate over the 6 left-eye key points 37 to 42, and the left-eye width as the maximum y-coordinate minus the minimum y-coordinate over those same 6 key points; the right-eye length and width are obtained similarly. The mouth length is defined as the maximum x-coordinate minus the minimum x-coordinate over the 20 mouth key points 49 to 68, and the mouth width as the maximum y-coordinate minus the minimum y-coordinate over those key points. Here, an xy coordinate system is assumed in each extracted video frame, with the horizontal direction as the x-axis and the vertical direction as the y-axis, so that each key point position of the face under test obtained from an extracted frame is a key point coordinate.
When this embodiment calculates the areas of the eyes and the mouth, each area is preferably obtained by multiplying a length by a width, as in the way the mouth area is obtained as the mouth length times the mouth width; this simply and efficiently yields the results used to obtain the measurement score and then judge the grimace state, with a small amount of calculation and high efficiency. Similarly, when the face region area is calculated and the obtained face region position determines a non-rectangular region of the face, the face length may be taken as the maximum x-coordinate minus the minimum x-coordinate over the coordinate points determining that region, and the face width as the maximum y-coordinate minus the minimum y-coordinate; this also falls within the protection scope of this embodiment.
In specific implementation, this embodiment obtains several video frames from the face video to be measured, then obtains, for each extracted video frame, the face region position of the face to be measured, several keypoint positions of the eyes and several keypoint positions of the mouth; next, it obtains the face region area, the eye area and the mouth area, and obtains a measurement score as the ratio of the sum of the eye area and mouth area to the face region area. Finally, a frame whose measurement score falls within a preset range is judged to show the face to be measured in a normal state, and a frame whose measurement score falls outside the preset range is judged to show a funny face state; when the extracted video frames include both frames in the normal state and frames in the funny face state, the face video to be measured is judged to contain a funny face action.
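The per-frame judgment just described can be sketched as follows. The score bounds here are illustrative assumptions, not values taken from this disclosure:

```python
# Minimal sketch of the judgment described above: the measurement score
# is (eye area + mouth area) / face region area; a frame is "normal"
# when the score lies in a preset range, and a "funny face" frame
# otherwise. The bounds 0.05-0.20 are illustrative assumptions.

def measurement_score(eye_area, mouth_area, face_area):
    return (eye_area + mouth_area) / face_area

def has_funny_face_action(scores, lo=0.05, hi=0.20):
    """True when the frames include both normal and funny-face states."""
    states = [lo <= s <= hi for s in scores]   # True = normal state
    return any(states) and not all(states)
```

A video whose frames are all normal, or all funny-face, is not judged to contain a funny face action; both states must appear.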
Compared with the prior art, this embodiment performs face detection and face keypoint detection on the extracted video frames, calculates the face region area, eye area and mouth area from the keypoint positions of each part, and judges the funny face state from the ratio of the sum of the eye area and mouth area to the face region area. The calculation is simple and efficient, and any ordinary camera, or the camera of a mobile phone, can serve as the input hardware for the face video to be measured, so the hardware requirements on the device are low.
Embodiment 2 of the present invention provides a funny face action detection device; referring to Fig. 6, Fig. 6 is a structural representation of this embodiment. This embodiment specifically includes:
a video frame extraction unit 11, configured to extract several video frames from the face video to be measured;
a keypoint position acquisition unit 12, configured to obtain, for each video frame extracted from the face video to be measured, the face region position, several keypoint positions of the eyes and several keypoint positions of the mouth;
an area acquisition unit 13, configured to calculate the face region area, the eye area and the mouth area respectively from the face region position, the several keypoint positions of the eyes and the several keypoint positions of the mouth;
a measurement score acquisition unit 14, configured to obtain a measurement score of each extracted video frame as the ratio of the sum of the eye area and mouth area to the face region area;
a funny face action judging unit 15, configured to judge the funny face action situation of the face video to be measured based on the measurement score of each extracted video frame.
The funny face action judging unit 15 specifically includes the following modules:
a funny face state judgment module 151, configured to judge whether the measurement score of each extracted video frame is within a preset measurement score range; if so, the face to be measured in the corresponding video frame is in a normal state, and if not, the face to be measured in the corresponding video frame is in a funny face state;
a funny face action judgment module 152, configured to judge that the face video to be measured contains a funny face action when the extracted video frames include both frames in which the face to be measured is in the normal state and frames in which it is in the funny face state.
When the video frame extraction unit 11 extracts several video frames from the face video to be measured, it preferably obtains consecutive frames from the face video to be measured, or preferably extracts video frames from the face video to be measured at a certain time frequency.
The keypoint position acquisition unit 12 specifically includes:
a face keypoint detection module 121, configured to perform face detection and face keypoint detection, using the dlib library, on each video frame extracted from the face video to be measured, to obtain the face region position and several keypoint positions of the face to be measured.
Referring to Fig. 4, the several face keypoint positions obtained by the face keypoint detection module 121 are the keypoint positions shown as keypoint 1 to keypoint 68 in Fig. 4. In addition, the face region position can be obtained by performing face detection on each extracted video frame; in this embodiment, the face region preferably corresponds to a rectangular frame region representing the face, and obtaining the positions of the four example points H, I, J and K in Fig. 4 determines the rectangular frame region of the face, that is, obtains the face region position.
An eye-and-mouth keypoint position acquisition module 122, configured to obtain several keypoint positions of the eyes and several keypoint positions of the mouth from the several keypoint positions of the face to be measured in each extracted video frame.
In Fig. 4, the several eye keypoint positions obtained by the eye-and-mouth keypoint position acquisition module 122 are the keypoint positions shown as the 12 keypoints from keypoint 37 to keypoint 48, where the 6 keypoints from keypoint 37 to keypoint 42 represent the left eye and the 6 keypoints from keypoint 43 to keypoint 48 represent the right eye; the several mouth keypoint positions obtained are the keypoint positions shown as the 20 keypoints from keypoint 49 to keypoint 68 in Fig. 4.
Specifically, the area acquisition unit 13 includes the following modules:
a face region area acquisition module 131, configured to obtain the face length and face width from the face region position, and to obtain the face region area by calculating the product of the face length and the face width.
The face region position is preferably obtained by the face keypoint detection module as the rectangular frame region determining the face, together with the positions of its four vertices, i.e. the four example points H, I, J and K in Fig. 4; the face region area acquisition module 131 then preferably takes the length of line segment HK as the face length and the length of line segment HI as the face width, and computes the area of the face rectangle HIJK as the face region area.
An eye area acquisition module 132, configured to obtain the left eye length and left eye width, and the right eye length and right eye width, from the several eye keypoint positions; to obtain the left eye area by multiplying the left eye length by the left eye width, and the right eye area by multiplying the right eye length by the right eye width; and to obtain the eye area as the sum of the left eye area and the right eye area.
A mouth area acquisition module 133, configured to obtain the mouth length and mouth width from the several mouth keypoint positions, and to obtain the mouth area as the product of the mouth length and the mouth width.
Referring to Fig. 4, the eye area acquisition module 132 is specifically configured to define the left eye length as the maximum x-coordinate among the 6 keypoints representing the left eye (keypoint 37 to keypoint 42) minus the minimum x-coordinate, and the left eye width as the maximum y-coordinate of those 6 keypoints minus the minimum y-coordinate; the right eye length and right eye width are obtained similarly. The mouth area acquisition module 133 is specifically configured to define the mouth length as the maximum x-coordinate among the 20 keypoints representing the mouth (keypoint 49 to keypoint 68) minus the minimum x-coordinate, and the mouth width as the maximum y-coordinate of those 20 keypoints minus the minimum y-coordinate. Here, an xy coordinate system is established by default in each extracted video frame, with the horizontal direction as the x-axis and the vertical direction as the y-axis, so that the keypoint positions of the face to be measured obtained from each extracted video frame are keypoint coordinates.
When the area acquisition unit 13 of this embodiment calculates the areas of the eyes and mouth, the area of each part is preferably obtained by multiplying its length by its width, as in the example of obtaining the mouth area as mouth length multiplied by mouth width; the result used to obtain the measurement score, and hence the funny face state judgment, can thus be computed simply and efficiently, with a small amount of calculation and high efficiency. Similarly, when the area acquisition unit 13 calculates the face region area, if the obtained face region position determines a non-rectangular face region, the face length may be taken as the maximum x-coordinate among the coordinate points determining that non-rectangular region minus the minimum x-coordinate, and the face width as the maximum y-coordinate minus the minimum y-coordinate; this also falls within the protection scope of this embodiment.
In specific implementation, this embodiment obtains several video frames from the face video to be measured through the video frame extraction unit 11; the keypoint position acquisition unit 12 then obtains the face region position, several eye keypoint positions and several mouth keypoint positions of the face to be measured in each extracted frame; next, the area acquisition unit 13 obtains the face region area, eye area and mouth area, and the measurement score acquisition unit 14 obtains a measurement score as the ratio of the sum of the eye area and mouth area to the face region area. Finally, the funny face action judging unit 15 judges that a frame whose measurement score is within the preset range shows the face to be measured in a normal state and that a frame whose measurement score is outside the preset range shows a funny face state; when the extracted video frames include both frames in the normal state and frames in the funny face state, the face video to be measured is judged to contain a funny face action.
Compared with the prior art, the calculation of this embodiment is simple and efficient, and any ordinary camera, or the camera of a mobile phone, can serve as the input hardware for the face video to be measured, so the hardware requirements on the device are low.
Embodiment 3 of the present invention provides a living body recognition method; referring to Fig. 7, Fig. 7 is a flow diagram of this embodiment, which specifically includes the steps:
S21, detecting the funny face action situation of the face to be measured in the face video to be measured and the situation of at least one other part motion, where the funny face action situation of the face to be measured in the face video to be measured is detected using the funny face action detection method provided by Embodiment 1 of the present invention; for the detailed detection process, reference may be made to the embodiment of the funny face action detection method of the present invention, which is not repeated here;
S22, obtaining, based on the situation of each part motion, the motion score corresponding to each part motion of the face to be measured;
S23, calculating the weighted sum of the motion scores corresponding to the part motions, and taking the calculated sum as the living body recognition score, where each part motion is assigned a preset weight;
S24, judging the face to be measured to be a living body when the living body recognition score is not less than a preset threshold.
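Steps S22 to S24 can be sketched as a short scoring routine. The weights and preset value below are illustrative assumptions, not values fixed by this disclosure:

```python
# Hedged sketch of steps S22-S24: per-part motion scores are weighted
# and summed into the recognition score s (S23), the confidence is
# f = s / s_max, and the face is judged live when f >= e (S24).
# Weights and the preset value e are illustrative assumptions.

def recognize_living_body(motion_scores, weights, e=0.8):
    s = sum(weights[p] * motion_scores[p] for p in weights)  # S23
    s_max = sum(weights.values())    # maximum: every motion scored 1
    f = s / s_max                    # recognition confidence
    return f, f >= e                 # live when confidence >= e
```

With weights 3/2/1 and motion scores 1/1/0, this yields f = 5/6 and a live judgment at e = 0.8, matching the worked example later in this embodiment.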
In step S21 of this embodiment, the at least one other part motion of the face to be measured that is detected is at least one of mouth motion, eye motion, head motion, eyebrow motion and forehead motion. As a rule, mouth motion includes whether the mouth opens and closes, eye motion includes whether the eyes open and close, head motion includes whether the head rotates, eyebrow motion includes whether the eyebrows move, and forehead motion includes whether the forehead wrinkles change. Among these, mouth motion, eye motion and head motion are pronounced and therefore easy to detect, so at least one of mouth motion, eye motion and head motion is preferably detected.
As an example, detecting at least one other part motion of the face to be measured in step S21 specifically includes: detecting, for each video frame extracted from the face video of the face to be measured at intervals of a preset number of frames, the part keypoint positions corresponding to the detected part motion, and determining the situation of the part motion from the degree of variation of the part keypoint positions across the extracted frames; or detecting the gray-value features of the part in each video frame extracted at intervals of a preset number of frames, and determining the situation of the part motion from the degree of variation of the gray values of the part across the extracted frames. The above implementations are only examples of detecting at least one other part motion; on the basis of the living body recognition method of this embodiment, realizing the motion detection of at least one other part through other specific implementations also falls within the protection scope of this embodiment.
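One possible reading of the keypoint-based variant above is sketched below; the sampling step and displacement threshold are illustrative assumptions:

```python
# Hypothetical sketch of the keypoint-variation detection described
# above: sample one frame every `step` frames, and flag the part as
# moving when any of its keypoints drifts from the first sampled frame
# by more than a threshold (Manhattan distance, in pixels). The step
# and threshold are illustrative assumptions.

def part_has_motion(frame_keypoints, step=5, threshold=4.0):
    sampled = frame_keypoints[::step]
    ref = sampled[0]
    for kps in sampled[1:]:
        drift = max(abs(x1 - x0) + abs(y1 - y0)
                    for (x0, y0), (x1, y1) in zip(ref, kps))
        if drift > threshold:
            return True
    return False
```

The gray-value variant would follow the same shape, comparing mean gray values of the part region instead of keypoint coordinates.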
In step S23 of this embodiment, a preferred way of setting the weight corresponding to each part motion is to set it according to how pronounced the part motion is. For example, suppose the part motions of the face to be measured detected in step S21 are the funny face action, mouth motion and head motion. Generally, mouth motion is the most pronounced, so its weight is the largest; head motion is detected with the lowest precision, so its weight is the smallest; and the funny face action involves consideration of both the eyes and the mouth. The corresponding weighting strategy for the part motions is therefore: mouth motion > funny face action > head motion.
Alternatively, another preferred way of setting the weight corresponding to each part motion in step S23 is to adjust the weights automatically according to the application scenario. The specific practice is: under a given scenario, collect normal input videos of the various part motions of the face to be measured as positive samples and attack videos as negative samples, and take (number of positive samples passed + number of negative samples rejected) / (total positive samples + total negative samples) as the accuracy of each part motion; then sort the accuracies of the part motions in descending order and reassign the weights of the part motions in the same descending order. Computing the living body recognition score with the readjusted weights allows the recognition result to adapt to the accuracy of part motion detection under different scenarios, increasing the accuracy of the living body recognition result of this embodiment.
Either of the above two ways of setting the weights corresponding to the part motions is within the protection scope of this embodiment.
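The scene-adaptive weighting just described can be sketched as follows; the sample counts are hypothetical:

```python
# Sketch of the scene-adaptive weighting above: each part motion's
# accuracy is (positives passed + negatives rejected) / (total
# positives + total negatives); parts are ranked by accuracy and the
# preset weights are reassigned in the same descending order.
# The sample counts used below in the test are hypothetical.

def reweight_by_accuracy(stats, preset_weights):
    """stats: part -> (pos_passed, neg_rejected, pos_total, neg_total)."""
    acc = {part: (tp + tn) / (pos + neg)
           for part, (tp, tn, pos, neg) in stats.items()}
    ranked = sorted(acc, key=acc.get, reverse=True)
    return dict(zip(ranked, sorted(preset_weights, reverse=True)))
```

The most accurate part motion under the current scenario thus receives the largest preset weight, the least accurate the smallest.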
Specifically, referring to Fig. 8, Fig. 8 is a flow diagram of step S24, which includes the steps:
S241, calculating the living body recognition confidence of the face to be measured as the ratio of the living body recognition score to the total living body recognition score;
S242, determining that the living body recognition score is not less than the preset threshold when the living body recognition confidence is not less than a preset value;
S243, judging the face to be measured to be a living body when the living body recognition score is not less than the preset threshold.
Specifically, in step S241, the total living body recognition score is the maximum score obtainable after the face to be measured is recognized by this embodiment. The living body recognition confidence of the face to be measured is calculated by the following formula:
f = (s / s_max) * 100%
where s denotes the living body recognition score, s_max denotes the total living body recognition score, f denotes the living body recognition confidence, and 0 < f < 1.
With the preset value denoted by e: when f ≥ e, i.e. the living body recognition confidence is not less than the preset value, it is determined that the living body recognition score is not less than the preset threshold, and the face to be measured is judged to be a living body; when f < e, i.e. the living body recognition confidence is less than the preset value, it is determined that the living body recognition score is less than the preset threshold, and the face to be measured is judged to be a non-living body.
The living body recognition confidence obtained from the living body recognition score can also be further exploited to establish a grading system for this embodiment that performs both the living body judgment and living body classification, so as to obtain a richer living body recognition result.
Step S22, obtaining the motion score corresponding to each part motion of the face to be measured based on the situation of the part motion, includes:
obtaining the motion score corresponding to the funny face action based on its motion situation: when the motion situation of the face to be measured detected in step S21 is that the face to be measured has a funny face action, the motion score of the funny face action is 1 point; otherwise the motion score of the funny face action is 0 points;
similarly, obtaining the motion scores corresponding to the at least one other part motion based on their motion situations: when the motion situation detected in step S21 is that the corresponding part of the face to be measured has motion, the motion score of that part motion is 1 point; otherwise the motion score is 0 points.
Besides obtaining the motion score from a binary motion/no-motion judgment, if the motion situation of a part motion acquired in step S21 includes the degree of the motion, a motion score within a corresponding score interval can also be obtained according to that degree; for example, the score may be divided into 10 levels with values between 0 and 1.
In specific implementation, several video frames are first extracted from the face video to be measured, and part motion is detected on each extracted frame to obtain the motion situation of the corresponding part, where one of the detected part motions is whether the face to be measured has a funny face action: face detection is first performed on the extracted frames to obtain the 68 keypoints of the face to be measured, from which the face region position and the mouth and eye keypoint positions of the face to be measured are obtained; the face region area, eye area and mouth area are calculated from these positions, and whether the face to be measured has a funny face action is judged from the ratio of the sum of the eye area and mouth area to the face region area in each extracted frame. The corresponding motion score is obtained according to the situation of each part motion, specifically 1 point when the part has motion and 0 points otherwise. The weighted sum of the obtained part motion scores is then calculated, and this sum represents the living body recognition score. Finally, the living body recognition confidence is calculated as the ratio of the living body recognition score to the total living body recognition score; when the living body recognition confidence is not less than the preset value, it is determined that the living body recognition score is not less than the preset threshold, and the face to be measured is judged to be a living body; otherwise, the face to be measured is judged to be a non-living body.
This embodiment can be applied on various device ends; the implementation scenario of applying it on a cell phone end is taken as an example for illustration. During living body recognition on the cell phone end, a random sequence of living body action requests is issued, for example requiring the face to be measured to perform the living body actions of turning the head to the left, making a funny face and opening the mouth. Suppose the preset weights of the part motions are: weight w1 = 3 for the mouth motion corresponding to opening the mouth, weight w2 = 2 for the funny face action, and weight w3 = 1 for the head motion corresponding to turning the head left. The total living body recognition score, i.e. the maximum living body recognition score s_max, is then 3*1 + 2*1 + 1*1 = 6 points. Assume the detected scores are 1 point for opening the mouth, 1 point for the funny face action and 0 points for turning the head left; the living body recognition score s is the weighted sum of the part motion scores, i.e. s = 3*1 + 2*1 + 1*0 = 5 points. Finally, the living body recognition confidence is calculated as f = s / s_max = 5/6 = 83.33%. If the preset value e is set to 80%, the face to be measured is judged to be a living body, with a living body confidence of 83.33%.
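The arithmetic of this cell-phone example can be reproduced directly; the part names are illustrative labels:

```python
# Reproducing the cell-phone example above (weights 3/2/1, detected
# scores 1/1/0; the dictionary keys are illustrative labels).
w = {"open_mouth": 3, "funny_face": 2, "head_left": 1}
score = {"open_mouth": 1, "funny_face": 1, "head_left": 0}

s_max = sum(w.values())               # 3 + 2 + 1 = 6 points
s = sum(w[k] * score[k] for k in w)   # 3*1 + 2*1 + 1*0 = 5 points
f = s / s_max                         # 5/6, i.e. about 83.33%
is_live = f >= 0.80                   # preset value e = 80%
```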
This embodiment solves the problems of the prior art that the algorithm is single and the security is not high, and offers strong scalability; the detection method for the funny face action of the face to be measured is computationally simple and efficient and places low demands on device hardware. In addition, this embodiment performs living body recognition by detecting multiple part motions and fusing the scores of the different part motions with weighting, so the living body recognition accuracy is high, which is beneficial to improving security.
Embodiment 4 of the present invention provides a living body recognition system; referring to Fig. 9, Fig. 9 is a structural representation of this embodiment, which includes:
at least two face part motion detection devices 1, each configured to detect the situation of a corresponding part motion of the face to be measured; the face part motion detection devices 1a and 1b in Fig. 9 represent two face part motion detection devices 1 detecting two different part motions, one of which is a funny face action detection device provided by Embodiment 2 of the present invention, for which reference may be made to Embodiment 2 and which is not repeated here.
It should be noted that Fig. 9 shows only an example including 2 face part motion detection devices 1; in practice, this embodiment may also include more than 2 face part motion detection devices 1.
A part motion score acquisition device 2, configured to obtain the motion score corresponding to each part motion of the face to be measured based on the detection situation of each part motion;
a living body recognition score calculation device 3, configured to calculate the weighted sum of the motion scores acquired by the face part motion detection devices and to take the calculated sum as the living body recognition score, where the living body recognition score calculation device presets the weight corresponding to each part motion;
a living body judgment device 4, configured to judge the face to be measured to be a living body when the living body recognition score is not less than a preset threshold.
The at least one part motion detected by the at least one part motion detection device 1 other than the funny face action detection device includes at least one of mouth motion, eye motion, head motion, eyebrow motion and forehead motion. Mouth motion includes whether the mouth opens and closes, or whether the face has a smiling action, i.e. whether the movement of the mouth corners exceeds a preset standard; eye motion includes whether the eyes open and close, head motion includes whether the head rotates, eyebrow motion includes whether the eyebrows move, and forehead motion includes whether the forehead wrinkles change. Among these, mouth motion, eye motion and head motion are pronounced and therefore easy to detect, so at least one of mouth motion, eye motion and head motion is preferably detected.
As an example, at least one other face part motion detection device 1 is specifically configured to detect, for each video frame extracted from the face video of the face to be measured at intervals of a preset number of frames, the part keypoint positions corresponding to the detected part motion, and to determine the situation of the part motion from the degree of variation of the part keypoint positions across the extracted frames. Alternatively, a face part motion detection device 1 may be specifically configured to detect the gray-value features of the part in each video frame extracted at intervals of a preset number of frames, and to determine the situation of the part motion from the degree of variation of the gray values of the part across the extracted frames; this implementation is commonly applicable when the part motion detected by the face part motion detection device 1 is eye motion or forehead motion. The above implementations are only examples of the part motions detected by at least one other face part motion detection device 1; when a face part motion detection device 1 realizes the motion detection of at least one other part through other implementations, this also falls within the protection scope of this embodiment.
The part motion score acquisition device 2 is specifically configured to obtain the corresponding motion score based on the motion situation of the funny face action: when the motion situation of the face to be measured is that the face to be measured has a funny face action, the motion score of the funny face action is 1 point; otherwise the motion score of the funny face action is 0 points. The part motion score acquisition device 2 is further specifically configured to obtain the corresponding motion scores based on the motion situations of the at least one other part motion: when the motion situation of the corresponding part of the face to be measured is that it has motion, the motion score of that part motion is 1 point; otherwise the motion score is 0 points.
Besides the above embodiment in which the part motion score acquisition device 2 directly obtains a score based on whether each part motion has motion, when the motion situation acquired by a face part motion detection device 1 includes the degree of the part motion, the part motion score acquisition device 2 may also obtain a motion score between 0 and 1 based on that degree, for example with the motion score divided into 10 levels with values between 0 and 1; this alternative embodiment can not only indicate whether there is motion but also reflect the degree of the motion.
The weights corresponding to the part motions in the living body recognition score calculation device 3 are set according to how pronounced each part motion is. For example, suppose the detected part motions are the funny face action, mouth motion and head motion; generally, mouth motion is the most pronounced, so its weight is the largest, head motion is detected with the lowest precision, so its weight is the smallest, and the funny face action involves consideration of both the eyes and the mouth. The corresponding weighting strategy for the part motions is therefore: mouth motion > funny face action > head motion.
Alternatively, the weights corresponding to the part motions in the living body recognition score calculation device 3 are adjusted automatically according to the application scenario. The specific practice is: under a given scenario, collect normal input videos of the various part motions of the face to be measured as positive samples and attack videos as negative samples, and take (number of positive samples passed + number of negative samples rejected) / (total positive samples + total negative samples) as the accuracy of each part motion; then sort the accuracies of the part motions in descending order and reassign the weights of the part motions in the same descending order.
Either of the above two ways of setting the weights corresponding to the part motions is within the protection scope of this embodiment.
The living body judgment device 4 includes:
a living body recognition confidence calculation unit 41, configured to calculate the living body recognition confidence of the face to be measured as the ratio of the living body recognition score to the total living body recognition score;
where the total living body recognition score is the maximum of the weighted sum of the motion scores of all face part motion detection devices 1 obtained by the living body recognition score calculation device 3, denoted by s_max; f denotes the living body recognition confidence, and 0 < f < 1. The living body recognition confidence calculation unit 41 calculates the living body recognition confidence of the face to be measured by the following formula:
f = (s / s_max) * 100%
A living body judging unit 42, configured to determine, when the living body recognition confidence is not less than a preset value, that the living body recognition score is not less than the preset threshold, and to judge the face to be measured to be a living body.
With the preset value denoted by e, the living body judging unit 42 judges: when f ≥ e, i.e. the living body recognition confidence is not less than the preset value, it is determined that the living body recognition score is not less than the preset threshold, and the face to be measured is judged to be a living body; when f < e, i.e. the living body recognition confidence is less than the preset value, it is determined that the living body recognition score is less than the preset threshold, and the face to be measured is judged to be a non-living body.
The living body recognition confidence obtained by the living body recognition confidence calculation unit 41 can also be further exploited to establish a grading system for the living body recognition system of this embodiment that performs both the living body judgment and living body classification, so as to obtain a richer living body recognition result.
In specific implementation, first, the motion situation of each corresponding part motion is obtained by each face part motion detection device 1, one of which is an embodiment of the funny face action detection device of the present invention; the part motion score acquisition device 2 obtains the corresponding motion score based on the situation of each part motion; then, the living body recognition score calculation device 3 takes the weighted sum of the obtained part motion scores as the living body recognition score; finally, the living body recognition confidence calculation unit 41 of the living body judgment device 4 calculates the living body recognition confidence of the face to be measured as the ratio of the living body recognition score to the total living body recognition score, and the living body judging unit 42 judges the face to be measured to be a living body when the calculated living body recognition confidence is not less than the preset value.
By employing at least two face-part motion detection means, this embodiment overcomes the prior-art problems of relying on a single algorithm and offering low security, while remaining highly extensible; moreover, the ghost-face action detection device places only modest demands on hardware. In addition, the liveness recognition score computing means fuses the scores by weighting the motions of the different parts, making the liveness recognition highly accurate. The beneficial effects are thus high liveness recognition accuracy, low hardware requirements, and strong security.
The above describes preferred embodiments of the present invention. It should be noted that those skilled in the art may make various improvements and modifications without departing from the principles of the invention, and such improvements and modifications shall also fall within the protection scope of the present invention.
Claims (10)
1. A ghost-face action detection method, characterized in that the ghost-face action detection method comprises the steps of:
extracting a number of video frames from a video of a face under test;
obtaining, for each video frame extracted from the video of the face under test, a face region position, a number of eye key-point positions and a number of mouth key-point positions;
calculating a face region area, an eye area and a mouth area from the face region position, the eye key-point positions and the mouth key-point positions, respectively;
obtaining a measurement fraction for each extracted video frame by calculating the ratio of the sum of the eye area and the mouth area to the face region area;
judging the ghost-face action situation of the video of the face under test based on the measurement fractions of the extracted video frames.
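The per-frame computation of claim 1 reduces to a single ratio. A minimal Python sketch (the function and argument names are illustrative, not terms from the patent):

```python
def measurement_fraction(face_area, eye_area, mouth_area):
    """Claim 1: the measurement fraction of one extracted video frame is the
    ratio of the combined eye and mouth area to the face region area."""
    return (eye_area + mouth_area) / face_area
```

For example, a frame with a face region area of 100, eye area 8 and mouth area 12 has a measurement fraction of 0.2.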
2. The ghost-face action detection method according to claim 1, characterized in that judging the ghost-face action situation of the video of the face under test based on the measurement fractions of the extracted video frames comprises:
judging whether the measurement fraction of each extracted video frame lies within a preset measurement fraction range; if so, the face under test in the corresponding video frame is in a normal state, and if not, the face under test in the corresponding video frame is in a ghost-face state;
when the extracted video frames simultaneously include video frames in which the face under test is in the normal state and video frames in which the face under test is in the ghost-face state, judging that the face under test exhibits a ghost-face action.
3. The ghost-face action detection method according to claim 2, characterized in that obtaining, for each video frame extracted from the video of the face under test, the face region position, the eye key-point positions and the mouth key-point positions comprises:
performing face detection and face key-point position detection with the dlib library on each video frame extracted from the video of the face under test, to obtain the face region position and a number of key-point positions of the face under test;
obtaining the eye key-point positions and the mouth key-point positions from the key-point positions of the face under test in each extracted video frame.
4. The ghost-face action detection method according to claim 3, characterized in that calculating the face region area, the eye area and the mouth area from the face region position, the eye key-point positions and the mouth key-point positions respectively comprises:
obtaining a face length and a face width from the face region position, and obtaining the face region area by calculating the product of the face length and the face width;
obtaining a left-eye length and a left-eye width, and a right-eye length and a right-eye width, from the eye key-point positions; obtaining a left-eye area by multiplying the left-eye length by the left-eye width, and a right-eye area by multiplying the right-eye length by the right-eye width; obtaining the eye area by calculating the sum of the left-eye area and the right-eye area;
obtaining a mouth length and a mouth width from the mouth key-point positions, and obtaining the mouth area by calculating the product of the mouth length and the mouth width.
5. A ghost-face action detection device, characterized by comprising:
a video frame extraction unit for extracting a number of video frames from a video of a face under test;
a key-point position acquisition unit for obtaining, for each video frame extracted from the video of the face under test, a face region position, a number of eye key-point positions and a number of mouth key-point positions;
an area acquisition unit for calculating a face region area, an eye area and a mouth area from the face region position, the eye key-point positions and the mouth key-point positions, respectively;
a measurement fraction acquisition unit for obtaining a measurement fraction for each extracted video frame by calculating the ratio of the sum of the eye area and the mouth area to the face region area;
a ghost-face action judging unit for judging the ghost-face action situation of the video of the face under test based on the measurement fractions of the extracted video frames.
6. The ghost-face action detection device according to claim 5, characterized in that the ghost-face action judging unit comprises:
a ghost-face state judgment module for judging whether the measurement fraction of each extracted video frame lies within a preset measurement fraction range; if so, the face under test in the corresponding video frame is in a normal state, and if not, the face under test in the corresponding video frame is in a ghost-face state;
a ghost-face action judgment module for judging that the face under test exhibits a ghost-face action when the extracted video frames simultaneously include video frames in which the face under test is in the normal state and video frames in which the face under test is in the ghost-face state.
7. The ghost-face action detection device according to claim 6, characterized in that the key-point position acquisition unit comprises:
a face key-point position detection module for performing face detection and face key-point position detection with the dlib library on each video frame extracted from the video of the face under test, to obtain the face region position and a number of key-point positions of the face under test;
an eye-and-mouth key-point position acquisition module for obtaining the eye key-point positions and the mouth key-point positions from the key-point positions of the face under test in each extracted video frame.
8. The ghost-face action detection device according to claim 7, characterized in that the area acquisition unit comprises:
a face region area acquisition module for obtaining a face length and a face width from the face region position, and obtaining the face region area by calculating the product of the face length and the face width;
an eye area acquisition module for obtaining a left-eye length and a left-eye width, and a right-eye length and a right-eye width, from the eye key-point positions, obtaining a left-eye area by multiplying the left-eye length by the left-eye width and a right-eye area by multiplying the right-eye length by the right-eye width, and obtaining the eye area by calculating the sum of the left-eye area and the right-eye area;
a mouth area acquisition module for obtaining a mouth length and a mouth width from the mouth key-point positions, and obtaining the mouth area by calculating the product of the mouth length and the mouth width.
9. A liveness recognition method, characterized in that the liveness recognition method comprises the steps of:
detecting the ghost-face action situation of a face under test in a video and the situation of at least one other part motion, wherein the ghost-face action situation of the face under test in the video is detected using the ghost-face action detection method according to any one of claims 1 to 4;
obtaining, based on the situation of each part motion, a motion score corresponding to each part motion of the face under test;
calculating the weighted sum of the motion scores corresponding to the part motions, and taking the calculated sum as a liveness recognition score, wherein each part motion is assigned a preset weight;
judging the face under test whose liveness recognition score is not less than a predetermined threshold to be a live body.
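The score fusion of claim 9 is a plain weighted sum followed by a threshold test. A minimal sketch, assuming normalized per-part motion scores and illustrative names (the patent fixes neither the weights nor the threshold):

```python
def liveness_score(motion_scores, weights):
    """Claim 9: the liveness recognition score is the weighted sum of the
    motion scores of the detected part motions (one preset weight per part)."""
    return sum(w * s for w, s in zip(weights, motion_scores))

def is_live(motion_scores, weights, threshold):
    """A face under test is judged a live body iff its liveness recognition
    score is not less than the predetermined threshold."""
    return liveness_score(motion_scores, weights) >= threshold
```

With weights 0.7 (ghost-face action) and 0.3 (another part motion), motion scores of 1.0 and 0.5 give a liveness recognition score of 0.85, so the face passes a threshold of 0.8.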
10. A liveness recognition system, characterized in that the liveness recognition system comprises:
at least two face-part motion detection means, each face-part motion detection means being used to detect the situation of a corresponding part motion of a face under test, wherein one face-part motion detection means is a ghost-face action detection device according to any one of claims 5 to 8;
a part-motion score acquisition means for obtaining, based on the detected situation of each part motion, a motion score corresponding to each part motion of the face under test;
a liveness recognition score computing means for calculating the weighted sum of the motion scores corresponding to the part motions and taking the calculated sum as a liveness recognition score, wherein the liveness recognition score computing means is preset with a weight corresponding to each part motion;
a live-body judgment means for judging the face under test whose liveness recognition score is not less than a predetermined threshold to be a live body.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710412025.6A CN107358155A (en) | 2017-06-02 | 2017-06-02 | Method and device for detecting ghost face action and method and system for recognizing living body |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107358155A true CN107358155A (en) | 2017-11-17 |
Family
ID=60271846
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107358155A (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104616438A (en) * | 2015-03-02 | 2015-05-13 | 重庆市科学技术研究院 | Yawning action detection method for detecting fatigue driving |
CN104794465A (en) * | 2015-05-13 | 2015-07-22 | 上海依图网络科技有限公司 | In-vivo detection method based on attitude information |
CN105447432A (en) * | 2014-08-27 | 2016-03-30 | 北京千搜科技有限公司 | Face anti-fake method based on local motion pattern |
CN105739707A (en) * | 2016-03-04 | 2016-07-06 | 京东方科技集团股份有限公司 | Electronic equipment, face identifying and tracking method and three-dimensional display method |
CN106446831A (en) * | 2016-09-24 | 2017-02-22 | 南昌欧菲生物识别技术有限公司 | Face recognition method and device |
CN106778450A (en) * | 2015-11-25 | 2017-05-31 | 腾讯科技(深圳)有限公司 | A kind of face recognition method and device |
Non-Patent Citations (3)
Title |
---|
AVINASH KUMAR SINGH et al.: "Face Recognition with Liveness Detection using Eye and Mouth Movement", IEEE * |
ZHAN Zehui: "An Emotion and Cognition Recognition Model for Distance Learners Based on Intelligent Agents", Modern Distance Education Research * |
WEI Yan: "An Overview of Facial Expression Recognition", Network Security Technology & Application * |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108416314A (en) * | 2018-03-16 | 2018-08-17 | 中山大学 | The important method for detecting human face of picture |
CN108416314B (en) * | 2018-03-16 | 2022-03-08 | 中山大学 | Picture important face detection method |
CN110555353A (en) * | 2018-06-04 | 2019-12-10 | 北京嘀嘀无限科技发展有限公司 | Action recognition method and device |
CN110555353B (en) * | 2018-06-04 | 2022-11-15 | 北京嘀嘀无限科技发展有限公司 | Action recognition method and device |
CN109697416A (en) * | 2018-12-14 | 2019-04-30 | 腾讯科技(深圳)有限公司 | A kind of video data handling procedure and relevant apparatus |
CN109697416B (en) * | 2018-12-14 | 2022-11-18 | 腾讯科技(深圳)有限公司 | Video data processing method and related device |
CN110363124A (en) * | 2019-07-03 | 2019-10-22 | 广州多益网络股份有限公司 | Rapid expression recognition and application method based on face key points and geometric deformation |
CN111259857A (en) * | 2020-02-13 | 2020-06-09 | 星宏集群有限公司 | Human face smile scoring method and human face emotion classification method |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107358155A (en) | Method and device for detecting ghost face action and method and system for recognizing living body | |
CN106295522B (en) | A kind of two-stage anti-fraud detection method based on multi-orientation Face and environmental information | |
CN104504394B (en) | A kind of intensive Population size estimation method and system based on multi-feature fusion | |
CN102542289B (en) | Pedestrian volume statistical method based on plurality of Gaussian counting models | |
CN106886216A (en) | Robot automatic tracking method and system based on RGBD Face datections | |
CN105989608B (en) | A kind of vision capture method and device towards intelligent robot | |
CN107657244B (en) | Human body falling behavior detection system based on multiple cameras and detection method thereof | |
CN104599287B (en) | Method for tracing object and device, object identifying method and device | |
CN109670430A (en) | A kind of face vivo identification method of the multiple Classifiers Combination based on deep learning | |
CN107330370A (en) | Forehead wrinkle action detection method and device and living body identification method and system | |
CN102214309B (en) | Special human body recognition method based on head and shoulder model | |
KR20170006355A (en) | Method of motion vector and feature vector based fake face detection and apparatus for the same | |
CN107330914A (en) | Human face part motion detection method and device and living body identification method and system | |
CN109918971A (en) | Number detection method and device in monitor video | |
CN103390164A (en) | Object detection method based on depth image and implementing device thereof | |
KR20140045854A (en) | Method and apparatus for monitoring video for estimating gradient of single object | |
CN110298297A (en) | Flame identification method and device | |
CN107392089A (en) | Eyebrow movement detection method and device and living body identification method and system | |
CN109117755A (en) | A kind of human face in-vivo detection method, system and equipment | |
CN107358154A (en) | Head motion detection method and device and living body identification method and system | |
CN112183472A (en) | Method for detecting whether test field personnel wear work clothes or not based on improved RetinaNet | |
CN101477626A (en) | Method for detecting human head and shoulder in video of complicated scene | |
CN107292299B (en) | Side face recognition methods based on kernel specification correlation analysis | |
CN107358153A (en) | Mouth movement detection method and device and living body identification method and system | |
CN107358151A (en) | Eye movement detection method and device and living body identification method and system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20171117 |