CN107330370A - Forehead wrinkle action detection method and device and living body identification method and system - Google Patents

Info

Publication number: CN107330370A (granted as CN107330370B)
Application number: CN201710406498.5A
Authority: CN (China)
Prior art keywords: face, video, measured, frame, motion
Legal status: Granted; Active
Original language: Chinese (zh)
Inventor: 陈�全
Assignee (current and original): Guangzhou Shiyuan Electronics Thecnology Co Ltd
Application filed by Guangzhou Shiyuan Electronics Thecnology Co Ltd

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40 Spoof detection, e.g. liveness detection
    • G06V40/45 Detection of the body part being alive
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/168 Feature extraction; Face representation
    • G06V40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G06V40/20 Movements or behaviour, e.g. gesture recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a forehead wrinkle action detection method comprising the following steps: extracting several video frames from a face video under test; obtaining the forehead region of each video frame extracted from the face video under test; computing the gradient value of each pixel in the forehead region of each extracted video frame with an edge detection operator; computing the variance of the gradient values of the pixels in the forehead region of each extracted video frame to obtain the forehead wrinkle value of the corresponding frame; and judging the forehead wrinkle action of the face video under test based on the forehead wrinkle values of the extracted video frames. Correspondingly, the invention also discloses a forehead wrinkle action detection device. The invention is computationally simple and efficient.

Description

Forehead wrinkle action detection method and device, and living-body identification method and system
Technical field
The present invention relates to the field of face recognition, and in particular to a forehead wrinkle action detection method and device and a living-body identification method and system.
Background art
With the development of face recognition technology, more and more scenarios use face detection to quickly verify a person's identity. However, an impostor may attempt face recognition with a picture or a video instead of a real person, in which case the security of the whole face recognition system cannot be guaranteed. Face liveness detection can verify that the current face under test is a living face rather than a face in a photograph or video, thereby ensuring the security of the face recognition system. During face recognition, detecting the forehead wrinkle action of the face under test helps determine whether the face is a living body. To judge efficiently and simply whether a face is live during person identification, an efficient and simple forehead wrinkle action detection scheme is needed.
Summary of the invention
The purpose of the embodiments of the present invention is to provide a forehead wrinkle action detection method and device that are computationally simple and efficient.
To achieve the above object, the invention provides a forehead wrinkle action detection method, including:
extracting several video frames from a face video under test;
obtaining the forehead region of each video frame extracted from the face video under test;
computing the gradient value of each pixel in the forehead region of each extracted video frame;
computing the variance of the gradient values of the pixels in the forehead region of each extracted video frame to obtain the forehead wrinkle value of the corresponding video frame;
judging the forehead wrinkle action of the face video under test based on the forehead wrinkle value of each extracted video frame.
Compared with the prior art, the forehead wrinkle action detection method disclosed in the embodiments of the present invention first obtains several video frames, determines the forehead region of the face under test in each extracted frame, obtains the gradient value of each pixel, and takes the variance of the gradient values of each extracted frame as its forehead wrinkle value; finally, it judges from the forehead wrinkle values of the frames whether the face under test in the extracted video frames performs a forehead wrinkle action. The calculation process is simple and efficient, and any ordinary camera, including a mobile-phone camera, can serve as the input hardware for the face video under test, so the hardware requirements are low.
Further, judging the forehead wrinkle action of the face video under test based on the forehead wrinkle value of each extracted video frame includes:
judging that the forehead region of the face under test has no wrinkles in a video frame whose forehead wrinkle value is below a first preset threshold, and that it has wrinkles in a video frame whose forehead wrinkle value is above a second preset threshold;
if the extracted video frames include both a frame in which the forehead region of the face under test has no wrinkles and a frame in which it has wrinkles, judging that the face under test performs a forehead wrinkle action.
Further, computing the gradient value of each pixel in the forehead region of each extracted video frame includes:
computing, with the Sobel operator, the Sobel value of each pixel in the forehead region of each extracted video frame, where the Sobel value represents the gradient value.
As a further refinement, the present invention computes pixel gradient values with the Sobel operator, whose high computational efficiency helps obtain the gradient values quickly.
Further, obtaining the forehead region of each video frame extracted from the face video under test includes:
performing face detection and facial key-point detection on each extracted video frame with the dlib library, obtaining the face region position and several key-point positions of the face under test;
obtaining several eyebrow key-point positions from the facial key points of each extracted frame, and obtaining the forehead region from the eyebrow key-point positions and the face region position.
Correspondingly, the present invention also provides a forehead wrinkle action detection device, including:
a video frame extraction unit for extracting several video frames from a face video under test;
a forehead region acquisition unit for obtaining the forehead region of each video frame extracted from the face video under test;
a gradient value acquisition unit for computing the gradient value of each pixel in the forehead region of each extracted video frame;
a forehead wrinkle value acquisition unit for computing the variance of the gradient values of the pixels in the forehead region of each extracted video frame to obtain the forehead wrinkle value of the corresponding frame;
a forehead wrinkle action judging unit for judging the forehead wrinkle action of the face video under test based on the forehead wrinkle value of each extracted frame.
Compared with the prior art, the forehead wrinkle action detection device disclosed in the embodiments of the present invention first obtains several video frames through the video frame extraction unit, determines the forehead region of the face under test in each extracted frame through the forehead region acquisition unit, obtains the gradient value of each pixel through the gradient value acquisition unit, and takes the variance of the gradient values of each extracted frame as its forehead wrinkle value through the forehead wrinkle value acquisition unit; finally, the forehead wrinkle action judging unit judges from the forehead wrinkle values of the frames whether the face under test in the extracted video frames performs a forehead wrinkle action. The amount of calculation is small, the device obtains detection results efficiently, and any ordinary camera, including a mobile-phone camera, can serve as the input hardware for the face video under test, so the hardware requirements are low.
Further, the forehead wrinkle action judging unit specifically includes:
a wrinkle condition judging module for judging that the forehead region of the face under test has no wrinkles in a video frame whose forehead wrinkle value is below a first preset threshold, and that it has wrinkles in a video frame whose forehead wrinkle value is above a second preset threshold;
a wrinkle action judging module for judging that the face under test performs a forehead wrinkle action if the extracted video frames include both a frame in which its forehead region has no wrinkles and a frame in which it has wrinkles.
Further, the gradient value acquisition unit is specifically configured to compute, with the Sobel operator, the Sobel value of each pixel in the forehead region of each extracted video frame, where the Sobel value represents the gradient value.
Further, the forehead region acquisition unit includes:
a facial key-point detection module for performing face detection and facial key-point detection on each video frame extracted from the face video under test with the dlib library, obtaining the face region position and several key-point positions of the face under test;
a forehead region acquisition module for obtaining several eyebrow key-point positions from the facial key points of each extracted frame and obtaining the forehead region from the eyebrow key-point positions and the face region position.
Correspondingly, an embodiment of the present invention also provides a living-body identification method, including the steps of:
detecting the forehead wrinkle action of the face under test in a face video together with the motion of at least one other facial part, where the forehead wrinkle action of the face under test is detected with the forehead wrinkle action detection method disclosed by the present invention;
obtaining a motion score for each facial-part motion of the face under test based on the detected motion;
computing the sum of the motion scores of the facial-part motions after weighting, each part motion having a preset weight, and taking the resulting sum as the living-body identification score;
judging the face under test to be a living body if the living-body identification score is not less than a preset threshold.
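The weighted score fusion in the steps above can be sketched as follows. This is a minimal illustration only: the part names, weight values, and decision threshold are assumptions for the example, not values specified by the patent.

```python
# Hypothetical sketch of the weighted fusion of facial-part motion scores.
# Part names, weights, and the threshold below are illustrative assumptions.

def liveness_score(motion_scores, weights):
    """Weighted sum of per-part motion scores (the living-body identification score)."""
    return sum(weights[part] * score for part, score in motion_scores.items())

def is_live(motion_scores, weights, threshold):
    """A face is judged live when the fused score reaches the preset threshold."""
    return liveness_score(motion_scores, weights) >= threshold

# Forehead wrinkle action plus one other facial-part motion (here: blinking).
scores = {"forehead_wrinkle": 1.0, "eye_blink": 0.5}
weights = {"forehead_wrinkle": 0.6, "eye_blink": 0.4}
print(is_live(scores, weights, threshold=0.5))  # True
```

Keeping the weights separate from the per-part detectors matches the patent's design: new part-motion detectors can be added without changing the fusion rule.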
Compared with the prior art, the living-body identification method disclosed in the embodiments of the present invention detects the forehead wrinkle action of the face under test with the forehead wrinkle action detection method disclosed by the present invention, detects the motion of other facial parts, obtains a motion score for each part motion, takes the weighted sum of the motion scores as the living-body identification score, and uses that score as the criterion for whether the face under test is a living body. The forehead wrinkle action detection is computationally simple and efficient with low hardware requirements; combining the forehead wrinkle motion with at least one other facial-part motion overcomes the low security of single-algorithm schemes in the prior art and scales well; the facial-part motion detection can be realized on two-dimensional images, so the hardware requirements are low; and the weighted score fusion over different part motions gives high identification accuracy. This living-body identification method is therefore accurate, secure, and undemanding on hardware.
Correspondingly, an embodiment of the present invention also provides a living-body identification system, including:
at least two facial-part motion detection devices, each detecting the motion of a corresponding part of the face under test, one of which is the forehead wrinkle action detection device disclosed by the present invention;
a part-motion score acquisition device for obtaining a motion score for each facial-part motion of the face under test based on the detected motion;
a living-body identification score computing device for computing the sum of the motion scores of the facial-part motions after weighting, with a weight preset for each part motion, and taking the resulting sum as the living-body identification score;
a living-body judging device for judging the face under test to be a living body if the living-body identification score is not less than a preset threshold.
Compared with the prior art, the living-body identification system disclosed in the embodiments of the present invention obtains motion scores for at least two parts of the face under test through at least two facial-part motion detection devices, one of which is the forehead wrinkle action detection device of the present invention; the living-body identification score computing device takes the weighted sum of the motion scores as the living-body identification score, and the living-body judging device uses that score as the criterion for whether the face under test is a living body. The forehead wrinkle action detection device is computationally simple and efficient with low hardware requirements; detecting the motion of at least two facial parts overcomes the low security of single-algorithm schemes in the prior art and scales well; the facial-part motion detection can be realized on two-dimensional images, so the hardware requirements are low; and the weighted score fusion performed by the living-body identification score computing device gives high identification accuracy. The system thus achieves high accuracy, low hardware requirements, and good security.
Brief description of the drawings
Fig. 1 is a flow diagram of the forehead wrinkle action detection method provided by Embodiment 1 of the present invention;
Fig. 2 is a flow diagram of step S15 of the forehead wrinkle action detection method provided by Embodiment 1;
Fig. 3 is a flow diagram of step S12 of the forehead wrinkle action detection method provided by Embodiment 1;
Fig. 4 is a schematic diagram of the 68-key-point model of the face under test;
Fig. 5 is a structural diagram of the forehead wrinkle action detection device provided by Embodiment 2 of the present invention;
Fig. 6 is a flow diagram of the living-body identification method provided by Embodiment 3 of the present invention;
Fig. 7 is a flow diagram of step S24 of the living-body identification method provided by Embodiment 3;
Fig. 8 is a structural diagram of the living-body identification system provided by Embodiment 4 of the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. The described embodiments are plainly only a part of the embodiments of the invention, not all of them. All other embodiments obtained by those of ordinary skill in the art from the embodiments of the present invention without creative work fall within the scope of protection of the invention.
Embodiment 1 of the present invention provides a forehead wrinkle action detection method. Referring to Fig. 1, the flow diagram of this embodiment, the method includes the steps:
S11, extracting several video frames from a face video under test;
S12, obtaining the forehead region of each video frame extracted from the face video under test;
S13, computing the gradient value of each pixel in the forehead region of each extracted video frame;
S14, computing the variance of the gradient values of the pixels in the forehead region of each extracted video frame to obtain the forehead wrinkle value of the corresponding frame;
S15, judging the forehead wrinkle action of the face video under test based on the forehead wrinkle value of each extracted frame.
Generally, in a face picture, a forehead without wrinkles yields a smooth image with little color variation, while a wrinkled forehead shows color changes, so the pixel values of the corresponding image vary and fluctuate more strongly; this phenomenon can be used to distinguish whether the forehead is wrinkled. The present invention defines a forehead as unwrinkled when its detected forehead wrinkle value is below a first preset threshold, and as wrinkled when its detected forehead wrinkle value is above a second preset threshold; a forehead wrinkle action is defined as the action of the face's forehead producing wrinkles. Accordingly, if both a wrinkled and an unwrinkled forehead state are detected among the video frames of the face video under test, the face under test can be judged to perform a forehead wrinkle action. The other embodiments provided by the present invention follow the same definitions, which are not repeated there.
Referring to Fig. 2, the flow diagram of step S15, step S15 specifically includes:
S151, judging that the forehead region of the face under test has no wrinkles in a video frame whose forehead wrinkle value is below the first preset threshold, and that it has wrinkles in a video frame whose forehead wrinkle value is above the second preset threshold;
S152, if the extracted video frames include both a frame in which the forehead region of the face under test has no wrinkles and a frame in which it has wrinkles, judging that the face under test performs a forehead wrinkle action.
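Steps S151 and S152 can be sketched as a simple two-threshold rule over the per-frame wrinkle values. The concrete threshold values below are illustrative assumptions, not values from the patent.

```python
# A minimal sketch of steps S151-S152: classify each frame's forehead as
# unwrinkled / wrinkled by two preset thresholds, then report a forehead
# wrinkle action when both states occur among the extracted frames.
# The two threshold values are assumed for illustration.

FIRST_THRESHOLD = 120.0   # below this: forehead judged unwrinkled (assumed)
SECOND_THRESHOLD = 200.0  # above this: forehead judged wrinkled (assumed)

def has_wrinkle_action(wrinkle_values):
    """wrinkle_values: one forehead wrinkle value per extracted video frame."""
    saw_smooth = any(v < FIRST_THRESHOLD for v in wrinkle_values)
    saw_wrinkled = any(v > SECOND_THRESHOLD for v in wrinkle_values)
    return saw_smooth and saw_wrinkled

print(has_wrinkle_action([80.0, 150.0, 260.0]))  # True: both states observed
print(has_wrinkle_action([80.0, 90.0, 110.0]))   # False: forehead stays smooth
```

Using two separate thresholds leaves a dead band between them, so frames with ambiguous wrinkle values contribute to neither state.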
In step S13, the gradient value of each pixel in the forehead region of each extracted video frame is computed with an edge detection operator; here, the Sobel operator is preferred. Step S13 then specifically includes: computing, with the Sobel operator, the Sobel value of each pixel in the forehead region of each extracted video frame, where the Sobel value is the gradient value representing the degree of variation of the pixel's value.
The Sobel operator generally comprises both horizontal-edge and vertical-edge detection. Since the wrinkles produced on the forehead are mostly horizontal edges, this embodiment further preferably reduces the Sobel operator to horizontal-edge detection only. The Sobel value obtained in step S13 is then the result of convolving the region of kernel size centered on the current pixel with the vertical-direction convolution kernel. Concretely, the computation of the Sobel value of each pixel in step S13 convolves the forehead region with the vertical-direction kernel: for an M*M pixel matrix and an M*M convolution kernel, the values at corresponding matrix positions are multiplied and the M*M products are summed to give the convolution result.
Correspondingly, in step S14 the variance of the Sobel values of the pixels in the forehead region of each extracted video frame, computed with the Sobel operator, gives the forehead wrinkle value of the corresponding frame.
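Steps S13 and S14 under this simplification can be sketched as follows: a vertical-direction Sobel convolution (responding to horizontal edges) over the forehead region, followed by the variance of the resulting values. Plain Python lists stand in for image data, and the 3x3 kernel (M = 3) is an assumed choice; the patent leaves M unspecified.

```python
# Sketch of steps S13-S14: convolve the forehead region with the 3x3
# vertical-direction Sobel kernel (horizontal-edge response), then take the
# variance of the per-pixel results as the frame's forehead wrinkle value.
# M = 3 and the toy pixel data are illustrative assumptions.

SOBEL_VERTICAL = [[-1, -2, -1],
                  [ 0,  0,  0],
                  [ 1,  2,  1]]

def sobel_values(region):
    """Per-pixel convolution results over the region's interior (valid area)."""
    h, w = len(region), len(region[0])
    out = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            acc = 0
            for dy in range(3):
                for dx in range(3):
                    acc += SOBEL_VERTICAL[dy][dx] * region[y + dy - 1][x + dx - 1]
            out.append(acc)
    return out

def wrinkle_value(region):
    """Variance of the Sobel values: the forehead wrinkle value of the frame."""
    vals = sobel_values(region)
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

smooth = [[100] * 5 for _ in range(5)]     # uniform forehead: no edges
striped = [[100, 100, 100, 100, 100],
           [100, 100, 100, 100, 100],
           [200, 200, 200, 200, 200],      # a horizontal "wrinkle" line
           [100, 100, 100, 100, 100],
           [100, 100, 100, 100, 100]]
print(wrinkle_value(smooth))   # 0.0
print(wrinkle_value(striped) > wrinkle_value(smooth))  # True
```

The smooth region gives zero variance while the horizontal stripe gives a large one, which is exactly the contrast the two wrinkle thresholds of step S15 exploit.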
In addition, an implementation that convolves the forehead region with both the vertical-direction and horizontal-direction Sobel kernels to obtain the Sobel values, and obtains the forehead wrinkle value of the corresponding frame as the variance of those Sobel values, is also within the scope of protection of this embodiment.
On the basis of the present invention, the Sobel operator may also be replaced by other edge detection operators, such as the Canny, Prewitt, or Roberts operators, to obtain the forehead wrinkle value of each extracted frame and realize the judgment of the forehead wrinkle action; such embodiments are also within the scope of protection of the present invention. The Sobel operator is preferred in this embodiment because, compared with other edge detection operators, it requires little computation and is highly efficient; when the forehead wrinkle action detection method of this embodiment is applied to living-body identification, it enables an efficient judgment of whether the face performs a wrinkle action.
Referring to Fig. 3, step S12 specifically includes:
S121, performing face detection and facial key-point detection on each video frame extracted from the face video under test with the dlib library, obtaining the face region position and several key-point positions of the face under test;
the dlib library is a cross-platform general-purpose library written in C++.
Referring to Fig. 4, the schematic diagram of the 68-key-point model of the face under test: the key-point positions obtained in step S121 are those shown as key points 1 to 68 in Fig. 4. In addition, running face detection on each extracted frame yields the face region position; in this embodiment, the face region is preferably the rectangular box representing the face, so obtaining the positions of the four example points H, I, J, and K in Fig. 4 determines the rectangular face box, i.e., the face region position.
S122, obtaining several eyebrow key-point positions from the facial key points of each extracted frame, and obtaining the forehead region from the eyebrow key-point positions and the face region position.
In Fig. 4, the eyebrow key points obtained in step S122 are the 10 positions shown as key points 18 to 27; specifically, the left eyebrow key points are positions 18 to 22 and the right eyebrow key points are positions 23 to 27. The eyebrow key-point positions determine the lower boundary of the forehead region, the upper edge of the rectangle representing the face region is the upper boundary of the forehead region, and the forehead region is determined within the face region from these upper and lower boundaries; the rectangular region HOPI in the example of Fig. 4 is the forehead region.
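The geometry of step S122 can be sketched as follows. The convention assumed here, following the embodiment, is that the forehead's upper boundary is the top edge of the face rectangle and its lower boundary comes from the eyebrow key points; the coordinate values are hypothetical landmark positions (dlib-style coordinates, y growing downward), not data from the patent.

```python
# Sketch of step S122: derive the forehead rectangle from the face box and
# the eyebrow key points (18-27). All coordinates below are assumed examples.

def forehead_region(face_box, eyebrow_points):
    """face_box: (left, top, right, bottom); eyebrow_points: [(x, y), ...].
    Returns the forehead rectangle (left, top, right, bottom)."""
    left, top, right, _ = face_box
    brow_top = min(y for _, y in eyebrow_points)  # highest eyebrow point
    return (left, top, right, brow_top)

box = (40, 20, 160, 180)                            # assumed face rectangle
brows = [(60, 70), (80, 64), (100, 62), (120, 65),  # left-eyebrow sketch
         (110, 66), (130, 63), (150, 68)]           # right-eyebrow sketch
print(forehead_region(box, brows))  # (40, 20, 160, 62)
```

Taking the minimum eyebrow y as the lower boundary is one reasonable reading of "based on the eyebrow key-point positions"; an implementation could equally use their mean.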
Extracting several video frames from the face video under test in step S11 includes: extracting consecutive video frames from the face video under test; or extracting video frames from the face video under test at a preset frequency.
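The two sampling strategies of step S11 can be sketched over frame indices. The frame counts and the sampling interval below are illustrative assumptions.

```python
# Sketch of step S11: pick frame indices either as a consecutive run or by
# sampling at a preset frequency. Counts and interval are assumed values.

def consecutive_frames(total_frames, start, count):
    """Indices of `count` consecutive frames beginning at `start`."""
    return list(range(start, min(start + count, total_frames)))

def sampled_frames(total_frames, every_nth):
    """Every `every_nth`-th frame index, i.e. sampling at a preset frequency."""
    return list(range(0, total_frames, every_nth))

print(consecutive_frames(100, 10, 5))  # [10, 11, 12, 13, 14]
print(sampled_frames(20, 5))           # [0, 5, 10, 15]
```

Sampling at a preset frequency covers a longer time window with the same number of frames, which raises the chance of catching both the wrinkled and unwrinkled forehead states.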
In a specific implementation, this embodiment obtains several video frames from the face video under test, determines the forehead region of the face under test in each extracted frame, convolves the forehead region with the vertical-direction Sobel kernel to obtain the Sobel value of each pixel, and takes the variance of the Sobel values of each extracted frame as its forehead wrinkle value; finally, it judges from the forehead wrinkle value whether the forehead of the face under test in the corresponding frame is wrinkled, and judges that the face under test performs a forehead wrinkle action when the extracted frames include both a wrinkled-forehead frame and an unwrinkled-forehead frame.
Compared with the prior art, this embodiment is computationally simple and efficient, and any ordinary camera, including a mobile-phone camera, can serve as the input hardware for the face video under test, so the hardware requirements are low.
Embodiment 2 of the present invention provides a forehead wrinkle action detection device. Referring to Fig. 5, the structural diagram of this embodiment, the device includes:
a video frame extraction unit 11 for extracting several video frames from a face video under test;
a forehead region acquisition unit 12 for obtaining the forehead region of each video frame extracted from the face video under test;
a gradient value acquisition unit 13 for computing the gradient value of each pixel in the forehead region of each extracted video frame;
a forehead wrinkle value acquisition unit 14 for computing the variance of the gradient values of the pixels in the forehead region of each extracted video frame to obtain the forehead wrinkle value of the corresponding frame;
a forehead wrinkle action judging unit 15 for judging the forehead wrinkle action of the face video under test based on the forehead wrinkle value of each extracted frame.
The forehead wrinkle action judging unit 15 specifically includes:
a wrinkle condition judging module 151 for judging that the forehead region of the face under test has no wrinkles in a video frame whose forehead wrinkle value is below the first preset threshold, and that it has wrinkles in a video frame whose forehead wrinkle value is above the second preset threshold;
a wrinkle action judging module 152 for judging that the face under test performs a forehead wrinkle action if the extracted video frames include both a frame in which its forehead region has no wrinkles and a frame in which it has wrinkles.
The gradient value acquisition unit 13 calculates the gradient value of each pixel in the forehead region of each extracted video frame with an edge detection operator, preferably the Sobel operator. The gradient value acquisition unit 13 is specifically configured to calculate, with the Sobel operator, the Sobel value of each pixel in the forehead region of each extracted video frame, where the Sobel value is a gradient value representing the degree of variation of the pixel values around that pixel.
The Sobel operator normally detects both horizontal-direction and vertical-direction edges. Because the wrinkles produced on the forehead are mostly horizontal, it is further preferred in this embodiment to reduce the operator to detecting horizontal edges only. The Sobel value representing the gradient is then defined by the gradient value acquisition unit 13 as the result of convolving the neighbourhood centred on the current pixel with the vertical-direction convolution kernel. Concretely, the gradient value acquisition unit 13 convolves the forehead region with the vertical-direction kernel: for an M×M kernel, the M×M pixel neighbourhood is multiplied element-wise with the kernel and the M×M products are summed to give the convolution result.
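The convolution and the subsequent variance step can be sketched as follows (a minimal illustration in Python with NumPy; the 3×3 kernel shown is one common form of the vertical-direction Sobel kernel, and the function names are for illustration only):

```python
import numpy as np

# 3x3 Sobel kernel for the vertical gradient direction; it responds to
# horizontal edges, which is the dominant orientation of forehead wrinkles.
SOBEL_Y = np.array([[-1, -2, -1],
                    [ 0,  0,  0],
                    [ 1,  2,  1]], dtype=np.float64)

def sobel_values(forehead):
    """Convolve each pixel's 3x3 neighbourhood with the kernel.

    forehead: 2-D grayscale array. Only the valid interior region is
    computed; each output value is the element-wise product of the
    neighbourhood and the kernel, summed."""
    h, w = forehead.shape
    out = np.zeros((h - 2, w - 2))
    for y in range(h - 2):
        for x in range(w - 2):
            patch = forehead[y:y + 3, x:x + 3].astype(np.float64)
            out[y, x] = np.sum(patch * SOBEL_Y)
    return out

def forehead_wrinkle_value(forehead):
    """Variance of the per-pixel Sobel values, used as the wrinkle value."""
    return float(np.var(sobel_values(forehead)))
```

A uniform forehead region yields a wrinkle value of 0, while a region crossed by a horizontal edge yields a strictly positive value, which is the property the thresholds of modules 151 and 152 rely on.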
Correspondingly, the forehead wrinkle value acquisition unit 14 is configured to calculate the variance of the Sobel values of the pixels in the forehead region of each extracted video frame to obtain the forehead wrinkle value of the corresponding frame.
Alternatively, the gradient value acquisition unit 13 may obtain the Sobel values by convolving the forehead region with both the vertical-direction and the horizontal-direction kernels, with the forehead wrinkle value acquisition unit 14 again taking the variance of the Sobel values as the forehead wrinkle value; this implementation also falls within the protection scope of this embodiment.
On the basis of the present invention, the gradient value acquisition unit 13 may also substitute another edge detection operator, such as the Canny, Prewitt or Roberts operator, to obtain the forehead wrinkle value of each extracted frame and thereby judge the forehead wrinkle action; such implementations likewise fall within the protection scope of the present invention. Compared with these operators, the Sobel operator is preferred in this embodiment because of its small computational load and high efficiency; when the forehead wrinkle action detection method of this embodiment is applied to liveness detection, it allows an efficient judgment of whether the face performs a wrinkle action.
The forehead region acquisition unit 12 specifically includes:
Face keypoint detection module 121, configured to perform face detection and facial keypoint detection with the dlib library on each video frame extracted from the face video under test, obtaining the face region position and several keypoint positions of the face under test;
here, the dlib library is a cross-platform general-purpose library written in C++.
Referring to Fig. 4, which is a schematic diagram of the 68-keypoint model of the face under test: the keypoint positions obtained in step S121 are the positions of keypoints 1 to 68 shown in Fig. 4. In addition, performing face detection on each extracted frame yields the face region position; in this embodiment the face region is preferably the rectangular box enclosing the face, so obtaining the positions of the four points H, I, J and K in Fig. 4 determines the rectangle and hence the face region position.
Forehead region acquisition module 122, configured to obtain the keypoint positions of the eyebrows from the facial keypoints of each extracted video frame, and to obtain the forehead region based on the eyebrow keypoint positions and the face region position.
In Fig. 4, the eyebrow keypoints obtained by the forehead region acquisition module 122 are the ten keypoints 18 to 27; specifically, keypoints 18 to 22 mark the left eyebrow and keypoints 23 to 27 the right eyebrow. The eyebrow keypoint positions determine the lower boundary of the forehead region, the upper edge of the face rectangle serves as its upper boundary, and the forehead region is determined from these two boundaries within the face region; it is the rectangular area HOPI in the example of Fig. 4.
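Under the numbering above, the forehead rectangle can be sketched as follows (a minimal illustration in Python; it assumes the landmarks arrive as a list of 68 (x, y) tuples in the order of Fig. 4, so that keypoints 18 to 27 occupy 0-based indices 17 to 26, and that the face region is given as (left, top, right, bottom) — the function name and these conventions are illustrative only):

```python
def forehead_region(face_rect, landmarks):
    """Derive the forehead rectangle from the face box and eyebrow keypoints.

    face_rect: (left, top, right, bottom) of the detected face rectangle.
    landmarks: list of 68 (x, y) points in the Fig. 4 numbering.
    The lower boundary of the forehead is the topmost eyebrow point
    (keypoints 18-27); the upper boundary is the top edge of the face box."""
    left, top, right, _ = face_rect
    brow_points = landmarks[17:27]          # 0-based indices for keypoints 18-27
    lower = min(y for _, y in brow_points)  # topmost eyebrow y-coordinate
    return (left, top, right, lower)
```

The returned rectangle corresponds to the HOPI region of Fig. 4 and is the input handed to the gradient value acquisition unit 13.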
The video frame extraction unit 11 is specifically configured to extract consecutive video frames from the face video under test; or, the video frame extraction unit 11 is specifically configured to extract video frames from the face video under test at a preset frequency.
In specific implementation, this embodiment obtains several video frames from the face video under test via the video frame extraction unit 11; the forehead region acquisition unit 12 then determines the forehead region of the face under test in each extracted frame; next, the gradient value acquisition unit 13 convolves the forehead region with the vertical-direction Sobel kernel to obtain the Sobel value of each pixel, and the forehead wrinkle value acquisition unit 14 takes the variance of the Sobel values of each extracted frame as that frame's forehead wrinkle value; finally, the forehead wrinkle action judgment unit 15 judges from the forehead wrinkle value whether the forehead of the face under test has a wrinkle in each frame, and judges that the face video under test contains a forehead wrinkle action when the extracted frames include both a frame with a wrinkled forehead and a frame with an unwrinkled forehead.
Compared with the prior art, this embodiment is computationally simple and efficient, and any ordinary camera, including a mobile phone camera, can serve as the input hardware for the face video under test, so the hardware requirements on the device are low.
Embodiment 3 of the present invention provides a liveness recognition method; referring to Fig. 6, the flow diagram of this embodiment, the method specifically includes the steps:
S21, detecting the forehead wrinkle action of the face under test in the face video under test, together with at least one other facial-part motion of the face under test; the forehead wrinkle action of the face under test is detected with the forehead wrinkle action detection method provided by Embodiment 1 of the present invention, whose detailed process is described in that embodiment and is not repeated here;
S22, obtaining a motion score for each facial-part motion of the face under test based on the detected motion;
S23, calculating the weighted sum of the motion scores of the facial-part motions, and taking the calculated sum as the liveness recognition score; each facial-part motion is assigned a preset weight;
S24, judging the face under test to be a living body when the liveness recognition score is not less than a preset threshold.
By way of example, the at least one other facial-part motion detected in step S21 of this embodiment is at least one of mouth motion, eye motion, head motion, facial motion and eyebrow motion. Typically, mouth motion includes whether the mouth opens and closes, or a smile action, i.e. whether the corners of the mouth move beyond a preset standard; eye motion includes whether the eyes open and close; head motion includes whether the head turns; facial motion covers overall changes of the facial features, such as a funny-face action in which the combined change of the eyes and mouth exceeds a preset condition; eyebrow motion includes whether the eyebrows move. As a rule, the mouth, eye and head motions of a face are the most pronounced and easiest to detect, so detecting at least one of mouth motion, eye motion and head motion is preferred.
Detecting the at least one other facial-part motion in step S21 specifically includes: extracting a video frame from the face video under test every preset number of frames, detecting in each extracted frame the keypoint positions corresponding to the facial part, and determining the motion from the degree of change of those keypoint positions across the extracted frames; or, detecting in each extracted frame the grey-value features corresponding to the facial part, and determining the motion from the degree of change of those grey values across the extracted frames. These implementations are only examples of detecting the other facial-part motions; on the basis of the liveness recognition method of this embodiment, realizing the detection of the at least one other facial-part motion by other specific implementations also falls within the protection scope of this embodiment.
In step S23 of this embodiment, a preferred way of setting the weight of each facial-part motion is according to how pronounced the motion is. For example, when step S21 detects the forehead wrinkle action, eye motion and mouth motion of the face under test, mouth motion is the most pronounced and therefore receives the largest weight, eye motion comes second and the forehead the smallest, so the weights are correspondingly ordered: mouth motion > eye motion > forehead wrinkle action.
Alternatively, another preferred way of setting the weights in step S23 is to adjust them automatically per application scenario. The specific practice: in a given scenario, collect normal input videos of the various facial-part motions of faces under test as positive samples and attack videos as negative samples, and take (positive samples passed + negative samples rejected) / (total positive samples + total negative samples) as the accuracy of that motion; the motions are then ranked by accuracy in descending order, and the weights of the motions are reassigned in the same descending order. The readjusted weights are used to calculate the liveness recognition score, so the recognition result adapts to the detection accuracy of each facial-part motion under different scenarios, increasing the accuracy of the liveness recognition result of this embodiment.
Either of the above two ways of setting the weight of each facial-part motion falls within the protection scope of this embodiment.
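The scenario-adaptive reweighting described above can be sketched as follows (a minimal illustration in Python; the per-motion statistics dictionary and the function name are assumptions made for the sketch, not structures defined by the embodiment):

```python
def reweight_by_accuracy(stats, weights):
    """Reassign an existing weight set to motions ranked by scenario accuracy.

    stats: {motion: (pos_pass, pos_total, neg_reject, neg_total)} collected
    in the target scenario from positive (normal) and negative (attack) videos.
    weights: the existing weight values. The motion with the highest accuracy
    receives the largest weight, and so on in descending order."""
    def accuracy(s):
        pos_pass, pos_total, neg_reject, neg_total = s
        # accuracy = (positives passed + negatives rejected) / (all samples)
        return (pos_pass + neg_reject) / (pos_total + neg_total)
    ranked = sorted(stats, key=lambda m: accuracy(stats[m]), reverse=True)
    return {m: w for m, w in zip(ranked, sorted(weights, reverse=True))}
```

For instance, if in some scenario eye motion is detected more reliably than mouth motion, the largest weight migrates from the mouth to the eye even though mouth motion is normally more pronounced.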
Specifically, referring to Fig. 7, the flow diagram of step S24, which includes the steps:
S241, calculating the liveness recognition confidence of the face under test as the ratio of the liveness recognition score to the maximum liveness recognition score;
S242, determining that the liveness recognition score is not less than the preset threshold when the liveness recognition confidence is not less than a preset value;
S243, judging the face under test to be a living body when the liveness recognition score is not less than the preset threshold.
Specifically, in step S241, the maximum liveness recognition score is the highest score obtainable when the face under test is recognised in this embodiment, and the liveness recognition confidence of the face under test is calculated by the formula:
f = (s / s_max) * 100%
where s denotes the liveness recognition score, s_max denotes the maximum liveness recognition score, f denotes the liveness recognition confidence, and 0 < f < 1.
With the preset value denoted e: when f >= e, i.e. the liveness recognition confidence is not less than the preset value, the liveness recognition score is determined to be not less than the preset threshold, and the face under test is judged to be a living body; when f < e, i.e. the liveness recognition confidence is less than the preset value, the liveness recognition score is determined to be less than the preset threshold, and the face under test is judged to be a non-living body.
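The confidence formula and the decision of steps S241 to S243 amount to the following (a minimal illustration in Python; the function name is illustrative only):

```python
def is_live(score, total, e):
    """Liveness decision from the weighted score.

    score: liveness recognition score s; total: maximum score s_max;
    e: preset confidence value. Returns (decision, confidence), where
    confidence f = s / s_max and the face is judged live when f >= e."""
    f = score / total
    return f >= e, f
```

For example, a score of 5 out of a maximum of 6 gives f ≈ 0.8333, which passes a preset value of 0.80, while a score of 4 out of 6 (f ≈ 0.6667) does not.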
The liveness recognition confidence obtained from the liveness recognition score can also be further exploited: a grading system can be built on this embodiment to perform both the liveness judgment and a liveness classification, yielding a richer liveness recognition result.
Step S22, obtaining the motion score of each facial-part motion of the face under test based on the detected motion, includes:
obtaining the score of the forehead wrinkle action: when step S21 detects that the face under test performs a forehead wrinkle action, the motion score of the forehead wrinkle action is 1 point; otherwise the motion score of the forehead wrinkle action is 0 points.
Similarly, obtaining the score of each of the at least one other facial-part motions: when step S21 detects motion of the corresponding facial part of the face under test, the motion score of that part is 1 point; otherwise the motion score is 0 points.
Besides deriving the motion score from a binary motion/no-motion judgment, if the motion obtained in step S21 carries a degree of motion, the corresponding motion score may instead be taken from a score interval according to that degree; for example, the score may be divided into 10 levels with values between 0 and 1.
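The graded alternative to the binary score can be sketched as follows (a minimal illustration in Python; the clamping behaviour and function name are assumptions of the sketch):

```python
def graded_score(degree, levels=10):
    """Quantize a detected motion degree into a score between 0 and 1.

    degree: motion degree, expected in [0, 1]; values outside are clamped.
    levels: number of discrete steps, 10 in the example of the text."""
    degree = min(max(degree, 0.0), 1.0)  # clamp to the valid range
    return round(degree * levels) / levels
```

This preserves not only whether the part moved but also how strongly, which then flows into the weighted sum of step S23.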
In specific implementation, the facial-part motions of the face video under test are detected to obtain the motion of each corresponding part, one of the detections being the forehead wrinkle action detection method provided by the present invention; a motion score is obtained from each detected motion, namely 1 point when the part moves and 0 points otherwise; the weighted sum of these motion scores is then calculated, the sum representing the liveness recognition score; finally, the liveness recognition confidence is calculated as the ratio of the liveness recognition score to the maximum liveness recognition score, and when the confidence is not less than the preset value the score is determined to be not less than the preset threshold and the face under test is judged to be a living body; otherwise, the face under test is judged to be a non-living body.
This embodiment can be applied at many device ends; taking application at a mobile phone end as an example: during liveness recognition at the phone end, a random sequence of liveness action commands is issued, for example requiring the face under test to perform mouth-opening, blinking and forehead wrinkle actions in turn. Suppose the preset weights of the facial-part motions are: w1 = 3 for the mouth motion of opening the mouth, w2 = 2 for the eye motion of blinking, and w3 = 1 for the forehead wrinkle action. The maximum liveness recognition score is then s_max = 3*1 + 2*1 + 1*1 = 6 points. Suppose mouth-opening is detected and scores 1 point, blinking scores 1 point, and the forehead wrinkle action scores 0 points; the liveness recognition score s is the weighted sum of the motion scores, so s = 3*1 + 2*1 + 1*0 = 5 points. Finally, the liveness recognition confidence f = s / s_max = 5/6 = 83.33%. If the preset value e is set to 80%, the face under test is judged to be a living body, with a liveness confidence of 83.33%.
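The arithmetic of the mobile phone example above can be reproduced directly (a minimal sketch in Python using the example weights and scores from the text):

```python
weights = {"mouth": 3, "eye": 2, "forehead": 1}   # example weights w1, w2, w3
scores  = {"mouth": 1, "eye": 1, "forehead": 0}   # detected motion scores

s_max = sum(w * 1 for w in weights.values())      # best possible score: 6
s = sum(weights[m] * scores[m] for m in weights)  # liveness score: 5
f = s / s_max                                     # confidence, about 83.33%
is_live = f >= 0.80                               # preset value e = 80%
```

With these values the face is judged live, matching the worked example.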
This embodiment overcomes the single-algorithm, low-security and poorly extensible approach of the prior art; the forehead wrinkle action detection for the face under test is computationally simple and efficient and places low demands on the device hardware; moreover, this embodiment performs liveness recognition by detecting multiple facial-part motions and fusing the weighted scores of the different motions, so the liveness recognition accuracy is high and security is improved.
Embodiment 4 of the present invention provides a liveness recognition system; referring to Fig. 8, the structural diagram of this embodiment, which includes:
at least two facial-part motion detection devices 1, each configured to detect the motion of one corresponding facial part of the face under test; in Fig. 8, facial-part motion detection devices 1a and 1b represent two devices 1 detecting two different facial-part motions; one of the facial-part motion detection devices 1 is the forehead wrinkle action detection device provided by Embodiment 2 of the present invention, described in that embodiment and not repeated here.
It should be noted that Fig. 8 shows only two facial-part motion detection devices 1 by way of example; this embodiment may include more than two.
Facial-part motion score acquisition device 2, configured to obtain the motion score of each facial-part motion of the face under test based on the detected motion;
Liveness recognition score calculation device 3, configured to calculate the weighted sum of the motion scores of the facial-part motions and take the calculated sum as the liveness recognition score; the liveness recognition score calculation device 3 is preset with the weight of each facial-part motion;
Liveness judgment device 4, configured to judge the face under test to be a living body when the liveness recognition score is not less than a preset threshold.
By way of example, the at least one facial-part motion detected by the motion detection devices 1 other than the forehead wrinkle action detection device includes at least one of mouth motion, eye motion, head motion, eyebrow motion and facial motion. Mouth motion includes whether the mouth opens and closes, or whether the face smiles, i.e. whether the corners of the mouth move beyond a preset standard; eye motion includes whether the eyes open and close; head motion includes whether the head turns; eyebrow motion includes whether the eyebrows move; facial motion covers overall changes of the facial features, such as a funny-face action in which the combined change of the eyes and mouth exceeds a preset condition. As a rule, the mouth, eye and head motions of a face are the most pronounced and easiest to detect, so detecting at least one of mouth motion, eye motion and head motion is preferred.
By way of example, each of the other facial-part motion detection devices 1 is specifically configured to extract a video frame from the face video under test every preset number of frames, detect in each extracted frame the keypoint positions corresponding to the facial part, and determine the motion from the degree of change of those keypoint positions across the extracted frames; or, the facial-part motion detection device 1 may be specifically configured to detect in each extracted frame the grey-value features corresponding to the facial part and determine the motion from the degree of change of those grey values, an implementation generally suitable when the detected motion is eye motion or forehead motion. These implementations are only examples of the detection performed by the other facial-part motion detection devices 1; a facial-part motion detection device 1 realizing the detection of the other facial-part motions by other implementations also falls within the protection scope of this embodiment.
The facial-part motion score acquisition device 2 is specifically configured to obtain the score of the forehead wrinkle action: when the face under test performs a forehead wrinkle action, the motion score of the forehead wrinkle action is 1 point; otherwise it is 0 points. The facial-part motion score acquisition device 2 is also specifically configured to obtain the score of each of the other facial-part motions: when the corresponding facial part of the face under test moves, the motion score of that part is 1 point; otherwise it is 0 points.
Besides this binary motion/no-motion scoring, when the motion obtained by a facial-part motion detection device 1 carries a degree of motion, the facial-part motion score acquisition device 2 may instead derive from that degree a motion score between 0 and 1, for example divided into 10 levels; this alternative not only indicates whether there is motion but also reflects its degree.
In the liveness recognition score calculation device 3, the weight of each facial-part motion is set according to how pronounced the motion is: when the detected motions are, for example, the forehead wrinkle action, eye motion and mouth motion, mouth motion is the most pronounced and receives the largest weight, eye motion comes second and the forehead wrinkle action the smallest, so the weights are correspondingly ordered: mouth motion > eye motion > forehead wrinkle action.
Alternatively, the weights in the liveness recognition score calculation device 3 are adjusted automatically per application scenario. The specific practice: in a given scenario, collect normal input videos of the various facial-part motions of faces under test as positive samples and attack videos as negative samples, take (positive samples passed + negative samples rejected) / (total positive samples + total negative samples) as the accuracy of each motion, rank the motions by accuracy in descending order, and reassign the weights of the motions in the same descending order.
Either of the above two ways of setting the weight of each facial-part motion falls within the protection scope of this embodiment.
The liveness judgment device 4 includes:
Liveness recognition confidence calculation unit 41, configured to calculate the liveness recognition confidence of the face under test as the ratio of the liveness recognition score to the maximum liveness recognition score;
where the maximum liveness recognition score, denoted s_max, is the maximum weighted sum of the motion scores of all facial-part motions obtainable from the liveness recognition score calculation device 3; f denotes the liveness recognition confidence, with 0 < f < 1; the liveness recognition confidence calculation unit 41 calculates the confidence of the face under test by the formula:
f = (s / s_max) * 100%
Liveness judging unit 42, configured to determine, when the liveness recognition confidence is not less than a preset value, that the liveness recognition score is not less than the preset threshold, and to judge the face under test, whose liveness recognition score is not less than the preset threshold, to be a living body.
With the preset value denoted e, the liveness judging unit 42 judges: when f >= e, i.e. the confidence is not less than the preset value, the liveness recognition score is determined to be not less than the preset threshold and the face under test is judged to be a living body; when f < e, i.e. the confidence is less than the preset value, the score is determined to be less than the preset threshold and the face under test is judged to be a non-living body.
The liveness recognition confidence obtained by the liveness recognition confidence calculation unit 41 can also be further exploited: a grading system can be built on the liveness recognition system of this embodiment to perform both the liveness judgment and a liveness classification, yielding a richer liveness recognition result.
In specific implementation, first each facial-part motion detection device 1 obtains the motion of its corresponding facial part, one of the devices 1 being an embodiment of the forehead wrinkle action detection device of the present invention, and the facial-part motion score acquisition device 2 obtains the corresponding motion scores from the detected motions; then the liveness recognition score calculation device 3 weights and sums the obtained motion scores of the facial-part motions as the liveness recognition score; finally, the liveness recognition confidence calculation unit 41 of the liveness judgment device 4 calculates the liveness recognition confidence of the face under test as the ratio of the liveness recognition score to the maximum score, and the liveness judging unit 42 judges the face under test to be a living body when the calculated confidence is not less than the preset value.
By detecting with at least two facial-part motion detection devices, this embodiment overcomes the single-algorithm, low-security and poorly extensible approach of the prior art, and the forehead wrinkle action detection device it uses places low demands on the hardware; moreover, the liveness recognition score calculation device fuses the weighted scores of the different facial-part motions, so the liveness recognition is accurate, and the beneficial effects of high liveness recognition accuracy, low hardware requirements and high security are obtained.
The above describes preferred embodiments of the present invention. It should be noted that those skilled in the art can make several improvements and modifications without departing from the principles of the present invention, and such improvements and modifications are also regarded as falling within the protection scope of the present invention.

Claims (10)

1. A forehead wrinkle action detection method, characterised in that the forehead wrinkle action detection method comprises the steps of:
extracting several video frames from a face video under test;
obtaining the forehead region of each video frame extracted from the face video under test;
calculating the gradient value of each pixel in the forehead region of each extracted video frame;
calculating the variance of the gradient values of the pixels in the forehead region of each extracted video frame to obtain the forehead wrinkle value of the corresponding video frame;
judging, based on the forehead wrinkle value of each extracted video frame, whether the face video under test contains a forehead wrinkle action.
2. The forehead wrinkle action detection method as claimed in claim 1, characterised in that the judging, based on the forehead wrinkle value of each extracted video frame, whether the face video under test contains a forehead wrinkle action comprises:
judging that the forehead region of the face under test has no wrinkle in a video frame whose forehead wrinkle value is less than a first preset threshold, and that the forehead region of the face under test has a wrinkle in a video frame whose forehead wrinkle value is greater than a second preset threshold;
if the extracted video frames include both a frame in which the forehead region of the face under test has no wrinkle and a frame in which it has a wrinkle, judging that the face under test performs a forehead wrinkle action.
3. The forehead wrinkle action detection method as claimed in claim 2, characterised in that the calculating the gradient value of each pixel in the forehead region of each extracted video frame comprises:
calculating, with the Sobel operator, the Sobel value of each pixel in the forehead region of each extracted video frame, wherein the Sobel value represents the gradient value.
4. The forehead wrinkle action detection method according to claim 2, wherein obtaining the forehead region of each video frame extracted from the face video to be measured comprises:
performing face detection and facial key point detection on each video frame extracted from the face video to be measured with the dlib library, to obtain the face region position and a number of key point positions of the face to be measured;
obtaining a number of eyebrow key point positions from the facial key points of each extracted video frame, and obtaining the forehead region based on the eyebrow key point positions and the face region position.
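Claim 4 builds the forehead region from the detected face box and eyebrow key points (in dlib's standard 68-landmark model the eyebrows are points 17–26). The geometry below, taking the strip of the face box above the topmost eyebrow point and clamping it to the eyebrow span, is one plausible construction, not the patent's exact one.

```python
def forehead_region(face_box, eyebrow_points):
    """face_box: (left, top, right, bottom) from the face detector;
    eyebrow_points: [(x, y), ...] eyebrow key points.
    Returns the forehead box: the part of the face box lying above
    the topmost eyebrow key point, limited to the eyebrow span."""
    left, top, right, bottom = face_box
    brow_top = min(y for _, y in eyebrow_points)   # highest eyebrow point
    xs = [x for x, _ in eyebrow_points]
    return (max(left, min(xs)), top, min(right, max(xs)), brow_top)
```

With dlib, `face_box` would come from `dlib.get_frontal_face_detector()` and the key points from a `dlib.shape_predictor` trained on the 68-point model; the crop returned here would then be passed to the gradient-variance step.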
5. A forehead wrinkle action detection device, comprising:
a video frame extraction unit, configured to extract a number of video frames from a face video to be measured;
a forehead region acquisition unit, configured to obtain the forehead region of each video frame extracted from the face video to be measured;
a gradient value acquisition unit, configured to calculate the gradient value of each pixel in the forehead region of each extracted video frame;
a forehead wrinkle value acquisition unit, configured to calculate the variance of the gradient values of the pixels in the forehead region of each extracted video frame to obtain the forehead wrinkle value of the corresponding video frame;
a forehead wrinkle action judgment unit, configured to determine, based on the forehead wrinkle value of each extracted video frame, whether the face in the face video to be measured performs a forehead wrinkle action.
6. The forehead wrinkle action detection device according to claim 5, wherein the forehead wrinkle action judgment unit comprises:
a wrinkle state judgment module, configured to determine that the forehead region of the face to be measured in a video frame has no wrinkles if the forehead wrinkle value of that frame is less than a first preset threshold, and that the forehead region has wrinkles if the forehead wrinkle value is greater than a second preset threshold;
a wrinkle action judgment module, configured to determine that the face to be measured performs a forehead wrinkle action if the extracted video frames include both a frame in which the forehead region of the face to be measured has no wrinkles and a frame in which it has wrinkles.
7. The forehead wrinkle action detection device according to claim 6, wherein the gradient value acquisition unit is specifically configured to calculate the Sobel value of each pixel in the forehead region of each extracted video frame with a Sobel operator, wherein the Sobel value represents the gradient value.
8. The forehead wrinkle action detection device according to claim 6, wherein the forehead region acquisition unit comprises:
a facial key point detection module, configured to perform face detection and facial key point detection on each video frame extracted from the face video to be measured with the dlib library, to obtain the face region position and a number of key point positions of the face to be measured;
a forehead region acquisition module, configured to obtain a number of eyebrow key point positions from the facial key points of each extracted video frame, and to obtain the forehead region based on the eyebrow key point positions and the face region position.
9. A living body identification method, comprising the steps of:
detecting whether the face to be measured in a face video performs a forehead wrinkle action and at least one other facial part motion, wherein the forehead wrinkle action of the face to be measured in the face video is detected with the forehead wrinkle action detection method according to any one of claims 1 to 4;
obtaining a motion score corresponding to each facial part motion of the face to be measured based on the detected motions;
calculating the weighted sum of the motion scores corresponding to the facial part motions, and taking the calculated sum as a living body identification score, wherein each facial part motion is assigned a preset weight;
determining that the face to be measured is a living body if the living body identification score is not less than a preset threshold.
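The score fusion of claim 9 reduces to a weighted sum compared against a threshold; the part names and weight values in the sketch below are illustrative assumptions, since the patent leaves the weights to be preset.

```python
def liveness_score(motion_scores, weights):
    """Weighted sum of per-part motion scores (claim 9).
    Both arguments are dicts keyed by facial part name."""
    return sum(weights[part] * score for part, score in motion_scores.items())

def is_live(motion_scores, weights, threshold):
    # Claim 9: the face is judged a living body when the score
    # is not less than the preset threshold.
    return liveness_score(motion_scores, weights) >= threshold
```

Combining several part motions (forehead wrinkle plus, say, eyebrow or mouth movement) with weights lets a strong, reliable cue dominate while still requiring corroboration from other parts.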
10. A living body identification system, comprising:
at least two facial part motion detection devices, each configured to detect the motion of a corresponding part of a face to be measured, wherein one of the facial part motion detection devices is a forehead wrinkle action detection device according to any one of claims 5 to 8;
a part motion score acquisition device, configured to obtain a motion score corresponding to each part motion of the face to be measured based on the detected motion of each part;
a living body identification score calculation device, configured to calculate the weighted sum of the motion scores corresponding to the part motions and to take the calculated sum as a living body identification score, wherein the living body identification score calculation device is preset with a weight corresponding to each part motion;
a living body judgment device, configured to determine that the face to be measured is a living body if the living body identification score is not less than a preset threshold.
CN201710406498.5A 2017-06-02 2017-06-02 Forehead wrinkle action detection method and device and living body identification method and system Active CN107330370B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710406498.5A CN107330370B (en) 2017-06-02 2017-06-02 Forehead wrinkle action detection method and device and living body identification method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710406498.5A CN107330370B (en) 2017-06-02 2017-06-02 Forehead wrinkle action detection method and device and living body identification method and system

Publications (2)

Publication Number Publication Date
CN107330370A true CN107330370A (en) 2017-11-07
CN107330370B CN107330370B (en) 2020-06-19

Family

ID=60193840

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710406498.5A Active CN107330370B (en) 2017-06-02 2017-06-02 Forehead wrinkle action detection method and device and living body identification method and system

Country Status (1)

Country Link
CN (1) CN107330370B (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101908140A (en) * 2010-07-29 2010-12-08 中山大学 Biopsy method for use in human face identification
US20150261999A1 (en) * 2013-06-25 2015-09-17 Morpho Method for detecting a true face
CN103440479A (en) * 2013-08-29 2013-12-11 湖北微模式科技发展有限公司 Method and system for detecting living body human face
CN104298482A (en) * 2014-09-29 2015-01-21 上海华勤通讯技术有限公司 Method for automatically adjusting output of mobile terminal
US20160188958A1 (en) * 2014-12-31 2016-06-30 Morphotrust Usa, Llc Detecting Facial Liveliness
CN104794464A (en) * 2015-05-13 2015-07-22 上海依图网络科技有限公司 In vivo detection method based on relative attributes
US20160342851A1 (en) * 2015-05-22 2016-11-24 Yahoo! Inc. Computerized system and method for determining authenticity of users via facial recognition
CN105138981A (en) * 2015-08-20 2015-12-09 北京旷视科技有限公司 In-vivo detection system and method
CN106778450A (en) * 2015-11-25 2017-05-31 腾讯科技(深圳)有限公司 A kind of face recognition method and device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
AVINASH KUMAR SINGH et al.: "Face Recognition with Liveness Detection using Eye and Mouth Movement", IEEE *

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108647600A (en) * 2018-04-27 2018-10-12 深圳爱酷智能科技有限公司 Face identification method, equipment and computer readable storage medium
CN108647600B (en) * 2018-04-27 2021-10-08 深圳爱酷智能科技有限公司 Face recognition method, face recognition device and computer-readable storage medium
CN111566693A (en) * 2018-07-16 2020-08-21 华为技术有限公司 Wrinkle detection method and electronic equipment
WO2020015149A1 (en) * 2018-07-16 2020-01-23 华为技术有限公司 Wrinkle detection method and electronic device
CN111566693B (en) * 2018-07-16 2022-05-20 荣耀终端有限公司 Wrinkle detection method and electronic equipment
US11941804B2 (en) 2018-07-16 2024-03-26 Honor Device Co., Ltd. Wrinkle detection method and electronic device
CN109034138B (en) * 2018-09-11 2021-09-03 湖南拓视觉信息技术有限公司 Image processing method and device
CN109034138A (en) * 2018-09-11 2018-12-18 湖南拓视觉信息技术有限公司 Image processing method and device
CN111199171A (en) * 2018-11-19 2020-05-26 华为技术有限公司 Wrinkle detection method and terminal equipment
CN111199171B (en) * 2018-11-19 2022-09-23 荣耀终端有限公司 Wrinkle detection method and terminal equipment
US11978231B2 (en) 2018-11-19 2024-05-07 Honor Device Co., Ltd. Wrinkle detection method and terminal device
CN109745014A (en) * 2018-12-29 2019-05-14 江苏云天励飞技术有限公司 Thermometry and Related product
CN109829434A (en) * 2019-01-31 2019-05-31 杭州创匠信息科技有限公司 Method for anti-counterfeit and device based on living body texture
CN112200120A (en) * 2020-10-23 2021-01-08 支付宝(杭州)信息技术有限公司 Identity recognition method, living body recognition device and electronic equipment

Also Published As

Publication number Publication date
CN107330370B (en) 2020-06-19

Similar Documents

Publication Publication Date Title
CN107330370A (en) Forehead wrinkle action detection method and device and living body identification method and system
CN108537743B (en) Face image enhancement method based on generation countermeasure network
CN104504394B (en) A kind of intensive Population size estimation method and system based on multi-feature fusion
CN106295522B (en) A kind of two-stage anti-fraud detection method based on multi-orientation Face and environmental information
CN105518709B (en) The method, system and computer program product of face for identification
CN104166861B (en) A kind of pedestrian detection method
CN103824070B (en) A kind of rapid pedestrian detection method based on computer vision
CN104008370B (en) A kind of video face identification method
CN108108684A (en) A kind of attention detection method for merging line-of-sight detection
CN109670430A (en) A kind of face vivo identification method of the multiple Classifiers Combination based on deep learning
CN107330914A (en) Human face part motion detection method and device and living body identification method and system
CN103605971B (en) Method and device for capturing face images
WO2021139171A1 (en) Facial enhancement based recognition method, apparatus and device, and storage medium
CN106886216A (en) Robot automatic tracking method and system based on RGBD Face datections
CN107358152B (en) Living body identification method and system
CN107392089A (en) Eyebrow movement detection method and device and living body identification method and system
Xu et al. Real-time pedestrian detection based on edge factor and Histogram of Oriented Gradient
CN104951773A (en) Real-time face recognizing and monitoring system
CN103473564B (en) A kind of obverse face detection method based on sensitizing range
CN106778645A (en) A kind of image processing method and device
CN107358155A (en) Method and device for detecting ghost face action and method and system for recognizing living body
CN101477626A (en) Method for detecting human head and shoulder in video of complicated scene
CN107665361A (en) A kind of passenger flow counting method based on recognition of face
CN106709438A (en) Method for collecting statistics of number of people based on video conference
CN105138967B (en) Biopsy method and device based on human eye area active state

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant