CN107330370B - Forehead wrinkle action detection method and device and living body identification method and system - Google Patents
- Publication number
- CN107330370B CN107330370B CN201710406498.5A CN201710406498A CN107330370B CN 107330370 B CN107330370 B CN 107330370B CN 201710406498 A CN201710406498 A CN 201710406498A CN 107330370 B CN107330370 B CN 107330370B
- Authority
- CN
- China
- Prior art keywords
- face
- forehead
- detected
- living body
- video
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/40—Spoof detection, e.g. liveness detection
- G06V40/45—Detection of the body part being alive
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
- G06V40/171—Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
Abstract
The invention discloses a forehead wrinkle action detection method, which comprises the following steps: extracting a plurality of video frames from a face video to be detected; acquiring the forehead area of each video frame extracted from the face video to be detected; calculating the gradient value of each pixel point of the forehead area of each extracted video frame through an edge detection operator; calculating the variance of the gradient values of the pixel points of the forehead area of each extracted video frame to obtain the forehead wrinkle value of the corresponding video frame; and judging the forehead wrinkle action condition of the face video to be detected based on the forehead wrinkle value of each extracted video frame. Correspondingly, the invention also discloses a forehead wrinkle action detection device. The method is computationally simple and efficient.
Description
Technical Field
The invention relates to the field of face recognition, in particular to a forehead wrinkle action detection method and device and a living body recognition method and system.
Background
With the development of face recognition technology, more and more scenarios use face detection to rapidly identify a person's identity. However, a malicious actor can present a photo or video in place of a real person during face recognition, so the security of the whole face recognition system cannot be guaranteed. Face living body recognition detects whether the current face to be detected is a living face rather than a face in a photo or video, thereby ensuring the security of the face recognition system. During face recognition, detecting the forehead wrinkle action of the face to be detected helps identify whether the face is a living body. To recognize efficiently and simply whether a face is a living body, an efficient and simple forehead wrinkle action detection scheme is needed.
Disclosure of Invention
The embodiment of the invention aims to provide a forehead wrinkle action detection method and device, which are simple in calculation and high in efficiency.
In order to achieve the above object, the present invention provides a forehead wrinkle movement detection method, including:
extracting a plurality of video frames from a face video to be detected;
acquiring a forehead area of each video frame extracted from the face video to be detected;
calculating a gradient value of each pixel point of the forehead area of each extracted video frame;
calculating variance of gradient values of pixel points of the forehead area of each extracted video frame to obtain forehead wrinkle values of the corresponding video frames;
and judging the forehead wrinkle action condition of the face video to be detected based on the forehead wrinkle value of each extracted video frame.
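As an illustration only (not part of the patent text), the gradient-and-variance computation described in the steps above can be sketched in pure Python. The 3x3 vertical-direction Sobel kernel and the helper names are assumptions made for this sketch; `region` is a forehead area given as a list of pixel rows.

```python
# Vertical-direction Sobel kernel: responds to horizontal edges,
# the direction in which forehead wrinkles mostly run.
SOBEL_VERTICAL = [[-1, -2, -1],
                  [ 0,  0,  0],
                  [ 1,  2,  1]]

def sobel_values(region):
    """Convolve each interior pixel's 3x3 neighbourhood with the kernel."""
    h, w = len(region), len(region[0])
    out = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out.append(sum(SOBEL_VERTICAL[dy][dx] * region[y - 1 + dy][x - 1 + dx]
                           for dy in range(3) for dx in range(3)))
    return out

def forehead_wrinkle_value(region):
    """Variance of the sobel values: the frame's forehead wrinkle value."""
    vals = sobel_values(region)
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)
```

A smooth (constant) forehead patch yields a wrinkle value of zero, while a patch with a sharp horizontal intensity step yields a large value, matching the intuition behind the method.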
Compared with the prior art, the forehead wrinkle action detection method disclosed by the embodiment of the invention first obtains a plurality of video frames, then determines the forehead area of the face to be detected in each extracted video frame, obtains the gradient value of each pixel point, and calculates the variance of the gradient values of each extracted video frame as its forehead wrinkle value; finally, it judges from the forehead wrinkle values of the video frames whether a forehead wrinkle action occurs in the face to be detected. The calculation process is simple and efficient, and any ordinary camera, or the camera of a mobile phone, can serve as the input hardware for the face video to be detected, so the hardware requirements are modest.
Further, the judging of the forehead wrinkle action condition of the face video to be detected based on the forehead wrinkle value of each extracted video frame includes:
judging that no wrinkle exists in the forehead area of the face to be detected of the video frame with the forehead wrinkle value smaller than a first preset threshold value, and judging that wrinkles exist in the forehead area of the face to be detected of the video frame with the forehead wrinkle value larger than a second preset threshold value;
and if the extracted video frames simultaneously include a video frame in which the forehead area of the face to be detected has no wrinkles and a video frame in which the forehead area of the face to be detected has wrinkles, judging that a forehead wrinkle action exists in the face to be detected of the face video to be detected.
Further, the calculating the extracted gradient value of each pixel point of the forehead region of each video frame includes:
calculating a sobel value of each pixel point of the forehead area of each extracted video frame through a sobel operator, wherein the sobel values represent the gradient values.
As a further scheme, the sobel operator is adopted to calculate the gradient values of the pixel points; the sobel operator is computationally efficient, so the gradient values can be obtained efficiently.
Further, the acquiring the forehead area of each video frame extracted from the face video to be detected includes:
performing face detection and face key point position detection on each video frame extracted from the face video to be detected by using a dlib library, and acquiring the position of the face area and a plurality of key point positions of the face to be detected;
and acquiring a plurality of key point positions of eyebrows from a plurality of face key points of each extracted video frame, and acquiring the forehead area based on the key point positions of the eyebrows and the face area position.
Correspondingly, the invention also provides a forehead wrinkle action detection device, which comprises:
the video frame extraction unit is used for extracting a plurality of video frames from the face video to be detected;
the forehead area acquisition unit is used for acquiring the forehead area of each video frame extracted from the face video to be detected;
a gradient value obtaining unit, configured to calculate a gradient value of each pixel point in the forehead region of each extracted video frame;
the forehead wrinkle value acquisition unit is used for calculating variance of gradient values of all pixel points of the forehead area of each extracted video frame to acquire forehead wrinkle values of the corresponding video frames;
and the forehead wrinkle action judging unit is used for judging the forehead wrinkle action condition of the face video to be detected based on the forehead wrinkle value of each extracted video frame.
Compared with the prior art, the forehead wrinkle action detection device disclosed by the embodiment of the invention first acquires a plurality of video frames through the video frame extraction unit, determines the forehead area of the face to be detected in each extracted video frame through the forehead area acquisition unit, obtains the gradient value of each pixel point through the gradient value acquisition unit, and calculates the variance of the gradient values of each extracted video frame as its forehead wrinkle value through the forehead wrinkle value acquisition unit; finally, the forehead wrinkle action judging unit judges from the forehead wrinkle values of the video frames whether a forehead wrinkle action exists in the face to be detected. The amount of calculation is small, so the device obtains the detection result efficiently, and any ordinary camera, or the camera of a mobile phone, can serve as the input hardware for the face video to be detected, so the hardware requirements are modest.
Further, the forehead wrinkle action determination unit specifically includes:
the wrinkle state judging module is used for judging that no wrinkle exists in the forehead area of the face to be detected of the video frame with the forehead wrinkle value smaller than a first preset threshold value and judging that wrinkles exist in the forehead area of the face to be detected of the video frame with the forehead wrinkle value larger than a second preset threshold value;
and the wrinkle action judging module is used for judging that the forehead wrinkles of the face to be detected act if the video frames which simultaneously comprise the video frames without wrinkles in the forehead area of the face to be detected and the video frames with wrinkles in the forehead area of the face to be detected are extracted from the video frames.
Further, the gradient value obtaining unit is specifically configured to calculate, by using a sobel operator, a sobel value of each pixel point of the forehead region of each of the extracted video frames; wherein the sobel values represent the gradient values.
Further, the forehead area acquisition unit includes:
the face key point detection module is used for performing face detection and face key point position detection on each video frame extracted from the face video to be detected by using a dlib library, and acquiring the position of the face area and a plurality of key point positions of the face to be detected;
and the forehead area acquisition module is used for acquiring a plurality of key point positions of the eyebrows from a plurality of face key points of each extracted video frame and acquiring the forehead area based on the plurality of key point positions of the eyebrows and the face area position.
Correspondingly, the embodiment of the invention also provides a living body identification method, which comprises the following steps:
detecting the forehead wrinkle action condition of the face to be detected in the face video to be detected and the movement condition of at least one other part, wherein the forehead wrinkle action condition of the face to be detected in the face video to be detected is detected by adopting the forehead wrinkle action detection method disclosed by the invention;
acquiring a motion score corresponding to the motion of each part of the face to be detected based on the condition of the part motion;
calculating the weighted sum of the motion scores corresponding to the motion of each part, and taking the calculated sum as a living body identification score; wherein, the movement of each part has preset corresponding weight;
and judging the face to be detected with the living body identification score not less than a preset threshold value as a living body.
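As an illustrative sketch (the part names, weights, and threshold below are hypothetical, not taken from the patent), the weighted score fusion and threshold decision described in the steps above can be expressed as:

```python
def liveness_score(part_scores, weights):
    """Weighted sum of per-part motion scores; each part has a preset weight."""
    return sum(weights[part] * score for part, score in part_scores.items())

def is_live(part_scores, weights, threshold):
    """A face is judged a living body when its score is not less than the threshold."""
    return liveness_score(part_scores, weights) >= threshold
```

For example, if forehead wrinkle action and eye movement were each detected but head movement was not, the score is the sum of the weights of the detected motions, and the face passes only if that sum reaches the preset threshold.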
Compared with the prior art, the living body identification method disclosed by the embodiment of the invention detects the forehead wrinkle action condition of the face to be detected in the face video to be detected by adopting the forehead wrinkle action detection method disclosed by the invention, obtains a motion score for each part motion by also detecting the motion of at least one other part of the face to be detected, weights and sums the part motion scores to form the living body identification score, and uses the living body identification score as the criterion for whether the face to be detected is a living body. The forehead wrinkle action detection method is simple and efficient in calculation, with modest hardware requirements; detecting the forehead wrinkle action together with the motion of at least one other part solves the problems of a single algorithm and low security in the prior art, is highly extensible, and can detect face-part motion from two-dimensional images, so the hardware requirements are low; in addition, the weighted scores of the motions of different parts are fused, so the living body identification accuracy is high. The living body identification method therefore achieves high accuracy, low hardware requirements and high security.
Correspondingly, an embodiment of the present invention further provides a living body identification system, including:
at least two face-part motion detection devices, each of which is used for detecting the condition of the corresponding part motion of the face to be detected, wherein one of the face-part motion detection devices is the forehead wrinkle action detection device disclosed by the invention;
the part movement score acquisition device is used for acquiring a movement score corresponding to the movement of each part of the face to be detected based on the movement condition of each part;
living body identification score calculation means for calculating a sum of weighted motion scores corresponding to the motions of each of the parts, and taking the sum obtained by the calculation as a living body identification score; wherein the living body identification score calculating means has preset a weight corresponding to each of the part movements;
and the living body judgment device is used for judging the face to be detected with the living body identification score not less than a preset threshold value as a living body.
Compared with the prior art, the living body recognition system disclosed by the embodiment of the invention obtains the motion scores of at least two parts of the face to be detected through at least two face-part motion detection devices, one of which is the forehead wrinkle action detection device; the living body identification score calculating means weights and sums the part motion scores to obtain the living body identification score, which the living body judging means uses as the criterion for whether the face to be detected is a living body. The forehead wrinkle action detection device is simple and efficient in calculation, with modest hardware requirements; using at least two face-part motion detection devices solves the problems of a single algorithm and low security in the prior art, is highly extensible, and can detect face-part motion from two-dimensional images, so the hardware requirements are low; the living body identification score calculating means weights the motions of different parts and then fuses the scores, so the living body identification accuracy is high. The system therefore achieves high accuracy, low hardware requirements and high security.
Drawings
Fig. 1 is a schematic flowchart of a forehead wrinkle action detection method according to embodiment 1 of the present invention;
fig. 2 is a flowchart illustrating step S15 of a forehead wrinkle movement detection method according to embodiment 1 of the present invention;
fig. 3 is a flowchart illustrating step S12 of a forehead wrinkle movement detection method according to embodiment 1 of the present invention;
FIG. 4 is a schematic diagram of a model of 68 key points of a face to be measured;
fig. 5 is a schematic structural diagram of an embodiment of a forehead wrinkle movement detection apparatus according to embodiment 2 of the present invention;
fig. 6 is a schematic flowchart of a living body identification method according to embodiment 3 of the present invention;
fig. 7 is a schematic flow chart of step S24 of a living body identification method according to embodiment 3 of the present invention;
fig. 8 is a schematic structural diagram of a living body identification system according to embodiment 4 of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, fig. 1 is a schematic flow chart of the embodiment, and includes the following steps:
s11, extracting a plurality of video frames from the face video to be detected;
s12, acquiring the forehead area of each video frame extracted from the face video to be detected;
s13, calculating the gradient value of each pixel point in the forehead area of each extracted video frame;
s14, calculating the variance of the gradient values of the pixel points in the forehead area of each extracted video frame to obtain the forehead wrinkle value of the corresponding video frame;
and S15, judging the forehead wrinkle action condition of the face video to be detected based on the forehead wrinkle value of each extracted video frame.
In a face picture, the image of a forehead without wrinkles is generally smooth, with little color variation; when wrinkles appear on the forehead, the color changes, the pixel values of the corresponding image vary strongly, and the fluctuation is large. Based on this phenomenon, it is possible to distinguish whether the forehead has wrinkles. In the invention, a forehead is defined as having no wrinkles when its detected forehead wrinkle value is smaller than a preset first threshold, and as having wrinkles when its detected forehead wrinkle value is larger than a preset second threshold; a forehead wrinkle action is defined as the forehead of the face producing wrinkles. That is, a forehead wrinkle action of the face to be detected corresponds to detecting both the wrinkle-free state and the wrinkled state of the forehead in the video frames of the face video to be detected.
Therefore, referring to fig. 2, fig. 2 is a schematic flowchart of step S15, and step S15 specifically includes:
s151, judging that no wrinkle exists in the forehead area of the face to be detected of the video frame with the forehead wrinkle value smaller than the first preset threshold value, and judging that wrinkles exist in the forehead area of the face to be detected of the video frame with the forehead wrinkle value larger than the second preset threshold value;
s152, if the video frames which simultaneously comprise the video frame without wrinkles in the forehead area of the face to be detected and the video frame with wrinkles in the forehead area of the face to be detected are extracted from the plurality of video frames, the action that the forehead wrinkles exist in the face to be detected of the face to be detected is judged.
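The two-threshold decision of steps S151 and S152 can be sketched as follows (illustrative only; the threshold values used in the test are hypothetical, since the patent leaves them as preset parameters):

```python
def wrinkle_action_detected(wrinkle_values, first_threshold, second_threshold):
    """S151: a frame below the first threshold has no wrinkles; a frame above
    the second threshold has wrinkles. S152: a forehead wrinkle action exists
    only when both states occur among the extracted frames."""
    has_no_wrinkle = any(v < first_threshold for v in wrinkle_values)
    has_wrinkle = any(v > second_threshold for v in wrinkle_values)
    return has_no_wrinkle and has_wrinkle
```

A sequence containing only smooth frames, or only wrinkled frames, is rejected; only a transition between the two states counts as an action, which is what distinguishes a live expression from a static photo.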
In step S13, the gradient value of each pixel point of the forehead region of each extracted video frame is calculated by an edge detection operator; the adopted edge detection operator is preferably the sobel (Sobel) operator. Step S13 specifically includes: calculating the sobel value of each pixel point of the forehead area of each extracted video frame by using the sobel operator; the sobel value is a gradient value used for representing the degree of variation of the pixel values at each pixel point.
The sobel operator generally detects both horizontal and vertical edges; since most wrinkles produced by the forehead run in the horizontal direction, when the sobel operator is applied to the present embodiment it is further preferably simplified to detect only horizontal edges. In this case, the sobel value obtained in step S13 is the result of convolving the region of pixels centered on the current pixel, of the same size as the convolution kernel, with the vertical-direction convolution kernel. The operation of calculating the sobel value of each pixel point of the forehead area of each extracted video frame in step S13 is specifically: performing a convolution operation on the forehead area with the vertical-direction convolution kernel, where the convolution operation multiplies an M x M pixel matrix element-wise with an M x M convolution kernel and sums the M x M products to obtain the result value.
Correspondingly, in step S14, the variance of the sobel values of the pixel points in the forehead area of each video frame is calculated to obtain the forehead wrinkle value of the corresponding video frame.
In addition, the embodiment that the sobel operator performs convolution operation on the forehead area from the convolution kernel in the vertical direction and the convolution kernel in the horizontal direction respectively to obtain the sobel value, and the calculated variance of the sobel value obtains the forehead wrinkle value of the corresponding video frame is also within the protection scope of the embodiment.
Based on the principle of the invention, the sobel operator can be replaced by other edge detection operators, for example the canny operator, the Prewitt operator or the Roberts operator, to obtain the forehead wrinkle value of each extracted video frame and thereby judge the forehead wrinkle action; such embodiments are also within the protection scope of the invention. Compared with other edge detection operators, the sobel operator is preferred in this embodiment because its calculation amount is small and its efficiency is high; when the forehead wrinkle action detection method is applied to living body identification, this allows efficient judgment of whether a wrinkle action occurs on the face.
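As the paragraph above notes, other edge detection operators can stand in for the sobel operator. As an illustration (not from the patent), the horizontal-edge Prewitt kernel is a drop-in replacement for the Sobel kernel in the same element-wise multiply-and-sum convolution:

```python
# Vertical-direction 3x3 kernels that respond to horizontal edges.
SOBEL_VERTICAL   = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]
PREWITT_VERTICAL = [[-1, -1, -1], [0, 0, 0], [1, 1, 1]]

def edge_value(patch3x3, kernel):
    """Convolve a 3x3 pixel patch with a 3x3 kernel: multiply element-wise, then sum."""
    return sum(kernel[dy][dx] * patch3x3[dy][dx]
               for dy in range(3) for dx in range(3))
```

Only the kernel constants change between operators; the variance step that turns per-pixel edge values into a forehead wrinkle value is unaffected, which is why the substitution is straightforward.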
Referring to fig. 3, step S12 specifically includes:
s121, performing face detection and face key point position detection on each video frame extracted from the face video to be detected by using a dlib library, and acquiring the position of a face area and a plurality of key point positions of the face to be detected;
the dlib library refers to a cross-platform general-purpose library written in C++;
referring to fig. 4, fig. 4 is a schematic diagram of a model of 68 key points of a face to be measured; the positions of the key points of the faces acquired in the step S121 are the key point positions shown by key points 1 to 68 in fig. 4; in addition, the position of a human face area can be obtained by performing human face detection on each extracted video frame; in this embodiment, the face area is preferably a rectangular frame area representing a face, and accordingly, when the positions of H, I, J and K four points illustrated in fig. 4 are obtained, the rectangular frame area of the face, that is, the position of the face area, can be determined.
And S122, acquiring a plurality of key point positions of eyebrows from a plurality of face key points of each extracted video frame, and acquiring a forehead area based on the plurality of key point positions of the eyebrows and the face area position.
In fig. 4, the key points of the eyebrows obtained in step S122 are the positions shown by 10 key points, i.e., key points 18 to 27, specifically, the key points of the left eyebrow are the positions shown by 5 key points, i.e., key points 18 to 22, and the key points of the right eyebrow are the positions shown by 5 key points, i.e., key points 23 to 27. The lower boundary of the forehead region is determined based on the positions of a plurality of key points of the eyebrows, the upper border of a rectangular frame representing the face region is the upper boundary of the forehead region, the forehead region is determined in the face region based on the upper boundary and the lower boundary of the forehead region, and the rectangular region HOPI shown in the example of FIG. 4 is the forehead region.
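The forehead-region construction described above can be sketched in pure Python (illustrative only; the tuple layout, image coordinate convention with y growing downward, and example coordinates are assumptions for this sketch):

```python
def forehead_region(face_box, eyebrow_points):
    """Derive the forehead rectangle from the face rectangle and eyebrow keypoints.

    face_box: (left, top, right, bottom) of the detected face rectangle.
    eyebrow_points: (x, y) positions of the eyebrow keypoints (e.g. points 18-27).
    The upper boundary of the forehead is the top edge of the face rectangle;
    the lower boundary is the topmost (smallest-y) eyebrow keypoint.
    """
    left, top, right, _bottom = face_box
    lower = min(y for _x, y in eyebrow_points)
    return (left, top, right, lower)
```

With a face rectangle at (100, 50, 300, 400) and eyebrow keypoints whose highest point sits at y = 115, the forehead region would be the rectangle (100, 50, 300, 115), analogous to the rectangle HOPI in fig. 4.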
The step S11 of extracting a plurality of video frames from the face video to be detected includes: extracting continuous frame video frames from a face video to be detected; or, sequentially extracting video frames from the face video to be detected according to a preset frequency.
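The two frame-sampling strategies of step S11 can be sketched as follows (illustrative helper names; `frames` stands for the decoded frames of the face video to be detected):

```python
def consecutive_frames(frames, start, count):
    """Extract `count` consecutive frames beginning at index `start`."""
    return frames[start:start + count]

def sampled_frames(frames, step):
    """Extract every `step`-th frame, i.e. sample at a preset frequency."""
    return frames[::step]
```

Either strategy yields the plurality of video frames that the later steps score for forehead wrinkles; sampling at a preset frequency trades temporal resolution for less computation.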
In a specific implementation, this embodiment acquires a plurality of video frames from the face video to be detected, determines the forehead area of the face to be detected in each extracted video frame, performs a convolution operation on the forehead area with the vertical-direction convolution kernel of the sobel operator to acquire the sobel value of each pixel point, and calculates the variance of the sobel values of each extracted video frame as the forehead wrinkle value of that video frame; finally, whether the forehead of the face to be detected in each video frame has wrinkles is judged from its forehead wrinkle value, and a forehead wrinkle action is judged to exist in the face video to be detected when the extracted video frames simultaneously include a frame with forehead wrinkles and a frame without forehead wrinkles.
Compared with the prior art, the method is simple and efficient in calculation, and any ordinary camera, or the camera of a mobile phone, can be used as the input hardware for the face video to be detected, so the hardware requirements are modest.
Referring to fig. 5, a forehead wrinkle detection device provided in embodiment 2 of the present invention is shown in fig. 5, where fig. 5 is a schematic structural diagram of this embodiment, and includes:
the video frame extraction unit 11 is used for extracting a plurality of video frames from the face video to be detected;
a forehead region acquiring unit 12, configured to acquire a forehead region of each video frame extracted from the face video to be detected;
a gradient value obtaining unit 13, configured to calculate a gradient value of each pixel point in the forehead region of each extracted video frame;
a forehead wrinkle value obtaining unit 14, configured to calculate a variance of the gradient value of each pixel in the forehead region of each extracted video frame to obtain a forehead wrinkle value of the corresponding video frame;
and the forehead wrinkle action judging unit 15 is used for judging the forehead wrinkle action condition of the face video to be detected based on the forehead wrinkle value of each extracted video frame.
The forehead wrinkle action determination unit 15 specifically includes:
the wrinkle state determination module 151 is configured to determine that the forehead region of the face to be detected has no wrinkles in a video frame whose forehead wrinkle value is smaller than a first preset threshold, and to determine that the forehead region of the face to be detected has wrinkles in a video frame whose forehead wrinkle value is larger than a second preset threshold;
the wrinkle action determining module 152 is configured to determine that the forehead wrinkle action of the face to be detected exists if the video frames including the video frame without wrinkles in the forehead region of the face to be detected and the video frame with wrinkles in the forehead region of the face to be detected are extracted from the plurality of video frames.
The gradient value obtaining unit 13 calculates the gradient value of each pixel point in the forehead region of each extracted video frame by using an edge detection operator, preferably the sobel operator. The gradient value obtaining unit 13 is specifically configured to: calculate the sobel value of each pixel point of the forehead region of each extracted video frame through the sobel operator; the sobel value is a gradient value representing the degree of variation of the pixel value at each pixel point.
The sobel operator generally detects both horizontal and vertical edges; since most wrinkles produced by the forehead run in the horizontal direction, it is further preferable in this embodiment to simplify the operator to detect horizontal edges only. In this case, the sobel value used by the gradient value obtaining unit 13 to represent the gradient is defined as the result of convolving the vertical-direction convolution kernel with the block of region pixels, of the same size as the kernel, centered on the current pixel. The gradient value obtaining unit 13 calculates the sobel value of each pixel point of the forehead region of each extracted video frame by convolving the forehead region with the vertical-direction convolution kernel; in this convolution operation, an M x M pixel matrix and an M x M convolution kernel are multiplied position by position, and the M x M products are summed to give the result value of the convolution operation.
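The per-pixel operation above can be sketched in plain NumPy (a minimal illustration, not the patent's implementation: the 3x3 kernel values, the edge padding, and the correlation form are assumptions of this sketch; flipping this symmetric kernel only negates the result, which the variance step ignores). The variance of the resulting sobel map, used by unit 14 as the forehead wrinkle value, is included at the end:

```python
import numpy as np

# Vertical-direction 3x3 sobel kernel: responds to horizontal edges,
# which is the dominant orientation of forehead wrinkles.
SOBEL_Y = np.array([[-1, -2, -1],
                    [ 0,  0,  0],
                    [ 1,  2,  1]], dtype=np.float64)

def sobel_values(forehead, kernel=SOBEL_Y):
    """For each pixel, multiply the M x M neighborhood centered on it by the
    M x M kernel element-wise and sum the products (edge-padded borders)."""
    h, w = forehead.shape
    m = kernel.shape[0]
    pad = m // 2
    padded = np.pad(forehead.astype(np.float64), pad, mode="edge")
    out = np.zeros((h, w), dtype=np.float64)
    for y in range(h):
        for x in range(w):
            block = padded[y:y + m, x:x + m]
            out[y, x] = np.sum(block * kernel)
    return out

def forehead_wrinkle_value(forehead):
    """Variance of the sobel values over the forehead region (unit 14)."""
    return float(np.var(sobel_values(forehead)))
```

A flat (wrinkle-free) forehead yields sobel values of zero everywhere and hence a wrinkle value of zero, while horizontal intensity transitions raise the variance.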
Correspondingly, the forehead wrinkle value obtaining unit 14 is configured to calculate the variance of the sobel values of the pixel points of the forehead region of each extracted video frame to obtain the forehead wrinkle value of the corresponding video frame.
Besides, the gradient value obtaining unit 13 may also obtain the sobel values by convolving the forehead region with both the vertical-direction and the horizontal-direction convolution kernels of the sobel operator; the corresponding embodiment, in which the forehead wrinkle value obtaining unit 14 calculates the variance of these sobel values as the forehead wrinkle value of the video frame, is also within the protection scope of this embodiment.
Based on the principle of the present invention, the gradient value obtaining unit 13 may also use another edge detection operator, such as the canny operator, the Prewitt operator, or the Roberts operator, to obtain the forehead wrinkle value of each extracted video frame and thereby determine the forehead wrinkle action; this is also within the protection scope of the present invention. The sobel operator is preferred over other edge detection operators in this embodiment because of its small computation load and high efficiency; when the forehead wrinkle action detection method is applied to living body recognition, this enables an efficient judgment of whether the face shows wrinkle action.
The forehead area obtaining unit 12 specifically includes:
the face key point detection module 121 is configured to perform face detection and face key point position detection on each video frame extracted from the face video to be detected by using a dlib library, and acquire a face region position and a plurality of key point positions of the face to be detected;
the dlib library is a cross-platform, general-purpose library written in C++;
referring to fig. 4, fig. 4 is a schematic diagram of a model of 68 key points of a face to be measured; the positions of the key points of the faces acquired in the step S121 are the key point positions shown by key points 1 to 68 in fig. 4; in addition, the position of a human face area can be obtained by performing human face detection on each extracted video frame; in this embodiment, the face area is preferably a rectangular frame area representing a face, and accordingly, when the positions of H, I, J and K four points illustrated in fig. 4 are obtained, the rectangular frame area of the face, that is, the position of the face area, can be determined.
The forehead region obtaining module 122 is configured to obtain a plurality of key point positions of the eyebrows from a plurality of face key points of each extracted video frame, and obtain a forehead region based on the plurality of key point positions of the eyebrows and the face region position.
In fig. 4, the eyebrow key points acquired by the forehead region acquiring module 122 are the positions shown by the 10 key points 18 to 27: the left eyebrow corresponds to the 5 key points 18 to 22 and the right eyebrow to the 5 key points 23 to 27. The lower boundary of the forehead region is determined from these eyebrow key point positions, the upper edge of the rectangular frame representing the face region serves as the upper boundary of the forehead region, and the forehead region is then determined within the face region from these upper and lower boundaries; the rectangular region HOPI illustrated in fig. 4 is the forehead region.
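A rough sketch of how module 122 might derive the forehead rectangle from the face box and eyebrow keypoints (the zero-based indexing of the 10 eyebrow points follows the common dlib 68-point convention; using the topmost eyebrow keypoint as the lower boundary is an assumption of this sketch):

```python
def forehead_region(face_box, landmarks):
    """face_box: (left, top, right, bottom) rectangle from face detection;
    landmarks: list of 68 (x, y) keypoints in dlib order.

    The face-box top edge gives the forehead's upper boundary; the highest
    eyebrow keypoint (keypoints 18-27, zero-based indices 17-26) gives its
    lower boundary.  Returns the forehead rectangle (left, top, right, bottom).
    """
    left, top, right, bottom = face_box
    brow_points = landmarks[17:27]              # eyebrow keypoints 18..27
    brow_top = min(y for _, y in brow_points)   # smallest y = highest point
    return (left, top, right, brow_top)         # rectangle HOPI in fig. 4
```

The forehead pixels can then be cropped from the frame as `frame[top:brow_top, left:right]` before the gradient computation.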
The video frame extraction unit 11 is specifically configured to extract consecutive video frames from the face video to be detected; alternatively, the video frame extraction unit 11 is specifically configured to extract video frames from the face video to be detected in sequence at a preset frequency.
In a specific implementation of this embodiment, the video frame extraction unit 11 acquires a plurality of video frames from the face video to be detected; the forehead region acquisition unit 12 determines the forehead region of the face to be detected in each extracted video frame; the gradient value acquisition unit 13 then convolves the forehead region with the vertical-direction convolution kernel of the sobel operator to acquire the sobel value of each pixel point, and the forehead wrinkle value acquisition unit 14 calculates the variance of the sobel values of each extracted video frame as that frame's forehead wrinkle value; finally, the forehead wrinkle action judging unit 15 judges from the forehead wrinkle value whether the forehead of the face in each video frame has wrinkles and, when the extracted video frames include both frames with forehead wrinkles and frames without, judges that the face to be detected in the face video shows forehead wrinkle action.
Compared with the prior art, the device is simple and efficient in calculation; any ordinary camera, or the camera of a mobile phone, can serve as the input hardware for the face video to be detected, so the hardware requirements on the equipment are modest.
Referring to fig. 6, fig. 6 is a schematic flow chart of the present embodiment, where the present embodiment specifically includes the steps of:
s21, detecting the forehead wrinkle action condition of the face to be detected in the face video to be detected and the motion condition of at least one other part of the face to be detected, wherein the forehead wrinkle action condition is detected by the forehead wrinkle action detection method provided in embodiment 1 of the present invention; for the specific process of detecting the forehead wrinkle action, reference is made to the embodiments of the forehead wrinkle action detection method of the present invention, not repeated here;
s22, obtaining a movement score corresponding to the movement of each part of the face to be detected based on the situation of the part movement;
s23, calculating the weighted sum of the motion scores corresponding to the motions of each part, and taking the calculated sum as a living body identification score; wherein, the corresponding weight is preset for each part movement;
and S24, judging the face to be detected with the living body identification score not less than the preset threshold value as the living body.
For example, the motion of at least one other part of the face detected in step S21 is at least one of mouth motion, eye motion, head motion, face motion and eyebrow motion. Generally speaking, mouth motion includes whether the mouth opens and closes, or a smiling action, i.e. the degree of movement of the mouth corners exceeding a preset standard; eye motion includes whether the eyes open and close; head motion includes whether the head rotates; face motion covers an overall change of the facial parts, such as making a funny face, where the overall change of the eyes and mouth of the face exceeds a preset condition; eyebrow motion includes whether the eyebrows move. Generally, the mouth, eye and head motions of a human face are significant in degree and favorable for detection, so at least one of mouth motion, eye motion and head motion is preferably selected for detection.
For example, detecting the motion of at least one other part of the face to be detected in step S21 specifically includes: detecting, for each video frame extracted from the face video every preset number of frames, the positions of the part key points corresponding to the detected part motion, and determining the part motion condition from the degree of change of these key point positions across the extracted video frames; or detecting, for each such video frame, the gray-value features of the part corresponding to the detected part motion, and determining the part motion condition from the degree of change of these gray values across the extracted video frames. The above implementation is only an example of detecting the motion of at least one other part; based on the principle of the living body recognition method of this embodiment, realizing such motion detection through other specific implementations is also within the scope of this embodiment.
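The first strategy above (keypoint displacement across sampled frames) can be sketched as follows; the Euclidean-distance measure and the pixel threshold are illustrative assumptions, not values fixed by the text:

```python
import math

def part_motion_detected(keypoints_per_frame, threshold=5.0):
    """keypoints_per_frame: one list of (x, y) part keypoints per sampled frame.

    Motion is judged present if any keypoint of the part moves more than
    `threshold` pixels between two consecutive sampled frames.
    """
    for prev, cur in zip(keypoints_per_frame, keypoints_per_frame[1:]):
        for (x0, y0), (x1, y1) in zip(prev, cur):
            if math.hypot(x1 - x0, y1 - y0) > threshold:
                return True
    return False
```

For mouth motion, for instance, the mouth-contour keypoints from each sampled frame would be passed in; the gray-value variant would compare region intensity statistics instead of coordinates.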
In a preferred embodiment, the weight corresponding to each part motion in step S23 is set according to how significant each part motion is. For example, suppose step S21 detects forehead wrinkle motion, eye motion and mouth motion of the face to be detected in the face video; the mouth motion is the most obvious and therefore carries the largest weight, the eyes come second, and the forehead the smallest, so the corresponding weighting strategy for the part motions is: mouth motion > eye motion > forehead wrinkle motion.
Alternatively, another preferred embodiment of setting the weight corresponding to each part motion in step S23 automatically adjusts the weights according to the application scenario, specifically: in a given scenario, collect normal input videos of the various part motions of faces to be detected as positive samples and attack videos as negative samples, take (number of accepted positive samples + number of rejected negative samples) / (total positive samples + total negative samples) as the accuracy of each part motion, sort the part motions by accuracy in descending order, and reassign the weights of the part motions in the same descending order. The readjusted weights are used to calculate the living body recognition score, so that the recognition result adapts to the accuracy of part motion detection in different scenarios, improving the accuracy of the living body recognition result of this embodiment.
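The scene-adaptive reweighting above can be sketched as follows (the dictionary layout and the rule of redistributing the existing weight pool are assumptions of this sketch):

```python
def adjusted_weights(stats, weights):
    """stats: {part: (pos_pass, pos_total, neg_reject, neg_total)} collected
    in the current scenario; weights: the pool of weight values to reassign.

    Accuracy per part = (accepted positives + rejected negatives)
                        / (total positives + total negatives).
    The most accurate part motion receives the largest weight.
    """
    accuracy = {
        part: (pp + nr) / (pt + nt)
        for part, (pp, pt, nr, nt) in stats.items()
    }
    ranked = sorted(accuracy, key=accuracy.get, reverse=True)
    return dict(zip(ranked, sorted(weights, reverse=True)))
```

If, say, forehead wrinkle detection proves most reliable in a given lighting environment, it automatically inherits the largest weight in that scenario.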
Either of the above two preferred embodiments for setting the weight corresponding to each part motion is within the protection scope of this embodiment.
Specifically, referring to fig. 7, fig. 7 is a schematic flow chart of step S24, including the steps of:
s241, calculating the living body recognition confidence coefficient of the face to be detected according to the ratio of the living body recognition score to the total living body recognition score;
s242, when the living body recognition confidence coefficient is not smaller than a preset value, determining that the living body recognition score is not smaller than a preset threshold value;
and S243, judging the face to be detected with the living body identification score not less than the preset threshold value as the living body.
Specifically, in step S241, the living body identification total score is a maximum value that can be obtained after the face to be detected is identified in this embodiment, and the living body identification confidence of the face to be detected is calculated by the following formula:
f=(s/s_max)*100%
wherein s _ max represents a living body identification total score, f represents a living body identification confidence, and 0< f < 1;
e represents a preset value, when f is larger than or equal to e, namely the living body recognition confidence coefficient is not smaller than the preset value, the living body recognition score is determined to be not smaller than a preset threshold value, and the face to be detected with the living body recognition score not smaller than the preset threshold value is judged to be a living body; and when f < e, namely the living body recognition confidence coefficient is smaller than a preset value, determining that the living body recognition score is smaller than a preset threshold value, and judging that the face to be detected with the living body recognition score smaller than the preset threshold value is a non-living body.
The living body recognition confidence obtained from the living body recognition score can be further extended, in this embodiment, to build a grading system for living body judgment and living body classification, so as to obtain richer living body recognition results.
Step S22, acquiring a motion score corresponding to each part motion of the face to be detected based on the part motion condition includes:
obtaining a corresponding movement score based on the movement condition of forehead wrinkle movement: when the forehead wrinkle action of the face to be detected is detected in the step S21, the obtained movement score of the forehead wrinkle action is 1 score; otherwise, the obtained movement score of the forehead wrinkle action is 0.
Similarly, a corresponding motion score is obtained based on the motion condition of each of the at least one other part motions: when the motion condition detected in step S21 is that the corresponding part of the face to be detected moves, the obtained motion score of that part motion is 1 point; otherwise, the obtained motion score is 0.
In addition to obtaining the corresponding movement score through the judgment of whether there is movement, if the movement condition of the part movement obtained in step S21 is the movement degree of the part movement, the corresponding movement score may also be obtained in the score interval according to the movement degree, for example, the score is set to 10 grades, and the value is between 0 and 1.
In a specific implementation, the part motions in the video of the face to be detected are detected to obtain the corresponding part motion conditions, one of the part motions being detected by the forehead wrinkle action detection method provided by the present invention; a corresponding motion score is acquired from the motion condition of each part, the score being 1 point if the part moves and 0 otherwise; the weighted sum of the motion scores of all parts is then calculated as the living body recognition score; finally, the living body recognition confidence is calculated as the ratio of the living body recognition score to the living body recognition total score, and when this confidence is not less than the preset value, the living body recognition score is determined to be not less than the preset threshold, so the face to be detected is judged to be a living body; otherwise, the face to be detected is judged to be a non-living body.
This embodiment can be applied on various device sides; here an implementation scenario on a mobile phone is taken as an example: when living body recognition is performed on the mobile phone, a sequence of required living body actions appears at random, for example requiring the face to be detected to open the mouth, blink and wrinkle the forehead. Suppose the preset part motion weights are: weight w1 = 3 for the mouth motion corresponding to opening the mouth, weight w2 = 2 for the eye motion corresponding to blinking, and weight w3 = 1 for the forehead wrinkle motion; then the living body recognition total score, i.e. the highest attainable living body recognition score, is s_max = 3×1 + 2×1 + 1×1 = 6. Assuming the mouth-opening score is 1, the blink score is 1 and the forehead wrinkle action score is 0, the living body recognition score s is the weighted sum of the part motions; substituting the motion scores gives s = 3×1 + 2×1 + 1×0 = 5; finally, the living body recognition confidence is f = s/s_max = 5/6 ≈ 83.33%. If the preset value e is set to 80%, the face to be detected is judged to be a living body, with a living body confidence of 83.33%.
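The arithmetic of this mobile-phone example can be checked with a short sketch (the part names and dictionary layout are illustrative; the numbers are the ones used in the text):

```python
weights = {"mouth": 3, "eye": 2, "forehead": 1}   # preset weights w1, w2, w3
scores = {"mouth": 1, "eye": 1, "forehead": 0}    # per-part motion scores

s_max = sum(w * 1 for w in weights.values())      # total score: every part at 1
s = sum(weights[p] * scores[p] for p in weights)  # weighted living body score
f = s / s_max                                     # living body confidence

e = 0.80                                          # preset value
is_live = f >= e                                  # confidence >= e -> living body
```

Running this yields s_max = 6, s = 5 and f ≈ 0.8333, so with e = 80% the face is judged a living body, matching the worked example above.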
The method solves the problems of single algorithm and low safety in the prior art, and has strong expandability; the forehead wrinkle action detection method for the face to be detected is simple and efficient in calculation, and has low requirements on hardware of equipment; in addition, in the embodiment, the living body recognition is performed by detecting the motion of a plurality of parts, and score fusion is performed after weighting the motion of different parts, so that the living body recognition accuracy is high, and the safety is favorably improved.
Referring to fig. 8, fig. 8 is a schematic structural diagram of the present embodiment, where the living body identification system provided in embodiment 4 of the present invention includes:
at least 2 face part motion detection devices 1, each face part motion detection device 1 being configured to detect a corresponding part motion of the face to be detected; the face part motion detection devices 1a and 1b in fig. 8 represent 2 face part motion detection devices 1 detecting two different part motions; one of the face part motion detection devices 1 is the forehead wrinkle action detection device provided in embodiment 2 of the present invention, for which reference is made to the description of embodiment 2 of the present invention, not repeated here.
It should be noted that fig. 8 only illustrates the case of 2 face part motion detection devices 1; this embodiment may also include more than 2 face part motion detection devices 1.
The part movement score acquisition device 2 is used for acquiring a movement score corresponding to the movement of each part of the face to be detected based on the movement condition of each part;
a living body recognition score calculation means 3 for calculating a weighted sum of the movement scores corresponding to the movements of each face part, and taking the calculated sum as a living body recognition score; wherein, the living body recognition score calculating device 3 has preset a weight corresponding to the movement of each part;
and the living body judgment device 4 is used for judging the human face to be detected, the living body identification score of which is not less than the preset threshold value, as the living body.
Illustratively, the at least one part motion detection device 1 other than the forehead wrinkle action detection device detects at least one of mouth motion, eye motion, head motion, eyebrow motion and face motion. Mouth motion includes whether the mouth opens and closes, or whether the face smiles, i.e. the degree of movement of the mouth corners exceeds a preset standard; eye motion includes whether the eyes open and close; head motion includes whether the head rotates; eyebrow motion includes whether the eyebrows move; face motion covers an overall change of the facial parts, such as making a funny face, where the overall change of the eyes and mouth of the face exceeds a preset condition. Generally, the mouth, eye and head motions of a human face are significant in degree and favorable for detection, so at least one of mouth motion, eye motion and head motion is preferably selected for detection.
In an example, the at least one face part motion detection device 1 is specifically configured to detect, for each video frame extracted from the face video every preset number of frames, the positions of the part key points corresponding to the detected part motion, and to determine the part motion condition from the degree of change of these key point positions across the extracted video frames; alternatively, the face part motion detection device 1 may be specifically configured to detect, for each such video frame, the gray-value features of the part corresponding to the detected part motion, and to determine the part motion condition from the degree of change of these gray values across the extracted video frames. The above implementation is only an example of how at least one other face part motion detection device 1 detects part motion; realizing such motion detection through other implementations is also within the protection scope of this embodiment.
The part motion score acquisition device 2 is specifically configured to acquire a corresponding motion score based on the motion condition of the forehead wrinkle action: if the forehead wrinkle action of the face to be detected is detected, the acquired motion score of the forehead wrinkle action is 1 point; otherwise, it is 0. The part motion score acquisition device 2 is further specifically configured to acquire a corresponding motion score based on the motion condition of at least one other part motion: when the corresponding part of the face to be detected moves, the acquired motion score of that part motion is 1 point; otherwise, it is 0.
Besides the above embodiment in which the part motion score acquisition device 2 directly assigns a score according to whether each part moves, when the part motion condition acquired by the face part motion detection device 1 includes the degree of the part motion, the part motion score acquisition device 2 may also assign a motion score between 0 and 1 according to that degree, for example in 10 grades valued between 0 and 1, which not only indicates whether there is motion but also reflects its degree.
The weight corresponding to each part motion in the living body recognition score calculation device 3 is set according to how significant each part motion is; if the detected part motions are forehead wrinkle motion, eye motion and mouth motion, the mouth motion is the most obvious and therefore carries the largest weight, the eye motion comes second, and the forehead wrinkle motion weight is smallest, so the corresponding weighting strategy for the part motions is: mouth motion > eye motion > forehead wrinkle motion.
Alternatively, the weight corresponding to each part motion in the living body recognition score calculation device 3 is set by automatically adjusting the weights according to the application scenario, specifically: in a given scenario, collect normal input videos of the various part motions of faces to be detected as positive samples and attack videos as negative samples, take (number of accepted positive samples + number of rejected negative samples) / (total positive samples + total negative samples) as the accuracy of each part motion, sort the part motions by accuracy in descending order, and reassign the weights of the part motions in the same descending order.
Either of the above two preferred embodiments for setting the weight corresponding to each part motion is within the protection scope of this embodiment.
The living body judgment device 4 includes:
a living body recognition confidence coefficient calculation unit 41 for calculating a living body recognition confidence coefficient of the face to be detected by a ratio of the living body recognition score to the total living body recognition score;
wherein the total score of the living body identification is the maximum value of the weighted sum of the motion scores corresponding to the motions of all the parts acquired by the living body identification score calculation device 3, and the total score of the living body identification is represented by s _ max; f represents a living body recognition confidence, and 0< f < 1; the living body recognition confidence coefficient calculation unit 41 calculates the living body recognition confidence coefficient of the face to be measured by the following formula:
f=(s/s_max)*100%
and the living body judging unit 42 is configured to determine that the living body recognition score is not less than a preset threshold value when the living body recognition confidence is not less than the preset value, and judge that the human face to be detected, of which the living body recognition score is not less than the preset threshold value, is a living body.
Where a preset value is denoted by e, it is judged by the living body judging unit 42 that: when f is larger than or equal to e, namely the living body recognition confidence coefficient is not smaller than a preset value, determining that the living body recognition score is not smaller than a preset threshold value, and judging the face to be detected with the living body recognition score not smaller than the preset threshold value as a living body; and when f < e, namely the living body recognition confidence coefficient is smaller than a preset value, determining that the living body recognition score is smaller than a preset threshold value, and judging that the face to be detected with the living body recognition score smaller than the preset threshold value is a non-living body.
The living body recognition confidence obtained by the living body recognition confidence calculation unit 41 can be further extended, in the living body recognition system of this embodiment, to build a grading system for living body judgment and living body classification, so as to obtain richer living body recognition results.
In a specific implementation, the motion condition of each corresponding part motion is first obtained by each face part motion detection device 1, one of which is an embodiment of the forehead wrinkle action detection device of the present invention; a corresponding motion score is acquired by the part motion score acquisition device 2 from the part motion condition; the living body recognition score calculation device 3 then weights and sums the acquired motion scores of the part motions as the living body recognition score; finally, the living body recognition confidence calculation unit 41 of the living body judgment device 4 calculates the living body recognition confidence of the face to be detected from the ratio of the living body recognition score to the living body recognition total score, and the living body judgment unit 42 judges the face to be detected as a living body when the calculated living body recognition confidence is not less than the preset value.
By detecting at least 2 face part motions, this embodiment solves the problems of a single algorithm and low security in the prior art and is highly extensible, while the forehead wrinkle action detection device it adopts places low demands on hardware; in addition, the living body recognition score calculation device weights the motions of different parts before score fusion, so the living body recognition accuracy is high, achieving the beneficial effects of high living body recognition accuracy, low hardware requirements and high security.
While the foregoing is directed to the preferred embodiment of the present invention, it will be understood by those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention.
Claims (8)
1. A forehead wrinkle action detection method is characterized by comprising the following steps:
extracting a plurality of video frames from a face video to be detected;
acquiring a forehead area of each video frame extracted from the face video to be detected;
calculating a gradient value of each pixel point of the forehead area of each extracted video frame;
calculating variance of gradient values of pixel points of the forehead area of each extracted video frame to obtain forehead wrinkle values of the corresponding video frames;
judging the forehead wrinkle action condition of the face video to be detected based on the forehead wrinkle value of each extracted video frame;
wherein the determining the forehead wrinkle action of the face video to be detected based on the forehead wrinkle value of each extracted video frame includes:
judging that no wrinkle exists in the forehead area of the face to be detected for each video frame whose forehead wrinkle value is smaller than a first preset threshold, and judging that wrinkles exist in the forehead area of the face to be detected for each video frame whose forehead wrinkle value is larger than a second preset threshold;
and if the extracted video frames include both a video frame in which the forehead area of the face to be detected has no wrinkles and a video frame in which the forehead area of the face to be detected has wrinkles, judging that the face to be detected has performed a forehead wrinkle action.
2. The method as claimed in claim 1, wherein said calculating the gradient value of each pixel point of the forehead region of each of the video frames comprises:
calculating a Sobel value of each pixel point of the forehead area of each extracted video frame through a Sobel operator, wherein the Sobel value represents the gradient value.
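The steps of claims 1 and 2 can be sketched as follows. This is a minimal numpy-only illustration, not the patented implementation: the function names, the naive convolution, and the concrete thresholds are all assumptions made for the example; the claims themselves specify only Sobel gradients, the variance as the wrinkle value, and the two-threshold decision.

```python
import numpy as np

# Standard 3x3 Sobel kernels (claim 2).
SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def convolve2d(img, kernel):
    """Naive 'valid' 2-D sliding-window correlation for a small kernel."""
    h, w = img.shape
    kh, kw = kernel.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def forehead_wrinkle_value(forehead):
    """Variance of the per-pixel Sobel gradient magnitudes (claims 1-2)."""
    gx = convolve2d(forehead, SOBEL_X)
    gy = convolve2d(forehead, SOBEL_Y)
    return float(np.var(np.hypot(gx, gy)))

def has_wrinkle_action(forehead_frames, t_smooth, t_wrinkled):
    """Claim 1 decision: the extracted frames must contain both a
    smooth forehead (value below the first threshold) and a wrinkled
    one (value above the second threshold)."""
    values = [forehead_wrinkle_value(f) for f in forehead_frames]
    return any(v < t_smooth for v in values) and \
           any(v > t_wrinkled for v in values)
```

A flat forehead patch yields gradient variance 0, while a patch with a raised ridge yields a large variance, so a sequence containing both triggers the wrinkle-action decision.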
3. The method as claimed in claim 1, wherein said obtaining a forehead area of each of the video frames extracted from the face video to be detected comprises:
performing face detection and face key point position detection on each video frame extracted from the face video to be detected by using a dlib library, and acquiring the position of the face area and a plurality of key point positions of the face to be detected;
and acquiring a plurality of key point positions of eyebrows from a plurality of face key points of each extracted video frame, and acquiring the forehead area based on the key point positions of the eyebrows and the face area position.
4. A forehead wrinkle motion detection device, comprising:
the video frame extraction unit is used for extracting a plurality of video frames from the face video to be detected;
the forehead area acquisition unit is used for acquiring the forehead area of each video frame extracted from the face video to be detected;
a gradient value obtaining unit, configured to calculate a gradient value of each pixel point in the forehead region of each extracted video frame;
the forehead wrinkle value acquisition unit is used for calculating variance of gradient values of pixel points of the forehead area of each extracted video frame to acquire forehead wrinkle values of the corresponding video frames;
the forehead wrinkle action judging unit is used for judging the forehead wrinkle action condition of the face video to be detected based on the forehead wrinkle value of each extracted video frame;
wherein, the forehead wrinkle action judging unit specifically comprises:
the wrinkle state judging module is used for judging that no wrinkle exists in the forehead area of the face to be detected for each video frame whose forehead wrinkle value is smaller than a first preset threshold, and judging that wrinkles exist in the forehead area of the face to be detected for each video frame whose forehead wrinkle value is larger than a second preset threshold;
and the wrinkle action judging module is used for judging that the face to be detected has performed a forehead wrinkle action if the extracted video frames include both a video frame in which the forehead area of the face to be detected has no wrinkles and a video frame in which the forehead area of the face to be detected has wrinkles.
5. The forehead wrinkle action detection device as claimed in claim 4, wherein the gradient value obtaining unit is specifically configured to calculate a Sobel value of each pixel point of the forehead area of each extracted video frame through a Sobel operator, wherein the Sobel value represents the gradient value.
6. A forehead wrinkle movement detection device according to claim 4, wherein the forehead area acquisition unit includes:
the face key point detection module is used for performing face detection and face key point position detection on each video frame extracted from the face video to be detected by using a dlib library, and acquiring the position of the face area and a plurality of key point positions of the face to be detected;
and the forehead area acquisition module is used for acquiring a plurality of key point positions of the eyebrows from a plurality of face key points of each extracted video frame and acquiring the forehead area based on the plurality of key point positions of the eyebrows and the face area position.
7. A living body identification method, characterized by comprising the steps of:
detecting the forehead wrinkle action condition of the face to be detected in the face video to be detected and the movement condition of at least one other part, wherein the forehead wrinkle action condition of the face to be detected in the face video to be detected is detected by adopting the forehead wrinkle action detection method according to any one of claims 1 to 3;
acquiring a motion score corresponding to the motion of each part of the face to be detected based on the condition of the part motion;
calculating the weighted sum of the motion scores corresponding to the part motions and taking the result as a living body identification score, wherein each part motion has a preset corresponding weight;
and judging the face to be detected with the living body identification score not less than a preset threshold value as a living body.
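The scoring steps of claim 7 amount to a weighted score fusion followed by a threshold test. A minimal sketch, assuming scores and weights are supplied as dicts keyed by part name (the dict layout and example values are illustrative, not from the patent):

```python
def liveness_score(part_scores, weights):
    """Weighted sum of per-part motion scores (claim 7).
    part_scores / weights: dicts keyed by part name, e.g. 'forehead'."""
    return sum(weights[part] * score for part, score in part_scores.items())

def is_live(part_scores, weights, threshold):
    """A face is judged a living body when the fused score reaches
    the preset threshold."""
    return liveness_score(part_scores, weights) >= threshold
```

For example, with forehead-wrinkle and eye-movement scores of 1.0 and 0.5 and weights 0.6 and 0.4, the fused score is 0.8, which passes a 0.7 threshold but fails a 0.9 threshold.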
8. A living body identification system, characterized in that the living body identification system comprises:
at least two face part motion detection devices, each used for detecting the condition of the corresponding part motion of the face to be detected, wherein one of the face part motion detection devices is the forehead wrinkle action detection device according to any one of claims 4 to 6;
the part movement score acquisition device is used for acquiring a movement score corresponding to the movement of each part of the face to be detected based on the movement condition of each part;
living body identification score calculation means for calculating the weighted sum of the motion scores corresponding to the part motions and taking the result as a living body identification score, wherein the living body identification score calculation means presets a weight corresponding to each part motion;
and the living body judgment device is used for judging the face to be detected with the living body identification score not less than a preset threshold value as a living body.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710406498.5A CN107330370B (en) | 2017-06-02 | 2017-06-02 | Forehead wrinkle action detection method and device and living body identification method and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107330370A CN107330370A (en) | 2017-11-07 |
CN107330370B true CN107330370B (en) | 2020-06-19 |
Family
ID=60193840
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710406498.5A Active CN107330370B (en) | 2017-06-02 | 2017-06-02 | Forehead wrinkle action detection method and device and living body identification method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107330370B (en) |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108647600B (en) * | 2018-04-27 | 2021-10-08 | 深圳爱酷智能科技有限公司 | Face recognition method, face recognition device and computer-readable storage medium |
EP3809361B1 (en) | 2018-07-16 | 2023-01-25 | Honor Device Co., Ltd. | Wrinkle detection method and electronic device |
CN109034138B (en) * | 2018-09-11 | 2021-09-03 | 湖南拓视觉信息技术有限公司 | Image processing method and device |
CN111199171B (en) | 2018-11-19 | 2022-09-23 | 荣耀终端有限公司 | Wrinkle detection method and terminal equipment |
CN109745014B (en) * | 2018-12-29 | 2022-05-17 | 江苏云天励飞技术有限公司 | Temperature measurement method and related product |
CN109829434A (en) * | 2019-01-31 | 2019-05-31 | 杭州创匠信息科技有限公司 | Method for anti-counterfeit and device based on living body texture |
CN112200120B (en) * | 2020-10-23 | 2023-06-30 | 支付宝(杭州)信息技术有限公司 | Identity recognition method, living body recognition device and electronic equipment |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101908140A (en) * | 2010-07-29 | 2010-12-08 | 中山大学 | Biopsy method for use in human face identification |
CN105138981A (en) * | 2015-08-20 | 2015-12-09 | 北京旷视科技有限公司 | In-vivo detection system and method |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
FR3007553B1 (en) * | 2013-06-25 | 2015-07-17 | Morpho | METHOD FOR DETECTING A TRUE FACE |
CN103440479B (en) * | 2013-08-29 | 2016-12-28 | 湖北微模式科技发展有限公司 | A kind of method and system for detecting living body human face |
CN104298482B (en) * | 2014-09-29 | 2017-08-25 | 华勤通讯技术有限公司 | The method of mobile terminal adjust automatically output |
US9928603B2 (en) * | 2014-12-31 | 2018-03-27 | Morphotrust Usa, Llc | Detecting facial liveliness |
CN104794464B (en) * | 2015-05-13 | 2019-06-07 | 上海依图网络科技有限公司 | A kind of biopsy method based on relative priority |
US10049287B2 (en) * | 2015-05-22 | 2018-08-14 | Oath Inc. | Computerized system and method for determining authenticity of users via facial recognition |
CN106778450B (en) * | 2015-11-25 | 2020-04-24 | 腾讯科技(深圳)有限公司 | Face recognition method and device |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107330370B (en) | Forehead wrinkle action detection method and device and living body identification method and system | |
CN107330914B (en) | Human face part motion detection method and device and living body identification method and system | |
CN107358152B (en) | Living body identification method and system | |
CN107346422B (en) | Living body face recognition method based on blink detection | |
CN106022209B (en) | A kind of method and device of range estimation and processing based on Face datection | |
TWI686774B (en) | Human face live detection method and device | |
CN105072327B (en) | A kind of method and apparatus of the portrait processing of anti-eye closing | |
CN108182409B (en) | Living body detection method, living body detection device, living body detection equipment and storage medium | |
EP3651055A1 (en) | Gesture recognition method, apparatus, and device | |
CN105426828B (en) | Method for detecting human face, apparatus and system | |
CN106056079B (en) | A kind of occlusion detection method of image capture device and human face five-sense-organ | |
CN109670430A (en) | A kind of face vivo identification method of the multiple Classifiers Combination based on deep learning | |
CN110223322B (en) | Image recognition method and device, computer equipment and storage medium | |
KR20180109665A (en) | A method and apparatus of image processing for object detection | |
CN111382648A (en) | Method, device and equipment for detecting dynamic facial expression and storage medium | |
Bhoi et al. | Template matching based eye detection in facial image | |
CN101908140A (en) | Biopsy method for use in human face identification | |
CN107392089A (en) | Eyebrow movement detection method and device and living body identification method and system | |
CN102184016B (en) | Noncontact type mouse control method based on video sequence recognition | |
WO2018078857A1 (en) | Line-of-sight estimation device, line-of-sight estimation method, and program recording medium | |
CN106881716A (en) | Human body follower method and system based on 3D cameras robot | |
CN111967319A (en) | Infrared and visible light based in-vivo detection method, device, equipment and storage medium | |
CN109543629B (en) | Blink identification method, device, equipment and readable storage medium | |
CN107358155A (en) | Method and device for detecting ghost face action and method and system for recognizing living body | |
CN113326754A (en) | Smoking behavior detection method and system based on convolutional neural network and related equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||