WO2018218839A1 - Method and system for living body recognition - Google Patents

Method and system for living body recognition

Info

Publication number
WO2018218839A1
Authority
WO
WIPO (PCT)
Prior art keywords
living body
motion
face
movement
score
Prior art date
Application number
PCT/CN2017/104612
Other languages
English (en)
Chinese (zh)
Inventor
陈�全
Original Assignee
广州视源电子科技股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 广州视源电子科技股份有限公司 filed Critical 广州视源电子科技股份有限公司
Publication of WO2018218839A1

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40 Spoof detection, e.g. liveness detection
    • G06V40/45 Detection of the body part being alive
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/60 Type of objects
    • G06V20/64 Three-dimensional objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification

Definitions

  • The present invention relates to the field of face recognition, and in particular to a living body recognition method and system.
  • Liveness detection verifies that the person currently undergoing face recognition is a live face rather than a face in a photo or video, thereby ensuring the security of the face recognition system.
  • In one existing solution, an infrared camera is used to obtain the face temperature for liveness detection.
  • The drawback of this type of solution is that it imposes high hardware requirements.
  • An object of the embodiments of the present invention is to provide a living body identification method and system with low hardware requirements and high security.
  • An embodiment of the present invention provides a living body identification method, which includes the following steps:
  • The face to be tested whose living body recognition score is not less than a preset threshold is determined to be a living body.
  • A living body identification method disclosed in an embodiment of the present invention obtains motion scores for at least two parts of the face to be tested, weights and sums the part motion scores to form a living body recognition score, and uses that score to determine whether the face to be tested is a living body. Detecting the motion of at least two parts solves the prior-art problems of a single algorithm and low security, offers strong scalability, and, because detection based on facial part motion can be realized from two-dimensional images, keeps hardware requirements low.
  • Because the motions of different parts are weighted before score fusion, the accuracy of living body recognition is high; the method therefore achieves a high recognition rate, low hardware requirements, and high security.
  • The at least two part motions include at least two of eye motion, mouth motion, head motion, eyebrow motion, forehead motion, and facial motion.
  • The detected part motions may be any several of multiple parts of the face, so liveness detection has wide selectivity, largely resists malicious attack, and greatly increases security.
  • Detecting the motion of at least two parts of the face to be tested includes the following steps:
  • The motion of a part is determined from the degree of change of the positions of its key points across the extracted video frames.
  • Because part motion is determined by detecting, in each extracted video frame, the degree of change of the positions of the key points corresponding to that motion, the detection can be implemented with only two-dimensional images; the algorithm is simple, the requirements on the device are low, and the recognition efficiency is high.
  • The weight corresponding to each part motion is set according to the visibility of that part motion; or, the weight corresponding to each part motion is set according to the detection accuracy of that part motion in the current application scenario.
  • Determining that the living body identification score is not less than a preset threshold comprises the steps of:
  • calculating the living body recognition confidence as the ratio of the living body recognition score to the total living body recognition score; when the living body recognition confidence is not less than a preset value, determining that the living body recognition score is not less than the preset threshold.
  • The living body recognition score can thus be normalized into a living body confidence for the liveness judgment, and the confidence can also be used for liveness grading, so the recognition result is richer than in the prior art.
  • An embodiment of the present invention further provides a living body identification system for identifying whether the face to be tested is a living body, the system including:
  • each part motion detecting unit, configured to detect the corresponding part motion of the face to be tested and obtain a corresponding motion score;
  • a living body recognition score calculation unit, configured to calculate the weighted sum of the motion scores corresponding to each part motion and use the calculated sum as the living body recognition score, the weight corresponding to each part motion having been preset in the unit;
  • a living body judging unit, configured to determine that a face to be tested whose living body recognition score is not less than a preset threshold is a living body.
  • The living body identification system disclosed in the embodiment of the present invention acquires the motion scores of at least two parts of the face to be tested through at least two part motion detecting units, uses the living body recognition score calculation unit to weight and sum the part motion scores into a living body recognition score, and has the living body judging unit use that score as the criterion for determining whether the face to be tested is a living body.
  • The part motions detected by the at least two part motion detecting units include at least two of eye motion, mouth motion, head motion, eyebrow motion, forehead motion, and facial motion.
  • Each part motion detecting unit includes:
  • a part detecting module, configured to detect, for each video frame extracted from the face video of the face to be tested, the key point positions of the part corresponding to the part motion;
  • a part motion condition obtaining module, configured to determine the motion of the part from the degree of change of the key point positions across the extracted video frames and to obtain the corresponding motion score according to that motion.
  • The weight corresponding to each part motion in the living body recognition score calculation unit is set according to the visibility of that part motion; or, it is set according to the detection accuracy of that part motion in the current application scenario.
  • The living body judging unit includes:
  • a living body recognition confidence calculation module, configured to calculate the living body recognition confidence of the face to be tested as the ratio of the living body recognition score to the total living body recognition score;
  • a living body judging module, configured to determine, when the living body recognition confidence is not less than a preset value, that the living body recognition score is not less than the preset threshold and that such a face to be tested is a living body.
  • FIG. 1 is a schematic flowchart of Embodiment 1 of a living body identification method according to the present invention;
  • FIG. 2 is a schematic flowchart of step S1 of Embodiment 1 of the living body identification method provided by the present invention;
  • FIG. 3 is a schematic diagram of a 68-point model of the face to be tested;
  • FIG. 4 is a schematic flowchart of step S4 of Embodiment 1 of the living body identification method provided by the present invention;
  • FIG. 5 is a schematic structural view of an embodiment of a living body recognition system according to the present invention.
  • FIG. 1 is a schematic flowchart of Embodiment 1 of a living body identification method according to the present invention, including the steps:
  • Determine that the face to be tested whose living body recognition score is not less than a preset threshold is a living body.
  • Detecting at least two part motions of the face to be tested in step S1 of this embodiment comprises detecting eye motion, mouth motion, and head motion; in general, the motion of the eyes, mouth, and head of a face is obvious, which facilitates detection, and the computation is simple and efficient.
  • FIG. 2 is a schematic flowchart of step S1 of the first embodiment, where step S1 includes:
  • S11: for each video frame extracted at a preset frame interval from the face video of the face to be tested, detect the key point positions of the part corresponding to each part motion;
  • Referring to FIG. 3, FIG. 3 shows the 68-point model of the face to be tested. Specifically, face detection and facial key point detection are performed on the continuous or sampled frames of the face video using the dlib library, a cross-platform general-purpose library written in C++; 68 key points are obtained for each extracted video frame, and the positions of the key points corresponding to the desired part motions can be read from these 68 key points.
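  • As an illustration of this step, the following is a minimal sketch of 68-point key point extraction with the dlib Python bindings; the model file name shape_predictor_68_face_landmarks.dat is the publicly distributed dlib model and is an assumption here, since the patent only names the dlib library:

```python
import cv2
import dlib

# Frontal face detector and 68-point shape predictor from dlib.
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def extract_landmarks(frame):
    """Return the 68 (x, y) key points of the first detected face, or None."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector(gray)
    if not faces:
        return None
    shape = predictor(gray, faces[0])
    return [(shape.part(i).x, shape.part(i).y) for i in range(68)]
```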
  • A preferred way of setting the weight corresponding to each part motion in step S3 of the first embodiment is according to the visibility of each part motion.
  • Under the general strategy, the mouth motion is the most obvious, so it receives the largest weight, while the accuracy of judging the head motion is the lowest, so it receives the smallest weight.
  • The weighting of the part motions in the first embodiment is therefore: mouth motion > eye motion > head motion.
  • Another preferred way of setting the weight corresponding to each part motion in step S3 is to adjust the weights automatically for different application scenarios. In a specific scenario, normal input videos of the various part motions of faces to be tested are collected as positive samples and attack videos as negative samples, and (number of positive samples passed + number of negative samples rejected) / (total positive samples + total negative samples) is taken as the accuracy of that part motion.
  • The accuracies of the part motions are then sorted in descending order, and the weights of the part motions are reassigned in the same descending order.
  • The readjusted weights are used to calculate the living body recognition score, so the recognition result adapts to the accuracy of part motion detection in different scenarios, which increases the accuracy of the living body recognition result of this embodiment.
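  • A minimal sketch of this scenario-based weight adjustment; the function and variable names are illustrative and the sample counts in the usage example are invented:

```python
def adjust_weights(results, weights):
    """Reassign the preset weight values so that the most accurate part motion
    in the current scenario receives the largest weight.

    results: part -> (positives passed, negatives rejected, total positives, total negatives)
    weights: part -> preset weight value
    """
    accuracy = {
        part: (passed + rejected) / (n_pos + n_neg)
        for part, (passed, rejected, n_pos, n_neg) in results.items()
    }
    parts_by_accuracy = sorted(accuracy, key=accuracy.get, reverse=True)
    values_desc = sorted(weights.values(), reverse=True)
    return dict(zip(parts_by_accuracy, values_desc))

# Example: head detection happens to be most accurate in this scenario,
# so it takes over the largest weight value.
new_weights = adjust_weights(
    {"mouth": (90, 85, 100, 100), "eye": (92, 90, 100, 100), "head": (97, 95, 100, 100)},
    {"mouth": 0.5, "eye": 0.3, "head": 0.2},
)  # -> {'head': 0.5, 'eye': 0.3, 'mouth': 0.2}
```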
  • FIG. 4 is a schematic flowchart of step S4, including steps:
  • Determine that the face to be tested whose living body recognition score is not less than a preset threshold is a living body.
  • The total living body recognition score is the maximum score obtainable after the face to be tested is identified in this embodiment; the living body recognition confidence of the face to be tested is calculated by the following formula:
  • f = s / s_max, where s represents the living body recognition score, s_max represents the total living body recognition score, and f represents the living body recognition confidence.
  • When f ≥ e, that is, when the living body recognition confidence is not less than the preset value e, it is determined that the living body recognition score is not less than the preset threshold, and the face to be tested is a living body; when f < e, that is, when the confidence is less than the preset value, it is determined that the score is less than the threshold, and the face to be tested is not a living body.
  • The living body recognition confidence obtained from the living body recognition score can be further exploited; in this embodiment it is used to build a grading system for liveness judgment and liveness classification, yielding richer living body recognition results.
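  • A minimal sketch of the score fusion (step S3) and the confidence test (step S4) under illustrative weights; the weight values and the preset value e are assumptions, not taken from the patent:

```python
def liveness_decision(motion_scores, weights, e=0.6):
    """Weighted score fusion followed by the confidence test f >= e."""
    s = sum(weights[part] * motion_scores[part] for part in weights)  # living body recognition score
    s_max = sum(weights.values())   # total score: every part motion scoring 1 point
    f = s / s_max                   # living body recognition confidence
    return f >= e, f

# Mouth and eyes moved, head did not:
is_live, confidence = liveness_decision(
    {"mouth": 1, "eye": 1, "head": 0},
    {"mouth": 0.5, "eye": 0.3, "head": 0.2},
)  # confidence = 0.8, so the face is judged to be a living body
```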
  • A specific process for determining the part motion from the degree of change of the key point positions in step S12 is as follows:
  • Detection process of the mouth motion: the 8 key points 61-68 in the obtained 68-point model of the face represent the mouth of the face to be tested.
  • The maximum x coordinate of these 8 key points minus the minimum x coordinate is the length of the mouth, and likewise the maximum y coordinate minus the minimum y coordinate is the width of the mouth.
  • The mouth length divided by the mouth width is the mouth value; thresholds a1 and a2 are set, where a1 < a2. When the mouth value is less than a1 the mouth is open, and when the mouth value is greater than a2 the mouth is closed.
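  • A sketch of the mouth value computation; indices follow the 68-point model (1-based points 61-68 are 0-based 60-67), and the threshold values a1 < a2 shown are illustrative:

```python
def mouth_value(landmarks):
    """Mouth length divided by mouth width from the 8 mouth key points."""
    pts = landmarks[60:68]                  # key points 61-68
    xs = [x for x, _ in pts]
    ys = [y for _, y in pts]
    length = max(xs) - min(xs)              # horizontal extent
    width = max(ys) - min(ys)               # vertical extent
    return length / (width + 1e-6)          # epsilon avoids division by zero

A1, A2 = 2.0, 4.0   # assumed thresholds: value < A1 means open, value > A2 means closed
```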
  • Detection process of the eye motion: the key points 37-48 in the obtained 68-point model of the face represent the eyes of the face to be tested, where the six key points 37-42 represent the right eye and the six key points 43-48 represent the left eye.
  • The maximum x coordinate of the six right-eye key points minus the minimum x coordinate is the length of the right eye, and the maximum y coordinate minus the minimum y coordinate is the width of the right eye; the right-eye length divided by the right-eye width is the right eye value, and the left eye value is obtained in the same way. Preferably, the average of the left eye value and the right eye value is defined as the eye value; thresholds b1 and b2 are set, where b1 < b2. When the eye value is less than b1 the eyes are open, and when the eye value is greater than b2 the eyes are closed.
  • If the eye state determined in some extracted frames is eyes open and that determined in other frames is eyes closed, it is determined that the eyes have motion.
  • Besides defining the average of the left eye value and the right eye value as the eye value for judging the motion condition, the right eye value and/or the left eye value can also be used directly to judge the corresponding right-eye and/or left-eye motion; this expands the eye motion into left-then-right, right-then-left, left-eye-only, and right-eye-only variants. As the set of motion prompts grows, the whole liveness check becomes more variable, which increases the security of liveness detection.
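  • A sketch of the eye value computation under the same conventions; the right eye uses 1-based points 37-42 (0-based 36-41), the left eye 43-48 (0-based 42-47), and the threshold values b1 < b2 are illustrative:

```python
def eye_value(landmarks):
    """Average of the per-eye length/width ratios."""
    def ratio(pts):
        xs = [x for x, _ in pts]
        ys = [y for _, y in pts]
        return (max(xs) - min(xs)) / (max(ys) - min(ys) + 1e-6)
    right = ratio(landmarks[36:42])         # key points 37-42
    left = ratio(landmarks[42:48])          # key points 43-48
    return (left + right) / 2.0

B1, B2 = 3.5, 5.0   # assumed thresholds: value < B1 means open, value > B2 means closed
```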
  • Detection process of the head motion: the six key points representing the left eye, the six key points representing the right eye, and key points 34, 49, and 55 in the obtained 68-point model of the face are used to detect the head motion. The average x coordinate of the six left-eye key points is defined as the x coordinate of point A and their average y coordinate as the y coordinate of point A; point B is defined for the right eye in the same way; and key points 34, 49, and 55 of the 68-point model are defined as points C, D, and E respectively.
  • Points A to E obtained above form a five-point model of facial feature points.
  • The pose angles of the face in three-dimensional space, the yaw value and the pitch value, are obtained from the five-point model of facial feature points described above.
  • Thresholds c1 and c2 are set, where c1 < c2; when yaw < c1 the head is turned left, and when yaw > c2 the head is turned right.
  • Thresholds d1 and d2 are set, where d1 < d2; when pitch < d1 the head is lowered, and when pitch > d2 the head is raised. When c1 ≤ yaw ≤ c2 and d1 ≤ pitch ≤ d2, the head is facing forward.
  • If some extracted frames show the head facing forward and others show it raised, it is determined that the head has a head-up motion, that is, the head has motion; similarly, detecting a head-down motion, a left-turn motion, or a right-turn motion of the head of the face to be tested also determines that the head has motion.
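  • The patent does not state how yaw and pitch are computed from the five-point model; a common choice is a perspective-n-point fit, sketched below with an assumed generic 3D reference face (all 3D coordinates and the camera model are assumptions):

```python
import cv2
import numpy as np

# Assumed 3D reference coordinates (arbitrary units) for points A-E:
# eye centres, nose tip (34), and the two mouth corners (49, 55).
MODEL_POINTS = np.array([
    (-35.0,  32.0, -26.0),   # A: left eye centre
    ( 35.0,  32.0, -26.0),   # B: right eye centre
    (  0.0,   0.0,   0.0),   # C: nose tip, key point 34
    ( 25.0, -30.0, -24.0),   # D: mouth corner, key point 49
    (-25.0, -30.0, -24.0),   # E: mouth corner, key point 55
], dtype=np.float64)

def yaw_pitch(image_points, frame_size):
    """Estimate (yaw, pitch) in degrees from the 2D five-point model A-E."""
    h, w = frame_size
    camera = np.array([[w, 0, w / 2], [0, w, h / 2], [0, 0, 1]], dtype=np.float64)
    ok, rvec, _ = cv2.solvePnP(MODEL_POINTS,
                               np.asarray(image_points, dtype=np.float64),
                               camera, None, flags=cv2.SOLVEPNP_EPNP)
    rot, _ = cv2.Rodrigues(rvec)
    pitch = np.degrees(np.arctan2(rot[2, 1], rot[2, 2]))
    yaw = np.degrees(np.arctan2(-rot[2, 0], np.hypot(rot[2, 1], rot[2, 2])))
    return yaw, pitch
```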
  • Step S2 obtains the corresponding motion score according to the part motion determined by the above detection processes, specifically:
  • obtaining the motion score of the mouth motion: if the mouth has motion, the obtained motion score of the mouth motion is 1 point; if the mouth has no motion, the obtained motion score of the mouth motion is 0 points.
  • Obtaining the motion score of the eye motion: if the eyes have motion, the obtained motion score of the eye motion is 1 point; if the eyes have no motion, the obtained motion score of the eye motion is 0 points.
  • Obtaining the motion score of the head motion: if the head of the face to be tested has any one of a head-up motion, a head-down motion, a left-turn motion, or a right-turn motion, it is determined that the head has motion and the obtained motion score of the head motion is 1 point; if the head has none of these motions, the head has no motion and the obtained motion score of the head motion is 0 points.
  • In summary, for each video frame extracted at the preset frame interval from the face video, the 68 key points of the face are first acquired, from which the eye, mouth, and head key point positions corresponding to the eye, mouth, and head motions to be detected are obtained, so as to determine the eye, mouth, and head states of that frame; the eye motion, mouth motion, and head motion are then determined from these states across the extracted video frames.
  • The corresponding motion score is obtained from each part motion: if the part has motion the obtained motion score is 1 point, otherwise 0 points. The weighted sum of the part motion scores is then calculated as described above, and this sum is the living body recognition score. Finally, the living body recognition confidence is calculated as the ratio of the living body recognition score to the total living body recognition score; when the confidence is not less than the preset value, the face to be tested is determined to be a living body.
  • This embodiment can be applied to a variety of device terminals; the implementation scenario of a mobile phone terminal is taken as an example.
  • A sequence of liveness action prompts is randomly generated; for example, the face to be tested is asked to perform mouth opening, blinking, and a left head turn in sequence.
  • If the detected motion score of the mouth opening is 1 point, the motion score of the blink is 1 point, and the motion score of the left head turn is 0 points, the living body recognition score is calculated as the weighted sum of these part motion scores, from which the liveness judgment is made.
  • This embodiment solves the prior-art problems of a single algorithm and low security, and its scalability is strong. The detection of the part motions of the face to be tested can be realized from two-dimensional images, so the hardware requirements on the device are low. In this embodiment the detection of eye motion, mouth motion, and head motion is used for living body recognition; the motion of these parts is obvious, so the accuracy of the motion judgment is high. Score fusion with different part weights gives high liveness recognition accuracy, and detecting the motion of multiple parts helps improve security.
  • A second embodiment of the living body identification method of the present invention is provided below.
  • The main process of the second embodiment follows steps S1 to S4 of the first embodiment of the present invention.
  • Step S1 of the second embodiment also follows the first embodiment shown in FIG. 2 and likewise includes steps S11-S12:
  • S11: for each video frame extracted at a preset frame interval from the face video of the face to be tested, detect the key point positions of the part corresponding to each part motion;
  • Referring to FIG. 3, FIG. 3 shows the 68-point model of the face to be tested. Specifically, face detection and facial key point detection are performed on the continuous or sampled frames of the face video using the dlib library, a cross-platform general-purpose library written in C++; 68 key points are obtained for each extracted video frame, and the positions of the key points corresponding to the desired part motions can be read from these 68 key points.
  • Detection process of the mouth motion: the 8 key points 61-68 in the obtained 68-point model of the face represent the mouth of the face to be tested, and a mouth state classification model trained in advance with an SVM classifier predicts the mouth state of each extracted frame of the face video.
  • The pre-training process of the mouth state classification model is: the 8 key points 61-68 of the 68-point face model represent the mouth features of the face to be tested; a number of face photos with the mouth open are manually selected and their mouth state labelled 1; a number of face photos with the mouth closed are manually selected and their mouth state labelled 0; the SVM classifier is then trained to obtain the mouth state classification model. If the mouth states of the extracted video frames include both 0 and 1, it is determined that the mouth has motion; otherwise the mouth has no motion.
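  • A minimal sketch of this SVM variant with scikit-learn; the patent only says the 8 mouth key points are the features, so the normalisation below (centring and scaling the coordinates) is an assumption:

```python
import numpy as np
from sklearn.svm import SVC

def mouth_features(landmarks):
    """Feature vector from the 8 mouth key points (1-based 61-68)."""
    pts = np.array(landmarks[60:68], dtype=np.float64)
    pts -= pts.mean(axis=0)                             # translation invariance
    return (pts / (np.abs(pts).max() + 1e-9)).ravel()   # scale invariance

def train_mouth_model(open_landmarks, closed_landmarks):
    """Train on manually selected open (label 1) and closed (label 0) photos."""
    X = [mouth_features(l) for l in open_landmarks + closed_landmarks]
    y = [1] * len(open_landmarks) + [0] * len(closed_landmarks)
    return SVC(kernel="rbf").fit(X, y)

def mouth_has_motion(model, frames_landmarks):
    """Motion iff both states 0 and 1 appear among the extracted frames."""
    states = {int(model.predict([mouth_features(l)])[0]) for l in frames_landmarks}
    return states == {0, 1}
```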
  • Alternatively, the 8 key points 61-68 in the obtained 68-point model of the face represent the mouth of the face to be tested, and a mouth state classification model trained in advance with a soft-max regression classifier predicts a mouth state score for each extracted frame of the face video.
  • The pre-training process of this model is: a number of face photos are labelled according to the degree of mouth opening, that is, the mouth is given a state score according to how far it is open. The score can be set to 10 levels with values between 0 and 1; a closed mouth then scores 0 points, a maximally open mouth scores 1 point, and a half-open mouth scores 0.5 points.
  • With this model, the mouth state scores of the video frames extracted from the face video of the face to be tested are obtained; when the difference between the maximum and minimum of these mouth state scores is greater than a preset threshold, the mouth is considered to have motion, otherwise the mouth has no motion.
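  • The max-minus-min test for this soft-max score variant is a one-liner; the threshold value is an assumption, and the same test applies unchanged to the eye and head state scores described below:

```python
def has_motion_by_score(state_scores, threshold=0.4):
    """True when the per-frame state scores span more than the threshold."""
    return (max(state_scores) - min(state_scores)) > threshold
```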
  • Detection process of the eye motion: the key points 37-48 in the obtained 68-point model of the face represent the eyes of the face to be tested, where the six key points 37-42 represent the right eye and the six key points 43-48 represent the left eye.
  • An eye state classification model trained in advance with an SVM classifier predicts the eye state of each extracted frame of the face video of the face to be tested. The pre-training process is: the 12 key points 37-48 of the 68-point face model represent the eye features of the face to be tested; a number of face photos with the eyes open are manually selected and their eye state labelled 1; a number of face photos with the eyes closed are manually selected and their eye state labelled 0; the SVM classifier is then trained to obtain the eye state classification model. If the eye states of the extracted video frames include both 0 and 1, it is determined that the eyes have motion; otherwise the eyes have no motion.
  • Alternatively, the 12 key points 37-48 in the obtained 68-point model of the face represent the eyes of the face to be tested, and an eye state classification model trained in advance with a soft-max regression classifier predicts an eye state score for each extracted frame of the face video. The pre-training process is: a number of face photos are labelled according to the degree of eye opening, that is, the eyes are given a state score according to how far they are open. The score can be set to 10 levels with values between 0 and 1; closed eyes then score 0 points, maximally open eyes score 1 point, and half-open eyes score 0.5 points.
  • With this model, the eye state scores of the video frames extracted from the face video of the face to be tested are obtained; when the difference between the maximum and minimum of these eye state scores is greater than a preset threshold, the eyes are considered to have motion, otherwise the eyes have no motion.
  • Besides defining the average of the left eye value and the right eye value as the eye value for judging the motion condition, the right eye value and/or the left eye value can also be used directly to judge the corresponding right-eye and/or left-eye motion; this expands the eye motion into left-then-right, right-then-left, left-eye-only, and right-eye-only variants, making the whole liveness check more variable and increasing the security of liveness detection.
  • There are four kinds of head motion: head left turn, head right turn, head up, and head down.
  • The head-up motion is taken as an example to illustrate the detection process of the head motion:
  • A head state classification model trained in advance with an SVM classifier predicts the head state of each extracted frame of the face video of the face to be tested. The pre-training process is: the six key points representing the left eye, the six key points representing the right eye, and key points 34, 49, and 55 of the 68-point face model represent the head features of the face to be tested; a number of face photos with the head raised are manually selected and their head state labelled 1; a number of face photos with the head facing forward normally are manually selected and their head state labelled 0; the SVM classifier is then trained to obtain the head state classification model. If the head states of the extracted video frames include both 0 and 1, it is determined that the head has motion; otherwise the head has no motion.
  • Alternatively, the six key points representing the left eye, the six key points representing the right eye, and key points 34, 49, and 55 in the obtained 68-point model represent the head of the face to be tested, and a head state classification model trained in advance with a soft-max regression classifier predicts a head state score for each extracted frame of the face video.
  • The pre-training process of this model is: a number of face photos are labelled according to the degree of head raising, that is, the head is given a state score according to how far it is raised. The score can be set to 10 levels with values between 0 and 1; a head facing forward normally then scores 0 points, a maximally raised head scores 1 point, and a half-raised head scores 0.5 points.
  • With this model, the head state scores of the video frames extracted from the face video of the face to be tested are obtained; when the difference between the maximum and minimum of these head state scores is greater than a preset threshold, the head is considered to have motion, otherwise the head has no motion.
  • The detection processes of the other three head motions, head left turn, head right turn, and head down, are similar to the head motion detection process illustrated above with the head-up example, and are not described here.
  • Step S2 obtains the corresponding motion score according to the part motion determined by the above detection processes, specifically:
  • obtaining the motion score of the mouth motion: if it is determined that the mouth has motion, the obtained motion score of the mouth motion is 1 point; if the mouth has no motion, the obtained motion score of the mouth motion is 0 points.
  • Obtaining the motion score of the eye motion: if it is determined that the eyes have motion, the obtained motion score of the eye motion is 1 point; if the eyes have no motion, the obtained motion score of the eye motion is 0 points.
  • Obtaining the motion score of the head motion: if it is determined that the head has motion, the obtained motion score of the head motion is 1 point; if the head has no motion, the obtained motion score of the head motion is 0 points.
  • Alternatively, the degree of each part motion can also be obtained in step S1, and correspondingly a motion score between 0 and 1 is obtained in step S2 according to the degree of motion, instead of only 1 or 0; this alternative indicates not only whether there is motion but also its degree.
  • In summary, for each video frame extracted at the preset frame interval from the face video, the 68 key points of the face are acquired, from which the eye, mouth, and head key point positions to be detected are obtained so as to determine the eye, mouth, and head states of that frame; the eye motion, mouth motion, and head motion are then determined from these states across the extracted video frames, and the corresponding motion scores are obtained from each part motion. The weighted sum of the part motion scores is then calculated, and this sum is the living body recognition score.
  • The living body recognition confidence is calculated as the ratio of the living body recognition score to the total living body recognition score; when the confidence is not less than the preset value, it is determined that the living body recognition score is not less than the preset threshold and the face to be tested is a living body; otherwise, the face to be tested is determined not to be a living body.
  • The second embodiment can be applied to a variety of device terminals; the implementation scenario of a mobile phone terminal is taken as an example.
  • A sequence of liveness action prompts is randomly generated; for example, the face to be tested is asked to perform mouth opening, blinking, and a left head turn in sequence.
  • If the detected motion score of the mouth opening is 1 point, the motion score of the blink is 1 point, and the motion score of the left head turn is 0 points, the living body recognition score is calculated as the weighted sum of these part motion scores, from which the liveness judgment is made.
  • The second embodiment solves the prior-art problems of a single algorithm and low security, and its scalability is strong; the detection of the part motions of the face to be tested can be realized from two-dimensional images, so the hardware requirements on the device are low.
  • The detection of eye motion, mouth motion, and head motion is used for living body recognition; the motion of these parts is obvious, so the accuracy of the motion judgment is high. Score fusion with different part weights gives high liveness recognition accuracy, and detecting the motion of multiple parts helps improve security.
  • A third embodiment of the living body identification method of the present invention is provided below.
  • The main process of the third embodiment follows steps S1 to S4 of the first embodiment of the present invention; for the parts already described, refer to the first embodiment, and details are not repeated here.
  • The motion of the eyes, mouth, and head of a face is obvious, which facilitates detection, and the computation is simple and efficient; detecting the part motions of the face to be tested in step S1 therefore includes detecting the eye motion, the mouth motion, and the head motion. In the third embodiment, detecting the part motions of the face to be tested in step S1 further includes detecting at least one of three additional part motions: facial motion, eyebrow motion, and forehead motion.
  • Detecting the at least two part motions of the face to be tested in step S1 includes detecting, for each video frame extracted at the preset frame interval from the face video of the face to be tested, the key point positions corresponding to each part motion. Referring to FIG. 3, FIG. 3 shows the 68-point model of the face to be tested. Specifically, face detection and facial key point detection are performed on the continuous or sampled frames of the face video using the dlib library; 68 key points are obtained for each extracted video frame, from which the key point positions of the required parts can be read.
  • Step S1 further includes face detection of the face to be tested in each video frame, thereby acquiring a face rectangle, shown as the face rectangle HIJK in FIG. 3.
  • A preferred way of setting the weight corresponding to each part motion in step S3 is according to the visibility of each part motion.
  • Under the general strategy, the weighting of the part motions is: mouth motion > eye motion > head motion, and the weight of each of the facial, eyebrow, and forehead motions that is detected is smaller than the weights of the mouth, eye, and head motions above.
  • Another preferred way of setting the weight corresponding to each part motion in step S3 is to adjust the weights automatically for different application scenarios. In a specific scenario, normal input videos of the various part motions of faces to be tested are collected as positive samples and attack videos as negative samples, and (number of positive samples passed + number of negative samples rejected) / (total positive samples + total negative samples) is taken as the accuracy of that part motion.
  • The accuracies of the part motions are then sorted in descending order, and the weights of the part motions are reassigned in the same descending order.
  • The readjusted weights are used to calculate the living body recognition score, so the recognition result adapts to the accuracy of part motion detection in different scenarios, which increases the accuracy of the living body recognition result of this embodiment.
  • For the methods of detecting the mouth, eye, and head motions of the face to be tested in step S1 and of obtaining the motion scores corresponding to these part motions in step S2, refer to the detailed descriptions in the first and second embodiments of the living body identification method of the present invention; they are not repeated here.
  • In the third embodiment, the detection of the mouth motion and the eye motion may also adopt the following alternative embodiments:
  • Alternative detection process of the mouth motion: for each video frame extracted at the preset frame interval from the face video of the face to be tested, the mouth position of the face is detected and the average grey value of the mouth position is calculated; it is then judged whether this average grey value is smaller than a preset mouth grey value threshold. If so, the mouth is in a closed state; if not, the mouth is in an open state.
  • This alternative embodiment exploits the fact that an open mouth exposes the teeth, which are mainly white and have relatively large grey values, so the average grey value is large when the mouth is open and small when it is closed; the mouth state is recognized by calculating the average grey value of the mouth, and the mouth motion condition is determined from it.
  • If the mouth state determined in some extracted frames is open and that determined in other frames is closed, it is determined that the mouth has motion.
  • In this alternative embodiment, the corresponding mouth motion score is obtained as follows: if it is determined that the mouth has motion, the obtained motion score of the mouth motion is 1 point; otherwise the mouth has no motion and the obtained motion score of the mouth motion is 0 points.
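  • A sketch of this grey-level alternative; the region of interest is assumed to be the bounding box of the mouth key points, and the grey threshold is illustrative:

```python
import numpy as np

def mouth_is_open_by_gray(gray_frame, landmarks, threshold=90.0):
    """An open mouth exposes the bright teeth, raising the mean grey value."""
    pts = np.array(landmarks[60:68])        # mouth key points 61-68
    x0, y0 = pts.min(axis=0)
    x1, y1 = pts.max(axis=0)
    roi = gray_frame[y0:y1 + 1, x0:x1 + 1]
    return float(roi.mean()) >= threshold   # below the threshold means closed
```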
  • Besides mouth opening and closing, the mouth motion may also include movement of the mouth corners; for example, when a face smiles, the two mouth corners extend toward the cheeks.
  • Key point 55 in the obtained 68-point face model represents the left mouth corner and key point 49 the right mouth corner. Taking the left and right corner points of the first frame of the face video as the reference, the distances moved by the left and right corner points in the subsequently extracted video frames are calculated; it is then judged whether both distances are greater than a preset threshold. If so, the mouth motion state is a smile; if not, the mouth motion state is normal.
  • If the mouth motion state determined in some extracted frames is the smile state and that determined in other frames is the normal state, it is determined that the mouth has motion.
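  • A sketch of the mouth-corner (smile) test; the displacement threshold is an assumption:

```python
import numpy as np

def mouth_state_by_corners(first_landmarks, landmarks, threshold=8.0):
    """Compare corners 49 and 55 (0-based 48 and 54) with the first frame."""
    moved = [
        np.hypot(landmarks[i][0] - first_landmarks[i][0],
                 landmarks[i][1] - first_landmarks[i][1])
        for i in (48, 54)
    ]
    return "smile" if all(d > threshold for d in moved) else "normal"
```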
  • An alternative detection process of the eye motion, for the case where the identified subject is Asian (dark eyeballs and yellowish eyelids): for each video frame extracted at the preset frame interval from the face video, the eye position of the face to be tested is detected, the eyeball position is determined from the eye position, and the average grey value of the eyeball position is calculated; it is then judged whether this average grey value is smaller than a preset eyeball grey value threshold. If so, the eye is in an open state; if not, the eye is closed.
  • This alternative embodiment exploits the difference between the average grey values detected at the eyeball position when the eye is open and when it is closed: when the eye is open, the average grey value of the eyeball position is relatively small, and when the eye is closed, it is large.
  • If the eye state determined in some extracted frames is open and that determined in other frames is closed, it is determined that the eyes have motion.
  • In this alternative embodiment, the corresponding eye motion score is obtained as follows: if it is determined that the eyes have motion, the obtained motion score of the eye motion is 1 point; if the eyes have no motion, the obtained motion score of the eye motion is 0 points.
  • In another alternative, for each video frame extracted at the preset frame interval from the face video of the face to be tested, the centre position of the eyeball is detected and its position relative to the eye is calculated; it is then judged whether the distance between this relative position and the normal relative position of the eyeball centre in the eye is greater than a preset value. If so, the eyeball is not in the normal position; if not, the eyeball is in the normal position.
  • If the eyeball is determined not to be in the normal position in some extracted frames and to be in the normal position in other frames, the eye motion of the face to be tested is eyeball rotation, that is, the eyes are determined to have motion; otherwise the eyes are determined to have no motion.
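  • A sketch of the eyeball-position test; how the eyeball (pupil) centre is located is not specified by the patent, so pupil_xy is taken as given here, and the normal position and distance threshold are assumptions:

```python
import numpy as np

def eyeball_off_centre(eye_pts, pupil_xy, max_offset=0.2):
    """True when the pupil centre deviates from the eye-box centre by more
    than max_offset, measured in box-relative coordinates."""
    xs = [x for x, _ in eye_pts]
    ys = [y for _, y in eye_pts]
    w = max(xs) - min(xs) + 1e-6
    h = max(ys) - min(ys) + 1e-6
    rel_x = (pupil_xy[0] - min(xs)) / w
    rel_y = (pupil_xy[1] - min(ys)) / h
    return np.hypot(rel_x - 0.5, rel_y - 0.5) > max_offset
```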
  • Detecting the part motions of the face to be tested in step S1 of the third embodiment further includes detecting at least one of facial motion, eyebrow motion, and forehead motion; the processes for detecting the facial, eyebrow, and forehead motions of the face to be tested are as follows:
  • Detection process of the facial motion: the eye, mouth, and face regions of the face to be tested are determined, and the ratio of the sum of the eye area and the mouth area to the area of the face region is calculated; it is then judged whether this ratio lies within a preset range. If so, the face state is normal; if not, the face state is a "ghost face" (exaggerated grimace) state.
  • The facial motion here includes ghost face motions: when a ghost face is made, the ratio of the sum of the eye area and mouth area to the face area leaves the preset range; otherwise the state is normal. When both a ghost face state and a normal state are detected across the extracted frames, it is determined that the face has a ghost face motion, that is, the face has motion.
  • As an example of calculating the eye area, mouth area, and face area: the eye area is the eye length multiplied by the eye width, the mouth area is the mouth length multiplied by the mouth width, and the face area is the area of the face rectangle HIJK.
  • Obtaining the motion score of the facial motion: if it is determined that the face has motion, the obtained motion score of the facial motion is 1 point; otherwise the face has no motion and the obtained motion score of the facial motion is 0 points.
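  • A sketch of the ghost face test using the patent's length-times-width area approximation; the preset range bounds are assumptions:

```python
def face_state(landmarks, face_rect, lo=0.04, hi=0.12):
    """'normal' while (eye area + mouth area) / face area stays in [lo, hi]."""
    def box_area(pts):
        xs = [x for x, _ in pts]
        ys = [y for _, y in pts]
        return (max(xs) - min(xs)) * (max(ys) - min(ys))
    eyes = box_area(landmarks[36:42]) + box_area(landmarks[42:48])
    mouth = box_area(landmarks[60:68])
    _, _, fw, fh = face_rect                # face rectangle HIJK as (x, y, w, h)
    ratio = (eyes + mouth) / float(fw * fh)
    return "normal" if lo <= ratio <= hi else "ghost"
```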
  • Detection process of the eyebrow motion: the 5 key points 18-22 in the obtained 68-point model of the face represent the right eyebrow, and the 5 key points 23-27 represent the left eyebrow.
  • A curve is fitted to each eyebrow (for example by least squares), the curvature at key point 20 of the right eyebrow is calculated as the right eyebrow feature value and the curvature at key point 25 of the left eyebrow as the left eyebrow feature value, and the average of the two feature values is the eyebrow feature value; it is then judged whether the eyebrow feature value is greater than a preset threshold. If so, the eyebrow state is raised; if not, the eyebrow state is normal.
  • Among the video frames extracted from the face video of the face to be tested, if some frames determine the eyebrow state to be raised and other frames determine it to be normal, it is determined that the eyebrows have motion; otherwise the eyebrows have no motion.
  • Obtaining the motion score of the eyebrow motion: if it is determined that the eyebrows have motion, the obtained motion score of the eyebrow motion is 1 point; if the eyebrows have no motion, the obtained motion score of the eyebrow motion is 0 points.
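  • A sketch of the eyebrow feature; a quadratic least-squares fit is one plausible reading of the curve-fitting step (the patent does not name the curve model), with the curvature evaluated at points 20 and 25:

```python
import numpy as np

def eyebrow_feature(landmarks):
    """Average curvature of the two eyebrows at their middle key points."""
    def curvature_at(pts, x):
        xs = np.array([p[0] for p in pts], dtype=np.float64)
        ys = np.array([p[1] for p in pts], dtype=np.float64)
        a, b, _ = np.polyfit(xs, ys, 2)          # fit y = a*x^2 + b*x + c
        d1, d2 = 2 * a * x + b, 2 * a            # first and second derivatives
        return abs(d2) / (1 + d1 ** 2) ** 1.5    # curvature of the fitted curve
    right = curvature_at(landmarks[17:22], landmarks[19][0])  # points 18-22 at 20
    left = curvature_at(landmarks[22:27], landmarks[24][0])   # points 23-27 at 25
    return (left + right) / 2.0
```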
  • Detection process of the forehead motion: the forehead position is determined from the obtained 68-point model of the face; once the forehead is determined, the Sobel values of the forehead area are calculated with the Sobel operator, and the variance of the Sobel values of the forehead area is taken as the forehead wrinkle value.
  • The Sobel value here is the result of convolving the convolution kernel with the pixel neighbourhood of the same size centred on the current pixel. Among the video frames extracted from the face video of the face to be tested, if the forehead wrinkle value of some frames is greater than a first preset threshold and the forehead wrinkle value of other frames is less than a second preset threshold, it is determined that the forehead has motion; otherwise the forehead has no motion.
  • An example of determining the position of the forehead area: the forehead area usually refers to the area above the eyebrows in the face; based on this definition, the positions of the eyebrow key points are obtained first, and the forehead area is then determined from the face rectangle and the eyebrow key points, as shown by the rectangular box HOPK in FIG. 3.
  • Obtaining the motion score of the forehead motion: if it is determined that the forehead has motion, the obtained motion score of the forehead motion is 1 point; if the forehead has no motion, the obtained motion score of the forehead motion is 0 points.
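  • A sketch of the forehead wrinkle value; the forehead rectangle HOPK is taken as given as (x, y, w, h), and combining both Sobel directions is an assumption, since the patent does not specify the kernel orientation:

```python
import cv2
import numpy as np

def forehead_wrinkle_value(gray_frame, forehead_rect):
    """Variance of the Sobel response over the forehead region."""
    x, y, w, h = forehead_rect
    roi = gray_frame[y:y + h, x:x + w]
    gx = cv2.Sobel(roi, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(roi, cv2.CV_64F, 0, 1, ksize=3)
    return float(np.hypot(gx, gy).var())    # wrinkles raise the variance
```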
  • In the third embodiment, besides the above scheme of scoring 1 or 0 according to whether each part has motion, a motion score between 0 and 1 can also be obtained according to the degree of motion of each part; this alternative indicates not only whether there is motion but also its degree. The third embodiment implemented with this alternative also falls within the scope of the present invention.
  • In summary, for each video frame extracted at the preset frame interval from the face video of the face to be tested, the facial key points are detected to obtain the key point positions of each part motion and hence the features of the corresponding parts; the motion of each part is determined from these features across the extracted video frames, and the corresponding motion scores are obtained. The weighted sum of the part motion scores is then calculated, and this sum is the living body recognition score. Finally, the living body recognition confidence is calculated as the ratio of the living body recognition score to the total living body recognition score; when the confidence is not less than the preset value, it is determined that the living body recognition score is not less than the preset threshold and the face to be tested is a living body; otherwise, the face to be tested is determined not to be a living body.
  • The third embodiment solves the prior-art problems of a single algorithm and low security, and its scalability is strong; the detection of the part motions of the face to be tested can be realized from two-dimensional images, so the hardware requirements on the device are low.
  • The detection of eye motion, mouth motion, and head motion is used for living body recognition; the motion of these parts is obvious, so the accuracy of the motion judgment is high. The extended detection of facial, eyebrow, and forehead motions improves the accuracy of the recognition result; score fusion with different part weights gives high liveness recognition accuracy, and detecting the motion of multiple parts helps improve security.
  • FIG. 5 is a schematic structural diagram of an embodiment of the living body recognition system of the present invention; the embodiment includes:
  • at least two part motion detecting units 1, each used for detecting the corresponding part motion of the face to be tested; the part motion detecting units 1a and 1b in FIG. 5 indicate that two different part motions are detected;
  • a part motion score unit 2, configured to obtain the motion score corresponding to each part motion of the face to be tested based on the motion of each part;
  • a living body recognition score calculation unit 3, configured to calculate the weighted sum of the obtained motion scores corresponding to each part motion and use the calculated sum as the living body recognition score, the weight corresponding to each part motion having been preset in the living body recognition score calculation unit 3;
  • a living body judging unit 4, configured to determine that a face to be tested whose living body recognition score is not less than a preset threshold is a living body.
  • The at least two part motions detected by the at least two part motion detecting units 1 include at least two of eye motion, mouth motion, head motion, eyebrow motion, forehead motion, and facial motion.
  • Each part motion detecting unit 1 comprises:
  • a part detecting module 11, configured to detect, for each video frame extracted from the face video of the face to be tested, the key point positions of the part corresponding to the part motion;
  • a part motion condition obtaining module 12, configured to determine the motion of the part from the degree of change of the key point positions across the extracted video frames.
  • The weight corresponding to each part motion in the living body recognition score calculation unit 3 is set according to the visibility of that part motion; or, it is set according to the detection accuracy of that part motion in the current application scenario.
  • The living body judging unit 4 includes:
  • a living body recognition confidence calculation module 41, configured to calculate the living body recognition confidence of the face to be tested as the ratio of the living body recognition score to the total living body recognition score;
  • a living body judging module 42, configured to determine, when the living body recognition confidence is not less than the preset value, that the living body recognition score is not less than the preset threshold and that the face to be tested is a living body.
  • In operation, the part detecting module 11 of each part motion detecting unit 1 detects the key point positions of the corresponding part in each extracted video frame, and the part motion condition obtaining module 12 determines the motion of that part; the part motion score unit 2 then obtains the motion score of the part motion from that motion, and the living body recognition score calculation unit 3 weights and sums the obtained part motion scores as the living body recognition score.
  • The living body recognition confidence calculation module 41 of the living body judging unit 4 calculates the living body recognition confidence of the face to be tested as the ratio of the living body recognition score to the total living body recognition score, and the living body judging module 42 determines that the face to be tested is a living body when the calculated confidence is not less than the preset value.
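  • A minimal sketch of how the units fit together; the unit names mirror the patent's reference numerals, while the detector functions, weights, and preset value are illustrative:

```python
class LivenessSystem:
    def __init__(self, detectors, weights, e=0.6):
        self.detectors = detectors   # part motion detecting units 1: name -> fn(frames) -> 0 or 1
        self.weights = weights       # preset weights held by score calculation unit 3
        self.e = e                   # preset value used by judging module 42

    def judge(self, frames):
        # Units 1 and 2: detect each part motion and obtain its motion score.
        scores = {name: fn(frames) for name, fn in self.detectors.items()}
        # Unit 3: weighted sum of the motion scores.
        s = sum(self.weights[name] * scores[name] for name in scores)
        # Module 41: confidence as the ratio of the score to the total score.
        f = s / sum(self.weights.values())
        # Module 42: the face is a living body when f is not less than e.
        return f >= self.e
```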
  • Detection by at least two part motion detecting units solves the prior-art problems of a single algorithm and low security, and the scalability is strong; detection based on the motion of facial parts can be realized from two-dimensional images, so the hardware requirements are low.
  • The living body recognition score calculation unit weights the motions of different parts before score fusion, so the accuracy of living body recognition is high; the system thus achieves the beneficial effects of high recognition accuracy, low hardware requirements, and high security.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

A living body recognition method is provided, comprising the steps of: detecting the motion of at least two parts of a face to be tested (S1); acquiring, based on the motion of each part, a motion score corresponding to the motion of each part of the face to be tested (S2); calculating the weighted sum of the motion scores corresponding to the motion of each part, and using the calculated sum as a living body recognition score (S3), the motion of each part having a preset corresponding weight; and determining that a face to be tested whose living body recognition score is not less than a preset threshold is a living body (S4). A corresponding living body recognition system is also provided, comprising at least two part motion detection units, a part motion score acquisition unit, a living body recognition score calculation unit, and a living body determination unit. The method and system have low device hardware requirements, can effectively recognize a living body, have strong extensibility and high security, and are not vulnerable to attacks.
PCT/CN2017/104612 2017-06-02 2017-09-29 Method and system for living body recognition WO2018218839A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710406488.1A CN107358152B (zh) 2017-06-02 2017-06-02 Living body recognition method and system
CN201710406488.1 2017-06-02

Publications (1)

Publication Number Publication Date
WO2018218839A1 (fr)

Family

ID=60272209

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/104612 WO2018218839A1 (fr) 2017-06-02 2017-09-29 Method and system for living body recognition

Country Status (2)

Country Link
CN (1) CN107358152B (fr)
WO (1) WO2018218839A1 (fr)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110321849A (zh) * 2019-07-05 2019-10-11 腾讯科技(深圳)有限公司 Image data processing method and apparatus, and computer-readable storage medium
CN111523344A (zh) * 2019-02-01 2020-08-11 上海看看智能科技有限公司 Human body liveness detection system and method
CN113221771A (zh) * 2021-05-18 2021-08-06 北京百度网讯科技有限公司 Living face recognition method, apparatus, device, storage medium, and program product

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109740429A (zh) * 2017-11-30 2019-05-10 沈阳工业大学 Smiling face recognition method based on changes in the mean value of mouth corner coordinates
CN107977640A (zh) * 2017-12-12 2018-05-01 成都电科海立科技有限公司 Acquisition method based on a vehicle-mounted face recognition image acquisition device
CN108446690B (zh) * 2018-05-31 2021-09-14 北京工业大学 Face liveness detection method based on multi-view dynamic features
CN109582139A (zh) * 2018-11-21 2019-04-05 广东智媒云图科技股份有限公司 Machine interaction start trigger method and system
CN109784302B (zh) * 2019-01-28 2023-08-15 深圳信合元科技有限公司 Face liveness detection method and face recognition device
TWI734454B (zh) * 2020-04-28 2021-07-21 鴻海精密工業股份有限公司 Identity recognition device and identity recognition method
CN111860455B (zh) * 2020-08-04 2023-08-18 中国银行股份有限公司 HTML5 page-based liveness detection method and apparatus

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103440479A (zh) * 2013-08-29 2013-12-11 湖北微模式科技发展有限公司 Living face detection method and system
CN104794464A (zh) * 2015-05-13 2015-07-22 上海依图网络科技有限公司 Liveness detection method based on relative attributes
CN105224921A (zh) * 2015-09-17 2016-01-06 桂林远望智能通信科技有限公司 Face image selection system and processing method
CN105243378A (zh) * 2015-11-13 2016-01-13 清华大学 Living face detection method and device based on eye information
CN105426815A (zh) * 2015-10-29 2016-03-23 北京汉王智远科技有限公司 Liveness detection method and device

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100514353C (zh) * 2007-11-26 2009-07-15 清华大学 Liveness detection method and system based on physiological facial motion
CN104951730B (zh) * 2014-03-26 2018-08-31 联想(北京)有限公司 Lip motion detection method, device, and electronic equipment
CN105989264B (zh) * 2015-02-02 2020-04-07 北京中科奥森数据科技有限公司 Biometric liveness detection method and system
CN105335719A (zh) * 2015-10-29 2016-02-17 北京汉王智远科技有限公司 Liveness detection method and device
CN105243376A (zh) * 2015-11-06 2016-01-13 北京汉王智远科技有限公司 Liveness detection method and device
CN105740688B (zh) * 2016-02-01 2021-04-09 腾讯科技(深圳)有限公司 Unlocking method and device

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103440479A (zh) * 2013-08-29 2013-12-11 湖北微模式科技发展有限公司 Living face detection method and system
CN104794464A (zh) * 2015-05-13 2015-07-22 上海依图网络科技有限公司 Liveness detection method based on relative attributes
CN105224921A (zh) * 2015-09-17 2016-01-06 桂林远望智能通信科技有限公司 Face image selection system and processing method
CN105426815A (zh) * 2015-10-29 2016-03-23 北京汉王智远科技有限公司 Liveness detection method and device
CN105243378A (zh) * 2015-11-13 2016-01-13 清华大学 Living face detection method and device based on eye information

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111523344A (zh) * 2019-02-01 2020-08-11 上海看看智能科技有限公司 Human body liveness detection system and method
CN111523344B (zh) * 2019-02-01 2023-06-23 上海看看智能科技有限公司 Human body liveness detection system and method
CN110321849A (zh) * 2019-07-05 2019-10-11 腾讯科技(深圳)有限公司 Image data processing method and apparatus, and computer-readable storage medium
CN110321849B (zh) * 2019-07-05 2023-12-22 腾讯科技(深圳)有限公司 Image data processing method and apparatus, and computer-readable storage medium
CN113221771A (zh) * 2021-05-18 2021-08-06 北京百度网讯科技有限公司 Living face recognition method, apparatus, device, storage medium, and program product
CN113221771B (zh) * 2021-05-18 2023-08-04 北京百度网讯科技有限公司 Living face recognition method, apparatus, device, storage medium, and program product

Also Published As

Publication number Publication date
CN107358152B (zh) 2020-09-08
CN107358152A (zh) 2017-11-17

Similar Documents

Publication Publication Date Title
WO2018218839A1 (fr) Method and system for living body recognition
CN108182409B (zh) Liveness detection method, apparatus, device, and storage medium
WO2020119450A1 (fr) Risk identification method using a facial image, device, computer apparatus, and storage medium
CN107346422B (zh) Living face recognition method based on blink detection
JP5010905B2 (ja) Face authentication device
KR20220150868A (ko) Method and apparatus for detecting forged faces based on motion vectors and feature vectors
US8891819B2 (en) Line-of-sight detection apparatus and method thereof
US11715231B2 (en) Head pose estimation from local eye region
CN111767900B (zh) Face liveness detection method and apparatus, computer device, and storage medium
CN103440479B (zh) Living face detection method and system
CN106682578B (zh) Low-light face recognition method based on blink detection
CN110223322B (zh) Image recognition method and apparatus, computer device, and storage medium
US20210271865A1 (en) State determination device, state determination method, and recording medium
CN107330370B (zh) Forehead wrinkle motion detection method and device, and living body recognition method and system
CN105095885B (zh) Human eye state detection method and detection device
JP6822482B2 (ja) Gaze estimation device, gaze estimation method, and program recording medium
CN109978884A (zh) Multi-person image scoring method, system, device, and medium based on face analysis
Singh et al. Lie detection using image processing
Rezaei et al. 3D cascade of classifiers for open and closed eye detection in driver distraction monitoring
CN108108651B (zh) Driver inattentive driving detection method and system based on video face analysis
CN104008364A (zh) Face recognition method
CN103544478A (zh) Omnidirectional face detection method and system
CN111860394A (zh) Action liveness recognition method based on pose estimation and motion detection
Zhang et al. A novel efficient method for abnormal face detection in ATM
Zhou Eye-Blink Detection under Low-Light Conditions Based on Zero-DCE

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17911871

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS (EPO FORM 1205A DATED 03.04.2020)

122 Ep: pct application non-entry in european phase

Ref document number: 17911871

Country of ref document: EP

Kind code of ref document: A1