CN107491759B - Mixed mode registration method and device - Google Patents


Publication number
CN107491759B
CN107491759B (application CN201710719086.7A)
Authority
CN
China
Prior art keywords
mode
instruction
face
user
expression
Prior art date
Legal status: Active (assumed by Google; not a legal conclusion)
Application number
CN201710719086.7A
Other languages
Chinese (zh)
Other versions
CN107491759A (en)
Inventor
陈书楷 (Chen Shukai)
杨奇 (Yang Qi)
Current Assignee (as listed; not verified by Google)
Xiamen Entropy Technology Co., Ltd
Original Assignee
Xiamen Zkteco Biometric Identification Technology Co., Ltd.
Priority date (assumed; not a legal conclusion)
Filing date
Publication date
Application filed by Xiamen Zkteco Biometric Identification Technology Co., Ltd.
Priority to CN201710719086.7A
Publication of CN107491759A
Application granted
Publication of CN107491759B


Classifications

    • G - PHYSICS
        • G06 - COMPUTING; CALCULATING OR COUNTING
            • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
                • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
                    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
                        • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
                            • G06V 40/172 - Classification, e.g. identification
                            • G06V 40/174 - Facial expression recognition
                                • G06V 40/176 - Dynamic expression
                    • G06V 40/40 - Spoof detection, e.g. liveness detection
                        • G06V 40/45 - Detection of the body part being alive
        • G07 - CHECKING-DEVICES
            • G07C - TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
                • G07C 1/00 - Registering, indicating or recording the time of events or elapsed time, e.g. time-recorders for work people
                    • G07C 1/10 - Time registers together with the recording, indicating or registering of other data, e.g. of signs of identity
                • G07C 9/00 - Individual registration on entry or exit
                    • G07C 9/30 - Individual registration on entry or exit not involving the use of a pass
                        • G07C 9/32 - Individual registration in combination with an identity check
                            • G07C 9/37 - Identity check using biometric data, e.g. fingerprints, iris scans or voice recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Studio Devices (AREA)
  • Image Analysis (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The invention discloses a mixed mode registration method and device. The method comprises: receiving a face registration instruction input by a user; sending a mixed mode instruction and prompting the user to enter the corresponding expression mode; collecting a face image of the user; and judging whether the user's expression mode in the face image matches the mixed mode instruction, and if so, determining that the face image comes from a living body and performing face registration. The method diversifies the face registration templates.

Description

Mixed mode registration method and device
Technical Field
The invention relates to the technical field of face registration, in particular to a mixed mode registration method and device.
Background
At present, existing attendance and access-control products have security holes: they can be attacked with screen-displayed face photos, paper face photos, infrared-printed photos, and the like. In payment and social applications, the anti-spoofing mode is single; for example, the face authentication when logging in to a payment account may require only a single action such as blinking.
During face detection, a user presenting a forged face can therefore slip through face verification, defeating the detection. Moreover, the captured face features carry no expression, so the face features are uniform and the resulting face registration template is correspondingly single.
Disclosure of Invention
The invention aims to provide a mixed mode registration method and a mixed mode registration device that diversify the face registration templates.
In order to solve the above technical problem, the present invention provides a mixed mode registration method, including:
receiving a face registration instruction input by a user;
sending a mixed mode instruction, and prompting a user to enter a corresponding expression mode according to the mixed mode instruction;
collecting a face image of a user;
and judging whether the expression mode of the user in the face image is matched with the mixed mode instruction, if so, determining that the face image comes from a living body, and registering the face.
Preferably, the mixed mode instruction includes an eye-closing and mouth-opening instruction, a mouth-opening smiling instruction or an eye-closing smiling instruction; the expression mode comprises an eye-closing and mouth-opening mode, a mouth-opening smiling mode or an eye-closing smiling mode.
Preferably, sending the mixed mode instruction and prompting the user to enter the corresponding expression mode according to the mixed mode instruction includes:
sending an eye closing and mouth opening instruction, and prompting a user to enter an eye closing and mouth opening mode according to the eye closing and mouth opening instruction;
or sending a mouth opening smile instruction, and prompting a user to enter a mouth opening smile mode according to the mouth opening smile instruction;
or sending an eye closing smile instruction, and prompting the user to enter an eye closing smile mode according to the eye closing smile instruction.
Preferably, the determining whether the expression mode of the user in the face image matches the mixed mode instruction, and if so, determining that the face image is from a living body, and performing face registration includes:
when the mixed mode instruction is an eye closing and mouth opening instruction, judging whether the expression mode of the user in the face image is an eye closing and mouth opening mode, if so, determining that the face image comes from a living body, and registering the face;
or when the mixed mode instruction is a mouth opening smile instruction, judging whether the expression mode of the user in the face image is a mouth opening smile mode, if so, determining that the face image comes from a living body, and registering the face;
or when the mixed mode instruction is an eye-closing smile instruction, judging whether the expression mode of the user in the face image is an eye-closing smile mode, if so, determining that the face image comes from a living body, and performing face registration.
The present invention also provides a face registration apparatus for implementing the above method, comprising:
the receiving module is used for receiving a face registration instruction input by a user;
the sending module is used for sending a mixed mode instruction and prompting a user to enter a corresponding expression mode according to the mixed mode instruction;
the acquisition module is used for acquiring a face image of a user;
and the judging module is used for judging whether the expression mode of the user in the face image is matched with the mixed mode instruction, and if so, determining that the face image comes from a living body and registering the face.
Preferably, the mixed mode instruction includes an eye-closing and mouth-opening instruction, a mouth-opening smiling instruction or an eye-closing smiling instruction; the expression mode comprises an eye-closing and mouth-opening mode, a mouth-opening smiling mode or an eye-closing smiling mode.
Preferably, the sending module is specifically configured to send an eye closing and mouth opening instruction, and prompt the user to enter an eye closing and mouth opening mode according to the eye closing and mouth opening instruction; or sending a mouth opening smile instruction, and prompting a user to enter a mouth opening smile mode according to the mouth opening smile instruction; or sending an eye closing smile instruction, and prompting the user to enter an eye closing smile mode according to the eye closing smile instruction.
Preferably, the determining module is specifically configured to determine whether an expression pattern of a user in the face image is an eye-closing and mouth-opening pattern when the mixed mode instruction is an eye-closing and mouth-opening instruction, and if so, determine that the face image is from a living body, and perform face registration; or when the mixed mode instruction is a mouth opening smile instruction, judging whether the expression mode of the user in the face image is a mouth opening smile mode, if so, determining that the face image comes from a living body, and registering the face; or when the mixed mode instruction is an eye-closing smile instruction, judging whether the expression mode of the user in the face image is an eye-closing smile mode, and if so, determining that the face image comes from a living body to register the face.
The invention provides a mixed mode registration method and device: a face registration instruction input by a user is received; a mixed mode instruction is sent, prompting the user to enter the corresponding expression mode; a face image of the user is collected; and the user's expression mode in the face image is checked against the mixed mode instruction, the face image being determined to come from a living body and face registration performed when they match. A live user can follow the prompt, so the expression mode in the collected image matches the instruction and the face is determined to be a living body. A forged face cannot enter the prompted expression mode, so its expression fails to match the instruction and it is determined not to be a living body. This closes the loophole of a forged face passing face verification; at the same time, prompting the user through varied expression modes completes face registration and diversifies the face registration templates.
Drawings
To illustrate the embodiments of the present invention or the prior-art solutions more clearly, the drawings used in their description are briefly introduced below. The drawings described here show only embodiments of the present invention; those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a flow chart of a mixed mode registration method provided by the present invention;
FIG. 2 is a face image of a user in an eye-closing and mouth-opening mode;
FIG. 3 is a face image of a user in a smiling mode with open mouth;
FIG. 4 is a face image of a user in a closed-eye smile mode;
fig. 5 is a schematic structural diagram of a mixed-mode registration apparatus provided in the present invention.
Detailed Description
The core of the invention is to provide a mixed mode registration method and a mixed mode registration device so as to realize the diversification of the provided face registration template.
To make the technical solutions of the present invention better understood, they are described below clearly and completely with reference to the drawings of the embodiments. The described embodiments are only a part of the embodiments of the present invention, not all of them; all other embodiments that a person skilled in the art can derive from them without creative effort fall within the protection scope of the present invention.
Referring to fig. 1, fig. 1 is a flowchart of a mixed mode registration method provided in the present invention, the method includes:
S11: receiving a face registration instruction input by a user;
S12: sending a mixed mode instruction, and prompting the user to enter the corresponding expression mode according to the mixed mode instruction;
S13: collecting a face image of the user;
S14: judging whether the expression mode of the user in the face image matches the mixed mode instruction, and if so, determining that the face image comes from a living body and performing face registration.
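The four steps S11 to S14 can be sketched in code. This is a minimal illustration only: the mode names, the injected callables, and the return shape are assumptions of this sketch, not the patent's implementation.

```python
import random

# Hypothetical sketch of steps S11-S14. Mode names, callables and the
# return shape are illustrative assumptions, not the patent's design.
MIXED_MODES = ["close_eyes_open_mouth", "open_mouth_smile", "close_eyes_smile"]

def register_face(capture_image, classify_expression, rng=random):
    """Run one mixed mode registration attempt.

    capture_image:       callable that prompts the user and returns a face image (S12+S13).
    classify_expression: callable mapping an image to one of MIXED_MODES, or None.
    """
    instruction = rng.choice(MIXED_MODES)          # S12: send a random mixed mode instruction
    image = capture_image(instruction)             # S13: collect the user's face image
    if classify_expression(image) == instruction:  # S14: expression mode matches instruction?
        return {"live": True, "registered_mode": instruction}
    return {"live": False, "registered_mode": None}
```

A cooperative live user ends with `live == True`; a static forged face, which cannot change expression on demand, fails the match.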
Thus, in the method, a mixed mode instruction is sent to prompt the user to enter the corresponding expression mode, and the user's expression mode in the face image is checked against the instruction. When the collected face is a living body, the user follows the prompt, the expression mode in the collected image matches the instruction, and the face is determined to be a living body. A forged face cannot enter the prompted expression mode, its expression fails to match the instruction, and it is determined not to be a living body. This avoids the loophole of a forged face passing face verification, while prompting the user through the expression modes completes face registration and diversifies the face registration templates.
Based on the method, specifically, the mixed mode instruction includes an eye-closing and mouth-opening instruction, a mouth-opening smiling instruction or an eye-closing smiling instruction; the expression mode comprises an eye-closing and mouth-opening mode, a mouth-opening smiling mode or an eye-closing smiling mode.
Further, the process of step S12 specifically includes: sending an eye closing and mouth opening instruction, and prompting a user to enter an eye closing and mouth opening mode according to the eye closing and mouth opening instruction; or sending a mouth opening smile instruction, and prompting a user to enter a mouth opening smile mode according to the mouth opening smile instruction; or sending an eye closing smile instruction, and prompting the user to enter an eye closing smile mode according to the eye closing smile instruction.
Further, the process of step S14 specifically includes: when the mixed mode instruction is an eye closing and mouth opening instruction, judging whether the expression mode of the user in the face image is an eye closing and mouth opening mode, if so, determining that the face image comes from a living body, and registering the face; or when the mixed mode instruction is a mouth opening smile instruction, judging whether the expression mode of the user in the face image is a mouth opening smile mode, if so, determining that the face image comes from a living body, and registering the face; or when the mixed mode instruction is an eye-closing smile instruction, judging whether the expression mode of the user in the face image is an eye-closing smile mode, if so, determining that the face image comes from a living body, and performing face registration.
To do so, eye key points and lip key points are first obtained from the face image; the user's expression mode is then determined from these key points; and finally that expression mode is checked against the mixed mode instruction.
If the mixed mode instruction is an eye closing and mouth opening instruction and the expression mode is an eye closing and mouth opening mode, the expression mode is matched with the mixed mode instruction; if the mixed mode instruction is a mouth opening smile instruction and the expression mode is a mouth opening smile mode, matching the expression mode with the mixed mode instruction; and if the mixed mode instruction is the eye closing smile instruction and the expression mode is the eye closing smile mode, matching the expression mode with the mixed mode instruction.
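The one-to-one matching rule above can be written as a small lookup. The string identifiers are assumed names for this sketch, not terms from the patent.

```python
# Sketch of the instruction-to-expression-mode matching rule: each mixed mode
# instruction matches exactly the expression mode of the same kind.
INSTRUCTION_TO_MODE = {
    "close_eyes_open_mouth_instruction": "close_eyes_open_mouth_mode",
    "open_mouth_smile_instruction": "open_mouth_smile_mode",
    "close_eyes_smile_instruction": "close_eyes_smile_mode",
}

def matches(instruction, observed_mode):
    """True only when the observed expression mode pairs with the instruction."""
    return INSTRUCTION_TO_MODE.get(instruction) == observed_mode
```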
Further, after the face registration of step S14, the method further includes: extracting a face feature template and storing it; and comparing the face feature template with the registered standard template, the face being successfully recognized if they are consistent.
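The compare step can be sketched with a plain vector similarity. The patent does not specify the feature extractor or the metric, so the cosine measure and the 0.8 acceptance threshold below are assumptions.

```python
import math

# Hedged sketch of comparing an extracted face feature template with the
# stored registration template. Cosine similarity and the 0.8 threshold
# are assumptions; the patent does not fix a metric.
MATCH_THRESHOLD = 0.8

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def recognize(candidate_template, registered_template):
    """Recognition succeeds when the two templates are consistent enough."""
    return cosine_similarity(candidate_template, registered_template) >= MATCH_THRESHOLD
```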
Specifically, to judge whether the user's expression mode is the eye-closing and mouth-opening mode, the distance between the key points at the middle of the upper and lower eyelids and the distance between the key points at the middle of the upper and lower lips are calculated first; if the eye distance is close to 0 and the mouth distance is greater than a preset threshold, the expression mode is the eye-closing and mouth-opening mode.
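The distance rule reads directly as code. The pixel thresholds below are assumed tuning values; the patent only says "close to 0" and "greater than a preset threshold".

```python
import math

# Sketch of the closed-eyes / open-mouth check from eyelid and lip key points,
# given as (x, y) coordinates. EYE_EPS and MOUTH_THRESHOLD are assumed values.
EYE_EPS = 2.0           # "eye distance close to 0", in pixels
MOUTH_THRESHOLD = 10.0  # preset mouth-opening threshold, in pixels

def keypoint_distance(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def is_eyes_closed_mouth_open(upper_eyelid, lower_eyelid, upper_lip, lower_lip):
    eye_gap = keypoint_distance(upper_eyelid, lower_eyelid)
    mouth_gap = keypoint_distance(upper_lip, lower_lip)
    return eye_gap <= EYE_EPS and mouth_gap > MOUTH_THRESHOLD
```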
Thus, when the collected face is a living body, the user performs the eye-closing and mouth-opening action when prompted, and the eyes and lips in the collected image are in the closed-eye, open-mouth state; the eye and lip key points confirm this mode, and the face is determined to be a living body. A forged face cannot reach the closed-eye, open-mouth state, the key points show the mode is not met, and the face is determined not to be a living body. This avoids the loophole of a forged face passing face verification, while prompting the user into the eye-closing and mouth-opening mode completes face feature template extraction and diversifies the face registration templates.
Judging whether the user's expression mode is the mouth-opening smile mode involves two checks: whether the facial expression is a smile, and whether the mouth is open. The two checks have no fixed order; they may run in either order or simultaneously. Specifically, a deep learning model judges whether the facial expression is a smile; the distance between the key points at the middle of the upper and lower lips is calculated and compared with a preset threshold; and if the expression is a smile and the mouth distance exceeds the threshold, the user's expression mode is determined to be the mouth-opening smile mode.
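A sketch of the two checks follows, with the deep learning smile classifier stubbed as any callable returning a smile probability. The 0.5 cut-off and the mouth threshold are assumed values not fixed by the patent.

```python
# Sketch of the open-mouth smile judgment. The deep learning smile classifier
# is stubbed as a callable (image -> probability); thresholds are assumptions.
SMILE_CUTOFF = 0.5
MOUTH_THRESHOLD = 10.0  # pixels

def is_open_mouth_smile(image, smile_model, upper_lip, lower_lip):
    # The two checks are order-independent and could also run in parallel.
    is_smiling = smile_model(image) > SMILE_CUTOFF
    mouth_gap = abs(lower_lip[1] - upper_lip[1])  # vertical lip key point distance
    return is_smiling and mouth_gap > MOUTH_THRESHOLD
```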
Thus, when the collected face is a living body, the user performs the mouth-opening smile action when prompted, the face in the collected image is in the mouth-opening smile state, the deep learning model and the lip key points confirm the mode, and the face is determined to be a living body. A forged face cannot perform a mouth-opening smile, the mode is judged not met, and the face is determined not to be a living body. This avoids the loophole of a forged face passing face verification, while prompting the user into the mouth-opening smile mode completes face feature template extraction and diversifies the face registration templates.
Judging whether the user's expression mode is the eye-closing smile mode likewise involves two checks: whether the facial expression is a smile, and whether the eyes are closed. The two checks have no fixed order; they may run in either order or simultaneously. Specifically, a deep learning model judges whether the facial expression is a smile; the distance between the key points at the middle of the upper and lower eyelids is calculated and checked for being close to 0; and if the expression is a smile and the eye distance is close to 0, the user's expression mode is determined to be the eye-closing smile mode.
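The closed-eye smile check mirrors the open-mouth one, swapping the lip distance for the eyelid gap; the cut-off values are again assumptions of this sketch.

```python
# Sketch of the closed-eye smile judgment: smile probability from a stubbed
# classifier plus an eyelid gap "close to 0". Both cut-off values are assumed.
SMILE_CUTOFF = 0.5
EYE_EPS = 2.0  # pixels

def is_closed_eye_smile(image, smile_model, upper_eyelid, lower_eyelid):
    is_smiling = smile_model(image) > SMILE_CUTOFF
    eye_gap = abs(lower_eyelid[1] - upper_eyelid[1])  # vertical eyelid key point distance
    return is_smiling and eye_gap <= EYE_EPS
```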
Thus, when the collected face is a living body, the user performs the eye-closing smile action when prompted, the face in the collected image is in the eye-closing smile state, the eye key points and the deep learning model confirm the mode, and the face is determined to be a living body. A forged face cannot perform an eye-closing smile, the mode is judged not met, and the face is determined not to be a living body. This avoids the loophole of a forged face passing face verification, while prompting the user into the eye-closing smile mode completes face feature template extraction and diversifies the face registration templates.
In the method, an instruction is sent to the user at random, and the user must cooperate so that the corresponding face image, i.e. a mixed image containing both an expression and a facial action, is collected; this constitutes a novel anti-spoofing approach. Collecting such a mixed-mode image of the user's expression and facial action during registration or recognition achieves liveness detection and strengthens the anti-counterfeiting capability.
Based on the method, face registration specifically proceeds as follows:
1. During collection, the system randomly sends a mixed mode instruction. For registration, the user clicks to enter registration, and the system announces the start of face registration and prompts the user that a mixed mode image will be acquired.
2. The user must present the mixed mode image according to the instruction. For recognition, the user clicks face login or verification, and the system starts collecting the face and prompts the user to act according to the instruction.
3. Whether the images come from a living body is judged from the images acquired with the user's cooperation. For example, if the system sends 3 instructions and the user follows all 3 times, the user is a living body; otherwise not.
4. If the subject is a living body, the process proceeds normally; otherwise registration or recognition is refused and an alarm is raised.
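The steps above can be simulated as a small session loop. The mode names and the default count of 3 follow the example in the text; everything else is an assumption of this sketch.

```python
import random

# Simulation of the collection protocol: the system issues random mixed mode
# instructions, and the subject is a living body only if every one is followed.
MODES = ["close_eyes_open_mouth", "open_mouth_smile", "close_eyes_smile"]

def liveness_session(respond, n_instructions=3, rng=random):
    """respond: callable mapping an instruction to the expression observed on camera."""
    for _ in range(n_instructions):
        instruction = rng.choice(MODES)
        if respond(instruction) != instruction:
            return False  # refuse registration/recognition and raise an alarm
    return True           # all instructions followed -> living body
```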
If the liveness judgment succeeds, the face feature template is extracted to complete registration. For recognition, the extracted face feature template is compared with the registered template; recognition succeeds if they match and fails otherwise.
Specifically, as shown in fig. 2, the eye-closing and mouth-opening mode is judged as follows: using the key points marked as black dots on the face in the figure, the distance between the key points at the middle of the upper and lower eyelids and the distance between the key points at the middle of the upper and lower lips are calculated first; then, against a set distance threshold, the mode is satisfied if the eye distance is close to 0 and the mouth distance exceeds the threshold.
For the open-mouth smile mode, as shown in fig. 3, the smile expression of the face is judged first; then the distance between the middle key points of the upper and lower lips is calculated from the mouth key points (the black dot marks on the face in the figure); the mode is met if the expression is a smile and the mouth distance exceeds the threshold.
For the closed-eye smile mode, as shown in fig. 4, the smile expression is judged first; then the distance between the key points at the middle of the upper and lower eyelids is calculated from the eye key points (the black dot marks in the figure); the mode is met if the expression is a smile and the eye distance is close to 0.
For anti-spoofing, a multi-state judgment may be used: random instructions prompt the user through 2 or more states, and registration proceeds only if all are verified, otherwise it is refused. A single-state judgment may also be used: one random instruction prompts the user through one of the 3 states, and recognition or verification proceeds if it succeeds, otherwise verification fails.
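The two policies can be stated as predicates over per-state pass/fail results; representing the results as a boolean list is an assumption of this sketch.

```python
# Sketch of the two anti-spoofing policies: registration uses a multi-state
# judgment (2 or more prompted states, all must pass), while recognition may
# use a single random state. Each boolean is the outcome of one state check.
def multi_state_pass(results):
    """Registration: at least 2 prompted states, every one verified."""
    return len(results) >= 2 and all(results)

def single_state_pass(results):
    """Recognition/verification: exactly one prompted state, and it passed."""
    return len(results) == 1 and bool(results[0])
```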
The method combines facial expression with facial key points, judges whether the 3 states are met according to random instructions, and blocks spoofing attacks in a mixed mode, thereby closing the security hole of forged faces and improving detection accuracy.
Referring to fig. 5, fig. 5 is a schematic structural diagram of a mixed mode registration apparatus provided in the present invention, for implementing the method, the apparatus includes:
the receiving module 101 is used for receiving a face registration instruction input by a user;
a sending module 102, configured to send a mixed mode instruction, and prompt a user to enter a corresponding expression mode according to the mixed mode instruction;
the acquisition module 103 is used for acquiring a face image of a user;
and the judging module 104 is used for judging whether the expression mode of the user in the face image is matched with the mixed mode instruction, and if so, determining that the face image comes from a living body and performing face registration.
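The four modules of fig. 5 can be wired together in a structural sketch; injecting the module behaviours as callables is an assumption of this illustration rather than the patent's concrete design.

```python
# Structural sketch of the apparatus of fig. 5. Each numbered module is
# injected as a callable; concrete behaviours are assumed for illustration.
class MixedModeRegistrationDevice:
    def __init__(self, receive, send_prompt, collect, judge):
        self.receive = receive          # receiving module 101
        self.send_prompt = send_prompt  # sending module 102
        self.collect = collect          # acquisition module 103
        self.judge = judge              # judging module 104

    def run(self):
        self.receive()                    # face registration instruction from the user
        instruction = self.send_prompt()  # mixed mode instruction + expression prompt
        image = self.collect()            # face image of the user
        return self.judge(image, instruction)  # match -> living body -> register
```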
Thus, in the device, a mixed mode instruction is sent to prompt the user to enter the corresponding expression mode, and the user's expression mode in the face image is checked against the instruction. When the collected face is a living body, the user follows the prompt, the expression mode in the collected image matches the instruction, and the face is determined to be a living body. A forged face cannot enter the prompted expression mode, its expression fails to match the instruction, and it is determined not to be a living body. This avoids the loophole of a forged face passing face verification, while prompting the user through the expression modes completes face registration and diversifies the face registration templates.
Based on the above device, specifically, the mixed mode instruction includes an eye-closing and mouth-opening instruction, a mouth-opening smiling instruction, or an eye-closing smiling instruction; the expression mode comprises an eye-closing and mouth-opening mode, a mouth-opening smiling mode or an eye-closing smiling mode.
Further, the sending module is specifically configured to send an eye closing and mouth opening instruction, and prompt the user to enter an eye closing and mouth opening mode according to the eye closing and mouth opening instruction; or sending a mouth opening smile instruction, and prompting a user to enter a mouth opening smile mode according to the mouth opening smile instruction; or sending an eye closing smile instruction, and prompting the user to enter an eye closing smile mode according to the eye closing smile instruction.
Further, the judging module is specifically configured to: when the mixed mode instruction is an eye-closing and mouth-opening instruction, judge whether the expression mode of the user in the face image is the eye-closing and mouth-opening mode, and if so, determine that the face image comes from a living body and perform face registration; or, when the mixed mode instruction is a mouth-opening smile instruction, judge whether the expression mode of the user in the face image is the mouth-opening smile mode, and if so, determine that the face image comes from a living body and perform face registration; or, when the mixed mode instruction is an eye-closing smile instruction, judge whether the expression mode of the user in the face image is the eye-closing smile mode, and if so, determine that the face image comes from a living body and perform face registration.
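A minimal sketch of the key-point distance checks used by the judging module for the eye-closing and mouth-opening mode (the landmark layout, the threshold value, and the "close to 0" tolerance are assumptions, not specified by the patent):

```python
MOUTH_OPEN_THRESHOLD = 15.0  # assumed preset threshold, in pixels
EYE_CLOSED_TOLERANCE = 2.0   # assumed tolerance for "distance close to 0"

def eye_distance(landmarks):
    """Distance between key points in the middle of the upper and lower eyelids."""
    (ux, uy), (lx, ly) = landmarks["upper_eyelid"], landmarks["lower_eyelid"]
    return ((ux - lx) ** 2 + (uy - ly) ** 2) ** 0.5

def mouth_distance(landmarks):
    """Distance between key points in the middle of the upper and lower lips."""
    (ux, uy), (lx, ly) = landmarks["upper_lip"], landmarks["lower_lip"]
    return ((ux - lx) ** 2 + (uy - ly) ** 2) ** 0.5

def is_eyes_closed_mouth_open(landmarks):
    """Eyelid distance close to 0 and lip distance above the preset threshold."""
    return (eye_distance(landmarks) < EYE_CLOSED_TOLERANCE
            and mouth_distance(landmarks) > MOUTH_OPEN_THRESHOLD)
```

In practice the landmark dictionary would come from a facial key-point detector; only the geometric decision rule is shown here.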
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and for the parts that are the same or similar, the embodiments may be referred to one another. Since the device disclosed in the embodiments corresponds to the method disclosed in the embodiments, its description is relatively brief; for relevant details, refer to the description of the method.
Those skilled in the art will further appreciate that the various illustrative units and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or a combination of both. To clearly illustrate this interchangeability of hardware and software, the various illustrative components and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends on the particular application and the design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in random access memory (RAM), read-only memory (ROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The mixed mode registration method and device provided by the present invention have been described in detail above. Specific examples are used herein to explain the principles and embodiments of the invention; these explanations are intended only to assist in understanding the method and its core concepts. Those skilled in the art may make various improvements and modifications to the present invention without departing from its principles, and such improvements and modifications also fall within the scope of the claims of the present invention.
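The claims below further describe extracting a face feature template after registration and comparing it with the registered standard template during recognition. The patent does not specify how templates are compared; the cosine-similarity metric and acceptance threshold in this sketch are assumptions made purely for illustration:

```python
MATCH_THRESHOLD = 0.9  # assumed acceptance threshold, not specified by the patent

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors of equal length."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(y * y for y in b) ** 0.5
    return dot / (norm_a * norm_b)

def templates_consistent(feature_template, standard_template):
    """Return True if the extracted template is consistent with the
    registered standard template, i.e. the face is successfully recognized."""
    return cosine_similarity(feature_template, standard_template) >= MATCH_THRESHOLD
```

Real systems would use embeddings from a face recognition network; plain Python lists stand in for those vectors here.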

Claims (2)

1. A mixed mode registration method, comprising:
receiving a face registration instruction input by a user;
sending a mixed mode instruction, and prompting a user to enter a corresponding expression mode according to the mixed mode instruction;
collecting a face image of a user;
judging whether the expression mode of the user in the face image matches the mixed mode instruction, and if so, determining that the face image comes from a living body and performing face registration;
after the face registration, the method further comprises: extracting a face feature template and storing the face feature template; and comparing the face feature template with the registered standard template, wherein if the two are consistent, the face is successfully recognized;
the mixed mode instruction comprises an eye closing and mouth opening instruction, a mouth opening smiling instruction or an eye closing smiling instruction; the expression mode comprises an eye closing and mouth opening mode, a mouth opening smiling mode or an eye closing smiling mode;
wherein sending the mixed mode instruction and prompting the user to enter the corresponding expression mode according to the mixed mode instruction comprises:
sending an eye closing and mouth opening instruction, and prompting a user to enter an eye closing and mouth opening mode according to the eye closing and mouth opening instruction;
or sending a mouth opening smile instruction, and prompting a user to enter a mouth opening smile mode according to the mouth opening smile instruction;
or sending an eye closing smile instruction, and prompting a user to enter an eye closing smile mode according to the eye closing smile instruction;
the judging whether the expression mode of the user in the face image is matched with the mixed mode instruction or not, if so, determining that the face image comes from a living body, and registering the face comprises the following steps:
when the mixed mode instruction is an eye closing and mouth opening instruction, judging whether the expression mode of the user in the face image is an eye closing and mouth opening mode, if so, determining that the face image comes from a living body, and performing face registration, wherein the method specifically comprises the following steps: calculating the distance between key points in the middle of upper and lower eyelids of the eyes and the distance between key points in the middle of upper and lower lips, wherein if the distance between the eyes is close to 0 and the distance between the mouths is greater than a preset threshold value, the expression mode is an eye-closing mouth-opening mode;
or, when the mixed mode instruction is a mouth opening smile instruction, judging whether the expression mode of the user in the face image is a mouth opening smile mode, if so, determining that the face image comes from a living body, and performing face registration, specifically comprising: judging whether the facial expression of the face is smiling expression or not by adopting a deep learning model; calculating the distance between key points in the middle of the upper lip and the lower lip, and judging whether the distance between the mouths is greater than a preset threshold value; if the facial expression of the face is the smiling expression and the mouth distance is larger than a preset threshold value, determining that the expression mode of the user is a mouth opening smiling mode;
or when the mixed mode instruction is an eye-closing smile instruction, judging whether the expression mode of the user in the face image is an eye-closing smile mode, if so, determining that the face image comes from a living body, and registering the face; the method specifically comprises the following steps: judging whether the facial expression of the face is smiling expression or not by adopting a deep learning model; calculating the distance between key points in the middle of upper and lower eyelids of the eyes, and judging whether the distance between the eyes is close to 0; and if the facial expression of the face is smile expression and the eye distance is close to 0, determining that the expression mode of the user is an eye-closing smile mode.
2. A face registration apparatus for implementing the method of claim 1, comprising:
the receiving module is used for receiving a face registration instruction input by a user;
the sending module is used for sending a mixed mode instruction and prompting a user to enter a corresponding expression mode according to the mixed mode instruction;
the acquisition module is used for acquiring a face image of a user;
the judging module is used for judging whether the expression mode of the user in the face image matches the mixed mode instruction, and if so, determining that the face image comes from a living body and performing face registration;
wherein, after the face registration, the apparatus further: extracts a face feature template and stores the face feature template; and compares the face feature template with the registered standard template, wherein if the two are consistent, the face is successfully recognized;
the mixed mode instruction comprises an eye closing and mouth opening instruction, a mouth opening smiling instruction or an eye closing smiling instruction; the expression mode comprises an eye closing and mouth opening mode, a mouth opening smiling mode or an eye closing smiling mode;
the sending module is specifically used for sending an eye closing and mouth opening instruction and prompting a user to enter an eye closing and mouth opening mode according to the eye closing and mouth opening instruction; or sending a mouth opening smile instruction, and prompting a user to enter a mouth opening smile mode according to the mouth opening smile instruction; or sending an eye closing smile instruction, and prompting a user to enter an eye closing smile mode according to the eye closing smile instruction;
the judging module is specifically configured to: when the mixed mode instruction is an eye closing and mouth opening instruction, judge whether the expression mode of the user in the face image is the eye closing and mouth opening mode, and if so, determine that the face image comes from a living body and perform face registration, specifically by calculating the distance between key points in the middle of the upper and lower eyelids and the distance between key points in the middle of the upper and lower lips, wherein if the eyelid distance is close to 0 and the lip distance is greater than a preset threshold, the expression mode is the eye closing and mouth opening mode; or, when the mixed mode instruction is a mouth opening smile instruction, judge whether the expression mode of the user in the face image is the mouth opening smile mode, and if so, determine that the face image comes from a living body and perform face registration, specifically by judging, with a deep learning model, whether the facial expression is a smiling expression, calculating the distance between key points in the middle of the upper and lower lips, and judging whether the lip distance is greater than a preset threshold, and, if the facial expression is a smiling expression and the lip distance is greater than the preset threshold, determining that the expression mode of the user is the mouth opening smile mode; or, when the mixed mode instruction is an eye closing smile instruction, judge whether the expression mode of the user in the face image is the eye closing smile mode, and if so, determine that the face image comes from a living body and perform face registration, specifically by judging, with a deep learning model, whether the facial expression is a smiling expression, calculating the distance between key points in the middle of the upper and lower eyelids, and judging whether the eyelid distance is close to 0, and, if the facial expression is a smiling expression and the eyelid distance is close to 0, determining that the expression mode of the user is the eye closing smile mode.
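As an illustration only (not claim language), the mouth-opening smile branch above combines a deep-learning smile judgment with the lip key-point distance check. In this sketch the classifier is stubbed out as a probability input, and both threshold values are assumptions:

```python
def is_open_mouth_smile(landmarks, smile_probability):
    """Open-mouth smile check: a deep learning model judges the smile,
    and the lip key-point distance must exceed the preset threshold.

    landmarks: dict with "upper_lip" / "lower_lip" key-point coordinates.
    smile_probability: output of a (stubbed) smile classifier in [0, 1].
    """
    SMILE_THRESHOLD = 0.5        # assumed decision threshold for the model
    MOUTH_OPEN_THRESHOLD = 15.0  # assumed lip-distance threshold, in pixels

    (ux, uy), (lx, ly) = landmarks["upper_lip"], landmarks["lower_lip"]
    lip_distance = ((ux - lx) ** 2 + (uy - ly) ** 2) ** 0.5

    # Both conditions must hold: smiling expression AND mouth open.
    return smile_probability > SMILE_THRESHOLD and lip_distance > MOUTH_OPEN_THRESHOLD
```

The eye-closing smile branch is symmetric, replacing the lip-distance test with an eyelid distance close to 0.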
CN201710719086.7A 2017-08-21 2017-08-21 Mixed mode registration method and device Active CN107491759B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710719086.7A CN107491759B (en) 2017-08-21 2017-08-21 Mixed mode registration method and device


Publications (2)

Publication Number Publication Date
CN107491759A CN107491759A (en) 2017-12-19
CN107491759B true CN107491759B (en) 2020-07-03

Family

ID=60646217

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710719086.7A Active CN107491759B (en) 2017-08-21 2017-08-21 Mixed mode registration method and device

Country Status (1)

Country Link
CN (1) CN107491759B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111105539B (en) * 2018-10-26 2021-08-31 珠海格力电器股份有限公司 Access control management method and device
CN110164007B (en) * 2019-05-21 2022-02-01 一石数字技术成都有限公司 Access control system based on identity evidence and face image incidence relation

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103678984A (en) * 2013-12-20 2014-03-26 湖北微模式科技发展有限公司 Method for achieving user authentication by utilizing camera
CN104751110A (en) * 2013-12-31 2015-07-01 汉王科技股份有限公司 Bio-assay detection method and device

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103440479B (en) * 2013-08-29 2016-12-28 湖北微模式科技发展有限公司 A kind of method and system for detecting living body human face
CN106169075A (en) * 2016-07-11 2016-11-30 北京小米移动软件有限公司 Auth method and device
CN106650646A (en) * 2016-12-09 2017-05-10 南京合荣欣业金融软件有限公司 Action recognition based living body face recognition method and system



Similar Documents

Publication Publication Date Title
CN106302330B (en) Identity verification method, device and system
CN105468950B (en) Identity authentication method and device, terminal and server
WO2019127365A1 (en) Face living body detection method, electronic device and computer program product
WO2019127262A1 (en) Cloud end-based human face in vivo detection method, electronic device and program product
WO2018082011A1 (en) Living fingerprint recognition method and device
CN111881726B (en) Living body detection method and device and storage medium
CN106599660A (en) Terminal safety verification method and terminal safety verification device
CN109756458B (en) Identity authentication method and system
CN105718874A (en) Method and device of in-vivo detection and authentication
CN111861240A (en) Suspicious user identification method, device, equipment and readable storage medium
WO2018129687A1 (en) Fingerprint anti-counterfeiting method and device
CN101393598A (en) Starting and unblock method decided by human face identification by utilizing mobile phone cam
CN111126366B (en) Method, device, equipment and storage medium for distinguishing living human face
CN112231668A (en) User identity authentication method based on keystroke behavior, electronic equipment and storage medium
CN105975838A (en) Secure chip, biological feature identification method and biological feature template registration method
CN107491759B (en) Mixed mode registration method and device
CN111046810A (en) Data processing method and processing device
CN110633642A (en) Identity information verification method and device, terminal equipment and storage medium
CN106650703A (en) Palm anti-counterfeiting method and apparatus
CN110599187A (en) Payment method and device based on face recognition, computer equipment and storage medium
CN117853103A (en) Payment system activation method based on intelligent bracelet
Rufai et al. A biometric model for examination screening and attendance monitoring in Yaba College of Technology
CN116453196B (en) Face recognition method and system
US20120219192A1 (en) Method of controlling a session at a self-service terminal, and a self-service terminal
CN113158958B (en) Traffic method and related device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP03 Change of name, title or address

Address after: Room 1301, No. 132, Fengqi Road, Software Park Phase III, Xiamen City, Fujian Province

Patentee after: Xiamen Entropy Technology Co., Ltd.

Address before: Room 2001, No. 8 North Street, Software Park Phase III, Xiamen City, Fujian Province, 361000

Patentee before: XIAMEN ZKTECO BIOMETRIC IDENTIFICATION TECHNOLOGY Co., Ltd.