WO2022048352A1 - Unlocking method and apparatus based on facial expressions, computer device, and storage medium - Google Patents

Unlocking method and apparatus based on facial expressions, computer device, and storage medium

Info

Publication number: WO2022048352A1
Application number: PCT/CN2021/108879
Authority: WO - WIPO (PCT)
Prior art keywords: unlocking, expression, facial, node, unlocked
Other languages: English (en), French (fr)
Inventor: 吴雪蕾
Original Assignee: 腾讯科技(深圳)有限公司 (Tencent Technology (Shenzhen) Co., Ltd.)
Priority date: (the priority date is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed)
Application filed by 腾讯科技(深圳)有限公司
Priority to EP21863420.2A (published as EP4099198A4)
Publication of WO2022048352A1
Priority to US 17/893,028 (published as US20230100874A1)

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 - Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30 - Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31 - User authentication
    • G06F21/32 - User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • G06F21/36 - User authentication by graphic or iconic representation
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 - Feature extraction; Face representation
    • G06V40/171 - Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G06V40/174 - Facial expression recognition
    • G06V40/20 - Movements or behaviour, e.g. gesture recognition
    • G06V40/28 - Recognition of hand or arm movements, e.g. recognition of deaf sign language

Definitions

  • the present application relates to the technical field of artificial intelligence, and in particular to an unlocking method, apparatus, computer device, and storage medium based on facial expressions.
  • in conventional schemes, unlocking is usually performed by entering the corresponding password, entering the user's fingerprint, or collecting a face image for face recognition.
  • however, with face recognition, a third party can unlock the device through a photo or a face model, creating hidden dangers for information security.
  • a facial expression-based unlocking method is provided.
  • a facial expression-based unlocking method executed by a terminal, the method comprising: displaying an expression unlocking page; displaying an unlocking node sequence on the expression unlocking page; generating an unlocking state identifier at the unlocking node to be processed in the unlocking node sequence based on the facial expression in a facial image collected in real time; and completing the unlocking based on the unlocking state identifiers and the matching between the facial expressions in the corresponding facial images and the corresponding target expressions.
  • a facial expression-based unlocking device comprising:
  • the display module is used to display the expression unlocking page;
  • the first display module is used to display the unlocking node sequence on the expression unlocking page;
  • a generating module, used to generate an unlocking state identifier at the unlocking node to be processed in the unlocking node sequence based on the facial expression in the facial image collected in real time;
  • the unlocking module is used to complete the unlocking based on the unlocking state identifiers and the matching between the facial expressions in the corresponding facial images and the corresponding target expressions.
  • a computer device includes a memory and a processor, the memory storing a computer program; when executing the computer program, the processor implements the steps of the above facial expression-based unlocking method.
  • a computer-readable storage medium stores a computer program which, when executed by a processor, implements the steps of the above facial expression-based unlocking method.
  • a computer program product or computer program comprises computer instructions stored in a computer-readable storage medium; a processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, so that the computer device performs the above facial expression-based unlocking method.
  • FIG. 1 is the application environment diagram of the facial expression-based unlocking method in one embodiment;
  • FIG. 2 is a schematic flowchart of a facial expression-based unlocking method in one embodiment;
  • FIG. 3 is a schematic diagram of an expression unlocking page in one embodiment;
  • FIG. 4 is a schematic diagram of entering an expression unlocking page by triggering a face detection control in one embodiment;
  • FIG. 5 is a schematic diagram of prompting to adjust the collection orientation in one embodiment;
  • FIG. 6 is a schematic diagram of recognizing a facial image in one embodiment and superimposing the expression model diagram obtained by recognition on the facial expression;
  • FIG. 7 is a schematic diagram of facial feature points in one embodiment;
  • FIG. 8 is a schematic flowchart of an unlocking step combining facial expressions and human faces in one embodiment;
  • FIG. 9 is a schematic flowchart of an unlocking step combining expressions and gestures in one embodiment;
  • FIG. 10 is a schematic flowchart of an unlocking step using at least two facial expressions per unlocking node in one embodiment;
  • FIG. 11 is a schematic flowchart of the steps of entering an expression map in one embodiment;
  • FIG. 12 is a schematic diagram of entering an expression entry page through a face entry control in one embodiment;
  • FIG. 13 is a schematic diagram of performing expression map entry after expression recognition in one embodiment;
  • FIG. 14 is a schematic diagram of combining and sorting the expression identifiers in one embodiment;
  • FIG. 15 is a schematic diagram of expression combination and expression recognition in one embodiment;
  • FIG. 16 is a schematic flowchart of a facial expression-based unlocking method in another embodiment;
  • FIG. 17 is a structural block diagram of a facial expression-based unlocking device in one embodiment;
  • FIG. 18 is a diagram of the internal structure of a computer device in one embodiment.
  • the unlocking method based on facial expressions provided in this application can be applied to the application environment shown in FIG. 1 .
  • the terminal 102 and the server 104 are included.
  • the terminal 102 can collect the facial image of the object to be tested in real time through a built-in camera or an external camera, recognize the facial expression in the facial image, and, when recognition is completed, generate an unlocking state identifier at the corresponding unlocking node to indicate that node's unlocking state.
  • when an unlocking state identifier has been generated at each unlocking node in the unlocking node sequence, and the facial expression in each facial image matches the corresponding target expression, the entire unlocking node sequence is successfully unlocked.
  • the object to be tested may refer to a user to be tested or other objects to be tested (such as animals). In subsequent embodiments, the object to be tested is a user to be tested as an example for description.
  • the sequence of unlocking nodes can be regarded as a security lock with multiple passwords; each unlocking node corresponds to one password, which is decoded through a facial expression.
  • the unlocking state includes: a state in which an unlocking operation has not been performed and a state in which an unlocking operation has been performed.
  • the state where the unlocking operation has been performed includes: a state where the unlocking operation has been performed and the unlocking node has been successfully unlocked, and a state where the unlocking operation has been performed but the unlocking node has not been successfully unlocked.
  • the terminal 102 can also pre-enter the facial expressions of the target object; different unlocking nodes can record different facial expressions (that is, correspond to different target expressions), or two or three different unlocking nodes can record the same facial expression (that is, correspond to the same target expression).
  • each unlocking node corresponds to a password, and the unlocking node is unlocked (ie, decoded) through the corresponding facial expression.
  • the facial expression of the target object pre-entered by the terminal 102 can be stored locally or stored in the server 104.
  • when stored on the server, the server 104 compares the facial expression recognized from the facial image of the object to be tested with the saved facial expression, to judge whether the facial expression of the object to be tested is consistent with the target expression of the target object.
  • the terminal 102 may be a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart speaker, a smart watch, etc.; in addition, it may be an access control device, a gate, etc., but is not limited thereto.
  • the server 104 may be an independent physical server, a server cluster composed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud servers, cloud databases, cloud storage, and content delivery networks (CDNs).
  • the terminal 102 and the server 104 may be connected through a communication method such as Bluetooth, USB (Universal Serial Bus), or a network, which is not limited in this application.
  • an unlocking method based on facial expressions is provided, and the method is applied to the terminal 102 in FIG. 1 as an example for description, including the following steps:
  • the expression unlocking page may refer to a page on which facial expressions are used to unlock, that is, a page that verifies the facial expressions of the object to be measured. Examples include the expression unlocking page shown when unlocking the terminal to enter the user operation interface; the expression unlocking page shown when entering the fund management page of an application or another page with private information (such as the chat history page in a social application); and the expression unlocking page used to pay by facial expression in a payment scenario.
  • the facial expression unlocking page includes a face preview area, an unlocking progress area and an expression prompt area.
  • the face preview area is used to display the facial image collected in real time;
  • the unlock progress area is used to display the sequence of unlocking nodes.
  • when an unlocking node has performed an unlocking operation, an unlocking state identifier is displayed at the position of that unlocking node.
  • the expression prompt area can be used to display the prompt image corresponding to the unlocking node being unlocked. For example, if the first unlocking node is being unlocked, the expression prompt area can display the prompt image associated with the first unlocking node, such as a blue sky; if the pre-registered expression corresponding to the first unlocking node is an open-mouth expression, the prompt image can be a sunflower.
  • the terminal may display the expression unlocking page when detecting an unlocking instruction, an operating instruction to enter a page with private information, an operating instruction to enter a fund management page, or a payment instruction.
  • the face may refer to the face of a person, or the face of other objects.
  • for example, when the object to be tested wants to enter the operation page of the terminal, the terminal enters the expression unlocking page upon receiving the unlocking instruction. Or, in a scenario where the object to be tested wants to use online payment, the terminal enters the expression unlocking page when it detects a payment instruction. Alternatively, as shown in FIG. 4, when the object to be tested clicks or touches the face recognition control on the face management page, the expression unlocking page is entered.
  • the unlocking node sequence may refer to a node sequence formed by the nodes that need to be unlocked (i.e., unlocking nodes).
  • the unlocked nodes in the unlocked node sequence may be nodes with a sequence, in other words, each unlocked node in the unlocked node sequence is unlocked in sequence during unlocking.
  • the unlocking node sequence may correspond to a security lock or a string of passwords, and each unlocking node in the unlocking node sequence is unlocked through corresponding facial expressions.
  • the unlocking node sequence is displayed in the unlocking progress area.
  • a pointer, such as an arrow or a ">" symbol, may be displayed between unlocking nodes in the unlocking node sequence, as shown in FIG. 4.
  • the terminal may further display the face image collected in real time in the face preview area of the expression unlock page.
  • here the face generally refers to the cheeks, chin, lips, eyes, nose, eyebrows, forehead, and ears of the object to be measured.
  • the face preview area can display a face capture frame, so that face blocks in the face image can be captured through the face capture frame, which can avoid the problem of high computational complexity caused by recognizing the entire face image.
  • the terminal collects the object to be measured in real time through a built-in camera or an external camera connected to the terminal to obtain a facial image of the object to be measured, and then displays the facial image collected in real time in the face preview area of the expression unlock page.
  • the face image may only include the face of the object to be measured, or may include the face and hands of the object to be measured.
  • the method may further include: the terminal detects whether the facial key points in the facial image are located in the face collection frame; if so, S206 is executed; if not, a prompt message for adjusting the collection orientation is sent out.
  • the facial key points may be the ears, chin, and forehead of the object to be measured. If the ears, chin and forehead in the face image are within the face capture frame, it can indicate that the entire face is within the face capture frame.
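  • as a rough illustration of this check, the sketch below tests whether a set of facial key points (ears, chin, forehead) all fall within the face collection frame before proceeding to S206; the landmark names, coordinates, and frame representation are illustrative assumptions, not structures defined by this application.

```python
# Illustrative sketch (assumed data structures): verify that the facial key
# points lie inside the face collection frame before expression recognition.
from typing import Dict, Tuple

Point = Tuple[int, int]

def keypoints_in_frame(keypoints: Dict[str, Point],
                       frame: Tuple[int, int, int, int]) -> bool:
    """frame is (left, top, right, bottom) in pixel coordinates."""
    left, top, right, bottom = frame
    return all(left <= x <= right and top <= y <= bottom
               for x, y in keypoints.values())

keypoints = {"left_ear": (120, 260), "right_ear": (360, 258),
             "chin": (240, 420), "forehead": (238, 130)}
if keypoints_in_frame(keypoints, (80, 100, 400, 460)):
    print("face fully inside the collection frame; proceed to S206")
else:
    print("prompt the user to adjust the collection orientation")
```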
  • the unlocked node to be processed may refer to a node in the unlocked node sequence that has not been unlocked.
  • the unlocking node to be processed may also refer to a node in the unlocking node sequence that has not performed the unlocking operation and currently needs to perform the unlocking operation.
  • the unlocking status identifier can be used to indicate that the unlocking node to be processed has successfully completed the unlocking, or that the unlocking node to be processed has performed the unlocking operation, but it is uncertain whether the unlocking has been successfully completed. According to the above two meanings of the unlocked state identifier, S206 can be divided into the following two scenarios:
  • the unlocking status flag indicates that the unlocking node to be processed has performed the unlocking operation, but it is uncertain whether the unlocking has been successfully completed.
  • the unlocking node sequence is displayed in the unlocking progress area of the expression unlocking page.
  • the step of displaying the facial images collected in real time in the face preview area of the expression unlocking page may specifically include: the terminal sequentially performs facial expression recognition on the facial images corresponding to the unlocking nodes to be processed, according to the order of the unlocking nodes in the unlocking node sequence; each time facial expression recognition is completed, an unlocking state identifier is generated at the corresponding unlocking node in the unlocking progress area.
  • the unlocking state identifier can be used to indicate that the unlocking node currently to be processed has performed the unlocking operation.
  • the unlocking state identifier includes: an unlocking operation has been performed and the unlocking node is successfully unlocked, or an unlocking operation has been performed but the unlocking node has not been successfully unlocked.
  • the steps may include: when the terminal unlocks the unlocking node currently to be processed in the unlocking node sequence, it performs facial expression recognition on the currently collected facial image and compares the recognized facial expression with the target expression corresponding to that unlocking node; when the comparison result is obtained, an unlocking state identifier is generated at the current unlocking node to be processed. After the unlocking state identifier is generated, the current unlocking node becomes a processed unlocking node, and the remaining unlocking nodes to be processed are unlocked in turn.
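  • as a rough sketch of this per-node flow (all names are hypothetical, with a stand-in recognizer in place of the real expression recognition), the code below walks the unlocking node sequence in order, compares each recognized expression with the node's target expression, and records an unlocking state identifier per node:

```python
# Hypothetical sketch of the sequential unlock flow described above.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class UnlockNode:
    target_expression: str       # expression pre-entered for this node
    state: str = "pending"       # unlocking state identifier: "pending",
                                 # "matched", or "mismatched"

def unlock_sequence(nodes: List[UnlockNode],
                    recognize_expression: Callable[[], str]) -> bool:
    for node in nodes:           # nodes are processed strictly in order
        expression = recognize_expression()   # runs on the live camera frame
        node.state = ("matched" if expression == node.target_expression
                      else "mismatched")
    # the whole sequence unlocks only if every node matched its target
    return all(node.state == "matched" for node in nodes)
```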
  • for example, referring to FIG. 6, each unlocking node (i.e., nodes 1-6) is unlocked in turn, and the unlock process includes: performing facial expression recognition on the face image displayed in the face preview area, that is, on the face blocks in the face collection frame, to obtain a facial expression recognition result; in this example, the facial expression is an expression of opening the mouth and squinting the left eye.
  • the terminal generates an expression model diagram corresponding to the facial expression every time it completes facial expression recognition; superimposes and displays the expression model diagram on the corresponding facial image in the face preview area, and then associates the expression model diagram with the corresponding unlocking node.
  • an unlocking state identifier is generated at the unlocking node to be processed to prompt that the facial image corresponding to the unlocking node to be processed has been recognized.
  • the expression model diagram can refer to the black dots in Figure 6.
  • the unlocking status flag indicates that the unlocking node to be processed has been successfully unlocked.
  • in this scenario, each time the terminal completes facial expression recognition, it generates an expression model diagram corresponding to the facial expression and superimposes the expression model diagram on the corresponding facial image in the face preview area; when the expression model diagram is consistent with the pre-entered expression map of the corresponding unlocking node, it is determined that the facial expression in the facial image matches the corresponding target expression, and an unlocking state identifier is then generated at the unlocking node to be processed in the unlocking node sequence.
  • the expression model diagram may refer to a graphic generated according to the recognized facial expression, and may be used to represent the facial expression obtained by recognizing the object to be tested.
  • the expression model diagram may also be used to indicate that the facial image corresponding to the unlocking node to be processed has undergone facial expression recognition.
  • the expression model diagram is consistent with the expression diagram of the corresponding unlocking node, it means that the facial expression in the facial image matches the expression diagram pre-recorded by the unlocking node to be processed, and the unlocking node to be processed can be successfully unlocked at this time.
  • for example, when unlocking the first unlocking node, the facial expression in the facial image is recognized; when the facial key points in the facial image are recognized, an expression model diagram matching the open-mouth, left-eye-squinting expression is generated and superimposed on the facial expression in that facial image. The expression model diagram is then compared with the pre-entered expression map corresponding to the first unlocking node, and if they are consistent, an unlocking state identifier is generated at the position of the first unlocking node.
  • a prompt image can be used to prompt the expression corresponding to the unlocking node that needs to be unlocked currently.
  • the expression unlocking page includes an expression prompt area; the method further includes: in the process of unlocking the unlocking nodes to be processed in the unlocking node sequence, in response to the expression prompt operation triggered in the expression prompt area, displaying the prompt image corresponding to the unlocking node to be processed in the expression prompt area. Therefore, when the user forgets the facial expression corresponding to an unlocking node, the corresponding facial expression can be recalled through the prompt image.
  • the prompt image can be an image used to evoke expressions such as joy, anger, sorrow, and happiness.
  • to an outsider, the specific meaning of the prompt image may not be clear.
  • for example, the sunflower image is used to suggest mouth opening.
  • when seeing the sunflower image, the object to be tested can recall the facial expression corresponding to the first unlocking node and then make that facial expression to unlock the first unlocking node.
  • the unlocking succeeds when every unlocking node in the unlocking node sequence has generated an unlocking state identifier and the facial expression in each corresponding facial image matches the corresponding target expression. For example, referring to FIG. 6, when the positions of unlocking nodes 1-6 all change from colorless (or white) to gray, and the facial expressions in the corresponding facial images match the corresponding target expressions, the unlocking succeeds. Alternatively, the unlocking may succeed when unlocking state identifiers are generated at at least two unlocking nodes in the sequence and the facial expressions in the corresponding facial images match the corresponding target expressions; for example, referring to FIG. 6, the unlocking succeeds when the positions of unlocking nodes 1-3 all change from colorless (or white) to gray and the facial expressions in the facial images corresponding to unlocking nodes 1-3 match the corresponding target expressions.
  • the target expression can refer to the expression in the pre-entered expression map.
  • each unlocking node has one or more expression maps recorded in advance, and the expression in the expression map is the target expression.
  • when the recognized facial expression matches the expression map corresponding to the unlocking node to be processed, it means that the facial expression matches the corresponding target expression; conversely, when the facial expression matches the corresponding target expression, it means that the facial expression matches the expression map corresponding to the unlocking node to be processed.
  • likewise, when the generated expression model map matches the expression map corresponding to the unlocking node to be processed, it means that the expression model map matches the corresponding target expression; conversely, when the expression model map matches the corresponding target expression, it means that the expression model map matches the expression map corresponding to the unlocking node to be processed.
  • the terminal records the time interval corresponding to each unlocking node when the unlocking is completed, and then calculates the total time interval.
  • the unlocking is successful when each unlocking node in the sequence of unlocking nodes generates an unlocking state identifier, the facial expression in each facial image matches the corresponding target expression, and the total time interval is less than the preset time interval.
  • otherwise, the terminal fails to unlock.
  • after a failure, unlocking can be performed again, that is, execution returns to S204 to S208, until unlocking succeeds or the cumulative number of unlocking failures reaches the preset number of times, at which point unlocking is suspended.
  • specifically, when an unlocking state identifier has been generated at each unlocking node in the unlocking node sequence but the facial expression in at least one facial image does not match the corresponding target expression, the terminal sends out a prompt message that the unlocking has failed, cancels the display of the unlocking state identifiers, and returns to executing S204 to S208.
  • the accumulated number of times of unlocking failure is acquired; when the accumulated number of times reaches a preset number, the unlocking process is suspended; the reserved communication signal is obtained, and alarm information is sent to the reserved communication signal.
  • the above cumulative number of times may refer to the current round of unlocking failures or the total number of unlocking failures within a preset time period, and the preset time period may be set according to the actual situation, which is not specifically limited in this embodiment.
  • the above-mentioned reserved communication signal may refer to a communication identifier reserved in the terminal's application account by the target object (such as the user who entered the expressions) for receiving alarm information or reaching an emergency contact, such as a reserved mobile phone number, email account, or other instant messaging account. It should be noted that the cumulative number of unlocking failures refers to the total number of consecutive unlocking failures; if any attempt succeeds, the cumulative number is reset to zero.
  • for example, when the cumulative number of failures reaches the preset number, the unlocking process is suspended and an alarm message is sent to the reserved mobile phone number, so that the target user corresponding to that number is alerted and can reset the password to avoid malicious unlocking by others, or retrieve the password again.
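  • a minimal sketch of this failure-counting and alarm policy, assuming a preset limit of five consecutive failures and a placeholder send_alarm function (not an API defined by this application):

```python
# Consecutive unlock failures accumulate; any success resets the counter;
# reaching the preset limit suspends unlocking and notifies the reserved
# communication signal. All names here are illustrative.
MAX_FAILURES = 5  # preset number of consecutive failures

def send_alarm(contact: str, message: str) -> None:
    print(f"alarm to {contact}: {message}")   # stand-in for SMS/email delivery

class FailurePolicy:
    def __init__(self, reserved_contact: str):
        self.failures = 0
        self.reserved_contact = reserved_contact
        self.suspended = False

    def record_attempt(self, success: bool) -> None:
        if success:
            self.failures = 0                 # a success zeroes the counter
            return
        self.failures += 1
        if self.failures >= MAX_FAILURES:
            self.suspended = True             # pause further unlock attempts
            send_alarm(self.reserved_contact,
                       "repeated unlock failures; please reset your expression password")
```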
  • performing face recognition on the facial image can obtain the recognition result of the facial feature points as shown in FIG. 7, in which each facial feature point is identified by a numeric label: points 1-17 represent the facial edge feature points; points 18-22 and 23-27 correspond to the user's left and right eyebrow feature points; points 28-36 represent the user's nose feature points; points 37-42 represent the user's left eye feature points; points 43-48 represent the user's right eye feature points; and points 49-68 represent the user's lip feature points.
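  • this numbering matches the widely used 68-point facial landmark layout (as produced, for example, by dlib's 68-point shape predictor); below is a small sketch of grouping a flat landmark list by facial part, with the left/right brow assignment following the sentence above:

```python
# Group a 68-point landmark list by facial part, using the 1-indexed point
# numbers from FIG. 7. landmarks[i - 1] is assumed to hold point i as (x, y).
LANDMARK_GROUPS = {
    "jaw":        range(1, 18),   # facial edge, points 1-17
    "left_brow":  range(18, 23),  # points 18-22
    "right_brow": range(23, 28),  # points 23-27
    "nose":       range(28, 37),  # points 28-36
    "left_eye":   range(37, 43),  # points 37-42
    "right_eye":  range(43, 49),  # points 43-48
    "lips":       range(49, 69),  # points 49-68
}

def group_landmarks(landmarks):
    """landmarks: list of 68 (x, y) tuples ordered as in FIG. 7."""
    return {part: [landmarks[i - 1] for i in idx]
            for part, idx in LANDMARK_GROUPS.items()}
```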
  • Facial feature point recognition technology is usually divided into two categories according to the different criteria it adopts:
  • the local feature-based method can utilize the local geometric features of the face, such as the relative positions and relative distances of some facial organs (eyes, nose, mouth, etc.) to describe the face.
  • Its feature components usually include the Euclidean distance, curvature and angle between feature points, etc., which can achieve an efficient description of the salient features of the face.
  • the integral projection method is used to locate the facial feature points, and the Euclidean distance between the feature points is used as the feature component to identify the multi-dimensional facial feature point vector for classification.
  • the feature components mainly include the vertical distance between the eyebrow and the center of the eye, multiple descriptors of the eyebrow arc, the nose width, and the vertical position of the nose; a 100% correct recognition rate has been reported with such components.
  • the local feature-based method may also be an empirical description about the general characteristics of facial feature points.
  • a face image has some obvious basic features.
  • the face area usually includes facial feature points such as the eyes, nose and lips, and its brightness is generally lower than that of the surrounding area; the eyes are roughly symmetrical, and the nose and mouth are distributed on the axis of symmetry, etc.
  • the whole-based method takes the face image as a whole and performs certain transformations on it to identify features; this method considers the overall attributes of the face and also retains the topological relationships between facial parts as well as the information of the parts themselves.
  • the method of subspace analysis can be used to find a linear or nonlinear space transformation according to a certain objective, compressing the original high-dimensional data into a low-dimensional subspace so that the distribution of the data in this subspace is more compact and the computational complexity is reduced.
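  • principal component analysis (PCA) is a classic example of such a linear subspace transformation; a minimal illustrative sketch with NumPy follows (the dimensions and data are arbitrary):

```python
# Project high-dimensional face feature vectors onto the top-k principal
# components, yielding a compact subspace representation.
import numpy as np

def pca_project(X: np.ndarray, k: int) -> np.ndarray:
    """X: (n_samples, n_features); returns an (n_samples, k) projection."""
    Xc = X - X.mean(axis=0)                    # center the data
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                       # keep the top-k directions

faces = np.random.rand(100, 4096)              # e.g. flattened 64x64 crops
low_dim = pca_project(faces, k=32)             # compact 32-D representation
```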
  • a group of rectangular grid nodes can also be placed on the face image; the features of each node are described by the multi-scale wavelet features at that node, and the connection relationships between nodes are represented by geometric distances, thus forming a face representation based on a two-dimensional topological graph.
  • the recognition is based on the similarity between the nodes and connections in the two images.
  • whole-based methods also include neural-network-based methods and the like; this embodiment does not limit the type of whole-based method used.
  • an unlocking node sequence composed of multiple unlocking nodes is configured on the expression unlocking page.
  • during unlocking, expression recognition is performed on the facial image collected in real time, and each unlocking node corresponds to a specific target expression.
  • the entire unlocking process is completed only when the facial expressions in the facial images corresponding to all unlocking nodes match the target expressions of the corresponding unlocking nodes, which can effectively prevent unlocking through misappropriated images or face models and effectively improve information security.
  • unlocking can also be performed by combining facial expressions and face recognition.
  • the method may further include:
  • the facial expression can be an expression presented by different parts of the face performing corresponding actions or being in a corresponding posture, such as the expression of opening the mouth and squinting the left eye as shown in FIG. 5 and FIG. 6 .
  • the step of facial expression recognition includes: the terminal extracts eye feature points from the facial image; among the eye feature points, determines a first distance between the upper eyelid feature point and the lower eyelid feature point and a second distance between the left eye corner feature point and the right eye corner feature point; and determines the eye posture according to the relationship between the ratio of the first distance to the second distance and at least one preset interval.
  • the left eye corner feature point and the right eye corner feature point refer to the two corners of the same eye; for the left eye, for example, the left eye corner feature point is the feature point at the left corner of the left eye, and the right eye corner feature point is the feature point at the right corner of the left eye.
  • for example, according to the facial feature points obtained by face recognition, the terminal calculates the first distance between the upper eyelid feature point 38 and the lower eyelid feature point 42, and the second distance between the left eye corner feature point 37 and the right eye corner feature point 40. When the ratio between the first distance and the second distance is 0, it is determined that the object to be measured is squinting the left eye (the eye is closed); when the ratio is less than 0.2, it is determined that the object is blinking the left eye; when the ratio is greater than 0.2 and less than 0.6, it is determined that the object is staring with the left eye.
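  • a sketch of this eye-posture rule, using the FIG. 7 numbering for the left eye (upper eyelid 38, lower eyelid 42, corners 37 and 40) and the thresholds quoted above; the helper names are illustrative:

```python
import math

def dist(p, q):
    """Euclidean distance between two (x, y) points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def left_eye_pose(lm):
    """lm: dict mapping FIG. 7 point numbers to (x, y) coordinates."""
    first = dist(lm[38], lm[42])    # upper eyelid to lower eyelid
    second = dist(lm[37], lm[40])   # left eye corner to right eye corner
    ratio = first / second
    if ratio == 0:                  # eyelids coincide: eye fully closed
        return "squint"
    if ratio < 0.2:
        return "blink"
    if 0.2 < ratio < 0.6:
        return "stare"
    return "unknown"                # ratios >= 0.6 are not classified here
```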
  • Method 1: identify the lip pose according to the height of the lip feature points.
  • the step of facial expression recognition further includes: the terminal extracts lip feature points from the facial image; among the lip feature points, determining according to the height difference between the lip center feature point and the lip corner feature point Lip gesture.
  • specifically, the terminal determines the height of the lip center feature point 63 and the height of the lip corner feature point 49 (or 55), and then calculates the height difference between them. If the height difference is positive (that is, the lip center feature point 63 is higher than the lip corner feature point 49), the lip pose is determined to be a smile.
  • Method 2: identify the lip pose according to the distance between the upper and lower lip feature points.
  • the terminal determines the lip pose according to a third distance between the upper lip feature point and the lower lip feature point.
  • specifically, according to the facial feature points obtained by face recognition, the terminal compares the third distance between the upper lip feature point and the lower lip feature point with a distance threshold; when the third distance reaches the distance threshold, the lip pose is determined to be an open mouth.
  • the third distance between the upper lip feature point 63 and the lower lip feature point 67 is compared with the distance threshold, and if it is greater than or equal to the distance threshold, it indicates that the object to be tested is opening its mouth.
  • alternatively, the terminal calculates a fourth distance between the left lip corner feature point and the right lip corner feature point, and when the third distance is greater than or equal to the fourth distance, the lip pose is determined to be an open mouth.
  • Method 3: identify the lip pose according to the ratio between the upper-to-lower lip distance and the distance between the left and right lip corner feature points.
  • the lip posture is determined according to the relationship between the ratio between the third distance and the fourth distance and at least one preset interval; wherein the fourth distance is the feature of the left lip angle The distance between the point and the feature point of the right lip corner.
  • specifically, the terminal determines the third distance between the upper lip feature point 63 and the lower lip feature point 67, determines the fourth distance between the left lip corner feature point 49 and the right lip corner feature point 55, and determines the lip pose according to the relationship between the ratio of the third distance to the fourth distance and the at least one preset interval. For example, when the ratio is in the first preset interval, the lip pose is determined to be an open mouth; when the ratio is in the second preset interval, the lip pose is determined to be closed. The values of the first preset interval are all greater than the values of the second preset interval.
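  • the three methods above can be sketched together as follows; the threshold and interval values are illustrative assumptions, since the application leaves their exact values open:

```python
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def lip_pose(lm, open_threshold=15.0, open_interval=(0.4, 1.0),
             closed_interval=(0.0, 0.1)):
    """lm: dict of FIG. 7 point numbers to (x, y); y grows downward."""
    third = dist(lm[63], lm[67])    # upper lip to lower lip (methods 2 and 3)
    fourth = dist(lm[49], lm[55])   # left to right lip corner
    # Method 2: open mouth if the lip gap reaches a threshold or the width.
    if third >= open_threshold or third >= fourth:
        return "open_mouth"
    # Method 3: compare the gap/width ratio against the preset intervals.
    ratio = third / fourth
    if open_interval[0] <= ratio <= open_interval[1]:
        return "open_mouth"
    if closed_interval[0] <= ratio <= closed_interval[1]:
        return "closed"
    # Method 1: height difference between lip center 63 and lip corner 49;
    # with y growing downward, a center below the corners reads as a smile.
    return "smile" if lm[63][1] > lm[49][1] else "angry"
```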
  • the step of facial expression recognition further includes: extracting eyebrow feature points and eyelid feature points from the facial image; determining a fifth distance between the eyebrow feature point and the eyelid feature point; and determining the eyebrow posture according to the size relationship between the fifth distance and a preset distance.
  • for example, the distance between the eyebrow feature point and the eyelid feature point 38 is calculated, and when that distance is greater than the preset distance, it is determined that the object to be measured is raising the eyebrows.
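  • a one-function sketch of this eyebrow rule; the point numbers and the preset distance are assumptions for illustration:

```python
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def eyebrow_raised(lm, preset_distance=25.0, brow_point=20, eyelid_point=38):
    """True when the fifth distance (brow to eyelid) exceeds the preset."""
    return dist(lm[brow_point], lm[eyelid_point]) > preset_distance
```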
  • in addition to unlocking with facial expressions, face recognition can also be performed to determine whether the face matches the pre-registered face of the target object; if it matches, the object to be tested and the target object are the same object.
  • the facial image is an image obtained by collecting the object to be tested.
  • each unlocking node When each unlocking node generates an unlocking state identifier, and the facial expression in each facial image matches the corresponding target expression, and it is determined that the object to be tested is inconsistent with the target object according to the face recognition result, the unlocking fails. When it is determined that the object to be tested is inconsistent with the target object according to the face recognition result, it means that the currently detected object to be tested and the target object are not the same object, and the unlocking fails at this time.
  • the combination of facial expression and human face to unlock can further improve the security of unlocking node sequence and improve information security.
  • unlocking can also be performed in combination with facial expressions and gestures, as shown in FIG. 9 , the method may further include:
  • the facial expression can be an expression presented by different parts of the face performing corresponding actions or being in a corresponding posture, such as the expression of opening the mouth and squinting the left eye as shown in FIG. 5 and FIG. 6 .
  • S904: in the process of facial expression recognition, perform gesture recognition on the hand in the facial image.
  • the terminal performs convolution processing on the facial image through a neural network model, thereby extracting gesture features in the facial image, and determining a specific gesture according to the gesture features.
  • the neural network model may be a network model for extracting gesture features, and may specifically be a two-dimensional convolutional neural network model.
  • a two-dimensional network model can be one of the network branches of a machine learning model.
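  • a rough PyTorch sketch of such a two-dimensional convolutional branch follows, purely for illustration: the layer sizes and the six-gesture head are assumptions, not the application's actual model:

```python
# A 2D-CNN branch that scores gesture classes from a cropped hand region.
import torch
import torch.nn as nn

class GestureBranch(nn.Module):
    def __init__(self, num_gestures: int = 6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # 32x32 -> 16x16
        )
        self.head = nn.Linear(32 * 16 * 16, num_gestures)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        """x: (batch, 3, 64, 64) cropped hand region."""
        return self.head(self.features(x).flatten(1))  # gesture logits

logits = GestureBranch()(torch.randn(1, 3, 64, 64))
```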
  • specifically, after each completion of facial expression recognition, the terminal may determine whether the facial expression is consistent with the corresponding target expression; after each completion of gesture recognition, the terminal may determine whether the gesture is consistent with the corresponding target gesture. If the facial expression is consistent with the corresponding target expression and the gesture is consistent with the corresponding target gesture, an unlocking state identifier is generated at the corresponding unlocking node in the unlocking progress area.
  • the combination of facial expressions and gestures for unlocking can further improve the security of the unlocking node sequence and improve the information security.
  • each unlocking node can be unlocked by at least two facial expressions, and each unlocking node corresponds to at least two different target expressions; as shown in FIG. 10 , S206 may specifically include:
  • S1002: perform facial expression recognition on at least two facial images corresponding to the unlocking node to be processed.
  • facial expression recognition may be performed on at least two facial images corresponding to the unlocking node to be processed.
  • for example, when unlocking node 1, the terminal first collects facial image 1 and performs facial expression recognition on it to obtain facial expression 1; it then collects facial image 2 and performs facial expression recognition on it to obtain facial expression 2.
  • Facial expressions in at least two facial images may be the same or different.
  • specifically, the terminal determines the collection time interval between the at least two facial images, or the unlocking time interval between the unlocking node to be processed and the last processed unlocking node; when the collection time interval or the unlocking time interval satisfies the corresponding time interval condition, S1004 is performed.
  • the unlocking efficiency can be improved, and it can also prevent others from unlocking an unlocking node by trying different facial expressions multiple times.
  • each unlocking node uses at least two facial expressions to unlock, which can further improve the security of the unlocking node sequence and improve the information security.
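  • a compact sketch of this two-expression check with a collection-interval condition (the two-second bound and all names are illustrative assumptions):

```python
def unlock_node_two_expressions(targets, recognize, max_interval_s=2.0):
    """targets: the node's two pre-entered expressions, in order;
    recognize: callable returning (expression, capture_timestamp)."""
    expr1, t1 = recognize()          # first facial image
    expr2, t2 = recognize()          # second facial image
    within_interval = (t2 - t1) <= max_interval_s
    # the state identifier is generated only if both expressions match in
    # order and the collection interval satisfies the time condition
    return within_interval and (expr1, expr2) == tuple(targets)
```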
  • a specific expression map can be pre-entered for each unlocking node, and the step of entering an expression map can include:
  • the expression entry page may include an expression identification, or may not include an expression identification.
  • the expression identifier is used to indicate a corresponding expression type, and different expression identifiers correspond to different expression types.
  • the terminal switches the page to an expression entry page containing expression identifiers when it acquires the expression entry instruction triggered on the expression management page.
  • as shown in FIG. 12, when a click or touch operation is detected on the face entry control on the expression management page, the expression entry page displaying the expression identifiers is entered.
  • the upper part of the expression entry page is used to display the collected expressions, and the lower part is used to display the expression identifiers.
  • when the expression entry page includes expression identifiers, the terminal sequentially collects the expression images of the target object according to the arrangement order of the expression identifiers, and then records the collected expression images in turn. As shown in FIG. 12, on the expression entry page, an expression image is first entered for the first expression identifier, and expressions are then entered for the subsequent expression identifiers in turn.
  • when the expression entry page does not contain expression identifiers, the terminal recognizes the collected expression to obtain a target expression, and displays the corresponding expression identifier in the expression identifier preview area of the expression entry page according to the target expression.
  • for example, the facial image of an open mouth is recognized to obtain the open-mouth facial expression; an expression identifier is then generated according to the open-mouth expression and displayed in the expression identifier preview area of the expression entry page.
  • the expression combination page may be a page for combining expression identifications.
  • specifically, the terminal may sort the expression identifiers entered on the expression entry page and construct an expression unlocking sequence according to the sorted expression identifiers; alternatively, the expression identifiers may first be combined, then sorted, and the expression unlocking sequence constructed according to the combinations and their order.
  • the target object may combine the expression identifiers in the expression combination page in pairs to obtain the corresponding expression combination.
  • for example, on the expression combination page on the right of FIG. 14, in the first row the ordinary expression and the left-eye staring expression are combined.
  • in this embodiment, the corresponding expression maps are pre-entered according to the facial expressions, and the expression identifiers are then combined and sorted to generate the expression unlocking sequence, so that during unlocking each unlocking node can be unlocked with the corresponding facial expression.
  • a face image is taken as an example for illustration.
  • the function of the smart combination lock is mainly realized by means of face recognition, combined expressions, an expression sequence, and prompt images: face recognition preserves the user's privacy, combining expressions increases the difficulty of cracking expression recognition, the expression sequence increases the complexity of unlocking, and prompt images help users memorize the password combination and avoid password leakage.
  • the expression sequence corresponds to the above unlocked node sequence.
  • the user can customize a combination of different expressions, set an expression sequence, and add a prompt image with a picture (or animation) cue, obtaining a set of combination locks that meets the user's own encryption and decryption needs and strengthens the protection of personal privacy.
  • the specific implementation steps include:
  • the facial feature points are recognized by an expression recognizer to obtain corresponding facial expressions.
  • Blinking: according to the facial feature points obtained by face recognition technology, divide the distance between the center feature points of the upper and lower eyelids by the distance between the feature points of the left and right eye corners to obtain the ratio between the two; when this ratio is less than the set threshold, it is judged as blinking. As shown in FIG. 7, the distance between the center feature point 38 of the upper eyelid of the left eye and the center feature point 42 of the lower eyelid of the left eye is divided by the distance between the left and right eye corner feature points 37 and 40; if the resulting ratio is less than 0.2, it is judged that the user blinked the left eye.
  • Pouting: according to the facial feature points obtained by face recognition technology, set a ratio threshold for pouting, and divide the distance between the upper and lower lip feature points by the distance between the left and right lip corner feature points. When the resulting quotient reaches the set pouting ratio threshold, it is judged to be pouting. As shown in FIG. 7, the distance between the upper and lower lip center feature points 67 and 63 is divided by the distance between the left and right lip corner feature points 49 and 55.
  • Smiling: according to the facial feature points obtained by face recognition technology, when the central feature point of the lips is lower than the left and right lip corner feature points, it is judged to be a smile. As shown in FIG. 7, when the position of the lip center feature point 63 is lower than the positions of the left and right lip corner feature points 49 and 55, it is determined to be a smile.
  • Angry: according to the facial feature points obtained by face recognition technology, when the central feature point of the lips is higher than the left and right lip corner feature points, it is judged to be angry. As shown in FIG. 7, when the position of the lip center feature point 63 is higher than the positions of the left and right lip corner feature points 49 and 55, it is judged to be angry.
  • Raising eyebrows: set a distance threshold for raising eyebrows according to the facial feature points obtained by face recognition technology; when the distance between the eyebrow feature points and the upper eyelid feature points is greater than the distance threshold, it is judged that the eyebrows are raised.
  • for example, blinking the left eye + opening the mouth can be set as one expression combination, and blinking the right eye + pouting can be set as another expression combination.
  • the user can arrange the set expression combinations in a certain order and build a password group from the sorted combinations, for example: expression combination 1 (wink + nod), expression combination 2 (left-eye wink + open mouth), expression combination 3 (wink + pouting), expression combination 4 (smile), expression combination 5 (raise eyebrows), expression combination 6 (stare).
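  • such a password group can be pictured as an ordered list of expression combinations; a tiny sketch with illustrative names:

```python
PASSWORD_GROUP = [
    ("wink", "nod"),              # expression combination 1
    ("left_wink", "open_mouth"),  # expression combination 2
    ("wink", "pout"),             # expression combination 3
    ("smile",),                   # expression combination 4
    ("raise_brow",),              # expression combination 5
    ("stare",),                   # expression combination 6
]

def check_sequence(observed):
    """observed: expression tuples recognized node by node, in order."""
    return [tuple(c) for c in observed] == PASSWORD_GROUP
```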
  • the association scheme is set according to the user's own preferences; overly obvious prompts should be avoided so as not to reveal the password combination.
  • Face recognition is performed first, then expression recognition is performed.
  • during unlocking, a prompt image is displayed for the current unlocking node.
  • the expression combination is then recognized to determine whether the expression combination corresponding to the unlocking node is correct.
  • after all nodes are processed, the entire expression sequence is checked to determine whether the order of the expression combinations is correct. Finally, the cumulative number of timeouts or failures is judged: if it reaches 5, unlocking is suspended and an early-warning message is sent to the reserved mailbox and mobile phone number.
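  • putting these steps together, a hypothetical end-to-end sketch follows (reusing the FailurePolicy sketch from earlier; verify_face, recognize_combination, and the prompt display are placeholders for the real recognizers and UI):

```python
def show_prompt_image(node):
    print(f"showing the prompt image for node {node!r}")  # UI stand-in

def try_unlock(nodes, verify_face, recognize_combination, policy):
    if not verify_face():                     # step 1: face recognition first
        policy.record_attempt(success=False)
        return False
    for node in nodes:                        # step 2: combos in sequence order
        show_prompt_image(node)               # cue without revealing the answer
        if recognize_combination() != node.target_combination:
            policy.record_attempt(success=False)
            return False                      # wrong combination or wrong order
    policy.record_attempt(success=True)       # step 3: whole sequence correct
    return True
```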
  • the solution of this embodiment can help users to flexibly set password combinations for access control, file encryption, entry into the operation page of the intelligent terminal and payment, etc.
  • the technical difficulty is low, the privacy is strong, and it is easy to operate and memorize.
  • it should be understood that, although the steps in the flowcharts of FIGS. 2 and 8-11 are displayed sequentially as indicated by the arrows, these steps are not necessarily executed in that order. Unless explicitly stated herein, there is no strict restriction on the order in which these steps are executed, and they may be performed in other orders. Moreover, at least some of the steps in FIGS. 2 and 8-11 may include multiple sub-steps or stages, which are not necessarily executed at the same moment but may be executed at different moments; their order of execution is not necessarily sequential, and they may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
  • a facial expression-based unlocking device is provided, and the device can adopt a software module or a hardware module, or a combination of the two to become a part of the computer equipment, and the device specifically includes: Display module 1702, first display module 1704, generation module 1706 and unlock module 1708, wherein:
  • a display module 1702 configured to display the expression unlocking page
  • the first display module 1704 is used to display the unlocked node sequence on the expression unlocking page
  • the generating module 1706 is used to generate an unlocking state identifier based on the facial expression in the facial image collected in real time at the unlocking node to be processed in the unlocking node sequence;
  • the unlocking module 1708 is configured to complete the unlocking based on the unlocking state identifier and the matching between the facial expression in the corresponding facial image and the corresponding target expression.
  • an unlocking node sequence composed of multiple unlocking nodes is configured on the expression unlocking page.
  • during unlocking, expression recognition is performed on the facial image collected in real time, and each unlocking node corresponds to a specific target expression.
  • the entire unlocking process is completed only when the facial expressions in the facial images corresponding to all unlocking nodes match the target expressions of the corresponding unlocking nodes, which can effectively prevent unlocking through misappropriated images or face models and effectively improve information security.
  • the unlocking node sequence is displayed in the unlocking progress area of the expression unlocking page.
  • the device also includes:
  • the second display module is used to display the face image collected in real time in the face preview area of the expression unlock page;
  • the generating module is also used to perform facial expression recognition on the facial images corresponding to the unlocking nodes to be processed, in the order of the unlocking nodes in the unlocking node sequence; each time facial expression recognition is completed, an unlocking state identifier is generated at the corresponding unlocking node in the unlocking progress area.
  • the apparatus further includes: a generating module, a superimposing module and a determining module; wherein:
  • the generation module is used to generate an expression model map corresponding to the facial expression every time the facial expression recognition is completed;
  • the overlay module is used to overlay and display the expression model diagram on the corresponding face image in the face preview area;
  • the determining module is used for determining that the facial expression in the facial image matches the corresponding target expression when the expression model graph is consistent with the expression graph of the corresponding unlocking node.
  • the face and hands are included in the face image.
  • the device also includes: an identification module; wherein:
  • the recognition module is used to perform gesture recognition on the hand in the facial image during the process of facial expression recognition
  • the generating module is further configured to generate an unlocking state identifier at the corresponding unlocking node in the unlocking progress area each time the facial expression recognition and gesture recognition are completed.
  • the combination of facial expressions and gestures for unlocking can further improve the security of the unlocking node sequence and improve the information security.
  • each unlocked node corresponds to at least two different target expressions.
  • the generating module is also used to perform facial expression recognition on at least two facial images corresponding to the unlocking node to be processed; when the facial expressions in the at least two facial images match the corresponding target expressions, an unlocking state identifier is generated at the unlocking node to be processed.
  • the determining module is further configured to, when the facial expressions in the at least two facial images match the corresponding target expressions, determine the collection time interval between the at least two facial images, or determine the unlocking time interval between the unlocking node to be processed and the last processed unlocking node;
  • the generating module is further configured to generate an unlocking state identifier at the unlocking node to be processed when the acquisition time interval or the unlocking time interval satisfies the corresponding time interval condition.
  • each unlocking node uses at least two facial expressions to unlock, which can further improve the security of the unlocking node sequence and improve the information security.
  • the recognition module is further configured to extract eye feature points from the facial image; among the eye feature points, determine a first distance between the upper-eyelid feature point and the lower-eyelid feature point, and a second distance between the left eye-corner feature point and the right eye-corner feature point; and determine the eye posture from the relationship between the ratio of the first distance to the second distance and at least one preset interval.
  • the recognition module is further configured to extract lip feature points from the facial image and determine the lip posture in one of three ways: from the height difference between the lip-center feature point and a lip-corner feature point; from a third distance between the upper-lip feature point and the lower-lip feature point; or from the relationship between the ratio of the third distance to a fourth distance and at least one preset interval, where the fourth distance is the distance between the left and right lip-corner feature points.
  • the recognition module is further configured to extract eyebrow feature points and eyelid feature points from the facial image, determine a fifth distance between them, and determine the eyebrow posture by comparing the fifth distance with a preset distance.
  • a face capture frame is displayed in the face preview area.
  • the device also includes: a detection module and a prompt module; wherein:
  • the detection module is configured to detect whether the facial key points in the facial image are located within the face capture frame;
  • the generating module is further configured to, if the facial key points are within the face capture frame, generate an unlocking state identifier at each unlocking node, in the order of the unlocking node sequence, based on the facial expression in the corresponding facial image;
  • the prompt module is configured to issue a prompt to adjust the capture position if the facial key points are not within the face capture frame.
  • the facial image is an image obtained by capturing the object to be tested.
  • the recognition module is further configured to perform face recognition on the facial image corresponding to each unlocking node, in the order of the unlocking node sequence, to obtain a face recognition result;
  • the unlocking module is further configured to determine that unlocking succeeds when an unlocking state identifier has been generated at every unlocking node, the facial expression in every facial image matches the corresponding target expression, and the face recognition result shows that the object to be tested is the target object.
  • combining facial expressions with face recognition for unlocking further improves the security of the unlocking node sequence, and thus information security.
  • the apparatus further includes: a canceling module; wherein:
  • the prompt module is further configured to issue a prompt indicating that unlocking has failed when an unlocking state identifier has been generated at every unlocking node in the sequence but the facial expressions in the facial images do not match the corresponding target expressions;
  • the canceling module is configured to cancel the display of the unlocking state identifiers and to return to the step of generating an unlocking state identifier, at the unlocking node to be processed, based on the facial expression in the corresponding facial image.
  • the apparatus further includes: an acquisition module, a pause module and an alarm module; wherein:
  • the acquisition module is configured to obtain the cumulative number of unlocking failures when unlocking fails;
  • the pause module is configured to pause the unlocking process when the cumulative number reaches a preset number;
  • the alarm module is configured to obtain a reserved contact identifier and send alarm information to it.
  • the apparatus further includes: an entry module and a combination module; wherein:
  • the second display module is further configured to display an expression entry page;
  • the entry module is configured to enter, through the expression entry page, the expression images corresponding to the expression identifiers on that page, one by one;
  • the entry module is further configured to enter an expression combination page;
  • the combination module is configured to combine the expression identifiers, in response to a combination operation triggered on the expression combination page, to obtain the expression unlocking sequence.
  • the expression unlocking page includes an expression prompt area.
  • the second display module is further configured to, while an unlocking node in the sequence is being unlocked, display in the expression prompt area, in response to an expression prompt operation triggered there, the prompt image corresponding to that unlocking node.
  • the expression images corresponding to the expression identifiers are entered in advance, and the identifiers are then combined and ordered to generate the expression unlocking sequence, so that during unlocking each expression unlocking node can be unlocked with the corresponding facial expression.
  • each module in the above-mentioned facial expression-based unlocking device may be implemented in whole or in part by software, hardware, and combinations thereof.
  • the above modules can be embedded in or independent of the processor in the computer device in the form of hardware, or stored in the memory in the computer device in the form of software, so that the processor can call and execute the operations corresponding to the above modules.
  • a computer device is provided, and the computer device may be a terminal, and its internal structure diagram may be as shown in FIG. 18 .
  • the computer device includes a processor, a memory, a communication interface, a display screen, and an input device connected by a system bus; the processor provides computing and control capabilities.
  • the memory of the computer device includes a non-volatile storage medium and an internal memory.
  • the nonvolatile storage medium stores an operating system and a computer program.
  • the internal memory provides an environment for running the operating system and the computer program stored in the non-volatile storage medium.
  • the communication interface of the computer device is used for wired or wireless communication with an external terminal, and the wireless communication can be realized by WIFI, operator network, NFC (Near Field Communication) or other technologies.
  • the computer program implements a facial expression-based unlocking method when executed by the processor.
  • the display screen of the computer device may be a liquid-crystal or electronic-ink display, and the input device may be a touch layer covering the display screen, a button, trackball, or touchpad on the device housing, or an external keyboard, touchpad, or mouse.
  • FIG. 18 is only a block diagram of part of the structure related to the solution of this application and does not limit the computer devices to which the solution applies; a specific computer device may include more or fewer components than shown, combine certain components, or arrange the components differently.
  • a computer device including a memory and a processor, where a computer program is stored in the memory, and the processor implements the steps in the foregoing method embodiments when the processor executes the computer program.
  • a computer-readable storage medium which stores a computer program, and when the computer program is executed by a processor, implements the steps in the foregoing method embodiments.
  • a computer program product or computer program comprising computer instructions stored in a computer readable storage medium.
  • the processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device performs the steps in the foregoing method embodiments.
  • Non-volatile memory may include read-only memory (Read-Only Memory, ROM), magnetic tape, floppy disk, flash memory, or optical memory, and the like.
  • Volatile memory may include random access memory (RAM) or external cache memory.
  • RAM can be in various forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM).

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Computer Hardware Design (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Collating Specific Patterns (AREA)
  • Image Analysis (AREA)

Abstract

A facial expression-based unlocking method and apparatus, a computer device, and a storage medium. The method includes: displaying an expression unlocking page (S202); presenting an unlocking node sequence on the expression unlocking page (S204); at an unlocking node to be processed in the unlocking node sequence, generating an unlocking state identifier based on a facial expression in a facial image captured in real time (S206); and completing unlocking based on the unlocking state identifiers and whether the facial expression in each corresponding facial image matches the corresponding target expression (S208).

Description

Facial expression-based unlocking method and apparatus, computer device, and storage medium
This application claims priority to Chinese Patent Application No. 2020109161381, entitled "Facial expression-based unlocking method and apparatus, computer device and storage medium", filed with the China National Intellectual Property Administration on September 3, 2020, which is incorporated herein by reference in its entirety.
Technical Field
This application relates to the field of artificial intelligence, and in particular to a facial expression-based unlocking method and apparatus, a computer device, and a storage medium.
Background
With the popularization of smart devices and the continuous development of information security, users place ever higher demands on information security, so security locks are increasingly widely used and welcomed, for example device power-on password locks, application login locks, and payment password locks.
In common security lock schemes, unlocking is usually performed by entering a password, by enrolling the user's fingerprint, or by capturing a face image for face recognition. With face-recognition unlocking, a third party can unlock with a photo or a face model, which creates an information security risk.
Summary
According to various embodiments of this application, a facial expression-based unlocking method and apparatus, a computer device, and a storage medium are provided.
A facial expression-based unlocking method, performed by a terminal, the method including:
displaying an expression unlocking page;
presenting an unlocking node sequence on the expression unlocking page;
at an unlocking node to be processed in the unlocking node sequence, generating an unlocking state identifier based on a facial expression in a facial image captured in real time; and
completing unlocking based on the unlocking state identifier and whether the facial expression in the corresponding facial image matches the corresponding target expression.
A facial expression-based unlocking apparatus, the apparatus including:
a display module, configured to display an expression unlocking page;
a first display module, configured to present an unlocking node sequence on the expression unlocking page;
a generating module, configured to generate, at an unlocking node to be processed in the unlocking node sequence, an unlocking state identifier based on a facial expression in a facial image captured in real time; and
an unlocking module, configured to complete unlocking based on the unlocking state identifier and whether the facial expression in the corresponding facial image matches the corresponding target expression.
A computer device, including a memory and a processor, the memory storing a computer program which, when executed by the processor, implements the following steps:
displaying an expression unlocking page;
presenting an unlocking node sequence on the expression unlocking page;
at an unlocking node to be processed in the unlocking node sequence, generating an unlocking state identifier based on a facial expression in a facial image captured in real time; and
completing unlocking based on the unlocking state identifier and whether the facial expression in the corresponding facial image matches the corresponding target expression.
A computer-readable storage medium storing a computer program which, when executed by a processor, implements the following steps:
displaying an expression unlocking page;
presenting an unlocking node sequence on the expression unlocking page;
at an unlocking node to be processed in the unlocking node sequence, generating an unlocking state identifier based on a facial expression in a facial image captured in real time; and
completing unlocking based on the unlocking state identifier and whether the facial expression in the corresponding facial image matches the corresponding target expression.
A computer program product or computer program, including computer instructions stored in a computer-readable storage medium; a processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, causing the computer device to perform the facial expression-based unlocking method described above.
Details of one or more embodiments of this application are set forth in the accompanying drawings and the description below. Other features and advantages of this application will become apparent from the specification, the drawings, and the claims.
Brief Description of the Drawings
FIG. 1 is a diagram of an application environment of a facial expression-based unlocking method in one embodiment;
FIG. 2 is a schematic flowchart of a facial expression-based unlocking method in one embodiment;
FIG. 3 is a schematic diagram of an expression unlocking page in one embodiment;
FIG. 4 is a schematic diagram of entering the expression unlocking page by triggering a face detection control in one embodiment;
FIG. 5 is a schematic diagram of prompting the user to adjust the capture position in one embodiment;
FIG. 6 is a schematic diagram of recognizing a facial image and superimposing the resulting expression model graph on the facial expression in one embodiment;
FIG. 7 is a schematic diagram of facial feature points in one embodiment;
FIG. 8 is a schematic flowchart of unlocking with expressions combined with face recognition in one embodiment;
FIG. 9 is a schematic flowchart of unlocking with expressions combined with gestures in one embodiment;
FIG. 10 is a schematic flowchart of unlocking each unlocking node with at least two facial expressions in one embodiment;
FIG. 11 is a schematic flowchart of entering expression images in one embodiment;
FIG. 12 is a schematic diagram of entering the expression entry page through a face entry control in one embodiment;
FIG. 13 is a schematic diagram of entering an expression image after expression recognition in one embodiment;
FIG. 14 is a schematic diagram of combining and ordering expression identifiers in one embodiment;
FIG. 15 is a schematic diagram of expression combination and expression recognition in one embodiment;
FIG. 16 is a schematic flowchart of a facial expression-based unlocking method in another embodiment;
FIG. 17 is a structural block diagram of a facial expression-based unlocking apparatus in one embodiment;
FIG. 18 is a diagram of the internal structure of a computer device in one embodiment.
Detailed Description
To make the objectives, technical solutions, and advantages of this application clearer, this application is described below in further detail with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here only explain this application and do not limit it.
The facial expression-based unlocking method provided in this application can be applied in the application environment shown in FIG. 1, which includes a terminal 102 and a server 104. The terminal 102 captures facial images of the object to be tested in real time through a built-in or external camera and recognizes the facial expression in each facial image; when recognition is completed, an unlocking state identifier is generated at the corresponding unlocking node to indicate that node's unlocking state. When an unlocking state identifier has been generated at every unlocking node in the unlocking node sequence and the facial expression in every facial image matches the corresponding target expression, the whole unlocking node sequence is unlocked successfully.
The object to be tested may be a user to be tested or another object (such as an animal); in the following embodiments a user to be tested is taken as an example. The unlocking node sequence can be regarded as a security lock with a multi-digit password: each unlocking node corresponds to one digit, which is decoded by a facial expression. The unlocking states include a state in which no unlocking operation has been performed and a state in which an unlocking operation has been performed; the latter includes a state in which the node has been unlocked successfully and a state in which the unlocking operation was performed but the node was not unlocked.
In addition, the terminal 102 may enroll expression images of the target object in advance. Different unlocking nodes may be enrolled with different expression images (that is, correspond to different target expressions), or two or three different unlocking nodes may be enrolled with the same expression image (that is, correspond to the same target expression). Each unlocking node corresponds to one password digit and is unlocked (decoded) by the corresponding facial expression. The pre-enrolled expression images may be stored locally or on the server 104; during unlocking, the server 104 compares the facial expression recognized from the facial image of the object to be tested with the stored expression images to determine whether it is consistent with the target expression of the target object.
The terminal 102 may be a smartphone, tablet computer, notebook computer, desktop computer, smart speaker, smart watch, or the like; it may also be an access control device, a gate, and so on, but is not limited thereto.
The server 104 may be an independent physical server, a server cluster composed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud servers, cloud databases, cloud storage, and content delivery networks (CDN).
The terminal 102 and the server 104 may be connected through Bluetooth, USB (Universal Serial Bus), a network, or other communication connections, which is not limited in this application.
In one embodiment, as shown in FIG. 2, a facial expression-based unlocking method is provided. Taking the method applied to the terminal 102 in FIG. 1 as an example, it includes the following steps:
S202: Display an expression unlocking page.
The expression unlocking page is a page on which unlocking is performed with facial expressions, or on which the facial expression of the object to be tested is verified: for example, a page for unlocking the terminal to enter the user interface, a page for unlocking access to a funds management page or another page containing private information (such as a chat history page in a social application), or a page on which a facial expression is scanned to complete a payment.
For example, as shown in FIG. 3, the expression unlocking page contains a face preview area, an unlocking progress area, and an expression prompt area. The face preview area displays the facial image captured in real time. The unlocking progress area presents the unlocking node sequence; when an unlocking operation has been performed on an unlocking node, an unlocking state identifier indicating this is displayed at that node's position. The expression prompt area can display the prompt image of the current unlocking node; for example, when the first node is being unlocked, it can display the prompt image associated with that node. If the pre-enrolled expression for the first node is a smile, the prompt image may be a blue sky; if it is an open mouth, the prompt image may be a sunflower.
In one embodiment, the terminal displays the expression unlocking page upon detecting an unlocking instruction, an instruction to enter a page containing private information, an instruction to enter a funds management page, or a payment instruction. The page is also entered when a face recognition instruction triggered on a face management page is obtained. Here, "face" may refer to a human face or the face of another object.
For example, when the object to be tested wants to enter the terminal's operation page, the terminal enters the expression unlocking page upon receiving the unlocking instruction; or, in an online payment scenario, upon detecting the payment instruction; or, as shown in FIG. 4, when the object to be tested taps or touches the face recognition control on the face management page.
S204: Present an unlocking node sequence on the expression unlocking page.
The unlocking node sequence is a sequence of nodes to be unlocked. The nodes may be ordered; in other words, during unlocking the nodes in the sequence are unlocked one by one in order. The sequence may correspond to one security lock, or one password string, and each node in it is unlocked by the corresponding facial expression.
In one embodiment, when the expression unlocking page is displayed, the unlocking node sequence is presented in the unlocking progress area, where pointers such as arrows or ">" symbols may be displayed between adjacent unlocking nodes, as shown in FIG. 4.
In one embodiment, after S204 the terminal may further display the facial image captured in real time in the face preview area of the expression unlocking page.
Here, "face" broadly covers the face, chin, lips, eyes, nose, eyebrows, forehead, ears, and so on of the object to be tested. The face preview area may display a face capture frame, through which the face patch in the facial image is cropped, avoiding the heavy computation of recognizing the entire facial image.
In one embodiment, the terminal captures the object to be tested in real time through a built-in camera or a connected external camera, obtains the facial image, and displays it in the face preview area of the expression unlocking page. The facial image may contain only the face of the object to be tested, or both the face and a hand.
In one embodiment, after displaying the real-time facial image in the face preview area, the method may further include: the terminal detects whether the facial key points in the facial image are located within the face capture frame; if so, S206 is performed; if not, a prompt to adjust the capture position is issued.
The facial key points may be the ears, chin, forehead, and so on of the object to be tested; when these lie inside the face capture frame, the whole face can be considered to be inside the frame.
For example, as shown in FIG. 5, when the facial key points are detected to be outside the face capture frame (the black rectangle in FIG. 5), the prompt "Please adjust the capture position" is displayed near the frame, so that the whole face is brought inside the frame. A minimal sketch of this check is given below.
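As a rough, non-normative sketch (the function and type names are ours, not the patent's), the in-frame check reduces to a bounding-box test over the detected key points:

```python
from typing import Iterable, Tuple

Rect = Tuple[int, int, int, int]  # (left, top, right, bottom) in pixels

def keypoints_in_frame(keypoints: Iterable[Tuple[int, int]], frame: Rect) -> bool:
    """True only if every key point (ears, chin, forehead, ...) lies inside
    the face capture frame."""
    left, top, right, bottom = frame
    return all(left <= x <= right and top <= y <= bottom for x, y in keypoints)

def capture_step(keypoints, frame):
    if keypoints_in_frame(keypoints, frame):
        return "proceed"                         # go on to S206
    return "Please adjust the capture position"  # prompt shown near the frame
```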
S206: At an unlocking node to be processed in the unlocking node sequence, generate an unlocking state identifier based on the facial expression in the facial image captured in real time.
An unlocking node to be processed is a node in the sequence on which no unlocking operation has been performed; in this embodiment it may also mean the node on which no unlocking operation has been performed and which currently needs to be unlocked.
The unlocking state identifier may indicate either that the node to be processed has been unlocked successfully, or that an unlocking operation has been performed on it without it being certain whether unlocking succeeded. According to these two meanings, S206 can be divided into two scenarios:
Scenario 1: the unlocking state identifier indicates that an unlocking operation has been performed on the node to be processed, without certainty that unlocking succeeded.
In one embodiment, the unlocking node sequence is presented in the unlocking progress area of the expression unlocking page. Displaying the real-time facial image in the face preview area may specifically include: the terminal performs facial expression recognition, in the order of the nodes in the sequence, on the facial images corresponding to the nodes to be processed; each time recognition is completed, an unlocking state identifier is generated at the corresponding node in the unlocking progress area. The identifier indicates that an unlocking operation has been performed on the current node: either the operation succeeded in unlocking the node, or it was performed without unlocking it.
For a single node to be processed, the steps may include: when unlocking the current node, the terminal performs facial expression recognition on the currently captured facial image, compares the recognized expression with the target expression of that node, and after obtaining the comparison result generates an unlocking state identifier at the current node. The current node then becomes a processed node, and the remaining nodes to be processed are unlocked in turn.
For example, as shown in FIG. 6, in the left-hand figure none of the unlocking nodes (nodes 1-6) has undergone an unlocking operation. When the whole face is detected to be inside the face capture frame, the first node is unlocked first: facial expression recognition is performed on the facial image in the face preview area, that is, on the face patch inside the capture frame, giving the recognition result; here the expression is an open mouth with the left eye squinted. The recognized expression is compared with the target expression of the first node, and once the comparison result is obtained an unlocking state identifier is generated at the first node's position, indicating that an unlocking operation has been performed on it. The other nodes in the sequence are unlocked in the same way.
In one embodiment, each time facial expression recognition is completed, the terminal generates an expression model graph corresponding to the expression, superimposes it on the corresponding facial image in the face preview area, and compares it with the expression image of the corresponding node; after obtaining the comparison result, it generates an unlocking state identifier at the node to be processed, indicating that recognition of that node's facial image is complete. For the expression model graph, see the black dots in FIG. 6.
Scenario 2: the unlocking state identifier indicates that the node to be processed has been unlocked successfully.
In one embodiment, each time facial expression recognition is completed, the terminal generates an expression model graph corresponding to the expression and superimposes it on the corresponding facial image in the face preview area; when the expression model graph is consistent with the expression image of the corresponding node, the terminal determines that the facial expression matches the corresponding target expression and then generates an unlocking state identifier at the node to be processed.
The expression model graph is a figure generated from the recognized facial expression; it represents the expression recognized from the object to be tested, and also indicates that the facial image of the node to be processed has undergone expression recognition.
When the expression model graph is consistent with the node's expression image, the facial expression in the facial image matches the expression image pre-enrolled for that node, and the node can be unlocked successfully.
For example, as shown in FIG. 6, when the first node is unlocked, the expression in the facial image is recognized; once the facial key points are recognized, an expression model graph matching the open-mouth, left-eye-squinting expression is generated and superimposed on the expression in the facial image. The graph is then compared with the expression image pre-enrolled for the first node; if they are consistent, an unlocking state identifier is generated at the first node's position.
While each node is being unlocked, a prompt image can indicate which expression the current node requires.
In one embodiment, the expression unlocking page contains an expression prompt area, and the method further includes: while a node in the sequence is being unlocked, in response to an expression prompt operation triggered in the prompt area, displaying in that area the prompt image corresponding to the node. A user who has forgotten the required expression can thus recall it through the prompt image. Note that the prompt image helps the target object who enrolled the expressions to recall them (for example, a sunflower suggesting an open mouth), while other users may not understand its meaning.
As shown in FIG. 5, when the first node is being unlocked and the object to be tested has forgotten the required expression, the sunflower image can be used to recall which expression corresponds to the first node, which is then made to unlock it.
S208: Complete unlocking based on the unlocking state identifiers and whether the facial expression in each corresponding facial image matches the corresponding target expression.
In one embodiment, unlocking succeeds when an unlocking state identifier has been generated at every node in the sequence and the facial expression in each corresponding facial image matches the corresponding target expression; for example, referring to FIG. 6, when the positions of nodes 1-6 have all changed from colorless (or white) to gray and the expressions match the target expressions. Alternatively, unlocking succeeds when identifiers have been generated at at least two nodes and the corresponding expressions match; for example, when nodes 1-3 have all turned gray and the expressions in their facial images match the corresponding target expressions.
A target expression is an expression in a pre-enrolled expression image; for example, for a security lock with six unlocking nodes, one or more expression images are enrolled for each node, and the expressions in them are the target expressions.
When a recognized expression matches the expression image of the corresponding node to be processed, it matches the corresponding target expression; conversely, when it matches the corresponding target expression, it matches the expression image of the corresponding node.
Likewise, when a generated expression model graph matches the expression image of the corresponding node to be processed, it matches the corresponding target expression, and vice versa.
In one embodiment, while unlocking each node, the terminal records the time interval taken to unlock it and computes the total time interval. Unlocking succeeds when identifiers have been generated at all nodes, all expressions match the target expressions, and the total time interval is less than a preset time interval.
When identifiers have been generated at all nodes in the sequence but the expression in one or more facial images does not match the corresponding target expression, unlocking fails. Unlocking can then be retried, that is, S204 to S208 are performed again, until unlocking succeeds or the cumulative number of failures reaches a preset number, at which point unlocking is paused.
In one embodiment, when identifiers have been generated at all nodes but the expression in at least one facial image does not match the corresponding target expression, the terminal issues a prompt indicating that unlocking failed, cancels the display of the unlocking state identifiers, and returns to S204 to S208.
In one embodiment, when unlocking fails, the cumulative number of failures is obtained; when it reaches the preset number, the unlocking process is paused, a reserved contact identifier is obtained, and alarm information is sent to it.
The cumulative number may be the total number of failures in the current round or within a preset period, which can be set as needed and is not limited in this embodiment. The reserved contact identifier is a communication identifier reserved by the target object (for example, the user who enrolled the expression images) in the terminal's application account for receiving alarms or for emergency contact, such as a reserved phone number, e-mail account, or instant messaging account. Note that the cumulative number of failures is the number of consecutive failures; a single success resets it to zero.
For example, taking a reserved phone number as the reserved contact identifier, if the cumulative number of failures within 5 minutes reaches the preset number of 5, the unlocking process is paused and alarm information is sent to the reserved phone number, warning the corresponding target user to reset or re-obtain the password and preventing malicious unlocking by others. A sketch of such an over-limit guard appears below.
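A minimal sketch of the guard, assuming a consecutive-failure counter and an abstract alert callback (the patent does not fix the alert channel, so the callback here is a placeholder):

```python
class UnlockGuard:
    """Pause unlocking after N consecutive failures and notify a reserved
    contact; the alert callback stands in for SMS, e-mail, or IM delivery."""

    def __init__(self, max_failures=5, alert=print):
        self.max_failures = max_failures
        self.alert = alert      # e.g. a function that sends an SMS
        self.failures = 0
        self.paused = False

    def record(self, success):
        if success:
            self.failures = 0   # one success resets the consecutive count
            return
        self.failures += 1
        if self.failures >= self.max_failures:
            self.paused = True  # pause the unlocking process
            self.alert("Repeated unlock failures; please reset the password.")
```

A real implementation would persist the counter and the paused flag, so the protection cannot be bypassed by simply restarting the application.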
For example, performing face recognition on a facial image can yield the facial feature point results shown in FIG. 7. For convenience of the following description, the recognized feature points are numbered: in FIG. 7, 1-17 denote face contour feature points, 18-22 and 23-27 denote the user's left and right eyebrow feature points, 28-36 denote nose feature points, 37-42 denote left eye feature points, 43-48 denote right eye feature points, and 49-68 denote lip feature points. This is only an example; in alternative embodiments only some of these feature points, or more of them, may be recognized, or the points may be labeled differently, all of which fall within the scope of the embodiments of this application.
Facial feature point recognition techniques are usually divided into two classes according to the features they recognize:
(1) Methods based on local features
In one embodiment, local-feature methods describe the face using local geometric features, such as the relative positions and distances of facial organs (eyes, nose, mouth, etc.). The feature components usually include Euclidean distances, curvatures, and angles between feature points, giving an efficient description of the salient facial features.
For example, integral projection can be used to locate the facial feature points, with the Euclidean distances between them as feature components, producing a multi-dimensional facial feature vector for classification. The components mainly include: the vertical distance between the eyebrows and the eye centers; several descriptors of the eyebrow arc; the nose width and vertical position; the nostril positions; the face width; and so on. With such feature points, a 100% correct recognition rate was achieved in the recognition process.
In optional embodiments, local-feature methods can also be empirical descriptions of general properties of facial feature points. For example, facial images have some obvious basic features: the face region usually contains feature points such as the eyes, nose, and lips, whose brightness is generally lower than that of the surrounding region; the two eyes are roughly symmetric, and the nose and mouth lie on the axis of symmetry.
(2) Holistic methods
Holistic methods treat the facial image as a whole and apply some transformation to it to recognize features; they consider the overall properties of the face while preserving the topological relations between the facial parts and the information of the parts themselves.
Since facial images are usually of very high dimensionality and are not compactly distributed in the high-dimensional space, classification is difficult and computation expensive. Subspace analysis can be used to find a linear or nonlinear transformation, according to a certain objective, that compresses the original high-dimensional data into a low-dimensional subspace where the distribution is more compact, reducing computational complexity. A toy linear-subspace projection is sketched below.
Alternatively, a grid of rectangular nodes can be placed over the facial image, with each node described by multi-scale wavelet features at that node and the connections between nodes described by geometric distances, forming a face representation based on a two-dimensional topological graph; recognition is then based on the similarity of the nodes and connections in two images.
Besides the above subspace analysis and elastic graph matching, holistic methods also include neural-network-based methods, among others; the embodiments of this application do not limit the type of holistic method.
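For illustration only, here is a plain PCA-via-SVD projection in the spirit of the subspace analysis just described; the patent does not prescribe a specific transform, so this is one classic linear choice:

```python
import numpy as np

def pca_subspace(faces: np.ndarray, k: int) -> np.ndarray:
    """Project flattened face images (n_samples x n_pixels) onto the top-k
    principal components, compressing them into a compact linear subspace."""
    centered = faces - faces.mean(axis=0)
    # Rows of vt are the principal directions of the centered data
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:k].T  # low-dimensional embedding, shape (n_samples, k)
```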
In the above embodiments, an unlocking node sequence composed of multiple unlocking nodes is configured on the expression unlocking page. Unlocking each node requires expression recognition on the facial image captured in real time, and each node corresponds to a specific target expression; the whole unlocking process completes only when the expressions in the facial images of all nodes match the target expressions of the corresponding nodes. This effectively prevents unlocking with a stolen image or face model and improves information security.
In one embodiment, besides unlocking with facial expressions alone, unlocking can combine facial expressions with face recognition. As shown in FIG. 8, the method may further include:
S802: Perform facial expression recognition, in the order of the nodes in the unlocking node sequence, on the facial images corresponding to the nodes to be processed.
A facial expression is the expression presented when different parts of the face perform corresponding actions or hold corresponding postures, such as the open-mouth, left-eye-squinting expression shown in FIG. 5 and FIG. 6.
In one embodiment, facial expression recognition includes: the terminal extracts eye feature points from the facial image; among them, it determines a first distance between the upper-eyelid and lower-eyelid feature points and a second distance between the left and right eye-corner feature points, and determines the eye posture from the relationship between the ratio of the first distance to the second distance and at least one preset interval.
The left and right eye-corner feature points are the left and right corners of the same eye; for the left eye, they are the left and right corner feature points of the left eye.
For example, as shown in FIG. 7, from the facial feature points obtained by face recognition, the terminal computes the first distance between upper-eyelid feature point 38 and lower-eyelid feature point 42, and the second distance between left eye-corner feature point 37 and right eye-corner feature point 40. When the ratio of the first distance to the second distance is 0, the object to be tested is judged to be squinting the left eye; when the ratio is less than 0.2, to be blinking the left eye; and when the ratio is greater than 0.2 and less than 0.6, to be glaring with the left eye. A sketch of this classification follows.
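A compact sketch of that eye-posture rule, using the example thresholds quoted above (0, 0.2, 0.6); the function and label names are illustrative, and the landmarks are assumed to be (x, y) pixel coordinates:

```python
import math

def eye_posture(upper_lid, lower_lid, left_corner, right_corner):
    """Classify the eye state from the ratio of the eyelid opening (first
    distance) to the eye width (second distance)."""
    first = math.dist(upper_lid, lower_lid)        # eyelid opening
    second = math.dist(left_corner, right_corner)  # eye width
    ratio = first / second
    if ratio == 0:
        return "squinting"   # eye fully closed
    if ratio < 0.2:
        return "blinking"
    if ratio < 0.6:
        return "glaring"     # eye wide open
    return "indeterminate"
```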
The lip posture can be recognized in the following ways:
Mode 1: from the heights of the lip feature points.
In one embodiment, facial expression recognition further includes: the terminal extracts lip feature points from the facial image and, among them, determines the lip posture from the height difference between the lip-center feature point and a lip-corner feature point.
For example, as shown in FIG. 7, the terminal determines the heights of lip-center feature point 63 and lip-corner feature point 49 (or 55) and computes their height difference; if the lip-corner feature points are higher than the lip-center feature point, the lip posture is judged to be a smile.
Mode 2: from the distance between the upper- and lower-lip feature points.
In one embodiment, among the lip feature points, the terminal determines the lip posture from a third distance between the upper-lip and lower-lip feature points.
For example, from the facial feature points obtained by face recognition, the third distance is compared with a distance threshold; when it reaches the threshold, the lip posture is determined to be an open mouth. As shown in FIG. 7, the third distance between upper-lip feature point 63 and lower-lip feature point 67 is compared with the threshold; if it is greater than or equal to the threshold, the object to be tested is opening the mouth.
In another embodiment, the terminal computes a fourth distance between the left and right lip-corner feature points; when the third distance is greater than or equal to the fourth distance, the lip posture is determined to be an open mouth.
Mode 3: from the ratio of the upper-lower lip distance to the lip-corner distance.
In one embodiment, among the lip feature points, the lip posture is determined from the relationship between the ratio of the third distance to the fourth distance and at least one preset interval, where the fourth distance is the distance between the left and right lip-corner feature points.
For example, as shown in FIG. 7, the terminal determines the third distance between upper-lip feature point 63 and lower-lip feature point 67 and the fourth distance between left lip-corner feature point 49 and right lip-corner feature point 55, and determines the lip posture from the relationship between their ratio and at least one preset interval: when the ratio falls in a first preset interval, the lip posture is an open mouth; when it falls in a second preset interval, the lip posture is a closed mouth, the values of the first interval all being greater than those of the second.
In one embodiment, facial expression recognition further includes: extracting eyebrow feature points and eyelid feature points from the facial image, determining a fifth distance between them, and determining the eyebrow posture from the comparison of the fifth distance with a preset distance.
For example, as shown in FIG. 7, the distance between an eyebrow feature point and eyelid feature point 38 is computed; when it is greater than the preset distance, the object to be tested is judged to be raising the eyebrows. The sketch below combines the lip and eyebrow rules.
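The lip and eyebrow rules can be sketched the same way; the 1.0 and 0.2 thresholds mirror the example values given later in S2, and the landmark arguments are assumed to be (x, y) pixel coordinates:

```python
import math

def lip_posture(lip_center, lower_lip, left_corner, right_corner):
    """Apply the three lip rules above; the thresholds are illustrative."""
    third = math.dist(lip_center, lower_lip)        # third distance
    fourth = math.dist(left_corner, right_corner)   # fourth distance
    if third / fourth > 1.0:
        return "pouting"
    if third >= 0.2 * fourth:
        return "mouth open"
    # Image y grows downward, so a larger y means a lower point on the face;
    # a smile is judged when the lip centre sits below both lip corners.
    if lip_center[1] > left_corner[1] and lip_center[1] > right_corner[1]:
        return "smiling"
    return "neutral"

def eyebrows_raised(brow_point, eyelid_point, preset_distance):
    """Raised eyebrows when the fifth distance exceeds the preset distance."""
    return math.dist(brow_point, eyelid_point) > preset_distance
```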
S804: Perform face recognition, in the order of the nodes in the unlocking node sequence, on the facial image corresponding to each node, obtaining a face recognition result.
For the face recognition process, refer to S208 of the above embodiments.
S806: Unlocking succeeds when an unlocking state identifier has been generated at every node, the expression in every facial image matches the corresponding target expression, and the face recognition result shows that the object to be tested is consistent with the target object.
While unlocking with facial expressions, face recognition can also be performed to judge whether the face matches the pre-enrolled face of the target object; a match means the object to be tested and the target object are the same object. The facial image is an image captured of the object to be tested.
When identifiers have been generated at all nodes and all expressions match the target expressions, but the face recognition result shows that the object to be tested is not consistent with the target object, unlocking fails: the currently detected object and the target object are not the same object.
In the above embodiment, combining facial expressions with face recognition for unlocking further improves the security of the unlocking node sequence, and thus information security.
In one embodiment, besides unlocking with facial expressions alone, unlocking can combine facial expressions with gestures. As shown in FIG. 9, the method may further include:
S902: Perform facial expression recognition, in the order of the nodes in the unlocking node sequence, on the facial images corresponding to the nodes to be processed.
A facial expression is the expression presented when different parts of the face perform corresponding actions or hold corresponding postures, such as the open-mouth, left-eye-squinting expression shown in FIG. 5 and FIG. 6.
For expression recognition, refer to S802 of the above embodiments.
S904: During facial expression recognition, perform gesture recognition on the hand in the facial image.
In one embodiment, the terminal applies convolution to the facial image through a neural network model to extract the gesture features in it, and determines the specific gesture from those features.
The neural network model may be a network model for extracting gesture features, specifically a two-dimensional convolutional neural network model, which may be one network branch of a machine learning model. A toy branch of this kind is sketched below.
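Purely as an illustration (the embodiment specifies only "a two-dimensional convolutional neural network", not an architecture or class count, so everything concrete here is an assumption), such a branch might look like this in PyTorch:

```python
import torch
import torch.nn as nn

class GestureNet(nn.Module):
    """Toy 2-D CNN branch that maps a hand crop to gesture logits."""

    def __init__(self, num_gestures: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # global pooling to a 32-dim feature
        )
        self.classifier = nn.Linear(32, num_gestures)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 3, H, W) crops of the hand region from the facial image
        return self.classifier(self.features(x).flatten(1))
```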
S906: Each time facial expression recognition and gesture recognition are completed, generate an unlocking state identifier at the corresponding node in the unlocking progress area.
In one embodiment, after each expression recognition the terminal may also judge whether the expression is consistent with the corresponding target expression, and after each gesture recognition whether the gesture is consistent with the corresponding target gesture; if both are consistent, an unlocking state identifier is generated at the corresponding node in the unlocking progress area.
S908: Unlocking succeeds when an identifier has been generated at every node, the expression in every facial image matches the corresponding target expression, and every recognized gesture is consistent with the corresponding target gesture.
In the above embodiment, combining facial expressions with gestures for unlocking further improves the security of the unlocking node sequence, and thus information security.
In one embodiment, each unlocking node can be unlocked with at least two facial expressions, each node corresponding to at least two different target expressions. As shown in FIG. 10, S206 may specifically include:
S1002: Perform facial expression recognition on at least two facial images corresponding to the node to be processed.
That is, when unlocking a node to be processed, expression recognition can be performed on at least two facial images corresponding to it.
For example, when unlocking node 1, facial image 1 is captured first and recognized, giving facial expression 1; the next captured facial image is then recognized, giving facial expression 2.
For the expression recognition process, refer to S802 in the above embodiments.
S1004: When the expressions in the at least two facial images all match the corresponding target expressions, generate an unlocking state identifier at the node to be processed.
The expressions in the at least two facial images may be the same or different.
In one embodiment, when the expressions in the at least two facial images all match the corresponding target expressions, the terminal determines the capture time interval between the images, or the unlocking time interval between the node to be processed and the previously processed node; S1004 is performed when the capture or unlocking time interval satisfies the corresponding time interval condition. This avoids spending too much time unlocking one node, or different nodes, improving unlocking efficiency, and also prevents others from unlocking a node by repeatedly trying different expressions. A sketch of the interval check appears below.
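A sketch of the interval condition, assuming timestamps in seconds and a single maximum-gap parameter (the patent leaves the exact condition open; the 5.0-second default echoes the per-combination window in the later example):

```python
def interval_ok(timestamps: list[float], max_gap: float = 5.0) -> bool:
    """Accept the node only if consecutive capture (or unlock) times are no
    more than max_gap seconds apart."""
    return all(t2 - t1 <= max_gap for t1, t2 in zip(timestamps, timestamps[1:]))
```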
S1006: Unlocking succeeds when an identifier has been generated at every node and the expression in every facial image matches the corresponding target expression.
In the above embodiment, unlocking each node with at least two facial expressions further improves the security of the unlocking node sequence, and thus information security.
In one embodiment, a specific expression image can be enrolled in advance for each unlocking node; the enrollment steps may include:
S1102: Display an expression entry page.
The expression entry page may or may not contain expression identifiers. An expression identifier denotes an expression type; different identifiers correspond to different expression types.
In one embodiment, upon obtaining an expression entry instruction triggered on the expression management page, the terminal switches to the expression entry page containing expression identifiers. As shown in FIG. 12, when a tap or touch on the face entry control of the expression management page is detected, the expression entry page containing expression identifiers is displayed; its upper half displays the captured expression images and its lower half displays the expression identifiers.
S1104: Enter, through the expression entry page, the expression images corresponding to the expression identifiers on that page, one by one.
In one embodiment, when the page contains expression identifiers, the terminal captures expression images of the target object in the order of the identifiers and enters them one by one. As shown in FIG. 12, on the expression entry page the expression image of the first identifier is entered first, and then those of the subsequent identifiers in turn.
In one embodiment, when the page does not contain expression identifiers, the terminal recognizes the captured expression image to obtain the target expression and displays the corresponding identifier in the identifier preview area of the page. As shown in FIG. 13, an open-mouth facial image is recognized to obtain the open-mouth expression, from which an expression identifier is generated and displayed in the identifier preview area.
S1106: Enter an expression combination page.
The expression combination page is a page for combining expression identifiers.
S1108: In response to a combination operation triggered on the expression combination page, combine the expression identifiers to obtain the expression unlocking sequence.
In one embodiment, on the combination page the terminal may order the expression identifiers from the entry page and build the expression unlocking sequence from the ordered identifiers; or it may first combine the identifiers and then order them, building the sequence from the combined and ordered identifiers.
For example, as shown in FIG. 14, the target object can combine the identifiers on the combination page in pairs to obtain the corresponding expression combinations; in the first row of the right-hand combination page in FIG. 14, an ordinary expression is combined with a left-glare expression.
In the above embodiment, the expression images are entered in advance according to the expression identifiers, and the identifiers are then combined and ordered to generate the expression unlocking sequence, so that during unlocking the expression unlocking nodes are unlocked with the corresponding expressions. Moreover, unlocking each node requires comparing the currently recognized expression with the pre-enrolled expression image, and the node is unlocked only when they match, effectively improving the security of the unlocking nodes.
As an example, a face image is used for illustration. As shown in FIG. 15 and FIG. 16, in this embodiment the function of an intelligent password lock is realized mainly through face recognition + expression combinations + an expression sequence + prompt images. This preserves the user's privacy well: combining expressions makes expression recognition harder to crack, the expression sequence increases the complexity of unlocking, and prompt images help the user remember the password combination while avoiding password leakage. The expression sequence corresponds to the unlocking node sequence described above.
In this embodiment, the user can define combinations of different expressions, set the expression sequence, and add prompt images and picture (or animation) hints, composing a password lock that meets the user's own encryption and decryption needs and strengthens personal privacy protection. The specific steps of defining and unlocking the password lock include:
S1: Recognize the facial feature points with an expression recognizer to obtain the corresponding facial expression.
S2: Redefine and implement the expression trigger logic.
1) Blink: from the facial feature points obtained by face recognition, divide the distance between the center feature points of the upper and lower eyelids by the distance between the left and right eye-corner feature points; when the ratio is less than the set threshold, a blink is judged. As shown in FIG. 7, the distance from left-eye upper-eyelid center point 38 to left-eye lower-eyelid center point 42 is divided by the distance between eye-corner points 37 and 40; if the ratio is less than 0.2, the user is judged to be blinking the left eye.
2) Glare: from the facial feature points obtained by face recognition, divide the distance between the center feature points of the upper and lower eyelids by the distance between the left and right eye-corner feature points; when the ratio is greater than the set threshold, a glare is judged. As in FIG. 7, if the ratio of the distance from point 38 to point 42 to the distance between points 37 and 40 is greater than 0.6, the user is judged to be glaring with the left eye.
3) Pout: from the facial feature points obtained by face recognition, set a pout ratio threshold and divide the distance between the upper and lower lip feature points by the distance between the left and right lip feature points; when the quotient reaches the threshold, a pout is judged. As in FIG. 7, the distance between lip center points 67 and 63 is divided by the distance between lip-corner points 49 and 55; if the quotient is greater than 1.0, a pout is judged.
4) Smile: from the facial feature points obtained by face recognition, when the lip-center feature point is lower than the left and right lip-corner feature points, a smile is judged. As shown in FIG. 7, when lip-center point 63 is lower than lip-corner points 49 and 55, a smile is judged.
5) Anger: from the facial feature points obtained by face recognition, when the lip-center feature point is higher than the left and right lip-corner feature points, anger is judged. As in the figure, when lip-center point 63 is higher than lip-corner points 49 and 55, anger is judged.
6) Open mouth: from the facial feature points obtained by face recognition, compute a weighted average of the unit distances between the upper and lower lips and set an open-mouth distance threshold; when the distance between the upper- and lower-lip feature points exceeds the threshold, an open mouth is judged. As shown in FIG. 7, when the distance between upper-lip center point 63 and lower-lip center point 67 is greater than 0.2 times the distance between lip-corner points 49 and 55, an open mouth is judged.
7) Raised eyebrows: from the facial feature points obtained by face recognition, set an eyebrow-raising distance threshold; when the distance from an eyebrow feature point to an upper-eyelid feature point exceeds the threshold, raised eyebrows are judged.
8) Nod: from the facial feature points obtained by face recognition, compute the up-down head rotation angle in the previous frame and in the current frame and take their difference; when the difference exceeds 10 degrees, a nod is judged.
S3: Combine the expressions.
The user can combine the above expressions in pairs as needed; for example, left blink + open mouth can be set as one expression combination, and right blink + pout as another.
S4: Order the different expression combinations.
The user can arrange the configured expression combinations in a certain order and build a password group from the ordered combinations, for example: combination 1 (blink + nod) : combination 2 (left blink + open mouth) : combination 3 (blink + pout) : combination 4 (smile) : combination 5 (raised eyebrows) : combination 6 (glare). One hypothetical way to encode and verify such a password group is sketched below.
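In this encoding (none of these identifiers come from the patent; the ":"-separated grouping above simply maps to an ordered list of sets of expressions that must co-occur):

```python
# Hypothetical password group: an ordered list of expression combinations.
PASSWORD = [
    {"blink", "nod"},
    {"blink_left", "mouth_open"},
    {"blink", "pout"},
    {"smile"},
    {"raise_brow"},
    {"glare"},
]

def verify(observed: list[set[str]]) -> bool:
    """Unlock only when every observed combination equals the stored one,
    in the stored order."""
    return len(observed) == len(PASSWORD) and all(
        got == want for got, want in zip(observed, PASSWORD)
    )
```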
S5: Set prompt images and animations.
For each expression combination, a prompt image or animation that aids memory without revealing the expression can be set: for example, a blue sky to recall a smile, rain to recall anger, a palm to recall a nod; the association scheme is set according to the user's own preferences. Overly obvious hints should be avoided to prevent leaking the password combination.
S6: Set an over-limit protector.
A verification time, such as 5 s, is set for each expression combination, together with a total number of recognition errors; for example, after five erroneous operations the password lock is locked, cannot be opened for a certain period, and warning information is sent to a fixed mailbox and phone number.
S7: Unlock.
Face recognition is performed first, then expression recognition; when the expression of some unlocking node is unknown, the prompt image is displayed. After each single facial expression is recognized, the expression combination is recognized to confirm that the combination for the corresponding node is correct. After every combination is correctly recognized, the whole expression sequence is recognized to confirm that the order of the combinations is correct. Finally, timeouts and the cumulative failure count are checked; if a timeout occurs or the cumulative failures reach 5, unlocking is paused and warning information is sent to the fixed mailbox and phone number.
The solution of this embodiment helps users flexibly set password combinations for access control, file encryption, entering a smart terminal's operation page, payment, and other needs, with low technical difficulty, strong privacy, and ease of operation and memorization.
It should be understood that although the steps in the flowcharts of FIG. 2 and FIGS. 8-11 are shown in the order indicated by the arrows, they are not necessarily executed in that order. Unless explicitly stated herein, their execution is not strictly ordered and they may be executed in other orders. Moreover, at least some of the steps in FIG. 2 and FIGS. 8-11 may include multiple sub-steps or stages, which are not necessarily completed at the same time but may be executed at different times, and whose order is not necessarily sequential; they may be executed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
In one embodiment, as shown in FIG. 17, a facial expression-based unlocking apparatus is provided, which may be implemented as software modules, hardware modules, or a combination of both as part of a computer device. The apparatus specifically includes a display module 1702, a first display module 1704, a generating module 1706, and an unlocking module 1708, where:
the display module 1702 is configured to display an expression unlocking page;
the first display module 1704 is configured to present an unlocking node sequence on the expression unlocking page;
the generating module 1706 is configured to generate, at an unlocking node to be processed in the unlocking node sequence, an unlocking state identifier based on the facial expression in the facial image captured in real time;
the unlocking module 1708 is configured to complete unlocking based on the unlocking state identifiers and whether the facial expression in each corresponding facial image matches the corresponding target expression.
In the above embodiment, an unlocking node sequence composed of multiple nodes is configured on the expression unlocking page; unlocking each node requires expression recognition on the real-time facial image, each node corresponds to a specific target expression, and the whole unlocking process completes only when all expressions match the target expressions of the corresponding nodes, effectively preventing unlocking with a stolen image or face model and improving information security.
In one embodiment, the unlocking node sequence is presented in the unlocking progress area of the expression unlocking page, and the apparatus further includes:
a second display module, configured to display the facial image captured in real time in the face preview area of the expression unlocking page;
the generating module is further configured to perform facial expression recognition, in node order, on the facial images corresponding to the nodes to be processed, and to generate an unlocking state identifier at the corresponding node in the unlocking progress area each time recognition is completed.
In one embodiment, the apparatus further includes a generation module, an overlay module, and a determining module, where:
the generation module is configured to generate an expression model graph corresponding to the facial expression each time recognition is completed;
the overlay module is configured to superimpose the expression model graph on the corresponding facial image in the face preview area;
the determining module is configured to determine that the facial expression in the facial image matches the corresponding target expression when the expression model graph is consistent with the expression image of the corresponding node.
In one embodiment, the facial image includes a face and a hand, and the apparatus further includes a recognition module, where:
the recognition module is configured to perform gesture recognition on the hand in the facial image during expression recognition;
the generating module is further configured to generate an unlocking state identifier at the corresponding node in the unlocking progress area each time expression recognition and gesture recognition are completed.
In the above embodiment, combining expressions with gestures for unlocking further improves the security of the unlocking node sequence, and thus information security.
In one embodiment, each node corresponds to at least two different target expressions; the generating module is further configured to perform expression recognition on at least two facial images corresponding to the node to be processed, and to generate an unlocking state identifier at that node when the expressions in the at least two images all match the corresponding target expressions.
In one embodiment, the determining module is further configured to, when the expressions in the at least two images all match the corresponding target expressions, determine the capture time interval between the images, or the unlocking time interval between the node to be processed and the previously processed node;
the generating module is further configured to generate an unlocking state identifier at the node to be processed when the capture or unlocking time interval satisfies the corresponding time interval condition.
In the above embodiment, unlocking each node with at least two expressions further improves the security of the unlocking node sequence, and thus information security.
In one embodiment, the recognition module is further configured to extract eye feature points from the facial image; among them, determine a first distance between the upper-eyelid and lower-eyelid feature points and a second distance between the left and right eye-corner feature points; and determine the eye posture from the relationship between the ratio of the first distance to the second distance and at least one preset interval.
In one embodiment, the recognition module is further configured to extract lip feature points from the facial image and determine the lip posture from the height difference between the lip-center and lip-corner feature points; or from a third distance between the upper-lip and lower-lip feature points; or from the relationship between the ratio of the third distance to a fourth distance and at least one preset interval, the fourth distance being the distance between the left and right lip-corner feature points.
In one embodiment, the recognition module is further configured to extract eyebrow and eyelid feature points from the facial image, determine a fifth distance between them, and determine the eyebrow posture from the comparison of the fifth distance with a preset distance.
In one embodiment, a face capture frame is displayed in the face preview area, and the apparatus further includes a detection module and a prompt module, where:
the detection module is configured to detect whether the facial key points in the facial image are located within the face capture frame;
the generating module is further configured to, if the facial key points are within the face capture frame, generate an unlocking state identifier at each node, in node order, based on the facial expression in the corresponding facial image;
the prompt module is configured to issue a prompt to adjust the capture position if the facial key points are not within the face capture frame.
In one embodiment, the facial image is an image captured of the object to be tested; the recognition module is further configured to perform face recognition, in node order, on the facial image corresponding to each node, obtaining a face recognition result;
the unlocking module is further configured to determine that unlocking succeeds when an identifier has been generated at every node, every expression matches the corresponding target expression, and the face recognition result shows that the object to be tested is consistent with the target object.
In the above embodiment, combining expressions with face recognition for unlocking further improves the security of the unlocking node sequence, and thus information security.
In one embodiment, the apparatus further includes a canceling module, where:
the prompt module is further configured to issue a prompt indicating unlocking failure when an identifier has been generated at every node in the sequence but the expressions in the facial images do not match the corresponding target expressions;
the canceling module is configured to cancel the display of the unlocking state identifiers and return to the step of generating an identifier, at the node to be processed, based on the expression in the corresponding facial image.
In one embodiment, the apparatus further includes an acquisition module, a pause module, and an alarm module, where:
the acquisition module is configured to obtain the cumulative number of unlocking failures when unlocking fails;
the pause module is configured to pause the unlocking process when the cumulative number reaches a preset number;
the alarm module is configured to obtain a reserved contact identifier and send alarm information to it.
In one embodiment, the apparatus further includes an entry module and a combination module, where:
the second display module is configured to display an expression entry page;
the entry module is configured to enter, through the expression entry page, the expression images corresponding to the expression identifiers on that page, one by one;
the entry module is further configured to enter an expression combination page;
the combination module is configured to combine the expression identifiers, in response to a combination operation triggered on the combination page, to obtain the expression unlocking sequence.
In one embodiment, the expression unlocking page contains an expression prompt area; the second display module is further configured to, while a node in the sequence is being unlocked, display in the prompt area, in response to an expression prompt operation triggered there, the prompt image corresponding to the node to be processed.
In the above embodiment, the expression images are entered in advance according to the expression identifiers, which are then combined and ordered to generate the expression unlocking sequence, so that during unlocking the nodes are unlocked with the corresponding expressions; moreover, unlocking each node requires comparing the currently recognized expression with the pre-enrolled expression image, and the node is unlocked only when they match, effectively improving the security of the unlocking nodes.
For specific limitations on the facial expression-based unlocking apparatus, see the limitations on the facial expression-based unlocking method above, which are not repeated here. Each module in the apparatus may be implemented in whole or in part by software, hardware, or a combination of the two; the modules may be embedded in or independent of a processor of a computer device in hardware form, or stored in a memory of the computer device in software form, so that the processor can invoke and execute the operations corresponding to the modules.
In one embodiment, a computer device is provided; it may be a terminal, and its internal structure may be as shown in FIG. 18. The computer device includes a processor, a memory, a communication interface, a display screen, and an input device connected by a system bus. The processor provides computing and control capabilities. The memory includes a non-volatile storage medium and an internal memory; the non-volatile storage medium stores an operating system and a computer program, and the internal memory provides an environment for running them. The communication interface communicates with external terminals by wire or wirelessly; wireless communication can be realized through WIFI, an operator network, NFC (Near Field Communication), or other technologies. The computer program, when executed by the processor, implements a facial expression-based unlocking method. The display screen may be a liquid-crystal or electronic-ink display, and the input device may be a touch layer covering the display screen, a button, trackball, or touchpad on the device housing, or an external keyboard, touchpad, or mouse.
Those skilled in the art will understand that the structure shown in FIG. 18 is only a block diagram of part of the structure related to the solution of this application and does not limit the computer devices to which the solution applies; a specific computer device may include more or fewer components than shown, combine certain components, or arrange the components differently.
In one embodiment, a computer device is further provided, including a memory and a processor, the memory storing a computer program which, when executed by the processor, implements the steps of the above method embodiments.
In one embodiment, a computer-readable storage medium is provided, storing a computer program which, when executed by a processor, implements the steps of the above method embodiments.
In one embodiment, a computer program product or computer program is provided, including computer instructions stored in a computer-readable storage medium; a processor of a computer device reads the computer instructions from the medium and executes them, causing the computer device to perform the steps of the above method embodiments.
Those of ordinary skill in the art will understand that all or part of the procedures of the above method embodiments can be completed by a computer program instructing the relevant hardware; the program may be stored in a non-volatile computer-readable storage medium and, when executed, may include the procedures of the above method embodiments. Any reference to memory, storage, database, or other media used in the embodiments provided in this application may include at least one of non-volatile and volatile memory. Non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, and the like; volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM may take many forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM).
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of the technical features are described; however, as long as these combinations are not contradictory, they should be regarded as within the scope of this specification.
The above embodiments express only several implementations of this application, and their descriptions are specific and detailed, but they should not be construed as limiting the scope of the patent. It should be noted that those of ordinary skill in the art can make several variations and improvements without departing from the concept of this application, all of which fall within its scope of protection. Therefore, the scope of protection of this patent shall be subject to the appended claims.

Claims (20)

  1. A facial expression-based unlocking method, performed by a terminal, wherein the method comprises:
    displaying an expression unlocking page;
    presenting an unlocking node sequence on the expression unlocking page;
    at an unlocking node to be processed in the unlocking node sequence, generating an unlocking state identifier based on a facial expression in a facial image captured in real time; and
    completing unlocking based on the unlocking state identifier and whether the facial expression in the corresponding facial image matches the corresponding target expression.
  2. The method according to claim 1, wherein the unlocking node sequence is presented in an unlocking progress area of the expression unlocking page, and the method further comprises:
    displaying the facial image captured in real time in a face preview area of the expression unlocking page;
    wherein generating the unlocking state identifier at the unlocking node to be processed comprises:
    performing facial expression recognition, in the order of the unlocking nodes in the unlocking node sequence, on the facial images corresponding to the unlocking nodes to be processed; and
    each time facial expression recognition is completed, generating an unlocking state identifier at the corresponding unlocking node in the unlocking progress area.
  3. The method according to claim 2, further comprising:
    each time facial expression recognition is completed, generating an expression model graph corresponding to the facial expression;
    superimposing the expression model graph on the corresponding facial image in the face preview area; and
    when the expression model graph is consistent with the expression image of the corresponding unlocking node, determining that the facial expression in the facial image matches the corresponding target expression.
  4. The method according to claim 1, wherein each unlocking node corresponds to at least two different target expressions, and generating the unlocking state identifier at the unlocking node to be processed comprises:
    performing facial expression recognition on at least two facial images corresponding to the unlocking node to be processed; and
    when the facial expressions in the at least two facial images all match the corresponding target expressions, generating an unlocking state identifier at the unlocking node to be processed.
  5. The method according to any one of claims 2 to 4, wherein performing facial expression recognition on the facial images corresponding to the unlocking nodes to be processed comprises:
    extracting eye feature points, in turn, from the facial images corresponding to the unlocking nodes to be processed;
    among the eye feature points, determining a first distance between an upper-eyelid feature point and a lower-eyelid feature point, and a second distance between a left eye-corner feature point and a right eye-corner feature point; and
    determining an eye posture from the relationship between the ratio of the first distance to the second distance and at least one preset interval.
  6. The method according to any one of claims 2 to 4, wherein performing facial expression recognition on the facial images corresponding to the unlocking nodes to be processed comprises:
    extracting lip feature points, in turn, from the facial images corresponding to the unlocking nodes to be processed; and
    among the lip feature points, determining a lip posture from a height difference between a lip-center feature point and a lip-corner feature point; or
    among the lip feature points, determining a lip posture from a third distance between an upper-lip feature point and a lower-lip feature point; or
    among the lip feature points, determining a lip posture from the relationship between the ratio of the third distance to a fourth distance and at least one preset interval, the fourth distance being the distance between a left lip-corner feature point and a right lip-corner feature point.
  7. The method according to claim 2, wherein a face capture frame is displayed in the face preview area, and after displaying the facial image captured in real time in the face preview area, the method further comprises:
    detecting whether facial key points in the facial image are located within the face capture frame;
    if so, performing the step of generating, at the unlocking node to be processed in the unlocking node sequence, an unlocking state identifier based on the facial expression in the facial image captured in real time; and
    if not, issuing a prompt to adjust the capture position.
  8. The method according to claim 1, wherein the facial image is an image captured of an object to be tested, and the method further comprises:
    performing face recognition, in the order of the unlocking nodes in the unlocking node sequence, on the facial image corresponding to each unlocking node, obtaining a face recognition result;
    wherein completing unlocking comprises:
    determining that unlocking succeeds when an unlocking state identifier has been generated at every unlocking node, the facial expression in every facial image matches the corresponding target expression, and the face recognition result shows that the object to be tested is consistent with the target object.
  9. The method according to claim 1, further comprising:
    when an unlocking state identifier has been generated at every unlocking node in the sequence but the facial expressions in the facial images do not match the corresponding target expressions, issuing a prompt indicating that unlocking has failed; and
    canceling the display of the unlocking state identifiers and returning to the step of generating, at the unlocking node to be processed in the unlocking node sequence, an unlocking state identifier based on the facial expression in the facial image captured in real time.
  10. The method according to claim 9, further comprising:
    when unlocking fails, obtaining the cumulative number of unlocking failures;
    when the cumulative number reaches a preset number, pausing the unlocking process; and
    obtaining a reserved contact identifier and sending alarm information to it.
  11. The method according to any one of claims 1 to 10, further comprising:
    displaying an expression entry page;
    entering, through the expression entry page, the expression images corresponding to the expression identifiers on the expression entry page, one by one;
    entering an expression combination page; and
    in response to a combination operation triggered on the expression combination page, combining the expression identifiers to obtain the expression unlocking sequence.
  12. The method according to any one of claims 1 to 10, wherein the expression unlocking page contains an expression prompt area, and the method further comprises:
    while an unlocking node to be processed in the unlocking node sequence is being unlocked, in response to an expression prompt operation triggered in the expression prompt area, displaying in the expression prompt area the prompt image corresponding to the unlocking node to be processed.
  13. The method according to claim 2 or 3, wherein the facial image includes a face and a hand, and the method further comprises:
    performing gesture recognition on the hand in the facial image during facial expression recognition;
    wherein generating an unlocking state identifier each time facial expression recognition is completed comprises:
    each time facial expression recognition and gesture recognition are completed, generating an unlocking state identifier at the corresponding unlocking node in the unlocking progress area.
  14. The method according to claim 4, further comprising:
    when the facial expressions in the at least two facial images all match the corresponding target expressions, determining a capture time interval between the at least two facial images; or
    determining an unlocking time interval between the unlocking node to be processed and the previously processed unlocking node; and
    when the capture time interval or the unlocking time interval satisfies the corresponding time interval condition, performing the step of generating an unlocking state identifier at the unlocking node to be processed.
  15. The method according to any one of claims 2 to 4, wherein performing facial expression recognition on the facial images corresponding to the unlocking nodes to be processed comprises:
    extracting eyebrow feature points and eyelid feature points, in turn, from the facial images corresponding to the unlocking nodes to be processed;
    determining a fifth distance between the eyebrow feature points and the eyelid feature points; and
    determining an eyebrow posture from the comparison of the fifth distance with a preset distance.
  16. A facial expression-based unlocking apparatus, wherein the apparatus comprises:
    a display module, configured to display an expression unlocking page;
    a first display module, configured to present an unlocking node sequence on the expression unlocking page;
    a generating module, configured to generate, at an unlocking node to be processed in the unlocking node sequence, an unlocking state identifier based on a facial expression in the facial image captured in real time; and
    an unlocking module, configured to complete unlocking based on the unlocking state identifier and whether the facial expression in the corresponding facial image matches the corresponding target expression.
  17. The apparatus according to claim 16, wherein the unlocking node sequence is presented in an unlocking progress area of the expression unlocking page, and the apparatus further comprises:
    a second display module, configured to display the facial image captured in real time in a face preview area of the expression unlocking page;
    the generating module is further configured to perform facial expression recognition, in the order of the unlocking nodes in the unlocking node sequence, on the facial images corresponding to the unlocking nodes to be processed, and to generate an unlocking state identifier at the corresponding unlocking node in the unlocking progress area each time facial expression recognition is completed.
  18. The apparatus according to claim 17, further comprising:
    a generation module, configured to generate an expression model graph corresponding to the facial expression each time facial expression recognition is completed;
    an overlay module, configured to superimpose the expression model graph on the corresponding facial image in the face preview area; and
    a determining module, configured to determine that the facial expression in the facial image matches the corresponding target expression when the expression model graph is consistent with the expression image of the corresponding unlocking node.
  19. A computer device, comprising a memory and a processor, the memory storing a computer program, wherein the processor, when executing the computer program, implements the steps of the method according to any one of claims 1 to 15.
  20. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 15.
PCT/CN2021/108879 2020-09-03 2021-07-28 Facial expression-based unlocking method and apparatus, computer device and storage medium WO2022048352A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP21863420.2A EP4099198A4 (en) 2020-09-03 2021-07-28 UNLOCK METHOD AND APPARATUS BASED ON FACIAL EXPRESSION, COMPUTER DEVICE AND STORAGE MEDIUM
US17/893,028 US20230100874A1 (en) 2020-09-03 2022-08-22 Facial expression-based unlocking method and apparatus, computer device, and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010916138.1 2020-09-03
CN202010916138.1A CN113536262A (zh) 2020-09-03 2020-09-03 Facial expression-based unlocking method and apparatus, computer device and storage medium

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/893,028 Continuation US20230100874A1 (en) 2020-09-03 2022-08-22 Facial expression-based unlocking method and apparatus, computer device, and storage medium

Publications (1)

Publication Number Publication Date
WO2022048352A1 true WO2022048352A1 (zh) 2022-03-10

Family

ID=78094247

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/108879 WO2022048352A1 (zh) 2020-09-03 2021-07-28 基于面部表情的解锁方法、装置、计算机设备和存储介质

Country Status (4)

Country Link
US (1) US20230100874A1 (zh)
EP (1) EP4099198A4 (zh)
CN (1) CN113536262A (zh)
WO (1) WO2022048352A1 (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111797249A (zh) * 2019-04-09 2020-10-20 华为技术有限公司 Content push method, apparatus, and device
CN115412518A (zh) * 2022-08-19 2022-11-29 网易传媒科技(北京)有限公司 Expression sending method and apparatus, storage medium, and electronic device
CN115830749B (zh) * 2022-11-24 2024-05-17 惠州市则成技术有限公司 Home access control management method and system

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130300650A1 (en) * 2012-05-09 2013-11-14 Hung-Ta LIU Control system with input method using recognitioin of facial expressions
CN106203038A * 2016-06-30 2016-12-07 维沃移动通信有限公司 Unlocking method and mobile terminal
CN109165588A * 2018-08-13 2019-01-08 安徽工程大学 Facial expression recognition encryption and unlocking apparatus and method
CN109214301A * 2018-08-10 2019-01-15 百度在线网络技术(北京)有限公司 Control method and apparatus based on face recognition and gesture recognition
CN109325330A * 2018-08-01 2019-02-12 平安科技(深圳)有限公司 Micro-expression lock generation and unlocking method and apparatus, terminal device, and storage medium
CN109829277A * 2018-12-18 2019-05-31 深圳壹账通智能科技有限公司 Terminal unlocking method and apparatus, computer device, and storage medium

Family Cites Families (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4999570B2 (ja) * 2007-06-18 2012-08-15 キヤノン株式会社 Facial expression recognition apparatus and method, and imaging apparatus
US9082235B2 (en) * 2011-07-12 2015-07-14 Microsoft Technology Licensing, Llc Using facial data for device authentication or subject identification
CN102509053A (zh) * 2011-11-23 2012-06-20 唐辉 Method, processor, device, and mobile terminal for verifying authorization
CN103778360A (zh) * 2012-10-26 2014-05-07 华为技术有限公司 Face unlocking method and apparatus based on motion analysis
KR102041984B1 (ko) * 2012-12-18 2019-11-07 삼성전자주식회사 Mobile device with face recognition function using additional components, and control method thereof
CN104036167A (zh) * 2013-03-04 2014-09-10 联想(北京)有限公司 Information processing method and electronic device
TW201504839A (zh) 2013-07-19 2015-02-01 Quanta Comp Inc Portable electronic device and interactive face login method
KR102365393B1 (ko) * 2014-12-11 2022-02-21 엘지전자 주식회사 Mobile terminal and control method thereof
US20180373922A1 (en) * 2015-12-17 2018-12-27 Intel IP Corporation Facial gesture captcha
CN105825112A (zh) * 2016-03-18 2016-08-03 北京奇虎科技有限公司 Unlocking method and apparatus for a mobile terminal
CN108875333B (zh) * 2017-09-22 2023-05-16 北京旷视科技有限公司 Terminal unlocking method, terminal, and computer-readable storage medium
CN108875491B (zh) * 2017-10-11 2021-03-23 北京旷视科技有限公司 Data update method for face unlock authentication, authentication device and system, and non-volatile storage medium
CN108875335B (zh) * 2017-10-23 2020-10-09 北京旷视科技有限公司 Method for face unlocking and entering expressions and expression actions, authentication device, and non-volatile storage medium
CN108108610A (zh) * 2018-01-02 2018-06-01 联想(北京)有限公司 Identity verification method, electronic device, and readable storage medium
CN108650408B (zh) * 2018-04-13 2021-01-08 维沃移动通信有限公司 Screen unlocking method and mobile terminal
CN108830062B (zh) * 2018-05-29 2022-10-04 浙江水科文化集团有限公司 Face recognition method, mobile terminal, and computer-readable storage medium
CN110197108A (zh) * 2018-08-17 2019-09-03 平安科技(深圳)有限公司 Identity verification method and apparatus, computer device, and storage medium
CN109409199B (zh) * 2018-08-31 2021-01-12 百度在线网络技术(北京)有限公司 Micro-expression training method and apparatus, storage medium, and electronic device
CN111104660B (zh) * 2018-10-26 2022-08-30 北京小米移动软件有限公司 Multi-fingerprint unlocking method and apparatus, mobile terminal, and storage medium
CN111222107A (zh) * 2018-11-23 2020-06-02 奇酷互联网络科技(深圳)有限公司 Unlocking method, smart terminal, and computer-readable storage medium
CN109886697B (zh) * 2018-12-26 2023-09-08 巽腾(广东)科技有限公司 Operation determination method and apparatus based on expression groups, and electronic device
CN110928410A (zh) * 2019-11-12 2020-03-27 北京字节跳动网络技术有限公司 Interaction method and apparatus based on multiple expression actions, medium, and electronic device
CN111104923A (zh) * 2019-12-30 2020-05-05 北京字节跳动网络技术有限公司 Face recognition method and apparatus


Also Published As

Publication number Publication date
CN113536262A (zh) 2021-10-22
US20230100874A1 (en) 2023-03-30
EP4099198A1 (en) 2022-12-07
EP4099198A4 (en) 2023-09-06


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21863420

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2021863420

Country of ref document: EP

Effective date: 20220901

NENP Non-entry into the national phase

Ref country code: DE