CN105740688B - Unlocking method and device - Google Patents


Info

Publication number
CN105740688B
CN105740688B (application number CN201610070773.6A)
Authority
CN
China
Prior art keywords
face
virtual mapping
action
position information
mapping icon
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610070773.6A
Other languages
Chinese (zh)
Other versions
CN105740688A (en)
Inventor
阳萍 (Yang Ping)
陆莉 (Lu Li)
王小叶 (Wang Xiaoye)
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201610070773.6A
Publication of CN105740688A
Application granted
Publication of CN105740688B
Legal status: Active
Anticipated expiration

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00 — Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/30 — Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F 21/31 — User authentication
    • G06F 21/32 — User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 — Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 — Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 — Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161 — Detection; Localisation; Normalisation
    • G06V 40/40 — Spoof detection, e.g. liveness detection


Abstract

The invention relates to an unlocking method and device. The method comprises: acquiring a random specified action and prompting the user to complete it; acquiring a face video sequence, identifying the face images in the sequence, and locating the face key point set corresponding to the face images; tracking the face key point set and judging from its position information whether the specified action is completed; if so, acquiring the position information of a first face key point and determining the corresponding virtual mapping icon position information from it; displaying the virtual mapping icon according to that position information; and judging whether the virtual mapping icon moves to a specified position on the screen and, if so, running an unlocking instruction, thereby improving the security and interactivity of unlocking.

Description

Unlocking method and device
Technical Field
The invention relates to the technical field of computers, in particular to an unlocking method and device.
Background
With the development of computer technology, identity authentication accompanies the operation of many applications. As a common means of identification, face authentication is widely applied in fields such as account security and financial payment, for example in unlocking technology.
Existing face unlocking methods usually compare the features of the face against pre-stored features and unlock according to the matching result. Interactivity during unlocking is poor, and unlocking can succeed with a photo, so security is low.
Disclosure of Invention
Accordingly, it is necessary to provide an unlocking method and apparatus that improve the security and interactivity of unlocking.
A method of unlocking, the method comprising:
acquiring a random specified action, and prompting a user to complete the specified action;
acquiring a face video sequence, identifying a face image in the face video sequence, and locating a face key point set corresponding to the face image;
tracking the face key point set, judging whether the specified action is completed according to the position information of the face key point set, and if so, acquiring the position information of a first face key point and determining the corresponding virtual mapping icon position information according to it;
displaying the virtual mapping icon according to the virtual mapping icon position information, judging whether the virtual mapping icon moves to a specified position on the screen, and if so, running an unlocking instruction.
An apparatus for unlocking, the apparatus comprising:
the prompting module is used for acquiring a random specified action and prompting a user to complete the specified action;
the positioning module is used for acquiring a face video sequence, identifying a face image in the face video sequence, and locating a face key point set corresponding to the face image;
the action judging module is used for tracking the face key point set, judging whether the specified action is completed according to the position information of the face key point set, and if so, entering the virtual mapping icon display module;
the virtual mapping icon display module is used for acquiring the position information of the first face key point, determining the corresponding virtual mapping icon position information according to it, and displaying the virtual mapping icon according to the virtual mapping icon position information;
and the first unlocking module is used for judging whether the virtual mapping icon moves to a specified position on the screen, and if so, running an unlocking instruction.
With the above unlocking method and device, a random specified action is acquired and the user is prompted to complete it; a face video sequence is acquired, the face images in it are identified, and the corresponding face key point set is located; the key point set is tracked and its position information is used to judge whether the specified action has been completed; if so, the position information of a first face key point is acquired and the corresponding virtual mapping icon position information is determined from it; the virtual mapping icon is displayed accordingly; and if the icon moves to the specified position on the screen, an unlocking instruction is run. Unlocking thus requires not only completing the specified action but also moving the virtual mapping icon, which follows the first face key point, to the specified position while the action is performed. This increases the interaction between the user and the terminal display during unlocking, makes unlocking more engaging, effectively rejects faces faked with photos or videos, raises the unlocking difficulty, requires multiple conditions, and greatly improves unlocking security.
A method of unlocking, the method comprising:
acquiring a random specified action, and sending information corresponding to the specified action to a terminal so that the terminal prompts a user to complete the specified action;
acquiring a face video sequence, identifying a face image in the face video sequence, and locating a face key point set corresponding to the face image;
tracking the face key point set, judging whether the specified action is completed according to the position information of the face key point set, and if so, acquiring the position information of a first face key point and determining the corresponding virtual mapping icon position information according to it;
and sending the virtual mapping icon position information to the terminal so that the terminal displays the virtual mapping icon according to it, judging whether the virtual mapping icon moves to a specified position on the screen, and if so, running an unlocking instruction.
An apparatus for unlocking, the apparatus comprising:
the action designating module is used for acquiring a random specified action and sending information corresponding to the specified action to the terminal so that the terminal prompts the user to complete the specified action;
the face key point positioning module is used for acquiring a face video sequence, identifying a face image in the face video sequence, and locating a face key point set corresponding to the face image;
the judging module is used for tracking the face key point set, judging whether the specified action is completed according to the position information of the face key point set, and if so, entering the virtual mapping icon module;
the virtual mapping icon module is used for acquiring the position information of the first face key point and determining the corresponding virtual mapping icon position information according to it;
and the second unlocking module is used for sending the virtual mapping icon position information to the terminal so that the terminal displays the virtual mapping icon according to it, judging whether the virtual mapping icon moves to a specified position on the screen, and if so, running an unlocking instruction.
With the above unlocking method and device, a random specified action is acquired and the information corresponding to it is sent to the terminal so that the terminal prompts the user to complete the specified action; a face video sequence is acquired, the face images in it are identified, and the corresponding face key point set is located; the key point set is tracked and its position information is used to judge whether the specified action has been completed; if so, the position information of a first face key point is acquired and the corresponding virtual mapping icon position information is determined from it; the virtual mapping icon position information is sent to the terminal so that the terminal displays the icon accordingly; and if the virtual mapping icon moves to the specified position on the screen, an unlocking instruction is run. Unlocking thus requires not only completing the specified action but also moving the virtual mapping icon corresponding to the first face key point to the designated position while the action is performed. This increases the interaction between the user and the terminal display during unlocking, makes unlocking more engaging, effectively rejects faces faked with photos or videos, raises the unlocking difficulty, requires multiple conditions, and greatly improves unlocking security.
Drawings
FIG. 1 is a diagram of an application environment for a method of unlocking in one embodiment;
FIG. 2 is a diagram illustrating an internal structure of the terminal of FIG. 1 according to one embodiment;
FIG. 3 is a diagram illustrating the internal structure of the server of FIG. 1 in one embodiment;
FIG. 4 is a flow diagram of a method of unlocking in one embodiment;
FIG. 5 is a diagram illustrating various designated actions and corresponding designated locations in one embodiment;
FIG. 6 is a flow diagram that illustrates the determination of whether to perform a specified action, under an embodiment;
FIG. 7 is a flow diagram that illustrates obtaining location information for a moved virtual map icon, under an embodiment;
FIG. 8 is a diagram illustrating the relationship between the displacement difference of the first face key point and the displacement difference of the virtual map icon during an upward head-raising action in accordance with an embodiment;
FIG. 9 is a diagram illustrating the relationship between the displacement difference of the first face key point and the displacement difference of the virtual map icon during a downward head-lowering action according to an embodiment;
FIG. 10 is a schematic interface diagram of a terminal when starting to capture a face video according to an embodiment;
FIG. 11 is a diagram illustrating a terminal interface displaying virtual map icons and specified actions, in one embodiment;
FIG. 12 is a diagram illustrating a terminal interface where a virtual map icon moves along with a human face in one embodiment;
FIG. 13 is a diagram illustrating an interface of a terminal after a virtual map icon has been moved to a specified location on the screen, in accordance with an embodiment;
FIG. 14 is a flow diagram of another method of unlocking in one embodiment;
FIG. 15 is a flowchart of obtaining location information for a moved virtual map icon in another embodiment;
FIG. 16 is a block diagram of the structure of an unlocking device in one embodiment;
FIG. 17 is a block diagram that illustrates the structure of a virtual map icon display module in one embodiment;
FIG. 18 is a block diagram of the structure of an unlocking device in another embodiment;
FIG. 19 is a block diagram of the structure of an unlocking device in yet another embodiment;
FIG. 20 is a block diagram that illustrates the structure of a virtual map icon module in one embodiment;
FIG. 21 is a block diagram of the structure of an unlocking device in a further embodiment.
Detailed Description
FIG. 1 is a diagram of an application environment in which a method for unlocking operates, according to an embodiment. As shown in fig. 1, the application environment includes a terminal 110 and a server 120, wherein the terminal 110 and the server 120 communicate via a network.
The terminal 110 includes a video sequence acquisition device, which may be, but is not limited to, a smart phone, a tablet computer, a notebook computer, a desktop computer, and the like. The terminal 110 may receive location information from the server 120 or transmit a video sequence or the like to the server 120 through the network, and the server 120 may respond to an unlocking request or the like transmitted by the terminal 110.
In one embodiment, the internal structure of the terminal 110 in fig. 1 is as shown in fig. 2. The terminal 110 includes a processor, a graphics processing unit, a storage medium, a memory, a network interface, a display screen, and an input device, which are connected through a system bus. The storage medium of the terminal 110 stores an operating system and further includes a first unlocking device, which is used to implement an unlocking method suitable for the terminal. The processor provides the computing and control capabilities that support the operation of the entire terminal 110. The graphics processing unit in the terminal 110 provides at least the drawing capability for the display interface, the memory provides an environment for running the first unlocking device in the storage medium, and the network interface performs network communication with the server 120, such as receiving random action information sent by the server 120. The display screen displays the application interface, for example the virtual mapping icon that moves along with the face action, and the input device includes a camera, which collects the user's video and receives commands or data input by the user. For a terminal 110 with a touch screen, the display screen and the input device may be the touch screen itself.
In one embodiment, the internal structure of the server 120 in fig. 1 is shown in fig. 3. The server 120 includes a processor, a storage medium, a memory, and a network interface connected by a system bus. The storage medium of the server 120 stores an operating system, a database and a second unlocking device; the database stores data such as the user's face data, and the second unlocking device implements an unlocking method suitable for the server 120. The processor of the server 120 provides the computing and control capabilities that support the operation of the entire server 120. The memory of the server 120 provides an environment for running the second unlocking device in the storage medium. The network interface of the server 120 communicates with the external terminal 110 via a network connection, for example to transmit random action information to the terminal 110.
In one embodiment, as shown in fig. 4, there is provided an unlocking method, which is exemplified by being applied to a terminal in the application environment, and includes the following steps:
step S210, acquiring a random specified action, and prompting a user to complete the specified action.
Specifically, when the terminal needs to open or enter a file, open an application, or release the lock-screen mode, it may send an unlock request to the server and obtain a random specified action from the server; alternatively, it may randomly draw the specified action from an action set pre-stored on the terminal and generate the corresponding prompt information. The specified action can be one or a combination of shaking the head left, shaking the head right, blinking, opening the mouth, raising the head up and lowering the head down, and there can be one or more specified actions. The prompt may be displayed as text or as an example video, and the specified action may include a required completion time and completion order.
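The random draw from a pre-stored action set described above can be sketched as follows. This is a minimal illustration only; the action names, the contents of the set, and the prompt wording are assumptions, not taken from the patent:

```python
import random

# Hypothetical pre-stored action set; names are illustrative.
ACTIONS = ["shake_head_left", "shake_head_right", "blink", "open_mouth",
           "raise_head_up", "lower_head_down"]

def pick_specified_actions(count=1, seed=None):
    """Randomly draw one or more specified actions, as step S210 describes."""
    rng = random.Random(seed)
    return rng.sample(ACTIONS, count)

def prompt_text(actions):
    """Generate the on-screen prompt for the drawn actions, in completion order."""
    return "Please complete in order: " + ", ".join(a.replace("_", " ") for a in actions)
```

Passing a `seed` makes the draw reproducible for testing; in production the draw would be unseeded so a recorded video cannot anticipate the action.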
Step S220, a face video sequence is obtained, face images in the face video sequence are identified, and a face key point set corresponding to the face images is positioned.
Specifically, a camera collects the user's face video sequence, and each frame of face image in the sequence is identified; the face recognition algorithm can be chosen as needed. When collecting the face video sequence, the front camera can be opened automatically, and the collected face images are displayed in real time in a preset display frame on the terminal. Prompt information can be shown near the preset display frame, including the specified action information, prompts on performing the action correctly, and the like. The preset designated position is also displayed inside or on the edge of the preset display frame; unlocking succeeds only when the virtual mapping icon moves to this designated position. The designated position can be displayed as a preset icon, such as a lock-shaped icon, and a preset color and animation effect can be applied to make it eye-catching.
The face key points can be accurately located through a key point positioning algorithm, which supports a certain degree of occlusion and positioning at multiple angles. The face key points include points on the eyebrows, eyes, nose, mouth, face contour, chin, forehead and other parts; each part corresponds to one type of face key point, and the face key point set includes at least one type.
Step S230, tracking the face key point set, judging whether the specified action is finished according to the position information of the face key point set, and if so, entering step S240.
Specifically, whether the specified action is completed can be judged directly from positional relationships; for mouth-opening detection, for example, the positional relationship of the mouth key points directly shows whether the mouth is open. Alternatively, a pre-established three-dimensional face model can be obtained, the position information of the face key point set at different times substituted into it, and the corresponding face angle calculated. Image features can also be extracted at the positions of the face key point set and the completion of the specified action determined from their change; for blink detection, for example, the change of the image features around the eye-region key points indicates whether a blink occurred. A corresponding action trend can further be obtained from the change in distance between a first face key point and a second face key point. For an upward head-raising action, the projected distance between the nose and the mouth gradually decreases as the head tilts up, whereas if a photo is translated upward in parallel this distance stays constant, so a real person can be distinguished from a photo. If a photo is tilted at an angle to simulate the head-raising action, the face in the flat photo deforms as the angle changes, so the key points cannot be located and the specified action cannot be completed. The position change of the face key point set therefore makes it easy to judge whether the specified action is completed and to exclude photos, which cannot complete it. If the specified action is not completed, no processing is performed and unlocking is impossible.
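The nose-mouth distance test described above, which distinguishes a genuine head-raise from a photo translated upward, can be sketched as follows. This is a heuristic illustration; the per-frame keypoint format and the shrink threshold are assumptions:

```python
import math

def head_up_completed(frames, min_shrink=0.8):
    """Heuristic liveness test: for a real upward head tilt the projected
    nose-mouth distance shrinks, while a photo moved upward in parallel
    keeps it constant. frames: list of dicts with 'nose' and 'mouth'
    (x, y) keypoints. min_shrink is an assumed threshold."""
    def dist(frame):
        (nx, ny), (mx, my) = frame["nose"], frame["mouth"]
        return math.hypot(nx - mx, ny - my)
    first, last = dist(frames[0]), dist(frames[-1])
    # The distance must shrink noticeably for the action to count as done.
    return last < first * min_shrink
```

A translated photo yields a constant distance and fails the check, matching the real-person-versus-picture distinction drawn in the text.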
Step S240, obtaining the position information of the first face key point, and determining the position information of the corresponding virtual mapping icon according to the position information of the first face key point.
Specifically, when the face key point set includes multiple types of face key points, the first face key point is one of those types. The virtual mapping icon is a virtual mapping of the first face key point, and its position follows the movement of the first face key point. When the first face key point is located for the first time, a corresponding initial virtual mapping icon is displayed on the screen; the display position can be a customized preset position, such as the center of the screen, or the icon can be displayed according to the current position information of the first face key point and a preset position mapping relationship. The position mapping relationship can be customized as needed, for example a mapping that takes the horizontal and/or vertical coordinate as its variable, or one that takes the displacement difference as its variable. When the first face key point moves as the face completes the specified action, its position information changes, and the position information of the moved virtual mapping icon can be calculated from the preset position mapping relationship, for example as a new coordinate value. The calculation can also use the change in position rather than absolute coordinates: the displacement difference of the virtual mapping icon is computed from the coordinate change of the first face key point, i.e. its displacement difference, according to a preset proportion, and the post-movement position is obtained from the icon's displacement difference and its position before the movement.
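One possible preset position mapping relationship taking the coordinates as variables can be sketched as a linear camera-frame-to-screen mapping. The linear form and the parameter names are assumptions; the patent leaves the mapping customizable:

```python
def icon_position_from_keypoint(keypoint, frame_size, screen_size):
    """Map the first face key point's camera-frame coordinates linearly
    to screen coordinates, so the icon mirrors the keypoint's absolute
    position. keypoint, frame_size, screen_size are (x, y) pairs."""
    fx, fy = frame_size
    sx, sy = screen_size
    return (keypoint[0] / fx * sx, keypoint[1] / fy * sy)
```

A keypoint at the center of the camera frame then places the icon at the center of the screen, which matches the "center of the screen" default mentioned above.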
In one embodiment, for a specified action in the vertical direction, the designated position is directly above or below the initial position of the virtual mapping icon; only the vertical-coordinate component is used when determining the virtual mapping icon position information corresponding to the first face key point, and the horizontal component is ignored, so that incidental horizontal head movement while completing the action does not make the icon drift away from the designated position. In one embodiment, for a specified action in the left-right direction, the designated position is directly to the left or right of the icon's initial position; only the horizontal-coordinate component is used and the vertical component is ignored, so that incidental vertical head movement does not make the icon drift away from the designated position.
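The axis-filtering behavior of these two embodiments can be sketched as follows; the axis labels are illustrative:

```python
def filter_displacement(dx, dy, action_axis):
    """Keep only the displacement component relevant to the action's axis,
    discarding stray motion on the other axis as the embodiments describe."""
    if action_axis == "vertical":
        return (0.0, dy)   # up/down action: ignore horizontal drift
    if action_axis == "horizontal":
        return (dx, 0.0)   # left/right action: ignore vertical drift
    return (dx, dy)
```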
In one embodiment, the first facial keypoint is the tip of the nose.
Specifically, the nose tip lies at the relative center of the face and protrudes farthest from the facial plane. Using it as the first key point makes it convenient to control the movement of the virtual mapping icon from the movement of the nose tip, with a large range of motion.
And step S250, displaying the virtual mapping icon according to the position information of the virtual mapping icon, judging whether the virtual mapping icon moves to a specified position in the screen, and if so, executing an unlocking instruction.
Specifically, the position information of the virtual mapping icon changes with the position information of the first face key point, so the displayed icon moves on the terminal screen in the same direction as the first face key point. When the virtual mapping icon has moved to the specified position on the screen, the unlocking instruction is executed; otherwise it is not. The virtual mapping icon must be moved to the designated position while the specified action is being completed, and the designated position can be determined randomly, so a pre-recorded video can hardly satisfy the unlocking conditions. This increases the interaction between the user and the terminal display during unlocking, makes unlocking more engaging, raises the unlocking difficulty, requires multiple conditions, and greatly improves unlocking security. Fig. 5 is a schematic diagram of different specified actions and their corresponding designated positions in one embodiment: in fig. 5a, a first designated position 310 corresponds to lowering the head; in fig. 5b, a second designated position 320 corresponds to raising the head; in fig. 5c, a third designated position 330 corresponds to a rightward head shake; and in fig. 5d, a fourth designated position 340 corresponds to a leftward head shake. In each case the virtual mapping icon must be moved to the corresponding designated position while completing the specified action for unlocking to occur.
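The two unlock conditions of step S250, namely that the specified action is completed and that the icon has reached the designated position, can be sketched as follows. The pixel tolerance is an assumption; the patent does not specify one:

```python
import math

def reached_designated_position(icon_pos, target_pos, tolerance=12.0):
    """The icon counts as 'moved to the designated position' when it lies
    within a small tolerance of the target (tolerance is an assumed value)."""
    return math.hypot(icon_pos[0] - target_pos[0],
                      icon_pos[1] - target_pos[1]) <= tolerance

def try_unlock(action_done, icon_pos, target_pos):
    """Both conditions must hold before the unlocking instruction runs."""
    return action_done and reached_designated_position(icon_pos, target_pos)
```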
It can be understood that when there are a plurality of designated actions, different designated actions may correspond to different designated positions, and each designated action needs to be completed in sequence, and the virtual mapping icon is moved to the corresponding designated position in the process of completing the designated action, so that the unlocking instruction can be executed.
In this embodiment, a random specified action is acquired and the user is prompted to complete it; a face video sequence is acquired, the face images in it are identified, and the corresponding face key point set is located; the key point set is tracked and its position information is used to judge whether the specified action has been completed; if so, the position information of the first face key point is acquired, the corresponding virtual mapping icon position information is determined from it, and the icon is displayed accordingly; and if the icon moves to the specified position on the screen, an unlocking instruction is run. Unlocking requires not only completing the specified action but also moving the virtual mapping icon, which follows the first face key point, to the specified position while the action is performed. This increases the interaction between the user and the terminal display during unlocking, makes unlocking more engaging, effectively rejects faces faked with photos or videos, raises the unlocking difficulty, requires multiple conditions, and greatly improves unlocking security.
In one embodiment, the specified action is a nod or a head shake. As shown in fig. 6, step S230 includes:
step S231, acquiring a three-dimensional face model, and calculating a corresponding face angle according to the position information of the face key point set and the three-dimensional face model.
Specifically, a three-dimensional face model can be established in advance: when establishing it, all types of face key points are detected through an N-point perspective method, the three-dimensional face model is reconstructed according to those key points using images with overlapping angle information, and a realistic model is formed by learning real three-dimensional facial expressions and movements through a deep learning algorithm. The three-dimensional face model includes features in the depth direction, so a planar face, such as a photo, can be identified. Substituting the key point positions into the three-dimensional face model yields the corresponding face angle.
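As a toy stand-in for the model-based angle computation (which the patent performs with a learned three-dimensional face model), the following sketch approximates yaw from how far the nose tip deviates from the midpoint between the eyes. The formula and the landmark choice are illustrative assumptions, not the patent's method:

```python
import math

def estimate_yaw(left_eye, right_eye, nose):
    """Crude yaw proxy: the nose's horizontal offset from the eye midpoint,
    normalised by the inter-eye distance, grows as the head turns.
    Assumes a roughly upright face with non-zero eye separation."""
    mid_x = (left_eye[0] + right_eye[0]) / 2.0
    eye_span = right_eye[0] - left_eye[0]
    offset = (nose[0] - mid_x) / eye_span
    return math.degrees(math.atan(2.0 * offset))  # 0 degrees for a frontal face
```

A production system would instead fit the detected key points to the reconstructed three-dimensional model to recover the full head pose.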
And step S232, judging whether the specified action is finished according to the face angle.
Specifically, for a nodding or head-shaking motion, the face angle changes with a characteristic trend, and the change is gradual and continuous. If there is a sudden change, or the change trend does not match the expected characteristic, it is determined that the designated action has not been completed. Likewise, if the face angle change is not in a single direction — for example, increases and decreases coexist — a designated leftward head shake is not detected as completed.
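As an illustration only, the trend check described above can be sketched in Python. The per-frame angle series would come from the three-dimensional face model of step S231; the function name and both threshold values are assumptions for the sketch, not values from the patent, and the check models a single one-way sweep such as the rising segment of a head raise:

```python
def is_action_completed(angles, min_sweep=15.0, max_jump=8.0):
    """Judge a nod/shake from a per-frame face-angle series (degrees).

    A completed action shows a gradual, consistent sweep; a sudden
    frame-to-frame jump or a mixed (up-and-down) trend is rejected,
    mirroring the trend check described in the text.
    """
    if len(angles) < 2:
        return False
    deltas = [b - a for a, b in zip(angles, angles[1:])]
    # Reject abrupt changes, e.g. a photo being swapped in front of the camera.
    if any(abs(d) > max_jump for d in deltas):
        return False
    # Require a single, consistent direction of change.
    signs = {1 if d > 0 else -1 for d in deltas if d != 0}
    if len(signs) != 1:
        return False
    # Require the total sweep to be large enough to count as the action.
    return abs(angles[-1] - angles[0]) >= min_sweep
```

A gradual 18-degree sweep passes, while a 16-degree jump between two frames or a back-and-forth series does not.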
In one embodiment, as shown in fig. 7, step S240 includes:
in step S241, the current first position information of the virtual mapping icon is obtained.
Specifically, the virtual mapping icon is currently displayed at a first position on the terminal screen, and the first position information may take the form of coordinates, which may include an abscissa and an ordinate. In one embodiment, the corresponding first position information is determined according to the direction of the designated action: if the direction is up-down, only the ordinate needs to be acquired; if the direction is left-right, only the abscissa needs to be acquired. Alternatively, the first position information may be acquired according to the relationship between the initial virtual mapping icon and the designated position: if the line segment whose two endpoints are the initial virtual mapping icon and the designated position runs in the ordinate direction, the ordinate is acquired; if it runs in the abscissa direction, the abscissa is acquired.
Step S242, obtaining a displacement difference of the first face key point, and determining a displacement difference of the virtual mapping icon according to the displacement difference.
Specifically, the displacement difference is the distance between the positions of the first face key point before and after movement, and can be represented by a vector; it therefore carries a direction, or a positive/negative sign. In one embodiment, the displacement difference of the first face key point is determined according to the direction of the designated action: if the direction is up-down, only the displacement difference in the ordinate direction needs to be obtained; if the direction is left-right, only the displacement difference in the abscissa direction. This ensures that the calculated displacement difference of the virtual mapping icon lies along one direction only, so the icon moves only up and down, or only left and right, without deviating. A mapping relationship between the displacement difference of the first face key point and that of the virtual mapping icon is then obtained — for example, the displacement difference of the virtual mapping icon is 3 times that of the first face key point — and the displacement difference of the virtual mapping icon is calculated from this mapping relationship.
In step S243, second position information of the moved virtual mapping icon is obtained according to the displacement difference of the virtual mapping icon and the first position information.
Specifically, a positive movement direction may be specified: if the displacement difference is positive, the icon moves in the positive direction; if negative, in the negative direction. The displacement difference may be a distance difference along a single direction, such as the abscissa or ordinate direction. When the displacement difference includes both abscissa and ordinate components, the movement along each direction is calculated separately to obtain the final second position of the moved virtual mapping icon. In one embodiment, the nose tip is taken as the first face key point and the virtual mapping icon follows it. As shown in fig. 8, during a head-raising action the nose tip moves up by a distance h1 and the virtual mapping icon moves up by a distance h2, where h2 and h1 follow the mapping relationship between the displacement difference of the first face key point and that of the virtual mapping icon. As shown in fig. 9, during a head-lowering action the nose tip moves down by a distance h3 and the virtual mapping icon moves down by a distance h4, where h3 and h4 follow the same mapping relationship. In this embodiment, the position information of the moved virtual mapping icon can be obtained quickly and conveniently through the displacement difference.
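The displacement-difference mapping of steps S241-S243 can be sketched as follows. The 3x scale factor follows the example given in the text, while the function names, the tuple-based `(x, y)` coordinates, and the `direction` labels are illustrative assumptions:

```python
def icon_displacement(kp_prev, kp_curr, direction="vertical", scale=3.0):
    """Map the first-face-key-point displacement to the icon's displacement.

    Only the axis matching the designated action's direction is used, so
    sideways jitter of the head does not move the icon off its track.
    """
    dx = kp_curr[0] - kp_prev[0]
    dy = kp_curr[1] - kp_prev[1]
    if direction == "vertical":        # nod / head raise: ordinate only
        return (0.0, dy * scale)
    return (dx * scale, 0.0)           # left/right shake: abscissa only


def move_icon(icon_pos, kp_prev, kp_curr, direction="vertical", scale=3.0):
    """Second position = first position + scaled displacement difference."""
    ddx, ddy = icon_displacement(kp_prev, kp_curr, direction, scale)
    return (icon_pos[0] + ddx, icon_pos[1] + ddy)
```

For example, a nose tip moving 10 units down shifts the icon 30 units down from its first position, per the assumed 3x mapping.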
In one embodiment, before the step of running the unlocking instruction, the method further includes: matching the face image in the face video sequence with pre-stored face data, generating corresponding identity verification information according to the matching result, and running the unlocking instruction according to the identity verification information.
Specifically, the identity may be verified when the virtual mapping icon has moved to the designated position on the screen, or in advance as needed. The pre-stored face data is acquired beforehand and may be the acquired face images themselves or feature data extracted from them; it may be stored directly on the terminal or uploaded to the server for storage. Corresponding identity verification information is generated according to the matching result: if the matching degree exceeds a preset threshold, identity verification passes and unlocking is permitted; otherwise verification fails and unlocking is not permitted. Matching the face image in the face video sequence against the pre-stored face data verifies the identity accurately and further ensures the security of the terminal's unlocking operation. In one embodiment, the terminal sends an identity verification request containing the face video sequence to the server; the server matches a face image in the sequence against the pre-stored face data, generates corresponding identity verification information according to the matching result, and returns it to the terminal, which runs the unlocking instruction according to that information.
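A minimal sketch of the matching step, assuming the pre-stored face data takes the form of a feature vector and that cosine similarity with a 0.8 threshold stands in for whatever matching algorithm and preset threshold an implementation would actually use:

```python
import math


def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0


def verify_identity(probe_features, stored_features, threshold=0.8):
    """Pass verification when the matching degree exceeds the preset threshold.

    The vectors and threshold are illustrative; the patent only requires
    that the matching degree exceed a preset value.
    """
    return cosine_similarity(probe_features, stored_features) >= threshold
```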
In a specific embodiment, the designated action is one of raising the head upward, nodding the head downward, shaking the head leftward, and shaking the head rightward; the corresponding designated positions are directly above, directly below, directly to the left of, and directly to the right of the edge of the display frame, respectively. The specific unlocking process is as follows:
1. The user's face data is collected through the camera and stored on the server, and the face-lock permission of the terminal is enabled.
2. When the terminal needs to open or enter a file, open an application, release a screen-lock mode, or the like, it sends an unlocking request to the server, and the server issues a random designated action, such as 'head down'. The interface shown in fig. 10 is displayed and the terminal's front camera is opened at the same time. Prompt information 410 is generated according to the user's current face position, reminding the user to adjust the position between the device and the face. When the system identifies the first key point, the nose tip, in the user's face, a virtual mapping icon — a sphere 420 — appears on the screen below the display frame 430, as shown in fig. 11, and the prompt information 410 changes to 'head down'.
3. As the user nods, the position of the nose tip moves, and the position information of the sphere after movement is calculated from the position information of the nose tip. The sphere is therefore displayed moving up and down along the Y axis with the user's action, as shown in fig. 12, where the sphere in fig. 12a has moved downward and the sphere in fig. 12b has moved upward. When the sphere moves to the designated position 440, as shown in fig. 13, an identity verification request containing the face video sequence collected in real time is sent to the server. The server matches a face image in the face video sequence against the pre-stored face data to obtain identity verification information and sends it to the terminal. If the identity verification information received by the terminal indicates that matching succeeded, the unlocking instruction is run; if matching failed, unlocking cannot be performed.
As shown in fig. 14, in one embodiment there is provided an unlocking method, illustrated with the server in the application environment as an example, comprising the following steps:
step S510, acquiring a random designated action, and sending information corresponding to the designated action to the terminal, so that the terminal prompts the user to complete the designated action.
Specifically, when the terminal needs to open or enter a file, open an application, release a screen-lock mode, or the like, it sends an unlocking request to the server, and the server obtains a random designated action according to the received unlocking request. The terminal may also obtain random designated-action information from the server in advance and directly use the pre-stored information to generate the prompt when unlocking is required. The designated action can be one or a combination of shaking the head left, shaking the head right, blinking, opening the mouth, raising the head up, and nodding the head down, and there may be one or more designated actions. The server sends information corresponding to the designated action to the terminal so that the terminal displays prompt information, which can take the form of text or a demonstration video; the designated action can also include a specified completion time and completion order.
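The server's random choice of one or more designated actions might be sketched as follows; the action names mirror the list above, while the function name and the use of `random.sample` are illustrative assumptions:

```python
import random

# Candidate actions, per the list in the text.
ACTIONS = ["shake head left", "shake head right", "blink",
           "open mouth", "raise head up", "nod head down"]


def pick_designated_actions(count=1, rng=random):
    """Pick `count` distinct random designated actions for one unlock attempt."""
    return rng.sample(ACTIONS, count)
```

The randomness is what prevents a single pre-recorded video from matching the prompted action every time.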
Step S520, a face video sequence is obtained, a face image in the face video sequence is identified, and a face key point set corresponding to the face image is positioned.
Specifically, the user's face video sequence, collected through the camera and uploaded by the terminal, is obtained, and each frame of face image in the sequence is identified; the face identification algorithm can be defined as needed. While collecting the face video sequence, the terminal can automatically open the front camera and display the collected face image in real time in a preset display frame on the terminal. Prompt information can be shown near the preset display frame, including designated-action information, action-standardization prompts, and the like. The preset designated position is also displayed inside or on the edge of the preset display frame, and unlocking succeeds only when the virtual mapping icon moves to this position. The preset designated position can be displayed as a preset icon, such as a lock-shaped icon, and a preset color and animation effect can be set to make it eye-catching.
The face key points can be accurately located through a key point positioning algorithm, which supports a certain degree of occlusion and multi-angle positioning. The face key points include points of the eyebrows, eyes, nose, mouth, face contour, chin, forehead, and other parts; each part corresponds to one type of face key point, and the face key point set includes at least one type of face key point.
Step S530, tracking the face key point set, judging whether the specified action is finished according to the position information of the face key point set, and if so, entering step S540.
Specifically, whether the designated action is completed can be judged directly from positional relationships; for mouth-opening detection, for example, whether the mouth is open can be judged directly from the positional relationship of the mouth key points. Alternatively, a pre-established three-dimensional face model can be obtained, the position information of the face key point set at different times substituted into it, and the corresponding face angle calculated. Image features can also be extracted at the positions of the face key point set and completion determined from their change — for example, blink detection examines the change of image features around the eye-region key points. A corresponding action trend can further be obtained from the change in distance between a first face key point and a second face key point. For an upward head-raising action, the distance between the nose and the mouth gradually decreases as the head tilts back, whereas if a photo is translated upward in parallel, the nose-mouth distance stays constant, so a real person can be distinguished from a photo. If a photo is tilted at an angle to simulate head raising, the face in it deforms because the photo is planar, so the key points cannot be located and the designated action cannot be completed. The position change of the face key point set thus conveniently determines whether the designated action is completed and excludes photos, which cannot complete it. If the designated action is not completed, no processing is performed and unlocking cannot proceed.
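The photo check based on the nose-mouth distance can be illustrated as below. The coordinate tracks, function names, and the relative-variation threshold are assumptions for the sake of the sketch: a translated photo keeps the on-screen nose-mouth distance constant, while a real head raise changes it:

```python
def distance(p, q):
    """Euclidean distance between two (x, y) points."""
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5


def looks_like_photo(nose_track, mouth_track, min_change=0.05):
    """Flag a flat photo from per-frame nose and mouth key-point positions.

    When a picture is moved in parallel the nose-mouth distance never
    varies; a real head motion changes it, so too little relative
    variation suggests a photo.
    """
    dists = [distance(n, m) for n, m in zip(nose_track, mouth_track)]
    base = dists[0]
    if base == 0:
        return True
    variation = (max(dists) - min(dists)) / base
    return variation < min_change
```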
And step S540, acquiring the position information of the first face key point, and determining the corresponding position information of the virtual mapping icon according to the position information of the first face key point.
Specifically, when the face key point set includes several types of face key points, the first face key point is one of them. The virtual mapping icon is a virtual mapping of the first face key point; its position follows the movement of the first face key point's position. When the first face key point is located for the first time, a corresponding initial virtual mapping icon is displayed on the screen. Its position can be a customized preset position, such as the center of the screen, or the icon can be displayed according to the current position information of the first face key point and a preset position mapping relationship. The position mapping relationship can be customized as needed, for example a mapping with the abscissa and/or ordinate as the variable, or with the displacement difference as the variable. As the first face key point moves while the face completes the designated action, its position information (coordinates) changes, and the position information of the moved virtual mapping icon can be calculated from the preset position mapping relationship, for example by computing a new coordinate value. The position information of the virtual mapping icon can also be calculated from the change in position rather than from absolute coordinates: the change in the first face key point's coordinates, i.e. its displacement difference, is scaled by a preset ratio to give the displacement difference of the virtual mapping icon, and the post-movement position information is then obtained from this displacement difference and the pre-movement position information.
In one embodiment, for a designated action in the vertical direction, the designated position is directly above or below the initial position of the virtual mapping icon. Only the ordinate component is used when determining the position information of the virtual mapping icon corresponding to the first face key point, and the abscissa component is ignored, which prevents horizontal head movement during the action from making the icon drift horizontally away from the designated position. In one embodiment, for a designated action in the left-right direction, the designated position is directly to the left or right of the initial position of the virtual mapping icon; only the abscissa component is used and the ordinate component is ignored, which prevents vertical head movement during the action from making the icon drift vertically away from the designated position.
In one embodiment, the first facial keypoint is the tip of the nose.
Specifically, the nose tip lies at the approximate center of the face and stands out spatially from the facial plane. Taking the nose tip as the first key point makes it convenient to control the movement of the virtual mapping icon by the movement of the nose tip, and gives the icon a large range of movement.
Step S550, sending the position information of the virtual mapping icon to the terminal so that the terminal displays the virtual mapping icon according to that position information, judging whether the virtual mapping icon has moved to a designated position on the screen, and if so, running an unlocking instruction.
Specifically, the position information of the virtual mapping icon changes correspondingly with the position information of the first face key point, so the displayed icon moves on the terminal screen following the movement direction of the first face key point. When the virtual mapping icon has moved to the designated position on the screen, the unlocking instruction is run; if it has not, the unlocking instruction is not run. The virtual mapping icon must be moved to the designated position while the designated action is being completed, and the designated position can be determined randomly, so a pre-recorded video can hardly satisfy the unlocking condition. This increases the interaction between the user and the terminal display during unlocking, adds interest, and raises the unlocking difficulty; because multiple conditions must be met, unlocking security is greatly improved.
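Judging whether the icon has reached the designated position might look like the following sketch; the tolerance radius is an illustrative assumption, since the patent does not specify how close the icon must come:

```python
def reached_designated_position(icon_pos, target_pos, tolerance=10.0):
    """True when the icon is within `tolerance` units of the target.

    A tolerance radius avoids demanding pixel-perfect overlap between
    the moving icon and the designated position.
    """
    dx = icon_pos[0] - target_pos[0]
    dy = icon_pos[1] - target_pos[1]
    return (dx * dx + dy * dy) ** 0.5 <= tolerance
```

This predicate would gate the unlocking instruction: it is checked each time the icon's position information is updated.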
In this embodiment, a random designated action is acquired and information corresponding to it is sent to the terminal so that the terminal prompts the user to complete the designated action. A face video sequence is acquired, a face image in the sequence is identified, and the corresponding face key point set is located and tracked; whether the designated action is completed is judged according to the position information of the face key point set. If so, the position information of a first face key point is acquired, the position information of the corresponding virtual mapping icon is determined from it, and the icon's position information is sent to the terminal so that the terminal displays the icon accordingly. Whether the virtual mapping icon has moved to a designated position on the screen is then judged, and if so, an unlocking instruction is run. Unlocking thus requires both completing the designated action and moving the virtual mapping icon corresponding to the first face key point to the designated position while the action is performed. This increases the interaction between the user and the terminal display during unlocking, adds interest, effectively rejects faces faked with photos and videos, and raises the unlocking difficulty; because multiple conditions must be met, unlocking security is greatly improved.
In one embodiment, as shown in fig. 15, step S540 includes:
in step S541, the current first position information of the virtual mapping icon is obtained.
Specifically, the virtual mapping icon is currently displayed at a first position on the terminal screen, and the first position information may take the form of coordinates, which may include an abscissa and an ordinate. In one embodiment, the corresponding first position information is determined according to the direction of the designated action: if the direction is up-down, only the ordinate needs to be acquired; if the direction is left-right, only the abscissa needs to be acquired. Alternatively, the first position information may be acquired according to the relationship between the initial virtual mapping icon and the designated position: if the line segment whose two endpoints are the initial virtual mapping icon and the designated position runs in the ordinate direction, the ordinate is acquired; if it runs in the abscissa direction, the abscissa is acquired.
Step S542, obtaining the displacement difference of the first face key point, and determining the displacement difference of the virtual mapping icon according to it.
Specifically, the displacement difference is the distance between the positions of the first face key point before and after movement, and can be represented by a vector; it therefore carries a direction, or a positive/negative sign. In one embodiment, the displacement difference of the first face key point is determined according to the direction of the designated action: if the direction is up-down, only the displacement difference in the ordinate direction needs to be obtained; if the direction is left-right, only the displacement difference in the abscissa direction. This ensures that the calculated displacement difference of the virtual mapping icon lies along one direction only, so the icon moves only up and down, or only left and right, without deviating. A mapping relationship between the displacement difference of the first face key point and that of the virtual mapping icon is then obtained — for example, the displacement difference of the virtual mapping icon is 3 times that of the first face key point — and the displacement difference of the virtual mapping icon is calculated from this mapping relationship.
Step S543, obtaining second position information of the moved virtual mapping icon according to the displacement difference of the virtual mapping icon and the first position information.
Specifically, a positive movement direction may be specified: if the displacement difference is positive, the icon moves in the positive direction; if negative, in the negative direction. The displacement difference may be a distance difference along a single direction, such as the abscissa or ordinate direction. When the displacement difference includes both abscissa and ordinate components, the movement along each direction is calculated separately to obtain the final second position of the moved virtual mapping icon. In one embodiment, the nose tip is used as the first face key point, and the virtual mapping icon moves with the nose tip.
In one embodiment, before the step of executing the unlocking instruction, the method further includes: and receiving an authentication request sent by the terminal, matching the face image in the face video sequence with pre-stored face data, and generating corresponding authentication information according to a matching result so that the terminal operates an unlocking instruction according to the authentication information.
Specifically, the terminal may send the identity verification request when the virtual mapping icon moves to the designated position on the screen, or in advance as needed. The identity verification request can include a terminal identifier or a user identifier. The pre-stored face data is collected and uploaded by the terminal in advance and may be the acquired face images themselves or feature data extracted from them; it can be stored in correspondence with the terminal or user identifier so that the matching face data can be retrieved by that identifier. Corresponding identity verification information is generated according to the matching result: if the matching degree exceeds a preset threshold, identity verification passes and unlocking is permitted; otherwise verification fails and unlocking is not permitted. Matching the face image in the face video sequence against the pre-stored face data verifies the identity accurately and further ensures the security of the terminal's unlocking operation.
In one embodiment, as shown in fig. 16, there is provided an unlocking apparatus including:
and the prompting module 610 is used for acquiring the random specified action and prompting the user to complete the specified action.
And the positioning module 620 is configured to acquire a face video sequence, identify a face image in the face video sequence, and position a face key point set corresponding to the face image.
And the action judging module 630 is configured to track the face key point set, judge whether to complete a specified action according to the position information of the face key point set, and if so, enter the virtual mapping icon module.
The virtual mapping icon display module 640 is configured to obtain position information of the first face key point, determine corresponding position information of the virtual mapping icon according to the position information of the first face key point, and display the virtual mapping icon according to the position information of the virtual mapping icon.
And the first unlocking module 650 is configured to determine whether the virtual mapping icon is moved to a specified position in the screen, and if so, execute an unlocking instruction.
In one embodiment, the first facial keypoint is the tip of the nose.
In an embodiment, the action is designated as nodding or shaking, and the action determining module 630 is further configured to obtain a three-dimensional face model, calculate a corresponding face angle according to the position information of the face key point set and the three-dimensional face model, and determine whether to complete the designated action according to the face angle.
In one embodiment, as shown in fig. 17, the virtual map icon display module 640 includes:
the first obtaining unit 641 is configured to obtain current first position information of the virtual mapping icon.
The first displacement difference determining unit 642 is configured to obtain a displacement difference of the first face key point, and determine a displacement difference of the virtual mapping icon according to the displacement difference.
The first information determining unit 643 is configured to obtain second position information of the moved virtual mapping icon according to the displacement difference of the virtual mapping icon and the first position information.
In one embodiment, as shown in fig. 18, the apparatus further comprises:
and the identity verification module 660 is configured to match the face image in the face video sequence with pre-stored face data, generate corresponding identity verification information according to a matching result, and execute an unlocking instruction according to the identity verification information.
In one embodiment, as shown in fig. 19, there is provided an unlocking apparatus including:
the action designating module 710 is configured to obtain a random designated action, and send information corresponding to the designated action to the terminal, so that the terminal prompts the user to complete the designated action.
The face key point positioning module 720 is configured to obtain a face video sequence, identify a face image in the face video sequence, and position a face key point set corresponding to the face image.
The judging module 730 is configured to track the face key point set, judge whether to complete a specified action according to the position information of the face key point set, and if so, enter the virtual mapping icon module 740.
The virtual mapping icon module 740 is configured to obtain position information of the first face key point, and determine corresponding position information of the virtual mapping icon according to the position information of the first face key point.
The second unlocking module 750 is configured to send the position information of the virtual mapping icon to the terminal so that the terminal displays the virtual mapping icon according to that position information, judge whether the virtual mapping icon has moved to the designated position on the screen, and if so, run the unlocking instruction.
In one embodiment, the virtual map icon module 740 includes:
a second obtaining unit 741, configured to obtain current first location information of the virtual mapping icon;
a second displacement difference determining unit 742, configured to obtain a displacement difference of the first face key point, and determine a displacement difference of the virtual mapping icon according to the displacement difference;
a second information determining unit 743, configured to obtain second position information of the moved virtual mapping icon according to the displacement difference of the virtual mapping icon and the first position information.
In one embodiment, the apparatus further comprises:
and the identity authentication response module 760 is configured to receive an identity authentication request sent by the terminal, match a face image in the face video sequence with pre-stored face data, and generate corresponding identity authentication information according to a matching result, so that the terminal runs an unlocking instruction according to the identity authentication information.
It will be understood by those skilled in the art that all or part of the processes in the methods of the embodiments described above may be implemented by a computer program instructing the relevant hardware. The program may be stored in a computer-readable storage medium, for example in the storage medium of a computer system, and executed by at least one processor in the computer system to implement the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.
The technical features of the embodiments described above may be combined arbitrarily. For brevity, not all possible combinations of these technical features are described, but any such combination should be considered within the scope of this specification as long as it contains no contradiction.
The embodiments described above express only several implementations of the present invention, and while their description is relatively specific and detailed, it should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and modifications without departing from the inventive concept, all of which fall within the protection scope of the present invention. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (15)

1. A method of unlocking, the method comprising:
acquiring a randomly specified action, and prompting a user to complete the specified action;
acquiring a face video sequence, identifying a face image in the face video sequence, and positioning a face key point set corresponding to the face image;
tracking a face key point set, and judging whether the specified action is finished according to the position information of the face key point set;
in the process of finishing the specified action, acquiring current first position information of a virtual mapping icon, acquiring a displacement difference of a first face key point, determining a displacement difference of the virtual mapping icon according to the displacement difference of the first face key point, and obtaining second position information of the moved virtual mapping icon according to the displacement difference of the virtual mapping icon and the first position information;
displaying the virtual mapping icon according to the position information of the virtual mapping icon, judging whether the virtual mapping icon moves to a specified position on the screen in the process of finishing the specified action, and running an unlocking instruction if the specified action is finished and the virtual mapping icon moves to the specified position on the screen in the process of finishing the specified action, wherein the specified position is related to the direction of the specified action.
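As an illustrative sketch (not part of the claims), the final condition of the method above combines two checks: the specified action is finished, and the icon has reached the specified position derived from the action's direction. The direction-to-position mapping and the pixel tolerance below are assumptions:

```python
def designated_position(action, width, height):
    """Assumed mapping from the direction of the specified action to a
    specified position on a screen of the given width and height."""
    return {"nod": (width // 2, height - 1),        # downward motion -> bottom edge
            "shake_left": (0, height // 2),         # leftward motion -> left edge
            "shake_right": (width - 1, height // 2)}[action]

def should_unlock(action_finished, icon_pos, target, tol=20):
    """Run the unlocking instruction only if the specified action is
    finished AND the icon is within `tol` pixels of the target."""
    dx, dy = icon_pos[0] - target[0], icon_pos[1] - target[1]
    return action_finished and (dx * dx + dy * dy) <= tol * tol
```

Requiring both conditions is what ties the liveness check (the random action) to the on-screen interaction (moving the icon), so a static photograph satisfies neither.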
2. The method of claim 1, wherein the first face key point is the nose tip.
3. The method according to claim 1, wherein the specified action is nodding or shaking the head, and the step of judging whether the specified action is finished according to the position information of the face key point set comprises:
acquiring a three-dimensional face model, and calculating a corresponding face angle according to the position information of the face key point set and the three-dimensional face model;
and judging whether the specified action is finished according to the face angle.
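The face angle in claim 3 is computed from the key point set and a three-dimensional face model; a full implementation would typically solve a 2D-to-3D pose problem (e.g. a PnP solve against the model). The simplified geometric estimate below, which uses only the eye and nose-tip points, is an illustrative assumption rather than the claimed computation:

```python
import math

def estimate_yaw(left_eye, right_eye, nose_tip):
    """Rough yaw estimate: how far the nose tip deviates horizontally
    from the midpoint between the eyes, normalised by eye distance.
    A frontal face yields ~0; turning the head yields a signed angle."""
    mid_x = (left_eye[0] + right_eye[0]) / 2.0
    eye_dist = math.hypot(right_eye[0] - left_eye[0],
                          right_eye[1] - left_eye[1])
    offset = (nose_tip[0] - mid_x) / eye_dist
    return math.asin(max(-1.0, min(1.0, offset)))

def shake_finished(yaw_sequence, threshold=0.3):
    """Treat the shaking action as finished if the yaw angle swings past
    the threshold in both directions over the tracked frame sequence."""
    return (max(yaw_sequence) > threshold and
            min(yaw_sequence) < -threshold)
```

A nod would be judged the same way from a pitch angle (vertical nose-tip deviation) instead of yaw.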
4. The method of claim 1, wherein before the step of running the unlocking instruction, the method further comprises:
and matching the face image in the face video sequence with pre-stored face data, generating corresponding identity authentication information according to the matching result, and running an unlocking instruction according to the identity authentication information.
5. A method of unlocking, the method comprising:
acquiring a randomly specified action, and sending information corresponding to the specified action to a terminal, so that the terminal prompts a user to complete the specified action;
acquiring a face video sequence, identifying a face image in the face video sequence, and positioning a face key point set corresponding to the face image;
tracking a face key point set, and judging whether the specified action is finished according to the position information of the face key point set;
in the process of finishing the specified action, acquiring current first position information of a virtual mapping icon, acquiring a displacement difference of a first face key point, determining a displacement difference of the virtual mapping icon according to the displacement difference of the first face key point, and obtaining second position information of the moved virtual mapping icon according to the displacement difference of the virtual mapping icon and the first position information;
and sending the position information of the virtual mapping icon to a terminal, so that the terminal displays the virtual mapping icon according to the position information of the virtual mapping icon, judges whether the virtual mapping icon moves to a specified position on the screen in the process of finishing the specified action, and runs an unlocking instruction if the specified action is finished and the virtual mapping icon moves to the specified position on the screen in the process of finishing the specified action, wherein the specified position is related to the direction of the specified action.
6. The method of claim 5, wherein before the step of running the unlocking instruction, the method further comprises:
receiving an identity authentication request sent by a terminal;
and matching the face image in the face video sequence with pre-stored face data, and generating corresponding identity authentication information according to the matching result, so that the terminal runs an unlocking instruction according to the identity authentication information.
7. An unlocking device, comprising:
the prompting module is used for acquiring a random specified action and prompting a user to complete the specified action;
the positioning module is used for acquiring a face video sequence, identifying a face image in the face video sequence and positioning a face key point set corresponding to the face image;
the action judgment module is used for tracking the face key point set and judging whether the specified action is finished according to the position information of the face key point set;
the virtual mapping icon display module is used for acquiring current first position information of a virtual mapping icon in the process of finishing the specified action, acquiring a displacement difference of a first face key point, determining a displacement difference of the virtual mapping icon according to the displacement difference of the first face key point, and acquiring second position information of the moved virtual mapping icon according to the displacement difference of the virtual mapping icon and the first position information;
the first unlocking module is used for judging whether the virtual mapping icon moves to a specified position on the screen in the process of finishing the specified action, and running an unlocking instruction if the specified action is finished and the virtual mapping icon moves to the specified position on the screen in the process of finishing the specified action, wherein the specified position is related to the direction of the specified action.
8. The apparatus of claim 7, wherein the first face key point is the nose tip.
9. The apparatus according to claim 7, wherein the specified action is nodding or shaking the head, and the action judgment module is further used for acquiring a three-dimensional face model, calculating a corresponding face angle according to the position information of the face key point set and the three-dimensional face model, and judging whether the specified action is finished according to the face angle.
10. The apparatus of claim 7, further comprising:
and the identity verification module is used for matching the face image in the face video sequence with pre-stored face data, generating corresponding identity verification information according to the matching result, and running an unlocking instruction according to the identity verification information.
11. An unlocking device, comprising:
the action designating module is used for acquiring a randomly specified action and sending information corresponding to the specified action to the terminal, so that the terminal prompts the user to complete the specified action;
the face key point positioning module is used for acquiring a face video sequence, identifying a face image in the face video sequence and positioning a face key point set corresponding to the face image;
the judging module is used for tracking the face key point set and judging whether the specified action is finished according to the position information of the face key point set;
the virtual mapping icon module is used for acquiring current first position information of a virtual mapping icon in the process of finishing the specified action, acquiring a displacement difference of a first face key point, determining a displacement difference of the virtual mapping icon according to the displacement difference of the first face key point, and acquiring second position information of the moved virtual mapping icon according to the displacement difference of the virtual mapping icon and the first position information;
and the second unlocking module is used for sending the position information of the virtual mapping icon to a terminal, so that the terminal displays the virtual mapping icon according to the position information of the virtual mapping icon, judges whether the virtual mapping icon moves to a specified position on the screen in the process of finishing the specified action, and runs an unlocking instruction if the specified action is finished and the virtual mapping icon moves to the specified position on the screen in the process of finishing the specified action, wherein the specified position is related to the direction of the specified action.
12. The apparatus of claim 11, further comprising:
and the identity verification response module is used for receiving an identity verification request sent by the terminal, matching the face image in the face video sequence with pre-stored face data, and generating corresponding identity verification information according to the matching result, so that the terminal runs an unlocking instruction according to the identity verification information.
13. A terminal, characterized in that it comprises a storage medium and a processor, the storage medium having stored therein a computer program which, when executed by the processor, causes the processor to carry out the steps of the method of unlocking according to any of claims 1 to 4.
14. A server, characterized by comprising a storage medium and a processor, the storage medium having stored therein a computer program which, when executed by the processor, causes the processor to carry out the steps of the method of unlocking according to any of claims 5 to 6.
15. A computer-readable storage medium, characterized in that the storage medium has stored thereon a computer program which, when being executed by a processor, causes the processor to carry out the steps of the method of unlocking according to any one of claims 1 to 6.
CN201610070773.6A 2016-02-01 2016-02-01 Unlocking method and device Active CN105740688B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610070773.6A CN105740688B (en) 2016-02-01 2016-02-01 Unlocking method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610070773.6A CN105740688B (en) 2016-02-01 2016-02-01 Unlocking method and device

Publications (2)

Publication Number Publication Date
CN105740688A CN105740688A (en) 2016-07-06
CN105740688B true CN105740688B (en) 2021-04-09

Family

ID=56242175

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610070773.6A Active CN105740688B (en) 2016-02-01 2016-02-01 Unlocking method and device

Country Status (1)

Country Link
CN (1) CN105740688B (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107358152B (en) * 2017-06-02 2020-09-08 广州视源电子科技股份有限公司 Living body identification method and system
CN107423687B (en) * 2017-06-15 2020-12-29 易联众信息技术股份有限公司 Identity authentication method and device based on face recognition technology
CN107391985B (en) * 2017-06-21 2020-10-09 江苏泮池信息技术有限公司 Decrypted image verification method, terminal and computer readable storage medium
CN107609373A (en) * 2017-09-07 2018-01-19 欧东方 A kind of terminal device and its method for safeguard protection
CN107657428A (en) * 2017-09-30 2018-02-02 四川民工加网络科技有限公司 A kind of multi-stag rural migrant worker recruitment method
CN108090336B (en) * 2017-12-19 2021-06-11 西安易朴通讯技术有限公司 Unlocking method applied to electronic equipment and electronic equipment
CN108509781B (en) * 2018-03-27 2023-04-07 百度在线网络技术(北京)有限公司 Method and device for unlocking
CN108629305B (en) * 2018-04-27 2021-10-22 广州市中启正浩信息科技有限公司 Face recognition method
CN109670287A (en) * 2018-12-21 2019-04-23 努比亚技术有限公司 Intelligent terminal unlocking method, intelligent terminal and computer readable storage medium
CN109819114B (en) * 2019-02-20 2021-11-30 北京市商汤科技开发有限公司 Screen locking processing method and device, electronic equipment and storage medium
CN110555928A (en) * 2019-08-15 2019-12-10 创新奇智(成都)科技有限公司 Intelligent store entrance guard method based on face recognition and settlement method thereof
CN111897435B (en) * 2020-08-06 2022-08-02 陈涛 Man-machine identification method, identification system, MR intelligent glasses and application

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101408800A (en) * 2008-11-14 2009-04-15 东南大学 Method for performing three-dimensional model display control by CCD camera
CN101739719A (en) * 2009-12-24 2010-06-16 四川大学 Three-dimensional gridding method of two-dimensional front view human face image
CN103870843A (en) * 2014-03-21 2014-06-18 杭州电子科技大学 Head posture estimation method based on multi-feature-point set active shape model (ASM)

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4093273B2 (en) * 2006-03-13 2008-06-04 オムロン株式会社 Feature point detection apparatus, feature point detection method, and feature point detection program
CN100492399C (en) * 2007-03-15 2009-05-27 上海交通大学 Method for making human face posture estimation utilizing dimension reduction method
CN101398886B (en) * 2008-03-17 2010-11-10 杭州大清智能技术开发有限公司 Rapid three-dimensional face identification method based on bi-eye passiveness stereo vision
CN101908149A (en) * 2010-07-06 2010-12-08 北京理工大学 Method for identifying facial expressions from human face image sequence
CN102663413B (en) * 2012-03-09 2013-11-27 中盾信安科技(江苏)有限公司 Multi-gesture and cross-age oriented face image authentication method
CN102737235B (en) * 2012-06-28 2014-05-07 中国科学院自动化研究所 Head posture estimation method based on depth information and color image
EP2893479B1 (en) * 2012-09-05 2018-10-24 Sizer Technologies Ltd System and method for deriving accurate body size measures from a sequence of 2d images
CN103778360A (en) * 2012-10-26 2014-05-07 华为技术有限公司 Face unlocking method and device based on motion analysis
CN103295002B (en) * 2013-06-03 2016-08-10 北京工业大学 A kind of full Method of pose-varied face based on the complete affine scale invariant feature of two-value attitude
CN103824089B (en) * 2014-02-17 2017-05-03 北京旷视科技有限公司 Cascade regression-based face 3D pose recognition method
CN104036255B (en) * 2014-06-21 2017-07-07 电子科技大学 A kind of facial expression recognizing method
CN105260726B (en) * 2015-11-11 2018-09-21 杭州海量信息技术有限公司 Interactive video biopsy method and its system based on human face posture control

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101408800A (en) * 2008-11-14 2009-04-15 东南大学 Method for performing three-dimensional model display control by CCD camera
CN101739719A (en) * 2009-12-24 2010-06-16 四川大学 Three-dimensional gridding method of two-dimensional front view human face image
CN103870843A (en) * 2014-03-21 2014-06-18 杭州电子科技大学 Head posture estimation method based on multi-feature-point set active shape model (ASM)

Also Published As

Publication number Publication date
CN105740688A (en) 2016-07-06

Similar Documents

Publication Publication Date Title
CN105740688B (en) Unlocking method and device
US11295474B2 (en) Gaze point determination method and apparatus, electronic device, and computer storage medium
US10242364B2 (en) Image analysis for user authentication
JP6610906B2 (en) Activity detection method and device, and identity authentication method and device
EP3528156B1 (en) Virtual reality environment-based identity authentication method and apparatus
US9607138B1 (en) User authentication and verification through video analysis
WO2017101267A1 (en) Method for identifying living face, terminal, server, and storage medium
EP3868610A1 (en) Driving environment smart adjustment and driver sign-in methods and apparatuses, vehicle, and device
USRE42205E1 (en) Method and system for real-time facial image enhancement
EP3862897B1 (en) Facial recognition for user authentication
KR101242390B1 (en) Method, apparatus and computer-readable recording medium for identifying user
Zhao et al. Mobile user authentication using statistical touch dynamics images
CN109343698A (en) Data processing system, computer implemented method and non-transitory computer-readable medium
WO2016127437A1 (en) Live body face verification method and system, and computer program product
US20150177842A1 (en) 3D Gesture Based User Authorization and Device Control Methods
US20210326428A1 (en) Systems and methods for authenticating users
WO2017000218A1 (en) Living-body detection method and device and computer program product
US10846514B2 (en) Processing images from an electronic mirror
US11281760B2 (en) Method and apparatus for performing user authentication
EP4099198A1 (en) Unlocking method and apparatus based on facial expression, and computer device and storage medium
KR102082418B1 (en) Electronic device and method for controlling the same
US11710353B2 (en) Spoof detection based on challenge response analysis
WO2017000217A1 (en) Living-body detection method and device and computer program product
US10599934B1 (en) Spoof detection using optokinetic response
KR20190095141A (en) Face authentication method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant