CN113051535A - Device unlocking method and apparatus - Google Patents

Device unlocking method and apparatus

Info

Publication number
CN113051535A
CN113051535A
Authority
CN
China
Prior art keywords
password
lip movement
lip
equipment
movement characteristic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911362158.2A
Other languages
Chinese (zh)
Other versions
CN113051535B (en)
Inventor
王晶 (Wang Jing)
白博 (Bai Bo)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN201911362158.2A priority Critical patent/CN113051535B/en
Publication of CN113051535A publication Critical patent/CN113051535A/en
Application granted granted Critical
Publication of CN113051535B publication Critical patent/CN113051535B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31User authentication
    • G06F21/32User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints

Abstract

The present application provides a device unlocking method and apparatus that perform identity authentication using the biometric properties of lip-movement features, thereby unlocking a device in a way that is flexible, resistant to forgetting, resistant to copying, and modifiable. The method includes: acquiring a current lip-movement image of a user; obtaining a first lip-movement feature from the current lip-movement image; and deciding whether to unlock the device according to the degree of match between the first lip-movement feature and a second lip-movement feature, the second lip-movement feature being a pre-stored lip-movement feature used for unlocking the device.

Description

Device unlocking method and apparatus
Technical Field
The present application relates to the field of image recognition, and more particularly, to a device unlocking method and apparatus.
Background
Generally, a terminal device remains in a locked, non-interactive state when no one is interacting with it. When a user needs to operate the terminal device, it must first be unlocked through some form of identity authentication so that it enters an interactive state. Unlocking the terminal device only after the user demonstrates the proper authority protects the information on the device.
As technology develops, the ways of unlocking terminal devices have diversified. Current unlocking modes fall into two main categories: password-based unlocking and biometric unlocking. Password-based unlocking includes numeric-password unlocking, pattern-password unlocking, and the like; such passwords are easily observed by others during entry or cracked by brute force, and complex passwords are easily forgotten. Biometric unlocking includes fingerprint unlocking, face unlocking, and the like; fingerprint capture is affected by how dry, wet, or warm the finger is when it touches the device, a face can be replaced by a photograph so additional liveness detection is required, and because biometric features are unique they cannot be changed, which limits convenience.
Disclosure of Invention
The present application provides a device unlocking method and apparatus that perform identity authentication using the biometric properties of lip-movement features, thereby unlocking a device flexibly.
In a first aspect, a device unlocking method is provided, including: acquiring a current lip-movement image of a user; obtaining a first lip-movement feature from the current lip-movement image; and deciding whether to unlock the device according to the degree of match between the first lip-movement feature and a second lip-movement feature, the second lip-movement feature being a pre-stored lip-movement feature used for unlocking the device.
With reference to the first aspect, in certain implementations of the first aspect, deciding whether to unlock the device according to the degree of match between the first lip-movement feature and the second lip-movement feature includes: unlocking the device if the degree of match is higher than a preset first threshold; or refusing to unlock the device if the degree of match is lower than a preset second threshold.
Comparing the current lip-movement feature with the pre-stored lip-movement feature and deciding whether to unlock from the degree of match removes the risk, present in traditional password unlocking, of forgetting the password, so the method is resistant to forgetting. Unlocking by lip-movement features also requires neither contact with the device nor additional liveness detection, making it convenient and efficient.
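The two-threshold decision described in this implementation can be sketched as a small function. This is an illustrative sketch rather than part of the disclosed method; the 0.80/0.50 defaults are the example figures used later in the description, not values fixed by the method.

```python
def decide_unlock(match_score, first_threshold=0.80, second_threshold=0.50):
    """Three-way decision on a lip-movement match score:
    unlock above the first threshold, refuse below the second,
    and leave the band in between to further authentication."""
    if match_score > first_threshold:
        return "unlock"
    if match_score < second_threshold:
        return "refuse"
    return "undecided"
```

The "undecided" band is what later motivates falling back to password authentication.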
With reference to the first aspect, in certain implementations of the first aspect, the method further includes: determining a first password corresponding to the current lip-movement image. Deciding whether to unlock the device according to the degree of match between the first and second lip-movement features then includes: if the degree of match is lower than a preset third threshold, deciding whether to unlock the device according to the first password.
With reference to the first aspect, in some implementations of the first aspect, deciding whether to unlock the device according to the first password includes: unlocking the device if the first password is in a preset password whitelist, the whitelist passwords being those that can unlock the device; or refusing to unlock the device if the first password is not in the password whitelist; or refusing to unlock the device if the first password is in a preset password blacklist, the blacklist passwords being those that cannot unlock the device.
Password authentication can thus be performed after lip-movement verification, so that when the lip-movement feature alone cannot clearly decide whether to unlock the device, a further confirmation step provides additional protection.
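A minimal sketch of the whitelist/blacklist password check follows. The "unknown" outcome for a password on neither list (so that other authentication can proceed) anticipates the implementations described later and is an assumption of this sketch.

```python
def check_password(password, whitelist, blacklist):
    """Decide unlocking from a recognized password string."""
    if password in whitelist:
        return "unlock"   # whitelist passwords can unlock the device
    if password in blacklist:
        return "refuse"   # blacklist passwords can never unlock the device
    return "unknown"      # on neither list: fall back to other checks
```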
With reference to the first aspect, in certain implementations of the first aspect, the method further includes: determining a first password corresponding to the current lip-movement video; and, before obtaining the first lip-movement feature from the current lip-movement image: if the first password is in neither the password whitelist nor the password blacklist, obtaining the first lip-movement feature from the current lip-movement image and unlocking the device according to the first lip-movement feature.
With reference to the first aspect, in certain implementations of the first aspect, the method further includes: determining a first password corresponding to the current lip-movement video; and, before obtaining the first lip-movement feature from the current lip-movement image: determining that the first password is in neither the password whitelist nor the password blacklist.
This covers the case where the user has forgotten the password: the user can speak a phrase that is in neither the password whitelist nor the password blacklist, so that lip-movement feature authentication is performed instead. This avoids the user being unable to unlock the device merely because the password was forgotten.
With reference to the first aspect, in certain implementations of the first aspect, the method further includes: acquiring a first lip-movement image of the user; and obtaining the passwords in the password blacklist or the password whitelist from the first lip-movement image.
A password derived from the user's lip-movement image can be set in advance as a whitelist password; a password can also be added to the blacklist after the user's lip-movement image has been stolen, preventing others from unlocking the device with the stolen image.
With reference to the first aspect, in certain implementations of the first aspect, the method further includes: acquiring a second lip-movement image of the user; and obtaining the second lip-movement feature from the second lip-movement image.
The user can thus store corresponding lip-movement features from his or her own lip-movement image in advance, for use in lip-movement feature authentication when the device is later unlocked.
In a second aspect, an unlocking apparatus is provided, including: an acquisition unit, configured to acquire a current lip-movement image of a user; and a processing unit, configured to obtain a first lip-movement feature from the current lip-movement image, and further configured to decide whether to unlock the device according to the degree of match between the first lip-movement feature and a second lip-movement feature, the second lip-movement feature being a pre-stored lip-movement feature used for unlocking the device.
With reference to the second aspect, in some implementations of the second aspect, the processing unit deciding whether to unlock the device according to the degree of match between the first and second lip-movement features includes: unlocking the device if the degree of match is higher than a preset first threshold; or refusing to unlock the device if the degree of match is lower than a preset second threshold.
With reference to the second aspect, in some implementations of the second aspect, the processing unit is further configured to determine a first password corresponding to the current lip-movement image; and the processing unit deciding whether to unlock the device according to the degree of match between the first and second lip-movement features includes: if the degree of match is lower than a preset third threshold, deciding whether to unlock the device according to the first password.
With reference to the second aspect, in some implementations of the second aspect, the processing unit deciding whether to unlock the device according to the first password includes: unlocking the device if the first password is in a preset password whitelist, the whitelist passwords being those that can unlock the device; or refusing to unlock the device if the first password is not in the password whitelist; or refusing to unlock the device if the first password is in a preset password blacklist, the blacklist passwords being those that cannot unlock the device.
With reference to the second aspect, in some implementations of the second aspect, the processing unit is further configured to determine a first password corresponding to the current lip-movement video; and, before obtaining the first lip-movement feature from the current lip-movement image, the processing unit is further configured to: if the first password is in neither the password whitelist nor the password blacklist, obtain the first lip-movement feature from the current lip-movement image and unlock the device according to the first lip-movement feature.
With reference to the second aspect, in certain implementations of the second aspect, the acquisition unit is further configured to acquire a first lip-movement image of the user, and the processing unit is further configured to obtain the passwords in the password blacklist or the password whitelist from the first lip-movement image.
With reference to the second aspect, in certain implementations of the second aspect, the acquisition unit is further configured to acquire a second lip-movement image of the user, and the processing unit is further configured to obtain the second lip-movement feature from the second lip-movement image.
In a third aspect, a computer program product is provided. The computer program product comprises a computer program (also called code, or instructions) which, when run on a computer, causes the computer to perform the method of the first aspect or any of its possible implementations.
In a fourth aspect, a computer-readable storage medium is provided. The storage medium stores a computer program comprising instructions for performing the method of the first aspect or any of its possible implementations.
Drawings
Fig. 1 is a schematic diagram of a hardware system of a terminal device according to a device unlocking method provided in an embodiment of the present application.
Fig. 2 is a schematic flowchart of a device unlocking method provided in an embodiment of the present application.
Fig. 3 is a schematic block diagram of a multimodal device unlocking method with an a priori password according to an embodiment of the present application.
Fig. 4 is a schematic block diagram of a multimodal device unlocking method with a priori lip-movement features according to an embodiment of the present application.
Fig. 5 is a schematic block diagram of an unlocking device provided in an embodiment of the present application.
Fig. 6 is a schematic block diagram of a terminal device according to an embodiment of the present application.
Detailed Description
Embodiments of the present application are described below with reference to the accompanying drawings.
The terminal device of the embodiments of the present application may be an access terminal, a user equipment (UE), a subscriber unit, a subscriber station, a mobile station, a remote terminal, a mobile device, a user terminal, a wireless communication device, or a user agent. The terminal device may be a cellular telephone, a cordless telephone, a Session Initiation Protocol (SIP) phone, a wireless local loop (WLL) station, a personal digital assistant (PDA), a handheld device with wireless communication capabilities, a computing device or other processing device connected to a wireless modem, a vehicle-mounted device, a wearable device, etc.
Fig. 1 is a schematic diagram of a hardware system 100 of a terminal device implementing the device unlocking method of the present application. The system 100 shown in FIG. 1 includes: a light source emitter 110, a spectrum analysis module 130, a color camera 140, a central processor 150, a touch screen 160, a non-volatile memory 170, and a memory 180.
The color camera 140 and the light source emitter 110 form a spectrum input module, and the spectrum analysis module 130 forms an image generation module. The light source emitter 110 and the color camera 140 may be mounted side by side above the device (e.g., centered directly above it). The light source emitter 110 may be an infrared emitter, and the spectrum analysis module 130 may be an infrared spectrum analysis module. In this case, the light source emitter 110 projects infrared-encoded imagery onto the scene: it outputs an ordinary laser source that is diffused by ground glass and passed through an infrared filter to form near-infrared light. The light source emitter 110 may continuously illuminate the whole scene with infrared light at a wavelength of 840 nanometers (nm).
The central processor 150 performs lip-movement feature analysis, unlock decisions, and peripheral control. The non-volatile memory 170 stores program files, system files, and lip-movement feature information. The memory 180 serves as the cache for system and program operation. The touch screen 160 is used for interaction with the user. Specifically, the central processor 150 reads the depth data and extracts the user's lip-movement feature data. While the user is setting the unlock lip-movement feature, the lip-movement features are analyzed in real time; the central processor 150 then extracts the lip-movement feature values and stores them in the non-volatile memory 170. When the user needs to verify lip-movement features to unlock the screen, lip-movement data is captured in real time, the lip-movement features are extracted and compared with the stored feature data; if they match, unlocking succeeds, and otherwise unlocking fails.
It should be understood that the hardware system 100 in fig. 1 is only one hardware implementation manner of the device unlocking method in the embodiment of the present application, and does not constitute a limitation on the embodiment of the present application.
The following describes a device unlocking method provided in an embodiment of the present application.
Fig. 2 is a schematic flowchart of a device unlocking method provided in an embodiment of the present application. The method shown in Fig. 2 is performed by a terminal device, for example the terminal device shown in Fig. 1.
S201, acquire a current lip-movement image of the user.
Lip-movement images can be obtained from a lip-movement video, in which the user reads a passage of text or a string of digits toward the camera, aloud or silently. The image region captured by the camera should include all or part of the face, and at least the entire lip region.
S202, obtain a first lip-movement feature from the current lip-movement image.
Illustratively, if the current lip-movement image is obtained from a lip-movement video, a first lip-movement sequence is obtained from the current lip-movement video, and the first lip-movement feature is then obtained from that sequence. Specifically, the user's face region in the lip-movement video is extracted by face recognition, and the user's lip region is then extracted from the face region by feature-point localization. Because the lip-movement video consists of many frames, it can be converted into a lip-movement sequence from the lip region in each frame. The lip feature of each frame in the sequence is extracted, for example with a convolutional neural network (CNN). The lip-movement features of the sequence are then extracted from the per-frame lip features, for example with a recurrent neural network (RNN) or a long short-term memory (LSTM) network. If the current lip-movement image of the user is obtained directly, the lip features in the current lip-movement image can be extracted directly. A corresponding password can also be obtained from the lip-movement features, the password being the text or digit string that the user read toward the camera, aloud or silently.
Because each person's lip movements are distinctive, lip-movement features carry identity information and can be used to represent the user's identity.
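The S202 pipeline (per-frame lip features, then a sequence-level lip-movement feature) can be illustrated with simple stand-ins. The two functions below are hypothetical placeholders for the trained CNN and RNN/LSTM stages, chosen only to show the data flow from cropped lip regions to a single feature vector.

```python
def frame_lip_feature(lip_region):
    # Stand-in for the per-frame CNN: reduce a cropped lip region
    # (a 2-D list of pixel intensities) to a fixed-length vector,
    # here simply the mean of each row.
    return [sum(row) / len(row) for row in lip_region]

def lip_motion_feature(lip_sequence):
    # Stand-in for the RNN/LSTM: aggregate the per-frame vectors
    # over time into one lip-movement feature vector.
    per_frame = [frame_lip_feature(region) for region in lip_sequence]
    n = len(per_frame)
    dim = len(per_frame[0])
    return [sum(v[i] for v in per_frame) / n for i in range(dim)]
```

In a real system the averaging would be replaced by learned networks; only the shapes of the intermediate data are faithful to the description.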
S203, decide whether to unlock the device according to the degree of match between the first lip-movement feature and the second lip-movement feature, where the second lip-movement feature is a pre-stored lip-movement feature used for unlocking the device.
Specifically, if the degree of match between the first lip-movement feature and the second lip-movement feature is higher than a preset first threshold, the device is unlocked; if the degree of match is lower than a preset second threshold, unlocking the device is refused. A lip-movement feature is a vector: matching the first lip-movement feature against the second means comparing the current lip-movement feature vector with the pre-stored one, using a deep learning network, and computing their similarity. The similarity is expressed through the distance between the current and pre-stored feature vectors: the smaller the distance, the greater the similarity, and the larger the distance, the smaller the similarity. The distance may be computed as a Euclidean distance, Hamming distance, cosine distance, etc. The first and second thresholds may be preset specific values, for example a first threshold of 80% and a second threshold of 50%. In that case, when the user tries to unlock the device, a match above 80% between the first lip-movement feature and the pre-stored second lip-movement feature passes authentication and the device can be unlocked directly; a match below 50% fails authentication and unlocking can be refused directly.
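A minimal sketch of the distance-based matching named here follows. The mapping from distance to a matching degree in `match_degree` is an assumption for illustration, since the description does not fix one; it only requires that smaller distance mean higher similarity.

```python
import math

def euclidean_distance(a, b):
    # Straight-line distance between two feature vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def cosine_similarity(a, b):
    # Cosine of the angle between two feature vectors (1.0 = identical direction).
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def match_degree(current, stored):
    # Illustrative monotone mapping of distance into (0, 1]:
    # identical vectors give 1.0, larger distances give smaller degrees.
    return 1.0 / (1.0 + euclidean_distance(current, stored))
```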
Optionally, if the degree of match between the first lip-movement feature and the second lip-movement feature is lower than a preset third threshold, the method of the embodiment of the present application further includes determining a first password from the current lip-movement image of the user. Specifically, the text or digit string that the user read toward the camera, aloud or silently, is recognized from the lip-movement features; that string is the first password, and whether to unlock the device is decided from it. The third threshold may be a preset specific value, for example 90%: if the degree of match between the first and second lip-movement features is below this third threshold, whether to unlock the device is decided according to the first password. Optionally, if the first password is in a preset password whitelist (the whitelist passwords being those that can unlock the device), for example the pre-stored whitelist contains "1111" and the first password is also "1111", authentication passes and the device can be unlocked directly. Optionally, if the first password is not in the password whitelist, for example the first password is "3333", authentication fails and unlocking can be refused. Optionally, if the first password is in a preset password blacklist (the blacklist passwords being those that cannot unlock the device), for example the pre-stored blacklist contains "2222" and the first password is also "2222", authentication fails and unlocking can be refused.
Optionally, the device unlocking method provided in the embodiment of the present application further includes: before deciding whether to unlock the device from the degree of match between the first and second lip-movement features, deciding from the first password whether lip-movement feature authentication is needed at all.
Specifically, if the first password is in the preset password whitelist, authentication passes and the device can be unlocked directly; if the first password is in the preset password blacklist, authentication fails and unlocking can be refused directly. Optionally, if the first password is in neither the preset password whitelist nor the preset password blacklist, lip-movement feature authentication is then performed.
As can be seen from the above, the device unlocking method provided in the embodiment of the present application enables multimodal unlocking of a terminal device. The multimodal device unlocking method is described in detail below with specific examples.
Fig. 3 illustrates a multimodal device unlocking method with an a priori password provided by an embodiment of the present application, in which password authentication is performed before lip-movement feature authentication. For example, the user reads a string of digits toward the camera; the terminal device obtains the user's password and compares it against a preset password whitelist and blacklist to decide whether to unlock the device, or whether further lip-movement verification is needed.
Password authentication is performed first. If the password provided by the user is in the preset password whitelist (for example, the whitelist contains "1111" and the user's password is "1111"), authentication passes and the device can be unlocked directly. If the password provided is in the preset password blacklist (for example, the blacklist contains "2222" and the user's password is also "2222"), authentication fails and unlocking can be refused directly. If the password provided is in neither list (for example, the user's password is "3333"), further lip-movement feature authentication is performed: the terminal device obtains the user's current lip-movement features and matches them against the pre-stored lip-movement features used for unlocking; if the degree of match exceeds 80%, authentication passes and the device can be unlocked directly, while if it is below 80%, authentication fails and unlocking can be refused directly.
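The Fig. 3 flow can be condensed into one decision function. This is a sketch under the example values used in the description ("1111" whitelist, "2222" blacklist, 80% threshold), which are illustrative rather than prescribed.

```python
def unlock_password_first(password, lip_match_score,
                          whitelist=frozenset({"1111"}),
                          blacklist=frozenset({"2222"}),
                          threshold=0.80):
    """A priori password: check lists first, lip features only as fallback."""
    if password in whitelist:
        return True    # password authentication passes directly
    if password in blacklist:
        return False   # blacklisted password can never unlock
    # On neither list: fall back to lip-movement feature authentication.
    return lip_match_score > threshold
```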
It should be understood that, because each person's lip-movement features are distinctive and largely independent of the specific lip motions performed, the user's identity can still be recognized from the lip-movement features even when the spoken password does not match the pre-stored password, that is, even when the specific lip motions differ from the pre-stored ones.
Fig. 4 shows a multimodal device unlocking method with a priori lip-movement features provided by an embodiment of the present application, in which password authentication is performed after lip-movement feature authentication. The method proceeds as follows.
Lip-movement features are authenticated first: the user's current lip-movement features are matched against the pre-stored lip-movement features used for unlocking. If the degree of match exceeds 80%, authentication passes and the device can be unlocked directly; if it is below 50%, authentication fails and unlocking can be refused directly; if it lies in the interval [50%, 80%], password authentication is performed next. If the password provided by the user is in the preset password whitelist, authentication passes and the device can be unlocked directly; optionally, if the password is not in the whitelist, authentication fails and unlocking can be refused; optionally, if the password is in the preset password blacklist, authentication fails and unlocking can be refused.
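The Fig. 4 flow can likewise be sketched. The thresholds and example whitelist below are the illustrative figures from the description, not fixed parameters of the method.

```python
def unlock_lip_first(lip_match_score, password,
                     whitelist=frozenset({"1111"}),
                     t_unlock=0.80, t_reject=0.50):
    """A priori lip features: password checked only in the undecided band."""
    if lip_match_score > t_unlock:
        return True    # lip-movement authentication passes directly
    if lip_match_score < t_reject:
        return False   # lip-movement authentication fails directly
    # Score in [t_reject, t_unlock]: fall back to password authentication.
    return password in whitelist
```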
Optionally, the device unlocking method provided by the embodiment of the present application may also perform lip-movement feature authentication alone. For example, the user's current lip-movement features are matched against the pre-stored lip-movement features used for unlocking; if the degree of match exceeds 80%, authentication passes and the device can be unlocked directly, and if it is below 80%, authentication fails and unlocking can be refused directly.
It should also be understood that the above description is only an example of the device unlocking method provided in the embodiments of the present application, and does not constitute a limitation on the embodiments of the present application.
In the embodiment of the present application, the order of the password feature authentication and the lip movement feature authentication is not limited. For example, lip movement feature authentication may be performed first, followed by password authentication if necessary; or firstly carrying out password authentication and then carrying out lip movement characteristic authentication if necessary.
Alternatively, obtaining the password in the password white list or the password in the password black list may be implemented in the following manner.
A first lip motion image of a user is acquired.
Specifically, the user may record a video containing lip movements while facing the camera, where the lip movement may be the user reading a passage of text or a string of digits aloud or silently; for example, the user reads the string "1111" aloud.
And acquiring the passwords in the password blacklist or the passwords in the password whitelist according to the first lip movement image.
Optionally, before the password in the password blacklist or the password in the password whitelist is acquired, the method further includes acquiring a lip movement sequence from the lip movement video, then acquiring a lip movement feature from the lip movement sequence, and finally acquiring the password from the lip movement feature. Alternatively, if the user reads the text or digit string aloud, the password may also be recognized from the audio. For the specific process, reference may be made to the method for acquiring the first lip movement feature in S202; for brevity, details are not repeated here.
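The chain video → lip movement sequence → lip movement feature → password can be sketched as below. Every function body is a stand-in, since the real extraction is the S202 method not reproduced in this excerpt; the function names, the toy descriptor, and the model table are assumptions for illustration only.

```python
def extract_lip_sequence(video_frames):
    # Stand-in for S202: crop the lip region out of every frame
    return [frame["lip_region"] for frame in video_frames]

def extract_lip_feature(lip_sequence):
    # Stand-in: summarize the sequence as a fixed-size descriptor
    return {"num_frames": len(lip_sequence)}

def recognize_password(lip_feature, digit_models):
    # Stand-in: map the descriptor to the most plausible digit string
    return digit_models.get(lip_feature["num_frames"])

frames = [{"lip_region": f"crop{i}"} for i in range(4)]
password = recognize_password(
    extract_lip_feature(extract_lip_sequence(frames)),
    digit_models={4: "1111"},  # hypothetical mapping from descriptor to digit string
)
print(password)  # 1111
```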
Optionally, a password whitelist may be set according to the password. For example, the number "1111" recognized according to the lip movement feature is set as the password in the password whitelist.
Optionally, a password blacklist may also be set. The blacklist may include the password obtained from the lip movement feature and/or any other password. For example, the password "1111" obtained above may be placed in the blacklist, the blacklist may instead contain another digit string such as "2222", or the blacklist may include both "1111" and "2222".
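The whitelist and the three blacklist variants described above amount to simple set choices; the variable names are illustrative.

```python
recognized = "1111"   # password recognized from the user's lip movement feature
other = "2222"        # any other password the user chooses to ban

whitelist = {recognized}            # whitelist built from the recognized password
blacklist_v1 = {recognized}         # variant 1: blacklist the recognized password itself
blacklist_v2 = {other}              # variant 2: blacklist a different digit string
blacklist_v3 = {recognized, other}  # variant 3: blacklist both

print(sorted(blacklist_v3))  # ['1111', '2222']
```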
Alternatively, obtaining the second lip movement characteristic may be achieved in the following manner.
A second lip motion image of the user is acquired.
And acquiring a second lip movement characteristic according to the second lip movement image.
The second lip motion characteristic is stored.
The specific process of obtaining the second lip movement characteristics may refer to the process of obtaining the first lip movement characteristics in S202, and for brevity, the embodiment of the present application is not described herein again.
It should be understood that the above methods for obtaining the passwords in the password whitelist or the password blacklist and for obtaining the second lip movement feature are merely examples; they may also be implemented in other ways, such as the user entering the password directly, or the second lip movement feature being obtained from a server.
By setting a password whitelist, the device can be unlocked directly with a whitelisted password, without lip movement feature authentication. The user can therefore authorize others to unlock the device by telling them a password in the whitelist. This avoids the situation in biometric unlocking (for example, when face recognition is the only unlocking method) in which nobody but the user can conveniently unlock the device, and greatly improves the flexibility of device unlocking.
By setting a password blacklist, unlocking can be refused directly for any blacklisted password. For example, if the user's lip movement video or password feature is stolen, the user can add the password in the stolen lip movement video, or the stolen password feature, to the blacklist, preventing others from unlocking the device with it. The user can then record a new lip movement video, store a new lip movement feature, and set a new password whitelist. This improves unlocking security and, compared with other biometric unlocking methods (such as face recognition unlocking), the method provided by the embodiment of the application has the advantage of being modifiable.
For a password that is in neither the whitelist nor the blacklist, lip movement feature authentication can further be performed to unlock the device. For example, if the user forgets the pre-stored password feature, it is only necessary to speak, aloud or silently, any password that is not in the blacklist to the camera when unlocking (assuming the password feature is a four-digit number, the user may speak any four-digit number not in the blacklist). Lip movement feature authentication then follows, and the device is unlocked as long as the user's first lip movement feature matches the pre-stored lip movement feature.
The multi-modal device unlocking method provided by the embodiment of the application may also perform lip movement feature authentication alone. For example, as long as the matching degree between the user's first lip movement feature and the pre-stored lip movement feature reaches the preset value, the device can be unlocked without password feature authentication.
The embodiment of the application does not limit the order of lip movement feature authentication and password feature authentication. In actual use, lip movement feature authentication may be performed first: if it passes, the device is unlocked directly; if it fails, the device is not unlocked; and, if necessary, password authentication may follow the lip movement feature authentication. Alternatively, password authentication may be performed first: if the password is in the whitelist, the device is unlocked directly; if the password is in the blacklist, the device is not unlocked; and, if necessary (for example, when the password is in neither the whitelist nor the blacklist), lip movement feature authentication follows. It should be understood that all of the above device unlocking methods belong to the multi-modal device unlocking method provided in the embodiments of the present application.
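The password-first ordering can be sketched as a counterpart of the lip-movement-first flow. The function name and the single fallback threshold are assumptions; the text leaves the threshold value open for this ordering.

```python
def unlock_password_first(password, lip_match, whitelist, blacklist, threshold=0.8):
    # Password authentication first, per the ordering described above
    if password in whitelist:   # whitelisted password: unlock directly
        return True
    if password in blacklist:   # blacklisted password: refuse directly
        return False
    # Password in neither list: fall back to lip movement feature authentication
    return lip_match >= threshold

# A password in neither list, plus a strong lip match, still unlocks the device
print(unlock_password_first("3333", 0.92, {"1111"}, {"2222"}))  # True
```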
Fig. 5 shows a schematic block diagram of an unlocking device 500 according to an embodiment of the present application. As shown in fig. 5, the unlocking device 500 includes: an acquisition unit 510 and a processing unit 520.
An obtaining unit 510 is configured to obtain a current lip moving image of the user.
The lip movement image can be obtained from a lip movement video, specifically a video of the user reading a passage of text or a string of digits, aloud or silently, while facing the camera. The image captured by the camera should include all or part of the face, and at least the entire lip region.
And a processing unit 520, configured to obtain a first lip motion characteristic according to the current lip motion image. Optionally, the processing unit 520 is further configured to obtain a current password of the user according to the first lip movement characteristic. Reference may be made to S202 for a process of obtaining the first lip movement feature and the password, and for brevity, no further description is provided herein.
The processing unit 520 is further configured to determine whether to unlock the device according to a matching degree of the first lip movement characteristic and a second lip movement characteristic, where the second lip movement characteristic is a pre-stored lip movement characteristic for unlocking the device. Specifically, if the matching degree of the first lip movement characteristic and the second lip movement characteristic is higher than a preset first threshold value, unlocking the equipment; and if the matching degree of the first lip movement characteristic and the second lip movement characteristic is lower than a preset second threshold value, forbidding unlocking the equipment. Optionally, if the matching degree of the first lip movement feature and the second lip movement feature is lower than a preset third threshold, the processing unit 520 further determines a first password according to the current lip movement image of the user, and determines whether to unlock the device according to the first password. Specifically, if the first password is a password in a preset password white list, unlocking the equipment; and if the first password is not the password in the password white list or the first password is the password in the preset password black list, forbidding unlocking the equipment.
Optionally, the processing unit 520 is further configured to determine whether to unlock the device using the lip movement feature according to the first password before determining whether to unlock the device according to the matching degree of the first lip movement feature and the second lip movement feature. Specifically, if the first password is neither in the preset password white list nor in the preset password black list, the lip movement feature is authenticated at this time.
Optionally, the unlocking device 500 is further configured to perform setting of the password white list and the password black list before unlocking. Specifically, the obtaining unit 510 obtains a first lip moving image of the user, and the processing unit 520 obtains a password in a password blacklist or a password in a password whitelist according to the first lip moving image.
Optionally, the unlocking device 500 is further configured to enter the second lip movement feature before unlocking. Specifically, the obtaining unit 510 obtains a second lip movement image of the user, and the processing unit 520 obtains the second lip movement feature according to the second lip movement image.
Therefore, the unlocking device 500 according to the embodiment of the present application obtains, in real time, the lip movement image presented by the user in front of the camera, extracts the user's lip movement feature from it, and matches that feature against the lip movement feature previously set by the user, thereby unlocking the terminal device. This provides the user with a brand-new unlocking method that is engaging, accurate, and fast.
Optionally, the unlocking device of the embodiment of the present application includes, but is not limited to, a mobile phone, a tablet, a computer, a multimedia device, and a game device. All devices using a mobile communication network are within the scope of the embodiments of the present application.
Fig. 6 is a schematic block diagram of a terminal device according to an embodiment of the present application. The terminal device 600 shown in fig. 6 includes: radio Frequency (RF) circuitry 610, memory 620, other input devices 630, display screen 640, sensors 650, audio circuitry 660, I/O subsystem 670, processor 680, and power supply 690. Those skilled in the art will appreciate that the terminal device configuration shown in fig. 6 does not constitute a limitation of the terminal device, and may include more or fewer components than those shown, or combine certain components, or split certain components, or a different arrangement of components. Those skilled in the art will appreciate that the display 640 is part of a User Interface (UI) and that the terminal device 600 may include fewer or more User interfaces than shown.
The respective constituent elements of the terminal device 600 will be briefly described below with reference to fig. 6:
The RF circuit 610 may be used to receive and transmit signals during messaging or a call; in particular, it receives downlink information from the base station and delivers it to the processor 680 for processing. The memory 620 may be used to store software programs and modules, and the processor 680 executes the various functional applications and data processing of the terminal device 600 by running the software programs and modules stored in the memory 620. Other input devices 630 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the terminal device 600. The display screen 640 may be used to display information input by or provided to the user and the various menus of the terminal device 600, and may also accept user input; it may include a display panel 641 and a touch panel 642. The terminal device 600 may also include at least one sensor 650, such as a light sensor, a motion sensor, or another sensor. The audio circuit 660, speaker 661, and microphone 662 provide an audio interface between the user and the terminal device 600. The I/O subsystem 670 controls the input and output of external devices, and includes other-device input controllers 671, a sensor controller 672, and a display controller 673. The processor 680 is the control center of the terminal device 600: it connects the various parts of the terminal device using various interfaces and lines, and performs the various functions of the terminal device 600 and processes data by running or executing the software programs and/or modules stored in the memory 620 and calling the data stored in the memory 620, thereby monitoring the terminal device as a whole. The processor 680 is configured to perform the methods described in S201 to S203.
Although not shown, the terminal device 600 may further include a camera, a bluetooth module, and the like, which will not be described herein.
It should be understood that the terminal device 600 may correspond to a terminal device in the device unlocking method according to the embodiment of the present application, and the terminal device 600 may include an entity unit for executing the method executed by the terminal device or the electronic device in the above-described method. Moreover, each entity unit and the other operations and/or functions in the terminal device 600 are respectively corresponding to the flows of the method, and are not described herein again for brevity.
It is also to be understood that the terminal device 600 may comprise physical units for performing the above-described method of acquiring lip motion images. Moreover, each entity unit and the other operations and/or functions in the terminal device 600 are respectively corresponding to the flows of the method, and are not described herein again for brevity.
It should also be understood that the processor in the embodiments of the present application may be an integrated circuit chip having signal processing capability. In implementation, the steps of the above method embodiments may be completed by integrated logic circuits of hardware in the processor or by instructions in the form of software. The processor may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. The methods, steps, and logic blocks disclosed in the embodiments of the present application may be implemented or performed by such a processor. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the method disclosed in connection with the embodiments of the present application may be directly embodied as being executed by a hardware decoding processor, or executed by a combination of hardware and software modules in the decoding processor. The software module may reside in RAM, flash memory, ROM, PROM, EPROM, registers, or another mature storage medium in the art. The storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above method in combination with its hardware.
It should also be understood that the memory in the embodiments of the present application may be volatile memory or non-volatile memory, or may include both volatile and non-volatile memory. The non-volatile memory may be Read-Only Memory (ROM), Programmable ROM (PROM), Erasable PROM (EPROM), Electrically Erasable PROM (EEPROM), or flash memory. The volatile memory may be Random Access Memory (RAM), which is used as an external cache. By way of example and not limitation, many forms of RAM are available, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), and Direct Rambus RAM (DR RAM). It should be noted that the memory of the systems and methods described herein is intended to include, without being limited to, these and any other suitable types of memory.
It will also be appreciated that the bus system may include a power bus, a control bus, a status signal bus, etc., in addition to the data bus. For clarity of illustration, however, the various buses are labeled as a bus system in the figures.
It should also be understood that, in the present embodiment, "B corresponding to A" means that B is associated with A, and that B can be determined from A. However, determining B from A does not mean determining B from A alone; B may also be determined from A and/or other information. It should be understood that the term "and/or" herein merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the character "/" herein generally indicates an "or" relationship between the preceding and following objects.
In implementation, the steps of the above method may be performed by integrated logic circuits of hardware in a processor or by instructions in the form of software. The steps of the method disclosed in the embodiments of the present application may be directly embodied as being executed by a hardware processor, or executed by a combination of hardware and software modules in the processor. The software module may reside in RAM, flash memory, ROM, PROM, EPROM, registers, or another mature storage medium in the art. The storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above method in combination with its hardware. To avoid repetition, details are not described here.
Embodiments of the present application also provide a computer-readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by a portable electronic device comprising a plurality of application programs, enable the portable electronic device to perform the method of the embodiment shown in fig. 2 and/or fig. 3.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the embodiments of the present application.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as a stand-alone product, they may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of the embodiments of the present application, or the part of them that contributes beyond the prior art, may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disc.
The above description is only a specific implementation of the embodiments of the present application, but the scope of the embodiments of the present application is not limited thereto, and any person skilled in the art can easily conceive of changes or substitutions within the technical scope of the embodiments of the present application, and all the changes or substitutions should be covered by the scope of the embodiments of the present application. Therefore, the protection scope of the embodiments of the present application shall be subject to the protection scope of the claims.

Claims (15)

1. A method for unlocking a device, comprising:
acquiring a current lip moving image of a user;
acquiring a first lip movement characteristic according to the current lip movement image;
and judging whether the equipment is unlocked or not according to the matching degree of the first lip movement characteristic and the second lip movement characteristic, wherein the second lip movement characteristic is a pre-stored lip movement characteristic used for unlocking the equipment.
2. The method of claim 1, wherein determining whether to unlock the device based on a matching degree of the first lip movement characteristic and the second lip movement characteristic comprises:
if the matching degree of the first lip movement characteristic and the second lip movement characteristic is higher than a preset first threshold value, unlocking the equipment;
or
And if the matching degree of the first lip movement characteristic and the second lip movement characteristic is lower than a preset second threshold value, forbidding unlocking the equipment.
3. The method according to claim 1 or 2, characterized in that the method further comprises:
determining a first password corresponding to the current lip moving image; and
the judging whether to unlock the equipment according to the matching degree of the first lip movement characteristic and the second lip movement characteristic comprises the following steps:
and if the matching degree of the first lip movement characteristic and the second lip movement characteristic is lower than a preset third threshold value, judging whether the equipment is unlocked or not according to the first password.
4. The method of claim 3, wherein determining whether to unlock the device based on the first password comprises:
if the first password is a password in a preset password white list, unlocking the equipment, wherein the password in the password white list is a password capable of unlocking the equipment; or
If the first password is not a password in the password white list, forbidding unlocking the equipment; or
And if the first password is a password in a preset password blacklist, forbidding unlocking the equipment, wherein the password in the password blacklist is a password which cannot unlock the equipment.
5. The method of claim 1, further comprising:
determining a first password corresponding to the current lip moving image; and
prior to acquiring a first lip motion feature from the current lip motion image, the method further comprises:
and if the first password is neither the password in the password white list nor the password in the password black list, acquiring a first lip movement characteristic according to the current lip movement image, and unlocking the equipment according to the first lip movement characteristic.
6. The method according to claim 4 or 5, characterized in that the method further comprises:
acquiring a first lip moving image of the user;
and acquiring the passwords in the password blacklist or the passwords in the password whitelist according to the first lip movement image.
7. The method according to any one of claims 1 to 6, further comprising:
acquiring a second lip moving image of the user;
and acquiring the second lip movement characteristic according to the second lip movement image.
8. An apparatus unlocking device, comprising:
the acquisition unit is used for acquiring a current lip moving image of a user;
the processing unit is used for acquiring a first lip movement characteristic according to the current lip movement image;
the processing unit is further used for judging whether the equipment is unlocked according to the matching degree of the first lip movement characteristic and the second lip movement characteristic, and the second lip movement characteristic is a pre-stored lip movement characteristic used for unlocking the equipment.
9. The apparatus of claim 8, wherein the processing unit determines whether to unlock the device according to a matching degree of the first lip movement characteristic and the second lip movement characteristic, and comprises:
if the matching degree of the first lip movement characteristic and the second lip movement characteristic is higher than a preset first threshold value, unlocking the equipment;
or
And if the matching degree of the first lip movement characteristic and the second lip movement characteristic is lower than a preset second threshold value, forbidding unlocking the equipment.
10. The apparatus according to claim 8 or 9, wherein the processing unit is further configured to:
determining a first password corresponding to the current lip moving image; and
the processing unit judges whether to unlock the equipment according to the matching degree of the first lip movement characteristic and the second lip movement characteristic, and the method comprises the following steps:
and if the matching degree of the first lip movement characteristic and the second lip movement characteristic is lower than a preset third threshold value, judging whether the equipment is unlocked or not according to the first password.
11. The apparatus of claim 10, wherein the processing unit determines whether to unlock the device according to the first password, comprising:
if the first password is a password in a preset password white list, unlocking the equipment, wherein the password in the password white list is a password capable of unlocking the equipment; or
If the first password is not a password in the password white list, forbidding unlocking the equipment; or
And if the first password is a password in a preset password blacklist, forbidding unlocking the equipment, wherein the password in the password blacklist is a password which cannot unlock the equipment.
12. The apparatus of claim 8, wherein the processing unit is further configured to:
determining a first password corresponding to the current lip moving image; and
prior to obtaining a first lip motion feature from the current lip motion image, the processing unit is further to:
and if the first password is neither the password in the password white list nor the password in the password black list, acquiring a first lip movement characteristic according to the current lip movement image, and unlocking the equipment according to the first lip movement characteristic.
13. The apparatus of claim 11 or 12, further comprising:
an acquisition unit configured to acquire a first lip moving image of the user;
and the processing unit is used for acquiring the passwords in the password blacklist or the passwords in the password whitelist according to the first lip movement image.
14. The apparatus of any one of claims 8 to 13, further comprising:
an acquisition unit configured to acquire a second lip moving image of the user;
and the processing unit is used for acquiring a second lip movement characteristic according to the second lip movement image.
15. A terminal device, comprising: a memory, a processor;
the memory is used for storing programs;
the processor for executing the program stored by the memory, the processor for performing the method of any of claims 1 to 7 when the program is executed.
CN201911362158.2A 2019-12-26 2019-12-26 Equipment unlocking method and device Active CN113051535B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911362158.2A CN113051535B (en) 2019-12-26 2019-12-26 Equipment unlocking method and device


Publications (2)

Publication Number Publication Date
CN113051535A true CN113051535A (en) 2021-06-29
CN113051535B CN113051535B (en) 2023-03-03

Family

ID=76505125


Country Status (1)

Country Link
CN (1) CN113051535B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150040209A1 (en) * 2013-07-31 2015-02-05 8318808 Canada Inc. System and method for application specific locking
CN106295501A (en) * 2016-07-22 2017-01-04 中国科学院自动化研究所 The degree of depth based on lip movement study personal identification method
CN107122646A (en) * 2017-04-26 2017-09-01 大连理工大学 A kind of method for realizing lip reading unblock
CN107358085A (en) * 2017-07-28 2017-11-17 惠州Tcl移动通信有限公司 A kind of unlocking terminal equipment method, storage medium and terminal device
CN107977559A (en) * 2017-11-22 2018-05-01 杨晓艳 A kind of identity identifying method, device, equipment and computer-readable recording medium
US20190130172A1 (en) * 2017-10-31 2019-05-02 Baidu Usa Llc Identity authentication method, terminal device, and computer-readable storage medium

Also Published As

Publication number Publication date
CN113051535B (en) 2023-03-03

Similar Documents

Publication Publication Date Title
US11783018B2 (en) Biometric authentication
KR101358444B1 (en) Biometric portable memory chip for mobile devices
US9262615B2 (en) Methods and systems for improving the security of secret authentication data during authentication transactions
EP2685401B1 (en) Methods and systems for improving the security of secret authentication data during authentication transactions
US11496471B2 (en) Mobile enrollment using a known biometric
US10282532B2 (en) Secure storage of fingerprint related elements
US20140020058A1 (en) Methods and systems for improving the security of secret authentication data during authentication transactions
CN108475306B (en) User interface for mobile device
EP3403211B1 (en) User interface for a mobile device
JP2015176555A (en) Communication terminal and method for authenticating communication terminal
CN110582771A (en) method and apparatus for performing authentication based on biometric information
CN113051535B (en) Equipment unlocking method and device
EP3811254A1 (en) Method and electronic device for authenticating a user
CN110930154A (en) Identity verification method and device
KR101906141B1 (en) Apparatus and Method for Multi-level Iris Scan in Mobile Communication Terminal
CN109766679B (en) Identity authentication method and device, storage medium and electronic equipment
CN110659461A (en) Screen unlocking method and device, terminal and storage medium
JP2006135679A (en) Living body collation system and entering and leaving management system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant