WO2020041971A1 - Method and apparatus for face recognition

Method and apparatus for face recognition

Info

Publication number
WO2020041971A1
Authority
WO
WIPO (PCT)
Prior art keywords: face recognition, state, mobile terminal, recognition, face
Prior art date
Application number
PCT/CN2018/102680
Other languages
English (en)
French (fr)
Inventor
胡靓
徐杰
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority to US17/270,165 (US20210201001A1)
Priority to KR1020217005781A (KR20210035277A)
Priority to JP2021510869A (JP7203955B2)
Priority to EP18931528.6A (EP3819810A4)
Priority to CN201880096890.7A (CN112639801A)
Priority to PCT/CN2018/102680 (WO2020041971A1)
Publication of WO2020041971A1

Classifications

    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/166 Detection; Localisation; Normalisation using acquisition arrangements
    • G06V40/172 Classification, e.g. identification
    • G06V40/67 Static or dynamic means for assisting the user to position a body part for biometric acquisition by interactive indications to the user
    • G06V10/17 Image acquisition using hand-held instruments
    • G06V10/987 Detection or correction of errors, e.g. by rescanning the pattern or by human intervention; Evaluation of the quality of the acquired patterns with the intervention of an operator
    • G06F21/32 User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • H04M2203/6054 Biometric subscriber identification

Definitions

  • the present application relates to the technical field of terminals, and in particular, to a method and device for face recognition.
  • Face recognition unlocking has gradually been applied to various terminal devices. For example, when a user uses a mobile terminal, if the result of face recognition meets a preset threshold, the user obtains the corresponding operation permissions, such as unlocking the mobile terminal, entering the corresponding operating system, or gaining access to an application. If the result of face recognition does not meet the preset threshold, the corresponding operation permission cannot be obtained; for example, unlocking fails or access is denied.
  • the user first needs to trigger the face recognition process. Common triggering methods can be clicking the power button or other buttons, picking up the mobile terminal to light up the screen, or triggering through a voice assistant.
  • In some cases, the camera may not be able to capture a suitable face image, which may cause face unlocking to fail.
  • When face recognition unlocking fails, the user needs to verify again; however, because of the power consumption of existing mobile terminals and other constraints, recognition is not performed continuously after a failure, and the user has to actively trigger face recognition unlocking again. Triggering face recognition again requires the user to press the power button or other buttons again, put the mobile terminal down and pick it up again, issue a command through the voice assistant again, and so on. These operations are cumbersome and not smooth; at the same time, face recognition may still fail again, causing inconvenience in use.
  • The embodiments of the present application provide a method and device for face recognition, which can automatically trigger face recognition unlocking and, at the same time, give a gesture adjustment prompt according to the state of the mobile terminal, thereby simplifying operations, improving the success rate of face recognition, and improving the user experience.
  • In a first aspect, an embodiment of the present application provides a method for face recognition.
  • The method includes: triggering face recognition; when face recognition fails, detecting a first state of a mobile terminal; giving a gesture adjustment prompt according to the first state; detecting a second state of the mobile terminal; and determining whether there is a gesture adjustment according to the second state, and if there is a gesture adjustment, automatically triggering face recognition.
  • triggering face recognition includes: collecting a user's face image and comparing it with a pre-stored face image.
  • There are many ways to trigger face recognition. For example, the user can press the mobile terminal's keys, including the power key, the volume key, or other keys; touch the display to light it up and trigger face recognition; pick up the mobile terminal so that its sensors detect the motion and trigger face recognition; or issue a face recognition voice command through a voice assistant to trigger face recognition, and so on.
  • the facial image includes a facial picture or a video.
  • the pre-stored face image is stored in the memory of the mobile terminal, or stored in a server that can communicate with the mobile terminal.
  • the method further includes: when the face recognition is successful, obtaining the operation authority of the mobile terminal.
  • Obtaining the operation permission of the mobile terminal includes any of the following: unlocking the mobile terminal, obtaining access permission for an application installed on the mobile terminal, or obtaining access permission for data stored on the mobile terminal.
  • After face recognition is automatically triggered, the method further includes: when face recognition fails, using any of the following authentication methods for verification: password verification, gesture recognition, fingerprint recognition, iris recognition, or voiceprint recognition.
  • The method further includes: when face recognition fails, determining whether the condition for performing face recognition again is met; if the condition is met, automatically triggering face recognition again; if the condition is not met, using any of the following authentication methods for verification: password verification, gesture recognition, fingerprint recognition, iris recognition, or voiceprint recognition.
  • meeting the condition of face recognition again means that the number of times of face recognition failure is less than a preset threshold.
  • Giving a gesture adjustment prompt according to the first state includes: the mobile terminal analyzes the cause of the face recognition failure according to the first state, finds a solution corresponding to the cause in a preset database, and gives a posture adjustment prompt according to the solution.
  • determining whether there is a posture adjustment according to the second state includes: determining whether a change in the second state from the first state is the same as the content of the posture adjustment prompt.
  • the gesture adjustment prompt includes any combination of the following prompt modes: text, picture, voice, video, light, or vibration.
  • The first state is a first distance between the mobile terminal and the user's face when face recognition fails; the second state is a second distance between the mobile terminal and the user's face after the gesture adjustment prompt is given.
  • The first state is a first tilt angle of the plane of the mobile terminal's display relative to the horizontal plane when face recognition fails; the second state is a second tilt angle of the plane of the mobile terminal's display relative to the horizontal plane after the gesture adjustment prompt is given.
  • In a second aspect, an embodiment of the present application provides a device including a camera, a processor, a memory, and a sensor.
  • The processor is configured to: trigger face recognition, instructing the camera to collect the user's face image and comparing it with a face image stored in the memory in advance; when face recognition fails, instruct the sensor to detect a first state of the device; give a posture adjustment prompt according to the first state; instruct the sensor to detect a second state of the device; and determine whether there is a posture adjustment according to the second state, and if there is a posture adjustment, automatically trigger face recognition.
  • the facial image includes a facial picture or a video.
  • the processor is further configured to: when the face recognition is successful, obtain the operation authority of the device.
  • obtaining the operation right of the device includes any of the following: unlocking the device, obtaining the access right of an application installed on the device, or obtaining the access right of data stored on the device.
  • The processor is further configured to: when face recognition fails, use any of the following authentication methods for verification: password verification, gesture recognition, fingerprint recognition, iris recognition, or voiceprint recognition.
  • The processor is further configured to: when face recognition fails, determine whether the condition for performing face recognition again is met; if the condition is met, automatically trigger face recognition again; if the condition is not met, use any of the following authentication methods for verification: password verification, gesture recognition, fingerprint recognition, iris recognition, or voiceprint recognition.
  • meeting the condition of face recognition again means that the number of times of face recognition failure is less than a preset threshold.
  • Giving a gesture adjustment prompt according to the first state includes: analyzing the cause of the face recognition failure according to the first state, finding a solution corresponding to the cause in a preset database, and giving a posture adjustment prompt according to the solution.
  • determining whether there is a posture adjustment according to the second state includes: determining whether a change in the second state from the first state is the same as the content of the posture adjustment prompt.
  • the gesture adjustment prompt includes any combination of the following prompt modes: text, picture, voice, video, light, or vibration.
  • the first state is a first distance between the device and the user's face when face recognition fails
  • the second state is the second distance between the device and the user's face after a gesture adjustment prompt is given.
  • the device further includes a display.
  • The first state is a first tilt angle of the plane of the display relative to the horizontal plane when face recognition fails; the second state is a second tilt angle of the plane of the display relative to the horizontal plane after the gesture adjustment prompt is given.
  • In a third aspect, an embodiment of the present application provides a device including a face recognition unit, a processing unit, a prompting unit, and a state detection unit. The processing unit is used to trigger face recognition; the face recognition unit is used to collect the user's face image and compare it with a pre-stored face image; the state detection unit is used to detect a first state of the device when face recognition fails; the prompting unit is used to give a posture adjustment prompt according to the first state; the state detection unit is further configured to detect a second state of the device; and the processing unit is further configured to determine whether there is a posture adjustment according to the second state, and if there is a posture adjustment, automatically trigger face recognition.
  • the facial image includes a facial picture or a video.
  • the processing unit is further configured to: when the face recognition is successful, obtain the operation authority of the device.
  • obtaining the operation right of the device includes any of the following: unlocking the device, obtaining the access right of an application installed on the device, or obtaining the access right of data stored on the device.
  • The processing unit is further configured to: when face recognition fails, use any of the following authentication methods for verification: password verification, gesture recognition, fingerprint recognition, iris recognition, or voiceprint recognition.
  • The processing unit is further configured to: when face recognition fails, determine whether the condition for performing face recognition again is met; if the condition is not met, use any of the following authentication methods for verification: password verification, gesture recognition, fingerprint recognition, iris recognition, or voiceprint recognition. The face recognition unit is further configured to automatically trigger face recognition again if the condition for performing face recognition again is met.
  • meeting the condition of face recognition again means that the number of times of face recognition failure is less than a preset threshold.
  • Giving a gesture adjustment prompt according to the first state includes: analyzing the cause of the face recognition failure according to the first state, finding a solution corresponding to the cause in a preset database, and giving a posture adjustment prompt according to the solution.
  • determining whether there is a posture adjustment according to the second state includes: determining whether a change in the second state from the first state is the same as the content of the posture adjustment prompt.
  • the gesture adjustment prompt includes any combination of the following prompt modes: text, picture, voice, video, light, or vibration.
  • the first state is a first distance between the device and the user's face when face recognition fails
  • the second state is the second distance between the device and the user's face after a gesture adjustment prompt is given.
  • The first state is a first tilt angle of the plane of the terminal relative to the horizontal plane when face recognition fails; the second state is a second tilt angle of the plane of the terminal relative to the horizontal plane after the gesture adjustment prompt is given.
  • In a fourth aspect, an embodiment of the present application provides a computer storage medium.
  • the computer storage medium stores instructions. When the instructions are run on the mobile terminal, the mobile terminal is caused to execute the method as in the first aspect.
  • In a fifth aspect, an embodiment of the present application provides a computer program product including instructions.
  • When the computer program product runs on a mobile terminal, the mobile terminal executes the method as in the first aspect.
  • FIG. 1 is a schematic diagram of face recognition using a mobile phone according to an embodiment of the present application
  • FIG. 2 is a schematic diagram of a hardware structure of a mobile phone according to an embodiment of the present application.
  • FIG. 3 is a flowchart of a method for triggering face recognition according to an embodiment of the present application.
  • FIG. 4 is a schematic diagram of a tilt angle of a mobile terminal according to an embodiment of the present application.
  • FIG. 5 is a flowchart of a method for unlocking a mobile terminal using face recognition according to an embodiment of the present application
  • FIG. 6 is a flowchart of a method for obtaining access to an application program using face recognition according to an embodiment of the present application
  • FIG. 7 is a flowchart of a method for obtaining certain data access authority using face recognition according to an embodiment of the present application
  • FIG. 8 is a flowchart of a method for unlocking a mobile terminal using face recognition according to an embodiment of the present application
  • FIG. 9 is a schematic diagram of unlocking a mobile terminal using face recognition according to an embodiment of the present application.
  • FIG. 10 is a flowchart of another method for unlocking a mobile terminal using face recognition according to an embodiment of the present application.
  • FIG. 11 is a schematic diagram of unlocking a mobile terminal using face recognition according to another embodiment of the present application.
  • FIG. 12 is a flowchart of a method for unlocking a mobile terminal using face recognition according to an embodiment of the present application
  • FIG. 13 is a schematic structural diagram of a device according to an embodiment of the present application.
  • FIG. 14 is a schematic structural diagram of another device according to an embodiment of the present application.
  • Face recognition is a kind of biometric recognition technology based on the facial feature information of a person.
  • The camera of the mobile terminal can collect pictures or videos containing the user's face and compare their features with those of the pre-stored face pictures or videos.
  • When the matching degree of the two is greater than a preset threshold, face recognition is successful, and the user can be given the corresponding operation permissions; for example, the mobile terminal can be unlocked, the operating system with corresponding permissions can be entered, or access to an application or to certain data can be granted. When the matching degree of the two is less than the preset threshold, face recognition fails, and the user cannot obtain the corresponding operation permissions; for example, unlocking fails, or access to an application or to certain data is denied.
  • face recognition can also be performed in conjunction with other authentication methods, such as password verification, gesture recognition, fingerprint recognition, iris recognition, voiceprint recognition, and so on.
  • face recognition technology can be combined with certain algorithms, such as extracting feature points, 3D modeling, local magnification, automatic adjustment of exposure, and infrared detection.
  • The mobile terminal in the embodiments of the present application may be a mobile phone, a tablet computer, a wearable device, a notebook computer, a personal digital assistant (PDA), an augmented reality (AR) / virtual reality (VR) device, an in-vehicle device, or any other form of mobile terminal.
  • Some embodiments of the present application use a mobile phone as an example to introduce a mobile terminal. It can be understood that these embodiments are also applicable to other mobile terminals.
  • FIG. 1 is a schematic diagram of face recognition using a mobile phone.
  • User 1 holds a mobile phone 200 for face recognition.
  • the mobile phone 200 includes a display 203 and a camera 204, where the camera 204 can be used to capture a face picture or video of the user 1 and the display 203 can display a collection interface.
  • the collection interface may be a shooting interface, and is used to display a face shooting effect of the user 1.
  • user 1 first triggers face recognition.
  • There are several ways to trigger face recognition. For example, the user can press the keys of the mobile phone 200, including the power key, the volume key, or other keys; touch the display 203 to light it up and trigger face recognition; pick up the mobile phone 200 so that its sensors detect the motion and trigger face recognition; or trigger face recognition by issuing a voice command through a voice assistant, and so on.
  • the mobile phone 200 can collect a face image of the user 1.
  • the front camera 204 can be used to shoot the face of the user 1.
  • the face image described in the embodiment of the present application may include a picture or video of the face.
  • the captured picture or video may be displayed on the display 203.
  • The mobile phone 200 may compare the collected image with a pre-stored face image to confirm whether user 1 passes face recognition and thereby obtains the corresponding operation authority.
  • the pre-stored face image may be stored in the memory of the mobile phone 200 in advance, or may be stored in a server or a database that can communicate with the mobile phone 200 in advance.
  • the “corresponding operation authority” described herein may be unlocking the mobile phone 200, or entering an operating system with corresponding authority, or obtaining access to certain applications, or obtaining access to certain data.
  • In some embodiments of the present application, unlocking the mobile phone 200 is used as an example of the result of face recognition. It can be understood that, in these embodiments, obtaining other corresponding operation rights may also be the result of face recognition.
  • “Face recognition passed” here can also be referred to as “face recognition success”, which means that the matching degree between the face image of user 1 collected by the mobile phone 200 and the pre-stored face image is greater than a preset threshold. The match needs only to be sufficient and not necessarily exact; for example, the preset threshold may be a feature point match of 80% between the two, or the preset threshold may be dynamically adjusted according to factors such as the location where user 1 is operating, the permission to be obtained, and the like.
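  • As an informal illustration (not part of the patent text), the pass/fail decision described above can be sketched as comparing a feature-point match score against a threshold that may vary with context; the Kotlin names below (matchScore, RecognitionContext, thresholdFor) and the example threshold values are hypothetical.

```kotlin
// Hypothetical sketch of the "matching degree vs. preset threshold" decision.
// matchScore: fraction of matched feature points between the captured face
// image and the pre-stored face image, in the range 0.0..1.0.
data class RecognitionContext(val sensitivePermission: Boolean)

// The threshold may be adjusted dynamically, e.g. stricter for sensitive permissions.
fun thresholdFor(ctx: RecognitionContext): Double =
    if (ctx.sensitivePermission) 0.9 else 0.8

fun faceRecognitionPassed(matchScore: Double, ctx: RecognitionContext): Boolean =
    matchScore >= thresholdFor(ctx)

fun main() {
    println(faceRecognitionPassed(0.85, RecognitionContext(sensitivePermission = false))) // true
    println(faceRecognitionPassed(0.85, RecognitionContext(sensitivePermission = true)))  // false
}
```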
  • FIG. 2 is a schematic diagram of the hardware structure of a mobile phone 200.
  • The mobile phone 200 may include a processor 201, a memory 202, a display 203, a camera 204, an I/O device 205, a sensor 206, a power source 207, a Bluetooth device 208, a positioning device 209, an audio circuit 210, a WiFi device 211, a radio frequency circuit 212, and so on. These components communicate through one or more communication buses or signal lines. It can be understood that the mobile phone 200 is only an example of a mobile device that can implement face recognition and does not constitute a limitation on the structure of the mobile phone 200.
  • The mobile phone 200 may have more or fewer components than those shown in FIG. 2, may combine two or more components, or may have different configurations or arrangements of these components.
  • the operating system running on the mobile phone 200 includes, but is not limited to DOS, Unix, Linux, or other operating systems.
  • the processor 201 includes a single processor or processing unit, multiple processors, multiple processing units, or one or more other suitably configured computing elements.
  • the processor 201 may be a microprocessor, a central processing unit (CPU), an application specific integrated circuit (ASIC), a digital signal processor (DSP), or a combination of such devices.
  • the processor 201 may integrate an application processor and a modem.
  • the application processor mainly processes an operating system, a user interface, and an application program, and the modem mainly processes wireless communications.
  • The processor 201 is the control center of the mobile phone 200. It uses various interfaces and lines to directly or indirectly connect the various parts of the mobile phone 200, runs or executes software programs or instruction sets stored in the memory 202, and calls data stored in the memory 202 to perform the various functions of the mobile phone 200 and process data, thereby monitoring the mobile phone 200 as a whole.
  • The memory 202 can store electronic data that can be used by the mobile phone 200, such as an operating system, applications and the data they generate, various documents such as text, pictures, audio, and video, device settings and user preferences, contact lists and communication records, memos and calendars, biometric data, data structures or databases, and so on.
  • the memory 202 may be configured as any type of memory such as random access memory, read-only memory, flash memory, removable memory, or other types of storage elements, or a combination of such devices.
  • The memory 202 may be used to store a pre-stored face image for comparison with the collected face image during face recognition.
  • the display 203 may be used to display information input by the user or information provided to the user and various interfaces of the mobile phone 200. Common display types are LCD (Liquid Crystal Display), OLED (Organic Light Emitting Diode), and the like.
  • the display 203 can also be integrated with a touch panel.
  • the touch panel can detect whether a contact has occurred, and the pressure value, movement speed, direction, and position information of the contact.
  • the detection methods of the touch panel include, but are not limited to, a capacitive type, a resistive type, an infrared type, and a surface acoustic wave type.
  • When the touch panel detects a touch operation on or near it, the operation is transmitted to the processor 201 to determine the type of the touch event.
  • the processor 201 then provides a corresponding visual output on the display 203 according to the type of the touch event.
  • The visual output includes text, graphics, icons, videos, and any combination thereof.
  • the camera 204 is used for taking pictures or videos.
  • the camera 204 may be divided into a front camera and a rear camera, and used in conjunction with other components such as a flash.
  • a front camera may be used to collect a face image of the user 1.
  • an RGB camera, an infrared camera, a ToF (Time of Flight) camera, and a structured light device may be used to perform image collection for face recognition.
  • the I / O device 205 that is, an input / output device, can receive data and instructions sent by a user or other device, and can also output data or instructions to the user or other device.
  • the I / O device 205 includes various buttons, interfaces, keyboards, touch input devices, touch pads, mice, and other components of the mobile phone 200.
  • In a broad sense, the I/O devices may also include the display 203, the camera 204, the audio circuit 210, and so on.
  • the mobile phone 200 may include one or more sensors 206, and the sensors 206 may be configured to detect any type of attributes including, but not limited to, images, pressure, light, touch, heat, magnetism, movement, relative motion, and so on.
  • the sensor 206 may be an image sensor, a thermometer, a hygrometer, a proximity sensor, an infrared sensor, an accelerometer, an angular velocity sensor, a gravity sensor, a gyroscope, a geomagnetic meter, a heart rate detector, and the like.
  • In some embodiments of the present application, a proximity sensor, a distance sensor, an infrared sensor, a gravity sensor, a gyroscope, or other types of sensors may be used to detect the distance, angle, or relative position between user 1 and the mobile phone 200.
  • the power source 207 can provide power to the mobile phone 200 and its components.
  • the power source 207 may be one or more rechargeable batteries, or non-rechargeable batteries, or an external power source connected to the mobile phone 200 in a wired / wireless manner.
  • the power supply 207 may further include related equipment such as a power management system, a fault detection system, and a power conversion system.
  • the Bluetooth device 208 is used to implement data exchange between the mobile phone 200 and other devices through a Bluetooth protocol. It can be understood that the mobile phone 200 may further include other short-range communication devices such as an NFC device.
  • the positioning device 209 can provide geographic location information for the mobile phone 200 and the installed applications.
  • the positioning device 209 may be a positioning system such as GPS, Beidou satellite navigation system, or GLONASS.
  • the positioning device 209 further includes an assisted global positioning system AGPS, assisted positioning based on a base station or a WiFi access point, and the like.
  • the audio circuit 210 may perform functions such as processing, input, and output of audio signals, and may include a speaker 210-1, a microphone 210-2, and other audio processing devices.
  • the WiFi device 211 is used to provide the mobile phone 200 with network access that complies with WiFi-related standard protocols.
  • the mobile phone 200 can access the WiFi access point through the WiFi device 211 to connect to the network.
  • The radio frequency (RF) circuit 212 can be used to receive and send information or to communicate during a call; it converts electrical signals into electromagnetic signals or electromagnetic signals into electrical signals, and communicates with communication networks and other communication equipment via electromagnetic signals.
  • The structure of the radio frequency circuit 212 includes, but is not limited to, an antenna system, a radio frequency transceiver, one or more amplifiers, a tuner, one or more oscillators, a digital signal processor, a codec chipset, a SIM (Subscriber Identity Module) card, and so on.
  • the radio frequency circuit 212 may communicate with a network and other devices through wireless communication, such as the Internet, an intranet, and / or a wireless network (such as a cellular telephone network, a wireless local area network, and / or a metropolitan area network).
  • Wireless communication can use any of a variety of communication standards, protocols, and technologies, including but not limited to the Global System for Mobile Communications, Enhanced Data GSM Environment, High-Speed Downlink Packet Access, High-Speed Uplink Packet Access, Wideband Code Division Multiple Access, Code Division Multiple Access, Time Division Multiple Access, Bluetooth, wireless fidelity (e.g., IEEE 802.11a, IEEE 802.11b, IEEE 802.11g, and/or IEEE 802.11n), Voice over Internet Protocol, Wi-MAX, email protocols (e.g., Internet Message Access Protocol (IMAP) and/or Post Office Protocol (POP)), instant messaging protocols (e.g., Extensible Messaging and Presence Protocol (XMPP), Session Initiation Protocol for Instant Messaging and Presence Leveraging Extensions (SIMPLE)), and other suitable communication protocols.
  • the mobile phone 200 may further include other components, and details are not described herein again.
  • the mobile phone 200 may not be able to collect a suitable face image, which may cause the failure of face recognition.
  • For example, the face of user 1 is too close to the mobile phone 200, resulting in an incomplete face image; or the face of user 1 is too far from the mobile phone 200, making the details of the face image unrecognizable; or the angle at which user 1 holds the mobile phone 200 is too tilted, resulting in distortion, deformation, or loss of the facial image; or the environment of user 1 is too dim or too bright, causing the exposure and contrast of the facial image to exceed the recognizable range.
  • Because of the limited battery capacity of the mobile phone 200 and in order to save power, the mobile phone 200 will not continuously perform face recognition again after a face recognition failure, and user 1 needs to trigger face recognition again, that is, repeat the triggering process described above. This causes inconvenience in operation, and after re-triggering, face recognition may still fail.
  • FIG. 3 is a method for triggering face recognition provided by an embodiment of the present application, which is used to determine whether the user has a posture adjustment after the face recognition fails, thereby determining whether to automatically trigger the face recognition. This method includes the following steps:
  • Optionally, step S300 may also be performed: triggering face recognition.
  • There are many ways to trigger face recognition. For example, the user can press the keys of the mobile phone 200, including the power key, the volume key, or other keys; touch the display 203 to light it up and trigger face recognition; pick up the mobile phone 200 so that the sensor detects the motion and triggers face recognition; or issue a face recognition voice command through a voice assistant to trigger face recognition.
  • S300 may be executed first.
  • triggering face recognition may be to enable a camera and other face recognition-related functions for face recognition.
  • After face recognition is triggered, the mobile terminal (such as the mobile phone 200) performs face recognition on the user, for example, collecting the user's face image and comparing it with the pre-stored face image, to confirm whether the user passes face recognition, that is, whether face recognition is successful.
  • When face recognition is successful, the user can obtain the corresponding permissions to operate the mobile terminal, such as unlocking the mobile terminal, entering the operating system with corresponding permissions, gaining access to certain applications, or gaining access to certain data.
  • When face recognition fails, the user cannot obtain the corresponding permissions to operate the mobile terminal.
  • the term "when” can be interpreted as "if” or “after” or “in response to a determination" or "in response to a detection”.
  • The “detection of the state of the mobile terminal when face recognition fails” can mean that the state of the mobile terminal is detected at the same time as the face recognition failure, or that the state of the mobile terminal is detected after the face recognition failure, for example, one second after face recognition fails.
  • The first state of the mobile terminal may be the state of the mobile terminal when face recognition fails. Detecting the first state of the mobile terminal may specifically mean detecting, by a sensor when face recognition fails, the tilt angle of the mobile terminal, its distance from the user's face, or the brightness of the surrounding environment. It can be understood that any suitable sensor can be used to detect the state of the mobile terminal, such as a proximity sensor, a distance sensor, a gravity sensor, a gyroscope, a light sensor, an infrared sensor, and so on.
  • the distance between the mobile terminal and the user's face described in the embodiments of the present application may be the distance between the front camera of the mobile terminal and the user's face, for example, the distance between the front camera of the mobile phone and the user's nose.
  • The tilt angle described in the embodiments of the present application may be the angle, less than or equal to 90 degrees, between the plane of the mobile terminal's display and the horizontal plane (or the ground) when the user uses the mobile terminal in an upright position (such as standing or sitting upright), as shown in FIG. 4. It can be seen that the smaller the tilt angle, the harder it is to collect an image suitable for face recognition; that is, if the angle at which the user holds the mobile terminal is too tilted, face recognition may fail. It can be understood that when the mobile terminal is a regular cuboid, as most mobile phones on the market are, the plane of the mobile terminal's display can also be understood as the plane of the mobile terminal itself.
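  • As an illustrative sketch (not from the patent), the tilt angle defined above can be estimated from a three-axis gravity reading such as the output of a gravity sensor; the function displayTiltDegrees and its parameters are hypothetical, and the convention assumes the device z-axis points out of the screen.

```kotlin
import kotlin.math.abs
import kotlin.math.acos
import kotlin.math.sqrt

// Hypothetical sketch: estimate the tilt angle (in degrees) of the display plane
// relative to the horizontal plane from a gravity vector (gx, gy, gz) expressed
// in the device coordinate frame (z-axis pointing out of the screen).
// Device lying flat on a table -> gravity along z -> tilt ~ 0 degrees.
// Device held upright facing the user -> gravity along y -> tilt ~ 90 degrees.
fun displayTiltDegrees(gx: Double, gy: Double, gz: Double): Double {
    val g = sqrt(gx * gx + gy * gy + gz * gz)
    require(g > 0.0) { "gravity vector must be non-zero" }
    // The angle between the screen normal (device z-axis) and the vertical equals
    // the angle between the display plane and the horizontal plane.
    val cosNormalToVertical = (abs(gz) / g).coerceIn(0.0, 1.0)
    return Math.toDegrees(acos(cosNormalToVertical))
}

fun main() {
    println(displayTiltDegrees(0.0, 0.0, 9.81))  // ~0.0  (flat on a table)
    println(displayTiltDegrees(0.0, 9.81, 0.0))  // ~90.0 (held vertically)
    println(displayTiltDegrees(0.0, 6.0, 6.0))   // ~45.0 (tilted back)
}
```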
  • After detecting the first state of the mobile terminal when face recognition fails, the mobile terminal gives a gesture adjustment prompt according to the first state.
  • The mobile terminal analyzes the cause of the face recognition failure according to the first state; for example, the first state may indicate that the angle at which the user holds the mobile terminal is too tilted, or that the user's face is too close to or too far from the mobile terminal, causing face recognition to fail.
  • After determining the cause of the face recognition failure, the mobile terminal can find a solution corresponding to the cause in a preset database and then give a corresponding posture adjustment prompt.
  • For example, a gesture adjustment prompt of "please bring the phone closer" may be given, or a prompt of "the phone is too far away from the face" may be given.
  • a gesture adjustment prompt of "please hold the phone vertically” may be given, or a gesture adjustment prompt of "the phone is too inclined” may be given.
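  • The cause-to-prompt lookup described above can be sketched as a small mapping from a diagnosed failure cause to a prompt string. This is illustrative only; the FailureCause values, the threshold numbers in diagnose, and the prompt wording are hypothetical, not taken from the patent.

```kotlin
// Hypothetical sketch of the "preset database" mapping a diagnosed failure
// cause to a posture adjustment prompt.
enum class FailureCause { TOO_CLOSE, TOO_FAR, TOO_TILTED, TOO_DARK }

// Example thresholds; real values would be tuned per device.
fun diagnose(distanceCm: Double, tiltDegrees: Double, luxLevel: Double): FailureCause? = when {
    distanceCm < 15.0 -> FailureCause.TOO_CLOSE
    distanceCm > 60.0 -> FailureCause.TOO_FAR
    tiltDegrees < 50.0 -> FailureCause.TOO_TILTED
    luxLevel < 5.0 -> FailureCause.TOO_DARK
    else -> null
}

val promptDatabase: Map<FailureCause, String> = mapOf(
    FailureCause.TOO_CLOSE to "Please move the phone farther away",
    FailureCause.TOO_FAR to "Please bring the phone closer",
    FailureCause.TOO_TILTED to "Please hold the phone vertically",
    FailureCause.TOO_DARK to "Please move to a brighter place",
)

fun main() {
    val cause = diagnose(distanceCm = 80.0, tiltDegrees = 70.0, luxLevel = 100.0)
    println(cause?.let { promptDatabase[it] })  // Please bring the phone closer
}
```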
  • the gesture adjustment prompt may be any form of text, picture, voice, video, etc., or a combination of these forms.
  • the content of the attitude adjustment prompt may be displayed on the display screen of the mobile terminal; or the content of the attitude adjustment prompt may be played by the speaker.
  • the attitude adjustment prompt may also be a prompt in any form of the mobile terminal through light display, vibration, or a combination of these forms.
  • the LED indicator of the mobile terminal emits a certain color of light, or lights up or flashes for a period of time, or the mobile terminal vibrates several times to represent a corresponding attitude adjustment prompt.
  • step S302 can also be omitted, that is, no attitude adjustment prompt is given, and step S303 is directly performed after performing step S301.
  • the step of giving a gesture adjustment prompt can also be omitted.
  • S303 Determine whether there is a posture adjustment according to the second state of the mobile terminal, and if there is a posture adjustment, the face recognition is automatically triggered.
  • After the posture adjustment prompt is given, the state of the mobile terminal, that is, the second state, may be detected again to determine whether there is a posture adjustment.
  • S303 can be divided into three steps: (1) detecting the second state of the mobile terminal; (2) determining whether there is a posture adjustment according to the second state of the mobile terminal; (3) if there is a posture adjustment, the face recognition is automatically triggered.
  • The state of the mobile terminal may be detected again by a sensor at the same time as, or some time after, the posture adjustment prompt is given, to determine whether there is a corresponding change from the first state and thereby whether there is a posture adjustment. From the user's perspective, if the user adjusts posture, the second state of the mobile terminal changes correspondingly from the first state.
  • For example, the first state of the mobile terminal is that the distance between the mobile terminal and the user's face is 30 centimeters, and face recognition fails because the face is too far away; the second state is that the distance between the mobile terminal and the user's face is 20 centimeters, that is, the face is closer. The second state thus has a corresponding change from the first state, so there is a posture adjustment.
  • It can also be determined whether the posture adjustment matches the content of the gesture adjustment prompt; if they match, face recognition is automatically triggered.
  • the content of the attitude adjustment prompt may be a corresponding solution obtained from the database when analyzing the cause of the face recognition failure in step S302.
  • For example, if the gesture adjustment prompt is "Please bring the phone closer", it can be determined according to the second state of the mobile terminal whether there is a posture adjustment that brings the phone closer; if so, face recognition is automatically triggered.
  • For another example, if the posture adjustment prompt is "please hold the phone vertically", it can likewise be determined from the second state whether the phone is now held more vertically.
  • "Automatically triggering face recognition" means that the user does not need to perform the triggering method in S300; instead, the posture adjustment is used as the condition for triggering face recognition, so that the mobile terminal performs face recognition again.
  • the automatic triggering of face recognition may be to automatically enable the front camera and other related functions of face recognition to perform face recognition again.
  • the state change of the mobile terminal is detected by the sensor, and it is determined whether there is a corresponding posture adjustment action, so as to determine whether to perform face recognition again.
  • This wake-up method for face recognition based on attitude adjustment not only saves power consumption of the mobile terminal, but also provides a simpler and more convenient wake-up method for face recognition. It also improves the success rate of face recognition through gesture adjustment prompts.
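  • To illustrate the FIG. 3 flow in code (an informal sketch, not the patent's implementation), the steps of detecting the first state, prompting, detecting the second state, and conditionally re-triggering recognition could be organized as below; all interfaces and names (TerminalState, StateSensor, Prompter, FaceRecognizer) and the threshold values are hypothetical.

```kotlin
// Hypothetical interfaces standing in for the mobile terminal's sensors,
// prompt output, and face recognition engine.
data class TerminalState(val distanceCm: Double, val tiltDegrees: Double)

interface StateSensor { fun read(): TerminalState }
interface Prompter { fun show(message: String) }
interface FaceRecognizer { fun recognize(): Boolean }  // true = recognition succeeded

// Sketch of the FIG. 3 flow: S300 trigger, S301 detect first state on failure,
// S302 give a posture adjustment prompt, S303 detect second state and
// automatically re-trigger recognition if a matching posture adjustment occurred.
fun recognizeWithAutoRetry(sensor: StateSensor, prompter: Prompter, recognizer: FaceRecognizer): Boolean {
    if (recognizer.recognize()) return true            // S300: initial attempt succeeded

    val first = sensor.read()                          // S301: first state on failure
    val prompt = when {                                // S302: prompt derived from the first state
        first.distanceCm > 60.0 -> "Please bring the phone closer"
        first.distanceCm < 15.0 -> "Please move the phone farther away"
        first.tiltDegrees < 50.0 -> "Please hold the phone vertically"
        else -> return false                           // no diagnosable cause; stop here
    }
    prompter.show(prompt)

    val second = sensor.read()                         // S303: second state after the prompt
    val adjusted = when (prompt) {                     // did the change match the prompt?
        "Please bring the phone closer" -> second.distanceCm < first.distanceCm
        "Please move the phone farther away" -> second.distanceCm > first.distanceCm
        else -> second.tiltDegrees > first.tiltDegrees
    }
    return if (adjusted) recognizer.recognize() else false  // auto re-trigger only on adjustment
}
```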
  • FIG. 5 is a method for unlocking a mobile terminal using face recognition according to an embodiment of the present application, including the following steps:
  • S501 Trigger face recognition.
  • Optionally, when face recognition is successful, the mobile terminal is unlocked.
  • S504 Determine whether there is a posture adjustment according to the second state of the mobile terminal, and if there is a posture adjustment, the face recognition is automatically triggered. Similar to S303, S504 can also be split into three steps: (1) detecting the second state of the mobile terminal; (2) determining whether there is an attitude adjustment according to the second state of the mobile terminal; (3) if there is an attitude adjustment, it will automatically Trigger face recognition.
  • FIG. 6 is a method for obtaining access right of an application program using face recognition provided in an embodiment of the present application. The steps include:
  • S604 Determine whether there is a posture adjustment according to the second state of the mobile terminal, and if there is a posture adjustment, the face recognition is automatically triggered. Similar to S303, S604 can also be split into three steps: (1) detecting the second state of the mobile terminal; (2) determining whether there is an attitude adjustment according to the second state of the mobile terminal; (3) if there is an attitude adjustment, it will automatically Trigger face recognition.
  • FIG. 7 is a method for obtaining certain data access authority using face recognition provided by an embodiment of the present application, including the following steps:
  • S701 Trigger face recognition. Optionally, when face recognition is successful, access to the data is obtained.
  • S703 Give a gesture adjustment prompt according to the first state of the mobile terminal.
  • S704 Determine whether there is a posture adjustment according to the second state of the mobile terminal, and if there is a posture adjustment, the face recognition is automatically triggered. Similar to S303, S704 can also be split into three steps: (1) detecting the second state of the mobile terminal; (2) determining whether there is an attitude adjustment according to the second state of the mobile terminal; (3) if there is an attitude adjustment, it will automatically Trigger face recognition.
  • Figure 8 takes unlocking a mobile terminal with face recognition as an example; a method for automatically triggering face recognition includes the following steps:
  • the reason for the face recognition failure may be that the distance between the mobile terminal and the user's face is too close or too far.
  • an attitude adjustment prompt is provided to adjust the distance between the mobile terminal and the user's face.
  • the first distance may be the distance between the mobile terminal and the user's face detected by the sensor when face recognition fails. When the distance is too close, an attitude adjustment prompt for increasing the distance can be given; when the distance is too far, an attitude adjustment prompt for decreasing the distance can be given.
  • S804 Determine whether there is a posture adjustment according to the detected second distance between the mobile terminal and the user's face, and if there is a posture adjustment, the face recognition is automatically triggered.
  • After the mobile terminal gives the gesture adjustment prompt, the sensor detects a second distance between the mobile terminal and the user's face and compares it with the first distance; if a corresponding change has occurred, it is determined that there is a posture adjustment. For example, when the posture adjustment prompt is to increase the distance, if the second distance is greater than the first distance, it is determined that there is a posture adjustment. Similar to S303, S804 can also be split into three steps: (1) detecting the second distance between the mobile terminal and the user's face; (2) determining whether there is a posture adjustment based on the second distance; (3) if there is a posture adjustment, automatically triggering face recognition.
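  • A minimal sketch of the distance comparison in S804 (illustrative only; the DistancePrompt enum and function name are hypothetical):

```kotlin
// Hypothetical sketch of S804: decide whether the user adjusted posture by
// checking that the second distance changed in the direction the prompt asked for.
enum class DistancePrompt { INCREASE_DISTANCE, DECREASE_DISTANCE }

fun distanceAdjusted(firstCm: Double, secondCm: Double, prompt: DistancePrompt): Boolean =
    when (prompt) {
        DistancePrompt.INCREASE_DISTANCE -> secondCm > firstCm
        DistancePrompt.DECREASE_DISTANCE -> secondCm < firstCm
    }

fun main() {
    // The phone was 10 cm from the face (too close); the prompt asked to increase the distance.
    println(distanceAdjusted(firstCm = 10.0, secondCm = 20.0, prompt = DistancePrompt.INCREASE_DISTANCE)) // true
    println(distanceAdjusted(firstCm = 10.0, secondCm = 8.0, prompt = DistancePrompt.INCREASE_DISTANCE))  // false
}
```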
  • FIG. 9 is a schematic diagram of unlocking a mobile terminal using face recognition.
  • User 1 triggers face recognition to unlock mobile phone 200. Assuming that mobile phone 200 is located at position A at the beginning, the distance between mobile phone 200 and the face of user 1 is the first distance. If the first distance is too close, for example, the first distance is 10 centimeters, the mobile phone 200 may not be able to collect a suitable face image, which may cause the face recognition unlocking failure.
  • The sensor of the mobile phone 200 can detect the first distance between the mobile phone 200 and the face of user 1. Based on this first distance, the mobile phone 200 can determine that the reason for the face recognition failure is that the mobile phone 200 is too close to the face of user 1, and thus give a gesture adjustment prompt to increase the distance. For example, the screen of the mobile phone 200 displays the prompt "Please move the phone farther away", or the speaker plays the prompt "Please move the phone farther away", and so on. Optionally, the step of giving a gesture adjustment prompt may also be omitted.
  • the sensor of the mobile phone 200 can detect the second distance between the mobile phone 200 and the face of the user 1.
  • the user 1 moves the mobile phone 200 away according to the gesture adjustment prompt.
  • At this point, the distance between the mobile phone 200 and the face of user 1 is the second distance. Assuming the second distance is 20 cm: since the second distance is greater than the first distance and matches the posture adjustment prompt to increase the distance, it is determined that there is a posture adjustment, and the mobile phone 200 may start the front camera to automatically trigger face recognition.
  • If the mobile phone 200 can collect a suitable face image at the second distance from the face of user 1, and the matching degree with the pre-stored face image is greater than the set threshold, face recognition is successful. When face recognition is successful, the mobile phone 200 can be unlocked.
  • FIG. 10 takes unlocking a mobile terminal with face recognition as an example and shows a method for automatically triggering face recognition when the user holds the mobile terminal at an excessively tilted angle, including the following steps:
  • the reason for the failure of face recognition may be that the tilt angle is too small, which makes it impossible to collect a suitable face image.
  • a posture adjustment prompt is provided to adjust the angle of the plane of the display of the mobile terminal with respect to the horizontal plane.
  • The first tilt angle may be the angle, less than or equal to 90 degrees, formed by the plane of the mobile terminal and the horizontal plane, as detected by the sensor when face recognition fails.
  • After the mobile terminal gives the gesture adjustment prompt, the sensor detects a second tilt angle formed by the plane of the mobile terminal's display relative to the horizontal plane and compares it with the first tilt angle; if a corresponding change has occurred, it is determined that there is a posture adjustment. For example, when the posture adjustment prompt is to hold the phone vertically (equivalent to increasing the tilt angle), if the second tilt angle is greater than the first tilt angle, it is determined that there is a posture adjustment.
  • S1004 can also be split into three steps: (1) detecting the second tilt angle formed by the plane of the mobile terminal's display relative to the horizontal plane; (2) based on the plane of the mobile terminal's display relative to the horizontal plane The second tilt angle determines whether there is a posture adjustment; (3) If there is a posture adjustment, face recognition is automatically triggered.
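  • A corresponding sketch for the tilt-angle check in S1004 (illustrative; the function name is hypothetical):

```kotlin
// Hypothetical sketch of S1004: the prompt asks the user to hold the phone more
// vertically, i.e. to increase the tilt angle of the display plane relative to
// the horizontal plane, so the adjustment check is a simple comparison.
fun tiltAdjusted(firstTiltDegrees: Double, secondTiltDegrees: Double): Boolean =
    secondTiltDegrees > firstTiltDegrees

fun main() {
    // FIG. 11 example values: first tilt angle 40 degrees, second tilt angle 80 degrees.
    println(tiltAdjusted(firstTiltDegrees = 40.0, secondTiltDegrees = 80.0)) // true
    println(tiltAdjusted(firstTiltDegrees = 40.0, secondTiltDegrees = 35.0)) // false
}
```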
  • FIG. 11 is a schematic diagram of unlocking a mobile terminal using face recognition.
  • User 1 triggers face recognition to unlock the mobile phone 200.
  • Assuming the mobile phone 200 is initially at position A, the angle of the plane of its display relative to the horizontal plane is the first tilt angle. If the mobile phone 200 is too tilted, that is, the first tilt angle is too small, for example 40 degrees, the mobile phone 200 may fail to collect a suitable face image, causing face recognition unlocking to fail.
  • The sensor of the mobile phone 200 can detect the first tilt angle formed by the plane of the mobile phone 200's display relative to the horizontal plane. According to this first tilt angle, the mobile phone 200 can determine that the reason for the face recognition failure is that the first tilt angle is too small, and thus give a posture adjustment prompt to increase the tilt angle. For example, the prompt "please hold the phone vertically" is displayed on the display of the mobile phone 200, or the prompt "please hold the phone vertically" is played by the speaker.
  • the step of giving a gesture adjustment prompt may also be omitted.
  • the sensor of the mobile phone 200 can detect a second inclination angle formed by the plane where the display of the mobile phone 200 is relative to the horizontal plane.
  • the user 1 adjusts the tilt angle of the mobile phone 200 according to the gesture adjustment prompt. If the mobile phone 200 is located at the position B, the angle formed by the plane of the mobile phone 200 relative to the horizontal plane is the second tilt angle. Assume that the second tilt angle is 80 degrees. Since the second tilt angle is greater than the first tilt angle, and it meets the attitude adjustment prompt for increasing the tilt angle, it is determined that there is a posture adjustment, and the mobile phone 200 may start the front camera to automatically trigger face recognition.
  • If a suitable face image can be collected at the second tilt angle of the plane of the mobile phone 200's display relative to the horizontal plane, and the matching degree after comparison with the pre-stored face image is greater than the set threshold, face recognition is successful.
  • When face recognition is successful, the mobile phone 200 can be unlocked.
  • In some embodiments, the first distance between the mobile terminal and the user's face and the first tilt angle formed by the plane of the mobile terminal's display relative to the horizontal plane can be detected at the same time, and a posture adjustment prompt for adjusting both the distance and the tilt angle can then be given.
  • When face recognition is successful, the mobile terminal is unlocked.
  • Face recognition can also be used in conjunction with other authentication methods, such as password verification, gesture recognition, fingerprint recognition, iris recognition, voiceprint recognition, and so on.
  • For example, when the number of face recognition failures reaches a certain threshold, for example 3 times, other authentication methods may be used to unlock the mobile terminal instead.
  • FIG. 12 is a schematic diagram of unlocking a mobile terminal using face recognition, including the following steps:
  • S1201 Trigger face recognition.
  • S1202 Determine whether the face recognition is successful. Optionally, when the face recognition is successful, the mobile terminal is unlocked and the process ends.
  • The condition for performing face recognition again is that the number of face recognition failures is less than a preset threshold. For example, if the preset threshold is 3, it is determined whether the number of face recognition failures is less than 3; if yes, S1204 is performed; if it is 3 or more, the condition for performing face recognition again is not met. If the condition is not met, the process ends, or another unlocking method is used, such as password verification.
  • S1204 If the condition for performing face recognition again is met, detect the first state of the mobile terminal. The first state may be a first distance between the mobile terminal and the user's face when face recognition fails, or a first tilt angle formed by the plane of the display of the mobile terminal relative to the horizontal plane, and so on.
  • The mobile terminal may detect its first state with any suitable sensor.
  • S1205 Give a posture adjustment prompt according to the first state of the mobile terminal.
  • S1206 Detect the second state of the mobile terminal.
  • S1207 Determine whether there is a posture adjustment according to the second state of the mobile terminal. If there is no posture adjustment, the process ends.
  • S1208 If there is a posture adjustment, face recognition is automatically triggered.
  • S1209 When face recognition succeeds, the mobile terminal is unlocked and the process ends; when face recognition still fails, the process returns to S1203 to determine again whether the condition for performing face recognition again is met.
  • Referring to FIG. 13, an embodiment of the present application provides a device including a camera 131, a processor 132, a memory 133, and a sensor 134.
  • the processor 132 instructs the camera 131 to collect an image of the user's face, and compares the image with the face image pre-stored in the memory 133.
  • The processor 132 determines the degree of matching between the collected image and the pre-stored image (see the threshold sketch after this list). When the degree of matching is greater than a preset threshold, it determines that face recognition succeeds and grants the user the corresponding operation authority; when the degree of matching is less than the preset threshold, it determines that face recognition fails and does not grant the user the corresponding operation authority.
  • When face recognition fails, the processor 132 instructs the sensor 134 to detect the first state of the device.
  • The first state may be the state of the device when face recognition fails; specifically, the tilt angle of the device, the distance from the user's face, or the like may be detected.
  • The processor 132 gives a posture adjustment prompt according to the first state.
  • The posture adjustment prompt can be output to the user through a display, a speaker, or other components.
  • After giving the posture adjustment prompt, the processor 132 instructs the sensor 134 to detect the second state of the device to determine whether there is a posture adjustment. If it is determined that there is a posture adjustment, the processor 132 triggers face recognition.
  • an embodiment of the present application provides an apparatus, including a processor, a memory, and one or more programs.
  • The one or more programs are stored in the memory and configured to be executed by the one or more processors.
  • The one or more programs include instructions for: detecting the first state of the device when face recognition fails; giving a posture adjustment prompt based on the first state; and determining whether there is a posture adjustment based on the second state, and if there is a posture adjustment, automatically triggering face recognition.
  • An embodiment of the present application provides a storage medium or a computer program product for storing computer software instructions, which are used to: detect a first state of a mobile terminal when face recognition fails; give a posture adjustment prompt according to the first state of the mobile terminal; and determine whether there is a posture adjustment according to a second state of the mobile terminal, and if there is a posture adjustment, automatically trigger face recognition.
  • Referring to FIG. 14, an embodiment of the present application provides a device including a face recognition unit 141, a processing unit 142, a prompting unit 143, and a state detection unit 144.
  • the face recognition unit 141 can collect an image of the user's face and compare it with a pre-stored face image.
  • The processing unit 142 determines the degree of matching between the collected image and the pre-stored image. When the degree of matching is greater than a preset threshold, it determines that face recognition succeeds and grants the user the corresponding operation authority; when the degree of matching is less than the preset threshold, it determines that face recognition fails and does not grant the user the corresponding operation authority.
  • When face recognition fails, the state detection unit 144 detects the first state of the device.
  • The first state may be the state of the device when face recognition fails; specifically, the tilt angle of the device, the distance from the user's face, or the like may be detected.
  • the prompting unit 143 gives a posture adjustment prompt according to the first state.
  • The posture adjustment prompt can be output to the user through a display, a speaker, or other components.
  • After giving the posture adjustment prompt, the state detection unit 144 detects the second state of the device to determine whether there is a posture adjustment. If it is determined that there is a posture adjustment, the face recognition unit 141 automatically triggers face recognition.
  • In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other manners.
  • The device embodiments described above are merely illustrative. The division into modules or units is only a division by logical function, and there may be other division manners in actual implementation; for example, multiple units or components may be combined or integrated into another device, or some features may be ignored or not implemented.
  • The displayed or discussed mutual coupling, direct coupling, or communication connection may be indirect coupling or communication connection through some interfaces, devices, or units, and may be in electrical, mechanical, or other forms.
  • The units described as separate components may or may not be physically separated, and the components displayed as units may be one physical unit or multiple physical units; that is, they may be located in one place or distributed to multiple different places. Some or all of the units may be selected according to actual needs to achieve the objective of the solution of this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each of the units may exist separately physically, or two or more units may be integrated into one unit.
  • The above integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
  • When the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a readable storage medium.
  • Based on such an understanding, the technical solutions of the embodiments of the present application essentially, or the part that contributes to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for causing a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to execute all or some of the steps of the methods described in the embodiments of the present application.
  • The foregoing storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
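
As an illustration of the tilt-angle comparison in the bullets above, the sketch below derives the angle between the display plane and the horizontal plane from a gravity-sensor reading and checks whether a second reading satisfies an "increase the tilt angle" prompt. It is a minimal sketch under stated assumptions: the device coordinate convention, the sample readings, and the helper names (tilt_angle_deg, posture_adjusted) are illustrative and are not prescribed by the embodiments.

```python
import math

def tilt_angle_deg(gravity_xyz):
    """Angle (0-90 degrees) between the display plane and the horizontal plane,
    computed from a gravity-sensor reading in device coordinates
    (x: right, y: up along the screen, z: out of the screen)."""
    gx, gy, gz = gravity_xyz
    g = math.sqrt(gx * gx + gy * gy + gz * gz)
    # Flat on a table: gravity lies on the z axis, angle is 0 degrees.
    # Held upright: gravity lies in the screen plane, angle is 90 degrees.
    return math.degrees(math.acos(min(1.0, abs(gz) / g)))

def posture_adjusted(first_angle, second_angle, prompt):
    """True if the change from the first state to the second state matches the prompt."""
    if prompt == "increase_tilt":
        return second_angle > first_angle
    if prompt == "decrease_tilt":
        return second_angle < first_angle
    return False

# Position A (recognition failed, about 40 degrees) and position B (about 80 degrees).
first = tilt_angle_deg((0.0, 6.30, 7.51))
second = tilt_angle_deg((0.0, 9.65, 1.70))
if posture_adjusted(first, second, "increase_tilt"):
    print("posture adjustment detected -> automatically trigger face recognition")
```

With these sample readings the first angle is roughly 40 degrees and the second roughly 80 degrees, matching the example of positions A and B, so a posture adjustment is detected and recognition would be retriggered.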
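
For the case where the distance and the tilt angle are detected together, the description mentions a pre-set database that maps failure causes to solutions. The sketch below shows one hypothetical way to organise that lookup; the limits (15 cm, 50 cm, 60 degrees), the cause names, and the prompt strings are assumptions for illustration, not values fixed by the embodiments.

```python
# Hypothetical limits for a usable capture; the embodiments leave these to the implementation.
MIN_DISTANCE_CM, MAX_DISTANCE_CM = 15.0, 50.0
MIN_TILT_DEG = 60.0

# Pre-set "cause -> solution" table used to build the posture adjustment prompt.
PROMPTS = {
    "too_close": "Please move the phone farther away",
    "too_far": "Please move the phone closer",
    "too_tilted": "Please hold the phone vertically",
}

def diagnose(first_distance_cm, first_tilt_deg):
    """Map the first state of the terminal to a list of likely failure causes."""
    causes = []
    if first_distance_cm < MIN_DISTANCE_CM:
        causes.append("too_close")
    elif first_distance_cm > MAX_DISTANCE_CM:
        causes.append("too_far")
    if first_tilt_deg < MIN_TILT_DEG:
        causes.append("too_tilted")
    return causes

def build_prompt(first_distance_cm, first_tilt_deg):
    """Combine the matching solutions into one posture adjustment prompt."""
    causes = diagnose(first_distance_cm, first_tilt_deg)
    return "; ".join(PROMPTS[c] for c in causes)

# A phone held 10 cm from the face at a 40-degree tilt gets a combined prompt.
print(build_prompt(first_distance_cm=10.0, first_tilt_deg=40.0))
```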
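
The retry loop of FIG. 12 (steps S1201 to S1209) can be summarised as follows. This is a hypothetical rendering of the flow, not the claimed implementation: the five callables stand in for the camera, the sensors, the display or speaker, and the fall-back authentication described above, and the threshold of 3 follows the example given in the text.

```python
MAX_FAILURES = 3  # preset threshold from the example in the text

def unlock_with_face_recognition(recognise, detect_state, give_prompt,
                                 wait_for_adjustment, fallback_auth):
    """Retry loop of FIG. 12; the five arguments are caller-supplied callables."""
    failures = 0
    while True:
        if recognise():                          # S1201 / S1208 with S1202 / S1209
            return True                          # unlock the mobile terminal
        failures += 1
        if failures >= MAX_FAILURES:             # S1203: retry condition not met
            return fallback_auth()               # e.g. password verification
        first_state = detect_state()             # S1204: first state of the terminal
        give_prompt(first_state)                 # S1205: posture adjustment prompt
        if not wait_for_adjustment(first_state): # S1206/S1207: no matching adjustment
            return False                         # the process ends without unlocking
        # Otherwise the posture adjustment automatically retriggers recognition.

# Example wiring with trivial stand-ins: recognition fails twice, then succeeds.
attempts = iter([False, False, True])
unlocked = unlock_with_face_recognition(
    recognise=lambda: next(attempts),
    detect_state=lambda: {"distance_cm": 10.0, "tilt_deg": 40.0},
    give_prompt=lambda state: print("prompt for state:", state),
    wait_for_adjustment=lambda state: True,
    fallback_auth=lambda: False)
print("unlocked:", unlocked)
```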
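
Finally, the success decision made by the processor 132 or the processing unit 142 is a comparison of the degree of matching against a preset threshold. The snippet below illustrates that decision using a cosine-similarity score and a threshold of 0.8, echoing the "80% of feature points" example in the description; both the similarity measure and the threshold value are assumptions, since the embodiments do not fix a particular matching algorithm.

```python
import math

MATCH_THRESHOLD = 0.8  # assumed value; cf. the "80% of feature points" example

def cosine_similarity(a, b):
    """Similarity between two feature vectors, in the range [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def face_recognition_succeeds(captured, stored, threshold=MATCH_THRESHOLD):
    """Grant the corresponding operation authority only above the preset threshold."""
    return cosine_similarity(captured, stored) > threshold

# Toy feature vectors standing in for the captured and pre-stored face images.
print(face_recognition_succeeds([0.90, 0.10, 0.40], [0.88, 0.15, 0.35]))  # True
```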

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Security & Cryptography (AREA)
  • Quality & Reliability (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Telephone Function (AREA)
  • Collating Specific Patterns (AREA)
  • Image Input (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Embodiments of this application provide a facial recognition method and apparatus. The method includes: when face recognition fails, detecting a first state of a mobile terminal; giving a posture adjustment prompt according to the first state of the mobile terminal; and determining, according to a second state of the mobile terminal, whether there is a posture adjustment, and if there is a posture adjustment, automatically triggering face recognition. This can reduce the power consumption of the mobile terminal, simplify operations, increase the face recognition success rate, and improve user experience.

Description

一种人脸识别的方法及装置 技术领域
本申请涉及终端技术领域,尤其涉及一种人脸识别的方法及装置。
背景技术
随着人脸识别技术的发展,人脸识别解锁已逐渐应用于各种终端设备中。例如,当用户使用移动终端时,如果人脸识别的结果满足预设的阈值,则会获得相应的操作权限,比如解锁移动设备、进入相应权限的操作系统,或者获得某个应用程序的访问权限等;如果人脸识别的结果不满足预设的阈值,则无法获得相应的操作权限,例如解锁失败、拒绝访问等。在人脸识别解锁的过程中,用户首先需要触发人脸识别的过程,常见的触发方式可以是点击电源键或其他按钮、拿起移动终端从而点亮屏幕,或者通过语音助手触发等。
在人脸识别解锁的过程中,由于用户的持机角度或者脸部距离远近等原因,可能造成摄像头无法采集到合适的人脸图像,从而导致人脸解锁失败。当人脸识别解锁失败后,需要用户再次进行验证;但由于现有的移动终端功耗等原因,在人脸识别失败后,不会进行持续地识别,需要用户主动地再次触发人脸识别进行解锁。再次触发人脸识别需要用户再次点击电源键或其他按钮、将移动终端放下后再次拿起,或者通过语音助手再次发出命令等。这些操作不够流畅,比较复杂;同时,再次人脸识别仍然可能失败,从而造成使用上的不便。
发明内容
有鉴于此,本申请实施例提供一种人脸识别的方法及装置,可以自动触发人脸识别解锁,同时根据移动终端的状态给出姿态调整提示,从而简化操作,提高人脸识别成功率,提升用户的使用体验。
第一方面,本申请实施例提供了一种人脸识别的方法,所述方法包括:触发人脸识别;当人脸识别失败时,检测移动终端的第一状态;根据所述第一状态给出姿态调整提示;检测所述移动终端的第二状态;根据所述第二状态确定是否有姿态调整,如果有所述姿态调整,则自动触发人脸识别。
在一种可能的实现方式中,触发人脸识别,包括:采集用户的脸部图像,与预存的脸部图像进行对比。触发的方法有多种,例如,可以点击移动终端的按键,包括电源键、音量键或者其他按键;也可以触摸显示器从而点亮显示器触发人脸识别;也可以拿起移动终端通过传感器检测从而触发人脸识别;也可以通过语音助手发出人脸识别的语音命令从而触发人脸识别等。
在另一种可能的实现方式中,脸部图像包括脸部图片或视频。
在另一种可能的实现方式中,预存的脸部图像存储在移动终端的存储器中,或者存储在可与移动终端进行通信的服务器中。
在另一种可能的实现方式中,在自动触发人脸识别之后,还包括:当人脸识别成功时,获取移动终端的操作权限。
在另一种可能的实现方式中,在自动触发人脸识别之后,还包括:当人脸识别成功时,获取移动终端的操作权限。
在另一种可能的实现方式中,获取移动终端的操作权限,包括以下任一项:解锁所述移动终端,获得所述移动终端上安装的应用程序的访问权限,或者获得所述移动终端上存储的数据的访问权限。
在另一种可能的实现方式中,在自动触发人脸识别之后,还包括:当人脸识别失败时,使用以下任一项鉴权方式进行验证:密码验证、手势识别、指纹识别、虹膜识别、声纹识别。
在另一种可能的实现方式中,在自动触发人脸识别之后,还包括:当人脸识别失败时,确定是否符合再次人脸识别的条件;如果符合再次人脸识别的条件,则再次自动触发人脸识别;如果不符合再次人脸识别的条件,则使用以下任一项鉴权方式进行验证:密码验证、手势识别、指纹识别、虹膜识别、声纹识别。
在另一种可能的实现方式中,符合再次人脸识别的条件指的是人脸识别失败的次数小于预设的阈值。
在另一种可能的实现方式中,根据第一状态给出姿态调整提示,包括:移动终端根据第一状态分析人脸识别失败的原因,并在预先设置的数据库中找到原因对应的解决方案,根据解决方案给出姿态调整提示。
在另一种可能的实现方式中,根据第二状态确定是否有姿态调整,包括:确定第二状态相对于第一状态的变化与姿态调整提示的内容是否相同。
在另一种可能的实现方式中,姿态调整提示包括以下提示方式的任意组合:文字、图片、语音、视频、灯光或者振动。
在另一种可能的实现方式中,第一状态为当人脸识别失败时移动终端与用户脸部的第一距离,第二状态为给出姿态调整提示后移动终端与用户脸部的第二距离。
在另一种可能的实现方式中,第一状态为当人脸识别失败时移动终端的显示器所在平面相对于水平面的第一倾斜角度,第二状态为给出姿态调整提示后移动终端的显示器所在平面相对于水平面的第二倾斜角度。
第二方面,本申请实施例提供了一种装置,包括摄像头、处理器、存储器、传感器,处理器用于:触发人脸识别,指令摄像头采集用户的脸部图像,并与预存在存储器中的脸部图像进行对比;当人脸识别失败时,指令所述传感器检测装置的第一状态;根据第一状态给出姿态调整提示;指令传感器检测装置的第二状态;根据第二状态确定是否有姿态调整,如果有姿态调整,则自动触发人脸识别。
在一种可能的实现方式中,脸部图像包括脸部图片或视频。
在另一种可能的实现方式中,在自动触发人脸识别之后,处理器还用于:当人脸识别成功时,获取装置的操作权限。
在另一种可能的实现方式中,获取装置的操作权限,包括以下任一项:解锁装置,获得装置上安装的应用程序的访问权限,或者获得装置上存储的数据的访问权限。
在另一种可能的实现方式中,在自动触发人脸识别之后,处理器还用于:当人脸识别失败时,使用以下任一项鉴权方式进行验证:密码验证、手势识别、指纹识别、虹膜识别、声纹识别。
在另一种可能的实现方式中,在自动触发人脸识别之后,处理器还用于:当人脸识别失败时,确定是否符合再次人脸识别的条件;如果符合再次人脸识别的条件,则再次自动触发人脸识别;如果不符合再次人脸识别的条件,则使用以下任一项鉴权方式进行验证:密码验证、手势识别、指纹识别、虹膜识别、声纹识别。
在另一种可能的实现方式中,符合再次人脸识别的条件指的是人脸识别失败的次数小于预设的阈值。
在另一种可能的实现方式中,根据第一状态给出姿态调整提示,包括:根据第一状态分析人脸识别失败的原因,并在预先设置的数据库中找到原因对应的解决方案,根据解决方案给出姿态调整提示。
在另一种可能的实现方式中,根据第二状态确定是否有姿态调整,包括:确定第二状态相对于第一状态的变化与姿态调整提示的内容是否相同。
在另一种可能的实现方式中,姿态调整提示包括以下提示方式的任意组合:文字、图片、语音、视频、灯光或者振动。
在另一种可能的实现方式中,第一状态为当人脸识别失败时装置与用户脸部的第一距离,第二状态为给出姿态调整提示后装置与用户脸部的第二距离。
在另一种可能的实现方式中,装置还包括显示器,第一状态为当人脸识别失败时显示器所在平面相对于水平面的第一倾斜角度,第二状态为给出姿态调整提示后显示器所在平面相对于水平面的第二倾斜角度。
第三方面,本申请实施例提供了一种装置,包括人脸识别单元、处理单元、提示单元、状态检测单元,其特征在于:处理单元,用于触发人脸识别;人脸识别单元,用于采集用户的脸部图像,并与预存的脸部图像进行对比;状态检测单元,用于当人脸识别失败时,检测装置的第一状态;提示单元,用于根据第一状态给出姿态调整提示;状态检测单元,还用于检测装置的第二状态;处理单元,还用于根据第二状态确定是否有姿态调整,如果有姿态调整,则自动触发人脸识别。
在一种可能的实现方式中,脸部图像包括脸部图片或视频。
在另一种可能的实现方式中,在自动触发人脸识别之后,处理单元还用于:当人脸识别成功时,获取装置的操作权限。
在另一种可能的实现方式中,获取装置的操作权限,包括以下任一项:解锁装置,获得装置上安装的应用程序的访问权限,或者获得装置上存储的数据的访问权限。
在另一种可能的实现方式中,在自动触发人脸识别之后,处理单元还用于:当人脸识别失败时,使用以下任一项鉴权方式进行验证:密码验证、手势识别、指纹识别、虹膜识别、声纹识别。
在另一种可能的实现方式中,在自动触发人脸识别之后,处理单元还用于:当人脸识别失败时,确定是否符合再次人脸识别的条件;如果不符合所述再次人脸识别的条件,则使用以下任一项鉴权方式进行验证:密码验证、手势识别、指纹识别、虹膜识别、声纹识别;人脸识别单元,用于如果符合所述再次人脸识别的条件,则再次自动触发人脸识别。
在另一种可能的实现方式中,符合再次人脸识别的条件指的是人脸识别失败的次数小于预设的阈值。
在另一种可能的实现方式中,根据第一状态给出姿态调整提示,包括:根据第一状态分析人脸识别失败的原因,并在预先设置的数据库中找到原因对应的解决方案,根据解决方案给出姿态调整提示。
在另一种可能的实现方式中,根据第二状态确定是否有姿态调整,包括:确定第二状态相对于第一状态的变化与姿态调整提示的内容是否相同。
在另一种可能的实现方式中,姿态调整提示包括以下提示方式的任意组合:文字、图片、语音、视频、灯光或者振动。
在另一种可能的实现方式中,第一状态为当人脸识别失败时装置与用户脸部的第一距离,第二状态为给出姿态调整提示后装置与用户脸部的第二距离。
在另一种可能的实现方式中,第一状态为当人脸识别失败时终端所在平面相对于水平面的第一倾斜角度,第二状态为给出姿态调整提示后终端所在平面相对于水平面的第二倾斜角度。
第四方面,本申请实施例提供了一种计算机存储介质,计算机存储介质中存储有指令,当指令在移动终端上运行时,使得移动终端执行如第一方面的方法。
第五方面,本申请实施例提供了一种包含指令的计算机程序产品,当计算机程序产品在移动终端上运行时,使得移动终端执行如第一方面的方法。
附图说明
图1为本申请实施例提供的一种使用手机进行人脸识别的示意图;
图2为本申请实施例提供的一种手机的硬件结构示意图;
图3为本申请实施例提供的一种触发人脸识别的方法流程图;
图4为本申请实施例提供的一种移动终端的倾斜角度的示意图;
图5为本申请实施例提供的一种使用人脸识别解锁移动终端的方法流程图;
图6为本申请实施例提供的一种使用人脸识别获得某应用程序访问权限的方法流程图;
图7为本申请实施例提供的一种使用人脸识别获得某些数据访问权限的方法流程图;
图8为本申请实施例提供的一种使用人脸识别解锁移动终端的方法流程图;
图9为本申请实施例提供的一种使用人脸识别解锁移动终端的示意图;
图10为本申请实施例提供的另一种使用人脸识别解锁移动终端的方法流程图;
图11为本申请实施例提供的另一种使用人脸识别解锁移动终端的示意图;
图12为本申请实施例提供的一种使用人脸识别解锁移动终端的方法流程图;
图13为本申请实施例提供的一种装置结构示意图;
图14为本申请实施例提供的另一种装置结构示意图。
具体实施方式
人脸识别是基于人的脸部特征信息进行身份识别的一种生物识别技术。移动终端的摄像头可以采集含有用户脸部的图片或视频,与预存的人脸图片或视频的特征进行比对。当两者的匹配度大于预设的阈值时,则人脸识别成功,进而可以赋予用户相应的操作权限,例如可以解锁移动终端,或者进入相应权限的操作系统,或者获得某个应用程序的访问权限,或者获得某些数据的访问权限等;当两者的匹配度小于预设的 阈值时,则人脸识别失败,用户无法获得相应的操作权限,例如解锁失败、拒绝访问某个应用程序或者某些数据等等。可以理解的是,人脸识别也可以和其他鉴权方式配合进行,比如密码验证、手势识别、指纹识别、虹膜识别、声纹识别等。在具体实现时,人脸识别技术可以结合某些算法,例如提取特征点、3D建模、局部放大、自动调整曝光度、红外侦测等。
可以理解的是,本申请实施例中的移动终端可以是手机、平板电脑、可穿戴设备、笔记本电脑、个人数字助理(PDA)、增强现实(AR)/虚拟现实(VR)设备、车载设备等任何形式的移动终端。本申请的一些实施例以手机为例介绍移动终端,可以理解的是,这些实施例也适用于其他移动终端。
图1是一种使用手机进行人脸识别的示意图,用户1手持手机200进行人脸识别。手机200包括显示器203和摄像头204,其中摄像头204开启后可以用来采集用户1的脸部图片或视频,显示器203可以显示采集界面。其中,采集界面可以是拍摄界面,用于显示用户1的脸部拍摄效果。
在使用人脸识别的过程中,用户1首先触发人脸识别,触发的方法有多种,例如,可以点击手机200的按键,包括电源键、音量键或者其他按键;也可以触摸显示器203从而点亮显示器203触发人脸识别;也可以拿起手机200通过传感器检测从而触发人脸识别;也可以通过语音助手发出人脸识别的语音命令来触发,等等。
触发人脸识别后,手机200可以采集用户1的脸部图像,具体的,可以用前置摄像头204对着用户1的脸部进行拍摄。可以理解的是,本申请实施例所述的脸部图像,可以包括脸部的图片或者视频。可选的,可以在显示器203上显示采集到的图片或者视频。在采集到用户1的脸部图像后,手机200可以用预存的人脸图像进行对比,以确认用户1是否通过人脸识别,从而获得相应的操作权限。可以理解的是,预存的人脸图像可以是预先存储在手机200的存储器中,也可以是预先存储在可与手机200进行通信的服务器或者数据库中。这里所述的“相应的操作权限”,可以是解锁手机200,或者进入相应权限的操作系统,或者获得某些应用程序的访问权限,或者获得某些数据的访问权限等。本申请的一些实施例以解锁手机200作为人脸识别通过的结果,可以理解的是,这些实施例中,获得其他相应的操作权限也可以作为人脸识别通过的结果。这里所述的“人脸识别通过”,也可以称作人脸识别成功,是指手机200采集到的用户1的脸部图像,与预存的人脸图像,两者的匹配度大于预设的阈值即可,不一定是完全匹配;例如,预设的阈值可以是两者80%的特征点匹配,或者根据用户1所处的操作地点、要获取的权限等因素动态调整预设的阈值。
图2是手机200的硬件结构示意图,该手机200可以包括处理器201,存储器202,显示器203,摄像头204,I/O设备205,传感器206,电源207,蓝牙装置208,定位装置209,音频电路210,WiFi装置211,射频电路212等。这些部件通过一个或多个通信总线或信号线来通信。可以理解的是,手机200只是可以实现人脸识别的移动装置的一个示例,并不构成对手机200结构的限定;手机200可以具有比图2所示出的更多或更少的部件,可以组合两个或更多个部件,或者可具有这些部件的不同配置或布置。手机200上运行的操作系统包括但不限于
DOS、Unix、Linux或者其他操作系统。
处理器201包括单个处理器或处理单元、多个处理器、多个处理单元或者一个或多个其他适当配置的计算元件。例如,处理器201可以是微处理器、中央处理单元(CPU)、专用集成电路(ASIC)、数字信号处理器(DSP)或者这样的设备的组合。可选的,处理器201可集成应用处理器和调制解调器,其中,应用处理器主要处理操作系统、用户界面和应用程序等,调制解调器主要处理无线通信。处理器201是手机200的控制中心,利用各种接口和线路直接或间接地连接手机200的各个部分,通过运行或执行存储在存储器202内的软件程序或指令集,以及调用存储在存储器202内的数据,执行手机200的各种功能和处理数据,从而对手机200进行整体监控。
存储器202可以存储可被手机200使用的电子数据,例如操作系统、应用程序及其生成的数据、文字图片音频视频等各种文档、设备设定和用户偏好、联系人列表和通信记录、备忘录和日程表、生物测量数据、数据结构或数据库等等。存储器202可被配置为任何类型的存储器例如随机访问存储器、只读存储器、闪存、可移除存储器或者其他类型的存储元件,或者这样的设备的组合。在本申请的一些实施例中,存储器200可以用来存储预设的人脸图像,以供人脸识别时和采集到的人脸图像比对。
显示器203可用于显示由用户输入的信息或提供给用户的信息以及手机200的各种界面。常见的显示器类型有LCD(液晶显示器)、OLED(有机发光二极管)等。可选的,显示器203还可以和触控面板集成使用,触控面板可以检测到是否发生了接触,以及接触的压力值、移动速度和方向、位置信息等。触控面板的检测方式包括但不限于电容式、电阻式、红外线式、表面声波式等。当触控面板检测到在其上或附近的触摸操作后,传送给处理器201以确定触摸事件的类型,随后处理器201根据触摸事件的类型在显示器203上提供相应的视觉输出,视觉输出包括文本、图形、图标、视频及其任意组合。
摄像头204用于拍摄图片或者视频。可选的,摄像头204可以分为前置摄像头和后置摄像头,并配合闪光灯等其他部件使用。在本申请的一些实施例中,可以用前置摄像头采集用户1的脸部图像。在本申请的一些实施例中,可以用RGB摄像头、红外摄像头、ToF(Time of Flight)摄像头、结构光器件来进行人脸识别的图像采集。
I/O设备205,即输入/输出设备,可以接收用户或其他设备发送的数据及指令,也可以输出数据或指令给用户或其他设备。I/O设备205包括手机200的各种按钮、接口、键盘、触摸输入装置、触控板、鼠标等部件;广义的I/O设备也可以包括图2中所示的显示器203、摄像头204、音频电路210等。
手机200可以包含一个或多个传感器206,传感器206可以被配置为检测任何类型的属性,包括但不限于图像、压力、光、触摸、热、磁、移动、相对运动,等等。例如,传感器206可以是图像传感器、温度计、湿度计、接近传感器、红外线传感器、加速度计、角速度传感器、重力传感器、陀螺仪、地磁计、心率探测器等等。在本申请的一些实施例中,可以用接近传感器、距离传感器、红外线传感器、重力传感器、陀螺仪或者其他类型的传感器检测用户1与手机200之间的距离、角度或相对位置。
电源207可以向手机200及其组成部件提供电量。电源207可以是一个或多个可充电电池、或者不可充电电池、或者通过有线/无线方式连接到手机200的外部电源。可选的,电源207还可以包括电源管理系统、故障检测系统、功率转换系统等相关设 备。
蓝牙装置208用于实现手机200和其他设备之间通过蓝牙协议进行数据交换。可以理解的是,手机200还可以包括其他短距通信装置例如NFC装置等。
定位装置209可以为手机200及所安装的应用程序提供地理位置信息。定位装置209可以是GPS、北斗卫星导航系统、GLONASS等定位系统。可选的,定位装置209还包括辅助全球定位系统AGPS,基于基站或者WiFi接入点进行辅助定位等。
音频电路210可以执行音频信号的处理、输入、输出等功能,可以包括扬声器210-1,麦克风210-2及其他音频处理装置。
WiFi装置211用于为手机200提供遵循WiFi相关标准协议的网络接入,例如手机200可以通过WiFi装置211接入到WiFi接入点从而连接到网络。
射频电路(RF,Radio Freqency)212可用于收发信息或通话过程中信号的接收和发送,将电信号转换为电磁信号或将电磁信号转换为电信号,并且经由电磁信号与通信网络及其他通信设备通信。射频电路212的结构包括但不限于:天线系统、射频收发器、一个或多个放大器、调谐器、一个或多个振荡器、数字信号处理器、编解码芯片组、SIM(Subscriber Identity Module)卡等等。射频电路212可通过无线通信与网络以及其他设备通信,网络诸如是互联网、内联网和/或无线网络(诸如蜂窝电话网络、无线局域网和/或城域网)。无线通信可使用多种通信标准、协议和技术中的任何类型,包括但不限于全球移动通信系统、增强数据GSM环境、高速下行链路分组接入、高速上行链路分组接入、宽带码分多址、码分多址、时分多址、蓝牙、无线保真(例如,IEEE 802.11a、IEEE 802.11b、IEEE 802.11g和/或IEEE 802.11n)、因特网语音协议、Wi-MAX、电子邮件协议(例如,因特网消息访问协议(IMAP)和/或邮局协议(POP))、即时消息(例如,可扩展消息处理现场协议(XMPP)、用于即时消息和现场利用扩展的会话发起协议(SIMPLE)、即时消息和到场服务(IMPS))、和/或短消息服务(SMS)、或者其他任何适当的通信协议,包括在本申请提交日还未开发出的通信协议。
尽管图2未示出,手机200还可以包括其他部件,在此不再赘述。
在使用手机200进行人脸识别的过程中,由于用户1拿着手机200的角度、距离的不同,可能导致手机200无法采集到合适的脸部图像,从而导致人脸识别失败。例如,用户1的脸部距离手机200过近,导致脸部图像不完整;或者用户1的脸部距离手机200过远,导致脸部图像的细节无法识别;或者用户1拿着手机200的角度过于倾斜,导致脸部图像的失真、变形或者缺失;或者用户1所处的环境过于昏暗或者过于明亮,导致脸部图像的曝光度、对比度超出可识别的范围等。
由于手机200的电量有限,出于节省功耗的考虑,手机200在人脸识别失败以后,不会持续性地再次进行人脸识别,需要用户1重新触发人脸识别,即重复上文所述触发过程。这将造成操作上的不便,而且重新触发人脸识别后,仍然有可能识别失败。
图3是本申请实施例提供的一种触发人脸识别的方法,用于在人脸识别失败后,确定用户是否有姿态调整,从而确定是否自动触发人脸识别。此方法包括以下步骤:
S301,当人脸识别失败时,检测移动终端的第一状态。
可选的,在步骤S301之前,还可以执行步骤S300——触发人脸识别。触发的方法有多种,例如,可以点击手机200的按键,包括电源键、音量键或者其他按键;也 可以触摸显示器203从而点亮显示器203触发人脸识别;也可以拿起手机200通过传感器检测从而触发人脸识别;也可以通过语音助手发出人脸识别的语音命令从而触发人脸识别等。可以理解的是,以下各实施例的方法在执行之前,都可以先执行S300.可选的,触发人脸识别可以是开启摄像头和其他人脸识别相关的功能,以进行人脸识别。
在触发人脸识别后,移动终端(例如手机200)会对用户进行人脸识别,如采集用户脸部图像、与预存的人脸图像进行对比等,确认用户是否通过人脸识别,即人脸识别是否成功。当人脸识别成功时,则用户可以获得操作移动终端的相应权限,例如解锁移动终端、进入相应权限的操作系统、获得某些应用程序的访问权限、或者获得某些数据的访问权限等。当人脸识别失败时,则用户无法获得操作移动终端的相应权限。术语“当...时”可以被解释为“如果”或“在...后”或“响应于确定”或“响应于检测到”。可以理解的是,这里所说的“当人脸识别失败时,检测移动终端的状态”,可以是人脸识别失败的同时检测移动终端的状态,也可以是在人脸识别失败后再检测移动终端的状态(例如人脸识别失败后隔1秒再检测移动终端的状态)。
移动终端的第一状态可以是人脸识别失败时移动终端的状态。检测移动终端的第一状态,具体的,可以是通过传感器检测人脸识别失败时移动终端的倾斜角度、与用户脸部的距离、或者移动终端周围环境的明暗等状态。可以理解的是,可以使用任何合适的传感器来检测移动终端的状态,例如接近传感器、距离传感器、重力传感器、陀螺仪、光传感器、红外线传感器等等。
值得注意的是,本申请实施例所述的移动终端与用户脸部的距离,可以是移动终端前置摄像头距离用户脸部的距离,例如可以是手机前置摄像头距离用户鼻尖的距离。本申请实施例所述的倾斜角度可以是用户在直立使用移动终端时(例如站直或者坐直),移动终端的显示器所在平面相对于水平面(或者地面)所成的夹角中小于等于90度的那个角(如图4所示),可以看出,所述的倾斜角度越小,则人脸识别采集图像时越困难,也就是说用户手持移动终端的角度过于倾斜,可能导致人脸识别失败。可以理解的是,当移动终端外形是规则的长方体时,比如市场上的大部分手机,移动终端的显示器所在平面也可以理解为移动终端所在平面。
S302,根据移动终端的第一状态给出姿态调整提示。
在检测到人脸识别失败时移动终端的第一状态后,移动终端会根据此第一状态给出姿态调整提示。可选的,移动终端会根据第一状态分析人脸识别失败的原因,例如第一状态是用户手持移动终端的角度过于倾斜以至于人脸识别失败,或者第一状态是用户脸部距离移动终端过近或过远以至于人脸识别失败,等等。可选的,得知人脸识别失败的原因后,移动终端可以在预先设置的数据库中找到此原因对应的解决方案,从而给出相应的姿态调整提示。例如,当用户脸部距离移动终端过远以至于人脸识别失败时,可以给出“请把手机拿近”的姿态调整提示,或者可以给出“手机离人脸太远了”的姿态调整提示。又例如,当用户手持移动终端的角度过于倾斜以至于人脸识别失败时,可以给出“请竖直拿手机”的姿态调整提示,或者可以给出“手机过于倾斜”的姿态调整提示。
可以理解的是,姿态调整提示可以是文字、图片、语音、视频等任何形式的提示,或者是这些形式的组合。例如,移动终端的显示器屏幕上可以显示姿态调整提示的内 容;或者扬声器播放姿态调整提示的内容。或者,姿态调整提示也可以是移动终端通过灯光显示、振动等任何形式的提示,或者是这些形式的组合。例如,移动终端的LED指示灯发出某种颜色的光、或者点亮或闪烁一段时间,或者移动终端振动若干下以代表相应的姿态调整提示。
可选的,步骤S302也可以省略,即不给出姿态调整提示,在执行步骤S301后直接执行步骤S303.类似的,在以下各实施例中,也可以省略给出姿态调整提示的步骤。
S303,根据移动终端的第二状态确定是否有姿态调整,如果有姿态调整,则自动触发人脸识别。
在给出姿态调整提示后,可以再次检测移动终端的状态,即第二状态,以确定是否有姿态调整。S303可以拆分为三步:(1)检测移动终端的第二状态;(2)根据移动终端的第二状态确定是否有姿态调整;(3)如果有姿态调整,则自动触发人脸识别。可选的,可以在给出姿态调整之后的同时或者间隔一段时间内,使用传感器再次检测移动终端的状态,以确定相对于第一状态是否有相应的变化,从而确定是否有姿态调整。从用户的角度来说,用户如果有姿态调整,则意味着移动终端的第二状态相对于第一状态有了相应的变化。例如,如果移动终端的第一状态是移动终端与用户脸部的距离为30厘米,因为过远而导致人脸识别失败,第二状态是移动终端与用户脸部的距离为20厘米,由于距离近了,即第二状态相对于第一状态有了相应的变化,则有姿态调整。
可选的,如果有姿态调整,还可以确定姿态调整与姿态调整提示的内容是否相同,如果相同时再自动触发人脸识别。姿态调整提示的内容,可以是S302步骤中分析人脸识别失败的原因时从数据库得到的相应的解决方案。例如,姿态调整提示是“请把手机拿近”,可以根据移动终端的第二状态确定是否有把手机拿近的姿态调整;如有,自动触发人脸识别。又例如,姿态调整提示是“请竖直拿手机”,可以根据移动终端的第二状态确定是否有把手机竖直拿着以使得手机显示器所在平面垂直于水平面,或者显示器所在平面与水平面的夹角达到可识别的范围的姿态调整;如有,则自动触发人脸识别。
自动触发人脸识别,即无需用户执行S300中触发人脸识别的方法,而是把有姿态调整作为触发人脸识别的条件,以使得移动终端再次执行人脸识别。可选的,自动触发人脸识别可以是自动开启前置摄像头和其他人脸识别的相关功能,以再次进行人脸识别。
以上实施例通过传感器检测移动终端的状态变化,确定是否有相应的姿态调整动作,从而决定是否再次进行人脸识别。这种基于姿态调整的人脸识别唤醒方法,不仅节约了移动终端的功耗,提供了更为简单、方便的人脸识别唤醒方式,还通过姿态调整提示提高了人脸识别的成功率。
图5是本申请实施例提供的一种使用人脸识别解锁移动终端的方法,包括以下步骤:
S501,触发人脸识别。可选的,当人脸识别成功时,解锁移动终端。
S502,当人脸识别失败时,检测移动终端的第一状态。
S503,根据移动终端的第一状态给出姿态调整提示。
S504,根据移动终端的第二状态确定是否有姿态调整,如果有姿态调整,则自动 触发人脸识别。类似于S303,S504也可以拆分为三步:(1)检测移动终端的第二状态;(2)根据移动终端的第二状态确定是否有姿态调整;(3)如果有姿态调整,则自动触发人脸识别。
S505,当人脸识别成功时,解锁移动终端。
本实施例有些步骤和上文实施例有类似之处,不再赘述。
手机200中安装的某些应用程序,进入应用程序时需要验证用户身份,以获得该应用程序的访问权限。图6是本申请实施例提供的一种使用人脸识别获得某应用程序访问权限的方法,步骤包括:
S601,触发人脸识别。可选的,当人脸识别成功时,获得该应用程序的访问权限。
S602,当人脸识别失败时,检测移动终端的第一状态。
S603,根据移动终端的第一状态给出姿态调整提示。
S604,根据移动终端的第二状态确定是否有姿态调整,如果有姿态调整,则自动触发人脸识别。类似于S303,S604也可以拆分为三步:(1)检测移动终端的第二状态;(2)根据移动终端的第二状态确定是否有姿态调整;(3)如果有姿态调整,则自动触发人脸识别。
S605,当人脸识别成功时,获得该应用程序的访问权限。
手机200中存储的某些数据,访问时需要验证用户身份,例如隐私照片、个人备忘录等。图7是本申请实施例提供的一种使用人脸识别获得某些数据访问权限的方法,包括以下步骤:
S701,触发人脸识别。可选的,当人脸识别成功时,获得该数据的访问权限。
S702,当人脸识别失败时,检测移动终端的第一状态。
S703,根据移动终端的第一状态给出姿态调整提示。
S704,根据移动终端的第二状态确定是否有姿态调整,如果有姿态调整,则自动触发人脸识别。类似于S303,S704也可以拆分为三步:(1)检测移动终端的第二状态;(2)根据移动终端的第二状态确定是否有姿态调整;(3)如果有姿态调整,则自动触发人脸识别。
S705,当人脸识别成功时,获得该数据的访问权限。
图8是以人脸识别解锁移动终端为例,当移动终端距离用户脸部过近或者过远时,自动触发人脸识别的方法,包括以下步骤:
S801,触发人脸识别。可选的,当人脸识别成功时,解锁移动终端。
S802,当人脸识别失败时,检测移动终端和用户脸部的第一距离。
人脸识别失败的原因可能是移动终端和用户脸部的距离过近或者过远。
S803,根据检测到的移动终端和用户脸部的第一距离,给出调整移动终端和用户脸部的距离的姿态调整提示。
第一距离可以是人脸识别失败时传感器检测到的移动终端和用户脸部的距离。当距离过近时,可以给出增大距离的姿态调整提示;当距离过远时,可以给出减小距离的姿态调整提示。
S804,根据检测到的移动终端和用户脸部的第二距离,确定是否有姿态调整,如果有姿态调整,则自动触发人脸识别。
移动终端给出姿态调整提示之后,传感器检测移动终端和用户脸部的第二距离,与第一距离对比,如果发生了相应的变化,则确定有姿态调整。例如,当姿态调整提示是增大距离时,如果第二距离大于第一距离,则确定有姿态调整。类似于S303,S804也可以拆分为三步:(1)检测移动终端和用户脸部的第二距离;(2)根据移动终端和用户脸部的第二距离确定是否有姿态调整;(3)如果有姿态调整,则自动触发人脸识别。
S805,当人脸识别成功时,解锁移动终端。
图9是一种使用人脸识别解锁移动终端的示意图,用户1触发人脸识别解锁手机200,假设开始时手机200位于A位置,此时手机200距离用户1脸部的距离为第一距离。如果第一距离过近,例如第一距离为10厘米,则可能造成手机200无法采集到合适的脸部图像,导致人脸识别解锁失败。
当人脸识别失败时,手机200的传感器可以检测手机200和用户1脸部的第一距离。根据此第一距离,手机200可以确定出人脸识别失败的原因是手机200距离用户1脸部的距离过近,从而给出增大距离的姿态调整提示,例如手机200显示器屏幕上显示“请把手机拿远些”的提示,或者用扬声器播放“请把手机拿远些”的提示等。可选的,给出姿态调整提示这一步骤也可以省略。
然后,手机200的传感器可以检测手机200和用户1脸部的第二距离。可选的,用户1根据姿态调整提示拿远了手机200,假如此时的手机200位于B位置,手机200距离用户1脸部的距离为第二距离。假设第二距离为20厘米,由于第二距离大于第一距离,符合增大距离的姿态调整提示,则确定有姿态调整,手机200可以启动前置摄像头自动触发人脸识别。如果手机200在距离用户1脸部第二距离时可以采集到合适的人脸图像,且与预存的人脸图像对比后匹配度大于设定的阈值,则人脸识别成功。当人脸识别成功时,就可以解锁手机200。
图10是以人脸识别解锁移动终端为例,用户手持移动终端的角度过于倾斜,自动触发人脸识别的方法,包括以下步骤:
S1001,触发人脸识别。可选的,当人脸识别成功时,解锁移动终端。
S1002,当人脸识别失败时,检测移动终端的显示器所在平面相对于水平面所成的第一倾斜角度。
人脸识别失败的原因可能是倾斜角度过小,导致无法采集到合适的人脸图像。
S1003,根据检测到的移动终端的显示器所在平面相对于水平面所成的第一倾斜角度,给出调整移动终端的显示器所在平面相对于水平面所成角度的姿态调整提示。
第一倾斜角度可以是人脸识别失败时传感器检测到的移动终端的显示器所在平面相对于水平面所成的夹角中小于等于90度的那个角。
S1004,根据检测到的移动终端的显示器所在平面相对于水平面所成的第二倾斜角度,确定是否有姿态调整,如果有姿态调整,则自动触发人脸识别。
移动终端给出姿态调整提示之后,传感器检测移动终端的显示器所在平面相对于水平面所成的第二倾斜角度,与第一倾斜角度对比,如果发生了相应的变化,则确定有姿态调整。例如,当姿态调整提示是竖直拿手机(相当于增大倾斜角度)时,如果第二倾斜角度大于第一倾斜角度,则确定有姿态调整。
类似于S303,S1004也可以拆分为三步:(1)检测移动终端的显示器所在平面相对 于水平面所成的第二倾斜角度;(2)根据移动终端的显示器所在平面相对于水平面所成的第二倾斜角度确定是否有姿态调整;(3)如果有姿态调整,则自动触发人脸识别。
S1005,当人脸识别成功时,解锁移动终端。
图11是一种使用人脸识别解锁移动终端的示意图,用户1触发人脸识别解锁手机200,假设开始时手机200位于A位置,此时手机200的显示器所在平面相对于水平面所成角度为第一倾斜角度。如果手机200过于倾斜,即第一倾斜角度过小,例如第一倾斜为40度,则可能造成手机200无法采集到合适的脸部图像,导致人脸识别解锁失败。
当人脸识别失败时,手机200的传感器可以检测手机200的显示器所在平面相对于水平面所成的第一倾斜角度。根据此第一倾斜角度,手机200可以确定出人脸识别失败的原因是手机200的显示器所在平面相对于水平面所成的第一倾斜角度过小,从而给出增大倾斜角度的姿态调整提示,例如手机200显示器屏幕上显示“请竖直拿手机”的提示,或者用扬声器播放“请竖直拿手机”的提示等。可选的,给出姿态调整提示这一步骤也可以省略。
然后,手机200的传感器可以检测手机200的显示器所在平面相对于水平面所成的第二倾斜角度。可选的,用户1根据姿态调整提示调整了手机200的倾斜角度,假如此时的手机200位于B位置,手机200的显示器所在平面相对于水平面所成的角度为第二倾斜角度。假设第二倾斜角度为80度,由于第二倾斜角度大于第一倾斜角度,符合增大倾斜角度的姿态调整提示,则确定有姿态调整,手机200可以启动前置摄像头自动触发人脸识别。如果在手机200的显示器所在平面相对于水平面所成的第二倾斜角度时可以采集到合适的人脸图像,且与预存的人脸图像对比后匹配度大于设定的阈值,则人脸识别成功。当人脸识别成功时,就可以解锁手机200。
可以理解的是,当人脸识别失败时,也可以同时检测移动终端和用户脸部的第一距离,以及检测移动终端的显示器所在平面相对于水平面所成的第一倾斜角度,然后给出同时调整距离和倾斜角度的姿态调整提示。根据检测到的移动终端和用户脸部的第二距离,以及检测移动终端的显示器所在平面相对于水平面所成的第二倾斜角度,确定是否有姿态调整,如果有姿态调整,则自动触发人脸识别。当人脸识别成功时,解锁移动终端。
可以理解的是,在以上的各实施例中,人脸识别可以和其他鉴权方式配合使用,比如密码验证、手势识别、指纹识别、虹膜识别、声纹识别等。例如,当使用人脸识别来解锁移动终端的失败次数达到一定阈值时(例如3次),可以改用其他鉴权方式来解锁移动终端。
图12是一种使用人脸识别解锁移动终端的示意图,包括以下步骤:
S1201,触发人脸识别。
S1202,确定人脸识别是否成功。可选的,当人脸识别成功时,解锁移动终端,流程结束。
S1203,当人脸识别失败时,确定是否符合再次人脸识别的条件。可选的,所述符合再次人脸识别的条件为:人脸识别失败次数小于预设的阈值。例如,预设的阈值为3次,则确定人脸识别的次数是否小于3次,如果是,执行S1204;如果大于等于3 次,则不符合再次人脸识别的条件。如果不符合再次人脸识别的条件,则流程结束,或者换成其他的解锁方式,比如密码验证等。
S1204,如果符合再次人脸识别的条件,则检测移动终端的第一状态。第一状态可以是人脸识别失败时移动终端和用户脸部的第一距离,或者移动终端的显示器所在平面相对于水平面所成的第一倾斜角度,等等。移动终端可以用任何合适的传感器检测移动终端的第一状态。
S1205,根据移动终端的第一状态给出姿态调整提示。
S1206,检测移动终端的第二状态。
S1207,根据移动终端的第二状态确定是否有姿态调整。如果没有姿态调整,则流程结束。
S1208,如果有姿态调整,则自动触发人脸识别。
S1209,当人脸识别成功时,解锁移动终端,流程结束。当人脸识别仍然失败时,回到S1203,继续确定是否符合再次人脸识别的条件。
参见图13,本申请实施例提供一种装置,包括摄像头131,处理器132,存储器133,传感器134。用户触发人脸识别后,处理器132指令摄像头131采集用户脸部的图像,并与预存在存储器133中的人脸图像进行对比。处理器132确定采集到的图像和预存图像的匹配度,当匹配度大于预设的阈值时,则确定人脸识别成功,赋予用户相应的操作权限;当匹配度小于预设的阈值时,则确定人脸识别失败,不赋予用户相应的操作权限。
当人脸识别失败时,处理器132指令传感器134检测该装置的第一状态。第一状态可以是人脸识别失败时该装置的状态,具体可以检测该装置的倾斜角度,或者与用户脸部的距离等。处理器132根据第一状态给出姿态调整提示。姿态调整提示可以通过显示器、扬声器等部件输出给用户。
在给出姿态调整提示后,处理器132指令传感器134检测该装置的第二状态,以确定是否有姿态调整。如果确定有姿态调整,则处理器132触发人脸识别。
具体实现方式可以参考以上各个方法实施例,此处不再赘述。
另外,本申请实施例提供一种装置,包括处理器、存储器,一个或多个程序;其中,一个或多个程序被存储在存储器中并被配置为被一个或多个处理器执行,这一个或多个程序包括指令,该指令用于:当人脸识别失败时,检测装置的第一状态;根据第一状态给出姿态调整提示;根据第二状态确定是否有姿态调整,如果有姿态调整,则自动触发人脸识别。具体实现方式可以参考以上各个方法实施例。
另外,本申请实施例提供一种存储介质或计算机程序产品,用于存储计算机软件指令,该指令用于:当人脸识别失败时,检测移动终端的第一状态;根据移动终端的第一状态给出姿态调整提示;根据移动终端的第二状态确定是否有姿态调整,如果有姿态调整,则自动触发人脸识别。具体实现方式可以参考以上各个方法实施例。
另外,参见图14,本申请实施例提供一种装置,包括人脸识别单元141,处理单元142,提示单元143,状态检测单元144。
用户触发人脸识别后,人脸识别单元141可以采集用户脸部的图像,并与预存的人脸图像进行对比。处理单元142确定采集到的图像和预存图像的匹配度,当匹配度 大于预设的阈值时,则确定人脸识别成功,赋予用户相应的操作权限;当匹配度小于预设的阈值时,则确定人脸识别失败,不赋予用户相应的操作权限。
当人脸识别失败时,状态检测单元144检测该装置的第一状态。第一状态可以是人脸识别失败时该装置的状态,具体可以检测该装置的倾斜角度,或者与用户脸部的距离等。提示单元143根据第一状态给出姿态调整提示。姿态调整提示可以通过显示器、扬声器等部件输出给用户。
在给出姿态调整提示后,状态检测单元144检测该装置的第二状态,以确定是否有姿态调整。如果确定有姿态调整,则人脸识别单元141自动触发人脸识别。
具体实现方式可以参考以上各个方法实施例。
通过以上实施方式的描述,所属领域的技术人员可以了解到,为描述的方便和简洁,仅以上述各功能模块的划分进行举例说明,实际应用中,可以根据需要而将上述功能分配由不同的功能模块完成,即将装置的内部结构划分成不同的功能模块,以完成以上描述的全部或者部分功能。
在本申请所提供的几个实施例中,应该理解到,所揭露的装置和方法,可以通过其他的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述模块或单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个装置,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其他的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是一个物理单元或多个物理单元,即可以位于一个地方,或者也可以分布到多个不同地方。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
所述集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个可读取存储介质中。基于这样的理解,本申请实施例的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该软件产品存储在一个存储介质中,包括若干指令用以使得一个设备(可以是单片机,芯片等)或处理器(processor)执行本申请各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(read only memory,ROM)、随机存取存储器(random access memory,RAM)、磁碟或者光盘等各种可以存储程序代码的介质。
以上内容,仅为本申请的具体实施方式,但本申请的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本申请揭露的技术范围内,可轻易想到变化或替换,都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应以所述权利要求的保护范围为准。

Claims (40)

  1. 一种人脸识别的方法,其特征在于,包括:
    触发人脸识别;
    当人脸识别失败时,检测移动终端的第一状态;
    根据所述第一状态给出姿态调整提示;
    检测所述移动终端的第二状态;
    根据所述第二状态确定是否有姿态调整,如果有所述姿态调整,则自动触发人脸识别。
  2. 根据权利要求1所述的方法,其特征在于,所述触发人脸识别,包括:
    采集用户的脸部图像,与预存的脸部图像进行对比。
  3. 根据权利要求2所述的方法,其特征在于,所述脸部图像包括脸部图片或视频。
  4. 根据权利要求2所述的方法,其特征在于,所述预存的脸部图像存储在所述移动终端的存储器中,或者存储在可与所述移动终端进行通信的服务器中。
  5. 根据权利要求1-4任一项所述的方法,其特征在于,在所述自动触发人脸识别之后,所述方法还包括:
    当所述人脸识别成功时,获取所述移动终端的操作权限。
  6. 根据权利要求5所述的方法,其特征在于,所述获取所述移动终端的操作权限,包括以下任一项:
    解锁所述移动终端,获得所述移动终端上安装的应用程序的访问权限,或者获得所述移动终端上存储的数据的访问权限。
  7. 根据权利要求1-4任一项所述的方法,其特征在于,在所述自动触发人脸识别之后,所述方法还包括:
    当所述人脸识别失败时,使用以下任一项鉴权方式进行验证:密码验证、手势识别、指纹识别、虹膜识别、声纹识别。
  8. 根据权利要求1-4任一项所述的方法,其特征在于,在所述自动触发人脸识别之后,所述方法还包括:
    当所述人脸识别失败时,确定是否符合再次人脸识别的条件;
    如果符合所述再次人脸识别的条件,则再次自动触发人脸识别;
    如果不符合所述再次人脸识别的条件,则使用以下任一项鉴权方式进行验证:密码验证、手势识别、指纹识别、虹膜识别、声纹识别。
  9. 根据权利要求8所述的方法,其特征在于,所述符合再次人脸识别的条件,包括:所述人脸识别失败的次数小于预设的阈值。
  10. 根据权利要求1-9任一项所述的方法,其特征在于,所述根据所述第一状态给出姿态调整提示,包括:
    所述移动终端根据所述第一状态分析所述人脸识别失败的原因,并在预先设置的数据库中找到所述原因对应的解决方案,根据所述解决方案给出所述姿态调整提示。
  11. 根据权利要求1-10任一项所述的方法,其特征在于,所述根据所述第二状态确定是否有姿态调整,包括:
    确定所述第二状态相对于所述第一状态的变化与所述姿态调整提示的内容是否相 同。
  12. 根据权利要求1-11任一项所述的方法,其特征在于,所述姿态调整提示包括以下提示方式的任意组合:
    文字、图片、语音、视频、灯光或者振动。
  13. 根据权利要求1-12任一项所述的方法,其特征在于:
    所述第一状态为当所述人脸识别失败时所述移动终端与用户脸部的第一距离,所述第二状态为给出所述姿态调整提示后所述移动终端与用户脸部的第二距离。
  14. 根据权利要求1-12任一项所述的方法,其特征在于:
    所述第一状态为当所述人脸识别失败时所述移动终端的显示器所在平面相对于水平面的第一倾斜角度,所述第二状态为给出所述姿态调整提示后所述移动终端的显示器所在平面相对于水平面的第二倾斜角度。
  15. 一种装置,包括摄像头、处理器、存储器、传感器,其特征在于:
    所述处理器用于:
    触发人脸识别,指令所述摄像头采集用户的脸部图像,并与预存在所述存储器中的脸部图像进行对比;
    当所述人脸识别失败时,指令所述传感器检测所述装置的第一状态;
    根据所述第一状态给出姿态调整提示;
    指令所述传感器检测所述装置的第二状态;
    根据所述第二状态确定是否有姿态调整,如果有所述姿态调整,则自动触发人脸识别。
  16. 根据权利要求15所述的装置,其特征在于,所述脸部图像包括脸部图片或视频。
  17. 根据权利要求15或16所述的装置,其特征在于,在所述自动触发人脸识别之后,所述处理器还用于:当所述人脸识别成功时,获取所述装置的操作权限。
  18. 根据权利要求17所述的装置,其特征在于,所述获取所述装置的操作权限,包括以下任一项:
    解锁所述装置,获得所述装置上安装的应用程序的访问权限,或者获得所述装置上存储的数据的访问权限。
  19. 根据权利要求15或16所述的装置,其特征在于,在所述自动触发人脸识别之后,所述处理器还用于:
    当所述人脸识别失败时,使用以下任一项鉴权方式进行验证:密码验证、手势识别、指纹识别、虹膜识别、声纹识别。
  20. 根据权利要求15或16所述的装置,其特征在于,在所述自动触发人脸识别之后,所述处理器还用于:
    当所述人脸识别失败时,确定是否符合再次人脸识别的条件;
    如果符合所述再次人脸识别的条件,则再次自动触发人脸识别;
    如果不符合所述再次人脸识别的条件,则使用以下任一项鉴权方式进行验证:密码验证、手势识别、指纹识别、虹膜识别、声纹识别。
  21. 根据权利要求20所述的装置,其特征在于,所述符合再次人脸识别的条件, 包括:所述人脸识别失败的次数小于预设的阈值。
  22. 根据权利要求15-21任一项所述的装置,其特征在于,所述根据所述第一状态给出姿态调整提示,包括:
    根据所述第一状态分析所述人脸识别失败的原因,并在预先设置的数据库中找到所述原因对应的解决方案,根据所述解决方案给出所述姿态调整提示。
  23. 根据权利要求15-22任一项所述的装置,其特征在于,所述根据所述第二状态确定是否有姿态调整,包括:
    确定所述第二状态相对于所述第一状态的变化与所述姿态调整提示的内容是否相同。
  24. 根据权利要求15-23任一项所述的装置,其特征在于,所述姿态调整提示包括以下提示方式的任意组合:
    文字、图片、语音、视频、灯光或者振动。
  25. 根据权利要求15-24任一项所述的装置,其特征在于:
    所述第一状态为当所述人脸识别失败时所述装置与用户脸部的第一距离,所述第二状态为给出所述姿态调整提示后所述装置与用户脸部的第二距离。
  26. 根据权利要求15-24任一项所述的装置,其特征在于:所述装置还包括显示器,所述第一状态为当所述人脸识别失败时所述显示器所在平面相对于水平面的第一倾斜角度,所述第二状态为给出所述姿态调整提示后所述显示器所在平面相对于水平面的第二倾斜角度。
  27. 一种装置,包括人脸识别单元、处理单元、提示单元、状态检测单元,其特征在于:
    所述处理单元,用于触发人脸识别;
    所述人脸识别单元,用于采集用户的脸部图像,并与预存的脸部图像进行对比;
    所述状态检测单元,用于当所述人脸识别失败时,检测所述装置的第一状态;
    所述提示单元,用于根据所述第一状态给出姿态调整提示;
    所述状态检测单元,还用于检测所述装置的第二状态;
    所述处理单元,还用于根据所述第二状态确定是否有姿态调整,如果有所述姿态调整,则自动触发人脸识别。
  28. 根据权利要求27所述的装置,其特征在于,所述脸部图像包括脸部图片或视频。
  29. 根据权利要求27或28所述的装置,其特征在于,在所述自动触发人脸识别之后,所述处理单元还用于:当所述人脸识别成功时,获取所述装置的操作权限。
  30. 根据权利要求29所述的装置,其特征在于,所述获取所述装置的操作权限,包括以下任一项:
    解锁所述装置,获得所述装置上安装的应用程序的访问权限,或者获得所述装置上存储的数据的访问权限。
  31. 根据权利要求27或28所述的装置,其特征在于,在所述自动触发人脸识别之后,所述处理单元还用于:
    当所述人脸识别失败时,使用以下任一项鉴权方式进行验证:密码验证、手势识 别、指纹识别、虹膜识别、声纹识别。
  32. 根据权利要求27或28所述的装置,其特征在于,在所述自动触发人脸识别之后,所述处理单元还用于:当所述人脸识别失败时,确定是否符合再次人脸识别的条件;如果不符合所述再次人脸识别的条件,则使用以下任一项鉴权方式进行验证:密码验证、手势识别、指纹识别、虹膜识别、声纹识别;
    所述人脸识别单元,用于如果符合所述再次人脸识别的条件,则再次自动触发人脸识别。
  33. 根据权利要求32所述的装置,其特征在于,所述符合再次人脸识别的条件,包括:所述人脸识别失败的次数小于预设的阈值。
  34. 根据权利要求27-33任一项所述的装置,其特征在于,所述根据所述第一状态给出姿态调整提示,包括:
    根据所述第一状态分析所述人脸识别失败的原因,并在预先设置的数据库中找到所述原因对应的解决方案,根据所述解决方案给出所述姿态调整提示。
  35. 根据权利要求27-34任一项所述的装置,其特征在于,所述根据第二状态确定是否有姿态调整,包括:确定所述第二状态相对于所述第一状态的变化与所述姿态调整提示的内容是否相同。
  36. 根据权利要求27-35任一项所述的装置,其特征在于,所述姿态调整提示包括以下提示方式的任意组合:
    文字、图片、语音、视频、灯光或者振动。
  37. 根据权利要求27-36任一项所述的装置,其特征在于:
    所述第一状态为当所述人脸识别失败时所述装置与用户脸部的第一距离,所述第二状态为给出所述姿态调整提示后所述装置与用户脸部的第二距离。
  38. 根据权利要求27-36任一项所述的装置,其特征在于:所述第一状态为当所述人脸识别失败时所述终端所在平面相对于水平面的第一倾斜角度,所述第二状态为给出所述姿态调整提示后所述终端所在平面相对于水平面的第二倾斜角度。
  39. 一种计算机存储介质,所述计算机存储介质中存储有指令,其特征在于,当所述指令在移动终端上运行时,使得所述移动终端执行如权利要求1-14中任一项所述的方法。
  40. 一种包含指令的计算机程序产品,当所述计算机程序产品在移动终端上运行时,使得所述移动终端执行如权利要求1-14中任一项所述的方法。
PCT/CN2018/102680 2018-08-28 2018-08-28 一种人脸识别的方法及装置 WO2020041971A1 (zh)

Priority Applications (6)

Application Number Priority Date Filing Date Title
US17/270,165 US20210201001A1 (en) 2018-08-28 2018-08-28 Facial Recognition Method and Apparatus
KR1020217005781A KR20210035277A (ko) 2018-08-28 2018-08-28 얼굴 인식 방법 및 장치
JP2021510869A JP7203955B2 (ja) 2018-08-28 2018-08-28 顔認識方法および装置
EP18931528.6A EP3819810A4 (en) 2018-08-28 2018-08-28 FACIAL RECOGNITION PROCESS AND APPARATUS
CN201880096890.7A CN112639801A (zh) 2018-08-28 2018-08-28 一种人脸识别的方法及装置
PCT/CN2018/102680 WO2020041971A1 (zh) 2018-08-28 2018-08-28 一种人脸识别的方法及装置

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/102680 WO2020041971A1 (zh) 2018-08-28 2018-08-28 一种人脸识别的方法及装置

Publications (1)

Publication Number Publication Date
WO2020041971A1 true WO2020041971A1 (zh) 2020-03-05

Family

ID=69644752

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/102680 WO2020041971A1 (zh) 2018-08-28 2018-08-28 一种人脸识别的方法及装置

Country Status (6)

Country Link
US (1) US20210201001A1 (zh)
EP (1) EP3819810A4 (zh)
JP (1) JP7203955B2 (zh)
KR (1) KR20210035277A (zh)
CN (1) CN112639801A (zh)
WO (1) WO2020041971A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114863510A (zh) * 2022-03-25 2022-08-05 荣耀终端有限公司 一种人脸识别方法和装置

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11941629B2 (en) * 2019-09-27 2024-03-26 Amazon Technologies, Inc. Electronic device for automated user identification
EP4124027A1 (en) * 2020-03-18 2023-01-25 NEC Corporation Program, mobile terminal, authentication processing device, image transmission method, and authentication processing method
US11270102B2 (en) 2020-06-29 2022-03-08 Amazon Technologies, Inc. Electronic device for automated user identification
KR102589834B1 (ko) * 2021-12-28 2023-10-16 동의과학대학교 산학협력단 치매 스크리닝 디퓨저 장치
WO2023149708A1 (en) * 2022-02-01 2023-08-10 Samsung Electronics Co., Ltd. Method and system for a face detection management
CN114694282A (zh) * 2022-03-11 2022-07-01 深圳市凯迪仕智能科技有限公司 一种基于智能锁的语音互动的方法及相关装置

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101771539A (zh) * 2008-12-30 2010-07-07 北京大学 一种基于人脸识别的身份认证方法
CN105389575A (zh) * 2015-12-24 2016-03-09 北京旷视科技有限公司 生物数据的处理方法和装置
CN106886697A (zh) * 2015-12-15 2017-06-23 中国移动通信集团公司 认证方法、认证平台、用户终端及认证系统

Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100467152B1 (ko) * 2003-11-25 2005-01-24 (주)버뮤다정보기술 얼굴인식시스템에서 개인 인증 방법
WO2013100697A1 (en) * 2011-12-29 2013-07-04 Intel Corporation Method, apparatus, and computer-readable recording medium for authenticating a user
CN102760042A (zh) * 2012-06-18 2012-10-31 惠州Tcl移动通信有限公司 一种基于图片脸部识别进行解锁的方法、系统及电子设备
US20170013464A1 (en) * 2014-07-10 2017-01-12 Gila FISH Method and a device to detect and manage non legitimate use or theft of a mobile computerized device
JP2016081071A (ja) * 2014-10-09 2016-05-16 富士通株式会社 生体認証装置、生体認証方法及びプログラム
JP6418033B2 (ja) * 2015-03-30 2018-11-07 オムロン株式会社 個人識別装置、識別閾値設定方法、およびプログラム
CN104898832B (zh) * 2015-05-13 2020-06-09 深圳彼爱其视觉科技有限公司 一种基于智能终端的3d实时眼镜试戴方法
JP6324939B2 (ja) * 2015-11-05 2018-05-16 株式会社ソニー・インタラクティブエンタテインメント 情報処理装置およびログイン制御方法
US10210318B2 (en) * 2015-12-09 2019-02-19 Daon Holdings Limited Methods and systems for capturing biometric data
WO2017208519A1 (ja) * 2016-05-31 2017-12-07 シャープ株式会社 生体認証装置、携帯端末装置、制御プログラム
CN107016348B (zh) * 2017-03-09 2022-11-22 Oppo广东移动通信有限公司 结合深度信息的人脸检测方法、检测装置和电子装置
CN107463883A (zh) * 2017-07-18 2017-12-12 广东欧珀移动通信有限公司 生物识别方法及相关产品
CN107818251B (zh) * 2017-09-27 2021-03-23 维沃移动通信有限公司 一种人脸识别方法及移动终端
CN107679514A (zh) * 2017-10-20 2018-02-09 维沃移动通信有限公司 一种人脸识别方法及电子设备
CN108090340B (zh) * 2018-02-09 2020-01-10 Oppo广东移动通信有限公司 人脸识别处理方法、人脸识别处理装置及智能终端
CN108319837A (zh) * 2018-02-13 2018-07-24 广东欧珀移动通信有限公司 电子设备、人脸模板录入方法及相关产品

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101771539A (zh) * 2008-12-30 2010-07-07 北京大学 一种基于人脸识别的身份认证方法
CN106886697A (zh) * 2015-12-15 2017-06-23 中国移动通信集团公司 认证方法、认证平台、用户终端及认证系统
CN105389575A (zh) * 2015-12-24 2016-03-09 北京旷视科技有限公司 生物数据的处理方法和装置

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3819810A4 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114863510A (zh) * 2022-03-25 2022-08-05 荣耀终端有限公司 一种人脸识别方法和装置
CN114863510B (zh) * 2022-03-25 2023-08-01 荣耀终端有限公司 一种人脸识别方法和装置

Also Published As

Publication number Publication date
JP2021535503A (ja) 2021-12-16
JP7203955B2 (ja) 2023-01-13
EP3819810A1 (en) 2021-05-12
US20210201001A1 (en) 2021-07-01
KR20210035277A (ko) 2021-03-31
EP3819810A4 (en) 2021-08-11
CN112639801A (zh) 2021-04-09

Similar Documents

Publication Publication Date Title
WO2020041971A1 (zh) 一种人脸识别的方法及装置
CN108960209B (zh) 身份识别方法、装置及计算机可读存储介质
WO2017181769A1 (zh) 一种人脸识别方法、装置和系统、设备、存储介质
US9049983B1 (en) Ear recognition as device input
CN108924737B (zh) 定位方法、装置、设备及计算机可读存储介质
CN107231470B (zh) 图像处理方法、移动终端及计算机可读存储介质
CN110096865B (zh) 下发验证方式的方法、装置、设备及存储介质
US20220236848A1 (en) Display Method Based on User Identity Recognition and Electronic Device
WO2018133282A1 (zh) 一种动态识别的方法及终端设备
WO2020048392A1 (zh) 应用程序的病毒检测方法、装置、计算机设备及存储介质
CN106548144B (zh) 一种虹膜信息的处理方法、装置及移动终端
US12039023B2 (en) Systems and methods for providing a continuous biometric authentication of an electronic device
US20220417359A1 (en) Remote control device, information processing method and recording program
CN111241499B (zh) 应用程序登录的方法、装置、终端及存储介质
US20240095329A1 (en) Cross-Device Authentication Method and Electronic Device
WO2022062808A1 (zh) 头像生成方法及设备
WO2021129698A1 (zh) 拍摄方法及电子设备
US11617055B2 (en) Delivering information to users in proximity to a communication device
WO2021083086A1 (zh) 信息处理方法及设备
US10270963B2 (en) Angle switching method and apparatus for image captured in electronic terminal
WO2021082620A1 (zh) 一种图像识别方法及电子设备
CN111694892B (zh) 资源转移方法、装置、终端、服务器及存储介质
US20230040115A1 (en) Information processing method and electronic device
EP3667497A1 (en) Facial information preview method and related product
CN112765571B (zh) 权限管理方法、系统、装置、服务器及存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18931528

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 20217005781

Country of ref document: KR

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 2018931528

Country of ref document: EP

Effective date: 20210208

Ref document number: 2021510869

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE