WO2020041971A1 - Method and device for face recognition - Google Patents
Method and device for face recognition
- Publication number
- WO2020041971A1 (PCT/CN2018/102680)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- face recognition
- state
- mobile terminal
- recognition
- face
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/30—Authentication, i.e. establishing the identity or authorisation of security principals
- G06F21/31—User authentication
- G06F21/32—User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/10—Image acquisition
- G06V10/17—Image acquisition using hand-held instruments
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/98—Detection or correction of errors, e.g. by rescanning the pattern or by human intervention; Evaluation of the quality of the acquired patterns
- G06V10/987—Detection or correction of errors, e.g. by rescanning the pattern or by human intervention; Evaluation of the quality of the acquired patterns with the intervention of an operator
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G06V40/166—Detection; Localisation; Normalisation using acquisition arrangements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/60—Static or dynamic means for assisting the user to position a body part for biometric acquisition
- G06V40/67—Static or dynamic means for assisting the user to position a body part for biometric acquisition by interactive indications to the user
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M2203/00—Aspects of automatic or semi-automatic exchanges
- H04M2203/60—Aspects of automatic or semi-automatic exchanges related to security aspects in telephonic communication systems
- H04M2203/6054—Biometric subscriber identification
Definitions
- the present application relates to the technical field of terminals, and in particular, to a method and device for face recognition.
- face recognition unlocking has gradually been applied to various terminal devices. For example, when a user uses a mobile terminal, if the result of face recognition meets a preset threshold, the user obtains the corresponding operation permission, such as unlocking the mobile device, entering the corresponding operating system, or gaining access to an application; if the result of face recognition does not meet the preset threshold, the corresponding operation permission cannot be obtained, for example, unlocking fails or access is denied.
- the user first needs to trigger the face recognition process. Common triggering methods can be clicking the power button or other buttons, picking up the mobile terminal to light up the screen, or triggering through a voice assistant.
- the camera may not be able to capture a suitable face image, which may cause the face unlock to fail.
- when face recognition unlocking fails, the user needs to verify again; however, to limit power consumption and for other reasons, existing mobile terminals do not keep recognizing continuously after a failure, and the user must actively trigger face recognition again to unlock. Triggering face recognition again requires the user to press the power button or another button again, put the mobile terminal down and pick it up again, issue a command through the voice assistant again, and so on. These operations are cumbersome and not smooth; at the same time, face recognition may still fail again, causing inconvenience in use.
- the embodiments of the present application provide a method and device for face recognition, which can automatically trigger face recognition unlocking and, at the same time, give a posture adjustment prompt according to the state of the mobile terminal, thereby simplifying operation, improving the success rate of face recognition, and improving user experience.
- an embodiment of the present application provides a method for face recognition.
- the method includes: triggering face recognition; when face recognition fails, detecting a first state of the mobile terminal; giving a posture adjustment prompt according to the first state; detecting a second state of the mobile terminal; and determining, according to the second state, whether there is a posture adjustment, and if there is a posture adjustment, automatically triggering face recognition.
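- as a rough, hedged illustration only (this is not the patented implementation), the following Python sketch simulates the claimed flow end to end; the recognizer, the sensor readings, and the distance/tilt limits are all hypothetical stand-ins:

```python
# Minimal simulation of the claimed flow; the recognizer and sensor values are stand-ins.
import random

PROMPTS = {
    "too_far":    "Please bring the phone closer",
    "too_tilted": "Please hold the phone vertically",
}

def recognize_face():
    """Stand-in for capturing a face image and comparing it with the pre-stored one."""
    return random.random() > 0.5

def read_state():
    """Stand-in for sensor readings: distance to the face (cm) and display tilt (degrees)."""
    return {"distance_cm": random.uniform(10, 60), "tilt_deg": random.uniform(30, 90)}

def diagnose(state):
    """Map the first state to an assumed failure cause (hypothetical limits)."""
    if state["distance_cm"] > 40:
        return "too_far"
    if state["tilt_deg"] < 60:
        return "too_tilted"
    return None

def unlock_flow():
    if recognize_face():
        return True                                   # success: grant the operation permission
    first = read_state()                              # first state, captured at failure time
    cause = diagnose(first)
    if cause:
        print(PROMPTS[cause])                         # posture adjustment prompt
    second = read_state()                             # second state, after the prompt
    adjusted = (cause == "too_far" and second["distance_cm"] < first["distance_cm"]) or \
               (cause == "too_tilted" and second["tilt_deg"] > first["tilt_deg"])
    return recognize_face() if adjusted else False    # auto re-trigger only after an adjustment

print(unlock_flow())
```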
- triggering face recognition includes: collecting a user's face image and comparing it with a pre-stored face image.
- there are many ways to trigger face recognition: for example, pressing a key of the mobile terminal, including the power key, the volume key, or another key; touching the display to light it up; picking up the mobile terminal so that a sensor detects the motion; or issuing a face recognition voice command through a voice assistant, and so on.
- the facial image includes a facial picture or a video.
- the pre-stored face image is stored in the memory of the mobile terminal, or stored in a server that can communicate with the mobile terminal.
- the method further includes: when the face recognition is successful, obtaining the operation authority of the mobile terminal.
- obtaining the operation permission of the mobile terminal includes any of the following: unlocking the mobile terminal, obtaining access permission of an application installed on the mobile terminal, or obtaining access permission of data stored on the mobile terminal.
- after face recognition is automatically triggered, the method further includes: when face recognition fails again, verifying by using any of the following authentication methods: password verification, gesture recognition, fingerprint recognition, iris recognition, or voiceprint recognition.
- the method further includes: when face recognition fails, determining whether the condition for performing face recognition again is met; if the condition is met, automatically triggering face recognition again; if the condition is not met, verifying by using any of the following authentication methods: password verification, gesture recognition, fingerprint recognition, iris recognition, or voiceprint recognition.
- meeting the condition of face recognition again means that the number of times of face recognition failure is less than a preset threshold.
- giving a posture adjustment prompt according to the first state includes: the mobile terminal analyzes the cause of the face recognition failure according to the first state, finds a solution corresponding to the cause in a preset database, and gives the posture adjustment prompt according to the solution.
- determining whether there is a posture adjustment according to the second state includes: determining whether a change in the second state from the first state is the same as the content of the posture adjustment prompt.
- the gesture adjustment prompt includes any combination of the following prompt modes: text, picture, voice, video, light, or vibration.
- the first state is a first distance between the mobile terminal and the user's face when face recognition fails; the second state is a second distance between the mobile terminal and the user's face after the posture adjustment prompt is given.
- the first state is a first tilt angle of the plane in which the display of the mobile terminal lies with respect to the horizontal plane when face recognition fails; the second state is a second tilt angle of that plane with respect to the horizontal plane after the posture adjustment prompt is given.
- an embodiment of the present application provides a device, including a camera, a processor, a memory, and a sensor.
- the processor is configured to: trigger face recognition, instruct the camera to collect the user's face image, and compare it with a face image pre-stored in the memory.
- when face recognition fails, the processor instructs the sensor to detect a first state of the device; gives a posture adjustment prompt according to the first state; instructs the sensor to detect a second state of the device; and determines, according to the second state, whether there is a posture adjustment, and if there is a posture adjustment, automatically triggers face recognition.
- the facial image includes a facial picture or a video.
- the processor is further configured to: when the face recognition is successful, obtain the operation authority of the device.
- obtaining the operation right of the device includes any of the following: unlocking the device, obtaining the access right of an application installed on the device, or obtaining the access right of data stored on the device.
- the processor is further configured to: when face recognition fails, verify by using any of the following authentication methods: password verification, gesture recognition, fingerprint recognition, iris recognition, or voiceprint recognition.
- the processor is further configured to: when face recognition fails, determine whether the condition for performing face recognition again is met; if the condition is met, automatically trigger face recognition again; if the condition is not met, verify by using any of the following authentication methods: password verification, gesture recognition, fingerprint recognition, iris recognition, or voiceprint recognition.
- meeting the condition of face recognition again means that the number of times of face recognition failure is less than a preset threshold.
- giving a posture adjustment prompt according to the first state includes: analyzing the cause of the face recognition failure according to the first state, finding a solution corresponding to the cause in a preset database, and giving the posture adjustment prompt according to the solution.
- determining whether there is a posture adjustment according to the second state includes: determining whether a change in the second state from the first state is the same as the content of the posture adjustment prompt.
- the gesture adjustment prompt includes any combination of the following prompt modes: text, picture, voice, video, light, or vibration.
- the first state is a first distance between the device and the user's face when face recognition fails
- the second state is the second distance between the device and the user's face after a gesture adjustment prompt is given.
- the device further includes a display.
- the first state is a first tilt angle of the plane in which the display lies with respect to the horizontal plane when face recognition fails; the second state is a second tilt angle of that plane with respect to the horizontal plane after the posture adjustment prompt is given.
- an embodiment of the present application provides a device including a face recognition unit, a processing unit, a prompting unit, and a state detection unit. The processing unit is configured to trigger face recognition; the face recognition unit is configured to collect the user's face image and compare it with a pre-stored face image; the state detection unit is configured to detect a first state of the device when face recognition fails; the prompting unit is configured to give a posture adjustment prompt according to the first state; the state detection unit is further configured to detect a second state of the device; and the processing unit is further configured to determine, according to the second state, whether there is a posture adjustment, and if there is a posture adjustment, to automatically trigger face recognition.
- the facial image includes a facial picture or a video.
- the processing unit is further configured to: when the face recognition is successful, obtain the operation authority of the device.
- obtaining the operation right of the device includes any of the following: unlocking the device, obtaining the access right of an application installed on the device, or obtaining the access right of data stored on the device.
- the processing unit is further configured to: when face recognition fails, verify by using any of the following authentication methods: password verification, gesture recognition, fingerprint recognition, iris recognition, or voiceprint recognition.
- the processing unit is further configured to: when face recognition fails, determine whether the condition for performing face recognition again is met; if the condition is not met, verify by using any of the following authentication methods: password verification, gesture recognition, fingerprint recognition, iris recognition, or voiceprint recognition. The face recognition unit is further configured to automatically trigger face recognition again if the condition for performing face recognition again is met.
- meeting the condition of face recognition again means that the number of times of face recognition failure is less than a preset threshold.
- giving a posture adjustment prompt according to the first state includes: analyzing the cause of the face recognition failure according to the first state, finding a solution corresponding to the cause in a preset database, and giving the posture adjustment prompt according to the solution.
- determining whether there is a posture adjustment according to the second state includes: determining whether a change in the second state from the first state is the same as the content of the posture adjustment prompt.
- the gesture adjustment prompt includes any combination of the following prompt modes: text, picture, voice, video, light, or vibration.
- the first state is a first distance between the device and the user's face when face recognition fails
- the second state is the second distance between the device and the user's face after a gesture adjustment prompt is given.
- the first state is a first tilt angle of the plane in which the terminal lies with respect to the horizontal plane when face recognition fails; the second state is a second tilt angle of that plane with respect to the horizontal plane after the posture adjustment prompt is given.
- an embodiment of the present application provides a computer storage medium.
- the computer storage medium stores instructions. When the instructions are run on the mobile terminal, the mobile terminal is caused to execute the method as in the first aspect.
- an embodiment of the present application provides a computer program product including instructions.
- when the computer program product runs on a mobile terminal, the mobile terminal executes the method according to the first aspect.
- FIG. 1 is a schematic diagram of face recognition using a mobile phone according to an embodiment of the present application
- FIG. 2 is a schematic diagram of a hardware structure of a mobile phone according to an embodiment of the present application.
- FIG. 3 is a flowchart of a method for triggering face recognition according to an embodiment of the present application.
- FIG. 4 is a schematic diagram of a tilt angle of a mobile terminal according to an embodiment of the present application.
- FIG. 5 is a flowchart of a method for unlocking a mobile terminal using face recognition according to an embodiment of the present application
- FIG. 6 is a flowchart of a method for obtaining access to an application program using face recognition according to an embodiment of the present application
- FIG. 7 is a flowchart of a method for obtaining certain data access authority using face recognition according to an embodiment of the present application
- FIG. 8 is a flowchart of a method for unlocking a mobile terminal using face recognition according to an embodiment of the present application
- FIG. 9 is a schematic diagram of unlocking a mobile terminal using face recognition according to an embodiment of the present application.
- FIG. 10 is a flowchart of another method for unlocking a mobile terminal using face recognition according to an embodiment of the present application.
- FIG. 11 is a schematic diagram of unlocking a mobile terminal using face recognition according to another embodiment of the present application.
- FIG. 12 is a flowchart of a method for unlocking a mobile terminal using face recognition according to an embodiment of the present application
- FIG. 13 is a schematic structural diagram of a device according to an embodiment of the present application.
- FIG. 14 is a schematic structural diagram of another device according to an embodiment of the present application.
- Face recognition is a kind of biometric recognition technology based on the facial feature information of a person.
- the camera of the mobile terminal can collect pictures or videos containing the user's face, and compare them with the features of the pre-stored face pictures or video.
- when the matching degree of the two is greater than a preset threshold, face recognition succeeds, and the user can be given the corresponding operation permission, for example, unlocking the mobile terminal, entering an operating system with corresponding permissions, gaining access to an application, or gaining access to certain data; when the matching degree of the two is less than the preset threshold, face recognition fails, and the user cannot obtain the corresponding operation permission, for example, unlocking fails or access to an application or to certain data is denied.
- face recognition can also be performed in conjunction with other authentication methods, such as password verification, gesture recognition, fingerprint recognition, iris recognition, voiceprint recognition, and so on.
- face recognition technology can be combined with certain algorithms, such as extracting feature points, 3D modeling, local magnification, automatic adjustment of exposure, and infrared detection.
- the mobile terminal in the embodiments of the present application may be any form of mobile terminal, such as a mobile phone, a tablet computer, a wearable device, a notebook computer, a personal digital assistant (PDA), an augmented reality (AR) / virtual reality (VR) device, or an in-vehicle device.
- Some embodiments of the present application use a mobile phone as an example to introduce a mobile terminal. It can be understood that these embodiments are also applicable to other mobile terminals.
- FIG. 1 is a schematic diagram of face recognition using a mobile phone.
- User 1 holds a mobile phone 200 for face recognition.
- the mobile phone 200 includes a display 203 and a camera 204, where the camera 204 can be used to capture a face picture or video of the user 1 and the display 203 can display a collection interface.
- the collection interface may be a shooting interface, and is used to display a face shooting effect of the user 1.
- user 1 first triggers face recognition.
- there are several ways to trigger face recognition. For example, the user can press a key of the mobile phone 200, including the power key, the volume key, or another key; touch the display 203 to light it up; pick up the mobile phone 200 so that a sensor detects the motion; or issue a face recognition voice command through a voice assistant, and so on.
- the mobile phone 200 can collect a face image of the user 1.
- the front camera 204 can be used to shoot the face of the user 1.
- the face image described in the embodiment of the present application may include a picture or video of the face.
- the captured picture or video may be displayed on the display 203.
- the mobile phone 200 may compare the collected image with a pre-stored face image to confirm whether the user 1 passes face recognition and thereby obtains the corresponding operation authority.
- the pre-stored face image may be stored in the memory of the mobile phone 200 in advance, or may be stored in a server or a database that can communicate with the mobile phone 200 in advance.
- the “corresponding operation authority” described herein may be unlocking the mobile phone 200, or entering an operating system with corresponding authority, or obtaining access to certain applications, or obtaining access to certain data.
- the unlocking of the mobile phone 200 is used as a result of face recognition. It can be understood that in these embodiments, obtaining other corresponding operation rights may also be used as a result of face recognition.
- the “face recognition passed” here can also be referred to as “face recognition success”, which means that the matching degree between the face image of user 1 collected by the mobile phone 200 and the pre-stored face image is greater than a preset threshold. The match does not need to be exact; for example, the preset threshold may be an 80% match of feature points between the two, and the preset threshold may also be dynamically adjusted according to factors such as the location where user 1 performs the operation and the permission to be obtained.
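- as a toy illustration of such a threshold check (the 80% figure and the feature representation below are assumptions for the example, not the patent's actual matching algorithm):

```python
def face_match(captured, stored, threshold=0.80):
    """Return True when the fraction of matching feature points reaches the threshold."""
    matched = sum(1 for c, s in zip(captured, stored) if abs(c - s) < 0.05)
    return matched / len(stored) >= threshold

# A stricter threshold could be used for sensitive permissions than for a simple unlock.
print(face_match([0.11, 0.52, 0.93, 0.30], [0.10, 0.50, 0.95, 0.70], threshold=0.75))  # True
```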
- FIG. 2 is a schematic diagram of the hardware structure of a mobile phone 200.
- the mobile phone 200 may include a processor 201, a memory 202, a display 203, a camera 204, an I/O device 205, a sensor 206, a power source 207, a Bluetooth device 208, a positioning device 209, an audio circuit 210, a WiFi device 211, a radio frequency circuit 212, and the like. These components communicate through one or more communication buses or signal lines. It can be understood that the mobile phone 200 is only an example of a mobile device that can realize face recognition, and does not constitute a limitation on the structure of the mobile phone 200.
- the mobile phone 200 may have more or fewer components than those shown in FIG. 2, may combine two or more components, or may have a different configuration or arrangement of these components.
- the operating system running on the mobile phone 200 includes, but is not limited to DOS, Unix, Linux, or other operating systems.
- the processor 201 includes a single processor or processing unit, multiple processors, multiple processing units, or one or more other suitably configured computing elements.
- the processor 201 may be a microprocessor, a central processing unit (CPU), an application specific integrated circuit (ASIC), a digital signal processor (DSP), or a combination of such devices.
- the processor 201 may integrate an application processor and a modem.
- the application processor mainly processes an operating system, a user interface, and an application program, and the modem mainly processes wireless communications.
- the processor 201 is the control center of the mobile phone 200; it uses various interfaces and lines to directly or indirectly connect the various parts of the mobile phone 200, runs or executes software programs or instruction sets stored in the memory 202, and calls data stored in the memory 202 to perform the various functions of the mobile phone 200 and process data, thereby monitoring the mobile phone 200 as a whole.
- the memory 202 can store electronic data that can be used by the mobile phone 200, such as the operating system, applications and the data they generate, various documents such as text, pictures, audio, and video, device settings and user preferences, contact lists and communication records, memos and calendars, biometric data, data structures or databases, and the like.
- the memory 202 may be configured as any type of memory such as random access memory, read-only memory, flash memory, removable memory, or other types of storage elements, or a combination of such devices.
- the memory 202 may be used to store a preset face image for comparison with the collected face image during face recognition.
- the display 203 may be used to display information input by the user or information provided to the user and various interfaces of the mobile phone 200. Common display types are LCD (Liquid Crystal Display), OLED (Organic Light Emitting Diode), and the like.
- the display 203 can also be integrated with a touch panel.
- the touch panel can detect whether a contact has occurred, and the pressure value, movement speed, direction, and position information of the contact.
- the detection methods of the touch panel include, but are not limited to, a capacitive type, a resistive type, an infrared type, and a surface acoustic wave type.
- when the touch panel detects a touch operation on or near it, the operation is transmitted to the processor 201 to determine the type of the touch event.
- the processor 201 then provides a corresponding visual output on the display 203 according to the type of the touch event.
- the visual output includes text, graphics, icons, videos, and any combination thereof.
- the camera 204 is used for taking pictures or videos.
- the camera 204 may be divided into a front camera and a rear camera, and used in conjunction with other components such as a flash.
- a front camera may be used to collect a face image of the user 1.
- an RGB camera, an infrared camera, a ToF (Time of Flight) camera, and a structured light device may be used to perform image collection for face recognition.
- the I / O device 205 that is, an input / output device, can receive data and instructions sent by a user or other device, and can also output data or instructions to the user or other device.
- the I / O device 205 includes various buttons, interfaces, keyboards, touch input devices, touch pads, mice, and other components of the mobile phone 200.
- in a broad sense, the I/O device may also include the display 203, the camera 204, the audio circuit 210, and so on.
- the mobile phone 200 may include one or more sensors 206, and the sensors 206 may be configured to detect any type of attributes including, but not limited to, images, pressure, light, touch, heat, magnetism, movement, relative motion, and so on.
- the sensor 206 may be an image sensor, a thermometer, a hygrometer, a proximity sensor, an infrared sensor, an accelerometer, an angular velocity sensor, a gravity sensor, a gyroscope, a geomagnetic meter, a heart rate detector, and the like.
- in some embodiments of the present application, a proximity sensor, a distance sensor, an infrared sensor, a gravity sensor, a gyroscope, or another type of sensor may be used to detect the distance, angle, or relative position between the user 1 and the mobile phone 200.
- the power source 207 can provide power to the mobile phone 200 and its components.
- the power source 207 may be one or more rechargeable batteries, or non-rechargeable batteries, or an external power source connected to the mobile phone 200 in a wired / wireless manner.
- the power supply 207 may further include related equipment such as a power management system, a fault detection system, and a power conversion system.
- the Bluetooth device 208 is used to implement data exchange between the mobile phone 200 and other devices through a Bluetooth protocol. It can be understood that the mobile phone 200 may further include other short-range communication devices such as an NFC device.
- the positioning device 209 can provide geographic location information for the mobile phone 200 and the installed applications.
- the positioning device 209 may be a positioning system such as GPS, Beidou satellite navigation system, or GLONASS.
- the positioning device 209 further includes an assisted global positioning system AGPS, assisted positioning based on a base station or a WiFi access point, and the like.
- the audio circuit 210 may perform functions such as processing, input, and output of audio signals, and may include a speaker 210-1, a microphone 210-2, and other audio processing devices.
- the WiFi device 211 is used to provide the mobile phone 200 with network access that complies with WiFi-related standard protocols.
- the mobile phone 200 can access the WiFi access point through the WiFi device 211 to connect to the network.
- a radio frequency (RF) circuit 212 can be used to receive and send signals during the transmission and reception of information or during a call; it converts electrical signals into electromagnetic signals or electromagnetic signals into electrical signals, and communicates with communication networks and other communication devices via the electromagnetic signals.
- the structure of the radio frequency circuit 212 includes, but is not limited to, an antenna system, a radio frequency transceiver, one or more amplifiers, a tuner, one or more oscillators, a digital signal processor, a codec chipset, a SIM (Subscriber Identity Module) card, and the like.
- the radio frequency circuit 212 may communicate with a network and other devices through wireless communication, such as the Internet, an intranet, and / or a wireless network (such as a cellular telephone network, a wireless local area network, and / or a metropolitan area network).
- Wireless communications can use any of a variety of communication standards, protocols, and technologies, including but not limited to the Global System for Mobile Communications, Enhanced Data GSM Environment, High-Speed Downlink Packet Access, High-Speed Uplink Packet Access, Wideband Code Division Multiple Access, Code Division Multiple Access, Time Division Multiple Access, Bluetooth, wireless fidelity (e.g., IEEE 802.11a, IEEE 802.11b, IEEE 802.11g, and/or IEEE 802.11n), Voice over Internet Protocol, Wi-MAX, email protocols (e.g., Internet Message Access Protocol (IMAP) and/or Post Office Protocol (POP)), instant messaging (e.g., Extensible Messaging and Presence Protocol (XMPP), Session Initiation Protocol for Instant Messaging and Presence Leveraging Extensions (SIMPLE)), and so on.
- the mobile phone 200 may further include other components, and details are not described herein again.
- the mobile phone 200 may not be able to collect a suitable face image, which may cause the failure of face recognition.
- the face of user 1 is too close to the mobile phone 200, resulting in an incomplete face image; or the face of user 1 is too far from the mobile phone 200, so that the details of the face image cannot be recognized; or user 1 holds the mobile phone 200 at too steep an angle, resulting in distortion, deformation, or loss of the face image; or the environment in which user 1 is located is too dim or too bright, causing the exposure and contrast of the face image to exceed the recognizable range.
- because the power of the mobile phone 200 is limited and to save power consumption, the mobile phone 200 will not keep performing face recognition after a face recognition failure; user 1 needs to trigger face recognition again, that is, repeat the triggering process described above. This is inconvenient to operate, and after face recognition is re-triggered, recognition may still fail.
- FIG. 3 is a method for triggering face recognition provided by an embodiment of the present application, which is used to determine whether the user has a posture adjustment after the face recognition fails, thereby determining whether to automatically trigger the face recognition. This method includes the following steps:
- step S300 may also be performed: triggering face recognition.
- there are many ways to trigger face recognition: for example, pressing a key of the mobile phone 200, including the power key, the volume key, or another key; touching the display 203 to light it up; picking up the mobile phone 200 so that a sensor detects the motion; or issuing a face recognition voice command through a voice assistant.
- S300 may be executed first.
- triggering face recognition may be to enable a camera and other face recognition-related functions for face recognition.
- after face recognition is triggered, the mobile terminal (such as the mobile phone 200) performs face recognition on the user, for example, collecting the user's face image and comparing it with the pre-stored face image, to confirm whether the user passes face recognition, that is, whether face recognition succeeds.
- if face recognition succeeds, the user can obtain the corresponding permissions to operate the mobile terminal, such as unlocking the mobile terminal, entering an operating system with corresponding permissions, gaining access to certain applications, or gaining access to certain data.
- if face recognition fails, the user cannot obtain the corresponding permissions to operate the mobile terminal.
- the term "when” can be interpreted as "if” or “after” or “in response to a determination" or "in response to a detection”.
- the “detection of the state of the mobile terminal when face recognition fails” can mean detecting the state of the mobile terminal at the same time as the face recognition failure, or detecting the state of the mobile terminal after face recognition fails, for example, 1 second after face recognition fails.
- the first state of the mobile terminal may be the state of the mobile terminal when face recognition fails. Detecting the first state of the mobile terminal may specifically be using a sensor to detect, when face recognition fails, the tilt angle of the mobile terminal, its distance from the user's face, or the brightness of the environment around the mobile terminal. It can be understood that any suitable sensor can be used to detect the state of the mobile terminal, such as a proximity sensor, a distance sensor, a gravity sensor, a gyroscope, a light sensor, or an infrared sensor.
- the distance between the mobile terminal and the user's face described in the embodiments of the present application may be the distance between the front camera of the mobile terminal and the user's face, for example, the distance between the front camera of the mobile phone and the user's nose.
- the tilt angle described in the embodiments of the present application may be the angle, less than or equal to 90 degrees, between the plane of the display of the mobile terminal and the horizontal plane (or the ground) when the user uses the mobile terminal in an upright posture (such as standing or sitting upright), as shown in FIG. 4. The smaller the tilt angle, the harder it is to collect an image suitable for face recognition; that is, if the angle at which the user holds the mobile terminal is too tilted, face recognition may fail. It can be understood that when the mobile terminal is a regular cuboid, as most mobile phones on the market are, the plane of the display can also be understood as the plane of the mobile terminal itself.
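- as one possible way to measure this tilt angle (a sketch only; the patent does not specify the sensor math, and the axis convention below is an assumption), a 3-axis gravity or accelerometer reading can be converted to the display's angle from the horizontal:

```python
import math

def display_tilt_deg(gx, gy, gz):
    """Angle between the display plane and the horizontal plane, in degrees.

    gx, gy, gz: gravity vector in the device frame, with z assumed perpendicular
    to the display. Flat on a table -> 0 degrees; held fully upright -> 90 degrees.
    """
    in_plane = math.hypot(gx, gy)                    # gravity component lying in the display plane
    return math.degrees(math.atan2(in_plane, abs(gz)))

print(round(display_tilt_deg(0.0, -9.4, 3.3), 1))    # ~70.7 degrees, i.e. held fairly upright
```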
- after detecting the first state of the mobile terminal when face recognition fails, the mobile terminal gives a posture adjustment prompt according to the first state.
- the mobile terminal analyzes the cause of the face recognition failure according to the first state; for example, the first state indicates that the angle at which the user holds the mobile terminal is too tilted, or that the user's face is too close to or too far from the mobile terminal, so that face recognition fails.
- after determining the cause of the face recognition failure, the mobile terminal can find a solution corresponding to the cause in a preset database and then give the corresponding posture adjustment prompt.
- a posture adjustment prompt of “please bring the phone closer” may be given, or a posture adjustment prompt of “the phone is too far from the face” may be given.
- a gesture adjustment prompt of "please hold the phone vertically” may be given, or a gesture adjustment prompt of "the phone is too inclined” may be given.
- the gesture adjustment prompt may be any form of text, picture, voice, video, etc., or a combination of these forms.
- the content of the attitude adjustment prompt may be displayed on the display screen of the mobile terminal; or the content of the attitude adjustment prompt may be played by the speaker.
- the attitude adjustment prompt may also be a prompt in any form of the mobile terminal through light display, vibration, or a combination of these forms.
- the LED indicator of the mobile terminal emits a certain color of light, or lights up or flashes for a period of time, or the mobile terminal vibrates several times to represent a corresponding attitude adjustment prompt.
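- as a minimal sketch of such a preset database (the causes, messages, and vibration counts below are illustrative assumptions, not values from the patent):

```python
# Hypothetical cause -> solution table standing in for the preset database.
PROMPT_DATABASE = {
    "face_too_close":   {"text": "Please move the phone farther away", "vibrations": 1},
    "face_too_far":     {"text": "Please bring the phone closer",      "vibrations": 2},
    "phone_too_tilted": {"text": "Please hold the phone vertically",   "vibrations": 3},
    "too_dark":         {"text": "Please move to a brighter place",    "vibrations": 1},
}

def give_prompt(cause):
    """Look up the solution for a failure cause and emit it on one or more channels."""
    solution = PROMPT_DATABASE.get(cause)
    if solution is None:
        return
    print(solution["text"])                           # shown on screen and/or spoken aloud
    print(f"vibrate x{solution['vibrations']}")       # or an LED colour / flash pattern

give_prompt("face_too_far")
```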
- step S302 can also be omitted, that is, no posture adjustment prompt is given, and step S303 is performed directly after step S301.
- S303 Determine whether there is a posture adjustment according to the second state of the mobile terminal, and if there is a posture adjustment, the face recognition is automatically triggered.
- after the posture adjustment prompt is given, the state of the mobile terminal, that is, the second state, may be detected again to determine whether there is a posture adjustment.
- S303 can be divided into three steps: (1) detecting the second state of the mobile terminal; (2) determining whether there is a posture adjustment according to the second state of the mobile terminal; (3) if there is a posture adjustment, the face recognition is automatically triggered.
- the state of the mobile terminal may be detected again by a sensor at the same time as, or some time after, the posture adjustment prompt is given, to determine whether there is a corresponding change from the first state and thus whether there is a posture adjustment. From the user's perspective, if the user has made a posture adjustment, the second state of the mobile terminal changes correspondingly from the first state.
- the first state of the mobile terminal is that the distance between the mobile terminal and the user's face is 30 centimeters, and face recognition fails because the face is too far away; the second state is that the distance between the mobile terminal and the user's face is 20 centimeters, which is closer. That is, the second state has changed correspondingly from the first state, so there is a posture adjustment.
- it may further be determined whether the posture adjustment matches the content of the posture adjustment prompt, and if they match, face recognition is automatically triggered.
- the content of the attitude adjustment prompt may be a corresponding solution obtained from the database when analyzing the cause of the face recognition failure in step S302.
- the gesture adjustment prompt is "Please bring the phone closer”, and it can be determined whether there is an attitude adjustment to bring the phone closer according to the second state of the mobile terminal; if so, the face recognition is automatically triggered.
- the posture adjustment prompt is "please hold the phone vertically", and it can be determined according to the second state of the mobile terminal whether there is a posture adjustment that holds the phone more upright; if so, face recognition is automatically triggered.
- Automatically triggering face recognition means that the user does not need to perform the triggering method of S300; instead, the posture adjustment is used as the condition for triggering face recognition, so that the mobile terminal performs face recognition again.
- the automatic triggering of face recognition may be to automatically enable the front camera and other related functions of face recognition to perform face recognition again.
- the state change of the mobile terminal is detected by the sensor, and it is determined whether there is a corresponding posture adjustment action, so as to determine whether to perform face recognition again.
- This wake-up method for face recognition based on attitude adjustment not only saves power consumption of the mobile terminal, but also provides a simpler and more convenient wake-up method for face recognition. It also improves the success rate of face recognition through gesture adjustment prompts.
- FIG. 5 is a method for unlocking a mobile terminal using face recognition according to an embodiment of the present application, including the following steps:
- S501 Trigger face recognition.
- optionally, when face recognition is successful, the mobile terminal is unlocked.
- S504 Determine whether there is a posture adjustment according to the second state of the mobile terminal, and if there is a posture adjustment, face recognition is automatically triggered. Similar to S303, S504 can also be split into three steps: (1) detecting the second state of the mobile terminal; (2) determining whether there is a posture adjustment according to the second state of the mobile terminal; (3) if there is a posture adjustment, automatically triggering face recognition.
- FIG. 6 is a method for obtaining access right of an application program using face recognition provided in an embodiment of the present application. The steps include:
- S604 Determine whether there is a posture adjustment according to the second state of the mobile terminal, and if there is a posture adjustment, face recognition is automatically triggered. Similar to S303, S604 can also be split into three steps: (1) detecting the second state of the mobile terminal; (2) determining whether there is a posture adjustment according to the second state of the mobile terminal; (3) if there is a posture adjustment, automatically triggering face recognition.
- FIG. 7 is a method for obtaining certain data access authority using face recognition provided by an embodiment of the present application, including the following steps:
- S701 Trigger face recognition. Optionally, when face recognition is successful, access to the data is obtained.
- S703 Give a gesture adjustment prompt according to the first state of the mobile terminal.
- S704 Determine whether there is a posture adjustment according to the second state of the mobile terminal, and if there is a posture adjustment, face recognition is automatically triggered. Similar to S303, S704 can also be split into three steps: (1) detecting the second state of the mobile terminal; (2) determining whether there is a posture adjustment according to the second state of the mobile terminal; (3) if there is a posture adjustment, automatically triggering face recognition.
- Figure 8 uses face recognition to unlock a mobile terminal as an example.
- a method for automatically triggering face recognition includes the following steps:
- the reason for the face recognition failure may be that the distance between the mobile terminal and the user's face is too close or too far.
- an attitude adjustment prompt is provided to adjust the distance between the mobile terminal and the user's face.
- the first distance may be the distance between the mobile terminal and the user's face detected by the sensor when face recognition fails. When the distance is too close, an attitude adjustment prompt for increasing the distance can be given; when the distance is too far, an attitude adjustment prompt for decreasing the distance can be given.
- S804 Determine whether there is a posture adjustment according to the detected second distance between the mobile terminal and the user's face, and if there is a posture adjustment, the face recognition is automatically triggered.
- after the mobile terminal gives the posture adjustment prompt, the sensor detects the second distance between the mobile terminal and the user's face and compares it with the first distance; if a corresponding change has occurred, it is determined that there is a posture adjustment. For example, when the posture adjustment prompt is to increase the distance, if the second distance is greater than the first distance, it is determined that there is a posture adjustment. Similar to S303, S804 can also be split into three steps: (1) detecting the second distance between the mobile terminal and the user's face; (2) determining whether there is a posture adjustment according to the second distance between the mobile terminal and the user's face; (3) if there is a posture adjustment, automatically triggering face recognition.
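- a compact sketch of the decision in S804, assuming distance readings in centimetres and hypothetical too-close/too-far limits (the patent does not fix these numbers):

```python
def distance_adjusted(first_cm, second_cm, too_close_cm=15.0, too_far_cm=45.0):
    """True when the second distance changed in the direction the prompt asked for."""
    if first_cm < too_close_cm:          # the prompt asked the user to increase the distance
        return second_cm > first_cm
    if first_cm > too_far_cm:            # the prompt asked the user to decrease the distance
        return second_cm < first_cm
    return False                         # distance was not the diagnosed cause

# FIG. 9 example: 10 cm at position A was too close; 20 cm after the prompt counts as an adjustment.
print(distance_adjusted(10.0, 20.0))     # True -> face recognition is automatically re-triggered
```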
- FIG. 9 is a schematic diagram of unlocking a mobile terminal using face recognition.
- User 1 triggers face recognition to unlock mobile phone 200. Assuming that mobile phone 200 is located at position A at the beginning, the distance between mobile phone 200 and the face of user 1 is the first distance. If the first distance is too close, for example, the first distance is 10 centimeters, the mobile phone 200 may not be able to collect a suitable face image, which may cause the face recognition unlocking failure.
- the sensor of the mobile phone 200 can detect the first distance between the mobile phone 200 and the face of user 1. Based on this first distance, the mobile phone 200 can determine that the cause of the face recognition failure is that the mobile phone 200 is too close to the face of user 1, and it therefore gives a posture adjustment prompt to increase the distance. For example, the screen of the mobile phone 200 displays the prompt "Please move the phone farther away", or the speaker plays the prompt "Please move the phone farther away", and so on. Optionally, the step of giving the posture adjustment prompt may also be omitted.
- the sensor of the mobile phone 200 can detect the second distance between the mobile phone 200 and the face of the user 1.
- the user 1 moves the mobile phone 200 away according to the gesture adjustment prompt.
- the distance between the mobile phone 200 and the face of the user 1 is the second distance. Assuming that the second distance is 20 cm, since the second distance is greater than the first distance and meets the attitude adjustment prompt for increasing the distance, it is determined that there is an attitude adjustment, and the mobile phone 200 may start the front camera to automatically trigger face recognition.
- if the mobile phone 200 can collect a suitable face image at the second distance from the face of user 1, and the matching degree with the pre-stored face image is greater than the set threshold, face recognition succeeds. When face recognition succeeds, the mobile phone 200 can be unlocked.
- FIG. 10 uses unlocking a mobile terminal with face recognition as an example.
- a method for automatically triggering face recognition when a user holds the mobile terminal at an excessively inclined angle includes the following steps:
- the reason for the failure of face recognition may be that the tilt angle is too small, which makes it impossible to collect a suitable face image.
- a posture adjustment prompt is provided to adjust the angle of the plane of the display of the mobile terminal with respect to the horizontal plane.
- the first tilt angle may be the angle, less than or equal to 90 degrees, formed by the plane of the mobile terminal's display and the horizontal plane, as detected by the sensor when face recognition fails.
- after the mobile terminal gives the posture adjustment prompt, the sensor detects the second tilt angle formed by the plane of the mobile terminal's display relative to the horizontal plane and compares it with the first tilt angle; if a corresponding change has occurred, it is determined that there is a posture adjustment. For example, when the posture adjustment prompt is to hold the phone vertically (equivalent to increasing the tilt angle), if the second tilt angle is greater than the first tilt angle, it is determined that there is a posture adjustment.
- S1004 can also be split into three steps: (1) detecting the second tilt angle formed by the plane of the mobile terminal's display relative to the horizontal plane; (2) determining whether there is a posture adjustment according to that second tilt angle; (3) if there is a posture adjustment, automatically triggering face recognition.
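- mirroring the distance case, the tilt decision can be sketched as follows (angles in degrees; the prompt to hold the phone vertically is assumed to mean increasing the tilt angle):

```python
def tilt_adjusted(first_deg, second_deg):
    """True when the tilt angle increased after a 'hold the phone vertically' prompt."""
    return second_deg > first_deg

# FIG. 11 example: 40 degrees at position A, 80 degrees after the prompt.
print(tilt_adjusted(40.0, 80.0))   # True -> the front camera starts and recognition re-runs
```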
- FIG. 11 is a schematic diagram of unlocking a mobile terminal using face recognition.
- User 1 triggers face recognition to unlock the mobile phone 200.
- when the mobile phone 200 is at position A, the angle of the plane where the display of the mobile phone 200 is located relative to the horizontal plane is the first tilt angle. If the mobile phone 200 is too tilted, that is, the first tilt angle is too small, for example 40 degrees, the mobile phone 200 may fail to collect a suitable face image, causing the face recognition unlocking to fail.
- the sensor of the mobile phone 200 can detect the first tilt angle formed by the plane where the display of the mobile phone 200 is located relative to the horizontal plane. According to this first tilt angle, the mobile phone 200 can determine that the cause of the face recognition failure is that this first tilt angle is too small, and it therefore gives a posture adjustment prompt to increase the tilt angle. For example, the prompt "please hold the phone vertically" is displayed on the display of the mobile phone 200, or the prompt "please hold the phone vertically" is played by the speaker.
- the step of giving a gesture adjustment prompt may also be omitted.
- the sensor of the mobile phone 200 can detect a second inclination angle formed by the plane where the display of the mobile phone 200 is relative to the horizontal plane.
- the user 1 adjusts the tilt angle of the mobile phone 200 according to the gesture adjustment prompt. If the mobile phone 200 is located at the position B, the angle formed by the plane of the mobile phone 200 relative to the horizontal plane is the second tilt angle. Assume that the second tilt angle is 80 degrees. Since the second tilt angle is greater than the first tilt angle, and it meets the attitude adjustment prompt for increasing the tilt angle, it is determined that there is a posture adjustment, and the mobile phone 200 may start the front camera to automatically trigger face recognition.
- if a suitable face image can be collected at the second tilt angle of the plane where the display of the mobile phone 200 is located relative to the horizontal plane, and the matching degree with the pre-stored face image is greater than the set threshold, face recognition succeeds.
- the mobile phone 200 can be unlocked.
- the first distance between the mobile terminal and the user's face and the first tilt angle formed by the plane of the mobile terminal's display relative to the horizontal plane can also be detected at the same time, and posture adjustment prompts for adjusting both the distance and the tilt angle can then be given.
- when face recognition is successful, the mobile terminal is unlocked.
- face recognition can be used in conjunction with other authentication methods, such as password verification, gesture recognition, fingerprint recognition, iris recognition, voiceprint recognition, and so on.
- when the number of face recognition failures reaches a certain threshold, for example 3 times, another authentication method may be used to unlock the mobile terminal instead.
- FIG. 12 is a flowchart of a method for unlocking a mobile terminal using face recognition, including the following steps:
- S1201 Trigger face recognition.
- S1202 Determine whether the face recognition is successful. Optionally, when the face recognition is successful, the mobile terminal is unlocked and the process ends.
- the condition for performing face recognition again is that the number of face recognition failures is less than a preset threshold. For example, if the preset threshold is 3, it is determined whether the number of face recognition failures is less than 3; if yes, S1204 is performed; if it is 3 or more, the condition for performing face recognition again is not met. If the condition is not met, the process ends, or another unlocking method is used, such as password verification.
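- the failure-count bookkeeping behind this condition can be sketched as a simple counter (the threshold of 3 follows the example above, and the recognizer and fallback below are stand-in callables):

```python
MAX_FAILURES = 3   # preset threshold from the example above

def unlock_with_retries(recognize, fall_back):
    """Keep allowing face recognition while the failure count stays below the threshold."""
    failures = 0
    while failures < MAX_FAILURES:       # condition for recognizing again: failures < threshold
        if recognize():
            return "unlocked"
        failures += 1
    return fall_back()                   # e.g. fall back to password verification

# Toy usage: the third attempt succeeds, so no fallback is needed.
attempts = iter([False, False, True])
print(unlock_with_retries(lambda: next(attempts), lambda: "password_required"))
```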
- the first state may be a first distance between the mobile terminal and the user's face when face recognition fails, or a first tilt angle formed by a plane on which the display of the mobile terminal is located with respect to a horizontal plane, and so on.
- the mobile terminal may detect the first state of the mobile terminal with any suitable sensor.
- S1205 Give a gesture adjustment prompt according to the first state of the mobile terminal.
- S1207 Determine whether there is a posture adjustment according to the second state of the mobile terminal. If there is no attitude adjustment, the process ends.
- an embodiment of the present application provides a device including a camera 131, a processor 132, a memory 133, and a sensor 134.
- the processor 132 instructs the camera 131 to collect an image of the user's face, and compares the image with the face image pre-stored in the memory 133.
- the processor 132 determines the degree of matching between the collected image and the pre-stored image. When the degree of matching is greater than a preset threshold, it determines that face recognition succeeds and grants the user the corresponding operation authority; when the degree of matching is less than the preset threshold, it determines that face recognition fails, and the user is not given the corresponding operation authority.
- the processor 132 instructs the sensor 134 to detect the first state of the device.
- the first state may be the state of the device when face recognition fails; specifically, the tilt angle of the device, the distance from the user's face, and the like may be detected.
- the processor 132 gives a posture adjustment prompt according to the first state.
- the posture adjustment prompt can be output to the user through a display, a speaker, and other components.
- after giving the posture adjustment prompt, the processor 132 instructs the sensor 134 to detect the second state of the device to determine whether there is a posture adjustment. If it is determined that there is a posture adjustment, the processor 132 triggers face recognition.
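- one possible wiring of these components is sketched below; the class and method names are assumptions, since the embodiment only names the camera 131, processor 132, memory 133 and sensor 134 and their roles:

```python
class Device:
    """Illustrative wiring of the camera, memory, sensor and prompt output."""

    def __init__(self, camera, memory, sensor, output, threshold=0.8):
        self.camera = camera      # collects the user's face image
        self.memory = memory      # holds the pre-stored face image
        self.sensor = sensor      # reports tilt angle / distance to the user's face
        self.output = output      # display, speaker, ... used for the prompt
        self.threshold = threshold

    def recognize(self):
        image = self.camera.capture()
        return self.memory.compare(image) > self.threshold  # matching degree vs. threshold

    def handle_failure(self):
        first_state = self.sensor.read()                  # first state: state at failure time
        self.output.show("please adjust your posture: " + str(first_state))
        second_state = self.sensor.read()                 # second state: after the prompt
        if second_state != first_state:                   # a real check would compare the change
            return self.recognize()                       # with the prompt content (see above)
        return False
```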
- an embodiment of the present application provides an apparatus, including a processor, a memory, and one or more programs.
- one or more programs are stored in the memory and configured to be executed by one or more processors.
- the program or programs include instructions for: detecting the first state of the device when face recognition fails; giving a posture adjustment prompt based on the first state; determining whether there is a posture adjustment based on the second state of the device; and, if there is a posture adjustment, automatically triggering face recognition.
- an embodiment of the present application provides a storage medium or a computer program product for storing computer software instructions, which are used to: detect a first state of a mobile terminal when face recognition fails; give a posture adjustment prompt according to the first state of the mobile terminal; and determine whether there is a posture adjustment according to a second state of the mobile terminal, and if there is a posture adjustment, automatically trigger face recognition.
- an embodiment of the present application provides a device including a face recognition unit 141, a processing unit 142, a prompting unit 143, and a state detection unit 144.
- the face recognition unit 141 can collect an image of the user's face and compare it with a pre-stored face image.
- the processing unit 142 determines the matching degree between the collected image and the pre-stored image. When the matching degree is greater than a preset threshold, it determines that the face recognition is successful and grants the user the corresponding operation authority; when the matching degree is less than the preset threshold, it determines that the face recognition fails and does not grant the user the corresponding operation authority.
- the state detection unit 144 detects a first state of the device.
- the first state may be the state of the device when face recognition fails; specifically, the tilt angle of the device, the distance from the user's face, and the like may be detected.
- the prompting unit 143 gives a posture adjustment prompt according to the first state.
- the posture adjustment prompt can be output to the user through a display, a speaker, and other components.
- after giving the posture adjustment prompt, the state detection unit 144 detects the second state of the device to determine whether there is a posture adjustment. If it is determined that there is a posture adjustment, the face recognition unit 141 automatically triggers face recognition.
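- the same behaviour, split across the four units named above, might look like the following sketch (the unit names follow the text; the method signatures and collaborators are assumptions):

```python
class StateDetectionUnit:
    def __init__(self, sensor):
        self.sensor = sensor
    def detect(self):
        return self.sensor.read()

class PromptingUnit:
    def __init__(self, display):
        self.display = display
    def prompt(self, state):
        self.display.show("please adjust your posture: " + str(state))

class FaceRecognitionUnit:
    def __init__(self, camera, compare):
        self.camera, self.compare = camera, compare
    def recognize(self):
        return self.compare(self.camera.capture()) > 0.8  # matching degree vs. preset threshold

class ProcessingUnit:
    """Coordinates the other units as described in the paragraphs above."""
    def __init__(self, recognizer, detector, prompter):
        self.recognizer, self.detector, self.prompter = recognizer, detector, prompter

    def on_recognition_failure(self):
        first_state = self.detector.detect()
        self.prompter.prompt(first_state)
        second_state = self.detector.detect()
        if second_state != first_state:          # posture adjustment detected
            return self.recognizer.recognize()   # automatically trigger face recognition again
        return False
```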
- the disclosed apparatus and method may be implemented in other ways.
- the device embodiments described above are merely illustrative. The division of the modules or units is only a logical function division; in actual implementation there may be other division manners. For example, multiple units or components may be combined or integrated into another device, or some features may be ignored or not implemented.
- the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, which may be electrical, mechanical or other forms.
- the units described as separate components may or may not be physically separated, and the components displayed as units may be one physical unit or multiple physical units, that is, they may be located in one place or distributed to multiple different places. Some or all of the units may be selected according to actual needs to achieve the objective of the solution of this embodiment.
- each functional unit in each embodiment of the present application may be integrated into one processing unit, or each of the units may exist separately physically, or two or more units may be integrated into one unit.
- the above integrated unit may be implemented in the form of hardware or in the form of software functional unit.
- when the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a readable storage medium.
- the technical solutions of the embodiments of the present application, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for causing a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application.
- the foregoing storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or any other medium that can store program code.
Claims (40)
- A face recognition method, characterized by comprising: triggering face recognition; when the face recognition fails, detecting a first state of a mobile terminal; giving a posture adjustment prompt according to the first state; detecting a second state of the mobile terminal; and determining, according to the second state, whether there is a posture adjustment, and if there is the posture adjustment, automatically triggering face recognition.
- The method according to claim 1, characterized in that the triggering face recognition comprises: collecting a face image of a user and comparing it with a pre-stored face image.
- The method according to claim 2, characterized in that the face image comprises a face picture or a video.
- The method according to claim 2, characterized in that the pre-stored face image is stored in a memory of the mobile terminal, or in a server that can communicate with the mobile terminal.
- The method according to any one of claims 1-4, characterized in that, after the automatically triggering face recognition, the method further comprises: when the face recognition succeeds, obtaining operation authority of the mobile terminal.
- The method according to claim 5, characterized in that the obtaining operation authority of the mobile terminal comprises any one of the following: unlocking the mobile terminal, obtaining access to an application installed on the mobile terminal, or obtaining access to data stored on the mobile terminal.
- The method according to any one of claims 1-4, characterized in that, after the automatically triggering face recognition, the method further comprises: when the face recognition fails, performing verification using any one of the following authentication methods: password verification, gesture recognition, fingerprint recognition, iris recognition, or voiceprint recognition.
- The method according to any one of claims 1-4, characterized in that, after the automatically triggering face recognition, the method further comprises: when the face recognition fails, determining whether a condition for performing face recognition again is met; if the condition for performing face recognition again is met, automatically triggering face recognition again; and if the condition for performing face recognition again is not met, performing verification using any one of the following authentication methods: password verification, gesture recognition, fingerprint recognition, iris recognition, or voiceprint recognition.
- The method according to claim 8, characterized in that the condition for performing face recognition again comprises: the number of face recognition failures is less than a preset threshold.
- The method according to any one of claims 1-9, characterized in that the giving a posture adjustment prompt according to the first state comprises: analyzing, by the mobile terminal, a cause of the face recognition failure according to the first state, finding a solution corresponding to the cause in a preset database, and giving the posture adjustment prompt according to the solution.
- The method according to any one of claims 1-10, characterized in that the determining, according to the second state, whether there is a posture adjustment comprises: determining whether a change of the second state relative to the first state is the same as the content of the posture adjustment prompt.
- The method according to any one of claims 1-11, characterized in that the posture adjustment prompt comprises any combination of the following prompt manners: text, picture, voice, video, light, or vibration.
- The method according to any one of claims 1-12, characterized in that: the first state is a first distance between the mobile terminal and the user's face when the face recognition fails, and the second state is a second distance between the mobile terminal and the user's face after the posture adjustment prompt is given.
- The method according to any one of claims 1-12, characterized in that: the first state is a first tilt angle of the plane of the display of the mobile terminal relative to a horizontal plane when the face recognition fails, and the second state is a second tilt angle of the plane of the display of the mobile terminal relative to the horizontal plane after the posture adjustment prompt is given.
- An apparatus, comprising a camera, a processor, a memory, and a sensor, characterized in that the processor is configured to: trigger face recognition, instruct the camera to collect a face image of a user, and compare it with a face image pre-stored in the memory; when the face recognition fails, instruct the sensor to detect a first state of the apparatus; give a posture adjustment prompt according to the first state; instruct the sensor to detect a second state of the apparatus; and determine, according to the second state, whether there is a posture adjustment, and if there is the posture adjustment, automatically trigger face recognition.
- The apparatus according to claim 15, characterized in that the face image comprises a face picture or a video.
- The apparatus according to claim 15 or 16, characterized in that, after the automatically triggering face recognition, the processor is further configured to: when the face recognition succeeds, obtain operation authority of the apparatus.
- The apparatus according to claim 17, characterized in that the obtaining operation authority of the apparatus comprises any one of the following: unlocking the apparatus, obtaining access to an application installed on the apparatus, or obtaining access to data stored on the apparatus.
- The apparatus according to claim 15 or 16, characterized in that, after the automatically triggering face recognition, the processor is further configured to: when the face recognition fails, perform verification using any one of the following authentication methods: password verification, gesture recognition, fingerprint recognition, iris recognition, or voiceprint recognition.
- The apparatus according to claim 15 or 16, characterized in that, after the automatically triggering face recognition, the processor is further configured to: when the face recognition fails, determine whether a condition for performing face recognition again is met; if the condition for performing face recognition again is met, automatically trigger face recognition again; and if the condition for performing face recognition again is not met, perform verification using any one of the following authentication methods: password verification, gesture recognition, fingerprint recognition, iris recognition, or voiceprint recognition.
- The apparatus according to claim 20, characterized in that the condition for performing face recognition again comprises: the number of face recognition failures is less than a preset threshold.
- The apparatus according to any one of claims 15-21, characterized in that the giving a posture adjustment prompt according to the first state comprises: analyzing a cause of the face recognition failure according to the first state, finding a solution corresponding to the cause in a preset database, and giving the posture adjustment prompt according to the solution.
- The apparatus according to any one of claims 15-22, characterized in that the determining, according to the second state, whether there is a posture adjustment comprises: determining whether a change of the second state relative to the first state is the same as the content of the posture adjustment prompt.
- The apparatus according to any one of claims 15-23, characterized in that the posture adjustment prompt comprises any combination of the following prompt manners: text, picture, voice, video, light, or vibration.
- The apparatus according to any one of claims 15-24, characterized in that: the first state is a first distance between the apparatus and the user's face when the face recognition fails, and the second state is a second distance between the apparatus and the user's face after the posture adjustment prompt is given.
- The apparatus according to any one of claims 15-24, characterized in that: the apparatus further comprises a display; the first state is a first tilt angle of the plane of the display relative to a horizontal plane when the face recognition fails, and the second state is a second tilt angle of the plane of the display relative to the horizontal plane after the posture adjustment prompt is given.
- An apparatus, comprising a face recognition unit, a processing unit, a prompting unit, and a state detection unit, characterized in that: the processing unit is configured to trigger face recognition; the face recognition unit is configured to collect a face image of a user and compare it with a pre-stored face image; the state detection unit is configured to detect a first state of the apparatus when the face recognition fails; the prompting unit is configured to give a posture adjustment prompt according to the first state; the state detection unit is further configured to detect a second state of the apparatus; and the processing unit is further configured to determine, according to the second state, whether there is a posture adjustment, and if there is the posture adjustment, automatically trigger face recognition.
- The apparatus according to claim 27, characterized in that the face image comprises a face picture or a video.
- The apparatus according to claim 27 or 28, characterized in that, after the automatically triggering face recognition, the processing unit is further configured to: when the face recognition succeeds, obtain operation authority of the apparatus.
- The apparatus according to claim 29, characterized in that the obtaining operation authority of the apparatus comprises any one of the following: unlocking the apparatus, obtaining access to an application installed on the apparatus, or obtaining access to data stored on the apparatus.
- The apparatus according to claim 27 or 28, characterized in that, after the automatically triggering face recognition, the processing unit is further configured to: when the face recognition fails, perform verification using any one of the following authentication methods: password verification, gesture recognition, fingerprint recognition, iris recognition, or voiceprint recognition.
- The apparatus according to claim 27 or 28, characterized in that, after the automatically triggering face recognition, the processing unit is further configured to: when the face recognition fails, determine whether a condition for performing face recognition again is met; and if the condition for performing face recognition again is not met, perform verification using any one of the following authentication methods: password verification, gesture recognition, fingerprint recognition, iris recognition, or voiceprint recognition; and the face recognition unit is configured to automatically trigger face recognition again if the condition for performing face recognition again is met.
- The apparatus according to claim 32, characterized in that the condition for performing face recognition again comprises: the number of face recognition failures is less than a preset threshold.
- The apparatus according to any one of claims 27-33, characterized in that the giving a posture adjustment prompt according to the first state comprises: analyzing a cause of the face recognition failure according to the first state, finding a solution corresponding to the cause in a preset database, and giving the posture adjustment prompt according to the solution.
- The apparatus according to any one of claims 27-34, characterized in that the determining, according to the second state, whether there is a posture adjustment comprises: determining whether a change of the second state relative to the first state is the same as the content of the posture adjustment prompt.
- The apparatus according to any one of claims 27-35, characterized in that the posture adjustment prompt comprises any combination of the following prompt manners: text, picture, voice, video, light, or vibration.
- The apparatus according to any one of claims 27-36, characterized in that: the first state is a first distance between the apparatus and the user's face when the face recognition fails, and the second state is a second distance between the apparatus and the user's face after the posture adjustment prompt is given.
- The apparatus according to any one of claims 27-36, characterized in that: the first state is a first tilt angle of the plane of the terminal relative to a horizontal plane when the face recognition fails, and the second state is a second tilt angle of the plane of the terminal relative to the horizontal plane after the posture adjustment prompt is given.
- A computer storage medium storing instructions, characterized in that, when the instructions are run on a mobile terminal, the mobile terminal is caused to perform the method according to any one of claims 1-14.
- A computer program product containing instructions, which, when run on a mobile terminal, causes the mobile terminal to perform the method according to any one of claims 1-14.
Priority Applications (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17/270,165 US20210201001A1 (en) | 2018-08-28 | 2018-08-28 | Facial Recognition Method and Apparatus |
KR1020217005781A KR20210035277A (ko) | 2018-08-28 | 2018-08-28 | 얼굴 인식 방법 및 장치 |
JP2021510869A JP7203955B2 (ja) | 2018-08-28 | 2018-08-28 | 顔認識方法および装置 |
EP18931528.6A EP3819810A4 (en) | 2018-08-28 | 2018-08-28 | FACIAL RECOGNITION PROCESS AND APPARATUS |
CN201880096890.7A CN112639801A (zh) | 2018-08-28 | 2018-08-28 | 一种人脸识别的方法及装置 |
PCT/CN2018/102680 WO2020041971A1 (zh) | 2018-08-28 | 2018-08-28 | 一种人脸识别的方法及装置 |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2018/102680 WO2020041971A1 (zh) | 2018-08-28 | 2018-08-28 | 一种人脸识别的方法及装置 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020041971A1 (zh) | 2020-03-05 |
Family
ID=69644752
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2018/102680 WO2020041971A1 (zh) | 2018-08-28 | 2018-08-28 | 一种人脸识别的方法及装置 |
Country Status (6)
Country | Link |
---|---|
US (1) | US20210201001A1 (zh) |
EP (1) | EP3819810A4 (zh) |
JP (1) | JP7203955B2 (zh) |
KR (1) | KR20210035277A (zh) |
CN (1) | CN112639801A (zh) |
WO (1) | WO2020041971A1 (zh) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114863510A (zh) * | 2022-03-25 | 2022-08-05 | 荣耀终端有限公司 | 一种人脸识别方法和装置 |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11941629B2 (en) * | 2019-09-27 | 2024-03-26 | Amazon Technologies, Inc. | Electronic device for automated user identification |
EP4124027A1 (en) * | 2020-03-18 | 2023-01-25 | NEC Corporation | Program, mobile terminal, authentication processing device, image transmission method, and authentication processing method |
US11270102B2 (en) | 2020-06-29 | 2022-03-08 | Amazon Technologies, Inc. | Electronic device for automated user identification |
KR102589834B1 (ko) * | 2021-12-28 | 2023-10-16 | 동의과학대학교 산학협력단 | 치매 스크리닝 디퓨저 장치 |
WO2023149708A1 (en) * | 2022-02-01 | 2023-08-10 | Samsung Electronics Co., Ltd. | Method and system for a face detection management |
CN114694282A (zh) * | 2022-03-11 | 2022-07-01 | 深圳市凯迪仕智能科技有限公司 | 一种基于智能锁的语音互动的方法及相关装置 |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101771539A (zh) * | 2008-12-30 | 2010-07-07 | 北京大学 | 一种基于人脸识别的身份认证方法 |
CN105389575A (zh) * | 2015-12-24 | 2016-03-09 | 北京旷视科技有限公司 | 生物数据的处理方法和装置 |
CN106886697A (zh) * | 2015-12-15 | 2017-06-23 | 中国移动通信集团公司 | 认证方法、认证平台、用户终端及认证系统 |
Family Cites Families (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR100467152B1 (ko) * | 2003-11-25 | 2005-01-24 | (주)버뮤다정보기술 | 얼굴인식시스템에서 개인 인증 방법 |
WO2013100697A1 (en) * | 2011-12-29 | 2013-07-04 | Intel Corporation | Method, apparatus, and computer-readable recording medium for authenticating a user |
CN102760042A (zh) * | 2012-06-18 | 2012-10-31 | 惠州Tcl移动通信有限公司 | 一种基于图片脸部识别进行解锁的方法、系统及电子设备 |
US20170013464A1 (en) * | 2014-07-10 | 2017-01-12 | Gila FISH | Method and a device to detect and manage non legitimate use or theft of a mobile computerized device |
JP2016081071A (ja) * | 2014-10-09 | 2016-05-16 | 富士通株式会社 | 生体認証装置、生体認証方法及びプログラム |
JP6418033B2 (ja) * | 2015-03-30 | 2018-11-07 | オムロン株式会社 | 個人識別装置、識別閾値設定方法、およびプログラム |
CN104898832B (zh) * | 2015-05-13 | 2020-06-09 | 深圳彼爱其视觉科技有限公司 | 一种基于智能终端的3d实时眼镜试戴方法 |
JP6324939B2 (ja) * | 2015-11-05 | 2018-05-16 | 株式会社ソニー・インタラクティブエンタテインメント | 情報処理装置およびログイン制御方法 |
US10210318B2 (en) * | 2015-12-09 | 2019-02-19 | Daon Holdings Limited | Methods and systems for capturing biometric data |
WO2017208519A1 (ja) * | 2016-05-31 | 2017-12-07 | シャープ株式会社 | 生体認証装置、携帯端末装置、制御プログラム |
CN107016348B (zh) * | 2017-03-09 | 2022-11-22 | Oppo广东移动通信有限公司 | 结合深度信息的人脸检测方法、检测装置和电子装置 |
CN107463883A (zh) * | 2017-07-18 | 2017-12-12 | 广东欧珀移动通信有限公司 | 生物识别方法及相关产品 |
CN107818251B (zh) * | 2017-09-27 | 2021-03-23 | 维沃移动通信有限公司 | 一种人脸识别方法及移动终端 |
CN107679514A (zh) * | 2017-10-20 | 2018-02-09 | 维沃移动通信有限公司 | 一种人脸识别方法及电子设备 |
CN108090340B (zh) * | 2018-02-09 | 2020-01-10 | Oppo广东移动通信有限公司 | 人脸识别处理方法、人脸识别处理装置及智能终端 |
CN108319837A (zh) * | 2018-02-13 | 2018-07-24 | 广东欧珀移动通信有限公司 | 电子设备、人脸模板录入方法及相关产品 |
2018
- 2018-08-28 US US17/270,165 patent/US20210201001A1/en not_active Abandoned
- 2018-08-28 KR KR1020217005781A patent/KR20210035277A/ko not_active Application Discontinuation
- 2018-08-28 WO PCT/CN2018/102680 patent/WO2020041971A1/zh unknown
- 2018-08-28 CN CN201880096890.7A patent/CN112639801A/zh active Pending
- 2018-08-28 EP EP18931528.6A patent/EP3819810A4/en active Pending
- 2018-08-28 JP JP2021510869A patent/JP7203955B2/ja active Active
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101771539A (zh) * | 2008-12-30 | 2010-07-07 | 北京大学 | 一种基于人脸识别的身份认证方法 |
CN106886697A (zh) * | 2015-12-15 | 2017-06-23 | 中国移动通信集团公司 | 认证方法、认证平台、用户终端及认证系统 |
CN105389575A (zh) * | 2015-12-24 | 2016-03-09 | 北京旷视科技有限公司 | 生物数据的处理方法和装置 |
Non-Patent Citations (1)
Title |
---|
See also references of EP3819810A4 * |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114863510A (zh) * | 2022-03-25 | 2022-08-05 | 荣耀终端有限公司 | 一种人脸识别方法和装置 |
CN114863510B (zh) * | 2022-03-25 | 2023-08-01 | 荣耀终端有限公司 | 一种人脸识别方法和装置 |
Also Published As
Publication number | Publication date |
---|---|
JP2021535503A (ja) | 2021-12-16 |
JP7203955B2 (ja) | 2023-01-13 |
EP3819810A1 (en) | 2021-05-12 |
US20210201001A1 (en) | 2021-07-01 |
KR20210035277A (ko) | 2021-03-31 |
EP3819810A4 (en) | 2021-08-11 |
CN112639801A (zh) | 2021-04-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2020041971A1 (zh) | 一种人脸识别的方法及装置 | |
CN108960209B (zh) | 身份识别方法、装置及计算机可读存储介质 | |
WO2017181769A1 (zh) | 一种人脸识别方法、装置和系统、设备、存储介质 | |
US9049983B1 (en) | Ear recognition as device input | |
CN108924737B (zh) | 定位方法、装置、设备及计算机可读存储介质 | |
CN107231470B (zh) | 图像处理方法、移动终端及计算机可读存储介质 | |
CN110096865B (zh) | 下发验证方式的方法、装置、设备及存储介质 | |
US20220236848A1 (en) | Display Method Based on User Identity Recognition and Electronic Device | |
WO2018133282A1 (zh) | 一种动态识别的方法及终端设备 | |
WO2020048392A1 (zh) | 应用程序的病毒检测方法、装置、计算机设备及存储介质 | |
CN106548144B (zh) | 一种虹膜信息的处理方法、装置及移动终端 | |
US12039023B2 (en) | Systems and methods for providing a continuous biometric authentication of an electronic device | |
US20220417359A1 (en) | Remote control device, information processing method and recording program | |
CN111241499B (zh) | 应用程序登录的方法、装置、终端及存储介质 | |
US20240095329A1 (en) | Cross-Device Authentication Method and Electronic Device | |
WO2022062808A1 (zh) | 头像生成方法及设备 | |
WO2021129698A1 (zh) | 拍摄方法及电子设备 | |
US11617055B2 (en) | Delivering information to users in proximity to a communication device | |
WO2021083086A1 (zh) | 信息处理方法及设备 | |
US10270963B2 (en) | Angle switching method and apparatus for image captured in electronic terminal | |
WO2021082620A1 (zh) | 一种图像识别方法及电子设备 | |
CN111694892B (zh) | 资源转移方法、装置、终端、服务器及存储介质 | |
US20230040115A1 (en) | Information processing method and electronic device | |
EP3667497A1 (en) | Facial information preview method and related product | |
CN112765571B (zh) | 权限管理方法、系统、装置、服务器及存储介质 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 18931528; Country of ref document: EP; Kind code of ref document: A1 |
| ENP | Entry into the national phase | Ref document number: 20217005781; Country of ref document: KR; Kind code of ref document: A |
| ENP | Entry into the national phase | Ref document number: 2018931528; Country of ref document: EP; Effective date: 20210208; Ref document number: 2021510869; Country of ref document: JP; Kind code of ref document: A |
| NENP | Non-entry into the national phase | Ref country code: DE |