WO2019109768A1 - Task execution method, terminal device and computer-readable storage medium - Google Patents

Task execution method, terminal device and computer-readable storage medium

Info

Publication number
WO2019109768A1
WO2019109768A1 (PCT/CN2018/113787, CN2018113787W)
Authority
WO
WIPO (PCT)
Prior art keywords
face, image, terminal device, infrared, camera
Prior art date
Application number
PCT/CN2018/113787
Other languages
English (en)
French (fr)
Inventor
Huang Yuanhao (黄源浩)
Original Assignee
Shenzhen Orbbec Co., Ltd. (深圳奥比中光科技有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Orbbec Co., Ltd. (深圳奥比中光科技有限公司)
Publication of WO2019109768A1 publication Critical patent/WO2019109768A1/zh

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00 — Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/30 — Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F 21/31 — User authentication
    • G06F 21/32 — User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • G06F 21/70 — Protecting specific internal or peripheral components, in which the protection of a component leads to protection of the entire computer
    • G06F 21/71 — Protecting specific internal or peripheral components to assure secure computing or processing of information
    • G06F 3/00 — Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from the processing unit to the output unit, e.g. interface arrangements
    • G06F 3/01 — Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 — Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/013 — Eye tracking input arrangements
    • G06F 3/03 — Arrangements for converting the position or the displacement of a member into a coded form
    • G06F 3/033 — Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; accessories therefor
    • G06F 3/0346 — Pointing devices with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 — Arrangements for image or video recognition or understanding
    • G06V 10/10 — Image acquisition
    • G06V 10/12 — Details of acquisition arrangements; constructional details thereof
    • G06V 10/14 — Optical characteristics of the device performing the acquisition or of the illumination arrangements
    • G06V 10/143 — Sensing or illuminating at different wavelengths
    • G06V 40/00 — Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 — Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G06V 40/16 — Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161 — Detection; localisation; normalisation
    • G06V 40/166 — Detection; localisation; normalisation using acquisition arrangements
    • G06V 40/172 — Classification, e.g. identification
    • G06V 40/18 — Eye characteristics, e.g. of the iris

Definitions

  • The present application belongs to the field of computer technology, and more particularly relates to a task execution method, a terminal device, and a computer-readable storage medium.
  • Biometric recognition is widely used in security, home, smart hardware, and many other fields. Relatively mature biometric technologies (such as fingerprint recognition and iris recognition) are already widely deployed in mobile phones, computers, and other terminal devices.
  • Current face recognition methods are mainly based on color images. Such methods are affected by factors such as ambient light intensity and illumination direction, resulting in low recognition accuracy.
  • The present application provides a task execution method for a terminal device, a terminal device, and a computer-readable storage medium, to improve the accuracy of face recognition.
  • A first aspect provides a task execution method for a terminal device, including: after a face recognition application of the terminal device is activated, projecting active invisible light into a space; acquiring an image containing depth information; analyzing the image to determine whether the image contains a human face and, if so, to recognize the face; and controlling the terminal device to perform a corresponding operation according to the recognition result.
  • The active invisible light includes infrared floodlight.
  • the image includes a pure infrared image.
  • the image includes a depth image.
  • the active invisible light comprises infrared structured light.
  • the analyzing includes acquiring the distance and/or posture of the face by using the depth information.
  • The recognizing includes: adjusting the face image or the authorized face image according to the distance of the face, so that the two images are consistent in size (i.e., substantially the same).
  • The recognizing includes: adjusting the face image or the authorized face image according to the posture of the face, so that the posture of the face and that of the authorized face are consistent (i.e., substantially the same).
  • The corresponding operations include unlocking and payment.
  • A second aspect provides a face recognition method, including: after the face recognition application of the terminal device is activated, projecting active invisible light into a space; acquiring an image containing depth information; and analyzing the image to determine whether it contains a human face and, if so, to recognize the face.
  • The active invisible light includes infrared floodlight.
  • the image includes a pure infrared image.
  • the image includes a depth image.
  • the active invisible light comprises infrared structured light.
  • the analyzing includes acquiring the distance and/or posture of the face by using the depth information.
  • The recognizing includes: adjusting the face image or the authorized face image according to the distance of the face, so that the two images are consistent in size (i.e., substantially the same).
  • The recognizing includes: adjusting the face image or the authorized face image according to the posture of the face, so that the posture of the face and that of the authorized face are consistent (i.e., substantially the same).
  • A computer-readable storage medium is provided, storing instructions for performing the method of the first aspect or any one of its possible implementations.
  • A computer-readable storage medium is provided, storing instructions for performing the method of the second aspect or any one of its possible implementations.
  • A computer program product is provided, comprising instructions for performing the method of the first aspect or any one of its possible implementations.
  • A computer program product is provided, comprising instructions for performing the method of the second aspect or any one of its possible implementations.
  • A terminal device includes: an active light illuminator; a camera; a memory storing instructions; and a processor configured to execute the instructions to perform the method of the first aspect or any one of its possible implementations.
  • The active light illuminator is an infrared structured light projection module.
  • The camera is an infrared camera.
  • The infrared camera and the active light illuminator constitute a depth camera.
  • The image includes a depth image.
  • A terminal device includes: an active light illuminator; a camera; a memory storing instructions; and a processor configured to execute the instructions to perform the method of the second aspect or any one of its possible implementations.
  • The active light illuminator is an infrared structured light projection module.
  • The camera is an infrared camera.
  • The infrared camera and the active light illuminator constitute a depth camera.
  • The image includes a depth image.
  • The present application uses active invisible-light illumination to overcome ambient light interference, and performs face recognition on images containing depth information, thereby improving the accuracy of face recognition.
  • FIG. 1 is a schematic diagram of a face recognition application according to an embodiment of the present application.
  • FIG. 2 is a schematic structural diagram of a terminal device according to an embodiment of the present application.
  • FIG. 3 is a schematic flowchart of a task execution method according to an embodiment of the present application.
  • FIG. 4 is a schematic flowchart of a face information recognition method based on depth information according to an embodiment of the present application.
  • FIG. 5 is a schematic structural diagram of a terminal device according to an embodiment of the present application.
  • The term "connection" can refer both to a fixed mechanical connection and to an electrical connection for circuit communication.
  • The terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated.
  • Features defined with "first" or "second" may explicitly or implicitly include one or more of those features.
  • Unless specifically defined otherwise, "a plurality" means two or more.
  • Face recognition technology can be used in security, surveillance and other fields.
  • face recognition can be applied to perform operations such as unlocking and payment, and can also be applied to various aspects such as entertainment games.
  • Intelligent terminal devices include mobile phones, tablets, computers, televisions, and the like.
  • The images captured by such devices can be used for face detection and recognition, and the recognition results can then be used to perform other related operations.
  • The environment of terminal devices, especially mobile devices such as mobile phones and tablets, changes frequently.
  • Environmental changes affect the imaging of color cameras; for example, when the light is weak, the face cannot be imaged well.
  • The randomness of the face pose and/or of the distance between the face and the camera increases the difficulty and reduces the stability of face recognition.
  • the present application first provides a face recognition method and a terminal device based on depth information, which utilizes active invisible light to acquire an image containing depth information, and performs face recognition based on the image. Since the depth information is not sensitive to illumination, the accuracy of face recognition can be improved. Further, based on this, the present application provides a task execution method and a terminal device of a terminal device, which can perform different operations, such as unlocking, payment, and the like, by using the recognition result of the face recognition method described above.
  • the embodiments of the present application are exemplified in detail below with reference to the specific drawings.
  • FIG. 1 is a schematic diagram of a face recognition application according to an embodiment of the present application.
  • the user 10 holds a mobile terminal 11 (such as a mobile phone, a tablet, a player, etc.), and the mobile terminal 11 internally contains a camera 111 that can acquire a target (human face) image.
  • The camera 111 collects an image containing the face 101 and recognizes the face in the image; when the recognized face is an authorized face, the mobile terminal 11 unlocks, otherwise it remains locked.
  • When the current face recognition application is a payment or other application, the principle is similar to that of the unlock application.
  • FIG. 2 is a schematic structural diagram of a terminal device according to an embodiment of the present application.
  • the terminal device referred to herein may also be referred to as a face recognition device.
  • the terminal device may be, for example, the mobile terminal 11 as shown in FIG. 1.
  • The terminal device may include the processor 20 and, connected to it, an ambient light/proximity sensor 21, a display 22, a microphone 23, a radio frequency and baseband processor 24, an interface 25, a memory 26, a battery 27, a micro-electro-mechanical system (MEMS) sensor 28, an audio device 29, a camera 30, and the like.
  • the data transmission and signal communication can be realized by circuit connection between different units in FIG. 2.
  • FIG. 2 is only one example of the structure of the terminal device, and in other embodiments, the terminal device may also contain fewer structures or contain more other components.
  • the processor 20 can be used for overall control of the terminal device, and the processor 20 can be a single processor or a plurality of processor units.
  • processor 20 may include processor units of different functions.
  • Display 22 can be used to display images to present an application or the like to a user.
  • the display 22 can also include a touch function, and the display 22 can also function as a human-computer interaction interface for receiving user input.
  • the microphone 23 can be used to receive voice information and can be used to implement voice interaction with the user.
  • the RF and baseband processor 24 can be responsible for the communication functions of the terminal device, such as receiving and translating signals such as voice or text to enable information exchange between remote users.
  • the interface 25 can be used to connect the terminal device to the outside to further implement functions such as data transmission, power transmission, and the like.
  • the interface 25 can be, for example, a universal serial bus (USB) interface, a wireless fidelity (WIFI) interface, or the like.
  • the memory 26 can be used to save applications such as the unlock program 261, the payment program 262, and the like.
  • the memory 26 can also be used to store data related to the execution of the application, such as facial images, features, and the like.
  • the memory 26 can also be used to store code and data involved in the execution of the processor 20.
  • Memory 26 may include a single memory or multiple memories, which may be any form of memory capable of holding data, such as random access memory (RAM), flash memory, and the like. The memory 26 can either be part of the terminal device or be independent of it, such as cloud storage, in which case the saved data communicates with the terminal device through the interface 25 or the like.
  • Application programs such as the unlocking program 261 and the payment program 262 are generally stored in a computer-readable storage medium (such as a non-volatile readable storage medium), from which the processor 20 calls the corresponding program when executing the application. Some data involved in program execution, such as authorized face images or authorized face feature data, may also be stored in the memory 26.
  • The "computer" in "computer-readable storage medium" is a generalized concept and may refer to any device having an information processing function; in the embodiments of the present application, the computer may refer to the terminal device.
  • the terminal device may also include an ambient light/proximity sensor.
  • the ambient light sensor and proximity sensor can be an integrated single sensor or a separate ambient light sensor as well as a proximity sensor.
  • the ambient light sensor can be used to obtain illumination information of the current environment in which the terminal device is located. In one embodiment, automatic adjustment of screen brightness can be achieved based on the illumination information to provide a more comfortable display brightness for the human eye.
  • the proximity sensor measures whether an object is close to the terminal device, based on which some specific functions can be implemented. For example, in the process of answering a call, when the face is close enough to the terminal device, the touch function of the screen can be turned off to prevent accidental touch. In some embodiments, the proximity sensor can also quickly determine the approximate distance between the face and the terminal device.
  • Battery 27 can be used to provide power.
  • Audio device 29 can be used to implement voice input.
  • the audio device 29 can be, for example, a microphone or the like.
  • the MEMS sensor 28 can be used to obtain current state information of the terminal device, such as position, direction, acceleration, gravity, and the like.
  • the MEMS sensor 28 can include sensors such as accelerometers, gravimeters, gyroscopes, and the like.
  • The MEMS sensor 28 can be used to activate some face recognition applications. For example, when the user picks up the terminal device, the MEMS sensor 28 can capture this change and transmit it to the processor 20, which can then call the unlock application in memory 26 to activate it.
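The pick-up activation described above can be sketched as follows. The patent does not specify thresholds or a sensor API, so the gravity constant, threshold value, and function name here are illustrative assumptions:

```python
import math

GRAVITY = 9.81          # rest acceleration magnitude, m/s^2
PICKUP_THRESHOLD = 2.0  # assumed deviation (m/s^2) treated as a pick-up

def should_activate_unlock(ax: float, ay: float, az: float) -> bool:
    """Return True when the acceleration magnitude deviates enough from
    rest (gravity only), i.e. the device is being moved or picked up."""
    magnitude = math.sqrt(ax * ax + ay * ay + az * az)
    return abs(magnitude - GRAVITY) > PICKUP_THRESHOLD

print(should_activate_unlock(0.0, 0.0, 9.81))  # at rest: False
print(should_activate_unlock(3.0, 2.0, 12.0))  # being lifted: True
```

A real implementation would debounce over several samples and could combine orientation, as the patent also mentions activation on a particular device orientation, but the thresholding idea is the same.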
  • Camera 30 can be used to capture images, and in some applications, such as when a self-timer application is executed, processor 20 can control camera 30 to capture images and transmit the images to display 22 for display.
  • The camera 30 may acquire an image, and the processor 20 may process the image (including face detection and recognition) and perform the corresponding unlocking task according to the recognition result.
  • Camera 30 may be a single camera or multiple cameras; in some embodiments, camera 30 may include cameras that acquire visible light information, such as an RGB camera or a grayscale camera, as well as cameras that collect invisible light information, such as an infrared camera and/or an ultraviolet camera.
  • Camera 30 may include a depth camera for acquiring a depth image; the depth camera may be, for example, one or more of the following: a structured light depth camera, a time-of-flight (TOF) depth camera, a binocular depth camera, and the like.
  • camera 30 may include one or more of the following cameras: a light field camera, a wide-angle camera, a telephoto camera, and the like.
  • The camera 30 can be disposed at any position on the terminal device, such as the top or bottom end of the front plane (i.e., the plane of the display 22), or on the rear plane.
  • camera 30 can be placed in a front plane for capturing a user's face image.
  • camera 30 can be placed in a rear plane for taking pictures of the scene, and the like.
  • Cameras 30 can be placed in both the front and the rear plane; these can acquire images independently or can be controlled by the processor 20 to acquire images simultaneously.
  • The active light illuminator 31 can use a light source such as a laser diode, a semiconductor laser, or a light-emitting diode (LED) for projecting active light.
  • the active light projected by the active light illuminator 31 may be infrared light, ultraviolet light, or the like.
  • The active light illuminator 31 can be used to project infrared light with a wavelength of 940 nm, enabling it to operate in different environments with less interference from ambient light.
  • the number of active light illuminators 31 is configured according to actual needs, such as one or more active light illuminators.
  • the active light illuminator 31 can be a separate module mounted on the terminal device or integrated with other modules, such as the active light illuminator 31 can be part of the proximity sensor.
  • In face recognition based on color images, illumination, angle, distance, and other factors during face image acquisition seriously affect recognition accuracy and speed. For example, if the angle and distance of the currently collected face are inconsistent with those of the authorized face (generally a target entered and saved in advance, against which the current face is compared), feature extraction and comparison will be more time-consuming and the accuracy will also decrease.
  • FIG. 3 is a schematic diagram of an unlocking application based on face recognition according to an embodiment of the present application.
  • the unlocking application can be saved in the terminal device in the form of software or hardware. If the terminal device is currently in the locked state, the unlocking application is executed after activation.
  • The unlocking application is activated based on the output of the MEMS sensor; for example, the unlocking application is activated when the MEMS sensor detects a certain acceleration, or when the MEMS sensor detects a particular orientation of the terminal device (such as the device orientation in FIG. 1).
  • When the unlocking application is activated, the terminal device uses the active light illuminator to project active invisible light (301) toward the target object, such as a human face. The projected active invisible light may be infrared, ultraviolet, or the like, and may be in the form of floodlight or structured light. The active invisible light illuminates the target, avoiding the problem of failing to acquire the target image due to factors such as ambient light direction or lack of ambient light. Next, the target image is acquired by the camera. To improve on the face recognition accuracy and speed of conventional color images, in the present application the acquired image contains the depth information of the target (302).
  • In one embodiment, the camera is an RGBD camera, and the acquired image includes an RGB image and a depth image of the target. In one embodiment, the camera is an infrared camera, and the captured image includes an infrared image and a depth image of the target, where the infrared image is a pure infrared flood image. In one embodiment, the images captured by the camera are a structured light image and a depth image.
  • the depth image reflects the depth information of the target, and the distance, the size, the posture, and the like of the target can be acquired based on the depth information. Therefore, the analysis can be performed based on the acquired image to realize the detection and recognition of the face.
  • When the recognition succeeds, the unlocking check passes and the terminal device is unlocked.
  • A waiting time may be set: active invisible light projection, image acquisition, and analysis are performed within the waiting time, and if no face is detected when the waiting time ends, the unlocking application exits and waits for the next activation.
  • Face detection and recognition may be based only on depth images, and may also combine two-dimensional images with depth images, where the two-dimensional images may be RGB images, infrared images, structured light images, and the like.
  • The infrared LED floodlight and the structured light projector project infrared floodlight and structured light, respectively; the infrared image and the structured light image are sequentially acquired by the infrared camera, and the depth image is further computed from the structured light image.
  • Infrared images and depth images are used separately for face detection.
  • The invisible light here includes infrared floodlight and infrared structured light, which can be projected in a time-division manner or synchronously.
  • analyzing the depth information in the image includes acquiring a distance value of the face, and combining the distance value for face detection and recognition to improve face detection and recognition accuracy and speed. In one embodiment, analyzing the depth information in the image includes acquiring the posture information of the face, and combining the posture information to perform face detection and recognition to improve the accuracy and speed of the face detection and recognition.
  • the depth information can be used to accelerate the face detection.
  • Based on the distance given by the depth information, the size of the pixel area occupied by the face can be initially determined, and a detection window of that size can then be used directly for face determination. This makes it possible to quickly find the location and extent of the face.
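The depth-based size bound above follows from the pinhole camera model: a face of physical width W at distance Z spans about f·W/Z pixels. A minimal sketch, with the focal length and face width as assumed example values (the patent gives no numbers):

```python
FOCAL_LENGTH_PX = 600.0  # assumed camera focal length, in pixels
FACE_WIDTH_M = 0.16      # assumed typical physical face width, in metres

def expected_face_width_px(depth_m: float) -> float:
    """Pinhole projection: pixel width = focal_px * real_width / distance."""
    return FOCAL_LENGTH_PX * FACE_WIDTH_M / depth_m

# A face 0.4 m from the camera spans roughly 240 px, so the detector can
# restrict its search to windows near that size instead of trying all scales.
print(round(expected_face_width_px(0.4)))  # -> 240
```

This is why knowing the distance lets the detector skip most of the scale pyramid that a color-only detector would have to scan.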
  • FIG. 4 is a schematic diagram of face detection and recognition based on depth information according to an embodiment of the present application.
  • an infrared image and a depth image of a human face will be described as an example.
  • A similarity comparison between the current face infrared image and the authorized face infrared image can be performed. However, because the size and posture of the current face and of the authorized face differ between the infrared images, the accuracy of face recognition is affected during comparison.
  • The depth information can be used to acquire the distance and posture of the face (402); the current face infrared image or the authorized face infrared image is then adjusted using the distance and posture so that the size and posture of the two are consistent (that is, basically the same).
  • the face image is adjusted (403), that is, enlarged or reduced so that the areas of the two are similar in size.
  • The depth image can also be adjusted correspondingly (403).
  • One approach: in the face entry stage, a 3D model of the authorized face and an infrared image are recorded. When performing face recognition, the current face pose is identified from the depth image of the current face, and based on this pose information the 3D model of the authorized face is projected into two dimensions to produce an authorized face infrared image with the same pose. Features are then extracted from this image and from the current face infrared image (404), and their similarity is compared (405). Because the two poses are similar, the face regions and features contained in the images are similar, and face recognition accuracy is improved.
  • Another approach: after obtaining the face pose information, the current face infrared image is corrected, for example uniformly rectified into a frontal face infrared image, and feature extraction and comparison are then performed against the frontal face infrared image of the authorized face.
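The pose matching described in steps 402-404 can be sketched as rotating the authorized 3D model by the yaw estimated from the depth image and projecting it to 2D. The yaw-only rotation, point coordinates, and focal length below are simplifying assumptions for illustration, not details from the patent:

```python
import math

def rotate_yaw(point, yaw_rad):
    """Rotate a 3D point (x, y, z) about the vertical (y) axis."""
    x, y, z = point
    c, s = math.cos(yaw_rad), math.sin(yaw_rad)
    return (c * x + s * z, y, -s * x + c * z)

def project(point, focal_px=600.0):
    """Pinhole projection of a 3D point (metres) into image coordinates (px)."""
    x, y, z = point
    return (focal_px * x / z, focal_px * y / z)

# Turn a model point (e.g. the nose tip) to a 30-degree yaw, then project;
# comparing such projections against the current image keeps the poses consistent.
nose_tip = (0.0, 0.0, 0.4)
print(project(rotate_yaw(nose_tip, math.radians(30))))
```

A full implementation would use a full rotation matrix (yaw, pitch, roll) over every model vertex, but the render-then-compare idea is the same.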
  • The distance and posture information of the face can thus be acquired, and the face image can be further adjusted using the distance and/or posture information so that the current face image and the authorized face image are consistent in size and/or posture, thereby speeding up face recognition and improving its accuracy.
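The size normalisation above can be sketched with a nearest-neighbour resize: apparent face size is inversely proportional to distance, so scaling the current image by d_current / d_enrolled matches it to the authorized image. The distances and the tiny nested-list "image" are made-up examples:

```python
def resize_nn(img, scale):
    """Nearest-neighbour resize of a 2D nested-list image by a uniform scale."""
    h, w = len(img), len(img[0])
    nh, nw = max(1, round(h * scale)), max(1, round(w * scale))
    return [[img[min(h - 1, int(r / scale))][min(w - 1, int(c / scale))]
             for c in range(nw)] for r in range(nh)]

def normalise_size(img, current_dist_m, enrolled_dist_m):
    """Scale the current face image so its apparent size matches the
    authorized image (apparent size ~ 1 / distance)."""
    return resize_nn(img, current_dist_m / enrolled_dist_m)

img = [[1, 2], [3, 4]]
big = normalise_size(img, 0.8, 0.4)  # twice as far away -> enlarge 2x
print(len(big), len(big[0]))         # 4 4
```

Production code would use a proper interpolating resize on the cropped face region only, but the distance-ratio scale factor is the core of the adjustment.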
  • FIG. 5 is a schematic structural diagram of a terminal device according to an embodiment of the present application.
  • the terminal device can include a projection module 502 and an acquisition module 507.
  • the projection module 502 can be used to project an infrared structured light image (such as an infrared structured light image projected onto a target space), and the acquisition module 507 can be used to collect a structured light image.
  • the terminal device may further include a processor (not shown), and after receiving the structured light image, the processor may utilize the structured light image to calculate the depth image of the target.
  • The structured light image here may contain face texture information in addition to structured light information; therefore, like the face infrared image and the depth image, the structured light image can also participate in face identity entry and authentication.
  • the acquisition module 507 is both a part of the depth camera and an infrared camera. In other words, the depth camera and the infrared camera here can be considered to be the same camera.
  • the terminal device may further include an infrared floodlight 506 that can emit infrared light having the same wavelength as the structured light emitted by the projection module 502.
  • the projection module 502 and the infrared floodlight 506 can be time-switched to respectively acquire the depth image and the infrared image of the target.
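The time-division switching can be sketched as a simple frame-alternation schedule. The even/odd split and the naming are assumptions, since the patent only states that the two illuminators are switched in time:

```python
def illumination_for_frame(frame_index: int) -> str:
    """Even frames light the IR floodlight (pure IR image); odd frames fire
    the structured-light projector (for the depth image)."""
    return "floodlight" if frame_index % 2 == 0 else "structured_light"

print([illumination_for_frame(i) for i in range(4)])
# -> ['floodlight', 'structured_light', 'floodlight', 'structured_light']
```

Because both illuminators share the same wavelength and the same camera, this interleaving yields temporally adjacent IR and depth frames from a single sensor.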
  • The infrared image acquired in this way is a pure infrared image, whose facial feature information is more obvious than that contained in the structured light image, enabling higher face recognition precision.
  • the infrared floodlight 506 and projection module 502 herein may correspond to the active light illuminator shown in FIG. 2.
  • depth information may be acquired using a depth camera based on TOF technology.
  • The projection module 502 can be used to emit light pulses.
  • The acquisition module 507 can be used to receive light pulses.
  • the processor can be used to record the time difference between the pulse transmission and the reception, and calculate the depth image of the target based on the time difference.
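The time-difference calculation reduces to depth = c·Δt/2, since the pulse covers the camera-target distance twice. A sketch with illustrative timestamps:

```python
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def tof_depth_m(emit_time_s: float, receive_time_s: float) -> float:
    """Depth from the round-trip time of a light pulse: c * dt / 2."""
    return SPEED_OF_LIGHT * (receive_time_s - emit_time_s) / 2.0

# A ~3.34 ns round trip corresponds to a target about 0.5 m away, which
# shows the sub-nanosecond timing precision a TOF sensor needs at this range.
print(round(tof_depth_m(0.0, 3.3356e-9), 3))  # -> 0.5
```

In practice TOF cameras measure phase shift or integrate gated exposures per pixel rather than timestamping individual pulses, but the depth formula is this one.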
  • the acquisition module 507 can simultaneously acquire the depth image and the infrared image of the target, and there is almost no parallax between the two.
  • An additional infrared camera 503 can be provided to acquire the infrared image.
  • The acquisition module 507 and the infrared camera 503 can then be used to acquire the depth image and the infrared image of the target.
  • This terminal device differs from the one described above in that, because the depth image and the infrared image are acquired by different cameras, there is parallax between the two; if the computation performed in subsequent face recognition requires parallax-free images, the depth image must be registered with the infrared image in advance.
  • The terminal device may also include an earpiece 504, an ambient light/proximity sensor 505, and other devices to implement more functions.
  • The proximity of the face can be detected by the proximity sensor 505; when the face is too close, the projection of the projection module 502 is turned off or its projection power is reduced.
  • Automatic call answering can be implemented by combining face recognition with the earpiece. For example, when the terminal device receives an incoming call, the face recognition application can be started and the required depth camera and infrared camera opened to capture the depth image and the infrared image. Once recognition passes, the call is connected and devices such as the earpiece are turned on to carry out the call.
  • The terminal device may also include a screen 501, i.e., a display, which may be used to display image content as well as for touch interaction.
  • When the terminal device is asleep or in a similar state and the user picks it up, the inertial measurement unit in the terminal device recognizes the acceleration caused by the pick-up.
  • The screen is then lit and the unlock application is started, and an unlock prompt appears on the screen.
  • The terminal device opens the depth camera and the infrared camera to capture the depth image and/or the infrared image for further face detection and recognition.
  • The preset gaze direction of the human eye can be set to the direction in which the eye looks at the screen 501, and unlocking proceeds only when the eye is looking at the screen.
  • The terminal device may further include a memory (not shown) for storing feature information recorded during the entry stage, and may also store applications, instructions, and so on,
  • such as the face-recognition-related applications (unlocking, payment, anti-peeping, etc.).
  • When an application is needed, the processor calls the instructions in the memory and performs the entry and authentication methods.
  • An application can also be written directly into the processor in the form of instruction code, forming a processor function module with the specific function or a corresponding independent processor, thereby improving execution efficiency.
  • The methods described in this application can be configured in the device either in software or in hardware.
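The TOF variant above computes depth from the recorded time difference between pulse emission and reception. A minimal sketch of that conversion (function and variable names are illustrative, not from the patent):

```python
# Time-of-flight depth: the pulse travels to the target and back,
# so distance = speed of light * elapsed time / 2.

C = 299_792_458.0  # speed of light in vacuum, m/s


def tof_depth(emit_time_s: float, receive_time_s: float) -> float:
    """Depth in metres from the round-trip time of a light pulse."""
    dt = receive_time_s - emit_time_s
    if dt < 0:
        raise ValueError("pulse received before it was emitted")
    return C * dt / 2.0


# A pulse returning after ~6.67 ns corresponds to roughly 1 metre.
depth_m = tof_depth(0.0, 6.671e-9)
```

In a real module this conversion runs per pixel of the sensor to form the depth image; the sketch shows only the per-measurement arithmetic.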

Abstract

The present application provides a task execution method, a terminal device, and a computer-readable storage medium. The task execution method includes: after a face recognition application of a terminal device is activated, projecting active invisible light into a space; acquiring an image containing depth information; analyzing the image to determine whether the image contains a face and, when it does, recognizing the face; and controlling the terminal device to perform a corresponding operation according to the recognition result. Because the present application performs face recognition using active illumination combined with depth information, it solves the problem of low face recognition accuracy caused by ambient light interference.

Description

Task execution method, terminal device, and computer-readable storage medium

Technical Field

The present application belongs to the field of computer technology, and more particularly relates to a task execution method, a terminal device, and a computer-readable storage medium.
Background

The human body has many unique features, such as the face, fingerprints, irises, and ears, which are collectively referred to as biometric features. Biometric recognition is widely used in security, home, smart hardware, and many other fields. The more mature biometric recognition technologies (such as fingerprint recognition and iris recognition) have already been widely applied to terminal devices such as mobile phones and computers.

As for features such as the face, although the related research is already very deep, recognition of such features has not yet become widespread.

Current face recognition approaches are mainly based on color images; such approaches are affected by factors such as ambient light intensity and illumination direction, resulting in low recognition accuracy.
Summary

The present application provides a task execution method for a terminal device, a terminal device, and a computer-readable storage medium, to improve the accuracy of face recognition.
In a first aspect, a task execution method for a terminal device is provided, including: after a face recognition application of the terminal device is activated, projecting active invisible light into a space; acquiring an image containing depth information; analyzing the image to determine whether the image contains a face and, when it does, recognizing the face; and controlling the terminal device to perform a corresponding operation according to the recognition result.

In a possible implementation, the active invisible light includes infrared flood light, and the image includes a pure infrared image.

In a possible implementation, the image includes a depth image.

In a possible implementation, the active invisible light includes infrared structured light.

In a possible implementation, the analyzing includes using the depth information to acquire the distance and/or pose of the face.

In a possible implementation, the recognizing includes: using the distance of the face to adjust the image of the face or an authorized face image so that the image of the face and the authorized face image remain consistent in size (i.e., substantially the same).

In a possible implementation, the recognizing includes: using the pose of the face to adjust the image of the face or an authorized face image so that the pose of the face and that of the authorized face remain consistent (i.e., substantially the same).

In a possible implementation, the corresponding operation includes unlocking and payment.
In a second aspect, a face recognition method is provided, including: after a face recognition application of a terminal device is activated, projecting active invisible light into a space; acquiring an image containing depth information; and analyzing the image to determine whether the image contains a face and, when it does, recognizing the face.

In a possible implementation, the active invisible light includes infrared flood light, and the image includes a pure infrared image.

In a possible implementation, the image includes a depth image.

In a possible implementation, the active invisible light includes infrared structured light.

In a possible implementation, the analyzing includes using the depth information to acquire the distance and/or pose of the face.

In a possible implementation, the recognizing includes: using the distance of the face to adjust the image of the face or an authorized face image so that the image of the face and the authorized face image remain consistent in size (i.e., substantially the same).

In a possible implementation, the recognizing includes: using the pose of the face to adjust the image of the face or an authorized face image so that the pose of the face and that of the authorized face remain consistent (i.e., substantially the same).
In a third aspect, a computer-readable storage medium is provided, storing instructions for performing the method of the first aspect or any possible implementation of the first aspect.

In a fourth aspect, a computer-readable storage medium is provided, storing instructions for performing the method of the second aspect or any possible implementation of the second aspect.

In a fifth aspect, a computer program product is provided, including instructions for performing the method of the first aspect or any possible implementation of the first aspect.

In a sixth aspect, a computer program product is provided, including instructions for performing the method of the second aspect or any possible implementation of the second aspect.
In a seventh aspect, a terminal device is provided, including: an active light illuminator; a camera; a memory storing instructions; and a processor configured to execute the instructions to perform the method of the first aspect or any possible implementation of the first aspect.

In a possible implementation, the active light illuminator is an infrared structured light projection module, the camera is an infrared camera, the infrared camera and the active light illuminator form a depth camera, and the image includes a depth image.

In an eighth aspect, a terminal device is provided, including: an active light illuminator; a camera; a memory storing instructions; and a processor configured to execute the instructions to perform the method of the second aspect or any possible implementation of the second aspect.

In a possible implementation, the active light illuminator is an infrared structured light projection module, the camera is an infrared camera, the infrared camera and the active light illuminator form a depth camera, and the image includes a depth image.

Compared with the prior art, the present application uses active invisible-light illumination to solve the problem of ambient light interference, and performs face recognition using an image containing depth information, improving the accuracy of face recognition.
Brief Description of the Drawings

FIG. 1 is a schematic diagram of a face recognition application according to an embodiment of the present application.

FIG. 2 is a schematic structural diagram of a terminal device according to an embodiment of the present application.

FIG. 3 is a schematic flowchart of a task execution method according to an embodiment of the present application.

FIG. 4 is a schematic flowchart of a depth-information-based face recognition method according to an embodiment of the present application.

FIG. 5 is a schematic structural diagram of a terminal device according to an embodiment of the present application.
Detailed Description

To make the technical problems to be solved, the technical solutions, and the beneficial effects of the embodiments of the present application clearer, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are intended only to explain the present application, not to limit it.

It should be noted that when an element is described as being "fixed to" or "disposed on" another element, it may be directly on the other element or indirectly on it. When an element is described as being "connected to" another element, it may be directly or indirectly connected to the other element. In addition, a connection may serve either a fixing function or a circuit-connecting function.

It should be understood that orientation or position terms such as "length", "width", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", and "outer" are based on the orientations or positional relationships shown in the drawings, are used only to facilitate and simplify the description of the embodiments, and do not indicate or imply that the referenced devices or elements must have a particular orientation or be constructed and operated in a particular orientation; they therefore cannot be construed as limiting the present application.

Furthermore, the terms "first" and "second" are used for descriptive purposes only and cannot be understood as indicating or implying relative importance or implicitly specifying the number of the indicated technical features. Accordingly, a feature qualified by "first" or "second" may explicitly or implicitly include one or more such features. In the description of the embodiments of the present application, "a plurality of" means two or more unless otherwise expressly and specifically limited.
Face recognition technology can be used in fields such as security inspection and surveillance. With the popularity of smart terminal devices (such as mobile phones and tablets), face recognition can be applied to operations such as unlocking and payment, as well as to entertainment, games, and many other areas. Most smart terminal devices, such as mobile phones, tablets, computers, and televisions, are equipped with a color camera; after the color camera captures an image containing a face, face detection and recognition can be performed on the image, and the recognition result can then be used to execute other related applications. However, the environment of a terminal device (especially a mobile terminal device such as a mobile phone or tablet) often changes, and environmental changes affect the imaging of the color camera; for example, when the light is weak, the face cannot be imaged well. On the other hand, during face recognition, the randomness of the face pose and/or of the distance between the face and the camera increases the difficulty of face recognition and reduces its stability.

The present application first provides a depth-information-based face recognition method and terminal device, which use active invisible light to capture an image containing depth information and perform face recognition based on that image. Because depth information is insensitive to illumination, the accuracy of face recognition can be improved. On this basis, the present application further provides a task execution method for a terminal device and a terminal device, which can use the result of the above face recognition method to perform different operations, such as unlocking and payment. The embodiments of the present application are described in detail below with reference to the specific drawings.
FIG. 1 is a schematic diagram of a face recognition application according to an embodiment of the present application. A user 10 holds a mobile terminal 11 (such as a mobile phone, tablet, or media player) that contains a camera 111 capable of acquiring an image of a target (a face). If the current face recognition application is unlocking, the mobile terminal 11 is in a locked state; after the unlock procedure starts, the camera 111 captures an image containing the face 101 and recognizes the face in the image. When the recognized face is an authorized face, the mobile terminal 11 unlocks; otherwise it remains locked. If the current face recognition application is payment or another application, the principle is similar to that of the unlocking application.
FIG. 2 is a schematic structural diagram of a terminal device according to an embodiment of the present application. In some embodiments, the terminal device mentioned in the present application may also be called a face recognition apparatus. The terminal device may be, for example, the mobile terminal 11 shown in FIG. 1. The terminal device may include a processor 20 and, connected to it, an ambient light/proximity sensor 21, a display 22, a microphone 23, a radio frequency and baseband processor 24, an interface 25, a memory 26, a battery 27, a micro-electro-mechanical-system (MEMS) sensor 28, an audio device 29, a camera 30, and so on. The units in FIG. 2 may be connected by circuits to implement data transmission and signal communication. FIG. 2 is only one example of the structure of the terminal device; in other embodiments, the terminal device may contain fewer structures or more other components.

The processor 20 may be used for overall control of the terminal device. It may be a single processor or may contain multiple processor units, for example processor units with different functions.

The display 22 may be used to display images so as to present applications and other content to the user. In some embodiments, the display 22 may also include a touch function, in which case it also serves as a human-machine interaction interface for receiving user input.

The microphone 23 may be used to receive voice information and to implement voice interaction with the user.

The radio frequency and baseband processor 24 may be responsible for the communication functions of the terminal device, such as receiving and translating signals such as voice or text to enable information exchange between remote users.

The interface 25 may be used to connect the terminal device to the outside to further implement functions such as data transmission and power transmission. The interface 25 may be, for example, a universal serial bus (USB) interface or a wireless fidelity (WiFi) interface.

The memory 26 may be used to store applications, such as an unlock program 261 and a payment program 262. The memory 26 may also store related data 263 required by the applications, such as face images and features, as well as code and data involved in the execution of the processor 20.

The memory 26 may include one or more memories, in any form usable for storing data, such as random access memory (RAM) or flash memory. It will be appreciated that the memory 26 may be part of the terminal device or may exist independently of it, for example as cloud storage whose data communicates with the terminal device through the interface 25 or the like. Applications such as the unlock program 261 and the payment program 262 are generally stored in a computer-readable storage medium (such as a non-volatile readable storage medium); when such an application is executed, the processor 20 may call the corresponding program from the storage medium and execute it. Some data involved in program execution, such as authorized face images or authorized face feature data, may also be stored in the memory 26. It should be understood that "computer" in "computer-readable storage medium" is a broad concept that may refer to any device with an information processing function; in the embodiments of the present application, the computer may refer to the terminal device.
The terminal device may also include an ambient light/proximity sensor. The ambient light sensor and the proximity sensor may be integrated into a single sensor or may be an independent ambient light sensor and an independent proximity sensor. The ambient light sensor may be used to obtain illumination information about the current environment of the terminal device. In one embodiment, the screen brightness can be adjusted automatically based on this illumination information to provide a display brightness more comfortable to the human eye. The proximity sensor can measure whether an object is approaching the terminal device, on the basis of which certain specific functions can be implemented. For example, during a call, when the face is close enough to the terminal device, the touch function of the screen can be turned off to prevent accidental touches. In some embodiments, the proximity sensor can also quickly determine the approximate distance between the face and the terminal device.

The battery 27 may be used to supply power. The audio device 29 may be used for voice input and may be, for example, a microphone.

The MEMS sensor 28 may be used to obtain the current state information of the terminal device, such as position, orientation, acceleration, and gravity, and may include sensors such as an accelerometer, a gravimeter, and a gyroscope. In one embodiment, the MEMS sensor 28 may be used to activate certain face recognition applications. For example, when the user picks up the terminal device, the MEMS sensor 28 can capture this change and transmit it to the processor 20, and the processor 20 can call the unlock application in the memory 26 to activate it.

The camera 30 may be used to capture images. In some applications, for example when a selfie application is running, the processor 20 may control the camera 30 to capture an image and transmit it to the display 22 for display. In some embodiments, for example in a face-recognition-based unlock program, when the unlock program is activated, the camera 30 may capture an image, and the processor 20 may process the image (including face detection and recognition) and perform the corresponding unlock task according to the recognition result. The camera 30 may be a single camera or may include multiple cameras. In some embodiments, the camera 30 may include an RGB camera or a grayscale camera that collects visible light information, and/or an infrared camera and/or an ultraviolet camera that collects invisible light information. In some embodiments, the camera 30 may include a depth camera for acquiring depth images, which may be, for example, one or more of the following: a structured light depth camera, a time-of-flight (TOF) depth camera, or a binocular depth camera. In some embodiments, the camera 30 may include one or more of the following: a light field camera, a wide-angle camera, a telephoto camera, and so on.

The camera 30 may be disposed at any position on the terminal device, such as the top or bottom of the front face (i.e., the plane where the display 22 is located), or on the rear face. In one embodiment, the camera 30 may be disposed on the front face to capture the user's face image. In one embodiment, the camera 30 may be disposed on the rear face, for example to photograph a scene. In one embodiment, cameras 30 may be disposed on both the front and rear faces; the two may capture images independently or may be controlled by the processor 20 to capture images synchronously.

The active light illuminator 31 may use a laser diode, a semiconductor laser, a light-emitting diode (LED), or the like as its light source to project active light, which may be infrared light, ultraviolet light, and so on. Optionally, the active light illuminator 31 may project infrared light with a wavelength of 940 nm, so that it can operate in different environments with less interference from ambient light. The number of active light illuminators 31 is configured according to actual needs; for example, one or more may be provided. The active light illuminator 31 may be an independent module mounted on the terminal device or may be integrated with other modules; for example, it may be part of the proximity sensor.
For face-recognition-based applications such as unlocking and payment, existing face recognition technology based on color images encounters many problems. For example, the intensity and direction of ambient light affect face image capture, feature extraction, and feature comparison; moreover, without visible light illumination, color-image-based face recognition cannot acquire a face image at all, i.e., face recognition cannot be performed and the application fails. The accuracy and speed of face recognition determine the experience of face-recognition-based applications: for an unlock application, for example, higher recognition accuracy brings higher security, and faster recognition brings a more comfortable user experience. In one embodiment, a false recognition rate of one in one hundred thousand or even one in one million, together with a recognition speed of tens of milliseconds or faster, is considered a good face recognition experience. With color-image-based face recognition, factors such as illumination, angle, and distance during face image capture seriously affect recognition accuracy and speed. For example, if the angle and distance of the currently captured face are inconsistent with those of the authorized face (generally a target comparison face recorded and stored in advance), feature extraction and comparison will take longer and recognition accuracy will drop.
FIG. 3 is a schematic diagram of a face-recognition-based unlock application according to an embodiment of the present application. The unlock application may be stored in the terminal device in software or hardware form; if the terminal device is currently locked, the unlock application executes after being activated. In one embodiment, the unlock application is activated according to the output of the MEMS sensor, for example when the MEMS sensor detects a certain acceleration, or when it detects a specific orientation of the terminal device (such as the device orientation in FIG. 1). After the unlock application is activated, the terminal device uses the active light illuminator to project active invisible light (301) onto the target, such as a face; the projected active invisible light may be light of infrared, ultraviolet, or other wavelengths, and may take the form of flood light, structured light, and so on. The active invisible light illuminates the target, avoiding the problem of failing to acquire a target image because of the direction of the ambient light or the lack of it. Next, the camera captures an image of the target; to improve on the accuracy and speed of traditional color-image face recognition, in the present application the captured image contains the depth information of the target (302). In one embodiment, the camera is an RGBD camera, and the captured images include an RGB image and a depth image of the target; in one embodiment, the camera is an infrared camera, and the captured images include an infrared image and a depth image of the target, where the infrared image includes a pure infrared flood image; in one embodiment, the images captured by the camera are a structured light image and a depth image. It will be appreciated that the depth image reflects the depth information of the target, from which information such as the distance, size, and pose of the target can be obtained. The acquired images can then be analyzed to implement face detection and recognition. When a face is detected and, after recognition, the current face is confirmed to be an authorized face, the unlock application passes and the terminal device unlocks.

In one embodiment, considering situations such as the terminal device being activated by mistake, a waiting time can be set, within which the projection of active invisible light and the image capture and analysis are performed; if no face has been detected by the end of the waiting time, the unlock application is closed to await the next activation.
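The activation-to-unlock flow described above (project active light, capture an image with depth information, detect, recognize, with a waiting time against false activation) can be sketched as a simple polling loop. Every callable here is a hypothetical stand-in for the device's actual modules, not an API named in the patent:

```python
import time


def unlock_attempt(project_light, capture_image, detect_face, recognize_face,
                   timeout_s: float = 3.0, poll_s: float = 0.05) -> bool:
    """Illuminate, capture, detect, recognize.

    Returns True (unlock) only if an authorized face is recognized before
    the waiting time elapses; otherwise the device stays locked and the
    application waits for the next activation.
    """
    project_light()                        # turn on active invisible light
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        image = capture_image()            # image containing depth information
        face = detect_face(image)          # None if no face found
        if face is not None:
            return recognize_face(face)    # True iff it is an authorized face
        time.sleep(poll_s)
    return False                           # timed out without detecting a face
```

A device integration would replace the stand-ins with the illuminator, camera, and recognition modules; the loop structure itself is the point.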
Face detection and recognition may be based only on the depth image, or may combine a two-dimensional image with the depth image, where the two-dimensional image may be an RGB image, an infrared image, a structured light image, and so on. For example, in one embodiment, an infrared LED flood light and a structured light projector respectively project infrared flood light and structured light; an infrared camera successively acquires an infrared image and a structured light image, and a depth image is further obtained from the structured light image; face detection then uses the infrared image and the depth image respectively. It will be appreciated that the invisible light here includes infrared flood light and infrared structured light, which may be projected in a time-shared or synchronized manner.
In one embodiment, analyzing the depth information in the image includes obtaining the distance value of the face and performing face detection and recognition in combination with that distance value, to improve the accuracy and speed of detection and recognition. In one embodiment, analyzing the depth information in the image includes obtaining the pose information of the face and performing face detection and recognition in combination with that pose information, to improve the accuracy and speed of detection and recognition.

Depth information can also accelerate face detection. In one embodiment, from the depth value at each pixel of the depth image and camera properties such as the focal length, the size of the pixel region occupied by a face can be determined preliminarily, and face judgment is then performed directly on a region of that size. The position and region of the face can thereby be found quickly.
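The acceleration trick above follows from the pinhole model: a face's expected pixel extent shrinks in proportion to its depth. A sketch under an assumed average physical face width (the patent does not fix a value; 0.16 m is an illustrative assumption):

```python
def face_region_pixels(depth_m: float, focal_px: float,
                       face_width_m: float = 0.16) -> int:
    """Expected face width in pixels at a given depth (pinhole projection).

    face_width_m is an assumed average face width; the text only says the
    region size can be derived from the depth value and the camera's focal
    length. focal_px is the focal length expressed in pixels.
    """
    if depth_m <= 0:
        raise ValueError("depth must be positive")
    return round(focal_px * face_width_m / depth_m)


# With a 500 px focal length, a face at 0.4 m spans about 200 px,
# and the same face at 0.8 m spans about 100 px.
near = face_region_pixels(0.4, 500.0)
far = face_region_pixels(0.8, 500.0)
```

The detector can then restrict its search to candidate windows near this size instead of scanning all scales.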
FIG. 4 is a schematic diagram of depth-information-based face detection and recognition according to an embodiment of the present application. In this embodiment, an infrared image and a depth image of a face are taken as an example. After the current face is detected (401), a similarity comparison between the infrared image of the current face and the infrared image of the authorized face can be performed. Because the size and pose of the current face in the infrared image usually differ from those of the authorized face, the accuracy of face recognition is affected during comparison. Therefore, in this embodiment, the depth information can be used to obtain the distance and pose of the face (402), and the distance and pose are then used to adjust the infrared image of the current face or of the authorized face so that the two are consistent (i.e., substantially the same) in size and pose. As for the size of the face region in the image, it follows from the imaging principle that the farther the distance, the smaller the face region; therefore, knowing the distance of the authorized face and combining it with the distance of the current face, the authorized face image or the current face image can be adjusted (403), i.e., enlarged or reduced, so that the two regions are similar in size. The pose can likewise be adjusted (403) using the depth information. One way is to record a 3D model and an infrared image of the authorized face during the face entry stage; during recognition, the pose of the current face is recognized from its depth image, and based on this pose information the 3D model of the authorized face is projected in two dimensions to produce an authorized face infrared image with the same pose as the current face. Feature extraction (404) and feature similarity comparison (405) are then performed between this authorized face infrared image and the current face infrared image; because the two poses are similar, the face regions and features contained in the images are also similar, and recognition accuracy improves. Another way is, after obtaining the face pose information, to correct the current face infrared image, for example uniformly into a frontal face infrared image, and then perform feature extraction and comparison against the frontal infrared image of the authorized face.

In short, based on the depth information, the distance and pose information of the face can be obtained, and the distance and/or pose information is then used to adjust the face image so that the size and/or pose of the current face image matches that of the authorized face image, accelerating face recognition and improving its accuracy.
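The size-normalization step above reduces to a single ratio: apparent size is inversely proportional to distance, so scaling the current face image by (current distance / authorized distance) matches the two region sizes. A sketch with illustrative names (the patent specifies the principle, not these functions):

```python
def scale_factor(current_depth_m: float, authorized_depth_m: float) -> float:
    """Resize factor for the current face image so that its face region
    matches the authorized image: size ~ 1/distance, so a face twice as
    far away must be enlarged by a factor of two."""
    if current_depth_m <= 0 or authorized_depth_m <= 0:
        raise ValueError("depths must be positive")
    return current_depth_m / authorized_depth_m


def adjusted_size(width_px: int, height_px: int, factor: float) -> tuple:
    """Apply the factor to an image's dimensions (rounded to whole pixels)."""
    return round(width_px * factor), round(height_px * factor)


# Current face at 0.8 m, authorized face recorded at 0.4 m:
# the current image must be enlarged 2x before comparison.
factor = scale_factor(0.8, 0.4)
new_w, new_h = adjusted_size(100, 150, factor)
```

Equivalently the authorized image could be scaled by the reciprocal; the text allows adjusting either image.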
It will be appreciated that the above face-recognition-based unlock application is equally applicable to other applications such as payment, authentication, games, and anti-peeping.

FIG. 5 is a schematic structural diagram of a terminal device according to an embodiment of the present application. The terminal device may include a projection module 502 and an acquisition module 507, where the projection module 502 may be used to project an infrared structured light image (for example, onto the space where the target is located) and the acquisition module 507 may be used to capture the structured light image. The terminal device may also include a processor (not shown) that, after receiving the structured light image, can use it to calculate the depth image of the target. In addition to structured light information, the structured light image here may also contain face texture information; therefore, the structured light image can also take part, as a face infrared image, in face identity entry and authentication together with the depth image. In this case, the acquisition module 507 is both part of the depth camera and the infrared camera; in other words, the depth camera and the infrared camera here can be considered the same camera.

In some embodiments, the terminal device may also include an infrared flood light 506 that can emit infrared light with the same wavelength as the structured light emitted by the projection module 502. During face image capture, the projection module 502 and the infrared flood light 506 can be switched on and off in a time-shared manner to acquire the depth image and the infrared image of the target respectively. The infrared image acquired in this way is a pure infrared image; compared with the structured light image, the facial feature information it contains is more distinct, allowing higher face recognition accuracy.
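The time-shared switching of the projection module and the flood illuminator can be sketched as an alternating capture loop driving one shared infrared sensor. The callables are hypothetical hardware hooks, not APIs from the patent:

```python
def capture_frames(projector_on, flood_on, read_sensor, n_pairs: int = 1):
    """Alternate the structured-light projector and the infrared flood
    illuminator frame by frame, so the same IR sensor yields a structured
    light image (for depth) and a pure infrared image (for recognition)."""
    frames = []
    for _ in range(n_pairs):
        projector_on(True)
        flood_on(False)
        structured = read_sensor()   # frame lit by structured light only
        projector_on(False)
        flood_on(True)
        infrared = read_sensor()     # frame lit by flood light only
        frames.append((structured, infrared))
    flood_on(False)                  # leave both illuminators off
    return frames
```

Because the two frames share one sensor, the depth image computed from the structured frame is pixel-aligned with the infrared frame, which is why this design avoids the registration step needed with a separate infrared camera.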
The infrared flood light 506 and the projection module 502 here may correspond to the active light illuminator shown in FIG. 2.

In some embodiments, depth information may be acquired with a depth camera based on TOF technology. In that case, the projection module 502 may be used to emit light pulses and the acquisition module 507 to receive them. The processor may be used to record the time difference between pulse emission and reception and to calculate the depth image of the target from that time difference. In this embodiment, the acquisition module 507 can acquire the depth image and the infrared image of the target simultaneously, with almost no parallax between the two.

In some embodiments, an additional infrared camera 503 may be provided to acquire the infrared image. When the wavelength of the beam emitted by the infrared flood light 506 differs from that of the beam emitted by the projection module 502, the acquisition module 507 and the infrared camera 503 can acquire the depth image and the infrared image of the target synchronously. The difference between this terminal device and the one described above is that, because the depth image and the infrared image are acquired by different cameras, there is parallax between the two; if the computation performed in subsequent face recognition requires parallax-free images, the depth image must be registered with the infrared image in advance.
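The registration mentioned above maps each depth pixel into the infrared camera's image. A simplified sketch assuming rectified cameras offset by a pure horizontal baseline (real devices calibrate a full rotation and translation between the two cameras; all parameter names are illustrative):

```python
def register_depth_to_ir(u, v, z,
                         fx_d, cx_d, fy_d, cy_d,      # depth camera intrinsics
                         fx_ir, cx_ir, fy_ir, cy_ir,  # IR camera intrinsics
                         baseline_x):                 # camera offset, metres
    """Map one depth-camera pixel (u, v) with depth z (metres) to the
    corresponding pixel in the infrared camera's image.

    Steps: back-project through the depth camera's pinhole model, shift
    by the baseline, re-project through the IR camera's pinhole model.
    """
    # back-project to a 3-D point in the depth camera frame
    x = (u - cx_d) * z / fx_d
    y = (v - cy_d) * z / fy_d
    # translate into the infrared camera frame (rotation assumed identity)
    x_ir = x - baseline_x
    # project into the infrared image
    u_ir = fx_ir * x_ir / z + cx_ir
    v_ir = fy_ir * y / z + cy_ir
    return u_ir, v_ir
```

Note the disparity (u - u_ir) shrinks as z grows, which is exactly the parallax the text says must be compensated before parallax-free processing.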
The terminal device may also include devices such as an earpiece 504 and an ambient light/proximity sensor 505 to implement more functions. For example, in some embodiments, considering the harm infrared light can cause to the human body, the proximity of the face can be detected by the proximity sensor 505; when the face is too close, the projection of the projection module 502 is turned off or its projection power is reduced. In some embodiments, automatic call answering can be implemented by combining face recognition with the earpiece: when the terminal device receives an incoming call, the face recognition application can be started and the required depth camera and infrared camera opened to capture the depth image and the infrared image; once recognition passes, the call is connected and devices such as the earpiece are turned on to carry out the call.

The terminal device may also include a screen 501, i.e., a display, which can be used to display image content and for touch interaction. For example, for a face recognition unlock application, in one embodiment, when the terminal device is asleep or in a similar state and the user picks it up, the inertial measurement unit in the terminal device recognizes the acceleration caused by the pick-up, lights the screen, and starts the unlock application; an unlock prompt appears on the screen, and the terminal device opens the depth camera and the infrared camera to capture the depth image and/or the infrared image for further face detection and recognition. In some embodiments, the gaze direction of the human eye can also be detected during face detection; the preset gaze direction can be set to the direction in which the eye looks at the screen 501, and unlocking proceeds only when the eye is looking at the screen.

The terminal device may also include a memory (not shown) for storing feature information recorded during the entry stage, as well as applications, instructions, and so on. For example, the face-recognition-related applications mentioned above (such as unlocking, payment, and anti-peeping) are stored in the memory as software programs; when an application is needed, the processor calls the instructions in the memory and performs the entry and authentication methods. It will be appreciated that an application can also be written directly into the processor in the form of instruction code, forming a processor function module with the specific function or a corresponding independent processor, thereby improving execution efficiency. Moreover, as technology develops, the boundary between software and hardware will gradually disappear, so the methods described in the present application can be configured in the device either in software or in hardware.

The above is a further detailed description of the present application in combination with specific preferred embodiments, and the specific implementation of the present application cannot be considered limited to these descriptions. For those skilled in the art to which the present application belongs, several equivalent substitutions or obvious modifications, with the same performance or use, can be made without departing from the concept of the present application, and all of them should be regarded as falling within the protection scope of the present application.

Claims (11)

  1. A task execution method for a terminal device, comprising:
    after a face recognition application of the terminal device is activated, projecting active invisible light into a space;
    acquiring an image containing depth information;
    analyzing the image to determine whether the image contains a face and, when it does, recognizing the face; and
    controlling the terminal device to perform a corresponding operation according to a recognition result.
  2. The method of claim 1, wherein the active invisible light comprises infrared flood light and the image comprises a pure infrared image.
  3. The method of claim 1, wherein the image comprises a depth image.
  4. The method of claim 3, wherein the active invisible light comprises infrared structured light.
  5. The method of claim 1, wherein the analyzing comprises using the depth information to acquire the distance and/or pose of the face.
  6. The method of claim 5, wherein the recognizing comprises: using the distance of the face to adjust the image of the face or an authorized face image so that the image of the face and the authorized face image remain consistent in size.
  7. The method of claim 5, wherein the recognizing comprises: using the pose of the face to adjust the image of the face or an authorized face image so that the pose of the face remains consistent with that of the authorized face.
  8. The method of any one of claims 1-7, wherein the corresponding operation comprises unlocking and payment.
  9. A computer-readable storage medium, storing instructions for performing the method of any one of claims 1-8.
  10. A terminal device, comprising:
    an active light illuminator;
    a camera;
    a memory storing instructions; and
    a processor configured to execute the instructions to perform the method of any one of claims 1-8.
  11. The terminal device of claim 10, wherein the active light illuminator is an infrared structured light projection module, the camera is an infrared camera, the infrared camera and the active light illuminator form a depth camera, and the image comprises a depth image.
PCT/CN2018/113787 2017-12-04 2018-11-02 Task execution method, terminal device and computer-readable storage medium WO2019109768A1 (zh)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN201711262543.0 2017-12-04
CN201711262543 2017-12-04
CN201810336303.9A CN108537187A (zh) Task execution method, terminal device and computer-readable storage medium
CN201810336303.9 2018-04-16

Publications (1)

Publication Number Publication Date
WO2019109768A1 true WO2019109768A1 (zh) 2019-06-13

Family

ID=63480245

Family Applications (2)

Application Number Title Priority Date Filing Date
PCT/CN2018/113787 WO2019109768A1 (zh) 2017-12-04 2018-11-02 任务执行方法、终端设备及计算机可读存储介质
PCT/CN2018/113784 WO2019109767A1 (zh) 2017-12-04 2018-11-02 任务执行方法、终端设备及计算机可读存储介质

Family Applications After (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/113784 WO2019109767A1 (zh) 2017-12-04 2018-11-02 任务执行方法、终端设备及计算机可读存储介质

Country Status (3)

Country Link
US (1) US20200293754A1 (zh)
CN (2) CN108537187A (zh)
WO (2) WO2019109768A1 (zh)

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108537187A (zh) 2017-12-04 2018-09-14 深圳奥比中光科技有限公司 Task execution method, terminal device and computer-readable storage medium
WO2019205889A1 (zh) 2018-04-28 2019-10-31 Oppo广东移动通信有限公司 Image processing method and apparatus, computer-readable storage medium, and electronic device
CN109635539B (zh) * 2018-10-30 2022-10-14 荣耀终端有限公司 Face recognition method and electronic device
CN109445231B (zh) * 2018-11-20 2022-03-29 奥比中光科技集团股份有限公司 Depth camera and depth camera protection method
CN109635682B (zh) * 2018-11-26 2021-09-14 上海集成电路研发中心有限公司 Face recognition apparatus and method
US11250144B2 (en) * 2019-03-29 2022-02-15 Lenovo (Singapore) Pte. Ltd. Apparatus, method, and program product for operating a display in privacy mode
TWI709130B (zh) * 2019-05-10 2020-11-01 技嘉科技股份有限公司 Display device for automatically adjusting the displayed image and method thereof
CN110333779B (zh) * 2019-06-04 2022-06-21 Oppo广东移动通信有限公司 Control method, terminal, and storage medium
CN112036222B (zh) * 2019-06-04 2023-12-29 星宸科技股份有限公司 Face recognition system and method
CN111131872A (zh) * 2019-12-18 2020-05-08 深圳康佳电子科技有限公司 Smart television with integrated depth camera, and control method and control system thereof
KR102291593B1 (ko) * 2019-12-26 2021-08-18 엘지전자 주식회사 Image display device and method of operation thereof
CN112183480A (zh) * 2020-10-29 2021-01-05 深圳奥比中光科技有限公司 Face recognition method and apparatus, terminal device, and storage medium
US11394825B1 (en) * 2021-03-15 2022-07-19 Motorola Mobility Llc Managing mobile device phone calls based on facial recognition
CN113378139B (zh) * 2021-06-11 2022-11-29 平安国际智慧城市科技股份有限公司 Anti-peeping method, apparatus and device for interface content, and storage medium
CN113687899A (zh) * 2021-08-25 2021-11-23 读书郎教育科技有限公司 Method and device for resolving the conflict between viewing notifications and face unlocking

Citations (6)

Publication number Priority date Publication date Assignee Title
CN1932847A (zh) * 2006-10-12 2007-03-21 上海交通大学 Method for detecting faces in color images against complex backgrounds
US8447098B1 (en) * 2010-08-20 2013-05-21 Adobe Systems Incorporated Model-based stereo matching
CN104850842A (zh) * 2015-05-21 2015-08-19 北京中科虹霸科技有限公司 Human-computer interaction method for iris recognition on a mobile terminal
CN104899579A (zh) * 2015-06-29 2015-09-09 小米科技有限责任公司 Face recognition method and apparatus
CN107169483A (zh) * 2017-07-12 2017-09-15 深圳奥比中光科技有限公司 Task execution based on face recognition
CN108537187A (zh) * 2017-12-04 2018-09-14 深圳奥比中光科技有限公司 Task execution method, terminal device and computer-readable storage medium

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
JP2008310515A (ja) * 2007-06-13 2008-12-25 Nippon Telegr &amp; Teleph Corp &lt;Ntt&gt; Information equipment monitoring device
CN105354960A (zh) * 2015-10-30 2016-02-24 夏翊 Security zone control method for a financial self-service terminal
CN107105217B (zh) * 2017-04-17 2018-11-30 深圳奥比中光科技有限公司 Multi-mode depth computing processor and 3D image device
CN107194288A (zh) * 2017-04-25 2017-09-22 上海与德科技有限公司 Display screen control method and terminal


Also Published As

Publication number Publication date
CN108563936B (zh) 2020-12-18
CN108563936A (zh) 2018-09-21
WO2019109767A1 (zh) 2019-06-13
CN108537187A (zh) 2018-09-14
US20200293754A1 (en) 2020-09-17

Similar Documents

Publication Publication Date Title
WO2019109768A1 (zh) 任务执行方法、终端设备及计算机可读存储介质
US10255417B2 (en) Electronic device with method for controlling access to same
CN109544618B (zh) 一种获取深度信息的方法及电子设备
US10922395B2 (en) Facial authentication systems and methods utilizing time of flight sensing
EP3872658B1 (en) Face recognition method and electronic device
CN108664783B (zh) 基于虹膜识别的识别方法和支持该方法的电子设备
CN108399349B (zh) 图像识别方法及装置
WO2019080580A1 (zh) 3d人脸身份认证方法与装置
WO2019080578A1 (zh) 3d人脸身份认证方法与装置
US20170061210A1 (en) Infrared lamp control for use with iris recognition authentication
WO2019080579A1 (zh) 3d人脸身份认证方法与装置
JP2017538300A (ja) 無人航空機の撮影制御方法及び撮影制御装置、電子デバイス、コンピュータプログラム及びコンピュータ読取可能記憶媒体
TWI706270B (zh) 身分識別方法、裝置和電腦可讀儲存媒體
WO2021037157A1 (zh) 图像识别方法及电子设备
US20150347732A1 (en) Electronic Device and Method for Controlling Access to Same
CN114090102B (zh) 启动应用程序的方法、装置、电子设备和介质
CN111103922A (zh) 摄像头、电子设备和身份验证方法
CN115087975A (zh) 用于识别对象的电子装置和方法
WO2022206494A1 (zh) 目标跟踪方法及其装置
US20190180131A1 (en) Image recognition method and apparatus
CN109766806A (zh) 高效的人脸识别方法及电子设备
CN115032640B (zh) 手势识别方法和终端设备
CN115184956A (zh) Tof传感器系统和电子设备
CN115066882A (zh) 用于执行自动对焦的电子装置和方法
CN114111704A (zh) 测量距离的方法、装置、电子设备及可读存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 18885285; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 18885285; Country of ref document: EP; Kind code of ref document: A1)