WO2019109768A1 - Task execution method, terminal device and computer readable storage medium - Google Patents

Task execution method, terminal device and computer readable storage medium

Info

Publication number
WO2019109768A1
WO2019109768A1 · PCT/CN2018/113787 · CN2018113787W
Authority
WO
WIPO (PCT)
Prior art keywords
face
image
terminal device
infrared
camera
Prior art date
Application number
PCT/CN2018/113787
Other languages
English (en)
Chinese (zh)
Inventor
黄源浩
Original Assignee
深圳奥比中光科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳奥比中光科技有限公司 filed Critical 深圳奥比中光科技有限公司
Publication of WO2019109768A1 publication Critical patent/WO2019109768A1/fr

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/30 Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F 21/31 User authentication
    • G06F 21/32 User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/03 Arrangements for converting the position or the displacement of a member into a coded form
    • G06F 3/033 Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • G06F 3/0346 Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/70 Protecting specific internal or peripheral components, in which the protection of a component leads to protection of the entire computer
    • G06F 21/71 Protecting specific internal or peripheral components, in which the protection of a component leads to protection of the entire computer to assure secure computing or processing of information
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/013 Eye tracking input arrangements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/10 Image acquisition
    • G06V 10/12 Details of acquisition arrangements; Constructional details thereof
    • G06V 10/14 Optical characteristics of the device performing the acquisition or on the illumination arrangements
    • G06V 10/143 Sensing or illuminating at different wavelengths
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161 Detection; Localisation; Normalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161 Detection; Localisation; Normalisation
    • G06V 40/166 Detection; Localisation; Normalisation using acquisition arrangements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/172 Classification, e.g. identification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/18 Eye characteristics, e.g. of the iris

Definitions

  • the present application belongs to the field of computer technology, and more particularly, to a task execution method, a terminal device, and a computer readable storage medium.
  • Biometrics are widely used in security, home, smart hardware and many other fields. At present, more mature biometrics (such as fingerprint recognition, iris recognition, etc.) have been widely used in mobile phones, computers and other terminal devices.
  • Current face recognition methods are mainly based on color images. Such methods are affected by factors such as ambient light intensity and illumination direction, resulting in low recognition accuracy.
  • The application provides a task execution method of a terminal device, a terminal device, and a computer readable storage medium, so as to improve the accuracy of face recognition.
  • A first aspect provides a task execution method of a terminal device, including: after a face recognition application of the terminal device is activated, projecting active invisible light into a space; acquiring an image containing depth information; analyzing the image to determine whether the image contains a human face and, when it does, to recognize the face; and controlling the terminal device to perform a corresponding operation according to the recognition result.
  • the active invisible light includes infrared floodlight.
  • the image includes a pure infrared image.
  • the image includes a depth image.
  • the active invisible light comprises infrared structured light.
  • the analyzing includes acquiring the distance and/or posture of the face by using the depth information.
  • the identifying includes: adjusting the image of the face or the authorized face image by using the distance of the face, so that the image of the face and the authorized face image are consistent in size (i.e. basically the same).
  • the recognizing includes: adjusting the image of the face or the authorized face image by using the posture of the face, so that the posture of the face and that of the authorized face are consistent (i.e. basically the same).
  • the corresponding operations include unlocking and payment.
  • A second aspect provides a face recognition method, comprising: projecting active invisible light into a space after the face recognition application of the terminal device is activated; acquiring an image containing depth information; and analyzing the image to determine whether the image contains a human face, and recognizing the face when one is present.
  • the active invisible light includes infrared floodlight.
  • the image includes a pure infrared image.
  • the image includes a depth image.
  • the active invisible light comprises infrared structured light.
  • the analyzing includes acquiring the distance and/or posture of the face by using the depth information.
  • the identifying includes: adjusting the image of the face or the authorized face image by using the distance of the face, so that the image of the face and the authorized face image are consistent in size (i.e. basically the same).
  • the recognizing includes: adjusting the image of the face or the authorized face image by using the posture of the face, so that the posture of the face and that of the authorized face are consistent (i.e. basically the same).
  • A computer readable storage medium storing instructions for performing the method of the first aspect, or of any one of its possible implementations, is provided.
  • A computer readable storage medium storing instructions for performing the method of the second aspect, or of any one of its possible implementations, is provided.
  • A computer program product comprising instructions for performing the method of the first aspect, or of any one of its possible implementations, is provided.
  • A computer program product comprising instructions for performing the method of the second aspect, or of any one of its possible implementations, is provided.
  • A terminal device includes: an active light illuminator; a camera; a memory storing instructions; and a processor configured to execute the instructions to perform the method of the first aspect or any one of its possible implementations.
  • the active light illuminator is an infrared structured light projection module
  • the camera is an infrared camera
  • the infrared camera and the active light illuminator constitute a depth camera
  • the image includes a depth image.
  • A terminal device includes: an active light illuminator; a camera; a memory storing instructions; and a processor configured to execute the instructions to perform the method of the second aspect or any one of its possible implementations.
  • the active light illuminator is an infrared structured light projection module
  • the camera is an infrared camera
  • the infrared camera and the active light illuminator constitute a depth camera
  • the image includes a depth image.
  • the present application utilizes active invisible illumination to solve the problem of ambient light interference, and uses the image containing the depth information to perform face recognition, thereby improving the accuracy of face recognition.
  • FIG. 1 is a schematic diagram of a face recognition application according to an embodiment of the present application.
  • FIG. 2 is a schematic structural diagram of a terminal device according to an embodiment of the present application.
  • FIG. 3 is a schematic flowchart of a task execution method according to an embodiment of the present application.
  • FIG. 4 is a schematic flowchart of a face information recognition method based on depth information according to an embodiment of the present application.
  • FIG. 5 is a schematic structural diagram of a terminal device according to an embodiment of the present application.
  • "connection" can refer either to a fixed connection or to circuit communication.
  • first and second are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated.
  • features defining “first” or “second” may include one or more of the features either explicitly or implicitly.
  • the meaning of "a plurality" is two or more, unless specifically defined otherwise.
  • Face recognition technology can be used in security, surveillance and other fields.
  • face recognition can be applied to perform operations such as unlocking and payment, and can also be applied to various aspects such as entertainment games.
  • In intelligent terminal devices such as mobile phones, tablets, computers, and televisions, images can be used for face detection and recognition, and the recognition results can then be used to perform other related operations.
  • The environment of terminal devices, especially mobile devices such as mobile phones and tablets, changes frequently.
  • Environmental changes can affect the imaging of color cameras. For example, when the light is weak, the face cannot be imaged well.
  • The randomness of the face pose and/or of the distance between the face and the camera increases the difficulty of face recognition and reduces its stability.
  • the present application first provides a face recognition method and a terminal device based on depth information, which utilizes active invisible light to acquire an image containing depth information, and performs face recognition based on the image. Since the depth information is not sensitive to illumination, the accuracy of face recognition can be improved. Further, based on this, the present application provides a task execution method and a terminal device of a terminal device, which can perform different operations, such as unlocking, payment, and the like, by using the recognition result of the face recognition method described above.
  • the embodiments of the present application are exemplified in detail below with reference to the specific drawings.
  • FIG. 1 is a schematic diagram of a face recognition application according to an embodiment of the present application.
  • the user 10 holds a mobile terminal 11 (such as a mobile phone, a tablet, a player, etc.), and the mobile terminal 11 internally contains a camera 111 that can acquire a target (human face) image.
  • The camera 111 collects an image including the face 101, and the face in the image is recognized; when the recognized face is authorized, the mobile terminal 11 performs unlocking, otherwise it remains in the locked state.
  • When the current face recognition application is a payment or other application, the principle is similar to that of the unlocking application.
  • FIG. 2 is a schematic structural diagram of a terminal device according to an embodiment of the present application.
  • the terminal device referred to herein may also be referred to as a face recognition device.
  • the terminal device may be, for example, the mobile terminal 11 as shown in FIG. 1.
  • The terminal device may include the processor 20 and, connected to it, the ambient light/proximity sensor 21, the display 22, the microphone 23, the radio frequency and baseband processor 24, the interface 25, the memory 26, the battery 27, the micro-electro-mechanical system (MEMS) sensor 28, the audio device 29, the camera 30, and the like.
  • the data transmission and signal communication can be realized by circuit connection between different units in FIG. 2.
  • FIG. 2 is only one example of the structure of the terminal device, and in other embodiments, the terminal device may also contain fewer structures or contain more other components.
  • the processor 20 can be used for overall control of the terminal device, and the processor 20 can be a single processor or a plurality of processor units.
  • processor 20 may include processor units of different functions.
  • Display 22 can be used to display images to present an application or the like to a user.
  • the display 22 can also include a touch function, and the display 22 can also function as a human-computer interaction interface for receiving user input.
  • the microphone 23 can be used to receive voice information and can be used to implement voice interaction with the user.
  • the RF and baseband processor 24 can be responsible for the communication functions of the terminal device, such as receiving and translating signals such as voice or text to enable information exchange between remote users.
  • the interface 25 can be used to connect the terminal device to the outside to further implement functions such as data transmission, power transmission, and the like.
  • the interface 25 can be, for example, a universal serial bus (USB) interface, a wireless fidelity (WIFI) interface, or the like.
  • the memory 26 can be used to save applications such as the unlock program 261, the payment program 262, and the like.
  • the memory 26 can also be used to store data related to the execution of the application, such as facial images, features, and the like.
  • the memory 26 can also be used to store code and data involved in the execution of the processor 20.
  • Memory 26 may include a single or multiple memories, which may be any form of memory that can be used to hold data, such as random access memory (RAM), FLASH (flash), and the like. It can be understood that the memory 26 can be either part of the terminal device or independent of the terminal device, such as a cloud memory, and the saved data can communicate with the terminal device through the interface 25 or the like.
  • Application programs such as the unlock program 261 and the payment program 262 are generally stored in a computer readable storage medium (such as a non-volatile readable storage medium), from which the processor 20 can call the corresponding program for execution when running the application. Some data involved in the execution of the program, such as authorized face images or authorized face feature data, may also be stored in the memory 26.
  • the computer in the computer readable storage medium is a generalized concept and may refer to any device having an information processing function. In the embodiment of the present application, the computer may refer to the terminal device.
  • the terminal device may also include an ambient light/proximity sensor.
  • the ambient light sensor and proximity sensor can be an integrated single sensor, or separate ambient light and proximity sensors.
  • the ambient light sensor can be used to obtain illumination information of the current environment in which the terminal device is located. In one embodiment, automatic adjustment of screen brightness can be achieved based on the illumination information to provide a more comfortable display brightness for the human eye.
  • the proximity sensor measures whether an object is close to the terminal device, based on which some specific functions can be implemented. For example, in the process of answering a call, when the face is close enough to the terminal device, the touch function of the screen can be turned off to prevent accidental touch. In some embodiments, the proximity sensor can also quickly determine the approximate distance between the face and the terminal device.
  • Battery 27 can be used to provide power.
  • Audio device 29 can be used to implement voice input.
  • the audio device 29 can be, for example, a microphone or the like.
  • the MEMS sensor 28 can be used to obtain current state information of the terminal device, such as position, direction, acceleration, gravity, and the like.
  • the MEMS sensor 28 can include sensors such as accelerometers, gravimeters, gyroscopes, and the like.
  • MEMS sensor 28 can be used to activate some face recognition applications. For example, when the user picks up the terminal device, MEMS sensor 28 can detect this change and transmit it to processor 20, which can call the unlock application in memory 26 to activate it.
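The pick-up activation just described can be sketched in a few lines. The `detect_pickup` helper, its threshold values, and the sample format are illustrative assumptions, not details from the patent:

```python
# Hypothetical sketch: detecting a "pick-up" gesture from accelerometer samples.
# A sustained deviation of the acceleration magnitude from gravity is treated as
# the user lifting the device, which would then activate the unlock application.

def detect_pickup(samples, threshold=2.0, min_run=3):
    """samples: list of (ax, ay, az) in m/s^2; returns True if a pick-up is seen.

    A pick-up is assumed here to be `min_run` consecutive samples whose
    acceleration magnitude deviates from gravity (9.8 m/s^2) by more than
    `threshold`. The values are illustrative only.
    """
    g = 9.8
    run = 0
    for ax, ay, az in samples:
        mag = (ax * ax + ay * ay + az * az) ** 0.5
        if abs(mag - g) > threshold:
            run += 1
            if run >= min_run:
                return True
        else:
            run = 0
    return False
```

In a real device this check would run in the sensor hub or driver, and a positive result would be what prompts the processor to call the unlock application.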
  • Camera 30 can be used to capture images, and in some applications, such as when a self-timer application is executed, processor 20 can control camera 30 to capture images and transmit the images to display 22 for display.
  • The camera 30 may acquire an image, and the processor 20 may process the image (including face detection and recognition) and perform the corresponding unlocking task according to the recognition result.
  • Camera 30 may be a single camera or multiple cameras; in some embodiments, camera 30 may include an RGB camera or a grayscale camera for acquiring visible light information, together with an infrared camera and/or a UV camera for collecting invisible light information.
  • camera 30 may include a depth camera for acquiring a depth image, which may be, for example, one or more of the following: a structured light depth camera, a time-of-flight (TOF) depth camera, a binocular depth camera, etc.
  • camera 30 may include one or more of the following cameras: a light field camera, a wide-angle camera, a telephoto camera, and the like.
  • the camera 30 can be disposed at any position of the terminal device, such as a front end or a bottom end of the front plane (ie, the plane of the display 22), a rear plane, and the like.
  • camera 30 can be placed in a front plane for capturing a user's face image.
  • camera 30 can be placed in a rear plane for taking pictures of the scene, and the like.
  • cameras 30 can be placed in both the front and rear planes; the two can acquire images independently, or can be controlled by processor 20 to acquire images simultaneously.
  • The active light illuminator 31 can use a light source such as a laser diode, a semiconductor laser, or a light emitting diode (LED) for projecting active light.
  • the active light projected by the active light illuminator 31 may be infrared light, ultraviolet light, or the like.
  • the active light illuminator 31 can be used to project infrared light having a wavelength of 940 nm, enabling the active light illuminator 31 to operate in different environments while being less disturbed by ambient light.
  • the number of active light illuminators 31 is configured according to actual needs, such as one or more active light illuminators.
  • the active light illuminator 31 can be a separate module mounted on the terminal device or integrated with other modules, such as the active light illuminator 31 can be part of the proximity sensor.
  • In face recognition technology based on color images, factors such as lighting, angle, and distance during face image acquisition seriously affect recognition accuracy and speed. For example, if the angle and distance of the currently collected face are inconsistent with those of the authorized face (generally a target entered in advance, against which the current face is compared), feature extraction and comparison will be more time-consuming and accuracy will also decrease.
  • FIG. 3 is a schematic diagram of an unlocking application based on face recognition according to an embodiment of the present application.
  • the unlocking application can be saved in the terminal device in the form of software or hardware. If the terminal device is currently in the locked state, the unlocking application is executed after activation.
  • The unlocking application is activated based on the output of the MEMS sensor, for example when the MEMS sensor detects a certain acceleration, or when the MEMS sensor detects a particular orientation of the terminal device (such as the device orientation in FIG. 1).
  • When the unlocking application is activated, the terminal device uses the active light illuminator to project active invisible light (301) onto the target object, such as a human face. The projected active invisible light may be infrared, ultraviolet, etc., and may take the form of floodlight or structured light. The active invisible light illuminates the target, avoiding the problem of being unable to acquire the target image due to factors such as the ambient light direction or a lack of ambient light. Next, the target image is acquired by the camera. To improve on the face recognition accuracy and speed of conventional color images, in the present application the acquired image contains depth information of the target (302).
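The activate, project, capture, recognize, unlock flow above can be sketched as a short control loop. Every device method here (`project_invisible_light`, `capture_depth_image`, `detect_face`, `recognize_face`, `unlock_device`) is a hypothetical placeholder, not an API defined in the patent:

```python
# Illustrative control flow for the unlocking task described above.
# All device-access methods are assumed placeholders.

def run_unlock_task(device, timeout_frames=30):
    """Project active invisible light, acquire images containing depth
    information, and unlock the device if an authorized face is recognized."""
    device.project_invisible_light()           # e.g. infrared flood / structured light
    for _ in range(timeout_frames):            # bounded waiting time
        image = device.capture_depth_image()   # image containing depth information
        face = device.detect_face(image)
        if face is None:
            continue                           # no face yet; keep trying
        if device.recognize_face(face):        # compare against the authorized face
            device.unlock_device()
            return True
        return False                           # face found, but not authorized
    return False                               # timed out without detecting a face
```

The bounded loop corresponds to the waiting-time behaviour described later: if no face is detected before the waiting time ends, the application gives up and waits for the next activation.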
  • In one embodiment, the camera is an RGBD camera, and the acquired image includes an RGB image and a depth image of the target; in one embodiment, the camera is an infrared camera, and the captured image includes an infrared image and a depth image of the target, where the infrared image is a pure infrared flood image; in one embodiment, the images captured by the camera are a structured light image and a depth image.
  • the depth image reflects the depth information of the target, and the distance, the size, the posture, and the like of the target can be acquired based on the depth information. Therefore, the analysis can be performed based on the acquired image to realize the detection and recognition of the face.
  • When the recognized face is authorized, the unlocking application passes and the terminal device is unlocked.
  • A waiting time may be set, within which the active invisible light projection, image acquisition, and analysis are performed; if no face is detected when the waiting time ends, the unlocking application exits and waits for the next activation.
  • Face detection and recognition may be based only on depth images, and may also combine two-dimensional images with depth images, where the two-dimensional images may be RGB images, infrared images, structured light images, and the like.
  • the infrared LED floodlight and the structured light projector respectively project infrared floodlight and structured light.
  • the infrared image and the structured light image are sequentially acquired by the infrared camera, and the depth image is further obtained based on the structured light image.
  • Infrared images and depth images are used separately for face detection.
  • the invisible light here includes infrared flooding and infrared structured light, and can be time-division or synchronous projection when performing projection.
  • analyzing the depth information in the image includes acquiring a distance value of the face, and combining the distance value for face detection and recognition to improve face detection and recognition accuracy and speed. In one embodiment, analyzing the depth information in the image includes acquiring the posture information of the face, and combining the posture information to perform face detection and recognition to improve the accuracy and speed of the face detection and recognition.
  • the depth information can be used to accelerate the face detection.
  • the size of the pixel area occupied by the face can be initially determined, and a face of that size can then be searched for directly. This allows the location and area of the face to be found quickly.
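The speed-up described above follows from a pinhole-camera relation: the pixel size of a face is inversely proportional to its distance. A minimal sketch, in which the focal length and physical face width are illustrative values rather than figures from the patent:

```python
# Given the measured distance of a face, a pinhole-camera model predicts how
# many pixels the face should occupy, so a detector only needs to search at
# (roughly) that one scale instead of all scales.

def expected_face_width_px(depth_mm, focal_px=600.0, face_width_mm=160.0):
    """Predict the face width in pixels from its distance (pinhole model).

    focal_px and face_width_mm are illustrative assumptions; a real system
    would use the camera's calibrated focal length and an average face width.
    """
    if depth_mm <= 0:
        raise ValueError("depth must be positive")
    return focal_px * face_width_mm / depth_mm
```

For example, a face at 400 mm would span about 240 px under these assumed parameters, and a face twice as far would span about half that.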
  • FIG. 4 is a schematic diagram of face detection and recognition based on depth information according to an embodiment of the present application.
  • an infrared image and a depth image of a human face will be described as an example.
  • A similarity comparison between the current face infrared image and the authorized face infrared image can be performed. Since the size and posture of the current face and of the authorized face differ between the infrared images, the accuracy of face recognition is affected when the faces are compared.
  • The depth information can be used to acquire the distance and posture of the face (402); the current face infrared image or the authorized face infrared image is then adjusted using that distance and posture so that the sizes and postures of the two are consistent (that is, basically the same).
  • The face image is adjusted (403), that is, enlarged or reduced so that the two face regions are similar in size.
  • The depth information can likewise be adjusted (403).
  • One way is to enter a 3D model of the authorized face together with an infrared image at the face entry stage. When performing face recognition, the pose of the current face is identified from the depth image of the current face, and based on this posture information the 3D model of the authorized face is projected into two dimensions to produce an authorized face infrared image with the same pose; features are then extracted from this image and from the current face infrared image (404), and the feature similarity is compared (405). Because the two poses are similar, the face regions and features contained in the images are similar, and face recognition accuracy will be improved.
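The projection step in this first method, rotating the enrolled 3D model to the current pose and projecting it into 2D, can be sketched with standard rotation and pinhole-projection math. The model points, focal length, and standoff distance are hypothetical values, not data from the patent:

```python
import numpy as np

# Illustrative sketch: render the enrolled 3D face model under the pose
# estimated from the current depth image, so the projected authorized image
# and the current infrared image share the same pose.

def project_model(points_3d, yaw_rad, focal_px=600.0, depth_mm=400.0):
    """Rotate a 3D face model (N x 3, in mm, centred on the face) about the
    vertical axis by `yaw_rad`, then project it with a pinhole camera."""
    c, s = np.cos(yaw_rad), np.sin(yaw_rad)
    rot = np.array([[c, 0.0, s],      # rotation about the y (vertical) axis
                    [0.0, 1.0, 0.0],
                    [-s, 0.0, c]])
    rotated = points_3d @ rot.T
    z = rotated[:, 2] + depth_mm      # push the model in front of the camera
    x = focal_px * rotated[:, 0] / z  # perspective projection
    y = focal_px * rotated[:, 1] / z
    return np.stack([x, y], axis=1)
```

A full renderer would also handle pitch and roll, occlusion, and shading; this only illustrates the geometric idea of matching the projected pose to the observed one.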
  • Another method is: after obtaining the face pose information, the current face infrared image is corrected, for example uniformly corrected into a frontal face infrared image, and feature extraction and comparison are then performed against the frontal face infrared image of the authorized face.
  • The distance and posture information of the face can be acquired, and the face image can then be adjusted using the distance and/or posture information, so that the current face image and the authorized face image are consistent in size and/or posture, to speed up face recognition and improve its accuracy.
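The size-matching part of this adjustment reduces to a single scale factor: in a pinhole model, apparent size is inversely proportional to distance, so the ratio of the two distances tells how much to resize one image. A minimal sketch (the function name and parameters are illustrative):

```python
# Hedged sketch of the size adjustment: if the authorized face was enrolled at
# distance d_ref and the current face is measured at distance d_cur, scaling
# the current image by d_cur / d_ref makes the two face regions roughly the
# same size before feature extraction and comparison.

def size_matching_scale(current_depth_mm, enrolled_depth_mm):
    """Scale factor to apply to the current face image so that its face
    region matches the size of the authorized (enrolled) face image."""
    if current_depth_mm <= 0 or enrolled_depth_mm <= 0:
        raise ValueError("depths must be positive")
    return current_depth_mm / enrolled_depth_mm
```

So a face measured at twice the enrollment distance appears half as large and would be enlarged by a factor of 2 before comparison.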
  • FIG. 5 is a schematic structural diagram of a terminal device according to an embodiment of the present application.
  • the terminal device can include a projection module 502 and an acquisition module 507.
  • the projection module 502 can be used to project an infrared structured light image (such as an infrared structured light image projected onto a target space), and the acquisition module 507 can be used to collect a structured light image.
  • the terminal device may further include a processor (not shown), and after receiving the structured light image, the processor may utilize the structured light image to calculate the depth image of the target.
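The depth computation from a structured light image is, at its core, triangulation between the projector and the camera. A heavily simplified sketch follows; the baseline and focal length are illustrative, and the formula assumes the reference pattern is at infinity, whereas real modules use a reference-plane variant:

```python
# Rough sketch of structured-light triangulation: the depth at each pixel
# follows from the disparity between the captured pattern and a reference
# pattern, via the projector-camera baseline and the camera focal length.

def depth_from_disparity(disparity_px, baseline_mm=50.0, focal_px=600.0):
    """Depth (mm) from pattern disparity (px) via triangulation.

    baseline_mm and focal_px are illustrative assumptions, not values
    from the patent.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return baseline_mm * focal_px / disparity_px
```

The processor would evaluate this per pixel after matching the captured speckle pattern against the reference, yielding the depth image used in the rest of the pipeline.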
  • The structured light image here may include face texture information in addition to structured light information; therefore, the structured light image can also participate in face identity entry and authentication, together with the face infrared image and the depth image.
  • The acquisition module 507 is both part of the depth camera and an infrared camera; in other words, the depth camera and the infrared camera here can be considered to be the same camera.
  • the terminal device may further include an infrared floodlight 506 that can emit infrared light having the same wavelength as the structured light emitted by the projection module 502.
  • the projection module 502 and the infrared floodlight 506 can be time-switched to respectively acquire the depth image and the infrared image of the target.
  • The infrared image acquired in this way is a pure infrared image, in which facial feature information is more obvious than in the structured light image, enabling higher face recognition accuracy.
  • the infrared floodlight 506 and projection module 502 herein may correspond to the active light illuminator shown in FIG. 2.
  • depth information may be acquired using a depth camera based on TOF technology.
  • the projection module 502 can be used to emit light pulses
  • the acquisition module 507 can be used to receive light pulses.
  • the processor can be used to record the time difference between the pulse transmission and the reception, and calculate the depth image of the target based on the time difference.
  • the acquisition module 507 can simultaneously acquire the depth image and the infrared image of the target, and there is almost no parallax between the two.
  • an additional infrared camera 503 can be provided to acquire an infrared image.
  • the acquisition module 507 and the infrared camera 503 can be used to acquire the depth image and the infrared image of the target.
  • The difference from the terminal device described above is that the depth image and the infrared image are captured by different cameras, so there is parallax between the two; if the calculation performed by subsequent face recognition assumes parallax-free images, the depth image needs to be registered with the infrared image in advance.
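Registration between the two cameras can be sketched as back-projecting each depth pixel to a 3D point and reprojecting it into the infrared camera. The pinhole intrinsics and the pure horizontal baseline below are illustrative assumptions, not parameters given in the source:

```python
def register_depth_pixel(u, v, z,
                         fx_d, fy_d, cx_d, cy_d,       # depth camera intrinsics
                         baseline_x,                    # extrinsic: horizontal offset (m)
                         fx_ir, fy_ir, cx_ir, cy_ir):  # IR camera intrinsics
    """Map a depth-camera pixel (u, v) with depth z (metres) to the
    corresponding infrared-camera pixel, assuming rectified cameras
    related by a pure horizontal baseline (identity rotation)."""
    # back-project to a 3D point in the depth camera frame
    x = (u - cx_d) * z / fx_d
    y = (v - cy_d) * z / fy_d
    # shift into the IR camera frame
    x_ir = x - baseline_x
    # reproject with the IR camera intrinsics
    u_ir = fx_ir * x_ir / z + cx_ir
    v_ir = fy_ir * y / z + cy_ir
    return u_ir, v_ir
```

Running this for every valid depth pixel yields a depth map resampled into the infrared camera's pixel grid, after which the two images can be treated as parallax-free.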
  • The terminal device may also include an earpiece 504, an ambient light/proximity sensor 505, and the like to provide additional functions.
  • The proximity of the face can be detected by the proximity sensor 505; when the face is too close, the projection module 502 stops projecting or reduces its projection power.
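Such an eye-safety policy can be illustrated with a simple threshold rule. The 10 cm / 20 cm distances and the power levels are made-up illustration values, not figures from the patent:

```python
def projector_power_mw(face_distance_m):
    """Choose projection power from the proximity reading.

    Thresholds and power levels here are hypothetical: the point is
    only that power decreases monotonically as the face gets closer."""
    if face_distance_m < 0.10:
        return 0.0      # face too close: stop projecting entirely
    if face_distance_m < 0.20:
        return 50.0     # near: project at reduced power
    return 150.0        # normal operating power
```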
  • Automatic call answering can be implemented by combining face recognition with the earpiece. For example, when the terminal device receives an incoming call, the face recognition application is activated and the required depth camera and infrared camera are opened to collect the depth image and the infrared image; after identification passes, the call is connected and the earpiece is turned on to carry out the call.
  • The terminal device may also include a screen (display) 501, which can be used to display image content and for touch interaction.
  • When the terminal device is in a sleep state or the like and the user picks it up, the inertial measurement unit in the terminal device recognizes the acceleration caused by the pick-up.
  • The screen is then lit and the unlocking application is started, so that an unlock interface appears on the screen.
  • The terminal device turns on the depth camera and the infrared camera to collect the depth image and/or the infrared image, and then performs face detection and recognition.
  • The preset gaze direction of the human eye can be set to the direction in which the eye looks at the screen 501, so that unlocking proceeds further only when the user is looking at the screen.
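The pick-up-then-unlock pipeline can be summarized as a chain of gates: IMU acceleration, face match, then gaze direction. The acceleration threshold and return labels below are hypothetical, chosen only to make the ordering of checks explicit:

```python
def try_unlock(pickup_accel_g, face_match, gaze_on_screen,
               accel_threshold_g=1.2):
    """Unlock pipeline sketch: the IMU must first report a pick-up
    acceleration, then the face must match the enrolled template, and
    finally the eyes must be looking at the screen. All thresholds and
    state names are illustrative assumptions."""
    if pickup_accel_g < accel_threshold_g:
        return "stay_asleep"          # no pick-up detected
    if not face_match:
        return "screen_on_locked"     # woke up, but wrong/absent face
    if not gaze_on_screen:
        return "screen_on_locked"     # face matches, but not looking
    return "unlocked"
```

Ordering the cheap IMU check before the camera-based checks keeps the depth camera and infrared camera powered off until a pick-up actually occurs.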
  • The terminal device may further include a memory (not shown) for storing the feature information entered during the enrollment phase, and it may also store applications, instructions, and the like.
  • Face-recognition-related applications include unlocking, payment, anti-peeping, and the like.
  • The processor calls the instructions in the memory and executes the enrollment and authentication methods.
  • The application program can also be written directly into the processor in the form of instruction code, as a function module of the processor or as a corresponding independent processor, to improve execution efficiency.
  • The methods described in this application can be implemented in the device in either software or hardware.

Abstract

A task execution method, a terminal device, and a computer-readable storage medium are provided. The task execution method comprises the following steps: after a face recognition application of a terminal device is activated, projecting active invisible light into space; obtaining an image containing depth information; analyzing the image so as to determine whether it contains a face, and performing face recognition when it does; and controlling the terminal device to execute a corresponding operation according to the recognition result. By using active light illumination and combining depth information for face recognition, the invention solves the problem of low face recognition accuracy caused by ambient light interference.
PCT/CN2018/113787 2017-12-04 2018-11-02 Task execution method, terminal device and computer-readable storage medium WO2019109768A1 (fr)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN201711262543.0 2017-12-04
CN201711262543 2017-12-04
CN201810336303.9 2018-04-16
CN201810336303.9A CN108537187A (zh) 2017-12-04 2018-04-16 任务执行方法、终端设备及计算机可读存储介质

Publications (1)

Publication Number Publication Date
WO2019109768A1 true WO2019109768A1 (fr) 2019-06-13

Family

ID=63480245

Family Applications (2)

Application Number Title Priority Date Filing Date
PCT/CN2018/113787 WO2019109768A1 (fr) 2017-12-04 2018-11-02 Procédé d'exécution de tâches, dispositif terminal et support de stockage lisible par ordinateur
PCT/CN2018/113784 WO2019109767A1 (fr) 2017-12-04 2018-11-02 Procédé d'exécution de tâches, dispositif terminal et support de stockage lisible par ordinateur

Family Applications After (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/113784 WO2019109767A1 (fr) 2017-12-04 2018-11-02 Procédé d'exécution de tâches, dispositif terminal et support de stockage lisible par ordinateur

Country Status (3)

Country Link
US (1) US20200293754A1 (fr)
CN (2) CN108563936B (fr)
WO (2) WO2019109768A1 (fr)

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108563936B (zh) * 2017-12-04 2020-12-18 深圳奥比中光科技有限公司 任务执行方法、终端设备及计算机可读存储介质
EP3644261B1 (fr) 2018-04-28 2023-09-06 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Procédé de traitement d'image, appareil, support d'informations lisible par ordinateur, et dispositif électronique
CN109635539B (zh) * 2018-10-30 2022-10-14 荣耀终端有限公司 一种人脸识别方法及电子设备
CN109445231B (zh) * 2018-11-20 2022-03-29 奥比中光科技集团股份有限公司 一种深度相机及深度相机保护方法
CN109635682B (zh) * 2018-11-26 2021-09-14 上海集成电路研发中心有限公司 一种人脸识别装置和方法
US11250144B2 (en) * 2019-03-29 2022-02-15 Lenovo (Singapore) Pte. Ltd. Apparatus, method, and program product for operating a display in privacy mode
TWI709130B (zh) * 2019-05-10 2020-11-01 技嘉科技股份有限公司 自動調整顯示畫面的顯示裝置及其方法
CN110333779B (zh) * 2019-06-04 2022-06-21 Oppo广东移动通信有限公司 控制方法、终端和存储介质
CN112036222B (zh) * 2019-06-04 2023-12-29 星宸科技股份有限公司 脸部辨识系统及方法
CN111131872A (zh) * 2019-12-18 2020-05-08 深圳康佳电子科技有限公司 一种集成深度相机的智能电视及其控制方法与控制系统
KR102291593B1 (ko) * 2019-12-26 2021-08-18 엘지전자 주식회사 영상표시장치 및 그의 동작방법
CN112183480A (zh) * 2020-10-29 2021-01-05 深圳奥比中光科技有限公司 一种人脸识别方法、装置、终端设备及存储介质
US11394825B1 (en) * 2021-03-15 2022-07-19 Motorola Mobility Llc Managing mobile device phone calls based on facial recognition
CN113378139B (zh) * 2021-06-11 2022-11-29 平安国际智慧城市科技股份有限公司 界面内容的防窥方法、装置、设备以及存储介质
CN113687899A (zh) * 2021-08-25 2021-11-23 读书郎教育科技有限公司 一种解决查看通知与人脸解锁冲突的方法及设备

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1932847A (zh) * 2006-10-12 2007-03-21 上海交通大学 复杂背景下彩色图像人脸检测的方法
US8447098B1 (en) * 2010-08-20 2013-05-21 Adobe Systems Incorporated Model-based stereo matching
CN104850842A (zh) * 2015-05-21 2015-08-19 北京中科虹霸科技有限公司 移动终端虹膜识别的人机交互方法
CN104899579A (zh) * 2015-06-29 2015-09-09 小米科技有限责任公司 人脸识别方法和装置
CN107169483A (zh) * 2017-07-12 2017-09-15 深圳奥比中光科技有限公司 基于人脸识别的任务执行
CN108537187A (zh) * 2017-12-04 2018-09-14 深圳奥比中光科技有限公司 任务执行方法、终端设备及计算机可读存储介质

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008310515A (ja) * 2007-06-13 2008-12-25 Nippon Telegr & Teleph Corp <Ntt> 情報機器監視装置
CN105354960A (zh) * 2015-10-30 2016-02-24 夏翊 一种金融自助服务终端安全区域控制方法
CN107105217B (zh) * 2017-04-17 2018-11-30 深圳奥比中光科技有限公司 多模式深度计算处理器以及3d图像设备
CN107194288A (zh) * 2017-04-25 2017-09-22 上海与德科技有限公司 显示屏的控制方法及终端

Also Published As

Publication number Publication date
WO2019109767A1 (fr) 2019-06-13
CN108563936B (zh) 2020-12-18
CN108537187A (zh) 2018-09-14
US20200293754A1 (en) 2020-09-17
CN108563936A (zh) 2018-09-21

Similar Documents

Publication Publication Date Title
WO2019109768A1 (fr) Task execution method, terminal device and computer-readable storage medium
US10255417B2 (en) Electronic device with method for controlling access to same
CN109544618B (zh) Method for obtaining depth information and electronic device
US10922395B2 (en) Facial authentication systems and methods utilizing time of flight sensing
EP3872658B1 (fr) Face recognition method and electronic device
CN108664783B (zh) Iris-recognition-based recognition method and electronic device supporting the same
CN108399349B (zh) Image recognition method and apparatus
WO2019080580A1 (fr) Method and apparatus for 3D face identity authentication
WO2019080578A1 (fr) Method and apparatus for 3D face identity authentication
US20170061210A1 (en) Infrared lamp control for use with iris recognition authentication
WO2019080579A1 (fr) Method and apparatus for 3D face identity authentication
JP2017538300A (ja) Unmanned aerial vehicle shooting control method and control apparatus, electronic device, computer program and computer-readable storage medium
TWI706270B (zh) Identity recognition method, apparatus and computer-readable storage medium
WO2021037157A1 (fr) Image recognition method and electronic device
US20150347732A1 (en) Electronic Device and Method for Controlling Access to Same
CN114090102B (zh) Method, apparatus, electronic device and medium for starting an application
CN115087975A (zh) Electronic apparatus and method for recognizing an object
WO2022206494A1 (fr) Target tracking method and device
CN109766806A (zh) Efficient face recognition method and electronic device
CN115032640B (zh) Gesture recognition method and terminal device
CN115184956A (zh) TOF sensor system and electronic device
CN115066882A (zh) Electronic apparatus and method for performing autofocus
CN114111704A (zh) Distance measurement method and apparatus, electronic device and readable storage medium
WO2022222705A1 (fr) Device control method and electronic device
EP4184298A1 (fr) Application launching method and apparatus, electronic device and medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18885285

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18885285

Country of ref document: EP

Kind code of ref document: A1