WO2019196558A1 - Bright screen method and apparatus, mobile terminal, and storage medium - Google Patents

Bright screen method and apparatus, mobile terminal, and storage medium

Info

Publication number
WO2019196558A1
WO2019196558A1 (PCT/CN2019/075383)
Authority
WO
WIPO (PCT)
Prior art keywords
face
mobile terminal
light image
pupil
imaging
Prior art date
Application number
PCT/CN2019/075383
Other languages
English (en)
French (fr)
Inventor
周海涛
惠方方
郭子青
谭筱
Original Assignee
Oppo广东移动通信有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Oppo广东移动通信有限公司
Priority to EP19734656.2A (EP3579086B1)
Priority to US 16/477,439 (US11537696B2)
Publication of WO2019196558A1

Classifications

    • G06F21/32: User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • G06F3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F21/84: Protecting input, output or interconnection devices; output devices, e.g. displays or monitors
    • G06F3/013: Eye tracking input arrangements
    • G06T7/521: Depth or shape recovery from the projection of structured light, or from laser ranging, e.g. using interferometry
    • G06T7/55: Depth or shape recovery from multiple images
    • G06V40/165: Detection, localisation or normalisation of human faces using facial parts and geometric relationships
    • G06V40/197: Matching or classification of eye characteristics, e.g. of the iris
    • G06T2207/30201: Indexing scheme for image analysis; subject of image: human face

Definitions

  • The present application relates to the field of mobile terminal technologies, and in particular to a bright screen method and apparatus, a mobile terminal, and a storage medium.
  • In the face unlocking technology typically applied in mobile terminals, when the user lifts the mobile terminal, the terminal detects a face, and when a face is detected it automatically unlocks the screen.
  • The embodiments of the present application provide a bright screen method and apparatus, a mobile terminal, and a storage medium.
  • The position of the pupil is identified in a constructed face depth model, and the mobile terminal is controlled to light up the screen only when the pupil lies within a designated area of the human eye.
  • The screen is therefore lit and unlocked only when the user's eyes are looking at it, which effectively prevents the mobile terminal from being accidentally unlocked and improves the user experience.
  • An embodiment of the first aspect of the present application provides a bright screen method, including: when the change in the motion state of the mobile terminal satisfies a preset unlocking condition, controlling a structured light image sensor to turn on for imaging; acquiring a depth map obtained by the structured light image sensor; constructing a face depth model from the depth map; identifying the position of the pupil from the face depth model; and, if the position of the pupil is within a designated area of the human eye, controlling the mobile terminal to light up the screen.
  • An embodiment of the second aspect of the present application provides a bright screen apparatus, including:
  • a first control module configured to, when the change in the motion state of the mobile terminal satisfies a preset unlocking condition, control a structured light image sensor to turn on for imaging;
  • an acquiring module configured to acquire a depth map obtained by the structured light image sensor;
  • a constructing module configured to construct a face depth model from the depth map;
  • an identifying module configured to identify the position of the pupil from the face depth model; and
  • a second control module configured to control the mobile terminal to light up the screen if the position of the pupil is within a designated area of the human eye.
  • An embodiment of the third aspect of the present application provides a mobile terminal, including an imaging sensor, a memory, a micro control unit (MCU), a processor, and a trusted application stored on the memory and operable in a trusted execution environment of the processor.
  • The MCU, which is dedicated hardware of the trusted execution environment, is connected to the imaging sensor and the processor, and is configured to control the imaging sensor to perform imaging and send the imaging data to the processor.
  • When the processor executes the trusted application, the following bright screen steps are implemented: when the change in the motion state of the mobile terminal satisfies a preset unlocking condition, controlling the structured light image sensor to turn on for imaging; acquiring the depth map obtained by the structured light image sensor; constructing a face depth model from the depth map; identifying the position of the pupil from the face depth model; and, if the position of the pupil is within a designated area of the human eye, controlling the mobile terminal to light up the screen.
  • An embodiment of the fourth aspect of the present application provides a computer readable storage medium having a computer program stored thereon; when the program is executed by a processor, the bright screen method described in the first aspect is implemented.
  • FIG. 1 is a schematic flowchart of a bright screen method according to Embodiment 1 of the present application.
  • FIG. 2 is a schematic flowchart of a bright screen method according to Embodiment 2 of the present application.
  • FIG. 3 is a schematic flowchart of a bright screen method according to Embodiment 3 of the present application.
  • FIG. 4 is a schematic flowchart of a bright screen method according to Embodiment 4 of the present application.
  • FIG. 5 is a schematic structural diagram of a mobile terminal according to an embodiment of the present application.
  • FIG. 6 is a schematic structural diagram of a bright screen device according to Embodiment 1 of the present application.
  • FIG. 7 is a schematic structural diagram of a bright screen device according to Embodiment 2 of the present application.
  • FIG. 8 is a schematic structural diagram of a bright screen device according to Embodiment 3 of the present application.
  • FIG. 9 is a schematic structural diagram of a mobile terminal according to an embodiment of the present application.
  • The bright screen method of an embodiment of the present application includes the following steps:
  • Step 101: When the change in the motion state of the mobile terminal 80 satisfies a preset unlocking condition, control the structured light image sensor to turn on for imaging.
  • Step 102: Acquire a depth map obtained by the structured light image sensor.
  • Step 103: Construct a face depth model from the depth map.
  • Step 104: Identify the position of the pupil from the face depth model.
  • Step 105: If the position of the pupil is within a designated area of the human eye, control the mobile terminal 80 to light up the screen.
  • In some embodiments, step 103 includes the following step:
  • Step 1032: Construct multiple face depth models from multiple depth maps acquired within a predetermined duration.
  • Step 105 then includes:
  • Step 1052: If the pupil position of every face depth model is within the designated area of the human eye, control the mobile terminal 80 to light up the screen.
  • In some embodiments, step 105 includes the following steps:
  • Step 201: Identify the human eye from the face depth model according to the shape of the human eye.
  • Step 202: Identify the pupil in the human eye according to the shape of the pupil.
  • Step 203: Determine the first center position of the pupil, and take the first center position as the position of the pupil.
  • In some embodiments, step 105 further includes:
  • Step 204: Extract the second center position of the designated area in the human eye.
  • Step 205: Acquire the first offset of the first center position relative to the second center position.
  • Step 206: If the first offset is within a set first offset range, determine that the position of the pupil is within the designated area of the human eye.
  • In some embodiments, the following steps are also included before step 104:
  • Step 301: While controlling the structured light image sensor to turn on for imaging, control the visible light image sensor to turn on for imaging.
  • Step 302: Acquire a visible light image obtained by the visible light image sensor.
  • Step 303: Perform face recognition on the visible light image to determine the position of the face in the visible light image.
  • Step 304: If the face is within a designated area of the visible light image, trigger the construction of the face depth model.
  • In some embodiments, the following steps are included after step 303: determining a third center position of the face; extracting a fourth center position of the designated area of the visible light image; acquiring a second offset of the third center position relative to the fourth center position; and, if the second offset is within a set second offset range, determining that the face is within the designated area of the visible light image.
  • In some embodiments, before the structured light image sensor is controlled to turn on for imaging, the method further includes: controlling an infrared sensor to turn on for imaging; acquiring an infrared image obtained by the infrared sensor; extracting a face contour from the infrared image; and, when the face contour matches a pre-stored face contour, determining that the current imaging object is the owner.
  • In some embodiments, the bright screen method is performed by a trusted application 850 that runs in a trusted execution environment.
  • In some embodiments, communication with the trusted application 850 is performed through dedicated hardware of the trusted execution environment.
  • The bright screen apparatus 50 of the present application includes a first control module 510, an acquisition module 520, a construction module 530, an identification module 540, and a second control module 550.
  • The first control module 510 is configured to control the structured light image sensor to turn on for imaging when the change in the motion state of the mobile terminal 80 satisfies a preset unlocking condition.
  • The acquisition module 520 is configured to acquire a depth map obtained by the structured light image sensor.
  • The construction module 530 is configured to construct a face depth model from the depth map.
  • The identification module 540 is configured to identify the position of the pupil from the face depth model.
  • The second control module 550 is configured to control the mobile terminal 80 to light up the screen when the position of the pupil is within a designated area of the human eye.
  • The mobile terminal 80 of an embodiment of the present application includes an imaging sensor 810, a memory 820, a micro control unit (MCU) 830, a processor 840, and a trusted application 850 stored in the memory 820 and running in the trusted execution environment of the processor 840.
  • The MCU 830 is dedicated hardware of the trusted execution environment, is connected to the imaging sensor 810 and the processor 840, and is used to control the imaging sensor 810 to perform imaging and transmit the imaging data to the processor 840.
  • When the processor 840 executes the trusted application 850, the following bright screen steps are implemented:
  • Step 101: When the change in the motion state of the mobile terminal 80 satisfies a preset unlocking condition, control the structured light image sensor to turn on for imaging.
  • Step 102: Acquire a depth map obtained by the structured light image sensor.
  • Step 103: Construct a face depth model from the depth map.
  • Step 104: Identify the position of the pupil from the face depth model.
  • Step 105: If the position of the pupil is within a designated area of the human eye, control the mobile terminal 80 to light up the screen.
  • In some embodiments, when the processor 840 executes the trusted application 850, the following steps are implemented:
  • Step 1032: Construct multiple face depth models from multiple depth maps acquired within a predetermined duration.
  • Step 1052: If the pupil position of every face depth model is within the designated area of the human eye, control the mobile terminal 80 to light up the screen.
  • In some embodiments, when the processor 840 executes the trusted application 850, the following steps are implemented:
  • Step 201: Identify the human eye from the face depth model according to the shape of the human eye.
  • Step 202: Identify the pupil in the human eye according to the shape of the pupil.
  • Step 203: Determine the first center position of the pupil, and take the first center position as the position of the pupil.
  • In some embodiments, when the processor 840 executes the trusted application 850, the following steps are implemented:
  • Step 204: Extract the second center position of the designated area in the human eye.
  • Step 205: Acquire the first offset of the first center position relative to the second center position.
  • Step 206: If the first offset is within a set first offset range, determine that the position of the pupil is within the designated area of the human eye.
  • In some embodiments, when the processor 840 executes the trusted application 850, the following steps are implemented:
  • Step 301: While controlling the structured light image sensor to turn on for imaging, control the visible light image sensor to turn on for imaging.
  • Step 302: Acquire a visible light image obtained by the visible light image sensor.
  • Step 303: Perform face recognition on the visible light image to determine the position of the face in the visible light image.
  • Step 304: If the face is within a designated area of the visible light image, trigger the construction of the face depth model.
  • In some embodiments, when the processor 840 executes the trusted application 850, the following steps are implemented: determining a third center position of the face; extracting a fourth center position of the designated area of the visible light image; acquiring a second offset of the third center position relative to the fourth center position; and, if the second offset is within a set second offset range, determining that the face is within the designated area of the visible light image.
  • In some embodiments, when the processor 840 executes the trusted application 850, the following steps are implemented: controlling an infrared sensor to turn on for imaging; acquiring an infrared image obtained by the infrared sensor; extracting a face contour from the infrared image; and, when the face contour matches a pre-stored face contour, determining that the current imaging object is the owner.
  • In some embodiments, the trusted application 850 runs in a trusted execution environment.
  • In some embodiments, communication with the trusted application 850 is performed through dedicated hardware of the trusted execution environment.
  • In some embodiments, the MCU 830 and the processor 840 communicate in an encrypted manner.
  • A computer readable storage medium according to an embodiment of the present application has a computer program stored thereon; when the program is executed by a processor, the bright screen method of any of the above embodiments is implemented.
  • In the existing face unlocking technology of mobile terminals, when the mobile terminal detects that the user has lifted it, it performs face recognition and, if a face is detected, automatically lights up and unlocks the screen.
  • This unlocking approach, however, does not consider whether the user currently intends to use the mobile terminal; when there is no such intention, a face may still be captured and cause accidental unlocking.
  • For example, when the user picks up the mobile terminal merely to move it from one place to another, a face may briefly face the screen during the move; the terminal then detects the face and automatically lights up and unlocks. Since the user only wanted to move the terminal and does not need to use it, this unlocking is not what the user wanted, that is, it is an erroneous operation, and it degrades the user experience.
  • To address this, the embodiments of the present application provide a bright screen method that controls the mobile terminal to light up and unlock the screen only when the user's eyes are looking at it, effectively preventing accidental unlocking and improving the user experience.
  • FIG. 1 is a schematic flowchart of the bright screen method provided by Embodiment 1 of the present application; the method may be performed by a mobile terminal.
  • As shown in FIG. 1, the bright screen method includes the following steps:
  • Step 101: When the change in the motion state of the mobile terminal satisfies a preset unlocking condition, control the structured light image sensor to turn on for imaging.
  • In this embodiment, a gyroscope, a gravity sensor, or the like may be installed in the mobile terminal to detect its motion state.
  • When the change in the motion state satisfies the preset unlocking condition, the structured light image sensor is controlled to turn on for imaging.
  • The preset unlocking condition may be stored in the local memory of the mobile terminal.
  • As a first example, the unlocking condition may be that the duration for which the mobile terminal is in motion reaches a preset threshold.
  • In this example, when the gyroscope, gravity sensor, or the like detects that the mobile terminal starts to move, a timer in the terminal starts timing to obtain the duration for which the terminal is in motion; this duration is compared with the preset threshold, and when it reaches the threshold, the structured light image sensor is controlled to turn on for imaging.
  • As a second example, the unlocking condition may be that a face is recognized while the mobile terminal goes from a moving state to a stopped state.
  • In this example, when the sensors detect that the mobile terminal starts to move, the terminal turns on the front camera to detect any face appearing within its field of view, and turns the front camera off when the terminal stops moving.
  • The mobile terminal recognizes the images captured by the front camera during this process; when a face is recognized, the structured light image sensor is controlled to turn on for imaging.
  • As a third example, the unlocking condition is that the motion trajectory of the mobile terminal while in motion is the target trajectory that triggers the bright screen.
  • In this example, the target trajectory that triggers the bright screen may be stored in the mobile terminal in advance.
  • Generally, when a user holds and uses the mobile terminal, it forms an angle with the ground of roughly 30° to 70°; a trajectory whose final resting angle falls within this range may therefore be taken as the target trajectory and stored in the terminal.
  • Taking the gyroscope as an example: when the user picks up the mobile terminal, the gyroscope detects the angular motion of the terminal as it is lifted, and the angular motion recorded up to the moment the terminal stops moving forms the terminal's motion trajectory.
  • After the motion trajectory of the mobile terminal is detected, it can be analyzed to extract the angle between the terminal and the ground after the terminal stops moving. This angle is compared with the angle range of the target trajectory stored in the terminal; if it falls within that range, the trajectory is determined to be the target trajectory that triggers the bright screen, and the structured light image sensor is then controlled to turn on for imaging.
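  • As a rough illustration of this third example, the following Python sketch checks whether the resting tilt angle recovered from the gyroscope trajectory falls inside the stored 30° to 70° target range; the function names and the way the angle is obtained are assumptions for illustration, not details given by the patent.

```python
# Hypothetical sketch of the third example above: after the terminal stops
# moving, the tilt angle recovered from the gyroscope trajectory must fall
# inside the stored target range (30 to 70 degrees) to trigger imaging.

TARGET_ANGLE_RANGE_DEG = (30.0, 70.0)  # target trajectory stored in advance

def is_target_trajectory(resting_angle_deg: float) -> bool:
    """Return True if the resting angle to the ground is in the range."""
    low, high = TARGET_ANGLE_RANGE_DEG
    return low <= resting_angle_deg <= high

def on_motion_stopped(resting_angle_deg: float) -> None:
    # resting_angle_deg would come from integrating gyroscope readings
    # while the terminal was being lifted (not shown here).
    if is_target_trajectory(resting_angle_deg):
        print("target trajectory: turn on structured light image sensor")

on_motion_stopped(55.0)  # 55 degrees lies in 30-70, so imaging starts
```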
  • The structured light image sensor is used to project structured light onto the imaged object; a set of projected light beams with known spatial directions is referred to as structured light.
  • In this embodiment, the structured light may be of any type, such as a grating pattern, light spots, stripes (including circular and cross stripes), or a non-uniform speckle pattern.
  • Step 102: Acquire the depth map obtained by the structured light image sensor.
  • After the structured light emitted by the structured light image sensor reaches the face, the facial organs obstruct the light and it is reflected at the face.
  • A camera provided in the mobile terminal collects the light reflected from the face, and the depth map of the face is obtained from the collected reflections.
  • Step 103: Construct a face depth model from the depth map.
  • Specifically, the depth map may include both the face and the background.
  • The depth map is first denoised and smoothed to obtain an image of the region where the face is located, and the face is then separated from the background by foreground/background segmentation and similar processing.
  • After the face is extracted from the depth map, feature point data can be extracted from it, and the feature points are connected into networks according to the extracted data. For example, according to the spatial distances between points, points on the same plane, or points whose distances are within a threshold range, are connected into triangular networks; splicing these networks together yields the face depth model.
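  • To make this construction step concrete, here is a minimal NumPy sketch of the pipeline just described: smooth the depth map, separate the face from the background by a depth threshold, and connect neighbouring face pixels into a triangular network. The threshold value, the uniform-grid triangulation, and all names are simplifying assumptions; the patent does not specify them.

```python
import numpy as np

def build_face_mesh(depth: np.ndarray, face_max_depth: float = 600.0):
    """Toy face depth model construction from a depth map (values in mm).

    Returns (vertices, triangles): vertices are (x, y, z) points of the
    face region; triangles index into the vertex list.
    """
    h, w = depth.shape
    # Denoising/smoothing stand-in: a 3x3 mean filter with edge padding.
    padded = np.pad(depth, 1, mode="edge")
    smooth = sum(padded[dy:dy + h, dx:dx + w]
                 for dy in range(3) for dx in range(3)) / 9.0

    # Foreground/background segmentation: keep pixels closer than a threshold.
    mask = smooth < face_max_depth

    # Map each face pixel to a vertex index.
    idx = -np.ones((h, w), dtype=int)
    ys, xs = np.nonzero(mask)
    idx[ys, xs] = np.arange(len(ys))
    vertices = np.stack([xs, ys, smooth[ys, xs]], axis=1)

    # Connect neighbouring face pixels into triangular networks.
    triangles = []
    for y, x in zip(ys, xs):
        if y + 1 < h and x + 1 < w:
            a, b = idx[y, x], idx[y, x + 1]
            c, d = idx[y + 1, x], idx[y + 1, x + 1]
            if min(a, b, c, d) >= 0:
                triangles += [(a, b, c), (b, d, c)]
    return vertices, np.array(triangles)

demo = np.full((6, 6), 1000.0)     # background about 1 m away
demo[1:5, 1:5] = 400.0             # a small "face" 40 cm away
v, t = build_face_mesh(demo)
print(len(v), "vertices,", len(t), "triangles")
```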
  • Step 104: Identify the position of the pupil from the face depth model.
  • When the user is about to turn on the mobile terminal, the user's eyes generally look at its screen, and so are open. In the face depth model constructed from the depth map of the face, the eyes should likewise be open, so the position of the human eye can be determined from the model and the position of the pupil identified from it.
  • Step 105: If the position of the pupil is within a designated area of the human eye, control the mobile terminal to light up the screen.
  • When the user's eyes gaze at the screen of the mobile terminal, the pupil is located in the very middle of the eye.
  • In this embodiment, whether the user is gazing at the screen can therefore be judged from the identified pupil position, and the mobile terminal is controlled to light up the screen only while the user gazes at it.
  • As an example, the circular area centred on the midpoint of the human eye with a radius of 4 mm may be used as the designated area. After the pupil position is identified from the face depth model, it can further be judged whether it lies within this designated area; if it does, the user is considered to be gazing at the screen, and the mobile terminal is controlled to light up the screen.
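  • A minimal sketch of this gaze test, assuming the pupil and eye-midpoint coordinates (in millimetres) have already been recovered from the face depth model; the 4 mm radius is the example value given above.

```python
import math

def pupil_in_designated_area(pupil_xy, eye_mid_xy, radius_mm=4.0) -> bool:
    """Gaze test of Embodiment 1: is the pupil inside the circular
    designated area centred on the midpoint of the human eye?"""
    dx = pupil_xy[0] - eye_mid_xy[0]
    dy = pupil_xy[1] - eye_mid_xy[1]
    return math.hypot(dx, dy) <= radius_mm

# Pupil 1.5 mm right of and 1.0 mm above the eye midpoint: gazing.
print(pupil_in_designated_area((1.5, 1.0), (0.0, 0.0)))  # True
```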
  • In the bright screen method of this embodiment, when the change in the motion state of the mobile terminal satisfies the preset unlocking condition, the structured light image sensor is controlled to turn on for imaging; the depth map obtained by the sensor is acquired; a face depth model is constructed from the depth map; the position of the pupil is identified from the model; and the mobile terminal lights up the screen only when the pupil lies within the designated area of the human eye.
  • FIG. 2 is a schematic flowchart of the bright screen method provided by Embodiment 2 of the present application.
  • As shown in FIG. 2, on the basis of the embodiment shown in FIG. 1, step 105 may include the following steps:
  • Step 201: Identify the human eye from the face depth model according to the shape of the human eye.
  • The facial organs of a face differ in shape; the human eye is mostly elliptical and lies in the upper half of the face.
  • In this embodiment, the human eye can therefore be recognized in the constructed face depth model according to its shape.
  • Step 202: Identify the pupil in the human eye according to the shape of the pupil.
  • The pupil is a circle with a neat edge and a small diameter; in a natural light environment, its diameter is roughly between 2.5 mm and 5 mm.
  • In this embodiment, after the human eye is identified, the pupil can be recognized in the eye according to its size and shape.
  • Step 203: Determine the first center position of the pupil, and take the first center position as the position of the pupil.
  • In this embodiment, the small-diameter circle in the human eye can be determined to be the pupil, and its first center position determined and taken as the position of the pupil.
  • The first center position may be any position of the pupil, for example the center of the pupil, and may be represented by coordinates.
  • Further, in a possible implementation of the embodiments of the present application, the following steps may be included after step 203:
  • Step 204: Extract the second center position of the designated area in the human eye.
  • In this embodiment, the middle area of the identified human eye can be used as the designated area.
  • For example, a circle of 3 mm radius can be drawn around the very center of the eye, and the resulting circular area taken as the designated area, whose second center position is then determined.
  • The second center position can be represented by coordinates.
  • Step 205: Acquire the first offset of the first center position relative to the second center position.
  • After the second center position of the designated area and the first center position of the pupil are determined, the two may be compared to obtain the first offset of the first center position relative to the second center position.
  • The first offset can be represented by the coordinate differences along the coordinate axes.
  • Step 206: If the first offset is within a set first offset range, determine that the position of the pupil is within the designated area of the human eye.
  • The first offset range may be preset and stored in the mobile terminal, and may for example be -2 mm to +2 mm.
  • Here, "-" indicates an offset to the left or downward relative to the second center position;
  • "+" indicates an offset to the right or upward relative to the second center position.
  • In this embodiment, the acquired first offset is compared with the preset first offset range. If the first offset is within that range, the position of the pupil is determined to be within the designated area of the human eye; it can then be concluded that the user is currently gazing at the screen, and the mobile terminal can be controlled to light up the screen.
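  • The per-axis offset test of steps 204 to 206 can be sketched as follows; the millimetre coordinates and the -2 mm to +2 mm bounds follow the description above, while the function and variable names are illustrative assumptions.

```python
FIRST_OFFSET_RANGE_MM = (-2.0, 2.0)  # "-" = left/down, "+" = right/up

def gaze_detected(first_center, second_center,
                  offset_range=FIRST_OFFSET_RANGE_MM) -> bool:
    """Steps 204-206: compare the pupil's first center position with the
    second center position of the designated area, axis by axis."""
    low, high = offset_range
    dx = first_center[0] - second_center[0]  # first offset, x axis
    dy = first_center[1] - second_center[1]  # first offset, y axis
    return low <= dx <= high and low <= dy <= high

# Pupil center 1.0 mm right of and 0.5 mm above the area center: gazing.
print(gaze_detected((1.0, 0.5), (0.0, 0.0)))  # True -> light the screen
```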
  • In the bright screen method of this embodiment, when the change in the motion state of the mobile terminal satisfies the preset unlocking condition, the structured light image sensor is controlled to turn on for imaging; the depth map obtained by the sensor is acquired; a face depth model is constructed from the depth map; the human eye is identified from the model according to its shape; and the pupil is identified according to its shape, after which its first center position is determined and taken as the position of the pupil. The pupil can thus be identified accurately, laying the foundation for accurately performing the bright screen operation.
  • By extracting the second center position of the designated area in the human eye, acquiring the first offset of the first center position relative to the second center position, and determining that the pupil is within the designated area when the first offset is within the set first offset range, it becomes possible to judge from the pupil position whether the eye is gazing at the screen, and hence whether to light up the screen of the mobile terminal, so that erroneous unlocking operations are effectively avoided.
  • To judge more accurately whether the user is currently gazing at the screen, an embodiment of the present application provides another bright screen method. FIG. 4 is a schematic flowchart of the bright screen method provided by Embodiment 4 of the present application; in it, step 103 includes:
  • Step 1032: Construct multiple face depth models from multiple depth maps acquired within a predetermined duration.
  • Specifically, the structured light image sensor can acquire multiple depth maps within the predetermined duration. Each depth map is then denoised and smoothed to obtain the image of the region where the face is located, and the face is separated from the background by foreground/background segmentation and similar processing.
  • After the face is extracted from each depth map, feature point data can be extracted and the feature points connected into networks according to the extracted data; for example, points on the same plane, or points whose spatial distances are within a threshold range, are connected into triangular networks, which are spliced to construct a face depth model.
  • Constructing the multiple face depth models may proceed in either of two ways. In the first, the face-region image of every depth map yields a corresponding model, so that all depth maps within the predetermined duration contribute models, improving the accuracy of judging whether the user is gazing at the screen. In the second, several depth maps are selected from the full set and one model is constructed for each selected map; for example, if nine depth maps are obtained, one of every three may be selected, so that three depth maps are selected and three corresponding models constructed. Since the time span of the selected maps still essentially covers the whole predetermined duration, the accuracy of the gaze judgment can be preserved without constructing a model for every depth map, and the amount of computation is reduced.
  • Step 105 then includes:
  • Step 1052: If the pupil position of every face depth model is within the designated area of the human eye, control the mobile terminal to light up the screen.
  • Specifically, after the multiple face depth models are constructed, the pupil position in each model can be identified. When every identified pupil position is within the designated area of the eye, it can be determined that the user has been gazing at the screen of the mobile terminal for longer than a preset duration.
  • When the user's gaze lasts longer than the preset duration, the user is not looking at the screen by accident.
  • Lighting the screen only in this case tightens the condition for lighting the screen of the mobile terminal and further reduces the probability of false unlocking.
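  • Putting steps 1032 and 1052 together, a hedged sketch: select every k-th depth map acquired within the predetermined duration, build one model per selected map, and light the screen only if every model passes the gaze test. The build_model and gaze_from_model callables stand in for the construction and pupil checks described earlier and are assumptions, not part of the patent.

```python
def confirm_sustained_gaze(depth_maps, build_model, gaze_from_model, step=3):
    """Steps 1032/1052: subsample the depth maps (e.g. one of every three),
    build one face depth model per selected map, and require the pupil to
    lie inside the designated area in every model."""
    selected = depth_maps[::step]            # e.g. 9 maps -> 3 models
    models = [build_model(d) for d in selected]
    return all(gaze_from_model(m) for m in models)

# Toy run: 9 stand-in depth maps, every model reports the pupil in place.
maps = list(range(9))
print(confirm_sustained_gaze(maps, build_model=lambda d: d,
                             gaze_from_model=lambda m: True))  # True
```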
  • To further improve the accuracy of face recognition, in a possible implementation of the embodiments of the present application a visible light image may be obtained with a visible light image sensor, and whether the user intends to turn on the mobile terminal determined from it. The embodiments of the present application therefore provide another bright screen method; FIG. 3 is a schematic flowchart of the bright screen method provided by Embodiment 3 of the present application. As shown in FIG. 3, on the basis of the embodiment shown in FIG. 1, the following steps may also be included before step 104:
  • Step 301: While controlling the structured light image sensor to turn on for imaging, control the visible light image sensor to turn on for imaging.
  • Step 302: Acquire the visible light image obtained by the visible light image sensor.
  • In this embodiment, when the detected motion trajectory of the mobile terminal is the target trajectory that triggers the bright screen, the structured light image sensor and the visible light image sensor are both turned on; the depth map of the face is obtained with the structured light image sensor, and the visible light image with the visible light image sensor.
  • Step 303: Perform face recognition on the visible light image to determine the position of the face in the visible light image.
  • After the visible light image is obtained, related face recognition techniques can be used to recognize the face in it; once a face is recognized, its position in the visible light image is further determined.
  • Step 304: If the face is within a designated area of the visible light image, trigger the construction of the face depth model.
  • Specifically, a third center position of the face may be determined and a fourth center position of the designated area of the visible light image extracted; the second offset of the third center position relative to the fourth center position is then acquired, and if the second offset is within a set second offset range, the face is determined to be within the designated area of the visible light image.
  • The second offset range may be preset and stored in the mobile terminal.
  • Generally, when the user uses the mobile terminal, the face directly faces its screen.
  • In the face image captured at that moment, the face usually sits around the middle of the image; in this embodiment, the middle of the visible light image is therefore used as the designated area.
  • For example, the middle region starting one quarter from the top of the visible light image and ending one quarter from its bottom may be used as the designated area, so that the designated area covers half of the image's area.
  • In this embodiment, after the position of the face in the visible light image is determined, the third center position of the face may further be determined and the fourth center position of the designated area extracted, where the third center position is the coordinate representation of the face's position in the visible light image and the fourth center position is the coordinate representation of the designated area in the visible light image. The third center position is then compared with the fourth, the second offset of the third relative to the fourth acquired, and, when the second offset is within the second offset range, the face is determined to be within the designated area of the visible light image, triggering the construction of the face depth model.
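  • A sketch of the face-position check in steps 303 and 304, assuming the face bounding box comes from an ordinary face detector; the designated area spanning the middle half of the image mirrors the text, while the pixel offset bound is an illustrative placeholder, since the patent stores a second offset range without giving its value.

```python
def face_in_designated_area(face_box, image_h, image_w, max_offset_px=40):
    """Steps 303-304: compare the face center (third center position) with
    the center of the designated area (fourth center position)."""
    x, y, w, h = face_box                       # detector output, pixels
    third_center = (x + w / 2.0, y + h / 2.0)

    # Designated area: the middle half of the visible light image, from
    # 1/4 of its height down to 3/4; its center is the image center.
    fourth_center = (image_w / 2.0, image_h / 2.0)

    dx = third_center[0] - fourth_center[0]     # second offset, x axis
    dy = third_center[1] - fourth_center[1]     # second offset, y axis
    return abs(dx) <= max_offset_px and abs(dy) <= max_offset_px

# A 200x240 face box roughly centred in a 720x1280 portrait frame: inside.
print(face_in_designated_area((260, 520, 200, 240), 1280, 720))  # True
```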
  • In the bright screen method of this embodiment, before the face depth model is constructed, the visible light image sensor is first turned on for imaging to obtain a visible light image; face recognition is performed on it and the position of the face in the image determined.
  • The face depth model is constructed only when the face is within the designated area of the visible light image, which avoids the energy cost of building the model when the screen of the mobile terminal does not need to be lit, thereby reducing the terminal's energy consumption and improving battery life.
  • In a possible implementation of the embodiments of the present application, before the structured light image sensor is controlled to turn on for imaging, an infrared sensor may first be controlled to turn on for imaging, the infrared image it produces acquired, and the face contour extracted from the infrared image. The extracted contour is then compared with the owner's face contour stored in advance in the mobile terminal; when the contours match, the current imaging object is determined to be the owner, and only then is the structured light image sensor controlled to turn on for imaging.
  • By performing face recognition on the infrared image to judge the imaging object, and starting the structured light image sensor for imaging only when the imaging object is the owner, the running space and power consumption of the mobile terminal can be saved.
  • By combining the infrared sensor, the visible light image sensor, and the structured light image sensor for face recognition and liveness detection, the recognition rate can be further improved.
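  • As an illustration of this owner pre-check, a sketch using OpenCV's Hu-moment contour comparison; cv2.matchShapes is a real OpenCV call, but the thresholding, the similarity bound, and the way the owner's contour is stored are assumptions, since the patent only says the extracted contour is compared with the pre-stored one.

```python
import cv2
import numpy as np

def largest_contour(ir_image: np.ndarray):
    """Extract a face contour from an 8-bit infrared image (toy version:
    Otsu-threshold the bright region and keep the largest contour)."""
    _, binary = cv2.threshold(ir_image, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # OpenCV 4 signature: returns (contours, hierarchy).
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return max(contours, key=cv2.contourArea) if contours else None

def is_owner(ir_image, enrolled_contour, max_distance=0.1) -> bool:
    """Gate for the structured light sensor: proceed to structured light
    imaging only if the extracted contour matches the owner's."""
    contour = largest_contour(ir_image)
    if contour is None:
        return False
    distance = cv2.matchShapes(contour, enrolled_contour,
                               cv2.CONTOURS_MATCH_I1, 0.0)
    return distance <= max_distance
```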
  • FIG. 5 is a schematic structural diagram of a mobile terminal according to an embodiment of the present application. As shown in FIG. 5, the mobile terminal may include a laser camera, a floodlight, a visible light camera, a laser, an MCU, and a processing chip.
  • The dedicated hardware of the trusted execution environment may be, for example, the MCU. Executing the bright screen method of the embodiments of the present application in a trusted application that runs in the trusted execution environment ensures the security of the mobile terminal.
  • The MCU includes a pulse width modulation (PWM) module, a depth engine, a bus interface, and a random access memory (RAM); the processing chip runs a Rich Execution Environment (REE) and a Trusted Execution Environment (TEE), and the REE and the TEE are isolated from each other.
  • The PWM module is used to modulate the floodlight to generate infrared light and/or to modulate the laser to emit structured light, and to project the emitted infrared light and/or structured light onto the imaged object.
  • The laser camera is used to collect the structured light image and send it to the depth engine.
  • The depth engine calculates depth-of-field data for the imaged object from the structured light image and sends the data to the trusted application through the bus interface.
  • The bus interface includes a Mobile Industry Processor Interface (MIPI), an Inter-Integrated Circuit (I2C) synchronous serial bus interface, and a Serial Peripheral Interface (SPI); through the bus interface, data is exchanged with the trusted application running in the trusted execution environment.
  • The trusted application (not shown in FIG. 5) runs in the trusted execution environment.
  • The present application also proposes a bright screen apparatus.
  • FIG. 6 is a schematic structural diagram of the bright screen apparatus provided by Embodiment 1 of the present application.
  • As shown in FIG. 6, the bright screen apparatus 50 includes a first control module 510, an acquisition module 520, a construction module 530, an identification module 540, and a second control module 550.
  • The first control module 510 is configured to control the structured light image sensor to turn on for imaging when the change in the motion state of the mobile terminal satisfies a preset unlocking condition.
  • The acquisition module 520 is configured to acquire a depth map obtained by the structured light image sensor.
  • The construction module 530 is configured to construct a face depth model from the depth map.
  • The identification module 540 is configured to identify the position of the pupil from the face depth model.
  • The second control module 550 is configured to control the mobile terminal to light up the screen when the position of the pupil is within a designated area of the human eye.
  • In some embodiments, the identification module 540 includes:
  • a recognition unit 541 configured to recognize the human eye from the face depth model according to the shape of the human eye, and to identify the pupil in the human eye according to the shape of the pupil; and
  • a determining unit 542 configured to determine the first center position of the pupil and take it as the position of the pupil.
  • The determining unit 542 is further configured to, after the first center position is taken as the position of the pupil, extract the second center position of the designated area in the human eye, acquire the first offset of the first center position relative to the second center position, and determine that the position of the pupil is within the designated area of the human eye when the first offset is within the set first offset range.
  • Because the pupil is recognized according to its shape and its first center position is then determined and taken as the position of the pupil, the pupil can be identified accurately, laying the foundation for accurately performing the bright screen operation.
  • By extracting the second center position of the designated area in the human eye, acquiring the first offset of the first center position relative to the second center position, and determining that the pupil is within the designated area when the first offset is within the set first offset range, it becomes possible to judge from the pupil position whether the eye is gazing at the screen, and hence whether to light up the screen of the mobile terminal, so that erroneous unlocking operations are effectively avoided.
  • In some embodiments, the construction module 530 is further configured to construct multiple face depth models from multiple depth maps acquired within a predetermined duration.
  • The second control module 550 is further configured to control the mobile terminal to light up the screen when the pupil positions of all the face depth models are within the designated area of the human eye.
  • In some embodiments, the bright screen apparatus 50 may further include:
  • a face recognition module 560 configured to control the visible light image sensor to turn on for imaging while the structured light image sensor is controlled to turn on for imaging; to acquire the visible light image obtained by the visible light image sensor; to perform face recognition on the visible light image and determine the position of the face in it; and to trigger the construction of the face depth model when the face is within a designated area of the visible light image.
  • The face recognition module 560 is further configured to determine a third center position of the face, extract a fourth center position of the designated area of the visible light image, and acquire a second offset of the third center position relative to the fourth center position; when the second offset is within the set second offset range, the face is determined to be within the designated area of the visible light image, and the construction module 530 is triggered to construct the face depth model from the depth map.
  • Before the face depth model is constructed from the depth map, the visible light image sensor is first turned on for imaging to obtain a visible light image; face recognition is performed on it and the position of the face in the image determined.
  • The face depth model is constructed only when the face is within the designated area of the visible light image, which avoids the energy cost of building the model when the screen of the mobile terminal does not need to be lit, thereby reducing the terminal's energy consumption and improving battery life.
  • In some embodiments, before the first control module 510 controls the structured light image sensor to turn on for imaging, the bright screen apparatus 50 can also control an infrared sensor to turn on for imaging and acquire the infrared image it produces, extract the face contour from the infrared image, and, when the face contour matches the pre-stored face contour, determine that the current imaging object is the owner. The first control module 510 then controls the structured light image sensor to turn on for imaging.
  • By performing face recognition on the infrared image to judge the imaging object, and starting the structured light image sensor for imaging only when the imaging object is the owner, the running space and power consumption of the mobile terminal can be saved.
  • By combining the infrared sensor, the visible light image sensor, and the structured light image sensor for face recognition and liveness detection, the recognition rate can be further improved.
  • In the bright screen apparatus of this embodiment, when the change in the motion state of the mobile terminal satisfies the preset unlocking condition, the structured light image sensor is controlled to turn on for imaging; the depth map obtained by the sensor is acquired; a face depth model is constructed from the depth map; the position of the pupil is identified from the model; and the mobile terminal is controlled to light up the screen when the pupil lies within the designated area of the human eye.
  • the present application also proposes a mobile terminal.
  • FIG. 9 is a schematic structural diagram of a mobile terminal according to an embodiment of the present application.
  • As shown in FIG. 9, the mobile terminal 80 includes an imaging sensor 810, a memory 820, a micro control unit (MCU) 830, a processor 840, and a trusted application 850 stored on the memory 820 and operable in the trusted execution environment of the processor 840.
  • The MCU 830 is dedicated hardware of the trusted execution environment and is connected to the imaging sensor 810 and the processor 840; the MCU 830 and the processor 840 communicate in an encrypted manner to ensure the security of data communication.
  • The MCU 830 is used to control the imaging sensor 810 to perform imaging and to transmit the imaging data to the processor 840.
  • The imaging sensor 810 can include a laser camera, a floodlight, a visible light camera, and a laser.
  • The MCU 830 may include a pulse width modulation (PWM) module, a depth engine, a bus interface, and a random access memory (RAM).
  • The PWM module is used to modulate the floodlight to generate infrared light and/or to modulate the laser to emit structured light, and to project the emitted infrared light and/or structured light onto the imaged object.
  • The laser camera is used to collect the structured light image and send it to the depth engine.
  • The depth engine calculates depth-of-field data for the imaged object from the structured light image and transmits the data to the processor 840 through the bus interface.
  • When the processor 840 executes the trusted application 850, the bright screen method described in the foregoing embodiments is implemented.
  • The mobile terminal 80 of this embodiment is provided with the imaging sensor 810, the memory 820, the micro control unit (MCU) 830, the processor 840, and the trusted application 850 stored on the memory 820 and operable in the trusted execution environment of the processor 840.
  • The imaging sensor 810 is controlled by the MCU 830 to perform imaging, the imaging data is sent to the processor 840, and the processor 840 implements the bright screen method described above by executing the trusted application 850, so that the mobile terminal 80 lights up the screen.
  • By identifying the position of the pupil in the constructed face depth model and lighting the screen only when the pupil is within the designated area of the human eye, the mobile terminal 80 is controlled to light up and unlock the screen only when the user's eyes are looking at its screen, which effectively prevents the mobile terminal 80 from being accidentally unlocked and improves the user experience.
  • The present application further provides a computer readable storage medium having stored thereon a computer program that, when executed by a processor, implements the bright screen method as described in the foregoing embodiments.

Abstract

A bright screen method, an apparatus (50), a mobile terminal (80), and a storage medium. The bright screen method includes: when the change in the motion state of the mobile terminal (80) satisfies a preset unlocking condition, controlling a structured light image sensor to turn on for imaging; acquiring a depth map obtained by the structured light image sensor; constructing a face depth model from the depth map; identifying the position of the pupil from the face depth model; and, if the position of the pupil is within a designated area of the human eye, controlling the mobile terminal (80) to light up the screen.

Description

Bright screen method and apparatus, mobile terminal, and storage medium
PRIORITY INFORMATION
This application claims priority to and the benefit of Chinese Patent Application No. 201810327835.6, filed with the State Intellectual Property Office of China on April 12, 2018, the entire contents of which are incorporated herein by reference.
TECHNICAL FIELD
The present application relates to the field of mobile terminal technologies, and in particular to a bright screen method and apparatus, a mobile terminal, and a storage medium.
BACKGROUND
With the development of electronic technology, face unlocking has gradually been applied to mobile terminals. In the face unlocking technology applied in mobile terminals, when the user lifts the mobile terminal, the terminal detects a face, and when a face is detected it automatically lights up and unlocks the screen.
SUMMARY
The embodiments of the present application provide a bright screen method and apparatus, a mobile terminal, and a storage medium. By identifying the position of the pupil in a constructed face depth model and controlling the mobile terminal to light up the screen only when the pupil lies within a designated area of the human eye, the mobile terminal is made to light up and unlock the screen only when the user's eyes are looking at it, which effectively prevents the mobile terminal from being accidentally unlocked and improves the user experience.
An embodiment of the first aspect of the present application provides a bright screen method, including:
when it is detected that the change in the motion state of the mobile terminal satisfies a preset unlocking condition, controlling a structured light image sensor to turn on for imaging;
acquiring a depth map obtained by the structured light image sensor;
constructing a face depth model from the depth map;
identifying the position of the pupil from the face depth model; and
if the position of the pupil is within a designated area of the human eye, controlling the mobile terminal to light up the screen.
An embodiment of the second aspect of the present application provides a bright screen apparatus, including:
a first control module configured to, when the change in the motion state of the mobile terminal satisfies a preset unlocking condition, control a structured light image sensor to turn on for imaging;
an acquiring module configured to acquire a depth map obtained by the structured light image sensor;
a constructing module configured to construct a face depth model from the depth map;
an identifying module configured to identify the position of the pupil from the face depth model; and
a second control module configured to, if the position of the pupil is within a designated area of the human eye, control the mobile terminal to light up the screen.
An embodiment of the third aspect of the present application provides a mobile terminal, including an imaging sensor, a memory, a micro control unit (MCU), a processor, and a trusted application stored on the memory and operable in a trusted execution environment of the processor;
the MCU is dedicated hardware of the trusted execution environment, is connected to the imaging sensor and the processor, and is configured to control the imaging sensor to perform imaging and send the imaging data to the processor;
when the processor executes the trusted application, the following bright screen steps are implemented: when the change in the motion state of the mobile terminal satisfies a preset unlocking condition, controlling a structured light image sensor to turn on for imaging; acquiring a depth map obtained by the structured light image sensor; constructing a face depth model from the depth map; identifying the position of the pupil from the face depth model; and, if the position of the pupil is within a designated area of the human eye, controlling the mobile terminal to light up the screen.
An embodiment of the fourth aspect of the present application provides a computer readable storage medium on which a computer program is stored; when the program is executed by a processor, the bright screen method described in the embodiment of the first aspect is implemented.
Additional aspects and advantages of the present application will be given in part in the following description, will become apparent in part from the following description, or will be learned through practice of the present application.
BRIEF DESCRIPTION OF THE DRAWINGS
The above and/or additional aspects and advantages of the present application will become apparent and readily understood from the following description of the embodiments taken in conjunction with the accompanying drawings, in which:
FIG. 1 is a schematic flowchart of the bright screen method provided by Embodiment 1 of the present application;
FIG. 2 is a schematic flowchart of the bright screen method provided by Embodiment 2 of the present application;
FIG. 3 is a schematic flowchart of the bright screen method provided by Embodiment 3 of the present application;
FIG. 4 is a schematic flowchart of the bright screen method provided by Embodiment 4 of the present application;
FIG. 5 is a schematic structural diagram of a mobile terminal according to an embodiment of the present application;
FIG. 6 is a schematic structural diagram of the bright screen apparatus provided by Embodiment 1 of the present application;
FIG. 7 is a schematic structural diagram of the bright screen apparatus provided by Embodiment 2 of the present application;
FIG. 8 is a schematic structural diagram of the bright screen apparatus provided by Embodiment 3 of the present application; and
FIG. 9 is a schematic structural diagram of the mobile terminal provided by an embodiment of the present application.
DETAILED DESCRIPTION
The embodiments of the present application are further described below with reference to the accompanying drawings. The same or similar reference numerals throughout the drawings denote the same or similar elements or elements having the same or similar functions. The embodiments described below with reference to the drawings are exemplary, are intended only to explain the present application, and are not to be construed as limiting it.
The bright screen method and apparatus, mobile terminal, and storage medium of the embodiments of the present application are described below with reference to the drawings.
Referring to FIG. 1 and FIG. 9, the bright screen method of an embodiment of the present application includes the following steps:
Step 101: When the change in the motion state of the mobile terminal 80 satisfies a preset unlocking condition, control the structured light image sensor to turn on for imaging.
Step 102: Acquire a depth map obtained by the structured light image sensor.
Step 103: Construct a face depth model from the depth map.
Step 104: Identify the position of the pupil from the face depth model.
Step 105: If the position of the pupil is within a designated area of the human eye, control the mobile terminal 80 to light up the screen.
Referring to FIG. 4 and FIG. 9, in some embodiments, step 103 includes the following step:
Step 1032: Construct multiple face depth models from multiple depth maps acquired within a predetermined duration.
Step 105 then includes:
Step 1052: If the pupil position of every face depth model is within the designated area of the human eye, control the mobile terminal 80 to light up the screen.
Referring to FIG. 2, in some embodiments, step 105 includes the following steps:
Step 201: Identify the human eye from the face depth model according to the shape of the human eye.
Step 202: Identify the pupil in the human eye according to the shape of the pupil.
Step 203: Determine the first center position of the pupil, and take the first center position as the position of the pupil.
Referring to FIG. 2, in some embodiments, step 105 further includes:
Step 204: Extract the second center position of the designated area in the human eye.
Step 205: Acquire the first offset of the first center position relative to the second center position.
Step 206: If the first offset is within a set first offset range, determine that the position of the pupil is within the designated area of the human eye.
Referring to FIG. 3, in some embodiments, the following steps are further included before step 104:
Step 301: While controlling the structured light image sensor to turn on for imaging, control the visible light image sensor to turn on for imaging.
Step 302: Acquire a visible light image obtained by the visible light image sensor.
Step 303: Perform face recognition on the visible light image, and determine the position of the face in the visible light image.
Step 304: If the face is within a designated area of the visible light image, trigger the construction of the face depth model.
In some embodiments, the following steps are included after step 303:
determining a third center position of the face;
extracting a fourth center position of the designated area of the visible light image;
acquiring a second offset of the third center position relative to the fourth center position; and
if the second offset is within a set second offset range, determining that the face is within the designated area of the visible light image.
In some embodiments, the following steps are further included before the structured light image sensor is controlled to turn on for imaging:
controlling an infrared sensor to turn on for imaging;
acquiring an infrared image obtained by the infrared sensor;
extracting a face contour from the infrared image; and
when the face contour matches a pre-stored face contour, determining that the current imaging object is the owner.
In some embodiments, the bright screen method is performed by a trusted application 850, and the trusted application 850 runs in a trusted execution environment.
In some embodiments, communication with the trusted application 850 is performed through dedicated hardware of the trusted execution environment.
Referring to FIG. 6 and FIG. 9, the bright screen apparatus 50 of the present application includes a first control module 510, an acquisition module 520, a construction module 530, an identification module 540, and a second control module 550. The first control module 510 is configured to control the structured light image sensor to turn on for imaging when the change in the motion state of the mobile terminal 80 satisfies a preset unlocking condition. The acquisition module 520 is configured to acquire a depth map obtained by the structured light image sensor. The construction module 530 is configured to construct a face depth model from the depth map. The identification module 540 is configured to identify the position of the pupil from the face depth model. The second control module 550 is configured to control the mobile terminal 80 to light up the screen when the position of the pupil is within a designated area of the human eye.
Referring to FIG. 1 and FIG. 9, the mobile terminal 80 of an embodiment of the present application includes an imaging sensor 810, a memory 820, a micro control unit (MCU) 830, a processor 840, and a trusted application 850 stored on the memory 820 and operable in the trusted execution environment of the processor 840;
the MCU 830 is dedicated hardware of the trusted execution environment, is connected to the imaging sensor 810 and the processor 840, and is used to control the imaging sensor 810 to perform imaging and send the imaging data to the processor 840;
when the processor 840 executes the trusted application 850, the following bright screen steps are implemented:
Step 101: When the change in the motion state of the mobile terminal 80 satisfies a preset unlocking condition, control the structured light image sensor to turn on for imaging.
Step 102: Acquire a depth map obtained by the structured light image sensor.
Step 103: Construct a face depth model from the depth map.
Step 104: Identify the position of the pupil from the face depth model.
Step 105: If the position of the pupil is within a designated area of the human eye, control the mobile terminal 80 to light up the screen.
Referring to FIG. 4 and FIG. 9, in some embodiments, when the processor 840 executes the trusted application 850, the following steps are implemented:
Step 1032: Construct multiple face depth models from multiple depth maps acquired within a predetermined duration.
Step 1052: If the pupil position of every face depth model is within the designated area of the human eye, control the mobile terminal 80 to light up the screen.
Referring to FIG. 2 and FIG. 9, in some embodiments, when the processor 840 executes the trusted application 850, the following steps are implemented:
Step 201: Identify the human eye from the face depth model according to the shape of the human eye.
Step 202: Identify the pupil in the human eye according to the shape of the pupil.
Step 203: Determine the first center position of the pupil, and take the first center position as the position of the pupil.
Referring to FIG. 2 and FIG. 9, in some embodiments, when the processor 840 executes the trusted application 850, the following steps are implemented:
Step 204: Extract the second center position of the designated area in the human eye.
Step 205: Acquire the first offset of the first center position relative to the second center position.
Step 206: If the first offset is within a set first offset range, determine that the position of the pupil is within the designated area of the human eye.
Referring to FIG. 3 and FIG. 9, in some embodiments, when the processor 840 executes the trusted application 850, the following steps are implemented:
Step 301: While controlling the structured light image sensor to turn on for imaging, control the visible light image sensor to turn on for imaging.
Step 302: Acquire a visible light image obtained by the visible light image sensor.
Step 303: Perform face recognition on the visible light image, and determine the position of the face in the visible light image.
Step 304: If the face is within a designated area of the visible light image, trigger the construction of the face depth model.
Referring to FIG. 9, in some embodiments, when the processor 840 executes the trusted application 850, the following steps are implemented:
determining a third center position of the face;
extracting a fourth center position of the designated area of the visible light image;
acquiring a second offset of the third center position relative to the fourth center position; and
if the second offset is within a set second offset range, determining that the face is within the designated area of the visible light image.
Referring to FIG. 9, in some embodiments, when the processor 840 executes the trusted application 850, the following steps are implemented:
controlling an infrared sensor to turn on for imaging;
acquiring an infrared image obtained by the infrared sensor;
extracting a face contour from the infrared image; and
when the face contour matches a pre-stored face contour, determining that the current imaging object is the owner.
Referring to FIG. 9, in some embodiments, the trusted application 850 runs in a trusted execution environment.
Referring to FIG. 9, in some embodiments, communication with the trusted application 850 is performed through dedicated hardware of the trusted execution environment.
Referring to FIG. 9, in some embodiments, the MCU 830 and the processor 840 communicate in an encrypted manner.
The computer readable storage medium of an embodiment of the present application has a computer program stored thereon; when the program is executed by a processor, the bright screen method of any of the above embodiments is implemented.
In the existing face unlocking technology of mobile terminals, when the mobile terminal detects that the user has lifted it, the terminal performs face recognition and, if a face is detected, automatically lights up and unlocks the screen. However, this unlocking approach does not consider whether the user currently intends to use the mobile terminal; when there is no such intention, a face may still be captured and cause accidental unlocking. For example, when the user picks up the mobile terminal to move it from one place to another, a face may very briefly face the screen during the move; the terminal then detects the face and automatically lights up and unlocks. Since the user only meant to move the terminal and does not need to use it, this unlocking is not what the user wanted, that is, it is an erroneous operation, and it degrades the user experience.
To address this problem, the embodiments of the present application provide a bright screen method that controls the mobile terminal to light up and unlock the screen only when the user's eyes are looking at it, effectively preventing accidental unlocking and improving the user experience.
FIG. 1 is a schematic flowchart of the bright screen method provided by Embodiment 1 of the present application; the method may be performed by a mobile terminal.
As shown in FIG. 1, the bright screen method includes the following steps:
Step 101: When the change in the motion state of the mobile terminal satisfies a preset unlocking condition, control the structured light image sensor to turn on for imaging.
In this embodiment, a gyroscope, a gravity sensor, or the like may be installed in the mobile terminal to detect its motion state. When the change in the motion state satisfies the preset unlocking condition, the structured light image sensor is controlled to turn on for imaging. The preset unlocking condition may be stored in the local memory of the mobile terminal.
As a first example, the unlocking condition may be that the duration for which the mobile terminal is in motion reaches a preset threshold. In this example, when the gyroscope, gravity sensor, or the like detects that the mobile terminal starts to move, a timer in the terminal starts timing to obtain the duration for which the terminal is in motion; this duration is compared with the preset threshold, and when it reaches the threshold, the structured light image sensor is controlled to turn on for imaging.
As a second example, the unlocking condition may be that a face is recognized while the mobile terminal goes from a moving state to a stopped state. In this example, when the gyroscope, gravity sensor, or the like detects that the mobile terminal starts to move, the terminal turns on the front camera to detect any face appearing within its field of view, and turns the front camera off when the terminal stops moving. The terminal recognizes the images captured by the front camera during this process; when a face is recognized, the structured light image sensor is controlled to turn on for imaging.
As a third example, the unlocking condition is that the motion trajectory of the mobile terminal while in motion is the target trajectory that triggers the bright screen. In this example, in order to judge from the trajectory whether the screen of the mobile terminal should be lit, the target trajectory that triggers the bright screen may be stored in the terminal in advance. Generally, when a user holds and uses the mobile terminal, the terminal forms an angle with the ground of roughly 30° to 70°; in this example, a trajectory whose angle with the ground after the terminal stops moving falls within 30° to 70° may therefore be taken as the target trajectory that triggers the bright screen and stored in the mobile terminal.
Taking the gyroscope as an example: when the user picks up the mobile terminal, the gyroscope detects the angular motion of the terminal as it is lifted, and the angular motion recorded up to the moment the terminal stops moving forms the terminal's motion trajectory.
After the motion trajectory of the mobile terminal is detected, it can be analyzed to extract the angle between the terminal and the ground after the terminal stops moving. The extracted angle is compared with the angle range of the target trajectory stored in the terminal; if it falls within that range, the trajectory is determined to be the target trajectory that triggers the bright screen, and the structured light image sensor is then controlled to turn on for imaging.
The structured light image sensor is used to project structured light onto the imaged object; a set of projected light beams with known spatial directions is referred to as structured light. In this embodiment, the structured light may be of any type, such as a grating pattern, light spots, stripes (including circular and cross stripes), or a non-uniform speckle pattern.
Step 102: Acquire the depth map obtained by the structured light image sensor.
After the structured light emitted by the structured light image sensor reaches the face, the facial organs obstruct the light and it is reflected at the face. A camera provided in the mobile terminal collects the light reflected from the face, and the depth map of the face is obtained from the collected reflections.
Step 103: Construct a face depth model from the depth map.
Specifically, the depth map of the face may include both the face and the background. The depth map is first denoised and smoothed to obtain an image of the region where the face is located, and the face is then separated from the background by foreground/background segmentation and similar processing.
After the face is extracted from the depth map, feature point data can be extracted from it, and the feature points are connected into networks according to the extracted data. For example, according to the spatial distances between points, points on the same plane, or points whose distances are within a threshold range, are connected into triangular networks; splicing these networks together yields the face depth model.
Step 104: Identify the position of the pupil from the face depth model.
When the user is about to turn on the mobile terminal, the user's eyes generally look at its screen, and so are open. In the face depth model constructed from the depth map of the face, the eyes should likewise be open, so the position of the human eye can be determined from the model and the position of the pupil identified from it.
Step 105: If the position of the pupil is within a designated area of the human eye, control the mobile terminal to light up the screen.
When the user's eyes gaze at the screen of the mobile terminal, the pupil is located in the very middle of the eye. In this embodiment, whether the user is gazing at the screen can therefore be judged from the identified pupil position, and the mobile terminal is controlled to light up the screen only while the user gazes at it.
As an example, the circular area centred on the midpoint of the human eye with a radius of 4 mm may be used as the designated area. After the pupil position is identified from the face depth model, it can further be judged whether it lies within the designated area; if it does, the user is considered to be gazing at the screen, and the mobile terminal is controlled to light up the screen.
In the bright screen method of this embodiment, when the change in the motion state of the mobile terminal satisfies the preset unlocking condition, the structured light image sensor is controlled to turn on for imaging; the depth map obtained by the sensor is acquired; a face depth model is constructed from the depth map; the position of the pupil is identified from the model; and the mobile terminal lights up the screen when the pupil lies within the designated area of the human eye. Because the screen is lit and unlocked only when the user's eyes are looking at it, accidental unlocking of the mobile terminal is effectively prevented and the user experience improved.
To explain more clearly how the position of the pupil is identified from the face depth model in the foregoing embodiment, the present application proposes another screen-on method. FIG. 2 is a schematic flowchart of the screen-on method provided by Embodiment 2 of the present application.
As shown in FIG. 2, on the basis of the embodiment shown in FIG. 1, step 104 may include the following steps.
Step 201: identifying the human eye from the face depth model according to the shape of the human eye.
The facial features of a face differ in shape; the human eye is mostly elliptical and is located in the upper half of the face. Therefore, in this embodiment, the human eye can be identified from the constructed face depth model according to its shape.
Step 202: identifying the pupil within the human eye according to the shape of the pupil.
The pupil is a circle with a neat edge and a small diameter; under natural light, its diameter is approximately between 2.5 mm and 5 mm.
In this embodiment, after the human eye is identified from the face depth model, the pupil can further be identified within the eye according to characteristics such as its size and shape.
Step 203: determining a first center position of the pupil and taking the first center position as the position of the pupil.
In this embodiment, the small-diameter circle within the human eye may be determined to be the pupil, and its first center position is determined and taken as the position of the pupil. The first center position may be any position of the pupil; for example, it may be the center of the pupil, and it may be expressed in coordinates.
Further, in a possible implementation of the embodiments of the present application, as shown in FIG. 2, the following steps may be included after step 203.
Step 204: extracting a second center position of the designated region of the human eye.
In this embodiment, within the identified human eye, the middle region of the eye may be taken as the designated region. For example, a circle drawn with the very center of the eye as its center point and a radius of 3 mm may form the designated region, and the second center position of the designated region is determined. The second center position may be expressed in coordinates.
Step 205: obtaining a first offset of the first center position relative to the second center position.
After the second center position of the designated region and the first center position of the pupil are determined, the two can be compared to obtain the first offset of the first center position relative to the second center position. The first offset may be expressed as the differences between coordinates along different coordinate axes.
Step 206: determining that the position of the pupil is within the designated region of the human eye if the first offset is within a set first offset range.
The first offset range may be preset and stored in the mobile terminal; for example, it may be -2 mm to +2 mm, where "-" denotes an offset to the left/down relative to the second center position and "+" denotes an offset to the right/up.
In this embodiment, the obtained first offset is compared with the preset first offset range. If the first offset is within the preset first offset range, the position of the pupil is determined to be within the designated region of the human eye, so it can be determined that the user is currently gazing at the screen, and the mobile terminal can be controlled to turn on the screen.
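A minimal sketch of steps 204–206 follows; taking the offset per axis and the ±2 mm window comes from the description above, while the coordinate convention is an assumption of this sketch.

```python
FIRST_OFFSET_RANGE_MM = (-2.0, 2.0)  # '-' = left/down, '+' = right/up, per the text

def pupil_within_region(first_center_mm, second_center_mm) -> bool:
    """first_center_mm: pupil center; second_center_mm: designated-region center."""
    low, high = FIRST_OFFSET_RANGE_MM
    dx = first_center_mm[0] - second_center_mm[0]   # first offset along x
    dy = first_center_mm[1] - second_center_mm[1]   # first offset along y
    return low <= dx <= high and low <= dy <= high
```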
With the screen-on method of this embodiment, when the change process of the motion state of the mobile terminal satisfies the preset unlocking condition, the structured light image sensor is turned on for imaging; the depth map obtained through imaging is acquired and a face depth model is constructed from it; the human eye is identified from the face depth model according to its shape; the pupil is identified according to its shape; and the first center position of the pupil is determined and taken as the pupil's position, so the pupil can be identified accurately, laying the foundation for performing the screen-on operation accurately. By extracting the second center position of the designated region of the human eye, obtaining the first offset of the first center position relative to the second center position, and determining that the pupil is within the designated region of the eye when the first offset is within the set first offset range, it is possible to judge from the pupil position whether the human eye is gazing at the screen, and hence whether to turn on the screen, effectively preventing false unlocking.
To judge more accurately whether the user is currently gazing at the screen, on the basis of the embodiment shown in FIG. 1, the present application proposes another screen-on method. FIG. 4 is a schematic flowchart of the screen-on method provided by Embodiment 4 of the present application, in which step 103 includes:
Step 1032: constructing a plurality of face depth models according to a plurality of depth maps acquired within a predetermined duration.
Specifically, the structured light image sensor may acquire a plurality of depth maps within a predetermined duration. Each depth map is then denoised and smoothed to obtain the image of the face region in each map, and the face is separated from the background through foreground-background segmentation or similar processing.
After the face is extracted from a depth map, feature point data can be extracted from the face depth map and the feature points connected into a mesh according to the extracted data; for example, based on the spatial distance between points, points on the same plane, or points whose distance is within a threshold, are connected into a triangular mesh, and these meshes are stitched together to construct the face depth model. Constructing a plurality of face depth models from the depth maps acquired within the predetermined duration may be done in either of two ways. In the first, a corresponding face depth model is constructed from the face region of every depth map; since every depth map acquired within the predetermined duration yields a model, the accuracy of judging whether the user is gazing at the screen is improved. In the second, a few depth maps are selected from the plurality, and a corresponding face depth model is constructed for each selected map; for example, if nine depth maps are acquired, one depth map is selected out of every three (one selected, two skipped), so that three depth maps are selected and three corresponding face depth models are constructed. Since the selected depth maps span essentially the whole predetermined duration, the accuracy of the gaze judgment can be preserved without constructing a model for every depth map, and the amount of computation is reduced.
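A minimal sketch of the second strategy (sampling); the stride of three reproduces the nine-maps-to-three-models example above.

```python
def sample_depth_maps(depth_maps, keep_every: int = 3):
    """Return a subset whose timestamps still span the whole predetermined duration."""
    return depth_maps[::keep_every]

# e.g. nine maps -> indices 0, 3, 6 -> three face depth models
selected = sample_depth_maps(list(range(9)))
assert selected == [0, 3, 6]
```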
Step 105 includes:
Step 1052: controlling the mobile terminal to turn on the screen if the position of the pupil in each face depth model is within the designated region of the human eye.
Specifically, after the plurality of face depth models have been constructed, the pupil position in each model can be identified. When every identified pupil position is within the designated region of the eye, it can be determined that the user has been gazing at the screen of the mobile terminal for longer than a preset duration; a gaze longer than the preset duration indicates that the user is not looking at the screen by accident. Turning on the screen only in this case tightens the screen-on condition and further reduces the probability of false unlocking.
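Tying the sketches together, step 1052 reduces to an all-of test over the per-model results, reusing pupil_in_designated_region from the earlier sketch; here models is assumed to be a list of (pupil_xy, eye_center_xy) pairs.

```python
def should_turn_on_screen(models) -> bool:
    """True only if the pupil lies in the designated eye region in every model."""
    return all(pupil_in_designated_region(pupil, center) for pupil, center in models)
```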
To further improve the accuracy of face recognition, in a possible implementation of the embodiments of the present application, a visible light image sensor may be used to obtain a visible light image, so as to determine from the visible light image whether the user intends to turn on the mobile terminal. Accordingly, the present application proposes another screen-on method; FIG. 3 is a schematic flowchart of the screen-on method provided by Embodiment 3 of the present application. As shown in FIG. 3, on the basis of the embodiment shown in FIG. 1, the following steps may be included before the face depth model is constructed in step 103.
Step 301: while the structured light image sensor is controlled to be turned on for imaging, controlling a visible light image sensor to be turned on for imaging.
Step 302: acquiring the visible light image obtained through imaging by the visible light image sensor.
In this embodiment, when the detected motion trajectory of the mobile terminal is the target trajectory for triggering the screen, the structured light image sensor and the visible light image sensor are both turned on; the depth map of the face is obtained through imaging by the structured light image sensor, and the visible light image is obtained through imaging by the visible light image sensor.
Step 303: performing face recognition on the visible light image and determining the position of the face in the visible light image.
After the visible light image is obtained, relevant face recognition techniques can be applied to it, and once a face is recognized in the visible light image, the position of the face in the image is further determined.
Step 304: triggering the construction of the face depth model if the face is within a designated region of the visible light image.
Specifically, a third center position of the face may be determined, a fourth center position of the designated region of the visible light image may be extracted, and a second offset of the third center position relative to the fourth center position may then be obtained. If the second offset is within a set second offset range, the face is determined to be within the designated region of the visible light image. The second offset range may be preset and stored in the mobile terminal.
Generally, when a user uses the mobile terminal, the user's face directly faces the screen, so in the captured face image the face is usually around the middle of the image. Therefore, in this embodiment, the area around the middle of the visible light image may be taken as the designated region. For example, the middle region starting one quarter of the way down from the top of the visible light image and ending one quarter of the way up from the bottom may be taken as the designated region, whose area is half of the visible light image.
In this embodiment, after the position of the face in the visible light image is determined, the third center position of the face can further be determined, and the fourth center position of the designated region of the visible light image can be extracted, where the third center position may be a coordinate representation of the face's position in the visible light image, and the fourth center position is a coordinate representation of the designated region in the image. The third center position is then compared with the fourth center position to obtain the second offset of the third center position relative to the fourth; when the second offset is within the second offset range, the face is determined to be within the designated region of the visible light image, and the construction of the face depth model is triggered.
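A minimal sketch of steps 303–304 under stated assumptions: the recognized face arrives as a pixel bounding box (x, y, w, h), only the vertical axis is checked, and the second offset range is taken to be ±image_h/4 so that the face center stays inside the middle band described above (from 1/4 below the top to 1/4 above the bottom); the embodiment itself does not fix these conventions.

```python
def face_in_designated_region(face_box, image_h: int) -> bool:
    """True if the face center lies in the middle band of the visible light image."""
    x, y, w, h = face_box
    third_center_y = y + h / 2.0          # third center position: face center
    fourth_center_y = image_h / 2.0       # fourth center position: region center
    second_offset = third_center_y - fourth_center_y
    return abs(second_offset) <= image_h / 4.0
```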
With the screen-on method of this embodiment, before the face depth model is constructed from the depth map, the visible light image sensor is first turned on for imaging and the visible light image is acquired; face recognition is performed on the visible light image and the position of the face in it is determined; and the construction of the face depth model is triggered only when the face is within the designated region of the visible light image. This avoids the energy cost of constructing a face depth model when there is no need to turn on the screen, thereby lowering the power consumption of the mobile terminal and improving battery life.
In a possible implementation of the embodiments of the present application, before the structured light image sensor is controlled to be turned on for imaging, an infrared sensor may first be controlled to be turned on for imaging; the infrared image obtained through imaging by the infrared sensor is acquired, and the face contour is extracted from the infrared image. The extracted face contour is then compared with the owner's face contour pre-stored in the mobile terminal; when the face contour matches the pre-stored face contour, the current imaging object is determined to be the owner, and only when the imaging object is the owner is the structured light image sensor controlled to be turned on for imaging. By performing face recognition on the infrared image to judge the imaging object and starting the structured light image sensor only when the imaging object is the owner, the running space and power consumption of the mobile terminal can be saved. Combining the infrared sensor, the visible light image sensor, and the structured light image sensor for face recognition and liveness detection can further improve the recognition rate.
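For illustration, a minimal OpenCV-based sketch of this pre-check. OpenCV itself, the Otsu thresholding, the 8-bit grayscale input, and the Hu-moment match threshold are all assumptions of this sketch, since the embodiment does not fix how the contour is extracted or matched.

```python
import cv2
import numpy as np

MATCH_THRESHOLD = 0.15  # assumed tuning value; smaller matchShapes score = more similar

def largest_contour(ir_image: np.ndarray):
    """Take the largest contour in a thresholded 8-bit IR frame as the face contour."""
    _, binary = cv2.threshold(ir_image, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return max(contours, key=cv2.contourArea) if contours else None

def is_owner(ir_image: np.ndarray, stored_contour: np.ndarray) -> bool:
    """Compare the extracted face contour with the pre-stored owner contour."""
    contour = largest_contour(ir_image)
    if contour is None:
        return False
    score = cv2.matchShapes(contour, stored_contour, cv2.CONTOURS_MATCH_I1, 0.0)
    return score < MATCH_THRESHOLD
```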
The screen-on method of the foregoing embodiments may be executed by a trusted application, where the trusted application runs in a trusted execution environment and communication with the trusted application is performed through the dedicated hardware of the trusted execution environment. As a possible structural form, a laser camera, a laser light, a microcontroller unit (MCU), and the like may be installed in the mobile terminal. Specifically, see FIG. 5, which is a schematic structural diagram of a mobile terminal according to an embodiment of the present application. As shown in FIG. 5, the mobile terminal may include a laser camera, a floodlight, a visible light camera, a laser light, an MCU, and a processing chip; the dedicated hardware may be, for example, the MCU. Having the screen-on method of the embodiments of the present application executed by a trusted application running in a trusted execution environment can guarantee the security of the mobile terminal.
In this embodiment, the MCU includes a pulse width modulation (PWM) module, a depth engine, a bus interface, and a random access memory (RAM); the processing chip runs a Rich Execution Environment (REE) and a Trusted Execution Environment (TEE), which are isolated from each other.
In the mobile terminal shown in FIG. 5, the PWM modulates the floodlight to generate infrared light and/or modulates the laser light to emit structured light, and projects the emitted infrared light and/or structured light onto the imaging object. The laser camera collects the structured light image and sends it to the depth engine. The depth engine computes depth-of-field data corresponding to the imaging object from the structured light image and sends the data to the trusted application through the bus interface. The bus interface includes a Mobile Industry Processor Interface (MIPI), an Inter-Integrated Circuit (I2C) synchronous serial bus interface, and a Serial Peripheral Interface (SPI), and exchanges information with the trusted application running in the trusted execution environment. The trusted application (not shown in FIG. 5) runs in the TEE and performs operations such as turning on the screen according to the depth-of-field data.
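The data path can be summarized in a schematic sketch whose class and method names are all hypothetical; it only illustrates the ordering of the four stages described above, not any real driver API.

```python
class McuPipeline:
    """Schematic FIG. 5 data path: project, capture, compute depth, forward to TEE."""

    def __init__(self, pwm, laser_camera, depth_engine, bus_interface):
        self.pwm = pwm
        self.laser_camera = laser_camera
        self.depth_engine = depth_engine
        self.bus_interface = bus_interface

    def capture_depth_and_forward(self):
        self.pwm.project_structured_light()             # laser light modulated by PWM
        frame = self.laser_camera.capture()             # structured light image
        depth = self.depth_engine.compute_depth(frame)  # depth-of-field data
        self.bus_interface.send_to_trusted_app(depth)   # e.g. over MIPI/I2C/SPI
```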
To implement the foregoing embodiments, the present application further provides a screen-on apparatus.
FIG. 6 is a schematic structural diagram of the screen-on apparatus provided by Embodiment 1 of the present application.
As shown in FIG. 6, the screen-on apparatus 50 includes a first control module 510, an acquisition module 520, a construction module 530, an identification module 540, and a second control module 550. Specifically:
The first control module 510 is configured to control the structured light image sensor to be turned on for imaging when the change process of the motion state of the mobile terminal satisfies the preset unlocking condition.
The acquisition module 520 is configured to acquire the depth map obtained through imaging by the structured light image sensor.
The construction module 530 is configured to construct the face depth model according to the depth map.
The identification module 540 is configured to identify the position of the pupil from the face depth model.
The second control module 550 is configured to control the mobile terminal to turn on the screen when the position of the pupil is within the designated region of the human eye.
Further, in a possible implementation of the embodiments of the present application, as shown in FIG. 7, on the basis of the embodiment shown in FIG. 6, the identification module 540 includes:
an identification unit 541, configured to identify the human eye from the face depth model according to the shape of the human eye, and to identify the pupil within the human eye according to the shape of the pupil; and
a determination unit 542, configured to determine the first center position of the pupil and take the first center position as the position of the pupil.
Further, in a possible implementation of the embodiments of the present application, the determination unit 542 is further configured to: after taking the first center position as the position of the pupil, extract the second center position of the designated region of the human eye, obtain the first offset of the first center position relative to the second center position, and determine that the position of the pupil is within the designated region of the human eye when the first offset is within the set first offset range.
By identifying the human eye from the face depth model according to its shape, identifying the pupil according to its shape, and then determining the first center position of the pupil and taking it as the pupil's position, the pupil can be identified accurately, laying the foundation for performing the screen-on operation accurately. By extracting the second center position of the designated region of the human eye, obtaining the first offset of the first center position relative to the second center position, and determining that the pupil is within the designated region when the first offset is within the set first offset range, it is possible to judge from the pupil position whether the human eye is gazing at the screen, and hence whether to turn on the screen, effectively preventing false unlocking.
Further, in Embodiment 4 of the present application, as shown in FIG. 6, the construction module 530 is further configured to construct a plurality of face depth models according to a plurality of depth maps acquired within a predetermined duration, and the second control module 550 is further configured to control the mobile terminal to turn on the screen when the position of the pupil in each face depth model is within the designated region of the human eye.
Further, in a possible implementation of the embodiments of the present application, as shown in FIG. 8, on the basis of the embodiment shown in FIG. 6, the screen-on apparatus 50 may further include:
a face recognition module 560, configured to control the visible light image sensor to be turned on for imaging while the structured light image sensor is controlled to be turned on for imaging; acquire the visible light image obtained through imaging by the visible light image sensor; perform face recognition on the visible light image and determine the position of the face in the visible light image; and trigger the construction of the face depth model when the face is within the designated region of the visible light image.
Specifically, the face recognition module 560 is further configured to determine the third center position of the face, extract the fourth center position of the designated region of the visible light image, and obtain the second offset of the third center position relative to the fourth center position; when the second offset is within the set second offset range, the face is determined to be within the designated region of the visible light image, and the construction module 530 is triggered to construct the face depth model according to the depth map.
By first turning on the visible light image sensor for imaging before constructing the face depth model from the depth map, acquiring the visible light image, performing face recognition on the visible light image and determining the face's position in it, and triggering the construction of the face depth model only when the face is within the designated region of the visible light image, the energy cost of constructing a face depth model when there is no need to turn on the screen can be avoided, thereby lowering the power consumption of the mobile terminal and improving battery life.
In a possible implementation of the embodiments of the present application, before the first control module 510 controls the structured light image sensor to be turned on for imaging, the screen-on apparatus 50 may further control an infrared sensor to be turned on for imaging, acquire the infrared image obtained through imaging by the infrared sensor, extract the face contour from the infrared image, and determine that the current imaging object is the owner when the face contour matches the pre-stored face contour; the first control module 510 then controls the structured light image sensor to be turned on for imaging.
By performing face recognition on the infrared image to judge the imaging object and starting the structured light image sensor only when the imaging object is the owner, the running space and power consumption of the mobile terminal can be saved. Combining the infrared sensor, the visible light image sensor, and the structured light image sensor for face recognition and liveness detection can further improve the recognition rate.
It should be noted that the foregoing explanations of the screen-on method embodiments also apply to the screen-on apparatus of this embodiment; its implementation principle is similar and is not repeated here.
With the screen-on apparatus of this embodiment, when the change process of the motion state of the mobile terminal satisfies the preset unlocking condition, the structured light image sensor is turned on for imaging; the depth map obtained through imaging is acquired; a face depth model is constructed from the depth map; the position of the pupil is identified from the face depth model; and the mobile terminal is controlled to turn on the screen when the pupil position is within the designated region of the human eye. By identifying the pupil position in the constructed face depth model and turning on the screen only when the pupil is within the designated region of the eye, the mobile terminal is turned on and unlocked only when the human eye is gazing at its screen, which effectively prevents false unlocking of the mobile terminal and improves user experience.
To implement the foregoing embodiments, the present application further provides a mobile terminal.
FIG. 9 is a schematic structural diagram of the mobile terminal provided by an embodiment of the present application.
As shown in FIG. 9, the mobile terminal 80 includes an imaging sensor 810, a memory 820, a microcontroller unit (MCU) 830, a processor 840, and a trusted application 850 stored on the memory 820 and executable in the trusted execution environment of the processor 840. The MCU 830 is the dedicated hardware of the trusted execution environment; it is connected to the imaging sensor 810 and the processor 840, and the MCU 830 and the processor 840 communicate with each other in an encrypted manner, thereby guaranteeing the security of data communication. The MCU 830 is configured to control the imaging sensor 810 to perform imaging and to send the imaging data to the processor 840.
The imaging sensor 810 may include a laser camera, a floodlight, a visible light camera, and a laser light. The MCU 830 may include a pulse width modulation (PWM) module, a depth engine, a bus interface, and a random access memory (RAM). The PWM modulates the floodlight to generate infrared light and/or modulates the laser light to emit structured light, and projects the emitted infrared light and/or structured light onto the imaging object. The laser camera collects the structured light image and sends it to the depth engine, which computes depth-of-field data corresponding to the imaging object from the structured light image and sends the data to the processor 840 through the bus interface.
When the processor 840 executes the trusted application 850, the screen-on method described in the foregoing embodiments is implemented.
In the mobile terminal 80 of this embodiment, by providing the imaging sensor 810, the memory 820, the microcontroller unit MCU 830, the processor 840, and the trusted application 850 stored on the memory 820 and executable in the trusted execution environment of the processor 840, the MCU 830 controls the imaging sensor 810 to perform imaging and sends the imaging data to the processor 840, and the processor 840, by executing the trusted application 850, implements the screen-on method described in the embodiments of the first aspect so as to turn on the screen of the mobile terminal 80. By identifying the pupil position in the constructed face depth model and controlling the mobile terminal 80 to turn on the screen only when the pupil is within the designated region of the human eye, the mobile terminal 80 is turned on and unlocked only when the human eye is gazing at its screen, which effectively prevents false unlocking of the mobile terminal 80 and improves user experience.
To implement the foregoing embodiments, the present application further provides a computer-readable storage medium having a computer program stored thereon; when the program is executed by a processor, the screen-on method described in the foregoing embodiments is implemented.
Although the embodiments of the present application have been shown and described above, it should be understood that the above embodiments are exemplary and are not to be construed as limiting the present application. Those of ordinary skill in the art may change, modify, substitute, and vary the above embodiments within the scope of the present application, which is defined by the claims and their equivalents.

Claims (21)

  1. A method for turning on a screen, comprising the following steps:
    when a change process of a motion state of a mobile terminal satisfies a preset unlocking condition, controlling a structured light image sensor to be turned on for imaging;
    acquiring a depth map obtained through imaging by the structured light image sensor;
    constructing a face depth model according to the depth map;
    identifying a position of a pupil from the face depth model; and
    controlling the mobile terminal to turn on the screen if the position of the pupil is within a designated region of a human eye.
  2. The method according to claim 1, wherein constructing the face depth model according to the depth map comprises:
    constructing a plurality of the face depth models according to a plurality of the depth maps acquired within a predetermined duration;
    and wherein controlling the mobile terminal to turn on the screen if the position of the pupil is within the designated region of the human eye comprises:
    controlling the mobile terminal to turn on the screen if the position of the pupil in each of the face depth models is within the designated region of the human eye.
  3. The method according to claim 1, wherein identifying the position of the pupil from the face depth model comprises:
    identifying the human eye from the face depth model according to a shape of the human eye;
    identifying the pupil within the human eye according to a shape of the pupil; and
    determining a first center position of the pupil and taking the first center position as the position of the pupil.
  4. The method according to claim 3, further comprising, after taking the first center position as the position of the pupil:
    extracting a second center position of the designated region of the human eye;
    obtaining a first offset of the first center position relative to the second center position; and
    determining that the position of the pupil is within the designated region of the human eye if the first offset is within a set first offset range.
  5. The method according to claim 1, further comprising, before constructing the face depth model according to the depth map:
    while the structured light image sensor is controlled to be turned on for imaging, controlling a visible light image sensor to be turned on for imaging;
    acquiring a visible light image obtained through imaging by the visible light image sensor;
    performing face recognition on the visible light image and determining a position of a face in the visible light image; and
    triggering the construction of the face depth model if the face is within a designated region of the visible light image.
  6. The method according to claim 5, wherein triggering the construction of the face depth model if the face is within the designated region of the visible light image comprises:
    determining a third center position of the face;
    extracting a fourth center position of the designated region of the visible light image;
    obtaining a second offset of the third center position relative to the fourth center position; and
    determining that the face is within the designated region of the visible light image if the second offset is within a set second offset range.
  7. The method according to claim 1, further comprising, before controlling the structured light image sensor to be turned on for imaging:
    controlling an infrared sensor to be turned on for imaging;
    acquiring an infrared image obtained through imaging by the infrared sensor;
    extracting a face contour from the infrared image; and
    determining that a current imaging object is the owner when the face contour matches a pre-stored face contour.
  8. The method according to any one of claims 1 to 7, wherein the method is executed by a trusted application running in a trusted execution environment.
  9. The method according to claim 8, wherein communication with the trusted application is performed through dedicated hardware of the trusted execution environment.
  10. An apparatus for turning on a screen, comprising:
    a first control module, configured to control a structured light image sensor to be turned on for imaging when a change process of a motion state of a mobile terminal satisfies a preset unlocking condition;
    an acquisition module, configured to acquire a depth map obtained through imaging by the structured light image sensor;
    a construction module, configured to construct a face depth model according to the depth map;
    an identification module, configured to identify a position of a pupil from the face depth model; and
    a second control module, configured to control the mobile terminal to turn on the screen if the position of the pupil is within a designated region of a human eye.
  11. A mobile terminal, comprising: an imaging sensor, a memory, a microcontroller unit (MCU), a processor, and a trusted application stored on the memory and executable in a trusted execution environment of the processor;
    wherein the MCU is dedicated hardware of the trusted execution environment, is connected to the imaging sensor and the processor, and is configured to control the imaging sensor to perform imaging and to send imaging data to the processor; and
    wherein the processor, when executing the trusted application, implements the following steps:
    when a change process of a motion state of the mobile terminal satisfies a preset unlocking condition, controlling a structured light image sensor to be turned on for imaging;
    acquiring a depth map obtained through imaging by the structured light image sensor;
    constructing a face depth model according to the depth map;
    identifying a position of a pupil from the face depth model; and
    controlling the mobile terminal to turn on the screen if the position of the pupil is within a designated region of a human eye.
  12. The mobile terminal according to claim 11, wherein the processor, when executing the trusted application, implements the following steps:
    constructing a plurality of the face depth models according to a plurality of the depth maps acquired within a predetermined duration; and
    controlling the mobile terminal to turn on the screen if the position of the pupil in each of the face depth models is within the designated region of the human eye.
  13. The mobile terminal according to claim 11, wherein the processor, when executing the trusted application, implements the following steps:
    identifying the human eye from the face depth model according to a shape of the human eye;
    identifying the pupil within the human eye according to a shape of the pupil; and
    determining a first center position of the pupil and taking the first center position as the position of the pupil.
  14. The mobile terminal according to claim 13, wherein the processor, when executing the trusted application, implements the following steps:
    extracting a second center position of the designated region of the human eye;
    obtaining a first offset of the first center position relative to the second center position; and
    determining that the position of the pupil is within the designated region of the human eye if the first offset is within a set first offset range.
  15. The mobile terminal according to claim 11, wherein the processor, when executing the trusted application, implements the following steps:
    while the structured light image sensor is controlled to be turned on for imaging, controlling a visible light image sensor to be turned on for imaging;
    acquiring a visible light image obtained through imaging by the visible light image sensor;
    performing face recognition on the visible light image and determining a position of a face in the visible light image; and
    triggering the construction of the face depth model if the face is within a designated region of the visible light image.
  16. The mobile terminal according to claim 15, wherein the processor, when executing the trusted application, implements the following steps:
    determining a third center position of the face;
    extracting a fourth center position of the designated region of the visible light image;
    obtaining a second offset of the third center position relative to the fourth center position; and
    determining that the face is within the designated region of the visible light image if the second offset is within a set second offset range.
  17. The mobile terminal according to claim 11, wherein the processor, when executing the trusted application, implements the following steps:
    controlling an infrared sensor to be turned on for imaging;
    acquiring an infrared image obtained through imaging by the infrared sensor;
    extracting a face contour from the infrared image; and
    determining that a current imaging object is the owner when the face contour matches a pre-stored face contour.
  18. The mobile terminal according to any one of claims 11 to 17, wherein the trusted application runs in the trusted execution environment.
  19. The mobile terminal according to claim 18, wherein communication with the trusted application is performed through the dedicated hardware of the trusted execution environment.
  20. The mobile terminal according to claim 11, wherein the MCU and the processor communicate with each other in an encrypted manner.
  21. A computer-readable storage medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the method for turning on a screen according to any one of claims 1 to 9.
PCT/CN2019/075383 2018-04-12 2019-02-18 Screen-on method and apparatus, mobile terminal, and storage medium WO2019196558A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP19734656.2A EP3579086B1 (en) 2018-04-12 2019-02-18 Screen light method, device, mobile terminal, and storage medium
US16/477,439 US11537696B2 (en) 2018-04-12 2019-02-18 Method and apparatus for turning on screen, mobile terminal and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810327835.6A CN108628448B (zh) 2018-04-12 2018-04-12 Screen-on method and apparatus, mobile terminal, and storage medium
CN201810327835.6 2018-04-12

Publications (1)

Publication Number Publication Date
WO2019196558A1 true WO2019196558A1 (zh) 2019-10-17

Family

ID=63705301

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/075383 WO2019196558A1 (zh) Screen-on method and apparatus, mobile terminal, and storage medium 2018-04-12 2019-02-18

Country Status (5)

Country Link
US (1) US11537696B2 (zh)
EP (1) EP3579086B1 (zh)
CN (2) CN108628448B (zh)
TW (1) TWI701606B (zh)
WO (1) WO2019196558A1 (zh)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108628448B (zh) 2018-04-12 2019-12-06 Oppo广东移动通信有限公司 Screen-on method and apparatus, mobile terminal, and storage medium
US11527107B1 (en) * 2018-06-29 2022-12-13 Apple Inc. On the fly enrollment for facial recognition
CN109739344B (zh) * 2018-11-20 2021-12-14 平安科技(深圳)有限公司 Unlocking method, apparatus, device, and storage medium based on eyeball movement trajectory
CN109670287A (zh) * 2018-12-21 2019-04-23 努比亚技术有限公司 Intelligent terminal unlocking method, intelligent terminal, and computer-readable storage medium
CN110443021A (zh) * 2019-08-12 2019-11-12 珠海格力电器股份有限公司 Method for face unlocking based on an unlock key, storage medium, and mobile terminal
CN110597426A (zh) * 2019-08-30 2019-12-20 捷开通讯(深圳)有限公司 Screen-on processing method and apparatus, storage medium, and terminal
CN110929241B (zh) * 2019-11-12 2023-05-16 北京字节跳动网络技术有限公司 Quick start method and apparatus for a mini-program, medium, and electronic device
CN113190119A (zh) * 2021-05-06 2021-07-30 Tcl通讯(宁波)有限公司 Screen lighting control method and apparatus for a mobile terminal, mobile terminal, and storage medium
CN113885708A (zh) * 2021-10-22 2022-01-04 Oppo广东移动通信有限公司 Screen control method and apparatus for an electronic device, electronic device, and storage medium
US12001261B2 (en) * 2022-06-27 2024-06-04 Qualcomm Incorporated Power optimization for smartwatch

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101893934A (zh) * 2010-06-25 2010-11-24 宇龙计算机通信科技(深圳)有限公司 一种智能调整屏幕显示的方法和装置
CN104749945A (zh) * 2015-04-13 2015-07-01 深圳市欧珀通信软件有限公司 点亮屏幕的方法、装置及智能手表
CN108628448A (zh) * 2018-04-12 2018-10-09 Oppo广东移动通信有限公司 亮屏方法、装置、移动终端及存储介质

Family Cites Families (45)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8272053B2 (en) * 2003-12-18 2012-09-18 Honeywell International Inc. Physical security management system
WO2006102625A2 (en) * 2005-03-24 2006-09-28 Privaris, Inc. Biometric identification device with smartcard capabilities
US20070061590A1 (en) * 2005-09-13 2007-03-15 Boye Dag E Secure biometric authentication system
US20080018451A1 (en) * 2006-07-11 2008-01-24 Jason Benfielt Slibeck Passenger screening system and method
JP4996904B2 (ja) * 2006-10-04 2012-08-08 株式会社日立製作所 Biometric authentication system, registration terminal, authentication terminal, and authentication server
US20080162943A1 (en) * 2006-12-28 2008-07-03 Ali Valiuddin Y Biometric security system and method
EP2168282A1 (en) * 2007-07-12 2010-03-31 Innovation Investments, LLC Identity authentication and secured access systems, components, and methods
CN101339607B (zh) * 2008-08-15 2012-08-01 北京中星微电子有限公司 人脸识别方法及系统、人脸识别模型训练方法及系统
JP5245971B2 (ja) * 2009-03-26 2013-07-24 富士通株式会社 生体情報処理装置および方法
WO2011123699A2 (en) * 2010-03-31 2011-10-06 Orsini Rick L Systems and methods for securing data in motion
JP5669549B2 (ja) 2010-12-10 2015-02-12 オリンパスイメージング株式会社 Imaging apparatus
US8381969B1 (en) * 2011-04-28 2013-02-26 Amazon Technologies, Inc. Method and system for using machine-readable codes to perform a transaction
US20120331557A1 (en) * 2011-06-21 2012-12-27 Keith Anthony Washington Global identity protector E-commerce payment code certified processing system
US9202105B1 (en) * 2012-01-13 2015-12-01 Amazon Technologies, Inc. Image analysis for user authentication
US20130342672A1 (en) 2012-06-25 2013-12-26 Amazon Technologies, Inc. Using gaze determination with device input
US11017211B1 (en) * 2012-09-07 2021-05-25 Stone Lock Global, Inc. Methods and apparatus for biometric verification
US9549323B2 (en) 2012-12-03 2017-01-17 Samsung Electronics Co., Ltd. Method and mobile terminal for controlling screen lock
US9166961B1 (en) * 2012-12-11 2015-10-20 Amazon Technologies, Inc. Social networking behavior-based identity system
CN103064520B (zh) 2013-01-31 2016-03-09 东莞宇龙通信科技有限公司 移动终端及其控制页面滚动的方法
CN104133548A (zh) 2013-05-03 2014-11-05 中国移动通信集团公司 确定视点区域及控制屏幕亮度的方法及装置
US9369870B2 (en) * 2013-06-13 2016-06-14 Google Technology Holdings LLC Method and apparatus for electronic device access
TW201506676A (zh) * 2013-08-09 2015-02-16 Acer Inc 螢幕解鎖方法及裝置
CN103902043B (zh) * 2014-04-02 2018-08-31 努比亚技术有限公司 智能终端及其提醒方法
CN103971408B (zh) * 2014-05-21 2017-05-03 中国科学院苏州纳米技术与纳米仿生研究所 三维人脸模型生成系统及方法
CN105224065A (zh) 2014-05-29 2016-01-06 北京三星通信技术研究有限公司 一种视线估计设备和方法
CN104238948B (zh) * 2014-09-29 2018-01-16 广东欧珀移动通信有限公司 一种智能手表点亮屏幕的方法及智能手表
US9767358B2 (en) * 2014-10-22 2017-09-19 Veridium Ip Limited Systems and methods for performing iris identification and verification using mobile devices
CN104376599A (zh) * 2014-12-11 2015-02-25 苏州丽多网络科技有限公司 一种简便的三维头部模型生成系统
US9838596B2 (en) * 2015-04-30 2017-12-05 Jrd Communication Inc. Method and system for quickly starting camera based on eyeprint identification
US9754555B2 (en) * 2015-08-06 2017-09-05 Mediatek Inc. Method for adjusting display of electronic device and electronic device capable of adjusting display
US11380008B2 (en) * 2016-05-06 2022-07-05 Innovega Inc. Gaze tracking system with contact lens fiducial
US10579860B2 (en) * 2016-06-06 2020-03-03 Samsung Electronics Co., Ltd. Learning model for salient facial region detection
US10491598B2 (en) * 2016-06-30 2019-11-26 Amazon Technologies, Inc. Multi-factor authentication to access services
CN106250851B (zh) 2016-08-01 2020-03-17 徐鹤菲 一种身份认证方法、设备及移动终端
US20180081430A1 (en) * 2016-09-17 2018-03-22 Sean William Konz Hybrid computer interface system
CN106774796A (zh) 2016-11-30 2017-05-31 深圳市金立通信设备有限公司 一种屏幕点亮方法及终端
CN206431724U (zh) * 2017-01-25 2017-08-22 辛明江 一种基于人脸识别技术的门禁系统
CN107368725B (zh) * 2017-06-16 2020-04-10 Oppo广东移动通信有限公司 虹膜识别方法、电子装置和计算机可读存储介质
CN107504621A (zh) * 2017-07-07 2017-12-22 珠海格力电器股份有限公司 空调线控器及其控制方法和控制装置
CN107621867A (zh) * 2017-08-09 2018-01-23 广东欧珀移动通信有限公司 熄屏控制方法、装置和终端设备
CN107577930B (zh) * 2017-08-22 2020-02-07 广东小天才科技有限公司 一种触屏终端的解锁检测方法及触屏终端
CN107832669B (zh) * 2017-10-11 2021-09-14 Oppo广东移动通信有限公司 人脸检测方法及相关产品
CN107748869B (zh) * 2017-10-26 2021-01-22 奥比中光科技集团股份有限公司 3d人脸身份认证方法与装置
CN107885352A (zh) * 2017-11-28 2018-04-06 珠海市魅族科技有限公司 一种终端屏幕亮屏控制方法、装置和介质
US11113510B1 (en) * 2018-06-03 2021-09-07 Apple Inc. Virtual templates for facial recognition

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101893934A (zh) * 2010-06-25 2010-11-24 宇龙计算机通信科技(深圳)有限公司 一种智能调整屏幕显示的方法和装置
CN104749945A (zh) * 2015-04-13 2015-07-01 深圳市欧珀通信软件有限公司 点亮屏幕的方法、装置及智能手表
CN108628448A (zh) * 2018-04-12 2018-10-09 Oppo广东移动通信有限公司 亮屏方法、装置、移动终端及存储介质

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3579086A4 *

Also Published As

Publication number Publication date
US11537696B2 (en) 2022-12-27
CN110580102A (zh) 2019-12-17
EP3579086B1 (en) 2021-11-03
TW201944288A (zh) 2019-11-16
US20200380100A1 (en) 2020-12-03
EP3579086A1 (en) 2019-12-11
TWI701606B (zh) 2020-08-11
EP3579086A4 (en) 2020-01-01
CN108628448A (zh) 2018-10-09
CN110580102B (zh) 2021-09-24
CN108628448B (zh) 2019-12-06

Similar Documents

Publication Publication Date Title
WO2019196558A1 (zh) Screen-on method and apparatus, mobile terminal, and storage medium
US11238270B2 (en) 2022-02-01 3D face identity authentication method and apparatus
CN108563936B (zh) Task execution method, terminal device, and computer-readable storage medium
CN107609383B (zh) 3D face identity authentication method and apparatus
US9117109B2 (en) Facial recognition
US9607138B1 (en) User authentication and verification through video analysis
EP3647129A1 (en) Vehicle, vehicle door unlocking control method and apparatus, and vehicle door unlocking system
US8457367B1 (en) Facial recognition
WO2017161867A1 (zh) Method and apparatus for adjusting screen brightness, and intelligent terminal
JP6052399B2 (ja) Image processing program, image processing method, and information terminal
US11163995B2 (en) User recognition and gaze tracking in a video system
KR20180109109A (ko) Iris-based authentication method and electronic device supporting the same
CN110505549A (zh) Earphone control method and apparatus
JP2008194309A (ja) Eye detection device, drowsiness detection device, and method for an eye detection device
JP6080572B2 (ja) Passing object detection device
WO2017201927A1 (zh) Anti-glare method and apparatus, and projection device
TWI737588B (zh) Photographing system and method
WO2018072179A1 (zh) Image preview method and apparatus based on iris recognition
JP7384157B2 (ja) Information processing apparatus, wearable device, information processing method, and program
JP2012227830A (ja) Information processing apparatus, processing method thereof, program, and imaging apparatus
WO2020215229A1 (zh) Face registration method, face registration apparatus, server, and storage medium
US20230306790A1 (en) Spoof detection using intraocular reflection correspondences

Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 2019734656

Country of ref document: EP

Effective date: 20190710


NENP Non-entry into the national phase

Ref country code: DE