WO2023029631A1 - Control method, apparatus, device and storage medium - Google Patents

Control method, apparatus, device and storage medium

Info

Publication number
WO2023029631A1
Authority
WO
WIPO (PCT)
Prior art keywords
feature data
facial feature
user
screen
terminal device
Prior art date
Application number
PCT/CN2022/096741
Other languages
English (en)
French (fr)
Inventor
徐千
Original Assignee
中兴通讯股份有限公司
Priority date
Filing date
Publication date
Application filed by 中兴通讯股份有限公司 (ZTE Corporation)
Publication of WO2023029631A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03 Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/041 Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M1/00 Substation equipment, e.g. for use by subscribers
    • H04M1/72 Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/725 Cordless telephones
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00 Reducing energy consumption in communication networks
    • Y02D30/70 Reducing energy consumption in communication networks in wireless communication networks

Definitions

  • The present application relates to the field of computer technology, for example to a control method, apparatus, device, and storage medium.
  • In the related art, anti-false-touch during calls mainly adopts the proximity-sensing screen-off solution: when the terminal device is in a call state, the screen of the terminal device is turned off when the device is brought close to the ear, and turned back on when the device is moved away from the ear.
  • The proximity screen-off solution depends on a distance sensor, but limitations of the distance sensor (false alarms, limited sensing range, etc.) cause recognition failures. In some cases a slight jitter of the terminal device falsely triggers the 'proximity far' event: the screen lights up, the terminal device switches to an interactive state, and a false touch operation is triggered. Moreover, because the distance sensor is based on light sensing, oil stains and ambient light also strongly affect it, again causing false touch operations.
  • The present application provides a control method, apparatus, device, and storage medium. The control method provided in the present application can effectively avoid triggering false touch operations and improve user experience.
  • The application provides a control method, including: when the terminal device is in a call state and a screen state switching condition is satisfied, obtaining the facial feature data of the user closest to the screen; and, if the user's facial feature data is side-face feature data, controlling the screen of the terminal device to be in the screen-off state.
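The logic of these two steps can be condensed into a small decision function. This is a minimal illustrative sketch, not the patent's implementation; the function name and boolean inputs are assumptions standing in for platform-specific sensing and camera APIs.

```python
def desired_screen_state(in_call: bool, switch_condition_met: bool,
                         face_is_side_profile: bool, screen_is_on: bool) -> bool:
    """Return the desired screen state (True = on) per the claimed method."""
    if not (in_call and switch_condition_met):
        return screen_is_on  # outside the claimed scenario: leave the screen alone
    # A side-face profile implies the phone is held at the ear -> screen off;
    # a non-side (frontal) face implies the user is looking at the screen -> screen on.
    return not face_is_side_profile

# e.g. during a call, switching condition met, side face detected: keep the screen off
assert desired_screen_state(True, True, face_is_side_profile=True, screen_is_on=True) is False
```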
  • The application provides a control apparatus, including: an acquisition module configured to obtain the facial feature data of the user closest to the screen when the terminal device is in a call state and a screen state switching condition is satisfied; and a first control module configured to control the screen of the terminal device to be in the screen-off state if the user's facial feature data is side-face feature data.
  • This application provides a terminal device, including:
  • One or more processors; and a storage device configured to store one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the above control method.
  • The present application provides a storage medium storing a computer program which, when executed by a processor, implements the above control method.
  • FIG. 1 is a flowchart of a control method provided in an embodiment of the present application.
  • FIG. 1a is a schematic structural diagram of a screen state control apparatus provided in an embodiment of the present application.
  • FIG. 1b is a schematic structural diagram of another screen state control apparatus provided in an embodiment of the present application.
  • FIG. 1c is a flowchart of a screen state control method provided in an embodiment of the present application.
  • FIG. 1d is a flowchart of another screen state control method provided in an embodiment of the present application.
  • FIG. 2 is a schematic structural diagram of a control apparatus provided in an embodiment of the present application.
  • FIG. 3 is a schematic structural diagram of a terminal device provided in an embodiment of the present application.
  • FIG. 1 is a schematic flow chart of a control method provided in the embodiment of the present application.
  • The method can be applied to scenarios of controlling a terminal device. It can be executed by a control apparatus, which can be realized by software and/or hardware and integrated in the terminal device; the terminal device may be a mobile phone.
  • As shown in FIG. 1, the control method provided by this application includes S110 and S120.
  • The screen state switching condition may be that the 'proximity far' event is triggered, that the terminal device is detected to switch from a motion state to a static state, that the 'proximity near' event is triggered, or that an action ends; the embodiments of this application do not limit this.
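For illustration only, these alternative switching conditions can be modeled as a small enumeration; the names below are assumptions, not terminology from the application.

```python
from enum import Enum, auto

class ScreenSwitchCondition(Enum):
    PROXIMITY_FAR_TRIGGERED = auto()   # proximity sensor reports "far"
    PROXIMITY_NEAR_TRIGGERED = auto()  # proximity sensor reports "near"
    MOTION_TO_STATIC = auto()          # G-sensor: device stopped moving
    ACTION_ENDED = auto()              # end of a detected action
```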
  • The facial feature data of the user closest to the screen may be collected through the under-display camera of the terminal device, or through the front-facing camera of the terminal device.
  • The user's facial feature data may be the user's facial image, or data extracted from the facial image that characterizes the user's facial features, for example the feature data corresponding to the user's eyes, nose, ears, and hair.
  • The terminal device may be any device with a calling function, including a mobile phone, a tablet, a smart watch, and the like.
  • When the terminal device is in a call state and the screen state switching condition is satisfied, the facial feature data of the user closest to the screen is obtained; for example, when the mobile phone is in a call state and the 'proximity far' event is triggered, the front camera collects the facial feature data of the user closest to the screen.
  • In one example, when the terminal device is in a call state and is detected to switch from a motion state to a static state, the under-display camera of the terminal device collects the facial feature data of the user closest to the screen.
  • The screen of the terminal device may be controlled to be in the screen-off state as follows: obtain the current state of the screen in advance; if the current state is the screen-off state, keep it unchanged; if the current state is the screen-on state, switch it from the screen-on state to the screen-off state. Alternatively, the screen of the terminal device may simply be controlled to be in the screen-off state directly.
  • If the user's facial feature data is side-face feature data, the screen of the terminal device may be controlled to be in the screen-off state in any of the following ways (sketched in code below): (1) compare the user's facial feature data with pre-stored frontal facial feature data, and if their similarity is less than a first threshold, determine that the user's facial feature data is side-face feature data; (2) perform feature extraction on the user's facial feature data to obtain at least one target facial feature element, and if the similarity between at least one target facial feature element and the corresponding element in a pre-stored set of frontal facial feature elements is less than a second threshold, determine that the user's facial feature data is side-face feature data; (3) obtain the number of preset facial feature elements in the user's facial feature data, and if that number is less than a count threshold, determine that the user's facial feature data is side-face feature data; (4) obtain the area of a preset facial feature element in the user's facial feature data, and if that area is less than an area threshold, determine that the user's facial feature data is side-face feature data. In each case, the screen is then controlled to be in the screen-off state.
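The four alternative side-face tests enumerated above can be sketched as simple predicates. The thresholds and input representations are illustrative assumptions; the application leaves the feature extractor and similarity measure open.

```python
FIRST_T = 0.6       # first threshold (whole-face similarity to frontal data)
SECOND_T = 0.6      # second threshold (per-element similarity to frontal elements)
COUNT_T = 2         # count threshold (e.g. two eyes expected on a frontal face)
AREA_T = 1500.0     # area threshold, e.g. in pixels^2

def side_by_whole_face(sim_to_front: float) -> bool:
    return sim_to_front < FIRST_T

def side_by_elements(element_sims: dict) -> bool:
    # any element (eyes, nose, ears, hair, ...) below the second threshold
    return any(s < SECOND_T for s in element_sims.values())

def side_by_count(n_visible: int) -> bool:
    return n_visible < COUNT_T          # e.g. fewer than two eyes visible

def side_by_area(element_area: float) -> bool:
    return element_area < AREA_T        # e.g. visible hair/cheek area shrinks
```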
  • In one example, the identification principle for the facial feature data is to compare the pre-stored frontal facial feature data with the user's facial feature data and, according to the difference, determine whether the user's facial feature data is side-face feature data or frontal face feature data.
  • Method 1: frontal facial feature data (including facial feature elements such as eyes, nose, ears, and hair) is entered in advance and stored in a facial feature library. When the user's facial feature data needs to be judged, the facial feature data extraction module extracts the target facial feature elements from the user's facial feature data and compares them with the corresponding elements in the pre-stored frontal feature element set. If any target facial feature element (eyes, nose, ears, hair, etc.) has a low matching degree with its corresponding pre-stored element, the user's facial feature data is deemed side-face feature data; if every target facial feature element has a high matching degree with its corresponding pre-stored element, the user's facial feature data is deemed non-side-face feature data.
  • Method 2: extract the facial feature data of the user closest to the screen to obtain the number of eyes. If the number of eyes is 0 or 1, the user's facial feature data is deemed side-face feature data; if the number of eyes is neither 0 nor 1, it is deemed non-side-face feature data.
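Method 2 reduces to counting detected eyes. One possible realization, using OpenCV's stock Haar-cascade eye detector, is sketched below; the detector choice and its parameters are assumptions, since the application does not prescribe a specific detector.

```python
import cv2

_eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def is_side_face_by_eye_count(bgr_image) -> bool:
    """True if 0 or 1 eyes are detected (side face per Method 2)."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    eyes = _eye_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return len(eyes) in (0, 1)
```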
  • Facial feature data extraction and matching method: this method is mainly used for the identification of facial feature data.
  • Method 1: extract the currently captured user's facial feature data to obtain at least one target facial feature element, classify and store each element, and match each target facial feature element (for example the eyes) against the list of same-type frontal features in the facial feature library (multiple facial feature entries may have been recorded, hence a list). If every target facial feature element has low similarity to its corresponding element in the facial feature library, the user's facial feature data is side-face feature data; if at least one target facial feature element has high similarity to its corresponding element in the library, the user's facial feature data is non-side-face feature data.
  • In the control method provided by this application, when the terminal device is in a call state and the screen state switching condition is satisfied, the facial feature data of the user closest to the screen is obtained, and if it is side-face feature data the screen is controlled to be in the screen-off state. This solves the problem that the limitations of the distance sensor (false alarms, limited sensing range, etc.) falsely trigger the 'proximity far' event, light up the screen, switch the terminal device to an interactive state, and thus trigger false touch operations; it also solves the problem that, because the distance sensor is based on light sensing, oil stains and ambient light strongly affect it and trigger false touch operations. User experience is thereby improved.
  • In one embodiment, after the facial feature data of the user closest to the screen is obtained when the terminal device is in a call state and the screen state switching condition is satisfied, the method further includes: if the user's facial feature data is non-side-face feature data, controlling the screen of the terminal device to be in the screen-on state.
  • The screen of the terminal device may be controlled to be in the screen-on state as follows: obtain the current state of the screen in advance; if the current state is the screen-off state, switch it to the screen-on state; if the current state is already the screen-on state, keep it unchanged. Alternatively, the screen may simply be controlled to be in the screen-on state directly.
  • If the user's facial feature data is non-side-face feature data, the screen of the terminal device may be controlled to be in the screen-on state in any of the following ways: (1) compare the user's facial feature data with the pre-stored frontal facial feature data, and if their similarity is greater than or equal to the first threshold, determine that the user's facial feature data is frontal rather than side-face feature data; (2) perform feature extraction on the user's facial feature data to obtain at least one target facial feature element, and if the similarity between every target facial feature element and the corresponding element in the pre-stored frontal feature element set is greater than or equal to the second threshold, determine that the data is frontal rather than side-face feature data; (3) if the number of preset facial feature elements in the user's facial feature data equals the count threshold, determine that the data is frontal rather than side-face feature data; (4) if the area of the preset facial feature element in the user's facial feature data is greater than or equal to the area threshold, determine that the data is frontal rather than side-face feature data.
  • In one embodiment, if the user's facial feature data is side-face feature data, controlling the screen of the terminal device to be in the screen-off state includes: if the similarity between the user's facial feature data and the pre-stored frontal facial feature data is less than the first threshold, determining that the user's facial feature data is side-face feature data, and controlling the screen of the terminal device to be in the screen-off state.
  • the first threshold may be preset or set by a user, which is not limited in this embodiment of the present application.
  • the pre-stored frontal facial feature data is the frontal facial feature data entered by the user in advance, for example, it may be the user's frontal facial image entered by the user in advance, or may be the frontal image corresponding to the user's facial feature elements entered in advance by the user.
  • If the similarity between the user's facial feature data and the pre-stored frontal facial feature data is less than the first threshold, the user's facial feature data is determined to be side-face feature data. For example, the similarity between the frontal facial feature data entered in advance and the user's facial feature data is computed, and if it is less than the first threshold, the user's facial feature data is determined to be side-face feature data.
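A minimal sketch of this first-threshold test, assuming the facial feature data is represented as a non-zero numeric feature vector and cosine similarity is used as the (unspecified) similarity measure:

```python
import numpy as np

FIRST_THRESHOLD = 0.6  # illustrative value; the application leaves it open

def is_side_face_vs_front(captured: np.ndarray, enrolled_front: np.ndarray) -> bool:
    """True if similarity to the enrolled frontal data falls below the first threshold."""
    sim = float(np.dot(captured, enrolled_front)
                / (np.linalg.norm(captured) * np.linalg.norm(enrolled_front)))
    return sim < FIRST_THRESHOLD
```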
  • In one embodiment, the user's facial feature data includes at least one target facial feature element. Correspondingly, if the user's facial feature data is side-face feature data, controlling the screen of the terminal device to be in the screen-off state includes: if the similarity between at least one target facial feature element and the corresponding element in the pre-stored frontal feature element set is less than the second threshold, determining that the user's facial feature data is side-face feature data, and controlling the screen of the terminal device to be in the screen-off state.
  • the target facial feature element may be: the user's eyes, the user's nose, the user's ears, or the user's hair.
  • the at least one target facial feature element may be acquired by: acquiring facial feature data of the user closest to the screen, and extracting elements from the facial feature data of the user closest to the screen to obtain at least one target facial feature element.
  • the second threshold and the first threshold may be the same or different, which is not limited in this embodiment of the present application.
  • If the similarity between at least one target facial feature element and the corresponding element in the pre-stored frontal feature element set is less than the second threshold, the user's facial feature data is determined to be side-face feature data. For example, if the target facial feature elements include the user's eye image and nose image, and the similarity between the eye image and the pre-stored frontal eye image is less than the second threshold while the similarity between the nose image and the pre-stored frontal nose image is also less than the second threshold, the user's facial feature data is determined to be side-face feature data. Likewise, if the eye-image similarity is less than the second threshold while the nose-image similarity is greater than or equal to the second threshold, the user's facial feature data is still determined to be side-face feature data.
  • If the similarity between at least one target facial feature element and the corresponding element in the pre-stored frontal feature element set is less than the second threshold, the user's facial feature data is determined to be side-face feature data and the screen of the terminal device is controlled to be in the screen-off state. For example: obtain the facial feature data of the user closest to the screen and extract from it the user's eye image, nose image, ear image, and hair image; compute a first similarity between the eye image and the pre-stored frontal eye image, a second similarity between the nose image and the pre-stored frontal nose image, a third similarity between the ear image and the pre-stored frontal ear image, and a fourth similarity between the hair image and the pre-stored frontal hair image. If at least one of the four similarities is less than the second threshold, the user's facial feature data is determined to be side-face feature data. In one example, as soon as the eye-image similarity is found to be less than the second threshold, the data is directly determined to be side-face feature data, without computing the similarities of the remaining target facial feature elements.
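The per-element test with the early exit described above can be sketched as follows; the element names and the threshold value are illustrative.

```python
SECOND_THRESHOLD = 0.6  # illustrative

def is_side_face_by_frontal_elements(element_sims) -> bool:
    """element_sims: iterable of (name, similarity) pairs, e.g.
    [("eyes", 0.42), ("nose", 0.81)], against the frontal element set."""
    for name, sim in element_sims:
        if sim < SECOND_THRESHOLD:
            return True   # early exit: remaining elements need not be checked
    return False
```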
  • In one embodiment, if the user's facial feature data is side-face feature data, controlling the screen of the terminal device to be in the screen-off state includes: if the similarity between the user's facial feature data and pre-stored side-face feature data is greater than a third threshold, determining that the user's facial feature data is side-face feature data, and controlling the screen of the terminal device to be in the screen-off state.
  • the third threshold can be set by the system or by the user, which is not limited in this embodiment of the application.
  • the third threshold can be the same as or different from the first threshold.
  • the terms “first”, “second”, “third”, “fourth” and so on are only used to distinguish the description, and should not be understood as indicating or implying relative importance.
  • the pre-stored side face feature data can be the side face feature data entered by the user in advance, for example, it can be the user's profile face image entered by the user in advance, or can be the profile image corresponding to the user's face feature elements entered in advance by the user.
  • If the similarity between the user's facial feature data and the pre-stored side-face feature data is greater than the third threshold, the user's facial feature data is determined to be side-face feature data. For example, the similarity between the side-face feature data entered in advance and the user's facial feature data is computed, and if it is greater than the third threshold, the user's facial feature data is determined to be side-face feature data.
  • In one embodiment, the user's facial feature data includes at least one target facial feature element. Correspondingly, if the user's facial feature data is side-face feature data, controlling the screen of the terminal device to be in the screen-off state includes: if the similarity between at least one target facial feature element and the corresponding element in the pre-stored side-face feature element set is greater than a fourth threshold, determining that the user's facial feature data is side-face feature data, and controlling the screen of the terminal device to be in the screen-off state.
  • the fourth threshold may be the same as the second threshold, or may be different from the second threshold, which is not limited in this embodiment of the present application.
  • If the similarity between at least one target facial feature element and the corresponding element in the pre-stored side-face feature element set is greater than the fourth threshold, the user's facial feature data is determined to be side-face feature data. For example, if the target facial feature elements include the user's eye image and nose image, and the similarity between the eye image and the pre-stored side-face eye image is greater than the fourth threshold while the similarity between the nose image and the pre-stored side-face nose image is also greater than the fourth threshold, the user's facial feature data is determined to be side-face feature data. Likewise, if the eye-image similarity is greater than the fourth threshold while the nose-image similarity is less than or equal to the fourth threshold, the user's facial feature data is still determined to be side-face feature data.
  • If the similarity between at least one target facial feature element and the corresponding element in the pre-stored side-face feature element set is greater than the fourth threshold, the user's facial feature data is determined to be side-face feature data and the screen of the terminal device is controlled to be in the screen-off state. For example: extract from the facial feature data of the user closest to the screen the user's eye image, nose image, ear image, and hair image; compute a first similarity between the eye image and the pre-stored side-face eye image, a second similarity between the nose image and the pre-stored side-face nose image, a third similarity between the ear image and the pre-stored side-face ear image, and a fourth similarity between the hair image and the pre-stored side-face hair image. If at least one of the four similarities is greater than the fourth threshold, the user's facial feature data is determined to be side-face feature data. In one example, as soon as the eye-image similarity is found to be greater than the fourth threshold, the data is directly determined to be side-face feature data, without computing the similarities of the remaining target facial feature elements.
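The mirror-image test against the pre-stored side-face element set, again with the early exit, might look like this (threshold illustrative):

```python
FOURTH_THRESHOLD = 0.6  # illustrative

def is_side_face_by_side_elements(element_sims) -> bool:
    """True once any element is sufficiently similar to the stored side-face element."""
    # any() short-circuits, mirroring the early exit described above
    return any(sim > FOURTH_THRESHOLD for _name, sim in element_sims)
```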
  • In one embodiment, if the user's facial feature data is side-face feature data, controlling the screen of the terminal device to be in the screen-off state includes: if the number of preset facial feature elements in the user's facial feature data is less than the count threshold, determining that the user's facial feature data is side-face feature data, and controlling the screen of the terminal device to be in the screen-off state.
  • the quantity threshold can be set by the user or by the system.
  • The preset facial feature elements are feature elements of which there are two in the frontal facial feature data; for example, they may be the user's eyes, the user's ears, or the user's eyebrows, which is not limited in this embodiment of the present application.
  • If the number of preset facial feature elements in the user's facial feature data is less than the count threshold, the user's facial feature data is determined to be side-face feature data. For example, if the preset facial feature element is the user's eyes and feature extraction on the facial feature data of the user closest to the screen yields one eye, or no eyes, the user's facial feature data is determined to be side-face feature data.
  • In one example, the preset facial feature elements include the user's eyes and the user's ears. If feature extraction on the facial feature data of the user closest to the screen yields one eye and one ear, the user's facial feature data is determined to be side-face feature data. Similarly, if the extraction yields one eye, or the number of ears is one, the user's facial feature data is determined to be side-face feature data. Moreover, the check can short-circuit: if the number of eyes is found to be one, there is no need to judge the number of the user's ears, and the user's facial feature data is directly determined to be side-face feature data; conversely, if the number of ears is found to be one, there is no need to judge the number of the user's eyes.
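A sketch of this count-based test over eyes and ears, using Python's short-circuiting `or` to mirror the early decision described above; the expected counts are simply the two of each found on a frontal face.

```python
def is_side_face_by_counts(n_eyes: int, n_ears: int) -> bool:
    # `or` short-circuits: if fewer than two eyes are visible, the ear
    # count is never examined, matching the early decision above.
    return n_eyes < 2 or n_ears < 2
```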
  • In one embodiment, if the user's facial feature data is side-face feature data, controlling the screen of the terminal device to be in the screen-off state includes: if the area of the preset facial feature element in the user's facial feature data is less than the area threshold, determining that the user's facial feature data is side-face feature data, and controlling the screen of the terminal device to be in the screen-off state.
  • the area threshold can be set by the user or by the system.
  • the preset facial feature elements are feature elements with an area greater than an area threshold in the front face feature data.
  • the preset facial feature element may be the user's hair, the user's cheek, or the user's forehead, which is not limited in this embodiment of the present application.
  • If the area of the preset facial feature element in the user's facial feature data is less than the area threshold, the user's facial feature data is determined to be side-face feature data. For example, if the preset facial feature element is the user's hair and feature extraction on the facial feature data of the user closest to the screen yields a hair area smaller than the area threshold, the user's facial feature data is determined to be side-face feature data.
  • In one example, the preset facial feature elements include the user's hair and the user's cheek. Feature extraction is performed on the facial feature data of the user closest to the screen to obtain the area of the user's hair (area A) and the area of the user's cheek (area B); if area A is smaller than the area threshold and area B is smaller than the area threshold, the user's facial feature data is determined to be side-face feature data. Similarly, if the area of the user's hair is smaller than the area threshold, or the area of the user's cheek is smaller than the area threshold, the user's facial feature data is determined to be side-face feature data. The check can also short-circuit: if the area of the user's hair is already smaller than the area threshold, there is no need to judge the area of the user's cheek, and the user's facial feature data is directly determined to be side-face feature data.
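A sketch of the area-based test; the area threshold and the pixel-area representation are assumptions.

```python
AREA_THRESHOLD = 1500.0  # illustrative, e.g. in pixels^2

def is_side_face_by_area(element_areas: dict) -> bool:
    """element_areas: e.g. {"hair": 2100.0, "cheek": 900.0} from element extraction."""
    # any preset element whose visible area falls below the threshold decides
    # the outcome; any() short-circuits like the early decision above
    return any(area < AREA_THRESHOLD for area in element_areas.values())
```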
  • In one embodiment, the screen state switching conditions include: the 'proximity far' event is triggered; or the terminal device is detected to switch from a motion state to a static state; or the 'proximity near' event is triggered.
  • Solution 1 is used in conjunction with the proximity-sensing screen-off solution. When the terminal device is in a call state, currently in the 'near' state (screen off), and the 'proximity far' event is triggered, the screen is not lit immediately; instead the camera is started to capture the facial feature data of the user closest to the screen. If the user's facial feature data is side-face feature data, the screen-off state is kept; if it is non-side-face feature data, the screen is turned on.
  • Solution 2 does not use the proximity sensor at all. When the terminal device is in a call state and it is detected that the mobile phone has changed from a motion state to a static state, the camera is started to capture the facial feature data of the user closest to the screen.
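Solution 2's trigger, detecting the motion-to-static transition from G-sensor readings, could be sketched as follows. The window length and tolerance are assumptions; the application only requires that the transition be detected.

```python
from collections import deque
import math

class StillnessDetector:
    """Declares 'static' when recent acceleration magnitudes barely vary."""

    def __init__(self, window: int = 20, tolerance: float = 0.05):
        self.samples = deque(maxlen=window)  # recent |a| magnitudes, in g
        self.tolerance = tolerance

    def update(self, ax: float, ay: float, az: float) -> bool:
        self.samples.append(math.sqrt(ax * ax + ay * ay + az * az))
        if len(self.samples) < self.samples.maxlen:
            return False  # not enough history yet
        # near-static: magnitude stays within a narrow band around 1 g
        return max(self.samples) - min(self.samples) < self.tolerance
```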
  • As shown in FIG. 1a, the screen state control apparatus includes: a call module, a distance sensing module, a user facial feature data extraction module, a frontal facial feature data storage module, a user facial feature data judging module, and a screen on/off management module, where the call module is configured to initiate calls. The distance sensing module is mainly a distance sensor configured to recognize whether the device is near or far, and the recognition result is used as the basis for turning the screen on or off.
  • The user facial feature data extraction module calls the camera to capture facial feature data; the frontal facial feature data recorded through the camera (feature elements such as eyes, nose, ears, and hair) is used to build the facial feature library, and facial feature element extraction can use currently mature facial recognition algorithms. The frontal facial feature data storage module is configured to store the facial feature library and also to ingest facial features.
  • The user facial feature data judging module judges, according to the facial features extracted by the user facial feature data extraction module, whether the extracted facial features are side-face features: if they are side-face features, the screen is turned off; if they are not side-face features, the screen is lit up.
  • The judging methods include method 1 and method 2. Method 1: frontal facial feature data (including facial feature elements such as eyes, nose, ears, and hair) is entered in advance and stored in the facial feature library. When the user's facial feature data needs to be judged, the user facial feature data extraction module extracts the target facial feature elements and compares them with the corresponding elements in the pre-stored frontal feature element set. If any target facial feature element (eyes, nose, ears, hair, etc.) has a low matching degree with its corresponding pre-stored element, for example below 60%, the user's facial feature data is determined to be side-face feature data; if every target facial feature element matches its corresponding pre-stored element well, the user's facial feature data is non-side-face feature data. Method 2: extract the facial feature data of the user closest to the screen to obtain the number of eyes; if the number of eyes is 0 or 1, the user's facial feature data is deemed side-face feature data, and otherwise non-side-face feature data.
  • the on/off screen management module is configured to perform the operation of turning on or off the screen.
  • As shown in FIG. 1b, another screen state control apparatus includes: a call module, a motion detection module, a user facial feature data extraction module, a frontal facial feature data storage module, a user facial feature data judging module, and a screen on/off management module.
  • the motion detection module detects that the mobile phone is switched from the motion state to the static state (or close to the static state), and starts the camera to capture the facial feature data of the user closest to the screen.
  • The motion state of the mobile phone can be judged based on the gravity acceleration sensor (G-sensor).
  • The frontal facial feature data entry operation is used to input the user's frontal facial feature data in advance; the entered data is stored in the frontal facial feature data storage module and serves as the basis for comparison when facial feature data identification method 1 is performed. If the captured facial feature data of the user closest to the screen differs significantly from the stored frontal facial feature data, the user's facial feature data is identified as side-face feature data.
  • For the frontal facial feature data, as many frontal facial images of the user as possible can be entered, from different distances and angles; the system extracts and classifies the facial feature elements in each frontal image (such as eyes and ears) according to a facial recognition algorithm.
  • As shown in FIG. 1c, when the call is connected, that is, when the terminal device is in a call state, it is judged whether the proximity event is triggered. When the mobile phone in a call is brought close to the ear, the 'proximity near' event is triggered and the screen turns off; when it is moved away from the ear, the 'proximity far' event is triggered and the screen lights up. In actual operation, a slight shake of the mobile phone may falsely trigger the 'proximity far' event, and false touches occur at this point. Therefore, when the 'proximity far' event is triggered, instead of directly lighting the screen, facial feature recognition is triggered to obtain the facial feature data of the user closest to the screen and compare it with the models in the facial feature library; if the user's facial feature data is side-face feature data, the screen of the terminal device is controlled to be in the screen-off state.
  • As shown in FIG. 1d, when the call is connected, that is, when the terminal device is in a call state, if the end of a target action is detected (the motion state of the mobile phone can be judged by the G-sensor), facial feature extraction is triggered to obtain the facial feature data of the user closest to the screen, which is compared with the models in the facial feature library. If the user's facial feature data is side-face feature data, the screen of the terminal device is controlled to be in the screen-off state, i.e. the current screen state (off) is kept; if the user's facial feature data is non-side-face feature data, the screen is lit up. The end of the target action may be that the state of the mobile phone switches from a moving state to a stationary, or nearly stationary, state.
  • FIG. 2 is a schematic structural diagram of a control device provided in an embodiment of the present application.
  • The apparatus is configured in a terminal device; see FIG. 2. The control apparatus includes: an acquisition module 210 configured to obtain the facial feature data of the user closest to the screen when the terminal device is in a call state and the screen state switching condition is satisfied; and a first control module 220 configured to control the screen of the terminal device to be in the screen-off state if the user's facial feature data is side-face feature data.
  • The control apparatus provided in this embodiment is configured to implement the control method of the embodiments of the present application; its realization principle and technical effect are similar to those of the control method and will not be repeated here.
  • A second control module is configured to control the screen of the terminal device to be in the screen-on state if, after the facial feature data of the user closest to the screen is obtained when the terminal device is in a call state and the screen state switching condition is satisfied, the user's facial feature data is non-side-face feature data.
  • In one embodiment, the first control module is configured to: determine that the user's facial feature data is side-face feature data if the similarity between the user's facial feature data and the pre-stored frontal facial feature data is less than the first threshold, and control the screen of the terminal device to be in the screen-off state.
  • In one embodiment, the user's facial feature data includes at least one target facial feature element, and the first control module is configured to: determine that the user's facial feature data is side-face feature data if the similarity between at least one target facial feature element and the corresponding element in the pre-stored frontal feature element set is less than the second threshold, and control the screen of the terminal device to be in the screen-off state.
  • In one embodiment, the first control module is configured to: determine that the user's facial feature data is side-face feature data if the similarity between the user's facial feature data and the pre-stored side-face feature data is greater than the third threshold, and control the screen of the terminal device to be in the screen-off state.
  • In one embodiment, the user's facial feature data includes at least one target facial feature element, and the first control module is configured to: determine that the user's facial feature data is side-face feature data if the similarity between at least one target facial feature element and the corresponding element in the pre-stored side-face feature element set is greater than the fourth threshold, and control the screen of the terminal device to be in the screen-off state.
  • In one embodiment, the first control module is configured to: determine that the user's facial feature data is side-face feature data if the number of preset facial feature elements in the user's facial feature data is less than the count threshold, and control the screen of the terminal device to be in the screen-off state.
  • In one embodiment, the first control module is configured to: determine that the user's facial feature data is side-face feature data if the area of the preset facial feature element in the user's facial feature data is less than the area threshold, and control the screen of the terminal device to be in the screen-off state.
  • In one embodiment, the screen state switching conditions include: the 'proximity far' event is triggered; or the terminal device is detected to switch from a motion state to a static state; or the 'proximity near' event is triggered.
  • The control apparatus provided by the present application includes: an acquisition module configured to obtain the facial feature data of the user closest to the screen when the terminal device is in a call state and the screen state switching condition is satisfied, and a first control module configured to control the screen of the terminal device to be in the screen-off state if the user's facial feature data is side-face feature data. This solves the problems caused by the limitations of the distance sensor (false alarms, limited sensing range, etc.) and by the fact that the sensor is based on light sensing, so that oil stains and ambient light strongly affect it and trigger false touch operations; triggering of false touch operations can thus be effectively avoided and user experience improved.
  • FIG. 3 is a schematic structural diagram of a terminal device provided in an embodiment of the present application.
  • The terminal device provided in the present application includes one or more processors 51 and a storage device 52; there may be one or more processors 51 in the terminal device, and one processor 51 is taken as an example in FIG. 3. The storage device 52 is configured to store one or more programs; the one or more programs are executed by the one or more processors 51, so that the one or more processors 51 implement the control method described with reference to FIG. 1 in the embodiments of the present application.
  • the terminal device further includes: a communication device 53 , an input device 54 and an output device 55 .
  • the processor 51, the storage device 52, the communication device 53, the input device 54 and the output device 55 in the terminal device may be connected through a bus or in other ways. In FIG. 3, connection through a bus is taken as an example.
  • the input device 54 can be configured to receive input numbers or character information, and generate key signal input related to user settings and function control of the terminal device.
  • the output device 55 may include a display terminal such as a display screen.
  • the communication device 53 may include a receiver and a transmitter.
  • the communication device 53 is configured to perform information sending and receiving communication according to the control of the processor 51 .
  • The information includes, but is not limited to, uplink authorization information.
  • the storage device 52 can be configured to store software programs, computer-executable programs and modules, such as program instructions/modules corresponding to the control method described in FIG. 1 in the embodiment of the present application (for example, the acquisition module 210 and the first control module 220).
  • the storage device 52 may include a program storage area and a data storage area, wherein the program storage area may store an operating system and at least one application required by a function; the data storage area may store data created according to the use of the terminal device, and the like.
  • the storage device 52 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage devices.
  • the storage device 52 may include memories that are set remotely relative to the processor 51, and these remote memories may be connected to the terminal device through a network. Examples of the aforementioned networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
  • The embodiments of the present application also provide a storage medium storing a computer program which, when executed by a processor, implements the control method described in the embodiments of the present application, the control method including: when the terminal device is in a call state and the screen state switching condition is satisfied, obtaining the facial feature data of the user closest to the screen; and, if the user's facial feature data is side-face feature data, controlling the screen of the terminal device to be in the screen-off state.
  • the computer storage medium in the embodiments of the present application may use any combination of one or more computer-readable media.
  • the computer readable medium may be a computer readable signal medium or a computer readable storage medium.
  • A computer-readable storage medium may be, for example, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof.
  • Examples (a non-exhaustive list) of computer-readable storage media include: an electrical connection with one or more conductors, a portable computer disk, a hard disk, Random Access Memory (RAM), Read-Only Memory (ROM), Erasable Programmable Read-Only Memory (EPROM), flash memory, optical fiber, a portable Compact Disc ROM (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • a computer readable storage medium may be any tangible medium that contains or stores a program for use by or in connection with an instruction execution system, apparatus, or device.
  • a computer readable signal medium may include a data signal carrying computer readable program code in baseband or as part of a carrier wave. Such propagated data signals may take many forms, including but not limited to: electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, which can send, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device.
  • Program code embodied on a computer-readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, optical cable, radio frequency (RF), etc., or any suitable combination of the above.
  • Computer program code for performing the operations of the present application may be written in one or more programming languages or combinations thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as "C" or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
  • The remote computer can be connected to the user's computer via any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or it can be connected to an external computer (for example, through the Internet using an Internet service provider).
  • user terminal equipment covers any suitable type of wireless user terminal equipment, such as mobile phones, portable data processing devices, portable web browsers or vehicle-mounted mobile stations.
  • the various embodiments of the present application can be implemented in hardware or special purpose circuits, software, logic or any combination thereof.
  • some aspects may be implemented in hardware, while other aspects may be implemented in firmware or software, which may be executed by a controller, microprocessor or other computing device, although the application is not limited thereto.
  • Computer program instructions may be assembly instructions, Instruction Set Architecture (ISA) instructions, machine instructions, machine-related instructions, microcode, firmware instructions, state-setting data, or source or object code written in any combination of one or more programming languages.
  • Any logic flow block diagrams in the drawings of the present application may represent program steps, or may represent interconnected logic circuits, modules and functions, or may represent a combination of program steps and logic circuits, modules and functions.
  • Computer programs can be stored on memory.
  • The memory may be of any type suitable for the local technical environment and may be implemented using any suitable data storage technology, such as ROM, RAM, and optical memory devices and systems (Digital Video Disc (DVD) or CD), etc.
  • Computer readable media may include non-transitory storage media.
  • Data processors can be of any type suitable for the local technical environment, such as, but not limited to, general-purpose computers, special-purpose computers, microprocessors, digital signal processors (DSPs), application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), and processors based on multi-core processor architectures.

Abstract

Disclosed herein are a control method, apparatus, device, and storage medium. The control method includes: when a terminal device is in a call state and a screen state switching condition is satisfied, obtaining the facial feature data of the user closest to the screen; and, if the user's facial feature data is side-face feature data, controlling the screen of the terminal device to be in the screen-off state.

Description

Control method, apparatus, device and storage medium

Technical Field

The present application relates to the field of computer technology, for example to a control method, apparatus, device, and storage medium.

Background

Some anti-false-touch solutions already exist on the market. For example, "pocket mode" uses light and distance sensing to make the screen unusable once pocket mode is triggered. "Edge anti-false-touch" analyzes the displacement of touch points: for touch points that move, the system recognizes the current touch as valid and executes a touch response; for fixed touch points with no displacement, the system judges the current touch to be a false touch and does not respond. Another implementation of edge anti-false-touch is to define invalid regions.

In the related art, anti-false-touch during calls mainly adopts the proximity-sensing screen-off solution: when the terminal device is in a call state, the screen is turned off when the device is brought close to the ear and lit up when the device is moved away from the ear. This solution depends on a distance sensor, but limitations of the distance sensor (false alarms, limited sensing range, etc.) cause recognition failures. In some cases, a slight jitter of the terminal device falsely triggers the 'proximity far' event and the screen lights up; because the screen is lit, the terminal device switches to an interactive state and a false touch operation is triggered. Moreover, because the distance sensor is based on light sensing, oil stains and ambient light also strongly affect it, causing false touch operations.

Summary

The present application provides a control method, apparatus, device, and storage medium; the control method provided herein can effectively avoid triggering false touch operations and improve user experience.

The present application provides a control method, including: when a terminal device is in a call state and a screen state switching condition is satisfied, obtaining the facial feature data of the user closest to the screen; and, if the user's facial feature data is side-face feature data, controlling the screen of the terminal device to be in the screen-off state.

The present application provides a control apparatus, including: an acquisition module configured to obtain the facial feature data of the user closest to the screen when the terminal device is in a call state and a screen state switching condition is satisfied; and a first control module configured to control the screen of the terminal device to be in the screen-off state if the user's facial feature data is side-face feature data.

The present application provides a terminal device, including: one or more processors; and a storage device configured to store one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the above control method.

The present application provides a storage medium storing a computer program which, when executed by a processor, implements the above control method.
Brief Description of the Drawings

FIG. 1 is a flowchart of a control method provided in an embodiment of the present application;

FIG. 1a is a schematic structural diagram of a screen state control apparatus provided in an embodiment of the present application;

FIG. 1b is a schematic structural diagram of another screen state control apparatus provided in an embodiment of the present application;

FIG. 1c is a flowchart of a screen state control method provided in an embodiment of the present application;

FIG. 1d is a flowchart of another screen state control method provided in an embodiment of the present application;

FIG. 2 is a schematic structural diagram of a control apparatus provided in an embodiment of the present application;

FIG. 3 is a schematic structural diagram of a terminal device provided in an embodiment of the present application.
Detailed Description

Embodiments of the present application are described below with reference to the drawings.

The steps shown in the flowcharts of the drawings can be executed in a computer system as a set of computer-executable instructions. Moreover, although a logical order is shown in the flowcharts, in some cases the steps shown or described may be executed in an order different from the one here.

In an exemplary embodiment, FIG. 1 is a schematic flowchart of a control method provided in an embodiment of the present application. The method is applicable to scenarios of controlling a terminal device, and can be executed by a control apparatus, which can be realized by software and/or hardware and integrated in the terminal device; the terminal device may be a mobile phone.

As shown in FIG. 1, the control method provided by the present application includes S110 and S120.
S110: when the terminal device is in a call state and a screen state switching condition is satisfied, obtain the facial feature data of the user closest to the screen.

The screen state switching condition may be that the 'proximity far' event is triggered, that the terminal device is detected to switch from a motion state to a static state, that the 'proximity near' event is triggered, or that an action ends; the embodiments of the present application do not limit this.

The facial feature data of the user closest to the screen may be collected through the under-display camera of the terminal device, or through the front-facing camera of the terminal device.

The user's facial feature data may be the user's facial image, or data extracted from the facial image that characterizes the user's facial features, for example the feature data corresponding to the user's eyes, nose, ears, and hair.

The terminal device may be any device with a calling function, including a mobile phone, a tablet, a smart watch, and the like.

When the terminal device is in a call state and the screen state switching condition is satisfied, the facial feature data of the user closest to the screen is obtained; for example, when the mobile phone is in a call state and the 'proximity far' event is triggered, the front camera collects the facial feature data of the user closest to the screen.

In one example, when the terminal device is in a call state and is detected to switch from a motion state to a static state, the under-display camera of the terminal device collects the facial feature data of the user closest to the screen.
S120、若所述用户的面部特征数据为侧面面部特征数据,则控制所述终端设备的屏幕处于息屏状态。
控制所述终端设备的屏幕处于息屏状态的方式可以为:预先获取所述终端设备的屏幕的当前状态,若所述终端设备的屏幕的当前状态为息屏状态,则保持当前屏幕状态不变,若所述终端设备的屏幕的当前状态为亮屏状态,则将所述终端设备的屏幕的当前状态从亮屏状态切换至息屏状态。控制所述终端设备的屏幕处于息屏状态的方式还可以为:直接控制终端设备的屏幕处于息屏状态。
若所述用户的面部特征数据为侧面面部特征数据,则控制所述终端设备的屏幕处于息屏状态的方式可以为:将用户的面部特征数据和事先存储的正面面部特征数据进行比较,若用户的面部特征数据和事先存储的正面面部特征数据的相似度小于第一阈值,则确定用户的面部特征数据为侧面面部特征数据;若所述用户的面部特征数据为侧面面部特征数据,则控制所述终端设备的屏幕处于息屏状态的方式还可以为:对用户的面部特征数据进行特征提取,得到至少一个目标面部特征元素,若至少一个目标面部特征元素和预存正面面部特征元素集合中对应特征元素的相似度小于第二阈值,则确定所述用户的面部特征数据为侧面面部特征数据;若所述用户的面部特征数据为侧面面部特征数据,则控制所述终端设备的屏幕处于息屏状态的方式还可以为:获取所述用户的面部 特征数据中预设面部特征元素的数量,若所述用户的面部特征数据中预设面部特征元素的数量小于数量阈值,则确定所述用户的面部特征数据为侧面面部特征数据;若所述用户的面部特征数据为侧面面部特征数据,则控制所述终端设备的屏幕处于息屏状态的方式还可以为:获取所述用户的面部特征数据中预设面部特征元素的面积,若所述用户的面部特征数据中预设面部特征元素的面积小于面积阈值,则确定所述用户的面部特征数据为侧面面部特征数据。
在一个例子中,面部特征数据的认定原理是:比较正面面部特征数据和用户的面部特征数据的差异,根据该差异判定用户的面部特征数据是侧面面部特征数据还是正面面部特征数据。方法1:事先录入正面面部特征数据(正面面部特征数据包括面部特征元素,例如眼睛,鼻子,耳朵,以及头发等)并存入面部特征库,当需要对用户的面部特征数据进行判定时,用户面部特征数据提取模块提取用户的面部特征数据中的目标面部特征元素,并将目标面部特征元素和预存正面面部特征元素集合中对应的特征元素进行比较,如果任意目标面部特征元素(比如眼睛、鼻子、耳朵、或头发等)和预存正面面部特征元素集合中对应特征元素的匹配度较低,比如匹配度低于60%,则认定用户的面部特征数据为侧面面部特征数据,如果每个目标面部特征元素(比如眼睛、鼻子、耳朵、以及头发等)和预存正面面部特征元素集合中对应的特征元素的匹配度较高,则认为用户的面部特征数据为非侧面面部特征数据。方法2:对距离屏幕最近的用户的面部特征数据进行提取,得到眼睛的数量,若眼睛的数量为0或1,则认定为用户的面部特征数据为侧面面部特征数据,若眼睛的数量不为0且不为1,则认为用户的面部特征数据为非侧面面部特征数据。
面部特征数据提取匹配方法:此方法主要用于面部特征数据的认定。方法1:对当前录入的用户的面部特征数据进行提取得到至少一个目标面部特征元素,并对至少一个目标面部特征元素归类存储,将每一个目标面部特征元素(比如眼睛)和面部特征库中的同类正面面部特征列表匹配(可能录入多张面部特征,所以是一个列表),若每一个目标面部特征元素与面部特征库中的对应的特征元素的相似度较低,说明用户的面部特征数据为侧面面部特征数据,若至少存在一个目标面部特征元素与面部特征库中的对应的特征元素的相似度较高,则用户的面部特征数据为非侧面面部特征数据。
The control method provided by the present application acquires the facial feature data of the user closest to the screen when the terminal device is in a call state and a screen state switching condition is met, and controls the screen of the terminal device into the screen-off state if the facial feature data of the user is side facial feature data. This solves both the problem that the limitations of the distance sensor (false reports, limited detection range, and the like) cause a proximity-away event to be falsely triggered so that the screen is lit, the terminal device switches to an interactive state, and mistouch operations are triggered, and the problem that, since the distance sensor is based on light sensing, grease and ambient light strongly affect the sensor and likewise trigger mistouch operations. Triggering of mistouch operations can thus be effectively avoided and user experience improved.

On the basis of the above embodiment, variants of the above embodiment are proposed; for brevity, only the differences from the above embodiment are described in the variants.

In one embodiment, after acquiring the facial feature data of the user closest to the screen when the terminal device is in a call state and the screen state switching condition is met, the method further includes:

if the facial feature data of the user is non-side facial feature data, controlling the screen of the terminal device to be in a screen-on state.

The screen of the terminal device may be controlled into the screen-on state as follows: first obtain the current state of the screen of the terminal device; if the current state is the screen-off state, switch the screen from the screen-off state to the screen-on state; if the current state is the screen-on state, keep the current screen state unchanged. Alternatively, the screen of the terminal device may be driven into the screen-on state directly.

If the facial feature data of the user is non-side facial feature data, the screen of the terminal device may be controlled into the screen-on state in any of the following ways: compare the facial feature data of the user with the pre-stored frontal facial feature data, and if the similarity between the two is greater than or equal to the first threshold, determine that the facial feature data of the user is frontal facial feature data rather than side facial feature data; or perform feature extraction on the facial feature data of the user to obtain at least one target facial feature element, and if the similarity between every target facial feature element and the corresponding feature element in the pre-stored set of frontal facial feature elements is greater than or equal to the second threshold, determine that the facial feature data of the user is frontal facial feature data rather than side facial feature data; or obtain the number of preset facial feature elements in the facial feature data of the user, and if that number equals the count threshold, determine that the facial feature data of the user is frontal facial feature data rather than side facial feature data; or obtain the area of preset facial feature elements in the facial feature data of the user, and if that area is greater than or equal to the area threshold, determine that the facial feature data of the user is frontal facial feature data rather than side facial feature data.
In one embodiment, if the facial feature data of the user is side facial feature data, controlling the screen of the terminal device to be in the screen-off state includes:

if the similarity between the facial feature data of the user and pre-stored frontal facial feature data is less than a first threshold, determining that the facial feature data of the user is side facial feature data; and controlling the screen of the terminal device to be in the screen-off state.

The first threshold may be preset or set by the user; the embodiments of the present application do not limit this.

The pre-stored frontal facial feature data is frontal facial feature data recorded by the user in advance, for example a frontal facial image of the user recorded in advance, or frontal images corresponding to the user's facial feature elements recorded in advance.

For example, the similarity between the frontal facial feature data recorded in advance by the user and the facial feature data of the user is computed; if that similarity is less than the first threshold, the facial feature data of the user is determined to be side facial feature data.
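As an illustration, with facial feature data represented as fixed-length feature vectors, the first-threshold comparison might look as follows; cosine similarity is only one possible similarity measure, chosen here for the sketch since the text does not fix one, and the threshold value is illustrative.

    import math

    def cosine_similarity(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb) if na and nb else 0.0

    def is_side_face(user_features, frontal_features, first_threshold=0.8):
        """Side face if similarity to the pre-stored frontal facial
        feature data falls below the first threshold."""
        return cosine_similarity(user_features, frontal_features) < first_threshold

    print(is_side_face([0.9, 0.1, 0.3], [0.1, 0.9, 0.4]))  # True: low similarity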
In one embodiment, the facial feature data of the user includes at least one target facial feature element; correspondingly, if the facial feature data of the user is side facial feature data, controlling the screen of the terminal device to be in the screen-off state includes:

if the similarity between at least one target facial feature element and the corresponding feature element in a pre-stored set of frontal facial feature elements is less than a second threshold, determining that the facial feature data of the user is side facial feature data; and controlling the screen of the terminal device to be in the screen-off state.

The target facial feature element may be the user's eyes, the user's nose, the user's ears, the user's hair, or the like.

The at least one target facial feature element may be obtained by acquiring the facial feature data of the user closest to the screen and performing element extraction on that facial feature data.

The second threshold may be the same as or different from the first threshold; the embodiments of the present application do not limit this.

For example, if the at least one target facial feature element includes an eye image and a nose image of the user: if the similarity between the user's eye image and the eye image in the pre-stored frontal facial features is less than the second threshold, and the similarity between the user's nose image and the nose image in the pre-stored frontal facial features is less than the second threshold, the facial feature data of the user is determined to be side facial feature data. Likewise, if the similarity between the user's eye image and the eye image in the pre-stored frontal facial features is less than the second threshold while the similarity between the user's nose image and the nose image in the pre-stored frontal facial features is greater than or equal to the second threshold, the facial feature data of the user is still determined to be side facial feature data.

As a fuller example: acquire the facial feature data of the user closest to the screen and perform element extraction on it to obtain the user's eye image, nose image, ear image, and hair image; obtain a first similarity between the user's eye image and the eye image in the pre-stored frontal facial features, a second similarity between the user's nose image and the nose image in the pre-stored frontal facial features, a third similarity between the user's ear image and the ear image in the pre-stored frontal facial features, and a fourth similarity between the user's hair image and the hair image in the pre-stored frontal facial features. If at least one of the first, second, third, and fourth similarities is less than the second threshold, the facial feature data of the user is determined to be side facial feature data.

In one example, the facial feature data of the user closest to the screen is acquired and element extraction is performed on it to obtain the user's eye image, nose image, ear image, and hair image; if the similarity between the user's eye image and the eye image in the pre-stored frontal facial features is less than the second threshold, the facial feature data of the user is directly determined to be side facial feature data, without obtaining the similarities between the other target facial feature elements and the corresponding feature elements in the pre-stored set of frontal facial feature elements.
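The early exit described in this example can be sketched as follows; compare stands in for any element-level similarity function and, like the data structures, is an assumption of the sketch.

    def is_side_face_per_element(target_elements: dict,
                                 frontal_set: dict,
                                 compare,
                                 second_threshold: float = 0.6) -> bool:
        """Return True as soon as one target facial feature element scores
        below the second threshold against its pre-stored frontal
        counterpart; the remaining elements are then not examined."""
        for name, element in target_elements.items():
            if name in frontal_set and compare(element, frontal_set[name]) < second_threshold:
                return True
        return False

    # Example with a toy similarity function on scalar features.
    sim = lambda a, b: 1.0 - abs(a - b)
    print(is_side_face_per_element({"eye": 0.2}, {"eye": 0.9}, sim))  # True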
In one embodiment, if the facial feature data of the user is side facial feature data, controlling the screen of the terminal device to be in the screen-off state includes:

if the similarity between the facial feature data of the user and pre-stored side facial feature data is greater than a third threshold, determining that the facial feature data of the user is side facial feature data; and controlling the screen of the terminal device to be in the screen-off state.

The third threshold may be set by the system or by the user, and may be the same as or different from the first threshold; the embodiments of the present application do not limit this. In the description of the embodiments of the present application, the terms "first", "second", "third", "fourth", and the like are used only to distinguish descriptions and shall not be understood as indicating or implying relative importance.

The pre-stored side facial feature data may be side facial feature data recorded by the user in advance, for example a side facial image of the user recorded in advance, or side images corresponding to the user's facial feature elements recorded in advance.

For example, the similarity between the side facial feature data recorded in advance by the user and the facial feature data of the user is computed; if that similarity is greater than the third threshold, the facial feature data of the user is determined to be side facial feature data.
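The third-threshold variant compares against pre-stored side facial feature data, so the inequality flips relative to the frontal comparison sketched earlier; a minimal sketch, with the similarity measure injected as a parameter and the threshold value illustrative:

    def is_side_face_vs_side_template(user_features,
                                      side_features,
                                      similarity,
                                      third_threshold: float = 0.8) -> bool:
        """Side face if similarity to the pre-stored SIDE facial feature
        data exceeds the third threshold."""
        return similarity(user_features, side_features) > third_threshold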
In one embodiment, the facial feature data of the user includes at least one target facial feature element; correspondingly, if the facial feature data of the user is side facial feature data, controlling the screen of the terminal device to be in the screen-off state includes:

if the similarity between at least one target facial feature element and the corresponding feature element in a pre-stored set of side facial feature elements is greater than a fourth threshold, determining that the facial feature data of the user is side facial feature data; and controlling the screen of the terminal device to be in the screen-off state.

The fourth threshold may be the same as or different from the second threshold; the embodiments of the present application do not limit this.

For example, if the at least one target facial feature element includes an eye image and a nose image of the user: if the similarity between the user's eye image and the eye image in the pre-stored side facial features is greater than the fourth threshold, and the similarity between the user's nose image and the nose image in the pre-stored side facial features is greater than the fourth threshold, the facial feature data of the user is determined to be side facial feature data. Likewise, if the similarity between the user's eye image and the eye image in the pre-stored side facial features is greater than the fourth threshold while the similarity between the user's nose image and the nose image in the pre-stored side facial features is less than or equal to the fourth threshold, the facial feature data of the user is still determined to be side facial feature data.

As a fuller example: acquire the facial feature data of the user closest to the screen and perform element extraction on it to obtain the user's eye image, nose image, ear image, and hair image; obtain a first similarity between the user's eye image and the eye image in the pre-stored side facial features, a second similarity between the user's nose image and the nose image in the pre-stored side facial features, a third similarity between the user's ear image and the ear image in the pre-stored side facial features, and a fourth similarity between the user's hair image and the hair image in the pre-stored side facial features. If at least one of the first, second, third, and fourth similarities is greater than the fourth threshold, the facial feature data of the user is determined to be side facial feature data.

In one example, the facial feature data of the user closest to the screen is acquired and element extraction is performed on it to obtain the user's eye image, nose image, ear image, and hair image; if the similarity between the user's eye image and the eye image in the pre-stored side facial features is greater than the fourth threshold, the facial feature data of the user is directly determined to be side facial feature data, without obtaining the similarities between the other target facial feature elements and the corresponding feature elements in the pre-stored set of side facial feature elements.
In one embodiment, if the facial feature data of the user is side facial feature data, controlling the screen of the terminal device to be in the screen-off state includes:

if the number of preset facial feature elements in the facial feature data of the user is less than a count threshold, determining that the facial feature data of the user is side facial feature data; and controlling the screen of the terminal device to be in the screen-off state.

The count threshold may be set by the user or by the system.

The preset facial feature elements are feature elements of which there are two in frontal facial feature data. For example, the preset facial feature element may be the user's eyes, the user's ears, or the user's eyebrows; the embodiments of the present application do not limit this.

For example, if the preset facial feature element is the user's eyes, and feature extraction on the facial feature data of the user closest to the screen finds one eye or no eyes, the facial feature data of the user is determined to be side facial feature data.

In one example, if the preset facial feature elements include the user's eyes and the user's ears: if feature extraction on the facial feature data of the user closest to the screen finds that the number of eyes is one and the number of ears is one, the facial feature data of the user is determined to be side facial feature data. Alternatively, if the extraction finds that the number of eyes is one, or that the number of ears is one, the facial feature data of the user is determined to be side facial feature data. Alternatively, if the extraction finds that the number of eyes is one, the number of ears need not be judged and the facial feature data of the user is directly determined to be side facial feature data. Alternatively, if the extraction finds that the number of ears is one, the number of eyes need not be judged and the facial feature data of the user is directly determined to be side facial feature data.
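As an illustration, the count-based determination can be sketched as below, under the assumption that an extraction step reports how many instances of each preset paired element (eyes, ears, eyebrows) are visible; the count threshold of 2 reflects that these elements occur twice in frontal facial feature data, and any(...) mirrors the "or" variant above.

    def is_side_face_by_count(element_counts: dict, count_threshold: int = 2) -> bool:
        """Side face if any preset facial feature element is seen fewer
        times than the count threshold."""
        return any(count < count_threshold for count in element_counts.values())

    print(is_side_face_by_count({"eyes": 1, "ears": 1}))  # True: one eye, one ear
    print(is_side_face_by_count({"eyes": 2, "ears": 2}))  # False: frontal face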
In one embodiment, if the facial feature data of the user is side facial feature data, controlling the screen of the terminal device to be in the screen-off state includes:

if the area of preset facial feature elements in the facial feature data of the user is less than an area threshold, determining that the facial feature data of the user is side facial feature data; and controlling the screen of the terminal device to be in the screen-off state.

The area threshold may be set by the user or by the system.

Here, the preset facial feature elements are feature elements whose area in frontal facial feature data is greater than the area threshold. For example, the preset facial feature element may be the user's hair, the user's cheeks, or the user's forehead; the embodiments of the present application do not limit this.

For example, if the preset facial feature element is the user's hair, and feature extraction on the facial feature data of the user closest to the screen finds that the area of the user's hair is less than the area threshold, the facial feature data of the user is determined to be side facial feature data.

In one example, if the preset facial feature elements include the user's hair and the user's cheeks: if feature extraction on the facial feature data of the user closest to the screen finds that the area of the user's hair is area A and the area of the user's cheeks is area B, and area A is less than the area threshold and area B is less than the area threshold, the facial feature data of the user is determined to be side facial feature data. Alternatively, if the extraction finds that the area of the user's hair is less than the area threshold, or that the area of the user's cheeks is less than the area threshold, the facial feature data of the user is determined to be side facial feature data. Alternatively, if the extraction finds that the area of the user's hair is less than the area threshold, the area of the user's cheeks need not be judged and the facial feature data of the user is directly determined to be side facial feature data.
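A sketch of the area-based determination follows; a segmentation step reporting per-element areas is presumed, and the per-element pixel thresholds are illustrative assumptions rather than values from the present application.

    AREA_THRESHOLDS = {"hair": 5000, "cheek": 3000}  # hypothetical pixel areas

    def is_side_face_by_area(element_areas: dict) -> bool:
        """Side face if any preset facial feature element covers less
        area than its threshold (mirrors the 'or' variant above)."""
        return any(area < AREA_THRESHOLDS.get(name, 0)
                   for name, area in element_areas.items())

    print(is_side_face_by_area({"hair": 2400, "cheek": 3500}))  # True: hair area too small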
In one embodiment, the screen state switching condition includes:

a proximity-away event being triggered; or the terminal device being detected to switch from a moving state to a stationary state; or a proximity-near event being triggered.

The embodiments of the present application propose two schemes. Scheme 1 works together with the proximity-based screen-off scheme: with the terminal device in a call state and currently in the proximity-near state (screen off), when a proximity-away event is triggered the screen is not lit immediately; instead, the camera is started to capture the facial feature data of the user closest to the screen. If the facial feature data of the user is side facial feature data, the screen-off state is kept; if it is non-side facial feature data, the screen is lit. Scheme 2 does not use proximity sensing at all: with the terminal device in a call state, when the phone is detected to switch from a moving state to a stationary state, the camera is started to capture the facial feature data of the user closest to the screen. If the facial feature data of the user is side facial feature data, the screen is turned off; if it is non-side facial feature data, the screen is lit.

In one example, as shown in FIG. 1a, a screen state control apparatus includes: a call module, a distance sensing module, a user facial feature data extraction module, a frontal facial feature data storage module, a user facial feature data determination module, and a screen on/off management module. The call module is configured for call initiation. The distance sensing module is mainly a distance sensor and is configured to recognize whether a proximity-near or a proximity-away event has occurred, the recognition result serving as the basis for lighting or turning off the screen. The user facial feature data extraction module invokes the frontal facial feature data recorded through the camera (feature elements such as the eyes, nose, ears, and hair) to build the facial feature library; facial feature element extraction can use currently mature facial recognition algorithms. The facial feature storage module has a further function: when the distance sensing module or the motion detection module determines that facial features need to be extracted, the frontal facial feature data storage module is also configured to capture the facial features. The user facial feature data determination module judges, from the facial features extracted by the user facial feature data extraction module, whether the extracted facial features are side facial features; if they are side facial features, the screen is turned off, and if they are not side facial features, the screen is lit. The judging methods include Method 1 and Method 2. Method 1: frontal facial feature data (including facial feature elements such as the eyes, nose, ears, and hair) is recorded in advance and stored in the facial feature library; when the user's facial feature data needs to be classified, the user facial feature data extraction module extracts the target facial feature elements from the user's facial feature data and compares them with the corresponding feature elements in the pre-stored set of frontal facial feature elements. If any target facial feature element (such as an eye, the nose, an ear, or the hair) matches the corresponding pre-stored frontal feature element poorly, for example with a match degree below 60%, the user's facial feature data is classified as side facial feature data; if every target facial feature element matches the corresponding pre-stored frontal feature element well, the user's facial feature data is classified as non-side facial feature data. Method 2: extract the facial feature data of the user closest to the screen to obtain the number of eyes; if the number of eyes is 0 or 1, the user's facial feature data is classified as side facial feature data; if the number of eyes is neither 0 nor 1, it is classified as non-side facial feature data. The screen on/off management module is configured to execute the operation of lighting or turning off the screen.

As shown in FIG. 1b, another screen state control apparatus includes: a call module, a motion detection module, a user facial feature data extraction module, a frontal facial feature data storage module, a user facial feature data determination module, and a screen on/off management module. When the motion detection module detects that the phone switches from a moving state to a stationary (or nearly stationary) state, the camera is started to capture the facial feature data of the user closest to the screen; the motion state of the phone can be judged from the gravity acceleration sensor (Gravity sensor, G-sensor).
In another example, as shown in FIG. 1c, the frontal facial feature data recording operation is used to record the user's frontal facial feature data in advance and store it in the frontal facial feature data storage module, to serve as the basis of comparison when executing classification Method 1: when the captured facial feature data of the user closest to the screen differs substantially from the stored frontal facial feature data, the user's facial feature data is classified as side facial feature data. When recording frontal facial feature data, as many frontal facial images of the user as possible can be recorded from different distances and angles; the system extracts and categorizes the facial feature elements in the frontal facial images according to a facial recognition algorithm (for example, eyes in one category, ears in another, and so on).

After an incoming call is answered on the terminal device, that is, while the terminal device is in a call state, whether proximity sensing is triggered is judged: when the phone in a call is held close to the ear, a proximity-near event is triggered and the screen turns off; when the phone in a call is moved away from the ear, a proximity-away event is triggered and the screen is lit. In practice, a slight shake of the phone away from the ear may already trigger a proximity-away event, and a mistouch then occurs. Therefore, when a proximity-away event is triggered, facial feature recognition is triggered to acquire the facial feature data of the user closest to the screen rather than lighting the screen directly; the facial feature data of the user closest to the screen is compared against the models in the facial feature library, and if the facial feature data of the user is side facial feature data, the screen of the terminal device is controlled to be in the screen-off state.

As shown in FIG. 1d, after an incoming call is answered on the terminal device, that is, while the terminal device is in a call state, if the end of a target action is detected (the motion state of the phone can be judged from the G-sensor), facial feature extraction is triggered to acquire the facial feature data of the user closest to the screen, and that facial feature data is compared against the models in the facial feature library. If the facial feature data of the user is side facial feature data, the screen of the terminal device is controlled to be in the screen-off state, keeping the current screen state (off); if the facial feature data of the user is non-side facial feature data, the screen is lit. For example, the end of the target action may be the phone switching from a moving state to a stationary or nearly stationary state.
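Both schemes reduce to the same control flow once their trigger fires; the following sketch ties the pieces together, with capture_face_data, is_side_face, and set_screen as hypothetical stand-ins for the camera, the determination module, and the screen on/off management module.

    def on_switch_condition(in_call, capture_face_data, is_side_face, set_screen):
        """Scheme 1 (proximity-away) and Scheme 2 (moving-to-stationary)
        both land here: the screen follows the side-face decision instead
        of the raw proximity reading."""
        if not in_call:
            return
        face_data = capture_face_data()  # user closest to the screen
        if is_side_face(face_data):
            set_screen("off")            # keep or put the screen off
        else:
            set_screen("on")             # light the screen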
The present application provides a control apparatus. FIG. 2 is a structural schematic diagram of a control apparatus provided by an embodiment of the present application. The apparatus is arranged on a terminal device. Referring to FIG. 2, the control apparatus includes:

an acquisition module 210 configured to acquire facial feature data of the user closest to the screen when the terminal device is in a call state and a screen state switching condition is met; and a first control module 220 configured to control the screen of the terminal device to be in a screen-off state if the facial feature data of the user is side facial feature data.

The control apparatus provided by this embodiment is arranged to implement the control method of the embodiments of the present application; its implementation principle and technical effects are similar to those of the control method of the embodiments of the present application and are not repeated here.
On the basis of the above embodiment, variants of the above embodiment are proposed; for brevity, only the differences from the above embodiment are described in the variants.

In one embodiment, the apparatus further includes:

a second control module configured to, after the facial feature data of the user closest to the screen is acquired when the terminal device is in a call state and the screen state switching condition is met, control the screen of the terminal device to be in a screen-on state if the facial feature data of the user is non-side facial feature data.
In one embodiment, the first control module is configured to:

determine that the facial feature data of the user is side facial feature data if the similarity between the facial feature data of the user and the pre-stored frontal facial feature data is less than the first threshold; and control the screen of the terminal device to be in the screen-off state.

In one embodiment, the facial feature data of the user includes at least one target facial feature element, and the first control module is configured to:

determine that the facial feature data of the user is side facial feature data if the similarity between at least one target facial feature element and the corresponding feature element in the pre-stored set of frontal facial feature elements is less than the second threshold; and control the screen of the terminal device to be in the screen-off state.

In one embodiment, the first control module is configured to:

determine that the facial feature data of the user is side facial feature data if the similarity between the facial feature data of the user and the pre-stored side facial feature data is greater than the third threshold; and control the screen of the terminal device to be in the screen-off state.

In one embodiment, the facial feature data of the user includes at least one target facial feature element, and the first control module is configured to:

determine that the facial feature data of the user is side facial feature data if the similarity between at least one target facial feature element and the corresponding feature element in the pre-stored set of side facial feature elements is greater than the fourth threshold; and control the screen of the terminal device to be in the screen-off state.

In one embodiment, the first control module is configured to:

determine that the facial feature data of the user is side facial feature data if the number of preset facial feature elements in the facial feature data of the user is less than the count threshold; and control the screen of the terminal device to be in the screen-off state.

In one embodiment, the first control module is configured to:

determine that the facial feature data of the user is side facial feature data if the area of preset facial feature elements in the facial feature data of the user is less than the area threshold; and control the screen of the terminal device to be in the screen-off state.

In one embodiment, the screen state switching condition includes:

a proximity-away event being triggered; or the terminal device being detected to switch from a moving state to a stationary state; or a proximity-near event being triggered.
The control apparatus provided by the present application includes: an acquisition module configured to acquire facial feature data of the user closest to the screen when the terminal device is in a call state and a screen state switching condition is met; and a first control module configured to control the screen of the terminal device to be in a screen-off state if the facial feature data of the user is side facial feature data. This solves both the problem that the limitations of the distance sensor (false reports, limited detection range, and the like) cause a proximity-away event to be falsely triggered so that the screen is lit, the terminal device switches to an interactive state, and mistouch operations are triggered, and the problem that, since the distance sensor is based on light sensing, grease and ambient light strongly affect the sensor and likewise trigger mistouch operations. Triggering of mistouch operations can thus be effectively avoided and user experience improved.
The present application provides a terminal device. FIG. 3 is a structural schematic diagram of a terminal device provided by an embodiment of the present application. As shown in FIG. 3, the terminal device provided by the present application includes one or more processors 51 and a storage apparatus 52; there may be one or more processors 51 in the terminal device, and one processor 51 is taken as an example in FIG. 3. The storage apparatus 52 is configured to store one or more programs; the one or more programs are executed by the one or more processors 51, causing the one or more processors 51 to implement the control method described with reference to FIG. 1 in the embodiments of the present application.

The terminal device further includes a communication apparatus 53, an input apparatus 54, and an output apparatus 55.

The processor 51, storage apparatus 52, communication apparatus 53, input apparatus 54, and output apparatus 55 in the terminal device may be connected by a bus or in other ways; connection by a bus is taken as an example in FIG. 3.

The input apparatus 54 may be configured to receive input numeric or character information and to generate key signal input related to user settings and function control of the terminal device. The output apparatus 55 may include a display device such as a display screen.

The communication apparatus 53 may include a receiver and a transmitter. The communication apparatus 53 is configured to transmit and receive information under the control of the processor 51. The information includes, but is not limited to, uplink grant information.

As a computer-readable storage medium, the storage apparatus 52 may be configured to store software programs, computer-executable programs, and modules, such as the program instructions/modules corresponding to the control method described with reference to FIG. 1 of the embodiments of the present application (for example, the acquisition module 210 and the first control module 220 in the control apparatus). The storage apparatus 52 may include a program storage area and a data storage area, where the program storage area may store the operating system and the application programs required for at least one function, and the data storage area may store data created according to the use of the terminal device, and the like. In addition, the storage apparatus 52 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some instances, the storage apparatus 52 may include memory remotely located relative to the processor 51, and such remote memory may be connected to the terminal device via a network. Instances of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
An embodiment of the present application further provides a storage medium storing a computer program which, when executed by a processor, implements the control method described in the embodiments of the present application; the control method includes:

when a terminal device is in a call state and a screen state switching condition is met, acquiring facial feature data of the user closest to the screen; and if the facial feature data of the user is side facial feature data, controlling the screen of the terminal device to be in a screen-off state.

The computer storage medium of the embodiments of the present application may adopt any combination of one or more computer-readable media. The computer-readable medium may be a computer-readable signal medium or a computer-readable storage medium. The computer-readable storage medium may be, for example, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. Examples (a non-exhaustive list) of computer-readable storage media include: an electrical connection with one or more wires, a portable computer diskette, a hard disk, Random Access Memory (RAM), Read Only Memory (ROM), Erasable Programmable Read Only Memory (EPROM), flash memory, an optical fiber, portable Compact Disk Read Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. A computer-readable storage medium may be any tangible medium that contains or stores a program for use by or in connection with an instruction execution system, apparatus, or device.

A computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium that can send, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device.

Program code contained on a computer-readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical cable, Radio Frequency (RF), and the like, or any suitable combination of the above.

Computer program code for carrying out the operations of the present application may be written in one or more programming languages or a combination thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In cases involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The above are merely exemplary embodiments of the present application.

Those skilled in the art should understand that the term user terminal device covers any suitable type of wireless user terminal device, such as a mobile phone, a portable data processing apparatus, a portable web browser, or a vehicle-mounted mobile station.

In general, the various embodiments of the present application may be implemented in hardware or special-purpose circuits, software, logic, or any combination thereof. For example, some aspects may be implemented in hardware, while other aspects may be implemented in firmware or software executable by a controller, microprocessor, or other computing apparatus, although the present application is not limited thereto.

Embodiments of the present application may be implemented by a data processor of a mobile apparatus executing computer program instructions, for example in a processor entity, or by hardware, or by a combination of software and hardware. The computer program instructions may be assembly instructions, Instruction Set Architecture (ISA) instructions, machine instructions, machine-related instructions, microcode, firmware instructions, state-setting data, or source or object code written in any combination of one or more programming languages.

Block diagrams of any logic flow in the drawings of the present application may represent program steps, or interconnected logic circuits, modules, and functions, or a combination of program steps and logic circuits, modules, and functions. Computer programs may be stored on a memory. The memory may be of any type suited to the local technical environment and may be implemented using any suitable data storage technology, for example Read-Only Memory (ROM), RAM, optical storage devices and systems (Digital Video Disc (DVD) or CD), and the like. Computer-readable media may include non-transitory storage media. The data processor may be of any type suited to the local technical environment, for example but not limited to a general-purpose computer, a special-purpose computer, a microprocessor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a programmable logic device (Field-Programmable Gate Array, FPGA), and processors based on multi-core processor architectures.

Claims (12)

  1. A control method, comprising:
    when a terminal device is in a call state and a screen state switching condition is met, acquiring facial feature data of a user closest to a screen;
    when the facial feature data of the user is side facial feature data, controlling the screen of the terminal device to be in a screen-off state.
  2. The method according to claim 1, further comprising, after the acquiring of the facial feature data of the user closest to the screen:
    when the facial feature data of the user is non-side facial feature data, controlling the screen of the terminal device to be in a screen-on state.
  3. The method according to claim 1, wherein, when the facial feature data of the user is side facial feature data, the controlling the screen of the terminal device to be in the screen-off state comprises:
    when a similarity between the facial feature data of the user and pre-stored frontal facial feature data is less than a first threshold, determining that the facial feature data of the user is side facial feature data; and
    controlling the screen of the terminal device to be in the screen-off state.
  4. The method according to claim 1, wherein the facial feature data of the user comprises at least one target facial feature element; and
    the controlling, when the facial feature data of the user is side facial feature data, the screen of the terminal device to be in the screen-off state comprises:
    when there is at least one target facial feature element whose similarity with a corresponding feature element in a pre-stored set of frontal facial feature elements is less than a second threshold, determining that the facial feature data of the user is side facial feature data; and
    controlling the screen of the terminal device to be in the screen-off state.
  5. The method according to claim 1, wherein the controlling, when the facial feature data of the user is side facial feature data, the screen of the terminal device to be in the screen-off state comprises:
    when a similarity between the facial feature data of the user and pre-stored side facial feature data is greater than a third threshold, determining that the facial feature data of the user is side facial feature data; and
    controlling the screen of the terminal device to be in the screen-off state.
  6. The method according to claim 1, wherein the facial feature data of the user comprises at least one target facial feature element; and
    the controlling, when the facial feature data of the user is side facial feature data, the screen of the terminal device to be in the screen-off state comprises:
    when there is at least one target facial feature element whose similarity with a corresponding feature element in a pre-stored set of side facial feature elements is greater than a fourth threshold, determining that the facial feature data of the user is side facial feature data; and
    controlling the screen of the terminal device to be in the screen-off state.
  7. The method according to claim 1, wherein the controlling, when the facial feature data of the user is side facial feature data, the screen of the terminal device to be in the screen-off state comprises:
    when a number of preset facial feature elements in the facial feature data of the user is less than a count threshold, determining that the facial feature data of the user is side facial feature data; and
    controlling the screen of the terminal device to be in the screen-off state.
  8. The method according to claim 1, wherein the controlling, when the facial feature data of the user is side facial feature data, the screen of the terminal device to be in the screen-off state comprises:
    when an area of preset facial feature elements in the facial feature data of the user is less than an area threshold, determining that the facial feature data of the user is side facial feature data; and
    controlling the screen of the terminal device to be in the screen-off state.
  9. The method according to claim 1, wherein the screen state switching condition comprises:
    a proximity-away event being triggered;
    or,
    the terminal device being detected to switch from a moving state to a stationary state;
    or,
    a proximity-near event being triggered.
  10. A control apparatus, comprising:
    an acquisition module configured to acquire facial feature data of a user closest to a screen when a terminal device is in a call state and a screen state switching condition is met; and
    a first control module configured to control the screen of the terminal device to be in a screen-off state when the facial feature data of the user is side facial feature data.
  11. A terminal device, comprising:
    at least one processor; and
    a storage apparatus configured to store at least one program;
    wherein the at least one program, when executed by the at least one processor, causes the at least one processor to implement the control method according to any one of claims 1-9.
  12. A storage medium, wherein the storage medium stores a computer program which, when executed by a processor, implements the control method according to any one of claims 1-9.
PCT/CN2022/096741 2021-08-30 2022-06-02 Control method, apparatus, device and storage medium WO2023029631A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111006626.XA CN115731590A (zh) 2021-08-30 2021-08-30 Control method, apparatus, device and storage medium
CN202111006626.X 2021-08-30

Publications (1)

Publication Number Publication Date
WO2023029631A1 (zh) 2023-03-09

Family

ID=85291010

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/096741 WO2023029631A1 (zh) 2021-08-30 2022-06-02 控制方法、装置、设备和存储介质

Country Status (2)

Country Link
CN (1) CN115731590A (zh)
WO (1) WO2023029631A1 (zh)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108803896A (zh) * 2018-05-28 2018-11-13 Oppo(重庆)智能科技有限公司 控制屏幕的方法、装置、终端及存储介质
CN109167877A (zh) * 2018-08-01 2019-01-08 Oppo(重庆)智能科技有限公司 终端屏幕控制方法、装置、终端设备和存储介质
CN109756630A (zh) * 2019-01-25 2019-05-14 维沃移动通信有限公司 一种屏幕控制方法及终端设备
WO2019228067A1 (zh) * 2018-05-29 2019-12-05 奇酷互联网络科技(深圳)有限公司 移动终端唤醒方法、装置及移动终端

Also Published As

Publication number Publication date
CN115731590A (zh) 2023-03-03

Similar Documents

Publication Publication Date Title
WO2017181769A1 (zh) Facial recognition method, apparatus and system, device, and storage medium
WO2018121428A1 (zh) Liveness detection method, apparatus, and storage medium
CN108563936B (zh) Task execution method, terminal device, and computer-readable storage medium
WO2022183661A1 (zh) Event detection method and apparatus, electronic device, storage medium, and program product
US11386698B2 (en) Method and device for sending alarm message
US11328044B2 (en) Dynamic recognition method and terminal device
KR20160088224A (ko) Object recognition method and apparatus
WO2020259073A1 (zh) Image processing method and apparatus, electronic device, and storage medium
EP3623973B1 (en) Unlocking control method and related product
CN108090340B (zh) Facial recognition processing method, facial recognition processing apparatus, and intelligent terminal
CN112650405B (zh) Interaction method for an electronic device, and electronic device
WO2019011098A1 (zh) Unlocking control method and related product
US11620995B2 (en) Voice interaction processing method and apparatus
WO2022134388A1 (zh) Fare evasion detection method and apparatus, electronic device, storage medium, and computer program product
US20140369553A1 (en) Method for triggering signal and in-vehicle electronic apparatus
CN110597426A (zh) Screen-on processing method and apparatus, storage medium, and terminal
CN107977636B (zh) Face detection method and apparatus, terminal, and storage medium
CN106648042B (zh) Recognition control method and apparatus
EP3121790B1 (en) Image sensing apparatus, object detecting method thereof and non-transitory computer readable recording medium
WO2022183663A1 (zh) Event detection method and apparatus, electronic device, storage medium, and program product
WO2023029631A1 (zh) Control method, apparatus, device, and storage medium
CN110874131A (zh) Building intercom indoor unit, control method therefor, and storage medium
JP6044633B2 (ja) Information processing apparatus, information processing method, and program
WO2023231479A1 (zh) Pupil detection method, apparatus, storage medium, and electronic device
CN111240835A (zh) CPU operating frequency adjustment method, CPU operating frequency adjustment apparatus, and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22862773

Country of ref document: EP

Kind code of ref document: A1