CN109660662B - Information processing method and terminal - Google Patents

Information processing method and terminal

Info

Publication number
CN109660662B
CN109660662B (application CN201811474406.8A)
Authority
CN
China
Prior art keywords
user
terminal
preset condition
features
prompt
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811474406.8A
Other languages
Chinese (zh)
Other versions
CN109660662A (en)
Inventor
周席龙
王哲鹏
张晓平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Beijing Ltd filed Critical Lenovo Beijing Ltd
Priority to CN201811474406.8A
Publication of CN109660662A
Application granted
Publication of CN109660662B

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04M: TELEPHONIC COMMUNICATION
    • H04M 1/00: Substation equipment, e.g. for use by subscribers
    • H04M 1/72: Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M 1/724: User interfaces specially adapted for cordless or mobile telephones
    • H04M 1/72484: User interfaces wherein functions are triggered by incoming communication events
    • H04M 1/72448: User interfaces with means for adapting the functionality of the device according to specific conditions
    • H04M 1/72451: Adapting functionality according to schedules, e.g. using calendar applications
    • H04M 1/72454: Adapting functionality according to context-related or environment-related conditions
    • H04M 2250/00: Details of telephonic subscriber devices
    • H04M 2250/12: Subscriber devices including a sensor for measuring a physical value, e.g. temperature or motion

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Environmental & Geological Engineering (AREA)
  • Telephone Function (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The embodiment of the invention discloses an information processing method, which comprises the following steps: when a communication prompt is received, acquiring eye features of a first user, wherein the eye features of the first user at least comprise an angle between the line of sight of the first user and the terminal; detecting whether the eye features of the first user meet a first preset condition, the first preset condition representing that the angle between the line of sight of the user and the terminal falls within a first threshold range; and when the eye features of the first user meet the first preset condition, determining to respond to the communication prompt. The embodiment of the invention also discloses an electronic device and a computer storage medium.

Description

Information processing method and terminal
Technical Field
The present invention relates to the field of information technologies, and in particular, to an information processing method and a terminal.
Background
With the continuous development of terminal technology, terminals provide great convenience in daily work and life, for example making calls and receiving various types of notification messages. In many situations, however, a user who receives an incoming call or other notification message cannot use his or her hands to answer the call or open the message; for example, a user who is washing his or her hands when a call arrives cannot conveniently pick up a mobile phone with wet hands. There is therefore a need for an information processing method that allows communication messages to be answered or viewed when it is inconvenient for the user to operate the terminal by hand.
Disclosure of Invention
In order to solve the above technical problem, embodiments of the present invention are intended to provide an information processing method and a terminal.
The technical scheme of the invention is realized as follows:
the embodiment of the invention provides an information processing method, which comprises the following steps:
when a communication prompt is received, acquiring eye features of a first user; wherein, the eye features of the first user at least comprise an angle between the sight line of the first user and the terminal;
detecting whether the eye features of the first user meet a first preset condition; the first preset condition represents that an angle between a sight line of a user and the terminal meets a first threshold range;
and when the eye features of the first user meet a first preset condition, determining to respond to the communication prompt.
The embodiment of the invention also provides an information processing terminal, which comprises: a processor and a memory configured to store a computer program capable of running on the processor,
wherein the processor is configured to execute the steps of the information processing method when running the computer program.
According to the information processing method and the terminal provided by the embodiment of the invention, when a communication prompt is received, the eye features of the first user are first acquired, the eye features at least comprising an angle between the line of sight of the first user and the terminal; it is then detected whether the eye features of the first user meet a first preset condition, the first preset condition representing that the angle between the line of sight of the user and the terminal falls within a first threshold range; and when the eye features of the first user meet the first preset condition, it is determined to respond to the communication prompt. In this way, the user does not need to use his or her hands to respond to a communication prompt; the terminal determines whether to respond by detecting the user's eye features, freeing the user's hands and greatly improving the user experience.
Drawings
Fig. 1 is a schematic flowchart of an information processing method according to an embodiment of the present invention;
FIG. 2 is a flow chart illustrating another information processing method according to an embodiment of the present invention;
fig. 3 is a schematic view of a scene of an information processing method according to an embodiment of the present invention;
fig. 4 is a schematic view of another scene of an information processing method according to an embodiment of the present invention;
FIG. 5 is a flowchart illustrating a further information processing method according to an embodiment of the present invention;
FIG. 6 is a flow chart illustrating an information processing method according to another embodiment of the invention;
fig. 7 is a schematic view of a scene of another information processing method according to an embodiment of the present invention;
fig. 8 is a schematic structural diagram of an information processing terminal according to an embodiment of the present invention.
Detailed Description
The technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention.
An embodiment of the present invention provides an information processing method, which is shown in fig. 1 and includes the following steps:
step 101, when a communication prompt is received, the eye features of the first user are obtained.
The eye features of the first user at least include an angle between the sight line of the first user and the terminal.
In other embodiments of the present invention, step 101 may be implemented by the terminal, which may be any type of electronic device; in practical applications the electronic device includes, but is not limited to, smart phones, tablet computers, smart wearable devices (smart devices that can be worn on the user's body), and so forth. The first user is the operator of the terminal that receives the communication prompt, that is, the operator of the current terminal. The communication prompt may be a call prompt initiated by a second user, or a prompt for information sent by at least one second user, such as a WeChat message prompt or a short message prompt. The second user is different from the first user; that is, the first user and the second user are not the same user. In this embodiment, the eye features of the first user may be acquired by an image acquisition device, preferably the front-facing camera of the terminal.
In other embodiments of the present invention, when the terminal receives the communication prompt, the front-facing camera is started in the background and collects an image of the environment in real time; the terminal then extracts an eye image of the first user from the collected environment image. From the eye image, the terminal can extract the variation of the muscles in and around the eyes of the first user to obtain the eye features of the first user. The eye features at least comprise an angle between the line of sight of the first user and the terminal; here, this angle may be the minimum angle between the first user's line of sight and the plane in which the terminal screen lies.
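The geometry described above can be sketched as follows. This is an illustrative Python snippet, not part of the patent; representing the gaze direction and the screen plane as 3D vectors is an assumption made purely for illustration.

```python
import math

def gaze_to_screen_angle(gaze_dir, screen_normal):
    """Minimum angle (degrees) between a gaze direction vector and the
    plane of the terminal screen, given the plane's normal vector.
    Hypothetical helper; vector inputs are an assumption."""
    dot = sum(g * n for g, n in zip(gaze_dir, screen_normal))
    norm_g = math.sqrt(sum(g * g for g in gaze_dir))
    norm_n = math.sqrt(sum(n * n for n in screen_normal))
    # Angle between a line and a plane = arcsin(|cos(line, normal)|)
    return math.degrees(math.asin(abs(dot) / (norm_g * norm_n)))
```

For example, a gaze pointing straight into the screen (parallel to the normal) gives 90 degrees, while a gaze parallel to the screen plane gives 0 degrees.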
And 102, detecting whether the eye features of the first user meet a first preset condition.
The first preset condition represents that an angle between the sight line of the user and the terminal meets a first threshold range.
In other embodiments of the present invention, step 102 may be implemented by the terminal. After acquiring the angle between the line of sight of the first user and the terminal in step 101, the terminal determines whether that angle satisfies a preset angle threshold, that is, the first threshold range, so as to determine whether the line of sight of the first user falls on the terminal screen. In this way, whether the attention of the first user is focused on the terminal can be determined by judging the angle between the line of sight of the first user and the terminal.
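A minimal sketch of this first preset condition check follows; the 70 to 90 degree threshold range is purely illustrative, since the patent does not specify concrete values for the first threshold range.

```python
def meets_first_condition(gaze_angle_deg, low_deg=70.0, high_deg=90.0):
    """First preset condition: the angle between the user's line of
    sight and the screen plane lies within the first threshold range.
    The 70-90 degree default range is an assumption, not from the patent."""
    return low_deg <= gaze_angle_deg <= high_deg
```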
And 103, determining to respond to the communication prompt when the eye features of the first user meet a first preset condition.
In other embodiments of the present invention, step 103 may be implemented by the terminal. When the eye features of the first user satisfy the first preset condition, the current line of sight of the first user may be considered to fall on the screen, that is, the user's attention is focused on the display screen; at this time, the terminal determines that the first user's intention is to respond to the communication prompt, and responds to it.
Specifically, when the communication prompt in step 101 is a call prompt initiated by a second user, responding to the communication prompt means connecting the call initiated by the second user. When the communication prompt in step 101 is a prompt for information sent by at least one second user, responding to the communication prompt means displaying the information sent by the at least one second user.
In another embodiment, the terminal may receive information prompts sent by a plurality of second users, with the prompts from different second users displayed at different positions on the terminal display screen; the first user can select the prompt he or she wishes to see by looking at it. Specifically, after determining that the eye features of the user meet the first preset condition, the terminal determines, based on the collected eye features, the contact position of the first user's line of sight on the display screen, which may be represented by two-dimensional coordinates; the terminal then determines that the information prompt displayed at that position is the one the first user has selected and wants to open.
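The gaze-based selection among several prompts amounts to a simple hit-test of the two-dimensional contact position against each prompt's on-screen region. The rectangle representation and the identifiers below are hypothetical illustrations, not the patent's data model.

```python
def select_prompt(gaze_point, prompt_regions):
    """Return the id of the prompt whose on-screen rectangle contains
    the gaze contact point, or None if the gaze falls outside every
    prompt. Rectangles are (left, top, right, bottom) in screen
    coordinates; all names here are illustrative assumptions."""
    x, y = gaze_point
    for prompt_id, (left, top, right, bottom) in prompt_regions.items():
        if left <= x <= right and top <= y <= bottom:
            return prompt_id
    return None
```

For example, with a messaging prompt at the top of the screen and an SMS prompt below it, a gaze point inside the lower rectangle selects the SMS prompt.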
According to the information processing method provided by the embodiment of the invention, when a communication prompt is received, the eye features of the first user are first acquired, the eye features at least comprising an angle between the line of sight of the first user and the terminal; it is then detected whether the eye features meet a first preset condition, which represents that the angle between the line of sight of the user and the terminal falls within a first threshold range; when the eye features of the first user meet the first preset condition, it is determined to respond to the communication prompt. In this way, the user does not need to use his or her hands to respond to a communication prompt; the terminal determines whether to respond by detecting the user's eye features, freeing the user's hands and greatly improving the user experience.
Based on the foregoing embodiments, an embodiment of the present invention provides an information processing method, which is shown in fig. 2 and includes the following steps:
step 201, when the terminal receives a call prompt initiated by a second user, the eye feature of the first user and the face feature of the first user are obtained.
Wherein, the eye features of the first user at least comprise an angle between the sight line of the first user and the terminal; the facial features of the first user at least comprise the facial area of the first user.
In other embodiments of the present invention, in order to determine more accurately whether the attention of the first user is on the terminal, the facial features of the first user may be acquired in addition to the eye features. The terminal determines the first user's attention comprehensively by combining the eye features and facial features. In this embodiment, the facial features at least include the facial area of the first user.
Step 202, detecting whether the eye feature of the first user meets a first preset condition and whether the face feature of the first user meets a second preset condition.
The first preset condition represents that an angle between a sight line of a user and the terminal meets a first threshold range; the second preset condition at least comprises that the face area of the first user meets a second threshold range.
In other embodiments of the present invention, when it determines that the eye features of the first user meet the first preset condition, the terminal further needs to determine whether the facial features of the first user meet the second preset condition. Here, the facial feature may be the face area of the first user, and the second threshold range may be three-quarters of the total area of the first user's face. It can be understood that only when the face is turned toward the display screen and the eyes are looking at the terminal screen can it be confirmed that the first user's intention is to answer the incoming call or open the information prompt.
In another embodiment, when the terminal captures the surrounding environment through the front-facing camera, the environment may contain a portrait photograph; to prevent the terminal from recognizing such a photograph as the first user and responding to the communication prompt, liveness detection must be performed on the first user. Specifically, the terminal may further acquire face depth information of the first user from the collected face image and, based on this depth information, determine whether the collected facial features of the first user represent a three-dimensional face. Here, the face depth information of the first user may be acquired through the structured-light principle, the binocular imaging principle, or a Time-of-Flight (TOF) camera.
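A crude liveness check along these lines can be sketched as follows. The depth-range heuristic and the 10 mm threshold are assumptions for illustration only; the patent does not state how the three-dimensionality decision is made.

```python
def is_three_dimensional(depth_samples_mm, min_depth_range_mm=10.0):
    """Crude liveness sketch: a real face measured by structured light
    or a TOF camera shows depth relief (nose closer than cheeks), while
    a flat photograph yields a nearly constant depth map. The threshold
    is a hypothetical value chosen for illustration."""
    return (max(depth_samples_mm) - min(depth_samples_mm)) >= min_depth_range_mm
```

For example, depth samples spread over tens of millimetres pass the check, while samples within a millimetre of one another (a flat print) fail it.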
step 203, when the eye feature of the first user meets a first preset condition and the face feature of the first user meets a second preset condition, the terminal obtains a duration that the eye feature of the first user meets the first preset condition and the face feature of the first user meets the second preset condition.
And step 204, when the duration meets a preset duration threshold value, the terminal connects the call initiated by the second user.
Specifically, when the terminal determines that the eye features of the first user meet the first preset condition and the facial features meet the second preset condition, it may consider that the first user wants to respond to the incoming call prompt. In this embodiment, in order to prevent misoperation, the terminal may further record the duration for which both conditions remain satisfied; when the recorded duration meets a preset duration threshold, the terminal connects the call initiated by the second user. Here, the preset duration threshold may be 3 to 5 seconds.
In other embodiments of the present invention, when the terminal determines that the eye features of the first user meet the first preset condition and the facial features meet the second preset condition, a countdown display interface may be shown on the communication prompt interface; the countdown duration is the same as the aforementioned preset duration threshold. As shown in fig. 3, when receiving an incoming call prompt, the terminal 30 obtains the eye features and facial features of the first user through the camera 31; when the eye features meet the first preset condition and the facial features meet the second preset condition, a countdown interface 33 is displayed on the incoming call prompt interface 32, with "answer countdown" and the time "3 s" shown in the countdown interface 33. In this way, the terminal can show the first user in real time that his or her attention is focused on the display screen. If, during the countdown, the eye features of the first user no longer meet the first preset condition or the facial features no longer meet the second preset condition, the countdown stops, the countdown interface is hidden, and a prompt message of "answer failure" can be displayed on the incoming call prompt interface. As shown in fig. 4, the terminal 40 can pop up a prompt message box 42 on the incoming call prompt interface 41 and display "answer failure" in it; the terminal then again detects whether the eye features meet the first preset condition and the facial features meet the second preset condition, and when both are met the countdown display interface is shown again on the communication prompt interface, and so on, until the call initiated by the second user is connected or the answering time is exceeded. In this way, the first user can intuitively see in real time whether the current eye and facial features meet the requirements for responding to the communication prompt.
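The dwell-and-countdown behaviour of steps 203 and 204, including the reset on "answer failure", can be sketched as a small state holder. The per-frame update interface, the timing model, and the 3-second hold are illustrative assumptions.

```python
class AnswerCountdown:
    """Dwell-time confirmation sketch for steps 203-204: the call is
    connected only after both preset conditions have held continuously
    for the preset duration. Class and parameter names are hypothetical."""

    def __init__(self, hold_seconds=3.0):
        self.hold_seconds = hold_seconds
        self.elapsed = 0.0

    def update(self, eye_ok, face_ok, dt):
        """Feed one detection frame; dt is seconds since the last frame.
        Returns True once the call should be connected."""
        if eye_ok and face_ok:
            self.elapsed += dt
        else:
            # Gaze or face lost mid-countdown: reset ("answer failure").
            self.elapsed = 0.0
        return self.elapsed >= self.hold_seconds
```

Losing the gaze for even one frame restarts the countdown, matching the "answer failure" and re-detection loop described above.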
And step 205, the terminal acquires the distance between the first user and the terminal through a distance sensor.
And step 206, if the distance between the first user and the terminal is greater than a fourth threshold, the terminal starts a hands-free function.
In other embodiments of the present invention, after the terminal connects a call initiated by the second user, the current distance between the first user and the terminal may also be detected. If the distance is less than or equal to a fourth threshold, the first user may be considered close to the terminal, and the call with the second user can be carried over the earpiece. If the distance is greater than the fourth threshold, the terminal starts the hands-free function, so that the first user can talk with the second user from a greater distance.
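Steps 205 and 206 amount to a single threshold on the distance reported by the distance sensor. The 40 cm value for the fourth threshold and the route names below are assumptions for illustration; the patent does not fix a value.

```python
def choose_audio_route(distance_cm, fourth_threshold_cm=40.0):
    """Steps 205-206 sketch: route call audio to the earpiece when the
    user is close to the terminal, and enable hands-free (speaker) when
    the distance exceeds the fourth threshold. Threshold is hypothetical."""
    return "speaker" if distance_cm > fourth_threshold_cm else "earpiece"
```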
It should be noted that, for the descriptions of the same steps and the same contents in this embodiment as those in other embodiments, reference may be made to the descriptions in other embodiments, which are not described herein again.
According to the information processing method provided by this embodiment, whether to respond to a communication prompt is likewise determined by detecting the eye features of the user, so that when a communication prompt is received the user's hands are freed and the user experience is greatly improved.
Based on the foregoing embodiments, an embodiment of the present invention provides an information processing method, which is shown in fig. 5 and includes the following steps:
step 501, when the terminal receives the communication prompt, the eye feature of the first user and the face feature of the first user are obtained.
In other embodiments of the present invention, the communication prompt includes a call prompt initiated by a second user and a prompt for information from at least one second user. The eye features of the first user at least comprise an angle between the line of sight of the first user and the terminal; the first user may be the operator of the current terminal. The facial features of the first user include various features used for facial recognition, such as geometric features of the face, histogram features, and so on.
And 502, the terminal acquires the contact information of the second user corresponding to the communication prompt.
In other embodiments of the present invention, the second user may be a contact stored in the terminal or an unknown contact. The owner of the terminal can set, for each contact stored in the terminal, the authority for handling that contact's communication messages. For example, for certain important contacts, the owner may want only himself or herself to be able to handle their communication prompts; for secondary important contacts, family members of the owner may also handle their communication prompts; for unknown contacts, anyone may handle their communication prompts.
Specifically, the owner of the terminal may enter his or her facial features in advance and associate them with the communication prompts of the important contacts. When an important contact initiates a call or sends information to the terminal, the terminal first detects whether the facial features of the first user (that is, the operator of the current terminal) match the facial features of the owner. If they match, the first user can be considered to be the owner and may handle the communication prompt sent by the current important contact. If they do not match, the first user is not considered to be the owner and has no right to handle the communication prompt sent by the current important contact.
In addition, the owner of the terminal can enter in advance the facial features of at least one preset processor and associate them with the secondary important contacts. When a secondary important contact initiates a call or sends information to the terminal, the terminal first detects whether the facial features of the first user match the facial features of at least one preset processor. If they match, the first user can be considered a preset processor and may handle the communication prompt sent by the current secondary important contact. If they do not match, the first user is not considered a preset processor and has no right to handle the communication prompt sent by the current secondary important contact.
Step 503, based on the contact information of the second user, the terminal determines a target facial feature corresponding to the contact information of the second user.
In other embodiments of the present invention, the terminal may obtain, according to the contact information of the second user, the target facial features of the users who have handling authority for that contact. For example, if the contact information indicates that the second user is an important contact, the facial features of the terminal owner are used as the target facial features; if the contact information indicates that the second user is a secondary important contact, the facial features of the at least one preset processor associated with that contact are used as the target facial features.
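The mapping from contact importance to authorized facial features might be sketched as follows; the importance-level names and feature identifiers are hypothetical, introduced only to illustrate the lookup.

```python
OWNER_FEATURE = "owner_face_id"                        # hypothetical id
PRESET_PROCESSORS = {"family_member_1", "family_member_2"}  # hypothetical ids

def target_features_for(contact_level):
    """Map a contact's importance level to the set of facial-feature
    identifiers allowed to handle that contact's prompts; None means
    anyone may respond (unknown contact). All names are illustrative."""
    if contact_level == "important":
        return {OWNER_FEATURE}
    if contact_level == "secondary":
        return PRESET_PROCESSORS
    return None
```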
Step 504, if the facial features of the first user are matched with the target facial features, the terminal detects whether the eye features of the first user meet a first preset condition and whether the facial features of the first user meet a second preset condition.
The first preset condition represents that an angle between a sight line of a user and the terminal meets a first threshold range; the second preset condition at least comprises that the face area of the first user meets a second threshold range.
In another embodiment of the present invention, if the facial features of the first user match the target facial features, the current first user is considered a user with operation authority and is allowed to handle the communication prompt. In this case, the terminal then judges whether the eye features of the first user meet the first preset condition and whether the facial features meet the second preset condition, so as to determine whether the conditions for responding to the communication prompt are met.
Step 505, when the eye feature of the first user meets a first preset condition and the face feature of the first user meets a second preset condition, the terminal obtains a duration that the eye feature of the first user meets the first preset condition and the face feature of the first user meets the second preset condition.
And step 506, when the duration meets a preset duration threshold value, the terminal responds to the communication prompt.
In the information processing method provided by this embodiment, when receiving a communication prompt, the terminal may first collect the facial features of the first user and determine, based on them, whether the first user has the authority to handle the prompt. If the facial features of the first user match the target facial features, the terminal continues to detect whether the eye features and facial features of the first user meet the conditions for responding to the communication prompt. In this way, on the premise of ensuring answering security, whether to respond to the communication prompt can be determined by detecting the user's eye features, freeing the user's hands and greatly improving the user experience.
Based on the foregoing embodiments, an embodiment of the present invention provides an information processing method, as shown in fig. 6, the method including the following steps:
Step 601, when the terminal receives an information prompt sent by at least one second user, the eye features of the first user and the facial features of the first user are obtained.
Wherein, the eye features of the first user at least comprise an angle between the sight line of the first user and the terminal; the first user may be the operator of the current terminal. The facial features of the first user include various features used for facial recognition, such as geometric features of the face and histogram features.
Step 602, the terminal obtains the contact information of the second user corresponding to the information prompt.
Here, when the terminal receives information prompts sent by two or more second users, the contact information of each second user is acquired.
Step 603, based on the contact information of the second user, the terminal determines a target facial feature corresponding to the contact information of the second user.
Here, when the terminal receives information prompts from two or more second users, the facial features of the terminal owner may be set as the target facial features by default; in another embodiment, the intersection of the target facial features corresponding to the contact information of each second user may be obtained, and the target facial features in that intersection are used as the final target facial features.
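The intersection rule above can be sketched as a small lookup. The mapping name `contact_to_processors`, the use of feature IDs instead of raw feature vectors, and the fallback to the owner's face when the intersection is empty are all illustrative assumptions.

```python
def resolve_target_features(prompts, contact_to_processors, owner_id):
    """Pick the target facial-feature IDs for a batch of information prompts.

    contact_to_processors maps a contact to the set of preset processors
    (facial-feature IDs) authorized for that contact; these names are
    assumptions for illustration.
    """
    if len(prompts) < 2:
        # Single prompt: use that contact's authorized set, else the owner.
        contact = prompts[0]["contact"]
        return contact_to_processors.get(contact, {owner_id})
    # Two or more prompts: intersect the authorized sets per contact.
    sets = [contact_to_processors.get(p["contact"], set()) for p in prompts]
    common = set.intersection(*sets)
    # Empty intersection: fall back to the owner's face (assumed default).
    return common if common else {owner_id}
```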
Step 604, if the facial feature of the first user matches the target facial feature, detecting whether the eye feature of the first user meets a first preset condition and whether the facial feature of the first user meets a second preset condition.
The first preset condition represents that an angle between a sight line of a user and the terminal meets a first threshold range; the second preset condition includes that the face area of the first user meets a second threshold range, and the face depth information of the first user meets a third threshold range.
Step 605, when the eye feature of the first user meets a first preset condition and the face feature of the first user meets a second preset condition, determining a prompt of the target information selected by the first user based on the eye feature of the first user.
Here, when the eye feature of the first user meets a first preset condition and the face feature of the first user meets a second preset condition, the current attention of the first user may be considered to be on the display screen of the terminal, and then the specific position watched by the first user is continuously determined.
In other embodiments of the present invention, the terminal may receive information prompts sent by a plurality of second users, where different information prompts sent by different second users are located at different positions on the terminal display screen, and the first user may select the information prompt to view by gazing at it. Specifically, after determining that the eye features of the user meet the first preset condition, the terminal determines, based on the collected eye features of the first user, the contact position between the sight line of the first user and the display screen of the terminal, where the contact position may be represented by two-dimensional coordinates; based on that contact position, the terminal determines that the information prompt displayed at that position is the prompt of the target information the first user has selected and wants to open. As shown in fig. 7, the display screen 71 of the terminal 70 displays information prompts sent by three different contacts, including a prompt of first information 72, a prompt of second information 73, and a prompt of third information 74; the terminal may determine that the target information selected by the first user is the second information according to the contact position between the line of sight (shown by a dotted line) of the first user 75 and the display screen 71 of the terminal 70.
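Mapping the two-dimensional contact position to a prompt is a plain point-in-rectangle test. The rectangle layout and identifiers below are assumptions for illustration; the patent does not specify how prompt positions are represented.

```python
def select_prompt(gaze_xy, prompt_rects):
    """Map a gaze contact point (x, y) on the screen to the prompt it hits.

    prompt_rects: {prompt_id: (x, y, w, h)} in screen coordinates (assumed
    layout). Returns the prompt_id, or None if the gaze lands outside
    every prompt.
    """
    gx, gy = gaze_xy
    for prompt_id, (x, y, w, h) in prompt_rects.items():
        if x <= gx < x + w and y <= gy < y + h:
            return prompt_id
    return None
```

With three stacked prompts as in fig. 7, a gaze point inside the middle rectangle selects the second information.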
Step 606, the terminal displays the target information.
In another embodiment, the terminal may further record the duration for which the first user gazes at the prompt position of the target information, and display the target information to the first user if that duration exceeds a preset time threshold.
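The gaze-duration gate can be kept in a small per-prompt timer: the timer restarts whenever the gazed-at prompt changes, and fires only after the same prompt has been watched past the threshold. The class name and threshold value are assumptions.

```python
class GazeDwellTimer:
    """Tracks how long the user has gazed at one prompt; fires after a
    preset time threshold (value here is an illustrative assumption)."""

    def __init__(self, threshold=1.0):
        self.threshold = threshold
        self.target = None   # prompt currently gazed at
        self.since = None    # timestamp when that gaze began

    def update(self, prompt_id, now):
        """Feed one gaze sample; return the prompt to open, or None."""
        if prompt_id != self.target:
            # Gaze moved to a different prompt (or away): restart the timer.
            self.target, self.since = prompt_id, now
            return None
        if prompt_id is not None and now - self.since >= self.threshold:
            return prompt_id
        return None
```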
According to the information processing method provided by the embodiment of the present invention, the target information that the first user wants to view can be determined according to the gaze position of the first user, and the target information is displayed on the display screen of the terminal, which frees the user's hands and greatly improves the user experience.
In order to implement the method according to the embodiment of the present invention, an embodiment of the present invention further provides an information processing terminal, where the information processing terminal may be applied to the information processing method according to the embodiment corresponding to fig. 1 to 4, and as shown in fig. 8, the terminal 80 may include: a processor 81, a memory 82, and a communication bus 83;
the communication bus 83 is used for realizing communication connection between the processor 81 and the memory 82;
the processor 81 is used to execute the program for terminal control stored in the memory to implement the following steps:
when a communication prompt is received, acquiring eye features of a first user; wherein, the eye features of the first user at least comprise an angle between the sight line of the first user and the terminal;
detecting whether the eye features of the first user meet a first preset condition; the first preset condition represents that an angle between a sight line of a user and the terminal meets a first threshold range;
and when the eye features of the first user meet a first preset condition, determining to respond to the communication prompt.
In other embodiments of the present invention, the processor 81 is configured to, when performing the obtaining of the eye feature of the first user, further obtain a facial feature of the first user; wherein the facial features of the first user at least comprise the facial area of the first user.
When performing the detecting of whether the eye features of the first user meet the first preset condition, the processor 81 is further configured to:
detecting whether the facial features of the first user meet a second preset condition; the second preset condition at least comprises that the face area of the first user meets a second threshold range.
In other embodiments of the present invention, the facial features of the first user further include facial depth information of the first user; the second preset condition includes that the face area of the first user meets a second threshold range, and the face depth information of the first user meets a third threshold range.
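The extended second preset condition (face area plus face depth information) reduces to two range checks. The threshold values and the millimetre unit for depth are assumptions; the patent only requires that each quantity fall within its threshold range.

```python
def second_condition_met(face_area, face_depth_mm,
                         area_range=(9000, 40000),   # second threshold range (assumed)
                         depth_range=(250, 600)):    # third threshold range, mm (assumed)
    """Check the extended second preset condition: the first user's face
    area and face depth information must each fall within their range."""
    return (area_range[0] <= face_area <= area_range[1]
            and depth_range[0] <= face_depth_mm <= depth_range[1])
```

Requiring both ranges filters out a matching face that is too far away (small area, large depth) or a photograph held close to the camera (plausible area, implausible depth).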
In this embodiment of the present invention, when the communication prompt is a call prompt initiated by a second user, the responding to the communication prompt includes: connecting the call initiated by the second user.
In other embodiments of the present invention, when connecting the call initiated by the second user, the processor 81 is further configured to:
acquire the duration for which the eye features of the first user meet the first preset condition and the facial features of the first user meet the second preset condition;
and when the duration meets a preset duration threshold, connect the call initiated by the second user.
In other embodiments of the present invention, processor 81 is further configured to perform:
acquiring the distance between the first user and a terminal through a distance sensor;
and if the distance between the first user and the terminal is greater than a fourth threshold value, starting a hands-free function.
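The hands-free decision is a single comparison against the fourth threshold. The function name, the millimetre unit, and the threshold value are assumptions for illustration.

```python
def maybe_enable_handsfree(distance_mm, fourth_threshold_mm=500):
    """Decide whether to switch on the speakerphone after the call is
    connected: when the distance-sensor reading between the first user
    and the terminal exceeds the fourth threshold (value assumed),
    the hands-free function is started."""
    return distance_mm > fourth_threshold_mm
```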
In other embodiments of the present invention, when the communication prompt is an information prompt sent by at least one second user, the processor 81, in performing the determining to respond to the communication prompt when the eye features of the user meet the first preset condition, is further configured to:
when the eye feature of the first user meets a first preset condition and the face feature of the first user meets a second preset condition,
determining a prompt of target information selected by the first user based on the eye features of the first user;
and displaying the target information.
In other embodiments of the present invention, before the detecting whether the eye feature of the first user meets the first preset condition, the processor 81 is further configured to:
acquiring contact person information of a second user corresponding to the communication prompt;
determining a target facial feature corresponding to the contact information of the second user based on the contact information of the second user;
if the facial features of the first user are matched with the target facial features, detecting whether the eye features of the first user meet a first preset condition or not, and whether the facial features of the first user meet a second preset condition or not.
In an exemplary embodiment, the present invention further provides a computer-readable storage medium, such as the memory 82 including a computer program, which can be executed by the processor 81 of the information processing terminal 80 to perform the steps of the foregoing method. The computer-readable storage medium may be a ferroelectric random access memory (FRAM), a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), a flash memory, a magnetic surface memory, an optical disc, or a compact disc read-only memory (CD-ROM), among other memories.
The technical schemes described in the embodiments of the present invention can be combined arbitrarily without conflict.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only a preferred embodiment of the present invention, and is not intended to limit the scope of the present invention.

Claims (8)

1. An information processing method, the method comprising:
when a communication prompt is received, acquiring eye features of a first user and face features of the first user; wherein, the eye features of the first user at least comprise an angle between the sight line of the first user and the terminal; the facial features of the first user at least comprise facial geometric features;
acquiring contact person information of a second user corresponding to the communication prompt; determining a target facial feature corresponding to the contact information of the second user based on the contact information of the second user; the owner of the terminal can set a preset processor with the authority of processing the communication message of the contact person for each contact person stored in the terminal; the target facial feature is a facial feature of a preset processor associated with the contact information of the second user;
if the facial features of the first user are matched with the target facial features, detecting whether the eye features of the first user meet a first preset condition; the first preset condition represents that an angle between a sight line of a user and the terminal meets a first threshold range;
detecting whether the facial features of the first user meet a second preset condition; the second preset condition at least comprises that the face area of the first user meets a second threshold range;
and determining to respond to the communication prompt when the eye feature of the first user meets a first preset condition and the facial feature of the first user meets a second preset condition.
2. The method of claim 1, wherein the facial features of the first user further comprise facial depth information of the first user;
correspondingly, the second preset condition includes that the face area of the first user meets a second threshold range, and the face depth information of the first user meets a third threshold range.
3. The method according to claim 1 or 2, wherein, when the communication prompt is a call prompt initiated by a second user, the responding to the communication prompt is: connecting the call initiated by the second user.
4. The method of claim 3, wherein the connecting the call initiated by the second user further comprises:
acquiring the duration that the eye features of the first user meet a first preset condition and the facial features of the first user meet a second preset condition;
and when the duration meets a preset duration threshold, connecting the call initiated by the second user.
5. The method of claim 3, further comprising:
acquiring the distance between the first user and a terminal through a distance sensor;
and if the distance between the first user and the terminal is greater than a fourth threshold value, starting a hands-free function.
6. The method according to claim 1 or 2, wherein when the communication prompt is an information prompt sent by at least one second user, and when the eye feature of the user meets a first preset condition, determining to respond to the communication prompt further comprises:
when the eye feature of the first user meets a first preset condition and the face feature of the first user meets a second preset condition,
determining a prompt of target information selected by the first user based on the eye features of the first user;
and displaying the target information.
7. The method according to claim 4, wherein before the obtaining of the duration for which the eye features of the first user meet the first preset condition and the facial features of the first user meet the second preset condition, the method further comprises:
and displaying a countdown display interface on the communication prompt interface.
8. An information processing terminal, the terminal comprising: a processor and a memory configured to store a computer program capable of running on the processor,
wherein the processor is configured to perform the steps of the method of any one of claims 1 to 7 when running the computer program.
CN201811474406.8A 2018-12-04 2018-12-04 Information processing method and terminal Active CN109660662B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811474406.8A CN109660662B (en) 2018-12-04 2018-12-04 Information processing method and terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811474406.8A CN109660662B (en) 2018-12-04 2018-12-04 Information processing method and terminal

Publications (2)

Publication Number Publication Date
CN109660662A CN109660662A (en) 2019-04-19
CN109660662B true CN109660662B (en) 2021-07-16

Family

ID=66112322

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811474406.8A Active CN109660662B (en) 2018-12-04 2018-12-04 Information processing method and terminal

Country Status (1)

Country Link
CN (1) CN109660662B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110120997A (en) * 2019-06-12 2019-08-13 Oppo广东移动通信有限公司 Call control method and Related product

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103576839B (en) * 2012-07-24 2019-03-12 广州三星通信技术研究有限公司 The device and method operated based on face recognition come controlling terminal
JP2014116716A (en) * 2012-12-07 2014-06-26 Samsung Display Co Ltd Tracking device
KR102062310B1 (en) * 2013-01-04 2020-02-11 삼성전자주식회사 Method and apparatus for prividing control service using head tracking in an electronic device
CN105334961A (en) * 2015-10-27 2016-02-17 惠州Tcl移动通信有限公司 Method for controlling mobile terminal based on eyeball tracking and mobile terminal
CN105592224A (en) * 2015-12-31 2016-05-18 宇龙计算机通信科技(深圳)有限公司 Communication information processing method and mobile terminal
CN107426422A (en) * 2017-07-13 2017-12-01 广东欧珀移动通信有限公司 Event-handling method and Related product
CN108803867A (en) * 2018-04-12 2018-11-13 珠海市魅族科技有限公司 A kind of information processing method and device

Also Published As

Publication number Publication date
CN109660662A (en) 2019-04-19

Similar Documents

Publication Publication Date Title
US10313288B2 (en) Photo sharing method and device
EP3288238B1 (en) Terminal alarm method and apparatus
KR102610013B1 (en) An apparatus for providinng privacy protection and method thereof
KR102488563B1 (en) Apparatus and Method for Processing Differential Beauty Effect
EP2960822A1 (en) Method and device for locking file
US9924090B2 (en) Method and device for acquiring iris image
CN106529339A (en) Picture display method, device and terminal
CN105224924A (en) Living body faces recognition methods and device
CN104660907A (en) Shooting method and device as well as mobile terminal
CN110532957B (en) Face recognition method and device, electronic equipment and storage medium
CN106126082B (en) Terminal control method and device and terminal
US20170339287A1 (en) Image transmission method and apparatus
EP3261046A1 (en) Method and device for image processing
US10769743B2 (en) Method, device and non-transitory storage medium for processing clothes information
CN106980836B (en) Identity verification method and device
CN109660662B (en) Information processing method and terminal
US10846513B2 (en) Method, device and storage medium for processing picture
CN111027812A (en) Person identification method, person identification system, and computer-readable storage medium
CN107563395B (en) Method and device for dressing management through intelligent mirror
CN104902102B (en) Incoming call request response method and electronic equipment
CN106201576A (en) A kind of fingerprint recognition camera system and method
CN109981890B (en) Reminding task processing method, terminal and computer readable storage medium
CN107133551B (en) Fingerprint verification method and device
CN112492103A (en) Terminal peep-proof method and device and storage medium
CN112883791B (en) Object recognition method, object recognition device, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant