CN116909439A - Electronic equipment and interaction method thereof - Google Patents

Electronic equipment and interaction method thereof

Info

Publication number
CN116909439A
CN116909439A
Authority
CN
China
Prior art keywords
application
user
screen
interface
electronic device
Prior art date
Legal status
Granted
Application number
CN202311178188.4A
Other languages
Chinese (zh)
Other versions
CN116909439B (en)
Inventor
谢字希
邸皓轩
李丹洪
Current Assignee
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date
Filing date
Publication date
Application filed by Honor Device Co Ltd
Priority to CN202311178188.4A
Publication of CN116909439A
Application granted
Publication of CN116909439B
Legal status: Active
Anticipated expiration


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 - Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/013 - Eye tracking input arrangements
    • G06F 3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range

Abstract

The application relates to an electronic device and an interaction method, where the method includes: displaying a first interface, where the first interface includes a first application control; detecting a first inertial action, performed by a user on the electronic device, that corresponds to the first application control, and detecting that the gaze area where the user gazes at the screen of the electronic device corresponds to the first application control; and executing a trigger function corresponding to the first application control. With this interaction method, while the user holds and uses the electronic device, provided that the user's gaze area falls on the screen of the electronic device (the gaze area may include an application interface of an application program, a local area of the screen, and the like), the application control in the gaze area can be operated according to the inertial action performed by the user on the electronic device. The electronic device can therefore be used safely and conveniently even when held with one hand, reducing the risk of dropping the electronic device or even injuring the user.

Description

Electronic equipment and interaction method thereof
Technical Field
The application relates to the technical field of intelligent terminals, and more particularly, to an electronic device and an interaction method thereof.
Background
With the development of electronic device screens toward large screens and full screens, the screen of an electronic device can improve the user experience of browsing pictures and videos and playing games. However, as screens continue to grow larger, when a user holds the electronic device with one hand (for example, while riding a subway or going up and down stairs), the thumb of that hand cannot reach the entire area of the screen. Therefore, when operating with one hand, the user has to adjust the one-handed holding gesture or holding position to reach the whole screen area; this adjustment is cumbersome and increases the risk of dropping the electronic device. If the user switches to two-handed operation, the risk of an accident to the user increases. For example, when a user riding a subway operates the electronic device with both hands and thereby lets go of the handrail in the carriage, the user may fall once emergency braking occurs. Therefore, a method is needed that can safely and easily control the entire area of the screen of an electronic device.
Disclosure of Invention
The application provides electronic equipment and an interaction method thereof.
In a first aspect, an embodiment of the present application provides an interaction method applied to an electronic device, where the method includes: displaying a first interface, where the first interface includes a first application control; detecting a first inertial action, performed by the user on the electronic device, that corresponds to the first application control, and detecting that the gaze area where the user gazes at the screen of the electronic device corresponds to the first application control; and executing a trigger function corresponding to the first application control.
In the present application, the electronic device may include various electronic devices configured with a display screen (touch screen) and a front-facing camera, such as a cellular phone, a tablet computer, a wearable device, and the like. The first interface may be a screen of the electronic device, the first application control may be an application control of an application program displayed in the first interface, the first inertial action may be an operation performed by the user on the electronic device, and the gaze area may include, for example, the entire screen of the electronic device or a local area of the screen. The trigger function may be the execution result (such as jumping to another application interface or deleting display content) that corresponds to the user performing a user operation (such as a click or slide operation) on the application control.
It can be seen that, with the interaction method of the application, while the user holds and uses the electronic device, once it is determined that the user's eyes are gazing at the screen of the electronic device, that is, the user's gaze area falls on the screen of the electronic device (the gaze area may include an application interface of an application program, a local area of the screen, and the like), the application control in the gaze area can be operated according to the inertial action performed by the user on the electronic device. The electronic device can thus be used safely and simply even when the user holds it with one hand, reducing the risk of dropping the electronic device or even injuring the user.
In one possible implementation of the first aspect, the first interface includes a first application interface of the first application, and the first application control belongs to the first application.
In the application, the screen of the electronic device may display only the first application interface of the first application, and the first application control may be an application control in the first application interface of the first application.
In one possible implementation manner of the first aspect, executing the trigger function corresponding to the first application control includes:
and displaying, corresponding to the first inertial action being moving the electronic device closer to the user, the first display content corresponding to the first application control.
In the application, if the first application is a short message application and the first application control is a short message control in the list interface of the short message application, then after the electronic device is moved closer to the user, the first display content displayed is the detailed interface of the short message corresponding to the short message control.
In a possible implementation manner of the first aspect, executing the trigger function corresponding to the first application control further includes:
and deleting, corresponding to the first inertial action being tilting the electronic device to the left, the first display content corresponding to the first application control from the first application interface.
In the application, if the first application is a short message application and the first application control is a short message control in the list interface of the short message application, then after the electronic device is tilted to the left, the short message corresponding to the short message control is deleted.
In a possible implementation manner of the first aspect, executing the trigger function corresponding to the first application control further includes:
and marking, corresponding to the first inertial action being tilting the electronic device to the right, the first display content corresponding to the first application control in the first application interface.
In the application, if the first application is a short message application and the first application control is a short message control in the list interface of the short message application, then after the electronic device is tilted to the right, the short message corresponding to the short message control is marked.
In one possible implementation of the first aspect, the first interface includes a first application interface of the first application, and the first application control belongs to the second application.
In the application, the screen of the electronic device may display only the first application interface of the first application, and the first application control may be a control of the second application displayed over the first application interface, for example: a pop-up box, a prompt box, and the like.
In one possible implementation manner of the first aspect, executing the trigger function corresponding to the first application control includes:
and opening, corresponding to the first inertial action being tilting the electronic device to the right, a second application interface of the second application.
In the application, if the first application is a news application and the first application control is a pop-up box of the mail application in the list interface of the news application, then after the electronic device is tilted to the right, the application interface of the mail application, that is, the second application interface, is displayed.
In a possible implementation of the first aspect, the second application interface includes a second application control; and the method further includes: detecting that the user tilts the electronic device to the left and that the gaze area where the user gazes at the screen of the electronic device corresponds to the second application control; and displaying the first application interface of the first application, minimizing the second application interface, and running the second application in the background.
In the present application, the second application interface may be the application interface of the mail application, and the second application control may be a return control in the application interface of the mail application; after the electronic device is tilted to the left, the list interface of the news application is displayed, that is, the device returns to the list interface of the news application. The mail application may run in the background.
In a possible implementation of the first aspect, detecting that the user gazes at the screen of the electronic device includes: collecting a facial image of the user; acquiring the eye region of the user when it is determined that the user's face in the facial image is oriented toward the screen of the electronic device; and, when it is determined that the gaze direction of the eyes in the eye region points at the screen of the electronic device, determining the gaze area of the user's eyes on the screen of the electronic device.
In the present application, neural network models, for example a face recognition model, a face gaze detection model, and a gaze estimation model, may be used to determine the gaze area of the user's eyes on the screen of the electronic device from the facial images of the user collected by the electronic device.
In a second aspect, an embodiment of the present application provides an electronic device, including:
a memory, configured to store instructions to be executed by one or more processors of the electronic device; and
a processor, which is one of the processors of the electronic device and is configured to perform the interaction method of the first aspect.
In a third aspect, embodiments of the present application provide a computer program product comprising: a non-transitory computer readable storage medium containing computer program code for performing the interaction method of the first aspect.
Drawings
Fig. 1 (a) is a schematic diagram of a scenario in which a user holds and operates a mobile phone according to an embodiment of the present application;
fig. 1 (b) is a schematic diagram of a scenario in which a user holds and operates a mobile phone according to an embodiment of the present application;
fig. 1 (c) is a schematic diagram of a scenario in which a user holds and operates a mobile phone according to an embodiment of the present application;
fig. 2 (a) is a schematic diagram of a scenario in which a user holds and operates a mobile phone with one hand in a subway scenario according to an embodiment of the present application;
fig. 2 (b) is a schematic diagram of a scenario in which a user holds and operates a mobile phone according to an embodiment of the present application;
FIG. 2 (c) is a schematic diagram of a set of application interfaces provided by an embodiment of the present application;
FIG. 3 (a) is a block diagram of a control system for performing an interaction method according to an embodiment of the present application;
fig. 3 (b) is a schematic diagram of a face detection framework according to an embodiment of the present application;
fig. 3 (c) is a schematic diagram of a face gaze detection framework according to an embodiment of the present application;
fig. 3 (d) is a schematic diagram of a gaze estimation framework provided by an embodiment of the present application;
FIG. 3 (e) is a schematic diagram of a set of inertial motions provided by an embodiment of the present application;
fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
Fig. 5 is a schematic software structure of an electronic device according to an embodiment of the present application;
FIG. 6 is a schematic flow chart of an interaction method according to an embodiment of the present application;
fig. 7 (a) is a schematic diagram of a scenario in which a user holds and operates a mobile phone according to an embodiment of the present application;
fig. 7 (b) is a schematic diagram of a scenario in which a user holds and operates a mobile phone according to an embodiment of the present application;
fig. 7 (c) is a schematic diagram of a scenario in which a user holds and operates a mobile phone according to an embodiment of the present application;
fig. 7 (d) is a schematic diagram of a scenario in which a user holds and operates a mobile phone according to an embodiment of the present application;
fig. 7 (e) is a schematic diagram of a scenario in which a user holds and operates a mobile phone according to an embodiment of the present application;
fig. 7 (f) is a schematic diagram of a scenario in which a user holds and operates a mobile phone according to an embodiment of the present application;
fig. 7 (g) is a schematic diagram of a scenario in which a user holds and operates a mobile phone according to an embodiment of the present application;
fig. 8 (a) is a schematic diagram of a scenario in which a user holds and operates a mobile phone according to an embodiment of the present application;
fig. 8 (b) is a schematic diagram of a scenario in which a user holds and operates a mobile phone according to an embodiment of the present application;
fig. 8 (c) is a schematic diagram of a scenario in which a user holds and operates a mobile phone according to an embodiment of the present application;
Fig. 8 (d) is a schematic diagram of a scenario in which a user holds and operates a mobile phone according to an embodiment of the present application;
fig. 8 (e) is a schematic diagram of a scenario in which a user holds and operates a mobile phone according to an embodiment of the present application;
fig. 8 (f) is a schematic diagram of a scenario in which a user holds and operates a mobile phone according to an embodiment of the present application;
fig. 8 (g) is a schematic diagram of a scenario in which a user holds and operates a mobile phone according to an embodiment of the present application.
Detailed Description
The technical solutions of the embodiments of the present application will be clearly and thoroughly described below with reference to the accompanying drawings.
It is understood that the technical solutions of the present application are applicable to various electronic devices configured with a display screen (touch screen) and a front-facing camera, for example, mobile terminals such as a mobile phone, a tablet computer, a wearable device, a vehicle-mounted device, an augmented reality (AR)/virtual reality (VR) device, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a personal digital assistant (PDA), and the like.
The following describes an embodiment of the present application by taking an electronic device as a mobile phone.
Referring to fig. 1 (a) to 1 (c), fig. 1 (a) to 1 (c) show a schematic view of a scenario in which a user holds and operates a mobile phone 100.
As shown in fig. 1 (a), the screen of the mobile phone 100 displays an application interface of the short message application 101. The application interface of the short message application 101 may be the detailed interface of an SMS/MMS message. The application interface of the short message application 101 includes the short message content 102, a return information control 103, and a contact control 104. The user may click the return information control 103 to return to the list interface of SMS/MMS messages, and may click the contact control 104 to switch to the detailed interface of the contact of the SMS/MMS message. When the user holds the lower part of the mobile phone 100 with one hand (the left hand), if the user wants to click the return information control 103 but the thumb of the left hand cannot reach the screen area corresponding to the return information control 103, the user needs to adjust the one-handed holding position; as shown in fig. 1 (b), the user can adjust the holding position to the middle part of the mobile phone 100 and click the return information control 103 with the thumb of the left hand. In some embodiments, as shown in fig. 1 (c), the user may also keep the holding position unchanged and click the return information control 103 with the right hand.
As can be seen, with continued reference to fig. 1 (b), the process of adjusting from holding the lower part of the mobile phone 100 with one hand (the left hand) to holding its middle part is cumbersome, and the mobile phone 100 may also be accidentally dropped; as shown in fig. 1 (c), if the user operates the mobile phone 100 with the right hand while riding a subway, so that neither hand holds the handrail in the carriage, the user may fall once emergency braking occurs. The above scenarios bring risks to the user of the mobile phone.
In order to solve the above problems, an embodiment of the present application proposes an interaction method for an electronic device. In a specific scenario (such as the aforementioned subway scenario), a trigger function is preset so that, according to an inertial action performed by the user on the electronic device (detected, for example, by an inertial measurement unit (IMU)), the trigger function is executed on the application control in the user's current gaze area on the electronic device. The inertial action here may be an operation performed by the user on the electronic device, for example: tilting the electronic device to the right. The trigger function here may be the result of the user performing a corresponding operation (e.g., a click or slide operation) on the application control (e.g., jumping to another application interface, deleting display content, etc.).
For example, in a subway scenario, if the short message application is used, the inertial action of turning the mobile phone to the right (tilting right) can be bound to a corresponding trigger function: the mobile phone automatically opens a certain short message in the short message application.
In this way, during subsequent use of the electronic device, corresponding operations can be executed automatically according to the usage scenario of the electronic device and the inertial action of the user. That is, after it is determined that the gaze area of the user's eyes falls on the screen of the electronic device, the trigger function of at least one application control in the gaze area is executed, and the electronic device changes its display content accordingly. In some embodiments, the gaze area here may include, for example, the entire screen of the electronic device or a local area of the screen, and the application controls here may include application controls in an application interface of an application program or application controls of other application programs (hereinafter simply referred to as controls), such as pop-up boxes, prompt boxes, and the like.
For example: referring to fig. 2 (a) to 2 (c), in the scenario shown in fig. 2 (a) to 2 (c), a user may be in a subway scenario and hold the mobile phone 100 with a single hand using the short message application 101, the user performs a left-turn (tilting to the left) inertial motion on the mobile phone 100, and the left-turn inertial motion has a correspondence with the return information control 103 in the application interface of the short message application 101, and the mobile phone 100 may change the display content of the short message application 101 in response to the left-turn inertial motion.
As shown in fig. 2 (a), in the process of using the sms application of the mobile phone 100 by the user, the mobile phone 100 may determine that the eyes of the user gazes at the screen of the mobile phone 100 based on the face image by collecting the face image of the user, and the gazing area is an application interface of the sms application.
As shown in fig. 2 (b), the screen of the mobile phone 100 displays an application interface of the short message application 101. The application interface of the short message application 101 includes short message content 102, a return information control 103, and a contact control 104. When the user holds the lower part of the mobile phone 100 with one hand (holds with the left hand), if the user wants to click the return information control 103 and the thumb of the left hand of the user cannot touch the screen area corresponding to the return information control 103, the user can keep the holding position unchanged for turning left the mobile phone 100. In response to the above-mentioned left-turn inertia action, as shown in fig. 2 (c), the screen of the mobile phone 100 displays an application interface of the sms application 101, where the application interface may be a list interface of the sms application 101, that is, through the left-turn inertia action, the mobile phone 100 controls the sms application 101 to return to the list interface from the detailed interface, that is, realizes the clicking operation on the return information control 103.
In some embodiments, the interaction method provided by the embodiment of the present application further includes: while the user uses the target application program and the electronic device displays the application interface of the target application program, collecting a facial image of the user; acquiring the eye region of the user when it is determined that the face orientation of the user in the facial image is toward the screen of the electronic device; and determining the gaze area of the user's eyes on the screen of the electronic device when it is determined that the gaze direction of the user's eyes points at the screen. The gaze area here may include the application interface of the application program, a local area of the application interface, a local area of the screen of the electronic device, and the like. In response to an inertial action performed by the user on the electronic device, an operation corresponding to the inertial action (such as clicking a control in an application interface, closing the application interface, or returning to the previous interface) is performed on at least one target control in the gaze area (including controls in the application interface of the target application program and controls of other application programs). The inertial action here may include: tilting the electronic device to the left/right/front/back (turning left, turning right), moving the electronic device up/down, moving the electronic device closer to or farther from the user, and the like.
In some embodiments, the process of determining that the face orientation of the user in the facial image is toward the screen of the electronic device may include: acquiring key points (also referred to as feature points, such as the eyes, nose, and mouth) corresponding to the user's face in the facial image, calculating the face angle of the user's face relative to the screen of the electronic device according to the positions of the key points, and calculating the face orientation based on the face angle. The process of determining that the gaze direction of the user's eyes points at the screen of the electronic device may include: acquiring the eye region of the user in the facial image, identifying the positions of key points corresponding to the eyes in the eye region (such as the key points corresponding to the pupils and eye corners), and determining the gaze direction of the eyes according to the positions of the eye key points. Determining the gaze area of the user's eyes on the screen of the electronic device may include: establishing a coordinate system of the screen of the electronic device, measuring the relative position of the user's face and the screen based on the user's face in the facial image, calculating the intersection point of the gaze direction and the screen according to the relative position and parameters such as the size and resolution of the screen, and determining the gaze area based on the screen coordinate system and the intersection point.
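The intersection computation just described can be sketched as follows. This is a minimal illustration, assuming the eye position and gaze direction have already been estimated and are expressed in a coordinate system attached to the screen (origin at the top-left corner, x to the right, y downward, z pointing from the screen toward the user); the function name and units are assumptions, not anything defined in this application.

```python
import numpy as np

def gaze_point_on_screen(eye_pos_mm, gaze_dir, screen_w_mm, screen_h_mm,
                         screen_w_px, screen_h_px):
    """Intersect the gaze ray with the screen plane z = 0 and map it to pixels.

    Returns (x_px, y_px), or None when the gaze does not hit the screen.
    """
    p = np.asarray(eye_pos_mm, dtype=float)   # eye position relative to the screen
    d = np.asarray(gaze_dir, dtype=float)     # gaze direction vector
    if abs(d[2]) < 1e-6 or d[2] >= 0:         # ray parallel to or pointing away from the screen
        return None
    t = -p[2] / d[2]                          # ray parameter where it meets z = 0
    hit = p + t * d                           # intersection point in millimetres
    if not (0.0 <= hit[0] <= screen_w_mm and 0.0 <= hit[1] <= screen_h_mm):
        return None                           # intersection falls outside the screen
    return (hit[0] / screen_w_mm * screen_w_px,
            hit[1] / screen_h_mm * screen_h_px)
```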
According to the interaction method provided by the embodiment of the application, while the user holds the electronic device and uses its target application program, once it is determined that the user's eyes are gazing at the screen of the electronic device, that is, the user's gaze area falls on the screen of the electronic device (the gaze area includes the application interface of the target application program, a local area of the screen, and the like), the controls of the target application program or the controls of other applications in the gaze area can be operated according to the inertial action performed by the user on the electronic device. The electronic device can thus be used safely and simply even when held with one hand, reducing the risk of dropping the electronic device or even injuring the user, and achieving a better user experience.
For ease of understanding, taking the mobile phone 100 as an example of the electronic device, a structural diagram of a control system 300 provided in the mobile phone 100 for executing an interaction method is further described with reference to fig. 3 (a); the control system 300 may be used to execute the interaction method mentioned in the present application.
As shown in fig. 3 (a), the control system 300 includes an image acquisition module 301, a gaze estimation module 302, an inertial action recognition module 303, and a trigger control module 304. The image acquisition module 301 is configured to collect a facial image of the user and, when determining that the facial image includes the user's face, acquire the region corresponding to the face in the facial image. The image acquisition module 301 is further configured to determine the orientation of the face, and to acquire the eye region of the user's face when determining that the user's face in the facial image is oriented toward the screen of the mobile phone 100. If the facial image does not include the user's face, or the user's face is not oriented toward the screen of the mobile phone 100, the interaction method is exited.
The gaze estimation module 302 is configured to determine a gaze direction of eyes of the user based on the eye area, and determine a gaze area of eyes of the user in a screen of the electronic device based on the gaze direction. If the gaze area corresponding to the eyes of the user is not the screen of the mobile phone 100 or the gaze estimation module 302 is not started, the interaction method is exited.
The inertial action recognition module 303 is configured to respond to an inertial action performed by the user on the mobile phone 100 and match the control operation corresponding to the inertial action. The control operation may include an operation performed on at least one target control in the gaze area, including operations on controls in the application interface of the target application program displayed on the screen of the mobile phone 100 and operations on controls of other application programs. For example, the left-turn inertial action may correspond to a click operation on a control of the application interface of the target application program displayed on the screen of the mobile phone 100. It can be appreciated that if no control operation corresponding to the inertial action is matched, the interaction method is exited.
The trigger control module 304 is configured to perform the control operation corresponding to the inertial action on the target control in the gaze area (including controls in the application interface of the target application program displayed on the screen of the mobile phone 100). It will be appreciated that after the control operation is completed, the interaction method is exited.
Through the control system 300 shown in fig. 3 (a), in different scenarios where the user uses the mobile phone 100, that is, when the user uses different target application programs through the mobile phone 100, different inertial actions can trigger different control operations (trigger different events), which simplifies the user's use of the target application programs. The flow in which the control system 300 shown in fig. 3 (a) implements the interaction method of the embodiment of the present application may include:
s301: the image acquisition module 301 acquires an image.
The image acquisition module 301 may acquire at least one image in real time by using a front camera of the mobile phone 100. If the user is using the mobile phone 100, that is, the user uses the target application program through the mobile phone 100, and the screen of the mobile phone 100 faces the face of the user, the image collected by the mobile phone 100 includes the face of the user.
S302: the image acquisition module 301 determines whether the face of the user is included in the image.
Illustratively, if the user is using the target application program through the mobile phone 100, for example when the user opens the short message application through the mobile phone 100 and browses a short message, the image collected by the image acquisition module 301 includes the user's face, and step S304 is executed to further obtain the region corresponding to the user's face in the image. Otherwise, step S303 is executed to exit the flow of the interaction method. It will be appreciated that if the user opens the music application through the mobile phone 100 and is listening to music, the user may put the mobile phone 100 in a pocket or backpack, and in this scenario the image captured by the image acquisition module 301 will not include the user's face.
In some embodiments, the image acquisition module 301 may use a face detection framework, for example the DBFace model (a ResNet50-based model), to acquire the user's face in the image; if the face detection framework outputs a region corresponding to a face, it is determined that the image includes the user's face. Fig. 3 (b) shows a schematic diagram of the face detection framework used by the image acquisition module 301. The face detection framework may include N (N may be a natural number) feature processing layers and at least one support vector machine (SVM), which may also be referred to as a classifier. The process of determining that the image includes the user's face based on the face detection framework may include: the image (input image) acquired by the image acquisition module 301 is input into the face detection framework shown in fig. 3 (b), which may include a plurality of feature processing layers (convolution layers), a classifier, and the like. After feature extraction by the feature processing layers, if the image includes the user's face, the face detection framework may set a classification flag for the output image, where the classification flag identifies that the image includes the user's face. In other embodiments, if the image does not include a face, the output image may not carry a classification flag.
S303: and (5) ending the execution.
For example, in the case where the image acquisition module 301 determines that the image does not include the user's face, where the user's face is not oriented toward the screen of the mobile phone 100, or where the gaze area of the user's eyes is not on the screen of the mobile phone 100, the control system 300 may exit the flow of the interaction method.
S304: the image acquisition module 301 acquires an area corresponding to a face in a face image.
For example, the region corresponding to the face here may be referred to as a face region, i.e., a region corresponding to the face of the user in the image. In some embodiments, the face detection frame corresponding to the image acquisition module 301 may also extract a face region from the input image.
S305: the image capturing module 301 determines whether the face is facing the screen of the mobile phone 100.
Illustratively, the image capturing module 301 may determine whether the face of the user faces the screen of the mobile phone 100, and if so, execute step S306 to further acquire the eye region in the face. Otherwise, step S303 is performed, and the control system 300 may exit the flow of the interaction method in the case that the face of the user is not facing the screen of the mobile phone 100.
In some embodiments, the image acquisition module 301 may determine the orientation of the face in the face region obtained in step S304 through a face gaze detection framework. Fig. 3 (c) shows a schematic diagram of the face gaze detection framework used by the image acquisition module 301. The face region is input into the face gaze detection framework shown in fig. 3 (c); the framework may be a convolutional neural network (CNN) and may include a plurality of convolution layers (Conv) and fully connected layers (FC). According to the left eye, the right eye, the whole face, and the grid map of the face in the input face region, the face gaze detection framework finally outputs a gaze classification result through the convolution layers and fully connected layers. The gaze classification result here may be either that the face is oriented toward the screen of the mobile phone 100 or that the face is not oriented toward the screen of the mobile phone 100.
In other embodiments, the face gaze detection framework may also output the gaze classification result directly from the input whole face through the convolution layers and fully connected layers. For example, the face gaze detection framework may be a classification neural network model that simply determines whether or not the user is gazing, and the model may determine the gaze classification result by calculating the face orientation, including: first detecting key points in the face region (positions corresponding to the outline of the whole face in the face region), and then calculating the pitch angle (Pitch), yaw angle (Yaw), roll angle (Roll), and the like corresponding to the key points. For example, the gaze classification result of the face gaze detection framework may be that the face is oriented toward the screen of the mobile phone 100 when the yaw angle and pitch angle are within 30 degrees and 20 degrees, respectively.
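As a small illustration of this orientation check, the sketch below treats the 30-degree yaw and 20-degree pitch values mentioned above as angular limits; that reading, like the function name, is an assumption made for the example.

```python
def face_toward_screen(yaw_deg: float, pitch_deg: float,
                       yaw_limit_deg: float = 30.0,
                       pitch_limit_deg: float = 20.0) -> bool:
    """Classify the face as oriented toward the screen when the estimated yaw and
    pitch stay within the example limits; roll is ignored here even though the
    text above also mentions computing a roll angle."""
    return abs(yaw_deg) <= yaw_limit_deg and abs(pitch_deg) <= pitch_limit_deg
```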
S306: the gaze estimation module 302 obtains the eye region corresponding to the face.
Illustratively, the gaze estimation module 302 may further obtain eye regions corresponding to the face, i.e., regions corresponding to the left and right eyes of the user in the face region.
S307: the gaze estimation module 302 determines a gaze area of the user's eyes on the screen of the handset 100 based on the eye area.
Illustratively, the gaze estimation module 302 may determine a gaze area of the user's eyes on the screen of the cell phone 100 based on the eye area. It is to be appreciated that the gaze area herein may include: an application interface of an application program, a local area of the application interface, a local area in a screen of an electronic device, controls in the screen (e.g., pop-up boxes, hover boxes, buttons, etc.), and so forth.
In some embodiments, the gaze estimation module 302 may determine the gaze area of the user's eyes on the screen of the mobile phone 100 through a gaze estimation framework. Fig. 3 (d) shows a schematic diagram of the gaze estimation framework used by the gaze estimation module 302. The eye region is input into the gaze estimation framework shown in fig. 3 (d); the framework may also be a convolutional neural network (CNN) and may include a plurality of convolution layers, pooling layers, fully connected layers, projective transformation layers (Landmark Affine Txmn), output transformation layers, and so on. The gaze estimation framework may process the key points corresponding to the left eye, the right eye, and the left and right eye corners in the input eye region: the key points of the left and right eyes pass through a plurality of convolution layers and average pooling layers (Avgpool); the key points corresponding to the left and right eye corners, for example (x1, y1), (x2, y2), (x3, y3), and (x4, y4), pass through the projective transformation layer (Landmark Affine Txmn) and fully connected layers; and the results corresponding to the left eye, the right eye, and the eye corners then pass through fully connected layers, the output transformation layer (Output Affine Txmn), and so on, to finally output the gaze regression result (gaze prediction). The gaze regression result includes the gaze area of the user's eyes on the screen of the mobile phone 100, and the gaze area may include: an application interface of the target application program, a local area of the screen of the electronic device, and the like.
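For concreteness, the following PyTorch sketch mirrors the overall shape of this framework (shared convolutional branches for the two eye crops, a fully connected branch for the four eye-corner keypoints, and a fused regression head that outputs a screen coordinate); all layer sizes and channel counts are invented for illustration and are not taken from this application.

```python
import torch
import torch.nn as nn

class GazeEstimatorSketch(nn.Module):
    def __init__(self):
        super().__init__()
        self.eye_branch = nn.Sequential(       # shared branch for the left/right eye crops
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.corner_branch = nn.Sequential(    # (x1, y1) ... (x4, y4) eye-corner keypoints
            nn.Linear(8, 32), nn.ReLU(),
            nn.Linear(32, 32), nn.ReLU())
        self.head = nn.Sequential(             # fused regression to a gaze point on the screen
            nn.Linear(64 + 64 + 32, 64), nn.ReLU(),
            nn.Linear(64, 2))

    def forward(self, left_eye, right_eye, corners):
        feats = torch.cat([self.eye_branch(left_eye),
                           self.eye_branch(right_eye),
                           self.corner_branch(corners)], dim=1)
        return self.head(feats)                # (x, y) gaze regression result
```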
It will be appreciated that the process in which the gaze estimation framework outputs the gaze regression result based on the eye region may include a gaze correction process, because the parameters of individual mobile phones 100 (e.g., screen size) and the characteristics of users differ. The gaze correction process may include: the screen of the mobile phone 100 displays a random point, and images of the user gazing at the random point on the screen are collected within a first preset duration (e.g., 500 ms); within a second preset duration (e.g., 1500 ms), the screen of the mobile phone 100 displays a character and hides the random point; and, in response to the user clicking the left or right side of the mobile phone 100, the gaze correction process ends and waits for a response (i.e., waits for the gaze correction result). During gaze correction, if the gaze estimation module 302 determines that all gaze points fall in one fixed region, that region is considered to have captured a gaze event, that is, that region is the gaze area.
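The "fixed region" check at the end of this correction process can be sketched as follows; representing screen regions as axis-aligned rectangles in pixels is an assumption made for the illustration.

```python
def detect_gaze_event(gaze_points, regions):
    """gaze_points: list of (x, y) gaze estimates; regions: list of (x, y, w, h) rectangles.

    Returns the region that captured a gaze event, i.e. the single region that
    contains every gaze point, or None if the points are scattered or off-screen."""
    def containing_region(pt):
        return next(((x, y, w, h) for (x, y, w, h) in regions
                     if x <= pt[0] < x + w and y <= pt[1] < y + h), None)
    hits = {containing_region(pt) for pt in gaze_points}
    if len(hits) == 1 and None not in hits:
        return hits.pop()
    return None
```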
In some embodiments, the gaze area may also be determined by collecting the eye region of the user in the facial image, identifying the positions of the key points corresponding to the eyes in the eye region (such as the key points corresponding to the pupils and eye corners), and determining the gaze direction of the eyes according to the positions of the eye key points; then establishing a coordinate system of the screen of the electronic device, measuring the relative position of the user's face and the screen based on the user's face in the facial image, calculating the intersection point of the gaze direction and the screen according to the relative position and parameters such as the size and resolution of the screen, and determining the gaze area based on the screen coordinate system and the intersection point.
S308: the gaze estimation module 302 determines whether the gaze area is a screen of the handset 100.
Illustratively, the gaze regression results by the gaze estimation module 302 through the gaze estimation framework may include: the gazing area corresponding to the eye area falls on the screen of the mobile phone 100 or the gazing area corresponding to the eye area is not on the screen of the mobile phone 100. If the gazing area is the screen of the mobile phone 100, step S309 is executed, and the inertia motion identification module 303 may determine the user operation corresponding to the inertia motion in response to the inertia motion performed by the user on the mobile phone 100. Otherwise, the flow of the interaction method may be exited, indicating that the gaze area of the eyes of the user is not the screen of the mobile phone 100.
S309: in response to the inertial motion performed by the user on the cellular phone 100, the inertial motion recognition module 303 determines a control operation corresponding to the inertial motion.
Illustratively, the inertial action recognition module 303 may determine the type of the inertial action from data such as the linear acceleration and angular velocity of the mobile phone 100 obtained after the user performs the inertial action on the mobile phone 100. For example: within a preset detection period (e.g., 200 ms), the collected acceleration, angular velocity, and other data of the mobile phone 100 are input into a classifier (e.g., a support vector machine or a decision tree); the minimum, maximum, mean, range, standard deviation, and other statistics of the data are selected as features; the features are compared with the inertial action thresholds of each type of inertial action in terms of root mean square error (RMS error); and the inertial action type corresponding to the threshold with the smallest error is selected, thereby determining the inertial action performed by the user on the mobile phone 100.
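A minimal sketch of this window-feature classification is given below. The 200 ms window, the statistics used as features, and the smallest-RMS-error decision follow the description above; the template format and all names are assumptions.

```python
import numpy as np

def extract_features(window: np.ndarray) -> np.ndarray:
    """window: (n_samples, n_channels) of acceleration and angular-velocity data."""
    return np.concatenate([window.min(axis=0), window.max(axis=0),
                           window.mean(axis=0), np.ptp(window, axis=0),
                           window.std(axis=0)])

def classify_inertial_action(window: np.ndarray, action_templates: dict) -> str:
    """Pick the preset inertial action whose feature template is closest in RMS error."""
    feats = extract_features(window)
    def rms_error(template) -> float:
        return float(np.sqrt(np.mean((feats - np.asarray(template)) ** 2)))
    return min(action_templates, key=lambda name: rms_error(action_templates[name]))
```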
In some embodiments, as illustrated in fig. 3 (e), the inertial action recognition module 303 may preset 8 inertial actions, including: remaining stationary (holding the mobile phone 100 still), turning forward (tilting the mobile phone 100 toward the user), turning left, turning right, approaching, and moving away. It can be appreciated that the inertial action recognition module 303 may also provide more kinds of inertial actions and is not limited to the inertial actions illustrated above.
In some embodiments, the inertial motion recognition module 303 may further preset a correspondence between inertial motion and control operations, where the correspondence may be determined based on the application interface of the target application program, various controls in the application interface, and various controls of other application programs.
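As an illustration of such a preset correspondence, the sketch below keys the bindings by the foreground application, reusing the example pairings that appear elsewhere in this application (moving closer to open a message, tilting left to delete or go back, tilting right to mark or open a pop-up); the dictionary contents and names are illustrative, not data defined by the patent.

```python
ACTION_BINDINGS = {
    "sms": {                                    # short message application examples
        "approach": "open_selected_message",
        "move_away": "return_to_list_interface",
        "tilt_left": "delete_selected_message",
        "tilt_right": "mark_selected_message",
    },
    "news": {                                   # pop-up box of the mail application
        "tilt_right": "open_mail_application",
    },
    "mail": {                                   # return control in the mail interface
        "tilt_left": "return_to_previous_app",  # mail keeps running in the background
    },
}

def resolve_control_operation(app: str, action: str):
    """Return the preset control operation, or None so the interaction method exits."""
    return ACTION_BINDINGS.get(app, {}).get(action)
```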
S310: the trigger control module 304 performs control operations.
Illustratively, if the mobile phone 100 displays the detailed interface of the short message application and the user performs the left-turn inertial action on the mobile phone 100, the trigger control module 304 controls the screen of the mobile phone 100 to return from the detailed interface of the short message application to its list interface.
Through the control system 300 shown in fig. 3 (a), in different scenarios where the user uses the mobile phone 100, that is, when the user uses different target application programs through the mobile phone 100, different inertial actions can trigger different control operations (trigger different events), which simplifies the user's use of the target application programs.
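To summarize the flow, here is a compact sketch of steps S301 to S310, assuming the four modules expose the methods shown (the method names are placeholders, not interfaces defined by this application); every failed check exits the method, mirroring step S303.

```python
def run_interaction_method(image_acq, gaze_est, inertial_rec, trigger_ctl):
    frame = image_acq.capture()                        # S301: acquire an image
    if not image_acq.contains_face(frame):             # S302: face present?
        return                                         # S303: exit
    face = image_acq.face_region(frame)                # S304: face region
    if not image_acq.face_toward_screen(face):         # S305: face toward the screen?
        return
    eyes = gaze_est.eye_region(face)                   # S306: eye region
    region = gaze_est.gaze_region(eyes)                # S307: gaze area estimate
    if not gaze_est.is_on_screen(region):              # S308: gaze area on the screen?
        return
    action = inertial_rec.detect_action()              # S309: recognize the inertial action
    operation = inertial_rec.match_operation(action, region)
    if operation is None:                              # no matching control operation
        return
    trigger_ctl.execute(operation, region)             # S310: perform the control operation
```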
In some embodiments, the multiple judgment conditions (thresholds) in the control system 300 shown in fig. 3 (a) reduce judgment errors, avoid accidental starting of the control system 300 (for example, a control system based on eye gaze alone is easily triggered by mistake when a user reads an electronic book or news), and reduce the overhead caused by accidental starting. The model-based gaze estimation module 302 and inertial action recognition module 303 have high precision, and the many possible combinations of inertial actions and control operations can support richer trigger events, without requiring the user to perform any touch input on the mobile phone 100 or change the gesture of holding the mobile phone 100.
Fig. 4 is a schematic structural diagram of an electronic device 100 according to an embodiment of the present application. As shown in fig. 4, an electronic device (e.g., a mobile phone) may include: processor 410, external memory interface 420, internal memory 421, universal serial bus (universal serial bus, USB) interface 430, charge management module 440, power management module 441, battery 442, antenna 1, antenna 2, mobile communication module 450, wireless communication module 460, audio module 470, speaker 470A, receiver 470B, microphone 470C, headset interface 470D, sensor module 480, keys 490, motor 491, indicator 492, camera 493, display screen 494, and subscriber identity module (subscriber identification module, SIM) card interface 495, among others.
The sensor module 480 may include a pressure sensor, a gyroscope sensor, a barometric sensor, a magnetic sensor, an acceleration sensor, a distance sensor, a proximity sensor, a fingerprint sensor, a temperature sensor, a touch sensor, an ambient light sensor, and the like. The sensor module 480 may be used to determine inertial actions performed by a user on the electronic device 100.
It is to be understood that the configuration illustrated in this embodiment does not constitute a specific limitation on the electronic apparatus. In other embodiments, the electronic device may include more or fewer components than shown, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 410 may include one or more processing units, such as: the processor 410 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a memory, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural network processor (neural-network processing unit, NPU), etc. The different processing units may be separate devices or may be integrated in one or more processors. A processor may be used to perform the interaction method of the present application.
A memory may also be provided in the processor 410 for storing instructions and data. In some embodiments, the memory in the processor 410 is a cache memory. The memory may hold instructions or data that the processor 410 has just used or recycled. If the processor 410 needs to reuse the instruction or data, it may be called directly from the memory. Repeated accesses are avoided, reducing the latency of the processor 410 and thus improving the efficiency of the system.
In some embodiments, processor 410 may include one or more interfaces. The interfaces may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, among others.
It should be understood that the connection relationship between the modules illustrated in this embodiment is only illustrative, and does not limit the structure of the electronic device. In other embodiments, the electronic device may also use different interfacing manners in the foregoing embodiments, or a combination of multiple interfacing manners.
The electronic device implements display functions through the GPU, the display screen 494, and the application processor, etc. The GPU is a microprocessor for image processing, and is connected to the display screen 494 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 410 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 494 is used to display images, videos, and the like. The display screen 494 includes a display panel.
The electronic device may implement shooting functions through the ISP, the camera 493, the video codec, the GPU, the display screen 494, the application processor, and the like. The ISP is used to process the data fed back by the camera 493. The camera 493 is used to capture still images or video. In some embodiments, the electronic device may include 1 or N cameras 493, N being a positive integer greater than 1. The camera 493 may be used to capture images of the face of the user as referred to in the present application.
External memory interface 420 may be used to interface with an external memory card, such as a Micro SD card, to expand the storage capability of the electronic device. The external memory card communicates with the processor 410 through the external memory interface 420 to implement data storage functions. For example, files such as music and video are stored in the external memory card.
The internal memory 421 may be used to store computer-executable program code, where the code includes instructions. The processor 410 implements various functional applications and data processing of the electronic device by executing the instructions stored in the internal memory 421. For example, in an embodiment of the present application, the internal memory 421 may include a program storage area and a data storage area. The internal memory 421 may also be used to store the face detection framework, the face gaze detection framework, and the like.
The program storage area may store an operating system and the application programs (such as a sound playing function and an image playing function) required by at least one function, and the like. The data storage area may store data created during use of the electronic device (such as audio data and a phonebook), and so on. In addition, the internal memory 421 may include a high-speed random access memory, and may further include a nonvolatile memory such as at least one magnetic disk storage device, a flash memory device, a universal flash storage (UFS), and the like.
It should be understood that the structure illustrated in the embodiments of the present application does not constitute a specific limitation on the electronic device. In other embodiments of the application, the electronic device may include more or less components than illustrated, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Fig. 5 is a software configuration block diagram of the electronic device 100 according to the embodiment of the present application.
The layered architecture divides the software into several layers, each with a distinct role and division of labor. The layers communicate with each other through software interfaces. In some embodiments, the Android system is divided into four layers, from top to bottom: the application layer, the application framework layer, the Android runtime (Android Runtime) and system libraries, and the kernel layer.
As shown in FIG. 5, the application layer may include life, video, reading, shopping, gallery, calendar, conversation, navigation, and music applications. It is understood that the application program herein may be an application or service installed on the electronic device 100 or an application or service not installed on the electronic device 100 retrieved through a quick service center.
The application framework layer may include a layout subsystem, a window management module, a control service, and the like. The layout subsystem is used to determine the position of each display element on the screen of the electronic device. The window management module is used to acquire the attributes of the window corresponding to an application. The control service may be the control system 300 shown in fig. 3 (a). While the user holds and uses a target application program of the electronic device, and it is determined that the user's eyes are gazing at the screen of the electronic device (i.e., the user's gaze area falls on the screen of the electronic device, including the application interface of the target application program, a local area of the screen of the electronic device, and the like), the control service controls the application interface of the target application program or of another application according to the inertial action performed by the user on the electronic device.
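As a purely illustrative sketch (the names ControlServiceSketch, GazeArea, and WindowController below are assumptions and do not appear in the present application), the control service described above could be organized along the following lines: it receives the gaze area reported by the gaze detection pipeline together with the inertial action reported by the motion sensors, asks the window management module which control the gaze area falls on, and dispatches the corresponding control operation.

```java
// Hypothetical sketch of a framework-layer control service; all names are illustrative.
import java.util.Objects;

final class ControlServiceSketch {

    enum InertialAction { APPROACH, MOVE_AWAY, TURN_LEFT, TURN_RIGHT }

    // Gaze area expressed as a rectangle on the screen, in pixels.
    static final class GazeArea {
        final int left, top, right, bottom;
        GazeArea(int left, int top, int right, int bottom) {
            this.left = left; this.top = top; this.right = right; this.bottom = bottom;
        }
    }

    // Assumed interface onto the window management module.
    interface WindowController {
        /** Returns the id of the control whose bounds intersect the gaze area, or null if none. */
        String findControlUnderGaze(GazeArea gazeArea);
        /** Performs the control operation bound to the inertial action on that control. */
        void performOperation(String controlId, InertialAction action);
    }

    private final WindowController windowController;

    ControlServiceSketch(WindowController windowController) {
        this.windowController = Objects.requireNonNull(windowController);
    }

    /** Called when an inertial action is detected while the user's gaze area lies on the screen. */
    void onInertialAction(GazeArea gazeArea, InertialAction action) {
        if (gazeArea == null || action == null) {
            return; // No valid gaze on the screen: the motion is ignored.
        }
        String controlId = windowController.findControlUnderGaze(gazeArea);
        if (controlId != null) {
            windowController.performOperation(controlId, action);
        }
    }
}
```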
The system library may include a drawing service module (SurfaceFlinger), a layout module, a view module, and the like. The drawing service module (SurfaceFlinger) is used to draw and synthesize one or more layers in one or more windows of an application to obtain frame data. The layout module and the view module may be used to manage the operating mode of the application interface in a window.
The kernel layer includes display drivers, event drivers, sensor drivers, and the like.
The following describes the interaction method provided by the embodiment of the present application in detail based on the method flowchart shown in fig. 6. The method shown in fig. 6 may be implemented by a processor of the handset 100 executing associated instructions.
Referring to fig. 6, the interaction method may include the following steps:
s601: and opening the first application program and displaying an application interface of the first application program.
The first application program herein may be, for example, a short message application. As shown in fig. 7 (a), the screen of the mobile phone 100 displays an application interface (i.e., a first application interface) of the short message application 701, where the application interface may be a list interface of the short message application 701. In response to a user operation performed by the user on the short message 702 in the list interface of the short message application 701, such as a long-press operation, the list interface of the short message application 701 displays the selected short message 702.
S602: a face image of the user is acquired and a gaze area of the user's eyes on the screen of the handset 100 is determined from the eye areas in the face image.
Illustratively, the mobile phone 100 may determine, through the control system 300 shown in fig. 3 (a) to 3 (e), that the user's gaze area lies on the screen of the mobile phone 100. Since the mobile phone 100 presents the application interface of the short message application 701 in a full-screen display manner, the gaze area here may include the application interface of the short message application 701.
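A minimal sketch of how the gaze area might be derived is given below. The class name GazeRegionMapper, the coarse region granularity, and the assumption that an upstream eye model supplies a gaze point in screen pixels are all illustrative and are not asserted to be the method of the present application.

```java
// Illustrative only: maps an estimated gaze point to a coarse region of the screen.
final class GazeRegionMapper {

    enum ScreenRegion { UPPER, LOWER, OFF_SCREEN }

    private final int screenWidthPx;
    private final int screenHeightPx;

    GazeRegionMapper(int screenWidthPx, int screenHeightPx) {
        this.screenWidthPx = screenWidthPx;
        this.screenHeightPx = screenHeightPx;
    }

    /**
     * The gaze point (x, y) is assumed to come from an upstream face/eye model;
     * how that model is built is outside the scope of this sketch.
     */
    ScreenRegion mapToRegion(float x, float y) {
        if (x < 0 || y < 0 || x >= screenWidthPx || y >= screenHeightPx) {
            return ScreenRegion.OFF_SCREEN; // The eyes are not directed at the screen.
        }
        if (y < screenHeightPx * 0.25f) {
            return ScreenRegion.UPPER;      // E.g. the area where notification pop-up boxes appear.
        }
        return ScreenRegion.LOWER;
    }
}
```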
S603: in response to the inertial motion performed by the user on the mobile phone 100, a control operation corresponding to the inertial motion is performed on the gaze area.
Illustratively, with continued reference to fig. 7 (a), the user holds the mobile phone 100 and moves it closer to himself or herself; the mobile phone 100 responds to this inertial action performed by the user, where the inertial action here is approach. As shown in fig. 7 (b), the screen of the mobile phone 100 then displays an application interface of the short message application 701, where the application interface may be a detail interface of the short message application 701. The application interface of the short message application 701 includes a return information control 703, a contact control 704, and short message content 705. Next, the user holds the mobile phone 100 and moves it away from himself or herself; the mobile phone 100 responds to this inertial action, where the inertial action here is moving away. As shown in fig. 7 (c), the screen of the mobile phone 100 displays an application interface of the short message application 701, where the application interface may be the list interface of the short message application 701. That is, by responding to the moving-away inertial action, the mobile phone 100 controls the short message application 701 to return from the detail interface to the list interface; in other words, the inertial action implements a clicking operation on the return information control 703.
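The approach and moving-away actions above, together with the left and right page-turn actions described in the following paragraphs, could in principle be distinguished from motion sensor readings as sketched below. The thresholds, axis conventions, and class name are assumptions for illustration only, not values taken from the present application.

```java
// Hypothetical classifier for the four inertial actions; thresholds and sign conventions are illustrative.
final class InertialActionClassifier {

    enum Action { APPROACH, MOVE_AWAY, TURN_LEFT, TURN_RIGHT, NONE }

    private static final float LINEAR_Z_THRESHOLD = 2.0f; // m/s^2 along the screen normal (assumed value)
    private static final float YAW_RATE_THRESHOLD = 1.5f; // rad/s around the vertical axis (assumed value)

    /**
     * @param linearAccZ linear acceleration along the axis pointing out of the screen
     * @param gyroYaw    angular velocity around the device's vertical axis
     */
    Action classify(float linearAccZ, float gyroYaw) {
        if (gyroYaw > YAW_RATE_THRESHOLD) {
            return Action.TURN_LEFT;      // Device rotated as if turning a page to the left.
        }
        if (gyroYaw < -YAW_RATE_THRESHOLD) {
            return Action.TURN_RIGHT;
        }
        if (linearAccZ > LINEAR_Z_THRESHOLD) {
            return Action.APPROACH;       // Screen accelerating toward the user's face.
        }
        if (linearAccZ < -LINEAR_Z_THRESHOLD) {
            return Action.MOVE_AWAY;
        }
        return Action.NONE;               // No recognizable inertial action.
    }
}
```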
In some embodiments, with continued reference to fig. 7 (d), in response to a user operation performed by the user on the short message 702 in the list interface of the short message application 701, such as a long-press operation, the list interface of the short message application 701 displays the selected short message 702. At this time, the user holds the mobile phone 100 and turns it to the left as if turning a page (i.e., turns the mobile phone 100 to the left); the mobile phone 100 responds to this inertial action, where the inertial action here is a left page turn. As shown in fig. 7 (e), the screen of the mobile phone 100 continues to display the list interface of the short message application 701, where the list interface no longer includes the short message 702. That is, by responding to the left page-turn inertial action, the mobile phone 100 deletes the short message 702 from the list interface of the short message application 701.
In some embodiments, with continued reference to fig. 7 (f), the list interface of the short message application 701 displays the selected short message 702. At this time, the user holds the mobile phone 100 and turns it to the right as if turning a page (i.e., turns the mobile phone 100 to the right); the mobile phone 100 responds to this inertial action, where the inertial action here is a right page turn. As shown in fig. 7 (g), the screen of the mobile phone 100 continues to display the list interface of the short message application 701, where the short message 702 in the list interface is given a mark 706. That is, by responding to the right page-turn inertial action, the mobile phone 100 marks the short message 702 in the list interface of the short message application 701.
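Read together, the scenarios of fig. 7 (a) to 7 (g) pair each inertial action with one operation inside the short message application. The sketch below only restates that pairing in code form; the SmsView interface and its method names are hypothetical and are not part of the disclosure.

```java
// Illustrative in-application dispatch for the fig. 7 walkthrough; the view interface is assumed.
final class SmsSceneDispatcher {

    enum Action { APPROACH, MOVE_AWAY, TURN_LEFT, TURN_RIGHT }

    // Minimal view of the short message application needed by this sketch.
    interface SmsView {
        void openDetail();         // fig. 7 (a) -> (b): show the detail interface of the selected short message 702.
        void clickReturnControl(); // fig. 7 (b) -> (c): equivalent to tapping the return information control 703.
        void deleteSelected();     // fig. 7 (d) -> (e): remove the selected short message from the list.
        void markSelected();       // fig. 7 (f) -> (g): add the mark 706 to the selected short message.
    }

    private final SmsView view;

    SmsSceneDispatcher(SmsView view) {
        this.view = view;
    }

    void dispatch(Action action) {
        switch (action) {
            case APPROACH:   view.openDetail();         break;
            case MOVE_AWAY:  view.clickReturnControl(); break;
            case TURN_LEFT:  view.deleteSelected();     break;
            case TURN_RIGHT: view.markSelected();       break;
        }
    }
}
```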
It can be seen that, in the scenarios shown in fig. 7 (a) to 7 (g), the application interfaces of the target application program displayed on the screen of the mobile phone 100 are all interfaces of the short message application 701, namely the list interface and the detail interface, and the control operations corresponding to the inertial actions performed by the user on the mobile phone 100 also act on the application interface of the short message application 701. The following describes, with reference to the schematic diagrams shown in fig. 8 (a) to 8 (g), a scenario in which the user performs an inertial action on the mobile phone 100 and a control operation is performed on a control of an application other than the target application program displayed on the screen of the mobile phone 100.
As shown in fig. 8 (a), the screen of the mobile phone 100 displays an application interface of the news application 801, and the screen also displays a pop-up box 802 of a schedule application and a pop-up box 803 of a mail application; the pop-up box 802 and the pop-up box 803 cover a partial area (the upper area) of the application interface of the news application 801. The pop-up box 802 reminds the user of a schedule, and the pop-up box 803 reminds the user of new mail.
Illustratively, with continued reference to fig. 8 (a), the mobile phone 100 may determine, through the control system 300 shown in fig. 3 (a) to 3 (e), that the user's gaze area is the area of the screen of the mobile phone 100 corresponding to the pop-up box 802 of the schedule application. Although the mobile phone 100 presents the application interface of the news application 801 in a full-screen display manner, the gaze area here may include only a local area of the screen of the mobile phone 100.
At this time, the user turns the mobile phone 100 to the left; the mobile phone 100 responds to this inertial action performed by the user, where the inertial action here is a left turn. As shown in fig. 8 (b), only the pop-up box 803 of the mail application is then displayed on the screen of the mobile phone 100. That is, by responding to the left-turn inertial action, the mobile phone 100 controls the pop-up box 802 of the schedule application to slide away, and the screen of the mobile phone 100 no longer displays the pop-up box 802 of the schedule application.
In some embodiments, as shown in fig. 8 (c), the display content of the screen of the mobile phone 100 in fig. 8 (c) is the same as that in fig. 8 (a): an application interface of the news application 801 is displayed, the pop-up box 802 of the schedule application and the pop-up box 803 of the mail application are also displayed, and the user's gaze area is the area of the screen corresponding to the pop-up box 802 of the schedule application.
At this time, the user turns the mobile phone 100 to the right; the mobile phone 100 responds to this inertial action performed by the user, where the inertial action here is a right turn. As shown in fig. 8 (d), only the pop-up box 803 of the mail application is displayed on the screen of the mobile phone 100 for the time being, and the pop-up box 802 of the schedule application is displayed again later. That is, by responding to the right-turn inertial action, the mobile phone 100 controls the pop-up box 802 of the schedule application to slide away, and the screen of the mobile phone 100 displays the pop-up box 802 of the schedule application again later.
In some embodiments, as shown in fig. 8 (e), the display content of the screen of the mobile phone 100 in fig. 8 (e) is the same as that in fig. 8 (a): an application interface of the news application 801 is displayed, and the pop-up box 802 of the schedule application and the pop-up box 803 of the mail application are also displayed. In this case, however, the user's gaze area is the area of the screen corresponding to the pop-up box 803 of the mail application.
At this time, the user holds the mobile phone 100 and moves it closer to himself or herself; the mobile phone 100 responds to this inertial action performed by the user, where the inertial action here is approach. As shown in fig. 8 (f), the screen of the mobile phone 100 displays an application interface of the mail application 804, where the application interface may be the detail interface corresponding to the mail shown in the pop-up box 803 of the mail application. That is, by responding to the approach inertial action, the mobile phone 100 controls the mail application 804 to open its application interface and display the mail content corresponding to the mail shown in the pop-up box 803 of the mail application.
Next, the user holds the mobile phone 100 and moves it away from himself or herself; the mobile phone 100 responds to this inertial action, where the inertial action here is moving away. As shown in fig. 8 (g), the screen of the mobile phone 100 displays an application interface of the news application 801, where the application interface may be a list interface of the news application 801. That is, by responding to the moving-away inertial action, the mobile phone 100 closes the application interface of the mail application 804 and returns to the application interface of the news application 801.
It can be seen that, in the scenarios shown in fig. 8 (a) to 8 (g), according to the user's gaze area on the screen of the mobile phone 100 and in response to the inertial action performed by the user on the mobile phone 100, the mobile phone 100 may perform a control operation on a control of an application other than the target application program displayed on the screen. This improves the user experience of using the mobile phone 100.
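One conceivable way to reach a control that belongs to an application other than the foreground target application, as in fig. 8, is to hit-test the gaze point against the bounds of every visible window, pop-up boxes included, and pick the topmost match. The sketch below is illustrative only; the WindowInfo structure and its fields are assumptions, not details taken from the present application.

```java
import java.util.List;

// Illustrative hit test of the gaze point against visible windows, pop-up boxes included.
final class GazeWindowHitTester {

    static final class WindowInfo {
        final String owningApp;             // e.g. "schedule", "mail", "news" (illustrative labels)
        final int left, top, right, bottom; // Window bounds on the screen, in pixels.
        final int zOrder;                   // Higher values are drawn on top.
        WindowInfo(String owningApp, int left, int top, int right, int bottom, int zOrder) {
            this.owningApp = owningApp;
            this.left = left; this.top = top; this.right = right; this.bottom = bottom;
            this.zOrder = zOrder;
        }
    }

    /** Returns the topmost window containing the gaze point, so a pop-up box wins over the interface it covers. */
    WindowInfo hitTest(List<WindowInfo> visibleWindows, int gazeX, int gazeY) {
        WindowInfo best = null;
        for (WindowInfo w : visibleWindows) {
            boolean inside = gazeX >= w.left && gazeX < w.right && gazeY >= w.top && gazeY < w.bottom;
            if (inside && (best == null || w.zOrder > best.zOrder)) {
                best = w;
            }
        }
        return best;
    }
}
```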
In some embodiments, in the foregoing illustrations, the correspondence between an inertial action and the control operation performed on a control by the electronic device is not specifically limited and may be configured arbitrarily. For example, in fig. 8 (f), the user may instead perform an inertial action of tilting the electronic device to the right, and the electronic device then displays the detail interface corresponding to the mail shown in the pop-up box 803.
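Because the correspondence is stated to be freely configurable, one plausible (and again purely illustrative) realization is a rebindable mapping from inertial actions to control operations; the default bindings and names below are assumptions rather than the configuration used by the present application.

```java
import java.util.EnumMap;
import java.util.Map;

// Illustrative user-configurable mapping from inertial actions to control operations.
final class ConfigurableActionMap {

    enum Action { APPROACH, MOVE_AWAY, TURN_LEFT, TURN_RIGHT, TILT_RIGHT }
    enum Operation { OPEN, CLOSE, DISMISS, REMIND_LATER }

    private final Map<Action, Operation> mapping = new EnumMap<>(Action.class);

    ConfigurableActionMap() {
        // Defaults loosely following the fig. 8 walkthrough (illustrative only).
        mapping.put(Action.APPROACH, Operation.OPEN);
        mapping.put(Action.MOVE_AWAY, Operation.CLOSE);
        mapping.put(Action.TURN_LEFT, Operation.DISMISS);
        mapping.put(Action.TURN_RIGHT, Operation.REMIND_LATER);
    }

    /** Rebinds an action, e.g. TILT_RIGHT -> OPEN so that tilting right also opens the mail detail interface. */
    void rebind(Action action, Operation operation) {
        mapping.put(action, operation);
    }

    Operation lookup(Action action) {
        return mapping.get(action);
    }
}
```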
It will be understood that, although the terms "first," "second," etc. may be used herein to describe various features, these features should not be limited by these terms. These terms are used merely for distinguishing and are not to be construed as indicating or implying relative importance. For example, a first feature may be referred to as a second feature, and similarly a second feature may be referred to as a first feature, without departing from the scope of the example embodiments.
Furthermore, various operations are described as multiple discrete operations in a manner that is most helpful for understanding the illustrative embodiments; however, the order of description should not be construed to imply that these operations are necessarily order dependent, and many of the operations may be performed in parallel, concurrently, or together with other operations. Furthermore, the order of the operations may also be rearranged. The process may be terminated when the described operations are completed, but it may also include additional operations not shown in the figures. The processes may correspond to methods, functions, procedures, subroutines, and the like.
References in the specification to "one embodiment," "an illustrative embodiment," etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but not every embodiment necessarily includes the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Furthermore, when a particular feature is described in connection with a particular embodiment, it is within the knowledge of one skilled in the art to effect such a feature in connection with other embodiments, whether or not such embodiments are explicitly described.
The terms "comprising," "having," and "including" are synonymous, unless the context dictates otherwise. The phrase "A/B" means "A or B". The phrase "a and/or B" means "(a), (B) or (a and B)".
As used herein, the term "module" may refer to, be part of, or include: a memory (shared, dedicated, or group) for running one or more software or firmware programs, an Application Specific Integrated Circuit (ASIC), an electronic circuit and/or processor (shared, dedicated, or group), a combinational logic circuit, and/or other suitable components that provide the described functionality.
In the drawings, some structural or methodological features may be shown in a particular arrangement and/or order. However, it should be understood that such a particular arrangement and/or order is not required. Rather, in some embodiments, these features may be arranged in a different manner and/or order than shown in the illustrative figures. Additionally, the inclusion of a structural or methodological feature in a particular drawing does not imply that all embodiments need to include such a feature; in some embodiments the feature may not be included, or may be combined with other features.
The embodiments of the present application have been described in detail above with reference to the accompanying drawings, but the use of the technical solution of the present application is not limited to the applications mentioned in the embodiments of the present application, and various structures and modifications can be easily implemented with reference to the technical solution of the present application to achieve the various advantageous effects mentioned herein. Various changes, which may be made by those skilled in the art without departing from the spirit of the application, are deemed to be within the scope of the application as defined by the appended claims.

Claims (11)

1. An interaction method applied to an electronic device, the method comprising:
displaying a first interface, wherein the first interface comprises a first application control;
detecting a first inertial action that is performed by a user on the electronic device and that corresponds to the first application control, and detecting that a gaze area at which the user gazes on a screen of the electronic device corresponds to the first application control;
and executing a trigger function corresponding to the first application control.
2. The method of claim 1, wherein the first interface comprises a first application interface of a first application, and the first application control belongs to the first application.
3. The method of claim 2, wherein the executing the trigger function corresponding to the first application control comprises:
and corresponding to the first inertial action being moving the electronic device closer to the user, displaying first display content corresponding to the first application control.
4. The method of claim 2, wherein the executing the trigger function corresponding to the first application control further comprises:
and corresponding to the first inertial action being turning the electronic device to the left, deleting the first display content corresponding to the first application control from the first application interface.
5. The method of claim 2, wherein the executing the trigger function corresponding to the first application control further comprises:
and corresponding to the first inertial action being turning the electronic device to the right, marking first display content corresponding to the first application control in the first application interface.
6. The method of claim 1, wherein the first interface comprises a first application interface of a first application and the first application control belongs to a second application.
7. The method of claim 6, wherein the executing the trigger function corresponding to the first application control comprises:
and corresponding to the first inertial action being tilting the electronic device to the right, opening a second application interface of the second application.
8. The method of claim 7, wherein the second application interface comprises a second application control; and
The method further comprises the steps of:
detecting that the user tilts the electronic device to the left and that the gaze area at which the user gazes on the screen of the electronic device corresponds to the second application control;
displaying a first application interface of the first application, minimizing the second application interface, and running the second application in the background.
9. The method of claim 1, wherein the detecting that the user gazes at a screen of the electronic device comprises:
collecting a facial image of the user;
acquiring an eye area of the user when it is determined from the face image that the face of the user faces the screen of the electronic device;
and when it is determined that the gaze direction of the eyes in the eye area is toward the screen of the electronic device, determining the gaze area of the user's eyes on the screen of the electronic device.
10. An electronic device, comprising:
a memory for storing instructions for execution by one or more processors of the electronic device, and
a processor, being one of the processors of the electronic device, configured to perform the interaction method of any one of claims 1-9.
11. A computer program product, comprising: a non-transitory computer readable storage medium containing computer program code for performing the interaction method of any of claims 1-9.
CN202311178188.4A 2023-09-13 2023-09-13 Electronic equipment and interaction method thereof Active CN116909439B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311178188.4A CN116909439B (en) 2023-09-13 2023-09-13 Electronic equipment and interaction method thereof

Publications (2)

Publication Number Publication Date
CN116909439A true CN116909439A (en) 2023-10-20
CN116909439B CN116909439B (en) 2024-03-22

Family

ID=88356950

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311178188.4A Active CN116909439B (en) 2023-09-13 2023-09-13 Electronic equipment and interaction method thereof

Country Status (1)

Country Link
CN (1) CN116909439B (en)

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120256967A1 (en) * 2011-04-08 2012-10-11 Baldwin Leo B Gaze-based content display
US20160216761A1 (en) * 2012-01-04 2016-07-28 Tobii Ab System for gaze interaction
CN105843383A (en) * 2016-03-21 2016-08-10 努比亚技术有限公司 Application starting device and application starting method
CN105929932A (en) * 2015-02-27 2016-09-07 联想(新加坡)私人有限公司 Gaze Based Notification Response
CN106850404A (en) * 2017-01-18 2017-06-13 腾讯科技(深圳)有限公司 Information security processing method and system, first terminal and second terminal
CN109933267A (en) * 2018-12-28 2019-06-25 维沃移动通信有限公司 The method and terminal device of controlling terminal equipment
CN110955378A (en) * 2019-11-28 2020-04-03 维沃移动通信有限公司 Control method and electronic equipment
CN111694434A (en) * 2020-06-15 2020-09-22 掌阅科技股份有限公司 Interactive display method of electronic book comment information, electronic equipment and storage medium
CN112114653A (en) * 2019-06-19 2020-12-22 北京小米移动软件有限公司 Terminal device control method, device, equipment and storage medium
CN112527103A (en) * 2020-11-24 2021-03-19 安徽鸿程光电有限公司 Remote control method and device for display equipment, equipment and computer readable storage medium
CN113655927A (en) * 2021-08-24 2021-11-16 亮风台(上海)信息科技有限公司 Interface interaction method and device
CN115543135A (en) * 2021-06-29 2022-12-30 青岛海尔洗衣机有限公司 Control method, device and equipment for display screen

Also Published As

Publication number Publication date
CN116909439B (en) 2024-03-22

Similar Documents

Publication Publication Date Title
CN110045908B (en) Control method and electronic equipment
CN110544272B (en) Face tracking method, device, computer equipment and storage medium
CN111382624B (en) Action recognition method, device, equipment and readable storage medium
CN111669462B (en) Method and related device for displaying image
CN112044065B (en) Virtual resource display method, device, equipment and storage medium
EP4057137A1 (en) Display control method and terminal device
US11386586B2 (en) Method and electronic device for adding virtual item
CN111459363B (en) Information display method, device, equipment and storage medium
US11816924B2 (en) Method for behaviour recognition based on line-of-sight estimation, electronic equipment, and storage medium
CN111880648B (en) Three-dimensional element control method and terminal
CN111437600A (en) Plot showing method, plot showing device, plot showing equipment and storage medium
CN116048244A (en) Gaze point estimation method and related equipment
WO2021254113A1 (en) Control method for three-dimensional interface and terminal
CN112818979B (en) Text recognition method, device, equipment and storage medium
CN110728167A (en) Text detection method and device and computer readable storage medium
CN116909439B (en) Electronic equipment and interaction method thereof
CN110232417B (en) Image recognition method and device, computer equipment and computer readable storage medium
CN113391775A (en) Man-machine interaction method and equipment
CN113469322B (en) Method, device, equipment and storage medium for determining executable program of model
CN115686187A (en) Gesture recognition method and device, electronic equipment and storage medium
CN116661587B (en) Eye movement data processing method and electronic equipment
US11531426B1 (en) Edge anti-false-touch method and apparatus, electronic device and computer-readable storage medium
CN115484393B (en) Abnormality prompting method and electronic equipment
CN112835445B (en) Interaction method, device and system in virtual reality scene
CN117784936A (en) Control method, terminal device, wearable device, communication system and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant