CN113467614A - Display interface selection method and device


Info

Publication number: CN113467614A
Authority: CN (China)
Prior art keywords: target, display interface, image, information, target object
Legal status: Pending
Application number: CN202110736403.2A
Other languages: Chinese (zh)
Inventor: 孔祥晖 (Kong Xianghui)
Current Assignee: Beijing Sensetime Technology Development Co Ltd
Original Assignee: Beijing Sensetime Technology Development Co Ltd
Application filed by Beijing Sensetime Technology Development Co Ltd
Priority application: CN202110736403.2A
Publication: CN113467614A
Related PCT application: PCT/CN2021/134293 (published as WO2023273138A1)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/013 Eye tracking input arrangements
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/04817 Interaction techniques based on graphical user interfaces [GUI] using icons
    • G06F 3/0482 Interaction with lists of selectable items, e.g. menus

Abstract

The disclosure relates to the technical field of electronic devices, and in particular to a display interface selection method and device. The display interface selection method includes: acquiring an image to be processed captured by an image capture device; detecting gaze dwell information of a target object from the image to be processed; determining a target area from a plurality of candidate areas of a display interface of an electronic device according to the gaze dwell information of the target object; and displaying the target area as the selected area on the display interface. Based on the gaze dwell information of the target object, the method enables quick selection on the display interface, simplifying user operation and improving interface selection efficiency.

Description

Display interface selection method and device
Technical Field
The disclosure relates to the technical field of electronic devices, and in particular to a display interface selection method and device.
Background
Large-screen display devices such as televisions and commercial display screens lack a mouse-based operating environment, so interface selection typically relies on a remote control or gesture recognition. Taking a television as an example, when the screen shows multiple selectable icons, the user must press the direction keys of a remote control repeatedly to select a particular icon; the steps are cumbersome and interface selection is inefficient.
Disclosure of Invention
To improve the efficiency of display interface selection, the embodiments of the present disclosure provide a display interface selection method, apparatus, display system, and storage medium.
In a first aspect, an embodiment of the present disclosure provides a display interface selection method, including:
acquiring an image to be processed captured by an image capture device;
detecting gaze dwell information of a target object from the image to be processed;
determining a target area from a plurality of candidate areas of a display interface of an electronic device according to the gaze dwell information of the target object; and
displaying the target area as a selected area on the display interface.
In some embodiments, the acquiring an image to be processed captured by an image capture device includes:
capturing the image to be processed by the image capture device in response to the display interface jumping to a target interface;
and/or
capturing the image to be processed by the image capture device in response to the electronic device switching from a powered-off state to a powered-on state, or from a standby state to an operating state.
In some embodiments, the detecting gaze dwell information of a target object from the image to be processed includes:
detecting a plurality of preset objects in the image to be processed;
determining the target object from the plurality of preset objects; and
detecting the gaze dwell information of the target object from the image to be processed.
In some embodiments, the determining the target object from the plurality of preset objects includes:
detecting the image to be processed to obtain operation part information of each of the plurality of preset objects; and
determining, from the obtained operation part information, the preset object whose operation part information satisfies a preset condition as the target object.
In some embodiments, the detecting gaze dwell information of a target object from the image to be processed includes:
detecting the image to be processed to obtain eye feature information of the target object; and
determining, from the eye feature information, gaze dwell information of the target object's gaze point in the image capture device coordinate system;
and the determining a target area from a plurality of candidate areas of the display interface according to the gaze dwell information of the target object includes:
determining the target area from the gaze dwell information, based on a pre-established mapping between the image capture device coordinate system and the display interface coordinate system.
In some embodiments, the candidate area includes at least one selectable target, and the displaying the target area as a selected area on the display interface includes at least one of:
in response to the current selection box not being located in the target area, moving the selection box to the target area and displaying the selectable target at a preset position in the target area as the selected area; and
in response to the current selection box being located in the target area, displaying the selectable target currently selected by the selection box as the selected area.
In some embodiments, the displaying the selectable target at a preset position in the target area as the selected area includes:
determining, from historical display interface data, the most frequently selected of the at least one selectable target as the selectable target at the preset position, and displaying the selectable target at the preset position as the selected area;
or
obtaining, from the gaze dwell information of the target object, the distance between the target object's gaze point and each of the at least one selectable target, determining the selectable target with the smallest distance as the selectable target at the preset position, and displaying the selectable target at the preset position as the selected area.
In some embodiments, after the displaying the target area as a selected area on the display interface, the method further includes:
moving the selection box on the display interface in response to a received operation signal.
In some embodiments, the displaying the target area as a selected area on the display interface includes:
determining, from the gaze dwell information of the target object, the dwell duration of the target object's gaze point in the target area; and
displaying the target area as the selected area on the display interface in response to the dwell duration being not less than a preset duration threshold.
In a second aspect, an embodiment of the present disclosure provides a display interface selection apparatus, including:
an acquisition module configured to acquire an image to be processed captured by an image capture device;
a detection module configured to detect gaze dwell information of a target object from the image to be processed;
an area determination module configured to determine a target area from a plurality of candidate areas of a display interface of an electronic device according to the gaze dwell information of the target object; and
a selection module configured to display the target area as a selected area on the display interface.
In some embodiments, the acquisition module is specifically configured to:
capture the image to be processed by the image capture device in response to the display interface jumping to a target interface;
and/or
capture the image to be processed by the image capture device in response to the electronic device switching from a powered-off state to a powered-on state, or from a standby state to an operating state.
In some embodiments, the detection module is specifically configured to:
detect a plurality of preset objects in the image to be processed;
determine the target object from the plurality of preset objects; and
detect the gaze dwell information of the target object from the image to be processed.
In some embodiments, the detection module is specifically configured to:
detect the image to be processed to obtain operation part information of each of the plurality of preset objects; and
determine, from the obtained operation part information, the preset object whose operation part information satisfies a preset condition as the target object.
In some embodiments, the detection module is specifically configured to:
detect the image to be processed to obtain eye feature information of the target object; and
determine, from the eye feature information, gaze dwell information of the target object's gaze point in the image capture device coordinate system;
and the determining a target area from a plurality of candidate areas of the display interface according to the gaze dwell information of the target object includes:
determining the target area from the gaze dwell information, based on a pre-established mapping between the image capture device coordinate system and the display interface coordinate system.
In some embodiments, the candidate area includes at least one selectable target, and the selection module is specifically configured to perform at least one of:
in response to the current selection box not being located in the target area, moving the selection box to the target area and displaying the selectable target at a preset position in the target area as the selected area; and
in response to the current selection box being located in the target area, displaying the selectable target currently selected by the selection box as the selected area.
In some embodiments, the selection module is specifically configured to:
determine, from the gaze dwell information of the target object, the dwell duration of the target object's gaze point in the target area; and
display the target area as the selected area on the display interface in response to the dwell duration being not less than a preset duration threshold.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including:
a display having a display interface;
an image capture device;
a processor; and
a memory storing computer instructions for causing the processor to perform the method according to any embodiment of the first aspect.
In a fourth aspect, an embodiment of the present disclosure provides a storage medium storing computer instructions for causing a computer to execute the method according to any embodiment of the first aspect.
The display interface selection method of the embodiments of the present disclosure acquires an image to be processed captured by an image capture device, detects gaze dwell information of a target object in the image to be processed, determines a target area from a plurality of candidate areas of a display interface of an electronic device according to that gaze dwell information, and displays the target area as the selected area on the display interface. In the embodiments of the present disclosure, quick selection on the display interface is achieved based on the gaze dwell information of the target object, simplifying user operation and improving interface selection efficiency.
Drawings
To illustrate the embodiments of the present disclosure or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present disclosure; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic diagram of a display interface selection method in the prior art.
Fig. 2 is a schematic block diagram of an electronic device according to some embodiments of the present disclosure.
Fig. 3 is a flow chart of a display interface selection method according to some embodiments of the present disclosure.
Fig. 4 is a schematic diagram of a display interface selection method according to some embodiments of the present disclosure.
Fig. 5 is a flow chart of a display interface selection method according to some embodiments of the present disclosure.
Fig. 6 is a flow chart of a display interface selection method according to some embodiments of the present disclosure.
Fig. 7 is a flow chart of a display interface selection method according to some embodiments of the present disclosure.
Fig. 8 is a flow chart of a display interface selection method according to some embodiments of the present disclosure.
Fig. 9 is a schematic diagram of a display interface selection method according to some embodiments of the present disclosure.
Fig. 10 is a schematic diagram of a display interface selection method according to some embodiments of the present disclosure.
Fig. 11 is a block diagram of a display interface selection apparatus according to some embodiments of the present disclosure.
Fig. 12 is a block diagram of an electronic device according to some embodiments of the present disclosure.
Detailed Description
The technical solutions of the present disclosure are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present disclosure. All other embodiments obtained by those of ordinary skill in the art based on the disclosed embodiments without creative effort fall within the protection scope of the present disclosure. In addition, the technical features in the different embodiments described below may be combined with one another as long as they do not conflict.
For large-screen display devices such as televisions and commercial display screens, interface selection is usually performed with a remote control. Taking a television as an example, fig. 1 shows a schematic diagram of prior-art human-computer interaction with a television.
As shown in fig. 1, a plurality of icons 11 are displayed on the screen of a television 10, and a user 20 needs to select a desired icon from among them. In the prior-art interaction mode, the user 20 presses the direction keys on a remote controller 30 to move a selection box 12 and thereby select an icon. The operation steps are cumbersome: for example, to move the selection box 12 from icon 11-A to icon 11-B, the user must press the direction keys many times to complete the selection of icon 11-B, which makes the operation tedious and degrades the user experience.
In view of the above defects in the prior art, the embodiments of the present disclosure provide a display interface selection method, apparatus, display system, and storage medium, aiming to enable quick selection on a display interface and improve the efficiency of interface selection.
In a first aspect, the embodiments of the present disclosure provide a display interface selection method, which can be applied to any electronic device with a display interface, such as a television, a display screen, a tablet computer, or a handheld mobile terminal; the present disclosure is not limited in this respect. The method of the embodiments of the present disclosure may be executed by a processor of the electronic device.
Fig. 2 shows an electronic device suitable for implementing the method of the present disclosure; an application scenario of the method is described below with reference to fig. 2.
As shown in fig. 2, in some embodiments of the present disclosure the electronic device is exemplified by a television 100, which has a display screen 110 on which a display interface is output for viewing by the user.
The television 100 of the embodiments of the present disclosure further includes an image capture device 120 configured to capture image information in front of the television 100. In one example, the image capture device 120 may be a camera, such as one or a combination of an RGB camera, an infrared camera, and a ToF (Time of Flight) camera.
It is noted that in some embodiments the image capture device 120 may be disposed above the television 100; for example, as shown in fig. 2, it may be telescopically mounted at the top center of the television 100. In other embodiments, the image capture device 120 may be separate from the television 100, as long as it can capture images of the scene in front of the television 100. The present disclosure is not limited in this respect.
Building on the foregoing, fig. 3 illustrates some embodiments of the display interface selection method of the present disclosure, described in detail below with reference to fig. 3.
As shown in fig. 3, in some embodiments the display interface selection method of the present disclosure includes:
S310, acquiring the image to be processed captured by the image capture device.
Specifically, the image capture device 120 captures an image of the scene in front of the television 100, i.e., the image to be processed.
In some embodiments, considering that the user has no interface selection need while simply watching a program, the image capture device 120 need not capture images continuously; it may start capturing only when the user needs to make an interface selection.
For example, in one example, the image to be processed is captured by the image capture device 120 in response to the display interface jumping to a target interface.
The target interface may be, for example, a program selection interface. As shown in fig. 4, when the display jumps from normal program viewing to the program (channel) selection interface of fig. 4, it can be determined that the display interface has jumped to the target interface and that the user needs to make an interface selection; the image to be processed can then be captured by the image capture device 120.
Of course, the target interface is not limited to the program selection interface of fig. 4; any other display interface involving interface selection can serve as the target interface, and the present disclosure is not limited in this respect.
For example, in another example, the image to be processed is captured by the image capture device in response to the electronic device switching from a powered-off state to a powered-on state, or from a standby state to an operating state.
Still taking the television of fig. 2 as an example, when the television 100 switches from off to on (it has just started up) or from standby to operating (it has just been woken up), the user is likely to need an interface selection (for example, changing the channel), so the image to be processed can be captured by the image capture device 120.
The image to be processed is an image of the scene in front of the television 100 and contains at least one preset object within the capture range of the image capture device 120, such as a user watching the television.
S320, detecting gaze dwell information of the target object from the image to be processed.
Specifically, after the image to be processed is acquired, the gaze dwell information of the target object can be detected from it by gaze tracking. In some embodiments, the gaze dwell information may include the coordinates of the target object's gaze point in the image capture device coordinate system, the gaze dwell time, the gaze movement trend, and so on; the present disclosure is not limited in this respect.
The target object is an object determined from the at least one preset object in the image to be processed. For example, the scene in fig. 2 includes three users watching the television: user 201, user 202, and user 203. The image to be processed captured by the image capture device 120 therefore contains three preset objects, i.e., the three users. The target object is the target user determined from among user 201, user 202, and user 203 by image detection.
For example, in one example, the user 202 holding the remote control 300 may be determined to be the target object by image detection. This is explained in detail in later embodiments and not elaborated here.
After the target object is determined, its gaze dwell information can be detected from the image to be processed by gaze tracking. The gaze dwell information represents the position of the target object's gaze point, i.e., the current gaze direction of the target object.
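For illustration only, the gaze dwell information described above could be carried in a small structure such as the following Python sketch; the field names (gaze_point, dwell_time, movement_trend) are assumptions of this sketch, not terms defined by the disclosure.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class GazeDwellInfo:
    """Gaze dwell information of the target object (illustrative fields).

    gaze_point: (x, y) coordinates of the gaze point in the image capture
        device coordinate system.
    dwell_time: how long, in seconds, the gaze has rested near this point.
    movement_trend: (dx, dy) vector describing the recent gaze movement.
    """
    gaze_point: Tuple[float, float]
    dwell_time: float
    movement_trend: Tuple[float, float]
```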
S330, determining a target area from a plurality of candidate areas of the display interface of the electronic device according to the gaze dwell information of the target object.
Specifically, the gaze dwell information represents the current gaze direction of the target object, i.e., the direction of the target area the user wishes to select, so the target area can be determined from the candidate areas on the display interface based on the gaze dwell information.
For example, as shown in fig. 4, suppose the initial position of the selection box 112 is "CCTV-1" and the position indicated by the gaze dwell information of the target object (user 202) is that of "CCTV-6". The television 100 then determines "CCTV-6" as the target area from the six candidate areas "CCTV-1" through "CCTV-6".
The specific process of determining the target area from the gaze dwell information is described in later embodiments of the present disclosure and not elaborated here.
S340, displaying the target area as the selected area on the display interface.
Specifically, after the target area is determined in S330, the current selection box can be moved to the target area, making the target area the selected area.
Still taking the scenario of fig. 4 as an example: with the selection box 112 initially at "CCTV-1" and the target area determined from the gaze dwell information of user 202 being "CCTV-6", the television 100 moves the selection box 112 to the target area "CCTV-6".
It can be seen that in the embodiments of the present disclosure, the selection box moves according to the user's gaze dwell information, with no remote control required. In the scenario of fig. 4, the prior art would require the user to press the right key twice and the down key once on the remote control to move the selection box 112 from "CCTV-1" to "CCTV-6". In the embodiments of the present disclosure, the selection box 112 moves quickly without any key press, which greatly simplifies the selection operation.
Thus, in the embodiments of the present disclosure, quick selection on the display interface is achieved based on the gaze dwell information of the target object, simplifying user operation and improving interface selection efficiency.
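As a minimal sketch of the S310 to S340 flow, assuming the detection models and coordinate mapping are supplied as callables (none of the names below are defined by the disclosure):

```python
from dataclasses import dataclass
from typing import Callable, List, Optional, Tuple

Point = Tuple[float, float]

@dataclass
class Region:
    """A candidate area, with bounds in display interface coordinates."""
    name: str
    x0: float
    y0: float
    x1: float
    y1: float

    def contains(self, x: float, y: float) -> bool:
        return self.x0 <= x <= self.x1 and self.y0 <= y <= self.y1

def select_region(
    capture: Callable[[], object],                     # S310: image to be processed
    detect_gaze: Callable[[object], Optional[Point]],  # S320: gaze point, camera coords
    to_screen: Callable[[Point], Point],               # S330: coordinate mapping
    regions: List[Region],
) -> Optional[Region]:
    """One pass of the method: capture, detect gaze, map, pick the target area."""
    frame = capture()
    gaze = detect_gaze(frame)
    if gaze is None:                                   # no target object in the frame
        return None
    x, y = to_screen(gaze)
    # S330: the candidate area containing the mapped gaze point is the target area.
    target = next((r for r in regions if r.contains(x, y)), None)
    # S340 (display as selected) would then move the selection box to `target`.
    return target
```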
In a real scene, several people often watch television together; for example, as shown in fig. 2, user 201, user 202, and user 203 watch the television 100 together. In this case, the image to be processed contains three preset objects, and because each user's line of sight may point in a different direction, the detected gaze dwell information also differs. To reduce the interference from multiple users, in a multi-user scene a primary user can be determined from among the users, and that user's gaze dwell information used as the basis for interface selection. This is described in detail below with reference to fig. 5.
As shown in fig. 5, in some embodiments of the display interface selection method of the present disclosure, the detecting gaze dwell information of the target object from the image to be processed includes:
S510, detecting a plurality of preset objects in the image to be processed.
Specifically, taking the scene shown in fig. 2 as an example, the preset object is a human body; after the image to be processed is acquired, all human body regions, such as user 201, user 202, and user 203, can be detected from it by human body detection.
S520, determining the target object from the plurality of preset objects.
Specifically, the target object is the primary object among the plurality of preset objects. For example, as shown in fig. 2, the preset objects include user 201, user 202, and user 203; in some embodiments, the user 202 holding the remote control 300 may be determined to be the target object.
For example, as shown in fig. 6, in some embodiments the process of determining the target object may include:
S521, detecting the image to be processed to obtain operation part information of each of the plurality of preset objects.
S522, determining, from the obtained operation part information, the preset object whose operation part information satisfies a preset condition as the target object.
First, human body detection can be performed on the image to be processed with a human body detection model, yielding each preset object in the image, i.e., user 201, user 202, and user 203 in fig. 2. Next, the operation part region of each preset object is detected with an operation part detection model, determining the operation part information of each preset object.
It is understood that the operation part is the body part with which a person operates the remote control. For example, in one example the remote control is handheld, so the operation part corresponds to the hand region of the human body. In another example the remote control is head-mounted, so the operation part corresponds to the head region. The operation part may also be any other body part suitable for implementation, which the present disclosure does not enumerate.
In the example of fig. 2, the remote control 300 is a handheld remote control. In a real scene, the user holding the remote control is usually the primary user, so in the embodiments of the present disclosure the hand region of each preset object can be detected by image detection to obtain the hand information of each user.
After the hand information of each user is determined, it can be further analyzed by image detection to judge whether the user is holding an object and whether that object is a remote control. If a user is holding the remote control, that user is determined to be the primary user; that is, the preset object corresponding to that hand information is the target object.
With continued reference to fig. 2, the remote control 300 is held by user 202; therefore, when the operation part information of user 202 is detected to satisfy the preset condition, user 202 is determined to be the target object.
It should be noted that the embodiments of the present disclosure are not limited to the above examples; in other embodiments, any other suitable implementation may be used to determine the target object. For example, a preset object that passes face recognition may be determined to be the target object. Those skilled in the art can understand and fully implement this, and the present disclosure does not elaborate.
S530, detecting the gaze dwell information of the target object from the image to be processed.
After the target object is determined, its gaze dwell information can be detected from the image to be processed with a gaze tracking algorithm, as explained in later embodiments.
Thus, in the embodiments of the present disclosure, determining as the target object the preset object whose operation part information satisfies the preset condition avoids the interference of multi-user scenes and improves the accuracy of display interface selection.
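A sketch of this primary-user selection, assuming a person detector and a hand-crop classifier are available as callables (the disclosure does not prescribe particular models, and the dict keys below are illustrative):

```python
from typing import Callable, List, Optional

def find_target_object(
    frame,
    detect_people: Callable[[object], List[dict]],  # human body detection model
    holds_remote: Callable[[object], bool],         # classifier on the hand crop
) -> Optional[dict]:
    """S510/S520: among the detected preset objects (people), return the one
    whose operation part information satisfies the preset condition, i.e.
    whose hand region is judged to be holding the remote control."""
    for person in detect_people(frame):
        hand_crop = person.get("hand_crop")         # operation part (hand) region
        if hand_crop is not None and holds_remote(hand_crop):
            return person                           # the primary (target) object
    return None                                     # nobody is holding the remote
```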
As shown in fig. 7, in some embodiments the display interface selection method of the present disclosure includes:
S710, detecting the image to be processed to obtain eye feature information of the target object.
Specifically, the eye feature information of the target object can be extracted from the image to be processed by a pre-trained eye detection network.
S720, determining, from the eye feature information, gaze dwell information of the target object's gaze point in the image capture device coordinate system.
Specifically, by analyzing the extracted eye feature information, the coordinates of the target object's gaze point can be obtained.
It is understood that these are the coordinates of the target object's gaze point in the coordinate system of the image capture device 120.
S730, determining the target area from the gaze dwell information, based on a pre-established mapping between the image capture device coordinate system and the display interface coordinate system.
First, as shown in fig. 2, a mapping between the image capture device coordinate system and the display interface coordinate system, representing the registration of image coordinates to display interface coordinates, can be constructed in advance from the positional relationship of the image capture device 120 and the display screen 110.
In one example, the mapping between the two coordinate systems may be derived from an equal-ratio mapping based on the distance between the user and the image capture device 120. Those skilled in the art can implement this based on the related art, and details are omitted here.
The gaze dwell information obtained in S720 represents the coordinates of the gaze point in the image capture device coordinate system, so the target area on the display interface corresponding to the gaze dwell information can be obtained from those coordinates and the pre-constructed mapping.
Taking fig. 4 as an example, the gaze dwell information is determined from the eye feature information of user 202 and mapped onto the display interface, yielding "CCTV-6"; "CCTV-6" is therefore the target area that user 202 wishes to select.
Thus, in the embodiments of the present disclosure, quick selection on the display interface is achieved based on the gaze dwell information of the target object, simplifying user operation and improving interface selection efficiency.
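Under the equal-ratio assumption mentioned above, the mapping can be as simple as a per-axis scale and offset. A sketch follows; the constants are placeholders that calibration would supply, and Region is the class from the earlier sketch.

```python
from typing import List, Optional, Tuple

Point = Tuple[float, float]

def camera_to_screen(gaze_point: Point,
                     scale: Point = (1.6, 1.6),
                     offset: Point = (-120.0, -80.0)) -> Point:
    """Map a gaze point from image capture device coordinates to display
    interface coordinates by a per-axis scale plus offset."""
    (gx, gy), (sx, sy), (ox, oy) = gaze_point, scale, offset
    return (gx * sx + ox, gy * sy + oy)

def locate_target_area(gaze_point: Point, regions: List) -> Optional[object]:
    """S730: find the candidate area containing the mapped gaze point."""
    x, y = camera_to_screen(gaze_point)
    return next((r for r in regions if r.contains(x, y)), None)
```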
In some embodiments, before the gaze dwell information of the target object is detected from the image to be processed, a calibration procedure for gaze tracking is performed. Specifically, in one example, the user can send an instruction via the remote control to put the television into the calibration procedure.
During calibration, several calibration points are output at preset positions on the television's display interface, guiding the user's gaze to rest on each calibration point. Meanwhile, the television's image capture device captures images of the user; the positional deviation between the user's real-time gaze point and the calibration point is detected from those images, and the mapping between the image capture device coordinate system and the display interface coordinate system is calibrated based on that deviation.
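One way such a calibration pass could update the mapping, assuming the scale-and-offset form of the earlier sketch: fit the constants by least squares from (detected gaze point, calibration point) pairs. This fitting choice is an assumption of the sketch, not something the disclosure specifies.

```python
import numpy as np

def calibrate(samples):
    """Fit per-axis scale and offset for the camera-to-screen mapping.

    samples: list of ((gx, gy), (sx, sy)) pairs, where (gx, gy) is the
    detected gaze point in camera coordinates while the user looks at a
    calibration point displayed at (sx, sy) in screen coordinates.
    Returns (scale, offset) minimizing the squared position deviation.
    """
    cam = np.array([g for g, _ in samples], dtype=float)  # shape (n, 2)
    scr = np.array([s for _, s in samples], dtype=float)  # shape (n, 2)
    scale, offset = [], []
    for axis in (0, 1):
        # Solve scr = a * cam + b per axis by least squares.
        A = np.stack([cam[:, axis], np.ones(len(samples))], axis=1)
        a, b = np.linalg.lstsq(A, scr[:, axis], rcond=None)[0]
        scale.append(a)
        offset.append(b)
    return tuple(scale), tuple(offset)
```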
In some embodiments, it is considered that the user's gaze may shift frequently, interfering with interface selection. Therefore, in some embodiments of the present disclosure, whether to select the target area can be decided based on how long the target object's gaze point dwells in the target area. This is described in detail below with reference to fig. 8.
As shown in fig. 8, in some embodiments the display interface selection method of the present disclosure includes:
S810, determining, from the gaze dwell information of the target object, the dwell duration of the target object's gaze point in the target area.
S820, displaying the target area as the selected area on the display interface in response to the dwell duration being not less than a preset duration threshold.
Specifically, after the target area is determined from the gaze dwell information of the target object, the selection box is not moved there immediately; instead, the duration for which the target object's gaze point stays in the target area is determined. A short dwell suggests the user is merely glancing over the area and does not want to select it; conversely, a longer dwell indicates that the user wishes to select it.
The preset duration threshold is the value used to decide that the user wishes to select the target area. When the dwell duration of the target object's gaze point in the target area is not less than the threshold, the user wishes to select the target area, and the selection box can be moved there. When the dwell duration is less than the threshold, the user's eyes are only sweeping over the target area, and the selection box need not move, avoiding misjudgment.
The preset duration threshold may be obtained from prior knowledge or a limited number of experiments, and can be set by those skilled in the art for the specific scenario; the present disclosure is not limited in this respect. For example, in the example of fig. 4, the threshold may be set to 1 second: when the dwell duration of the gaze point of user 202 in the "CCTV-6" area exceeds 1 second, the selection box 112 is quickly moved from "CCTV-1" to "CCTV-6".
Thus, in the embodiments of the present disclosure, selecting the target area according to the dwell duration of the target object's gaze point avoids misjudged interface selections and improves selection accuracy.
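The dwell test of S810/S820 amounts to a debounce on the gaze region: commit the selection only once the gaze has stayed in the same candidate area for the threshold duration. A sketch, using the 1-second value from the fig. 4 example as an assumed default:

```python
import time

class DwellSelector:
    """Commit a target area only after the gaze point has stayed in it for
    at least `threshold` seconds (S810/S820)."""

    def __init__(self, threshold: float = 1.0):
        self.threshold = threshold
        self.current = None        # area the gaze point is currently inside
        self.entered_at = 0.0      # when the gaze point entered that area
        self.committed = False     # whether this dwell has already fired

    def update(self, area, now: float = None):
        """Feed the area under the gaze once per frame; returns the area the
        first time its dwell duration reaches the threshold, else None."""
        now = time.monotonic() if now is None else now
        if area != self.current:                    # gaze moved: restart the timer
            self.current, self.entered_at, self.committed = area, now, False
            return None
        if (area is not None and not self.committed
                and now - self.entered_at >= self.threshold):
            self.committed = True                   # dwell long enough: select once
            return area
        return None
```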
It should be noted that in some scenarios, in order to display more information, each candidate area on a large-screen device often contains multiple selectable targets, such as icons or menu bars.
For example, as shown in fig. 9, the display interface includes candidate areas A1, A2, A3, B1, B2, and B3, i.e., 2 × 3 = 6 candidate areas, and each candidate area contains 3 × 3 = 9 selectable targets, for a total of 6 × 9 = 54 selectable targets on the display interface. In this example the selectable targets are icons, and they are referred to as icons below.
It can be understood that with so many icons on the display interface, the interference with the line of sight of user 202 is considerable. To keep the interface selection method reliable, some embodiments of the present disclosure first determine a target area from the user's gaze dwell information and, after the target area is determined, further determine the selectable target within it. That is, based on the gaze dwell information, the selectable target the user wishes to select is determined in a "coarse search + fine search" manner, reducing the interference of the many selectable targets with the user's line of sight and improving the reliability of the interface selection method. This is detailed below.
In some embodiments, a target area can first be determined from the plurality of candidate areas by the method of any preceding embodiment. After the target area is determined, there are two cases:
Case 1), the current selection box is not located in the target area. In this case, the selection box must be moved to some icon in the target area.
Case 2), the current selection box is located in the target area. In this case, the icon currently selected by the selection box can be kept unchanged.
The two cases are described below.
In some embodiments, in response to the current selection box not being located in the target area, the selection box is moved to the target area, and the selectable target at a preset position in the target area is displayed as the selected area.
In the example of fig. 9, using the method of any preceding embodiment, the target area determined from the gaze dwell information of user 202 is B3, while the current selection box 112 is on icon 15 in candidate area A1.
The selection box 112 is thus not located in the target area B3 and must be moved from candidate area A1 to target area B3. However, target area B3 contains nine icons, 31 through 39, so it must be decided which icon in B3 the selection box 112 moves to. The present disclosure gives the following implementations:
example 1, according to the historical display interface data, the selectable object which is selected most frequently in the at least one selectable object is determined as the selectable object at the preset position.
Specifically, history display interface data including the number of times each icon was selected in a past period of time may be acquired, and the greater the number of times an icon was selected, the higher the probability that the user desires to select the icon is.
Therefore, the icon with the largest number of times of historical selection in the icons 31-39 in the target area B3 is used as the selectable target of the preset position by analyzing the historical display interface data, and the selectable target represents the icon which is most likely to be selected by the user.
For example, in fig. 9, it is assumed that the icon 35 is the icon with the largest number of times of historical selection in the target area B3, so that the selection box 112 can be quickly moved from the icon 15 in the candidate area a1 to the icon 35 in the target area B3, and quick selection of the icon is realized.
Example 2, the distance between the gaze point of the target object and each selectable target is obtained according to the gaze dwell information of the target object, and the selectable target corresponding to the minimum distance is determined as the selectable target at the preset position.
Specifically, it is still possible to confirm the position information in which the gaze point coordinates of the target object are mapped to the display interface coordinate system based on the previously obtained gaze point information of the target object, and to confirm the distance between each icon in the target area B3 and the gaze point based on the position information, where a smaller distance indicates a higher possibility that the user desires to select the icon.
For example, in the example of fig. 9, the icon 35 is the smallest distance from the gazing point of the user 202, and the icon 35 may be confirmed as the icon that the user desires to select, so that the selection box 112 may be quickly moved from the icon 15 of the candidate area a1 to the icon 35 of the target area B3, thereby achieving quick selection of the icon.
Example 3, an icon at a default position in the target area is identified as a selectable icon at a preset position.
The default position is the default position of the moving selection frame when each area to be selected is taken as the target area. The default location may be a target area center location, such as icon 35 in target area B3; or may be the upper left corner position of the target area, such as icon 31 in target area B3. Of course, any other suitable location for implementation is also possible, and the disclosure is not limited thereto.
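The three example strategies for choosing the preset-position icon can be summarized in one function. This sketch assumes each icon carries an id and a screen-centre (cx, cy), and that a history mapping of icon id to selection count is available; all of these names are illustrative, not terms from the disclosure.

```python
import math

def preset_position_icon(icons, gaze_xy=None, history=None, strategy="default"):
    """Pick the icon in the target area to land the selection box on.

    "history": most frequently selected icon (Example 1).
    "nearest": icon closest to the mapped gaze point (Example 2).
    "default": icon at the area's default position (Example 3); here the
        centre icon of the area's icon list.
    """
    if strategy == "history" and history:
        return max(icons, key=lambda i: history.get(i.id, 0))
    if strategy == "nearest" and gaze_xy is not None:
        gx, gy = gaze_xy
        return min(icons, key=lambda i: math.hypot(i.cx - gx, i.cy - gy))
    return icons[len(icons) // 2]    # e.g., the centre icon of a 3x3 grid
```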
In some embodiments, in response to the current selection box being located in the target area, the selectable target currently selected by the selection box is displayed as the selected area.
Specifically, after the target area is determined by any preceding embodiment, if the current selection box is already located in the target area, the selection box is not moved.
It can be understood that once the coarse search has determined the target area, if the selection box is already within it, the selection box position is not changed based on the default position or historical data, avoiding redundant selection box movements.
For example, in fig. 10, suppose the determined target area is B3 and the current selection box 112 is on icon 31, i.e., the selection box 112 is located in target area B3. In this case, whether or not the default position is icon 31, the selection box 112 is not moved; it keeps icon 31 selected, reducing redundant movement.
Thus, in the embodiments of the present disclosure, a target area is first determined from the candidate areas and the selection box is then moved to a preset position within it; this "coarse search + fine search" approach moves the selection box quickly and improves interface selection efficiency.
In some embodiments, considering that the selectable target determined from the user's gaze dwell information may deviate from the user's intent, after the selection box has been moved, the method of the embodiments of the present disclosure may further include:
moving the selection box on the display interface according to a received user operation signal.
Specifically, the user operation signal is a signal for moving the selection box that the user sends to the television via a remote control or a mobile terminal. After the selection box has been moved by the method of any preceding embodiment, its position may deviate from the intended target, and the user can manually fine-tune it with the remote control.
For example, as shown in fig. 10, after the selection box 112 has been moved to icon 31 by the above method, the icon the user actually wants is icon 35; the user can then manually move the selection box 112 from icon 31 to icon 35 with the direction keys of the remote control.
It should be noted that in the display interface selection method of the embodiments of the present disclosure, icons are selected based on the user's gaze dwell information, and the user only needs to fine-tune the position of the selection box when it deviates.
In some embodiments, the selectable targets of the embodiments of the present disclosure are not limited to the icons described above; they may be any other objects the user can select, such as a menu bar or a text list; the present disclosure is not limited in this respect.
In some embodiments, the electronic device of the embodiments of the present disclosure is not limited to the television described above; it may be any other device suitable for implementation, such as a tablet computer; the present disclosure is not limited in this respect.
In some embodiments, the layout of the candidate areas and selectable targets is not limited to that described above; any other layout suitable for implementation may be used; the present disclosure is not limited in this respect.
Thus, in the embodiments of the present disclosure, quick selection on the display interface is achieved based on the gaze dwell information of the target object, simplifying user operation and improving interface selection efficiency. Moreover, determining as the target object the preset object whose operation part information satisfies the preset condition avoids the interference of multi-user scenes and improves the accuracy of display interface selection.
In a second aspect, the embodiments of the present disclosure provide a display interface selection apparatus, which can be applied to any electronic device with a display interface, such as a television, a display screen, a tablet computer, or a handheld mobile terminal; the present disclosure is not limited in this respect.
As shown in fig. 11, in some embodiments, the display interface selection apparatus of the present disclosure includes:
an acquisition module 1110 configured to acquire an image to be processed captured by an image capture device;
a detection module 1120 configured to detect gaze dwell information of a target object from the image to be processed;
an area determination module 1130 configured to determine a target area from a plurality of candidate areas of a display interface of the electronic device according to the gaze dwell information of the target object; and
a selection module 1140 configured to display the target area as a selected area on the display interface.
Thus, in the embodiments of the present disclosure, quick selection on the display interface is achieved based on the gaze dwell information of the target object, simplifying user operation and improving interface selection efficiency.
In some embodiments, the acquisition module is specifically configured to:
capture the image to be processed by the image capture device in response to the display interface jumping to a target interface;
and/or
capture the image to be processed by the image capture device in response to the electronic device switching from a powered-off state to a powered-on state, or from a standby state to an operating state.
In some embodiments, the detection module is specifically configured to:
detect a plurality of preset objects in the image to be processed;
determine the target object from the plurality of preset objects; and
detect the gaze dwell information of the target object from the image to be processed.
In some embodiments, the detection module is specifically configured to:
detect the image to be processed to obtain operation part information of each of the plurality of preset objects; and
determine, from the obtained operation part information, the preset object whose operation part information satisfies a preset condition as the target object.
In some embodiments, the detection module is specifically configured to:
detect the image to be processed to obtain eye feature information of the target object; and
determine, from the eye feature information, gaze dwell information of the target object's gaze point in the image capture device coordinate system;
and the determining a target area from a plurality of candidate areas of the display interface according to the gaze dwell information of the target object includes:
determining the target area from the gaze dwell information, based on a pre-established mapping between the image capture device coordinate system and the display interface coordinate system.
In some embodiments, the candidate area includes at least one selectable target, and the selection module is specifically configured to perform at least one of:
in response to the current selection box not being located in the target area, moving the selection box to the target area and displaying the selectable target at a preset position in the target area as the selected area; and
in response to the current selection box being located in the target area, displaying the selectable target currently selected by the selection box as the selected area.
In some embodiments, the selection module is specifically configured to:
determine, from the gaze dwell information of the target object, the dwell duration of the target object's gaze point in the target area; and
display the target area as the selected area on the display interface in response to the dwell duration being not less than a preset duration threshold.
Thus, in the embodiments of the present disclosure, quick selection on the display interface is achieved based on the gaze dwell information of the target object, simplifying user operation and improving interface selection efficiency. Moreover, determining as the target object the preset object whose operation part information satisfies the preset condition avoids the interference of multi-user scenes and improves the accuracy of display interface selection.
In a third aspect, the embodiments of the present disclosure provide an electronic device, which may be any electronic device with a display interface, such as a television, a display screen, a tablet computer, a handheld mobile terminal, and the like, and the present disclosure is not limited thereto.
In some embodiments, an electronic device of an example of the present disclosure includes:
a display having a display interface;
an image acquisition device;
a processor; and
a memory storing computer instructions for causing the processor to perform the method according to any one of the embodiments of the first aspect.
In a fourth aspect, embodiments of the present disclosure provide a storage medium storing computer instructions for causing a computer to execute the method according to any one of the embodiments of the first aspect.
Specifically, fig. 12 is a schematic structural diagram of a system suitable for implementing the method of the present disclosure; the functions of the processor and the storage medium described above can be implemented by the system shown in fig. 12.
As shown in fig. 12, the system 600 includes a processor 601 that can perform various appropriate actions and processes according to a program stored in a memory 602 or loaded from a storage section 608 into the memory 602. The memory 602 also stores the various programs and data required for the operation of the system 600. The processor 601 and the memory 602 are connected to each other via a bus 604, to which an input/output (I/O) interface 605 is also connected.
The following components are connected to the I/O interface 605: an input section 606 including a keyboard, a mouse, and the like; an output section 607 including a display, such as a cathode ray tube (CRT) or a liquid crystal display (LCD), and a speaker; a storage section 608 including a hard disk and the like; and a communication section 609 including a network interface card such as a LAN card or a modem. The communication section 609 performs communication processing via a network such as the Internet. A drive 610 is also connected to the I/O interface 605 as needed. A removable medium 611, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 610 as needed, so that a computer program read from it can be installed into the storage section 608 as needed.
In particular, according to embodiments of the present disclosure, the method processes described above may be implemented as a computer software program. For example, embodiments of the present disclosure include a computer program product comprising a computer program tangibly embodied on a machine-readable medium, the computer program containing program code for performing the method described above. In such embodiments, the computer program may be downloaded and installed from a network through the communication section 609 and/or installed from the removable medium 611.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
It should be understood that the above embodiments are only examples given to clearly illustrate the present disclosure and are not intended to limit it. Other variations and modifications will be apparent to persons skilled in the art in light of the above description; it is neither necessary nor possible to enumerate all embodiments exhaustively here. Obvious variations or modifications may nevertheless be made without departing from the scope of the present disclosure.

Claims (12)

1. A display interface selection method, comprising:
acquiring an image to be processed acquired by an image acquisition device;
detecting gaze dwell information of a target object from the image to be processed;
determining a target area from a plurality of candidate areas of a display interface of an electronic device according to the gaze dwell information of the target object;
and displaying the target area as a selected area through the display interface.
2. The method according to claim 1, wherein the acquiring an image to be processed acquired by an image acquisition device comprises:
acquiring the image to be processed through the image acquisition device in response to the display interface jumping to a target interface;
and/or,
acquiring the image to be processed through the image acquisition device in response to the electronic device being switched from a power-off state to a power-on state, or from a standby state to an operating state.
3. The method according to claim 1 or 2, wherein the detecting gaze dwell information of a target object from the image to be processed comprises:
detecting a plurality of preset objects from the image to be processed;
determining the target object from the plurality of preset objects;
and detecting gaze dwell information of the target object from the image to be processed.
4. The method according to claim 3, wherein the determining the target object from the plurality of preset objects comprises:
detecting the image to be processed to obtain operation part information of each of the plurality of preset objects;
and, according to the obtained operation part information, determining the preset object whose operation part information meets a preset condition as the target object.
5. The method according to any one of claims 1 to 4, wherein the detecting gaze dwell information of a target object from the image to be processed comprises:
detecting the image to be processed to obtain eye feature information of the target object;
and determining, according to the eye feature information, gaze dwell information of the fixation point of the target object in the coordinate system of the image acquisition device;
and wherein the determining a target area from a plurality of candidate areas of the display interface according to the gaze dwell information of the target object comprises:
determining the target area according to the gaze dwell information, based on a pre-established mapping relationship between the image acquisition device coordinate system and the display interface coordinate system.
6. The method according to any one of claims 1 to 5, wherein the candidate area comprises at least one selectable target, and the displaying the target area as a selected area through the display interface comprises at least one of the following:
in response to the current selection box not being located in the target area, moving the selection box to the target area and displaying the selectable target at a preset position in the target area as a selected area;
in response to the current selection box being located in the target area, displaying the selectable target currently selected by the selection box as a selected area.
7. The method according to claim 6, wherein the displaying the selectable target at a preset position in the target area as a selected area comprises:
determining, according to historical display interface data, the selectable target selected the greatest number of times among the at least one selectable target as the selectable target at the preset position, and displaying the selectable target at the preset position as a selected area;
or,
obtaining, according to the gaze dwell information of the target object, the distance between the fixation point of the target object and each of the at least one selectable target, determining the selectable target corresponding to the minimum distance as the selectable target at the preset position, and displaying the selectable target at the preset position as a selected area.
8. The method according to claim 6 or 7, wherein after the displaying the target area as a selected area through the display interface, the method further comprises:
moving the selection box on the display interface in response to a received operation signal.
9. The method according to any one of claims 1 to 8, wherein the displaying the target area as a selected area through the display interface comprises:
determining, according to the gaze dwell information of the target object, the dwell duration of the fixation point of the target object in the target area;
and, in response to the dwell duration being not less than a preset duration threshold, displaying the target area as a selected area through the display interface.
10. A display interface selection apparatus, comprising:
an acquisition module configured to acquire an image to be processed acquired by an image acquisition device;
a detection module configured to detect gaze dwell information of a target object from the image to be processed;
an area determination module configured to determine a target area from a plurality of candidate areas of a display interface of an electronic device according to the gaze dwell information of the target object;
and a selection module configured to display the target area as a selected area through the display interface.
11. An electronic device, comprising:
a display having a display interface;
an image acquisition device;
a processor; and
a memory storing computer instructions for causing the processor to perform the method according to any one of claims 1 to 9.
12. A storage medium having stored thereon computer instructions for causing a computer to perform the method of any one of claims 1 to 9.
CN202110736403.2A 2021-06-30 2021-06-30 Display interface selection method and device Pending CN113467614A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110736403.2A CN113467614A (en) 2021-06-30 2021-06-30 Display interface selection method and device
PCT/CN2021/134293 WO2023273138A1 (en) 2021-06-30 2021-11-30 Display interface selection method and apparatus, device, storage medium, and program product

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110736403.2A CN113467614A (en) 2021-06-30 2021-06-30 Display interface selection method and device

Publications (1)

Publication Number Publication Date
CN113467614A 2021-10-01

Family

ID=77876558

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110736403.2A Pending CN113467614A (en) 2021-06-30 2021-06-30 Display interface selection method and device

Country Status (2)

Country Link
CN (1) CN113467614A (en)
WO (1) WO2023273138A1 (en)

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113467614A (en) * 2021-06-30 2021-10-01 北京市商汤科技开发有限公司 Display interface selection method and device

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104822005A (en) * 2014-01-30 2015-08-05 京瓷办公信息系统株式会社 Electronic device and operation picture display method
US20170330343A1 (en) * 2016-05-10 2017-11-16 Fujitsu Limited Sight line identification apparatus and sight line identification method
CN108897589A (en) * 2018-05-31 2018-11-27 刘国华 Show man-machine interaction method, device, computer equipment and storage medium in equipment
CN111680503A (en) * 2020-06-08 2020-09-18 腾讯科技(深圳)有限公司 Text processing method, device and equipment and computer readable storage medium
CN111881763A (en) * 2020-06-30 2020-11-03 北京小米移动软件有限公司 Method and device for determining user gaze position, storage medium and electronic equipment

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023273138A1 (en) * 2021-06-30 2023-01-05 北京市商汤科技开发有限公司 Display interface selection method and apparatus, device, storage medium, and program product
CN116382549A (en) * 2023-05-22 2023-07-04 昆山嘉提信息科技有限公司 Image processing method and device based on visual feedback
CN116382549B (en) * 2023-05-22 2023-09-01 昆山嘉提信息科技有限公司 Image processing method and device based on visual feedback

Also Published As

Publication number Publication date
WO2023273138A1 (en) 2023-01-05

Similar Documents

Publication Publication Date Title
US20170102776A1 (en) Information processing apparatus, method and program
US8112719B2 (en) Method for controlling gesture-based remote control system
US10180718B2 (en) Information processing apparatus and information processing method
US9704028B2 (en) Image processing apparatus and program
WO2023273138A1 (en) Display interface selection method and apparatus, device, storage medium, and program product
US20100141578A1 (en) Image display control apparatus, image display apparatus, remote controller, and image display system
EP2908215B1 (en) Method and apparatus for gesture detection and display control
EP1466238A2 (en) Method and apparatus for a gesture-based user interface
US9961394B2 (en) Display apparatus, controlling method thereof, and display system
US20160246366A1 (en) Control method and control apparatus for electronic equipment and electronic equipment
KR20150117820A (en) Method For Displaying Image and An Electronic Device Thereof
EP4240000A1 (en) Photographing processing method and apparatus, electronic device, and readable storage medium
EP3617851B1 (en) Information processing device, information processing method, and recording medium
CN111596760A (en) Operation control method and device, electronic equipment and readable storage medium
CN112954209B (en) Photographing method and device, electronic equipment and medium
US20230384868A1 (en) Display apparatus
CN112860212A (en) Volume adjusting method and display device
EP2256590A1 (en) Method for controlling gesture-based remote control system
US20160291804A1 (en) Display control method and display control device
JP5229928B1 (en) Gaze position specifying device and gaze position specifying program
CN112835506B (en) Display device and control method thereof
US20150103150A1 (en) Information processing method and electronic device
CN116235501A (en) Eye gaze based media display device control
CN112817557A (en) Volume adjusting method based on multi-person gesture recognition and display device
EP3247122A1 (en) Image processing terminal and method for controlling an external device using the same

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code
Ref country code: HK
Ref legal event code: DE
Ref document number: 40053441
Country of ref document: HK