WO2023273138A1 - Display interface selection method, apparatus, device, storage medium and program product - Google Patents

Display interface selection method, apparatus, device, storage medium and program product

Info

Publication number
WO2023273138A1
Authority
WO
WIPO (PCT)
Prior art keywords
target
display interface
area
image
target object
Prior art date
Application number
PCT/CN2021/134293
Other languages
English (en)
French (fr)
Inventor
孔祥晖
Original Assignee
北京市商汤科技开发有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京市商汤科技开发有限公司 filed Critical 北京市商汤科技开发有限公司
Publication of WO2023273138A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/013: Eye tracking input arrangements
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481: Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/04817: Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance, using icons
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481: Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/0482: Interaction with lists of selectable items, e.g. menus

Definitions

  • the present disclosure relates to the technical field of electronic devices, and in particular to a display interface selection method, apparatus, device, storage medium, and program product.
  • embodiments of the present disclosure provide a display interface selection method, device, equipment, storage medium, and program product.
  • an embodiment of the present disclosure provides a method for selecting a display interface, including:
  • the target area is displayed as a selected area through the display interface.
  • the acquisition of the image to be processed collected by the image acquisition device includes:
  • the trigger event includes at least one of the following: the display interface jumps to the target interface, the electronic device switches from a power-off state to a power-on state, and the electronic device switches from a standby state to a running state.
  • the detecting of the gaze dwell information of the target object from the image to be processed includes:
  • the gaze dwell information of the target object is detected.
  • the determining the target object from the plurality of preset objects includes:
  • according to the operation part information, a preset object whose operation part information satisfies a preset condition is determined as the target object.
  • the detecting of the gaze dwell information of the target object from the image to be processed includes:
  • according to the eye feature information, the gaze dwell information of the gaze point of the target object in the coordinate system of the image acquisition device is determined.
  • the determining the target area from multiple candidate areas of the display interface according to the gaze dwell information of the target object includes:
  • the target area is determined according to the gaze dwell information.
  • the area to be selected includes at least one selectable target, and displaying the target area as a selected area through the display interface includes:
  • the selectable target currently selected by the selection box is displayed as a selected area.
  • the displaying the selectable target at a preset position in the target area as a selected area includes:
  • the selectable target selected the most times among the at least one selectable target is confirmed as the selectable target at the preset position, and the selectable target at the preset position is displayed as a selected area.
  • the displaying the selectable target at a preset position in the target area as a selected area includes:
  • the method further includes:
  • the selection box is moved on the display interface in response to the received operation signal.
  • the displaying the target area as a selected area through the display interface includes:
  • according to the gaze dwell information of the target object, the dwell time of the gaze point of the target object in the target area is determined;
  • the target area is displayed as a selected area through the display interface.
  • an embodiment of the present disclosure provides a device for selecting a display interface, including:
  • the acquisition part is configured to acquire the image to be processed collected by the image acquisition device
  • the detection part is configured to detect the gaze dwell information of the target object from the image to be processed;
  • the area determination part is configured to determine the target area from a plurality of candidate areas on the display interface of the electronic device according to the gaze dwell information of the target object;
  • the selected part is configured to display the target area as a selected area through the display interface.
  • the acquisition part is further configured to:
  • the trigger event includes at least one of the following: the display interface jumps to the target interface, the electronic device switches from a power-off state to a power-on state, and the electronic device switches from a standby state to a running state.
  • the detection part is further configured to:
  • the gaze dwell information of the target object is detected.
  • the detection part is further configured to:
  • the preset object corresponding to the operation part information satisfying the preset condition is the target object.
  • the detection part is further configured to:
  • according to the eye feature information, the gaze dwell information of the gaze point of the target object in the coordinate system of the image acquisition device is determined.
  • the determining the target area from multiple candidate areas of the display interface according to the gaze dwell information of the target object includes:
  • the target area is determined according to the gaze dwell information.
  • the area to be selected includes at least one selectable target, and the selected part is further configured as at least one of the following:
  • the selectable target currently selected by the selection box is displayed as a selected area.
  • the selected part is further configured to:
  • according to the gaze dwell information of the target object, determine the dwell time of the gaze point of the target object in the target area;
  • the target area is displayed as a selected area through the display interface.
  • an electronic device including:
  • the display has a display interface
  • the memory stores computer instructions, and when the computer instructions are executed by the processor, the method for selecting a display interface according to any one of the implementation manners of the first aspect is implemented.
  • the embodiments of the present disclosure provide a storage medium storing computer instructions, and when the computer instructions are executed by a processor, the display interface selection method described in any embodiment of the first aspect is implemented.
  • an embodiment of the present disclosure provides a computer program product, including computer readable code; when the computer readable code is run in an electronic device, a processor in the electronic device executes the display interface selection method described in any one of the implementation manners of the first aspect.
  • the method for selecting a display interface includes: acquiring an image to be processed collected by an image acquisition device; detecting the gaze dwell information of the target object from the image to be processed; determining a target area from multiple candidate areas on the display interface of the electronic device according to the gaze dwell information of the target object; and displaying the target area as a selected area through the display interface.
  • quick selection of the display interface can be realized, user operations can be simplified, and interface selection efficiency can be improved.
  • FIG. 1 is a schematic diagram of the principle of a display interface selection method in the prior art.
  • Fig. 2 is a schematic structural diagram of an electronic device according to some implementations of the present disclosure.
  • Fig. 3 is a flowchart of a method for selecting a display interface according to some implementations of the present disclosure.
  • Fig. 4 is a schematic diagram of a method for selecting a display interface according to some implementations of the present disclosure.
  • Fig. 5 is a flow chart of a method for selecting a display interface according to some implementations of the present disclosure.
  • Fig. 6 is a flow chart of a method for selecting a display interface according to some implementations of the present disclosure.
  • Fig. 7 is a flowchart of a method for selecting a display interface according to some implementations of the present disclosure.
  • Fig. 8 is a flowchart of a method for selecting a display interface according to some implementations of the present disclosure.
  • Fig. 9 is a schematic diagram of a method for selecting a display interface according to some implementations of the present disclosure.
  • Fig. 10 is a schematic diagram of a method for selecting a display interface according to some implementations of the present disclosure.
  • Fig. 11 is a structural block diagram of a display interface selection device according to some implementations of the present disclosure.
  • Fig. 12 is a structural block diagram of an electronic device according to some embodiments of the present disclosure.
  • FIG. 1 shows a schematic diagram of human-computer interaction of a TV in the prior art.
  • a plurality of icons 11 are displayed on the display screen of the television set 10 , and the user 20 needs to select the desired icon from the plurality of icons 11 .
  • the interaction mode in the prior art is: the user 20 operates the direction keys on the remote control 30 to control the movement of the selection box 12 to realize icon selection.
  • the operation steps of this method are very cumbersome. For example, if the user wants to move the selection box 12 from the icon 11-A to the icon 11-B, the user needs to press the direction keys of the remote control several times to complete the selection of the icon 11-B. The operation process is very complicated, which affects the user experience.
  • the embodiments of the present disclosure provide a display interface selection method, apparatus, device, storage medium, and program product, aiming at realizing quick selection on display interfaces and improving the efficiency of interface selection.
  • the embodiment of the present disclosure provides a method for selecting a display interface, which is applicable to any electronic device with a display interface, such as a TV, a display screen, a tablet computer, or a handheld mobile terminal; the present disclosure does not limit this.
  • the method in the embodiments of the present disclosure may be processed by a processor of an electronic device.
  • Fig. 2 shows some implementations of an electronic device suitable for implementing the method of the present disclosure.
  • the application scenario of the method of the present disclosure will be described below with reference to the example in Fig. 2 .
  • the electronic device takes a TV 100 as an example.
  • the TV 100 has a display screen 110 for outputting a display interface for a user to watch.
  • the television set 100 in the embodiment of the present disclosure further includes an image acquisition device 120 for acquiring image information in front of the television set 100 .
  • the image acquisition device 120 may be a camera, such as one of, or a combination of, an RGB camera, an infrared camera, and a ToF (Time of Flight) camera.
  • the image capture device 120 may be disposed on the TV 100, for example, as shown in FIG. 2.
  • the image acquisition device 120 can also be set separately from the television 100, as long as it is ensured that the image acquisition device can acquire images in front of the television 100. This disclosure does not limit this.
  • FIG. 3 shows some implementations of the display interface selection method of the present disclosure, which will be described below with reference to FIG. 3 .
  • the display interface selection method of the example of the present disclosure includes:
  • the image acquisition device 120 may acquire images in front of the television 100 , that is, images to be processed.
  • the image acquisition device 120 does not need to acquire images to be processed all the time; it can start to collect images only when the user has an interface selection requirement.
  • a trigger event may be received, and in response to the trigger event, the image to be processed is collected by the image collection device.
  • the trigger event includes at least one of the following: the display interface jumps to the target interface, the electronic device switches from a power-off state to a power-on state, and the electronic device switches from a standby state to a running state.
  • the image to be processed is collected by the image collection device 120 .
  • the target interface can be, for example, a TV program selection interface. As shown in Figure 4, when the user jumps from the normal TV viewing interface to the program selection interface (channel selection interface) shown in Figure 4, it can be confirmed that the display interface has jumped to the target interface; the user needs to make an interface selection, and at this time the image to be processed can be collected by the image collection device 120.
  • the target interface is not limited to the program selection interface shown in FIG. 4, and any other display interface with interface selection can be used as the target interface, which is not limited in the present disclosure.
  • the image to be processed is collected by the image acquisition device.
  • the user has a demand for interface selection (such as changing a TV station), so that the image to be processed can be collected by the image collection device 120 .
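The trigger logic above can be sketched as follows. This is a minimal illustration, not part of the disclosure: the event names and the helper function are assumptions chosen to mirror the three trigger events listed in the embodiments.

```python
# Hypothetical sketch of trigger-event-gated image capture.
# Event names and the helper are illustrative assumptions.

TRIGGER_EVENTS = {
    "jump_to_target_interface",  # display jumps to the program selection interface
    "power_off_to_power_on",     # device switches from power-off to power-on state
    "standby_to_running",        # device switches from standby to running state
}

def should_start_capture(event: str) -> bool:
    """Start collecting images to be processed only on a trigger event,
    so the image acquisition device is not running all the time."""
    return event in TRIGGER_EVENTS

print(should_start_capture("jump_to_target_interface"))  # True
print(should_start_capture("volume_changed"))            # False
```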
  • the image to be processed refers to the image in front of the TV set 100 , and the image to be processed includes at least one preset object, such as a user watching TV and other objects within the shooting range of the image acquisition device 120 .
  • the gaze dwell information of the target object may be detected from the image to be processed by using gaze tracking technology.
  • the gaze dwell information may include coordinate information of the gaze point of the target object in the coordinate system of the image acquisition device, gaze dwell duration, gaze movement trend, and other information, which is not limited in the present disclosure.
  • the target object refers to an object determined from at least one preset object of the image to be processed. For example, in the scene shown in FIG. 2 , there are 3 users watching TV: user 201 , user 202 and user 203 . Therefore, the images to be processed collected by the image collection device 120 include three preset objects, that is, three users.
  • the target object refers to the target user determined from the user 201 , the user 202 and the user 203 based on the image detection technology.
  • the user 202 holding the remote controller 300 may be determined as the target object based on the image detection technology.
  • the present disclosure will be described in the following embodiments, and will not be described in detail here.
  • the gaze dwell information of the target object can be detected from the image to be processed.
  • the gaze dwell information represents the location information of the gaze point of the target object, that is, the current gaze direction of the target object.
  • S330: Determine the target area from multiple candidate areas on the display interface of the electronic device according to the gaze dwell information of the target object.
  • the gaze dwell information indicates the current gaze direction of the target object, that is, the direction in which the user expects to select a target area, so that the target area the user desires to select can be determined from the multiple candidate areas on the display interface based on the gaze dwell information.
  • the position indicated by the gaze dwell information of the target object is the position where "CCTV-6" is located, so that the television set 100 determines "CCTV-6" as the target area from among the six candidate areas "CCTV-1" to "CCTV-6".
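This step can be sketched as a simple hit test, assuming candidate areas are axis-aligned rectangles in display coordinates. The grid layout, rectangle values, and function name below are illustrative assumptions mirroring the "CCTV-1" to "CCTV-6" example:

```python
from typing import Dict, Optional, Tuple

# Each candidate area is an axis-aligned rectangle: (x0, y0, x1, y1).
Rect = Tuple[int, int, int, int]

def find_target_area(gaze_xy: Tuple[int, int],
                     candidates: Dict[str, Rect]) -> Optional[str]:
    """Return the name of the candidate area containing the gaze point, if any."""
    gx, gy = gaze_xy
    for name, (x0, y0, x1, y1) in candidates.items():
        if x0 <= gx < x1 and y0 <= gy < y1:
            return name
    return None

# Illustrative 3x2 channel grid on a 1920x1080 display.
areas = {
    f"CCTV-{i + 1}": (640 * (i % 3), 540 * (i // 3),
                      640 * (i % 3) + 640, 540 * (i // 3) + 540)
    for i in range(6)
}
print(find_target_area((1500, 800), areas))  # CCTV-6 (bottom-right cell)
```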
  • the selection box can be moved to the target area, so that the target area can be used as the selected area.
  • the TV 100 can move the selection box 112 to the target area "CCTV-6".
  • when selecting the target area, the user can move the selection box according to the user's gaze dwell information without using a remote controller.
  • otherwise, the user needs to press the right button at least twice and the down button once to move the selection box 112 from "CCTV-1" to "CCTV-6".
  • with the gaze-based method, the user can realize rapid movement of the selection box 112 without any pressing operation, which greatly simplifies the selection operation.
  • the image to be processed includes three preset objects. Since each user's gaze direction may be different, the detected gaze dwell information is also different.
  • the master controller can be determined from multiple users, and the interface selection is based on the gaze dwell information of the master controller. The following description will be made in conjunction with FIG. 5.
  • the detecting of the gaze dwell information of the target object from the image to be processed includes:
  • the preset object is a human body.
  • all human body regions can be detected from the image to be processed based on the human body detection technology, such as user 201, user 202 and user 203.
  • the target object is a master object among multiple preset objects.
  • the preset objects include a user 201 , a user 202 and a user 203 .
  • the user 202 holding the remote controller 300 may be determined as the target object.
  • the process of determining the target object may include:
  • the human body detection model can be used to perform human body detection on the image to be processed, and each preset object in the image to be processed can be obtained, that is, the user 201, the user 202, and the user 203 described in FIG. 2 .
  • the operation part area of each preset object can be detected by using the operation part detection model, and the operation part information of each preset object can be determined.
  • the operation part refers to the part where the human body operates the remote controller.
  • the remote controller is a hand-held remote controller, so that the operating part can correspond to the hand area of the human body.
  • the remote controller is a head-mounted remote controller, so that the operating part can correspond to the head area of the human body.
  • the operating part may also be any other body part suitable for implementation, which will not be enumerated in this disclosure.
  • the remote controller 300 takes a handheld remote controller as an example.
  • the user holding the remote controller is usually the main control user. Therefore, in the embodiments of the present disclosure, the hand area of each preset object is detected based on image detection, and the hand information of each user is obtained.
  • each piece of hand information can be analyzed based on image detection technology to determine whether the user is holding an object and whether that object is a remote control. If the user holds the remote control, it is determined that the user holding the remote control is the master user, that is, the preset object corresponding to that hand information is the target object.
  • the remote control 300 is held by the user 202 , and it is detected that the operation part information of the user 202 satisfies a preset condition, and the user 202 can be determined as the target object.
  • the embodiments of the present disclosure are not limited to the above examples, and in other implementation manners, any other suitable implementation manners may also be used to determine the target object.
  • for example, a preset object identified through face recognition may be determined as the target object. This can be understood and fully implemented by those skilled in the art.
  • the gaze dwell information of the target object can be detected from the image to be processed based on a gaze tracking algorithm.
  • the target object corresponding to the operation part information satisfying the preset condition is determined from multiple preset objects, thereby avoiding the interference of multi-user scenarios and improving the accuracy of display interface selection.
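The master-object selection described above can be sketched as follows. Plain dictionaries stand in for the outputs of the human-body and hand detection models, which the disclosure does not specify; the function and field names are assumptions:

```python
from typing import List, Optional

def pick_target_object(detections: List[dict]) -> Optional[dict]:
    """Pick the master object: the preset object whose operation-part
    (hand) information satisfies the preset condition of holding the
    remote controller. Detection dicts are an illustrative stand-in
    for real human-body / hand detection model outputs."""
    for person in detections:
        if person.get("holds_remote"):
            return person
    return None  # no master user found in the image to be processed

# Mirrors the FIG. 2 scene: three viewers, user 202 holds the remote.
viewers = [
    {"id": 201, "holds_remote": False},
    {"id": 202, "holds_remote": True},
    {"id": 203, "holds_remote": False},
]
print(pick_target_object(viewers)["id"])  # 202
```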
  • the method for selecting a display interface in the example of the present disclosure includes:
  • the eye feature information of the target object can be extracted from the image to be processed through a pre-trained eye detection network.
  • the coordinate information of the gaze point of the target object can be obtained by performing feature analysis on the extracted eye feature information.
  • the coordinate information of the gaze point of the target object refers to the coordinate information of the gaze point of the target object in the coordinate system of the image acquisition device 120.
  • a mapping relationship between the image acquisition device coordinate system and the display interface coordinate system can be constructed in advance.
  • the mapping relationship represents the registration relationship from image coordinates to display interface coordinates.
  • the mapping relationship between the coordinate system of the image capture device and the coordinate system of the display interface may be obtained based on the distance-equivalence mapping between the user and the image capture device 120.
  • those skilled in the art can understand and fully implement this based on related technologies.
  • the gaze dwell information of the target object represents the coordinate information of the gaze point in the coordinate system of the image acquisition device, so that, based on the coordinate information and the pre-built mapping relationship, the target area on the display interface corresponding to the gaze dwell information can be obtained.
  • the gaze dwell information is determined according to the eye feature information of the target object 202 and mapped to the display interface; the resulting target area is "CCTV-6", so "CCTV-6" is taken as the target area that the target object 202 desires to select.
  • a calibration process for gaze tracking is also included.
  • the user can send an instruction through the remote control to control the TV to enter the calibration procedure.
  • the image acquisition device of the TV can collect user images, obtain the position deviation between the user's real-time gaze point and the calibration point based on user image detection, and then calibrate the mapping relationship between the image acquisition device coordinate system and the display interface coordinate system based on the position deviation.
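A sketch of the camera-to-display mapping plus the calibration offset. A simple affine (scale + offset) relationship is an assumption for illustration; the actual registration relationship is not spelled out in the disclosure and may be more complex:

```python
from typing import Tuple

def camera_to_display(gaze_cam: Tuple[float, float],
                      scale: Tuple[float, float],
                      offset: Tuple[float, float]) -> Tuple[float, float]:
    """Map a gaze point from image-acquisition-device coordinates to
    display-interface coordinates. The affine scale + offset form is an
    illustrative assumption for the pre-built mapping relationship;
    `offset` can absorb the position deviation measured between the
    real-time gaze point and the calibration point."""
    (cx, cy), (sx, sy), (ox, oy) = gaze_cam, scale, offset
    return (cx * sx + ox, cy * sy + oy)

# Example: a 640x480 camera frame mapped onto a 1920x1080 display,
# with a calibration deviation of (+10, -5) display pixels.
print(camera_to_display((320.0, 240.0), (3.0, 2.25), (10.0, -5.0)))
# (970.0, 535.0)
```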
  • whether to select the target area may be determined according to the duration of the gaze point of the target object staying in the target area. The following description will be made in conjunction with FIG. 8 .
  • the display interface selection method of the example of the present disclosure includes:
  • after the target area is determined according to the gaze dwell information of the target object, instead of immediately moving the selection box to the target area, it is first determined how long the gaze point of the target object stays in the target area. If the stay time is short, the user may just be glancing at the area and does not expect to select it; conversely, if the stay time is long, the user expects to select the area.
  • the preset duration threshold is the threshold value for determining the user's expectation to select the target area.
  • if the gaze point of the target object is located in the target area and the duration of stay is not less than the preset duration threshold, the user expects to select the target area, so the selection box can be moved to the target area.
  • if the dwell time of the target object's gaze point in the target area is less than the preset duration threshold, the user has only glanced over the target area, and the selection box is not moved, thereby avoiding misjudgment.
  • the preset duration threshold can be obtained based on prior knowledge or a limited number of experiments, and those skilled in the art can set it according to actual scenarios, which is not limited in the present disclosure.
  • the preset duration threshold can be set to 1 second; when the gaze point of the user 202 is located in the "CCTV-6" area and the duration of stay is longer than 1 second, the selection box 112 can be quickly moved from "CCTV-1" to "CCTV-6".
  • the target area is selected according to the dwell time of the gaze point of the target object, so as to avoid misjudgment of interface selection and improve selection accuracy.
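The dwell-time gate can be sketched as a small helper. The 1-second default mirrors the example above; the function and parameter names are assumptions:

```python
def should_select(dwell_seconds: float,
                  threshold_seconds: float = 1.0) -> bool:
    """Select the target area only if the gaze point has stayed in it
    for at least the preset duration threshold; shorter dwells are
    treated as glances and ignored, avoiding misjudgment."""
    return dwell_seconds >= threshold_seconds

print(should_select(1.3))  # True  - user expects to select the area
print(should_select(0.2))  # False - just a glance, do not move the box
```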
  • each area to be selected often includes multiple selectable targets, and the selectable targets can be icons, menu bars, and so on.
  • in some implementations, the selectable target is an icon, and it is referred to as an icon below.
  • the target area is first determined based on the user's gaze dwell information, and after the target area is determined, the selectable target is further determined from within the target area. That is, based on the gaze dwell information, a "coarse search + fine search" approach is used to determine the selectable target that the user expects to select, reducing the interference of a large number of selectable targets on the user's line of sight and improving the reliability of the interface selection method. This is described below.
  • the target area may first be determined from multiple candidate areas by using the method in any one of the aforementioned implementations. After determining the target area, the method also includes:
  • in one case, the selection box is not located in the target area; in this case, the selection box needs to be moved to an icon in the target area.
  • the selection box is located in this target area. In this case, one of the icons selected by the selection box can be kept unchanged.
  • in response to the selection box not being located in the target area, the selection box is moved to the target area, and a selectable target at a preset position in the target area is displayed as a selected area.
  • the target area determined based on the gaze dwell information of the target object 202 is B3, and the selection box 112 is located on the icon 15 in the area A1 to be selected.
  • the position of the selection box 112 is not located in the target area B3, so the selection box 112 needs to be moved from the area to be selected A1 to the target area B3.
  • the target area B3 includes 9 icons from icons 31 to 39. Therefore, which icon position in the target area B3 to move the selection box 112 to is a question that needs to be considered.
  • the selectable target selected the most times among the at least one selectable target can be confirmed as the selectable target at the preset position.
  • historical display interface data can be obtained; the historical display interface data includes the number of times each icon was selected over a past period of time, and the more times an icon is selected, the higher the possibility that the user expects to select it.
  • the icon among icons 31 to 39 in the target area B3 that has been selected the most times in history is used as the selectable target at the preset position, which represents the icon that the user most likely desires to select.
  • the selection box 112 can be quickly moved from the icon 15 in the area to be selected A1 to the icon 35 in the target area B3, realizing quick selection of the icon 35.
  • the distances between the gaze point of the target object and each selectable target can be obtained according to the gaze dwell information of the target object, and the selectable target corresponding to the minimum distance is confirmed as the selectable target at the preset position.
  • the distance between the icon 35 and the gaze point of the user 202 is the smallest, so the icon 35 can be confirmed as the icon that the user expects to select; the selection box 112 can then be quickly moved from the icon 15 in the area A1 to be selected to the icon 35 in the target area B3, realizing quick selection of icons.
  • the icon at the default position in the target area can be confirmed as the selectable target at the preset position.
  • the default position is the default position where the selection box moves when each area to be selected is used as the target area.
  • the default position may be the center of the target area, such as the icon 35 in the target area B3; or it may be the upper left corner of the target area, such as the icon 31 in the target area B3.
  • any other suitable location may also be used, which is not limited in the present disclosure.
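The default-position fallback reduces to indexing into the region's row-major icon grid; the grid size and icon numbering below are assumptions matching the running example:

```python
def default_icon(icon_ids, rows, cols, mode="center"):
    """Pick the default icon in a row-major grid of icon ids.

    mode "center" picks the middle cell; "top_left" picks index 0.
    """
    if mode == "top_left":
        return icon_ids[0]
    # Center cell of a rows x cols grid in row-major order.
    return icon_ids[(rows // 2) * cols + (cols // 2)]

icons = list(range(31, 40))                    # icons 31..39, 3x3 grid
print(default_icon(icons, 3, 3))               # -> 35
print(default_icon(icons, 3, 3, "top_left"))   # -> 31
```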
  • in addition, in response to the selection box being located in the target area, the selectable target currently selected by the selection box is displayed as a selected area.
  • after the target area is determined through any of the foregoing implementations, if the selection box is already located in the target area, the selection box is not moved.
  • in this case, the position of the selection box is no longer adjusted based on the default position or historical data.
  • the determined target area is B3, and the position of the selection box 112 is the selected icon 31, that is, the selection box 112 is located in the target area B3.
  • in this case, regardless of whether the default position is the icon 31, the selection box 112 need not be moved; it keeps the icon 31 selected, thereby reducing redundant moving operations.
  • the target area is determined from multiple areas to be selected, and then the selection box is moved to the preset position in the target area; the rapid movement of the selection box is realized by means of a "coarse search + fine search" strategy, improving interface selection efficiency.
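The "coarse search + fine search" flow, including skipping the move when the selection box is already inside the target area, might be sketched as follows (region names and the preset-position lookup are hypothetical):

```python
def select_on_interface(gaze_region, current_region, current_icon,
                        preset_icon_of):
    """Coarse search: the target region comes from the gaze point.
    Fine search: pick an icon inside it, unless the selection box
    is already in that region (no redundant move).
    """
    if current_region == gaze_region:
        return current_icon             # selection box stays put
    return preset_icon_of(gaze_region)  # move to preset position

# Hypothetical preset positions: one icon per region.
preset = {"A1": 15, "B3": 35}.get
print(select_on_interface("B3", "A1", 15, preset))  # -> 35 (moved)
print(select_on_interface("B3", "B3", 31, preset))  # -> 31 (kept)
```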
  • the method in the implementation manner of the present disclosure may further include:
  • the selection box is moved on the display interface according to the received user operation signal.
  • the user operation signal refers to a signal, sent by the user to the television through a remote control or a mobile terminal, for moving the selection box. After the selection box is moved through the method of any of the foregoing implementations, there may be a deviation in the position of the selection box, so the user can manually fine-tune the position of the selection box through the remote control.
  • the user actually expects to select the icon 35, so the user can manually move the selection box 112 from the icon 31 to the icon 35 by operating the direction keys of the remote control.
  • the icon is selected based on the user's gaze stop information.
  • the user only needs to fine-tune the position of the selection frame when there is a deviation in the position of the selection frame.
  • the interface selection method simplifies the operation steps and improves the interface selection efficiency.
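Manual fine-tuning with the remote's direction keys amounts to stepping through the icon grid one cell at a time; this sketch assumes the 3×3 numbering of the running example:

```python
def move_selection(icon, key, rows=3, cols=3, base=31):
    """Move the selection box one cell with a remote direction key.

    Icons are numbered row-major starting at `base` (hypothetical).
    Moves that would leave the grid are ignored.
    """
    r, c = divmod(icon - base, cols)
    dr, dc = {"up": (-1, 0), "down": (1, 0),
              "left": (0, -1), "right": (0, 1)}[key]
    r, c = r + dr, c + dc
    if 0 <= r < rows and 0 <= c < cols:
        return base + r * cols + c
    return icon  # ignore moves off the edge

# From icon 31 (top-left), "down" then "right" reaches icon 35.
print(move_selection(move_selection(31, "down"), "right"))  # -> 35
```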
  • the selectable targets in the embodiments of the present disclosure are not limited to the above-mentioned icons, and may also be any other objects suitable for implementation that can be selected by the user, such as a menu bar or a text list, which is not limited in the present disclosure.
  • the electronic device in the implementation manner of the present disclosure is not limited to the above-mentioned TV set, and may also be any other suitable equipment, such as a tablet computer, which is not limited in the present disclosure.
  • the arrangement of the regions to be selected and the selectable targets in the implementation of the present disclosure is not limited to the above-mentioned form, and can also be any other suitable arrangement form, which is not limited in the present disclosure.
  • the target object corresponding to the operation part information satisfying the preset condition is determined from a plurality of preset objects, thereby avoiding the interference of multi-user scenarios and improving the accuracy of display interface selection.
  • the embodiment of the present disclosure provides a device for selecting a display interface, which can be applied to any electronic device with a display interface, such as a TV, a display screen, a tablet computer, or a handheld mobile terminal, which is not limited in the present disclosure.
  • the device for selecting a display interface in an example of the present disclosure includes:
  • the acquisition part 1110 is configured to acquire the image to be processed collected by the image acquisition device;
  • the detection part 1120 is configured to detect gaze stop information of the target object from the image to be processed;
  • the area determining part 1130 is configured to determine the target area from multiple candidate areas on the display interface of the electronic device according to the gaze stop information of the target object;
  • the selected part 1140 is configured to display the target area as a selected area through the display interface.
  • the acquisition part is further configured to:
  • collect the image to be processed through the image acquisition device in response to a trigger event, where the trigger event includes at least one of the following: the display interface jumps to the target interface, the electronic device switches from a power-off state to a power-on state, and the electronic device switches from a standby state to a running state.
  • the detection part is further configured to:
  • detect multiple preset objects from the image to be processed; determine the target object from the multiple preset objects; and detect the gaze stop information of the target object according to the image to be processed.
  • the detection part is further configured to:
  • detect the image to be processed to obtain operation part information of each of the multiple preset objects; and determine, according to the obtained operation part information, the preset object whose operation part information satisfies the preset condition as the target object.
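A possible shape for this target-object filtering, assuming a hypothetical detector output where each person record carries a `holds_remote` flag (the preset condition of the example scenario):

```python
def pick_target_object(detections):
    """Pick the controlling user from per-person detection results.

    detections: list of dicts like {"id": ..., "holds_remote": bool},
    a hypothetical output shape for the operation-part detector.
    Returns the id of the first person holding the remote, else None.
    """
    for person in detections:
        if person["holds_remote"]:   # preset condition satisfied
            return person["id"]
    return None

users = [{"id": 201, "holds_remote": False},
         {"id": 202, "holds_remote": True},
         {"id": 203, "holds_remote": False}]
print(pick_target_object(users))  # -> 202
```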
  • the detection part is further configured to:
  • detect the image to be processed to obtain eye feature information of the target object; and determine, according to the eye feature information, the gaze stop information of the gaze point of the target object in the coordinate system of the image acquisition device;
  • the determining the target area from multiple candidate areas of the display interface according to the gaze stop information of the target object includes:
  • determining the target area according to the gaze stop information, based on a pre-established mapping relationship between the coordinate system of the image acquisition device and the coordinate system of the display interface.
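The camera-to-display mapping is described only as a pre-established correspondence; a minimal proportional-scaling stand-in for that calibration might look like this (the resolutions and the 2×3 region grid are assumptions, not the disclosed calibration procedure):

```python
def gaze_to_region(gaze_xy, cam_size, screen_size, rows, cols):
    """Map a gaze point from camera coordinates to a display region.

    Uses a simple proportional mapping between the two coordinate
    systems (a stand-in for the pre-established calibration).
    Returns (row, col) of the candidate region the gaze falls in.
    """
    sx = screen_size[0] / cam_size[0]
    sy = screen_size[1] / cam_size[1]
    x, y = gaze_xy[0] * sx, gaze_xy[1] * sy
    col = min(int(x / (screen_size[0] / cols)), cols - 1)
    row = min(int(y / (screen_size[1] / rows)), rows - 1)
    return row, col

# 640x480 camera, 1920x1080 screen, 2 rows x 3 cols of regions.
print(gaze_to_region((600, 400), (640, 480), (1920, 1080), 2, 3))  # -> (1, 2)
```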
  • the area to be selected includes at least one selectable target, and the selected part is further configured as at least one of the following:
  • in response to the selection box not being located in the target area, move the selection box to the target area and display the selectable target at the preset position in the target area as a selected area; in response to the selection box being located in the target area, display the selectable target currently selected by the selection box as a selected area.
  • the selected portion is further configured to:
  • according to the gaze stop information of the target object, determine the dwell time for which the gaze point of the target object stays in the target area;
  • in response to the dwell time being not less than a preset duration threshold, display the target area as a selected area through the display interface.
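The dwell-time check can be sketched against per-frame gaze labels; the frame rate and one-second threshold below are illustrative assumptions (the disclosure leaves the threshold to the implementer):

```python
def confirm_region(dwell_samples, region, threshold_s=1.0, fps=30):
    """Confirm a target region only if the gaze stayed on it long enough.

    dwell_samples: per-frame region labels (hypothetical tracker output).
    Counts consecutive trailing frames on `region` and compares the
    implied dwell time against the threshold.
    """
    frames = 0
    for label in reversed(dwell_samples):
        if label != region:
            break
        frames += 1
    return frames / fps >= threshold_s

samples = ["A1"] * 10 + ["B3"] * 45   # 45 frames ~ 1.5 s at 30 fps
print(confirm_region(samples, "B3"))  # -> True
print(confirm_region(samples, "A1"))  # -> False
```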
  • the target object corresponding to the operation part information satisfying the preset condition is determined from a plurality of preset objects, thereby avoiding the interference of multi-user scenarios and improving the accuracy of display interface selection.
  • the embodiment of the present disclosure provides an electronic device, which can be any electronic device with a display interface, such as a TV, a display screen, a tablet computer, a handheld mobile terminal, etc., and this disclosure does not make any limit.
  • the electronic device of the present disclosure includes:
  • a display having a display interface; an image acquisition device; a processor; and a memory storing computer instructions, where the computer instructions, when executed by the processor, implement the method for selecting a display interface according to any one of the implementation manners of the first aspect.
  • the embodiments of the present disclosure provide a storage medium storing computer instructions, and when the computer instructions are executed by a processor, the display interface selection method described in any embodiment of the first aspect is implemented.
  • an embodiment of the present disclosure provides a computer program product, including computer readable code; when the computer readable code runs in an electronic device, a processor in the electronic device executes the display interface selection method described in any one of the implementation manners of the first aspect.
  • FIG. 12 shows a schematic structural diagram of a system suitable for implementing the method of the present disclosure. Through the system shown in FIG. 12 , corresponding functions of the processor and the storage medium described above can be realized.
  • the system 600 includes a processor 601 that can perform various appropriate actions and processes according to programs stored in the memory 602 or loaded into the memory 602 from the storage part 608 .
  • the memory 602 also stores various programs and data required for the operation of the system 600.
  • the processor 601 and the memory 602 are connected to each other through a bus 604 .
  • An input/output (I/O) interface 605 is also connected to the bus 604 .
  • the following parts are connected to the I/O interface 605: an input part 606 including a keyboard, a mouse, etc.; an output part 607 including a cathode ray tube (CRT), a liquid crystal display (LCD), etc., and a speaker; a storage section 608 including a hard disk, etc.; and a communication section 609 including network interface cards such as local area network (LAN) cards and modems. The communication section 609 performs communication processing via a network such as the Internet.
  • a drive 610 is also connected to the I/O interface 605 as needed.
  • a removable medium 611, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 610 as needed, so that a computer program read from it can be installed into the storage section 608 as needed.
  • the above method process can be implemented as a computer software program.
  • embodiments of the present disclosure include a computer program product comprising a computer program tangibly embodied on a machine-readable medium, the computer program comprising program code for performing the method described above.
  • the computer program may be downloaded and installed from a network via the communication portion 609 and/or installed from a removable medium 611 .
  • each block in a flowchart or block diagram may represent a part, module, program segment, or portion of code that contains one or more executable instructions for implementing the specified logical function.
  • the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or they may sometimes be executed in the reverse order, depending upon the functionality involved.
  • each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
  • the present disclosure relates to the technical field of electronic equipment, and relates to a display interface selection method, device, equipment, storage medium and program product.
  • the method for selecting a display interface includes: acquiring an image to be processed collected by an image acquisition device; detecting gaze stop information of a target object from the image to be processed; determining a target area from multiple candidate areas of a display interface of an electronic device according to the gaze stop information of the target object; and displaying the target area as a selected area through the display interface.
  • the disclosed method is based on the gaze stop information of the target object, can realize quick selection of a display interface, simplifies user operations, and improves interface selection efficiency.


Abstract

A display interface selection method, apparatus, device, storage medium and program product. The display interface selection method includes: acquiring an image to be processed collected by an image acquisition device (S310); detecting gaze stop information of a target object from the image to be processed (S320); determining a target area from multiple candidate areas of a display interface of an electronic device according to the gaze stop information of the target object (S330); and displaying the target area as a selected area through the display interface (S340).

Description

显示界面选择方法、装置、设备、存储介质及程序产品
相关申请的交叉引用
本公开基于申请号为202110736403.2、申请日为2021年06月30日、申请名称为“显示界面选择方法及装置”的中国专利申请提出,并要求该中国专利申请的优先权,该中国专利申请的全部内容在此引入本公开作为参考。
技术领域
本公开涉及电子设备技术领域,涉及一种显示界面选择方法、装置、设备、存储介质及程序产品。
背景技术
对于像电视机、商业显示屏幕等大屏显示设备,由于其不具备鼠标操作环境,因此往往需要基于遥控器或者手势识别等操作来实现界面选择。以电视机为例,电视机屏幕上包括多个待选图标时,用户需要利用遥控器的方向按键实现对某个图标的选中操作,步骤繁琐,界面选择的效率较低。
发明内容
为提高显示界面选择的效率,本公开实施方式提供了一种显示界面选择方法、装置、设备、存储介质及程序产品。
第一方面,本公开实施方式提供了一种显示界面选择方法,包括:
获取图像采集设备采集的待处理图像;
从所述待处理图像中检测得到目标对象的视线停留信息;
根据所述目标对象的视线停留信息,从电子设备的显示界面的多个待选区域中确定目标区域;
通过所述显示界面将所述目标区域作为选中区域显示。
在一些实施方式中,所述获取图像采集设备采集的待处理图像,包括:
响应于触发事件,通过所述图像采集设备采集所述待处理图像;
其中,所述触发事件包括以下至少之一:所述显示界面跳转至目标界面、所述电子设备由关机状态切换为开机状态、所述电子设备由待机状态切换为运行状态。
在一些实施方式中,所述从所述待处理图像中检测得到目标对象的视线停留信息,包括:
从所述待处理图像中检测得到多个预设对象;
从所述多个预设对象中确定所述目标对象;
根据所述待处理图像,检测得到所述目标对象的所述视线停留信息。
在一些实施方式中,所述从所述多个预设对象中确定所述目标对象,包 括:
对所述待处理图像进行检测,得到所述多个预设对象中每个所述预设对象的操作部信息;
根据所述操作部信息,确定满足预设条件的操作部信息对应的预设对象为所述目标对象。
在一些实施方式中,所述从所述待处理图像中检测得到目标对象的视线停留信息,包括:
对所述待处理图像进行检测,得到所述目标对象的眼部特征信息;
根据所述眼部特征信息,确定所述目标对象的注视点在所述图像采集设备坐标系的视线停留信息;
所述根据所述目标对象的视线停留信息,从显示界面的多个待选区域中确定目标区域,包括:
基于预先建立的所述图像采集设备坐标系与所述显示界面坐标系的映射关系,根据所述视线停留信息确定所述目标区域。
在一些实施方式中,所述待选区域包括至少一个可选目标,所述通过所述显示界面将所述目标区域作为选中区域显示,包括:
响应于选择框不位于所述目标区域中,将所述选择框移动至所述目标区域,并将所述目标区域中预设位置的所述可选目标作为选中区域显示;
响应于选择框位于所述目标区域中,将所述选择框当前选中的所述可选目标作为选中区域显示。
在一些实施方式中,所述将所述目标区域中预设位置的所述可选目标作为选中区域显示,包括:
根据历史显示界面数据,将所述至少一个可选目标中被选中次数最多的可选目标确认为预设位置的可选目标,并将所述预设位置的可选目标作为选中区域显示。
在一些实施方式中,所述将所述目标区域中预设位置的所述可选目标作为选中区域显示,包括:
根据所述目标对象的视线停留信息得到所述目标对象的注视点分别与所述至少一个可选目标中每个所述可选目标之间的距离,将最小距离对应的可选目标确认为预设位置的可选目标,并将所述预设位置的可选目标作为选中区域显示。
在一些实施方式中,在所述通过所述显示界面将所述目标区域作为选中区域显示之后,所述方法还包括:
响应于接收到的操作信号,在所述显示界面上移动所述选择框。
在一些实施方式中,所述通过所述显示界面将所述目标区域作为选中区 域显示,包括:
根据所述目标对象的视线停留信息,确定所述目标对象的注视点位于所述目标区域的停留时长;
响应于所述停留时长不小于预设时长阈值,通过所述显示界面将所述目标区域作为选中区域显示。
第二方面,本公开实施方式提供了一种显示界面选择装置,包括:
获取部分,被配置为获取图像采集设备采集的待处理图像;
检测部分,被配置为从所述待处理图像中检测得到目标对象的视线停留信息;
区域确定部分,被配置为根据所述目标对象的视线停留信息,从电子设备的显示界面的多个待选区域中确定目标区域;
选中部分,被配置为通过所述显示界面将所述目标区域作为选中区域显示。
在一些实施方式中,所述获取部分还被配置为:
响应于触发事件,通过所述图像采集设备采集所述待处理图像;
其中,所述触发事件包括以下至少之一:所述显示界面跳转至目标界面、所述电子设备由关机状态切换为开机状态、所述电子设备由待机状态切换为运行状态。
在一些实施方式中,所述检测部分还被配置为:
从所述待处理图像中检测得到多个预设对象;
从所述多个预设对象中确定所述目标对象;
根据所述待处理图像,检测得到所述目标对象的所述视线停留信息。
在一些实施方式中,所述检测部分还被配置为:
对所述待处理图像进行检测,得到所述多个预设对象中每个所述预设对象的操作部信息;
根据得到的所述操作部信息,确定满足预设条件的操作部信息对应的预设对象为所述目标对象。
在一些实施方式中,所述检测部分还被配置为:
对所述待处理图像进行检测,得到所述目标对象的眼部特征信息;
根据所述眼部特征信息,确定所述目标对象的注视点在所述图像采集设备坐标系的视线停留信息;
所述根据所述目标对象的视线停留信息,从显示界面的多个待选区域中确定目标区域,包括:
基于预先建立的所述图像采集设备坐标系与所述显示界面坐标系的映射关系,根据所述视线停留信息确定所述目标区域。
在一些实施方式中,所述待选区域包括至少一个可选目标,所述选中部分还被配置为如下至少一项:
响应于选择框不位于所述目标区域中,将所述选择框移动至所述目标区域,并将所述目标区域中预设位置的所述可选目标作为选中区域显示;
响应于选择框位于所述目标区域中,将所述选择框当前选中的所述可选目标作为选中区域显示。
在一些实施方式中,所述选中部分还被配置为:
根据所述目标对象的视线停留信息,确定所述目标对象的注视点位于所述目标区域的停留时长;
响应于所述停留时长不小于预设时长阈值,通过所述显示界面将所述目标区域作为选中区域显示。
第三方面,本公开实施方式提供了一种电子设备,包括:
显示器,具有显示界面;
图像采集设备;
处理器;以及
存储器,存储有计算机指令,所述计算机指令被所述处理器执行时实现第一方面任一实施方式所述的显示界面选择方法。
第四方面,本公开实施方式提供了一种存储介质,存储有计算机指令,所述计算机指令被处理器执行时实现第一方面任一实施方式所述的显示界面选择方法。
第五方面,本公开实施方式提供了一种计算机程序产品,包括计算机可读代码,当所述计算机可读代码在电子设备中运行时,所述电子设备中的处理器执行用于实现第一方面任一实施方式所述的显示界面选择方法。
本公开实施方式的显示界面选择方法,包括获取图像采集设备采集的待处理图像,从待处理图像中检测得到目标对象的视线停留信息,根据目标对象的视线停留信息从电子设备的显示界面的多个待选区域中确定目标区域,通过显示界面将目标区域作为选中区域显示。本公开实施方式中,基于目标对象的视线停留信息,可实现显示界面的快速选择,简化用户操作,提高界面选择效率。
附图说明
为了更清楚地说明本公开具体实施方式或现有技术中的技术方案,下面将对具体实施方式或现有技术描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图是本公开的一些实施方式,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1是现有技术中显示界面选择方法的原理示意图。
图2是根据本公开一些实施方式中电子设备的结构示意图。
图3是根据本公开一些实施方式中显示界面选择方法的流程图。
图4是根据本公开一些实施方式中显示界面选择方法的原理示意图。
图5是根据本公开一些实施方式中显示界面选择方法的流程图。
图6是根据本公开一些实施方式中显示界面选择方法的流程图。
图7是根据本公开一些实施方式中显示界面选择方法的流程图。
图8是根据本公开一些实施方式中显示界面选择方法的流程图。
图9是根据本公开一些实施方式中显示界面选择方法的原理示意图。
图10是根据本公开一些实施方式中显示界面选择方法的原理示意图。
图11是根据本公开一些实施方式中显示界面选择装置的结构框图。
图12是根据本公开一些实施方式中电子设备的结构框图。
具体实施方式
下面将结合附图对本公开的技术方案进行清楚、完整地描述,显然,所描述的实施方式是本公开一部分实施方式,而不是全部的实施方式。基于本公开中的实施方式,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施方式,都属于本公开保护的范围。此外,下面所描述的本公开不同实施方式中所涉及的技术特征只要彼此之间未构成冲突就可以相互结合。
对于电视机、商业显示屏幕等大屏显示设备,往往需要利用遥控器实现显示屏界面的选择。以电视机为例,图1中示出了现有技术中电视机的人机交互示意图。
如图1所示,电视机10的显示屏幕上显示有多个图标11,用户20需要从多个图标11中选择自己想要的图标。现有技术的交互方式是:通过用户20操作遥控器30上的方向键控制选择框12移动,实现图标选择。但是,这种操作方式操作步骤十分繁琐,例如,用户想要将选择框12从图标11-A移动至图标11-B,需要多次按压遥控器的方向键,才可以完成图标11-B的选择操作,操作过程十分复杂,影响用户体验。
正是基于上述现有技术中存在的缺陷,本公开实施方式提供了一种显示界面选择方法、装置、设备、存储介质及程序产品,旨在实现显示界面的快速选择,提高界面选择的效率。
第一方面,本公开实施方式提供了一种显示界面选择方法,该方法可适用于任何具有显示界面的电子设备中,例如电视机、显示屏、平板电脑、手持式移动终端等,本公开对此不作限制。另外,本公开实施方式的方法,可以由电子设备的处理器执行处理。
图2中示出了适于实现本公开方法的电子设备的一些实施方式,下面结合图2示例对本公开方法的应用场景进行说明。
如图2所示,在本公开一些实施方式中,电子设备以电视机100为例,电视机100具有显示屏110,显示屏110上用于输出显示界面以供用户观看。
本公开实施方式的电视机100还包括图像采集设备120,图像采集设备120用于采集电视机100前方的图像信息。在一个示例中,图像采集设备120可以是摄像头,例如RGB摄像头、红外摄像头以及ToF(Time of flight,飞行时间)摄像头中的一种或多种组合。
值得说明的是,在一些实施方式中,图像采集设备120可以设置于电视机100之上,例如图2中所示,图像采集设备120可伸缩设于电视机100上方中央。在另一些实施方式中,图像采集设备120也可以与电视机100分体设置,只要保证图像采集设备可以采集到电视机100之前的图像即可。本公开对此不作限制。
在前述基础上,图3示出了本公开显示界面选择方法的一些实施方式,下面结合图3进行说明。
如图3所示,在一些实施方式中,本公开示例的显示界面选择方法包括:
S310、获取图像采集设备采集的待处理图像。
在一些实施方式中,图像采集设备120可以采集到电视机100前方的图像,也即待处理图像。
在一些实施方式中,考虑到用户正常观看电视节目时并没有界面选择的需求,因此图像采集设备120也无需时刻获取待处理图像,可在用户存在界面选择需求时,图像采集设备120才开始采集图像。
其中,可以接收触发事件,响应于触发事件,通过所述图像采集设备采集所述待处理图像。其中,所述触发事件包括以下至少之一:所述显示界面跳转至目标界面、所述电子设备由关机状态切换为开机状态、所述电子设备由待机状态切换为运行状态。
例如一个示例中,响应于显示界面跳转至目标界面,通过图像采集设备120采集待处理图像。
目标界面可以是例如电视节目选择的界面,如图4所示,当用户由正常观看电视节目的界面跳转至图4所示的节目选择界面(频道选择界面)时,即可确认显示界面跳转至目标界面,用户存在界面选择的需求,此时可以通过图像采集设备120来采集待处理图像。
当然,可以理解目标界面并不局限于图4所示的节目选择界面,其他任何存在界面选择的显示界面均可以作为目标界面,本公开对此不作限制。
例如另一个示例中,响应于电子设备由关机状态切换为开机状态,或电 子设备由待机状态切换为运行状态,通过图像采集设备采集待处理图像。
仍以图2所示的电视机为例,当电视机100由关机状态切换为开机状态,即电视机100刚刚启动,或者电视机100由待机状态切换为运行状态,即电视机100刚刚被唤醒,用户存在界面选择的需求(例如更换电视台),从而可以通过图像采集设备120来采集待处理图像。
待处理图像指电视机100前方的图像,待处理图像中包括至少一个预设对象,例如观看电视的用户等处于图像采集设备120拍摄范围内的对象。
S320、从待处理图像中检测得到目标对象的视线停留信息。
在一些实施方式中,在获取到待处理图像之后,可以通过视线跟踪技术,从待处理图像中检测得到目标对象的视线停留信息。在一些实施方式中,视线停留信息可以包括目标对象的注视点在图像采集设备坐标系的坐标信息、视线停留时长、视线运动趋势等信息,本公开对此不作限制。
目标对象指从待处理图像的至少一个预设对象中确定的对象。举例来说,在图2所示场景中共包括3个观看电视的用户:用户201、用户202和用户203。从而图像采集设备120采集的待处理图像中共包括三个预设对象,也即三个用户。目标对象指基于图像检测技术,从用户201、用户202和用户203中确定出来的目标用户。
其中,可基于图像检测技术,确定手持遥控器300的用户202为目标对象。本公开下述实施方式中进行说明,在此暂不详述。
在确定目标对象之后,即可基于视线跟踪技术,从待处理图像中检测得到目标对象的视线停留信息。视线停留信息表示目标对象注视点的位置信息,也即目标对象当前的视线方向。
S330、根据目标对象的视线停留信息,从电子设备的显示界面的多个待选区域中确定目标区域。
在一些实施方式中,视线停留信息表示目标对象当前的视线方向,也即用户期望选择的目标区域所在的方向,从而可基于视线停留信息,从显示界面上的多个待选区域中确定出用户期望选择的目标区域。
例如图4所示,假设选择框112的初始位置为“CCTV-1”,目标对象(也即用户202)的视线停留信息指示的位置为“CCTV-6”所在的位置,从而电视机100从“CCTV-1”至“CCTV-6”这6个待选区域中,确定“CCTV-6”为目标区域。
对于根据视线停留信息确定目标区域的过程,本公开下述实施方式中进行说明,在此暂不详述。
S340、通过显示界面将目标区域作为选中区域显示。
在一些实施方式中,在S330中确定目标区域之后,即可将选择框移动至 目标区域,从而将目标区域作为选中区域。
仍以图4场景为例,假设选择框112初始位置为“CCTV-1”,根据用户202的视线停留信息确定的目标区域为“CCTV-6”,从而电视机100即可将选择框112移动至目标区域“CCTV-6”。
可以看到,本公开实施方式中,用户在选择目标区域时,无需使用遥控器操作,即可根据用户的视线停留信息实现选择框的移动。在图4场景中,现有技术的遥控器操作,用户至少需要按压右键两次、按压下键一次,才可以将选择框112由“CCTV-1”移动至“CCTV-6”。而在本公开实施方式中,用户无需任何按压操作,即可实现选择框112的快速移动,大大简化了选择操作。
通过上述可知,本公开实施方式中,基于目标对象的视线停留信息,可实现显示界面的快速选择,简化用户操作,提高界面选择效率。
在实际场景中,往往存在多人共同观看电视的情况,例如图2中所示,用户201、用户202和用户203共同观看电视机100。在此情况下,待处理图像中共包括三个预设对象,由于每个用户的视线方向可能不同,从而检测得到的视线停留信息也不相同。为减少多个用户干扰,对于多用户场景可从多个用户中确定出主控人员,依据主控人员的视线停留信息作为界面选择的依据。下面结合图5进行说明。
如图5所示,在一些实施方式中,本公开示例的显示界面选择方法中,从待处理图像中检测得到目标对象的视线停留信息,包括:
S510、从待处理图像中检测得到多个预设对象。
在一些实施方式中,以图2所示场景为例,预设对象为人体,在获取待处理图像之后,可以基于人体检测技术从待处理图像中检测到所有的人体区域,例如用户201、用户202以及用户203。
S520、从多个预设对象中确定目标对象。
在一些实施方式中,目标对象为多个预设对象中的主控对象。例如图2所示,预设对象共包括用户201、用户202和用户203,在一些实施方式中,可将手持遥控器300的用户202确定为目标对象。
例如图6中所示,在一些实施方式中,确定目标对象的过程可包括:
S521、对待处理图像进行检测,得到多个预设对象中每个所述预设对象的操作部信息。
S522、根据操作部信息,确定满足预设条件的操作部信息对应的预设对象为目标对象。
首先,可利用人体检测模型对待处理图像进行人体检测,可得到待处理图像中各个预设对象,也即图2中所述的用户201、用户202以及用户203。 其次,可利用操作部检测模型对每个预设对象的操作部区域进行检测,确定每个预设对象的操作部信息。
可以理解,操作部指人体操作遥控器的部位。例如一个示例中,遥控器为手持式遥控器,从而操作部即可对应人体手部区域。例如另一个示例中,遥控器为头戴式遥控器,从而操作部即可对应人体头部区域。操作部还可以是其他任何适于实施的人体部位,本公开对此不再枚举。
在图2示例中,遥控器300以手持式遥控器为例,在实际场景中,往往手持遥控器的用户为主控用户,因此本公开实施方式中,可基于图像检测技术,对每个预设对象的手部区域进行检测,得到每个用户的手部信息。
在确定每个用户的手部信息之后,即可基于图像检测技术对每个手部信息进行检测,判断用户是否手持物品,或者判断用户手持物品是否为遥控器。若用户手持遥控器,则确定该手持遥控器的用户为主控用户,也即该手部信息对应的预设对象为目标对象。
继续参照图2,遥控器300被用户202手持,从而检测到用户202的操作部信息满足预设条件,即可确定用户202为目标对象。
值得说明的是,本公开实施方式并不局限于上述示例,在其他实施方式中,也可以采用其他任何适于实现的方式来确定目标对象。例如,可基于人脸识别技术,将人脸识别通过的预设对象确定为目标对象。本领域技术人员对此可以理解并充分实施。
S530、根据待处理图像检测得到目标对象的视线停留信息。
在确定目标对象之后,可基于视线跟踪算法从待处理图像中检测到该目标对象的视线停留信息。本公开下述实施方式中进行说明,在此暂不详述。
通过上述可知,本公开实施方式中,从多个预设对象中确定满足预设条件的操作部信息对应的目标对象,从而可避免多用户场景的干扰,提高显示界面选择的准确性。
如图7所示,在一些实施方式中,本公开示例的显示界面选择方法,包括:
S710、对待处理图像进行检测,得到目标对象的眼部特征信息。
在一些实施方式中,可通过预先训练好的眼部检测网络,从待处理图像中提取得到目标对象的眼部特征信息。
S720、根据眼部特征信息,确定目标对象的注视点在图像采集设备坐标系的视线停留信息。
在一些实施方式中,通过对提取得到的眼部特征信息进行特征分析,可得到目标对象的注视点的坐标信息。
可以理解,目标对象的注视点的坐标信息,指目标对象的注视点在图像 采集设备120的坐标系中的坐标信息。
S730、基于预先建立的图像采集设备坐标系与显示界面坐标系的映射关系,根据视线停留信息确定目标区域。
首先,如图2所示,可基于图像采集设备120和显示屏110的位置关系,预先构建图像采集设备坐标系与显示界面坐标系之间的映射关系,该映射关系表示图像坐标到显示界面坐标的配准关系。
在一个示例中,可基于用户与图像采集设备120之间的距离等比映射,得到图像采集设备坐标系与显示界面坐标系之间的映射关系。本领域技术人员基于相关技术可以理解并充分实施。
在S720中确定得到目标对象的视线停留信息,其表示在图像采集设备坐标系下的注视点的坐标信息,从而基于该坐标信息和预先构建的映射关系,即可得到显示界面上与该视线停留信息对应的目标区域。
以图4所示为例,根据目标对象202的眼部特征信息确定视线停留信息,基于该视线停留信息映射到显示界面中得到目标区域为“CCTV-6”,从而将“CCTV-6”作为目标对象202期望选择的目标区域。
通过上述可知,本公开实施方式中,基于目标对象的视线停留信息,可实现显示界面的快速选择,简化用户操作,提高界面选择效率。
在一些实施方式中,在基于待处理图像检测得到目标对象的视线停留信息之前,还包括对视线追踪的校准过程。在一个示例中,用户可通过遥控器发送指令,控制电视机进入校准程序。
在校准过程中,可以在电视机显示界面的预设位置输出若干校准点,引导用户视线停留在校准点位置。同时,电视机的图像采集设备可采集用户图像,基于用户图像检测得到用户实时注视点与校准点的位置偏差,进而基于该位置偏差实现对图像采集设备坐标系与显示界面坐标系之间的映射关系的校准。
在一些实施方式中,考虑到用户的视线可能会频繁发生改变,从而对界面选择产生干扰。因此,本公开一些实施方式中,可根据目标对象的注视点在目标区域中的停留时长,确定是否选中该目标区域。下面结合图8进行说明。
如图8所示,在一些实施方式中,本公开示例的显示界面选择方法包括:
S810、根据目标对象的视线停留信息,确定目标对象的注视点位于目标区域的停留时长。
S820、响应于停留时长不小于预设时长阈值,通过显示界面将目标区域作为选中区域显示。
在一些实施方式中,在根据目标对象的视线停留信息确定目标区域之后, 并非立即将选择框移动至该目标区域,而是确定目标对象注视点位于该目标区域的停留时长。若停留时长较短,说明用户可能只是目光扫过该区域,并非期望选中该区域;相反,若停留时长较长,说明用户期望选中该区域。
预设时长阈值即为确定用户期望选中目标区域的门限值,当目标对象的注视点位于目标区域的停留时长不小于该预设时长阈值,说明用户期望选中该目标区域,从而可以将选择框移动至该目标区域。当目标对象的注视点位于目标区域的停留时长小于该预设时长阈值,说明用户仅仅是目光扫过该目标区域,无需移动选择框,从而避免误判断。
可以理解,预设时长阈值可以根据先验知识或者有限次试验得到,本领域技术人员可以根据实际场景进行设置,本公开对此不作限制。例如图4示例中,预设时长阈值可以设置为1秒,当用户202的注视点位于“CCTV-6”区域的停留时长大于1秒,则可将选择框112由“CCTV-1”快速移动至“CCTV-6”。
通过上述可知,本公开实施方式中,根据目标对象注视点的停留时长选中目标区域,避免界面选择误判断,提高选择准确性。
值得说明的是,在一些场景中,大屏设备为了显示更多的有效信息,往往每个待选区域中会包括多个可选目标,可选目标可以是图标、菜单栏等等。
例如图9中所示,显示界面中包括待选区域A1、待选区域A2、待选区域A3、待选区域B1、待选区域B2、待选区域B3,共2*3=6个待选区域,其中每个待选区域中包括3*3共9个可选目标。也即,显示界面中共包括6*9=54个可选目标。本示例中可选目标以图标为例,下述中将可选目标称为图标。
可以理解,由于显示界面上图标很多,对用于202的视线干扰相对较为严重,为保证界面选择方法的可靠性,本公开一些实施方式中,首先基于用户视线停留信息确定目标区域,在确定目标区域之后,进一步从目标区域中确定可选目标。也即,基于视线停留信息,采用“粗搜+精搜”的方式确定用户期望选择的可选目标,降低较多的可选目标对用户视线干扰,提高界面选择方法的可靠性。下面进行说明。
在一些实施方式中,可首先通过前述任一实施方式的方法,从多个待选区域中确定出目标区域。在确定目标区域之后,所述方法还包括:
选择框不位于该目标区域中。在此情况下,需要将选择框移动至该目标区域的某个图标上。
选择框位于该目标区域中。在此情况下,可以保持选择框选中的某个图标不变。
下面基于上述两种情况分别进行说明。
在一些实施方式中,响应于选择框不位于目标区域中,将选择框移动至 目标区域,并将目标区域中预设位置的可选目标作为选中区域显示。
如图9示例中,基于前述任一实施方式的方法,基于目标对象202的视线停留信息确定的目标区域为B3,而选择框112位于待选区域A1中的图标15上。
此时,选择框112的位置不位于目标区域B3中,从而需要将选择框112从待选区域A1移动至目标区域B3中。但是目标区域B3中包括图标31~39共9个图标,因此,将选择框112移动至目标区域B3中的哪个图标位置是需要考虑的问题,下面本公开给出几种实现方式:
在一些实施方式中,可以根据历史显示界面数据,将至少一个可选目标中被选中次数最多的可选目标确认为预设位置的可选目标。
其中,可获取历史显示界面数据,历史显示界面数据中包括过往一端时间中各个图标被选中的次数,图标被选中的次数越多,表示用户期望选择该图标的可能性越高。
从而通过分析历史显示界面数据,将目标区域B3中图标31~39中历史被选中次数最多的图标作为预设位置的可选目标,其表示用户最有可能期望选择的图标。
例如图9所示中,假设图标35为目标区域B3中历史被选中次数最多的图标,从而可将选择框112由待选区域A1的图标15,快速移动至目标区域B3的图标35,实现图标的快速选择。
在一些实施方式中,可以根据目标对象的视线停留信息得到目标对象的注视点分别与每个可选目标之间的距离,将最小距离对应的可选目标确认为预设位置的可选目标。
其中,仍可基于前述得到的目标对象的视线停留信息,确认目标对象的注视点坐标映射到显示界面坐标系的位置信息,基于该位置信息可确认目标区域B3中每个图标与注视点的距离,距离越小表示用户期望选中该图标的可能性越高。
例如图9示例中,图标35距离用户202的注视点距离最小,可以将图标35确认为用户期望选中的图标,从而可将选择框112由待选区域A1的图标15,快速移动至目标区域B3的图标35,实现图标的快速选择。
在一些实施方式中,可以将目标区域中默认位置的图标确认为预设位置的可选图标。
默认位置为每个待选区域作为目标区域时选择框移动的默认位置。默认位置可以是目标区域中心位置,例如目标区域B3中的图标35;也可以是目标区域左上角位置,例如目标区域B3中的图标31。当然,还可以是其他任何适于实施的位置,本公开对此不作限制。
在一些实施方式中,响应于选择框位于目标区域中,将选择框当前选中的可选目标作为选中区域显示。
在一些实施方式中,在通过前述任一实施方式确定目标区域之后,若选择框正是位于目标区域中,则不再移动选择框。
可以理解,在通过粗搜确定目标区域之后,若选择框本身就位于目标区域内,为避免冗余的选择框移动操作,不再基于默认位置或者历史数据来移动选择框位置。
例如图10所述中,假设确定的目标区域为B3,并且选择框112的位置为选中图标31,也即选择框112位于目标区域B3中。在此情况下,无论默认位置是否是图标31,都可以不再移动选择框112,而是保持选择框112选中图标31,从而减少冗余移动操作。
通过上述可知,本公开实施方式中,从多个待选区域中确定目标区域,然后将选择框移动至目标区域中预设位置,利用“粗搜+精搜”的方式实现选择框的快速移动,提高界面选择效率。
在一些实施方式中,考虑到基于用户视线停留信息确定可选目标可能存在偏差,因此在实现选择框移动之后,本公开实施方式方法还可以包括:
根据接收到的用户操作信号,在显示界面上移动选择框。
在一些实施方式中,用户操作信号指用户通过遥控器或移动终端向电视机发送的移动选择框的信号。在通过前述任一实施方式的方法,移动完选择框之后。选择框的位置可能存在偏差,从而用户可以通过遥控器来手动微调选择框的位置。
例如图10所示,通过上述方法将选择框112移动至图标31之后,用户实际期望选中的图标为图标35,从而用户可以通过操作遥控器的方向键,手动将选择框112由图标31移动至图标35。
值得说明的是,本公开实施方式的显示界面选择方法,基于用户视线停留信息选择图标,用户只需要在选择框位置存在偏差时,微调选择框位置即可,相较于现有技术中的显示界面选择方法,简化操作步骤,提高界面选择效率。
在一些实施方式中,本公开实施方式中的可选目标并不局限于上述的图标,也可以是其他任何适于实现的可供用户选择的目标,例如菜单栏、文字列表等,本公开对此不作限制。
在一些实施方式中,本公开实施方式中的电子设备也不局限于上述的电视机,也可以是其他任何适于实现的设备,例如平板电脑等,本公开对此不作限制。
在一些实施方式中,本公开实施方式的待选区域、可选目标的排列方式 也不局限于上述的形式,还可以是其他任何适于实施的排列形式,本公开对此不作限制。
通过上述可知,本公开实施方式中,基于目标对象的视线停留信息,可实现显示界面的快速选择,简化用户操作,提高界面选择效率。从多个预设对象中确定满足预设条件的操作部信息对应的目标对象,从而可避免多用户场景的干扰,提高显示界面选择的准确性。
第二方面,本公开实施方式提供了一种显示界面选择装置,该装置可适用于任何具有显示界面的电子设备中,例如电视机、显示屏、平板电脑、手持式移动终端等,本公开对此不作限制。
如图11所示,在一些实施方式中,本公开示例的显示界面选择装置包括:
获取部分1110,被配置为获取图像采集设备采集的待处理图像;
检测部分1120,被配置为从所述待处理图像中检测得到目标对象的视线停留信息;
区域确定部分1130,被配置为根据所述目标对象的视线停留信息,从电子设备的显示界面的多个待选区域中确定目标区域;
选中部分1140,被配置为通过所述显示界面将所述目标区域作为选中区域显示。
通过上述可知,本公开实施方式中,基于目标对象的视线停留信息,可实现显示界面的快速选择,简化用户操作,提高界面选择效率。
在一些实施方式中,所述获取部分还被配置为:
响应于触发事件,通过所述图像采集设备采集所述待处理图像;
其中,所述触发事件包括以下至少之一:所述显示界面跳转至目标界面、所述电子设备由关机状态切换为开机状态、所述电子设备由待机状态切换为运行状态。
在一些实施方式中,所述检测部分还被配置为:
从所述待处理图像中检测得到多个预设对象;
从所述多个预设对象中确定所述目标对象;
根据所述待处理图像,检测得到所述目标对象的所述视线停留信息。
在一些实施方式中,所述检测部分还被配置为:
对所述待处理图像进行检测,得到所述多个预设对象中每个所述预设对象的操作部信息;
根据得到的所述操作部信息,确定满足预设条件的操作部信息对应的预设对象为所述目标对象。
在一些实施方式中,所述检测部分还被配置为:
对所述待处理图像进行检测,得到所述目标对象的眼部特征信息;
根据所述眼部特征信息,确定所述目标对象的注视点在所述图像采集设备坐标系的视线停留信息;
所述根据所述目标对象的视线停留信息,从显示界面的多个待选区域中确定目标区域,包括:
基于预先建立的所述图像采集设备坐标系与所述显示界面坐标系的映射关系,根据所述视线停留信息确定所述目标区域。
在一些实施方式中,所述待选区域包括至少一个可选目标,所述选中部分还被配置为如下至少一项:
响应于选择框不位于所述目标区域中,将所述选择框移动至所述目标区域,并将所述目标区域中预设位置的所述可选目标作为选中区域显示;
响应于选择框位于所述目标区域中,将所述选择框当前选中的所述可选目标作为选中区域显示。
在一些实施方式中,所述选中部分还被配置为:
根据所述目标对象的视线停留信息,确定所述目标对象的注视点位于所述目标区域的停留时长;
响应于所述停留时长不小于预设时长阈值,通过所述显示界面将所述目标区域作为选中区域显示。
通过上述可知,本公开实施方式中,基于目标对象的视线停留信息,可实现显示界面的快速选择,简化用户操作,提高界面选择效率。从多个预设对象中确定满足预设条件的操作部信息对应的目标对象,从而可避免多用户场景的干扰,提高显示界面选择的准确性。
第三方面,本公开实施方式提供了一种电子设备,该电子设备可以是任何具有显示界面的电子设备中,例如电视机、显示屏、平板电脑、手持式移动终端等,本公开对此不作限制。
在一些实施方式中,本公开示例的电子设备包括:
显示器,具有显示界面;
图像采集设备;
处理器;以及
存储器,存储有计算机指令,所述计算机指令被所述处理器执行时实现第一方面任一实施方式所述的显示界面选择方法。
第四方面,本公开实施方式提供了一种存储介质,存储有计算机指令,所述计算机指令被处理器执行时实现第一方面任一实施方式所述的显示界面选择方法。
第五方面,本公开实施方式提供了一种计算机程序产品,包括计算机可读代码,当所述计算机可读代码在电子设备中运行时,所述电子设备中的处 理器执行用于实现第一方面任一实施方式所述的显示界面选择方法。
在一些实施方式中,图12示出了适于用来实现本公开方法的系统的结构示意图,通过图12所示系统,可实现上述处理器及存储介质相应功能。
如图12所示,系统600包括处理器601,其可以根据存储在存储器602中的程序或者从存储部分608加载到存储器602中的程序而执行各种适当的动作和处理。在存储器602中,还存储有系统600操作所需的各种程序和数据。处理器601和存储器602通过总线604彼此相连。输入/输出(I/O)接口605也连接至总线604。
以下部件连接至I/O接口605:包括键盘、鼠标等的输入部分606;包括诸如阴极射线管(Cathode Ray Tube,CRT)、液晶显示器(Liquid Crystal Display,LCD)等以及扬声器等的输出部分607;包括硬盘等的存储部分608;以及包括诸如局域网(Local Area Network,LAN)卡、调制解调器等的网络接口卡的通信部分609。通信部分609经由诸如因特网的网络执行通信处理。驱动器610也根据需要连接至I/O接口605。可拆卸介质611,诸如磁盘、光盘、磁光盘、半导体存储器等等,根据需要安装在驱动器610上,以便于从其上读出的计算机程序根据需要被安装入存储部分608。
根据本公开的实施方式,上文方法过程可以被实现为计算机软件程序。例如,本公开的实施方式包括一种计算机程序产品,其包括有形地包含在机器可读介质上的计算机程序,计算机程序包含用于执行上述方法的程序代码。在这样的实施方式中,该计算机程序可以通过通信部分609从网络上被下载和安装,和/或从可拆卸介质611被安装。
附图中的流程图和框图,图示了按照本公开各种实施方式的系统、方法和计算机程序产品的可能实现的体系架构、功能和操作。在这点上,流程图或框图中的每个方框可以代表一个部分、模块、程序段、或代码的一部分,模块、程序段、或代码的一部分包含一个或多个用于实现规定的逻辑功能的可执行指令。也应当注意,在有些作为替换的实现中,方框中所标注的功能也可以以不同于附图中所标注的顺序发生。例如,两个接连地表示的方框实际上可以基本并行地执行,它们有时也可以按相反的顺序执行,这依所涉及的功能而定。也要注意的是,框图和/或流程图中的每个方框、以及框图和/或流程图中的方框的组合,可以用执行规定的功能或操作的专用的基于硬件的系统来实现,或者可以用专用硬件与计算机指令的组合来实现。
显然,上述实施方式仅仅是为清楚地说明所作的举例,而并非对实施方式的限定。对于所属领域的普通技术人员来说,在上述说明的基础上还可以做出其它不同形式的变化或变动。这里无需也无法对所有的实施方式予以穷举。而由此所引伸出的显而易见的变化或变动仍处于本公开创造的保护范围 之中。
工业实用性
本公开涉及电子设备技术领域,涉及一种显示界面选择方法、装置、设备、存储介质及程序产品。显示界面选择方法包括:获取图像采集设备采集的待处理图像;从所述待处理图像中检测得到目标对象的视线停留信息;根据所述目标对象的视线停留信息,从电子设备的显示界面的多个待选区域中确定目标区域;通过所述显示界面将所述目标区域作为选中区域显示。本公开方法基于目标对象的视线停留信息,可实现显示界面的快速选择,简化用户操作,提高界面选择效率。

Claims (14)

  1. A display interface selection method, wherein the method is performed by an electronic device, and the method comprises:
    acquiring an image to be processed collected by an image acquisition device;
    detecting gaze stop information of a target object from the image to be processed;
    determining a target area from multiple candidate areas of a display interface of the electronic device according to the gaze stop information of the target object;
    displaying the target area as a selected area through the display interface.
  2. The method according to claim 1, wherein the acquiring an image to be processed collected by an image acquisition device comprises:
    collecting the image to be processed through the image acquisition device in response to a trigger event;
    wherein the trigger event comprises at least one of the following: the display interface jumping to a target interface, the electronic device switching from a powered-off state to a powered-on state, and the electronic device switching from a standby state to a running state.
  3. The method according to claim 1 or 2, wherein the detecting gaze stop information of a target object from the image to be processed comprises:
    detecting multiple preset objects from the image to be processed;
    determining the target object from the multiple preset objects;
    detecting the gaze stop information of the target object according to the image to be processed.
  4. The method according to claim 3, wherein the determining the target object from the multiple preset objects comprises:
    detecting the image to be processed to obtain operation part information of each of the multiple preset objects;
    determining, according to the operation part information, a preset object whose operation part information satisfies a preset condition as the target object.
  5. The method according to any one of claims 1 to 4, wherein the detecting gaze stop information of a target object from the image to be processed comprises:
    detecting the image to be processed to obtain eye feature information of the target object;
    determining, according to the eye feature information, the gaze stop information of the gaze point of the target object in the coordinate system of the image acquisition device;
    the determining a target area from multiple candidate areas of the display interface according to the gaze stop information of the target object comprises:
    determining the target area according to the gaze stop information, based on a pre-established mapping relationship between the coordinate system of the image acquisition device and the coordinate system of the display interface.
  6. The method according to any one of claims 1 to 5, wherein the candidate area comprises at least one selectable target, and the displaying the target area as a selected area through the display interface comprises:
    in response to a selection box not being located in the target area, moving the selection box to the target area, and displaying the selectable target at a preset position in the target area as a selected area;
    in response to the selection box being located in the target area, displaying the selectable target currently selected by the selection box as a selected area.
  7. The method according to claim 6, wherein the displaying the selectable target at a preset position in the target area as a selected area comprises:
    according to historical display interface data, confirming the selectable target selected the most times among the at least one selectable target as the selectable target at the preset position, and displaying the selectable target at the preset position as a selected area.
  8. The method according to claim 6, wherein the displaying the selectable target at a preset position in the target area as a selected area comprises:
    obtaining, according to the gaze stop information of the target object, the distance between the gaze point of the target object and each of the at least one selectable target, confirming the selectable target corresponding to the minimum distance as the selectable target at the preset position, and displaying the selectable target at the preset position as a selected area.
  9. The method according to any one of claims 6 to 8, wherein after the displaying the target area as a selected area through the display interface, the method further comprises:
    moving the selection box on the display interface in response to a received operation signal.
  10. The method according to any one of claims 1 to 9, wherein the displaying the target area as a selected area through the display interface comprises:
    determining, according to the gaze stop information of the target object, the dwell time for which the gaze point of the target object stays in the target area;
    in response to the dwell time being not less than a preset duration threshold, displaying the target area as a selected area through the display interface.
  11. A display interface selection apparatus, comprising:
    an acquisition part configured to acquire an image to be processed collected by an image acquisition device;
    a detection part configured to detect gaze stop information of a target object from the image to be processed;
    an area determining part configured to determine a target area from multiple candidate areas of a display interface of an electronic device according to the gaze stop information of the target object;
    a selected part configured to display the target area as a selected area through the display interface.
  12. An electronic device, comprising:
    a display having a display interface;
    an image acquisition device;
    a processor; and
    a memory storing computer instructions which, when executed by the processor, implement the display interface selection method according to any one of claims 1 to 10.
  13. A storage medium storing computer instructions which, when executed by a processor, implement the display interface selection method according to any one of claims 1 to 10.
  14. A computer program product, comprising computer readable code, wherein when the computer readable code runs in an electronic device, a processor in the electronic device executes the display interface selection method according to any one of claims 1 to 10.
PCT/CN2021/134293 2021-06-30 2021-11-30 显示界面选择方法、装置、设备、存储介质及程序产品 WO2023273138A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110736403.2 2021-06-30
CN202110736403.2A CN113467614A (zh) 2021-06-30 2021-06-30 显示界面选择方法及装置

Publications (1)

Publication Number Publication Date
WO2023273138A1

Family

ID=77876558

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/134293 WO2023273138A1 (zh) 2021-06-30 2021-11-30 显示界面选择方法、装置、设备、存储介质及程序产品

Country Status (2)

Country Link
CN (1) CN113467614A (zh)
WO (1) WO2023273138A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116382549A (zh) * 2023-05-22 2023-07-04 昆山嘉提信息科技有限公司 基于视觉反馈的图像处理方法及装置

Families Citing this family (1)

Publication number Priority date Publication date Assignee Title
CN113467614A (zh) * 2021-06-30 2021-10-01 北京市商汤科技开发有限公司 显示界面选择方法及装置

Citations (5)

Publication number Priority date Publication date Assignee Title
CN104822005A (zh) * 2014-01-30 2015-08-05 京瓷办公信息系统株式会社 电子设备以及操作画面显示方法
US20170330343A1 (en) * 2016-05-10 2017-11-16 Fujitsu Limited Sight line identification apparatus and sight line identification method
CN108897589A (zh) * 2018-05-31 2018-11-27 刘国华 显示设备中人机交互方法、装置、计算机设备和存储介质
CN111881763A (zh) * 2020-06-30 2020-11-03 北京小米移动软件有限公司 确定用户注视位置的方法、装置、存储介质和电子设备
CN113467614A (zh) * 2021-06-30 2021-10-01 北京市商汤科技开发有限公司 显示界面选择方法及装置

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
CN111680503A (zh) * 2020-06-08 2020-09-18 腾讯科技(深圳)有限公司 文本处理方法、装置、设备及计算机可读存储介质

Patent Citations (5)

Publication number Priority date Publication date Assignee Title
CN104822005A (zh) * 2014-01-30 2015-08-05 京瓷办公信息系统株式会社 电子设备以及操作画面显示方法
US20170330343A1 (en) * 2016-05-10 2017-11-16 Fujitsu Limited Sight line identification apparatus and sight line identification method
CN108897589A (zh) * 2018-05-31 2018-11-27 刘国华 显示设备中人机交互方法、装置、计算机设备和存储介质
CN111881763A (zh) * 2020-06-30 2020-11-03 北京小米移动软件有限公司 确定用户注视位置的方法、装置、存储介质和电子设备
CN113467614A (zh) * 2021-06-30 2021-10-01 北京市商汤科技开发有限公司 显示界面选择方法及装置

Cited By (2)

Publication number Priority date Publication date Assignee Title
CN116382549A (zh) * 2023-05-22 2023-07-04 昆山嘉提信息科技有限公司 基于视觉反馈的图像处理方法及装置
CN116382549B (zh) * 2023-05-22 2023-09-01 昆山嘉提信息科技有限公司 基于视觉反馈的图像处理方法及装置

Also Published As

Publication number Publication date
CN113467614A (zh) 2021-10-01

Similar Documents

Publication Publication Date Title
CN106406710B (zh) 一种录制屏幕的方法及移动终端
WO2023273138A1 (zh) 显示界面选择方法、装置、设备、存储介质及程序产品
US20030095154A1 (en) Method and apparatus for a gesture-based user interface
JP2012108771A (ja) 画面操作システム
US9961394B2 (en) Display apparatus, controlling method thereof, and display system
CN111475059A (zh) 基于近距离传感器和图像传感器的手势检测
US20170053448A1 (en) Display apparatus and controlling method thereof
JP2011028366A (ja) 操作制御装置および操作制御方法
US11416068B2 (en) Method and apparatus for human-computer interaction in display device, and computer device and storage medium
CN108462729B (zh) 实现终端设备交互的方法和装置、终端设备及服务器
JP5358548B2 (ja) ジェスチャ認識装置
JP2012238293A (ja) 入力装置
KR20150117820A (ko) 영상 표시 방법 및 전자 장치
CN111596760A (zh) 操作控制方法、装置、电子设备及可读存储介质
CN111656313A (zh) 屏幕显示切换方法、显示设备、可移动平台
CN112835506B (zh) 一种显示设备及其控制方法
CN117918057A (zh) 显示设备及设备控制方法
US20160291804A1 (en) Display control method and display control device
US20230384868A1 (en) Display apparatus
US20180239440A1 (en) Information processing apparatus, information processing method, and program
CN112860212A (zh) 一种音量调节方法及显示设备
JP2009087095A (ja) 電子機器の制御装置、制御方法及び制御プログラム
JP2021015637A (ja) 表示装置
JP5229928B1 (ja) 注視位置特定装置、および注視位置特定プログラム
US20090059015A1 (en) Information processing device and remote communicating system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21948064

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21948064

Country of ref document: EP

Kind code of ref document: A1