WO2015078126A1 - A positioning method and apparatus (一种定位的方法及装置) - Google Patents

A positioning method and apparatus (一种定位的方法及装置)

Info

Publication number
WO2015078126A1
WO2015078126A1 (PCT/CN2014/075008)
Authority
WO
WIPO (PCT)
Prior art keywords
user
area
screen
projection area
gaze
Prior art date
Application number
PCT/CN2014/075008
Other languages
English (en)
French (fr)
Inventor
叶敏
刘远旺
贺真
Original Assignee
华为技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司 filed Critical 华为技术有限公司
Priority to US14/532,749 priority Critical patent/US9971413B2/en
Publication of WO2015078126A1 publication Critical patent/WO2015078126A1/zh

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/013 Eye tracking input arrangements
    • G06F 3/017 Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/04842 Selection of displayed objects or displayed text elements
    • G06F 3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488 Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser, e.g. input of commands through traced gestures

Definitions

  • The present invention relates to the field of computer technologies, and in particular, to a positioning method and apparatus. Background
  • As a terminal input device, the touch screen is currently the simplest and most convenient way of human-computer interaction.
  • In the prior art, when the terminal screen displays the target application icon or target function button the user wants to select, the commonly used method is: the user directly taps the target application icon or target function button on the terminal screen.
  • When application icons or function buttons are densely arranged on the terminal screen, the user may tap the wrong one and select another application icon or function button, which the terminal will then open.
  • The user then closes the wrongly opened application icon or function button, taps the target application icon or target function button on the terminal screen again, and repeats this cycle until the target application icon or target function button is selected.
  • Embodiments of the present invention provide a positioning method and apparatus that make the operation simple and less time-consuming when selecting a target application icon or target function button.
  • In a first aspect, an embodiment of the present invention provides a positioning method, including:
  • detecting an operation of a user gazing at a screen, and obtaining a gaze area at which the user gazes on the screen; detecting a hovering gesture operation of the user pressing the screen, and obtaining a projection area where the user presses the screen;
  • when the gaze area does not include the projection area, re-detecting a hovering gesture operation of the user pressing the screen to obtain a new projection area where the user presses the screen;
  • when the gaze area includes the projection area, processing an application icon or a function button included in a matching area, where the matching area is the projection area.
  • After the projection area where the user presses the screen is obtained, the method further includes: determining whether the gaze area includes the projection area.
  • the step of processing an application icon or a function button included in the matching area includes:
  • when the gaze area includes the projection area, determining the application icon or function button in the matching area according to the matching area;
  • the application corresponding to the application icon is opened, or the content corresponding to the function button is displayed.
  • Before the operation of the user gazing at the screen is detected and the gaze area is obtained, the method further includes:
  • the eyeball positioning function is turned on so that the gaze area of the user can be obtained.
  • the enabling eye positioning function includes:
  • When the user selects an eyeball positioning function option, the eyeball positioning function is turned on; or, prompt information is sent to the user, where the prompt information is used to remind the user to turn on the eyeball positioning function, information for turning on the eyeball positioning function sent by the user is received, and the eyeball positioning function is turned on.
  • an embodiment of the present invention provides a device for positioning, including:
  • a first detecting unit configured to detect an operation of the user watching the screen
  • a first acquiring unit configured to obtain a gaze area of the user gazing screen
  • a second detecting unit configured to detect a hovering gesture operation of the user pressing the screen
  • a second acquiring unit configured to obtain a projection area in which the user presses the screen
  • the second acquiring unit is further configured to obtain the projection area where the user presses the screen; and a processing unit, configured to process an application icon or a function button included in a matching area when the gaze area includes the projection area,
  • the matching area is the projection area.
  • the device further includes: a determining unit, configured to determine whether the gaze area includes the projection area.
  • the processing unit includes: a determining module, configured to: when the gaze area includes the projection area, determine a corresponding match according to the matching area Application icons or function buttons in the area;
  • a processing module configured to open an application corresponding to the application icon, or display content corresponding to the function button.
  • the device further includes: an opening unit, configured to enable an eyeball positioning function, so that the gaze area of the user can be obtained.
  • The opening unit specifically includes:
  • a detecting module, configured to detect that the user selects an eyeball positioning function option
  • a sending module configured to send prompt information to the user, where the prompt information is used to remind the user to open an eyeball positioning function
  • a receiving module, configured to receive the information, sent by the user, for turning on the eyeball positioning function, where the opening module is further configured to turn on the eyeball positioning function.
  • The embodiments of the present invention provide a positioning method and device: an operation of a user gazing at a screen is detected to obtain a gaze area, and a hovering gesture operation of the user pressing the screen is detected to obtain a projection area where the user presses the screen;
  • when the gaze area does not include the projection area, the hovering gesture operation of the user pressing the screen is re-detected to obtain a new projection area;
  • when the gaze area includes the projection area, the application icon or function button included in the matching area is processed. Compared with the prior art, the embodiments of the present invention make the operation simple and less time-consuming when selecting the target application icon or target function button.
  • FIG. 1 is a flowchart of a positioning method according to an embodiment of the present invention;
  • FIG. 2A is a schematic diagram in which the content displayed on a screen is webpage content according to another embodiment of the present invention;
  • FIG. 2B is a schematic diagram in which the content displayed on a screen is a menu list according to another embodiment of the present invention;
  • FIG. 2C is a schematic diagram in which the content displayed on a screen is input content according to another embodiment of the present invention;
  • FIG. 2D is a schematic diagram in which the content displayed on a screen is desktop icons according to another embodiment of the present invention;
  • FIG. 3 is a schematic diagram of a gaze area generated by a terminal by tracking the user's eyeball and a projection area generated by the terminal from a hovering touch gesture according to another embodiment of the present invention;
  • FIG. 4 is a flowchart of a positioning method according to another embodiment of the present invention;
  • FIG. 5 is a block diagram of a positioning apparatus according to another embodiment of the present invention;
  • FIG. 6 is a block diagram of another positioning apparatus according to another embodiment of the present invention;
  • FIG. 7 is a block diagram of another positioning apparatus according to another embodiment of the present invention. Detailed Description
  • An embodiment of the present invention provides a positioning method. The method is performed by a terminal, which may be a mobile phone, a tablet computer, a media player, a television, a computer, a set-top box, or the like. As shown in FIG. 1, the method includes:
  • Step 101 Detect an operation of the user gazing at the screen, and obtain a gaze area where the user looks at the screen.
  • The content displayed on the screen in this step may be densely arranged content for which selection errors are likely when choosing a target.
  • As shown in FIG. 2A, the content displayed on the screen may be webpage content; as shown in FIG. 2B, it may be a menu list; as shown in FIG. 2C, it may be input content; as shown in FIG. 2D, it may be desktop icons. Of course, the content displayed on the screen may also be other content, which is not described here again.
  • The camera of the terminal automatically captures the eye movement of the user and generates a gaze area from the obtained gaze direction of the user's eyeball, as shown in FIG. 3.
  • the gaze area may be a selection area to be matched.
  • the selection area to be matched may be displayed on the screen, or the selection area to be matched may not be displayed on the screen, but the terminal saves the selection area to be matched to further determine the target selected by the user.
  • The threshold range of the gaze area may be the original slice extent of the gazed-at object, such as an interface element, an icon, or a label.
  • For example, the IOS (Internet work Operating System) icon size is 120 pixels * 120 pixels, in which case the gaze area threshold range is the icon slice size. The main role of eyeball positioning is to pre-select the gazed-at object and generate the gaze area.
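  The gaze-area sizing described above can be sketched as follows. This is a minimal illustration that assumes the gaze area is a rectangle of the icon-slice size centered on an estimated gaze point; the `Rect` type, function name, and the 120-pixel default are stand-ins for illustration, not anything specified by the patent.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rect:
    """Axis-aligned screen rectangle in pixels."""
    left: int
    top: int
    width: int
    height: int

def gaze_area_from_point(gaze_x: int, gaze_y: int,
                         slice_w: int = 120, slice_h: int = 120) -> Rect:
    """Center a gaze area of the icon-slice size on the estimated gaze point."""
    return Rect(gaze_x - slice_w // 2, gaze_y - slice_h // 2, slice_w, slice_h)
```

  With the 120 * 120 pixel slice of the example, a gaze point at (300, 400) yields a gaze area whose top-left corner is (240, 340).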
  • Step 102 Detect a hovering gesture operation in which the user presses the screen to obtain a projection area in which the user presses the screen.
  • When the user's finger hovers over the target to be selected on the terminal screen, a projection area is generated on the terminal screen, as shown in FIG. 3. The projection area may be a hover-touch selection area of 10 pixels * 10 pixels, and the hover-touch selection area is expanded by 20%-80% to help users select the desired object more quickly.
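  The 20%-80% expansion of the hover-touch selection area might look like the following sketch; the function name, the tuple representation of the area, and the validation of the factor range are assumptions for illustration only.

```python
def expand_hover_area(width: float, height: float, factor: float) -> tuple:
    """Expand a hover-touch selection area by a factor in the 20%-80% range.

    A 10 x 10 pixel area expanded by 50% becomes 15 x 15 pixels.
    """
    if not 0.2 <= factor <= 0.8:
        raise ValueError("expansion factor assumed to lie in the 20%-80% range")
    return (width * (1 + factor), height * (1 + factor))
```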
  • Step 103: When the gaze area does not include the projection area, re-detect the hovering gesture operation of the user pressing the screen to obtain a new projection area where the user presses the screen.
  • In this case, the solution provided by this embodiment of the present invention will not open the wrong application icon or activate the wrong function button. The terminal then re-detects the hovering gesture operation of the user pressing the screen and obtains a new projection area, until the judgment result that the gaze area includes the projection area is obtained. For example, when the content displayed on the terminal screen is a menu list and the user wants to open the music player to listen to music, but the projection area obtained through the hovering operation points to another icon in the list, such as a video player, the gaze area obtained through the eyeball positioning function will not include that projection area, and the terminal will not open the video player pointed to by the projection area.
  • In the prior art, the terminal would open the wrongly selected application icon or function button, after which the user must close it and the terminal must re-detect the hovering gesture operation of the user pressing the screen.
  • The solution provided by this embodiment of the present invention does not open the wrongly selected application icon or function button; it simply re-detects the hovering gesture operation of the user pressing the screen, so that selecting the target application icon or target function button is simple and takes less time.
  • Step 104: When the gaze area includes the projection area, process the application icon or function button included in the matching area.
  • the matching area in this step is a projection area.
  • After obtaining the projection area, the terminal determines whether the gaze area includes the projection area.
  • When the determination result is that the gaze area includes the projection area, the matching area of the gaze area and the projection area is the projection area, and the terminal starts or executes the on-screen content corresponding to the projection area.
  • For example, the user wants to open the music player to listen to music, and the projection area obtained through the hovering operation points to the target icon in the list, that is, the music player. When the gaze area obtained through the eyeball positioning function includes the projection area obtained through the gesture operation, the terminal opens the music player pointed to by the projection area.
  • This embodiment of the present invention provides a positioning method: an operation of the user gazing at the screen is detected to obtain a gaze area, and a hovering gesture operation of the user pressing the screen is detected to obtain a projection area; when the gaze area does not include the projection area, the hovering gesture operation is re-detected to obtain a new projection area; when the gaze area includes the projection area, the application icon or function button included in the matching area is processed. Selecting the target application icon or target function button is therefore simple and takes less time.
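  The detect-compare-retry flow of steps 101-104 can be sketched as a loop. This is a hedged sketch, not the patent's implementation: areas are assumed to be rectangles in `(left, top, right, bottom)` form, and `detect_gaze_area`, `detect_projection_area`, and `process_match` are hypothetical callbacks standing in for the terminal's eye tracker, hover-gesture detector, and icon/button handler.

```python
def contains(outer, inner):
    """True if rectangle `outer` (left, top, right, bottom) fully contains `inner`."""
    return (outer[0] <= inner[0] and outer[1] <= inner[1]
            and outer[2] >= inner[2] and outer[3] >= inner[3])

def locate_target(detect_gaze_area, detect_projection_area, process_match,
                  max_attempts=10):
    """Retry hover detection until the gaze area contains the projection area."""
    gaze = detect_gaze_area()                  # step 101: gaze area from eye tracking
    for _ in range(max_attempts):
        projection = detect_projection_area() # step 102 (and re-detection, step 103)
        if contains(gaze, projection):        # step 104: matching area = projection area
            return process_match(projection)
    return None                                # bounded retries added for the sketch
```

  A stray hover that lands outside the gaze area is simply ignored and re-detected, which is the behavior the text contrasts with the prior art's open-then-close cycle.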
  • Another embodiment of the present invention provides a positioning method. As shown in FIG. 4, the method includes:
  • Step 401: When the terminal detects that the user turns on the eyeball positioning function option, the terminal turns on the eyeball positioning function.
  • The eyeball positioning function in the terminal may be turned on through the function option.
  • When the content displayed on the terminal screen is a webpage, an e-book, a menu list, desktop icons, or the like, the content on the screen can be considered densely arranged, and the eyeball positioning function needs to be turned on.
  • In this case, when the terminal detects that the eyeball positioning function option has been turned on, the terminal automatically turns on the eyeball positioning function.
  • Step 402 The terminal sends a prompt message to the user.
  • the prompt information is used to remind the user to turn on the eyeball positioning function.
  • The terminal automatically detects whether the eyeball positioning function option is turned on, and when the detection result is that the option is not turned on, the terminal sends prompt information to the user to remind the user to turn on the eyeball positioning function.
  • Step 403 The terminal receives the information about the eyeball positioning function sent by the user, and turns on the eyeball positioning function.
  • Step 401 on the one hand, and steps 402 and 403 on the other, are two parallel alternatives: two different ways to turn on eyeball positioning. When the eyeball positioning function is turned on, only step 401 may be performed, or only steps 402 and 403 may be performed. Steps 402 and 403 are shown in dashed boxes in the figure to indicate that they are optional steps; of course, step 401 could equally be shown in a dashed box.
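  The two parallel enablement paths (step 401 versus steps 402-403) reduce to a simple either-or check, sketched below. `option_selected` and `prompt_user` are hypothetical placeholders for the terminal's settings state and its prompt dialog; neither name comes from the patent.

```python
def ensure_eye_tracking(option_selected: bool, prompt_user) -> bool:
    """Return True once the eyeball positioning function is on.

    Path 1 (step 401): the user has already selected the function option.
    Path 2 (steps 402-403): prompt the user, then act on the reply.
    """
    if option_selected:            # step 401: option already turned on
        return True
    confirmed = prompt_user()      # step 402: send prompt information to the user
    return bool(confirmed)         # step 403: the user's reply turns it on (or not)
```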
  • Step 404 The terminal detects an operation of the user gazing at the screen, and obtains a gaze area where the user looks at the screen.
  • This step is the same as step 101 in FIG. 1; refer to the description of step 101, and details are not described here again.
  • Step 405 The terminal displays the gaze area.
  • This step is an optional step.
  • the terminal may display the gaze area on the screen, or may not display the gaze area on the screen, but the terminal still saves the gaze area.
  • Step 406 The terminal detects a hovering gesture operation of the user pressing the screen, and obtains a projection area where the user presses the screen.
  • this step is the same as step 102 in FIG. 1 .
  • Step 407 The terminal displays the projection area.
  • This step is an optional step.
  • the terminal can display the projection area on the screen according to different settings, or the projection area can not be displayed on the screen, but the terminal still saves the projection area.
  • Step 408 The terminal determines whether the gaze area includes a projection area.
  • step 409 when the gaze area includes the projection area, step 409 is performed; when the gaze area does not include the projection area, step 410 is performed.
  • Step 409 The terminal processes an application icon or a function button included in the matching area.
  • this step is the same as step 104 in FIG. 1 .
  • Step 410 The terminal re-detects a hovering gesture operation of the user pressing the screen to obtain a projection area of the user pressing the screen.
  • In this case, the solution provided by this embodiment of the present invention will not open the wrong application icon or activate the wrong function button. The terminal then re-detects the hovering gesture operation of the user pressing the screen and obtains a new projection area, until the judgment result that the gaze area includes the projection area is obtained.
  • For example, when the content displayed on the terminal screen is a menu list and the user wants to open the music player to listen to music, but the projection area obtained through the hovering operation points to another icon in the list, such as a video player, the gaze area obtained through the eyeball positioning function will not include that projection area, and the terminal will not open the video player pointed to by the projection area.
  • This means that the gaze area does not match the projection area. The hovering gesture operation of the user pressing the screen is re-detected to obtain a new projection area, and steps 407-410 are then performed again, so that the target the user wants to select can be accurately located.
  • This embodiment of the present invention provides a positioning method: an operation of the user gazing at the screen is detected to obtain a gaze area, and a hovering gesture operation of the user pressing the screen is detected to obtain a projection area; when the gaze area does not include the projection area, the hovering gesture operation is re-detected to obtain a new projection area; when the gaze area includes the projection area, the application icon or function button included in the matching area is processed. Selecting the target application icon or target function button is therefore simple and takes less time.
  • the embodiment of the present invention provides a device for positioning. As shown in FIG. 5, the device includes: a first detecting unit 501, a first acquiring unit 502, a second detecting unit 503, a second acquiring unit 504, and a processing unit 505.
  • the first detecting unit 501 is configured to detect an operation of the user looking at the screen.
  • The content displayed on the screen here may be densely arranged content for which selection errors are likely when choosing a target.
  • As shown in FIG. 2A, the content displayed on the screen may be webpage content; as shown in FIG. 2B, it may be a menu list; as shown in FIG. 2C, it may be input content; as shown in FIG. 2D, it may be desktop icons. Of course, the content displayed on the screen may also be other content, which is not described here again.
  • the first obtaining unit 502 is configured to obtain a gaze area of the user gazing screen.
  • The camera of the terminal automatically captures the eye movement of the user and generates a gaze area from the obtained gaze direction of the user's eyeball, as shown in FIG. 3.
  • the gaze area may be a selection area to be matched.
  • the selection area to be matched may be displayed on the screen, or the selection area to be matched may not be displayed on the screen, but the terminal saves the selection area to be matched to further determine the target selected by the user.
  • the second detecting unit 503 is configured to detect a hovering gesture operation of the user pressing the screen.
  • the second obtaining unit 504 is configured to obtain a projection area in which the user presses the screen.
  • this step is the same as step 102 in FIG. 1 .
  • the second detecting unit 503 is further configured to re-detect a hovering gesture operation of the user pressing the screen when the gaze area does not include the projection area.
  • When the gaze area does not include the projection area, the solution provided by this embodiment of the present invention will not open the wrong application icon or activate the wrong function button; the terminal then re-detects the hovering gesture operation of the user pressing the screen.
  • the second obtaining unit 504 is further configured to obtain a projection area in which the user presses the screen.
  • the processing unit 505 is configured to process, when the gaze area includes the projection area, an application icon or a function button included in the matching area, where the matching area is the projection area.
  • this step is the same as step 104 in FIG. 1 .
  • the device further includes: a determining unit 506.
  • the determining unit 506 is configured to determine whether the gaze area includes the projection area.
  • the processing unit 505 includes: a determining module 5051 and a processing module 5052.
  • The determining module 5051 is configured to, when the gaze area includes the projection area, determine the application icon or function button in the matching area according to the matching area.
  • The processing module 5052 is configured to open the application corresponding to the application icon, or display the content corresponding to the function button.
  • the device further includes: an opening unit 507.
  • the opening unit 507 is configured to turn on the eyeball positioning function so that the gaze area of the user can be obtained.
  • the opening unit 507 specifically includes: a detecting module 5071, an opening module 5072, a sending module 5073, and a receiving module 5074.
  • The detecting module 5071 is configured to detect that the user selects an eyeball positioning function option.
  • The opening module 5072 is configured to turn on the eyeball positioning function. Alternatively,
  • the sending module 5073 is configured to send prompt information to the user, where the prompt information is used to remind the user to turn on the eyeball positioning function.
  • The receiving module 5074 is configured to receive the information, sent by the user, for turning on the eyeball positioning function.
  • the opening module 5072 is further configured to open the eyeball positioning function.
  • An embodiment of the present invention provides a positioning device: an operation of the user gazing at the screen is detected to obtain a gaze area, and a hovering gesture operation of the user pressing the screen is detected to obtain a projection area; when the gaze area does not include the projection area, the hovering gesture operation is re-detected to obtain a new projection area; when the gaze area includes the projection area, the application icon or function button included in the matching area is processed. Selecting the target application icon or target function button is therefore simple and takes less time.
  • An embodiment of the present invention provides a device for positioning. As shown in FIG. 7, the device includes: a memory 701 and a processor 702.
  • the memory 701 is configured to store information including a program routine.
  • a processor 702, coupled to the memory 701 and configured to control execution of the program routine, which specifically includes: detecting an operation of the user gazing at the screen to obtain a gaze area at which the user gazes; detecting a hovering gesture operation of the user pressing the screen to obtain a projection area where the user presses the screen; when the gaze area does not include the projection area, re-detecting the hovering gesture operation of the user pressing the screen to obtain a new projection area; and when the gaze area includes the projection area, processing an application icon or function button included in the matching area, where the matching area is the projection area.
  • the processor 702 is further configured to determine whether the gaze area includes the projection area.
  • The processor 702 is further configured to: when the gaze area includes the projection area, determine the application icon or function button in the matching area according to the matching area, and then open the application corresponding to the application icon, or display the content corresponding to the function button.
  • The processor 702 is further configured to turn on the eyeball positioning function before the operation of the user gazing at the screen is detected, so that the gaze area of the user can be obtained.
  • The processor 702 is further configured to: turn on the eyeball positioning function when the user selects an eyeball positioning function option; or send prompt information to the user, where the prompt information is used to remind the user to turn on the eyeball positioning function, receive the information sent by the user for turning on the eyeball positioning function, and turn on the eyeball positioning function.
  • An embodiment of the present invention provides a positioning device: an operation of the user gazing at the screen is detected to obtain a gaze area, and a hovering gesture operation of the user pressing the screen is detected to obtain a projection area; when the gaze area does not include the projection area, the hovering gesture operation is re-detected to obtain a new projection area; when the gaze area includes the projection area, the application icon or function button included in the matching area is processed, so that the operation is simple and takes less time when selecting the target application icon or target function button.
  • The device embodiments described above are merely illustrative. The units described as separate components may or may not be physically separate; that is, they may be located in one place or distributed across multiple network units. Some or all of the modules may be selected according to actual needs to achieve the objectives of the solutions of the embodiments. Those of ordinary skill in the art can understand and implement them without creative effort.
  • The present invention can be implemented by software plus the necessary general-purpose hardware, and of course also by dedicated hardware such as a dedicated CPU, dedicated memory, and dedicated components, but in many cases the former is the better implementation.
  • The technical solution of the present invention, or the part that contributes to the prior art, may be embodied in the form of a software product stored in a readable storage medium, such as a computer floppy disk, a USB flash drive, a removable hard disk, a read-only memory, a random access memory, a magnetic disk, or an optical disc, and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform the methods described in the embodiments of the present invention.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)
  • Position Input By Displaying (AREA)

Abstract

The present invention discloses a positioning method and apparatus, relates to the field of computer technologies, and enables a target application icon or a target function button to be selected with a simple operation in little time. In the embodiments of the present invention, an operation of a user gazing at a screen is detected to obtain a gaze area at which the user gazes on the screen, and a floating gesture operation of the user pressing the screen is detected to obtain a projection area in which the user presses the screen. When the gaze area does not include the projection area, the floating gesture operation of the user pressing the screen is detected again to obtain the projection area; when the gaze area includes the projection area, an application icon or a function button included in a matching area is processed. The solutions provided in the embodiments of the present invention are applicable to positioning.

Description

Positioning Method and Apparatus

Technical Field

The present invention relates to the field of computer technologies, and in particular, to a positioning method and apparatus.

Background

As the newest type of terminal input device, the touchscreen is currently the simplest and most convenient means of human-machine interaction. In the prior art, when a target application icon or a target function button that a user wants to select is displayed on a terminal screen, the commonly used method is as follows: the user directly taps the target application icon or target function button on the terminal screen. When the application icons or function buttons on the terminal screen are densely arranged, the user may tap incorrectly and select another application icon or function button, and the terminal then opens the selected application icon or function button. In this case, the user has to close the selected application icon or function button and tap the target application icon or target function button on the terminal screen again, repeating the operation until the target application icon or target function button is selected.

However, when the prior-art approach of directly tapping the target application icon or target function button on the screen is used, an operation on a wrong option cannot be avoided. When a wrong option is tapped, the application opened by the wrong tap has to be closed first, or the entered content has to be deleted, which makes the operation cumbersome and time-consuming and inconveniences the user.
Summary

Embodiments of the present invention provide a positioning method and apparatus, so that a target application icon or a target function button can be selected with a simple operation in little time.

According to a first aspect, an embodiment of the present invention provides a positioning method, including:
detecting an operation of a user gazing at a screen, to obtain a gaze area at which the user gazes on the screen;
detecting a floating gesture operation of the user pressing the screen, to obtain a projection area in which the user presses the screen;
when the gaze area does not include the projection area, re-detecting the floating gesture operation of the user pressing the screen, to obtain the projection area in which the user presses the screen; and
when the gaze area includes the projection area, processing an application icon or a function button included in a matching area, where the matching area is the projection area.

In a first possible implementation, with reference to the first aspect, after the detecting a floating gesture operation of the user pressing the screen, to obtain a projection area in which the user presses the screen, the method further includes: determining whether the gaze area includes the projection area.

In a second possible implementation, with reference to the first aspect, the step of processing, when the gaze area includes the projection area, an application icon or a function button included in a matching area includes:
when the gaze area includes the projection area, determining, according to the matching area, a corresponding application icon or function button in the matching area; and
opening an application program corresponding to the application icon, or displaying content corresponding to the function button.

In a third possible implementation, with reference to the first aspect, before the detecting an operation of a user gazing at a screen, to obtain a gaze area at which the user gazes on the screen, the method further includes:
enabling an eyeball positioning function, so that the gaze area of the user can be obtained.

In a fourth possible implementation, with reference to the third possible implementation of the first aspect, the enabling an eyeball positioning function includes:
enabling the eyeball positioning function when it is detected that the user selects an eyeball positioning function option; or
sending prompt information to the user, where the prompt information is used to remind the user to enable the eyeball positioning function; and receiving information that is sent by the user for enabling the eyeball positioning function, and enabling the eyeball positioning function.
According to a second aspect, an embodiment of the present invention provides a positioning apparatus, including:
a first detecting unit, configured to detect an operation of a user gazing at a screen;
a first acquiring unit, configured to obtain a gaze area at which the user gazes on the screen;
a second detecting unit, configured to detect a floating gesture operation of the user pressing the screen;
a second acquiring unit, configured to obtain a projection area in which the user presses the screen;
the second detecting unit being further configured to re-detect, when the gaze area does not include the projection area, the floating gesture operation of the user pressing the screen;
the second acquiring unit being further configured to obtain the projection area in which the user presses the screen; and
a processing unit, configured to process, when the gaze area includes the projection area, an application icon or a function button included in a matching area, where the matching area is the projection area.

In a first possible implementation, with reference to the second aspect, the apparatus further includes: a determining unit, configured to determine whether the gaze area includes the projection area.

In a second possible implementation, with reference to the second aspect, the processing unit includes:
a determining module, configured to determine, when the gaze area includes the projection area, a corresponding application icon or function button in the matching area according to the matching area; and
a processing module, configured to open an application program corresponding to the application icon, or display content corresponding to the function button.

In a third possible implementation, with reference to the second aspect, the apparatus further includes: an enabling unit, configured to enable an eyeball positioning function, so that the gaze area of the user can be obtained.

In a fourth possible implementation, with reference to the third possible implementation of the second aspect, the enabling unit specifically includes:
a detecting module, configured to detect that the user selects an eyeball positioning function option; and
an enabling module, configured to enable the eyeball positioning function; or
a sending module, configured to send prompt information to the user, where the prompt information is used to remind the user to enable the eyeball positioning function; and
a receiving module, configured to receive information that is sent by the user for enabling the eyeball positioning function, where the enabling module is further configured to enable the eyeball positioning function.

The embodiments of the present invention provide a positioning method and apparatus. An operation of a user gazing at a screen is detected to obtain a gaze area at which the user gazes on the screen, and a floating gesture operation of the user pressing the screen is detected to obtain a projection area in which the user presses the screen. When the gaze area does not include the projection area, the floating gesture operation of the user pressing the screen is detected again to obtain the projection area; when the gaze area includes the projection area, an application icon or a function button included in a matching area is processed. Compared with the prior art, in which an application opened by tapping a wrong option has to be closed, or entered content has to be deleted, before the floating gesture operation of the user pressing the screen is detected again, the embodiments of the present invention allow a target application icon or a target function button to be selected with a simple operation in little time.

Brief Description of Drawings
To describe the technical solutions in the embodiments of the present invention or in the prior art more clearly, the following briefly introduces the accompanying drawings required for describing the embodiments or the prior art. Apparently, the accompanying drawings in the following description show merely some embodiments of the present invention, and a person of ordinary skill in the art may still derive other drawings from these accompanying drawings without creative efforts.

FIG. 1 is a flowchart of a positioning method according to an embodiment of the present invention;
FIG. 2A is a schematic diagram in which the content displayed on a screen is web page content according to another embodiment of the present invention;
FIG. 2B is a schematic diagram in which the content displayed on a screen is a menu list according to another embodiment of the present invention;
FIG. 2C is a schematic diagram in which the content displayed on a screen is input content according to another embodiment of the present invention;
FIG. 2D is a schematic diagram in which the content displayed on a screen is desktop icons according to another embodiment of the present invention;
FIG. 3 is a schematic diagram of a gaze area generated by a terminal through eyeball tracking and a projection area generated by the terminal through floating touch control of a gesture according to another embodiment of the present invention;
FIG. 4 is a flowchart of a positioning method according to another embodiment of the present invention;
FIG. 5 is a block diagram of a positioning apparatus according to another embodiment of the present invention;
FIG. 6 is a block diagram of another positioning apparatus according to another embodiment of the present invention;
FIG. 7 is a block diagram of another positioning apparatus according to another embodiment of the present invention.

Detailed Description
The following clearly and completely describes the technical solutions in the embodiments of the present invention with reference to the accompanying drawings in the embodiments of the present invention. Apparently, the described embodiments are merely some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative efforts shall fall within the protection scope of the present invention.

An embodiment of the present invention provides a positioning method. The method is executed by a terminal, which may specifically be a mobile phone, a tablet computer, a player, a television, a computer, a set-top box, or the like. As shown in FIG. 1, the method includes:

Step 101: Detect an operation of a user gazing at a screen, to obtain a gaze area at which the user gazes on the screen.

Optionally, the content displayed on the screen in this step may be content that is densely arranged and prone to errors when a target is selected. For example, as shown in FIG. 2A, the content displayed on the screen may be web page content; as shown in FIG. 2B, it may also be a menu list; as shown in FIG. 2C, it may also be input content; as shown in FIG. 2D, it may also be desktop icons. Certainly, the content displayed on the screen may also be other content, which is not enumerated here.

Optionally, when the user's eyes gaze at a target on the terminal screen, the photographing apparatus of the terminal automatically captures the movement of the user's eyeballs and generates a gaze area according to the obtained gaze direction of the user's eyeballs, as shown in FIG. 3. The gaze area may be a to-be-matched selection area. Depending on the settings, the to-be-matched selection area may or may not be displayed on the screen; in either case, the terminal stores the to-be-matched selection area so as to further determine the target selected by the user.

Optionally, the threshold range of the gaze area may be the original slice range of the gazed object, such as an interface, an icon, or a label. For example, an Internetwork Operating System (IOS) icon is 120 pixels × 120 pixels, and the threshold range of the gaze area is the slice size of that icon. The main function of eyeball positioning is to preselect the gazed object and generate the gaze area.
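The gaze-area generation described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: it assumes the eye tracker yields a single gaze point and snaps a rectangle of the gazed object's slice size (e.g. a 120 × 120-pixel icon) around it. The function name and coordinate convention are hypothetical.

```python
# Hypothetical sketch: derive a gaze area from an estimated gaze point.
# The area is sized to the slice of the gazed object (e.g. a 120x120 icon),
# mirroring the threshold range described above.

def gaze_area(gaze_x, gaze_y, slice_w=120, slice_h=120):
    """Return (left, top, right, bottom) of the gaze area centred on the
    estimated gaze point, sized to the gazed object's slice."""
    left = gaze_x - slice_w // 2
    top = gaze_y - slice_h // 2
    return (left, top, left + slice_w, top + slice_h)

print(gaze_area(300, 400))  # (240, 340, 360, 460)
```

In practice the gaze point would come from the terminal's camera-based eyeball tracking; here it is simply passed in as coordinates.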
Step 102: Detect a floating gesture operation of the user pressing the screen, to obtain a projection area in which the user presses the screen.

For example, when the user browses a web page on the terminal and holds a finger floating above the target to be selected on the terminal screen, a projection area is generated on the terminal screen, as shown in FIG. 3. The projection area may be a floating touch selection area. The size of the floating touch selection area may be 10 pixels × 10 pixels, and the floating touch selection area is virtually enlarged by 20%-80% accordingly, which helps the user select the desired object more quickly.
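The virtual enlargement of the floating touch selection area can be sketched as below. The 10-pixel base size and the 20%-80% range come from the description above; the clamping behaviour and function name are assumptions for illustration only.

```python
# Hypothetical sketch of the projection (floating-touch selection) area:
# start from the nominal 10x10-pixel area under the finger and virtually
# enlarge it by a factor between 20% and 80%, as described above.

def projection_area(cx, cy, base=10, enlarge=0.5):
    """Return (left, top, right, bottom) of the floating-touch selection
    area centred at (cx, cy), virtually enlarged by `enlarge` (0.2-0.8)."""
    enlarge = min(max(enlarge, 0.2), 0.8)   # keep within the 20%-80% range
    size = base * (1 + enlarge)
    half = size / 2
    return (cx - half, cy - half, cx + half, cy + half)

print(projection_area(100, 100, enlarge=0.5))  # (92.5, 92.5, 107.5, 107.5)
```

How the enlargement factor is chosen within the 20%-80% range (e.g. from hover height) is not specified in the text, so it is left as a plain parameter here.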
Step 103: When the gaze area does not include the projection area, re-detect the floating gesture operation of the user pressing the screen, to obtain the projection area in which the user presses the screen.

Optionally, when the terminal determines whether the gaze area includes the projection area and the determining result is that the gaze area does not include the projection area, the solution provided in this embodiment of the present invention may refrain from opening a wrong application icon or starting a wrong function button. The terminal then re-detects the floating gesture operation of the user pressing the screen and obtains the projection area in which the user presses the screen, until a determining result that the gaze area includes the projection area is obtained. For example, when the content displayed on the terminal screen is a menu list and the user wants to open a music player to listen to music, but the projection area obtained by the user through the floating operation points to another icon in the list, such as a video player, the gaze area obtained by the user through the eyeball positioning function will not include the projection area obtained through the gesture operation, and the terminal will not open the video player to which the projection area points.

In the prior art, when the user taps incorrectly and another application icon or function button on the terminal screen is selected, the terminal opens the selected application icon or function button, and the application program of the wrongly opened application icon has to be closed, or the wrongly enabled function button has to be disabled, before the terminal re-detects the floating gesture operation of the user pressing the screen. In contrast, in the solution provided in this embodiment of the present invention, when the user taps incorrectly, the terminal does not open the selected application icon or function button but re-detects the floating gesture operation of the user pressing the screen, so that the target application icon or target function button can be selected with a simple operation in little time.
Step 104: When the gaze area includes the projection area, process an application icon or a function button included in a matching area.

Optionally, the matching area in this step is the projection area.

Optionally, when the terminal determines whether the gaze area includes the projection area and the determining result is that the gaze area includes the projection area, the matching area of the gaze area and the projection area is the projection area, and the terminal decides to start or execute the screen content corresponding to the projection area.

For example, when the content displayed on the terminal screen is a menu list and the user wants to open a music player to listen to music, the projection area obtained by the user through the floating operation points to the target icon in the list, that is, the music player. If the gaze area obtained by the user through the eyeball positioning function includes the projection area obtained through the gesture operation, the terminal opens the music player to which the projection area points.
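The decision in steps 103 and 104 reduces to a rectangle-containment test: the target is processed only when the gaze area fully contains the projection area. The sketch below illustrates that test under the assumption that both areas are axis-aligned rectangles given as (left, top, right, bottom); the function names are illustrative, not from the patent.

```python
# Hypothetical sketch of the matching logic in steps 103-104: process the
# target only when the gaze area fully contains the projection area;
# otherwise signal that the floating gesture must be detected again.

def contains(outer, inner):
    """True if rectangle `outer` fully contains rectangle `inner`."""
    return (outer[0] <= inner[0] and outer[1] <= inner[1]
            and outer[2] >= inner[2] and outer[3] >= inner[3])

def match(gaze, projection):
    """Return the matching area (i.e. the projection area), or None to
    signal that the floating gesture operation should be re-detected."""
    return projection if contains(gaze, projection) else None

gaze = (240, 340, 360, 460)
print(match(gaze, (290, 390, 310, 410)))  # (290, 390, 310, 410) - contained
print(match(gaze, (500, 500, 520, 520)))  # None - re-detect the gesture
```

Returning None corresponds to the terminal refraining from opening the wrongly pointed-at icon and waiting for a new hover gesture.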
This embodiment of the present invention provides a positioning method. An operation of a user gazing at a screen is detected to obtain a gaze area at which the user gazes on the screen, and a floating gesture operation of the user pressing the screen is detected to obtain a projection area in which the user presses the screen. When the gaze area does not include the projection area, the floating gesture operation of the user pressing the screen is detected again to obtain the projection area; when the gaze area includes the projection area, an application icon or a function button included in a matching area is processed, so that in this embodiment of the present invention a target application icon or a target function button can be selected with a simple operation in little time.

An embodiment of the present invention provides a positioning method. As shown in FIG. 4, the method includes:

Step 401: When the terminal detects that the user enables an eyeball positioning function option, the terminal enables the eyeball positioning function.

Optionally, when the content displayed on the terminal screen is densely arranged and prone to misoperation, the eyeball positioning function in the terminal may be enabled. For example, when the content displayed on the screen is a web page, an e-book, a menu list, desktop icons, or the like, the content on the screen may be considered densely arranged, and the eyeball positioning function needs to be enabled.

Optionally, when the content displayed on the terminal screen is a web page, an e-book, a menu list, desktop icons, or the like, the content displayed on the screen may be considered densely arranged, and the eyeball positioning function needs to be enabled. When the user enables the eyeball positioning function option and the terminal automatically detects that the option has been enabled, the terminal automatically enables the eyeball positioning function.

Step 402: The terminal sends prompt information to the user.

Optionally, the prompt information is used to remind the user to enable the eyeball positioning function. When the content displayed on the terminal screen is densely arranged, the terminal automatically detects whether the eyeball positioning function option has been enabled; when the detection result is that the option has not been enabled, the terminal sends prompt information to the user to remind the user to enable the eyeball positioning function.

Step 403: The terminal receives the information that is sent by the user for enabling the eyeball positioning function, and enables the eyeball positioning function.

It should be noted that step 401, on the one hand, and steps 402 and 403, on the other, are two parallel branches and two different manners of enabling the eyeball positioning function. When the eyeball positioning function is enabled, only step 401 may be performed, or only steps 402 and 403 may be performed. In the accompanying drawing, steps 402 and 403 are represented by dashed boxes to indicate that they are optional steps; certainly, step 401 may also be represented by a dashed box.
Step 404: The terminal detects an operation of the user gazing at the screen, to obtain the gaze area at which the user gazes on the screen.

Optionally, this step is the same as step 101 in FIG. 1; for details, refer to the description of step 101, which is not repeated here.

Step 405: The terminal displays the gaze area.

This step is optional. Depending on the settings, the terminal may or may not display the gaze area on the screen; in either case, the terminal still stores the gaze area.

Step 406: The terminal detects the floating gesture operation of the user pressing the screen, to obtain the projection area in which the user presses the screen.

Optionally, this step is the same as step 102 in FIG. 1; for details, refer to the description of step 102, which is not repeated here.

Step 407: The terminal displays the projection area.

This step is optional. Depending on the settings, the terminal may or may not display the projection area on the screen; in either case, the terminal still stores the projection area.

Step 408: The terminal determines whether the gaze area includes the projection area.

In this step, when the gaze area includes the projection area, step 409 is performed; when the gaze area does not include the projection area, step 410 is performed.

Step 409: The terminal processes the application icon or the function button included in the matching area.

Optionally, this step is the same as step 104 in FIG. 1; for details, refer to the description of step 104, which is not repeated here.

Step 410: The terminal re-detects the floating gesture operation of the user pressing the screen, to obtain the projection area in which the user presses the screen.

Optionally, when the terminal determines whether the gaze area includes the projection area and the determining result is that the gaze area does not include the projection area, the solution provided in this embodiment of the present invention may refrain from opening a wrong application icon or starting a wrong function button. The terminal then re-detects the floating gesture operation of the user pressing the screen and obtains the projection area, until a determining result that the gaze area includes the projection area is obtained. For example, when the content displayed on the terminal screen is a menu list and the user wants to open a music player to listen to music, but the projection area obtained through the floating operation points to another icon in the list, such as a video player, the gaze area obtained through the eyeball positioning function will not include the projection area obtained through the gesture operation, and the terminal will not open the video player to which the projection area points.

Optionally, when the gaze area does not include the projection area, the gaze area does not match the projection area. In this case, the floating gesture operation of the user pressing the screen is detected again to obtain the projection area in which the user presses the screen, and the operations of step 407 to step 410 are then performed again, so that the target that the user wants to select can be located accurately.
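The step 407-410 loop amounts to repeating hover detection until the projection area falls inside the gaze area. A minimal sketch under stated assumptions: `detect_projection` is a hypothetical stand-in for the terminal's hover detection, rectangles are (left, top, right, bottom), and the retry cap is an illustration-only safeguard not mentioned in the text.

```python
# Hypothetical sketch of the step 407-410 loop: keep re-detecting the
# projection area until the gaze area contains it, then return the target.

def inside(outer, inner):
    """True if rectangle `outer` fully contains rectangle `inner`."""
    return (outer[0] <= inner[0] and outer[1] <= inner[1]
            and outer[2] >= inner[2] and outer[3] >= inner[3])

def locate(gaze, detect_projection, max_tries=10):
    """Repeat hover detection until the projection falls inside the gaze
    area; return the matching area, or None if no match within max_tries."""
    for _ in range(max_tries):
        projection = detect_projection()
        if inside(gaze, projection):
            return projection          # matching area == projection area
    return None

# Simulated detections: first a miss outside the gaze area, then a hit.
samples = iter([(500, 500, 520, 520), (290, 390, 310, 410)])
print(locate((240, 340, 360, 460), lambda: next(samples)))
# (290, 390, 310, 410)
```

On a real terminal the loop would of course be event-driven rather than polling, but the containment condition that gates processing is the same.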
This embodiment of the present invention provides a positioning method. An operation of a user gazing at a screen is detected to obtain a gaze area at which the user gazes on the screen, and a floating gesture operation of the user pressing the screen is detected to obtain a projection area in which the user presses the screen. When the gaze area does not include the projection area, the floating gesture operation of the user pressing the screen is detected again to obtain the projection area; when the gaze area includes the projection area, an application icon or a function button included in a matching area is processed, so that in this embodiment of the present invention a target application icon or a target function button can be selected with a simple operation in little time.

An embodiment of the present invention provides a positioning apparatus. As shown in FIG. 5, the apparatus includes: a first detecting unit 501, a first acquiring unit 502, a second detecting unit 503, a second acquiring unit 504, and a processing unit 505.

The first detecting unit 501 is configured to detect an operation of a user gazing at a screen.

Optionally, the content displayed on the screen in this case may be content that is densely arranged and prone to errors when a target is selected. For example, as shown in FIG. 2A, the content displayed on the screen may be web page content; as shown in FIG. 2B, it may also be a menu list; as shown in FIG. 2C, it may also be input content; as shown in FIG. 2D, it may also be desktop icons. Certainly, the content displayed on the screen may also be other content, which is not enumerated here.

The first acquiring unit 502 is configured to obtain the gaze area at which the user gazes on the screen.

Optionally, when the user's eyes gaze at a target on the terminal screen, the photographing apparatus of the terminal automatically captures the movement of the user's eyeballs and generates a gaze area according to the obtained gaze direction of the user's eyeballs, as shown in FIG. 3. The gaze area may be a to-be-matched selection area. Depending on the settings, the to-be-matched selection area may or may not be displayed on the screen; in either case, the terminal stores the to-be-matched selection area so as to further determine the target selected by the user.

The second detecting unit 503 is configured to detect the floating gesture operation of the user pressing the screen. The second acquiring unit 504 is configured to obtain the projection area in which the user presses the screen.

Optionally, this is the same as step 102 in FIG. 1; for details, refer to the description of step 102, which is not repeated here.

The second detecting unit 503 is further configured to re-detect, when the gaze area does not include the projection area, the floating gesture operation of the user pressing the screen.

Optionally, when the terminal determines whether the gaze area includes the projection area and the determining result is that the gaze area does not include the projection area, the solution provided in this embodiment of the present invention may refrain from opening a wrong application icon or starting a wrong function button; the terminal then re-detects the floating gesture operation of the user pressing the screen.

The second acquiring unit 504 is further configured to obtain the projection area in which the user presses the screen. The processing unit 505 is configured to process, when the gaze area includes the projection area, the application icon or the function button included in the matching area, where the matching area is the projection area.

Optionally, this is the same as step 104 in FIG. 1; for details, refer to the description of step 104, which is not repeated here.
Further optionally, as shown in FIG. 6, the apparatus further includes a determining unit 506.

After the second acquiring unit 504 obtains the projection area in which the user presses the screen, the determining unit 506 is configured to determine whether the gaze area includes the projection area.

Further optionally, the processing unit 505 includes a determining module 5051 and a processing module 5052. The determining module 5051 is configured to determine, when the gaze area includes the projection area, a corresponding application icon or function button in the matching area according to the matching area.

The processing module 5052 is configured to open the application program corresponding to the application icon, or display the content corresponding to the function button.

Further optionally, the apparatus further includes an enabling unit 507.

Before the first detecting unit 501 detects the operation of the user gazing at the screen, the enabling unit 507 is configured to enable the eyeball positioning function, so that the gaze area of the user can be obtained.

Further optionally, the enabling unit 507 specifically includes a detecting module 5071, an enabling module 5072, a sending module 5073, and a receiving module 5074.

The detecting module 5071 is configured to detect that the user selects the eyeball positioning function option.

The enabling module 5072 is configured to enable the eyeball positioning function. Alternatively,

the sending module 5073 is configured to send prompt information to the user, where the prompt information is used to remind the user to enable the eyeball positioning function.

The receiving module 5074 is configured to receive the information that is sent by the user for enabling the eyeball positioning function. The enabling module 5072 is further configured to enable the eyeball positioning function.

It should be noted that, for the apparatuses shown in FIG. 5 and FIG. 6, the specific implementation processes of the modules and the information exchange between the modules are based on the same inventive concept as the method embodiments of the present invention; refer to the method embodiments, and details are not repeated here.
This embodiment of the present invention provides a positioning apparatus. An operation of a user gazing at a screen is detected to obtain a gaze area at which the user gazes on the screen, and a floating gesture operation of the user pressing the screen is detected to obtain a projection area in which the user presses the screen. When the gaze area does not include the projection area, the floating gesture operation of the user pressing the screen is detected again to obtain the projection area; when the gaze area includes the projection area, an application icon or a function button included in a matching area is processed, so that in this embodiment of the present invention a target application icon or a target function button can be selected with a simple operation in little time.

An embodiment of the present invention provides a positioning apparatus. As shown in FIG. 7, the apparatus includes: a memory 701 and a processor 702.

The memory 701 is configured to store information including program routines.

The processor 702, coupled to the memory 701, is configured to control execution of the program routines, specifically including: detecting an operation of a user gazing at a screen, to obtain the gaze area at which the user gazes on the screen; detecting the floating gesture operation of the user pressing the screen, to obtain the projection area in which the user presses the screen; when the gaze area does not include the projection area, re-detecting the floating gesture operation of the user pressing the screen, to obtain the projection area in which the user presses the screen; and when the gaze area includes the projection area, processing the application icon or the function button included in the matching area, where the matching area is the projection area.

After the floating gesture operation of the user pressing the screen is detected and the projection area in which the user presses the screen is obtained, the processor 702 is further configured to determine whether the gaze area includes the projection area.

The processor 702 is further configured to: when the gaze area includes the projection area, determine, according to the matching area, the corresponding application icon or function button in the matching area; and then open the application program corresponding to the application icon, or display the content corresponding to the function button.

Before the operation of the user gazing at the screen is detected and the gaze area at which the user gazes on the screen is obtained, the processor 702 is further configured to enable the eyeball positioning function, so that the gaze area of the user can be obtained.

The processor 702 is further configured to: enable the eyeball positioning function when it is detected that the user selects the eyeball positioning function option; or send prompt information to the user, where the prompt information is used to remind the user to enable the eyeball positioning function, receive the information that is sent by the user for enabling the eyeball positioning function, and enable the eyeball positioning function.

It should be noted that, for the apparatus shown in FIG. 7, the specific implementation processes of the modules and the information exchange between the modules are based on the same inventive concept as the method embodiments of the present invention; refer to the method embodiments, and details are not repeated here.

This embodiment of the present invention provides a positioning apparatus. An operation of a user gazing at a screen is detected to obtain a gaze area at which the user gazes on the screen, and a floating gesture operation of the user pressing the screen is detected to obtain a projection area in which the user presses the screen. When the gaze area does not include the projection area, the floating gesture operation of the user pressing the screen is detected again to obtain the projection area; when the gaze area includes the projection area, an application icon or a function button included in a matching area is processed, so that in this embodiment of the present invention a target application icon or a target function button can be selected with a simple operation in little time.
It should be noted that the apparatus embodiments described above are merely exemplary. The units described as components may or may not be physical units, that is, they may be located in one place or distributed across multiple network units. Some or all of the modules may be selected according to actual needs to achieve the objectives of the solutions of the embodiments. A person of ordinary skill in the art can understand and implement them without creative efforts.

From the description of the foregoing implementations, a person skilled in the art can clearly understand that the present invention may be implemented by software plus necessary universal hardware, or certainly by dedicated hardware, including an application-specific integrated circuit, a dedicated CPU, a dedicated memory, dedicated components, and the like, but in many cases the former is the better implementation. Based on such an understanding, the technical solutions of the present invention essentially, or the part contributing to the prior art, may be embodied in the form of a software product. The computer software product is stored in a readable storage medium, such as a floppy disk, USB flash drive, removable hard disk, read-only memory, random access memory, magnetic disk, or optical disc of a computer, and includes several instructions for instructing a computer device (which may be a personal computer, a server, a network device, or the like) to perform the methods described in the embodiments of the present invention.

The embodiments in this specification are described in a progressive manner; for the same or similar parts between the embodiments, reference may be made to each other, and each embodiment focuses on what is different from the other embodiments. In particular, the apparatus and system embodiments are basically similar to the method embodiments and are therefore described briefly; for related parts, refer to the descriptions in the method embodiments.

The foregoing descriptions are merely specific implementations of the present invention, but the protection scope of the present invention is not limited thereto. Any variation or replacement readily conceivable by a person skilled in the art within the technical scope disclosed in the present invention shall fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims

1. A positioning method, comprising:
detecting an operation of a user gazing at a screen, to obtain a gaze area at which the user gazes on the screen;
detecting a floating gesture operation of the user pressing the screen, to obtain a projection area in which the user presses the screen;
when the gaze area does not include the projection area, re-detecting the floating gesture operation of the user pressing the screen, to obtain the projection area in which the user presses the screen; and
when the gaze area includes the projection area, processing an application icon or a function button included in a matching area, wherein the matching area is the projection area.

2. The method according to claim 1, wherein after the detecting a floating gesture operation of the user pressing the screen, to obtain a projection area in which the user presses the screen, the method further comprises: determining whether the gaze area includes the projection area.

3. The method according to claim 1, wherein the step of processing, when the gaze area includes the projection area, an application icon or a function button included in a matching area comprises:
when the gaze area includes the projection area, determining, according to the matching area, a corresponding application icon or function button in the matching area; and
opening an application program corresponding to the application icon, or displaying content corresponding to the function button.

4. The method according to claim 1, wherein before the detecting an operation of a user gazing at a screen, to obtain a gaze area at which the user gazes on the screen, the method further comprises:
enabling an eyeball positioning function, so that the gaze area of the user can be obtained.

5. The method according to claim 4, wherein the enabling an eyeball positioning function comprises: enabling the eyeball positioning function when it is detected that the user selects an eyeball positioning function option; or sending prompt information to the user, wherein the prompt information is used to remind the user to enable the eyeball positioning function; and receiving information that is sent by the user for enabling the eyeball positioning function, and enabling the eyeball positioning function.

6. A positioning apparatus, comprising:
a first detecting unit, configured to detect an operation of a user gazing at a screen;
a first acquiring unit, configured to obtain a gaze area at which the user gazes on the screen;
a second detecting unit, configured to detect a floating gesture operation of the user pressing the screen;
a second acquiring unit, configured to obtain a projection area in which the user presses the screen;
the second detecting unit being further configured to re-detect, when the gaze area does not include the projection area, the floating gesture operation of the user pressing the screen;
the second acquiring unit being further configured to obtain the projection area in which the user presses the screen; and
a processing unit, configured to process, when the gaze area includes the projection area, an application icon or a function button included in a matching area, wherein the matching area is the projection area.

7. The apparatus according to claim 6, further comprising:
a determining unit, configured to determine whether the gaze area includes the projection area.

8. The apparatus according to claim 6, wherein the processing unit comprises:
a determining module, configured to determine, when the gaze area includes the projection area, a corresponding application icon or function button in the matching area according to the matching area; and
a processing module, configured to open an application program corresponding to the application icon, or display content corresponding to the function button.

9. The apparatus according to claim 6, further comprising:
an enabling unit, configured to enable an eyeball positioning function, so that the gaze area of the user can be obtained.

10. The apparatus according to claim 9, wherein the enabling unit specifically comprises:
a detecting module, configured to detect that the user selects an eyeball positioning function option; and
an enabling module, configured to enable the eyeball positioning function; or
a sending module, configured to send prompt information to the user, wherein the prompt information is used to remind the user to enable the eyeball positioning function; and
a receiving module, configured to receive information that is sent by the user for enabling the eyeball positioning function, wherein the enabling module is further configured to enable the eyeball positioning function.
PCT/CN2014/075008 2013-11-27 2014-04-09 一种定位的方法及装置 WO2015078126A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US14/532,749 US9971413B2 (en) 2013-11-27 2014-11-04 Positioning method and apparatus

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201310636404.5 2013-11-27
CN201310636404.5A CN103631483B (zh) 2013-11-27 2013-11-27 一种定位的方法及装置

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US14/532,749 Continuation US9971413B2 (en) 2013-11-27 2014-11-04 Positioning method and apparatus

Publications (1)

Publication Number Publication Date
WO2015078126A1 true WO2015078126A1 (zh) 2015-06-04

Family

ID=50212618

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2014/075008 WO2015078126A1 (zh) 2013-11-27 2014-04-09 一种定位的方法及装置

Country Status (2)

Country Link
CN (1) CN103631483B (zh)
WO (1) WO2015078126A1 (zh)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9971413B2 (en) 2013-11-27 2018-05-15 Huawei Technologies Co., Ltd. Positioning method and apparatus
CN103631483B (zh) * 2013-11-27 2017-02-15 华为技术有限公司 一种定位的方法及装置
CN104536556B (zh) * 2014-09-15 2021-01-15 联想(北京)有限公司 一种信息处理方法及电子设备
KR102024330B1 (ko) * 2015-03-13 2019-09-23 후아웨이 테크놀러지 컴퍼니 리미티드 전자 장치, 사진 촬영 방법, 및 사진 촬영 장치
CN104991693B (zh) * 2015-06-10 2020-02-21 联想(北京)有限公司 一种信息处理方法及电子设备
JP2017054251A (ja) * 2015-09-08 2017-03-16 ソニー株式会社 情報処理装置、情報処理方法、およびプログラム
CN106055707A (zh) * 2016-06-28 2016-10-26 北京小米移动软件有限公司 弹幕显示方法及装置
CN108073267B (zh) * 2016-11-10 2020-06-16 腾讯科技(深圳)有限公司 基于运动轨迹的三维控制方法及装置
CN106814854A (zh) * 2016-12-29 2017-06-09 杭州联络互动信息科技股份有限公司 一种防止误操作的方法及装置
CN108089801A (zh) * 2017-12-14 2018-05-29 维沃移动通信有限公司 一种信息显示方法及移动终端
CN110244853A (zh) * 2019-06-21 2019-09-17 四川众信互联科技有限公司 手势控制方法、装置、智能显示终端及存储介质

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102841683A (zh) * 2012-07-24 2012-12-26 东莞宇龙通信科技有限公司 应用启动的方法及其通信终端
CN102981764A (zh) * 2012-11-19 2013-03-20 北京三星通信技术研究有限公司 触控操作的处理方法及设备
CN103336576A (zh) * 2013-06-28 2013-10-02 优视科技有限公司 一种基于眼动追踪进行浏览器操作的方法及装置
CN103631483A (zh) * 2013-11-27 2014-03-12 华为技术有限公司 一种定位的方法及装置

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011101845A1 (en) * 2010-02-16 2011-08-25 Screenovate Technologies Ltd. Modified operating systems allowing mobile devices to accommodate io devices more convenient than their own inherent io devices and methods for generating such systems
US8890818B2 (en) * 2010-09-22 2014-11-18 Nokia Corporation Apparatus and method for proximity based input

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102841683A (zh) * 2012-07-24 2012-12-26 东莞宇龙通信科技有限公司 应用启动的方法及其通信终端
CN102981764A (zh) * 2012-11-19 2013-03-20 北京三星通信技术研究有限公司 触控操作的处理方法及设备
CN103336576A (zh) * 2013-06-28 2013-10-02 优视科技有限公司 一种基于眼动追踪进行浏览器操作的方法及装置
CN103631483A (zh) * 2013-11-27 2014-03-12 华为技术有限公司 一种定位的方法及装置

Also Published As

Publication number Publication date
CN103631483B (zh) 2017-02-15
CN103631483A (zh) 2014-03-12

Similar Documents

Publication Publication Date Title
WO2015078126A1 (zh) 一种定位的方法及装置
US11592980B2 (en) Techniques for image-based search using touch controls
US11385853B2 (en) Method and apparatus for implementing content displaying of component
CA2841524C (en) Method and apparatus for controlling content using graphical object
AU2014200472B2 (en) Method and apparatus for multitasking
US8850364B2 (en) Method and device for sending file data
JP2015153420A (ja) マルチタスク切替方法及びそのシステム及び該システムを有する電子装置
US20170177600A1 (en) Method, system, and device for processing data in connection with an application
WO2014040298A1 (zh) 触摸操作处理方法及终端设备
WO2022007541A1 (zh) 设备控制方法、装置、存储介质及电子设备
WO2017101391A1 (zh) 一种放大视频图像的方法及装置
WO2018000633A1 (zh) 页面信息处理方法、装置及电子设备
WO2022022566A1 (zh) 图形码识别方法、装置和电子设备
US9720592B2 (en) Mobile gesture reporting and replay with unresponsive gestures identification and analysis
CN114779977A (zh) 界面显示方法、装置、电子设备及存储介质
US9971413B2 (en) Positioning method and apparatus
WO2015139406A1 (zh) 一种终端显示页面的操作方法及终端
WO2016173307A1 (zh) 一种消息复制方法和装置、以及智能终端
WO2016065589A1 (zh) 移动终端的短信处理方法及移动终端
WO2017162031A1 (zh) 一种信息采集方法和装置,以及一种智能终端
WO2016037408A1 (zh) 一种操作计算机终端的方法及计算机终端
US11100180B2 (en) Interaction method and interaction device for search result
US9921680B2 (en) Performing searches using computing devices equipped with pressure-sensitive displays
CN114090896A (zh) 信息展示方法、装置及电子设备
WO2016206438A1 (zh) 一种触屏控制方法和装置、移动终端

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14866670

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 14866670

Country of ref document: EP

Kind code of ref document: A1