WO2014164235A1 - Non-occluded display for hover interactions - Google Patents

Non-occluded display for hover interactions

Info

Publication number
WO2014164235A1
Authority
WO
WIPO (PCT)
Prior art keywords
user
screen
interface element
computing device
occluded
Prior art date
Application number
PCT/US2014/021441
Other languages
English (en)
Inventor
Guenael Thomas Strutt
Dong Zhou
Stephen Michael Polansky
Matthew Paul Bell
Isaac Scott Noble
Jason Robert Weber
Original Assignee
Amazon Technologies, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Amazon Technologies, Inc. filed Critical Amazon Technologies, Inc.
Priority to EP14780061.9A priority Critical patent/EP2972727B1/fr
Publication of WO2014164235A1 publication Critical patent/WO2014164235A1/fr

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04886Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures by partitioning the display area of the touch-screen or the surface of the digitising tablet into independently controllable areas, e.g. virtual keyboards or menus
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04883Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/041Indexing scheme relating to G06F3/041 - G06F3/045
    • G06F2203/04101 2.5D-digitiser, i.e. digitiser detecting the X/Y position of the input means, finger or stylus, also when it does not touch, but is proximate to the digitiser's interaction surface and also measures the distance of the input means within a short range in the Z direction, possibly with a separate measurement setup

Definitions

  • a popular feature of many portable computing devices such as smart phones, tablets, laptops, and portable media players, is the touchscreen, which allows users to directly interact with their devices in new and interesting ways.
  • the surfaces of touchscreens require cleaning more often and some users find the electrical contact between the user's fingertip and the touchscreen uncomfortable, particularly after an extended period of use.
  • certain tasks can be difficult for some users to perform on touchscreens and some interactions are less than optimal for the user.
  • new users may be unaccustomed to various features, functions, and applications incorporated in the devices, and can only familiarize themselves by trial and error.
  • a display area that may already be small to begin with can become even more limited when the user is required to interact with their devices by touch.
  • FIG. 1 illustrates an example approach for non-occluded display of data associated with a hover interaction that can be utilized in accordance with various embodiments
  • FIG. 2 illustrates another example approach for non-occluded display of data associated with multiple hover interactions that can be utilized in accordance with various embodiments
  • FIGS. 3(a), 3(b), 3(c), and 3(d) illustrate an example process for determining one or more characteristics of a user with respect to a computing device that can be utilized in accordance with various embodiments;
  • FIG. 4 illustrates an example approach for determining whether data to be displayed at a particular location may be occluded that can be utilized in accordance with various embodiments
  • FIG. 5 illustrates another example approach for determining whether data to be displayed at a particular location may be occluded that can be utilized in accordance with various embodiments
  • FIGS. 6(a) and 6(b) illustrate another example approach for determining whether data to be displayed at a particular location may be occluded that can be utilized in accordance with various embodiments;
  • FIG. 7 illustrates an example process for non-occluded display of data associated with a hover interaction that can be utilized in accordance with various embodiments
  • FIG. 8 illustrates an example computing device that can be utilized in accordance with various embodiments;
  • FIG. 9 illustrates an example set of components that can be utilized in a device such as that illustrated in FIG. 8; and
  • FIG. 10 illustrates an example environment in which various embodiments can be implemented.
  • Yet another example is the general lack of availability of tooltips, hover boxes, previews, and other such interfaces for personal devices. These approaches allow users to hover over an object of a user interface to obtain information about the object or what the object will do, and can be very helpful for many users. When such functionality is provided at all, conventional devices may fail to take into account that the presentation of tooltips, hover boxes, and other such interfaces, may be occluded by the user's finger, hand, or other physical features of the user.
  • Systems and methods in accordance with various embodiments of the present disclosure may overcome one or more of the aforementioned and other deficiencies experienced in conventional approaches for displaying data and/or enabling user input.
  • various embodiments enable a computing device to recognize when a user's finger, hand, stylus, digital pen, or other such object hovers over or is within a determined distance of a user interface element.
  • Some of the user interface elements may be configured to display data upon detection of a hover input or when the object is within the determined distance of the user interface element.
  • certain approaches of various embodiments ensure that at least substantive portions of the data are displayed without being occluded or obscured, for example, by the user's finger, hand, or other such object.
  • FIG. 1 illustrates an example situation 100 of a hover interaction wherein a portable computing device 102 (e.g., a portable media player, smart phone, or tablet) is displaying data associated with an element of a user interface that is hovered upon or within a determined distance in accordance with various embodiments.
  • a hover interaction is a feature of a pointer-enabled user interface wherein moving the pointer (e.g., cursor, finger, stylus, or other object) toward an element of the user interface (e.g., buttons, tool icons, hyperlinks) and holding the pointer at the element, within a determined distance and for a determined period of time, can be interpreted by a computing device as a "hover input."
  • the user interface presents information about the element the pointer is hovering over (e.g., an application name, a toolbar function, a description of the computing tasks that will be performed).
  • the elements that can be hovered upon are typically selectable elements, i.e., elements that can be clicked on or touched. Some hover interactions, however, are selections in themselves. For example, certain hover interactions only require the user to move the pointer over an element for a brief moment before specified computing tasks are performed, sometimes without the user necessarily being aware that those tasks are being performed.
  • a conventional approach to hover interactions is a mouseover event in a desktop web browser, wherein a hover input, such as the user maintaining a mouse cursor over a hyperlink, may result in a display of the URL in the status bar of the web browser.
  • Certain conventional web browsers can also display the title and/or alt attribute of a hyperlink as a tooltip next to the hyperlink when the user hovers over the hyperlink for a period of time.
  • Conventional browsers that support tabbing can display the full title of a web page corresponding to a tab when the user hovers over the tab.
  • Some web browsers also support hover interactions for websites that define their own mouseovers using JavaScript® or Cascading Style Sheets (CSS). For instance, hovering over certain objects of a webpage may result in the object changing color, a border being added around the object, or a tooltip appearing next to the object.
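This browser-side behavior can be illustrated with a few standard DOM listeners. The sketch below is not taken from the patent; the `data-tip` attribute name and the styling are illustrative assumptions.

```typescript
// Minimal mouseover tooltip using standard DOM APIs. Elements opt in by
// carrying a `data-tip` attribute (an assumed convention for this sketch).
const tip = document.createElement("div");
tip.style.cssText =
  "position:fixed;padding:4px 8px;background:#ffffe1;border:1px solid #888;" +
  "font:12px sans-serif;pointer-events:none;display:none;z-index:9999";
document.body.appendChild(tip);

document.addEventListener("mouseover", (e) => {
  const target =
    e.target instanceof HTMLElement
      ? e.target.closest<HTMLElement>("[data-tip]")
      : null;
  if (!target) return;
  tip.textContent = target.dataset.tip ?? "";
  tip.style.display = "block";
});
document.addEventListener("mousemove", (e) => {
  // Offset the box from the cursor so the pointer itself does not cover it.
  tip.style.left = `${e.clientX + 12}px`;
  tip.style.top = `${e.clientY - 28}px`;
});
document.addEventListener("mouseout", () => {
  tip.style.display = "none";
});
```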
  • Desktop software applications can provide tooltips when a user hovers over certain selectable objects or elements (e.g., buttons, toolbar or ribbon icons, menu options, palettes) of the respective programs. Tooltips can provide information to the user about the computing task(s) associated with the objects or elements.
  • desktop applications such as word processors, spreadsheet programs, image editing software, or presentation programs use an approach for hover interactions that enables the user to select editable content and then hover over a stylistic or graphical tool to preview, without committing to the change, what the selected editable content would look like if the user selected the computing task(s) associated with the tool (e.g., bold, italicize, underline, color, image effect).
  • Hover interactions are also supported by some desktop OS's. For example, in certain desktop OS's, hovering over an icon corresponding to hard drives, peripheral devices, network drives, applications, folders, files, etc. may provide information about these objects, such as the full name, contents, location, date of creation, size, file type, etc.
  • desktop OS's may support hover interactions via one or more application programming interfaces that can standardize how a hover input is detected and the computing task(s) to perform when a hover input is detected.
  • the computing device 102 can be seen running a web browser which renders content from a website for display on the touchscreen 106 of the computing device.
  • the user's finger 104 hovers over a user interface element 120, a hyperlink to another webpage, at a distance of approximately 2.54 cm or 1.0" and for a period of at least 500 ms without the finger physically touching the display screen 106.
  • other minimum and maximum threshold distances and durations of time can be used based on the stability, accuracy, and sensitivity of device sensors; considerations for user experience; and other factors known to those of skill in the art.
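The threshold-based detection described in the preceding bullets can be summarized as a small state machine: a hover input fires once the pointer has stayed over the same element, within the height threshold, for the dwell time. A minimal sketch, assuming idealized sensor samples in millimetres; the `hitTest` callback and all other names are illustrative, not part of the patent.

```typescript
// Hover-input detection with distance and dwell-time thresholds. The
// 25.4 mm (~1.0") and 500 ms defaults mirror the example values above.
interface HoverSample {
  x: number;          // screen x in mm
  y: number;          // screen y in mm
  z: number;          // height above the screen in mm
  timestampMs: number;
}

class HoverDetector {
  private dwellStartMs: number | null = null;
  private lastElementId: string | null = null;

  constructor(
    private readonly hitTest: (x: number, y: number) => string | null,
    private readonly maxHeightMm = 25.4,
    private readonly minDwellMs = 500,
  ) {}

  // Feed sensor samples; returns an element id when a hover input fires.
  update(sample: HoverSample): string | null {
    const elementId =
      sample.z <= this.maxHeightMm ? this.hitTest(sample.x, sample.y) : null;

    if (elementId === null || elementId !== this.lastElementId) {
      // Pointer out of range or over a different element: restart the dwell.
      this.lastElementId = elementId;
      this.dwellStartMs = elementId ? sample.timestampMs : null;
      return null;
    }
    if (
      this.dwellStartMs !== null &&
      sample.timestampMs - this.dwellStartMs >= this.minDwellMs
    ) {
      this.dwellStartMs = null; // fire once per dwell
      return elementId;
    }
    return null;
  }
}
```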
  • the hover box 122 is semi-transparent to provide the user at least some context of the original content prior to display of the hover box.
  • the hover box 122 is also positioned such that its bottom right corner is located just above the topmost point of the user's finger 104, so that the entirety of the hover box is visible from the perspective of a user facing the device head-on.
  • some portions of a tooltip or hover box, such as those lacking substantive content, may be partially obscured by the user.
  • These characteristics of the hover box 122 may be specifiable by any of the user, the website designer, the browser application provider, the operating system provider, a device manufacturer, or some combination thereof.
  • a website designer may design a webpage for a desktop browser and specify the title attribute for an HTML element with the expectation that hovering over the element will provide a tooltip with the text of the title rendered according to the default look and feel and at a position rendered by the desktop browser.
  • a mobile browser application provider may interpret a title attribute to create a hover box in a style similar to the one depicted in FIG. 1, except as opaque by default.
  • the user may adjust browser settings to display the hover box 122 semi-transparently as a personal preference.
  • Various alternative combinations can be implemented in accordance with various embodiments, as will be appreciated by one of ordinary skill in the art.
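The corner-above-the-fingertip placement described for hover box 122 reduces to a small geometric routine. A sketch under assumed screen coordinates (origin top-left, y increasing downward); the clamping behavior and the `margin` gap are assumptions.

```typescript
interface Rect { left: number; top: number; width: number; height: number }

// Place a hover box so its bottom-right corner sits just above the reported
// fingertip position, clamped so the whole box stays on screen.
function placeAboveFingertip(
  box: { width: number; height: number },
  fingertip: { x: number; y: number },
  screen: { width: number; height: number },
  margin = 8,
): Rect {
  let left = fingertip.x - box.width;
  let top = fingertip.y - margin - box.height;
  left = Math.max(0, Math.min(left, screen.width - box.width));
  top = Math.max(0, Math.min(top, screen.height - box.height));
  return { left, top, width: box.width, height: box.height };
}
```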
  • although FIG. 1 provides an example of enabling non-occluded display of data for hover interactions in the context of a web browser, the various approaches described with respect to FIG. 1 are equally applicable to other software applications and operating systems.
  • FIG. 2 illustrates a situation 200 wherein data respectively associated with multiple user interface elements is displayed in a non-occluded manner in response to two hover inputs, each corresponding respectively to two of the user interface elements, being received by a computing device 202 in accordance with various embodiments.
  • a user 204 can be seen operating a computing device 202 that is displaying a virtual keyboard and an email program on a touch display interface 206.
  • the user's left thumb is hovering over (or within a determined distance of) user interface element 220 (i.e., virtual keyboard key "S"), and a hover box 224 is provided overlaying the virtual keyboard and email program. Likewise, the user's right thumb is hovering over user interface element 222 (i.e., virtual keyboard key "["), and a hover box 226 is displayed over the virtual keyboard and email program.
  • the hover box for each virtual key is larger than a typical user's fingertip (e.g., 0.50" × 0.50", or 1.27 cm × 1.27 cm).
  • the size of hover boxes can be based on the size of a specific user's fingertip (or thumb profile).
  • a computing device can be configured to detect multiple hover interactions corresponding respectively to multiple user interface elements and display data associated with the user interface elements when it is determined that the user interface elements have been hovered upon.
  • FIG. 2 further illustrates that the data to be displayed when a user hovers over a user interface element can be based on the "handedness" of the hand hovering over the element.
  • hover box 224 can be seen offset to the right of the left thumb of the user 204, and hover box 226 is offset to the left of the right thumb.
  • determining the location of where to display data for a detected hover input can be based at least in part on which of the user's hands hovered over the user interface element associated with the data for display. Further, it should be understood that terms such as "right" and "left" are used for clarity of explanation and are not intended to require specific orientations unless otherwise stated.
  • hover boxes 224 and 226 do not overlap any portion of the display screen 206 over which the finger is hovering. Certain conventional approaches for hover interactions may "magnify" a virtual key at the key's position on the virtual keyboard but, at least as seen in the case of the key 220, such an approach may be undesirable since a substantial portion of the virtual key would remain occluded.
  • An approach such as the one illustrated in FIG. 2 may overcome this deficiency.
  • some embodiments allow for non-substantive portions of data associated with hover interactions to be occluded by the user, such as corners and borders of tooltips, hover boxes, and other such graphical elements.
  • embodiments allow for substantive portions of data associated with hover interactions to be occluded by the user if the displayed data is large enough to provide the user with sufficient context despite a portion of the data being obscured by the user.
  • consideration of an active area of a GUI may also determine where hover boxes are to be located when a user hovers over certain elements of the GUI.
  • the active area of the GUI may correspond to a location of a text cursor. For example, in FIG. 2, an active area of the GUI is indicated by a blinking text cursor 228 at the "To" line of the email program.
  • the user may change the active area to be the "Re" line 230 of the email program. In such a situation, the preferred placement of the hover boxes 224 and 226, i.e., above the user's thumbs, may no longer be as ideal because the hover boxes would occlude the "Re" line 230.
  • hover boxes may instead be located, for example, below the user's thumbs.
  • Other examples of active areas of a user interface may include input form fields, a browser address bar, a search field bar, etc.
  • various embodiments can also determine an active area of the user interface when selecting locations for hover boxes.
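Combining the handedness and active-area considerations above, a placement routine might offset the hover box away from the hovering thumb and fall back to a position below the finger when the preferred spot would cover the active area. The candidate ordering, offsets, and helper names below are assumptions for illustration only.

```typescript
type Hand = "left" | "right";
interface Rect { left: number; top: number; width: number; height: number }

function intersects(a: Rect, b: Rect): boolean {
  return (
    a.left < b.left + b.width && b.left < a.left + a.width &&
    a.top < b.top + b.height && b.top < a.top + a.height
  );
}

// Offset right of a left thumb, left of a right thumb; prefer above the
// finger, and flip below it if the box would occlude the active area
// (e.g., the line holding the text cursor).
function chooseHoverBoxPosition(
  box: { width: number; height: number },
  finger: { x: number; y: number },
  hand: Hand,
  activeArea: Rect,
): Rect {
  const dx = hand === "left" ? 16 : -16 - box.width;
  const candidates: Rect[] = [
    { left: finger.x + dx, top: finger.y - box.height - 16, ...box },
    { left: finger.x + dx, top: finger.y + 16, ...box },
  ];
  return candidates.find((c) => !intersects(c, activeArea)) ?? candidates[0];
}
```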
  • FIGS. 3(a), (b), (c), and (d) illustrate an example of an approach to determining a relative distance and/or location of at least one object (e.g., a user's finger) that can be utilized in accordance with various embodiments.
  • input can be provided to a computing device 302 by monitoring the position of the user's fingertip 304 with respect to the device, although various other features can be used as well as discussed and suggested elsewhere herein.
  • a single camera can be used to capture image information including the user's fingertip, where the relative location can be determined in two dimensions from the position of the fingertip in the image and the distance determined by the relative size of the fingertip in the image.
  • a distance detector or other such sensor can be used to provide the distance information.
  • the illustrated computing device 302 in this example instead includes at least two different cameras 308 and 310 positioned on the device with a sufficient separation such that the device can utilize stereoscopic imaging (or another such approach) to determine a relative position of one or more features with respect to the device in three dimensions.
  • the upper camera 308 is able to see the fingertip 304 of the user as long as that feature is within a field of view 312 of the upper camera 308 and there are no obstructions between the upper camera and those features.
  • if software executing on the computing device is able to determine information such as the angular field of view of the camera, the zoom level at which the information is currently being captured, and any other such relevant information, the software can determine an approximate direction 316 of the fingertip with respect to the upper camera.
  • methods such as ultrasonic detection, feature size analysis, luminance analysis through active illumination, or other such distance measurement approaches can be used to assist with position determination as well.
  • a second camera 310 is used to assist with location determination as well as to enable distance determinations through stereoscopic imaging.
  • the lower camera 310 is also able to image the fingertip 304 as long as the feature is at least partially within the field of view 314 of the lower camera 310.
  • appropriate software can analyze the image information captured by the lower camera to determine an approximate direction 318 to the user's fingertip.
  • the direction can be determined, in at least some embodiments, by looking at a distance from a center (or other) point of the image and comparing that to the angular measure of the field of view of the camera. For example, a feature in the middle of a captured image is likely directly in front of the respective camera.
  • If the feature is at the very edge of the image, then the feature is likely at a forty-five degree angle from a vector orthogonal to the image plane of the capture element. Positions between the edge and the center correspond to intermediate angles, as would be apparent to one of ordinary skill in the art and as known in the art for stereoscopic imaging. Once the direction vectors from at least two image capture elements are determined for a given feature, the intersection point of those vectors can be determined, which corresponds to the approximate relative position, in three dimensions, of the respective feature.
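The direction-and-intersection computation just described can be sketched as follows: the pixel offset from the image center maps linearly to an angle across the field of view (0° at the center, 45° at the edge in the example above), and the 3D position is taken as the midpoint of the shortest segment between the two camera rays, since real rays rarely intersect exactly. Camera placement, a +z optical axis, and all names are assumptions.

```typescript
type Vec3 = [number, number, number];
const sub = (a: Vec3, b: Vec3): Vec3 => [a[0] - b[0], a[1] - b[1], a[2] - b[2]];
const add = (a: Vec3, b: Vec3): Vec3 => [a[0] + b[0], a[1] + b[1], a[2] + b[2]];
const scale = (a: Vec3, s: number): Vec3 => [a[0] * s, a[1] * s, a[2] * s];
const dot = (a: Vec3, b: Vec3): number =>
  a[0] * b[0] + a[1] * b[1] + a[2] * b[2];

// Direction to a feature from one camera: the offset from the image center
// maps linearly to an angle across the field of view, as described above.
function rayDirection(
  px: number, py: number,            // pixel position of the feature
  width: number, height: number,     // image dimensions in pixels
  fovXDeg: number, fovYDeg: number,  // angular fields of view
): Vec3 {
  const ax = (px / width - 0.5) * (fovXDeg * Math.PI / 180);
  const ay = (py / height - 0.5) * (fovYDeg * Math.PI / 180);
  const d: Vec3 = [Math.tan(ax), Math.tan(ay), 1]; // camera looks along +z
  const n = Math.hypot(...d);
  return [d[0] / n, d[1] / n, d[2] / n];
}

// Midpoint of the shortest segment between two (possibly skew) rays —
// the standard closest-point formulation for two lines.
function triangulate(o1: Vec3, d1: Vec3, o2: Vec3, d2: Vec3): Vec3 {
  const r = sub(o1, o2);
  const a = dot(d1, d1), b = dot(d1, d2), c = dot(d2, d2);
  const d = dot(d1, r), e = dot(d2, r);
  const denom = a * c - b * b; // ~0 when the rays are nearly parallel
  const t1 = (b * e - c * d) / denom;
  const t2 = (a * e - b * d) / denom;
  return scale(add(add(o1, scale(d1, t1)), add(o2, scale(d2, t2))), 0.5);
}
```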
  • information from a single camera can be used to determine the relative distance to a feature of a user.
  • a device can determine the size of a feature (e.g., a finger, hand, pen, or stylus) used to provide input to the device.
  • the device can estimate the relative distance to the feature. This estimated distance can be used to assist with location determination using a single camera or sensor approach.
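The feature-size approach in the preceding bullets is the pinhole-camera relation: distance = focal length × real size / apparent size. A sketch, assuming the physical finger width is known from a calibration or enrollment step (an assumption, not a step the patent specifies):

```typescript
// Single-camera distance estimate from apparent feature size.
function estimateDistanceMm(
  focalLengthPx: number, // focal length expressed in pixels
  realWidthMm: number,   // known physical width of the feature
  imageWidthPx: number,  // measured width of the feature in the image
): number {
  return (focalLengthPx * realWidthMm) / imageWidthPx;
}

// Example: a 16 mm-wide fingertip imaged 120 px wide with an 800 px focal
// length gives estimateDistanceMm(800, 16, 120) ≈ 107 mm from the camera.
```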
  • FIGS. 3(b) and 3(c) illustrate example images 320 and 340 that could be captured of the fingertip using the cameras 308 and 310 of FIG. 3(a).
  • FIG. 3(b) illustrates an example image 320 that could be captured using the upper camera 308 in FIG. 3(a).
  • One or more image analysis algorithms can be used to analyze the image to perform pattern recognition, shape recognition, or another such process to identify a feature of interest, such as the user's fingertip, thumb, hand, or other such feature.
  • approaches for identifying a feature in an image, such as feature detection, facial feature extraction, feature recognition, stereo vision sensing, character recognition, attribute estimation, or radial basis function (RBF) analysis, are well known in the art and will not be discussed herein in detail.
  • upon identifying the feature, here the user's hand 322, at least one point of interest 324, here the tip of the user's index finger, is determined.
  • the software can use the location of this point with information about the camera to determine a relative direction to the fingertip.
  • a similar approach can be used with the image 340 captured by the lower camera 310 as illustrated in FIG. 3(c), where the hand 342 is located and a direction to the corresponding point 344 determined.
  • FIG. 3(d) illustrates another perspective 360 of the device 302, showing the fields of view of the cameras. If a fingertip or other feature near the display screen 306 of the device falls within at least one of these fields of view, the device can analyze images or video captured by these cameras to determine the location of the fingertip. In order to account for positions in the dead zone outside the fields of view near the display, the device can utilize a second detection approach, such as one or more capacitive sensors.
  • the capacitive sensor(s) can detect position at or near the surface of the display screen, and by adjusting the parameters of the capacitive sensor(s) the device can have a detection range 370 that covers the dead zone and also at least partially overlaps the fields of view.
  • Such an approach enables the location of a fingertip or feature to be detected when that fingertip is within a given distance of the display screen, whether or not the fingertip can be seen by one of the cameras.
  • Other location detection approaches can be used as well, such as ultrasonic detection, distance detection, optical analysis, and the like.
  • FIG. 4 illustrates an example approach 400 for determining whether data to be displayed at a location may be occluded that can be utilized in accordance with various embodiments.
  • This situation is similar to the one depicted in FIG. 1. That is, in FIG. 4, a user's finger 404 hovers over a user interface element at a location 420 displayed on a touchscreen 406 of computing device 402.
  • the computing device 402 includes one or more capacitive sensors incorporated into the touchscreen 406 that have been configured to detect hover inputs by the user, such as one or more self-capacitive sensors (not shown). In other embodiments, the capacitive sensor(s) may be separate from a display of the computing device.
  • a computing device may include a combination of self-capacitive sensors and mutual capacitive sensors to, for example, enable multi-touch and single hover detection.
  • the angle of incidence between the user's finger 404 and the computing device is such that capacitive disturbance can be measured from a first point 420 on the touchscreen 406 corresponding to the user's fingertip to a second point 424 at the edge of the touchscreen.
  • the capacitive sensor(s) can be configured to detect both the user's fingertip corresponding to the point at 420 and the presence of other portions of the user's finger 404 below the fingertip when the angle of incidence between the user's finger 404 and the computing device 402 is at least 45°.
  • other minimum and/or maximum threshold angles of incidence can be used based at least in part on the characteristics of the capacitive sensor(s).
  • the capacitive disturbance that has been detected here is represented as the gradient from point 420 to point 424.
  • this capacitive disturbance can be used to estimate the footprint of the user's finger 404, i.e., the area indicated by the dashed line corresponding to the user's finger 404 on the touchscreen 406, extending toward the right edge of the touchscreen 406.
  • Data associated with a GUI element that is located at point 420 and associated with a hover interaction can then be displayed away from the footprint of the user's finger 404; in this example, that data comprises a tooltip 422.
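One way to operationalize this footprint estimate is to threshold a grid of capacitance readings, take the bounding box of the disturbed cells, and test a candidate tooltip rectangle against it. A sketch; the grid layout, units, and threshold value are assumptions.

```typescript
interface Rect { left: number; top: number; width: number; height: number }

// Bounding box, in mm, of all sensor cells whose reading exceeds a threshold.
function estimateFootprint(
  readings: number[][], // readings[row][col], one value per sensor cell
  cellSizeMm: number,
  threshold: number,
): Rect | null {
  let minR = Infinity, maxR = -Infinity, minC = Infinity, maxC = -Infinity;
  readings.forEach((row, r) =>
    row.forEach((v, c) => {
      if (v >= threshold) {
        minR = Math.min(minR, r); maxR = Math.max(maxR, r);
        minC = Math.min(minC, c); maxC = Math.max(maxC, c);
      }
    }),
  );
  if (minR === Infinity) return null; // no disturbance detected
  return {
    left: minC * cellSizeMm,
    top: minR * cellSizeMm,
    width: (maxC - minC + 1) * cellSizeMm,
    height: (maxR - minR + 1) * cellSizeMm,
  };
}

// A proposed tooltip is considered occluded if it overlaps the footprint.
function wouldBeOccluded(tooltip: Rect, footprint: Rect | null): boolean {
  if (!footprint) return false;
  return (
    tooltip.left < footprint.left + footprint.width &&
    footprint.left < tooltip.left + tooltip.width &&
    tooltip.top < footprint.top + footprint.height &&
    footprint.top < tooltip.top + tooltip.height
  );
}
```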
  • In FIG. 5, the user's finger 504 can be seen hovering over or within a determined distance of a display screen 506 of computing device 502.
  • the computing device 502 further includes cameras 508 and 510, each having fields of view 512 and 514, respectively.
  • a portion of the user's finger 505 falls into the dead zone between the fields of view 512 and 514, and this portion cannot be captured by the cameras 508 and 510.
  • a second portion 507 of the user's finger 504 can be captured by the cameras.
  • historical image data including the entirety of the user's finger (or the user's hand) can be used to estimate or extrapolate the missing portion 505.
  • the historical image data can be registered with respect to contemporaneous image data corresponding to portion 507.
  • in this manner, the pose, i.e., position and orientation, of the user's fingertip with respect to the display screen 506 can be estimated to detect a hover input even when the user's fingertip falls within the dead zone between the cameras 508 and 510.
  • Such an approach can also be used to estimate the footprint of the user's finger 504 (and hand) when the capacitive sensors cannot detect the user's finger 504 in order to determine an appropriate location to display a tooltip, hover box, or other such information.
  • in some embodiments, such tracking can be performed using a Tracking-Learning-Detection (TLD) algorithm, also known as "Predator."
  • TLD tracks a selected object using an adaptive tracker, which models the selected object iteratively through "growing events" and "pruning events," together with an on-line detector. The two kinds of events are designed to compensate for each other's errors, effectively canceling them out. Growing events comprise a selection of samples from the tracker's trajectory and an update of the model. Pruning events are based on the assumption that the selected object is unique within a scene: when the detector and tracker agree on the object's position, all other remaining detections are removed from the model.
  • the detector runs concurrently with the tracker and enables re-initialization of the tracker when previously observed image data of the object reappears in the event the object becomes partially or totally occluded or disappears altogether from a scene.
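The overall shape of such a loop might look like the sketch below: tracker and detector run on each frame, tracked output feeds growing, agreement triggers pruning, and the detector re-initializes the tracker after an occlusion. All interfaces are placeholders; a real TLD implementation is substantially more involved.

```typescript
interface BBox { x: number; y: number; w: number; h: number }
interface Frame { pixels: Uint8Array; width: number; height: number }

interface Tracker { track(f: Frame): BBox | null; init(b: BBox): void }
interface Detector {
  detect(f: Frame): BBox[];
  grow(f: Frame, b: BBox): void; // add positive samples along the trajectory
  prune(keep: BBox): void;       // drop detections away from the agreed position
}

function overlaps(a: BBox, b: BBox): boolean {
  return a.x < b.x + b.w && b.x < a.x + a.w &&
         a.y < b.y + b.h && b.y < a.y + a.h;
}

function tldStep(f: Frame, tracker: Tracker, detector: Detector): BBox | null {
  const tracked = tracker.track(f);
  const detections = detector.detect(f);

  if (tracked) {
    detector.grow(f, tracked);          // growing event
    const agreed = detections.find((d) => overlaps(d, tracked));
    if (agreed) detector.prune(agreed); // pruning event: object is unique
    return tracked;
  }
  // Tracker lost the object (e.g., occlusion): re-initialize from the detector.
  if (detections.length > 0) {
    tracker.init(detections[0]);
    return detections[0];
  }
  return null;
}
```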
  • FIGS. 6(a) and 6(b) illustrate another example approach for determining whether data to be displayed at a location may be occluded that can be utilized in accordance with various embodiments.
  • detection of a location on a screen that the user is hovering over may be determined by the absolute distance between the user's finger (or other such implement) and the screen.
  • detection of the location that the user is hovering over can instead be based on a relative distance determined from the location of the user's finger and the angle of incidence between the user's line of sight and the screen.
  • FIG. 6(a) illustrates a situation 600 of a user 604 sitting at a table or desk with a computing device 602 lying flat on the table.
  • Determining where the user is pointing may be based on an estimate of the absolute distance 620 (d_a) between the user's fingertip and the computing device 602 in certain embodiments.
  • the computing device may determine that the user is hovering over a point 621 of the display element 606 when the distance between the user's fingertip and the point is within a minimum and/or maximum threshold distance, or threshold range of distances.
  • the distance between the user's fingertip and the point 621 can be measured, for example, by calculating the length of a line, normal or perpendicular to the x-y plane of the computing device, between the user's fingertip and the point 621.
  • determining where the user is pointing may instead depend on a relative distance 622 (d_r) between the user's fingertip and the computing device 602 with respect to the user's line of sight.
  • the computing device may determine that the user is hovering over a point 623 of the display element 606 when the distance between the user's fingertip and the point is within a threshold range of distances.
  • the distance between the user's fingertip and the point 623 can be measured, for example, by calculating the length of a line between the user's fingertip and the point 623 along the user's line of sight, i.e., at the angle of incidence 624 between the line of sight and the computing device.
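The two measurements can be contrasted in code: the absolute variant drops a normal from the fingertip onto the screen plane, while the relative variant extends the eye-to-fingertip line until it meets the plane. A sketch with the screen in the z = 0 plane and z increasing away from it; the eye position is assumed to come from head tracking (e.g., the stereo cameras discussed above).

```typescript
type Vec3 = [number, number, number];

// Absolute: project the fingertip straight down onto the screen plane; the
// hover distance d is simply the fingertip height.
function hoverPointAbsolute(finger: Vec3): { x: number; y: number; d: number } {
  return { x: finger[0], y: finger[1], d: finger[2] };
}

// Relative: follow the user's line of sight through the fingertip to z = 0.
function hoverPointLineOfSight(
  eye: Vec3,
  finger: Vec3,
): { x: number; y: number; d: number } | null {
  const dir: Vec3 = [
    finger[0] - eye[0],
    finger[1] - eye[1],
    finger[2] - eye[2],
  ];
  if (dir[2] >= 0) return null;  // sight line never reaches the screen
  const t = -finger[2] / dir[2]; // extend from the fingertip to z = 0
  return {
    x: finger[0] + t * dir[0],
    y: finger[1] + t * dir[1],
    // Distance measured along the sight line from fingertip to screen point.
    d: t * Math.hypot(dir[0], dir[1], dir[2]),
  };
}
```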
  • various embodiments also consider that the user's line of sight can affect where data associated with a hover interaction can be displayed to avoid occluding at least substantive portions of the data. For example, a user may not be facing flush to a computing device, such as can be seen in the situation 600 in FIG. 6(a). Moreover, the user may be hovering over a user interface element displayed on the computing device such that the user's finger is perpendicular to the computing device, as can be seen in the situation 650 in FIG. 6(b).
  • FIG. 6(a) illustrates an example of one such approach wherein an angle of incidence 624 between the user's line of sight 626 and the computing device 602 can be estimated from image data captured by cameras 608 and 610 using stereoscopic image analysis, as discussed elsewhere herein.
  • the angle of incidence is determined to be approximately 30°.
  • FIG. 6(b) shows that the user's hand 605 is nearly perpendicular to the computing device 602, such that a top portion (with respect to the user) of the display element 606 is obscured from the user's view. Accordingly, it can be determined that data 628 corresponding to a user interface element associated with a hover interaction should be displayed below the user's fingertip, to avoid at least a portion of the data being hidden or obscured from the user.
  • FIG. 7 illustrates an example process 700 for non-occluded display of data associated with a hover interaction that can be utilized in accordance with various embodiments. It should be understood, however, that there can be additional, fewer, or alternative steps performed in similar or alternative orders, or in parallel, within the scope of the various embodiments unless otherwise stated.
  • one or more elements of a user interface are defined such that respective data will be displayed in the user interface when a pointer (e.g., cursor, user finger, user hand, stylus, digital pen, etc.) hovers upon one of the elements 702.
  • the elements may comprise each of the keys of the keyboard and the data to be displayed for each key upon hover may include the alphanumeric value of the key; a size for the key, such as a larger size; a shape bounding the key, such as a circle or a box; a color for the shape bounding the key; etc.
  • the user interface may be associated with an application program and some of the elements of such a user interface may comprise a plurality of tool icons in a toolbar.
  • Each of the plurality of tool icons may be designated with data for display upon hover such as the name of the tool corresponding to the tool icon and a description of what computing tasks are performed upon selection (e.g., click, touch, contact of the stylus, etc.).
  • the user interface may correspond to an operating system executing on a computing device.
  • User interface elements may include widgets or utilities such as a clock icon or calendar icon that can be expanded upon hover to provide the time or the date, respectively.
  • Various other behaviors can be associated with user interface elements that are defined as hoverable as discussed elsewhere herein and as known to those of ordinary skill.
  • a user may interact with the user interface such that a computing device executing the user interface detects that one of the user interface elements has been hovered upon 704.
  • a computing device may include one or more capacitive sensors, one or more cameras, one or more ultrasonic detectors, and/or one or more other such sensors to detect hover inputs.
  • the computing device can estimate one or more characteristics of the user with respect to the computing device 706, such as a footprint of the user's hand, the user's handedness, the user's line of sight, etc. Based on this analysis, the computing device may determine whether the data to be displayed would be occluded 708 if displayed at a default position. For instance, there may be a number of heuristics on where to display tooltips, hover boxes, and the like. As discussed elsewhere herein, one approach may be to provide hover boxes corresponding to virtual keys above the user's fingertip.
  • the computing device may determine a different location in the user interface to present the data such that at least the substantive portion of the data would be visible to the user 710, and display the data at the determined location 712. If the data would not be occluded at the preferred or default location, then the data can be displayed at that location 714.
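The decision in steps 708 through 714 can be summarized as: test the default position against the estimated occluded region and fall back to the first non-occluded alternative. A minimal sketch; the candidate list and all names are assumptions.

```typescript
interface Rect { left: number; top: number; width: number; height: number }

function intersects(a: Rect, b: Rect): boolean {
  return (
    a.left < b.left + b.width && b.left < a.left + a.width &&
    a.top < b.top + b.height && b.top < a.top + a.height
  );
}

function displayPosition(
  defaultPos: Rect,
  alternatives: Rect[], // e.g., below the finger, offset to either side
  occludedRegion: Rect, // estimated finger/hand footprint (step 706)
): Rect {
  if (!intersects(defaultPos, occludedRegion)) return defaultPos; // step 714
  const visible = alternatives.find((r) => !intersects(r, occludedRegion));
  return visible ?? defaultPos; // steps 710-712
}
```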
  • FIG. 8 illustrates an example electronic user device 800 that can be used in accordance with various embodiments.
  • although a portable computing device (e.g., an electronic book reader or tablet computer) is shown, any electronic device capable of receiving, determining, and/or processing input can be used in accordance with various embodiments discussed herein, where the devices can include, for example, desktop computers, notebook computers, personal data assistants, smart phones, video gaming consoles, television set top boxes, and portable media players.
  • the computing device 800 has a display screen 806 on the front side, which under normal operation will display information to a user facing the display screen (e.g., on the same side of the computing device as the display screen).
  • the display screen can be a touch sensitive screen that utilizes a capacitive touch-based detection approach, for example, that enables the device to determine the location of an object within a distance of the display screen.
  • the device also includes at least one communication component 812 operable to enable the device to communicate, via a wired and/or wireless connection, with another device, either directly or across at least one network, such as a cellular network, the Internet, a local area network (LAN), and the like.
  • Some devices can include multiple discrete components for communicating over various communication channels.
  • the computing device in this example includes cameras 804 and 806 or other imaging element for capturing still or video image information over at least a field of view of the cameras.
  • the computing device might only contain one imaging element, and in other embodiments the computing device might contain several imaging elements.
  • Each image capture element may be, for example, a camera, a charge-coupled device (CCD), a motion detection sensor, or an infrared sensor, among many other possibilities. If there are multiple image capture elements on the computing device, the image capture elements may be of different types.
  • at least one camera can include at least one wide-angle optical element, such as a fish eye lens, that enables the camera to capture images over a wide range of angles, such as 180 degrees or more.
  • each camera can comprise a digital still camera, configured to capture subsequent frames in rapid succession, or a video camera able to capture streaming video.
  • the example computing device 800 also includes at least one microphone 810 or other audio capture device capable of capturing audio data, such as words or commands spoken by a user of the device.
  • a microphone is placed on the same side of the device as the display screen 806, such that the microphone will typically be better able to capture words spoken by a user of the device.
  • a microphone can be a directional microphone that captures sound information from substantially directly in front of the microphone, and picks up only a limited amount of sound from other directions. It should be understood that a microphone might be located on any appropriate surface of any region, face, or edge of the device in different embodiments, and that multiple microphones can be used for audio recording and filtering purposes, etc.
  • FIG. 9 illustrates a logical arrangement of a set of general components of an example computing device 900 such as the device 800 described with respect to FIG. 8.
  • the device includes a processor 902 for executing instructions that can be stored in a memory device or element 904.
  • the device can include many types of memory, data storage, or non- transitory computer-readable storage media, such as a first data storage for program instructions for execution by the processor 902, a separate storage for images or data, a removable memory for sharing information with other devices, etc.
  • the device typically will include some type of display element 906, such as a touchscreen, electronic ink (e-ink), organic light emitting diode (OLED), liquid crystal display (LCD), etc., although devices such as portable media players might convey information via other means, such as through audio speakers.
  • the display screen provides for touch or swipe-based input using, for example, capacitive or resistive touch technology.
  • the device in many embodiments will include one or more cameras or image sensors 910 for capturing image or video content.
  • a camera can include, or be based at least in part upon, any appropriate technology, such as a CCD or CMOS image sensor having sufficient resolution, focal range, and viewable area to capture an image of the user when the user is operating the device.
  • An image sensor can include a camera or infrared sensor that is able to image projected images or other objects in the vicinity of the device.
  • Methods for capturing images or video using a camera with a computing device are well known in the art and will not be discussed herein in detail. It should be understood that image capture can be performed using a single image, multiple images, periodic imaging, continuous image capturing, image streaming, etc.
  • a device can include the ability to start and/or stop image capture, such as when receiving a command from a user, application, or other device.
  • the example device can similarly include at least one audio component, such as a mono or stereo microphone or microphone array, operable to capture audio information from at least one primary direction.
  • a microphone can be a uni-or omni-directional microphone as known for such devices.
  • the computing device 900 includes at least one capacitive component 908 or other proximity sensor, which can be part of, or separate from, the display assembly.
  • the proximity sensor can take the form of a capacitive touch sensor capable of detecting the proximity of a finger or other such object as discussed herein.
  • the computing device can include one or more communication elements or networking sub-systems, such as a Wi-Fi, Bluetooth, RF, wired, or wireless communication system.
  • the device in many embodiments can communicate with a network, such as the Internet, and may be able to communicate with other such devices.
  • the device can include at least one additional input device 912 able to receive conventional input from a user.
  • This conventional input can include, for example, a push button, touch pad, touchscreen, wheel, joystick, keyboard, mouse, keypad, or any other such device or element whereby a user can input a command to the device.
  • such a device might not include any buttons at all, and might be controlled only through a combination of visual and audio commands, such that a user can control the device without having to be in contact with the device.
  • the device 900 also can include one or more orientation and/or motion sensors.
  • Such sensor(s) can include an accelerometer or gyroscope operable to detect an orientation and/or change in orientation, or an electronic or digital compass, which can indicate a direction in which the device is determined to be facing.
  • the mechanism(s) also (or alternatively) can include or comprise a global positioning system (GPS) or similar positioning element operable to determine relative coordinates for a position of the computing device, as well as information about relatively large movements of the device.
  • the device can include other elements as well, such as may enable location determinations through triangulation or another such approach. These mechanisms can communicate with the processor 902, whereby the device can perform any of a number of actions described or suggested herein.
  • the device 900 can include the ability to activate and/or deactivate detection and/or command modes, such as when receiving a command from a user or an application, or when trying to determine an audio input or video input, etc.
  • a device might not attempt to detect or communicate with devices when there is not a user in the room. If a proximity sensor of the device, such as an IR sensor, detects a user entering the room, for instance, the device can activate a detection or control mode such that the device can be ready when needed by the user, but conserve power and resources when a user is not nearby.
  • the computing device 900 may include a light-detecting element that is able to determine whether the device is exposed to ambient light or is in relative or complete darkness.
  • the light-detecting element can be used to determine when a user is holding the device up to the user's face (causing the light-detecting element to be substantially shielded from the ambient light), which can trigger an action such as temporarily shutting off the display element (since the user cannot see the display element while holding the device to the user's ear).
  • the light-detecting element could be used in conjunction with information from other elements to adjust the functionality of the device.
  • if the device is unable to detect a user's view location and a user is not holding the device, but the device is exposed to ambient light, the device might determine that it has likely been set down by the user and might turn off the display element and disable certain functionality. If the device is unable to detect a user's view location, a user is not holding the device, and the device is further not exposed to ambient light, the device might determine that the device has been placed in a bag or other compartment that is likely inaccessible to the user and thus might turn off or disable additional features that might otherwise have been available.
  • a user must either be looking at the device, holding the device or have the device out in the light in order to activate certain functionality of the device.
  • the device may include a display element that can operate in different modes, such as reflective (for bright situations) and emissive (for dark situations). Based on the detected light, the device may change modes.
  • the device 900 can disable features for reasons substantially unrelated to power savings.
  • the device can use voice recognition to determine people near the device, such as children, and can disable or enable features, such as Internet access or parental controls, based thereon.
  • the device can analyze recorded noise to attempt to determine an environment, such as whether the device is in a car or on a plane, and that determination can help to decide which features to enable/disable or which actions are taken based upon other inputs. If voice recognition is used, words can be used as input, either directly spoken to the device or indirectly as picked up through conversation.
  • if the device determines that it is in a car, is facing the user, and detects a word such as "hungry" or "eat," then the device might turn on the display element and display information for nearby restaurants, etc.
  • a user can have the option of turning off voice recording and conversation monitoring for privacy and other such purposes.
  • in many of the foregoing examples, the actions taken by the device relate to deactivating certain functionality for purposes of reducing power consumption. It should be understood, however, that actions can correspond to other functions that can address similar and other potential issues with use of the device. For example, certain functions, such as requesting Web page content, searching for content on a hard drive, and opening various applications, can take a certain amount of time to complete. For devices with limited resources, or that have heavy usage, a number of such operations occurring at the same time can cause the device to slow down or even lock up, which can lead to inefficiencies, degrade the user experience, and potentially use more power. In order to address at least some of these and other such issues, approaches in accordance with various embodiments can also utilize information such as user gaze direction to activate resources that are likely to be used, in order to spread out the need for processing capacity, memory space, and other such resources.
  • the device can have sufficient processing capability, and the camera and associated image analysis algorithm(s) may be sensitive enough to distinguish between the motion of the device, motion of a user's head, motion of the user's eyes and other such motions, based on the captured images alone.
  • the one or more orientation and/or motion sensors may comprise a single- or multi-axis accelerometer that is able to detect factors such as three-dimensional position of the device and the magnitude and direction of movement of the device, as well as vibration, shock, etc.
  • the computing device can use the background in the images to determine movement. For example, if a user holds the device at a fixed orientation (e.g., distance, angle, etc.) to the user and the user changes orientation to the surrounding environment, analyzing an image of the user alone will not result in detecting a change in an orientation of the device. Rather, in some embodiments, the computing device can still detect movement of the device by recognizing the changes in the background imagery behind the user. So, for example, if an object in the background moves to the left or right in the image, the device can determine that the device has changed orientation, even though the orientation of the device with respect to the user has not changed.
  • the device may detect that the user has moved with respect to the device and adjust accordingly. For example, if the user tilts their head to the left or right with respect to the device, the content rendered on the display element may likewise tilt to keep the content in orientation with the user.
  • the various embodiments can be further implemented in a wide variety of operating environments, which in some cases can include one or more user computers or computing devices which can be used to operate any of a number of applications.
  • User or client devices can include any of a number of general purpose personal computers, such as desktop or laptop computers running a standard operating system, as well as cellular, wireless and handheld devices running mobile software and capable of supporting a number of networking and messaging protocols.
  • Such a system can also include a number of workstations running any of a variety of commercially-available operating systems and other known applications for purposes such as development and database management.
  • These devices can also include other electronic devices, such as dummy terminals, thin-clients, gaming systems and other devices capable of communicating via a network.
  • the operating environments can include a variety of data stores and other memory and storage media as discussed above. These can reside in a variety of locations, such as on a storage medium local to (and/or resident in) one or more of the computers or remote from any or all of the computers across the network. In a particular set of embodiments, the information may reside in a storage-area network (SAN) familiar to those skilled in the art. Similarly, any necessary files for performing the functions attributed to the computers, servers or other network devices may be stored locally and/or remotely, as appropriate.
  • each such device can include hardware elements that may be electrically coupled via a bus, the elements including, for example, at least one central processing unit (CPU), at least one input device (e.g., a mouse, keyboard, controller, touch- sensitive display element or keypad) and at least one output device (e.g., a display device, printer or speaker).
  • Such a system may also include one or more storage devices, such as disk drives, optical storage devices and solid-state storage devices such as random access memory (RAM) or read-only memory (ROM), as well as removable media devices, memory cards, flash cards, etc.
  • Such devices can also include a computer-readable storage media reader, a communications device (e.g., a modem, a network card (wireless or wired), an infrared communication device) and working memory as described above.
  • the computer- readable storage media reader can be connected with, or configured to receive, a computer-readable storage medium representing remote, local, fixed and/or removable storage devices as well as storage media for temporarily and/or more permanently containing, storing, transmitting and retrieving computer-readable information.
  • the system and various devices also typically will include a number of software applications, modules, services, or other elements located within at least one working memory device.
  • Storage media and computer readable media for containing code, or portions of code can include any appropriate media known or used in the art, including storage media and communication media, such as but not limited to volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage and/or transmission of information such as computer readable instructions, data structures, program modules or other data, including RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disk (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices or any other medium which can be used to store the desired information and which can be accessed by a system device.
  • a computer-implemented method for displaying information on a computing device having a touchscreen, comprising:
  • an interface element to display data when a user's finger hovers over the interface element, the interface element being displayed on the touchscreen of the computing device;
  • the region estimated to be occluded is further based at least in part upon the angle of incidence.
  • a computer-implemented method for displaying information comprising: under the control of one or more computer systems configured with executable instructions,
  • the interface element configured to display data on the screen in response to detection of a physical object hovering over the interface element
  • the distance is based at least in part upon a line between the object and the interface element, the object is located above a plane of the interface element, and the line is normal to a plane of the interface element.
  • the distance is based at least in part upon a line between the object and the interface element, the object is above a plane of the interface element, and the line corresponds to the angle of incidence.
  • wherein the region estimated to be occluded is based at least in part upon the at least one of the position or the orientation of the object over the screen.
  • wherein the region estimated to be occluded is based at least in part upon the angle of incidence.
  • wherein the region estimated to be occluded is based at least in part upon the one or more composite images.
  • wherein the region estimated to be occluded is based at least in part upon analyzing the updated camera model.
  • the interface element comprises a key of a virtual keyboard.
  • the second interface element configured to display second data on the screen in response to detection of the physical object hovering over the second interface element; estimating a second region of the screen that is occluded, with respect to the user viewing the screen, by the physical object;
  • a computing device comprising:
  • one or more processors
  • a memory device including instructions that, when executed by the one or more processors, cause the computing device to:
  • wherein the region estimated to be occluded is based at least in part upon the at least one of the position or the orientation of the object over the screen.
  • wherein the region estimated to be occluded is based at least in part upon the angle of incidence.
  • a non-transitory computer-readable storage medium storing instructions that, when executed by one or more processors of a computing device, cause the computing device to:
  • wherein the region estimated to be occluded is based at least in part upon the one or more composite images.
  • the second interface element configured to display second data on the screen in response to detection of the physical object hovering over the second interface element
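Taken together, the elements above describe a two-step technique: estimate which portion of the screen the user's finger and hand hide, then draw the hover-triggered content outside that region. The Python sketch below is a minimal, hedged illustration of that logic under stated assumptions; the Rect type, the occluded_region and place_popup helpers, the hand-span constant, and the candidate placement order are all introduced here for illustration and do not appear in the source.

```python
import math
from dataclasses import dataclass

@dataclass
class Rect:
    x: float
    y: float
    w: float
    h: float

    def intersects(self, other: "Rect") -> bool:
        # Axis-aligned rectangle overlap test.
        return not (self.x + self.w <= other.x or other.x + other.w <= self.x or
                    self.y + self.h <= other.y or other.y + other.h <= self.y)

def occluded_region(finger_xy, elevation_deg, handedness="right", hand_span=220.0):
    """Estimate the screen rectangle hidden by the hovering finger and hand.

    elevation_deg is the finger's angle above the screen plane (90 means
    pointing straight down); a flatter finger shadows a wider strip, which
    is one way the angle of incidence can feed the occlusion estimate.
    """
    fx, fy = finger_xy
    spread = 20.0 + hand_span * math.cos(math.radians(elevation_deg))
    if handedness == "right":
        return Rect(fx - 20.0, fy, spread, 1e6)        # extends down-right
    return Rect(fx - spread + 20.0, fy, spread, 1e6)   # mirrored for left hands

def place_popup(popup_w, popup_h, anchor, occluded, screen):
    """Pick a position for hover-triggered content that stays on screen and
    avoids the region estimated to be occluded by the user's hand."""
    candidates = [
        Rect(anchor.x, anchor.y - popup_h - 8.0, popup_w, popup_h),   # above
        Rect(anchor.x - popup_w - 8.0, anchor.y, popup_w, popup_h),   # left
        Rect(anchor.x + anchor.w + 8.0, anchor.y, popup_w, popup_h),  # right
    ]
    for c in candidates:
        on_screen = (c.x >= 0 and c.y >= 0 and
                     c.x + c.w <= screen.w and c.y + c.h <= screen.h)
        if on_screen and not c.intersects(occluded):
            return c
    return candidates[0]  # fall back to the above-the-element placement
```

For a right-handed user the region below and to the right of the fingertip is usually hidden, so the above or left placements tend to win; a second hovering object would simply contribute a second occluded rectangle to test against.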

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A computing device can be configured to recognize when a user hovers over, or is within a determined distance of, an element displayed on the computing device, in order to perform certain tasks. Information associated with the element can be displayed when such hover input is detected. This information can include a description of the tasks that can be performed by selecting the element. The information could also be an enlarged version of the element to help the user disambiguate a selection among multiple elements. The information can be displayed in such a way that at least substantial portions of the information are not occluded or obscured by the user.
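The claims distinguish two hover measurements: the distance along a line normal to the plane of the interface element, and the angle of incidence of the line from the object to the element. The following sketch shows how both could be computed from a tracked fingertip position; it is illustrative only, and the function name, coordinate conventions, and threshold value are assumptions, not taken from the source.

```python
import math

def hover_metrics(fingertip, element_center):
    """fingertip is (x, y, z), with z the height above the display plane;
    element_center is (x, y) on that plane; all values share one unit.

    Returns the perpendicular hover distance (measured along the screen
    normal) and the angle of incidence, in degrees, of the line from the
    fingertip to the element (0 means hovering directly overhead)."""
    fx, fy, fz = fingertip
    ex, ey = element_center
    line_length = math.sqrt((ex - fx) ** 2 + (ey - fy) ** 2 + fz ** 2)
    perpendicular_distance = fz  # component of the line along the normal
    angle_of_incidence = math.degrees(math.acos(fz / line_length))
    return perpendicular_distance, angle_of_incidence

# Illustrative threshold; the abstract leaves the "determined distance" open.
HOVER_THRESHOLD = 30.0
distance, angle = hover_metrics((120.0, 480.0, 18.0), (140.0, 500.0))
is_hovering = distance < HOVER_THRESHOLD  # True here; angle is about 57.5 degrees
```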
PCT/US2014/021441 2013-03-13 2014-03-06 Non-occluded display for hover interactions WO2014164235A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP14780061.9A EP2972727B1 (fr) 2013-03-13 2014-03-06 Non-occluded display for hover interactions

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US13/799,960 2013-03-13
US13/799,960 US20140282269A1 (en) 2013-03-13 2013-03-13 Non-occluded display for hover interactions

Publications (1)

Publication Number Publication Date
WO2014164235A1 true WO2014164235A1 (fr) 2014-10-09

Family

ID=51534550

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2014/021441 WO2014164235A1 (fr) 2013-03-13 2014-03-06 Non-occluded display for hover interactions

Country Status (3)

Country Link
US (1) US20140282269A1 (fr)
EP (1) EP2972727B1 (fr)
WO (1) WO2014164235A1 (fr)

Families Citing this family (37)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6188405B2 * 2013-05-01 2017-08-30 Canon Inc. Display control device, display control method, and program
US9489774B2 (en) * 2013-05-16 2016-11-08 Empire Technology Development Llc Three dimensional user interface in augmented reality
JP2015007949A * 2013-06-26 2015-01-15 Sony Corporation Display device, display control method, and computer program
KR102157078B1 * 2013-06-27 2020-09-17 Samsung Electronics Co., Ltd. Method and apparatus for composing an electronic document on a portable terminal
KR102517425B1 * 2013-06-27 2023-03-31 Eyesight Mobile Technologies Ltd. Direct pointing detection system and method for interacting with a digital device
JP6495907B2 * 2013-08-15 2019-04-03 Nokia Technologies Oy Apparatus and method for facilitating browser navigation
US9841821B2 (en) * 2013-11-06 2017-12-12 Zspace, Inc. Methods for automatically assessing user handedness in computer systems and the utilization of such information
JP5941896B2 * 2013-11-26 2016-06-29 Kyocera Document Solutions Inc. Operation display device
US10585484B2 (en) * 2013-12-30 2020-03-10 Samsung Electronics Co., Ltd. Apparatus, system, and method for transferring data from a terminal to an electromyography (EMG) device
KR20150092561A * 2014-02-05 2015-08-13 Hyundai Motor Company Vehicle control apparatus and vehicle
KR20150104302A * 2014-03-05 2015-09-15 Samsung Electronics Co., Ltd. Method for detecting user input in an electronic device, and the electronic device
US9239648B2 (en) 2014-03-17 2016-01-19 Google Inc. Determining user handedness and orientation using a touchscreen device
KR20160012410A * 2014-07-24 2016-02-03 Samsung Electronics Co., Ltd. Electronic device and method for controlling its output
WO2016064311A1 * 2014-10-22 2016-04-28 Telefonaktiebolaget L M Ericsson (Publ) Method and device for providing a touch-based user interface
KR102336445B1 * 2014-12-01 2021-12-07 Samsung Electronics Co., Ltd. Method and system for controlling a device, and the device
CN113094728A * 2015-01-21 2021-07-09 Microsoft Technology Licensing, LLC Method for allowing data classification in a rigid software development environment
JP2016194799A * 2015-03-31 2016-11-17 Fujitsu Limited Image analysis apparatus and image analysis method
US9921743B2 (en) 2015-08-20 2018-03-20 International Business Machines Corporation Wet finger tracking on capacitive touchscreens
EP3356918A1 (fr) * 2015-09-29 2018-08-08 Telefonaktiebolaget LM Ericsson (publ) Dispositif à écran tactile et procédé correspondant
US10083685B2 (en) * 2015-10-13 2018-09-25 GM Global Technology Operations LLC Dynamically adding or removing functionality to speech recognition systems
US10216405B2 (en) * 2015-10-24 2019-02-26 Microsoft Technology Licensing, Llc Presenting control interface based on multi-input command
US10764528B2 * 2016-01-27 2020-09-01 LG Electronics Inc. Mobile terminal and control method thereof
US10732719B2 (en) * 2016-03-03 2020-08-04 Lenovo (Singapore) Pte. Ltd. Performing actions responsive to hovering over an input surface
US11029836B2 (en) * 2016-03-25 2021-06-08 Microsoft Technology Licensing, Llc Cross-platform interactivity architecture
EP3242190B1 (fr) * 2016-05-06 2019-11-06 Advanced Silicon SA Système, procédé et programme informatique pour détecter un objet en approche et en contact avec un dispositif tactile capacitif
US10289300B2 (en) * 2016-12-28 2019-05-14 Amazon Technologies, Inc. Feedback animation for touch-based interactions
US10922743B1 (en) 2017-01-04 2021-02-16 Amazon Technologies, Inc. Adaptive performance of actions associated with custom user interface controls
CN106846366B * 2017-01-19 2020-04-07 Xidian University TLD video moving-object tracking method using GPU hardware
US10514801B2 (en) 2017-06-15 2019-12-24 Microsoft Technology Licensing, Llc Hover-based user-interactions with virtual objects within immersive environments
JP6857154B2 2018-04-10 2021-04-14 Nintendo Co., Ltd. Information processing program, information processing apparatus, information processing system, and information processing method
CN109550247B * 2019-01-09 2022-04-08 NetEase (Hangzhou) Network Co., Ltd. Method and apparatus for adjusting a virtual scene in a game, electronic device, and storage medium
CN111638836B * 2020-04-30 2022-02-22 Vivo Mobile Communication Co., Ltd. Information display method and electronic device
US20210374467A1 (en) * 2020-05-29 2021-12-02 Fei Company Correlated slice and view image annotation for machine learning
US11907522B2 (en) * 2020-06-24 2024-02-20 Bank Of America Corporation System for dynamic allocation of navigation tools based on learned user interaction
CN112115886A * 2020-09-22 2020-12-22 Beijing SenseTime Technology Development Co., Ltd. Image detection method and related apparatus, device, and storage medium
CN112650357B * 2020-12-31 2024-07-23 Lenovo (Beijing) Co., Ltd. Control method and apparatus
US11875033B1 (en) * 2022-12-01 2024-01-16 Bidstack Group PLC Touch-based occlusion for handheld devices

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060161846A1 (en) 2002-11-29 2006-07-20 Koninklijke Philips Electronics N.V. User interface with displaced representation of touch area
US20110279397A1 (en) * 2009-01-26 2011-11-17 Zrro Technologies (2009) Ltd. Device and method for monitoring the object's behavior
US20120087545A1 (en) * 2010-10-12 2012-04-12 New York University & Tactonic Technologies, LLC Fusing depth and pressure imaging to provide object identification for multi-touch surfaces
US20120120063A1 (en) * 2010-11-11 2012-05-17 Sony Corporation Image processing device, image processing method, and program

Family Cites Families (47)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9513744B2 (en) * 1994-08-15 2016-12-06 Apple Inc. Control systems employing novel physical controls and touch screens
US6184873B1 (en) * 1998-01-20 2001-02-06 Electronics For Imaging, Inc. Pen positioning system
US7760187B2 (en) * 2004-07-30 2010-07-20 Apple Inc. Visual expander
US7489308B2 (en) * 2003-02-14 2009-02-10 Microsoft Corporation Determining the location of the tip of an electronic stylus
US7893920B2 (en) * 2004-05-06 2011-02-22 Alpine Electronics, Inc. Operation input device and method of operation input
GB0422500D0 * 2004-10-09 2004-11-10 IBM Method and system for re-arranging a display
JP2007304669A * 2006-05-09 2007-11-22 Fuji Xerox Co Ltd Control method and program for electronic device
WO2007132648A1 * 2006-05-11 2007-11-22 Panasonic Corporation Display object layout changing device
US7552402B2 (en) * 2006-06-22 2009-06-23 Microsoft Corporation Interface orientation using shadows
US8564544B2 (en) * 2006-09-06 2013-10-22 Apple Inc. Touch screen device, method, and graphical user interface for customizing display of content category icons
US7692629B2 (en) * 2006-12-07 2010-04-06 Microsoft Corporation Operating touch screen interfaces
JP2008197934A * 2007-02-14 2008-08-28 Calsonic Kansei Corp Operator discrimination method
US8073198B2 (en) * 2007-10-26 2011-12-06 Samsung Electronics Co., Ltd. System and method for selection of an object of interest during physical browsing by finger framing
KR20090047828A * 2007-11-08 2009-05-13 Samsung Electronics Co., Ltd. Method for displaying content and electronic device applying the same
US9092053B2 (en) * 2008-06-17 2015-07-28 Apple Inc. Systems and methods for adjusting a display based on the user's position
KR101602363B1 * 2008-09-11 2016-03-10 LG Electronics Inc. Method for controlling a three-dimensional user interface and mobile terminal using the same
US20100088532A1 (en) * 2008-10-07 2010-04-08 Research In Motion Limited Method and handheld electronic device having a graphic user interface with efficient orientation sensor use
CN101729636A * 2008-10-16 2010-06-09 Hon Hai Precision Industry (Shenzhen) Co., Ltd. Mobile terminal
US8253713B2 (en) * 2008-10-23 2012-08-28 At&T Intellectual Property I, L.P. Tracking approaching or hovering objects for user-interfaces
US8516397B2 (en) * 2008-10-27 2013-08-20 Verizon Patent And Licensing Inc. Proximity interface apparatuses, systems, and methods
US8279184B2 (en) * 2009-01-27 2012-10-02 Research In Motion Limited Electronic device including a touchscreen and method
JP2011028560A * 2009-07-27 2011-02-10 Sony Corp Information processing apparatus, display method, and display program
US8622742B2 (en) * 2009-11-16 2014-01-07 Microsoft Corporation Teaching gestures with offset contact silhouettes
US8564619B2 (en) * 2009-12-17 2013-10-22 Motorola Mobility Llc Electronic device and method for displaying a background setting together with icons and/or application windows on a display screen thereof
JP2011141753A * 2010-01-07 2011-07-21 Sony Corp Display control apparatus, display control method, and display control program
US9864440B2 (en) * 2010-06-11 2018-01-09 Microsoft Technology Licensing, Llc Object orientation detection with a digitizer
US8913056B2 (en) * 2010-08-04 2014-12-16 Apple Inc. Three dimensional user interface effects on a display by using properties of motion
JP5625599B2 * 2010-08-04 2014-11-19 Sony Corporation Information processing apparatus, information processing method, and program
US8593418B2 (en) * 2010-08-08 2013-11-26 Qualcomm Incorporated Method and system for adjusting display content
US20120120002A1 (en) * 2010-11-17 2012-05-17 Sony Corporation System and method for display proximity based control of a touch screen user interface
KR101151962B1 * 2011-02-16 2012-06-01 Kim Seok-jung Virtual touch apparatus and method without pointer
US8638320B2 (en) * 2011-06-22 2014-01-28 Apple Inc. Stylus orientation detection
US9041734B2 (en) * 2011-07-12 2015-05-26 Amazon Technologies, Inc. Simulating three-dimensional features
US8947351B1 (en) * 2011-09-27 2015-02-03 Amazon Technologies, Inc. Point of view determinations for finger tracking
RU2658790C2 * 2011-11-21 2018-06-22 Nikon Corporation Display device and display control program
JP2013192196A * 2012-02-16 2013-09-26 Panasonic Corp Cursor compositing apparatus and cursor compositing method
US9378581B2 (en) * 2012-03-13 2016-06-28 Amazon Technologies, Inc. Approaches for highlighting active interface elements
US9213436B2 (en) * 2012-06-20 2015-12-15 Amazon Technologies, Inc. Fingertip location for gesture input
US20140062875A1 (en) * 2012-09-06 2014-03-06 Panasonic Corporation Mobile device with an inertial measurement unit to adjust state of graphical user interface or a natural language processing unit, and including a hover sensing function
US9317746B2 (en) * 2012-09-25 2016-04-19 Intel Corporation Techniques for occlusion accomodation
US8933885B2 (en) * 2012-09-25 2015-01-13 Nokia Corporation Method, apparatus, and computer program product for reducing hand or pointing device occlusions of a display
US9268407B1 (en) * 2012-10-10 2016-02-23 Amazon Technologies, Inc. Interface elements for managing gesture control
JP5798103B2 * 2012-11-05 2015-10-21 NTT Docomo, Inc. Terminal device, screen display method, and program
US9489774B2 (en) * 2013-05-16 2016-11-08 Empire Technology Development Llc Three dimensional user interface in augmented reality
US10055013B2 (en) * 2013-09-17 2018-08-21 Amazon Technologies, Inc. Dynamic object tracking for user interfaces
US9262012B2 (en) * 2014-01-03 2016-02-16 Microsoft Corporation Hover angle
US9501218B2 (en) * 2014-01-10 2016-11-22 Microsoft Technology Licensing, Llc Increasing touch and/or hover accuracy on a touch-enabled device

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060161846A1 (en) 2002-11-29 2006-07-20 Koninklijke Philips Electronics N.V. User interface with displaced representation of touch area
US20110279397A1 (en) * 2009-01-26 2011-11-17 Zrro Technologies (2009) Ltd. Device and method for monitoring the object's behavior
US20120087545A1 (en) * 2010-10-12 2012-04-12 New York University & Tactonic Technologies, LLC Fusing depth and pressure imaging to provide object identification for multi-touch surfaces
US20120120063A1 (en) * 2010-11-11 2012-05-17 Sony Corporation Image processing device, image processing method, and program

Also Published As

Publication number Publication date
EP2972727A4 (fr) 2016-04-06
EP2972727A1 (fr) 2016-01-20
EP2972727B1 (fr) 2017-08-16
US20140282269A1 (en) 2014-09-18

Similar Documents

Publication Publication Date Title
EP2972727B1 (fr) Non-occluded display for hover interactions
JP6605000B2 (ja) Approaches for three-dimensional object display
US20180348988A1 (en) Approaches for three-dimensional object display
US9378581B2 (en) Approaches for highlighting active interface elements
US10592064B2 (en) Approaches for three-dimensional object display used in content navigation
US10564806B1 (en) Gesture actions for interface elements
US9268407B1 (en) Interface elements for managing gesture control
JP6129879B2 (ja) Navigation approaches for multi-dimensional input
US8788977B2 (en) Movement recognition as input mechanism
EP2864932B1 (fr) Positionnement d'extrémité de doigt pour une entrée de geste
US20150082145A1 (en) Approaches for three-dimensional object display
KR20170041219A (ko) Hover-based interaction with rendered content
US9201585B1 (en) User interface navigation gestures
EP3500918A1 (fr) Manipulation de dispositif à l'aide d'un survol
US9110541B1 (en) Interface selection approaches for multi-dimensional input
KR20140100547A (ko) Full 3D interaction on mobile devices
US9411412B1 (en) Controlling a computing device based on user movement about various angular ranges
US9400575B1 (en) Finger detection for element selection
US10042445B1 (en) Adaptive display of user interface elements based on proximity sensing
US9665249B1 (en) Approaches for controlling a computing device based on head movement
US9547420B1 (en) Spatial approaches to text suggestion
US9898183B1 (en) Motions for object rendering and selection
US10585485B1 (en) Controlling content zoom level based on user head movement
US10082936B1 (en) Handedness determinations for electronic devices

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14780061

Country of ref document: EP

Kind code of ref document: A1

REEP Request for entry into the european phase

Ref document number: 2014780061

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 2014780061

Country of ref document: EP

NENP Non-entry into the national phase

Ref country code: DE